Update canonical-schema.json with "timeout" property #1882
Conversation
Added an optional `timeout` property (in milliseconds) to `labeledTest.properties` to support tests with timeouts, such as for the `alphametics` and `palindrome-products` exercises. This would support automatic generation/updating of the tests, rather than requiring each track to manually modify the relevant exercise's test file.
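A minimal sketch of what the proposed addition to `canonical-schema.json` could look like (the surrounding schema structure is abbreviated, and the `description` wording is illustrative rather than the exact text of this PR):

```json
{
  "labeledTest": {
    "properties": {
      "timeout": {
        "description": "Optional time limit, in milliseconds, within which the test is expected to complete",
        "type": "integer",
        "minimum": 1
      }
    }
  }
}
```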
I have no opinion on whether this should be added. My review comment is purely about how to correctly add it if it is to be added.
Added description reference for timeout property
How will that value be determined? The time it takes for a test to run will vary massively between languages, hardware, and implementations.
It should also be added to the example in the README and documented there.
Do we expect a timeout? As in, do we want students to write code that times out?
@SleeplessByte For the test tooling that supports it, it is quite clear that this is a condition for a test not to time out, and that a timeout is a test failure. The whole point is to make clear in the problem specification what is expected in the exercise, as for the two exercises already mentioned. This is where it belongs. Without it, the problem specification is inaccurate and/or misleading. Whether a specific track maintainer uses this property or not, they will know it is part of the test constraints for a solution to pass.
Added a section to give an example of using the "timeout" property
@SaschaMann Example added to the README.
I wonder if, instead of this being a property, it should instead be a scenario (see https://github.com/exercism/problem-specifications/blob/main/SCENARIOS.txt). That avoids any contention over how timeouts will differ between tracks, but you'd still be able to mark a test case as being a performance-related test case (this is of course still contentious). For the timeouts, I think the …
I am happy to change to @ErikSchierboom's suggestion using …. Shall I create a new PR, link back to this one, and then close this one?
@martinfreedman Normally we would try to get some consensus, but you, Erik, and I now agree, so that's a consensus of at least three. Perhaps you want to discuss first what the scenario would be? I would like to see: scenario: …
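For reference, tagging a test case with a scenario in an exercise's `canonical-data.json` might look like the following (the test case, its fields, and the scenario name here are illustrative placeholders, not actual exercise data; the thread below goes on to debate what the name should be):

```json
{
  "uuid": "00000000-0000-0000-0000-000000000000",
  "description": "solution handles a very large input efficiently",
  "scenarios": ["performance"],
  "property": "solve",
  "input": { "puzzle": "..." },
  "expected": "..."
}
```

A track's test generator could then include, skip, or annotate (e.g. with a framework-specific timeout) any case carrying that scenario, without the shared data prescribing a concrete millisecond value.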
Great!
Yeah, something like that would be nice. Another option for a scenario is …
I think a "performance" scenario best captures what is being tested, whereas "timeout" states how to test it and can vary across languages and test frameworks. "Inefficient" and "high-time-complexity" also don't quite get to the fundamental point, that this is a …
I would be happy to accept "performance".
Works for me too!
I don't like … (the change request is about the previous comments, not the name of the scenario).
Eh, are you against or in favor of using …?
Against it, and I wanted to explain why, because that reason hadn't been brought up yet; but if people disagree with it, that's fine by me, too. I just wanted to clarify what the blocking change request is about.
Do you have an alternative name, perhaps?
Maybe something like …
I disagree. "Performance" covers both, and that is why I like it. Plus, different languages might have different facilities. The exercises are both about combinatorics, and that is likely to be the main use case for any exercise with these performance tests. Having built-in fast or optimised combinatorics alone will not, or should not, help.
That's precisely why I think it's unsuitable. Both cases require different handling in the implementation, which is why the broad term is too general for a scenario.
I do not follow this, since both scenarios (if there really are two; I'm not sure) require the same handling in a test: in the case of F# and C#, by using the xUnit timeout attribute while still keeping the same test name.
@exercism/reviewers Your thoughts on what the scenario should be named?
They might require the same handling in a test, but not necessarily in the changes to the instructions. Using different scenarios adds that additional information so that human maintainers can make use of it, even if auto-generators handle them identically.
I am somewhat ambivalent about the name, as long as it gives me a clean way to flag tests that could be problematic, rather than turning off the runner altogether for an exercise. I favor any method that will also make it possible for me to add an explanation/warning for students. Something along the lines of:

"While you can use naive or 'brute force' methods to solve this exercise, large inputs may cause processor or memory issues. The following tests are designed with large inputs and may time out, fail, or cause problems if your code is inefficient."

or

"This exercise can be solved using recursion, but check your code carefully. Python does not have tail-call optimization, and larger inputs could cause problems. The following tests are designed with large inputs and may time out, fail, or cause other issues if your solution is inefficient."

FWIW, Pytest (what we're using more or less for the Python test runner) has a …

To me, …
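As a rough illustration of the behaviour under discussion (a generic standard-library sketch, not the Pytest plugin or any actual Exercism test runner), a per-test time limit can be enforced by running the code under test in a worker and treating a missed deadline as a failure:

```python
import concurrent.futures
import time

def run_with_timeout(fn, timeout_ms):
    """Run fn() in a worker thread; raise TimeoutError if it takes longer
    than timeout_ms. The worker is not killed, mirroring frameworks that
    merely *report* the timeout as a test failure."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(fn).result(timeout=timeout_ms / 1000)

# An efficient solution finishes well within the limit.
assert run_with_timeout(lambda: sum(range(100)), 500) == 4950

# A deliberately slow one is reported as a timeout failure.
try:
    run_with_timeout(lambda: time.sleep(1), 100)
    print("passed")
except concurrent.futures.TimeoutError:
    print("test failed: timed out")
```

In real frameworks (xUnit's timeout attribute, a Pytest timeout plugin) the limit is declared per test rather than wrapped manually, but the semantics are the same: exceeding the limit fails that one test without disabling the rest of the suite.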
I quite like "slow"!
What do other people think of "slow" as the scenario name? |
Given the point that @SaschaMann made, namely "whereas the exercises you mention are primarily about algorithmic optimisations", how about "optimise"?
At the risk of continuing to bikeshed: I don't like …. @martinfreedman you've also downvoted the suggestion; could you elaborate why?
Seeing that @SaschaMann and @martinfreedman have downvoted "slow", what do people think about the suggested "optimise"?
I like "optimized", but it leaves the question unanswered: optimized for what? Time, memory, energy, readability?
I like what @BethanyG implies: "efficient" is better than "optimise" (and especially not "optimize"). Or maybe "TimeEfficiency", as that is the specific issue I am addressing and offering a solution for. In addition, the expected Big O restriction could be added to the test name, e.g. "Large inputs require at least logarithmic performance" (or linear, log-linear, etc.).
Maybe …
@ErikSchierboom You mean "time-optimise" or "optimise-time"? Most of us are not from the USA; why should we have the imposition of US spelling?
Because that's what our style guide defines :) |
@ErikSchierboom OK, then I like "optimize-time".
Cool! @SaschaMann @kotp @BethanyG @SleeplessByte @wolf99: what do you think about …?
I would prefer it if it specified that this is about time complexity, not just time to run in general, but I'd be happy with either.
I'd be perfectly fine with `time-complexity` or some variant.
On 22/01/14 07:17AM, Erik Schierboom wrote:
> I'd be perfectly fine with `time-complexity` or some variant.

`optimize-time` is not as good as `optimize-for-time`, in my opinion. `time-complexity` sounds like Big O notation is implied, at least to me. But going in the right direction, I think.
I'm fine with …
I was thinking …
Okay, anyone against using …?
Okay, one final chance. Does anyone object to using …?
Okay, let's use ….
@martinfreedman Are you still interested in working on this?