Signaling when preconditions are not met on a reftest #178
Conversation
The branch was force-pushed from 58f1827 to 60223fe.
As written, the proposal is not to skip the reftest (which would require us to statically determine that it shouldn't be run), but to enable reftests to return the `PRECONDITION_FAILED` status.

I think that's fine, but it should really be reserved for behaviour that's genuinely optional, or that genuinely depends on a precondition (i.e. a test for Y in "you must do X if Y"). Historically this distinction wasn't made clear in testharness.js tests and it caused problems; see https://github.com/web-platform-tests/rfcs/blob/master/rfcs/assert_precondition_rename.md

I will also say that, in general, I'm not convinced this extra status has turned out to be as useful as it sounds like it ought to be. For example, we can't include tests that might return `PRECONDITION_FAILED` in interop stats.
Is it not possible to report a test as skipped if it hits `PRECONDITION_FAILED`?
For an extra bit of background, I'm thinking of spec test coverage in addition to interop/regression testing. If I know I have a test suite that covers all the normative statements in a spec, and I want to know if I have at least two implementations of each feature in the spec, I don't want to count `PRECONDITION_FAILED` as a pass. But for interop purposes, counting it as a FAIL isn't nice either, since the browser did nothing wrong and there's nothing to be fixed.
Skipped already means something specific (we didn't try to run the test), and I'd be opposed to reusing it for "we ran the test, but the test decided it wasn't relevant". I think `PRECONDITION_FAILED` is as close to the right status as we have.

Allowing different numbers of tests for different browsers also doesn't work well for Interop. You again have the possibility of some browser having 0 tests (e.g. if every test returns `PRECONDITION_FAILED`), plus it makes it impossible (or at least more difficult) to compute the overall interop score ("the fraction of tests passed by all browsers").

So, again, I think this is broadly fine if you update it to not talk about "skipped" tests but instead to allow a `PRECONDITION_FAILED` status.
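To make the scoring concern concrete, here is a hypothetical sketch (the real Interop scoring is more involved) of a "fraction of tests passed by all browsers" metric, and why a status that effectively removes tests per browser muddies it:

```js
// Hypothetical sketch: an interop-style score over a fixed test set.
// results: Map of browser name -> Map of test name -> status string.
function interopScore(results, allTests) {
  // A test counts toward the score only if every browser passes it.
  const passedByAll = allTests.filter((test) =>
    [...results.values()].every((r) => r.get(test) === "PASS")
  );
  return passedByAll.length / allTests.length;
}

// With PRECONDITION_FAILED in play, the denominator is no longer well
// defined: treating such a test as passed, failed, or absent from
// allTests for that browser each gives a different score.
```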
Done. Thanks for the feedback.
This seems reasonable; the issue with interpreting this result already exists in th.js tests, so shouldn't block reusing it for reftests, IMO
The diff under review adds a Risks section:

> ### Risks
>
> Tests that return `PRECONDITION_FAILED` cannot be included in interop stats because it's not clear how they should be counted.
This actually seems like a benefit to me :)
The code lines under review:

```js
document.documentElement.classList.add("precondition-failed");
document.documentElement.classList.remove("reftest-wait");
```
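For context, a minimal sketch of how a whole reftest might use this protocol, assuming the class-based signaling proposed in this RFC; the `CSS.supports()` feature check is illustrative, not part of the proposal:

```js
// Hypothetical reftest inline script. The page is assumed to start
// with class="reftest-wait" on its root element, as reftests that
// need to delay the screenshot already do.
// The specific feature check below is illustrative, not from the RFC.
if (!CSS.supports("some-optional-property", "value")) {
  // Precondition not met: signal it to the harness...
  document.documentElement.classList.add("precondition-failed");
}
// ...then unblock the screenshot either way.
document.documentElement.classList.remove("reftest-wait");
```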
Risk: would this work reliably / consistently if these two calls were reversed? Do we already define the exact timing of test finishing vs removing the class? (E.g., sync, on a (micro)task, by polling)
The wptrunner implementation relies on MutationObservers, which implies it depends on microtasks IIRC. (That said, the internal Marionette implementation may well differ.)
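For readers unfamiliar with the mechanism under discussion, a minimal sketch (not wptrunner's actual code) of observing `reftest-wait` removal with a MutationObserver. Because the observer's callback runs as a microtask after the current script block, it should see both class changes together, whichever order they were made in within that block:

```js
// Minimal sketch, not wptrunner's actual implementation: watch the
// root element's class attribute for reftest-wait being removed.
const root = document.documentElement;
const observer = new MutationObserver(() => {
  if (!root.classList.contains("reftest-wait")) {
    observer.disconnect();
    // The callback runs as a microtask after the test's script block,
    // so a precondition-failed class added in the same block is
    // already visible here, regardless of call order.
    const failed = root.classList.contains("precondition-failed");
    console.log(failed ? "PRECONDITION_FAILED" : "take screenshot");
  }
});
observer.observe(root, { attributes: true, attributeFilter: ["class"] });
```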