I think we should come up with some guidelines on how analyzers should be tested. My proposal would be:
- Unit tests should provide as much code coverage as possible; the correctness of each exercise analyzer should be verified at that level.
- Each exercise should have at least two smoke tests: one with an optimal solution that receives no feedback, and one with a solution that receives at least one exercise-specific comment.
- In addition, there should be a few smoke tests covering exercises for which no analyzer is implemented, to make sure the analyzer handles every exercise gracefully instead of crashing (see the sketch below).
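To make the smoke-test idea a bit more concrete, here is a minimal JUnit 5 sketch. The `Analyzer`, `Solution.fromResource(...)` and `Analysis.getComments()` names, as well as the exercise slugs, are placeholders made up for illustration; they would need to be mapped onto whatever entry point this analyzer actually exposes.

```java
// Sketch only: Analyzer, Solution and Analysis are hypothetical placeholders
// for the analyzer's real entry point, and the exercise slugs are examples.
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class SmokeTest {

    private final Analyzer analyzer = new Analyzer();

    @Test
    void optimalSolutionReceivesNoComments() {
        // Optimal solution: the analyzer should produce no feedback at all.
        Analysis analysis = analyzer.analyze(Solution.fromResource("two-fer/optimal"));
        assertTrue(analysis.getComments().isEmpty());
    }

    @Test
    void suboptimalSolutionReceivesExerciseSpecificComment() {
        // Known-suboptimal solution: at least one exercise-specific comment is expected.
        Analysis analysis = analyzer.analyze(Solution.fromResource("two-fer/string-concatenation"));
        assertFalse(analysis.getComments().isEmpty());
    }

    @Test
    void exerciseWithoutAnalyzerDoesNotCrash() {
        // Exercise without a dedicated analyzer: analysis should still complete
        // without throwing, simply returning no comments.
        Analysis analysis = analyzer.analyze(Solution.fromResource("leap/valid"));
        assertTrue(analysis.getComments().isEmpty());
    }
}
```

Exactly how the solutions are loaded and how comments are represented would follow from the guidelines once they are written down.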
Once we come up with some concrete guidelines, we should probably write them down in the docs.
Originally posted by @sanderploegsma in #122 (comment)