Description
Lifecycle has so many tests that the sclevine/spec test runner output is hard to read.
There's an entire secondary tool dedicated to helping you parse the test output: https://github.com/aemengo/gswt , but it doesn't seem to work out of the box for most people, and that raises the bar for casual / "drive-by" contributors. At the same time, the fact that it exists at all is a clear signal that there's room for improvement in the output.
Right now, one of the best ways to find errors, e.g. in the GitHub test output or on your own terminal, is to search successively for "Failed: 1", "Failed: 2", "Failed: 3", and so on until you find which tests failed.
Proposed solution
This isn't a proposal yet. This issue is a placeholder for the work of testing whether there are any 'quick wins' we can make.
One thought would be to summarize at the bottom, e.g. by capturing and then listing the names (and files?) of the first N tests that failed. Would we include the full output and failure messages of those tests, or just their names? What's a good value of N? Is this even possible?
Another approach might be to suppress passing tests so that the failures are the only thing that prints (via a flag? by default?).
There are probably other ideas and approaches as well - this issue is not intended to prescribe a solution, but rather to suggest that some exploration could be helpful.
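As a purely illustrative sketch of the first idea, a Makefile target could post-process `go test -json` output and print only the failing tests at the end; the target name and the use of jq below are assumptions for the sketch, not anything the repo has today:

```make
# Hypothetical target: run the unit tests and print a short list of the
# failing tests at the end, instead of making readers search the full log
# for "Failed: N". Requires jq. Because of the pipe, make sees jq's exit
# status rather than go test's, so this is a reporting aid, not a CI gate.
unit-failures:
	go test -json ./... | \
		jq -r 'select(.Action == "fail" and .Test != null) | "FAIL \(.Package) \(.Test)"' | \
		sort -u
```

Whether something like this is actually easier to read than the current output is exactly the kind of question this issue is meant to explore.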
Some food for thought:
I tried this tool: https://github.com/vakenbolt/go-test-report, and it works quite well. Here's an example from running our unit tests in a failure scenario: the HTML report gives an overview of the results, clicking on the red boxes shows the failures, and clicking on an individual test shows its detail.
The integration is quite simple: I just modified my local Makefile, and it now generates a test_report.html file.
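A minimal sketch of that kind of integration, assuming the standard `go test -json | go-test-report` invocation from the tool's README (the target name here is illustrative, not the actual change):

```make
# Hypothetical target: pipe `go test -json` output into go-test-report,
# which writes test_report.html in the working directory by default.
unit-report:
	go test -json ./... | go-test-report
```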
Since it's a static web page, we could even think about serving it through our CI, so we always have a nice view of the test status.