Common scoring system for vulnerability test coverage? #74
@tomato42 Can you provide an example of what this would look like for some specific CVE and library? If I understand your description, you're proposing some formalized way to recognize a particular vulnerability that may affect multiple implementations (and thus have several CVEs assigned), which could then be used to show (and test) whether library X is vulnerable or not. Perhaps something akin to this table of XML vulns and how they affected different Python XML parsers?
tbh, I don't have a complete set of metrics in mind, but there are a few I think should be considered. For many fixes/bugs the resulting score would be rather low, since for many issues some of those things are completely irrelevant. I was thinking of an open-ended scale, starting at 0 for no tests and growing for better and better test coverage. The problem is that some of the metrics (like parameter coverage, or mutation score) are more subjective than others.
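To make that concrete, here's a rough sketch of how an open-ended score like this might be computed; the metric names, weights, and API below are placeholders I'm making up for illustration, not a settled proposal:

```python
from dataclasses import dataclass

@dataclass
class FixTestScore:
    """Hypothetical open-ended score for a security fix's test coverage.

    0 means no tests at all; higher means better coverage. Everything
    here is a placeholder to illustrate the shape of the idea.
    """
    has_reproducer: bool = False       # a test that triggers the original issue
    covers_general_case: bool = False  # tests beyond the exact reported input
    parameter_coverage: float = 0.0    # fraction of relevant parameters exercised (0..1)
    mutation_score: float = 0.0        # fraction of mutants killed by the tests (0..1)

    def score(self) -> float:
        total = 0.0
        if self.has_reproducer:
            total += 1.0
        if self.covers_general_case:
            total += 1.0
        # the more subjective metrics contribute fractionally
        total += self.parameter_coverage + self.mutation_score
        return total

# A fix shipped with only the original reproducer scores 1.0;
# broad parameterized tests plus a good mutation score approach 4.0.
print(FixTestScore(has_reproducer=True).score())   # 1.0
print(FixTestScore(True, True, 0.8, 0.7).score())  # 3.5
```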
well, I'd argue that if you have at least two implementations of the same format you can have the same bug in both of them
I may do, but I'm not sure if it would be illustrative... also, I'm not familiar with them, so it would be hard for me to say how I should score them
When working on security issues, we have CVSS to gauge how severe a given issue is.
The problem is that when a fix for an issue is released, it's not obvious what kind of test coverage was employed to ensure that the fix actually fixes the issue, that it fixes the general case and not only the specific reported case, or how extensive the test coverage for that issue is.
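As an illustration of that difference, compare a regression test that replays only the reported proof of concept with one that probes the whole class of inputs; the parser and its length limit below are invented for the example:

```python
import pytest

MAX_LEN = 2**16  # hypothetical protocol limit

def parse_record(data: bytes) -> bytes:
    # Stand-in for the fixed parser: rejects oversized declared lengths.
    declared = int.from_bytes(data[:4], "big")
    if declared > MAX_LEN:
        raise ValueError("declared length exceeds protocol maximum")
    return data[4:4 + declared]

def test_cve_poc_only():
    # Replays only the exact proof-of-concept input from the report:
    # necessary, but it says nothing about nearby inputs.
    with pytest.raises(ValueError):
        parse_record(b"\xff\xff\xff\xff")

@pytest.mark.parametrize("declared", [MAX_LEN + 1, 2**24, 2**31 - 1, 2**32 - 1])
def test_oversized_lengths_rejected(declared):
    # Probes the boundary region and the whole class of bad inputs,
    # not just the value from the report.
    with pytest.raises(ValueError):
        parse_record(declared.to_bytes(4, "big"))
```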
Secondly, when we consider issues in common protocols or data exchange formats, it's not uncommon that multiple implementations have the same or similar issues. So having documentation that a CVE-XXXX-YYYYY-like issue from library Z isn't also present in libraries other than Z because they test for it in "such, such, and such way" would also be really useful.
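Such documentation could even be kept in a machine-readable form so it's easy to query per library; a minimal sketch, with every field name and entry invented purely for illustration:

```python
# Hypothetical cross-library record for one class of issue; none of
# these field names or entries are a proposed standard.
COVERAGE_RECORD = {
    "reported_as": "CVE-XXXX-YYYYY",  # the instance originally found in library Z
    "issue_class": "integer overflow when parsing a declared length field",
    "libraries": {
        "library-z": {
            "status": "fixed",
            "tests": ["PoC reproducer", "parameterized boundary values"],
            "score": 3.5,  # on the open-ended scale sketched above
        },
        "library-a": {
            "status": "not affected",
            "evidence": "length-field parser is fuzzed in CI; bounds are checked",
            "score": 2.0,
        },
    },
}
```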
(Technically, this idea overlaps with other working groups, especially Best Practices, but I'm filing it here as I'd rather keep the scope focused on security at the beginning, rather than on correctness in general.)
So, do you think this is the best workgroup to start work on this? If yes, what would you suggest as next steps?