Burp parser aggregates findings on type
#11398
Comments
What is your suggestion for updating the behavior? Are you thinking of aggregating by a combination of type and severity? I don't have much experience with the Burp parser, but the general intention when handling dynamic findings in dojo is to aggregate as many endpoints into a single finding as possible.
I'll see if I can push a PR tomorrow as I've done the change locally anyway, easier to show.
However, this statement is probably the root of my confusion: why is this so? SAST findings are not aggregated by file path, so why aggregate DAST ones per endpoint? Using dojo for "discovery to resolution", I'd need to track each endpoint separately, as their resolutions might differ. Also, since the hash code fields consider neither endpoints nor description for "uniqueness", it means that:
The second report is considered a duplicate of the first. But the first does not mention endpoint B, so it is missed. And if endpoints were included in the hash code fields, the opposite would happen:
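To make the endpoint-B problem concrete, here is a hedged sketch (a deliberate simplification, not DefectDojo's actual `hash_code` implementation) of deduplication keyed on fields that exclude endpoints:

```python
import hashlib

def hash_code(finding, fields=("title", "cwe")):
    """Compute a dedup hash from selected fields only.

    Hypothetical simplification: like the behavior described above,
    endpoints and description are NOT part of the hash by default.
    """
    value = "|".join(str(finding.get(f, "")) for f in fields)
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

# First report: SQLi seen only on endpoint A
first = {"title": "SQL injection", "cwe": 89,
         "endpoints": ["https://app.example.com/a"]}
# Second report: same issue, but now also on endpoint B
second = {"title": "SQL injection", "cwe": 89,
          "endpoints": ["https://app.example.com/a", "https://app.example.com/b"]}

# Same hash -> the second import is flagged as a duplicate,
# and endpoint B is never surfaced as its own work item.
assert hash_code(first) == hash_code(second)

# Conversely, adding endpoints to the hashed fields makes the two
# reports distinct, which causes the opposite problem described above.
assert hash_code(first, ("title", "cwe", "endpoints")) != \
       hash_code(second, ("title", "cwe", "endpoints"))
```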
Am I missing something in favor of aggregation? Once again, thanks for taking the time with such an out-of-nowhere "issue" 😀
The primary reason is value proposition. If a tool generates 1000 results and you import them into DefectDojo and get 100 aggregated findings, then DefectDojo is doing a lot of the organizing you would otherwise have to do yourself.
This is due to a historical architectural decision. Endpoints in DAST findings are stored in a separate database table, such that many endpoints can be associated with a single finding. File path and line number, by contrast, are single fields on a given finding, so there is really no possibility of aggregating those natively.
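The asymmetry described above can be sketched with plain dataclasses (a hypothetical simplification of the real Django schema, where endpoints are a separate related table):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Endpoint:
    host: str
    path: str

@dataclass
class Finding:
    title: str
    # DAST side: many endpoints can hang off one finding (separate table
    # in the real schema, modeled here as a list).
    endpoints: List[Endpoint] = field(default_factory=list)
    # SAST side: a single file path and line number per finding.
    file_path: Optional[str] = None
    line: Optional[int] = None

xss = Finding(title="Cross-site scripting (reflected)")
xss.endpoints.append(Endpoint("app.example.com", "/search"))
xss.endpoints.append(Endpoint("app.example.com", "/profile"))
# Two endpoints, still one finding -- but there is no analogous way to
# attach two (file_path, line) pairs to a single SAST finding.
```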
This is where reimport will shine! However, there are two use cases for controlling the behavior that you (specifically) may or may not desire. The two cases are whether or not endpoints are used in the deduplication algorithm for a given tool.
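If I recall DefectDojo's settings mechanism correctly, deduplication fields can be tuned per scanner in a local settings override; the exact setting keys and the "Burp Scan" scan-type name below are assumptions that should be checked against the project's settings.dist.py:

```python
# local_settings.py -- sketch only; verify key names against settings.dist.py
HASHCODE_FIELDS_PER_SCANNER = {
    # Include endpoints so findings on different endpoints stay distinct
    "Burp Scan": ["title", "cwe", "severity", "endpoints"],
}
DEDUPLICATION_ALGORITHM_PER_PARSER = {
    "Burp Scan": "hash_code",
}
```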
The nuance here is that there are very valid use cases for each approach. I believe it comes down to using import vs. reimport. My personal adage is that import is great when you are getting audited, and reimport is great when you are working with developers to mitigate issues. Reusing your SQLi example again:
Each of those examples is a wildly different use case, but they serve as very valuable tools to have. The point I am trying to make is that DefectDojo is not really one size fits all, but with the right tuning it can be the best tool you have! This may have been TMI, but I hope it's helpful!
Definitely not TMI, but exactly the info I wanted; I appreciate the time taken.

My main issue with it is that it's not "organizing", it's just this parser's behavior. Other parsers will aggregate on other criteria (broader or narrower), resulting in inconsistency in what a "finding" is expected to be (and hence its lifecycle). I would expect the default behavior of parsers to be 1 dojo finding per 1 report finding (or as close to that as possible); then I can create finding groups as I please. If findings are already aggregated by the parser itself, I cannot un-aggregate them. If they are not aggregated by the parser, I can aggregate them with groups. To be flexible, I'd expect parsers to do less, not more, if that makes sense...
The problem here is that I don't see any tuning I can do for my case, where each endpoint has findings handled differently and by different developers/teams, apart from splitting the report myself into different engagements or tests before uploading... Regardless, I still agree it is the best tool I have for findings 😄
- django-DefectDojo/dojo/tools/burp/parser.py, line 42 (commit ca6628d)
- django-DefectDojo/dojo/tools/burp/parser.py, line 142 (commit ca6628d)
The Burp parser aggregates all findings by `type` alone. This means all `Cross-site scripting (reflected)` findings (for instance) fall under the same finding, regardless of endpoints, parameters, or even severity. I might be missing context on other people's triage approaches, but I find this confusing for mine, as I'm unable to triage specific issues, only an entire class of them.
Am I missing something?
Would a PR to change this be acceptable, or is the impact too big? If the latter, does it make sense to create a separate parser for this? Or maybe make this dedupe key configurable in settings.py?
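For illustration, here is a hedged sketch (not the actual parser code, and a made-up issue structure) of the difference between keying findings on `type` alone versus on type, severity, and endpoint together:

```python
from collections import defaultdict

# Simplified stand-in for Burp issues; not the real parser's data model.
issues = [
    {"type": "Cross-site scripting (reflected)", "severity": "High",
     "endpoint": "/search"},
    {"type": "Cross-site scripting (reflected)", "severity": "Medium",
     "endpoint": "/profile"},
    {"type": "SQL injection", "severity": "High", "endpoint": "/login"},
]

def aggregate(issues, key_fields):
    """Group raw issues into 'findings' keyed on the given fields."""
    findings = defaultdict(list)
    for issue in issues:
        key = tuple(issue[f] for f in key_fields)
        findings[key].append(issue)
    return findings

# Behavior described in this issue: both XSS issues collapse into one finding.
by_type = aggregate(issues, ["type"])
assert len(by_type) == 2

# Proposed alternative: each (type, severity, endpoint) stays distinct.
by_full = aggregate(issues, ["type", "severity", "endpoint"])
assert len(by_full) == 3
```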