Many new findings are submitted by automatic discovery tools such as pattern matchers and data-flow manipulators, for example Google's OSS-Fuzz. These tools match known vulnerable code patterns or manipulate inputs along arbitrary data flows, regardless of whether those inputs are actually reachable by anonymous attackers, authenticated attackers, or confused deputies.
For example, an issue in quartz-scheduler appears to have been discovered by a pattern match in quartz-jobs, a sample companion package that demonstrates the kinds of jobs that can be scheduled. The sample code is only vulnerable when the inputs to the sample calls are manipulated. (The assignment to the quartz CPE rather than quartz-jobs also seems imprecise, but let's set that aside for now.)
I suggest enhancing CVSS to accommodate the conditions that need to be satisfied before a weakness can be exploited. This does not imply publishing the confidential details of the actual attack or its mechanism. The conditions may eventually need their own classification system, but to begin with they can be as simple as "exposure of property files to manipulation", "exposure of web request parameters to manipulation", "exposure of environment variables to manipulation", or "a user logged in to the application browses an insecure forum containing malicious HTML". Enterprise alert systems could then catch up with the new scoring and filter out the noise, for example cases where the inputs cannot be manipulated in the particular use of the suspect component. CVSS could even assume an average expected exposure, say 100% for web request parameters or 1% for property files, to generate an exploitability-aware score. The use of such a score could be further improved by integrating verifiable promises of component usage in software development.
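As a rough illustration of what an exploitability-aware score might look like, the sketch below weights a CVSS base score by an assumed average exposure probability for the required precondition. The condition names and probabilities are made up for the sketch; they are not a proposed standard.

```python
# Hypothetical exposure priors per precondition class; values are
# illustrative only (e.g. assume web request parameters are always
# attacker-reachable, property files almost never are).
EXPOSURE_PRIOR = {
    "web_request_parameters": 1.00,
    "environment_variables": 0.05,
    "property_files": 0.01,
}

def exploitability_aware_score(base_score: float, condition: str) -> float:
    """Scale a CVSS base score by the assumed exposure of its precondition.

    Unknown conditions fall back to full exposure (conservative default).
    """
    return round(base_score * EXPOSURE_PRIOR.get(condition, 1.0), 1)

print(exploitability_aware_score(9.8, "web_request_parameters"))  # 9.8
print(exploitability_aware_score(9.8, "property_files"))          # 0.1
```

With such a score, an alert system could rank a 9.8 that requires property-file manipulation far below a 9.8 reachable through web request parameters.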
In the example above, the condition for the attack would be: "rewrite the sample source code of quartz-jobs to accept parameters that can be manipulated, AND pass those parameters to the sample org.quartz.jobs package that uses javax.naming.Context and jakarta.jms, AND use the rewritten code". The last condition, "use the code", can be dropped as implicit.
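Such a condition could be encoded in a simple machine-readable form that alert systems evaluate against what they know about a deployment. The encoding below is a hypothetical sketch; the AND-list structure and the fact strings are illustrative, not a proposed format.

```python
# Hypothetical encoding of the quartz-jobs preconditions as an AND-list.
preconditions = {
    "all_of": [
        "quartz-jobs sample code rewritten to accept manipulable parameters",
        "parameters passed to org.quartz.jobs (javax.naming.Context, jakarta.jms)",
    ]
}

def exploitable(deployment_facts: set, conds: dict) -> bool:
    """True only when every listed precondition holds in the deployment."""
    return all(c in deployment_facts for c in conds["all_of"])

# A deployment that never rewrites the sample code can filter the alert out:
print(exploitable(set(), preconditions))  # False
```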
CISA Vulnrichment comes close, but it adds three attributes after the fact, based on the availability of exploits in the wild. (It also volunteers to enrich the CVEs it reviews with CWE, CVSS, and CPE data.)
CVE Prioritizer synthesizes CVSS, EPSS, and KEV data to prioritize (sort) multiple CVEs, such as those affecting the components of a product. But the process of obtaining the prediction score is opaque and is based on observations of server attack attempts; a straightforward statement of the risk conditions is still missing.
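For contrast, one naive way to combine the three signals is shown below. This is not CVE Prioritizer's actual algorithm, just an illustrative sort: KEV membership first, then EPSS probability, then CVSS score. The CVE records are fabricated for the example.

```python
from dataclasses import dataclass

@dataclass
class Cve:
    cve_id: str
    cvss: float   # severity score
    epss: float   # predicted exploitation probability
    in_kev: bool  # listed in CISA's Known Exploited Vulnerabilities catalog

def prioritize(cves):
    # Sort descending: known-exploited first, then by EPSS, then by CVSS.
    return sorted(cves, key=lambda c: (c.in_kev, c.epss, c.cvss), reverse=True)

cves = [
    Cve("CVE-A", cvss=9.8, epss=0.02, in_kev=False),
    Cve("CVE-B", cvss=7.5, epss=0.90, in_kev=True),
    Cve("CVE-C", cvss=8.1, epss=0.40, in_kev=False),
]
# CVE-B ranks first despite its lower CVSS, because it is known-exploited.
```

Note that even this ordering says nothing about whether the precondition for any of these CVEs is actually present in a given deployment, which is exactly the gap the proposed condition attribute would fill.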
EPSS collects and aggregates evidence of exploits from multiple sources: Fortiguard, AlienVault OTX, the Shadowserver Foundation, and GreyNoise.