On January 22nd, 2020, Compass Security Canada (CSCA) attended the monthly OWASP Toronto meeting at George Brown College, where Rahul Raghavan gave a presentation titled “The Clutter that’s Choking AppSec”. The presentation focused mainly on the correlation and integration of results generated by automated application security tools such as SAST, DAST, and SCA.
During his presentation, Rahul first walked through the industry’s jargon and emphasized the distinctions between the following key terms.
What is the difference between CWE, CVE and CVSS?
CWE, or Common Weakness Enumeration, is a list of software weakness types, each associated with a unique CWE identifier (CWE id). A CWE id denotes a category that groups many related vulnerabilities. For example, “CWE-319: Cleartext Transmission of Sensitive Information” can be used to classify vulnerabilities in various software products that transmit passwords in cleartext over the wire. The CWE database is maintained by the MITRE Corporation and is freely available at https://cwe.mitre.org/index.html.
CVE, or Common Vulnerabilities and Exposures, is a list of uniquely identified vulnerabilities, each specific to a particular piece of software. A CVE entry comprises a unique vulnerability identifier, a technical description of the weakness, and a public disclosure or advisory. For example, LabF nfsAxe 3.7 is vulnerable to a buffer overflow identified as CVE-2017-18047 (https://www.cvedetails.com/cve/CVE-2017-18047/). This unique vulnerability falls under the umbrella of “CWE-119: Failure to Constrain Operations within the Bounds of a Memory Buffer”. The CVE list is also freely available at https://cve.mitre.org/, and https://www.cvedetails.com/ can be used to get more information on a particular vulnerability.
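CVE details can also be retrieved programmatically. Below is a minimal sketch that queries NIST’s public NVD REST API for a single CVE; the endpoint and response fields follow NVD’s documented 2.0 API, but you should verify them against the current documentation (unauthenticated requests are also rate-limited).

```python
# Minimal sketch: look up one CVE record via the public NVD 2.0 API.
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def fetch_cve(cve_id: str) -> dict:
    """Fetch the NVD record for a single CVE identifier."""
    response = requests.get(NVD_API, params={"cveId": cve_id}, timeout=30)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    record = fetch_cve("CVE-2017-18047")
    for vuln in record.get("vulnerabilities", []):
        cve = vuln["cve"]
        print(cve["id"])
        # Each description entry carries a language tag and the text itself.
        for desc in cve.get("descriptions", []):
            if desc["lang"] == "en":
                print(desc["value"])
```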
CVSS, or Common Vulnerability Scoring System, is an open framework for rating the severity of vulnerabilities. The current version, CVSS 3.1, provides base, temporal, and environmental metric groups (i.e., benchmark criteria) to score a vulnerability’s severity. The quantitative scores are then translated into the qualitative severity ratings “Low”, “Medium”, “High”, and “Critical”. The framework is owned and maintained by FIRST.Org, Inc., and more details can be found at https://www.first.org/cvss/v3.1/specification-document.
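As a concrete illustration, the score-to-rating mapping is a simple threshold lookup. The sketch below uses the qualitative severity rating scale from the CVSS v3.1 specification (which also defines a “None” rating for a score of exactly 0.0).

```python
def cvss_v31_rating(score: float) -> str:
    """Map a CVSS v3.1 score (0.0-10.0) to its qualitative severity rating,
    per the qualitative severity rating scale of the v3.1 specification."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_v31_rating(9.8))  # -> "Critical"
```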
What is the difference between SAST, DAST, IAST and SCA?
SAST stands for static application security testing. A SAST tool performs static source code analysis. This kind of testing is also referred to as “white box” testing, since it requires access to the application’s source code. A typical example is a source code analyzer that runs security checks on an application’s code base as part of its continuous integration/continuous deployment (CI/CD) pipeline.
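For instance, a Python shop might wire the open source SAST tool Bandit into a CI job along the lines of the sketch below. The source directory and the high-severity failure threshold are illustrative assumptions, not a prescribed setup.

```python
# Minimal sketch of a CI step that runs Bandit (an open source SAST tool
# for Python) and fails the build on high-severity findings.
import json
import subprocess
import sys

def run_sast(source_dir: str = "src") -> list:
    """Run Bandit recursively over source_dir and return its JSON findings."""
    proc = subprocess.run(
        ["bandit", "-r", source_dir, "-f", "json"],
        capture_output=True, text=True,
    )
    return json.loads(proc.stdout).get("results", [])

if __name__ == "__main__":
    findings = run_sast()
    high = [f for f in findings if f.get("issue_severity") == "HIGH"]
    for f in high:
        print(f"{f['filename']}:{f['line_number']} {f['issue_text']}")
    # A non-zero exit code makes the CI job fail when high-severity issues exist.
    sys.exit(1 if high else 0)
```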
DAST stands for dynamic application security testing. A DAST tool performs its tests against a running application. A typical example is the scanner used against a web application during a vulnerability assessment (VA).
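To make the idea concrete, here is a deliberately simplified sketch of one dynamic check: it sends a marker payload to a running application and looks for unescaped reflection in the response, a common precursor to cross-site scripting. The target URL and parameter name are hypothetical; real DAST scanners do far more than this.

```python
# Simplified sketch of one dynamic check: probe a running application
# for unescaped reflection of user input. URL and parameter are hypothetical.
import requests

MARKER = "<dast-probe-1337>"

def check_reflection(url: str, param: str) -> bool:
    """Return True if the marker comes back unescaped in the response body."""
    response = requests.get(url, params={param: MARKER}, timeout=30)
    return MARKER in response.text

if __name__ == "__main__":
    if check_reflection("http://localhost:8080/search", "q"):
        print("Input is reflected unescaped -- investigate for XSS.")
    else:
        print("No unescaped reflection detected for this parameter.")
```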
SCA stands for software composition analysis. An SCA tool analyzes an application’s source code to identify and catalogue its software component dependencies. As the popularity of open source software and libraries has grown over the years, many applications now depend on pieces of software that are not maintained by their own development team, and these external dependencies can very well be vulnerable or prone to licensing issues. An SCA tool can also detect vulnerabilities in both direct and transitive dependencies (i.e., it identifies vulnerabilities not just in the direct dependencies, but also in the dependencies’ dependencies, if any).
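As an illustration of the dependency-checking side of SCA, the sketch below asks the public OSV vulnerability database (https://osv.dev) whether a single pinned dependency has known vulnerabilities. The package name and version are arbitrary examples, and the request/response fields follow OSV’s documented query API.

```python
# Minimal SCA-style lookup: query the public OSV database for one
# package@version. Package name and version are arbitrary examples.
import requests

OSV_API = "https://api.osv.dev/v1/query"

def known_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Return the OSV vulnerability entries for one pinned dependency."""
    payload = {"package": {"name": name, "ecosystem": ecosystem},
               "version": version}
    response = requests.post(OSV_API, json=payload, timeout=30)
    response.raise_for_status()
    return response.json().get("vulns", [])

if __name__ == "__main__":
    for vuln in known_vulns("django", "2.2.0"):
        print(vuln["id"], vuln.get("summary", ""))
```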
IAST stands for interactive application security testing and uses a combination of SAST and DAST techniques to perform various checks. The application is assessed while it is being exercised interactively by a human tester or a set of automated tests: an IAST solution monitors the application dynamically in memory and correlates what it observes with the source code for coverage. It may also include an SCA module.
What is the difference between security analysts and developers?
A great point brought forth by the presenter was that security analysts live in a world of security vulnerabilities and PDF reports as deliverables, whereas developers see vulnerabilities as bugs that should be handled as entries in their ticketing system (e.g., Jira tickets).
What is the root cause of this so-called “AppSec clutter”?
SAST, DAST, and SCA are automated tools that generate a lot of data, which can easily become overwhelming. Aside from the sheer size of the generated reports, the results themselves are prone to false positives, exaggerated risk ratings, and confusing or generic vulnerability descriptions and remediation recommendations.
How to solve the “AppSec clutter” problem?
Correlation:
The presenter proposed normalizing all findings via a common denominator: the CWE id. Once each finding has a CWE id associated with it, results from the various tools can be merged and grouped together.
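A minimal sketch of this normalization step, assuming each tool’s output has already been parsed into a common dictionary shape with a CWE id field (the tool names, fields, and sample data are illustrative):

```python
# Correlation step: findings from different tools, parsed into a common
# shape, are grouped under their shared CWE identifier.
from collections import defaultdict

findings = [
    {"tool": "sast-scanner", "cwe_id": "CWE-319", "detail": "HTTP login form"},
    {"tool": "dast-scanner", "cwe_id": "CWE-319", "detail": "Password sent over HTTP"},
    {"tool": "sca-scanner",  "cwe_id": "CWE-119", "detail": "Buffer overflow in libfoo"},
]

def correlate_by_cwe(findings: list) -> dict:
    """Group findings from all tools under their common CWE id."""
    grouped = defaultdict(list)
    for finding in findings:
        grouped[finding["cwe_id"]].append(finding)
    return dict(grouped)

for cwe_id, group in correlate_by_cwe(findings).items():
    tools = sorted({f["tool"] for f in group})
    print(f"{cwe_id}: {len(group)} finding(s) from {', '.join(tools)}")
```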
Integration:
Once the scan results are normalized, they must be integrated into the developers’ pipeline, usually by ingesting them into a ticketing system. The speaker remained vendor-agnostic and did not mention any commercial or free and open source solutions to this problem.
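As a sketch of what that integration could look like, the snippet below opens one Jira ticket per correlated CWE group via Jira’s REST API. The instance URL, project key, and credentials are placeholders, and the group structure matches the correlation sketch above; adapt this to your ticketing system of choice.

```python
# Integration step: open one Jira ticket per correlated CWE group.
# Base URL, project key, and credentials are placeholders.
import requests

JIRA_URL = "https://jira.example.com"   # placeholder instance
AUTH = ("svc-appsec", "api-token")      # placeholder credentials

def open_ticket(cwe_id: str, group: list) -> str:
    """Create a Jira issue summarizing all findings that share one CWE id."""
    description = "\n".join(f"[{f['tool']}] {f['detail']}" for f in group)
    issue = {"fields": {
        "project": {"key": "SEC"},      # placeholder project key
        "summary": f"{cwe_id}: {len(group)} correlated finding(s)",
        "description": description,
        "issuetype": {"name": "Bug"},
    }}
    response = requests.post(f"{JIRA_URL}/rest/api/2/issue",
                             json=issue, auth=AUTH, timeout=30)
    response.raise_for_status()
    return response.json()["key"]       # e.g. "SEC-42"
```

Opening one ticket per CWE group, rather than one per raw finding, is precisely what keeps the clutter out of the developers’ backlog.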
Our conclusions:
An enterprise-level organization with a well-established cyber security team should indeed have a way to automatically ingest scan results from its various automated tools, prioritize them efficiently, and feed them into a ticketing system to facilitate remediation by application teams and re-test validation by the security team.
However, small and medium businesses (SMBs) generally don’t have the budget for this scenario. For SMBs, there is often no muscle memory and no proper processes in place. Therefore, along with the actual scan results, the security analyst should present them with a curated, shorter list of top-priority vulnerabilities that have been manually confirmed.
The speaker did a great job explaining AppSec from an automation standpoint. However, application security has a broader reach, as it encompasses results from both automated tools and manual testing. The presentation did imply that PDF reports are dead. In the case of manual testing, penetration test reports (e.g., PDFs), and to some extent vulnerability assessment reports, are here to stay because they provide real benefits:
- Penetration test reports offer non-generic content uniquely written by a subject matter expert, the analyst, who manually confirmed the vulnerabilities. This provides more insight into each vulnerability, helping developers quickly identify, reproduce, and remediate it.
- Manual penetration testing guards against false positives, or at least makes them very unlikely, because at minimum two pairs of eyes have validated the technical details of each identified finding.
- A penetration test report can serve as a “legal” document. A penetration test is a common requirement for many compliance standards. Moreover, in a court of law, a penetration test report issued by a reputable vendor is proof that the organization has done its due diligence to ensure the security of its products and customers.
In conclusion, while PDF reports can be cumbersome to integrate, they remain very relevant. For integration, a client may be provided with an XML or a CSV file containing the results of the pentest report. The automated correlation and integration of scanner results is yet another tool in the blue team’s arsenal and is by no means mutually exclusive with manual penetration testing. Au contraire, they are complementary, and both are highly recommended. Furthermore, manual penetration test results can always be formatted for ingestion into ticketing systems. The problem is that there currently seems to be no industry-wide standard or format that would allow universal integration amongst different scanning tools and integration products. Will we see a solution to this problem in the near future? We certainly hope so.
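A minimal sketch of such a CSV export, assuming the manual findings have been captured in a simple structured form (all field names and the sample finding are illustrative):

```python
# Minimal sketch: export manual pentest findings to CSV so a client can
# ingest them into a ticketing system. All field names are illustrative.
import csv

FIELDS = ["title", "cwe_id", "severity", "description", "recommendation"]

findings = [
    {"title": "Cleartext credential transmission", "cwe_id": "CWE-319",
     "severity": "High",
     "description": "The login form submits credentials over HTTP.",
     "recommendation": "Enforce HTTPS with HSTS on all authenticated pages."},
]

with open("pentest_findings.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(findings)
```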
CSCA would like to thank both Rahul Raghavan for the great presentation and OWASP Toronto for organizing this event.