Reworking changes late in the Software Development Lifecycle (SDLC) can be highly disruptive. Security assumptions made early in the lifecycle are difficult to undo, especially when there is pressure to deliver features rapidly.
To address this problem, our research (Managing Application Security Survey, 2017) indicates that large organizations typically complement their existing testing approach with code scanning as part of static analysis during development. Their intent is to catch security errors before initiating a release. Unfortunately, some teams have treated code scanning results as evidence that their software is secure.
There are four possible results when using scanners: true positives, false positives, true negatives, and false negatives.
The big question is: what is the ratio of the red squares (indicating errors, i.e., false positives and false negatives) to the green squares (indicating correct results)?
Here we focus our attention on false negatives. In other words, are there situations where scanners fail to catch certain types of software security errors?
Without getting into technical details, we found several reasons why scanners will always produce false positives and false negatives. A worthy goal of the code-scanning community is to minimize both types of errors within a reasonable time. (Note: for those who would like a more in-depth treatment, please download the full report by completing the forms on this page.)
1. Code execution paths
In programming, our intent is for a program to execute controllably from start to finish; in other words, we want our software to halt in a controlled manner. As applications grow more complex, however, it becomes infeasible to enumerate every path through the code, so we cannot be sure we have identified all possible execution paths. (In the general case, this is the undecidable halting problem.) Paths that are never analyzed are one entry point for security vulnerabilities.
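To see why enumerating paths becomes infeasible, consider this toy sketch (the function and flag names are illustrative, not from any real codebase): each independent branch point doubles the number of distinct execution paths, so the path count grows exponentially.

```python
from itertools import product

def process(a: bool, b: bool, c: bool) -> int:
    # Toy function with three independent branch points.
    result = 0
    if a:
        result += 1
    else:
        result -= 1
    if b:
        result *= 2
    if c:
        result += 10
    return result

# Exhaustively covering every branch combination already takes
# 2**3 = 8 runs; with 30 such branches it would take over a billion.
all_paths = list(product([False, True], repeat=3))
results = {flags: process(*flags) for flags in all_paths}
```

Real programs add loops, recursion, and data-dependent branches on top of this, which is why no scanner can analyze every path.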
2. Scanner optimization
There are different classes of scanners, and each typically focuses on specific patterns. Some focus on syntax and look, for example, at whether SQL statements are being built through string manipulation. Other scanners trace how information flows through code to determine where external input is not being properly validated.
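A minimal sketch of the SQL pattern mentioned above (table and function names are hypothetical): the first function builds its query through string concatenation, the exact syntax pattern such scanners flag as a SQL injection risk; the second uses a parameterized query, which keeps user input out of the SQL text.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # SQL built by string concatenation: the pattern a syntax-focused
    # scanner flags, because username becomes part of the SQL text.
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + username + "'"
    ).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the input is bound as data, never parsed
    # as SQL, so injection payloads are harmless.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()
```

With the classic payload `' OR '1'='1`, the unsafe version returns every row in the table while the safe version matches nothing.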
Because code scanners tend to focus on specific classes of security vulnerabilities, they will not catch everything: they are very useful, but incomplete.
Here are the results of one study, focused solely on buffer overflows, in which both single scanners and combinations of scanners were used:
3. Compiler optimization
After security vulnerabilities are fixed in code, an optimizing compiler may reorder or eliminate program logic to maximize performance. This can silently reintroduce a security vulnerability, especially when the affected areas handle sensitive variables.
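The best-known case of this is dead-store elimination: a `memset` that scrubs a password buffer may be removed by the optimizer because the buffer is never read afterwards. The sketch below (function names are illustrative) shows a common portable countermeasure, calling `memset` through a volatile function pointer so the compiler cannot prove the store is dead; C11's optional `memset_s` offers a standardized alternative where available.

```c
#include <string.h>

/* Calling memset through a volatile function pointer keeps the
 * compiler from proving the write is a dead store, so the scrub
 * survives optimization. */
static void *(*const volatile secure_memset)(void *, int, size_t) = memset;

void handle_password(char *buf, size_t len)
{
    /* ... authenticate using the password in buf ... */

    /* A plain memset(buf, 0, len) here may be eliminated by an
     * optimizing compiler, because buf is never read again before
     * going out of scope. */
    secure_memset(buf, 0, len);
}
```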
4. Semantic meaning
When we perform manual code reviews, the reviewer understands what each variable means. A scanner, however, has no such context. Variables that ought to be treated with more sensitivity may not receive that level of priority.
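A small illustration of the point (the field names are hypothetical): both values below follow an identical data flow into a log line, so a scanner sees no difference between them. Only a human reviewer knows that one is a secret that must never be logged.

```python
def build_log_line(params: dict) -> str:
    # To a scanner, 'page_size' and 'api_key' are indistinguishable:
    # same type, same flow into the returned string. A human reviewer
    # knows 'api_key' is a credential that must never reach the logs.
    fields = ("page_size", "api_key")
    return "; ".join(f"{name}={params.get(name)}" for name in fields)
```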
5. Framework gaps
OWASP has identified testing areas that are not candidates for automation. Since code scanners exist precisely for automation and scalability, it is important to realize that not everything can be caught even with a best-practice mindset.
Given that scanners will not catch everything and will produce false alarms, we need to shift even earlier in the lifecycle, to the design and requirements stages, to help reduce some of the noise. Thankfully, such a framework already exists in policy-to-procedure platforms such as SD Elements by Security Compass.
1. Application Archetypes
Automatically generating the correct security controls to implement, based on several different archetypes (web, ERP, IoT, etc.).
2. Software Security Requirements
Generating a set of software security requirements from well-known repositories such as OWASP and NIST, and from regulations such as GDPR and SOX.
3. Integration with ALM
Providing a minimally invasive approach for the DevOps team by seamlessly integrating control points into their environment (Jira, Micro Focus ALM, Microsoft TFS, CA Agile Central, IBM Rational Team Concert, etc.).
4. Integration with Testing
Using bidirectional integration with code scanners and other third-party testing tools to determine whether or not a particular control was met successfully.
5. Audit Reporting
Generating an easy-to-read audit report that shows which controls were met, not needed, or incomplete.
Several tools on the market today implement a portion of this framework but fall short of addressing all five attributes described above:
SD Elements by Security Compass is the only solution that fully enables this framework, and it can operate in virtually any enterprise environment.