GUEST ESSAY: The case for network defenders to focus on leading — not lagging — indicators

By Rohit Sethi

A key CEO responsibility is reporting to shareholders on results that deliver the company’s mission. This reporting often requires a host of metrics that define success, such as Annual Recurring Revenue and sales for software-as-a-service (SaaS) companies. These are lagging indicators: the results follow behind the work required to achieve them.

Related: Automating SecOps

Lagging indicators stand apart from leading indicators, which could include marketing leads, pipeline generation and demos. When it comes to sales targets, there is a correlation between increased sales and shareholder value creation, but closing sales in B2B transactions can be time consuming. Ideally, companies should be able to tell from their current work whether the lagging indicators will follow, rather than finding out after the fact.

Leading indicators provide a shorter feedback loop. They enable employees to drive improvement, and they are more motivating because employees know what they have to do to succeed. In cybersecurity, unfortunately, we often face a bias toward lagging indicators.

Cybersecurity nuances

One could argue that the true lagging indicator in cybersecurity is a breach, and that anything that helps prevent a breach, like adopting a “shift left” philosophy as part of a DevSecOps initiative, is a leading indicator.

However, “vulnerabilities” are lagging indicators because you don’t know how many vulnerabilities you have until you test for them. If targets such as defect density or compliance with scanner policy (i.e., having no more than a certain number of “allowable” vulnerabilities before releasing software) are the only targets, there are few ways of predicting success. You could start initiatives, like upgrading frameworks, but it’s not clear how much those initiatives will reduce the quantity of vulnerabilities.

One solution for secure software development is tracking security requirements and controls. Well-defined security requirements, tailored to a particular piece of software, are designed to prevent vulnerabilities. Minimum adherence to a security requirements policy, based on an application’s inherent risk profile, can lead to a reduction in vulnerabilities and risk.

Tracking security requirement completion is a leading indicator because development teams can incorporate it within their normal planning cycle. Contrast this with unpredictable vulnerabilities, which create unplanned work, like remediation, that eats into time allocated to building features or fixing bugs.

This is exacerbated when factoring in the security and compliance defects that can’t be scanned for, like obtaining consent under privacy laws. Using a metric like adherence to minimum security requirements, including regulatory obligations, alleviates these problems: the work is planned up front and is not limited to what an automated scanner can catch.

Effective leading indicators

There are reasons why the practice isn’t widespread, including an organizational problem that requires executive support. Accountability for software security often falls to the Chief Information Security Officer (CISO). CISOs are usually given authority to implement perimeter controls and test for defects, but they don’t have the authority, mandate or incentive to change software development. CISOs who go beyond their mandate often spend significant political capital to introduce changes to the development process, while software development leaders have little incentive to prioritize those changes, leading to the death of many initiatives.

The adage “what’s measured matters” certainly applies. In our experience working with enterprise software security leaders, implementing software security requirements without targets and tracking is unlikely to succeed. Successful organizations use a simple measure, like compliance with the company’s minimum policy for security requirements. Aggregating and reporting this metric across a business unit in a visible way, like a central company dashboard, drives behavior.
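To make the idea concrete, here is a minimal sketch of how such a metric might be computed. Everything here is a hypothetical illustration: the requirement IDs, risk tiers and policy are invented, not a real product’s schema.

```python
# Hypothetical sketch: compliance with a minimum security requirements
# policy, aggregated per business unit. All names and data are
# illustrative assumptions, not a real tool's API.
from dataclasses import dataclass, field

# Assumed policy: minimum required requirement IDs per inherent risk tier.
MINIMUM_POLICY = {
    "high":   {"REQ-AUTHN", "REQ-AUTHZ", "REQ-CRYPTO", "REQ-PRIVACY-CONSENT"},
    "medium": {"REQ-AUTHN", "REQ-AUTHZ"},
    "low":    {"REQ-AUTHN"},
}

@dataclass
class Application:
    name: str
    business_unit: str
    risk_tier: str                                # "high" | "medium" | "low"
    completed: set = field(default_factory=set)   # requirement IDs marked done

    def compliance(self) -> float:
        """Fraction of the minimum policy this app has completed."""
        required = MINIMUM_POLICY[self.risk_tier]
        return len(required & self.completed) / len(required)

def unit_compliance(apps):
    """Average policy compliance per business unit -- the leading indicator
    a central dashboard would display."""
    by_unit = {}
    for app in apps:
        by_unit.setdefault(app.business_unit, []).append(app.compliance())
    return {unit: sum(scores) / len(scores) for unit, scores in by_unit.items()}

apps = [
    Application("billing", "payments", "high",
                {"REQ-AUTHN", "REQ-AUTHZ", "REQ-CRYPTO"}),
    Application("marketing-site", "growth", "low", {"REQ-AUTHN"}),
]
print(unit_compliance(apps))  # {'payments': 0.75, 'growth': 1.0}
```

Because completion of each requirement is work that teams schedule themselves, this number moves within a normal planning cycle, which is exactly what makes it a leading rather than a lagging indicator.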

One common mistake is taking a broad set of security controls, like ISO 27001, or a top 10 list, like the OWASP Top 10, and calling them security requirements. These are not designed to be specific to a particular kind of software, actionable for developers or easily testable by quality assurance.

Another problem is organizations creating their own requirements in documents or home-grown tools. Keeping current with the myriad of technologies, compliance requirements and growing security vulnerabilities requires your experts’ full-time focus. The topics are often complex and require contextual education for non-security experts to interpret correctly.

Moreover, there is significant work in narrowing down relevant security requirements for a particular software product. Additionally, the appetite for risk can vary based on the data each application processes, stores and transmits.

As with other major changes, it’s important to set realistic expectations. Your organization is unlikely to comply with 100 percent of security requirements in the first year. Start with a minimal policy and gradually add a more comprehensive set of requirements over time.

Organizations can evolve their software security programs from being unpredictable and focused on lagging indicators to methodically incorporating leading indicators. Companies that fail to adopt the leading indicator mindset will create more risk and unplanned work than their competitors, ultimately hurting revenue and market share.

About the essayist: Rohit Sethi joined Security Compass as its second full-time employee. As CEO, Rohit is responsible for setting and achieving corporate objectives, company alignment, and driving strategy to execution. He specializes in building security into software and has worked with several large companies across different industries.