Taming the AppSec Monster

Most organizations either have their own central security teams or rely on external security consultants to build and roll out AppSec programs. As a starting point, a couple of cooperative development teams are selected for a “pilot rollout”. Upon seeing successful implementation results from a subset of the pilot candidates (development teams), the security professionals conclude that scaling to the rest of the organization will be easily attainable. However, different development teams have different sizes, cultures, processes and AppSec maturity levels, so efforts to apply a one-size-fits-all approach are met with varying levels of resistance, with tight schedules and lack of resources cited as the most common reasons. This is when the “scalability monster” makes its presence felt. Organizational mandates from the Information Security team, or regulatory/compliance mandates from target markets, push development teams into a checklist, bare-minimum, “let us just take care of it for now or for this customer segment” attitude.

Why does this happen? Because baking security into the development process is generally perceived by development teams as riddled with difficulties:

  1. Obstructive: I (the development team) will need to overhaul my existing development process and possibly integrate new (and potentially costly) tools. This is going to be a significant management overhead. So, let’s revisit the AppSec program in the next release cycle (read: never!).
  2. Zero value for most customers: Security requirements are NFRs (Non-Functional Requirements) with no tangible value to most customers. Each cycle spent on meeting some or all of them would be a waste of the development team’s time. Further, it would decrease the usability of the product/service in question.
  3. Out of scope: Most of the security activities expected from me (the development team) do not apply, for one reason or another. What works well for some other development team in the organization would simply not work in my development environment.
  4. Unattainable levels: With a two-week sprint (to take an Agile Scrum example), how am I expected to integrate this huge list of security requirements, and that too just to reach a baseline? Striving for higher levels would be impossible.

These are real, valid concerns, not hypothetical ones. All of them need to be suitably addressed for an AppSec program to scale smoothly inside the organization. The following covers potential ways to do so:

To address challenge #1 (being obstructive):

  • Consider profiling the current development environment and making a list of the various tools that are currently in use.
  • Then, identify ways to leverage the existing tools for conducting/consuming the results of various security activities, as applicable.
  • While evaluating new security tools, lean towards the ones that can integrate with the existing development tools, or ones that can help manage the various security activities in the AppSec pipeline itself. For example, if a development team uses JIRA as its ALM tool and a security scanner is being evaluated, ensure that the scanner can easily feed its results into JIRA (a minimal integration sketch follows below).

The idea is to help strike the right balance by re-using tools that may already be in use. Yes, this makes things cumbersome for the security team initially, but as the AppSec program matures, taking reports from varied tools and putting them through a “single picture” organizational AppSec filter becomes doable.
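To make the JIRA example concrete, here is a minimal sketch of feeding a scanner’s findings into JIRA through its standard REST issue-creation endpoint. The report format, project key, credentials and field mapping are illustrative assumptions rather than any particular scanner’s output; many commercial scanners also ship their own JIRA connectors that achieve the same result.

```python
# Minimal sketch: push findings from a scanner's JSON report into JIRA.
# The report layout, project key and credentials below are illustrative
# assumptions, not a specific scanner's or organization's values.
import json

import requests
from requests.auth import HTTPBasicAuth

JIRA_URL = "https://your-org.atlassian.net"          # assumed JIRA instance
PROJECT_KEY = "APPSEC"                               # hypothetical project key
AUTH = HTTPBasicAuth("bot-account@example.com", "api-token")  # placeholder credentials


def create_issue_from_finding(finding: dict) -> str:
    """Create one JIRA issue per scanner finding and return its issue key."""
    payload = {
        "fields": {
            "project": {"key": PROJECT_KEY},
            "issuetype": {"name": "Bug"},
            "summary": f"[{finding['severity']}] {finding['title']}",
            "description": f"{finding['description']}\n\nFile: {finding['file']}",
            "labels": ["appsec", "scanner-import"],
        }
    }
    resp = requests.post(
        f"{JIRA_URL}/rest/api/2/issue", json=payload, auth=AUTH, timeout=30
    )
    resp.raise_for_status()
    return resp.json()["key"]


if __name__ == "__main__":
    # 'scan_report.json' stands in for whatever export the scanner produces.
    with open("scan_report.json") as fh:
        findings = json.load(fh)["findings"]
    for finding in findings:
        print("Created", create_issue_from_finding(finding))
```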

To address challenge #2 (lack of customer value):

Data breaches are often the result of software that fails to suitably protect its customers’ sensitive information, whether organizational, financial or personal. When such breaches occur, customers rightly assume that their vendors would have done suitable security diligence during the development process. Unprepared vendors are left scrambling for face-saving responses, and face reputational damage and potential lawsuits.

Many customers even require some tangible form of assurance that the software they are buying has been coded securely and that it is, in fact, free from significant known defects. To that effect, SAFECode’s Buyer’s Guide identifies three commonly requested assurance methods.

To address challenge #3 (being out of scope):

Consider customizing the security activities based on a development team’s processes, goals and AppSec appetite. The recommended security activities cannot follow a one-size-fits-all approach. A few examples:

  1. Penetration tests and Threat Models: A development team using the waterfall development process has a longer release cycle and therefore the ability to have a 4-week manual penetration test conducted or to undergo a detailed Threat Modelling exercise. A development team on Agile or Continuous Integration/Deployment, on the other hand, would not have that liberty and would need to meet such objectives through some out-of-the-box thinking: getting a penetration test done in parallel sprints, getting a security tool-based penetration test, identifying security requirements for each iteration (or set of iterations), or finding a less time-intensive way to do Threat Modelling.
  2. Security-aware team member(s): If the development team has members (irrespective of their roles) who are security-aware, an AppSec program implementation is likely to garner better support. Consequently, the security activities recommended for such a team could include seemingly heavy or difficult ones.

To address challenge #4 (unattainable levels):

The AppSec program should enable development teams to implement and measure security activities iteratively (in incremental chunks across releases), rather than as one massive list with a one-time compliance check.

On that note, AppSec program compliance checks should not be about whether a set of baseline security activities is being done or not. Instead, they should evaluate whether the underlying objective(s) for doing a particular security activity are being met. For example, assume that improving the code’s security quality is a development team’s objective. This can be accomplished through manual code reviews, automated code reviews, frameworks that check for insecure code check-ins, or a mixture of all of these. An effective compliance check should evaluate whether the underlying objective of improving the code’s quality is being met, regardless of which mechanism the team chose (one such mechanism is sketched below).
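As one illustration of the “frameworks that check for insecure code check-ins” option, below is a minimal sketch of a git pre-commit check; the patterns, file handling and hook wiring are illustrative assumptions. Mature teams would typically rely on dedicated secret-scanning or SAST tooling rather than a hand-rolled script, but the compliance question stays the same: is insecure code actually being kept out of the codebase?

```python
# Minimal illustrative pre-commit check that blocks a commit when staged
# files match a few insecure patterns (e.g. hardcoded credentials).
# Install as .git/hooks/pre-commit or run from CI; patterns are examples only.
import re
import subprocess
import sys

INSECURE_PATTERNS = [
    (re.compile(r"(password|secret|api_key)\s*=\s*['\"][^'\"]+['\"]", re.I),
     "possible hardcoded credential"),
    (re.compile(r"AKIA[0-9A-Z]{16}"), "possible AWS access key ID"),
]


def staged_files():
    """Return the paths of files staged for the current commit."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return [path for path in out.stdout.splitlines() if path]


def main():
    findings = []
    for path in staged_files():
        try:
            with open(path, encoding="utf-8", errors="ignore") as fh:
                text = fh.read()
        except OSError:
            continue  # skip unreadable or removed paths
        for pattern, message in INSECURE_PATTERNS:
            if pattern.search(text):
                findings.append(f"{path}: {message}")
    if findings:
        print("Commit blocked by AppSec pre-commit check:")
        print("\n".join(findings))
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```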

In conclusion, a top-down organizational mandate or a compliance policy should only be used to guide development teams towards putting AppSec programs with basic building blocks in place; detailed implementation should be left to the teams. Scalability of AppSec programs is better achieved by understanding and addressing the pain points of development teams. By addressing those pain points, you can build a culture of AppSec in individual teams and thereby help improve the security posture of the entire organization in the long run.

Authored By: Vishal Asthana — [email protected]