Challenges we face in DevSecOps pipelines
DevSecOps pipelines aim to integrate security and compliance into each phase of the software development life cycle. Throughout this process, we collect a great deal of valuable data at many points.
● Project managers collect data around cost, resourcing, and time.
● Product owners collect data from issue tracking systems.
● Software architects collect data from static analysis.
● Developers collect data from functional test results.
● QA/testers collect various test cases and reports.
If you step back and notice, people are collecting and using data throughout the software delivery pipeline.
The challenge is that everybody uses the data for their own needs. They either export the data to a spreadsheet or add some additional automation for their specific context. There is no holistic, automated, shared learning with this approach; the learning largely takes place in people's minds. In practice, DevSecOps does help teams deploy faster, but most decisions are still made by experts.
We know this happens: ask, "What is the estimate?", and you will get a vague response such as, "Here's my estimate, based on experience." This approach does not scale to the speed we are trying to attain in software delivery, and a siloed model of knowledge creation and learning is not sustainable in the long term.
Managing business value and risk through the DevSecOps life cycle
By adding intelligence to our DevSecOps pipelines, we can create smart software pipelines, the next stage in the evolution of DevSecOps automation.
The goal is to help us get smarter about secure software development. We can do this by leveraging emerging concepts like machine learning while still being lean and agile. We should help people become more efficient by using the data generated from the software delivery life cycle.
There are a lot of benefits to adding this intelligence.
Getting a prediction based on patterns saves time. It increases the speed of software delivery. We also get higher accuracy and quick learning from our mistakes. It can eliminate many manual processes in every phase of the life cycle.
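As a minimal sketch of pattern-based prediction, consider forecasting the cycle time of the next work item from recent history exported from an issue tracker. The data and the rolling-window approach below are illustrative assumptions, not a prescribed method:

```python
from statistics import mean

# Hypothetical cycle times (in days) for recently completed work items,
# e.g. exported from an issue tracking system. Values are illustrative.
past_cycle_times = [4.0, 6.5, 5.0, 7.0, 5.5, 6.0, 4.5, 5.5]

def forecast_cycle_time(history, window=4):
    """Forecast the next item's cycle time as the mean of the
    most recent `window` observations (a simple rolling average)."""
    recent = history[-window:]
    return mean(recent)

estimate = forecast_cycle_time(past_cycle_times)
print(f"Forecast for the next item: {estimate:.2f} days")
```

Even this trivial model replaces a gut-feel estimate with one derived from observed behavior; in practice a pipeline would feed far richer features into a proper learning model.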
One of the biggest areas for improvement is the planning stage. A plan that does not reflect what we are actually doing means very little. Instead, we should be able to forecast dynamically from past patterns of behavior. Let's take another example: risk.
What is the impact of risk on an organization? If you don't implement a specific security control, what are the consequences? Based on historical data analysis, we should be able to get a better understanding of the impact. In the end, when there is clear alignment with business risk, you can defend why something was done, how it was done, and what components were used. This also helps teams be proactive about deploying secure applications.
Having multiple sources of risk insight can also help reduce organizational risk. In the past, for instance, we would use code analysis alone to produce a high, medium, or low risk profile. But your dominant risk may differ from somebody else's, and the score should reflect your context. A risk score based on risks across the entire DevSecOps life cycle adds far more value to your pipelines.
A narrow risk score means little unless we tie it to organizational risk scores.
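As an illustrative sketch, a composite score could weight risk signals from several life-cycle stages instead of relying on code analysis alone. The stage names, weights, and 0-100 scale below are assumptions, not a standard scheme:

```python
# Hypothetical weights for risk signals from different pipeline stages.
# An organization would tune these to reflect its own dominant risks.
STAGE_WEIGHTS = {
    "static_analysis": 0.30,   # code-level findings
    "dependency_scan": 0.25,   # known-vulnerable libraries
    "dynamic_testing": 0.25,   # runtime findings
    "config_audit":    0.20,   # infrastructure misconfiguration
}

def organizational_risk(stage_scores):
    """Weighted average of per-stage risk scores, each on a 0-100 scale."""
    total = sum(STAGE_WEIGHTS[stage] * score
                for stage, score in stage_scores.items())
    return round(total, 1)

scores = {"static_analysis": 40, "dependency_scan": 80,
          "dynamic_testing": 20, "config_audit": 55}
print(organizational_risk(scores))  # prints 48.0
```

The point is not the arithmetic but the shape: a single score that reflects the whole life cycle, weighted by what actually matters to this organization.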
Implementing a smart DevSecOps pipeline
Implementing a smart software pipeline does not mean throwing away existing DevSecOps tools. In fact, we already have a lot of excellent tools. Almost every tool reports data through log files or an API, which we can feed into a data lake to learn and gain insights from.
These insights provide feedback to all other stakeholders as well. So, we can still use existing DevSecOps tools but with a different mindset toward smarter software delivery. We create layer upon layer in the data lake and connect the dots. This is the next logical evolution of DevSecOps.
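To make tool output from many sources comparable in a data lake, each tool-specific record can be normalized into a common shape at ingestion time. The field names and the sample log line below are hypothetical; real tools emit their own formats:

```python
import json
from datetime import datetime, timezone

# Hypothetical raw finding, as a static-analysis tool might emit it
# in a JSON log file. Field names are illustrative only.
raw_log_line = '{"rule": "SQLI-01", "file": "app/db.py", "severity": "high"}'

def normalize(tool_name, line):
    """Convert one tool-specific log line into a common
    data-lake record, keeping the original payload for later analysis."""
    finding = json.loads(line)
    return {
        "source": tool_name,
        "ingested_at": datetime.now(timezone.utc).isoformat(),
        "severity": finding.get("severity", "unknown"),
        "detail": finding,
    }

record = normalize("sast-scanner", raw_log_line)
print(record["source"], record["severity"])  # prints: sast-scanner high
```

Layering normalized records like this is what lets insights from one stage feed back to stakeholders in every other stage.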
The future of DevSecOps
Eventually, every stakeholder in DevSecOps will be required to analyze data. We are generating a lot of data from every component in every pipeline, and we now need to learn from the behavioral data we generate. With a smart software pipeline, we can radiate insights that help us make more informed decisions.
There is work already being done to build smart software pipelines from a continuous delivery perspective. The focus right now is on what type of data needs to be collected around delivery management. We envision that eventually there will be many tools with analytical dashboards to help us reduce the ambiguity in DevSecOps.
You can listen to our podcast to learn more about building smart software delivery pipelines.