Shifting security left. What does it mean and why bother?
Let me start by saying that I’m a massive advocate of automation. The more you can automate, the less repetitive work you’ve got on your plate and the more cool shit (by work standards) you can do!
In my case, everything I’m writing here is the cool stuff, because I know anything I automate means I’m spending less time scrambling to get bits done late on a Friday afternoon for the next release (which has, once again, been dumped on me at the last minute), and more time experiencing that newest beer in my local, which is more expensive and tastes the same as all the rest I’ve tried, but is infinitely better than the tea I’d otherwise be drinking at my desk.
Now there are many ways of achieving a decent security setup, and this is just based on my own experience, but I can guarantee that they all include shifting security to the left.
Left turn, Clyde
So, what exactly does shift left mean? Take a look at the flow “diagram” below and you’ll see my very crude attempt at showing how the software development life cycle works. The planning stage is where it all begins, and it’s your earliest opportunity to get involved. Embed yourself in the team(s) and sit in on the refinement sessions. Don’t be afraid to stick your hand up and ask questions if you don’t know what’s being discussed, as you may miss an opportunity to prevent a vulnerability before a single line of code is written. Is the data being grabbed in an API call validated to ensure it’s the type you’re expecting and not something malicious? Should we be writing all that sensitive information to the error logs? How long can I get away with sitting perfectly still before someone asks if my Zoom has crashed? All the important stuff.

Now, a lot of the issues you raise will be pretty standard, as will the precautions and solutions you advise, so why not automate it? Why not create simple FAQs in a central location that you can point a comment on the ticket towards, and let the developers learn? It doesn’t take long and saves you rewriting the same thing over and over and over and over….
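To make that API-validation question concrete, here’s a minimal sketch of the kind of check you’d want agreed in refinement, before anyone’s written production code. The field names and limits are entirely made up for illustration; a real service would most likely lean on a schema-validation library instead:

```python
# Minimal sketch: validate an incoming API payload before using it.
# "username"/"age" and their limits are hypothetical examples — the
# point is that these rules get decided at the planning stage.

def validate_payload(payload):
    """Return a list of validation errors (empty list means the payload is OK)."""
    errors = []
    if not isinstance(payload, dict):
        return ["payload must be a JSON object"]

    username = payload.get("username")
    if not isinstance(username, str) or not (1 <= len(username) <= 64):
        errors.append("username must be a string of 1-64 characters")
    elif not username.replace("_", "").isalnum():
        errors.append("username may only contain letters, digits and underscores")

    age = payload.get("age")
    if not isinstance(age, int) or not (0 <= age <= 150):
        errors.append("age must be an integer between 0 and 150")

    return errors
```

The point isn’t the code itself; it’s that the rules (expected types, lengths, allowed characters) are agreed up front rather than bolted on after something malicious has already slipped through.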
A little bit to the right, but not too much!
Fast forward past all the initial meetings talking about what we’re going to do, to the actual doing bit. Now imagine a developer unwittingly introduces a security vulnerability, and the first you know about it is a customer giving you a bell to let you know that their servers have been compromised, all their data has been stolen, and you should expect a call from their solicitors any day now. That’s security about as far to the right as it’s going to get, and I think we can all agree it’s about as bad as things can get, and certainly not a strategy I’d recommend.
Now think of the same issue being introduced but you’ve equipped the developer with a handy IDE plugin which monitors their code in real-time and alerts them to any issues as they type. Don’t get me wrong, they could commit it with the issues in place, but they’re not going to because of the next thing we have up our sleeve.
As we know, not all tooling is created equal and there may be some performance issues with the latest iteration of the plugin you’ve picked, so your friendly dev has taken it upon themselves to disable it. Not ideal, but it happens, and you’ve got no way of knowing about it unless they tell you. Which they probably won’t as they just want to get their bit done and get down the pub themselves. That’s cool, we’re in security and we don’t trust anyone, not even our own teams. So, we put further steps in place just a little to the right and we add security gates to the CI/CD pipeline.
A security gate is basically an automated check to see if the code being committed contains any vulnerabilities. In my case, this came in the form of a webhook that makes an API call to the SAST server upon creation or update of a pull request. The SAST server scans the code and returns the results to the PR in a decorated (fancy-looking) comment, and if any vulnerabilities are found, marks that comment as something other than resolved. This is then picked up by a branch policy that prevents the PR from being merged. Get in! I’ve stopped a vulnerability from worming its way into the dev branch. I am so good at this.
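The exact plumbing will depend on your SAST server and git platform, but stripped of the webhook and API calls, the gate decision itself boils down to something like this sketch. The findings format, severity names and comment statuses are all hypothetical, not any particular vendor’s API:

```python
# Sketch of the security-gate decision: given SAST findings for a PR,
# build the comment status that the branch policy keys off. The merge
# stays blocked while the status is anything other than "resolved".

def gate_pull_request(findings, blocking_severities=("critical", "high")):
    """Return (status, summary) for the decorated PR comment."""
    blockers = [f for f in findings
                if f.get("severity", "").lower() in blocking_severities]
    if not blockers:
        return "resolved", "SAST scan passed: no blocking vulnerabilities."
    lines = [f"- [{f['severity'].upper()}] {f['title']} ({f['file']})"
             for f in blockers]
    summary = "SAST scan found blocking vulnerabilities:\n" + "\n".join(lines)
    return "active", summary
```

The nice part is that the branch policy then only has to check one thing: is the comment status resolved or not.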
Whoa there, Nelly. Not so fast!
Now, the main responsibility of any ancillary department is to support the primary activities of an organisation. In this case, the development teams. So, it’s important not to become a blocker for those teams and stop them every time we see an issue, regardless of whether it’s the most secure way of doing things. Producing any product is a balance of quality, speed and… probably some other stuff. There’s no point in creating the world’s best sausage roll if it takes so long to arrive that it’s stale by the time it hits the shop. The customer’s eventually going to trot down to Greggs and join that ever-increasing queue. The same goes for development (sort of). If you can’t deliver the product in a timely fashion, the customer’s going to find another way of getting what they want… sausage. Wait, what?
This is where it gets a bit tricky, and the bad news is that there’s no one-size-fits-all solution. So, I’ll just tell you what’s worked for me, and you’ll need to decide if it’s for you or not.
Not all teams are created equal
I found the example given above worked great for your standard development team, but when it came to the research team, it was a whole different ball game. These guys are wizards who write whole applications in 30 lines of code that make less sense than usual. That’s because they’re using bleeding-edge tech to get a proof of concept up and running as soon as possible to stick in front of the business, in the hope it’s the first step to being first to market with the next big thing. They do not like you and they do not want you interfering. They are very good at what they do and, more importantly, quick where speed is of the essence. But we need to make sure the software they’re producing is still secure, right? Wrong… kind of. They’re producing a proof of concept, not a finished product that’s being shipped out the door. They’re going to pass that POC on to a dev team once it has sign-off, and whilst the dev team will use the concept, they’re essentially going to rewrite it in a friendlier language and follow the full dev security process. But it doesn’t hurt to make people aware.
Horses for courses
So, what I did was introduce the same security gates as explained above, but made them optional rather than mandatory. This meant the whole security process remained the same, except for the branch policy. Now, when a comment marked as anything other than resolved appears, it doesn’t block the wizards from merging the PR. It’s the best of both worlds. We give them the information and they decide what to do with it. At first, it was given a damn good ignoring. But as the POCs were being fed to the dev teams and there were the usual moans about not being able to use a certain technology, something magical happened… the research team started taking notice and using the information they had been given. They took the time to look at some of the issues raised in the PR comment and realised that, with minimal effort, they could make life a little easier for the devs, and so they did. And they all lived happily ever after.
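One way to picture the mandatory-versus-optional split is as a per-team policy layered over the very same scan results. The team names and the policy flag below are invented purely for illustration:

```python
# Sketch: same scan, different enforcement. Standard dev teams get a
# blocking gate; the research team sees the same findings, but the
# branch policy treats them as advisory. The mapping is hypothetical.

TEAM_POLICY = {
    "dev": {"gate_blocks_merge": True},
    "research": {"gate_blocks_merge": False},  # advisory only
}

def can_merge(team, has_open_findings):
    """Decide whether the branch policy lets the PR merge."""
    # Unknown teams fall back to a blocking gate — safe by default.
    policy = TEAM_POLICY.get(team, {"gate_blocks_merge": True})
    if has_open_findings and policy["gate_blocks_merge"]:
        return False  # gate holds the PR until findings are resolved
    return True       # either clean, or the gate is advisory for this team
```

Defaulting unknown teams to a blocking gate keeps the safe behaviour as the fallback; the research team is the explicit exception, not the rule.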
These are just three shortened, basic examples of shifting left from a long list of possibilities, and they shouldn’t replace security on the right; far from it. There’s plenty more you can be doing further down the line and once the product has left the door, but that’s a story for another day. Shifting security left enables you to catch issues earlier in the process, which will almost certainly save you heartache further down the line. Nobody likes patches. Not you, not the business, and not the customer who would’ve preferred it was right the first time. Don’t be the reason your product needs one. Shift left and drink more beer!