Why is 2016 the year of “Developer Driven Security” as RSA has stated?
It’s telling that it took us, as an industry, several decades just to talk about bringing security directly into software development. It’s an important issue, and I’m glad to see RSA recognize that the traditional security model needs to change. Overall, I think their statement is fueled by three things:
- the lack of change we’ve seen in security practices
- the continued presence of malicious or weak code in software released to the public, and
- the rapidly increasing occurrence of security breaches.
Ultimately, the millions of dollars we’re spending each year on operational security products haven’t reduced risk, so companies are looking for ways to reduce it themselves, and that starts with developers.
Why are developers often leaving out security measures in the code they’re writing?
It’s certainly not intentional. Undergraduate CS programs don’t typically cover security, and most developers haven’t had more than a few hours of on-the-job training. Their current mindset is to focus on delivering features rather than hardening their code, but most are very interested in writing secure code, and we’re starting to see that mindset shift.
Who is usually in charge of reviewing code for security flaws or backdoors?
Often, no one. At companies that have a security resource, that person usually handles the code review tools and triages findings to the development team. That process usually means developers have to go back and make changes to code they wrote three weeks ago, or to legacy code they didn’t write at all.
At companies without a security resource, security is usually a nonexistent practice outside of IT. We’ve talked to a lot of developers who want to step up and take on the role of security lead, but they need resources that focus on quality and security to help them get started.
In a continuous release environment, what is a best practice for doing a code security review?
The most effective time to perform a code security review is while the developer is writing the code. By pointing out security issues and giving devs training right away, they’re more likely to remember how to handle those situations in the future. Educators emphasize a “tight feedback loop” for a reason.
There are a lot of developers who’d prefer to automate their security efforts, and that can be effective too. By adding a tool to your CI process, you’re still getting the information at a time when an issue is easier to fix than it would be the day before you’re scheduled to ship.
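To make the CI idea concrete, here is a minimal, illustrative sketch of the kind of check an automated gate might run on changed files. The pattern names and regexes are hypothetical toy rules for demonstration; real static analysis tools go far deeper than pattern matching.

```python
import re

# Toy patterns a CI security gate might flag. These rules are purely
# illustrative -- real analyzers use much deeper analysis than regexes.
RISKY_PATTERNS = {
    "use of eval": re.compile(r"\beval\s*\("),
    "hardcoded password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
}

def scan_source(source: str) -> list:
    """Return findings (line number + rule name) for one file's text."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in RISKY_PATTERNS.items():
            if pattern.search(line):
                findings.append(f"line {lineno}: {name}")
    return findings

# In CI you would run this over the changed files and fail the build
# (exit nonzero) whenever any finding comes back.
sample = 'password = "hunter2"\nresult = eval(user_input)\n'
for finding in scan_source(sample):
    print(finding)
```

In a real pipeline this step would run on every push, so the developer sees the finding while the code is still fresh rather than weeks later in a triage report.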
In a pre-cloud world, everyone relied on boxed security methods around their products rather than within them. Is there a fear that people will get too comfortable assuming AWS’s or Azure’s built-in protections will be enough for sloppy code, and if they’re not, is the liability on them?
It is absolutely not enough to assume your application is secure based on these boxed solutions. The bad guys are hacking our applications daily by taking advantage of the same exploits we’ve heard about for years. Services like AWS can’t protect you from improper configuration, malicious users, or scorned employees. With multiple attack surfaces in software, we can’t possibly build a moat deep enough or wide enough to keep everyone out. Look at the latest hacks at Yahoo! and LinkedIn: they’ve got unimaginably deep pockets for IT security and still haven’t been able to keep their records safe. We’ve got to be accountable for the applications we write and give our customers confidence that we’ll keep them safe.
Want to know more about Developer Driven Security? Check out Gary's Session at Data, Development, and Drive - Pushing the Throttle to Innovation on Oct 6th!