What is it and why should I care?
This article (and several of the remaining ones in the series) is not so much technical in nature; rather, it deals with processes related to security problem solving.
It’s a fact of life in most development and/or security shops that there are fire-drill days, and for security practitioners that is often due to the “we have a security problem and the sky is falling … fix it” mentality. This course of action, however, doesn’t lend itself to fixing things properly (root-cause analysis), and certainly doesn’t allow for the methodical eradication of entire classes of vulnerabilities.
That is a problem.
In order to make a dent in the security problems plaguing the Internet, we cannot solve problems only as they come up (and that covers only known attack vectors; we certainly don’t know what we don’t know). We have to get ahead of them. We can’t fix problems once they arrive – at that point it’s too late. This brings to mind several ideas, like building security in from the start, and looking at what others have done to solve their problems and adopting their good ideas in our own processes.
However, one issue I don’t see addressed much in the security realm (though it does come up) is the idea that we’re trying to tackle too many problems at once. Make no mistake, there are a lot of issues, and they all seem important, but in most situations there’s a sensible priority order – some type of risk ranking. If you get the appropriate stakeholders in the room and make the possible security issues clear, some issues will clearly be more important or impactful than others.
If we work from the assumption that we have ranked our problems in priority order, why should we be haphazard in our approach to their resolution? We absolutely shouldn’t. Security tooling has often approached this with the familiar red-yellow-green ranking, which is fairly helpful if you can tell the tool what constitutes red, yellow, and green in your environment. However, this otherwise helpful approach misses the point that solving many security problems requires an architectural solution.
Let’s consider two examples from a different problem set – performance.
1. One problem might be a particular method that’s reasonably slow and gets executed many, many times. In this case, you’d probably just go in, rewrite the method using some optimizations specific to the method, and be done with it. Instant performance increase, and very little fuss.
2. Another problem might be queries taking too long across the application. If we assume a relational data access layer, there could be lots of solutions. You might scale the database hardware somehow, swap out the database vendor, add caching either internal or external to the application, tune queries, or a handful of other things. The point is that many of these “fixes” involve significant software and/or hardware architectural changes, and you wouldn’t think of making a decision on those nearly so quickly.
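The first case above is a purely local fix. As a minimal sketch in Python, suppose a hypothetical `shipping_cost` function is slow and called repeatedly with the same inputs (the function and its numbers are made up for illustration); memoizing it with `functools.lru_cache` is exactly this kind of quick, contained optimization:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def shipping_cost(weight_kg, zone):
    # Stand-in for an expensive computation performed on every call.
    return round(weight_kg * 1.25 + zone * 0.75, 2)

print(shipping_cost(10, 3))             # first call does the work: 14.75
print(shipping_cost(10, 3))             # repeat call is served from the cache
print(shipping_cost.cache_info().hits)  # 1
```

Contrast that with the second case, where no single-function change will do – which is the whole point of the analogy.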
Some security issues (e.g., session fixation) are pretty simple fixes: you make them in one place and you’re done. Others (SQL injection, XSS, etc.) are certainly more complex and are generally best solved with architectural changes.
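For instance, the standard session-fixation fix really is a one-place change: issue a brand-new session identifier at login so an attacker-chosen pre-authentication ID never becomes an authenticated session. A minimal sketch, with a plain dict standing in for whatever session store your framework actually provides:

```python
import secrets

sessions = {}  # session_id -> user data (stand-in for a real session store)

def login(old_session_id, username):
    # Discard any state tied to the pre-login session ID.
    sessions.pop(old_session_id, None)
    # Generate a fresh, unpredictable ID for the authenticated session.
    new_session_id = secrets.token_urlsafe(32)
    sessions[new_session_id] = {"user": username}
    return new_session_id

attacker_fixed_id = "known-id-from-attacker"
sessions[attacker_fixed_id] = {}        # victim arrives with the fixed ID
fresh_id = login(attacker_fixed_id, "alice")
assert fresh_id != attacker_fixed_id    # the attacker's ID is now useless
assert attacker_fixed_id not in sessions
```

Most web frameworks expose this as a single call (e.g., regenerating the session on authentication); the architectural problems in the next section don’t collapse to anything this small.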
What should I do about it?
Hopefully I’ve convinced you that solving security problems fire-drill style is a bad idea and that many require a more rigorous approach, so how do we solve them correctly?
My recommendation to developers is to approach them individually (with the caveat that you need to fix the easy/terrible ones first to knock out the true fire drills). This means that you pick your biggest problem (calculated by some risk rating methodology) and try to (a) eradicate the issue from your codebase(s) and (b) make it as close to impossible as you can for it to ever happen again.
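As a toy illustration of the “some risk rating methodology” part, even a simple likelihood-times-impact score is enough to put problems in priority order (the issues and numbers below are invented; real methodologies such as OWASP Risk Rating weigh more factors):

```python
# Hypothetical issue list with 1-5 likelihood and impact scores.
issues = [
    {"name": "SQL injection in search", "likelihood": 3, "impact": 5},
    {"name": "Verbose error pages",     "likelihood": 4, "impact": 2},
    {"name": "Session fixation",        "likelihood": 2, "impact": 4},
]

# Risk = likelihood x impact; tackle the biggest problem first.
for issue in issues:
    issue["risk"] = issue["likelihood"] * issue["impact"]

ranked = sorted(issues, key=lambda i: i["risk"], reverse=True)
print(ranked[0]["name"])  # SQL injection in search (risk 15)
```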
That can be daunting, but here are a few recommendations to get that process started.
1. Understand the problem.
Don’t ever try to tell anyone how to solve a problem you don’t understand yourself. You usually don’t actually improve anything and you look foolish. This is a common problem in security, so enough said here.
2. Consider all use cases where the issue can occur.
Figure out the ways developers can cause the issue today, as well as any ways they aren’t using yet but will be soon. This gives you the breadth of functionality that a possible solution has to at least consider, if not account for. The goal is that you don’t give developers an excuse to go around your solution because “we need this feature”.
3. Evaluate solutions.
This is certainly a broad topic with lots of possible tasks, but there are a few obvious ones.
– Distill the known “secure” approaches and their associated tradeoffs
– Look for known attacks against those approaches
– Decide on a single or hybrid solution (most of the time, building your own is the wrong idea)
– Try to find a good implementation that matches your chosen solution
– Follow the guidance to implement the solution properly
4. Institutionalize the chosen solution.
Once you have a chosen solution for your problem and a working implementation, you now need to make sure that is the solution that actually gets used. One approach that seems to work pretty well is the ESAPI model. Here, you build a set of controls specific to your organization that function as the “approved” solution for a given problem area. You also build appropriate documentation showing developers how to use it properly. This brings in all the benefits of code reuse, as well as the consistent application of security controls.
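As a tiny illustration of what an “approved” control might look like, here is a hypothetical org-blessed output-encoding function that wraps Python’s standard `html.escape`. The function name and docstring are invented; the point is that developers call this one documented function instead of escaping ad hoc:

```python
import html

def encode_for_html(value):
    """Org-approved HTML output encoding - use this, never ad-hoc escaping."""
    return html.escape(str(value), quote=True)

print(encode_for_html('<script>alert("x")</script>'))
# prints: &lt;script&gt;alert(&quot;x&quot;)&lt;/script&gt;
```

Because everything funnels through one function, a later fix or hardening change lands everywhere at once – the code-reuse benefit mentioned above.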
5. Add technology and processes for verification.
This is an important step that is often skipped. After you’ve considered the problem, come up with a solution, and gotten people to use it, you need to make sure they keep using it. Again, this could mean a lot of things, but here are a few ideas to get you going:
– Get (or build) a tool that not only allows you to check that you’re not doing something wrong, but also that you are doing something right. This will probably be custom, but it’s very cool to be able to see everywhere you’re sending queries to a database that DON’T go through your 10 “approved” methods. That’s a much more manageable problem.
– Add people and processes to cover areas where tools don’t work. At the moment, software can’t catch all of these things, but humans can if they have the time. By only requiring humans to step in and evaluate those areas that technology can’t deal with, you cut down on the time requirement, and give folks a chance to focus on those human-only tasks where they’re actually needed.
In conclusion, there are lots of security problems to be solved, and not enough time or people to solve them. However, if we prioritize our problems and then deal with each one thoroughly, we can consistently create significantly more secure applications.