Year Of Security for Java – Week 45 – Do Threat Modeling


What is it and why should I care?
After the last post covering the concept of a secure SDLC, this week we’ll look at a specific activity recommended by the various secure SDLC models: threat modeling. From the view of the secure SDLC, this is an activity that takes place fairly early in the cycle.

Threat modeling is an exercise intended to improve the security of your application(s) by considering the attacks that a threat can perform against your specific application. That data is then used to inform the development of controls and tests to verify the prevention of those attack vectors.

There are various components in the above definition that will be discussed below. However, given that this is a short post, I’d like to make a “further reading/watching” suggestion up front: if you’re interested in the topic, please go look at the resources in the references. I even included an excellent video of jOHN Steven talking about threat modeling. It’s worth an hour as it’s a good overview. jOHN is an expert on threat modeling and has a wealth of experience in making it useful and practical – his wisdom has certainly heavily influenced my own views on the topic.

What should I do about it?

A few initial caveats:
– This is fairly thin coverage of threat modeling as it is a complex process. Many complete books have been written on the topic.
– There are various models of threat modeling. I am attempting to portray the basic process only. You should use the model that most closely aligns with your organizational needs and then further tailor it to your specific environment.
– Along with the various models, there is a wide array of terminology used to describe the components of threat modeling. One good example is the work Cigital has done on producing a useful vocabulary.

With those initial points out of the way, what do you actually do when threat modeling? Here are a few steps:

1. Identify what you want to protect
This should already be understood in your organization, but I mention it as a preliminary step because writing down what is important can be a useful activity in itself. No single stakeholder usually has all the information, and this process is really helpful for getting everyone on the same page from the get-go. It also helps you determine what’s worth protecting. This process should involve assigning a specific (or at least relative) value to your assets in order to determine priority.

2. Consider your application
Look at what you want to build, are building, or have built (depending on where you are in the life of the application) and make a picture of it (pictures are pretty and easier to understand). Do NOT use a network architecture diagram (too generic) or a set of UML class diagrams (too specific). My personal rule of thumb (informed by the work of others) is that your picture should meet 2 basic criteria. It should be specific to this application (the diagrams of 2 similar apps in your portfolio should still have different components and therefore look somewhat different), and it should all fit onto 1 page in reasonably readable form (this helps keep complexity at the right level). This diagram is not the only one that will exist. You can layer additional data on top of it with different views, or you can expand certain subsections for more detail, but there should be a single-page overall view. This view should include application-specific information, such as the software components along with the design patterns used, frameworks used, etc.

3. Add more views
In addition to the basic view of your system, you’ll want to start to annotate your view with additional information. You can add data like:
– entry points: Think about areas of your application that can be invoked externally. These are important in that they provide attack surface so should be carefully considered.
– trust boundaries: These are areas where you make a decision about the levels of trust granted to a given component. See the jOHN Steven video for an example of how trust boundaries aren’t necessarily so clean-cut.
– data flow: Consider how information flows through your system. Are there areas where data can flow in different paths depending on conditions such as role or other context? Think these through and map them out.
– critical functionality: The parts of the application that are most important deserve a look and some thinking about how they are different. Are there actually controls in place that make them different? Should there be?

4. Consider the attackers
We need to think about who is attempting to attack the application. Is a curious user the same as an insider in the types of attacks they can launch? What about an angry customer versus an angry employee, or a customer who is not familiar with computers at all? They could all do things to attack the application, but the attacks will come in different forms and may require different controls. In this phase, you can build a simple spreadsheet to keep track of the different users, or try attack graphs or even attack trees. These vary in complexity and usefulness. The idea here is to give sufficient thought to the types of attackers you could encounter, determine which of those you care about, and then think about how those specific attackers may attempt to attack your application (specific attack vectors).
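If a spreadsheet feels too informal, even a trivial data structure works as a starting point. Here’s a minimal sketch in Java – the actor names, motivations, and vectors are purely illustrative, not a recommended taxonomy:

```java
import java.util.List;

public class ThreatCatalog {

    // One row of the "who/how" spreadsheet: an attacker profile and the
    // attack vectors we believe they could attempt (Java 16+ record).
    record ThreatActor(String name, String motivation, List<String> attackVectors) {}

    public static void main(String[] args) {
        List<ThreatActor> actors = List.of(
            new ThreatActor("Curious user", "exploration",
                List.of("parameter tampering", "forced browsing")),
            new ThreatActor("Angry employee", "revenge",
                List.of("abuse of legitimate access", "data exfiltration")),
            new ThreatActor("External attacker", "financial gain",
                List.of("SQL injection", "credential stuffing")));

        // Even a simple dump like this keeps the model reviewable over time.
        actors.forEach(a -> System.out.printf("%s (%s): %s%n",
            a.name(), a.motivation(), a.attackVectors()));
    }
}
```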

5. Rate the issues
Given the specific components of the application (what) and the attackers (who) and their attack vectors (how), we should have a decent picture of our threat environment, assuming we did a good job on the previous activities. At this point, we need to decide which threat scenarios (who attacking what how) we care about and how much. This prioritization process will rank issues for the next step.

6. Resolve the issues
Now we need to do something with this data that we’ve worked so hard to produce. Once we know the specific issues we care about preventing and which ones are most important, we go through the process of resolving them. I find I go through a few simple steps naturally.
– reduce attack surface: Are there things that can be resolved by removing unnecessary functionality or architecture from the application? There are tradeoffs here, but simplicity is your friend in security, and I try to look for ways to apply this solution relentlessly.
– design around them: Can you build your system in a different way that doesn’t require that specific technique? Are there alternative technologies or solutions that don’t suffer from that weakness? There is again a tradeoff here, as you’ll need to reconsider your threat model in light of the design alternative you choose.
– find controls: If I can find quality reusable solutions, I use them. There are obvious benefits to reuse.
– build controls: If I can’t find something that already does the job, I go the route of building something to handle the issue. When I do that, I also try to make it reusable.
– build tests: The threat modeling process should be a driver for test cases, just as requirements would be. An additional benefit to doing this task as part of the process is that you can use a traceability matrix to tie the threat/attack vector to a test case, and show that it’s covered – a small sketch of this follows.
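As a rough illustration, here’s what a threat-traceable test might look like using JUnit 5 and the OWASP Java Encoder; the “TM-017” identifier is a hypothetical threat-model row ID, and the sink being tested is made up:

```java
import static org.junit.jupiter.api.Assertions.assertFalse;

import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Test;
import org.owasp.encoder.Encode;

class ThreatModelRegressionTest {

    @Test
    @DisplayName("TM-017: reflected XSS payload is neutralized at the profile-name sink")
    void reflectedXssPayloadIsNeutralized() {
        String payload = "<script>alert(1)</script>";

        // Stand-in for the application's real output path; here we just
        // exercise HTML-context encoding directly.
        String rendered = Encode.forHtml(payload);

        // The ID in the display name maps back to the threat-model row,
        // giving you a simple threat-to-test traceability matrix.
        assertFalse(rendered.contains("<script>"));
    }
}
```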

7. Iterate
Threat modeling is an activity that, if done at all, is usually done once at the beginning of the project and never again. This should not be the case. Practically, I don’t think you need to re-evaluate it for every development sprint, but you should re-evaluate it periodically. I generally recommend that you revisit the threat model in earnest when you hit a specific trigger (a new attack, an incident, operations data on attack scenarios, new assets, a change in personnel) and on a periodic timed basis (every 3 months, 6 months, or year).

The coverage I have given to threat modeling is admittedly basic, but should hit the high points well enough. This is an extremely useful security process that adds value, but it is often not done because either 1) people don’t know how to do it or 2) people think it’s too heavyweight. Reason 1 is an education issue, and is easy enough to remedy for those who are interested. Reason 2 goes away in part with education, but also realize that you should customize the process to your environment, which can save you time (e.g., certain threat actors may be of no concern to you). Also consider that we all generally build similar things for the most part (web, mobile, desktop) – our threat models often have similar or common components. We can build reuse across applications, internal groups, and even across organizations. There are publicly available threat models (some better than others) for common application models that you can use as a starting point. From there, reuse what you can from app to app to save time.

In conclusion, threat modeling is a very useful exercise to evaluate and improve the security of your application. By considering your application and the threats against it, you can have a better understanding of how to effectively design the security of your system.

References
———–
jOHN Steven’s threat modeling talk
http://software-security.sans.org/blog/2012/08/06/ask-the-expert-john-steven
https://www.owasp.org/index.php/Application_Threat_Modeling
http://www.cigital.com/justice-league-blog/2011/05/11/threat-modeling-vocabulary/
https://www.owasp.org/index.php/Category:Threat_Modeling
http://msdn.microsoft.com/en-us/library/ff648644.aspx
https://www.owasp.org/index.php/Threat_Risk_Modeling
http://www.schneier.com/paper-attacktrees-ddj-ft.html
http://www.cigital.com/justice-league-blog/2011/03/29/moving-to-mobile-new-threats/


Year Of Security for Java – Week 44 – Follow a Secure SDLC


What is it and why should I care?

Software development has taken an interesting path over the short lifetime of the field. It began as a deeply technical field where only the best and brightest could participate, which is not unusual since it was born out of engineering, a very technical and structured field itself. However, as the field opened more widely to the general population due to the Internet as well as widespread access to computers and simpler programming paradigms, the barrier to entry was significantly lowered. There’s recently been some educated guesswork that places the number of Java developers in the world at 6-10 million. That’s just one (albeit popular) language. In addition, the newer development platforms (web/mobile) and reduced time to market (days/weeks instead of months/years) have made the field even more popular and populous. Add to these points the fact that the web in particular was built to be open, and that most developers haven’t had significant security training – often including even those building the platforms and languages themselves.

From a software security perspective, that’s a challenging environment in which to function: simple and accessible languages/frameworks that are fairly insecure being used by novices, or even professionals, with little to no security training. Historically in software, security has been an afterthought – we’ve done a poor job overall with basic concepts such as validating input, encoding output, authentication, access control, etc. As for more esoteric issues, we have typically engineered solutions well after the attacks are discovered and known (this is partially acceptable since “you don’t know what you don’t know”). However, we also seem to have very bad memories, and tend to re-introduce the same weaknesses of design repeatedly (see web->mobile).

But there is hope! I have no doubt that the software security field is going to be active for a long time (lots of problems), but I am a firm believer that we can make significant improvement, especially within our sphere of influence. What it requires though is baking security into the way we build and deliver software, our software development life cycle (SDLC). Many of the posts in this series (and on this blog generally) are point solutions to specific problems. However, securing the SDLC is far more broad and reaching. The SDLC is just the way you build software, the set of steps you follow to get from thought or need to working software to retirement of the product. Examples you’ve heard of are agile, waterfall, spiral, etc. The point is that whether yours is heavily structured or very ad-hoc, you follow some process.

The simple idea of securing the SDLC is that you modify your current process (or adopt a new one) in a way that accounts for security. The truth is that if you want specific attributes out of something you build, you must plan for those attributes before you build it, or face an unacceptable re-engineering effort. If I want a chair that rocks, but I just start cutting pieces of wood in the shape of a chair that I’ve seen before (copy-paste code from stackoverflow or previous projects), I’m going to end up with something like a chair … but not one that rocks, and certainly not one that rocks smoothly.

What should I do about it?

We want to build secure software, and we know we have to plan for it. Now we need a process that allows for that and helps us plan for it. Luckily, there are several popular, openly available models that you can consider to help you get started. All of these should be customized and tailored for your environment, but they are very good at giving you ideas of areas to consider. It’s a good idea to get familiar with several, so even if you use one as your basic model, you can borrow from others to create a plan that works in your organization. While there are many good resources (the US CERT catalogs several), a few of particular interest are the Microsoft Security Development Lifecycle (SDL), the Software Security Framework from Building Security In (SSF), and the Software Assurance Maturity Model (OpenSAMM).

All of the models (including those I haven’t mentioned) have strong and weak areas, but I personally like these three a lot, and for different reasons (though they’re all reasonably similar, when push comes to shove). The Microsoft SDL has great documentation and lots of great tools and worksheets ready-built to help you get going. The SSF is grounded in real data (openly published – see BSIMM) and is extremely logical, simple, and clean. OpenSAMM is great because it shows very clearly how to extend and customize it for your environment. You get a great idea of the options available, and you can make an informed choice of what steps work for you.

It is actually pretty rare in our field that we have so many quality options that are open and available to solve a certain problem. In this case, we’re fortunate that we have options. Make sure that whatever route you choose, you follow it, and certainly improve over time. If you find a hole in your methodology, fill it in with a practice so it’s no longer an issue.

In conclusion, software security is a long and arduous journey. Building secure software is no easy feat, but it helps tremendously to have a plan. Fortunately, there are several quality options for SDLC frameworks that account for security. Read about the different models to get ideas before you start – some cover certain areas better than others. Build a model that works in your environment, and by all means, use it!

References
———–
https://buildsecurityin.us-cert.gov/bsi/articles/knowledge/sdlc/326-BSI.html
http://www.microsoft.com/security/sdl/
http://www.swsec.com/
http://www.opensamm.org/
http://bsimm.com/


Year Of Security for Java – Week 43 – Build Something (and Give It Away)


What is it and why should I care?
This will admittedly be a short post because it’s a pretty simple concept. Here’s the simple idea in bullet form:

– Developers are builders of software (and security systems and even documentation sometimes)
– There is a need for software & docs
– Developers build software & docs and contribute it to the community
– Developers help others in the community, give back, and build credibility

It’s admittedly sometimes a hard sell to ask someone to come home after work and do more of the same. It’s certainly not for everyone. However, we as developers certainly have a useful skill that can help others out if used properly. There are many drivers for why to do this, as well as benefits which I’ll list below. A couple personal reasons for me are that a) I can, and others can’t. It’s a good feeling to help others in need, and b) I was helped when I couldn’t. I received a lot from the community both in the way of software that I used as well as mentoring. It’s nice to give back in some way.

What should I do about it?
So, if you buy into my notion above, what are a few concrete things you can do? Here are just a couple of ideas.

1. Build (better) documentation for an existing tool.
Lots of great projects exist that aren’t being used or are chastised for being bad when they really just need documentation. Taking an existing tool and building some useful documentation (Spring framework and Twitter Bootstrap docs are great examples generally) can be a really helpful thing to the community. This is also a great way to get your feet wet with a project before jumping into coding.

2. Find some bugs in a project and patch them.
This is along the lines of my last post. Figure out how to break something, then do a responsible disclosure, and better yet – fix it! Developers will be grateful for the patch. If it’s a project you’re interested in working on, it’s a good way to show the developers you write good code, and to get involved in a helpful way.

3. Write an open-source security (or other) tool
Write a tool that scratches an itch either for you or the community at large. It’s a great use of your skill, you’re likely to learn a lot and you benefit others as well as help secure software. It’s tough to beat those benefits. You can use your own distribution channel or you can do something like make it an OWASP project if you want some good visibility and prefer to have it in a community ecosystem.

4. Do something with charity.
You can use your skills and donate them to charities. There are lots of things you can do in your community or around the world. Many find this to be a rewarding experience, and it can provide new opportunities for people and change communities for the better.

Projects like these also have a benefit that you build up a portfolio of work that you can share – a bit of a virtual resume. This is a concept that’s become pretty popular in tech circles in the last few years with the so-called “social coding” movement. While there are lots of business and likely financial rewards for doing some of the things listed above, there are also a lot of intangibles, and those are what I see as the real benefit.

References
———–
http://www.hackersforcharity.org/


Year Of Security for Java – Week 42 – Break Something


What is it and why should I care?
Breaking something (legally, of course) is one of the best ways to learn how it works. Software is no different. Breaking software is sometimes trivial and sometimes extremely complex, but either way is a great exercise. In particular for developers, it forces you out of the mindset of building, and gets you to think about how your software might break. It also brings you to a harsh reality of securing anything: the protector has to secure every avenue of attack, whereas an attacker only has to find a single path unprotected. The scales are a bit unfair in that respect.

In software testing, we generally look at blackbox or whitebox testing. Blackbox testing is testing externally, essentially focused on the inputs and outputs. Whitebox testing is testing internally where you know the internal properties of the software. Both are extremely useful, and I’ve discussed both in previous posts. However, for the sake of our discussion in breaking something, we’re usually talking about some form of blackbox testing.

What should I do about it?

What I generally recommend to people learning about a new type of attack or vulnerability is to do 3 things (in a loop):

1. Build it
Let’s say you want to learn about SQL injection. In order to “break it”, you have to have something to break. The first thing to do is go out and collect or build code samples that perform SQL queries. Some may be susceptible, and some may not, but you’ll need code to get started. If you’re talking about some other attack, or even an SQL injection attack through an application, you will need some type of running site. That means setting up local servers and applications, etc. There are lots of vulnerable applications available (à la WebGoat) for practicing your skills.
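As an illustration, a deliberately vulnerable query like the JDBC sketch below (the table and column names are made up) gives you something concrete to attack:

```java
// Deliberately vulnerable sample for practice ONLY -- never ship this.
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class VulnerableLogin {

    // Concatenating user input straight into the SQL string is the classic
    // SQLi bug: input like  ' OR '1'='1  changes the meaning of the query.
    public boolean isValidUser(Connection conn, String user) throws SQLException {
        String sql = "SELECT id FROM users WHERE username = '" + user + "'";
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            return rs.next();
        }
    }
}
```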

2. Break it
Once you have an environment, you can begin to try and break what you’ve created. Start simple and get a boost from exploiting something trivial. Work your way up to something more difficult. As you learn more techniques to exploit individual vulnerabilities (think of all the XSS options!), you’ll want to try some of them out. A tool with progressive “lessons” like webgoat can be great for this reason. It can be enlightening (and frightening) to see your application fail in so many spectacular ways.

3. Secure it
Lastly, you’ll want to work on building protections. It’s great to show something can be broken, but that’s not enough. Remediation can be very complex depending on the issue, but is always necessary. Also, you’ll find that many of your attempts at remediation are either lacking in completeness or affect functionality in other portions of the system or both. Building appropriately scoped controls is a difficult but necessary task. The good news here is that there is a lot of good information available to help with this, but the bad news is that it’s not in common developer guides. Most of the time, you have to go to a security-specific resource to get this information.

Here are a few extra thoughts on breaking software:

– Keep your test cases around
It turns out to be really helpful to have a collection of small sample code snippets that each express a single issue in a contained way. I end up using my rough collection of vulnerable code in lots of different, unexpected ways. There’s no reason to throw away work you’ve done anyhow. Also, it gives you that warm feeling of fright when you see code you wrote years ago.

– Sometimes breaking something is the only way to prove it can be broken.
This is true in a couple respects. First, sometimes the only way to prove to a developer that they’re writing insecure code is to show it breaking (but make sure you’re doing this legally :>). Second, while it is sad, this is often a helpful way to get non-technical folks on board with security. If they see something break, something just clicks for them. I’ve often seen this technique used to help justify security investment, though I’d argue that there are better alternatives for that.

– Try a bug bounty program and win!
Over the last few years, a number of organizations have come forward with bug bounty programs, essentially a financial incentive for finding bugs in their software and then being “nice” by reporting the bug to them and letting them fix it before exposing it to the world. This is an interesting concept for a lot of reasons, but for the person interested in learning to break something, it can be a great option since you can get paid if you find something.

In conclusion, breaking software can be a great way to learn how it works, and can be a crucial link in the effort to secure it. Go forth and break!


Year Of Security for Java – Week 41 – Spend (Wisely) on Developer Security Training


What is it and why should I care?
In the last post, I gave some justifications for getting security people into your organization, as well as reasons to have them closely knitted into your team. In this post, I’d like to move the attention to the developers already on your team.

Let’s say you’ve got a model where you at least have some security representation on your team, whether that be an enterprise group that consults periodically, or someone that spends 2 days a week helping write code, or anything in between. (I discussed various models of execution in the last post, and mentioned that your organizational structure will likely dictate the best model.) You have a security expert on your team, but that person probably won’t be full time on your project, unless it’s huge, and many times, won’t be contributing to your codebase at all. That means we need additional security knowledge embedded into the team.

What should I do about it?

You could go about this in lots of ways, but I think Jim Bird’s approach is fantastic. It argues for a scaled model where you have different developers with varying skill-levels in security, but all with at least a minimal understanding (basic training – think OWASP Top 10 / SANS Top 25 with a 1-2 times yearly refresh). This is exactly what we do with other concerns in coding, such as performance or scalability. We often have 1 or 2 experts, then a smattering of capabilities among the remainder of the team.

Everyone on the team should know that you shouldn’t concatenate request parameters from the user into a SQL query string, but maybe not everyone will understand the intricacies of DOM XSS encoding or the ins-and-outs of Content Security Policy (CSP). By ensuring the most common vulnerabilities and associated controls are well-understood by everyone on the team, you create an environment that generally produces more secure code. By having an expert or two on the team, you have resources who know about the latest and greatest protections and who also understand the implementation caveats for the basic protections.

The scaled training approach has several benefits:

(More) Secure Code
You can produce rather secure code if the general team has basic training, and there are 1 or 2 experts helping out with the difficult problems.

Cheaper than Training Everyone
As Jim points out, this model produces secure code in a much more scalable way than trying to make everyone an expert. Training developers and turning them into security experts is not a cheap proposition. The basics are usually pretty easy, but there are lots of gotchas to be found, and that takes lots of time and money. By focusing the majority of your money on fewer resources, you’re able to make it count for more.

Easier Hiring
Training developers to be security-aware is a requirement, but many will try to hire in that talent. It’s well-known that it’s quite tough to find security-knowledgeable developers. Needing fewer means you have a bigger talent pool to pull from, and most of your needs will come from the standard developer bucket, which is much more abundant.

Natural Path to Secure Frameworks
Setting up a model whereby a few people are experts in a single area (and others aren’t) creates a natural environment for encoding that knowledge into some system for the benefit of the larger group. In this case, that encoding is likely to involve creating a reusable security framework. Now you get the benefits of that knowledge and have it codified in an executable form. In addition, it’s simpler to “update” the knowledge store by adding features to the framework and fixing bugs over time. By hiding the gory details in a framework, you give the standard developer security capabilities that they wouldn’t otherwise have had, and at the same time increase your security posture.

In conclusion, developers in your organization should get trained on security. The majority should be familiar with the basics, while a few should be experts. By scaling your investment, you’re able to efficiently build more secure code and create an environment that fosters institutionalizing the security knowledge of your experts for the benefit of your full team.

References
———–
http://swreflections.blogspot.com/2012/05/building-security-into-development-team.html


Year Of Security for Java – Week 40 – Get a Security Person (or Some People) if You Can


What is it and why should I care?
I spend a good bit of time talking about both development and security. I spend a lot of time working with other developers and other security people. There are precious few people that I know of who excel at both development and security. This is a sentiment echoed by many, so I won’t spend time belaboring the point. If you can’t have everyone be an expert in both, how should you structure your team so you have the optimal blend of both? There is some usefulness in discussing the make-up of teams with regard to development and security, as it can heavily affect your security posture long-term.

What should I do about it?

Let’s consider a few different options when it comes to team make-up:

No security people
I thought about leaving this group out, but it’s so prevalent that I just couldn’t. Many small- and medium-sized organizations haven’t yet added security to their SDLC (another post in the coming weeks on this topic). This is tough. This will take a long time to resolve, and will require changes to developer education and training programs as well as general industry awareness. There’s a lot of work being done to get the information out there, but this will just take time.

Developers with some security training
This is a popular option. Ok, we need to do better on security – send one of the team members to a week’s training! This is better than nothing, but a pretty weak option. Unless the person is passionate about security and spends time coming up to speed on his/her own, you’re going to get little benefit. You may pick up a few of the obvious things, which is certainly helpful, but it does not usually improve your overall security stance. Additionally, this person is not going to have a mentor of any type for the security work they are doing, which can be important in the security field particularly.

Security people at the enterprise level
I think this is a great option for a lot of things, and should certainly be considered depending on the size of the enterprise. Security people at the enterprise level can do things that security people embedded in development organizations just can’t do. They can set high-level standards and policy. They can also build security strategies and architectures for development.

As an organization grows, it becomes more and more important to have consistency (assuming the standard is good) across the enterprise. There’s a lot of time and money being spent in just trying to figure out what organizations have deployed. It quickly becomes a nightmarish problem, particularly for organizations that have lots of legacy software.

Security people on the development team
Having security embedded in the development organization is also a great option for making impactful changes on application architecture, design and implementation. Producing standards at the enterprise is great, but useless if no one follows them. Also, having security folks deployed in the team helps tremendously with training as your “non-security-trained” developers get direct on the job training tailored to your organization. In addition, you have a built-in mentor to ask questions of if something comes up that’s security related. You can also catch issues earlier in the development cycle, since the security person can help do things like code review or design review with an eye for security.

There are a couple of models for this. One option is to have a security-minded person doing actual development, but also security work, essentially splitting time and focus. Alternatively, you have a security person who round-robins between a few dev teams and functions as a kind of internal consultant. I’ve seen both of these models work, and it often comes down to organizational culture as to which one is a better fit.

A good article related to this (and with REAL data!) is from David Rook (@securityninja) and is found in the references below. In it, David says that their company embeds security people in with the development team. They do code reviews as well as other security related activities. He has tracked their data over time, and has found that 1 security person to 10 developers is a ratio that works well in their organization. Compared to current standards, that’s a LOT. According to BSIMM, the average ratio is 1.95%. That means on average (for the companies that participated in BSIMM), there are roughly 2 security folks for every 100 developers. That includes people who sit at the enterprise level as well as those directly involved with security in development and architecture teams.

Security people outside the organization
A final option for consideration is the “security consultant”. This can come in lots of forms. It could be paying people to come in and build your code for you in a secure way. It could be someone coming in and reviewing/testing the code you wrote for security. It could be purchasing or using tools/services.

Using outside consultants is often a business decision in many fields. Is it cheaper for us to develop this talent internally or outsource it? However, that’s often not an option in security, though it’s getting closer. At the current moment, a lot of the “security” people are at outside consultancies. There are clearly domains (financials, government, etc.) where there is a lot of security knowledge, but many verticals just don’t have the internal knowledge.

Using and consuming external security knowledge can be a great idea, but IMHO, shouldn’t come at the cost of building at least some of that talent internally. By creating that skill-set internal to your organization, you can tailor your strategy to your organization, a powerful concept.

In conclusion, if you’re developing or deploying software, you should be building security into your process, and that means getting good security people on board. Security talent can come from internal and/or external resources. Considering your organizational model and embedding security in the appropriate places can greatly improve your overall security posture.

References
———–
http://www.securityninja.co.uk/application-security/application-security-data/
http://bsimm.com/facts/


Year Of Security for Java – Week 39 – Don’t Reinvent the Wheel (Unless It’s Square)


What is it and why should I care?

This is a follow-up to my last post, with a bit of a different viewpoint. In that post, I specifically looked at code reuse from the perspective of creating an internal framework to centralize code related to security functionality.

This week, I want to consider security a little more generally. Code is not the only thing that “security” produces. There are processes, documentation, code, people, etc. Re-inventing each of those per-application or even per-organization is a huge waste of time and money. In addition, for the same reasons I mentioned regarding code reuse, each individual incarnation (say, by a given company) of a given mechanism is likely to be of lower quality than if multiple organizations pooled their knowledge and/or resources to produce a common template.

By looking for areas of commonality and building tools, processes and documentation to meet those needs, everyone benefits.

What should I do about it?

Essentially, don’t reinvent the wheel.

– If you’re looking to teach your developers (and/or yourself) how to prevent XSS, don’t start writing your own doc; look at resources that already exist, like the OWASP cheat sheets. The OWASP material is really good on XSS. In some other areas, OWASP’s documentation is weak, but you can often find other good resources, and take a best-of-breed approach.

– If you’re building out a plan for how to add security to the SDLC in your organization, don’t try to roll your own. At least evaluate existing models and see if they fit your needs. Chances are they cover most if not all of what you need. Additionally, they probably have thought of things you haven’t considered yet, particularly if it’s an established methodology.

– If you’re considering which, of all the important security issues, you should cover first in your application, look at existing analysis. Consider breach reports and what’s being exploited. Use the data that’s available, and don’t just start with what you *think* is true. Measurement of the data will often quickly change your opinion.

Here are a few concrete steps to prepare you for working in this way:

1. Assume you’re not the only smart person around
Let’s go out on a limb and assume there may have been others that have come before you that have had an intelligent thought or two. It’s OK (excepting certain license/copyright issues) to reuse what others have done. It’s OK if it’s not all done in-house. Reuse what you can from others, and then use your time to make improvements instead of starting from scratch.

2. Read, read, read
In order to know what others have done, it’s really beneficial to be knowledgeable about what’s current in your field. This obviously applies to many fields, but technology and security change quickly, so you must stay current.

3. When you have a need, do the appropriate research
When you have a specific need, you should have a general idea of what’s available (if you followed step 2 above) to meet that need. However, you’ll probably still need to do a bit of directed research to get the detailed information and see what will fill the current requirement you have.

4. Tailor the mechanism to your environment
While it’s great that there is a lot available for reuse if you look for it, there are often good reasons to tailor the solution to your specific environment. Oftentimes, you may need to change terminology, or you might have a simpler or more complex model to work from. You might have different threats, or you may have solutions that prevent entire classes of issues across the board. Tailoring a solution makes it specific to your organization and can add a lot of value.

5. Fix things that are broken – sometimes the wheel IS actually square
Sometimes what’s out there is bad … sometimes it’s really bad. When that’s the case, you’ll have to start from scratch. You may be able to glean some things from what’s out there, but you may end up needing to do some or all of the work. This is obviously not the greatest option for most situations, but is sometimes necessary. If this happens, try to build something that is as reusable as possible – you never know how others might use your work.

6. Contribute back to the community
Whether you have something you tailored, or if you had to build something from scratch, you can often help others by sharing your work with the community. Obviously, this is not always possible, but when it is, it can be a boon to you and the community. If you put something out that gets used, then you could be a) boosting your career prospects, b) creating a standard tool/process/document, and c) helping others by sharing your insights.

7. Build a custom holistic plan
The real benefit to an organization of the “thinking” of its security people is that they can evaluate the environment that the organization operates in, and build a working model that custom-fits the organization using best-of-breed individual solutions. If you don’t need something, throw it out. If you have a gap, fill it. That’s tremendous value-add to the organization. By knowing both the general landscape as well as your specific environment, you can build a tailor-made plan that evolves over time and is best suited to secure your organization. This plan should be holistic in nature, and will certainly fill out over time, but you should begin your planning with your end-goal in mind.

In conclusion, there are lots of existing resources at the disposal of the security practitioner. If there’s no resource to meet your specific need, you can often tailor something that already exists, or in some cases, you may have to build one from scratch. Either way, sharing your work with the community can help us all move forward.


Year Of Security for Java – Week 38 – Create A Reusable Security Framework


What is it and why should I care?
Software reuse is a ubiquitous practice in software development. One study says that “80% of the code in today’s applications comes from libraries and frameworks”. That’s a lot. There is already a lot of research about software reuse and its benefits.

While the research exists, there’s no need to read it to get the basic answer, as any developer will tell you it’s imperative to create and use reusable code in order to save time and increase quality. We don’t have time to re-write every basic function every time – and if you do that, you also won’t get the benefit of making a single function/feature more robust over time by sending it through all the rigorous quality measures that well-evaluated reusable code goes through.

Given that reusable code is a must for software development in general, it follows that it would make sense to use it for security-sensitive code as well, right? While that logic seems sound, I’ve seen too much code to believe that it’s happening regularly in many organizations. Usually, at best, what you end up seeing is almost every project has a class named something like “SecurityUtils”, and it will have a couple “filterXYZ” or “escape/encodeXYZ” methods in it. That is the extent of software reuse for security for many, many applications.

Let’s be clear: Fixing XSS is hard. Performing proper access control across your site is hard. Performing appropriate input validation across your application is hard. These things require people, process and technology to be used to solve them properly, and even then, it’s still going to be a difficult process. Security is a difficult thing to get right (just like other aspects of software), so we may as well take any help we can get.

What should I do about it?
In order to help with some of these hard problems, we can leverage reusable frameworks for security. The idea is simple: those controls you put in place to protect your application, say from CSRF (like CSRFGuard), are likely useful for other applications in your organization. Of course, other teams might have a better solution for a different problem, such as a good context-aware encoding solution for XSS (like JXT). If you pool your resources across developers on your team, organization, or even the world (open source software), you get a lot of reuse of code. Additionally, the code is more likely to be good since, with many people seeing and using it, bad code should get weeded out more quickly. If you’re applying your other good processes, such as code review and testing, the quality continues to go up.
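As a trivial sketch of what the seed of such a framework might look like, here’s a shared facade – the class name and the validation rule are made up, and the encoding simply delegates to the OWASP Java Encoder, which is assumed to be on the classpath:

```java
import java.util.regex.Pattern;

import org.owasp.encoder.Encode;

// A hypothetical shared entry point teams reuse instead of each project
// growing its own one-off "SecurityUtils" class.
public final class SecurityFacade {

    // Whitelist rule for usernames; purely an illustrative policy.
    private static final Pattern USERNAME = Pattern.compile("^[a-zA-Z0-9_]{1,32}$");

    private SecurityFacade() {}

    // Whitelist validation: accept only known-good input.
    public static boolean isValidUsername(String input) {
        return input != null && USERNAME.matcher(input).matches();
    }

    // Contextual output encoding for an HTML body context, delegating to
    // a vetted library rather than hand-rolled escaping.
    public static String forHtml(String untrusted) {
        return Encode.forHtml(untrusted);
    }
}
```

Because everyone calls the same vetted entry points, a bug fix or hardening improvement made here benefits every application at once.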

I do want to make a special note at this point. I’ve heard lots of folks (including my younger self) argue that we need more secure coding examples, i.e., snippets. While I think there is certainly value in this approach (especially for those writing the reusable frameworks), I’ve come to believe that this in and of itself is not particularly useful. A developer will use your framework if it’s useful, provides value to him/her, and doesn’t get in the way (not buggy or too difficult to use). A developer will, in general, NOT come and grab your snippet of code for doing XYZ. Snippets can work in very constrained environments and use cases, but I think the general approach should be towards the framework model.

At this point, I’d be remiss if I didn’t mention the excellent ESAPI library. It has about 10 useful controls that cover various parts of security across your applications. It’s in the process of being re-architected to make it more modern, but the idea that there is a security library that offers a holistic approach to security is awesome. If you’re not using ESAPI, give it a try or at least a look. If you’re responsible for building a security related framework, start with ESAPI or at least look at it for inspiration. It will certainly provide value.

In conclusion, software reuse is an old tried and true concept. There are lots of benefits to be realized from it. It’s common for standard application development frameworks, but there aren’t a lot of security libraries out there. Reuse applied to security makes sense just like it does for other features. Building a framework with a comprehensive approach to security is a big win for both development and security, so give it a try.


Year Of Security for Java – Week 37 – Solve Cross-Site Scripting


What is it and why should I care?
Cross-Site Scripting (XSS) is another issue that is caused because of poor code/data separation. The general issue is that a developer intends the user input to be interpreted as data, but an attacker can manipulate the input to cause the browser to interpret the input as tags or commands.

XSS is exceedingly popular and well-known, and along with SQLi, is probably 1 of the 2 best-known vulnerabilities in web app security.

What should I do about it?
Note: One great resource which I drew from for both this post as well as the more in-depth post is the XSS Prevention Cheat Sheet over at OWASP.

I’ve already written here on the problems with and solutions for XSS. However, I’ll cover the basic solutions briefly here for clarity’s sake.

1. Canonicalize Input
Get your input data into its simplest base form in preparation for validation, so that you’re more confident your validation routines aren’t being circumvented.

2. Validate Input Using a Whitelist
Whitelist validation (accept only known good data) is a key tenet of good security. Check the content of the data, the length, data type, etc.

3. Contextual Output Encoding/Escaping
Output escaping is the last step, and must be done for the appropriate context. You must understand where you’re sticking data, and how that location is interpreted from the browser’s view. This can be a sticky issue, so this step can get a bit problematic. A short sketch of all three steps together follows.
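Here’s a minimal sketch of the three steps in sequence, assuming a configured ESAPI installation (ESAPI is discussed elsewhere in this series); the allowed-character pattern and length limit are illustrative assumptions only:

```java
import java.util.regex.Pattern;

import org.owasp.esapi.ESAPI;

public class CommentRenderer {

    // Whitelist: letters, digits, whitespace, and basic punctuation.
    private static final Pattern COMMENT =
        Pattern.compile("^[\\p{Alnum}\\s.,!?'-]{1,500}$");

    public String renderComment(String rawInput) {
        // 1. Canonicalize: reduce encoding tricks to a simple base form.
        String canonical = ESAPI.encoder().canonicalize(rawInput);

        // 2. Validate against the whitelist (known good only).
        if (!COMMENT.matcher(canonical).matches()) {
            throw new IllegalArgumentException("invalid comment");
        }

        // 3. Contextually encode for the HTML body context at output time.
        return ESAPI.encoder().encodeForHTML(canonical);
    }
}
```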

Now that we’ve covered the basic technology solution for XSS, let’s consider the larger context of eradicating entire classes of vulnerabilities from our codebase as I discussed a couple weeks ago. Let’s look at each specific subpoint I mentioned in that post and consider what you might do regarding XSS.

1. Understand the problem.
I’d read available documentation on XSS to make sure I understood the issue as well as I *think* I do. The cheat sheets at OWASP are good for basic theory, but there are books that talk about the issue in more detail, and there are additionally books that discuss how browsers work, which can be helpful for understanding. XSS is actually one of the more complex vulnerabilities around, and it is still changing because it’s so closely tied to browser feature sets, which are always expanding. In addition, JavaScript frameworks of significant capability have had a huge impact on web applications in the last 5-7 years, thereby compounding the problem.

2. Consider all use cases where the issue can occur.
I would think about all the different types of XSS interactions that occur in my application portfolio. How many languages am I developing in or supporting? What frameworks am I developing with? Do I need to allow users to enter HTML that is then displayed (JSoup / AntiSamy problem space)? Do I have a constrained browser environment (or do I still need to support IE 6)?

3. Evaluate solutions.
I would look at the XSS cheat sheet from OWASP (and any associated weaknesses with the approaches that it espouses). Once I understood the recommended solution(s), I would consider whether said solution(s) would work in the environment in which I’m working. For XSS, there are really not a lot of alternatives to the basic solution. However, there are differing implementations of it. For example, there are solutions that require more effort to solve the issue but leave maximum flexibility (plain ol’ JSPs), there are frameworks that default to solving the most common version of the problem (Struts, Spring MVC, etc. – all default to basic HTML entity encoding), all the way up to templating solutions that constrain flexibility but offer a safer-by-default scenario (JXT). Consider also solutions for special use cases, such as the requirement to allow certain constrained amounts of HTML along with plain data, and how you might solve that class of issue (JSoup / AntiSamy) – a small sketch of that case follows.
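For that constrained-HTML use case, a jsoup-based sanitizer might look like the sketch below; the choice of Whitelist.basic() is just an example policy, not a recommendation for any particular application:

```java
import org.jsoup.Jsoup;
import org.jsoup.safety.Whitelist;

public class HtmlSanitizer {

    // Keeps simple formatting tags (b, i, a, etc.) and strips scripts,
    // event handlers, and anything else outside the whitelist.
    public String sanitize(String untrustedHtml) {
        return Jsoup.clean(untrustedHtml, Whitelist.basic());
    }
}
```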

4. Institutionalize the chosen solution.
After deciding on your collective solution, you have to work with development teams to determine the current status of vulnerability to XSS (beware: it’s probably gonna be pretty bad). This step is a good place to (re)train developers on this specific topic from a security perspective, since they’ll be implementing the solution. You then just have to do the work of implementing the chosen framework(s), making necessary modifications to the source, and re-testing applications to make sure everything still works as expected.

5. Add technology and processes for verification.
At this point, I would add a few tools in to make sure the technology is being used. I would make sure that there were static and dynamic analysis tools in place to check for XSS, though tools vary in usefulness here and coverage can sometimes be spotty. I would also make sure to code review UIs to see where dynamic data is emitted. I might go so far as to write custom rules for my static toolset that “trusts” my safe output functions to reduce any false positives. Lastly, I would specifically look for “known bad data”. Using something like AppSensor or a WAF is a way to automate some of that process.

In conclusion, hopefully it’s clear that solving XSS is a manageable task – it just takes some focused effort on research, training, implementation and verification. The pattern I’ve described for both SQLi and XSS is useful to follow for other vulnerability classes in order to really build a robust environment and suite of applications.

References
———–
https://www.owasp.org/index.php/XSS
https://www.owasp.org/index.php/XSS_%28Cross_Site_Scripting%29_Prevention_Cheat_Sheet
https://www.owasp.org/index.php/DOM_based_XSS_Prevention_Cheat_Sheet
https://www.owasp.org/index.php/Abridged_XSS_Prevention_Cheat_Sheet


Year Of Security for Java – Week 36 – Solve SQL Injection


What is it and why should I care?
SQL Injection (SQLi) is an issue that is caused because of poor code/data separation. The general issue is that a developer intends the user input to be interpreted as data, but an attacker can manipulate the input to cause the database to interpret the input as commands.

There have been a lot of devastating attacks recently using SQLi. You don’t have to look very far to see the amount of damage it’s caused, involving great financial, political and reputational impact. You probably won’t have a tough time selling the C-Level exec on the idea that SQLi protection is important.

What should I do about it?
Note: One great resource which I drew from for both this post as well as the more in-depth post is the SQL Injection Prevention Cheat Sheet over at OWASP.

I’ve already written pretty extensively here on the basic problems with and solutions for SQLi. However, I’ll cover the solutions briefly here for clarity’s sake.

1. Use Parameterized Queries (Note that in Java, parameterized queries = prepared statements)
Parameterized queries are a great solution because they’re fairly simple to learn, are ubiquitous, and don’t require your developers to learn another API – it’s just SQL. (One nice additional benefit: it will make your security guy happy.) See the sketch after this list.

2. Stored Procedures
Stored procedures are a good option in some environments where you want to have the query management done separately from the code, and are often helpful for performance if properly tuned.

3. Output Encoding/Escaping
Output Escaping is more of a last resort. It can be properly done, but it’s usually more work than the other 2 options, requires more customization, and is more error-prone for most developers. If you go this route, it’s definitely advisable to use a common security framework such as ESAPI so that your controls are consistent across all your applications.
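Here’s a minimal sketch of option 1 using a JDBC PreparedStatement (the table and column names are made up) – compare it with the deliberately vulnerable concatenation example from the Week 42 post:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class UserLookup {

    public boolean isValidUser(Connection conn, String user) throws SQLException {
        // The ? placeholder keeps user input as data; the driver never
        // lets it alter the structure of the SQL statement.
        String sql = "SELECT id FROM users WHERE username = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, user);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();
            }
        }
    }
}
```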

Now that we’ve covered the basic technology solution for SQLi in Java, let’s consider the larger context of eradicating entire classes of vulnerabilities from our codebase as I discussed last week. Let’s look at each specific sub-point I mentioned in that post and consider what you might do regarding SQLi.

1. Understand the problem.
I’d read available documentation on SQLi to make sure I understood the issue as well as I *think* I do. I’d also go read about SQLi for the database platform(s) that is used in my environment. I’d look for any specific weaknesses in implementation that I need to be aware of.

2. Consider all use cases where the issue can occur.
I would think about all the different types of database interactions that occur in my application portfolio. How many languages am I developing in or supporting (Java, C#, Cobol)? What platforms am I developing for (web, services, desktop, mobile, mainframe, html5)? What interaction paradigms are they using (OLAP, OLTP, Warehousing, Batch)?

3. Evaluate solutions.
I would look at the SQLi cheat sheet from OWASP (and any associated weaknesses with the approaches that it espouses). Once I understood the recommended solution(s), I would consider whether said solution(s) would work in the environment in which I’m working. I would focus on using the best solution possible for everyone who can use it, then consider exceptions if it’s not supported by a given interaction paradigm based on business need.

4. Institutionalize the chosen solution.
After deciding on a solution (say parameterized queries), you have to work with development teams to determine the current status of database interactions (are they using dynamic sql now or are they already on parameterized queries?). This step is a good place to (re)train developers on this specific topic from a security perspective, since they’ll be implementing the solution. You then just have to do the work of fixing queries, and re-testing applications to make sure everything still works as expected. Note: In addition to solving SQLi from the “code” perspective, there are also additional steps you can implement here that increase assurance such as lowering the privileges/access of users so they can’t cause certain types of harm even if they were to be malicious.

5. Add technology and processes for verification.
At this point, I would add a few tools in to make sure the technology is being used. I would make sure that there were static and dynamic analysis tools in place to check for SQLi. I would also make sure to code review closely any queries made by the application. I might go so far as to write custom rules for my static toolset that “trusts” my safe output functions to reduce any false positives. Lastly, I would monitor logs for suspicious activity. Using something like AppSensor is a way to automate some of that process.

Hopefully the approach I described makes sense to you. There are certainly additional steps you can take, but this should be a good start. The idea is that you don’t just go and squash a few bugs. The hope is that you do solve the immediate problem, but make it impossible (or as close as you can get) for the problem to resurface.

References
———–
https://www.jtmelton.com/2009/12/01/the-owasp-top-ten-and-esapi-part-3-injection-flaws/
https://www.owasp.org/index.php/SQL_Injection
https://www.owasp.org/index.php/SQL_Injection_Prevention_Cheat_Sheet
