John Melton's Weblog
Java, Security and Technology

Year Of Security for Java – Week 40 – Get a Security Person (or Some People) if You Can


What is it and why should I care?
I spend a good bit of time talking about both development and security. I spend a lot of time working with other developers and other security people. There are precious few people I know of who excel at both development and security. This is a sentiment echoed by many, so I won’t spend time belaboring the point. If you can’t have everyone be an expert in both, how should you structure your team so you have the optimal blend of both? There is real value in discussing the make-up of teams with regard to development and security, as it can heavily affect your security posture long-term.

What should I do about it?

Let’s consider a few different options when it comes to team make-up:

No security people
I thought about leaving this group out, but it’s so prevalent that I just couldn’t. Many small and medium sized organizations haven’t yet added security to their SDLC (another post in the coming weeks on this topic). This is tough. This will take a long time to resolve, and will require changes to developer education and training programs as well as general industry awareness. There’s a lot of work being done to get the information out there, but this will just take time.

Developers with some security training
This is a popular option. Ok, we need to do better on security – send one of the team members to a week of training! This is better than nothing, but it’s a pretty weak option. Unless the person is passionate about security and spends time coming up to speed on his/her own, you’re going to get little benefit. You may pick up a few of the obvious things, which is certainly helpful, but it does not usually improve your overall security stance. Additionally, this person is not going to have a mentor of any kind for the security work they are doing, and mentorship is particularly important in the security field.

Security people at the enterprise level
I think this is a great option for a lot of things, and should certainly be considered depending on the size of the enterprise. Security people at the enterprise level can do things that security people embedded in development organizations just can’t do. They can set high-level standards and policy. They can also build security strategies and architectures for development.

As an organization grows, it becomes more and more important to have consistency (assuming the standard is good) across the enterprise. There’s a lot of time and money being spent in just trying to figure out what organizations have deployed. It quickly becomes a nightmarish problem, particularly for organizations that have lots of legacy software.

Security people on the development team
Having security embedded in the development organization is also a great option for making impactful changes to application architecture, design and implementation. Producing standards at the enterprise level is great, but useless if no one follows them. Also, having security folks deployed on the team helps tremendously with training, as your “non-security-trained” developers get direct on-the-job training tailored to your organization. In addition, you have a built-in mentor to ask questions of when something security-related comes up. You can also catch issues earlier in the development cycle, since the security person can help with things like code review or design review with an eye for security.

There are a couple of models for this. One option is to have a security-minded person doing actual development, but also security work, essentially splitting time and focus. Alternatively, you can have a security person who round-robins across a few dev teams and functions as a kind of internal consultant. I’ve seen both of these models work, and it often comes down to organizational culture as to which one is a better fit.

A good article related to this (and with REAL data!) is from David Rook (@securityninja) and is found here. In it, David says that their company embeds security people with the development teams. They do code reviews as well as other security-related activities. He has tracked their data over time, and has found that 1 security person to 10 developers is a ratio that works well in their organization. Compared to current standards, that’s a LOT. According to BSIMM, the ratio is 1.95%: on average (for the companies that participated in BSIMM), there are roughly 2 security folks for every 100 developers – about 1 to 50, versus David’s 1 to 10. That includes people who sit at the enterprise level as well as those directly involved in security on development and architecture teams.

Security people outside the organization
A final option for consideration is the “security consultant”. This can come in lots of forms. It could be paying people to come in and build your code for you in a secure way. It could be someone coming in and reviewing/testing the code you wrote for security. It could be purchasing or using tools/services.

Using outside consultants is a straightforward business decision in many fields: is it cheaper for us to develop this talent internally or to outsource it? In security, however, developing the talent internally is often not an option, though it’s getting closer. At the current moment, a lot of the “security” people are at outside consultancies. There are clearly domains (financials, government, etc.) where there is a lot of security knowledge, but many verticals just don’t have the internal knowledge.

Using and consuming external security knowledge can be a great idea, but IMHO, shouldn’t come at the cost of building at least some of that talent internally. By creating that skill-set internal to your organization, you can tailor your strategy to your organization, a powerful concept.

In conclusion, if you’re developing or deploying software, you should be building security into your process, and that means getting good security people on board. Security talent can come from internal and/or external resources. Considering your organizational model and embedding security in the appropriate places can greatly improve your overall security posture.



Year Of Security for Java – Week 35 – Solve Security Problems One at a Time


What is it and why should I care?
This article (and several of those remaining in the series) is not so much technical in nature, but rather deals more with processes related to security problem solving.

It’s a fact of life in most development and/or security shops that there are those fire-drill days, and for security practitioners that’s often due to the “we have a security problem and the sky is falling … fix it” mentality. This course of action, however, doesn’t lend itself to fixing things properly (root-cause analysis), and certainly doesn’t allow for the methodical eradication of entire classes of vulnerabilities.

That is a problem.

In order to make a dent in the security problems plaguing the Internet, we cannot simply solve problems as they come up (referring only to known attack vectors – certainly we don’t know what we don’t know). We have to get ahead of them. We can’t fix problems when they get here – at that point it’s too late. This brings to mind several ideas, like building security in from the start, looking at what others have done to solve their problems, and using their good ideas in our own processes.

However, one issue I don’t see addressed much in the security realm (though it does come up) is the idea that we’re trying to tackle too many problems at once. Make no mistake, there are a lot of issues, and they all seem important, but there’s generally a prioritized order for most situations – some type of risk ranking. If you get the appropriate stakeholders in the room, and make the possible security issues clear, some issues will be clearly more important or impactful than others.

If we work from the assumption that we have ranked our problems in priority order, why should we be haphazard in our approach to their resolution? We absolutely shouldn’t. Security tooling has approached this with the familiar red-yellow-green solution in many cases, which is fairly helpful if you can inform the tool what constitutes red-yellow-green in your environment. However, this otherwise helpful approach misses the point that solving many security problems requires an architectural solution.

Let’s consider two examples from a different problem set – performance.

1. One problem might be a particular method that’s reasonably slow and gets executed many, many times. In this case, you’d probably just go in, rewrite the method using some optimizations specific to the method, and be done with it. Instant performance increase, and very little fuss.

2. Another problem might be queries taking too long across the application. If we assume a relational data access layer, there could be lots of solutions. You might scale the database hardware somehow, swap out the database vendor, add caching either internal or external to the application, tune queries, or a handful of other things. The point is that many of these “fixes” involve significant software and/or hardware architectural changes, and you wouldn’t think of making a decision on those nearly so quickly.

Some security issues (e.g. session fixation) are pretty simple fixes: you make them in one place and you’re done. Others (SQL injection, XSS, etc.) are certainly more complex and are generally best solved with architectural changes.

What should I do about it?

Hopefully I’ve convinced you that solving security problems fire-drill style is a bad idea and that many require a more rigorous approach, so how do we solve them correctly?

My recommendation to developers is that you approach them individually (with the caveat that you need to fix the easy/terrible ones first to knock out the true fire-drills). This means that you pick your biggest problem (calculated by some risk-rating methodology) and try to a) eradicate the issue from your codebase(s) and b) make it as close to impossible as you can for it to ever happen again.

That can be daunting, but here are a few recommendations to get that process started.

1. Understand the problem.
Don’t ever try to tell anyone how to solve a problem you don’t understand yourself. You usually don’t actually improve anything and you look foolish. This is a common problem in security, so enough said here.

2. Consider all use cases where the issue can occur.
Figure out the ways that developers can cause the issue, as well as any ways they aren’t using yet but soon will be. This gives you the breadth of functionality that a possible solution has to at least consider, if not account for. The goal is that you don’t give developers an excuse to go around your solution because “we need this feature”.

3. Evaluate solutions.
This is certainly a broad topic with lots of possible tasks, but there are a few obvious ones.
– Distill the known “secure” approaches and their associated tradeoffs
– Look for known attacks against those approaches
– Decide on a single or hybrid solution (most of the time, building your own is the wrong idea)
– Try to find a good implementation that matches your chosen solution
– Follow the guidance to implement the solution properly

4. Institutionalize the chosen solution.
Once you have a chosen solution for your problem and a working implementation, you now need to make sure that is the solution that actually gets used. One approach that seems to work pretty well is the ESAPI model. Here, you build a set of controls specific to your organization that function as the “approved” solution for a given problem area. You also build appropriate documentation showing developers how to use it properly. This brings in all the benefits of code reuse, as well as the consistent application of security controls.
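As a hedged illustration of this model, here is what one tiny “approved” control might look like: an organization-specific facade for HTML output encoding. The class name, the “Acme” organization, and the hand-rolled escape logic are all made up for the example – in practice you would wrap a vetted library (such as ESAPI’s encoders) behind the facade rather than writing the escaping yourself.

```java
// Sketch of an organization's single "approved" solution for one
// problem area (XSS via HTML output). Teams call this facade instead
// of choosing encoders ad hoc; the implementation is simplified here.
public final class AcmeSecurity {

    private AcmeSecurity() {}

    // The one approved way to put untrusted data into HTML at "Acme".
    public static String encodeForHtml(String untrusted) {
        if (untrusted == null) return "";
        StringBuilder sb = new StringBuilder(untrusted.length());
        for (char c : untrusted.toCharArray()) {
            switch (c) {
                case '<' -> sb.append("&lt;");
                case '>' -> sb.append("&gt;");
                case '&' -> sb.append("&amp;");
                case '"' -> sb.append("&quot;");
                case '\'' -> sb.append("&#x27;");
                default -> sb.append(c);
            }
        }
        return sb.toString();
    }
}
```

The payoff of the facade is exactly what the ESAPI model promises: code reuse, one place to fix or upgrade the control, and a consistent target for documentation and review.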

5. Add technology and processes for verification.
This is an important step that is often not done. After you’ve considered the problem, come up with a solution, and got people to use it, you need to make sure they keep using it. Again, this could mean a lot of things, but here are a few ideas to get you going:
– Get (or build) a tool that not only allows you to check if you’re not doing something wrong, but that you are doing something right. This is probably going to be custom, but it’s very cool to be able to see everywhere you’re sending queries to a database that DON’T go through these 10 “approved” methods. That’s a much more manageable problem.
– Add people and processes to cover areas where tools don’t work. At the moment, software can’t catch all of these things, but humans can if they have the time. By only requiring humans to step in and evaluate those areas that technology can’t deal with, you cut down on the time requirement, and give folks a chance to focus on those human-only tasks where they’re actually needed.
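To make the first idea concrete, here is a toy sketch of such a custom check in Java. The “approved” class list and the raw string matching are simplifying assumptions – a real tool would inspect the parsed code, not text – but the shape of the check is the same: find every file that talks to the database without going through the sanctioned methods.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Toy verification tool: report source files that use JDBC directly
// instead of going through the sanctioned data-access classes.
public class QueryUsageScanner {

    // Hypothetical list of classes allowed to touch the database.
    static final Set<String> APPROVED = Set.of("ApprovedDao.java");

    // The core check, on source text so it is easy to test. A real
    // tool would resolve imports/AST instead of matching strings.
    static boolean usesJdbcDirectly(String source) {
        return source.contains("java.sql.") || source.contains("createStatement");
    }

    // Walk a source tree and flag every non-approved file that
    // bypasses the approved data-access layer.
    public static List<String> findViolations(Path sourceRoot) throws IOException {
        try (Stream<Path> files = Files.walk(sourceRoot)) {
            return files
                .filter(p -> p.toString().endsWith(".java"))
                .filter(p -> !APPROVED.contains(p.getFileName().toString()))
                .filter(p -> {
                    try {
                        return usesJdbcDirectly(Files.readString(p));
                    } catch (IOException e) {
                        return false; // unreadable files are skipped
                    }
                })
                .map(Path::toString)
                .collect(Collectors.toList());
        }
    }
}
```

Run against the codebase in CI, this turns “are we doing it right everywhere?” into a short, reviewable list of violations – the much more manageable problem described above.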

In conclusion, there are lots of security problems to be solved, and not enough time or people to solve them. However, if we prioritize our problems and then deal with each one thoroughly, we can consistently create significantly more secure applications.



Year Of Security for Java – Week 28 – Unit Test


What is it and why should I care?
Unit testing is the term generally associated with the process of writing code specifically purposed for testing your application functionality. You write test code to run your functional application code and verify the results.

Note: Unit testing is actually a specific subset of this idea focused on the individual unit (generally recognized as a single class and/or method). Other testing techniques (integration, end-to-end, functional) are also extremely valuable, but unit testing is the most well-known and most of the concepts are transferable.

So, what does unit testing provide to me and why should I want to use it? Let me try to address that in two parts.

1. Development
Since unit testing is fundamentally an activity that goes on during the development of software, let me first consider what developers use it for. Considering the proper use of unit testing (I’ll address improper use below), it provides you primarily with confidence and quality (which are arguably the same thing in my constrained definition here). I’ve had discussions with people who say it offers a lot more, and certainly you can argue that point, but for me all of the myriad reasons come back to confidence and quality. You’ve exercised the code enough (and you continually do so) that you feel confident that your code works the way you expected it to work. There’s a lot that goes into this ideal world where you have good tests, but it can be practically done, and it makes an immense difference in the quality of code that you produce.

2. Security
So, unit testing inside the development process produces confidence and quality. What do we get from it when applied to security? … The same thing. Security is no different from other code from the perspective of correctness, though it is arguably less well understood in many organizations. At the end of the day, though, we should treat security requirements with the same rigor we would treat performance, functionality, or any other requirement. We should specify what we expect, build to that, test to that, and produce that. Unit testing is a fantastic tool that can be applied to produce confidence and quality in the implementation of security in an application.

What should I do about it?
“You should stop everything you’re doing on the app you’re currently working on and go back and make sure you have at least 80% code coverage with your tests.” I put that in quotes because, sadly, I had a previous job (long ago) where I was informed that this was exactly what I needed to do. I thought they were joking at first, but I quickly found out that this was their real requirement. No matter that the number was seemingly arbitrary, or that the tests they gave me as “good examples of what we’re looking for” didn’t actually examine the output of the code under test, but rather just ran it (READ: not real tests). No, testing was all about hitting a number, and saying you did it.

If that’s the way you look at testing, you get no value from it, and it actually costs you quite a bit in development, configuration and execution.

Ok, so if not that, then what?

Getting Started
Here are a couple ideas I’d recommend when first starting out on the path to good unit testing.

1. Read a good book or article on unit testing and the concepts and ideas. Kent Beck in particular has produced excellent resources on the topic.

2. Read somebody else’s (good) test code. I always recommend this when folks are getting started. I usually suggest something like the Spring framework, which has some of the highest quality code around, as a starting point for good testing ideas. Look at a class they are testing and the tests they wrote for it. This will give you a good idea of what’s going on.

Once you’ve gotten through the initial learning phase regarding real unit testing, here are a few more ideas that I’ve seen work pretty well. Certainly add to or remove from this list to make it work in your environment, but at least consider the concepts. (Also, here is someone else’s list if you’d like some additional ideas)

Good Ideas

1. Build a regression suite
A huge advantage of writing another full set of code to test your functional code is that the code stays around (which can also hurt maintenance at times, but let’s focus on the positive here). That means you have an incredible regression test suite. This is one of the most powerful things about unit testing. You’ve written some function, and a few tests to cover it. You modify the function to add some new functionality, and now one of your tests breaks. This happens constantly, and it’s a strong safety net. It’s one of those things I had to learn the hard way that I needed.

2. Write good tests
Writing a test that exercises a method is trivial – call the method with necessary parameters … done. This is the cheap (and useless) way to do testing. What you should do (and you’ll learn this in the book) is write positive and negative tests for the method. Think about nulls. Think about boundary conditions for your parameters. What happens if I send a negative number in or what if I send Integer.MAX_VALUE? Inspect the return values from functions and make sure they align with your expectations.

This type of “how can I break this” thinking usually comes naturally to good security folks, so it’s actually a good fit for them. However, good developers certainly develop this mindset as they test more, and it’s extremely beneficial for both functional and security tests. It’s a cool and powerful way to get developers who aren’t security-minded going with security.
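As a small sketch of that mindset, here is a made-up clamp() utility and the kinds of positive, negative, and boundary tests described above, written with plain assertions for self-containment (in practice these would be JUnit test methods).

```java
// Illustrative method under test plus the tests you'd write for it.
// The clamp() example is invented for this sketch.
public class ClampTest {

    // Method under test: constrain a value to the range [min, max].
    static int clamp(int value, int min, int max) {
        if (min > max) throw new IllegalArgumentException("min > max");
        return Math.max(min, Math.min(max, value));
    }

    static void check(boolean condition, String name) {
        if (!condition) throw new AssertionError("failed: " + name);
    }

    public static void main(String[] args) {
        check(clamp(5, 0, 10) == 5, "positive: in-range value passes through");
        check(clamp(0, 0, 10) == 0, "boundary: exactly at min");
        check(clamp(10, 0, 10) == 10, "boundary: exactly at max");
        check(clamp(-7, 0, 10) == 0, "negative input raised to min");
        check(clamp(Integer.MAX_VALUE, 0, 10) == 10, "extreme value capped");
        boolean threw = false;
        try { clamp(1, 10, 0); } catch (IllegalArgumentException e) { threw = true; }
        check(threw, "invalid arguments rejected loudly, not silently");
        System.out.println("all checks passed");
    }
}
```

Notice that every check inspects a return value or an expected exception – simply calling clamp() and moving on would exercise the code without testing anything.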

3. Start small
This piece of advice is given for almost everything, but that’s because it works. Certainly begin practicing on some insignificant code, or even just make a sample app to get going. However, when getting good tests into your code-base, I generally recommend picking either a) your most important classes or b) your buggiest classes. You’ll likely be surprised at the number of bugs you find when you start the process, but just remember those are bugs your customer won’t find!

4. Capture metrics (and use them)
There are lots of tools that help you capture metrics over source code or bug repositories. It’s really a fun exercise to see the number of reported bugs go down as your number of unit tests goes up. You can also use data like the number of bugs in code in a certain part of the application, or written by a certain developer to identify where to focus your testing efforts.

5. Use tools to help you out
You could certainly write your own unit testing framework, but why bother when others have already done it? In the Java world, the reigning king is JUnit. Martin Fowler is quoted as saying “never in the field of software development have so many owed so much to so few lines of code” in reference to JUnit. The idea behind JUnit is simple, but it’s notable because the execution is great, and there are lots of additional tools, like IDE support and build tool support, that make using it so simple.

6. Do code coverage
A bit earlier, I knocked code coverage because I feel it’s a pretty weak metric in and of itself. However, it is a valuable tool in combination with a quality process. If you know your tests are good, and you know that you cover 85% of your significant code, then you’re doing pretty well. Again, assuming that your tests are good, this can be another metric in the process that points to improvement.

As far as tools go, there are several, but I generally recommend EclEmma for this purpose.

7. Code review your tests with the code they’re testing
Yes, you should code review your tests. This is part of the “writing good tests” process. You need to ensure that people are testing their code, and that they are doing it correctly. Just fold this into your code review process, and you’re golden.

8. Test different requirements
Don’t just test functionality. Test for performance. Test for security. Test for other types of requirements. All these things can be added, and they’ll improve your confidence and quality.

9. Add a test when you get a bug
When you find a bug, do this:
– add a test that covers the bug
– make sure all the existing tests pass, but the new test fails
– fix the code
– make sure all tests pass

This process allows you to make sure you don’t regress over time and that the same bug doesn’t come back to haunt you.
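The four steps above might look like this in miniature, using a made-up bug report (usernames differing only by case or trailing whitespace created duplicate accounts); the test added at step one stays in the suite forever.

```java
// Sketch of a bug-driven regression test. The bug and the
// normalize() helper are invented for this example.
public class UsernameNormalizer {

    // The fixed implementation. The buggy version simply returned raw,
    // so "Alice " and "alice" were treated as different users.
    static String normalize(String raw) {
        return raw == null ? "" : raw.trim().toLowerCase();
    }

    public static void main(String[] args) {
        // The regression test added when the bug was reported: it
        // failed against the old code, passes now, and guards against
        // the same bug ever coming back.
        if (!normalize("Alice ").equals(normalize("alice"))) {
            throw new AssertionError("duplicate-account bug has regressed");
        }
        System.out.println("regression test passes");
    }
}
```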

10. Write your tests first … at least on paper
A lot of people swear by Test Driven Development, the idea that you write all your tests for a method/class _before_ you write the class. You ensure all of them fail. You then write the class, and by the time all the tests pass (assuming you’ve written tests to cover every scenario and that the tests are accurate and “good”), you’re done with coding.

I’m personally not a stickler for this process, though there are advantages. If you’re doing a brand-new project, this can be great, but a lot of the work developers do is on legacy code (even if only a few weeks/months old), and a lot of that doesn’t have tests, so we have to have a process that allows for that.

What I will say is that you should write your tests separately from the code. All tests come from the requirements, not the code. This is an important point, and is where most new test writers fail – they write tests to match the code that’s there, rather than testing it against the requirements. Of course, if you’re writing tests meant to work for the code you see, those tests should pass. It’s divorcing yourself from the implementation when you’re writing tests that’s important. This is something you shouldn’t sacrifice on.

11. Don’t let testing replace modelling altogether
In this superb talk, Glenn Vanderburg points out that software engineering relies quite heavily on modelling, when we can achieve many of the same desired results through testing. He talks about aerospace engineers using modelling, followed by a prototype, but says that of course they would use the prototype every time if it were as cheap as the modelling, since it’s actually testing the “real thing”. I think this point does ring true to an extent. I don’t personally like using modelling extensively because in practice it’s a) overkill and b) outdated as quickly as you can create it. However, I do think higher-level modelling adds significant value and that testing is never going to be able to effectively replace it because the value is in the mental exercise of considering the system at a higher level, which testing often doesn’t do properly, or at least effectively.

12. Continuous testing
Continuous integration platforms are common nowadays, but there are still lots of people not using them (but you SHOULD!). However, they really are extremely helpful for testing. They all include the idea that the unit tests should be run on every build. This is powerful because it forces you to find the broken tests quickly and fix them while what you changed is still fresh on your mind. If you don’t have a CI environment, your build system and configuration should at least allow you to do this every time you build, and then you should have your process include that step.

13. Get unit testing into your SDLC.
It sounds sad to say, but you have to force testing as a requirement or there will be those that won’t do it. Testing is a definite and important step in the SDLC and should be represented as such. I will note here that I fully believe your good developers will really enjoy the effect of the unit tests once they use them (even if they still don’t like writing them). Every good developer I’ve worked with has thought good tests were worth their weight in gold.

14. Add in tests for integration, functional, end-to-end, etc.
Unit testing is really a subset of the larger testing scope. There are tests at the component, business function, application, etc. levels and all of these should be tested. All of the same rules above apply to these as well. It’s really great to be able to hit a button and know that your application is being run through the gauntlet of unit, integration, functional, and end-to-end tests. It’s a pretty powerful concept.

One notable (and cool) example of this specifically as it relates to security is the use of security regression testing using the OWASP ZAP tool.

Unit testing is one of those ideas that’s not really specifically for security, but actually does quite a bit for security if applied correctly. It’s also a great way to get developers and security people working together for a common goal.

It’s also near and dear to my heart as I think it’s been the single most important idea that’s helped me improve as a developer over the years. I hope it’s as useful for you as it has been for me.



Year Of Security for Java – Week 18 – Perform Application Layer Intrusion Detection


What is it and why should I care?
Application layer intrusion detection is a simple concept that I believe is very, very powerful when it comes to protecting applications. Most of the topics I’ve covered thus far have focused on the development portion of the software life-cycle, but this topic really covers the entire span of an application, from requirements and planning to sunsetting.

The basic concept is that you plan for, implement and monitor “bad” things that occur in your application. With this type of system in place, you look for events that appear to be undesirable in some way and then keep track of them. Over time, you can make decisions about whether those individual events turn into an actual attack.

Many developers actually do most of the work of detection already. Consider the following pseudo-code:

if (user has access to record) {
    get data
    redirect to view/edit page
} else {
    log exception
    send user error message
}
I’ve seen code just like this lots of times. The problem here is the handling of the exceptional condition. In general, people don’t review logs, so if there’s an attacker trying to break your application, the only person seeing the error you’ve caught is the _attacker_. With one quick addition – sending a message to your intrusion detection engine – you can start tracking these events and actually gain insight into the real-time (and, if you choose to store it, historical) usage of your application. After you’ve detected an actual intrusion, you also have the ability to respond to the activity in any [legal] way you see fit. Popular options include increased logging, manipulating the user’s account (logout, disable), or even blocking access to certain functionality.
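As a minimal sketch of what the tracking side might look like – the class name, event names, and threshold here are all illustrative, and ESAPI’s intrusion detector and AppSensor provide real implementations – the engine just counts suspicious events per user and decides when they add up to an attack:

```java
import java.util.HashMap;
import java.util.Map;

// Toy intrusion detection engine: count security events per user and
// flag an attack once a threshold is crossed. Real engines add time
// windows, persistence, and configurable responses.
public class SimpleIntrusionDetector {

    private static final int THRESHOLD = 3; // illustrative value

    private final Map<String, Integer> counts = new HashMap<>();

    // Called from the else-branch of the access check, in addition to
    // logging. Returns true once this user's events of this type cross
    // the threshold, at which point a response (increased logging,
    // logout, disable, block) can be triggered.
    public boolean reportEvent(String userId, String eventType) {
        int count = counts.merge(userId + ":" + eventType, 1, Integer::sum);
        return count >= THRESHOLD;
    }
}
```

The key design point is that each individual event is cheap to report; it is the correlation over time that turns isolated errors into actionable knowledge about an attack.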

What should I do about it?
Let’s assume I’ve sold you on the idea of implementing something like this (hopefully I have). What now?

Well, you have a few options on how to proceed that I’m aware of: ESAPI, AppSensor or roll-your-own.

ESAPI does have an intrusion detection engine built-in that performs some of these ideas. It is admittedly not extensive, but the core is there and can certainly be extended.

AppSensor is one such extension of the ESAPI intrusion detection engine. Its implementation is more extensive than what’s available in ESAPI. Additionally, the project offers a book about the overall idea, as well as significant documentation beyond the code. Lastly, a significant update to both the project’s documentation and code is currently in the works.

Rolling your own analysis engine can be a small or very large project depending on your needs. Nevertheless, you can certainly take the ideas and implement them in your applications and get significant benefit.

By just adding a little bit of effort, you can gain significant insight into the overall security health of your application(s). You can see who attacked/is attacking your application in real-time or the past, and you can actually respond to events as they occur. Who wouldn’t like that?

Author note: I work on the AppSensor project, so this whole topic is near and dear to me. Please take advantage of the idea whether it’s in our implementation or not!



Year Of Security for Java – Week 15 – Audit Security Related Events


What is it and why should I care?
Auditing security related events includes two basic concepts, so we’ll begin by treating them individually.

Auditing is a key part of any real software system. Many people treat logging and auditing as the same idea, though they’re actually different. Definitions might vary, but mine boils down to the consumer of the output. In general, logging data is consumed by developers (most often for debugging problems), and possibly business owners to see basic trending information (likely through some basic log parsing for usage statistics, etc). Auditing, on the other hand, is meant to be used by auditors to reconstruct the events that occurred in the system. The view of these events is often constrained by a time period, a specific user or set of users, a specific function or set of functions, etc.

Usually, logged data is unstructured and can be or represent anything. Audit data, on the other hand, is generally structured, and can be thought of more like a database record where there are specific fields that are always filled in, and the only thing that changes is the data in the column, not the column itself (to use the DB analogy).

Security Related Events
Security related events are going to be determined by you as part of your development process, but there are several obvious candidates, such as login, logout, user management, credential management, etc. All of these are clearly security related and could be important to the security posture of your application, either generally or as related to a single user or set of users.

Knowing that a security related event has occurred is important. Not knowing could lead to not only unauthorized access or usage of the system, but the inability to know that it even occurred.

What should I do about it?
Auditing is the option to choose when you’re talking about security-related events. For any security-related event that occurs in the system, you should be auditing the activity. You should collect appropriate data on each event, such as event type (what happened), actor performing event (who did it), timestamp (when did they do it), etc. This type of data will allow you to filter the dataset by user, time, function, or any of the other data points when needed to determine specifically what occurred from an auditing perspective. Structured data in this form also lets you do helpful things like look at generic patterns and find that a specific user did a bunch of things outside work hours (unusual?) or everyone in the system all performed a single function within an hour of each other (maybe strange?). Some of these ideas are found in the concept of AppSensor, an OWASP project I work on.
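As a hedged sketch of these ideas, here is a structured audit record with a fixed set of fields, plus one of the generic pattern queries mentioned above (events outside working hours). The field names and the 08:00–18:00 UTC window are assumptions made for the example.

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.util.List;
import java.util.stream.Collectors;

// Structured audit data: every entry has the same "columns" (to use
// the DB analogy); only the values vary.
public class AuditTrail {

    public record AuditEvent(
        Instant timestamp,   // when it happened
        String actor,        // who did it
        String eventType,    // what happened, e.g. "LOGIN"
        String outcome       // e.g. "SUCCESS" / "FAILURE"
    ) {}

    // One generic pattern query: events logged outside 08:00-18:00 UTC
    // (the work-hours window is an assumption for the example).
    public static List<AuditEvent> outsideWorkHours(List<AuditEvent> events) {
        return events.stream()
            .filter(e -> {
                int hour = e.timestamp().atZone(ZoneOffset.UTC).getHour();
                return hour < 8 || hour >= 18;
            })
            .collect(Collectors.toList());
    }
}
```

Because the fields are fixed, the same dataset supports filtering by user, time, or function with equally small queries – the flexibility that unstructured log lines don’t give you.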

I would also like to point out a great talk Gunnar Peterson gave at an OWASP chapter meeting called “Audit Logging Done Right”. That video goes into detail about auditing and the power it has when used appropriately.

Auditing is not a new technology, and it is often viewed as a boring “have to do”, but it is actually a very powerful concept that gives us visibility into what the application is doing. It also gives us all of the nice capabilities of dealing with structured data. Once you recognize the utility, I hope you’ll start auditing a few more things out of a realization of its power, not just obligation!



Year Of Security for Java – Week 13 – Know Your Frameworks

No Gravatar

What is it and why should I care?
Libraries and frameworks are a reality for every J2EE developer (pretty much any developer, actually) out there. We use them for MVC, DB, logging, web services, security, XML processing, as well as a host of other features. We rely on them in our production apps every single day. All this code written by someone else. Code that likely hasn’t been internally vetted. Code that likely hasn’t even been looked at. Yet, we still use these masses of code (generally MUCH larger than the custom code written for the app itself) to add functionality to our applications.

Knowing your frameworks means you don’t accept the code blindly. When you include a piece of software in your application, you’ve inherited it and are now responsible for it. From a functionality perspective, you fix it when it breaks. From a security perspective, you are now responsible for dealing with its vulnerabilities. This is the crux of the problem: we manage a LOT of code now (code we didn’t write) and are responsible for making sure it is functional and secure: no easy task.

What should I do about it?
There are many things you should do when dealing with frameworks. I’ll cover the two I think are most important.

First, you should patch your frameworks when new vulnerabilities are found. This is a significant effort because it obviously requires much testing and coordination to upgrade frameworks within applications. However, significant vulnerabilities have been found in extremely popular libraries, and that necessitates patching. Sometimes patching can actually be done without upgrading the library; the mitigation could be handled by a WAF or a similar product instead. The point is you need to prevent the vulnerability that’s been exposed.

Second, you should really know and understand how your framework functions. While most frameworks patch vulnerabilities reasonably quickly (especially if the vuln is public knowledge), they will often not patch their “design decisions”. These are often architectural patterns that benefit functionality, but not security. One popular pattern that comes to mind is auto-binding / mass assignment. The technique of populating the model using request data is not new, and is very powerful. It can make code much easier and cleaner to write. However, it’s often implemented with no security at all. The best you’ll usually get is an opt-in mechanism for securing it, and most people are not going to opt in, so it will be used insecurely in many cases. Patterns like this are frequently seen in modern frameworks, and developers really need to be aware of what’s going on internally in the framework to understand how the security and functionality of their application will be affected.
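To make the opt-in idea concrete, here is a minimal sketch of binding with an explicit allowlist so that request data can never reach fields you didn’t intend to expose. The SafeBinder class and field names are illustrative, not any particular framework’s API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class SafeBinder {
    // Only these model fields may ever be populated from request parameters.
    private static final Set<String> ALLOWED = Set.of("firstName", "lastName", "email");

    // Copies only allowlisted parameters into the model; silently drops
    // everything else (e.g. an attacker-supplied "isAdmin=true").
    public static Map<String, String> bind(Map<String, String> requestParams) {
        Map<String, String> model = new HashMap<>();
        for (Map.Entry<String, String> p : requestParams.entrySet()) {
            if (ALLOWED.contains(p.getKey())) {
                model.put(p.getKey(), p.getValue());
            }
        }
        return model;
    }
}
```

The same effect is what a framework’s opt-in binding controls give you; the point is that the allowlist must be explicit rather than assumed.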

Frameworks are a necessary piece to most any development work going on today, but blindly trusting them is not. Be aware of what the frameworks you’re using do and how they do it. Keep an eye on them and patch them as necessary. This will help manage the risk of using them in your applications.

This post turned out to be very timely. Aspect Security just put out a nice paper (sorry, behind registration wall) on some analysis they did regarding the usage of java libraries through the maven central repo. They analyzed 113 million downloads and found that 26% of those downloads have known vulnerabilities! That’s a significant number. Their analysis doesn’t say whether or not those downloads were followed by requests for the patched versions, but I would bet not.



Year Of Security for Java – Week 12 – Log Forging Prevention

No Gravatar

What is it and why should I care?
Log forging is an issue that can occur if you allow untrusted data to be written to a log storage mechanism. The intent of the attacker using log forging is to cover his tracks in the logs, or at least make understanding what he was doing more difficult. Unfortunately, like most log-related issues, it’s generally not a concern until something happens and you actually need the logs.

A simple example of log forging might look like this: (first the code)

String someVar = request.getParameter("xyz");
logger.info("Data is: " + someVar);

And now for what a normal request and the associated log entry might look like:

?xyz=my name is Bob

[2012-03-15 02:04:31] [bob] Data is: my name is Bob

And finally what a forged request and the associated log entry might look like:

?xyz=my name is Bob\r\n[2012-03-15 02:04:39] [mary] Mary created new user\r\n[2012-03-15 02:04:46] [josh] Josh logged out\r\n[2012-03-15 02:04:55] [susan] Susan performed an important transaction

[2012-03-15 02:04:31] [bob] Data is: my name is Bob
[2012-03-15 02:04:39] [mary] Mary created new user
[2012-03-15 02:04:46] [josh] Josh logged out
[2012-03-15 02:04:55] [susan] Susan performed an important transaction

The idea here is that the attacker has surmised what a standard log entry might look like and then, using simple newline characters, created what appear to be new legitimate log entries.

Note: If you are using a database for logging, you likely won’t have as much of an issue, since each entry is going to be in its own row. It could still affect you, though, if your log viewer doesn’t distinguish between rows. You also still need to be aware of SQL injection here, which is actually a much more serious issue.

What should I do about it?
Fortunately, log forging has a relatively simple fix.

The general approach is to validate input (you should already be doing this) and encode output (you also should be doing this). Validating input alone is not generally going to stop this attack, since there are valid cases to allow input with newlines. Encoding output in addition to validating the input, however, should solve your problem. There are various options for encoding depending on your needs. A simple fix might be to strip out any user-supplied newlines or replace them with some benign character or character sequence. Another alternative might be to HTML encode the data before storing it. This allows you to decode the data later if you need to get back to the original data, as well as have it set up nicely for a web-based log viewing experience if that’s desirable.
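The "strip or replace newlines" fix above can be sketched in a couple of lines; the LogSanitizer class name is illustrative:

```java
public class LogSanitizer {
    // Replace CR and LF with a benign character so that injected
    // "entries" cannot start a new line in the log file.
    public static String neutralize(String userInput) {
        if (userInput == null) {
            return "";
        }
        return userInput.replaceAll("[\\r\\n]", "_");
    }
}
```

Applied to the forged request from the example, the fake entries collapse onto a single line, which makes the forgery obvious instead of invisible. If you instead need to recover the original data later, HTML encoding before storage is the reversible alternative mentioned above.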

Log forging is a simple issue to understand and solve – it just takes some planning ahead to deal with properly. You’ll be glad you did though when you get that 3am call to look through the logs and figure out what’s happening!

I’ve actually already written a longer, more detailed article about log forging prevention here, but this shorter version was meant to show the essentials and fits in with the year of security for Java content.



Year Of Security for Java – Week 11 – X-XSS-Protection

No Gravatar

What is it and why should I care?
X-XSS-Protection is a Microsoft IE technology used to help prevent reflected XSS attacks in IE.

Note 1: This is not a “panacea” for XSS. There is no excuse for not developing your site in a secure manner to prevent XSS. This however is a protection offered by the browser itself (as opposed to an application), meant to protect the masses from the vast amount of XSS litter on the internet.
Note 2: Firefox (by way of NoScript), Chrome (by way of WebKit) and Safari (also WebKit) have similar protections, but apparently don’t use the X-XSS-Protection header as a controlling mechanism.

The XSS protection provided essentially checks for request content that is matched in the response and would cause an XSS vulnerability to be exploited. The filter then performs some mangling of the content to prevent the attack from succeeding. According to the docs, IE has the protection turned on by default for most security zones, including the Internet zone, which is the primary concern for most users.

What should I do about it?
The first thing you should do is work towards resolving any and all XSS issues in your application. As a security minded developer, this is a must.

The recommendation for the use of this header is actually not so straightforward in my opinion. In general, the other HTTP headers I’ve described already in the series have had very little downside. However, the X-XSS-Protection header has had some problems in the past. As far as I’m aware, the IE folks have done a good job of dealing with the known vulns, but I still have concerns, since some of the filter’s own vulnerabilities have themselves introduced security problems.

In general, I would recommend keeping the protection enabled, unless you are very sure you have XSS all cleaned up in your app. However, this comes with the caveat that you should at least put some thought into the use cases in your site first. Depending on your choice, here are the options you have available to use, and how you enable them in your application using the X-XSS-Protection HTTP header.

1. Enable the protection for all security zones in blocking mode (Blocking mode means the site won’t display at all if an XSS attempt is found, but rather a simple warning to the user that the attack has been blocked):

X-XSS-Protection: 1; mode=block

2. Enable the protection for all security zones:

X-XSS-Protection: 1

3. Leave the protection enabled for the default zones:

Do nothing.

4. Disable the protection entirely (I only recommend this in 2 cases: either you’re positive that you’ve completely resolved XSS in your app, or there’s an issue in the XSS filter that you’re aware of that causes an additional vulnerability) :

X-XSS-Protection: 0
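The options above map to a small set of header values; a helper like the one below (the class and method names are illustrative) makes the choice explicit in code. In a servlet app you would pass the result to response.setHeader("X-XSS-Protection", ...):

```java
public class XssProtectionHeader {
    public static final String NAME = "X-XSS-Protection";

    // Option 1: enabled + blocking; Option 2: enabled; Option 4: disabled.
    // Option 3 (default zones) is simply not sending the header at all.
    public static String value(boolean enabled, boolean block) {
        if (!enabled) {
            return "0";
        }
        return block ? "1; mode=block" : "1";
    }
}
```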

The protection provided by the X-XSS-Protection header is not complete, but it does raise the bar against attackers and helps protect users. While there have certainly been some implementation issues, the fact that all the major browsers have some implementation of reflected XSS protection shows the importance of this issue. Be prudent in implementation, but certainly do everything you can to help your users be safe.



Year Of Security for Java – Week 10 – X-Content-Type-Options

No Gravatar

What is it and why should I care?
X-Content-Type-Options is an HTTP header that can help prevent browser content-type sniffing problems.

The content-type for a given resource should match the “type” (too obvious?) of the resource. For example, an HTML page would use “text/html”, a PNG image would use “image/png”, and a CSS document would use “text/css”. However, often the content-type is either not specified or is wrong. This has led to browsers implementing “sniffing” algorithms to determine what the actual data being served is, and then applying the appropriate parsing and execution semantics for the sniffed type. This, however, has caused certain bugs. One well-known example allowed attackers to have files that were supposedly images be interpreted as javascript and executed.

What should I do about it?
There are actually 2 things to do here.

Step 1. When serving resources, make sure you send the content-type header to appropriately match the type of the resource being served. For example, if you’re serving an HTML page, you should send the HTTP header:

Content-Type: text/html

Step 2. Add the X-Content-Type-Options header with a value of “nosniff” to inform the browser to trust what the site has sent is the appropriate content-type, and to not attempt “sniffing” the real content-type. Adding this additional header would look like this:

X-Content-Type-Options: nosniff

These 2 simple steps will provide additional protection against content-type sniffing issues.
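The two steps are just a pair of response headers; the helper below (the class name is illustrative) shows them together. In a servlet app you would set each via HttpServletResponse.setHeader:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class HtmlResponseHeaders {
    // Step 1: declare the real content-type.
    // Step 2: tell the browser to trust it and not to sniff.
    public static Map<String, String> forHtmlPage() {
        Map<String, String> headers = new LinkedHashMap<>();
        headers.put("Content-Type", "text/html");
        headers.put("X-Content-Type-Options", "nosniff");
        return headers;
    }
}
```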

An important note to mention here is that while this is a useful protection, not all browsers have implemented it. As of this writing (3/6/2012), only Chrome and IE support this protection (though NoScript does apparently add the protection to Firefox). Even though it won’t save all your users, it’s a useful mechanism to provide even more assurance for your users.



Year Of Security for Java – Week 9 – X-Frame-Options

No Gravatar

What is it and why should I care?
X-Frame-Options (moving towards just Frame-Options in a draft spec – dropping the X-) is a new technology that allows an application to specify whether or not specific pages of the site can be framed. This is meant to help deal with the clickjacking problem.

The technology is implemented as an HTTP response header specified per-page. Browsers supporting the (X-)Frame-Options header will respect the declaration of the page and either allow or disallow the page to be framed depending upon the specification.

What should I do about it?
Yet again, this is a very low-risk item that only adds additional assurance. There are some limitations that may prevent the header from offering protection in some instances, but it does NOT make you less safe. It is an additional layer of protection.

A page can specify 3 different options for how it wants to be framed.

Option 1: DENY
This option means this page can never be framed by any page, including a page with the same origin. A sample code snippet is below:

HttpServletResponse response ...;
response.addHeader("X-FRAME-OPTIONS", "DENY");

Option 2: SAMEORIGIN
This option means this page can be framed, but only by another page within the same origin. A sample code snippet is below:

HttpServletResponse response ...;
response.addHeader("X-FRAME-OPTIONS", "SAMEORIGIN");

Option 3: Allow-From
This option means the page can be framed, but only by the specified origin. A sample code snippet is below:

HttpServletResponse response ...;
response.addHeader("X-FRAME-OPTIONS", "ALLOW-FROM https://example.com"); // the allowed framing origin must be specified

As an additional help, the good folks at OWASP have put together a simple example J2EE filter for X-Frame-Options.

(X-)Frame-Options is a good additional layer of protection to add to your site to prevent clickjacking. While it won’t stop everything, it costs very little, and can help protect your users.


