Year Of Security for Java – Week 35 – Solve Security Problems One at a Time


What is it and why should I care?
This article (and several of those remaining in the series) is not so much technical in nature, but rather deals more with processes related to security problem solving.

It’s a fact of life in most development and/or security shops that there are those fire-drill days, and for security practitioners that is often due to the “we have a security problem and the sky is falling … fix it” mentality. This course of action, however, doesn’t lend itself to fixing things properly (root-cause analysis), and certainly doesn’t allow for the methodical eradication of entire classes of vulnerabilities.

That is a problem.

In order to make a dent in the security problems plaguing the Internet, we cannot solve problems only as they come up (and that covers just the known attack vectors; we don’t know what we don’t know). We have to get ahead of them. We can’t fix problems when they get here – at that point it’s too late. This brings to mind several ideas, like building security in from the start, and looking at what others have done to solve their problems and using their good ideas in our own processes.

However, one issue I don’t see addressed much in the security realm (though it does come up) is the idea that we’re trying to tackle too many problems at once. Make no mistake, there are a lot of issues, and they all seem important, but there’s generally a prioritized order for most situations – some type of risk ranking. If you get the appropriate stakeholders in the room, and make the possible security issues clear, some issues will be clearly more important or impactful than others.

If we work from the assumption that we have ranked our problems in priority order, why should we be haphazard in our approach to their resolution? We absolutely shouldn’t. Security tooling has approached this with the familiar red-yellow-green solution in many cases, which is fairly helpful if you can tell the tool what constitutes red-yellow-green in your environment. However, this otherwise helpful approach misses the point that solving many security problems requires an architectural solution.

Let’s consider 2 examples from a different problem set – performance.

1. One problem might be a particular method that’s reasonably slow and gets executed many, many times. In this case, you’d probably just go in, rewrite the method using some optimizations specific to the method, and be done with it. Instant performance increase, and very little fuss.

2. Another problem might be queries taking too long across the application. If we assume a relational data access layer, there could be lots of solutions. You might scale the database hardware somehow, swap out the database vendor, add caching either internal or external to the application, tune queries, or a handful of other things. The point is that many of these “fixes” involve significant software and/or hardware architectural changes, and you wouldn’t think of making a decision on those nearly so quickly.

Some security issues (e.g. session fixation) are pretty simple fixes: you make them in one place and you’re done. Others (SQL injection, XSS, etc.) are certainly more complex and generally are best solved with architectural changes.

What should I do about it?

Hopefully I’ve convinced you that solving security problems fire-drill style is a bad idea and that many require a more rigorous approach, so how do we solve them correctly?

My recommendation to developers is that you approach them individually (I caveat this with “you need to fix the easy/terrible ones first” to knock out the true fire-drills). This means that you pick your biggest problem (calculated by some risk rating methodology) and try to a) eradicate the issue from your codebase(s) and b) make it as impossible as you can for it to ever happen again.

That can be daunting, but here are a few recommendations to get that process started.

1. Understand the problem.
Don’t ever try to tell anyone how to solve a problem you don’t understand yourself. You usually don’t actually improve anything and you look foolish. This is a common problem in security, so enough said here.

2. Consider all use cases where the issue can occur.
Figure out the ways that developers can cause the issue, as well as any they might not be using yet, but will be soon. This gives you the breadth of functionality that a possible solution has to at least consider, if not account for. The goal is that you don’t give developers an excuse to go around your solutions because “we need this feature”.

3. Evaluate solutions.
This is certainly a broad topic with lots of possible tasks, but there are a few obvious ones.
– Distill the known “secure” approaches and their associated tradeoffs
– Look for known attacks against those approaches
– Decide on a single or hybrid solution (most of the time, building your own is the wrong idea)
– Try to find a good implementation that matches your chosen solution
– Follow the guidance to implement the solution properly

4. Institutionalize the chosen solution.
Once you have a chosen solution for your problem and a working implementation, you now need to make sure that is the solution that actually gets used. One approach that seems to work pretty well is the ESAPI model. Here, you build a set of controls specific to your organization that function as the “approved” solution for a given problem area. You also build appropriate documentation showing developers how to use it properly. This brings in all the benefits of code reuse, as well as the consistent application of security controls.

5. Add technology and processes for verification.
This is an important step that is often not done. After you’ve considered the problem, come up with a solution, and got people to use it, you need to make sure they keep using it. Again, this could mean a lot of things, but here are a few ideas to get you going:
– Get (or build) a tool that not only allows you to check that you’re not doing something wrong, but also that you are doing something right. This is probably going to be custom, but it’s very cool to be able to see everywhere you’re sending queries to a database that DON’T go through these 10 “approved” methods. That’s a much more manageable problem.
– Add people and processes to cover areas where tools don’t work. At the moment, software can’t catch all of these things, but humans can if they have the time. By only requiring humans to step in and evaluate those areas that technology can’t deal with, you cut down on the time requirement, and give folks a chance to focus on those human-only tasks where they’re actually needed.
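The first idea above (finding database calls that bypass the approved methods) can be sketched very roughly as below. Everything here is illustrative – the class name, the “approved” file list, and the naive string matching; a real tool would work on the AST or bytecode rather than raw source lines.

```java
import java.util.ArrayList;
import java.util.List;

// Toy source scanner: flags files that call JDBC directly unless they are
// on the "approved" list of data-access classes. Real tooling would use a
// compiler plugin or bytecode analysis instead of string matching.
public class QueryUsageScanner {

    private static final List<String> APPROVED_FILES =
            List.of("ApprovedDao.java");                 // where the "approved" methods live
    private static final List<String> RAW_JDBC_MARKERS =
            List.of("createStatement", "prepareStatement");

    public static List<String> findViolations(String fileName, List<String> sourceLines) {
        List<String> violations = new ArrayList<>();
        if (APPROVED_FILES.contains(fileName)) {
            return violations;                           // approved code may touch JDBC
        }
        for (int i = 0; i < sourceLines.size(); i++) {
            for (String marker : RAW_JDBC_MARKERS) {
                if (sourceLines.get(i).contains(marker)) {
                    violations.add(fileName + ":" + (i + 1) + " uses " + marker);
                }
            }
        }
        return violations;
    }
}
```

Even a crude check like this turns “are we using the approved methods?” into something you can run on every build.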

In conclusion, there are lots of security problems to be solved, and not enough time or people to solve them. However if we prioritize our problems, and then deal with each one thoroughly, we can create significantly more secure applications consistently.

References
———–
https://www.owasp.org/index.php/Category:OWASP_Enterprise_Security_API
https://www.owasp.org/index.php/OWASP_Security_Blitz


Year Of Security for Java – Week 34 – Separate Admin Functionality


What is it and why should I care?
The idea of separating administrative functionality may strike some as odd. By administrative functionality, I’m just grouping those higher criticality functions (generally user/group/role management) that have the characteristic of affecting the application at large, generally through privilege escalation. The idea here is this:

– I have some critical functions that allow user management
– If these functions were exploited, privileges could be added to or removed from users
– I don’t trust that these functions are perfectly protected in all cases

You may disagree with that flow, particularly the last step, and you may feel you are perfectly secure. In my experience, most apps have holes, and exposing critical functionality *unnecessarily* can allow those latent vulnerabilities to have a much larger impact.

What should I do about it?

First, determine if this is even an issue in your application. If you don’t manage users in your app, you don’t need to think about this one. If your application is only internal-facing, maybe your risk profile means you don’t care about this issue in that circumstance.

Next, consider your options:

Status Quo
You could leave things as they are (assuming an existing app) and leave the admin functionality in place. Depending on the value of the protected resources, this may be a legitimate option, but for an important application, it likely isn’t.

Build another application
You could build a standalone app that is separate from the general application. This standalone app could have different authentication requirements (multi-factor), or it might only be deployed internally since all the admins are internal, or access to it might be restricted to certain IP addresses, etc. You have a lot of options in this scenario, but it does require a different codebase and/or deployment.

Subset the application
You could also build a specific subset of your application related to administrative tasks. This is probably the most common option across applications I’ve seen. The issue is that access to this functionality is treated the same way as the general application. You can improve this by having additional controls. Maybe you require client certificate authentication for this subsection, or maybe multi-factor authentication. You might also have additional source restrictions (IP addresses, browser, etc.) for this subsection of the application.

No matter which option you choose, make sure that you consider the issue as it applies to your environment, evaluate the alternatives, and make an informed decision based on the individual circumstances of the situation.

In conclusion, administrative functionality in an application is a high-priority target for exploitation by attackers given the valuable functionality it exposes. With some simple and straightforward changes to your application, you can greatly reduce the risk of this functionality being exploited.


Year Of Security for Java – Week 33 – Access Control (3)


What is it and why should I care?
We defined access control in part 1 of the access control sub-series, so let’s move on to talk more about what we do about it.

What should I do about it?

In part 1 we discussed limiting your users’ interactions with your application by functionality. In part 2 we discussed adding data access as a criterion. This time I’d like to discuss an additional consideration for limiting interaction: other. Yes, I mean to be generic. There are many other specific data points we can use to generate context in order to make decisions about access control. I’ll outline a few below for your consideration, but consider this a starting list to get you going. You should definitely expand this list and make it specific to your environment.

The following are a few ideas to provide additional context to your access control decision matrix.

Date / Time
The time of day can be a critical factor for deciding whether or not to allow access in certain environments. A simple example might be that you only allow access to employees M-F 8-6. Outside of those hours, the employee may have little to no access within the system. Another example might be that you only perform certain tasks once a quarter at a certain time. You can set your policy to only allow a small window wherein those changes can be made.
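The M-F 8-6 policy above can be sketched in a few lines; the hours, days, and class name are all illustrative, and a real policy would be configurable rather than hard-coded:

```java
import java.time.DayOfWeek;
import java.time.LocalDateTime;

// Sketch of a time-of-day access rule: access is allowed only
// Monday through Friday, 08:00-18:00. The window is illustrative.
public class TimeWindowPolicy {

    public static boolean isAccessAllowed(LocalDateTime now) {
        DayOfWeek day = now.getDayOfWeek();
        boolean weekday = day != DayOfWeek.SATURDAY && day != DayOfWeek.SUNDAY;
        int hour = now.getHour();
        boolean businessHours = hour >= 8 && hour < 18;
        return weekday && businessHours;
    }
}
```

Passing the current time in as a parameter (rather than calling LocalDateTime.now() inside) keeps the rule easy to test.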

Physical Location
If you expect a given user to only login from the UK and they start showing up logged in from Australia, you may not want to allow access. This geo-location capability becomes all the more important with mobile devices. You could theoretically limit a user to only logging in when they are at home or work, and nowhere else. The granularity of location differs depending on your geo-location provider, but many services are starting to get pretty accurate, particularly in the mobile device space.

Type of Device
You might want to limit certain applications to only be used from a mobile device or to never be used from a mobile device.

IP Address
You may want to limit access by specific IP addresses or ranges.

Browser Type / Version
People have been limiting applications by browser type/version for a long time. It’s only been in the last few years, though, that I’ve heard of people doing it for security as opposed to functionality. I’ve seen apps now block older browser versions because they don’t support the security capabilities required for a given application.

These are just a few ideas I’ve seen implemented to help make access control decisions. Some of these data points are logical and simple to get. Some are more difficult to find. Some have higher accuracy than others. The point of this article is to point out that there are additional data points that can be used to make access control decisions. Decide which ones make sense for your application and then use them.

In conclusion, we saw that we can use additional contextual data to make access control decisions that are more sophisticated than the norm. We can use data points that we already have access to in order to limit access to our systems in a more granular way. Done properly, this can greatly improve the security of our applications.

References
———–
https://www.owasp.org/index.php/Category:Access_Control
https://www.owasp.org/index.php/Guide_to_Authorization
https://www.owasp.org/index.php/Access_Control_Cheat_Sheet


Year Of Security for Java – Week 32 – Access Control (2)


What is it and why should I care?
We defined access control in part 1 of the access control sub-series, so let’s move on to talk more about what we do about it.

What should I do about it?
In part 1 we discussed limiting your users’ interactions with your application by functionality. This time I’d like to discuss an additional consideration for limiting interaction: by data. This simply means asking: does this user have access to this specific piece of data? A simple example should help clarify. Consider online banking, where millions of people might have access to the same function (view account), but only you (and bank employees) have access to view your specific account data.

One generally painful issue here is that there’s no generic way for me to say how this specific recommendation will play out in your application. The “data” aspect of this issue means that it is usually coupled to your data model. If users are limited to their bank accounts, you need to consider the way you store data and see how you link users to bank accounts. You then need to limit your queries in the appropriate manner. There are more generically applicable techniques that generally involve data labeling or tagging, but they’re often very involved and complex to set up, and if you need a large solution like that, you are probably already aware of that fact.

A related issue is that most frameworks (save maybe ESAPI's interfaces) do not provide any API help with regards to access control related to data, or even a reminder that you should do it. For the most part, there are always custom APIs within an application to deal with this issue. That means that there’s often spotty coverage: some APIs are protected properly, but others will be wide open.

This means that part of your development process will include identifying those pieces of data that should be restricted to only certain users or groups, and providing centralized access control APIs that are used to evaluate permissions to the given data. You might have a simple API that looks something like the following:

if(isAuthorized(Bill, VIEW, Account.class, 17398)) {
    //do real work
} else {
    //user is doing something bad, or there's a bug in the app, fire off event to appsensor intrusion detector
}

The access control check above asks if “Bill” (user) has access to perform a “view account” (function) on the account (data) represented by id 17398. If you’ve done authentication correctly and performed a check like this, you’re better off than many applications out there.
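One minimal way to back a check like that is sketched below. The class, the enum, and the in-memory owner map are all illustrative; a real implementation would resolve ownership from the data model and would also consult roles/groups (e.g. bank employees).

```java
import java.util.Map;

// Toy data-level access control: an account may be accessed only by its owner.
// The action parameter is kept for the API shape; a real policy would check
// per-action permissions, not just ownership.
public class AccountAccessControl {

    public enum Action { VIEW, EDIT }

    private final Map<Integer, String> accountOwners; // accountId -> owning username

    public AccountAccessControl(Map<Integer, String> accountOwners) {
        this.accountOwners = accountOwners;
    }

    public boolean isAuthorized(String user, Action action, int accountId) {
        String owner = accountOwners.get(accountId);
        return owner != null && owner.equals(user);
    }
}
```

The important property is that the ownership lookup lives in one centralized place, so every code path that touches account data can (and must) call the same check.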

The next post will look at a few other data points to consider in your access control decision matrix. However, the basic function/data checks described in the first two posts are good enough for many uses; it’s critical that both be applied correctly and consistently.

In conclusion, we saw that performing access control checks by considering access to data is an important part of any access control scheme. It can be more difficult to build than functionality checks since there’s rarely help from common frameworks and it is dependent in large part on the data model of the individual application, but rigorous focus on building the appropriate common APIs can make it much easier to apply consistently.

References
———–
https://www.owasp.org/index.php/Category:Access_Control
https://www.owasp.org/index.php/Guide_to_Authorization
https://www.owasp.org/index.php/Access_Control_Cheat_Sheet


Year Of Security for Java – Week 31 – Access Control (1)


What is it and why should I care?

Access control, also known as authorization, is the step that comes after authentication. Access control is the process of “mediating access to resources on the basis of identity” [from here]. It assumes you have determined the identity of the user (whether known or anonymous) and are now making decisions about whether or not that user (or a group they represent) can access a given resource.

Access control is one of the most exploited security controls in many systems. Often times access control is not performed well or maybe not at all. When it exists, many times it is not applied consistently, and therefore leaves large uncovered gaps in the security of an application.

There’s a good bit to unpack in the statements above, and there are different models for access control (MAC, DAC, RBAC, etc). However, there are some basic concepts for access control that generally need to be met in web applications, and those are the ideas I plan to lay out in the next few posts.

What should I do about it?

Note: This is part 1 of 3 in the sub-series related to access control. More is coming.

Note: While there are plenty of helpful best practices for implementation that I’m not going to touch on (like: do not depend on the client at all for access control decisions, apply principle of least privilege, centralize routines for access control, use a mechanism that allows simple policy changes, etc.), I’ve added some references at the bottom that include some of these recommendations. Please read if you’re building access control systems or if you’re just interested.

Given the caveats in the notes above, what I would like to talk about for this first post is one way that you should consider limiting your users’ interactions with your application: by functionality. Limiting users by functionality simply means that certain users (or groups) can access certain functions on your site.

The first popular framework that made this really simple was Struts 1. In Struts 1, you could configure which roles had access to which actions in your application with a really simple mechanism that looked something like this:


    <!-- illustrative Struts 1 mapping; the path and type are made up, but the
         key piece is the roles attribute limiting who may invoke the action -->
    <action path="/trade"
            type="com.example.app.TradeAction"
            roles="trader,admin">
        ...
    </action>

J2EE in general has had a mechanism (roles and security constraints) that allows you to perform this function for a long time, but it can be a little clunky and requires a lot of configuration relative to the Struts 1 mechanism.

In more recent history, the Spring framework picked up the Acegi project and renamed it Spring Security (ok, not that recently). Spring Security has a large focus on authorization, since that’s one of its core functions.

Many frameworks provide a way to accomplish this in their own specific way; you just have to be aware of the mechanisms (or lack thereof) of any given framework you’re using.

The key capabilities you will generally want at a minimum are feature and function management.

What I mean by feature management is a mechanism to control access to several functions at once, grouped by general capability area. For instance, in Spring MVC, you might have a TradeController that controls access to all of your trading features within your application. Certain users/groups may need no access whatsoever to that feature, so being able to exclude components/features of the application from whole subsets of users is nice and simplifies management.

Function management means you can control access to the individual function that is being called for a specific request, ie. tradeShares(). Having the ability to specify which users/groups may access a specific function is critical.

A couple of finer points here:
1. If you only get one or the other (feature/function management), pick function management – you need the specificity.
2. If you get both, be careful regarding inheritance. The most flexible system I’ve seen thus far (that doesn’t get you into a whole heap of trouble) is allowing inheritance from the feature to the function if and only if the function does not specify any allowed users/groups of its own. Otherwise, you get into merging allowed users/groups, and that can get messy, not to mention very confusing for those trying to do the right thing for security.
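That inheritance rule can be sketched as a toy policy resolver (not any particular framework’s API; names are illustrative): a function’s own allowed roles win, and the feature’s roles apply only when the function specifies none.

```java
import java.util.Map;
import java.util.Set;

// Toy feature/function access resolver implementing "inherit from the feature
// if and only if the function specifies no roles of its own".
public class FeatureFunctionPolicy {

    private final Map<String, Set<String>> featureRoles;  // feature -> allowed roles
    private final Map<String, Set<String>> functionRoles; // function -> allowed roles

    public FeatureFunctionPolicy(Map<String, Set<String>> featureRoles,
                                 Map<String, Set<String>> functionRoles) {
        this.featureRoles = featureRoles;
        this.functionRoles = functionRoles;
    }

    public boolean isAllowed(String role, String feature, String function) {
        Set<String> allowed = functionRoles.get(function);
        if (allowed == null || allowed.isEmpty()) {
            allowed = featureRoles.get(feature); // inherit only when unspecified
        }
        return allowed != null && allowed.contains(role);
    }
}
```

Note that the function-level roles replace (never merge with) the feature-level roles, which is exactly what keeps the policy easy to reason about.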

In conclusion, limiting users and groups by access to features and functions is a good first step to providing good access control. There are other ideas coming in the next couple weeks to watch out for and handle additionally, but this is a good start.

References
———–
https://www.owasp.org/index.php/Category:Access_Control
https://www.owasp.org/index.php/Guide_to_Authorization
https://www.owasp.org/index.php/Access_Control_Cheat_Sheet


Year Of Security for Java – Week 30 – Authentication


What is it and why should I care?
Authentication is the process of verifying that someone is who they say they are. Essentially a user claims an identity and then must provide some form of proof of identity. In most systems, the identity is some form of username and the proof is a password. In the simplest case, if the username and password provided match a user record known to the application, then the user is authenticated and proven to hold the claimed identity.

Authentication is one of the most important aspects of security within an application because it’s the starting point for most other conversations. Many security solutions start with the basic assumption of an authenticated user, which assumes we already got authentication right. Authentication is difficult to get right, but it can be done.

What should I do about it?

I’ve already written down a few ideas about authentication in a previous series, but I’ll try to add a few additional ideas here.

Let’s start with a few easy ones that most people are very aware of.

Authenticate over a secure channel. For the web, this means SSL/TLS. Performing authentication over an insecure channel makes intercepting credentials trivial (unless other encryption mechanisms are used). This is an easy win.

Provide a logout feature. Once you’ve logged in, you need to be able to log out. Putting a logout link/button on every page in an easily visible place makes it simple for your users to terminate their authenticated session when they please.

Provide related functions. If you provide an authentication function within your application, that means you’ll need to give your users a mechanism to update their credentials (passwords, tokens, etc.). For the common case of passwords, you’ll need to provide additional functions such as password change and reset. In addition, you’ll want to have an account lockout process to prevent brute-forcing of your authentication system. There are other functions you could add, but these are a good base set. These functions are also easy to get wrong, so pay attention to their implementation.
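As one example of those related functions, a minimal account-lockout sketch follows. The threshold and in-memory storage are illustrative; production code needs persistence, and usually a time-based reset rather than lockout forever.

```java
import java.util.HashMap;
import java.util.Map;

// Toy lockout tracker: lock an account after N consecutive failed logins,
// resetting the counter on success.
public class LockoutTracker {

    private static final int MAX_FAILURES = 5;          // illustrative threshold
    private final Map<String, Integer> failures = new HashMap<>();

    public void recordFailure(String username) {
        failures.merge(username, 1, Integer::sum);
    }

    public void recordSuccess(String username) {
        failures.remove(username);                      // reset on successful login
    }

    public boolean isLocked(String username) {
        return failures.getOrDefault(username, 0) >= MAX_FAILURES;
    }
}
```

The login flow would consult isLocked() before even attempting credential verification.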

Use generic error messages. You’ll need to ensure you don’t leak information with error messages within the authentication subsystem. The classic example is someone logging in with a bad username and password and being told “user does not exist” versus logging in with a good username and bad password and being told “password does not match”. Here, you’ve leaked that a user exists with that username, so you just have to guess their password and you should be in. What should be done is the user should be given a generic error message like “Authentication failure” and that way, you can’t determine what the issue was.

Update (8/15/2012): There is far less benefit for generic error messages in systems where users can easily register for their own usernames, thereby circumventing the knowledge gap of whether or not an account actually exists.
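The generic-message behavior described above boils down to returning the same response for both failure modes. A minimal sketch (the in-memory user map is purely illustrative; never store plaintext passwords in a real system):

```java
import java.util.Map;

// Sketch of an authentication check that returns the same generic message
// whether the username is unknown or the password is wrong.
public class Authenticator {

    public static final String GENERIC_FAILURE = "Authentication failure";

    private final Map<String, String> users; // username -> password (demo only!)

    public Authenticator(Map<String, String> users) {
        this.users = users;
    }

    public String authenticate(String username, String password) {
        String stored = users.get(username);
        if (stored == null || !stored.equals(password)) {
            return GENERIC_FAILURE; // identical message for both failure modes
        }
        return "OK";
    }
}
```

Beyond the message itself, you also want both failure paths to take roughly the same time, or the timing difference leaks the same information.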

Handle session management properly. Session management is a topic in and of itself. For J2EE folks, unless you have a good reason, you should be using the built-in session management capabilities of the platform (jsessionid). This is a solid implementation that you don’t have to build yourself – use it. One note here that is beyond this topic is stateless session management (think REST). Here, there are generally well-known practices (like http basic auth for REST) that are used instead of built-in session management. Just be aware of the systems you’re using, and consider their security implications.

Consider single sign-on. Single sign-on can be a helpful option if you’re in an organization that has a good implementation. Most of them work in a simple way. The user is authenticated by either being redirected to a common authentication function, or by each application accepting credentials and using these to authenticate against a single user repository. Either way, the actual user storage and authentication processes should only have to be built once, which is a good thing for reuse. This just makes it all the more important to get the process right.

Update (8/15/2012): While SSO does have its uses, there are some concerns.

– You’ve now got a much larger repository of users all coming through a single point of failure, not to mention the increased risk of losing all of your accounts at once.
– You are also granting access across all applications (regardless of security requirements and risk tolerance) with a common time-out, as well as other services which are provided by the SSO implementation. There are situations where you certainly might want to choose the features and configuration of your authentication system depending on the risk profile.

All of these issues should be weighed, both good and bad, when considering an SSO implementation.

Now let’s discuss a few of the less frequently implemented ideas.

Re-authenticate high value transactions. If your application has an important transaction that needs to take place with an already authenticated user (e-commerce purchase, wire funds transfer, viewing medical records), you should re-authenticate for the individual transaction. This ensures the user viewing your site at that moment has been properly authenticated. It helps prevent the issue where a user steps away while still in an authenticated session and another user is then able to use the system as the originally authenticated user.

Use multi-factor authentication. This is a costly measure, but can be a great way to increase security. Many online systems are starting to use a form of this by allowing you to have a code sent to your phone that you have to enter during the authentication process. That means an attacker would have to know (or guess) your password and have access to your phone in order to authenticate as you. This dramatically decreases the likelihood of compromise, and is an excellent option.

Outsource your authentication. In the last few years, it’s become more common to let others do your authentication for you. Essentially, you can forego the authentication system altogether, and let your users authenticate to your site by sending you an authentication / access token from a well-known authentication provider (like google/twitter/facebook). All the authentication goes on at those other sites, and you are just told “OK, we know this is Bill and here’s Bill’s access token.” As long as you check Bill’s token and it’s valid, then you’re good to go. This is not an option for many organizations, but if you’re building a site that could use one of these services, it can shrink the size of your application, and the general sense is that “those guys” do a better job at security than you would. In general, that’s probably reasonably fair for the big providers in particular. Most of these providers are using the OAuth protocol right now.

In conclusion, authentication is a critical piece of most any application. It can be difficult to get right, but the steps listed above should put you on your way to getting it right.

Update (8/15/2012): Updates were made to the doc based on helpful review and feedback by Jim Manico.

References
———–
https://www.owasp.org/index.php/Authentication_Cheat_Sheet
https://www.owasp.org/index.php/Top_10_2010-A3-Broken_Authentication_and_Session_Management
https://www.owasp.org/index.php/Guide_to_Authentication
https://www.owasp.org/index.php/Testing_for_authentication
https://www.owasp.org/index.php/Session_Management_Cheat_Sheet
https://www.jtmelton.com/2010/06/16/the-owasp-top-ten-and-esapi-part-8-broken-authentication-and-session-management/
http://oauth.net/


Year Of Security for Java – Week 29 – Manage Resources


What is it and why should I care?
Resource management has been an issue in programming for a very long time, and it’s one of those issues that affects the A (Availability) of the classical C-I-A triad in information security. It’s effectively where you gain access to some (generally expensive) resource (think database connection, file handle, etc.), and then don’t properly release it, so the associated resources (disk space, memory, file handles, network connectivity, etc.) are never freed.

While not as high-profile as losing your user account database with all your passwords stored in roughly the equivalent of ROT-13 (the current flavour of the month in the security news), it can be pretty embarrassing to have your site fall over constantly due to resource leaks. Crashing, or at least instability, is the inevitable outcome of poor resource management. It’s also not fun being the guy or gal who gets to restart the server every night at 2AM to make sure it will work without crashing the next day (yes, I have a friend who used to do this).

What should I do about it?

In order to prevent resource leaks, you need to properly release resources when they are no longer needed. In Java, that can look like a few different things.

1. finally block
The traditional way to close resources in Java is to use the finally block that’s provided by the language. This is a fairly simple process once you know the idiom. It essentially looks like the following.

BufferedReader bufferedReader = null;
FileReader fileReader = null;

try {
    String line;
    fileReader = new FileReader("/tmp/myfile.txt");
    bufferedReader = new BufferedReader(fileReader);

    while((line = bufferedReader.readLine()) != null) {
        System.out.println(line);
    }

} catch(Exception e1) {
    e1.printStackTrace();
} finally {
    // close the outermost wrapper first; closing it also closes the
    // underlying FileReader, but we close both defensively
    if(bufferedReader != null) {
        try {
            bufferedReader.close();
        } catch(Exception e2) {
            e2.printStackTrace();
        }
    }

    if(fileReader != null) {
        try {
            fileReader.close();
        } catch(Exception e2) {
            e2.printStackTrace();
        }
    }
}

Here we read a file, and print out each line. Notice that a good chunk of our code is devoted to resource management. However, if you generified this method by parameterizing the filename and returning a string representing the file contents, then you could make the method reusable. This is exactly what many file reader style utilities do.
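Generified, that idiom might look like the sketch below (class and method names are illustrative). The filename becomes a parameter, the contents are returned, and the resource-management grunt work lives in exactly one place:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

// Reusable pre-Java-7 file reader: cleanup is centralized in one finally
// block, so callers never have to write it themselves.
public class FileUtil {

    public static String readFile(String path) throws IOException {
        BufferedReader reader = null;
        try {
            reader = new BufferedReader(new FileReader(path));
            StringBuilder contents = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                contents.append(line).append('\n');
            }
            return contents.toString();
        } finally {
            if (reader != null) {
                reader.close(); // also closes the underlying FileReader
            }
        }
    }
}
```

Letting the IOException propagate (instead of printing the stack trace and swallowing it) is usually the better choice for a reusable utility, since the caller knows best how to handle the failure.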

2. try-with-resources
So why do people not close resources? From what I've seen, it's part ignorance of the issue, part copying bad code snippets, and part laziness. What would be really nice is if the language itself gave you a way around the issue by making cleanup automatic. Well, in Java 7, after many, many requests for the functionality, it has finally been added as the try-with-resources statement.

Try-with-resources is only available in Java 7 and later, so if you're on an older version you're stuck with the finally idiom. If it is available to you, though, it's a nice bit of syntactic sugar that pushes the resource management grunt work onto the language itself. Below is a modified version of our initial code using try-with-resources.

try (FileReader fileReader = new FileReader("/tmp/myfile.txt");
     BufferedReader bufferedReader = new BufferedReader(fileReader)) {

    String line;
    while ((line = bufferedReader.readLine()) != null) {
        System.out.println(line);
    }
} catch (Exception e) {
    e.printStackTrace();
}

As you can see, the try-with-resources version is much shorter, cleaner and accomplishes the same thing. That’s generally a good thing when it comes to code.
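It's worth knowing that try-with-resources works with any class implementing AutoCloseable, not just the JDK's readers and streams, so your own resource-holding classes get the same automatic cleanup. A minimal sketch (the class names here are made up for illustration):

```java
public class AutoCloseDemo {

    // A toy resource that just records whether close() was called.
    static class TrackedResource implements AutoCloseable {
        boolean closed = false;

        @Override
        public void close() {
            closed = true;
        }
    }

    // Returns true if close() ran automatically when the try block exited.
    static boolean demo() {
        TrackedResource tracked = new TrackedResource();
        try (TrackedResource r = tracked) {
            // use the resource here
        }
        return tracked.closed;
    }

    public static void main(String[] args) {
        System.out.println("resource closed automatically: " + demo());
    }
}
```

Running this prints `resource closed automatically: true`, confirming the language called close() for us.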

3. Use and create libraries with proper resource management
There are good examples of open source libraries that do this effectively. For instance, Hibernate and Spring both do proper resource management for database access. Just by using their libraries in the recommended configuration, you are accessing and releasing resources properly.

In addition to using open source libraries, you'll almost always find some utility classes or libraries in applications related to resource management (think FileUtils, StreamUtils, DbUtils, etc.). If you build your resource utilization code into reusable libraries that perform proper management, then all your applications benefit.

In conclusion, resource management is one of those issues that doesn't get a lot of attention but can be critical to the stability of your application. We also showed that it's pretty easy to manage resources properly in Java once you know what to do, particularly when you move that functionality into reusable libraries.

References
———–
http://www.mkyong.com/java/try-with-resources-example-in-jdk-7/
http://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html


Year Of Security for Java – Week 28 – Unit Test


What is it and why should I care?
Unit testing is the term generally associated with the process of writing code specifically purposed for testing your application functionality. You write test code to run your functional application code and verify the results.

Note: Unit testing is actually a specific subset of this idea focused on the individual unit (generally recognized as a single class and/or method). Other testing techniques (integration, end-to-end, functional) are also extremely valuable, but unit testing is the most well-known and most of the concepts are transferable.

So, what does unit testing provide to me and why should I want to use it? Let me try to address that in two parts.

1. Development
Since unit testing is fundamentally an activity that goes on during the development of software, let me first consider what developers use it for. Considering the proper use of unit testing (I’ll address improper use below), it provides you primarily with confidence and quality (which are arguably the same thing in my constrained definition here). I’ve had discussions with people who say it offers a lot more, and certainly you can argue that point, but for me all of the myriad reasons come back to confidence and quality. You’ve exercised the code enough (and you continually do so) that you feel confident that your code works the way you expected it to work. There’s a lot that goes into this ideal world where you have good tests, but it can be practically done, and it makes an immense difference in the quality of code that you produce.

2. Security
So, unit testing inside the development process produces confidence and quality. What do we get from it when applied to security? … The same thing. Security is no different from other code from the perspective of correctness, though it is arguably less well understood in many organizations. At the end of the day, though, we should treat security requirements with the same rigor we would treat performance, functionality, or any other requirement. We should specify what we expect, build to that, test to that, and produce that. Unit testing is a fantastic tool that can be applied to produce confidence and quality in the implementation of security in an application.

What should I do about it?
“You should stop everything you’re doing on the app you’re currently working on and go back and make sure you have at least 80% code coverage with your tests.” I put that in quotes because, sadly, I had a previous job (long ago) where I was informed that this was exactly what I needed to do. I thought they were joking at first, but I quickly found out that this was their real requirement. No matter that the number was seemingly arbitrary and that the tests they gave me as “good examples of what we’re looking for” didn’t actually examine the output of the test, but rather just ran the code (READ: not a real test). No, testing was all about hitting a number, and saying you did it.

If that’s the way you look at testing, you get no value from it, and it actually costs you quite a bit in development, configuration and execution.

Ok, so if not that, then what?

Getting Started
Here are a couple ideas I’d recommend when first starting out on the path to good unit testing.

1. Read a good book or article on unit testing and the concepts and ideas. Kent Beck in particular has produced excellent resources on the topic.

2. Read somebody else’s (good) test code. I always recommend this when folks are getting started. I usually recommend something like the Spring framework, which has some of the highest quality code around, to use as a starting point to get good ideas for how to test. Look at the class they are testing and the tests they wrote for it. This will get you a good idea of what’s going on.

Once you’ve gotten through the initial learning phase regarding real unit testing, here are a few more ideas that I’ve seen work pretty well. Certainly add to or remove from this list to make it work in your environment, but at least consider the concepts. (Also, here is someone else’s list if you’d like some additional ideas)

Good Ideas

1. Build a regression suite
A huge advantage of writing a full set of code to test your functional code is that the test code sticks around (which can also carry a maintenance cost at times, but let's focus on the positive here). That means you have an incredible regression test suite. This is one of the most powerful things about unit testing. You've written some function, and a few tests to cover it. You modify the function to add some new functionality, and now one of your tests breaks. This happens constantly, and it's a strong safety net. It's one of those things I had to learn the hard way that I needed.

2. Write good tests
Writing a test that exercises a method is trivial – call the method with necessary parameters … done. This is the cheap (and useless) way to do testing. What you should do (and you’ll learn this in the book) is write positive and negative tests for the method. Think about nulls. Think about boundary conditions for your parameters. What happens if I send a negative number in or what if I send Integer.MAX_VALUE? Inspect the return values from functions and make sure they align with your expectations.

This type of "how can I break this" thinking usually comes naturally to good security folks, so it's actually a good fit for them. However, good developers certainly pick up this mindset as they test more, and it's extremely beneficial for both functional and security tests. It's also a cool and powerful way to get developers who aren't security minded going with security.
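To make the idea concrete, here's a small sketch of positive, boundary, and negative tests for a hypothetical method (written framework-free for brevity; in practice these checks would live in JUnit tests):

```java
public class BoundaryTests {

    // Hypothetical method under test: clamps a value into [min, max].
    static int clamp(int value, int min, int max) {
        if (min > max) {
            throw new IllegalArgumentException("min > max");
        }
        return Math.max(min, Math.min(max, value));
    }

    public static void main(String[] args) {
        // positive case
        check(clamp(5, 0, 10) == 5, "in range");
        // boundary conditions
        check(clamp(0, 0, 10) == 0, "lower bound");
        check(clamp(10, 0, 10) == 10, "upper bound");
        check(clamp(Integer.MAX_VALUE, 0, 10) == 10, "MAX_VALUE clamps down");
        check(clamp(-1, 0, 10) == 0, "negative clamps up");
        // negative case: bad arguments should be rejected, not silently accepted
        boolean threw = false;
        try {
            clamp(5, 10, 0);
        } catch (IllegalArgumentException e) {
            threw = true;
        }
        check(threw, "min > max rejected");
        System.out.println("all checks passed");
    }

    static void check(boolean condition, String name) {
        if (!condition) {
            throw new AssertionError(name);
        }
    }
}
```

The point isn't this particular method; it's that every test names an expectation and inspects the result, rather than merely executing the code.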

3. Start small
This piece of advice is given for almost everything, but that's because it works. Certainly begin practising on some insignificant code, or even just make a sample app to get going. However, when getting good tests into your code-base, I generally recommend picking either a) your most important classes or b) your most buggy classes. You'll likely be surprised at the number of bugs you find when you start the process, but just remember those are bugs your customers won't find!

4. Capture metrics (and use them)
There are lots of tools that help you capture metrics over source code or bug repositories. It’s really a fun exercise to see the number of reported bugs go down as your number of unit tests goes up. You can also use data like the number of bugs in code in a certain part of the application, or written by a certain developer to identify where to focus your testing efforts.

5. Use tools to help you out
You could certainly write your own unit testing framework, but why bother when others have already done it? In the Java world, the reigning king is JUnit. Martin Fowler is quoted as saying, "never in the field of software development have so many owed so much to so few lines of code" in reference to JUnit. The idea behind JUnit is simple, but it's notable because the execution is great, and there are lots of additional tools, like IDE support and build tool support, that make using it simple.

6. Do code coverage
A bit earlier, I knocked code coverage because I feel it's a pretty weak metric in and of itself. However, it is a valuable tool in combination with a quality process. If you know your tests are good, and you know that you cover 85% of your significant code, then you're doing pretty well. Again, assuming that your tests are good, this can be another metric in the process that points to improvement.

As far as tools go, there are several, but I generally recommend EclEmma for this purpose.

7. Code review your tests with the code they’re testing
Yes, you should code review your tests. This is part of the “writing good tests” process. You need to ensure that people are testing their code, and that they are doing it correctly. Just fold this into your code review process, and you’re golden.

8. Test different requirements
Don’t just test functionality. Test for performance. Test for security. Test for other types of requirements. All these things can be added, and they’ll improve your confidence and quality.

9. Add a test when you get a bug
When you find a bug, do this:
– add a test that covers the bug
– make sure all the existing tests pass, but the new test fails
– fix the code
– make sure all tests pass

This process allows you to make sure you don’t regress over time and that the same bug doesn’t come back to haunt you.
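The steps above can be sketched with a made-up bug: suppose a hypothetical isBlank() originally treated only spaces as blank and missed tabs. The regression test below would have failed against the buggy version; after the fix, the whole suite passes and the bug can't silently return.

```java
public class RegressionDemo {

    // Fixed implementation: all whitespace now counts as blank.
    static boolean isBlank(String s) {
        return s == null || s.trim().isEmpty();
    }

    public static void main(String[] args) {
        // existing tests (these always passed)
        assertTrue(isBlank(""));
        assertTrue(isBlank("   "));
        assertTrue(!isBlank("abc"));
        // new regression test: failed before the fix, passes now
        assertTrue(isBlank("\t\t"));
        System.out.println("all tests pass");
    }

    static void assertTrue(boolean condition) {
        if (!condition) {
            throw new AssertionError();
        }
    }
}
```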

10. Write your tests first … at least on paper
A lot of people swear by Test Driven Development, the idea that you write all your tests for a method/class _before_ you write the class. You ensure all of them fail. You then write the class, and by the time all the tests pass (assuming you've written tests to cover every scenario and that the tests are accurate and "good"), you're done with coding.

I’m personally not a stickler for this process, though there are advantages. If you’re doing a brand-new project, this can be great, but a lot of the work developers do is on legacy code (even if only a few weeks/months old), and a lot of that doesn’t have tests, so we have to have a process that allows for that.

What I will say is that you should write your tests separately from the code. Tests come from the requirements, not the code. This is an important point, and it's where most new test writers fail: they write tests that match the code that's there, rather than tests that check the code against its requirements. Of course tests written to fit the code you see will pass. Divorcing yourself from the implementation when writing tests is what's important, and it's something you shouldn't sacrifice.

11. Don’t let testing replace modelling altogether
In this superb talk, Glenn Vanderburg points out that software engineering relies quite heavily on modelling, when we can achieve many of the same desired results through testing. He talks about aerospace engineers using modelling, followed by a prototype, but says that of course they would use the prototype every time if it were as cheap as the modelling, since it’s actually testing the “real thing”. I think this point does ring true to an extent. I don’t personally like using modelling extensively because in practice it’s a) overkill and b) outdated as quickly as you can create it. However, I do think higher-level modelling adds significant value and that testing is never going to be able to effectively replace it because the value is in the mental exercise of considering the system at a higher level, which testing often doesn’t do properly, or at least effectively.

12. Continuous testing
Continuous integration platforms are common nowadays, but there are still lots of people not using them (but you SHOULD!). However, they really are extremely helpful for testing. They all include the idea that the unit tests should be run on every build. This is powerful because it forces you to find the broken tests quickly and fix them while what you changed is still fresh on your mind. If you don’t have a CI environment, your build system and configuration should at least allow you to do this every time you build, and then you should have your process include that step.

13. Get unit testing into your SDLC.
It sounds sad to say, but you have to force testing as a requirement or there will be those that won’t do it. Testing is a definite and important step in the SDLC and should be represented as such. I will note here that I fully believe your good developers will really enjoy the effect of the unit tests once they use them (even if they still don’t like writing them). Every good developer I’ve worked with has thought good tests were worth their weight in gold.

14. Add in tests for integration, functional, end-to-end, etc.
Unit testing is really a subset of the larger testing scope. There are tests at the component, business function, application, etc. levels and all of these should be tested. All of the same rules above apply to these as well. It’s really great to be able to hit a button and know that your application is being run through the gauntlet of unit, integration, functional, and end-to-end tests. It’s a pretty powerful concept.

One notable (and cool) example of this specifically as it relates to security is the use of security regression testing using the OWASP ZAP tool.

Conclusion
Unit testing is one of those ideas that’s not really specifically for security, but actually does quite a bit for security if applied correctly. It’s also a great way to get developers and security people working together for a common goal.

It’s also near and dear to my heart as I think it’s been the single most important idea that’s helped me improve as a developer over the years. I hope it’s as useful for you as it has been for me.

References
———–
http://www.confreaks.com/videos/282-lsrc2010-real-software-engineering
http://swreflections.blogspot.com/2012/01/static-analysis-isnt-development.html
http://www.readwriteweb.com/archives/12_unit_testing_tips_for_software_engineers.php


Year Of Security for Java – Week 27 – Penetration Testing


What is it and why should I care?
Penetration testing is a process of evaluating the security of a computer system or network by simulating an attack. The process involves an active analysis of the system for any potential vulnerabilities, is carried out from the position of a potential attacker and can involve active exploitation of security vulnerabilities. This is sometimes referred to as pen-testing or ethical hacking.

This means that you actually are probing the live system (usually in Dev, QA, or UAT) and trying to find (and sometimes exploit) actual vulnerabilities. There is tremendous value and power in being able to show not only that a vulnerability exists, but that it is directly exploitable. In my experience, it also opens up an honest dialog between development and security if you can show that something actually is exploitable, and approach it with the goal of getting it resolved together. There’s no more conversation about false positives at that point :>.

What should I do about it?

Just as you should do code reviews in addition to using static analysis, you should perform penetration testing in addition to dynamic analysis. Static and dynamic analysis give you the ability to point software at your applications and get back results (often good). However, there is a limit to the amount of analysis that current products (or even theory, for that matter) can provide.

So, that leaves us with supplementing our tools with humans (the common refrain in most security efforts). By adding code review to supplement static analysis, we're able to find specific instances of vulnerabilities, and even whole classes of vulnerabilities we wouldn't have found before. The same is true with penetration testing. By supplementing dynamic analysis, we find issues that the base tools wouldn't have found. The tools generally continue to improve, but it's debatable (and actually heavily debated) whether they are even keeping up with the pace of advance in software. Whichever side of the fence you sit on, the current situation is that we need to add humans to the mix to get better coverage.

There are a couple of very helpful resources that I would be remiss not to point out with respect to the penetration testing process: the Open Source Security Testing Methodology Manual and the OWASP Testing Guide. Both of these resources are full of good (and thorough) information about both process implementation and integration within organizations.

In considering what to recommend with respect to the process of pen-testing, I came up with a similar process to my code-review list (so either the other list was good enough to work for both, or they’re both equally bad).

Define the Plan
Processes should have a set of goals. Define these within your organization based on your needs.

Do It
Again, simple, but not easy. This is a human-driven process, so it's very common for people to either a) not do the tests, or b) "rubber stamp" the tests. Neither is helpful, and both are common. This process must be set up and executed on a regular basis to provide the desired value.

Don’t Over-do It
Again, the law of diminishing returns applies. Showing that something is an issue in a couple of places and thoughtfully working with the developers to come up with a resolution plan may very well be good enough. The developers can probably track down other instances of that pattern fairly easily.

Automate What You Can
You have to understand the capabilities and limits of your static and dynamic analysis tools. If there’s a coverage gap, you’ll need manual reviews or tests to cover it. Balancing this manual/automated approach is the key to security with scalability. Different organizations are going to slide along that scale a bit, but it’s important to know that the decision must be made, and to be conscious of the choices you’re making and why you’re making them.

Just like a code reviewer is going to use tools to assist in the review process (like a good IDE), a pen-tester uses special-purpose tools to have more control over the testing process. While I’m not a pen-tester by trade, I know some folks that are, and the tools I hear referenced most often as being good are OWASP ZAP and Burp. Again, I can’t personally vouch for either, but I really like the concept explained here regarding using ZAP for security regression tests, as this aligns nicely with my future article on testing.

Iterate and improve
This is, again, the idea of iterative and continual improvement. Come up with an initial process, try it out, then keep the things that work, and remove the things that don’t. I personally find that it’s helpful to do an evaluation/review of the process every 6 months to determine if everything’s still useful, and if anything needs to be added.

In conclusion, penetration testing is a necessary process if you really want to improve the security of your applications. It allows you to supplement the analysis from the tools and perform in-depth human-driven testing. This process has a high likelihood of success given you perform it regularly.

References
———–
https://www.owasp.org/index.php/Testing:_Introduction_and_objectives
http://en.wikipedia.org/wiki/Penetration_test
https://www.owasp.org/index.php/OWASP_Testing_Project
https://www.owasp.org/index.php/Category:Penetration_Testing_Tools
http://www.osstmm.org/


Year Of Security for Java – Week 26 – Do Code Reviews


What is it and why should I care?
Code reviews are an important process whereby developers have their code systematically examined by another set of eyes in order to find defects. It's a simple concept (double-check my work), but surprisingly effective. Studies show that you can detect 20-75% of defects with code review (the range varies widely depending upon the level of rigor applied). That is incredibly powerful, especially once you combine it with other quality processes.

Note: Since this series is specifically about security, I’m going to consider security focused code reviews, but the term is used commonly to refer to the generic defect finding reviews performed by developers.

The only thing that differentiates security code reviews from standard code reviews is that we’re looking for a specific set of issues. The same would be true of a “performance” code review where you only look for performance issues. This means that while a general code review might turn up plenty of poor programming practices that have no bearing on security, a security code review would usually ignore those. It’s just a matter of focus, not of different methods.

What should I do about it?

If we believe the studies (and I do based on my own experience) that tout the effectiveness of code review, what do we do now? We follow their advice and do code reviews! Specifically, here are the steps I think have a place in the security code review eco-system.

Define the Plan
As with any process, you need to set goals. These are obviously specific to an organization, but a few examples might be:
– Reducing vulns found in QA/Prod (basically find issues sooner)
– Ensure compliance with corporate standards or legal requirements
– Train developers on security (can be done with the process of review by close interaction with development team)
– Enhance overall security posture

Do It
Quite simple, but not always easy. As with all other code review types, it's very common for people to either a) not do reviews, or b) do "rubber stamp" reviews. Neither is helpful, and both are common. Code reviews must be set up and executed on a regular basis to provide the desired value.

Don’t Over-Do It
There’s certainly a point of diminishing returns here. Reviewing low-value code is often not worth the effort, particularly from a security perspective (hint: reviewing java beans is usually a waste of time). It’s often fairly obvious (assuming you understand the app) what parts of the application are security-sensitive, so focus on those areas. In addition, you can also add in processes to detect changes to the security-relevant portions of the code and trigger automatic code reviews.

Automate What You Can
There are tools like static and dynamic analysis that can help you find some classes of issues. Use them. Automation should be preferred whenever it’s accurate and available. However, automation does NOT find everything. You also need to have manual reviews, especially of certain classes of vulnerabilities. This means you have to know the capabilities of your tools. If there’s a coverage gap, you’ll need manual review to cover it. Balancing this manual/automated approach is the key to security with scalability. Different organizations are going to slide along that scale a bit, but it’s important to know that the decision must be made, and to be conscious of the choices you’re making and why you’re making them.

In addition to using tools to help you find issues, there are tools that help you do manual reviews. You don’t want to just read all the code. You’ll want to look at diffs, view versioning comments, make notes for the developer, etc. and there are tools that help you do that, so use them.

Iterate and improve
Any good process has a feedback loop. Implement your first version, then evaluate what worked and what didn’t. Then remove things that are broken, and try new things. The point is that you want a holistic process to catch security issues. I usually view it similar to unit testing. I try to think of every valid test I can, and add them to my test suite. This makes my code better than if I had no tests. Invariably though, I find a bug that covers something I didn’t consider, so I add a test to cover that scenario, and now I’ve improved my code and test suite. That’s the way it should be with the code review process. Honestly evaluate the value of the steps in the process, then a) keep those that work, b) throw away those that don’t, and c) add to the process when you need to cover new issues.

I personally find that it’s helpful to do an evaluation/review of the process every 6 months to determine if everything’s still useful, and if anything needs to be added.

In conclusion, code review is a necessary process if you really want to improve your code. Security focused code review looks specifically at security related issues in the code, and gives you a double-check from another set of eyes. This process has a high likelihood of success given you perform it regularly.

References
———–
http://swreflections.blogspot.com/2011/05/not-doing-code-reviews-whats-your.html
http://www.aleax.it/osc08_crev.pdf
http://software-security.sans.org/blog/2012/06/26/different-ways-of-looking-at-security-bugs/
http://en.wikipedia.org/wiki/Fagan_inspection
http://www.slideshare.net/zanelackey/effective-approaches-to-web-application-security
http://kev.inburke.com/kevin/the-best-ways-to-find-bugs-in-your-code/
http://www.cc2e.com
http://www.codinghorror.com/blog/2006/01/code-reviews-just-do-it.html
http://en.wikipedia.org/wiki/Code_review
