Year Of Security for Java – Week 25 – Use Dynamic Analysis

What is it and why should I care?
Dynamic analysis is the analysis of computer software that is performed by executing programs built from that software system on a real or virtual processor. Essentially, it’s automated execution of an application.

Note: While dynamic analysis has no actual ties to security per se, I'll be referencing its use with respect to security since that's the topic here. However, just note that these techniques are useful for solving general analysis problems, not only security. Also note that dynamic analysis specifically used for security is often referred to as Dynamic Application Security Testing (DAST) in the industry.

So, how do dynamic analysis tools do what they do? In the world of web application security (admittedly a constrained subset, but the topic of focus here), it means building something akin to a special-purpose web browser that attempts to attack the running application by probing for vulnerabilities, then detecting, based on some output heuristic, whether or not the attack was successful. For example, with XSS this is logically as simple as:

Step 1: Go to page with form.
Step 2: Fill in field of form with javascript alert.
Step 3: Submit form.
Step 4: If response has a javascript alert, we have an XSS vuln. 
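
Logically, the detection heuristic in step 4 boils down to checking whether the probe payload comes back in the response unencoded. Here's a minimal sketch in Java (the payload value and response strings are made up for illustration; a real scanner also handles attribute/JavaScript contexts, alternate encodings, DOM rendering, and much more):

```java
// Minimal sketch of the XSS detection heuristic a scanner applies to a response.
public class XssProbe {

    // The probe value a scanner might submit in a form field.
    static final String PAYLOAD = "<script>alert(1337)</script>";

    // Heuristic: if the payload appears verbatim (unencoded) in the response
    // body, the input was reflected without output encoding -> likely XSS.
    static boolean isReflected(String responseBody, String payload) {
        return responseBody.contains(payload);
    }

    public static void main(String[] args) {
        String vulnerable = "<html><body>You searched for: " + PAYLOAD + "</body></html>";
        String encoded =
            "<html><body>You searched for: &lt;script&gt;alert(1337)&lt;/script&gt;</body></html>";
        System.out.println(isReflected(vulnerable, PAYLOAD)); // true  -> vulnerability
        System.out.println(isReflected(encoded, PAYLOAD));    // false -> output was encoded
    }
}
```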

This is certainly a simple example, and real scanners are quite complex in what they can do. However, logically this is the basic concept.

Dynamic analysis, as opposed to static analysis, has the added benefit of proof of exploitability. Many times the results of static analysis are either wrong or questioned because “well, the live system has security control X that prevents that”. In dynamic analysis, you’re generally testing the live environment, or at least the testing environment which is meant to look like the live environment. When you show someone a vulnerability found by actually exercising the deployed site, it’s hard for them to argue that it’s not exploitable.

What should I do about it?

You should use dynamic analysis as part of your development process. These types of tools are often executed in the QA and/or user/business testing environment. You should also get them going in whatever other environments you can, such as the continuous integration environment (have a task to build/deploy the site, then scan it), the integration test environment, etc. The earlier these tools are executed, the cheaper it is to resolve the issues they find.

I won’t venture into the debate about which product is better than another (especially given I currently work for a vendor), but I will say that all of them have trade-offs (like any tool), and that you should consider the tools carefully before including them in your environment. If you want to get started (for free!), then I’d suggest taking a look at skipfish from some folks at Google. I’ve been told it’s pretty good, and the codebase is relatively small, so you could learn about how it works pretty easily (it’s a C project, by the way).

Finally, while dynamic analysis doesn’t solve the security problem, I hope I’ve shown it is a good tool to have in your tool-belt when it comes to securing your applications.

[Full Disclosure] I currently am employed by a company that provides a service related to dynamic analysis. However, I can certainly say I recommended the use of dynamic analysis before joining and will continue to in the future irrespective of my employer.

References
———–
https://www.owasp.org/index.php/Dynamic_Analysis
http://en.wikipedia.org/wiki/Dynamic_program_analysis

Year Of Security for Java – Week 24 – Use Static Analysis

What is it and why should I care?
Static analysis is the analysis of software that is performed without actually executing programs built from that software. Essentially, it’s automated inspection of source code. There are varying levels of complexity achieved by the different static analysis tools available. I will roughly group them into a couple of buckets: grep+ and data/control flow analysis.

Grep+
Grep is a great tool and you can do a lot with it, but it’s not really meant for serious static analysis. The earliest tools started here, but it turned out not to be the best approach. You can certainly do simple things like flagging calls to known-bad functions (strcpy, for example). You can add in regular expressions and get a little better, but it usually gets unwieldy pretty quickly. There is some marginal value to be found here, but not a lot. While there are still some tools available today that work this way, most of the useful ones have moved beyond these techniques.
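
To make the grep+ approach concrete, here's a toy scanner of that style (the rule patterns are my own illustrative picks, not from any particular tool). Note its inherent weakness: it has no notion of context, so a comment mentioning a dangerous call gets flagged just as readily as a real call.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

// Toy "grep+" scanner: flags source lines matching known-dangerous call patterns.
public class GrepPlusScanner {

    // Illustrative rules: command execution and possible SQL string concatenation.
    static final Pattern[] RULES = {
        Pattern.compile("Runtime\\.getRuntime\\(\\)\\.exec"),
        Pattern.compile("createStatement|executeQuery\\(\\s*\"")
    };

    // Returns the 1-based line numbers of all flagged lines.
    static List<Integer> scan(List<String> sourceLines) {
        List<Integer> findings = new ArrayList<>();
        for (int i = 0; i < sourceLines.size(); i++) {
            for (Pattern p : RULES) {
                if (p.matcher(sourceLines.get(i)).find()) {
                    findings.add(i + 1);
                    break; // one finding per line is enough here
                }
            }
        }
        return findings;
    }
}
```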

Data/Control Flow Analysis
Data Flow Analysis and Control Flow Analysis are (generally) the current standard techniques for the more advanced static analysis tools. These concepts are not new, but rather originated in compiler theory, specifically related to optimization techniques (Dragon book, anyone?). While the concept is not new, it took quite a while for it to be used heavily outside of compilers.

In general, these tools build up a data structure model (referred to as the AST – Abstract Syntax Tree) of an application, and then traverse the AST in various ways using different types of analysis to look for issues. There are lots of different types of analysis that can occur at this stage, but these tools generally use this as the model on which to base their analysis.
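
To give a feel for what flow analysis over such a model looks like, here's a toy example, entirely my own construction and far simpler than any real tool: a miniature expression tree in which taint from an untrusted source propagates through concatenation until it reaches a sink, unless a sanitizer intervenes.

```java
// Toy taint propagation over a miniature AST: a value is tainted if any part
// of it originates from an untrusted source and no sanitizer was applied.
public class TinyTaintAnalysis {

    static abstract class Node { abstract boolean tainted(); }

    static class Literal extends Node {
        boolean tainted() { return false; }   // constants are trusted
    }
    static class UserInput extends Node {
        boolean tainted() { return true; }    // request data is tainted
    }
    static class Concat extends Node {
        final Node left, right;
        Concat(Node l, Node r) { left = l; right = r; }
        boolean tainted() { return left.tainted() || right.tainted(); }
    }
    static class Sanitize extends Node {
        final Node inner;
        Sanitize(Node i) { inner = i; }
        boolean tainted() { return false; }   // an encoder/validator clears taint
    }

    // The "sink" check: would passing this expression to, say, executeQuery be flagged?
    static boolean flaggedAtSink(Node expr) { return expr.tainted(); }
}
```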

We as developers actually use these concepts constantly in the form of our IDEs. These tools load up an AST and constantly run checks, giving you warnings and errors that are based on static analysis. Some IDEs even include AST-driven security-related checks out of the box.

Note: While static analysis has no actual ties to security per se, I’ll be referencing its use with respect to security since that’s the topic here. However, just note that these techniques are useful for solving general analysis problems, not only security. Also note that static analysis specifically used for security is often referred to as Static Application Security Testing (SAST) in the industry.

What should I do about it?

Beyond what you get in your IDE, you should use security-focused static analysis and embed it in your development process. I usually see tools deployed either in a single execution mode (collect your code and push “run scan now”) or in a continuous integration mode (a scan automatically kicks off every time code is committed). Your environment may dictate that to an extent, but if your tool is good (and/or you’ve tuned it properly), then using it in continuous integration mode is a big win since you’ll find the issues earlier in the cycle and can often address them much more easily.

I won’t venture into the debate about which product is better than another (especially given I currently work for a vendor), but I will say that all of them have trade-offs (like any tool), and that you should consider the tools carefully before including them in your environment. If you’re a Java shop and want to get started (for free!), then I’d suggest taking a look at the excellent FindBugs from the folks at UMD. It’s a great tool to use, and a good way to learn how static analysis works if you’re interested.

A couple of caveats related to static analysis, specifically those tools that use it for security:

1. Static analysis tools do get the (sometimes well-deserved) bad rap that they produce too many false-positives. My experience with these tools is that you usually need to tune them (either yourself, or pay for some help) to get good results for your environment. This is an additional investment to consider, but can drastically improve your experience with the product. I mention it because I think it’s helpful to know going into the process.

2. While SAST tools are good at finding issues, they don’t find them all, and they’re not a replacement for testing your code.

Finally, while static analysis doesn’t solve the security problem, I hope I’ve shown it is a good tool to have in your tool-belt when it comes to securing your applications.

[Full Disclosure] I currently am employed by a company that just released a static analysis product, which I work on. However, I can certainly say I recommended the use of static analysis tools before joining and will continue to suggest their use in the future irrespective of my employer.

References
———–
http://en.wikipedia.org/wiki/Static_program_analysis
http://en.wikipedia.org/wiki/Data_flow_analysis
http://en.wikipedia.org/wiki/Control_flow_analysis
https://blog.whitehatsec.com/mythbusting-static-analysis-software-testing-100-code-coverage/
http://swreflections.blogspot.com/2012/01/static-analysis-isnt-development.html

Year Of Security for Java – Week 23 – HTTP Header Injection

What is it and why should I care?
HTTP Header Injection is a specific injection attack that affects HTTP headers. It involves manipulating header data to cause various problems (response splitting, CRLF injection, cache poisoning, XSS, etc.). In general, it’s a lesser-known and less-understood attack, which is usually a recipe for minimal protection in applications, and that is certainly the case with header injection.

Likely the most common is CRLF injection (Carriage Return [%0d or \r] Line Feed [%0a or \n]), which involves adding a CRLF to the header data, causing proxy servers/caches/browsers to mis-interpret the response in an insecure manner and giving the attacker control of part or all of the “split” response. This can lead to the other issues mentioned above.

What should I do about it?
The recommendation for header injection is the same as that for all injections – validate and encode. For header injection specifically, you should ensure that you:

1. Canonicalize – make sure the data is in its simplest form before validation.

2. Validate – perform whitelist (only allow these few good characters) validation as opposed to blacklist (only reject these few bad characters) validation. Validation should ensure you don’t allow CR/LF characters in any encoded form.

3. Encode – encode the resulting output if necessary. If your input validation is tight enough, this step is just a layered defense, but you should encode in case anything ever slips by validation for some reason. Again, take special care to encode CR/LF here.
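
A minimal sketch of step 2 for header values might look like the following (the whitelist pattern here is illustrative; tighten it to whatever your application actually needs, and remember that canonicalization/URL-decoding must happen before this check so encoded forms like %0d can't slip past):

```java
import java.util.regex.Pattern;

// Whitelist validation for values destined for HTTP headers: reject CR/LF
// outright so a response-splitting payload can never reach the header.
public class HeaderValidator {

    // Illustrative whitelist: letters, digits, and a few safe punctuation chars.
    private static final Pattern SAFE = Pattern.compile("[A-Za-z0-9 ._\\-]{1,200}");

    static boolean isValidHeaderValue(String value) {
        if (value == null) return false;
        // Explicitly reject raw CR/LF before the whitelist check.
        if (value.indexOf('\r') >= 0 || value.indexOf('\n') >= 0) return false;
        return SAFE.matcher(value).matches();
    }
}
```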

References
———–
http://www.cgisecurity.com/lib/HTTP-Request-Smuggling.pdf
http://www.securityfocus.com/archive/1/411585
http://packetstormsecurity.org/papers/general/whitepaper_httpresponse.pdf
http://lists.grok.org.uk/pipermail/full-disclosure/2006-February/042358.html
http://cwe.mitre.org/data/definitions/113.html
https://www.owasp.org/index.php/Interpreter_Injection#HTTP_Response_Splitting
https://www.owasp.org/index.php/HTTP_Response_Splitting
https://www.owasp.org/index.php/CRLF_Injection

Year Of Security for Java – Week 22 – HTTP Parameter Pollution

What is it and why should I care?
HTTP Parameter Pollution (HPP) is a technique that allows you to “override or add HTTP GET/POST parameters by injecting query string delimiters”. This term was created and popularized by a 2009 paper that showed you could tinker with request parameters, specifically by sending the same parameter multiple times, and cause some applications to behave unexpectedly. What happens if someone alters a request that looks like

/mypage.jsp?query=abc&acct=4

to look like

/mypage.jsp?query=abc&acct=5&acct=4

What value do you get when you call request.getParameter(“acct”)? What happens here depends on your server. Fortunately, there’s a handy slide on page 9 of this deck that shows a number of web and application servers and how they deal with this HPP issue. In general for Java, you get the first instance of a parameter – meaning you would get the value “5”.

What can go wrong here? Well, there are several ideas pointed out in the paper that are excellent, including overriding parameters, modifying behavior, accessing variables, and bypassing input validation.

One thought I had, similar to the HTTP path parameter issue I wrote about previously, is that you could be doing some type of custom URL parsing for authorization checks and then getting a different value in the actual request. In the example above, your URL-parsing authorization code might take the last value for “acct” (the value “4”), determine the user is authorized to view that account, while your actual database retrieval code uses request.getParameter(“acct”) and gets the value “5” – an account the user does NOT have access to. Now you have an insufficient authorization problem.
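
The mismatch is easy to demonstrate with a plain query string, parsed once taking the first occurrence (what request.getParameter gives you on most Java servers) and once taking the last (what a naive hand-rolled parser might do). This is a self-contained sketch, with no servlet API involved:

```java
// Demonstrates the HPP first-vs-last discrepancy on a duplicated parameter.
public class HppDemo {

    // Mimics typical Java servlet behavior: first occurrence wins.
    static String firstValue(String query, String name) {
        return parse(query, name, true);
    }

    // Mimics a naive custom parser that overwrites as it goes: last wins.
    static String lastValue(String query, String name) {
        return parse(query, name, false);
    }

    private static String parse(String query, String name, boolean first) {
        String result = null;
        for (String pair : query.split("&")) {
            String[] kv = pair.split("=", 2);
            if (kv.length == 2 && kv[0].equals(name)) {
                if (first && result != null) continue; // keep first occurrence
                result = kv[1];
            }
        }
        return result;
    }
}
```

With the query string from the example, the two parsers disagree: one code path authorizes account “4” while the other fetches account “5”.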

What should I do about it?

There are a handful of things you can do to address HPP.

1. Awareness
You must know your environment. Know the application server you are running on and how it handles HPP.

2. Consistency
Be consistent in the way you access and evaluate parameters. If you need to access a parameter multiple times, make sure you access it in the same way each time. Don’t parse the URL manually in one case, and then use request.getParameter() the next time.

3. Validate Input
Various techniques are available for performing this attack, and encoding is very common in most of them. Validate all your input to ensure it’s in the expected format. This requires canonicalization and validation. The ESAPI framework has a good validator that performs these tasks well.

4. Detect known attacks
You can use AppSensor or AppSensor-like detection to find out when an attacker is trying this type of an attack. You could do something simple like call request.getParameterValues(“acct”) and if you get a number of parameters that is not 1, then you know that either a) you have a bug, or b) someone’s tinkering with the site. You can then keep track of these events and detect when an intrusion has occurred. This should help keep you in front of the attackers. (Note: this specific example won’t work for some types of input like multi-select boxes, which you generally expect to return 0 -> many results.)

HPP is an interesting technique, and unfortunately not well known or understood. It’s not difficult to prevent, but does take some forethought. Hopefully this starting list helps. Comment if you have other ideas.

References
———–
http://blog.mindedsecurity.com/2009/05/http-parameter-pollution-new-web-attack.html
http://www.owasp.org/images/b/ba/AppsecEU09_CarettoniDiPaola_v0.8.pdf
https://www.jtmelton.com/2011/02/02/beware-the-http-path-parameter/
https://www.owasp.org/index.php/Category:OWASP_Enterprise_Security_API
https://www.owasp.org/index.php/OWASP_AppSensor_Project

Year Of Security for Java – Week 21 – Anti-Caching Headers

What is it and why should I care?
Caching is a mechanism by which browsers and proxy servers store local copies of remote objects in order to improve performance of the system by not having to fetch these items repeatedly. (That’s actually a decent description of caching in general.) Caching is wonderful for performance, assuming it’s tuned properly and you know what you’re doing. For security, however, it can be a very bad thing indeed.

Imagine you have your bank account information or maybe your medical records up on the screen. Later another person (or maybe a piece of malware) is on your machine and is able to access the data you were viewing on that screen without you even being logged into the site anymore. Not good!

Luckily, we can use the caching directives available to enable or prevent caching, or some of both, depending on what we want.

What should I do about it?

From a security perspective, you should disable caching altogether on sensitive resources. It’s up to you to figure out what those are, but that’s a simple enough problem to solve generally.

As for actually setting these directives, that’s commonly done using the HTTP headers Cache-Control, Expires, and Pragma. The example below shows how to set caching headers in Java to prevent caching altogether.

// for HTTP 1.1
response.setHeader("Cache-Control", "no-cache, no-store, must-revalidate"); 
// for HTTP 1.0
response.setHeader("Pragma", "no-cache"); 
// setting to 0 means epoch + 0 seconds, or expired in 1970, thus invalid now. 
// the setDateHeader method sets a date in the RFC-required date format
response.setDateHeader("Expires", 0); 

Just a few LOC and you’ve prevented caching. Many folks I’ve worked with find it helpful to put this code into a J2EE filter, or at least a convenient utility method. Whichever way works best for you, cache prevention is a helpful security measure that helps protect users and the data in your application, so use it!

References
———–
http://code.google.com/p/doctype-mirror/wiki/ArticleHttpCaching
http://stackoverflow.com/questions/49547/making-sure-a-web-page-is-not-cached-across-all-browsers
http://www.mnot.net/cache_docs/#CONTROL
http://securesoftware.blogspot.com/2008/02/web-caches-and-security-problems-in-web.html
https://www.golemtechnologies.com/articles/web-cache-security
http://palizine.plynt.com/issues/2008Jul/cache-control-attributes/
http://support.microsoft.com/kb/234067
http://owasp-esapi-java.googlecode.com/svn/trunk/src/main/java/org/owasp/esapi/reference/DefaultHTTPUtilities.java (setNoCacheHeaders method)

Year Of Security for Java – Week 20 – Trust Nothing

What is it and why should I care?
While trust spawns interesting philosophical discussions, here I want to discuss the implications of trust within the applications we build. Trust is a funny thing in that we implicitly give it frequently without considering what we’re trusting. A simple example:

//bad bad do not use
executeDbQuery("select * from my_table where id = " + request.getParameter("my_id"));
//bad bad do not use

Here we’ve said that we trust that the user of the application has not tampered with the my_id request parameter in any way that may cause problems for our application. Obviously this is a poor assumption. We can do better by moving the above query to a prepared statement with parameter binding to prevent SQL injection and we can also validate the my_id parameter for appropriate input, but why do we do that?

It’s because we don’t trust the input to our system. We don’t (and shouldn’t) trust that a user or system is going to use our application in the way we would expect, or even the ways we’ve thought of necessarily (a good reason against blacklisting for security). We must build systems that not only are functional (use) but stand up under attack (abuse) or ignorant usage. Our systems must be robust or as some have called it, rugged. Whatever your term, the idea of trust is either explicitly or implicitly central to the idea. We can’t trust the environment.
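
The safer version of the query above – validation plus parameter binding, as just described – might look like the following sketch (the table name and connection handling are illustrative):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class SafeLookup {

    // Validate first: my_id must be a plain positive integer, nothing else.
    static int validatedId(String raw) {
        int id = Integer.parseInt(raw); // throws on "5 or 1=1" and friends
        if (id <= 0) throw new IllegalArgumentException("id must be positive");
        return id;
    }

    // Parameter binding keeps the value out of the SQL grammar entirely.
    static ResultSet lookup(Connection conn, String rawId) throws SQLException {
        PreparedStatement ps =
            conn.prepareStatement("select * from my_table where id = ?");
        ps.setInt(1, validatedId(rawId));
        return ps.executeQuery();
    }
}
```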

If we can’t trust the environment, what does that mean? Does that mean we deal with XSS and SQLi? Yes, but much more than that, it’s a different way of thinking about the application. It becomes that simple picture of input-processing-output at varying levels of scope. A single request has inputs (request parameters, headers, database input, etc.), processing (authn/z, logic, etc.) and outputs (DB, screen, file, etc.). The application as a whole has inputs, processing and outputs that are essentially the combination of all the individual components of the application, and then you can scale on up to systems and organizations.

The “environment” I’m referring to changes depending on your specific situation, and it’s difficult to say that you simply can’t trust anything, because that’s usually a non-starter. You may have to trust your configuration files or your external SSO system, or any number of other entities. The idea is that you specifically label those things as trusted (an assumption) and treat everything else as being tainted.

These types of issues are considered in threat modelling, which is another planned topic in this series. For now, it’s sufficient to simply note that you should be thinking in terms of what data am I taking in, processing and sending out?

What should I do about it?
Now that we’ve established the environment can’t be trusted, the next logical question is what constitutes the environment?

This could be a long answer depending on your setup, but a decent starting list for web applications in particular might look like the following:

  • web request data (parameters, headers, body, cookies)
  • database data
  • directory data (ldap)
  • filesystem data
  • web service data (any data in headers or body)
  • external system data (any data you receive from another system – software you’re integrating with)
  • network connection data (any data you receive while acting as the “server” – generally socket-based communication)
  • user input (command line input)
  • system environment variables
  • third party software (libraries that you call that provide you data)

This list is incomplete, I’m sure, but the idea is there. Any data you receive from any of these users or systems is generally untrusted, possibly with certain organization/application-specific, well-defined exceptions. When you start to view your applications in this way, you start to build better protections around them. You build better defenses, and better logging/auditing so that you can detect when something actually does break (it will, I promise). Thinking in this way can go a long way toward helping you build safer and more secure systems.

References
———–
http://www.ruggedsoftware.org/
http://www.schneier.com/book-lo.html
http://en.wikipedia.org/wiki/Robustness_principle
https://www.jtmelton.com/2012/05/01/year-of-security-for-java-week-18-perform-application-layer-intrusion-detection/
https://www.jtmelton.com/2012/04/10/year-of-security-for-java-week-15-audit-security-related-events/

Year Of Security for Java – Week 19 – Reduce the Attack Surface

What is it and why should I care?
Reducing the attack surface of an application or system means reducing the ways that you can interact with the application, and may involve reducing the functionality the application provides.

To most business folks, this sounds very, very bad. However, at its core, it’s really just a matter of simplifying the system. This is a really *good* thing for the business. Rarely do you find anything that people genuinely enjoy using that is complex. The best designs are simple, and in this case that benefits us from a security perspective as well.

What does this simplification look like? My favorite example is the difference between Google’s standard and advanced searches. It’s likely that 99.9% of people don’t need the advanced search features. Imagine if Google removed that page – that greatly simplifies the “search” application they build. It reduces the application footprint, saves them money (dev, support, etc.) and gives their customers a better experience – what could be better? (Note: I’m simplifying this case, as Google’s standard search does allow advanced operators, but you get the idea.)

Most developers I’ve worked with (myself included) have the tendency to a) want to build lots of cool stuff, and b) be poor designers. This results in designs that are larger than necessary in that they encompass more code than planned (feature creep). It also results in an often unpleasant user experience. By being ruthless in removing non-required functionality, and simplifying what is required, the user experience is enhanced along with security, not to mention the bottom line – time and money.

What should I do about it?
Saying you should remove features/functionality and simplify is a bit vague, I realize. I’d like to offer a few examples of common situations where you might be able to have some impact on your applications for the better.

1. Dead Code
Every modern IDE has a “dead code” detector. If you don’t use an IDE, tons of open source “code quality” tools have this feature as well. Use it. If you’re not using code, remove it. If you comment out code, but keep it in the code-base, stop. Remove it. Heck, you can get it back through your version control if you ever really need it.

As much as you can you should also remove code that is “dead” because it’s not enabled via configuration. This may not always hold depending on the specific circumstance, but if you don’t have a need for a feature, don’t have it in your code base.

Dead code doesn’t get looked at or dealt with as closely as “live” code, so that makes it even worse from a security perspective, as there are likely to be lingering issues that aren’t dealt with because “no one is using that”.

2. Copied code
Everybody’s done it. You’ve taken code from an old project and used it in a new one. You’ve taken an example from the web and plugged it into your app. It may have done more than you needed, even way more. What have you done? You’ve added extra code that has to be maintained, debugged, supported, tuned, secured, etc. This is a bad idea. It’s fine to use others’ code (assuming they’re OK with it), but don’t add a bunch of stuff you don’t need.

3. Extra features
It’s undoubtedly great to wow your customer. In my opinion, adding unplanned features is usually not the best way to do that. Usually, giving them the absolute best version of what they need is much better for both you and them. It’s the idea of doing a few things well as opposed to lots of things just OK. Adding in extra features is a common thing for developers to do, often because they saw some cool thing somewhere and thought “hey – that’d be cool here”. Again, adding extra features means extra code, and that’s more to do, and takes away from the quality of what you actually need to do.

4. Extra code – 3rd party libraries
3rd party libraries are great. They are core to most any development done today. They enable us to create more functional apps quicker. However, they also put into your application TONS of functionality and features you may not have planned on being there, and that you probably don’t know exist. I would venture a guess that most J2EE apps I see probably include hundreds, if not thousands, of times more code in 3rd party libraries than in the code written for the application. That’s great from the perspective of “I didn’t have to write this”, but could mean danger when it comes to securing your application with those frameworks. From a security perspective, it doesn’t help that most of these libraries are there to have things just work instead of having secure defaults. I’m not saying frameworks are bad; I’m saying you need to know their capabilities well, and have a plan for dealing with them from the security perspective.

5. Extra services enabled
This is particularly common with 3rd party applications, but can be true for custom apps as well. What happens is an application is built in a generic way, and then sold/used by several groups or companies to solve different problems or similar problems for different users, etc. The functionality in the app is the sum total of what all the customers need. You as an individual customer might only need 30% of the overall functionality, but you have 100% enabled. That’s a problem. The better apps give you a simple way to disable features you’re not using, and a simple way to verify it’s actually turned off. Use these features. It’s always better to have to do an update to enable a feature than to have to tell your boss you were hacked using a feature that wasn’t even needed.

The above represents just a handful of ideas on how to reduce attack surface in your application. They all really boil down to simplify, simplify, simplify. It helps your application be better, and thankfully helps your security be better as well. Next time you have a bug-hunting session, try some of these ideas out. Also add comments if you have more/better ideas.

References
———–
https://www.jtmelton.com/2012/03/29/year-of-security-for-java-week-13-know-your-frameworks/

Year Of Security for Java – Week 18 – Perform Application Layer Intrusion Detection

What is it and why should I care?
Application layer intrusion detection is a simple concept that I believe is very, very powerful when it comes to protecting applications. Most of the topics I’ve covered thus far have focused on the development portion of the software life-cycle, but this topic really covers the entire span of an application, from the requirements and planning to sun-setting.

The basic concept is that you plan for, implement and monitor “bad” things that occur in your application. With this type of system in place, you look for events that appear to be undesirable in some way and then keep track of them. Over time, you can make decisions about whether those individual events turn into an actual attack.

Many developers actually do most of the work of detection already. Consider the following pseudo-code:

if (user has access to record) {
    get data 
    redirect to view/edit page
} else {
    log exception
    send user error message
}

I’ve seen code just like this lots of times. The problem here is the handling of the exceptional condition. In general, people don’t review logs, so if there’s an attacker trying to break your application, the only person seeing the error you’ve caught is the _attacker_. With one quick addition of sending a message to your intrusion detection engine, you can start tracking these events and actually gaining knowledge of the real-time (and historical, if you choose to store it) usage of your application. After you’ve detected an actual intrusion, you also have the ability to respond to the activity in any [legal] way you see fit. Popular options include: increased logging, manipulating the user’s account (logout, disable), or even blocking access to certain functionality.

What should I do about it?
Let’s assume I’ve sold you on the idea of implementing something like this (hopefully I have). What now?

Well, you have a few options on how to proceed that I’m aware of: ESAPI, AppSensor or roll-your-own.

ESAPI does have an intrusion detection engine built-in that performs some of these ideas. It is admittedly not extensive, but the core is there and can certainly be extended.

AppSensor is one such extension of the ESAPI intrusion detection engine. The implementation is more extensive than what’s available in ESAPI. Additionally, the project offers a book about the overall idea, as well as significant documentation alongside the code. Lastly, a significant update to both the documentation and the code is currently being worked on.

Rolling your own analysis engine can be a small or very large project depending on your needs. Nevertheless, you can certainly take the ideas and implement them in your applications and get significant benefit.
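
As a sense of scale for the roll-your-own option, the core can start as small as a per-user event counter with a threshold. Everything below (class name, event strings, thresholds) is invented for illustration; a real engine would add event types, time windows, persistence, and response actions:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal roll-your-own detection engine: count security events per user
// and report an intrusion once a threshold is crossed.
public class MiniDetectionEngine {

    private final int threshold;
    private final Map<String, Integer> eventCounts = new HashMap<>();

    MiniDetectionEngine(int threshold) { this.threshold = threshold; }

    // Called wherever the application catches a suspicious condition, e.g.
    // the authorization failure in the pseudo-code above. The eventType is
    // recorded for context; counting here is per user across all types.
    // Returns true once this user has crossed the intrusion threshold.
    boolean recordEvent(String user, String eventType) {
        int count = eventCounts.merge(user, 1, Integer::sum);
        return count >= threshold;
    }
}
```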

By just adding a little bit of effort, you can gain significant insight into the overall security health of your application(s). You can see who attacked/is attacking your application in real-time or the past, and you can actually respond to events as they occur. Who wouldn’t like that?

Author note: I work on the AppSensor project, so this whole topic is near and dear to me. Please take advantage of the idea whether it’s in our implementation or not!

References
———–
https://www.owasp.org/index.php/ApplicationLayerIntrustionDetection
https://www.jtmelton.com/2010/11/10/application-intrusion-detection-with-owasp-appsensor/
http://www.owasp.org/index.php/OWASP_AppSensor_Project
http://www.owasp.org/
http://www.youtube.com/watch?v=6gxg_t2ybcE
http://www.clerkendweller.com/2010/11/12/Application-Intrusion-Detection-and-Response-Planning-Methodology
https://www.owasp.org/index.php/Category:OWASP_Enterprise_Security_API


Year Of Security for Java – Week 17 – Set a Hard Session Timeout


What is it and why should I care?

A session timeout is an important security control for any application. It specifies the length of time that an application will allow a user to remain logged in before forcing the user to re-authenticate. There are 2 types: Soft Session Timeouts (last week’s topic) and Hard Session Timeouts (this week’s topic).

A hard session timeout is applied when the user has been logged in for a specific period of time, no matter what.

As an example, let’s say we have a system where:
1. Access to the application requires authentication
2. Attempting to access any portion of the application except login (and change/reset pw, etc.) redirects you to the login page.
3. You have a hard session timeout set to 9 hours, and a user logs into your system and uses it, actively or inactively, for 9 hours

The net effect of this will be that the next interaction this user has with the system will then redirect them to the login page.

The section above shows what a hard session timeout is and does, but what is it protecting against? Whereas a soft session timeout is angled more towards preventing CSRF and similar attacks, a hard session timeout (while it does help protect against those as well) helps prevent things like the permanent hijacking of an account. If an attacker does take over an account, they can’t use it forever without re-authenticating. For this same reason, you should force authentication (validate the old password) whenever a user attempts to change the password of the account.

What should I do about it?

Many applications, even those that avoid the soft session timeout, do include a hard session timeout. Unfortunately, it’s not available to Java developers simply as a configuration option. That means you have to either roll your own, or look for existing software outside of the core Java/J2EE options.

In Java, there are a few ways you can enable a hard session timeout:

Option 1: Set timeout in code

There is no specific Java API call to do this. However, you could easily set up a filter (or your handler/interceptor of choice) to perform this task. Essentially, it would require you to store the login time of every user and tie that to their authenticated session id. If a request is made using a session id tied to a user who has been logged in > X minutes, invalidate the session, and redirect the request to the login screen. Fairly simple idea.
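The check such a filter would perform can be modeled independently of the Servlet API. This is a minimal sketch under the assumption that the login time is stored per session; the class and method names are illustrative.

```java
import java.util.concurrent.TimeUnit;

// Sketch of the hard-timeout check a filter would perform on each
// request. The session abstraction is reduced to a login timestamp.
public class HardSessionTimeoutChecker {

    private final long maxSessionMillis;

    public HardSessionTimeoutChecker(long maxDuration, TimeUnit unit) {
        this.maxSessionMillis = unit.toMillis(maxDuration);
    }

    // Returns true if the session should be invalidated and the
    // request redirected to the login page.
    public boolean isExpired(long loginTimeMillis, long nowMillis) {
        return (nowMillis - loginTimeMillis) > maxSessionMillis;
    }

    public static void main(String[] args) {
        // 9 hour hard timeout, as in the example above
        HardSessionTimeoutChecker checker =
                new HardSessionTimeoutChecker(9, TimeUnit.HOURS);

        long loginTime = 0L; // user logged in at t=0
        System.out.println(checker.isExpired(loginTime, TimeUnit.HOURS.toMillis(8)));  // false
        System.out.println(checker.isExpired(loginTime, TimeUnit.HOURS.toMillis(10))); // true
    }
}
```

In a real filter, the login timestamp would be stored as a session attribute at authentication time, and an expired check would call `session.invalidate()` before redirecting.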

Option 2: Use a third party library

Though I’m not aware of any libraries off the top of my head that do this, it theoretically wouldn’t be hard to build one. (If one doesn’t exist, you could always build it and donate it to the community!)

Option 3: SSO sets the timeout

This is not a Java-only option, but still should be mentioned. Many enterprises use large single sign-on (SSO) identity systems to control access to applications. Many of these systems allow you to set the timeout (both a soft timeout and a hard timeout) for an application.

As you can see, the hard session timeout is a useful security control. It allows you to have another layer of protection for your application and your users.

References
———–
https://www.jtmelton.com/2012/04/17/year-of-security-for-java-week-16-set-a-soft-session-timeout/


Year Of Security for Java – Week 16 – Set a Soft Session Timeout


What is it and why should I care?

A session timeout is an important security control for any application. It specifies the length of time that an application will allow a user to remain logged in before forcing the user to re-authenticate. There are 2 types: Soft Session Timeouts (today’s topic) and Hard Session Timeouts (I’ll cover this next week).

A soft session timeout is applied when the user does not interact with the system for a period of time.

As an example, let’s say we have a system where:
1. Access to the application requires authentication
2. Attempting to access any portion of the application except login (and change/reset pw, etc.) redirects you to the login page.
3. You have a 15 minute soft timeout, and a user logs into your system and then walks away for 20 minutes

The net effect of this will be that the next interaction this user has with the system will then redirect them to the login page.
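The inactivity check behind this behavior can be sketched in plain Java, independent of the Servlet API (which normally handles it for you, as the options below show). Unlike a hard timeout, the clock resets on every request. The class name and structure are illustrative.

```java
import java.util.concurrent.TimeUnit;

// Sketch of a soft (inactivity) timeout: the clock resets on every
// request, unlike a hard timeout which counts from login time.
public class SoftSessionTimeoutChecker {

    private final long maxInactiveMillis;
    private long lastAccessMillis;

    public SoftSessionTimeoutChecker(long maxInactive, TimeUnit unit, long nowMillis) {
        this.maxInactiveMillis = unit.toMillis(maxInactive);
        this.lastAccessMillis = nowMillis;
    }

    // Called on each request: returns true if the session has been
    // idle too long; otherwise records the access and keeps it alive.
    public boolean accessAndCheckExpired(long nowMillis) {
        if (nowMillis - lastAccessMillis > maxInactiveMillis) {
            return true; // expired: invalidate and redirect to login
        }
        lastAccessMillis = nowMillis; // activity resets the clock
        return false;
    }

    public static void main(String[] args) {
        // 15 minute soft timeout, as in the example above
        SoftSessionTimeoutChecker session =
                new SoftSessionTimeoutChecker(15, TimeUnit.MINUTES, 0L);

        // a request 10 minutes in: still alive, and the clock resets
        System.out.println(session.accessAndCheckExpired(TimeUnit.MINUTES.toMillis(10))); // false

        // next request 20 minutes after that: expired
        System.out.println(session.accessAndCheckExpired(TimeUnit.MINUTES.toMillis(30))); // true
    }
}
```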

The section above shows what a soft session timeout is and does, but what is it protecting against? There are many issues that are related (authentication, authorization, auditing, session hijacking, etc.), but one of the primary issues is CSRF. By forcing a reasonably low session timeout, you add another security control that increases the difficulty of launching CSRF style attacks. Essentially, any attack that attempts to exploit the fact that the user is logged in is now either prevented or complicated by using this simple control.

What should I do about it?

Like many security controls, there is a tradeoff with functionality related to session timeouts. Many popular web applications that we use have no soft session timeout configured, because they don’t want to trouble a user with an extra step of logging in repeatedly. As in other situations, this is a risk decision to make things less secure for your users in order to make things simpler and easier for them. If you have an application that protects sensitive data, or your users (or you) have a lower threshold of pain with risk decisions, you should opt for including a soft (and hard – see next week) session timeout.

In Java, there are several ways you can do this.

Option 1: Set the timeout in the web.xml

By far the most popular option, this is simple and allows you to configure this without having to set it in code. An example snippet showing a 15 minute timeout is below.


<session-config>
    <session-timeout>15</session-timeout>
</session-config>

Option 2: Allow the app server to set the session timeout

This could mean that you allow the default (30 minutes for most app servers) or that you set the value specifically in your container. Either way, this is an option.

Option 3: Set timeout in code

This option gives you the ability to encode this setting in code. It also gives you the additional flexibility of setting different timeouts for different users, since it’s set on the session and not globally, but it’s far less common than its web.xml alternative.

httpSession.setMaxInactiveInterval(15*60); // set in seconds

Option 4: SSO sets the timeout

This is not a Java-only option, but still should be mentioned. Many enterprises use large single sign-on (SSO) identity systems to control access to applications. Many of these systems allow you to set the timeout for an application.

As you can see, the soft session timeout is a useful security control. It allows you to have another layer of protection for your application and your users.

References
———–
http://software-security.sans.org/blog/2010/08/11/security-misconfigurations-java-webxml-files
