To get value from your WAF investment, which basically means blocking threats, keeping unwanted requests and malware from hitting your applications, and virtually patching known vulnerabilities in the application stack, you must tune the WAF regularly. As we mentioned in the introductory post, a WAF is not a “set and forget” tool – it’s a security platform that requires adjustment for new and evolving threats.
As we flesh out the process we presented in the WAF Management Process post, let’s dig into the management of policies, and more specifically how to tune them to defend your site. But first it’s worth spending some time discussing the different types of policies at your disposal. Policies fall into two categories. The first is a ‘Black List’ of stuff you don’t want hitting your applications: basically attacks you know about. The second is a ‘White List’ of activities that are permissible, based on the characteristics of the specific application. These negative (black list) and positive (white list) security models complement one another to fully protect your applications.
Negative security models should be familiar, as this is the model implemented by intrusion prevention devices at the network layer. It works by detecting explicit patterns of known malicious behavior. Things like ‘site-scraping’, injection attacks, XML attacks, suspected botnets/Tor nodes, and even blog spam are universal application attacks that affect all sites. Most of these policies will come ‘out of the box’ from the vendors, who research and develop ‘signatures’ for their customers. Each signature explicitly describes an attack, and this is a common approach to identifying SQL injection and buffer overflow attacks. The downside is that this method is fragile: any variation of the attack will no longer match the signature, and will thus bypass the WAF. This means signatures should only be used when you can reliably and deterministically describe an attack.
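To make the fragility point concrete, here is a minimal Python sketch of signature matching. The patterns and function names are invented for illustration; real WAF engines use far richer rule languages.

```python
import re

# Hypothetical, simplified signatures; real rule sets are far more extensive.
SIGNATURES = {
    "sql_injection": re.compile(r"'\s+OR\s+'1'\s*=\s*'1", re.IGNORECASE),
    "path_traversal": re.compile(r"\.\./"),
}

def match_signatures(payload):
    """Return the names of any signatures the payload matches."""
    return [name for name, rx in SIGNATURES.items() if rx.search(payload)]

# A known attack matches...
match_signatures("id=1' OR '1'='1")        # ["sql_injection"]
# ...but a trivial variation (SQL comments instead of spaces) slips through:
match_signatures("id=1'/**/OR/**/'1'='1")  # []
```

The second call is exactly the fragility problem: one evasion trick and the signature no longer fires.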
Because of this limitation, vendors provide a myriad of other detection options, such as heuristics, reputation scoring, evasion technique detection, and several proprietary methods used to qualitatively determine whether there is an attack. Each method has its own strengths and should be applied appropriately. These methods can be used in conjunction with one another to produce a sort of risk score for incoming requests, and to block when a request just does not look right. But the devil is in the details: there are literally thousands of variations of attacks, and applying policies to detect and stop them all is quite difficult.
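As a sketch of how these methods might combine into a risk score, consider the following. The indicators, weights, and threshold are all illustrative assumptions, not any vendor’s actual scoring model:

```python
# Hedged sketch of combining detection methods into a single risk score.
# Weights and threshold are illustrative assumptions, not vendor defaults.
def risk_score(request):
    score = 0
    if request.get("ip_reputation") == "bad":
        score += 40                              # reputation scoring
    if request.get("geo") not in {"US", "CA", "GB"}:  # assumed customer base
        score += 20                              # geo-location heuristic
    if request.get("malformed_headers"):
        score += 30                              # protocol anomaly heuristic
    if request.get("encoding_anomaly"):
        score += 30                              # possible evasion technique
    return score

BLOCK_THRESHOLD = 60

def should_block(request):
    return risk_score(request) >= BLOCK_THRESHOLD
```

No single indicator is conclusive, but a bad-reputation IP from an unexpected region crosses the threshold together.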
Finally, policies for fraud detection, business logic attacks, and data leakage need to be adapted to the specific use models of your web application to be effective. These attacks are designed to find flaws in the way application developers implemented their code, looking for gaps in the way they enforce process and transaction state. Examples might include issuing order and cancellation requests in rapid succession to see if the web server can be confused, altering address information in a shopping cart, replay attacks, or changing the order of events. Fraud detection policies will, for the most part, be provided by you. They are constructed with the analysis techniques mentioned above, but rather than focusing on the structure and use of HTTP and XML grammars, they examine user behavior as it relates to the type of transaction being performed. Again, this type of policy requires an understanding of how your web application works, and the application of appropriate detection techniques.
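A hedged sketch of what such business-logic checks might look like; the transaction flow, function names, and two-second threshold are invented for illustration:

```python
# Illustrative only: flag transaction steps arriving in an impossible order,
# and order/cancel pairs that arrive faster than a human could produce.
EXPECTED_FLOW = ["add_to_cart", "checkout", "payment", "confirm"]

def out_of_order(events):
    """True if steps arrive in an order the application never produces."""
    positions = [EXPECTED_FLOW.index(e) for e in events if e in EXPECTED_FLOW]
    return positions != sorted(positions)

def rapid_cancel(order_ts, cancel_ts, min_gap_seconds=2.0):
    """True if a cancellation follows its order suspiciously fast."""
    return (cancel_ts - order_ts) < min_gap_seconds
```

Note that neither check makes sense without knowing this application’s expected flow, which is exactly why these policies come from you rather than the vendor.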
The other side of the policy coin is the positive security model, or ‘white listing’. Yes, network folks, this is the metaphor implemented in your firewalls. In this model you catalog the legitimate application traffic, ensure those actions are actually legitimate, and set up the policies to block anything not on the list. In essence you block anything that you don’t explicitly allow.
The good news is that this approach is very effective at tripping up malicious requests you’ve never seen before (0-day attacks) without having to explicitly code a signature for each. It is also an excellent way to pare down the universe of threats into a smaller, more manageable subset of specific threats you need to account for with your black list: basically, the ways that authorized application actions (like GET and POST) can be gamed.
The bad news is that applications are dynamic and change on a regular basis, so unless you update your white list with each application update, the WAF will effectively disable new application features. Regardless, you will be using both approaches in tandem; without both, your workload goes up and your chance of being secure goes way down.
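In code, the positive model is almost trivially simple; the hard part is keeping the catalog current. A minimal sketch, with made-up endpoints:

```python
# Positive security model sketch: only cataloged (method, path) pairs pass.
# The catalog below is invented; in practice it comes from baselining traffic.
ALLOWED = {
    ("GET", "/products"),
    ("GET", "/products/search"),
    ("POST", "/cart"),
    ("POST", "/checkout"),
}

def permitted(method, path):
    """Block anything not explicitly allowed."""
    return (method.upper(), path) in ALLOWED
```

Ship a new `/wishlist` feature without adding it to `ALLOWED`, and the WAF silently breaks it, which is the maintenance burden described above.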
People Manage Policies
There is another requirement that needs to be addressed prior to adjusting your policies: assigning someone to manage them. In-house construction of new WAF signatures, especially at small and medium businesses, is not that common. Most organizations depend on the WAF vendor to do the research and update the policies accordingly. It’s a little like anti-virus: companies theoretically could write their own A/V signatures, but they don’t. They don’t monitor CERT advisories or any other source looking for issues they should protect their applications against. In fact, they typically do not have the in-house expertise to write these policies, even if they wanted to. And if you want your WAF to perform better than A/V, which usually addresses only about 30% of the viruses encountered, you’ll need to adjust your policies to your application environment.
That means you need someone who understands the rule ‘grammars’ and how web protocols work. That person must also understand what type of information should not be leaving the company, what constitutes bad behavior, and the risks your web applications pose to the business. Having someone skilled enough to write and manage WAF policies is a prerequisite for success. It can be an employee or a 3rd party, or perhaps you pay the vendor to assist, but you must have a skilled resource managing WAF policies on an ongoing basis. There really is no shortcut here; either you have someone knowledgeable and dedicated to this task, or you will be reliant on the canned policies that come with the WAF, which aren’t good enough. So the critical success factor in managing policies is to find a person (or people) who can manage the WAF, get them training if need be, and give them enough time to keep the policies up to date.
So what does this person need to do? Let’s break it down simply:
Baseline Application Traffic
The first step is for this person to learn how the applications are deployed and what they are doing. Oh sure, some of you likely installed the WAF, updated the attack signatures, then let it rip. Very little effort, but very little security. And we hear plenty of stories about frustrated users who botched their first deployment, but that’s water under the bridge. Taking a fresh look involves letting the WAF observe your application traffic for a period of time. All WAF platforms can be deployed in ‘monitor only’ mode to collect traffic and alert based on the policies, but some vendors also provide a specific ‘learning’ mode that assists in classifying application usage, users, structure, and services. These baselines are your roadmap to what the application actually does. They give you the information you need to determine what to secure, and provide clues as to the best way to approach writing security policies. This initial discovery process is really important to ensure your initial ruleset covers what’s really out there, as opposed to what you think is out there. It’s the basis for creating an application white list of acceptable actions (the positive security model) for each application. There is really no other effective way to understand what ‘normal’ behavior is for all the applications on your network, and to get a handle on the different applications and services you need to protect.
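A rough sketch of what a learning mode produces: a per-endpoint profile of observed methods, paths, and parameters that later seeds the white list. All the traffic and names here are illustrative:

```python
from collections import defaultdict

# Sketch of 'learning mode' output: observe traffic, build a per-endpoint
# profile that later becomes the basis of the positive (white list) rules.
def build_baseline(observed_requests):
    profile = defaultdict(set)
    for method, path, params in observed_requests:
        profile[(method, path)].update(params)
    return profile

# Invented sample traffic captured while in monitor-only mode:
traffic = [
    ("GET", "/products", {"id"}),
    ("GET", "/products", {"id", "sort"}),
    ("POST", "/cart", {"id", "qty"}),
]
baseline = build_baseline(traffic)
# baseline[("GET", "/products")] == {"id", "sort"}
```

The resulting profile is exactly the ‘catalog of legitimate traffic’ the positive model needs; anything outside it becomes a candidate for blocking or review.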
Understand the Applications
Now that you have cataloged the application traffic, you want to understand the transactions being processed, and get an idea of the risks and threats you face in the context of each business application. In essence, you want to map the baselines into a set of rules, and that means understanding the application activities and transactions in order to select the appropriate type of rule. Half the battle is understanding the application, and half the battle is understanding how attackers will undermine those functions. Understanding transactions is not something you’ll have coming into this process unless you possess deep knowledge of the application in advance, so you’ll need to work closely with DevOps to understand how the application works and how events are processed. Part of your job will be to work with the people who possess the information you need to understand the application, and to document the key functions that will be subject to attack.
Once you understand the application functions, and have a pretty good idea of what attacks to expect, you need to determine how you will counter the threats. For example, if your application has a known defect that cannot be addressed through code changes in a timely fashion, you have several options:
- You can remove the offending function from the white list of allowed activity, thereby removing the threat. The downside is you’ve also removed an app function or service by doing this, which may break the application.
- You can write a specific signature that defines the attack so you can detect when someone is trying to exploit the vulnerability. This approach will stop known attacks, but it assumes you can account for all potential variations and evasion techniques, and it assumes that legitimate use will not break the application.
- You can use one or more heuristic tests to look for clues that indicate abnormal application use or an attack. Heuristics might include malformed requests, odd customer geolocation, customer IP reputation, known areas of application weakness, requests for sensitive data, or any number of other indicators that suggest an attack.
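To make the three options concrete, here is an illustrative sketch for a hypothetical flawed endpoint (`/legacy/export` with a path-traversal defect). Every name, path, and pattern here is an assumption for illustration; each function returns whether the request is allowed:

```python
import re

# Hypothetical vulnerable endpoint, used for illustration only.
VULN_PATH = "/legacy/export"

def option_whitelist(method, path):
    """Option 1: drop the flawed function from the white list entirely."""
    return path != VULN_PATH  # deny any request targeting the flawed function

EXPLOIT_SIG = re.compile(r"file=\.\./", re.IGNORECASE)

def option_signature(query):
    """Option 2: match only the known exploit pattern."""
    return EXPLOIT_SIG.search(query) is None  # deny on signature hit

def option_heuristic(query, client_reputation):
    """Option 3: block suspicious combinations rather than exact patterns."""
    suspicious = ("%2e" in query.lower()) or client_reputation == "bad"
    return not suspicious
```

Each option trades coverage against breakage: option 1 breaks the feature for everyone, option 2 keeps the feature but misses exploit variations, and option 3 catches more variations at the cost of possible false positives.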
Many customers we have spoken with remain apprehensive about this part of the process. They have a handle on the application functionality, so they can create the white lists for their applications. What they lack is knowledge of how attackers work, so they are not clear on what threats they should look for, or how to write the specific policies. Fear not: there are several ways to fill in this knowledge gap.
The first thing to understand is that your vendor will be most helpful here, and many of the common black list policies will come from them. We recommend you review the set of policies that come standard with the WAF. This will give you a very good idea of the types of attacks (spoofing, repudiation, tampering). The embedded policies reflect the WAF vendor’s long organizational memory of web app attacks, and how they choose to address those threats. They should help you understand how attackers work and provide clues on how to approach writing your own policies. Second, there are publicly available lists of attacks from OWASP, CERT, and other organizations that you can familiarize yourself with. Some even provide example attacks, WAF evasion examples, and rules to detect them. Third, you can hire pen testers to probe the applications and identify weaknesses. This form of examination is excellent because it approaches exploitation – and WAF evasion – in the same manner an attacker would. Finally, if you have doubts about something, you can always buy professional services to get you over the hump.
Another daunting part of this exercise is the scope of the problem. It’s one thing if you’re only talking about a handful of applications; it’s entirely different when you consider 200 applications. Or 1,000. The rules you implement for any given application will be different, so this process is iterative. When facing an overwhelming number of applications, it’s best to take a divide-and-conquer approach: implement basic security across all of the applications, and select a handful of applications for more advanced rules. Then iteratively work your way through the rest of the applications and services, starting with the apps that pose the greatest risk to your organization. This involves working closely with development teams, which we’ll discuss in the next post, to help cut this problem down to size.
To reiterate, if your WAF has a learning mode, use it. Baselining the application and building the profile of what it does will help you plow through the policies much faster. You’ll still need to apply a large dose of common sense to see which rules don’t make sense and what’s missing. You can do this by building threat models for dangerous edge cases and other situations to ensure nothing is missed. But the learning tools speed up the process.
Protect Against Attacks
Now that you’ve selected how you want to block threats to your applications, it’s time to write the rules and apply them to the WAF – or WAFs – along with test cases to verify the deployed rules are effective. You’ll want to deploy your negative and positive rule sets one at a time, then validate that the rules have propagated to each WAF and are in place to protect the applications you’ve cataloged. Don’t assume the rules are in place or that they work – validate with the test cases you’ve developed. Testing is your friend. In our introductory post we mentioned a story shared with us about a company that taught their WAF to accept all attacks as legitimate traffic – essentially by running a pen test while the WAF was in learning mode. The failure was not that they messed up the WAF rules; the failure was that they did not test the resulting rule set, and did not discover the problem until a 3rd party pen tester probed their system some months later. The lesson is to validate that your rules are in place and effective.
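The test cases can be as simple as a table of payloads and expected outcomes, run against whatever enforcement interface your WAF exposes. A sketch, with an invented stand-in rule function:

```python
# Sketch: verify deployed rules actually block what they should.
# block_decision() stands in for whatever enforcement check the WAF exposes.
TEST_CASES = [
    # (request payload, expected_blocked)
    ("id=1' OR '1'='1", True),
    ("id=42", False),
    ("file=../../etc/passwd", True),
]

def run_test_cases(block_decision):
    failures = []
    for payload, expected in TEST_CASES:
        if block_decision(payload) != expected:
            failures.append(payload)
    return failures

# Example with a trivial stand-in rule set:
def demo_rules(payload):
    return ("' OR '" in payload) or ("../" in payload)

# run_test_cases(demo_rules) == []  -> every case behaves as expected
```

A rule set that silently allowed everything, like the learning-mode mishap above, would fail this harness immediately instead of months later.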
Also understand that this is an ongoing process, especially given the dynamic attack space (impacting your black list) and ongoing application changes (impacting your white list). Your WAF management process needs to continually learn, continually catalog user and application behaviors, and collect metrics along the way. Which metrics are meaningful, and which activities help you glean useful information, differ across both the customers and vendors we have spoken with. The only consistent thing we found is that you can’t measure your success if you don’t have metrics to gauge progress and performance.
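What you track will vary, but even a simple tally of outcomes gives you something to measure progress against. This sketch is illustrative, not a recommended metric set:

```python
from collections import Counter

# Illustrative outcome counter; which metrics actually matter will
# differ per organization, as noted above.
class WafMetrics:
    def __init__(self):
        self.counts = Counter()

    def record(self, outcome):
        """Tally an outcome, e.g. 'blocked', 'allowed', 'false_positive'."""
        self.counts[outcome] += 1

    def block_rate(self):
        total = sum(self.counts.values())
        return self.counts["blocked"] / total if total else 0.0
```

Tracking even this much over time shows whether policy tuning is moving block rates and false positives in the right direction.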
And a final note: writing policies for a negative security model requires a balancing act. You want the pattern to reflect a specific threat, but its definition must not be so specific that it misses variations of the same attack. Put another way, if Alice is a known attacker, you still want to recognize Alice when she’s wearing a wig and sunglasses. You want to detect the threats without generating false positives or false negatives. That’s easier said than done, which means you’ll be revisiting policies over time to continually ensure the effectiveness of the WAF in protecting your applications.
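One common way to recognize Alice in her wig is to normalize the request (URL-decode and lowercase it) before matching a slightly looser pattern. A sketch with invented patterns, not an exhaustive normalization pipeline:

```python
import re
import urllib.parse

# Brittle: matches exactly one spelling of the attack.
brittle = re.compile(r"' OR '1'='1")

# Looser pattern applied only after normalization; illustrative, not exhaustive.
loose = re.compile(r"'\s*or\s*'?1'?\s*=\s*'?1")

def normalized_match(payload):
    decoded = urllib.parse.unquote(payload).lower()
    return loose.search(decoded) is not None

# The URL-encoded variant evades the brittle signature entirely...
variant = "%27%20OR%20%271%27%3D%271"
# ...but normalized_match(variant) catches it after decoding.
```

The trade-off is the one described above: the looser the post-normalization pattern, the more variations you catch, and the closer you edge toward false positives.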
In our next post we will tackle one of the trickier issues around effective WAF management: working with the application development teams to stay current as to what’s happening with the application.
- Adrian Lane