Wednesday, August 29, 2012

WAF Normalization and I18N

WAF Normalization and I18N:
Submitted By Breno Silva Pinto and Ryan Barnett

Web application firewalls must be able to handle Internationalization (I18N), properly decoding various data encodings (including Unicode and UTF-8), both to prevent evasion and to minimize false positives.  In an earlier blog post, we highlighted ModSecurity's new support for Unicode mapping and decoding.  This capability helps us more accurately decode characters from different Unicode code points.  While this certainly helps our accuracy, we still had the issue of UTF-8 encodings.  This is a challenge for any WAF, as it must be able to handle UTF-8 encodings of characters for different languages such as Portuguese.  So, if you are running ModSecurity to protect a non-English language website, this blog post is for you!  We introduce a new transformation function called utf8toUnicode that helps normalize data for inspection.

Incorrect UTF-8 Decoding

We have received some recent reports of false positive issues with the OWASP ModSecurity CRS that were due to the presence of UTF-8 encoded characters.  As an example, the Portuguese language has many "special" characters with various accents.  For example:

1-    á Á  ã Ã â Â à À
2-    é É ê Ê í Í
3-    ó Ó õ Õ ô Ô
4-    ú Ú ü Ü
5-    ç Ç 
When these characters are UTF-8 encoded, they use multiple bytes.  As an example, the "ę" character is encoded as "%c4%99".  If ModSecurity only applies the standard t:urlDecodeUni transformation, it decodes each byte individually, which results in an impedance mismatch.  In this case, the incorrect decoding resulted in false positive matches against some SQL Injection rules in the OWASP ModSecurity CRS.  While this is a bit of a pain, it is not as bad as the false negative bypass situation that the same incorrect decoding can cause.  Let's look at this type of SQL Injection evasion issue.  What if we send the following request:
http://172.16.51.132/index.php?foo='úníón+séléct+data+fróm+námés
Let's see how ModSecurity will decode this data when checking for an example SQL Injection keyword:
Recipe: Invoking rule 21b67e38;
Rule 21b67e38: SecRule "ARGS:foo" "@rx select" "phase:2,log,auditlog,pass,id:1111,t:urlDecodeUni"
T (0) urlDecodeUni: "'\xc3\xban\xc3\xad\xc3\xb3n s\xc3\xa9l\xc3\xa9ct data fr\xc3\xb3m n\xc3\xa1m\xc3\xa9s"
Transformation completed in 13 usec.
Executing operator "rx" with param "select" against ARGS:foo.
Target value: "'\xc3\xban\xc3\xad\xc3\xb3n s\xc3\xa9l\xc3\xa9ct data fr\xc3\xb3m n\xc3\xa1m\xc3\xa9s"
Operator completed in 7 usec.
Rule returned 0.
As you can see above, the character sequence “úníón+séléct+data+fróm+námés“ was handled by the engine as “\xc3\xban\xc3\xad\xc3\xb3n s\xc3\xa9l\xc3\xa9ct data fr\xc3\xb3m n\xc3\xa1m\xc3\xa9s”, while the rule was looking for the pattern “select”.  So, what happens if the application applies best-fit mapping conversions and removes the accents before sending the data to the database?  This may allow the payload to bypass our signatures.
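To make the mismatch concrete, here is a minimal sketch in Python 3 (a standalone illustration, not ModSecurity's implementation) showing that a "select" pattern never matches the raw UTF-8 bytes left behind by byte-wise URL decoding:

import re
from urllib.parse import unquote_to_bytes

# The UTF-8/URL-encoded payload from the request above.
encoded = "%c3%ban%c3%ad%c3%b3n s%c3%a9l%c3%a9ct data fr%c3%b3m n%c3%a1m%c3%a9s"
raw = unquote_to_bytes(encoded)     # b'\xc3\xban\xc3\xad\xc3\xb3n s\xc3\xa9l...'
print(re.search(rb"select", raw))   # None: the accented bytes break the match
print(raw.decode("utf-8"))          # 'úníón séléct data fróm námés'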

UTF-8 to Unicode Mapping

In order to better handle this data, we should first map the UTF-8 encoded data to Unicode and then use the Unicode code point mapping capabilities mentioned at the beginning of the blog post.  This configuration is achieved by first setting the SecUnicodeCodePage and SecUnicodeMapFile directives in your main ModSecurity configuration file:
SecUnicodeCodePage 20127
SecUnicodeMapFile /etc/apache2/unicode.mapping
The SecUnicodeCodePage directive sets the proper Unicode code page for your site.  The example value 20127 is the US-ASCII code page.  The SecUnicodeMapFile directive points to the unicode.mapping file that comes with the ModSecurity source archive and includes all of the Unicode conversions.  Here is an example of the 20127 (US-ASCII) mapping data:
20127 (US-ASCII)
00a0:20 00a1:21 00a2:63 00a4:24 00a5:59 00a6:7c 00a9:43 00aa:61
00ab:3c 00ad:2d 00ae:52 00b2:32 00b3:33 00b7:2e 00b8:2c 00b9:31 00ba:6f 00bb:3e
00c0:41 00c1:41 00c2:41 00c3:41 00c4:41 00c5:41 00c6:41 00c7:43 00c8:45 00c9:45
00ca:45 00cb:45 00cc:49 00cd:49 00ce:49 00cf:49 00d0:44 00d1:4e 00d2:4f 00d3:4f
00d4:4f 00d5:4f 00d6:4f 00d8:4f 00d9:55 00da:55 00db:55 00dc:55 00dd:59 00e0:61
00e1:61 00e2:61 00e3:61 00e4:61
00e5:61 00e6:61 00e7:63 00e8:65
00e9:65 00ea:65 00eb:65 00ec:69 00ed:69 00ee:69 00ef:69 00f1:6e 00f2:6f 00f3:6f
00f4:6f 00f5:6f 00f6:6f 00f8:6f 00f9:75 00fa:75 00fb:75 00fc:75 00fd:79 00ff:79
0100:41 0101:61 0102:41 0103:61 0104:41 0105:61 0106:43 0107:63 0108:43 0109:63
010a:43 010b:63 010c:43 010d:63 010e:44 010f:64 0110:44 0111:64 0112:45 0113:65
0114:45 0115:65 0116:45 0117:65 0118:45 0119:65 011a:45 011b:65 011c:47 011d:67
011e:47 011f:67 0120:47 0121:67 0122:47 0123:67 0124:48 0125:68 0126:48 0127:68
0128:49 0129:69 012a:49 012b:69 012c:49 012d:69 012e:49 012f:69 0130:49 0131:69
0134:4a 0135:6a 0136:4b 0137:6b 0139:4c 013a:6c 013b:4c 013c:6c 013d:4c 013e:6c
0141:4c 0142:6c 0143:4e 0144:6e 0145:4e 0146:6e 0147:4e 0148:6e 014c:4f 014d:6f
014e:4f 014f:6f 0150:4f 0151:6f 0152:4f 0153:6f 0154:52 0155:72 0156:52 0157:72
0158:52 0159:72 015a:53 015b:73 015c:53 015d:73 015e:53 015f:73 0160:53 0161:73
0162:54 0163:74 0164:54 0165:74 0166:54 0167:74 0168:55 0169:75 016a:55 016b:75
016c:55 016d:75 016e:55 016f:75 0170:55 0171:75 0172:55 0173:75 0174:57 0175:77
0176:59 0177:79 0178:59 0179:5a 017b:5a 017c:7a 017d:5a 017e:7a 0180:62 0189:44
0191:46 0192:66 0197:49 019a:6c 019f:4f 01a0:4f 01a1:6f 01ab:74 01ae:54 01af:55
01b0:75 01b6:7a 01cd:41 01ce:61 01cf:49 01d0:69 01d1:4f 01d2:6f 01d3:55 01d4:75
01d5:55 01d6:75 01d7:55 01d8:75 01d9:55 01da:75 01db:55 01dc:75 01de:41 01df:61
01e4:47 01e5:67 01e6:47 01e7:67 01e8:4b 01e9:6b 01ea:4f 01eb:6f 01ec:4f 01ed:6f
01f0:6a 0261:67 02b9:27 02ba:22 02bc:27 02c4:5e 02c6:5e 02c8:27 02cb:60 02cd:5f
02dc:7e 0300:60 0302:5e 0303:7e 030e:22 0331:5f 0332:5f 2000:20 2001:20 2002:20
2003:20 2004:20 2005:20 2006:20 2010:2d 2011:2d 2013:2d 2014:2d 2018:27 2019:27
201a:2c 201c:22 201d:22 201e:22 2022:2e 2026:2e 2032:27 2035:60 2039:3c 203a:3e
2122:54 ff01:21 ff02:22 ff03:23 ff04:24 ff05:25 ff06:26 ff07:27 ff08:28 ff09:29
ff0a:2a ff0b:2b ff0c:2c ff0d:2d ff0e:2e ff0f:2f ff10:30 ff11:31 ff12:32 ff13:33
ff14:34 ff15:35 ff16:36 ff17:37 ff18:38 ff19:39 ff1a:3a ff1b:3b ff1c:3c ff1d:3d
ff1e:3e ff20:40 ff21:41 ff22:42 ff23:43 ff24:44 ff25:45 ff26:46 ff27:47 ff28:48
ff29:49 ff2a:4a ff2b:4b ff2c:4c ff2d:4d ff2e:4e ff2f:4f ff30:50 ff31:51 ff32:52
ff33:53 ff34:54 ff35:55 ff36:56 ff37:57 ff38:58 ff39:59 ff3a:5a ff3b:5b ff3c:5c
ff3d:5d ff3e:5e ff3f:5f ff40:60 ff41:61 ff42:62 ff43:63 ff44:64 ff45:65 ff46:66
ff47:67 ff48:68 ff49:69 ff4a:6a ff4b:6b ff4c:6c ff4d:6d ff4e:6e ff4f:6f ff50:70
ff51:71 ff52:72 ff53:73 ff54:74 ff55:75 ff56:76 ff57:77 ff58:78 ff59:79 ff5a:7a
ff5b:7b ff5c:7c ff5d:7d ff5e:7e
With these Unicode mapping directives in place, we can now use the new t:utf8toUnicode transformation function, which was just added to the v2.7.0 code.  Here is a new example rule:
SecRule "ARGS:foo" "@rx select" "phase:2,log,auditlog,pass,id:1111,t:utf8toUnicode,t:urlDecodeUni"
Let's see how ModSecurity will handle the same request, but now using the mentioned features:

Recipe: Invoking rule 21717d58;
Rule 21717d58: SecRule "ARGS:foo" "@rx select" "phase:2,log,auditlog,pass,id:1111,t:utf8toUnicode,t:urlDecodeUni"
T (0) Utf8toUnicode: "'%u00fan%u00ed%u00f3n s%u00e9l%u00e9ct data fr%u00f3m n%u00e1m%u00e9s"
T (0) urlDecodeUni: "'union select data from names"
Transformation completed in 29 usec.
Executing operator "rx" with param "select" against ARGS:foo.
Target value: "'union select data from names"
Operator completed in 16 usec.
Warning. Pattern match "select" at ARGS:foo.
Rule returned 1.
As you can see, the sequence “úníón+séléct+data+fróm+námés” is now normalized to “union select data from names”, and thus the rule matched.  Summarizing the engine steps for data normalization (a small code sketch of this pipeline follows the list):
  1. Attacker input: úníón+séléct+data+fróm+námés
  2. Input in UTF-8 format: \xc3\xban\xc3\xad\xc3\xb3n s\xc3\xa9l\xc3\xa9ct data fr\xc3\xb3m n\xc3\xa1m\xc3\xa9s
  3. Input in Unicode format: %u00fan%u00ed%u00f3n s%u00e9l%u00e9ct data fr%u00f3m n%u00e1m%u00e9s
  4. Input in ASCII format: union select data from names
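Here is a minimal sketch in Python 3 of that pipeline, using a handful of entries from the 20127 mapping data above (for example, 00e1:61 maps U+00E1 "á" to 0x61 "a").  It illustrates the normalization logic only; it is not ModSecurity's actual code:

from urllib.parse import unquote_to_bytes

# Best-fit entries taken from the 20127 (US-ASCII) mapping above.
best_fit = {0x00E1: 0x61, 0x00E9: 0x65, 0x00ED: 0x69, 0x00F3: 0x6F, 0x00FA: 0x75}

encoded = "%c3%ban%c3%ad%c3%b3n s%c3%a9l%c3%a9ct data fr%c3%b3m n%c3%a1m%c3%a9s"
unicode_text = unquote_to_bytes(encoded).decode("utf-8")   # steps 2 -> 3
ascii_text = "".join(chr(best_fit.get(ord(c), ord(c))) for c in unicode_text)
print(ascii_text)   # step 4: 'union select data from names'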
With this new UTF-8 and Unicode mapping and decoding support, ModSecurity can now more accurately normalize data from non-English languages, which results in better rule accuracy.

Backward Compatibility Plays to Malware’s Hands

Backward Compatibility Plays to Malware’s Hands:
Maintaining backward compatibility in software products is hard. Technology evolves on a daily basis, and while it feels “right” to go ahead and ditch old technology in favor of the new, doing so can cause issues, especially when the platform in question is one that millions of developers develop for. However, it turns out that the desire of software vendors to keep backward compatibility is abused by malware authors.
Let’s have a look at a piece of malware recently spotted in the wild:

[Screenshot: XML_heaplib]
Most of you will find it familiar, since it is the latest MS XML Core Services vulnerability (CVE-2012-1889) along with the notorious heaplib, which became popular once more thanks to this vulnerability. But wait, something is weird about this snippet from heaplib: look at the if-else statement at the beginning of the screenshot. It was modified from the original version and now has those semicolons. So why did the malware authors put them there?
Let’s have a look at a simpler case:

[Screenshot: If_else_semicolon]
All modern browsers consider this code invalid JavaScript and won’t execute even a single line of it. IE, on the other hand, considers it perfectly legitimate JavaScript and will execute the alert function with x=3.
So why did the malware author modify heaplib like this? It should be quite clear now that:
  1. It can be used as an evasion technique, avoiding unnecessary heap spraying on browsers that aren’t relevant to this specific CVE.
  2. It can be used as a method to trick various dynamic analysis engines such as Wepawet and JS-Unpack. Such engines usually handle well only strict JavaScript, based on the RFC, without vendor quirks.
Great, so we know what the problem is and what it is good for, but what about a solution?

We tried to get an answer from MS regarding why IE would allow such syntax for JavaScript, and the response was that it is supported in IE versions earlier than 9 and in the compatibility mode of IE 9 and 10. Since compatibility mode can easily be requested by the page (X-UA-Compatible), even users on the most modern versions of Microsoft’s browsers are still vulnerable to this trick.
We learn two things from this event:
  1. Straying too far from standards and supporting all sorts of quirks not only can, but will, turn into a security risk.
  2. Malware authors continue their efforts not only to discover new vulnerabilities, but also to find interesting ways to evade security engines.
Unfortunately, it is not possible to force IE to use standards mode for Internet sites, so our best advice for IE users is to keep the system up to date with the latest security updates at all times.

Tuesday, August 28, 2012

[News] Java 7 0-Day vulnerability information and mitigation.

[News] Java 7 0-Day vulnerability information and mitigation.: The cat is out of the bag. There is a 0-day out there currently being used in targeted attacks. The number of these attacks has been relatively low, but it is likely to increase because this is a fast and reliable exploit that can be used in drive-by attacks and all kinds of links in emails. Interestingly, Mark Wuergler mentioned on August 10 that the VulnDisco SA CANVAS exploit pack now has a new Java 0-day. It makes you wonder whether it is the same exploit, either leaked or found in the wild and then added to the CANVAS pack, or whether it is totally unrelated and there are now two 0-day exploits.
The purpose of this post is not to provide the vulnerability analysis or samples, but to offer additional information that may help prevent infections on some targeted networks. We all know what kind of damage Java vulnerabilities can cause when used in drive-by exploits or in exploit packs. We believe that revealing technical vulnerability details in the form of a detailed technical analysis is dangerous, and releasing working exploits before the patch is vain and irresponsible.
The Oracle patch cycle is 4 months (the middle of February, June, and October), with bug fixes 2 months after each patch. The next patch day is October 16 - almost two months away. Oracle almost never issues out-of-cycle patches, but hopefully they will consider this one serious enough to do so.
We have been in contact with Michael Schierl, the Java expert who discovered a number of Java vulnerabilities, including the recent Java Rhino CVE-2011-3544 / ZDI-11-305 and CVE-2012-1723. We asked him to have a look at this latest exploit. Michael sent his detailed analysis, which we will publish in the near future, and a patch, which we offer on a per-request basis today.
The reason for the limited release is that this patch can be reversed, making the job of exploit creation easier, which is certainly not our goal.
Atif Mushtaq from FireEye covered the payload part of the exploit, which is helpful and something to look out for if you are protecting your network or your customers. We should note that attackers are not limited to .net addresses and have already used other domains and IP addresses.

Monday, August 27, 2012

Post Exploitation Command Lists - Request to Edit

Post Exploitation Command Lists - Request to Edit:
The post exploitation command lists:
Linux/Unix/BSD Post Exploitation:
https://docs.google.com/document/d/1ObQB6hmVvRPCgPTRZM5NMH034VDM-1N-EWPRz2770K4/edit
Windows Post Exploitation:
https://docs.google.com/document/d/1U10isynOpQtrIK6ChuReu-K1WHTJm4fgG3joiuz43rw/edit
OSX Post Exploitation:
https://docs.google.com/document/d/10AUm_zUdAQGgoHNo_eS0SO1K-24VVYnulUD2x3rJD3k/edit
and the newly added Obscure System's Post Exploitation:
https://docs.google.com/document/d/1CIs6O1kMR-bXAT80U6Jficsqm0yR5dKUfUQgwiIKzgc/edit
and Metasploit Post Exploitation:
https://docs.google.com/document/d/1ZrDJMQkrp_YbU_9Ni9wMNF2m3nIPEA_kekqqqA2Ywto/edit
have been a weekly upkeep for me, with so many… I don't know what to call it, 'undesirable edits' to them: bad copies, bad pastes, formatting issues, and sometimes deletion or vandalism of the docs. Anyway, I have finally broken down and removed 'world' editable permissions. Anyone who has these links can still comment on and copy the docs, but they can no longer edit them directly.
If you would like to contribute, please shoot me a tweet, an email, a… anything, and I will gladly add you to the edit permissions. Honestly, it just became so overwhelming that every time I thought to add something I would cringe away, because I knew I'd spend most of my time fixing them.
Anyway, enough of my crying. Please, if you have stuff to add or are just really good at fixing up formatting, let me know and I'll add you to the editors list.
Thanks,
mubix

Thursday, August 23, 2012

[News] Some Signs Point to Shamoon as Malware in Aramco Attack

[News] Some Signs Point to Shamoon as Malware in Aramco Attack: While researchers continue to dig into the Shamoon malware, looking for its origins and a complete understanding of its capabilities, a group calling itself the Cutting Sword of Justice is claiming responsibility for an attack on the massive Saudi oil company Aramco, which some experts believe employed Shamoon to destroy data on thousands of machines.
The attack on Aramco occurred on August 15, taking the main Web site of Saudi Aramco offline. Officials at the company said that the attack affected some of the company's workstations, but did not have any effect on oil production or on the main Aramco networks. The attackers claiming responsibility for the incident dispute that, saying that they deployed a destructive piece of malware that erased data on infected machines and then made them unusable.
"As previously said by hackers, about 30000 (30k) of clients and servers in the company were completely destroyed. Symantec, McAfee and Kaspersky wrote a detail analysis about the virus, good job. Hackers published the range of internal clients IPs which were found in the internal network and became one of the phases of the attack target," the group said in a post on Pastebin shortly after the attack.

Monday, August 20, 2012

Endpoint Security Management Buyer’s Guide: Ongoing Controls—File Integrity Monitoring

Endpoint Security Management Buyer’s Guide: Ongoing Controls—File Integrity Monitoring:
After hitting on the first of the ongoing controls, device control, we now turn to File Integrity Monitoring (FIM). Also called change monitoring, this entails watching files to see if and when they change. This capability is important for endpoint security management. Here are a few scenarios where FIM is particularly useful:

  • Malware detection: Malware does many bad things to your devices. It can load software, and change configurations and registry settings. But another common technique is to change system files. For instance, a compromised IP stack could be installed to direct all your traffic to a server in Eastern Europe, and you might never be the wiser.
  • Unauthorized changes: These may not be malicious but can still cause serious problems. They can be caused by many things, including operational failure and bad patches, but ill intent is not necessary for exposure.
  • PCI compliance: Requirement 11.5 in our favorite prescriptive regulatory mandate, the PCI-DSS, requires file integrity monitoring to alert personnel to unauthorized modification of critical system files, configuration files, or content files. So there you have it – you can justify the expenditure with the compliance hammer, but remember that security is about more than checking the compliance box, so we will focus on getting value from the investment as well.

FIM Process


Again we start with a process that can be used to implement file integrity monitoring. Technology controls for endpoint security management don’t work well without appropriate supporting processes.

  • Set policy: Start by defining your policy, identifying which files on which devices need to be monitored. But there are tens of millions of files in your environment so you need to be pretty savvy to limit monitoring to the most sensitive files on the most sensitive devices.
  • Baseline files: Then ensure the files you assess are in a known good state. This may involve evaluating version, creation and modification date, or any other file attribute to provide assurance that the file is legitimate. If you declare something malicious to be normal and allowed, things go downhill quickly. The good news is that FIM vendors have databases of these attributes for billions of known good and bad files, and that intelligence is a key part of their products.
  • Monitor: Next you actually monitor usage of the files. This is easier said than done because you may see hundreds of file changes on a normal day. So knowing a good change from a bad change is essential. You need a way to minimize false positives from flagging legitimate changes to avoid wasting everyone’s time.
  • Alert: When an unauthorized change is detected you need to let someone know. The short sketch after this list ties the baseline, monitor, and alert steps together.
  • Report: FIM is required for PCI compliance, and you will likely use that budget to buy it. So you need to be able to substantiate effective use for your assessor. That means generating reports. Good times.
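Here is that sketch: a minimal Python illustration of the baseline/monitor/alert loop. The monitored paths and baseline file name are hypothetical, and a real FIM product tracks many more attributes (versions, permissions, signers) with vendor-supplied intelligence behind it:

import hashlib, json, os

MONITORED = ["/etc/passwd", "/etc/hosts"]   # hypothetical policy: files to watch
BASELINE = "fim_baseline.json"              # hypothetical baseline store

def snapshot(paths):
    # Hash each monitored file that exists.
    return {p: hashlib.sha256(open(p, "rb").read()).hexdigest()
            for p in paths if os.path.exists(p)}

if not os.path.exists(BASELINE):
    # "Baseline files" step: record the known good state.
    json.dump(snapshot(MONITORED), open(BASELINE, "w"))
else:
    # "Monitor" and "alert" steps: flag any file whose hash changed.
    baseline = json.load(open(BASELINE))
    for path, digest in snapshot(MONITORED).items():
        if baseline.get(path) != digest:
            print(f"ALERT: {path} changed since baseline")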

Technology Considerations


Now that you have the process in place, you need some technology to implement FIM. Here are some things to think about when looking at these tools:

  • Device and application support: Obviously the first order of business is to make sure the vendor supports the devices and applications you need to protect. We will talk about this more under research and intelligence, below.
  • Policy granularity: You will want to make sure your product can support different policies by device. For example, a POS device in a store (within PCI scope) needs to have certain files under control, while an information kiosk on a segmented Internet-only network in your lobby may not need the same level of oversight. You will also want to be able to set up those policies based on groups of users and device types (locking down Windows XP tighter, for example, as it doesn’t have the newer protections in Windows 7).
  • Small footprint agent: In order to implement FIM you will need an agent on each protected device. Of course there are different definitions of what an ‘agent’ is, and whether one needs to be persistent or it can be downloaded as needed to check the file system and then removed – a “dissolvable agent”. You will need sufficient platform support as well as some kind of tamper proofing of the agent. You don’t want an attacker to turn off or otherwise compromise the agent’s ability to monitor files – or even worse, to return tampered results.
  • Frequency of monitoring: Related to the persistent vs. dissolvable agent question, you need to determine whether you require continuous monitoring of files or batch assessment is acceptable. Before you respond “Duh! Of course we want to monitor files at all times!” remember that to take full advantage of continuous monitoring, you must be able to respond immediately to every alert. Do you have 24/7 ops staff ready to pounce on every change notification? No? Then perhaps a batch process could work.
  • Research & Intelligence: A large part of successful FIM is knowing a good change from a potentially bad change. That requires some kind of research and intelligence capability to do the legwork. The last thing you want your expensive and resource-constrained operations folks doing is assembling monthly lists of file changes for a patch cycle. Your vendor needs to do that. But it’s a bit more complicated, so here are some other notes on detecting bad file changes.

    • Change detection algorithm: Is a change detected based on file hash, version, creation date, modification date, or privileges? Or all of the above? Understanding how the vendor determines a file has changed enables you to ensure all your threat models are factored in.
    • Version control: Remember that even a legitimate file may not be the right one. Let’s say you are updating a system file, but an older legitimate version is installed. Is that a big deal? If the file is vulnerable to an attack it could be, so ensuring that versions are managed by integrating with patch information is also a must.
    • Risk assessment: It’s also helpful if the vendor can assess different kinds of changes for potential risk. Replacing the IP stack is a higher-risk change than updating an infrequently used configuration file. Again, either could be bad, but you need to prioritize, and a first cut from the vendor can be useful.
  • Forensics: In the event of a data loss you will want a forensics capability such as a log of all file activity. Knowing when different files were accessed, by what programs, and what was done can be very helpful for assessing the damage of an attack and nailing down the chain of events which resulted in data loss.
  • Closed loop reconciliation: Thousands of file adds, deletes, and changes happen every day – and most are authorized and legitimate. That said, for both compliance and operational reliability, you want to be able to reconcile the changes you expect against the changes that actually happened. During a patch cycle, a bunch of changes should have happened. Did all of them complete successfully? We mentioned verification in the patch management process, and FIM technology can provide that reconciliation as well.
  • Integration with Platform: There is no reason to reinvent the wheel – especially for cross-functional capabilities such as discovery, reporting, agentry, and agent deployment/updating/maintenance – so leverage your endpoint security management platform to streamline implementation and for operational leverage.

This description of file integrity monitoring completes our controls discussion. Between periodic and ongoing controls, you now have a fairly comprehensive toolset for managing the security of endpoints in your environment. We have alluded extensively to the Endpoint Security Management platform, and in the next post we will flesh it out – as well as mention buying considerations.

- Mike Rothman

Thursday, August 16, 2012

ShellBag Analysis

ShellBag Analysis: What are "shellbags"?
To get an understanding of what "shellbags" are, I'd suggest that you start by reading Chad Tilbury's excellent SANS Forensic blog post on the topic.  I'm not going to try to steal Chad's thunder...he does a great job of explaining what these artifacts are, so there's really no sense in rehashing everything.

Discussion of this topic goes back well before Chad's post, with this DFRWS 2009 paper.  Before that, John McCash talked about ShellBag Registry Forensics on the SANS Forensics blog.  Even Microsoft mentions the keys in question in KB 813711.

Without going into a lot of detail, a user's shell window preferences are maintained in the Registry, and the hive and keys being used to record these preferences will depend upon the version of the Windows operating system.  Microsoft wants the user to have a great experience while using the operating system and applications, right?  If a user opens up a window on the Desktop and repositions and resizes that window, how annoying would it be to shut the system down, and have to come back the next day and have to do it all over again?  Because this information is recorded in the Registry, it is available to analysts who can parse and interpret the data. As such, "ShellBags" is sort of a colloquial term used to refer to a specific area of Registry analysis.

Tools such as Registry Decoder, TZWorks sbag, and RegRipper are capable of decoding and presenting the information available in the ShellBags.

How can ShellBags help an investigation?
I think that one of the biggest issues with ShellBags analysis is that, much like other lines of analysis that involve the Windows Registry, they're poorly understood, and as such, underutilized.  Artifacts like the ShellBags can be very beneficial to an examiner, depending upon the type of examination they're conducting.  Much like the analysis of other Windows artifacts, ShellBags can demonstrate a user's access to resources, often well after that resource is no longer available.  ShellBag analysis can demonstrate access to folders, files, external storage devices, and network resources.  Under the appropriate conditions, the user's access to these resources will be recorded and persist well after the accessed resource has been deleted, or is no longer accessible via the system.

If an organization has an acceptable use policy, ShellBags data may demonstrate violations of that policy, by illustrating access to file paths with questionable names, such as what may be available via a thumb drive or DVD.  Or, it may be a violation of acceptable use policies to access another employee's computer without their consent, such as:

Desktop\My Network Places\user-PC\\\user-PC\Users

...or to access other systems, such as:

Desktop\My Network Places\192.168.23.6\\\192.168.23.6\c$

Further, because of how .zip files are handled by default on Windows systems, ShellBag analysis can illustrate that a user not only had a zipped archive on their system, but that they opened it and viewed subfolders within the archive.

This is what it looks like when I accessed Zip file subfolders on my system:

Desktop\Users\AppData\Local\Temp\RR.zip\DVD\RegRipper\DVD

Access to devices will also be recorded in these Registry keys, including access to specific resources on those devices.

For example, from the ShellBags data available on my own system I was able to see where I'd accessed an Android system:


Desktop\My Computer\F:\Android\data


...as well as a digital camera...

Desktop\My Computer\Canon EOS DIGITAL REBEL XTi\CF\DCIM\334CANON

...and an iPod.

Desktop\My Computer\Harlan s iPod\Internal Storage\DCIM

Another aspect of ShellBags analysis that can be valuable to an examination is the analyst developing an understanding of the actual data structures, referred to as "shell item ID lists", used within the ShellBags.  It turns out that these data structures are not only used in other values within the Registry, but also in other artifacts, such as Windows shortcut/LNK files, as well as within Jump List files.  Understanding and being able to recognize and parse these structures lets an analyst get the most out of the available data.
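For a small taste of where this data lives, here is a minimal sketch in Python (Windows only, standard-library winreg, reading the live registry rather than an offline hive) that walks the BagMRU tree and dumps the raw shell item blobs.  The registry path assumes a Windows 7-style NTUSER.DAT layout; tools like RegRipper and sbag go much further and actually decode the shell item ID lists:

import winreg

def walk(root, path, depth=0):
    with winreg.OpenKey(root, path) as key:
        nsubkeys, nvalues, _ = winreg.QueryInfoKey(key)
        for i in range(nvalues):
            name, data, _ = winreg.EnumValue(key, i)
            if name.isdigit():   # the numbered values hold the shell item blobs
                print("  " * depth + f"{path}\\{name}: {data[:8].hex()}...")
        for i in range(nsubkeys):
            walk(root, path + "\\" + winreg.EnumKey(key, i), depth + 1)

walk(winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\Shell\BagMRU")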

Locating Possible Data Exfil/Infil Paths via ShellBags
As information regarding access to removable storage devices and network resources can be recorded in the ShellBags, this data may be used to demonstrate infiltration/infection or data exfiltration paths.

For example, one means of getting data off of a system is via FTP.  Many Windows users aren't aware that Windows has a command line FTP client, although some are; in my experience, it's more often intruders who are conversant in the use of the command line client.  One way that analysts look for the use of the FTP client (i.e., ftp.exe) is via Prefetch files, as well as via the MUICache Registry key.

However, another way to access FTP on a Windows system is via the Windows Explorer shell itself.  I've worked with a couple of organizations that used FTP for large file transfers and had us use this process rather than use the command line client.   A couple of sites provide simple instructions regarding how to use FTP via Windows Explorer:

MS FTP FAQ
HostGator: Using FTP via Windows Explorer

Here's what a URI entry looks like when parsed (obfuscated, of course):

Desktop\Explorer\ftp://www.site.com

One of the data types recorded within the ShellBags keys is a "URI", and this data structure includes an embedded time stamp, as well as the protocol (i.e., ftp, http, etc.) used in the communications.  The embedded time stamp appears (via timeline analysis) to correlate with when the attempt was made to connect to the FTP site.  If the connection is successful, you will likely find a corresponding entry for the site in the NTUSER.DAT hive, in the path:

HKCU\Software\Microsoft\FTP\Accounts\www.site.com

Much like access via the keyboard, remote access to the system that provides shell-based control, such as via RDP, will often facilitate the use of other graphical tools, including the use of the Windows Explorer shell for off-system communications.  ShellBag analysis may lead to some very interesting findings, not only with respect to what a user may have done, but also other resources an intruder may have accessed.

Summary
Like other Windows artifacts, ShellBags persist well after the accessed resources are no longer available.  Knowing how to parse and interpret this Registry data can therefore give an analyst visibility into resources, and user activity, that no longer exist anywhere else on the system.

Parsing ShellBags data can provide you with indications of access to external resources, potentially revealing one avenue of off-system communications.  If the concern is data infiltration ("how did that get here?"), you may find indications of access to an external resource, followed by indications of access to Zip file subfolders.  ShellBags can be used to demonstrate access to resources where no other indications are available (because they weren't recorded somewhere else, or because they were intentionally deleted), or they can be used in conjunction with other resources to build a stronger case.  Incorporating ShellBag data into timelines for analysis can also provide some very valuable insight that might not otherwise be available to the analyst.

Resources
ForensicsWiki Shell Item page

Tuesday, August 14, 2012

FISMA Guidance: Minimizing Compromise Among Government Agencies

FISMA Guidance: Minimizing Compromise Among Government Agencies:
In addition to insomnia and a lack of Star Trek reruns giving me reason to read through U.S. Government regulations, air travel provides another opportunity to peruse them.  Several areas of the NIST 800-53 Revision 4 draft continue to pique my curiosity.  Today, it is System and Information Integrity control SI-14: Non-Persistence.
SI-14 advocates defeating persistence mechanisms by re-imaging a system, potentially on each reboot, to minimize the length of time a compromised system remains exposed.  The concept is elegant: every time I log on to my system, my session is built from a read-only “gold image”, greatly reducing the window of opportunity for an advanced attacker to penetrate deeper into the infrastructure.  Using this approach, executables load from read-only sources, ensuring that even if an attacker injected malware into running processes, those processes will load cleanly the next time.
If your environment adopts this approach, SI-14’s recommendation may provide a safety net for recovering from successful attacks.  Also, because system reboots fix any undesired alterations by bringing up a known-good state, SI-14’s guidance may eliminate some IT problems with users who accidentally trash their environments.
I also see challenges:
  • In my travels I have seen a significant number of organizations support a mobile or a distributed workforce, which makes administering master images extremely difficult.  How do users get the read-only image?  Would the difficulties in distributing and maintaining master images become such a burden that the IT groups would significantly delay system updates that may close vulnerabilities or correct execution bugs in the current code set? How will the organization address mobile users accessing IT resources through a VPN such that compromised systems do not provide the attacker easy entry?
  • The SI-14 control advocates providing a limited window of opportunity for the APT to gain a permanent foothold in the environment.  We know that one of the APT’s early-stage goals entails compromising user credentials (recall Mandiant’s 2012 M-Trends report, which shows that 100% of the breaches we investigated involved compromised credentials - http://www.mandiant.com/resources/m-trends).  The APT typically starts the process of harvesting credentials as quickly as five minutes after gaining access, giving them adequate time to compromise valid credentials within a typical eight-hour work day.
  • Not every system can be non-persistent.  End-user data must be stored somewhere and the systems housing the data must get connected to the user’s session when it starts up.  This provides a vector for the APT to establish a persistent foothold.
  • Finally, and most importantly in my mind, in the event of a successful attack, utilization of non-persistent sessions would eradicate evidence of the initial attack.  The trail would be broken, making it more difficult for forensic investigators to determine how the attacker penetrated the victim’s defenses.  This takes me back to the “don’t re-image when you find malware until you’ve determined what it allowed the attacker to do” advice incident responders often give.
800-53 Revision 4’s introduction of Control SI-14 – Non-Persistence describes an interesting approach to minimizing an environment’s exposure to the APT, and I like how this control directly acknowledges the inevitability of breaches.  However, organizations should not conclude that this strategy eliminates the need to explore systems for signs of compromise, and to forensically investigate potential breaches.
I am interested to hear about successes and challenges anyone has had with implementing non-persistent sessions as described in SI-14 of 800-53’s Revision 4 draft.  How the challenges are overcome in actual production enterprises would be another great topic of discussion!

Recommended: An Insider's View of China and Sina Weibo

Recommended: An Insider's View of China and Sina Weibo: Do you want to better understand Chinese hackers? If so, then you really need to better understand China.

Context matters.

Rui Chenggang is the anchor of "BizChina", a business show on China's CCTV International. He's a man with a daily television audience of up to 300 million people, and he has nearly 5 million fans on Sina Weibo, China's version of Twitter. He's possibly the most well known guy you've never heard of before.

Rui Chenggang

Chenggang's two-part series, China's Economy – The Insider's View, on the BBC World Service offers one of the more distinctive points of view we've encountered recently. It's definitely worth a listen.

China's Economy – The Insider's View: Episode 1; Episode 2

And regarding Sina Weibo: to better understand a society, you should better understand its use of social media. For more analysis of how Chinese use of Weibo is affecting public activism, check out another two-part series from the World Service.

It Started With A Tweet: Episode 1; Episode 2

The series presents several interesting case studies and mirrors the findings of a recent Harvard paper: How Censorship in China Allows Government Criticism but Silences Collective Expression [PDF]

Happy listening.

Upgrade From BackTrack 5 R2 to BackTrack 5 R3

Upgrade From BackTrack 5 R2 to BackTrack 5 R3:
Recently we released the long-awaited BackTrack 5 R3, but for those of you who don’t want to start fresh with a new installation, have no fear: you can easily upgrade your existing installation of R2 to R3.
Our primary focus with this release was on the implementation of various bug fixes, numerous tool upgrades, and well over 60 new additions to the BackTrack suite. Because of this, the upgrade path to BackTrack 5 R3 is relatively quick and painless.

First, you will want to make sure that your existing system is fully updated:


apt-get update && apt-get dist-upgrade


With the dist-upgrade finished, all that remains is to install the new tools that have been added for R3. An important point to keep in mind is that there are slight differences between the 32-bit and 64-bit tool lists, so make sure you choose the right one.

32-Bit Tools



apt-get install libcrafter blueranger dbd inundator intersect mercury cutycapt trixd00r artemisa rifiuti2 netgear-telnetenable jboss-autopwn deblaze sakis3g voiphoney apache-users phrasendrescher kautilya manglefizz rainbowcrack rainbowcrack-mt lynis-audit spooftooph wifihoney twofi truecrack uberharvest acccheck statsprocessor iphoneanalyzer jad javasnoop mitmproxy ewizard multimac netsniff-ng smbexec websploit dnmap johnny unix-privesc-check sslcaudit dhcpig intercepter-ng u3-pwn binwalk laudanum wifite tnscmd10g bluepot dotdotpwn subterfuge jigsaw urlcrazy creddump android-sdk apktool ded dex2jar droidbox smali termineter bbqsql htexploit smartphone-pentest-framework fern-wifi-cracker powersploit webhandler


64-Bit Tools



apt-get install libcrafter blueranger dbd inundator intersect mercury cutycapt trixd00r rifiuti2 netgear-telnetenable jboss-autopwn deblaze sakis3g voiphoney apache-users phrasendrescher kautilya manglefizz rainbowcrack rainbowcrack-mt lynis-audit spooftooph wifihoney twofi truecrack acccheck statsprocessor iphoneanalyzer jad javasnoop mitmproxy ewizard multimac netsniff-ng smbexec websploit dnmap johnny unix-privesc-check sslcaudit dhcpig intercepter-ng u3-pwn binwalk laudanum wifite tnscmd10g bluepot dotdotpwn subterfuge jigsaw urlcrazy creddump android-sdk apktool ded dex2jar droidbox smali termineter multiforcer bbqsql htexploit smartphone-pentest-framework fern-wifi-cracker powersploit webhandler


That’s all there is to it! Once the new tools have been installed, you are up and running with BackTrack 5 R3. As always, if you come across any bugs or issues, please submit tickets via our BackTrack Redmine Tracker.

Monday, August 13, 2012

Pragmatic WAF Management: Policy Management

Pragmatic WAF Management: Policy Management:
In order to get value from your WAF investment (which basically means blocking threats, keeping unwanted requests and malware from hitting your applications, and virtually patching known vulnerabilities in the application stack), the WAF must be regularly tuned. As we mentioned in the introductory post, a WAF is not a “set and forget” tool; it is a security platform that requires ongoing adjustment to new and evolving threats.

As we flesh out the process we presented in the WAF Management Process post, let’s dig into the management of policies, and more specifically how to tune them to defend your site. But first it’s worth spending some time discussing the different types of policies at your disposal. Policies fall into two categories: a ‘black list’ of things you don’t want hitting your applications, which is basically attacks you know about, and a ‘white list’ of activities that are permissible, based on the characteristics of the specific application. These negative (black list) and positive (white list) security models complement one another to fully protect your applications.

Negative Security


Negative security models should be familiar, as this is the model implemented by intrusion prevention devices at the network layer. It works by detecting explicit patterns of known malicious behavior. Things like site scraping, injection attacks, XML attacks, suspected botnet/Tor nodes, and even blog spam are universal application attacks that affect all sites. Most of these policies come ‘out of the box’ from the vendors, who research and develop ‘signatures’ for their customers. A signature explicitly describes an attack, and signatures are a common approach to identifying SQL injection and buffer overflow attacks. The downside is that this method is fragile: any variation of the attack will no longer match the signature and thus bypasses the WAF. This means signatures should only be used when you can reliably and deterministically describe an attack.

Because of this limitation, vendors provide a myriad of other detection options, such as heuristics, reputation scoring, evasion technique detection, and several proprietary methods used to qualitatively determine whether there is an attack. Each method has its own strengths and should be applied appropriately. These methods can be used in conjunction with one another to produce a risk score for incoming requests, and to block a request when it just does not look right. But the devil is in the details: there are literally thousands of variations of attacks, and deciding how to apply policies to detect and stop them is quite difficult.

Finally, fraud detection, business logic attack, and data leakage policies need to be adapted to the specific usage models of your web application to be effective. These attacks are designed to find flaws in the way application developers implemented their code, looking for gaps in the way they enforce process and transaction state. Examples might include issuing order and cancellation requests in rapid succession to see if the web server can be confused, altering address information in a shopping cart, replay attacks, or changing the order of events. Policies to detect fraud will, for the most part, be provided by you. They are constructed with the analysis techniques mentioned above, but rather than focusing on the structure and use of HTTP and XML grammars, they examine user behavior as it relates to the type of transaction being performed. Again, this type of policy requires an understanding of how your web application works, along with the application of appropriate detection techniques.

Positive Security


The other side of the policy coin is the positive security model, or ‘white listing’. Yes, network folks, this is the metaphor implemented in your firewalls. In this model you catalog the legitimate application traffic, ensure those actions are actually legitimate, and set up the policies to block anything not on the list. In essence you block anything that you don’t explicitly allow.

The good news is that this approach is very effective at tripping up malicious requests you’ve never seen before (0-day attacks) without having to explicitly code a signature for it. This is also an excellent way to pare down the universe of all total threats into a smaller, more manageable subset of specific threats you need to account for with your black list, which are basically ways that authorized application actions (like GET and POST) can be gamed.

The bad news is that applications are dynamic and change on a regular basis, so unless you update your white list with each application update, the WAF will effectively disable new application features. Regardless, you will be using both approaches in tandem; without both, your workload goes up and your chance of staying secure goes way down.
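To make the two models concrete, here is a minimal sketch in Python of how they work in tandem; the whitelist entries and blacklist patterns are hypothetical, and a real WAF evaluates far richer request attributes than a method/path pair:

import re

WHITELIST = {("GET", "/catalog"), ("POST", "/cart/checkout")}   # positive model
BLACKLIST = [re.compile(p, re.I) for p in                       # negative model
             (r"union\s+select", r"<script\b", r"\.\./")]

def evaluate(method, path, body=""):
    if (method, path) not in WHITELIST:
        return "block: not an allowed application action"
    if any(sig.search(body) for sig in BLACKLIST):
        return "block: matched attack signature"
    return "allow"

print(evaluate("POST", "/cart/checkout", "qty=1"))                 # allow
print(evaluate("POST", "/cart/checkout", "id=1 UNION SELECT pw"))  # block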

People Manage Policies


There is another requirement to address before adjusting your policies: assigning someone to manage them. In-house construction of new WAF signatures, especially at small and medium businesses, is not that common. Most organizations depend on the WAF vendor to do the research and update the policies accordingly. It’s a little like anti-virus: companies theoretically could write their own A/V signatures, but they don’t. They don’t monitor CERT advisories or any other source looking for issues they should protect their applications against. In fact, they typically do not have the in-house expertise to write these policies, even if they wanted to. And if you want your WAF to perform better than A/V, which usually addresses only about 30% of the viruses encountered, you’ll need to adjust your policies to your application environment.

That means you need someone who understands the rule ‘grammars’ and how web protocols work. That person must also understand what type of information should not leave the company, what constitutes bad behavior, and the risks your web applications pose to the business. Having someone skilled enough to write and manage WAF policies is a prerequisite for success. It can be an employee or a 3rd party, or perhaps you pay the vendor to assist, but you must have a skilled resource to manage WAF policies on an ongoing basis. There really is no shortcut here: either you have someone knowledgeable and dedicated to this task, or you will be reliant on the canned policies that come with the WAF, which aren’t good enough. So the critical success factor in managing policies is to find a person (or people) who can manage the WAF, get them training if need be, and give them enough time to keep the policies up to date.

So what does this person need to do? Let’s break it down simply:

Baseline Application Traffic


The first step is for this person to learn how the applications are deployed and what they are doing. Oh sure, some of you likely installed the WAF, updated the attack signatures, then let it rip. Very little effort, but very little security. And we hear plenty of stories about frustrated users who botched their first deployment, but that’s water under the bridge. Taking a fresh look involves letting the WAF observe your application traffic for a period of time. All WAF platforms can be deployed in ‘monitor only’ mode to collect traffic and alert based on the policies, but some vendors also provide a specific ‘learning’ mode that assists in classifying application usage, users, structure, and services. These baselines are your roadmap for what the application actually does; they give you the information to know what you will need to secure, and provide clues as to the best way to approach writing security policies. This initial discovery process is really important to ensure your initial ruleset covers what’s really out there, as opposed to what you think is out there. It’s the basis for creating an application white list of acceptable actions (the positive security model) for each application. There is really no other effective way to understand what ‘normal’ application behavior is for all the applications on your network, and to get a handle on the different applications and services you need to protect.
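The learning idea itself is simple. Here is a minimal sketch in Python (with made-up traffic) of turning observed requests into a candidate white list that a human then reviews:

from collections import Counter

# Hypothetical observed (method, path) pairs from a monitoring period.
observed = [
    ("GET", "/catalog"), ("GET", "/catalog"), ("POST", "/cart/checkout"),
    ("GET", "/admin/debug"),   # seen once: verify before whitelisting
]

for (method, path), count in Counter(observed).most_common():
    note = "" if count > 1 else "   <- seen once, verify legitimacy"
    print(f"{count:4d}  {method:4s} {path}{note}")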

Understand the Applications


Now that you have cataloged the application traffic, you want to understand the transactions being processed and get an idea of the risks and threats you face in the context of that business application. In essence, you want to map the baselines into a set of rules, and that means understanding the application activities and transactions in order to select the appropriate type of rule. Half the battle is understanding the application, and half the battle is understanding how attackers will undermine those functions. Understanding transactions is not something you’ll have coming into this process unless you possess deep knowledge of the application in advance, so you’ll need to work closely with DevOps to understand how the application works and how events are processed. Part of your job will be to work with the people who possess the information you need to understand the application, and to document the key functions that will be subject to attack.

Once you understand the application functions, and have a pretty good idea of what attacks to expect, you need to determine how you will counter the threats. For example, if your application has a known defect that cannot be addressed through code changes in a timely fashion, you have several options:

  • You can remove the offending function from the white list of allowed activity, thereby removing the threat. The downside is you’ve also removed an app function or service by doing this, which may break the application.
  • You can write a specific signature that defines the attack so you can detect when someone is trying to exploit the vulnerability. This approach will stop known attacks, but it assumes you can account for all potential variations and evasion techniques, and it assumes that legitimate use will not break the application.
  • You can use one or more heuristic tests to look for clues that indicate abnormal application use or an attack. Heuristics might include malformed requests, odd customer geolocation, customer IP reputation, known areas of application weakness, requests for sensitive data, or any number of other indicators that suggest an attack.

Lots of customers we have spoken with remain apprehensive about this part of the process. They have a handle on the application functionality, so they can create the white lists for their applications. What they lack is knowledge of how attackers work, so they are not clear on what threats to look for or how to write the specific policies. Fear not, as there are several ways to fill this knowledge gap.

The first thing to understand is that the vendor will be most helpful in understanding attackers, and many of the common black list policies will come from your vendor. We recommend you review the set of policies that come standard with the WAF. This will give you a very good idea of the types of attacks covered (spoofing, repudiation, tampering). The embedded policies reflect the long organizational memory of the WAF vendor regarding web app attacks, and how they choose to address those threats. It should help you understand how attackers work and provide clues on how to approach writing your own policies. Second, there are publicly available lists of attacks from OWASP, CERT, and other organizations you can familiarize yourself with. Some even provide example attacks, WAF evasion examples, and rules to detect them. Third, you can hire pen testers to probe the applications and identify weaknesses. This form of examination is excellent, as it approaches exploitation – and WAF evasion – in the same manner an attacker would. Finally, if you have doubts about something, you can always buy professional services to get you over the hump.

Another daunting part of this exercise is the scope of the problem. It’s one thing if you’re only talking about a handful of applications; it’s entirely different when you consider 200 applications. Or 1000. The rules you implement for any given application will be different, so this process is iterative. When facing an overwhelming number of applications, it’s best to take a divide-and-conquer approach; implement basic security across all of the applications, and select a handful of applications to develop more advanced rules. Iteratively work your way through the rest of the applications and services, starting with the apps that pose the greatest risk to your organization. This involves working closely with development teams, which we’ll discuss in the next post to help cut this problem down to size.

To reiterate, if your WAF has a learning mode, use it. Baselining the application and building the profile of what it does will help you plow through the policies much faster. You’ll still need to apply a large dose of common sense to see which rules don’t make sense and what’s missing. You can do this by building threat models for dangerous edge cases and other situations to ensure nothing is missed. But the learning tools speed up the process.

Protect Against Attacks


Now that you’ve selected how you want to block threats to your applications, it’s time to write the rules and apply them to the WAF – or WAFs – along with test cases to verify that the rules you’ve deployed are effective. Deploy your negative and positive rule sets one at a time, then validate that the rules have propagated to each WAF and are in place to protect the applications you’ve cataloged. Don’t assume the rules are in place or that they work; validate with the test cases you’ve developed. Testing is your friend. We mentioned in our introductory post a story shared with us about a company that taught their WAF to accept all attacks as legitimate traffic – essentially by running a pen test while the WAF was in learning mode. The failure was not that they messed up the WAF rules; the failure was that they did not test the resulting rule set, and so did not discover the problem until a 3rd party pen tester probed their system some months later. The lesson here is to validate that your rules are in place and effective.

Also understand this is an ongoing process, especially given both the dynamic attack space (impacting your black list) and application changes (impacting your white list). Your WAF management process needs to continually learn, continually catalog user and application behaviors, and collect metrics along the way. Which metrics are meaningful, and which activities help you glean information, differ across both the customers and vendors we have spoken with. The only consistent thing we found is that you can’t measure your success if you don’t have metrics to measure progress and performance.

And on a final note, writing policies for a negative security model requires a balancing act: you want to ensure that the pattern reflects a specific threat, but its definition must not be so specific that it misses variations of the same attack. Put another way, if Alice is a known attacker, you still want to recognize Alice when she’s wearing a wig and sunglasses. You want to detect the threats without generating false positives or false negatives. That’s easier said than done, which means you’ll be revisiting policies over time to continually ensure the effectiveness of the WAF in protecting your applications.

In our next post we will tackle one of the trickier issues around effective WAF management: working with the application development teams to stay current on what’s happening with the application.

- Adrian Lane

Friday, August 10, 2012

Jeremy Druin has two new Mutillidae/Web Pen-testing videos

Jeremy Druin has two new Mutillidae/Web Pen-testing videos:
Setting User Agent String And Browser Information
Introduction to user-agent switching: This video uses the Firefox add-on "User-Agent Switcher" to modify several settings in the browser that are transmitted in the user-agent string inside HTTP requests. Some web applications will show different content depending on the user-agent setting, making alteration of these settings useful in web pen testing.
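The same idea works outside the browser. Here is a minimal sketch in Python using the requests library (with a hypothetical target URL): the user-agent is just another HTTP header, so it can be set to anything when probing how an application varies its content:

import requests

# Present the site with an iPhone-style user-agent string (example value).
spoofed = {"User-Agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 5_0 like Mac OS X)"}
resp = requests.get("http://testsite.example/", headers=spoofed)
print(resp.status_code, len(resp.text))   # compare against the default response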
Walkthrough Of CBC Bit Flipping Attack With Solution
This video shows a solution to the view-user-privilege-level exercise in Mutillidae. Before viewing, review how XOR works, and more importantly that XOR is its own inverse (if A xor B = C, then it must be true that A xor C = B and also that B xor C = A). The attack in the video takes advantage of the fact that the attacker knows the IV (initialization vector) and the plaintext (user ID). The attack works by flipping each byte in the IV to see what effect is produced on the plaintext (user ID). When the correct byte is located, the intermediate value that the IV byte XORs against is recovered, followed by a determination of the correct byte to inject. The correct value is injected to cause the user ID to change.
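Here is a minimal sketch in Python of the XOR relationship the attack exploits. In CBC mode the first plaintext block is P1 = D(C1) XOR IV, so flipping a bit in the IV flips the same bit in the recovered plaintext; with a known IV and known plaintext the attacker can compute the IV byte that forces a chosen value (the byte values below are made up):

known_plain = ord("1")                 # the known user ID byte
iv_byte = 0x5A                         # the known IV byte at that position (example)
intermediate = iv_byte ^ known_plain   # D(C1) at that position, recovered by XOR
forged_iv = intermediate ^ ord("3")    # IV byte that decrypts the user ID to "3"
print(hex(forged_iv), chr(intermediate ^ forged_iv))   # -> '3'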

Mutillidae is available for download at http://sourceforge.net/projects/mutillidae/. Updates about Mutillidae are tweeted to @webpwnized along with announcements about video releases.

Friday, August 3, 2012

BSides Las Vegas 2012 Videos

BSides Las Vegas 2012 Videos: Link: http://www.irongeek.com/i.php?page=videos/bsideslasvegas2012/mainlist
They have been up on YouTube since Friday, but now I have them indexed, with links to download AVIs from Archive.org. Enjoy. Thanks to all of the BSides Crew for having me out to help record and render the videos. @bsideslv, @banasidhe, @kickfroggy, @quadling, @jack_daniel
The talks are organized into tracks, including Breaking Ground and Proving Ground.

New Series: Pragmatic WAF Management

New Series: Pragmatic WAF Management:
Outside our posts on ROI and ALE, nothing has prompted as much impassioned debate as Web Application Firewalls (WAFs). Every time someone on the Securosis team writes about Web App Firewalls, we create a mini firestorm. The catcalls come from all sides: “WAFs Suck”, “WAFs are useless”, and “WAFs are just a compliance checkbox product.” Usually this feedback comes from pen testers who easily navigate around the WAF during their engagements. The people we poll who manage WAFs – both employees and third-party service providers – acknowledge the difficulty of managing WAF rules and the challenges of working closely with application developers. But at the same time, we constantly engage with dozens of companies dedicated to leveraging WAFs to protect applications. These folks get how WAFs impact their overall application security approach, and are looking for more value from their investment by optimizing their WAFs to reduce application compromises and risks to their systems.

A research series on Web Application Firewalls has been near the top of our research calendar for almost three years now. Every time we started the research, we found a fractured market solving a limited set of customer use cases, and our conversations with many security practitioners brought up strong arguments, both for and against the technology. WAFs have been available for many years and are widely deployed, but their capability to detect threats varies widely, along with customer satisfaction.

Rather than taking our typical “Understanding and Selecting” approach – research papers designed to educate customers on emerging technologies – we will focus this series on how to use a WAF effectively. So we are kicking off a new series on Web Application Firewalls, called “Pragmatic WAF Management.”

Our goal is to provide guidance on the use of Web Application Firewalls: what you need to do to make a WAF effective at countering web-borne threats, and how it helps mitigate application vulnerabilities. This series will dig into the reasons for the wide disparity in opinions on the usefulness of these platforms. That debate really frames the WAF management issues – sometimes disappointment with a WAF stems from the quality of one specific vendor’s platform, but far more often the problems are due to mismanagement of the product.

So let’s get going, delve into WAF management, and document what’s required to get the most from your WAF.

Defining WAF


Before we go any further, let’s make sure everyone is on the same page about what we are describing. We define Web Application Firewalls as follows:

A Web Application Firewall (WAF) monitors requests to, and responses from, web-based applications or services. Rather than general network or system activity, a WAF focuses on application-specific communications and protocols – such as HTTP, XML, and SOAP. WAFs look for threats to applications – such as injection attacks and malicious inputs, tampering with protocol or session data, business logic attacks, or scraping information from the site. All WAFs can be configured purely to monitor activity, but most are used to block malicious requests before they reach the application; sometimes they are even used to return altered results to the requestor.

A WAF is essentially a peer of the application, augmenting its behavior and providing security when and where the application cannot.

Why Buy


For the last three years WAFs have been selling at a brisk pace. Why? Three words: Get. Compliant. Fast. The Payment Card Industry’s Data Security Standard (PCI-DSS) prescribes WAF as an appropriate protection for applications that process credit card data. The standard offers a couple of options: build security into your application, or protect it with a WAF. The validation requirements for WAF deployments are far less rigorous than for secure code development, so most companies opt for WAFs. Plug it in and get your stamp. WAF has simply been the fastest and most cost-effective way to satisfy PCI-DSS.

The reason WAFs existed in the first place – and these days the second most common reason customers purchase them – is that Intrusion Detection Systems (IDS) and general-purpose network firewalls are ineffective for application security. Both are poorly suited to protecting the application layer. To detect application misuse and fraud, a device must understand the dialogue between the application and the end user. WAFs were designed to fill this need; they ‘speak’ application protocols so they can identify when an application is under attack.

But our research shows a change over the last year: more and more firms want to get more value out of their WAF investment. The fundamental change is driven by companies that need to rein in the costs of securing legacy applications under continuing budget pressure. These large enterprises have hundreds or thousands of applications, built before anyone considered ‘hacking’ a threat. You know – those legacy applications that really have no business being on the Internet, but are now “business critical” and exposed to every attacker on the net. The cost to retroactively address these exposures within the applications themselves is often greater than the applications are worth, and the time to fix them is measured in years – or even decades. Deep code-level fixes are not an option – so once again WAFs are seen as a simpler, faster, and cheaper way to bolt security on, rather than patching all the old stuff.

This is why firms which originally deployed WAFs to “Get compliant fast!” are now trying to make their WAFs “Secure legacy apps for less!”

Series Outline


We plan 5 more posts, broken up as follows:

  • The Trouble with WAFs: First we will address the perceived effectiveness of WAF solutions head-on. We will talk about why security professionals and application developers are suspicious of WAFs today, and the history behind those perceptions. We will discuss the “compliance mindset” that drove early WAF implementations, and how compliance buyers can leverage their investment to protect web applications from general threats. We will address the missed promises of heuristics, and close with a discussion of how companies which want to “build security in” (the long term goal) need to balance the efficiency of fixing problems within the development cycle against the need to quickly respond to immediate threats to production Web applications.
  • The WAF Management Process: To provide a framework for solving the problems identified in this post, we will present a high-level process map for pragmatically managing the WAF environment. It will start with steps to manage policies, highlight integration points with the existing application development lifecycle, and finally address securing the WAF.
  • Policy Management: Despite many companies’ hopes, WAF is not a “set and forget” tool – instead it’s a security platform which requires adjustment to new and evolving threats. In order to block threats, keep unwanted requests and malware from hitting your applications, and virtually patch known vulnerabilities in the application stack, WAFs must be regularly tuned. We will discuss how those responsible for WAF management need to work with security to understand common application threats, and what catalysts or events should prompt you to update policies. We will discuss the need to periodically check with the WAF vendor for new policies and capabilities, as well as with CERT/Mitre and other sources for new vulnerability data, and the need to continually revisit the ‘negative’ security model (blocking the bad). We’ll discuss the good, the bad, and the ugly aspects of heuristics for managing policy sets. Finally, we will cover the need to periodically deprecate policies, re-examine deployment options (in-line, bridge, plug-in, out-of-band) and acknowledge that different applications dictate different deployment models.
  • Application Lifecycle Integration: Keeping WAF rule sets current in the face of web application development cycles which push new code weekly – if not daily – can present the biggest difficulty in WAF management. While most WAFs leverage a ‘positive’ security model (block everything which is not explicitly authorized) that does not mean the application activity is static. URLs and features change, and new capabilities are added constantly – all of which inherently impact the WAF ruleset. We will discuss how the WAF management process must be both cognizant of and integrated into the application development methodology to understand what new functions are on the horizon, when new releases will be pushed live, how best to engage with development teams, and how to test rules prior to production release.
  • Securing the WAF: Like most applications, WAFs themselves are subject to manipulation, attacks, and evasion. We will examine how a WAF presents targets of opportunity to attackers and look at the more esoteric aspects of WAF security, including SSL interception, data leakage, and evasion techniques employed by attackers. We will also discuss the benefits of different deployment models and how they affect security and scalability, as well as cloud and hybrid services, to help you understand the requirements for protecting the WAF. We will talk about how pen testing the WAF, in conjunction with your applications, reveals deficiencies in WAF deployments and increases the security of any organization. Finally, we will review ways to automate WAF management and make your job easier.

Next up: The trouble with WAFs, coming tomorrow afternoon.

- Adrian Lane

Endpoint Security Management Buyers Guide: the ESM Lifecycle

Endpoint Security Management Buyers Guide: the ESM Lifecycle:
As we described in The Business Impact of Managing Endpoint Security, the world is complex and only getting more so. You need to deal with more devices, mobility, emerging attack vectors, and virtualization, among other things. So you need to graduate from the tactical view of endpoint security.

Thinking about how disparate operations teams manage endpoint security today, you probably have tools to manage change – functions such as patch and configuration management. You also have technology to control use of the endpoints, such as device control and file integrity monitoring. So you might have 4 or more different consoles to manage one endpoint device. We call that problem swivel chair management – you switch between consoles enough to wear out your chair. It’s probably worth keeping a can of WD-40 handy to ensure your chair is in tip-top shape.

Using all these disparate tools also creates challenges in discovery and reporting. Unless the tools cleanly integrate, if your configuration management system (for instance) detects a new set of instances in your virtualized data center, your patch management offering might not even know to scan those devices for missing patches. Likewise, if you don’t control the use of I/O ports (USB) on the endpoints, you might not know that malware has replaced system files unless you are specifically monitoring those files. Obviously, given ongoing constraints in funding, resources, and expertise, finding operational leverage anywhere is a corporate imperative.

So it’s time to embrace a broader view of Endpoint Security Management and improve integration among the various tools in use to fill these gaps. Let’s take a little time to describe what we mean by endpoint security management, the foundation of an endpoint security management suite, its component parts, and ultimately how these technologies fit into your enterprise management stack.

The Endpoint Security Management Lifecycle


As analyst types, the only thing we like better than quadrant diagrams is a lifecycle. So of course we have an endpoint security management lifecycle. None of these functions are mutually exclusive, and you may not perform all of them. Keep in mind that you can start anywhere; most organizations already have at least some technologies in place to address these problems. It has become rare for organizations to manage endpoint security manually.

We push the lifecycle mindset to highlight the importance of looking at endpoint security management strategically. A patch management product can solve part of the problem, tactically – and the same goes for each of the other functions. But handling endpoint security management as a platform can provide more value than dealing with each function in isolation.

So we drew a picture to illustrate our lifecycle. It shows both periodic functions (patch and configuration management), which typically occur every day or two, and ongoing activities (device control and file integrity monitoring), which need to run all the time – typically using device agents.

Let’s describe each part of the lifecycle at a high level, before we dig down in subsequent posts.

Configuration Management


Configuration management provides the ability for an organization to define an authorized set of configurations for devices in use within the environment. These configurations govern the applications installed, device settings, services running, and security controls in place. This capability is important because a changing configuration might indicate malware manipulation, an operational error, or an innocent and unsuspecting end user deciding it’s a good idea to bring up an open SMTP relay on their laptop. Configuration management enables your organization to define what should be running on each device based on entitlements, and to identify non-compliant devices.
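At its core, a compliance check is a comparison of each device’s reported settings against the authorized baseline. A minimal sketch, with invented setting names and values:

AUTHORIZED_BASELINE = {
    "firewall_enabled": True,
    "smtp_relay_running": False,  # the open-relay-on-a-laptop scenario
    "disk_encryption": True,
}

def audit_device(device_id, reported):
    violations = {
        setting: {"expected": expected, "found": reported.get(setting)}
        for setting, expected in AUTHORIZED_BASELINE.items()
        if reported.get(setting) != expected
    }
    status = "compliant" if not violations else f"NON-COMPLIANT {violations}"
    print(f"{device_id}: {status}")

audit_device("laptop-042", {
    "firewall_enabled": True,
    "smtp_relay_running": True,   # violation: an open SMTP relay is running
    "disk_encryption": True,
})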

Patch Management


Patch management installs fixes from software vendors to address vulnerabilities in software. The best-known patching process comes from Microsoft every month. On Patch Tuesday, Microsoft issues a variety of software fixes to address defects that could result in exploitation of their systems. Once a patch is issued, your organization needs to assess it, figure out which devices need to be patched, and ultimately install the patch within the window specified by policy – typically a few days. The patch management product scans devices, installs patches, and reports on the success and/or failure of the process. Patch Management Quant provides a very detailed view of the patching process, so check it out if you want more information.
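The assess-and-track portion of that process can be as simple as comparing each device’s patch status against the policy window. A sketch with illustrative dates and devices:

from datetime import date, timedelta

PATCH_WINDOW = timedelta(days=5)      # the window specified by policy
issued = date(2012, 8, 14)            # a hypothetical Patch Tuesday

devices = [
    {"id": "web-01", "patched_on": date(2012, 8, 15)},
    {"id": "db-07", "patched_on": None},   # the scan found this patch missing
]

for dev in devices:
    if dev["patched_on"] is None:
        overdue = date.today() > issued + PATCH_WINDOW
        print(f"{dev['id']}: unpatched" + (", PAST POLICY WINDOW" if overdue else ""))
    else:
        print(f"{dev['id']}: patched on {dev['patched_on']}")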

Device Control


End users just love the flexibility their USB ports provide for their ‘productivity’. You know – sharing music with buddies and downloading the entire customer database onto a phone both got much easier once the industry standardized on USB a decade ago. All kidding aside, the ability to easily share data has facilitated better collaboration between employees, while simultaneously greatly increasing the risk of data leakage and malware proliferation. Device control technology enables you to enforce policy for who can use USB ports, and for what; and also to capture what is copied to and from USB devices. As a more active control, monitoring and enforcing device usage policy eliminates a major risk on endpoint devices.
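Conceptually, the enforcement decision is a policy lookup: who is asking, what are they trying to do, and should the transfer be captured for audit. A simplified sketch with invented roles and rules:

POLICY = {
    "it_admin":   {"usb_storage": "allow", "capture_files": True},
    "employee":   {"usb_storage": "read_only", "capture_files": True},
    "contractor": {"usb_storage": "deny", "capture_files": False},
}

def evaluate(role, action):
    rule = POLICY.get(role, {"usb_storage": "deny", "capture_files": False})
    verdict = rule["usb_storage"]
    if verdict == "deny" or (verdict == "read_only" and action == "write"):
        return "blocked"
    return "allowed and captured" if rule["capture_files"] else "allowed"

print(evaluate("employee", "write"))   # blocked: employees get read-only access
print(evaluate("it_admin", "write"))   # allowed and captured for audit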

File Integrity Monitoring


The last control we will mention explicitly is file integrity monitoring, which watches for changes in critical system files. Obviously these files do legitimately change over time – particularly during patch cycles. But between patches those files are generally static, and changes to core functions (such as the IP stack and email client) generally indicate some type of problem. This active control allows you to define a set of files (including both system and other files), gather a baseline of what they should look like, and then watch for changes. Depending on the type of change, you might even roll back the change before more bad stuff happens.
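The baseline-and-compare mechanic is straightforward to sketch. The watchlist paths below are illustrative, and a real agent would record more than a hash (permissions, size, signer) and protect its baseline from tampering:

import hashlib
import json
from pathlib import Path

WATCHLIST = ["/etc/hosts", "/etc/resolv.conf"]  # illustrative system files
BASELINE_FILE = Path("fim_baseline.json")

def hash_file(path):
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def build_baseline():
    baseline = {p: hash_file(p) for p in WATCHLIST if Path(p).exists()}
    BASELINE_FILE.write_text(json.dumps(baseline))

def check():
    baseline = json.loads(BASELINE_FILE.read_text())
    for path, known in baseline.items():
        current = hash_file(path) if Path(path).exists() else None
        if current != known:
            print(f"CHANGED: {path}")   # candidate for an alert or a rollback

build_baseline()
check()   # prints nothing if the files are unchanged since the baseline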

The Foundation


The centerpiece of the ESM platform is an asset management capability and console to define policies, analyze data, and report. A platform should have the following capabilities:

  • Asset Management/Discovery: Of course you can’t manage what you can’t see, so the first critical capability of an ESM platform is sophisticated discovery. When a new device appears on the network the ESM should know about it. That may happen via scanning the organization’s IP address ranges, passively monitoring traffic, or integrating with other asset management repositories (CMDB, vulnerability management, etc). Regardless of how the platform is populated, without a current list of assets you cannot manage endpoint security.
  • Policy Interface: The next key capability of the ESM platform is the ability to set policies – a very broad requirement. You must be able to set standard configurations, patch windows, device entitlements, etc., for groups of devices and users. Obviously it is necessary to balance policy granularity against ease of use, but without an integrated policy encompassing all the platform’s capabilities, you are still stuck in your swivel chair.
  • Analytics: Once policies are defined you need to analyze and alert on them. The key here is the ability to set rules and triggers across all functions of the ESM platform. For instance, if a configuration change occurs shortly after a patch fails, followed by a system file being changed, that might indicate a malware infection. We aren’t talking about the sophisticated multivariate analysis available in enterprise-class SIEMs – just the ability to set alerts based on common attacks you are likely to see (see the sketch after this list).
  • Reporting: You just cannot get around compliance. Many security projects receive funding and resources from the compliance budget, which means your ESM platform needs to report on what is happening. This isn’t novel but it is important. The sooner you can provide the information to make your auditor happy, the sooner you can get back to the rest of your job.
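To make that analytics example concrete, here is a sketch of such a correlation rule. The event schema, sequence, and time window are all invented for illustration:

from datetime import datetime, timedelta

WINDOW = timedelta(minutes=30)
SEQUENCE = ["unauthorized_change", "patch_failed", "system_file_changed"]

events = [  # all reported for the same host by different ESM functions
    {"time": datetime(2012, 8, 20, 9, 0),  "type": "unauthorized_change"},
    {"time": datetime(2012, 8, 20, 9, 5),  "type": "patch_failed"},
    {"time": datetime(2012, 8, 20, 9, 20), "type": "system_file_changed"},
]

def possible_infection(host_events):
    ordered = sorted(host_events, key=lambda e: e["time"])
    types = [e["type"] for e in ordered]
    in_window = ordered[-1]["time"] - ordered[0]["time"] <= WINDOW
    return types == SEQUENCE and in_window

print(possible_infection(events))   # True: flag this host for investigation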

Obviously we could write an entire series just about buying the ESM platform, but that’s enough for now. The key is to look for integration across asset management, policies, analytics, and reporting, to provide the operational leverage you need.

Enterprise Integration Points


Before we move on to specific functions of the platform, keep in mind that no platform really stands alone in an organization. You already have plenty of technology in place, and anything you buy to manage endpoint security needs to play nice with your other stuff. So keep these other enterprise management systems in mind as you look at ESM. Make sure your vendor can provide sufficient integration, or at a minimum an SDK/API to pull data from, or send it to, these other systems.

  • Operations Management – including device building/provisioning, software distribution/licensing, and other asset repositories
  • Vulnerability Management – for discovery, vulnerabilities, and patch levels
  • Endpoint Protection – including anti-malware and full disk encryption, potentially leveraging agents to simplify management and minimize performance impact
  • SIEM/Log Management – for robust data aggregation, correlation, alerting and reporting
  • Backup/Recovery – many endpoints house valuable data, so make sure device failure doesn’t put intellectual property at risk

Obviously this view of the platform and capabilities is high level. Next we will dig into the periodic functions of patch and configuration management.

- Mike Rothman