Friday, October 5, 2012

New Series: Understanding and Selecting Identity Management for Cloud Services


Adrian and Gunnar here, kicking off a new series on Identity Management for Cloud Services.


We have been hearing about Federated Identity and Single Sign-On services for the last decade, but demand for these features has only fully blossomed in the last few years, as companies have needed to integrate their internal identity management systems. The meanings of these terms have been actively evolving under the influence of cloud computing. The ability to manage what resources your users can access outside your corporate network – on third party systems outside your control – is not just a simple change in deployment models, but a fundamental shift in how we handle authentication, authorization, and provisioning. Enterprises want to extend the capabilities of low-cost cloud service providers to their users – while maintaining security, policy management, and compliance functions. We want to illuminate these changes in approach and technology. And if you have not been keeping up to date with these changes in the IAM market, you will likely need to unlearn what you know. We are not talking about making your old Active Directory accessible to internal and external users, or running LDAP in your Amazon EC2 constellation. We are talking about the fusion of multiple identity and access management capabilities – possibly across multiple cloud services. We are gaining the ability to authorize users across multiple services, without distributing credentials to each and every service provider.


Cloud services – be they SaaS, PaaS, or IaaS – are not just new environments in which to deploy existing IAM tools. They fundamentally shift existing IAM concepts. It’s not just that the way IT resources are deployed in the cloud, and the way consumers want to interact with those resources, have changed – those changes are driven by economic models of efficiency and scale. For example, enterprise IAM is largely about provisioning users and resources into a common directory, say Active Directory or RACF, where the IAM tool enforces access policy. The cloud changes this model into a chain of responsibility, so a single IAM instance cannot completely mediate access policy; a cloud IAM instance shares responsibility for – as an example – assertion or validation of identity. Carving up this set of shared access policy responsibilities is a game changer for the enterprise.


We need to rethink how we manage trust and identities in order to take advantage of elastic, on-demand, and widely available web services for heterogenous clients. Right now, behind the scenes, new approaches to identity and access management are being deployed – often seamlessly into cloud services we already use. They reduce the risk and complexity of mapping identity to public or semi-public infrastructure, while remaining flexible enough to take full advantage of multiple cloud service and deployment models.


Our goal for this series is to illustrate current trends and technologies that support cloud identity, describe the features available today, and help you navigate through the existing choices. The series will cover:


  • The Problem Space: We will introduce the issues that are driving cloud identity – across fully outsourced, hybrid, and proxy cloud service and deployment models. We will discuss how the cloud model differs from traditional in-house IAM, and examine the issues raised by the loss of control and visibility into cloud provider environments. We will consider the goals of IAM services for the cloud – drilling into topics including identity propagation, federation, and roles and responsibilities (around authentication, authorization, provisioning, and auditing). We will wrap up with the security goals we must achieve, and how compliance and risk influence decisions.
  • The Cloud Providers: For each of the cloud service models (SaaS, PaaS, and IaaS) we will delve into the IAM services built into the infrastructure. We will profile IAM offerings from some of the leading independent cloud identity vendors for each of the service models – covering what they offer and how their features are leveraged or integrated. We will illustrate these capabilities with a simple chart that shows what each provides, highlighting the conceptual model each vendor embraces to supply identity services. We will talk about what you will be responsible for as a customer, in terms of integration and management. This will include some of the deficiencies of these services, as well as areas to consider augmenting.
  • Use Cases: We will discuss three of the principal use cases we see today, as organizations move existing applications to the cloud and develop new cloud services. We will cover extending existing IAM systems to cover external SaaS services, developing IAM for new applications deployed on IaaS/PaaS, and adopting Identity as a Service for fully external IAM.
  • Architecture and Design: We will start by describing key concepts, including consumer/service patterns, roles, assertions, tokens, identity providers, relying party applications, and trust. We will discuss the available technologies for the heavy lifting (such as SAML, XACML, and SCIM) and discuss the problems they are designed to solve. We will finish with an outline of the different architectural models that will frame how you implement cloud identity services, including the integration patterns and tools that support each model.
  • Implementation Roadmap: IAM projects are complex, encompass most IT infrastructure, and may take years to implement. Trying to do everything at once is a recipe for failure. This portion of our discussion will help ensure you don’t bite off more than you can chew. We will discuss how to select an architectural model that meets your requirements, based on the cloud service and deployment models you selected. Then we will create different implementation roadmaps depending on your project goals and critical business requirements.
  • Buyer’s Guide: We will close by examining key decision criteria to help select a platform. We will provide questions to determine which vendors offer solutions that support your architectural model, and criteria to measure the appropriateness of a vendor solution against your design goals. We will also help walk you through the evaluation process.

As always, we encourage you to ask questions and chime in with comments and suggestions. The community helps make our research better, and we encourage your participation.


Next: the problems addressed by cloud identity services.


- Adrian Lane

Adding Anti-CSRF Support to Burp Suite Intruder

In the web application penetration testing industry, Burp Suite is considered a must-have tool – it includes an intercepting proxy, both active and passive web vulnerability scanners, a crawler, session ID analysis tools, and various other useful features, all under a single application. One of Burp's best features is the Intruder, a tool which allows the tester to provide a list of values to be sent to the application as parameter values. By providing values which trigger SQL errors or inject JavaScript into the resulting page, one can easily determine if and how the application filters the parameters, and whether it is vulnerable to a given issue.


[Screenshot: Burp Intruder]
Burp's Intruder works perfectly when the application responds to those requests as if they came from the user. The screenshot above shows the submission of the following HTML form:
<form action="/CSRFGuardTestAppVulnerable/HelloWorld" 
  method="POST" name="first">
    <input type="text" value="SpiderLabs" name="name">
    <input type="submit" value="submit" name="submit">
</form>
However, modern application frameworks are adding support for Anti-CSRF (Cross-Site Request Forgery) techniques, which protect the application from forged requests such as those Burp uses. The most commonly implemented prevention measure is the Synchronizer Token Pattern, which adds a parameter with a random value to all forms generated by the application for a given user session, and validates this token when the form is submitted. For example:
<form action="/CSRFGuardTestApp/HelloWorld"
  method="POST" name="first">
    <input type="text" value="SpiderLabs" name="name">
    <input type="submit" value="submit" name="submit">
    <input type="hidden" value="Z0XN-1FRF-975E-GB9F-GGQM-L2RC-04H1-GKHQ"
      name="OWASP_CSRFTOKEN">
</form>
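On the server side, the pattern boils down to comparing the submitted token against the copy held in the user's session. The following is a minimal sketch of that check – it is not taken from OWASP CSRFGuard or any particular framework, and the filter, parameter, and session attribute names are just placeholders:
import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/* Illustrative Synchronizer Token Pattern check: reject any POST whose
 * token parameter does not match the value stored in the session. */
public class CsrfTokenFilter implements Filter {

    public void init(FilterConfig config) {}

    public void destroy() {}

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        if ("POST".equals(request.getMethod())) {
            String submitted = request.getParameter("OWASP_CSRFTOKEN");
            String expected = (String) request.getSession().getAttribute("OWASP_CSRFTOKEN");

            /* Missing or mismatched token: treat the request as forged. */
            if (expected == null || !expected.equals(submitted)) {
                response.sendError(HttpServletResponse.SC_FORBIDDEN, "Invalid CSRF token");
                return;
            }
        }
        chain.doFilter(req, res);
    }
}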
Because the OWASP_CSRFTOKEN parameter will change between every submission, the Intruder will not work, as it expects the application to respond to its requests as if they came from the user. The solution is rather simple: instead of simply feeding a set of values to the parameters being tested, we need to have the Intruder populate the Anti-CSRF token parameter from the form page.
[Screenshot: Intruder]

This changes the default behavior of the Intruder tool quite a bit. First, it must know which parameter represents the Anti-CSRF token in the request. While many frameworks will use parameter names that include the "csrf" string, this can be configured per application, and thus we cannot rely on automatic detection. Second, it means the Intruder must make twice the number of requests it would normally perform: one to fetch the form page, which contains the Anti-CSRF token embedded in it, and a second one to actually submit the form with the parameter values provided by the tester. We will address both issues by developing a Burp extension called CSRF Intruder.
Burp offers an extensibility API, called Burp Extender, which allows us to hook into various points in the application, including the UI and the request interception engine. The first thing we need to do is create a Java class which will host our extension.
package burp;

import spiderlabs.burp.intruder.CSRFIntruder;

public class BurpExtender {
    private IBurpExtenderCallbacks callbacks;
    private CSRFIntruder csrfIntruder;

    public void registerExtenderCallbacks(IBurpExtenderCallbacks callbacks) {
        this.callbacks = callbacks;
        this.csrfIntruder = new CSRFIntruder(this.callbacks);
        this.callbacks.registerMenuItem("CSRF Intruder", this.csrfIntruder);
        this.callbacks.issueAlert(String.format("Starting up CSRF Intruder extension [%s]",
                this.getClass().getCanonicalName()));
    }
    
    public void processHttpMessage(java.lang.String toolName, boolean messageIsRequest,
        IHttpRequestResponse messageInfo) {
        /* Intercept Intruder requests and check if they came from CSRF Intruder */
        if (toolName.equals("intruder") && messageIsRequest)
            this.callbacks.issueAlert("TODO: Intercept CSRF Intruder requests");
    }
}
The registerExtenderCallbacks() method acts as the extension constructor, and is the first method called by Burp when the extension is loaded. It provides us with an IBurpExtenderCallbacks instance, which we use to create a new context menu entry for our CSRF Intruder handler. We also implement a processHttpMessage() method, which we will use to intercept Intruder requests and modify the Anti-CSRF token values before they are sent to the remote application.
How does Burp use this class? The class must obey a series of restrictions before Burp can find and use it:
  • It must be in package burp;
  • It must be named BurpExtender;
  • It must implement at least one of the methods in interface IBurpExtender;
  • Burp must be executed with the JVM classpath pointing to our BurpExtender class:

    java -classpath burp.jar;BurpProxyExtender.jar burp.StartBurp
If these criteria are met, then when Burp starts up our CSRF Intruder extension message should show up in the Alerts tab.
[Screenshot: Burp Alerts tab showing the CSRF Intruder startup message]
We can now right-click anywhere a "Send to [...]" menu entry is displayed and click on "CSRF Intruder".
[Screenshot: "CSRF Intruder" context menu entry]
Clicking on the CSRF Intruder menu entry will trigger the handler in our CSRFIntruder class, shown below:
package spiderlabs.burp.intruder;

import javax.swing.JOptionPane;

import spiderlabs.burp.intruder.gui.CSRFConfigurationDialog;
import burp.IBurpExtenderCallbacks;
import burp.IHttpRequestResponse;
import burp.IMenuItemHandler;

public class CSRFIntruder implements IMenuItemHandler {
    private IBurpExtenderCallbacks callbacks;
    
    public CSRFIntruder(IBurpExtenderCallbacks callbacks) {
        this.callbacks = callbacks;
    }
    
    @Override
    public void menuItemClicked(String caption, IHttpRequestResponse[] messageInfo) {
        try {
            IHttpRequestResponse message = messageInfo[0];
            
            String refererUrl = new String();
            for (String header: this.callbacks.getHeaders(message.getRequest())) {
                System.out.println(header);
                if (header.startsWith("Referer:"))
                    refererUrl = header.split(":\\s")[1];
            }
            
            String parameters[][] = this.callbacks.getParameters(message.getRequest());
            CSRFIntruderConfiguration configuration =
                    CSRFConfigurationDialog.getConfigurationFromDialog(refererUrl, parameters);
            this.callbacks.issueAlert(configuration.toString());
        } catch (Exception exception) {
            this.callbacks.issueAlert(String.format("Error while obtaining request URL: %s",
                    exception.toString()));
        }
    }
}
The menuItemClicked() handler does two things: first, it fetches the value of the Referer header (if present) from the request the user right-clicked on and uses it as the suggested source form URL, that is, the place where we can find the Anti-CSRF token values; next, it obtains a list of all parameters in that request. It then sends those to a CSRFConfigurationDialog, which displays the URL and parameters, and waits for the user to configure the Anti-CSRF parameters.
[Screenshot: CSRF Intruder configuration dialog]
When the user provides the URL and the token parameter to be used, we have all the information necessary to start up Intruder and manipulate its requests. That will come in a second post.
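To give a rough idea of what that second step will need to do, here is a minimal sketch of fetching the source form page and scraping out the current token value so it can be substituted into each outgoing Intruder request. This is not part of the extension above – the class name, the regular expression, and the use of HttpURLConnection are illustrative assumptions; the real implementation would do this inside processHttpMessage() using Burp's own request and response helpers.
package spiderlabs.burp.intruder;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/* Hypothetical helper: retrieve the form page and extract the current
 * Anti-CSRF token value for the configured token parameter. */
public class TokenFetcher {

    public static String fetchToken(String formUrl, String tokenParameter) throws Exception {
        HttpURLConnection connection = (HttpURLConnection) new URL(formUrl).openConnection();
        connection.setRequestMethod("GET");

        /* Read the whole response body into a string. */
        StringBuilder body = new StringBuilder();
        BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream()));
        String line;
        while ((line = reader.readLine()) != null)
            body.append(line);
        reader.close();

        /* Match <input ... name="TOKEN" ... value="..."> in either attribute order. */
        Pattern pattern = Pattern.compile(
                "name=\"" + Pattern.quote(tokenParameter) + "\"[^>]*value=\"([^\"]+)\"" +
                "|value=\"([^\"]+)\"[^>]*name=\"" + Pattern.quote(tokenParameter) + "\"");
        Matcher matcher = pattern.matcher(body.toString());
        if (matcher.find())
            return matcher.group(1) != null ? matcher.group(1) : matcher.group(2);
        return null;
    }
}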

Forensic Scanner

Forensic Scanner: I've posted my thoughts on a Forensic Scanner before, and just today I gave a short presentation at the #OSDFC conference on the subject.

Rather than describing the scanner, this time around, I released it.  The archive contains the Perl source code, a 'compiled' executable that you can run on Windows without installing Perl, a directory of plugins, and a PDF document describing how to use the tool.  I've also started populating the wiki with information about the tool, and how it's used.

Some of the feedback I received on the presentation was pretty positive.  I did chat with Simson Garfinkel for a few minutes after the presentation, and he had a couple of interesting thoughts to share.  The second was on making the Forensic Scanner thread-safe, so it can be multi-threaded and use multiple cores.  That might make it onto the "To-Do" list, but it was Simson's first thought that I wanted to address.  He suggested that at a conference where the theme seemed to revolve around analysis frameworks, I should point out the differences between the other frameworks and what I was presenting on, so I wanted to take a moment to do that.

Brian presented on Autopsy 3.0, and along with another presentation in the morning, discussed some of the features of the framework.  There was the discussion of pipelines, and having modules to perform specific functions, etc.  It's an excellent framework that has the capability of performing functions such as parsing Registry hives (utilizing RegRipper), carving files from unallocated space, etc.  For more details, please see the web site.  

I should note that there are other open source frameworks available, as well, such as DFF.

The Forensic Scanner is...different.  Neither better, nor worse...because it addresses a different problem.  For example, you wouldn't use the Forensic Scanner to run a keyword search or carve unallocated space.  The scanner is intended for quickly automating repetitive tasks of data collection, with some ability to either point the analyst in a particular direction, or perform a modicum of analysis along with the data presentation (depending upon how much effort you want to put into writing the plugins).  So, rather than providing an open framework which an analyst can use to perform various analysis functions, the Scanner allows the analyst to perform discrete, repetitive tasks.

The idea behind the scanner is this...there're things we do all the time when we first initiate our analysis.  One is to collect simple information from the system...it's a Windows system, so which version of Windows is it, what are its time zone settings, is it 32- or 64-bit, etc.  We collect this information because it can significantly impact our analysis.  However, keeping track of all of these things can be difficult.  For example, if you're looking at an image acquired from a Windows system and don't see Prefetch files, what's your first thought?  Do you check the version of Windows you're examining?  Do you check the Registry values that apply to and control the system's prefetching capabilities?  I've talked with examiners whose first thought is that the user must have deleted the Prefetch files...but how do you know?

Rather than maintaining extensive checklists of all of these artifacts, why not simply write a plugin to collect the data you want, and possibly add a modicum of analysis to that plugin?  One analyst writes the plugin, shares it, and anyone with that plugin will have access to the functionality without having to have had the same experiences as the analyst.  You share it with all of your analysts, and they all have the capability at their fingertips.  Most analysts recognize the value of the Prefetch files, but some may not work with Windows systems all of the time, and may not stay up on the "latest and greatest" in analysis techniques that can be applied to those files.  So, instead of dumping all of the module paths embedded in Prefetch files, why not add some logic to search for .exe files, .dat files, and any file that includes "temp" in the path, and display just that information?  Or, why not create whitelists of modules over time, and have the plugin show you all modules not in that whitelist?

Something that I and others have found useful is that, instead of forcing the analyst to use "profiles", as with the current version of RegRipper, the Forensic Scanner runs plugins automatically based on OS type and class, and then organizes the plugin results by category.  What this means is that for the system class of plugins, all of the plugins that pertain to "Program Execution" will be grouped together; this holds true for other plugin categories, as well.  This way, you don't have to go searching around for the information in which you're interested.

As I stated more than once in the presentation, the Scanner is not intended to replace analysis; rather, it's intended to get you to the point of performing analysis much sooner.  For example, I illustrated a plugin that parses a user's IE index.dat file.  Under normal circumstances when performing analysis, you'd have to determine which version of Windows you were examining, determine the path to the particular index.dat file that you're interested in, and then extract it and parse it.  The plugin is capable of doing all of that...in the test case, all of the plugins I ran against a mounted volume completed in under 2 seconds...that's scans of the system, as well as both of the selected user profiles.

So...please feel free to try the Scanner.  If you have any questions, you know where to reach me.  Just know that this is a work-in-progress, with room for growth.

Addendum
Matt Presser identified an issue that the Scanner has with identifying user profiles that contain a dot.  I fixed the issue and will be releasing an update once I make a couple of minor updates to other parts of the code.

Wednesday, October 3, 2012

Securing Big Data: Recommendations and Open Issues

Our previous two posts outlined several security issues inherent to big data architecture, and operational security issues common to big data clusters. With those in mind, how can one go about securing a big data cluster? What tools and techniques should you employ?

Before we can answer those questions we need some ground rules, because not all ‘solutions’ are created equally. Many vendors claim to offer big data security, but they are really just selling the same products they offer for other back office systems and relational databases. Those products might work in a big data cluster, but only by compromising the big data model to make it fit the restricted envelope of what they can support. Their constraints on scalability, coverage, management, and deployment are all at odds with the essential big data features we have discussed. Any security product for big data needs a few characteristics:

  1. It must not compromise the basic functionality of the cluster
  2. It should scale in the same manner as the cluster
  3. It should not compromise the essential characteristics of big data
  4. It should address – or at least mitigate – a security threat to big data environments or data stored within the cluster.

So how can we secure big data repositories today? The following is a list of common challenges, with security measures to address them:

  • User access: We use identity and access management systems to control users, including both regular and administrator access.
  • Separation of duties: We use a combination of authentication, authorization, and encryption to provide separation of duties between administrative personnel. We use application space, namespace, or schemata to logically segregate user access to a subset of the data under management.
  • Indirect access: To close “back doors” – access to data outside permitted interfaces – we use a combination of encryption, access control, and configuration management.
  • User activity: We use logging and user activity monitoring (where available) to alert on suspicious activity and enable forensic analysis.
  • Data protection: Removal of sensitive information prior to insertion and data masking (via tools) are common strategies for reducing risk. But the majority of big data clusters we are aware of already store redundant copies of sensitive data. This means the data stored on disk must be protected against unauthorized access, and data encryption is the de facto method of protecting sensitive data at rest. In keeping with the requirements above, any encryption solution must scale with the cluster, must not interfere with MapReduce capabilities, and must not store keys on hard drives along with the encrypted data – keys must be handled by a secure key manager (see the sketch after this list).
  • Eavesdropping: We use SSL and TLS encryption to protect network communications. Hadoop offers SSL, but its implementation is limited to client connections. Cloudera offers good integration of TLS; otherwise look for third party products to close this gap.
  • Name and data node protection: By default Hadoop HTTP web consoles (JobTracker, NameNode, TaskTrackers, and DataNodes) allow access without any form of authentication. The good news is that Hadoop RPC and HTTP web consoles can be configured to require Kerberos authentication. Bi-directional authentication of nodes is built into Hadoop, and available in some other big data environments as well. Hadoop’s model is built on Kerberos to authenticate applications to nodes, nodes to applications, and client requests for MapReduce and similar functions. Care must be taken to secure granting and storage of Kerberos tickets, but this is a very effective method for controlling what nodes and applications can participate on the cluster.
  • Application protection: Big data clusters are built on web-enabled platforms – which means that remote injection, cross-site scripting, buffer overflows, and logic attacks against and through client applications are all possible avenues of attack for access to the cluster. Countermeasures typically include a mixture of secure code development practices (such as input validation, and address space randomization), network segmentation, and third-party tools (including Web Application Firewalls, IDS, authentication, and authorization). Some platforms offer built-in features to bolster application protection, such as YARN’s web application proxy service.
  • Archive protection: As backups are largely an intractable problem for big data, we don’t need to worry much about traditional backup/archive security. But just because legitimate users cannot perform conventional backups does not mean an attacker would not create at least a partial backup. We need to secure the management plane to keep unwanted copies of data or data nodes from being propagated. Access controls, and possibly network segregation, are effective countermeasures against attackers trying to gain administrative access, and encryption can help protect data in case other protections are defeated.
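The sketch below illustrates the point from the Data protection item above: keys must live in an external key manager rather than on disk next to the data. It is a generic illustration built on the standard javax.crypto APIs – the KeyManagerClient interface is hypothetical, and a real solution would operate at the file or block layer rather than in application code:
import java.security.SecureRandom;

import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;

/* Hypothetical client for an external key manager; the node never persists
 * the key itself, only a reference (keyId) to it. */
interface KeyManagerClient {
    SecretKey fetchDataKey(String keyId) throws Exception;
}

public class BlockEncryptor {

    private final KeyManagerClient keyManager;

    public BlockEncryptor(KeyManagerClient keyManager) {
        this.keyManager = keyManager;
    }

    /* Encrypt a block before it is written to local disk. The random IV is
     * prepended to the ciphertext; the key stays in the key manager. */
    public byte[] encryptBlock(String keyId, byte[] plaintext) throws Exception {
        SecretKey key = keyManager.fetchDataKey(keyId);

        byte[] iv = new byte[16];
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
        byte[] ciphertext = cipher.doFinal(plaintext);

        byte[] out = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ciphertext, 0, out, iv.length, ciphertext.length);
        return out;
    }
}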

In the end, our big data security recommendations boil down to a handful of standard tools which can be effective in setting a secure baseline for big data environments:

  1. Use Kerberos: This is an effective method for keeping rogue nodes and applications off your cluster. And it can help protect web console access, making administrative functions harder to compromise. We know Kerberos is a pain to set up, and (re-)validation of new nodes and applications takes work. But without bi-directional trust establishment it is too easy to fool Hadoop into letting malicious applications into the cluster, or into accepting malicious nodes – which can then add, alter, or extract data. Kerberos is one of the most effective security controls at your disposal, and it’s built into the Hadoop infrastructure, so use it (a minimal client-side sketch follows this list).
  2. File layer encryption: File encryption addresses two attacker methods for circumventing normal application security controls. Encryption protects in case malicious users or administrators gain access to data nodes and directly inspect files, and it also renders stolen files or disk images unreadable. Encryption protects against two of the most serious threats. Just as importantly, it meets our requirements for big data security tools – it is transparent to both Hadoop and calling applications, and scales out as the cluster grows. Open source products are available for most Linux systems; commercial products additionally offer external key management, trusted binaries, and full support. This is a cost-effective way to address several data security threats.
  3. Management: Deployment consistency is difficult to ensure in a multi-node environment. Patching, application configuration, updating the Hadoop stack, collecting trusted machine images, certificates, and platform discrepancies, all contribute to what can easily become a management nightmare. The good news is that most of you will be deploying in cloud and virtual environments. You can leverage tools from your cloud provider, hypervisor vendor, and third parties (such as Chef and Puppet) to automate pre-deployment tasks. Machine images, patches, and configuration should be fully automated and updated prior to deployment. You can even run validation tests, collect encryption keys, and request access tokens before nodes are accessible to the cluster. Building the scripts takes some time up front but pays for itself in reduced management time later, and additionally ensures that each node comes up with baseline security in place.
  4. Log it!: Big data is a natural fit for collecting and managing log data. Many web companies started with big data specifically to manage log files. Why not add logging onto your existing cluster? It gives you a place to look when something fails, or if someone thinks perhaps you have been hacked. Without an event trace you are blind. Logging MR requests and other cluster activity is easy to do, and increases storage and processing demands by a small fraction, but the data is indispensable when you need it.
  5. Secure communication: Implement secure communication between nodes, and between nodes and applications. This requires an SSL/TLS implementation that actually protects all network communications rather than just a subset. Cloudera appears to get this right, and some cloud providers offer secure communication options as well; otherwise you will likely need to integrate these services into your application stack.
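To make the Kerberos recommendation a bit more concrete, here is a minimal sketch of what logging a client or service into a Kerberized Hadoop cluster might look like using Hadoop's UserGroupInformation API. The principal and keytab path are placeholders, and a real deployment also needs the matching core-site.xml settings and KDC configuration behind it:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class KerberizedClient {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        /* Tell the Hadoop client that the cluster requires Kerberos
         * authentication and service-level authorization. */
        conf.set("hadoop.security.authentication", "kerberos");
        conf.set("hadoop.security.authorization", "true");
        UserGroupInformation.setConfiguration(conf);

        /* Placeholder principal and keytab path – substitute your own. */
        UserGroupInformation.loginUserFromKeytab(
                "etl-service/node1.example.com@EXAMPLE.COM",
                "/etc/security/keytabs/etl-service.keytab");

        /* Subsequent HDFS and MapReduce calls run as the authenticated principal. */
        System.out.println("Logged in as: "
                + UserGroupInformation.getCurrentUser().getUserName());
    }
}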

When we speak with big data architects and managers, we hear their most popular security model is to hide the cluster within their infrastructure, where attackers don’t notice it. But these repositories are now common, and lax security makes them a very attractive target. Consider the recommendations above a minimum for preventative security. And these are the easy security measures – they are simple, cost-effective, and scalable, and they address specific security deficiencies with big data clusters. Nothing suggested here impairs performance, scalability, or functionality. Yes, it’s more work to set up, but relatively simple to manage and maintain.

Several other threats are less easily addressed. API security, monitoring, user authentication, granular data access, and gating MapReduce requests are all issues without simple answers. In some cases, such as with application security, there are simply too many variables – with security controls requiring tradeoffs that are entirely dependent upon the particular environment. Some controls, including certain types of encryption and activity monitoring, require modifications to deployments or data choke points that can slow data input to a snail’s pace. In many cases we have nothing to recommend – technologies have not yet evolved sufficiently to handle real-world big data requirements. That does not mean you cannot fill the gaps with your own solutions. But for some problems, there are no simple solutions. Our recommendations can greatly improve the overall security of big data clusters, and we highlight outstanding operational and architectural security issues to help you develop your own security measures.

This concludes our short introduction to big data security. As always, please comment if you feel we missed something or feel we have gotten anything wrong. Big data security is both complex and rapidly changing; we know there are many different perspectives and encourage you to share yours.

- Adrian Lane

Tuesday, September 25, 2012

Inflection

Hang with me as I channel my inner Kerouac (minus the drugs, plus the page breaks) and go all stream of consciousness. To call this post an “incomplete thought” would be more than a little generous.

I believe we are now deep in the early edge of a major inflection point in security. Not one based merely on evolving threats or new compliance regimes, but a fundamental transformation of the practice of security that will make it nearly unrecognizable when we finally emerge on the other side. For the past 5 years Hoff and I have discussed disruptive innovation in our annual RSA presentation. What we are seeing now is a disruptive conflagration, where multiple disruptive innovations are colliding and overlapping. It affects more than security, but that’s the only area about which I’m remotely qualified to pontificate.

Perhaps that’s a bit of an exaggeration. All the core elements of what we will become are here today, and there are certain fundamentals that never change, but someone walking into the SOC or CISO role of tomorrow will find more different than the same unless they deliberately blind themselves.

Unlike most of what I read out there, I don’t see these changes as merely things we in security are forced to react to. Our internal changes in practice and technology are every bit as significant contributing factors.

One of the highlights of my career was once hanging out and having beers with Bruce Sterling. He said that his role as a futurist was to imagine the world 7 years out – effectively beyond the event horizon of predictability. What I am about to describe will occur over the next 5-10 years, with the most significant changes likely occurring in those last 7-10 years, but based on the roots we establish today. So this should be taken as much as science fiction as prediction.

The last half of 2012 is the first 6 months of this transition. The end result, in 2022, will be far more change over 10 years than the evolution of the practice of security from 2002 through today.



The first major set of disruptions includes the binary supernova of tech – cloud computing and mobility. This combination, in my mind, is more fundamentally disruptive than the initial emergence of the Internet. Think about it – for the most part the Internet was (at a technical level) merely an extension of our existing infrastructure. To this day we have tons of web applications that, through a variety of tiers, connect back to 30+-year-old mainframe applications. Consumption is still mostly tied to people sitting at computers at desks – especially conceptually.

Cloud blows up the idea of merely extending existing architectures with a web portal, while mobility advances fundamentally redefine consumption of technology. Can you merely slop your plate of COBOL onto a plate of cloud? Certainly, right as you watch your competitors and customers speed past at relativistic speeds.

Our tradition in security is to focus on the risks of these advances, but the more prescient among us are looking at the massive opportunities. Not that we can ignore the risks, but we won’t merely be defending these advances – our security will be defined and delivered by them. When I talk about security automation and abstraction I am not merely paying lip service to buzzwords – I honestly expect them to support new capabilities we can barely imagine today.

When we leverage these tools – and we will – we move past our current static security model that relies (mostly) on following wires and plugs, and into a realm of programmatic security. Or, if you prefer, Software Defined Security. Programmers, not network engineers, become the dominant voices in our profession.



Concurrently, four native security trends are poised to upend existing practice models.

Today we focus tremendous effort on an infinitely escalating series of vulnerabilities and exploits. We have started to mitigate this somewhat with anti-exploitation, especially at the operating system level (thanks to Microsoft). The future of anti-exploitation is hyper segregation.

iOS is an excellent example of the security benefits of heavily sandboxing the operating ecosystem. Emerging tools like Bromium and Invincea are applying even more advanced virtualization techniques to the same problem. Bromium goes so far as to effectively virtualize and isolate at a per task level. Calling this mere ‘segregation’ is trite at best.

Cloud enables similar techniques at the network and application levels. When the network and infrastructure are defined in software, there is essentially zero capital cost for network and application component segregation. Even this blog, today, runs on a specially configured hyper-segregated server that’s managed at a per-process level.

Hyper segregated environments – down, in some cases, to the individual process level – are rapidly becoming a practical reality, even in complex business environments with low tolerance for restriction.



Although incident response has always technically been core to any security model, for the most part it was shoved to the back room – stuck at the kids’ table next to DRM, application security, and network segregation. No one wanted to make the case that no matter what we spent, our defenses could never eliminate risk. Like politicians, we were too frightened to tell our executives (our constituency) the truth. Especially those who were burned by ideological execs.

Thanks to our friends in China and Eastern Europe (mostly), incident response is on the earliest edge of getting its due. Not the simple expedient of having an incident response plan, or even tools, but conceptually re-prioritizing and re-architecting our entire security programs – to focus as much or more on detection and response as on pure defense. We will finally use all those big screens hanging in the SOC to do more than impress prospects and visitors.

My bold prediction? A focus on incident response, on more rapidly detecting and responding to attacker-driven incidents, will exceed our current checklist and vulnerability focused security model, affecting everything from technology decisions to budgeting and staffing.

This doesn’t mean compliance will go away – over the long haul compliance standards will embrace this approach out of necessity.



The next two trends technically fall under the umbrella of response, but only in the broadest use of the term.

As I wrote earlier this year, active defense is reemerging and poised to materially impact our ability to detect and manage attacks. Historically we have said that as defenders we will always lose – because we need to be right every time, and the attacker only needs to be right or lucky once. But that’s only true for the most simplistic definition of attack. If we look at the Data Breach Triangle, an attacker not only needs a way in, but something to steal and a way out.

Active defense reverses the equation and forces attacker perfection, making accessing our environment only one of many steps required for a successful attack. Instead of relying on out-of-date signatures, crappy heuristics prone to false positives, or manual combing through packets and logs, we will instead build environments so laden with tripwires and landmines that they may end up being banned by a virtual Geneva Convention.

Heuristic security tends to fail because it often relies on generic analysis of good and bad behavior that is difficult or impossible to model. Active defenses interact with intruders while complicating and obfuscating the underlying structure. This dynamic interaction is far more likely to properly identify and classify an attacker.

Active defenses will become commonplace, and in large part replace our signature-based systems of failure.



But none of this information is useful if it isn’t accurate, actionable, and appropriate, so the last major trend is closing the action loop (the OODA loop for you milspec readers). This combines big data, visualization, and security orchestration (a facet of our earlier automation) – to create a more responsive, manageable, and, frankly, usable security system.

Our current tools largely fall into general functional categories that are too distinct and isolated to really meet our needs. Some tools observe our environment (e.g., SIEM, DLP, and full packet capture), but they tend to focus on narrow slices – with massive gaps between tools hampering our ability to acquire related information which we need to understand incidents. From an alert, we need to jump into many different shells and command lines on multiple servers and appliances in order to see what’s really going on. When tools talk to each other, it’s rarely in a meaningful and useful way.

While some tools can act with automation, it is again self-contained, uncoordinated, and (beyond the most simplistic incidents) more prone to break a business process than stop an attacker. When we want to perform a manual action, our environments are typically so segregated and complicated that we can barely manage something as simple as pushing a temporary firewall rule change.

Over the past year I have seen the emergence of tools just beginning to deliver on old dreams, which were so shattered by the ugly reality of SIEM that many security managers have resorted to curling up in the fetal position during vendor presentations.

These tools will combine the massive amounts of data we are currently collecting on our environments, at speeds and volumes long promised but never realized. We will steal analytics from big data; tune them for security; and architect systems that allow us to visualize our security posture, identify, and rapidly characterize incidents. From the same console we will be able to look at a high-level SIEM alert, drill down into the specifics, and analyze correlated data from multiple tools and sensors. I don’t merely mean the SNMP traps from those tools, but full incident data and context.

No, your current SIEM doesn’t do this.

But the clincher is the closer. Rather than merely looking at incident data, we will act on the data using the same console. We will review the automated responses, model the impact with additional analytics and visualization (real-time attack and defense modeling, based on near-real-time assessment data), and then tune and implement additional actions to contain, stop, and investigate the attack.

Detection, investigation, analysis, orchestration, and action all from the same console.



This future won’t be distributed evenly. Organizations of different sizes and markets won’t all have the same access to these resources and they do not have the same needs – and that’s okay. Smaller and medium-sized organizations will rely more on Security as a Service providers and automated tools. Larger organizations or those less willing to trust outsiders will rely more on their own security operators. But these delivery models don’t change the fundamentals of technologies and processes.

Within 10 years our security will be abstracted and managed programmatically. We will focus more on detection and response – relying on powerful new tools to identify, analyze, and orchestrate responses to attacks. The cost of attacking will rise dramatically due to hyper segregation, active countermeasures, and massive increases in the complexity required for a successful attack chain.

I am not naive or egotistical enough to think these broad generalities will result in a future exactly as I envision it, or on the precise timeline I envision, but all the pieces are in at least early stages today, and the results seem inevitable.

- Rich

Tuesday, September 4, 2012

ZackAttack!: Firesheep for NTLM Authentication!

ZackAttack!: Firesheep for NTLM Authentication!: Not BlackHat, but a goodie from the Defcon this time! What Firesheep was for HTTP, ZackAttack! aims to be for NTLM authentication!  Simply put, ZackAttack! is a new toolset to perform NTLM authentication relaying. NTLM is pretty much prevalent in the corporate environment. Think of all that you can leverage using ZackAttack! It uses another tool – [...]
ZackAttack!: Firesheep for NTLM Authentication! is a post from: PenTestIT

Old School On-target NBNS Spoofing

One of pen testers' favorite attacks is NBNS spoofing. Now Wesley, who I originally learned this attack from, traced this back to Sid (http://www.notsosecure.com/folder2/2007/03/14/abusing-tcpip-name-resolution-in-windows-to-carry-out-phishing-attacks/). Wesley's stuff can be found here: http://www.mcgrewsecurity.com/tools/nbnspoof/
Wesley's stuff eventually led to this awesome post on the Packetstan blog: http://www.packetstan.com/2011/03/nbns-spoofing-on-your-way-to-world.html
and in that post the Metasploit module to do it all is demoed. But therein lies the rub. With each degree of separation we have more and more solidified it into an "on-site"-only attack. But if you read through Sid's paper from 2007 this doesn't have to be the case. He uses a tool written by "Francis Hauguet" back in 2005 for the Honeynet project (http://seclists.org/honeypots/2005/q4/46) called "FakeNetbiosDGM and FakeNetbiosNS".
Finding the tools was no easy task though; googling for the file name, the author, or the project just netted me this link:
http://honeynet.rstack.org/tools/FakeNetBIOS-0.91.zip
Gotta love the Wayback Machine – I finally found it here: http://wayback.archive.org/web/*/http://honeynet.rstack.org/tools/FakeNetBIOS-0.91.zip
and eventually also here: http://www.chambet.com/tools.html
The question is, does it still work? Second question: how well does it work through/with Meterpreter?
(As a side note, I haven't tried, but you might be able to use Py2Exe or PyInstaller to run nbnspoof.py on a Windows box.)
When running it on XP SP3 I get the following
[Screenshot: error running FakeNetBIOS on XP SP3]
Booooooooo, and on Windows 7 I get this:
[Screenshot: error 10013 on Windows 7]
Ok, error 10013 is a permissions issue, I can deal with that..
[Screenshot: FakeNetBIOS running as Administrator on Windows 7]
Run as Administrator, it works! But something is wrong with the communication, because the host doing the lookup doesn't get the correct resolution back.
From what I can google it looks as though Windows Firewall has an 'Anti-Spoofing' outbound filter, so these "Bytes sent" don't even make it to Wireshark.
I have created a GitHub repository, stuck the contents of the zip file in it, and this is where I ask for help. If you know 1) how to disable the Windows anti-spoofing filter or 2) how to circumvent it, please leave a comment here, open an issue on the repo, or email me directly.
The other thing is, if you want to improve the code, that would be awesome too – submit a pull request. I'd love to get this thing going again and make it into something that we can solidly use over a Meterpreter session.
Github repo: https://github.com/mubix/FakeNetBIOS
And if the only commit to this repo 5 years from now is "Initial commit", then at the very least it will be somewhere the next blogger who picks up the trail can get it from.
P.S. If you know how to solve the issue on XP, that would be an awesome fix as well.