Tuesday, September 25, 2012

Inflection

Inflection:
Hang with me as I channel my inner Kerouac (minus the drugs, plus the page breaks) and go all stream of consciousness. To call this post an “incomplete thought” would be more than a little generous.

I believe we are now at the early edge of a major inflection point in security. Not one based merely on evolving threats or new compliance regimes, but a fundamental transformation of the practice of security that will make it nearly unrecognizable when we finally emerge on the other side. For the past 5 years Hoff and I have discussed disruptive innovation in our annual RSA presentation. What we are seeing now is a disruptive conflagration, where multiple disruptive innovations are colliding and overlapping. It affects more than security, but that’s the only area about which I’m remotely qualified to pontificate.

Perhaps that’s a bit of an exaggeration. All the core elements of what we will become are here today, and certain fundamentals never change, but someone walking into the SOC or CISO role of tomorrow will find more that is unfamiliar than familiar – unless they deliberately blind themselves.

Unlike most of what I read out there, I don’t see these changes as merely things we in security are forced to react to. Our internal changes in practice and technology are every bit as significant contributing factors.

One of the highlights of my career was once hanging out and having beers with Bruce Sterling. He said that his role as a futurist was to imagine the world 7 years out – effectively beyond the event horizon of predictability. What I am about to describe will occur over the next 5-10 years, with the most significant changes likely concentrated in years 7-10, building on the roots we establish today. So this should be taken as much as science fiction as prediction.

The last half of 2012 is the first 6 months of this transition. The end result, in 2022, will be far more change over those 10 years than the practice of security saw from 2002 through today.



The first major set of disruptions includes the binary supernova of tech – cloud computing and mobility. This combination, in my mind, is more fundamentally disruptive than the initial emergence of the Internet. Think about it – for the most part the Internet was (at a technical level) merely an extension of our existing infrastructure. To this day we have tons of web applications that, through a variety of tiers, connect back to 30+-year-old mainframe applications. Consumption is still mostly tied to people sitting at computers at desks – especially conceptually.

Cloud blows up the idea of merely extending existing architectures with a web portal, while mobility advances fundamentally redefine consumption of technology. Can you merely slop your plate of COBOL onto a plate of cloud? Certainly, right as you watch your competitors and customers speed past at relativistic speeds.

Our tradition in security is to focus on the risks of these advances, but the more prescient among us are looking at the massive opportunities. Not that we can ignore the risks, but we won’t merely be defending these advances – our security will be defined and delivered by them. When I talk about security automation and abstraction I am not merely paying lip service to buzzwords – I honestly expect them to support new capabilities we can barely imagine today.

When we leverage these tools – and we will – we move past our current static security model that relies (mostly) on following wires and plugs, and into a realm of programmatic security. Or, if you prefer, Software Defined Security. Programmers, not network engineers, become the dominant voices in our profession.
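To make “programmatic security” slightly less abstract, here is a minimal, hypothetical sketch – no real cloud API, every name invented for illustration – of what treating security policy as code might look like: a declarative tier/port map compiled into explicit firewall rules, with a default deny appended last.

```python
# Hypothetical sketch of Software Defined Security: policy as code.
# Nothing here talks to a real cloud API; the names are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    action: str   # "allow" or "deny"
    src: str      # source CIDR block
    dst: str      # logical tier name, not a wire or a plug
    port: int

def compile_policy(tiers, trusted_cidr):
    """Expand a declarative {tier: [ports]} map into explicit rules,
    with a default-deny rule appended last."""
    rules = [Rule("allow", trusted_cidr, tier, port)
             for tier, ports in tiers.items()
             for port in ports]
    rules.append(Rule("deny", "0.0.0.0/0", "*", 0))
    return rules

policy = {"web": [80, 443], "db": [5432]}
for rule in compile_policy(policy, "10.0.0.0/8"):
    print(rule)
```

The point is the shift, not the code: the policy lives in version control, and regenerating the environment regenerates the rules with it.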



Concurrently, four native security trends are poised to upend existing practice models.

Today we focus tremendous effort on an infinitely escalating series of vulnerabilities and exploits. We have started to mitigate this somewhat with anti-exploitation, especially at the operating system level (thanks to Microsoft). The future of anti-exploitation is hyper segregation.

iOS is an excellent example of the security benefits of heavily sandboxing the operating ecosystem. Emerging tools like Bromium and Invincea are applying even more advanced virtualization techniques to the same problem. Bromium goes so far as to effectively virtualize and isolate at a per task level. Calling this mere ‘segregation’ is trite at best.

Cloud enables similar techniques at the network and application levels. When the network and infrastructure are defined in software, there is essentially zero capital cost for network and application component segregation. Even this blog, today, runs on a specially configured hyper-segregated server that’s managed at a per-process level.

Hyper segregated environments – down, in some cases, to the individual process level – are rapidly becoming a practical reality, even in complex business environments with low tolerance for restriction.



Although incident response has always technically been core to any security model, for the most part it was shoved to the back room – stuck at the kids’ table next to DRM, application security, and network segregation. No one wanted to make the case that no matter what we spent, our defenses could never eliminate risk. Like politicians, we were too frightened to tell our executives (our constituency) the truth. Especially those who were burned by ideological execs.

Thanks to our friends in China and Eastern Europe (mostly), incident response is on the earliest edge of getting its due. Not the simple expedient of having an incident response plan, or even tools, but conceptually re-prioritizing and re-architecting our entire security programs – to focus as much or more on detection and response as on pure defense. We will finally use all those big screens hanging in the SOC to do more than impress prospects and visitors.

My bold prediction? A focus on incident response, on more rapidly detecting and responding to attacker-driven incidents, will overtake our current checklist- and vulnerability-focused security model, affecting everything from technology decisions to budgeting and staffing.

This doesn’t mean compliance will go away – over the long haul compliance standards will embrace this approach out of necessity.



The next two trends technically fall under the umbrella of response, but only in the broadest use of the term.

As I wrote earlier this year, active defense is reemerging and poised to materially impact our ability to detect and manage attacks. Historically we have said that as defenders we will always lose – because we need to be right every time, and the attacker only needs to be right or lucky once. But that’s only true for the most simplistic definition of attack. If we look at the Data Breach Triangle, an attacker not only needs a way in, but something to steal and a way out.

Active defense reverses the equation and forces attacker perfection, making accessing our environment only one of many steps required for a successful attack. Instead of relying on out-of-date signatures, crappy heuristics prone to false positives, or manual combing through packets and logs, we will instead build environments so laden with tripwires and landmines that they may end up being banned by a virtual Geneva Convention.

Heuristic security tends to fail because it often relies on generic analysis of good and bad behavior that is difficult or impossible to model. Active defenses interact with intruders while complicating and obfuscating the underlying structure. This dynamic interaction is far more likely to properly identify and classify an attacker.

Active defenses will become commonplace, and in large part replace our signature-based systems of failure.
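A toy example of the tripwire spirit, with everything deliberately simplified: a listener on a port where no legitimate service lives. Any connection at all is a high-confidence signal – no behavior to model, no heuristic to tune.

```python
# Toy "tripwire" in the active-defense spirit: a decoy listener.
# Any touch is suspect, because no legitimate service lives here.
import socket
import threading
import time

def tripwire(host="127.0.0.1", port=0):
    """Listen on a decoy port; record the source address of every touch."""
    alarms = []
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind((host, port))
    server.listen(5)
    server.settimeout(2)

    def watch():
        try:
            while True:
                conn, addr = server.accept()
                alarms.append(addr)   # nobody should be here: log and alert
                conn.close()
        except socket.timeout:
            server.close()

    threading.Thread(target=watch, daemon=True).start()
    return server.getsockname()[1], alarms

# Simulate an attacker probing the decoy:
port, alarms = tripwire()
socket.create_connection(("127.0.0.1", port)).close()
time.sleep(0.2)
print(len(alarms))
```

Real active defenses interact with and obfuscate far more than this, but the asymmetry is the same: the defender only has to notice one touch.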



But none of this information is useful if it isn’t accurate, actionable, and appropriate, so the last major trend is closing the action loop (the OODA loop for you milspec readers). This combines big data, visualization, and security orchestration (a facet of our earlier automation) – to create a more responsive, manageable, and, frankly, usable security system.

Our current tools largely fall into general functional categories that are too distinct and isolated to really meet our needs. Some tools observe our environment (e.g., SIEM, DLP, and full packet capture), but they tend to focus on narrow slices – with massive gaps between tools hampering our ability to acquire the related information we need to understand incidents. From an alert, we need to jump into many different shells and command lines on multiple servers and appliances to see what’s really going on. When tools talk to each other, it’s rarely in a meaningful and useful way.

While some tools can act with automation, it is again self-contained, uncoordinated, and (beyond the most simplistic incidents) more prone to break a business process than stop an attacker. When we want to perform a manual action, our environments are typically so segregated and complicated that we can barely manage something as simple as pushing a temporary firewall rule change.

Over the past year I have seen the emergence of tools just beginning to deliver on old dreams – dreams so shattered by the ugly reality of SIEM that many security managers have resorted to curling up in the fetal position during vendor presentations.

These tools will combine the massive amounts of data we are currently collecting on our environments, at speeds and volumes long promised but never realized. We will steal analytics from big data; tune them for security; and architect systems that allow us to visualize our security posture, identify, and rapidly characterize incidents. From the same console we will be able to look at a high-level SIEM alert, drill down into the specifics, and analyze correlated data from multiple tools and sensors. I don’t merely mean the SNMP traps from those tools, but full incident data and context.

No, your current SIEM doesn’t do this.
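To ground the idea, here is a toy sketch – invented field names, in-memory lists standing in for real feeds – of what merging alerts from separate tools into a single per-host timeline might look like, so the analyst drills down without hopping between shells.

```python
# Toy sketch of multi-tool correlation into one per-host timeline.
# The feeds and field names are invented for illustration only.
from collections import defaultdict

siem = [{"host": "db01", "t": 100, "src": "siem", "msg": "priv escalation"}]
dlp  = [{"host": "db01", "t": 140, "src": "dlp",  "msg": "bulk SSN egress"}]
pcap = [{"host": "db01", "t": 120, "src": "pcap", "msg": "beacon to 203.0.113.9"}]

def correlate(*feeds, window=60):
    """Group events by host, sort by time, and flag hosts whose events
    from multiple tools cluster inside the time window."""
    by_host = defaultdict(list)
    for feed in feeds:
        for event in feed:
            by_host[event["host"]].append(event)
    incidents = {}
    for host, events in by_host.items():
        events.sort(key=lambda e: e["t"])
        sources = {e["src"] for e in events}
        span = events[-1]["t"] - events[0]["t"]
        if len(sources) > 1 and span <= window:
            incidents[host] = events   # multi-tool, tightly clustered: escalate
    return incidents

for host, events in correlate(siem, dlp, pcap).items():
    print(host, [e["msg"] for e in events])
```

The hard parts – speed, volume, and acting on the result from the same console – are exactly what this toy leaves out.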

But the clincher is the closer. Rather than merely looking at incident data, we will act on the data using the same console. We will review the automated responses, model the impact with additional analytics and visualization (real-time attack and defense modeling, based on near-real-time assessment data), and then tune and implement additional actions to contain, stop, and investigate the attack.

Detection, investigation, analysis, orchestration, and action all from the same console.



This future won’t be distributed evenly. Organizations of different sizes and markets won’t all have the same access to these resources, nor the same needs – and that’s okay. Smaller and medium-sized organizations will rely more on Security as a Service providers and automated tools. Larger organizations, or those less willing to trust outsiders, will rely more on their own security operators. But these delivery models don’t change the fundamentals of the technologies and processes.

Within 10 years our security will be abstracted and managed programmatically. We will focus more on detection and response – relying on powerful new tools to identify, analyze, and orchestrate responses to attacks. The cost of attacking will rise dramatically due to hyper segregation, active countermeasures, and massive increases in the complexity required for a successful attack chain.

I am not naive or egotistical enough to think these broad generalities will result in a future exactly as I envision it, or on the precise timeline I envision, but all the pieces are in at least early stages today, and the results seem inevitable.

- Rich

Tuesday, September 4, 2012

ZackAttack!: Firesheep for NTLM Authentication!

ZackAttack!: Firesheep for NTLM Authentication!: Not BlackHat, but a goodie from Defcon this time! What Firesheep was for HTTP session hijacking, ZackAttack! aims to be for NTLM authentication! Simply put, ZackAttack! is a new toolset to perform NTLM authentication relaying. NTLM is still nearly ubiquitous in corporate environments. Think of all that you can leverage using ZackAttack! It uses another tool – [...]
ZackAttack!: Firesheep for NTLM Authentication! is a post from: PenTestIT

Old School On-target NBNS Spoofing

Old School On-target NBNS Spoofing:
One of pen testers' favorite attacks is NBNS spoofing. Wesley, who I originally learned this attack from, traced it back to Sid (http://www.notsosecure.com/folder2/2007/03/14/abusing-tcpip-name-resolution-in-windows-to-carry-out-phishing-attacks/). Wesley's stuff can be found here: http://www.mcgrewsecurity.com/tools/nbnspoof/
Wesley's stuff eventually led to this awesome post on the Packetstan blog: http://www.packetstan.com/2011/03/nbns-spoofing-on-your-way-to-world.html
and in that post the Metasploit module to do it all is demoed. But therein lies the rub: with each degree of separation we have more and more solidified this into an "on-site"-only attack. But if you read through Sid's paper from 2007, this doesn't have to be the case. He uses a tool written by "Francis Hauguet" back in 2005 for the Honeynet project (http://seclists.org/honeypots/2005/q4/46) called "FakeNetbiosDGM and FakeNetbiosNS".
Finding the tools was no easy task though; googling for the file name, the author, or the project just netted me this link:
http://honeynet.rstack.org/tools/FakeNetBIOS-0.91.zip
Gotta love the Wayback Machine, I finally found it here: http://wayback.archive.org/web/*/http://honeynet.rstack.org/tools/FakeNetBIOS-0.91.zip
and eventually also here: http://www.chambet.com/tools.html
First question: does it still work? Second question: how well does it work through/with Meterpreter?
(As a side note, I haven't tried it, but you might be able to use Py2Exe or PyInstaller to run nbnspoof.py on a Windows box.)
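For readers who want to see what these tools actually put on the wire, here is a minimal sketch of building the positive NBNS name-query response a spoofer fires back on UDP/137. It follows RFC 1002 (the 0x8500 flags value matches common spoofing implementations, but verify against a capture before relying on it) and only builds the bytes – it sends nothing.

```python
# Sketch of the NBNS positive name-query response (RFC 1002).
# Builds bytes only; no packets are sent.
import struct

def encode_netbios_name(name, suffix=0x00):
    """First-level encoding: pad the name to 15 characters, append the
    one-byte suffix, then map each nibble to a letter offset from 'A'."""
    raw = name.upper().ljust(15).encode("ascii") + bytes([suffix])
    encoded = bytearray()
    for byte in raw:
        encoded.append((byte >> 4) + ord("A"))
        encoded.append((byte & 0x0F) + ord("A"))
    return bytes(encoded)

def build_nbns_response(txid, name, ip, ttl=165):
    """Header, encoded name, then the NB resource record with our IP."""
    # txid, flags (response + authoritative), QD=0, AN=1, NS=0, AR=0
    header = struct.pack(">HHHHHH", txid, 0x8500, 0, 1, 0, 0)
    rr_name = b"\x20" + encode_netbios_name(name) + b"\x00"
    # type NB (0x0020), class IN, TTL, RDLENGTH 6, NB flags (unique, B-node)
    rr_fixed = struct.pack(">HHIHH", 0x0020, 0x0001, ttl, 6, 0x0000)
    addr = bytes(int(octet) for octet in ip.split("."))
    return header + rr_name + rr_fixed + addr

pkt = build_nbns_response(0x1234, "FILESRV", "192.168.1.10")
print(len(pkt), "bytes")
```

The spoofing tools above do the other half: sniff the broadcast query, lift its transaction ID and queried name, and answer with a packet like this before the real owner (if any) does.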
When running it on XP SP3 I get the following:
[Screenshot: error output on XP SP3]
Booooooooo. And on Windows 7 I get this:
[Screenshot: error 10013 on Windows 7]
Ok, error 10013 is a permissions issue, I can deal with that...
[Screenshot: output when run as Administrator]
Run as Administrator it works! But something is wrong with the communication, because the host doing the lookup doesn't get the correct resolution back.
From what I can google, it looks as though Windows Firewall has an 'Anti-Spoofing' outbound filter, so these "Bytes sent" don't even make it to Wireshark.
I have created a Github repository, stuck the contents of the zip file in it, and this is where I ask for help. If you know 1) how to disable the Windows anti-spoofing filter or 2) how to circumvent it, please leave a comment here, file an issue on the repo, or email me directly.
The other thing is, if you want to improve the code, that would be awesome too – submit a pull request. I'd love to get this thing going again and make it into something that we can solidly use over a Meterpreter session.
Github repo: https://github.com/mubix/FakeNetBIOS
And if the only commit to this repo 5 years from now is "Initial commit", then at the very least it will be somewhere the next blogger who picks up the trail can get it from.
P.S. If you know how to solve the issue on XP, that would be an awesome fix as well.