No, what I’m more interested in is the new trend in breach disclosure. There are so many dynamics to this case that it merits special attention. First, RSA is one of the most well-known and generally respected security companies in the industry, which naturally heightens the scrutiny and interest. Second, the potential impact is significant – we’re not talking simple identity theft or credit card fraud here. Instead, the very backbone of many organizations’ authentication schemes (especially remote access authentication) is being threatened, which opens the door to a wholly different risk profile than simple data theft in an isolated case. So the stakes are high here, to say the least.
RSA gets that, of course. And so they’ve gone straight for the FUD choke hold – the APT. In Art Coviello’s disclosure letter to customers, he states that:
…our security systems identified an extremely sophisticated cyber attack in progress, targeting our RSA business unit…Our investigation has led us to believe that the attack is in the category of an Advanced Persistent Threat (APT).
And…….HERE……WE……GO! We’re off to the races, riding the dark horse of FUD for all that it’s worth. No one is really surprised by this – claiming to be the victim of an advanced threat targeting specific data? <Yawn> That’s the new hawtness in breach disclosure for large, concerned-about-public-image companies, for sure. But is it really that advanced? Let’s take a look at a brand new paper published by a group of folks at RSA that explains in detail a) what the APT is, b) how it works, and c) how to counter it effectively with “next generation SOC” concepts. Oh, the irony. Here’s a diagram from the paper, which can be found at RSA’s site:
So anyone care to comment on the “advanced nature” of this attack vector scenario? Let’s walk through it:
- Identifying “the mark”: Google queries, SEC paperwork, social media sites, online technical or business sites. Tools like Maltego can help put together recon data simply and quickly.
- Spear Phishing: Spoofed emails using SET. Easy to wrap up a Trojan or bot, even a modified netcat backdoor, in a PDF file. Wrappers, packers, and sploits, oh my! For a specific exploit, it’s trivial to create a customized Metasploit payload for distribution. Click–>Sploit–>Shell to me. Outbound on port 80, naturally.
- With my new happy shell, I scan, look for other opportunities, etc. (this addresses #3 in the diagram, “Organization mapping”). If I have professional-grade tools like Core Impact, it gets really simple not only to scan, but to pivot through other vulnerable systems. If not, I launch Ettercap or another MITM tool to snag some switched-environment traffic and hopefully capture NTLM exchanges, or maybe I just dump the local Admin hash on the box and guess that it might work across lots of other systems. Chances are, it does. So I’m already up to #4, maybe #5, in the “advanced” scenario described.
- I start siphoning data. Maybe it takes a week or two. Maybe a month. So #6, “D-day,” is really somewhat arbitrary. I can tunnel the data out through SSH, or use ICMP (outbound is usually just fine), or any other manner of tunneling. I don’t even need some 0-day attack or super-customized malware or bot kit (like Zeus) to accomplish this.
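Side note for defenders: the shared-local-Admin-hash trick in the mapping step above is trivially easy to spot if you ever bother to compare your own inventory. Here’s a minimal sketch – the function name and inventory format are my own assumptions, not any particular product – that flags hosts sharing the same local Admin hash:

```python
from collections import defaultdict

def find_shared_admin_hashes(host_hashes):
    """Group hosts by their local Admin password hash. Any group with
    more than one host means the password is reused, so one compromised
    box hands an attacker all of them."""
    groups = defaultdict(list)
    for host, admin_hash in host_hashes.items():
        groups[admin_hash].append(host)
    return {h: hosts for h, hosts in groups.items() if len(hosts) > 1}

# Hypothetical inventory: host name -> local Admin hash (truncated here)
inventory = {
    "ws-01":  "aad3b435b51404ee...",
    "ws-02":  "aad3b435b51404ee...",
    "srv-db": "31d6cfe0d16ae931...",
}
print(find_shared_admin_hashes(inventory))
```

If that dictionary comes back non-empty, the “guess it works everywhere” step of the scenario above works in your environment too.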
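And on the exfiltration bullet: ICMP tunneling in particular tends to stand out if anyone is looking, because normal echo requests carry small, fixed-size payloads. A crude heuristic – a sketch only; the threshold and the packet-record format are assumptions, not a production detector – might look like:

```python
# Flag sources sending ICMP echo requests with oversized payloads.
# Standard ping payloads are small (e.g., 32 or 56 bytes); tunneled
# data tends to ride in much larger payloads.
MAX_NORMAL_BYTES = 128  # assumption: tune for your environment

def suspicious_icmp_sources(packets):
    """packets: iterable of (src_ip, payload_len) tuples for ICMP
    echo requests. Returns sources sending oversized payloads."""
    return sorted({src for src, size in packets if size > MAX_NORMAL_BYTES})

print(suspicious_icmp_sources([("10.0.0.5", 56), ("10.0.0.9", 1024)]))
```

It won’t catch a careful attacker who pads to normal sizes, but it illustrates the point: this exfil channel is only invisible if nobody instruments for it.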
Is this what happened at RSA? I have no idea. Neither do you, chances are. Maybe the initial entry vector was SQLi or another Web-based attack. Maybe it WAS some 0-day that the attacker(s) had squirreled away. But folks…this is not that advanced, not for a decent pen tester. To the general public, sure. But RSA should not insult the security community with this kind of rhetoric, especially not while simultaneously releasing a position paper. So to RSA, I say – sorry it happened, but I’m inclined to believe you were not the victim of an APT. You were much more likely the victim of the SST – the Simple Short-term Threat. And 90% of what we read about likely falls into that category.
This is most definitely an effort to avoid looking negligent on RSA’s part. Whatever the initial attack vector was, there’s an extremely good chance that changes were made to the box where this started. Even if it was a memory-resident attack, at some point one of the systems in the attack/exploit chain would have been changed in some way. So my question is this: when, exactly, will we consider the lack of whitelisting software (perhaps in addition to traditional A/V) a negligent security stance? I won’t go so far out on a limb as to claim this will work every time, not by a long shot. But identifying changes to the box – new DLLs, changed files and directories, new open ports – is critical. Trying to pick up most of this on the network, especially a busy one with lots of HTTP and other common traffic, is next to impossible for a large organization. I’m not saying you shouldn’t watch the network – I’m just saying we have to focus on the host with MUCH more scrutiny than we have to date. And given how easy it is to fool A/V, we need a new plan.