Friday, October 28, 2011

Tools and Links

Not long ago, I started a FOSS page for this blog so that I wouldn't have to keep going back and searching for various tools...if I find something valuable, I simply post it to that page and don't have to go looking for it again. You'll notice that I don't have much in the way of descriptions posted yet, but those will come, and hopefully others will find the page useful. That doesn't mean the page is stagnant...not at all; I'll be updating it as time goes on.

Volatility
Melissa Augustine recently posted that she'd set up Volatility 2.0 on Windows, using this installation guide (and the EXE for Distorm3 instead of the ZIP file). Take a look, and as Melissa says, be sure to thoroughly read and follow the instructions for installing the various plugins. Thanks to Jamie Levy for providing such clear guidance/instructions, as I really think that doing so lowers the "cost of entry" for such a valuable tool. Remember..."there are more things in heaven and earth than are dreamt of in your philosophy." That is, performing memory analysis is a valuable skill to have, particularly when you have access to a memory dump, or to a live system from which you can dump memory. Volatility also works with hibernation files, from which considerable information can be drawn, as well.
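If you want to see what that looks like in practice once everything's installed, here's a minimal sketch of batch-running several plugins against a single memory dump and saving each plugin's output for later review. The image name, profile, and plugin list are just examples I picked for illustration...adjust them to your own case.

```python
# Sketch only: assumes vol.py (Volatility 2.0) is in the current directory, and
# that memory.dmp and the Win7SP1x86 profile are placeholders for your own case.
import subprocess

VOL = ["python", "vol.py"]
IMAGE = "memory.dmp"
PROFILE = "Win7SP1x86"          # must match the OS/service pack of the target

# Example plugins; save each plugin's output to its own file for later review
for plugin in ("pslist", "psscan", "dlllist", "netscan", "malfind"):
    with open(plugin + ".txt", "w") as out:
        subprocess.run(VOL + ["-f", IMAGE, "--profile=" + PROFILE, plugin],
                       stdout=out, check=False)
```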

WDE
Now and again, you may run across whole disk encryption (WDE), or encrypted volumes, on a system. I've seen these types of systems before...in some cases, the customer has simply asked for an image (knowing that the disk is encrypted), and in others, the only recourse for acquiring a usable image for analysis has been to log into the system as an Admin and perform a live acquisition.

A couple of tools you can use to check for encrypted volumes or WDE before you start an acquisition:
TCHunt
ZeroView from Technology Pathways, to detect WDE (scroll down on the linked page)
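If you're curious about how a tool like TCHunt goes about finding TrueCrypt containers, the general idea is statistical...an encrypted container has no recognizable file signature, its size is a multiple of 512 bytes, and its contents look effectively random. Here's a rough sketch of that kind of heuristic; this is not TCHunt's actual code, and the size threshold and entropy cutoff are just illustrative values.

```python
# Sketch of a TCHunt-style heuristic: walk a directory tree and flag files that
# are at least 1MB, a multiple of 512 bytes in size, and near-maximal in entropy.
# A real tool would also rule out files with known signatures (e.g., archives).
import math, os, sys

def entropy(data):
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    return -sum((c / len(data)) * math.log2(c / len(data)) for c in counts if c)

for root, _, files in os.walk(sys.argv[1]):
    for name in files:
        path = os.path.join(root, name)
        size = os.path.getsize(path)
        if size < 1024 * 1024 or size % 512:
            continue
        with open(path, "rb") as f:
            sample = f.read(65536)
        if entropy(sample) > 7.9:      # near-random content suggests encryption
            print(path, size)
```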

You can also determine if the system had been used to access TrueCrypt or PGP volumes by checking the MountedDevices key in the Registry (this is something that I've covered in my books). You can use the RegRipper mountdev.pl plugin to collect/display this information, either from a System hive extracted from a system, or from a live system that you've accessed via F-Response.
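If you'd rather script this than run RegRipper, here's a minimal sketch along the same lines as the mountdev.pl plugin, assuming the python-registry module and a System hive exported from the image (the hive path is a placeholder). The idea is simply to dump the MountedDevices values and eyeball the device strings; TrueCrypt-mounted volumes tend to stand out in that data.

```python
# Sketch only: requires the python-registry module; "SYSTEM" is a placeholder
# for the path to a System hive exported from the image.
from Registry import Registry

reg = Registry.Registry("SYSTEM")
key = reg.open("MountedDevices")

for value in key.values():
    data = value.value()
    # Value data is either a short binary disk signature/offset or a UTF-16LE
    # device string; TrueCrypt-mounted volumes typically include the string
    # "TrueCryptVolume" in that data.
    text = data.decode("utf-16-le", errors="ignore") if isinstance(data, bytes) else str(data)
    flag = "  <-- TrueCrypt?" if "TrueCryptVolume" in text else ""
    print("%-45s %s%s" % (value.name(), text.strip("\x00"), flag))
```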

Timelines
David Hull gave a presentation on "Atemporal timeline analysis" at the recent SecTorCA conference (you can find the presentation .wmv files here), and posted an abridged version of the presentation to the SANS Forensic blog (blog post here).

When I saw the title, the first thing I thought was...what? How do you talk about something independent of time in a presentation on timeline analysis? Well, even David mentions at the beginning of the recorded presentation that it's akin to "asexual sexual reproduction"...that is, the title is meant to be an oxymoron. In short, what the title seems to refer to is performing timeline analysis during an incident when you don't have any sort of time reference from which to start your analysis. This is sometimes the case...I've performed a number of exams with very little information from which to start my analysis, but finding something associated with the incident often gives me a point within the timeline from which to work, and the timeline then provides a significant level of context to the overall incident.

In this case, David said that the goal was to "find the attacker's code". Overall, the recorded presentation is a very good example of how to perform analysis using fls and timelines based solely on file system metadata, and of using tools such as grep to manipulate (as David mentions, "pivot on") the data. In short, the SANS blog post doesn't really address the use of "atemporal" within the context of the timeline...you really need to watch the recorded presentation to see how that term applies.
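As a simple illustration of that "pivot" idea, here's a sketch that takes a bodyfile produced by fls, finds every entry whose name matches a filename of interest, and then pulls out everything else that touched the file system within a few minutes of it. The bodyfile path and the pivot filename are hypothetical; the point is the technique, not the specifics.

```python
# Sketch only: pivot on a filename within a TSK bodyfile (e.g., produced via
# "fls -r -m C: image > bodyfile") and print every entry that falls within
# +/- 5 minutes of the pivot's timestamps.
import sys
from datetime import datetime, timedelta

PIVOT = "evil.dll"                      # hypothetical filename of interest
WINDOW = timedelta(minutes=5).total_seconds()

def parse(line):
    # TSK body format: MD5|name|inode|mode|uid|gid|size|atime|mtime|ctime|crtime
    f = line.rstrip("\n").split("|")
    return f[1], [int(x) for x in f[7:11]]

entries = [parse(l) for l in open(sys.argv[1]) if l.count("|") >= 10]

pivot_times = [t for name, times in entries if PIVOT in name for t in times if t > 0]
for name, times in entries:
    for t in times:
        if any(abs(t - p) <= WINDOW for p in pivot_times):
            print(datetime.utcfromtimestamp(t), name)
            break
```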

Sniper Forensics
Also, be sure to check out Chris Pogue's "Sniper Forensics v3.0: Hunt" presentation, which is also available for download via the same page. There are a number of other presentations there that would be very good to watch, as well...some address memory analysis. The latest iteration of Chris's "Sniper Forensics" presentations (Chris is getting a lot of mileage from these things...) makes a very important point regarding analysis...in a lot of cases, whether an artifact appears to be relevant to a case depends on the analyst's experience. A lot of analysts find "interesting" artifacts, but many of these artifacts don't relate directly to the goals of their analysis. Chris gives some good examples of an "expert eye"; in one slide, he shows an animal track. Most folks might not even really care about that track, but to a hunter, or to someone like me (I ride horses in a national park), the track tells a great deal about what to expect to see.

This applies directly to "Sniper Forensics"; all snipers are trained in observation. Military snipers are trained to quickly identify military objects, and to look for things that are "different". For example, snipers will be sent to observe a route of travel, and will recognize freshly turned earth or a pile of trash on that route when the sun comes up the next day...this might indicate an attempt to hide an explosive device.

How does this apply to digital forensic analysis? Well, if you think about it, it is very applicable. For example, let's say that you happen to notice that a DLL was modified on a system. This may stand out as odd, in part because it's not something that you've seen a great deal of...so you create a timeline for analysis, and see that there wasn't a system or application update at that time, which makes that DLL worth a much closer look.
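To make that a bit more concrete, here's the sort of quick check I'm describing...a sketch (not anything from Chris's presentation) that lists the most recently modified DLLs in a directory, so that a file changed outside of a known patch window stands out and can then be run down in the timeline. The directory path is just an example.

```python
# Sketch only: list the 20 most recently modified DLLs in a directory; the path
# shown is an example and would normally point at a mounted image, not a live C:\.
import os
from datetime import datetime

target = r"C:\Windows\System32"
dlls = []
for entry in os.scandir(target):
    if entry.is_file() and entry.name.lower().endswith(".dll"):
        dlls.append((entry.stat().st_mtime, entry.name))

for mtime, name in sorted(dlls, reverse=True)[:20]:
    print(datetime.fromtimestamp(mtime).isoformat(), name)
```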

Much like a sniper, a digital forensic analyst must be focused. A sniper observes an area in order to gain intelligence...enemy troop movements, civilian traffic through the area, etc. Is the sniper concerned with the relative airspeed of an unladen swallow? While that artifact may be "interesting", it's not pertinent to the sniper's goals. The same holds true for the digital forensic analyst...you may find something "interesting", but how does it apply to your goals? If it doesn't, get your scope back on the target.

Data Breach 'Best Practices'
I ran across this article recently on the GovernmentHealthIT site, and while it talks about breach response best practices, I'd strongly suggest that all four of these steps need to be performed before a breach occurs. After all, while the article specifies PII/PHI, regulatory and compliance organizations for those and other types of data (PCI) specifically state the need for an incident response plan (PCI DSS para 12.9 is just one example).

Item 1 is taking an inventory...I tell folks all the time that when I've done IR work, one of the first things I ask is, "Where is your critical data?" Most folks don't know. A few who did know have also claimed (incorrectly) that it was encrypted at rest. I've only been to one site where the location of sensitive data was known and documented prior to a breach, and that information not only helped our response analysis immensely, it also reduced the overall cost of the response (in fines, notification costs, etc.) for the customer.

While I agree with the sentiment of item 4 in the article (look at the breach as an opportunity), I do not agree with the rest of that item; i.e., "the opportunity to find all the vulnerabilities in an organization—and find the resources for fixing them."

Media Stuff
Brian Krebs has long followed and written on the topic of cybercrime, and one of his recent posts is no exception. I had a number of take-aways from this post that may not be intuitively obvious:

1. "Password-stealing banking Trojans" is ambiguous, and could be any of a number of variants. The "Zeus" (aka, Zbot) Trojan is mentioned later in the post, but there's no information presented to indicate that this was, in fact, a result of that specific malware. Anyone who's done this kind of work for a while is aware that there are a number of malware variants that can be used to collect online banking credentials.

2. Look at the victims mentioned in Brian's post...none of them is a big corporate entity. Apparently, the bad guys are aware that smaller targets are less likely to have detection and response capabilities (*cough*CarbonBlack*cough*). This, in turn, leads directly to #3...

3. Nothing in the post indicates that a digital forensics investigation was done on systems at the victim locations. With no data preserved, no actual analysis could be performed to identify the specific malware, and there's nothing on which law enforcement can build a case.

Finally, while the post doesn't specifically mention the use of Zeus at the beginning, it does end with a graphic showing detection rates of new variants of the Zeus Trojan over the previous 60 days; the average detection rate is below 40%. While the graphic is informative, it doesn't tell us which malware was actually used in these particular incidents...without preserved data and actual analysis, that remains speculation.

More Media Stuff
I read this article recently from InformationWeek that relates to the recent breach of NASDAQ systems; I specifically say "relates" to the breach, as the article specifies, "...two experts with knowledge of Nasdaq OMX Group's internal investigation said that while attackers hadn't directly attacked trading servers...". The title of the article includes the words "3 Expected Findings", and the article is pretty much just speculation about what happened, from the get-go. In fact, the article goes on to say, "...based on recent news reports, as well as likely attack scenarios, we'll likely see these three findings:". That's a lot of "likely" in one sentence, and this much speculation is never a good thing.

My concern is that the overall take-away from articles like this is going to be "NASDAQ trading systems were hit with SQL injection", and folks are going to go looking for that sort of thing...and some will find it. But others will miss what's really happening while they're looking in the wrong direction.

Other Items
F-Response TACTICAL Examiner for Linux now has a GUI
Lance Mueller has closed his blog; old posts will remain, but no new content will be posted