Tuesday, April 24, 2012

Vulnerability Management Evolution: Value-Add Technologies

Mike Rothman has this nice article on Value Added Technologies for Vulnerability Management.

Let’s look at a few emerging capabilities that really help make the information gleaned from scans and assessment more impactful to the operational decisions you make every day.

These capabilities are not common to all the leading vulnerability management offerings today. But we expect that most (if not all) will be core capabilities of these platforms in some way over the next 2-3 years, so watch for increasing M&A and technology integration for these functions.

Attack Path Analysis


If a tree falls in the woods and no one hears it, did it really fall? The same question can be asked about a vulnerable system: if an attacker can’t get to the vulnerable device, is it really vulnerable? The answer is yes, it’s still vulnerable, but clearly less urgent to remediate. So tracking which assets are accessible to a variety of potential attackers becomes critical for an evolved vulnerability management platform.

Typically this analysis is based on ingesting firewall rule sets and router/switch configuration files. With some advanced analytics the tool determines whether an attacker could (theoretically) reach the vulnerable devices. This adds a critical third leg to the “oh crap, I need to fix it now” decision process depicted below.
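
To make that concrete, here is a minimal sketch (ours, not any vendor’s implementation) of the reachability question at the heart of attack path analysis: model the network as a graph whose edges are connections permitted by firewall/router rules, then search for a path from the attacker’s starting point to the vulnerable asset. The zones and rules below are hypothetical.

from collections import deque

# Hypothetical network model: an edge exists only where firewall/router
# rules permit traffic between two zones or hosts.
allowed = {
    "internet":   ["dmz_web"],
    "dmz_web":    ["app_server"],
    "app_server": ["db_server"],
    "corp_lan":   ["app_server", "db_server"],
}

def attack_path(src, dst):
    # Breadth-first search over permitted connections; returns a path or None.
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        for nxt in allowed.get(path[-1], []):
            if nxt == dst:
                return path + [nxt]
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(attack_path("internet", "db_server"))
# ['internet', 'dmz_web', 'app_server', 'db_server'] -- fix that one first

Real products do this across thousands of rules and devices, but the core question is the same: can the attacker actually get there?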

(Image caption: “Oh crap, you hope you don’t get there.”)

Obviously most enterprises have fairly complicated networks, which means an attack path analysis tool must be able to crunch a huge amount of data to work through all the permutations and combinations of possible paths to each asset. You should also look for native support of the devices (firewalls, routers, switches, etc.) you use, so you don’t have to do a bunch of manual data entry – given the frequency of change in most environments, this is likely a complete non-starter. Finally, make sure the visualization and reports on paths present the information in a way you can use.

By the way, attack path analysis tools are not really new. They have existed for a long time, but never really achieved broad market adoption. As you know, we’re big fans of Mr. Market, which means we need to get critical for a moment and ask what’s different now that would enable the market to develop? First, integration with the vulnerability/threat management platforms makes this information part of the threat management cycle rather than a stand-alone function, and that’s critical. Second, current tools can finally offer analysis and visualization at an enterprise scale. So we expect this technology to be a key part of the platforms sooner rather than later; we already see some early technical integration deals and expect more.

Automated Pen Testing


Another key question raised by a long vulnerability report needs to be, “Can you exploit the vulnerability?” Like a vulnerable asset without a clear attack path, if a vulnerability cannot be exploited thanks to some other control or the lack of a weaponized exploit, remediation becomes less urgent. For example, perhaps you have a HIPS product deployed on a sensitive server that blocks attacks against a known vulnerability. Obviously your basic vulnerability scanner cannot detect that, so the vulnerability will be reported just as urgently as every other one on your list.

Having the ability to actually run exploits against vulnerable devices as part of a security assurance process can provide perspective on what is really at risk, versus just theoretically vulnerable. In an integrated scenario a discovered vulnerability can be tested for exploit immediately, to either shorten the window of exposure or provide immediate reassurance.
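
As a sketch of what that integration might look like (the functions here are stub placeholders, not any real scanner or exploit framework API; the point is the triage logic):

# Hypothetical sketch: validate scanner findings with vetted exploits.
# has_vetted_exploit() and run_exploit() are placeholder stubs, not a
# real product API.

def has_vetted_exploit(vuln):
    # placeholder: in reality, query your exploit framework's catalog
    return vuln.get("exploit") is not None

def run_exploit(vuln, safe_checks=True):
    # placeholder: in reality, launch the vetted exploit against the target
    return vuln.get("exploit") == "vetted"

def triage(scan_findings):
    confirmed, theoretical = [], []
    for vuln in scan_findings:
        if has_vetted_exploit(vuln) and run_exploit(vuln, safe_checks=True):
            confirmed.append(vuln)    # exploitable right now: fix first
        else:
            theoretical.append(vuln)  # blocked, or no weaponized exploit
    return confirmed, theoretical

findings = [{"cve": "CVE-2012-0002", "exploit": "vetted"},
            {"cve": "CVE-2011-3389", "exploit": None}]
print(triage(findings))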

Of course there is risk with this approach, including the possibility of taking down production devices, so use pen testing tools with care. But to really know what can be exploited and what can’t you need to use live ammunition. And be sure to use fully vetted, professionally supported exploit code. You should have a real quality assurance process behind the exploits you try. It’s cool to have an open source exploit, and on a test/analysis network using less stable code that’s fine. But you probably don’t want to launch an untested exploit against a production system. Not if you like your job, anyway.

Compliance Automation


In the rush to get back to our security roots, many folks have forgotten that the auditor is still going to show up every month/quarter/year and you need to be ready. That process burns resources that could otherwise be used on more strategic efforts, just like everything else. Vulnerability scanning is a critical part of every compliance mandate, so scanners have pumped out PCI, HIPAA, SOX, NERC/FERC, etc. reports for years. But that’s only the first step in compliance automation. Auditors need plenty of other data to determine whether your control set is sufficient to satisfy the regulation. That includes things like configuration files, log records, and self-assessment questionnaires.

We expect these platforms to offer increasingly robust compliance automation over time. That means a workflow engine to help you manage getting ready for your assessment. It means a flexible integration model to allow storage of additional unstructured data in the system. The goal is to ensure that when the auditor shows up your folks have already assembled all the data they need and can easily access it. The easier that is, the sooner the auditor will go away and let your folks get back to work.

Patch/Configuration Management


Finally, you don’t have to stretch to see the value of broader configuration and/or patch management capabilities within the vulnerability management platform. You are already finding what’s wrong (either vulnerable or improperly configured), so why not just fix it? Clearly there is plenty of overlap with existing configuration and patching tools, and you could just as easily make the case that those tools can and should add vulnerability management functions. Regardless of which platform ultimately handles vulnerability and patch/configuration, there are clear advantages to having a common environment.

There are also clear disadvantages to integrating finding and fixing these issues, mostly concerning separation of duties. Your auditors may have an issue with the same platform being used to figure out what’s wrong and then to fix it, but that’s a point of discussion for your next assessment. Clearly in terms of leverage and efficiency, the ability to find and fix problems in a single environment is attractive to understaffed shops (and who has resources to burn out there?) and folks who need to improve efficiency (everyone).

Benchmarking


We are big fans of benchmarking your environment against other organizations of similar size and industry to figure out where you stand. In fact, we wrote a paper last year on the topic. But compared to the other value-add capabilities we have already described, our research indicates that benchmarking isn’t the highest priority item. We still find great value in comparing configurations / patch windows / malware infections / any other metrics you gather against other companies, and we believe this should be another data point to help determine which actions need to be taken right now. But for the time being Mr. Market says this capability is a bit further off.

We understand that some of these capabilities already exist within your existing IT management stacks, so a vulnerability management platform should cleanly integrate with those existing functions. In fact, we will discuss those logical integration points next. But for now suffice it to say that smaller organizations (and enterprises acting like small organizations) may favor leveraging a single platform to provide a closed-loop ability to find problems and fix them without having to deal with multiple systems. So over time it’s quite possible that vulnerability management platforms will evolve into a broader IT ops platform – offering deeper asset management, performance management, the aforementioned configuration/patch capabilities, and other more traditional IT Ops functions.

Will the evolved vulnerability management platform ever really get to this point, of being everything to everyone? Probably not – the natural order of the IT business means only one, or maybe two of the current players are likely to remain stand-alone entities. The rest will likely be acquired by larger providers and integrated into the acquirer’s stack, or they will go away altogether.

But regardless of how much stuff is built into the vulnerability management platform, organizations are unlikely to move all their technologies into this platform in a flash cut. So for the foreseeable future these platforms need to play nicely with other management technologies. In the next post we will tackle the integration points with these systems, and define a set of interfaces and standards that need to be supported by the platform.

- Mike Rothman

Friday, April 20, 2012

Vulnerability Management Evolution: Core Technologies

Interesting Read on Securosis blog about Vulnerability Management

As we discussed in the last couple posts, any VM platform must be able to scan infrastructure and scan the application layer. But that’s still mostly tactical stuff. Run the scan, get a report, fix stuff (or not), and move on. When we talk about a strategic and evolved vulnerability management platform, the core technology needs to evolve to serve more than merely tactical goals – it must provide a foundation for a number of additional capabilities. Before we jump into the details we will reiterate the key requirements. You need to be able to scan/assess:

  1. Critical Assets: This includes the key elements in your critical data path; it requires both scanning and configuration assessment/policy checking for applications, databases, server and network devices, etc.
  2. Scale: Scalability requirements are largely in the eye of the beholder. You want to be sure the platform’s deployment architecture will provide timely results without consuming all your network bandwidth.
  3. Accuracy: You don’t have time to mess around, so you don’t want a report with 1,000 vulnerabilities, 400 of them false positives. There is no way to totally avoid false positives (aside from not scanning at all), so accuracy is a key selection criterion.

Yes, that was pretty obvious. With a mature technology like vulnerability management the question is less about what you need to do and more about how – especially when positioning for evolution and advanced capabilities. So let’s first dig into the foundation of any kind of strategic platform: the data model.

Integrated Data Model


What’s the difference between a tactical scanner and an integrated vulnerability/threat management platform? Data sharing, of course. The platform needs the ability to consume and store more than just scan results. You also need configuration data, third party and internal research on vulnerabilities, research on attack paths, and a bunch of other data types we will discuss in the next post on advanced technology. Flexibility and extensibility are key for the data schema. Don’t get stuck with a rigid schema that won’t allow you to add whatever data you need to most effectively prioritize your efforts – whatever data that turns out to be.
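
As a sketch of what flexible and extensible might mean in practice, think of a core asset record with an open-ended attribute bag, so new data types (configuration state, attack path data, third party research) can be attached without a schema change. The field names here are illustrative, not any product’s actual schema.

from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class Asset:
    asset_id: str
    hostname: str
    importance: int = 1                                  # 1 (low) to 5 (critical)
    findings: List[dict] = field(default_factory=list)   # scan results
    extra: Dict[str, Any] = field(default_factory=dict)  # open-ended extensions

crm_db = Asset("a-001", "crm-db-01", importance=5)
crm_db.findings.append({"cve": "CVE-2012-1182", "cvss": 10.0})
crm_db.extra["config_assessment"] = {"benchmark": "CIS", "failed_checks": 12}
crm_db.extra["attack_path"] = ["internet", "dmz_web", "crm-db-01"]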

Once the data is in the foundation, the next requirement involves analytics. You need to set alerts and thresholds on the data and be able to correlate disparate information sources to glean perspective and help with decision support. We are focused on more effectively prioritizing security team efforts, so your platform needs analytical capabilities to help turn all that data into useful information.

When you start evaluating specific vendor offerings you may get dragged into a religious discussion of storage approaches and technologies. You know – whether a relational backend, an object store, or even a proprietary flat file system provides the performance, flexibility, etc. to serve as the foundation of your platform. Understand that it really is a religious discussion. Your analysis efforts need to focus on the scale and flexibility of whatever data model underlies the platform.

Also pay attention to evolution and migration strategies, especially if you plan to stick with your current vendor as they move to a new platform. This transition is akin to a brain transplant, so make sure the vendor has a clear and well-thought-out path to the new platform and data model. Obviously if your vendor stores their data in the cloud it’s not your problem, but don’t put the cart before the horse. We will discuss cloud versus customer premises later in this post.

Discovery


Once you get to platform capabilities, first you need to find out what’s in your environment. That means a discovery process to find devices on your network and make sure everything is accounted for. You want to avoid the “oh crap” moment, when a bunch of unknown devices show up – and you have no idea what they are, what they have access to, or whether they are steaming piles of malware. Or at least shorten the window between something showing up on your network and the “oh crap” discovery moment.

There are a number of techniques for discovery, including actively scanning your entire address space for devices and profiling what you find. That works well enough and tends to be the main way vulnerability management offerings handle discovery, so active discovery is still table stakes for VM offerings. You need to balance the network impact of active discovery against the need to quickly find new devices. Also make sure you can search your networks completely, which means both your IPv4 space and your emerging IPv6 environment. Oh, you don’t have IPv6? Think again. You’d be surprised at the number of devices that ship with IPv6 active by default, and if you don’t plan to discover that address space as well you’ll miss a significant attack surface. You never want to hold up a network deployment while your VM vendor gets their act together.
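
For flavor, here is a bare-bones active discovery sketch using only the Python standard library: attempt TCP connections on a few common ports across both IPv4 and IPv6. Real scanners are far more sophisticated (ICMP, ARP, fingerprinting), and the addresses below are documentation examples only.

import socket

COMMON_PORTS = [22, 80, 443, 445]

def probe(host, port, timeout=0.5):
    # getaddrinfo resolves IPv4 and IPv6 addresses transparently
    try:
        for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
                host, port, type=socket.SOCK_STREAM):
            with socket.socket(family, socktype, proto) as s:
                s.settimeout(timeout)
                if s.connect_ex(sockaddr) == 0:
                    return True
    except socket.gaierror:
        pass
    return False

for host in ["192.0.2.10", "2001:db8::10"]:   # example addresses only
    open_ports = [p for p in COMMON_PORTS if probe(host, p)]
    if open_ports:
        print("discovered %s, open ports: %s" % (host, open_ports))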

You can supplement active discovery with a passive capability that monitors network traffic and identifies new devices based on network communications. Depending on the sophistication of the passive analysis, devices can be profiled and vulnerabilities can be identified, but the primary goal of passive monitoring is to find new unmanaged devices faster. Once a new device is identified passively, you could then launch an active scan to figure out what it’s doing. Passive discovery is also helpful for devices that use firewalls to block active discovery and vulnerability scanning.
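
Here is a passive discovery sketch using Scapy (a real packet manipulation library; sniffing requires root privileges). It simply flags source addresses it hasn’t seen before – real products also profile the device and its services from the same traffic.

from scapy.all import sniff, IP   # requires scapy and root privileges

seen = set()

def note_new_device(pkt):
    if IP in pkt and pkt[IP].src not in seen:
        seen.add(pkt[IP].src)
        print("new device observed passively: %s" % pkt[IP].src)
        # here you might queue an active scan to profile the device

sniff(prn=note_new_device, store=0)   # store=0: don't buffer packets in memory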

But that’s not all – depending on the breadth of your vulnerability/threat management program you might want to include endpoints and mobile devices in the discovery process. We always want more data, so we favor discovering all the assets in your environment. That said, for determining what’s important in your environment (see the asset management/risk scoring section below), endpoints tend to be less important than databases with protected data, so prioritize the effort you expend on discovery and assessment accordingly.

Finally, another complicating factor for discovery is the cloud. With the ability to spin up and take down instances at will, your platform needs to both track and assess cloud resources, which requires integrating with cloud consoles to make sure your platform knows about new devices and can assess them appropriately. This is an emerging capability, but realistically you’ll see a lot more private and public cloud-based resources in your environment.
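
With AWS, for example, the boto3 SDK can enumerate running instances so newly spun-up systems join the scan inventory. A minimal sketch (the enqueue_scan hook is hypothetical):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Pull every running instance so new cloud systems join the scan inventory
resp = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}])
for reservation in resp["Reservations"]:
    for inst in reservation["Instances"]:
        ip = inst.get("PrivateIpAddress")
        print("cloud asset: %s at %s" % (inst["InstanceId"], ip))
        # enqueue_scan(ip)   # hypothetical hook into the VM platform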

Asset Management and Risk Scoring


The key capability of the evolved vulnerability management platform is its ability to help you prioritize efforts, so any calculation of a risk score largely depends on 1) the ‘importance’ of the asset and 2) how ‘exposed’ it is to attack at any given point in time. Evaluating what’s important is really an asset management function. Of course many operations teams run extensive asset management efforts. The VM platform can and should take advantage of any existing resources and integrate with those tools. But many organizations don’t have an existing asset database (scary as that sounds), so the VM platform may need to serve as the authoritative registry of IT assets. Either way, the platform needs to store and/or access asset information.

Once you have the assets defined in the system the next step is to tag, group and/or categorize them. The more flexible the system the better; every organization groups their assets differently so your platform should support the way you categorize assets – not force you to fit your assets into their vendor-defined buckets. Assign an (admittedly subjective) importance to each group or category of assets. We suggest a simple approach, with 3 or 5 levels of importance. Really important means someone would be fired, while some assets are simply unimportant. You don’t need complexity or fine precision, but you at least need to identify devices which hold (or have access to) critical data.

As you evaluate the vulnerability of each asset through the platform’s various tests you can determine a risk score to drive prioritization.
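
A toy version of that calculation, using the simple 1-5 importance scale suggested above and a couple of exposure factors; the weights are illustrative, and the whole point is that you should be able to tune them:

def risk_score(importance, cvss, exploit_available, reachable):
    # Toy prioritization: asset importance (1-5) times exposure factors.
    # Weights are illustrative -- the platform should let you define them.
    exposure = cvss / 10.0
    if exploit_available:
        exposure *= 1.5       # weaponized exploit exists
    if not reachable:
        exposure *= 0.3       # no attack path: still vulnerable, less urgent
    return importance * exposure

# Critical CRM database, weaponized exploit, reachable from the internet:
print(risk_score(importance=5, cvss=10.0, exploit_available=True, reachable=True))
# 7.5 -- top of the fix-now list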

The point here is flexibility. You want to group assets in a way that makes sense for your organization. You want to derive a risk score based on your calculation of risk, not a black box calculation that may or may not be relevant for your organization. And you need the ability to change everything the next time a significant technology or organizational disruption happens, like cloud computing or a big M&A deal.

To Cloud or Not to Cloud


The next aspect of the core technology underlying the evolved vulnerability/threat management platform is the cloud buzzword. If you thought people got religious about data models and engines, ask a cloud vendor about an on-premise solution or vice-versa. That’s always fun. At the end of the day, this cloud discussion involves two things.

  1. Scale: You will hear a lot from cloud-based providers about infinite scale and the limitations of customer premise-based offerings. It is true that scalability is the vendor’s problem in a cloud scenario. That offers some advantages, but any solution can scale with a suitable deployment architecture.
  2. Technology Updates/Change: The other big message you’ll hear from cloud bigots is that cloud platforms handle software updates more quickly and transparently than gear on your own premises. Again, there is truth to this, but every vulnerability management vendor has been sending new rules and tests to its devices for years, so it’s not like they haven’t figured out software distribution.

These two objections to customer premises-based solutions are really much ado about nothing. The ‘decision’ isn’t really a decision at all – what is and isn’t ‘cloud’ nowadays is largely a matter of semantics. Let’s get back to your requirements. You need to be able to test your environment from outside – most attackers are outside your perimeter. That works best with a cloud service. But you also need a presence within your perimeter to scan internal devices, especially those on protected networks. So every cloud service must include an on-site component for internal scans.

That on-site component might be a dedicated appliance, a virtual machine, a dynamic instance downloaded to a device inside the network at scan time, or a combination. Ultimately the deployment model is beside the point – choose the model that best fits your operational processes. There is no point in getting religious about deployment models, so the leading platform vendors will offer hybrid approaches to meet your specific needs. If it’s easier to provision the device once and let ops deal with it, then opt for the internal scanning appliance or dedicate a VM to scanning in your virtual data center.

But don’t get caught up in hype. You need an external component to test your environment from the outside and an internal component for testing inside your perimeter.

To Agent or Not to Agent. To Credential or Not to Credential.


You will also hear a lot about agent vs. agent-less scanning. This is also mostly hyperbole and semantics. In order to do any kind of granular scan of a device, you need a persistent agent on the device, the ability to download a temporary agent, or full administrator rights (credentials) to the device to remotely poll it for the things you are looking for (configurations, patches, logs, etc.). As usual, the answer is all of the above. A temporary agent has advantages, notably not having to manage persistent software on every device you worry about. But ultimately the scanning model you choose depends on your access to the device, the type of device, and what kinds of data it has access to.

When thinking about credentialed vs. non-credentialed scans, the answer is also both. Non-credentialed scans give you the external attacker’s view, but of course there are limits to the detail that can be gleaned from a non-credentialed scan. So to gain a full understanding of the security posture of a device you also want a credentialed scan with full access to configurations, patch levels, logs, entitlements, applications, etc.

Keep in mind that you cannot actively scan certain devices. Think brittle control systems which fall over under the onslaught of a vulnerability scan. So it’s probably not in your best interest to scan those devices. Above we mentioned passively discovering assets by monitoring the network. A similar approach can be used to find vulnerabilities on devices you can’t actively scan. Obviously it doesn’t provide the same detail as a credentialed scan, but if the alternative is knocking down the device any data is better than no data.

Security Research


Finally, any vulnerability/threat management platform needs to be driven by research. Things move fast in the attack space and your threat management tools need to stay current. So your vendor needs to make considerable investments in a dedicated team to track the field, observe and analyze new attacks, figure out how to search for those attacks using their tools, ensure the quality of their tests to minimize false positives, and finally get the tests into your hands as quickly as possible. For a more granular view into the process of analyzing attacks and malware check out the Analyze Malware subprocess in Malware Analysis Quant. It provides an idea of what’s involved in profiling malware files and figuring out how to find them in your environment.

To compare research groups, evaluate the sophistication of their analysis. Do you understand how to remediate issues that your scanner finds? Can you determine the seriousness of the attack? Do you believe them? Is the data just coming from the vendor, or do they integrate third party data? And most importantly, do they provide coverage for the assets in your environment? You know, the OSes, databases, and critical applications that drive your business.

There is a lot to the evolved vulnerability/threat management platform. But we see these capabilities as table stakes. A lot of innovation is happening in this space, and advanced – and in some cases adjacent – technologies will be the focus of the next post. We will dive into capabilities such as attack path analysis, penetration testing, and benchmarking.

- Mike Rothman

Thursday, April 19, 2012

Rootdabitch: Linux root password Bruteforcer tool

r00tw0rm hacker “th3breacher!” has just released Rootdabitch version 0.1, a multithreaded Linux/UNIX tool for brute-force cracking of the local root password through su, built on sucrack.
sucrack is a multithreaded Linux/UNIX tool for brute-force cracking local user accounts via su. The main feature of Rootdabitch is that it is a local brute forcer, trying up to 10 passwords in 3 seconds.

And just when su seemed like the safe way to go.

Tuesday, April 10, 2012

RCE root in all current Samba versions

This just came in from Spider Labs: (sounds like a high priority item, just got added to my list)


While perusing the change log for the release of Samba that was pushed out today, a member of the SpiderLabs team (Rodrigo Montoro) noticed a CVE number in the change log. When we dug a little deeper we found that this CVE was for a remote root exploit! And it’s pretty nasty. It affects all current versions of Samba from 3.0 onward and will allow remote code execution as the root user without an authenticated connection. That’s about as bad as bad can get when you’re talking about vulnerabilities.
The flaw can be found in the code generator for the remote procedure call layer, which can generate vulnerable code. A specially crafted RPC call can then cause the server to execute arbitrary code. The worst part is that it doesn’t require any sort of authenticated connection to the server! In addition, a high quality proof of concept has already been released, basically making this point and click.
Nasty.
And it gets worse; Samba is everywhere that Linux is. Got a NAS device on your network with an embedded Linux server? You probably have Samba, and you probably can’t update it since it’s embedded.
Don’t forget that OS X used to include Samba up to OS X 10.6, but it looks like Apple stopped updating the version it used at 3.0.x, meaning that this time Apple’s failure to keep up with the latest and greatest might have saved its bacon.
So what can you do? Update your Samba to one of the new versions, if you can. If you can’t upgrade because your Linux is embedded, seriously consider replacing your device; yes, this is that bad. If your servers are in production and you can’t risk the update right now, edit the ‘hosts allow’ parameter inside smb.conf to restrict access. Editing smb.conf should not be seen as a complete fix but only as a way to help mitigate an attack.
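
As an illustration, the restriction in smb.conf might look like this (the networks are examples; again, this mitigates rather than fixes the flaw):

# /etc/samba/smb.conf -- example mitigation only, not a fix
[global]
    hosts allow = 127.0.0.1 192.168.1.0/255.255.255.0
    hosts deny  = ALL
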
This is one of the most severe vulnerabilities we have seen in a while: it affects a wide variety of systems, already has a proof of concept, and gives an attacker a lot of access – root access!

Wednesday, April 4, 2012

Defining Your iOS Data Security Strategy- securosis

Defining Your iOS Data Security Strategy:
Now that we’ve covered the different data security options for iOS it’s time to focus on building a strategy. In many ways figuring out the technology is the easy part of the problem – the problems start when you need to apply that technology in a dynamic business environment, with users who have already made technology choices.

Factors


Most organizations we talk with – of all sizes and in all verticals – are under intense pressure to support iOS, to expand support of iOS, or to wrangle control over data security on iDevices already deployed and in active use. So developing your strategy depends on where you are starting from as much as on your overall goals. Here are the major factors to consider:

Device ownership


Device ownership is no longer a simple “ours or theirs”. Although some companies are able to maintain strict management of everything that connects to their networks and accesses data, this is becoming the exception rather than the rule. Nearly all organizations are being forced to accept at least some level of employee-owned device access to enterprise assets, whether that means remote access from a home PC or access to corporate email on an iPad.

The first question you need to ask yourself is whether you can maintain strict ownership of all devices you support – or if you even want to. The gut instinct of most security professionals is to only allow organization-owned devices, but this is rarely a viable long-term strategy. On the other hand, allowing employee-owned devices doesn’t require you to give up on enterprise ownership completely.

Many of the data security options we have discussed work in a variety of scenarios. Here’s how to piece together your options:

  • Employee owned devices: Your options are either partially managed or unmanaged. With unmanaged devices you have few viable security options and should focus on sandboxed messaging, encryption, and DRM apps. Even if you use one of these options, it will be more secure if you use even minimal partial management to enable data protection (by enforcing a passcode), enable remote wipe, and install an enterprise digital certificate. The key is to sell this option to users, as we will detail below.
  • Organization owned devices: These fall into two categories – general and limited use. Limited use devices are highly restricted and serve a single purpose, such as flight manuals for pilots, mobility apps for health care, or sales/sales engineering support. They are locked down with only necessary apps running. General use devices are issued to employees for a variety of job duties and support a wider range of applications. For data security, focus on the techniques that manage data moving on and off devices – typically managed email and networking, with good app support for what employees need to get their jobs done.

If the employee owns the device you need to get their permission for any management of it. Define simple clear policies that include the following points:

  • It is the employee’s device, but in exchange for access to work resources the employee allows the organization to install a work profile on the device.
  • The work profile requires a strong passcode to protect the device and the data stored on it.
  • In the event the device is lost or stolen, you must report it within [time period]. If there is reasonable belief the device is at risk [employer] will remotely wipe the device. This protects both personal and company data. If you use a sandboxed app that only wipes itself, specify that here.
  • If you use a backhaul network, detail when it is used.
  • Devices cannot be shared with others, including family.
  • How the user is allowed to back up the device (or a recommended backup option).

Emphasize that these restrictions protect both personal and organizational data. The user must understand and accept that they are giving up some control of their device in order to gain access to work resources. They must sign the policy, because you are installing something on their personal device, and you need clear evidence they know what that means.

Culture


Financial services companies, defense contractors, healthcare organizations, and tech startups all have very different cultures. Some expect and accept much more tightly restricted access to employer resources, while others assume unrestricted access to consumer technology.

Don’t underestimate culture when defining your strategy – we have presented a variety of options on the data security spectrum, and some may not work with your particular culture. If more freedom is expected, look to sandboxed apps. If management is accepted, tighter device control lets you support a wider range of work activities.

Sensitivity of the data


Not every organization has the same data security needs. There are industries with information that simply shouldn’t be allowed onto a mobile device with any chance of loss. But most organizations have more flexibility.

The more sensitive the data, the more it needs to be isolated (or restricted from being on the device). This ties into both network security options (including DLP to prevent sensitive data from going to the device) and messaging/file access options (such as Exchange ActiveSync and sandboxed apps of all flavors).

Not all data is equal. Assess your risk and then tie it back into an appropriate technology strategy.

Business needs and workflow


If you need to exchange documents with partners, you will use different tools than if you only want to allow access to employee email. If you use cloud storage or care about document-level security, you may need a different tool.

Determine what the business wants to do with devices, then figure out which components you need to support that. And don’t forget to look at what they are already doing, which might surprise you.

Existing infrastructure


If you have backhaul networks or existing encryption tools, that may incline you in a particular direction. Document storage and sharing technologies (both internal and cloud) are also likely to influence your decision.

The trick is to follow the workflow. As we mentioned previously, you should map out existing and desired employee workflows. These will show you where they intersect with your infrastructure, which will further feed your strategy requirements.

Compliance


Will the device access any data or applications with compliance ramifications? If so, it may need to meet specific regulatory requirements, which could include anything from encryption to email archiving – or even restricting the devices completely.

Make a decision


Here is a suggested process to pull the factors together:

  1. Determine the ownership model to support – personal, employer, or both.
  2. Determine which devices to support (we focused on iOS, but your options may change with additional device types).
  3. Identify business processes and applications to support. This includes:
    a. Email and communications.
    b. Data repositories.
    c. Enterprise applications.
    d. External services, such as cloud storage and SaaS applications.
  4. Map out business workflows for the identified processes.
  5. Determine data security and compliance requirements for identified data and workflows. These should include how the data needs to be stored (e.g., encrypted), where it can be exchanged (e.g., email to external parties), and where it can be accessed.
  6. Map business workflows first to device (where the data may transfer onto the device) and then to the on-device workflow (which apps are used). Don’t map your security controls yet – for now it is more about figuring out how employees want to use the data on the device.
  7. Identify potential security controls/tools to enforce security requirements at each step of each identified workflow.
  8. Review and determine which tool categories to support.
  9. Identify and select specific tools.

You’ll notice that although we opened with a discussion of information-centric security, at this point we are more concerned with identifying the workflows involved. That’s because we need to bridge the business and security requirements – to protect the data we need to know how it’s used, and how employees want to use it. The best data security in the world is useless if it interferes so much with business process that it kills off what the business wants to do, or users decide they need to work around it.

Conclusion


iPhones, iPads, and cloud computing are the 1-2-3 punch knocking down our traditional expectations for securing enterprise data and managing employee devices and services. Simultaneously, this is creating new opportunities for information-centric security approaches we have long ignored as we fixated on our fantasy of the enterprise perimeter. I am firmly convinced that these new models create more security opportunities than security risks.

But it is a challenge every time we face intense pressure to support new things in a short time frame.

The good news is that iOS is a relatively secure platform that is completely suitable for most organizations. Of course it isn’t perfect, and employee ownership and expectations further complicate the situation. For some organizations, the risks are still simply too great.

For the rest of us who want to embrace iOS, we have tools available to do so securely, across a range of deployment scenarios – from something as simple as filtering out sensitive emails before they hit the iPhone, to something as complex as multi-organization secure document workflows. Hopefully this series has given you some good starting tips, and as new technologies appear we will try to keep it up to date.

- Rich

Monday, April 2, 2012

Rootcon Blog: Introducing 35 Pentesting Tools Used for Web Sec Assessments


Original post here

1. w3af

w3af or Web Application Attack and Audit Framework is an open source penetration testing tool for finding web vulnerabilities, and an exploit tool that comes with cool plugins like sqlmap, xssBeef, and davShell. w3af automatically updates itself every time you launch it, making it a very reliable tool for website hacking. For more information just check out their website hosted at SourceForge.
2. Acunetix Web Vulnerability Scanner

Acunetix WVS or Web Vulnerability Scanner is a pentesting tool for Windows that checks for SQL Injection, Cross Site Scripting (XSS), CRLF injection, code execution, directory traversal, file inclusion, vulnerabilities in file upload forms, and other serious web vulnerabilities. You can download this tool here.

3. SQLninja

SQLninja is an SQL injection tool for web applications that use Microsoft SQL Server as the back end, though it runs only on Linux, Mac, and BSD. It requires the Perl modules NetPacket, Net-Pcap, Net-DNS, Net-RawIP, and IO-Socket-SSL. You can download this tool here.

4. Nikto



Nikto is an open source web server scanner “which performs comprehensive tests against web servers for multiple items, including over 6400 potentially dangerous files or CGIs, checks for outdated versions of over 1200 servers, and version specific problems on over 270 servers.” The good thing about Nikto is that it is easy to use and scans quickly. Nikto is coded in Perl and written by Chris Sullo and David Lodge. Not every check indicates a big security problem, but many do, such as XSS (Cross Site Scripting) vulnerabilities, exposed phpMyAdmin logins, etc. Nikto alerts you and gives security tips to help protect your website from various attacks.
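
A typical Nikto run is as simple as pointing it at a host (example target, obviously):

nikto -h http://www.example.com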

5. SQLmap



SQLmap is an open source automatic SQL injection and database takeover tool that fully supports MySQL, Oracle, PostgreSQL and Microsoft SQL Server. It partially supports Microsoft Access, DB2, Informix, Sybase and Interbase. Download sqlmap here.
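
A typical first sqlmap run against a suspect parameter looks like this (example URL; --batch accepts default answers, --dbs enumerates databases):

sqlmap -u "http://www.example.com/page.php?id=1" --batch --dbs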


6. Pangolin 3.2.3

Pangolin is another SQL injection scanner for web applications using Access, DB2, Informix, Microsoft SQL Server 2000, Microsoft SQL Server 2005, Microsoft SQL Server 2008, MySQL, Oracle, PostgreSQL, SQLite3, and Sybase. Its features include keyword auto analysis, HTTPS support, a firewall bypass setting, an injection digger, a data dumper, etc. You can download its zip file here.

7. Havij v1.15 Advanced SQL Injection



Havij is another famous automatic SQL injection tool, available in free and premium versions. The free version only supports a few injection methods: MsSQL 2000/2005 with error, MsSQL 2000/2005 no error union based, MySQL union based, MySQL blind, MySQL error based, MySQL time based, Oracle union based, MsAccess union based, and Sybase (ASE). It also includes an admin finder and an MD5 cracker.


8. SQL Power Injector 

SQL Power Injector is a web pentesting application created in .NET 1.1 that helps penetration testers and hackers find and exploit SQL injections on web applications that use SQL Server, Oracle, MySQL, Sybase/Adaptive Server, or DB2, though it is possible to use it with any existing database management system when using the inline injection or normal mode. You can download the latest version of this tool, which includes a Firefox plugin, here.

9. VulnDetector

VulnDetector is a project coded in Python that scans a website and detects various web based security vulnerabilities in it. It was developed by Brad Cable, who is into coding open source tools. You can download the script here.

10. SQLIer 0.8.2b


SQLIer is another project of Brad Cable: a shell script that, given a URL, determines all the information necessary to build and exploit an SQL injection vulnerability by itself, without user interaction, unless it can’t guess the table or field names for the database correctly. SQLIer can build a UNION SELECT query designed to brute force passwords out of the database. The script also avoids quotes in its exploits, meaning it will work on a wider range of sites. Download the shell script here.

11. bsqlbf-v2

bsqlbf-v2 or Blind Sql Injection Brute Forcer version 2 is a Perl script that extracts data through blind SQL injections. It accepts custom SQL queries as a command line parameter and works for both integer and string based injections. It supports MySQL, Oracle, PostgreSQL and Microsoft SQL Server databases. You can download the Perl script from its Google-hosted project page.

12. Marathon Tool 

Marathon Tool is an alpha release SQL injection tool that extracts information from web applications using Microsoft SQL Server, Microsoft Access, MySQL or Oracle databases via time-based blind SQL injection attacks. The alpha release can be found here.

13. XSSer 



XSSer or Cross Site "Scripter" is an automatic -framework- to detect, exploit and report XSS vulnerabilities in web-based applications. It also includes a GUI, launched with the command ./xsser --gtk. You can download XSSer's beta version here.

14. ASP Auditor v2.2



ASP Auditor v2.2 is an auditing tool for ASP that sends an initial probe request, a path discovery request, an ASP.NET validate discovery request, an ASP.NET Apr/07 XSS check, an application trace request, and a null remoter service request. The -bf option lets you brute force the ASP.NET version using JS Validate directories.

15.Absinthe

"Absinthe is a GUI-based tool that automates the process of downloading the schema and contents of a database that is vulnerable to Blind SQL Injection.    This tool does not aid in the discovery of SQL Injection holes but speeds up the process of data recovery." It supports Microsoft SQL Server, MSDE, Oracle, and Postgres and the tool runs on Linux, Windows and Mac OSX. Download here.

16. SQID

SQID or SQL injection digger is a command line tool written in Ruby by the Metaeye Security Group that looks for SQL injections and common errors in web sites. It can perform Google searches to find SQL injections and common errors in web site URLs, and can also crawl web pages. You can download this tool by checking out its project SVN:

svn checkout svn://rubyforge.org/var/svn/sqid 

17.DarkMySQLi



DarkMySQLi is a multi-purpose MySQL injection tool coded in Python, which is also available in BackTrack 5 as one of its bundled tools.

18. fimap 



fimap is an automatic LFI/RFI scanner and exploiter coded in Python by Iman Karim. It allows a pentester to scan a single URL for file inclusion errors, scan a list of URLs for file inclusion errors, scan Google search results for file inclusion errors, and harvest all the links of a webpage with a recursion level of 3, writing the URLs to a file.

19.Script Hex Dump – Forensic Tool



Script Hex Dump - Forensic Tool is a Java application that helps you parse scripts such as PHP and automatically converts them to hex values; some penetration testers use it to test for possible SQL injection vulnerabilities in a website. SQL injection has been a chronic threat, especially for websites running PHP with MySQL as the database back end; if the server is not properly configured, one dangerous capability is the command for writing arbitrary files. You can download this tool here.

20. PHP Vulnerability Hunter



PHP Vulnerability Hunter is a PHP web application fuzzer that scans for common vulnerabilities like local file inclusion, SQL injection, full path disclosure, arbitrary command execution and many more. A good tool for analyzing your own web server. You can grab the new version of this tool, 1.1.4.6, here.

21. WSTOOL : Web vulnerable scan tool



WSTOOL is a scanner for server errors, SQL injection, and XSS (Cross Site Scripting), written in PHP, which collates its checks with HTML forms and links. You can download this tool here.

22. ProjectX WHMCS Pentesting Tool v.1





ProjectX WHMCS Pentesting Tool v.1 is a vulnerability scanner coded in VB.NET that uses a black box approach. It echoes the db_username and db_password of a website that is vulnerable to the WHMCS Local File Disclosure. This vulnerability only applies to versions 3.x.x and some 4.x.x; it was a viral exploit last year that some website administrators took for granted. You can download the tool here.

23. Wpscan 



WPScan or WordPress Security Scanner is a pentesting tool written in Ruby for WordPress installations. The tool is coded by Ryan Dewhurst and uses a black box approach to find security holes in WordPress, like the timthumb vulnerability, easy-to-guess passwords, plugin holes, etc. You can download wpscan here.

24. Skipfish


Skipfish is an active web application security reconnaissance tool written by Michal Zalewski. Skipfish spiders a URL using wordlists; it is a very powerful web scanning tool with a simple implementation. It also scans for vulnerabilities like PHP injection, XSS, format string vulnerabilities, overflow vulnerabilities, file inclusions, etc. You can download this tool here.


25. WhatWeb



WhatWeb is a web scanner coded by Andrew Horton aka urbanadventurer from Security-Assessment.com. It is used for information gathering because it identifies content management systems (CMS), blogging platforms, stats/analytics packages, javascript libraries, servers, etc. You can download this tool here.

26. OWASP ZAP 

Zed Attack Proxy (ZAP) is an OWASP project: a GUI penetration testing tool for finding website vulnerabilities and flaws. This open source tool includes features like an intercepting proxy, active scanner, passive scanner, brute force scanner, spider, fuzzer, port scanner, dynamic SSL certificates, API, and BeanShell integration. For more information about this tool, check out their website.

27.  Webshag



Webshag is a multi-threaded, multi-platform web server auditing tool coded in Python. It is used for crawling a URL, port scanning, file fuzzing, and auditing your website. You can download this security auditing tool here.

28. OWASP DirBuster



DirBuster is another OWASP project: a multi-threaded Java application designed to brute force directory and file names on web/application servers, using a black box approach to find hidden content. You can download this tool here.

29. Grendel-Scan

Grendel-Scan is a free and open source web application pentesting tool with an automatic scanning feature that detects common web application vulnerabilities, plus features geared toward aiding manual penetration tests. Get this tool now.

30. Mopest



Mopest is a Perl local PHP vulnerability scanner with exploits for phpBB 2.0.20 Disable Administrator, phpBB 2.0.19 Denial of Service (infinite topic), phpBB 2.0.15 Database Authentication Details, Invision Power Board 2.0.2 Multiple Users DoS, Invision Power Board 2.1.5 Code Execution, MyBB 1.0 RC4 SQL Injection, MyBB 1.1.3 Create An Admin, MyBB SQL Injection, and WordPress 1.5.11 SQL Injection. It also has tools like a Fake Mailer, an Email Bomber, and an MD5 Cracker. You can check out this project here.

31. SecuBat

SecuBat is another web vulnerability scanner which automatically analyzes web sites with the aim of finding exploitable SQL injection and XSS vulnerabilities. You can check this tool here.

32. Arachni





Arachni is an open source web application security scanner framework coded in Ruby that helps website administrators and penetration testers evaluate the security of a web application. Arachni asks you for the URL of the target, automatically performs a simple scan, and presents you with its findings, which could include a very risky flaw or loophole. You can download this tool here.

33. WebSlayer


WebSlayer is another OWASP project that slays your web application by brute forcing GET and POST parameters, checking directories, brute forcing login forms, fuzzing, brute forcing sessions, NTLM brute forcing, and many more. For more information on this project, check this site.

34. Burp Suite





Burp Suite is a penetration testing tool and integrated platform for website security. Burp Suite has cool features like an intercepting proxy, an application spider for crawling, detection of numerous web application vulnerabilities, a repeater tool, the ability to write your own plugins, and many more. The free edition is available for download here.

35. ProxMon


ProxMon is not a Digimon but a Python based open source framework that automates web application tests. Its key features include:

- automatic value tracing of set cookies, sent cookies, query strings and post parameters across sites,
- proxy agnostic
- included library of vulnerability checks
- active testing mode
- cross platform
- easy to program extensible python framework

You can download this tool here.

Original post at:
http://blog.rootcon.org/2012/03/introducing-35-pentesting-tools-used.html?m=1