SysAdmin Corner: Introduction to Pentesting and the Pwn Plug – Part 1

by Brett Thomas on December 20, 2012 in Editorials, Security

Many sysadmins understand how to set up and maintain a network, but the concept of auditing is an entirely different world. In the first part of a series on auditing and penetration testing (pentesting), we introduce the concepts and tools for putting all that security to the test. We’ll also talk about our pentest platform of choice, the Pwn Plug.

Introduction; Understanding Vulnerabilities

When we build networks – particularly small ones – it can be easy to “set it and forget it” by plugging everything into the router and clicking “Deny All” incoming and “Allow All” outgoing.  While this can indeed be an effective start to small business security (many consumer routers that are not wireless-enabled can serve as an effective front line of protection this way), it’s only one point among many that need to be considered.

Effectively protecting a network, particularly on an ongoing basis, requires three things:  proper setup, proper monitoring, and proper auditing.  Setup involves things like user accounts, firewall rules, domain policy and software upgrades/patches.  Monitoring involves maintaining and examining logs, analyzing traffic with tools like Wireshark, and installing and maintaining an Intrusion Detection System (IDS) such as Snort.  Auditing, which is the aspect we will begin to look at today, involves putting the network security through its paces – learning what vulnerabilities exist and how they can be used by an unauthorized party to gain access, maintain access, and perform reconnaissance on your deeper network.
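To give a small taste of the auditing mindset before we dig into real tools, one of the simplest audit primitives there is checks whether a TCP port is accepting connections.  The sketch below is plain Python using only the standard library; the function name `check_port` and the throwaway listener are my own illustration, not part of any tool we’ll cover:

```python
import socket

def check_port(host, port, timeout=0.5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo: bind a throwaway listener on a free port, then confirm
# the check sees it as open.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))        # port 0: the OS picks a free port
listener.listen(1)
open_port = listener.getsockname()[1]

print(check_port("127.0.0.1", open_port))  # True: the port is listening
listener.close()
```

Real scanners like Nmap do far more (service fingerprinting, timing tricks, UDP, stealth scans), but at bottom they are asking this same question thousands of times over.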

In other words, auditing is about hacking your own network in a controlled and documented manner.  We call it penetration testing, or “Pentesting” for short.  In the IT world, there are numerous tools designed to help the security-minded test their networks – from extremely expensive commercial products like those from CORE Security to the free, open-source BackTrack Linux distribution.  Some offer one-click tests; others rely on you to understand the intricacies of what you are working with – which probably means you already vetted your setup, software and services carefully to begin with and are just making sure all is right with the world.

Pentesting is not a process that’s learned in a day or a week, or even over a series of tutorials such as we’ll write here, but there are basics that can help you get familiar with the process.  The purpose of this series will be to use what I feel is an excellent platform of pentesting tools contained in a handy little box – the Pwn Plug by Pwnie Express.  It’s well-balanced between price, performance and ease-of-use, and is something that I feel should be in every true sysadmin’s toolbox if setting up or protecting the network is in his or her job description.  Though the tools available on it are open source (and thus available without cost), nowhere else can you get so many of them in one small, mobile package.

This particular article is aimed at the novice in security (one who understands basic networking principles and may have some minor programming experience).  It will introduce you to the terms, concepts and goals of both a successful pentest and a well-thought-out attack (which are really two sides of the same game of chess), and to the Pwn Plug and its tool suite.  Part two and beyond of this series will get into using the individual tools to examine your network for vulnerabilities and learn what can and can’t be monitored through open-source tools.

Vulnerabilities – What they are and how we get them

Before we go further into the tools of exploitation, it’s important to understand a bit about vulnerabilities, which come in two flavors:  holes and bugs.  Holes are an easy concept, and are usually the result of us as sysadmins configuring our networks improperly.  A hole can be anything from leaving ports open to not enforcing good domain policy to misconfigured services (I once audited a network where the external-facing DNS server spit out an entire map of the internal network due to an error in the BIND 9 config).

Bugs, on the other hand, are somewhat out of a sysadmin’s control.  Bugs are flaws in a running program or service that the programmer did not intend to be there.  They are a natural byproduct of the complex programming required by modern systems, and largely arise from the fact that programming is the exact opposite concept (and school of thought) to true hacking.  Programmers, as a stereotype, are methodical people creating things from the ground up.  Contrary to popular belief (especially the jokes the technically intelligent make about companies like Microsoft and Cisco), these people take their roles very seriously and work as quickly as they can to patch flaws in their products once discovered.  The problem is, when you are building something up, you often don’t have the ability to think of how the not-yet-existing final product could or would be taken apart.

Most bugs come from new feature sets introduced on a tight schedule that demands a product be released as soon as it’s functional – which is a different concept altogether from secure.  After all, end users usually compare the feature list (not the security patches) when they are making buying decisions.  The developers then have to go back and fix what’s found as it’s disclosed, having likely already moved on to another project.  Most bugs are byproducts that simply can’t be detected in the short time before a feature goes live, or in a small testing environment, as different usage scenarios influence vulnerabilities tremendously.

Bugs come in all shapes and sizes – from critical vulnerabilities like SQL injection on a webserver to “harmless” ones like Denial-of-Service (DoS) weaknesses.  They can cause anything from a crashed process to the theft of information.  The more dangerous a bug is, the less you will hear about it (even if it’s being used on you!) until it’s patched – the best exploits leave the process they hijack running normally while inserting themselves into other areas of the system.

The most dangerous kinds of bugs arise inadvertently from incorrectly handling input.  The most common of these is the “Buffer Overflow”, which takes a program that’s waiting for input and crams a response that’s too long into it, causing it to execute part of that response as if it were the program’s own code.  Though operating systems across the board have taken steps to mitigate this (the biggest being ASLR and DEP), these have started a cat-and-mouse game that provides security for only a short time.  Both technologies have seen multiple revisions in each OS, and both are routinely broken or bypassed shortly thereafter.
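The mechanics are easiest to see in a toy model.  The Python sketch below is in no way a real exploit – real overflows operate on raw memory and machine code – it merely simulates a fixed-size input buffer sitting directly next to a saved return address, to show how one unchecked copy lets attacker input spill into control data (every name here is illustrative):

```python
# Toy "stack frame": an 8-byte input buffer followed immediately by
# the saved return address. In a real process these are adjacent
# bytes on the stack; here we model them in one bytearray.
frame = bytearray(b"\x00" * 8 + b"RETADDR!")

def read_input(data: bytes):
    # The bug: copies len(data) bytes with no bounds check against
    # the 8-byte buffer.
    frame[0:len(data)] = data

read_input(b"A" * 8)                    # fits in the buffer
print(bytes(frame[8:]))                 # b'RETADDR!' -- return address intact

read_input(b"A" * 8 + b"HIJACKED")      # 8 bytes too long
print(bytes(frame[8:]))                 # b'HIJACKED' -- control data overwritten
```

In a live exploit, the bytes that land where the return address lived would point execution at code of the attacker’s choosing – which is precisely what ASLR (randomizing where things live) and DEP (marking data non-executable) try to frustrate.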

Web technologies have made the list of vulnerabilities that much larger,  as we build software that runs within its own Java VM or interfaces that use a scripting language like PHP, Perl or Python.  For instance, if a Tomcat Web service ties into your database and the attacker wants the database, s/he can get that access through multiple points – breaking into your server’s OS remotely (often difficult), breaking into the network from another entry point and then connecting to the DB port directly (much easier), or breaking into the Web service sufficiently to make it display the results of rogue queries (entirely dependent on the software in question).
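To make the “rogue queries” scenario concrete, here is a minimal, hypothetical sketch – Python with an in-memory SQLite database standing in for the real DB, and made-up table and function names – contrasting a query built by string concatenation with a parameterized one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

def login_vulnerable(name):
    # String formatting lets the input rewrite the query itself.
    return conn.execute(
        "SELECT secret FROM users WHERE name = '%s'" % name).fetchall()

def login_safe(name):
    # Parameterized query: the input is always treated as data, never SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "nobody' OR '1'='1"
print(login_vulnerable(payload))   # [('hunter2',)] -- dumps every secret
print(login_safe(payload))         # [] -- no such user, nothing leaks
```

The vulnerable version turns the attacker’s input into `WHERE name = 'nobody' OR '1'='1'`, a condition that matches every row – no OS break-in or direct DB connection required, just a Web form doing the attacker’s querying for them.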

