Beyond Ports and Protocols

We often talk about how destination port is not an accurate way to classify and control network traffic. At this point, hopefully that is obvious. Everyone knows that just about anything can get out of an enterprise network via port 80 or 443. Lately I have had several discussions with customers curious about protocol validation and ensuring that only "valid" traffic is allowed. "Valid" has become a mostly useless concept. How do you control traffic on 80 and 443? You put in a proxy, right? Hmm. That is useful if you want to make sure non-HTTP applications do not take advantage of a firewall policy that allows 80 and 443 out of the network. However, it is clearly not that simple – and it is not just HTTP that is the issue.

There are dozens of applications out there that allow a user to tunnel just about anything over "valid" HTTP or SSL. The protocol validation available in many products does nothing for this. Lately I have been studying other tunneling applications – applications that correctly use a protocol and take advantage of the fact that most networks assume that if a flow follows the standard for the protocol, it should be allowed. What are the most likely protocols to be allowed out of the network, even when HTTP may not be? DNS and SMTP. Don't be confused – I don't mean that all enterprises allow a random PC on the network to send DNS or SMTP traffic directly to the Internet, although some do. I do mean that just about any PC in any enterprise can send an email and look up a hostname, albeit through corporate DNS and SMTP servers. Enter a few creative tunneling applications: dns2tcp and HoSProxy.

dns2tcp is a clever system that essentially allows you to tunnel any TCP traffic inside valid DNS lookups through any DNS server. It works by taking your application payload and breaking it into chunks small enough to fit into DNS requests as hostnames. The trick is that those "hostnames" eventually get resolved by the authoritative DNS server for the specified domain – which, conveniently, happens to be the server side of the dns2tcp system. It receives requests to resolve "hostnames" in its domain or sub-domain and sends back "responses" as requested. The DNS requests and responses are actually the tunneled TCP payload. Pretty slick if you find yourself stuck inside someone's "restrictive" network. Next time you are traveling on an airport or hotel wireless network, see if you can do a DNS lookup for google.com before you go through the captive-portal sign-in that is usually required before browsing the web. If you can, you can also tunnel anything you want through that network.
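dns2tcp's actual wire format is its own, but the core encoding trick – smuggling payload bytes into query names while staying within DNS's length limits – can be sketched in a few lines. The tunnel domain, function names, and chunk handling below are hypothetical illustrations, not dns2tcp's real implementation:

```python
import base64

# Hypothetical tunnel domain; a dns2tcp-style server would be authoritative for it.
TUNNEL_DOMAIN = "t.example.com"
MAX_LABEL = 63   # DNS limit per label (RFC 1035)
MAX_NAME = 253   # practical limit for a full domain name

def encode_chunk(payload: bytes) -> str:
    """Client side: encode a payload chunk as a DNS-safe query name."""
    # Base32 survives case-insensitive DNS handling; strip '=' padding,
    # which is not a legal hostname character.
    data = base64.b32encode(payload).decode("ascii").rstrip("=").lower()
    # Split the encoded data into labels of at most 63 characters each.
    labels = [data[i:i + MAX_LABEL] for i in range(0, len(data), MAX_LABEL)]
    name = ".".join(labels + [TUNNEL_DOMAIN])
    if len(name) > MAX_NAME:
        raise ValueError("chunk too large for a single query name")
    return name

def decode_chunk(name: str) -> bytes:
    """Server side: recover the payload from the query name it received."""
    data = "".join(name[: -len(TUNNEL_DOMAIN) - 1].split(".")).upper()
    # Restore base32 padding to a multiple of 8 characters.
    data += "=" * (-len(data) % 8)
    return base64.b32decode(data)
```

The server's "responses" would carry the return traffic the same way, typically packed into TXT or similar record data; the point is simply that every query and answer is perfectly well-formed DNS, so protocol validation passes it without complaint.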

HoSProxy is a similar concept but uses SMTP to transmit HTTP requests and responses. If you can send an email, you can browse the web. I am not sure you will be too happy with the latency of the solution, but in a pinch, it works.
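The same idea in SMTP form: wrap the request in an ordinary-looking email, let the corporate mail server relay it, and have a gateway on the far side unwrap it, fetch the page, and mail the response back. This is only a sketch of the concept – the addresses, subject line, and encoding are hypothetical, not HoSProxy's actual protocol:

```python
import base64
from email import message_from_string
from email.message import EmailMessage

# Hypothetical addresses; a HoSProxy-style gateway would poll this mailbox.
GATEWAY = "proxy@tunnel.example.com"
CLIENT = "user@corp.example.com"

def wrap_http_request(url: str) -> str:
    """Client side: package a URL to fetch as an innocuous-looking email."""
    msg = EmailMessage()
    msg["From"] = CLIENT
    msg["To"] = GATEWAY
    msg["Subject"] = "status report"  # nothing here looks out of place
    # The body is just base64 text; the message is fully RFC-compliant mail.
    msg.set_content(base64.b64encode(url.encode()).decode("ascii"))
    return msg.as_string()

def unwrap_http_request(raw: str) -> str:
    """Gateway side: recover the URL; the reply travels back the same way."""
    msg = message_from_string(raw)
    return base64.b64decode(msg.get_payload().strip()).decode()
```

Every hop sees a perfectly valid email, which is exactly why RFC-conformance checks on the mail path tell you nothing about what the message is really carrying.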

Now, I am not encouraging everyone to run out and set up SMTP or DNS servers at home so they can start tunneling everything. Rather, I wanted to highlight that validating traffic to ensure it matches an RFC is only a tiny step more useful than using destination port to classify and control traffic. It's not about ports and protocols; it's about the applications running on top of them – and there are plenty of creative people writing creative applications on top of these normally boring protocols. To steal a desktop publishing acronym: WYSIWYG (what you see is what you get). If the firewall or IPS doesn't see it, it doesn't get it – and you don't get to control it.