Recently, Roger Grimes opined that the firewall was dead. Several folks chimed in to tell him that he was wrong, and much debate has ensued, with both sides citing reports about the nature of recent breaches, how attacks used to work, and how modern attacks work.
I think much of the misunderstanding has to do with the definition of the firewall. Roger uses an implementation-specific definition: the firewall is something that opens and closes ports in an attempt to reduce the attack surface of the network. We all know that opening and closing ports doesn’t limit the attack surface of the network – and hasn’t for years. So, if you stick to the implementation-specific definition (a stateful inspection, port-based firewall), I agree – the firewall fails to provide much help in securing the network and, for most intents and purposes, is dead.
If, however, you look at an architectural definition of the firewall – an infrastructure component that:
- Defines the boundary between trust zones
- Sees all of the traffic
- Has a positive security model (i.e., default deny)
- And, most importantly, meaningfully reduces the attack surface of the network
– then the firewall is alive and well, and more necessary than ever.
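The architectural definition above can be sketched in a few lines of code. This is a hypothetical illustration, not any vendor's API: the `Flow` type, zone names, and rule set are all invented for the example. The key property it demonstrates is the positive security model – only explicitly allowed (source zone, destination zone, application) tuples pass, and everything else falls through to default deny.

```python
# Minimal sketch of a zone-boundary policy check with a positive
# security model. All names here (Flow, zone labels, ALLOW_RULES)
# are illustrative assumptions, not a real product's API.

from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    src_zone: str   # e.g. "internet", "dmz", "datacenter"
    dst_zone: str
    app: str        # traffic classified by application, not by port

# Only explicitly permitted (zone pair, application) tuples are allowed.
ALLOW_RULES = {
    ("internet", "dmz", "web-browsing"),
    ("dmz", "datacenter", "mysql"),
}

def permit(flow: Flow) -> bool:
    """Anything not on the allow list is denied by default."""
    return (flow.src_zone, flow.dst_zone, flow.app) in ALLOW_RULES
```

With this sketch, `permit(Flow("internet", "dmz", "web-browsing"))` is allowed, while `permit(Flow("internet", "datacenter", "ssh"))` is denied – not because SSH was blacklisted, but because it was never explicitly allowed.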
Jody Brazil at FireMon correctly points out a couple of relevant things (disclosure – FireMon is a Palo Alto Networks partner):
- Next-generation firewalls are relevant
- Firewall management is a problem
On the first point, Grimes dismisses “deep packet inspection” out of hand, again revealing his implementation-specific understanding of the firewall. If a firewall allows all port 80 traffic and then scans for a bunch of attacks or undesirable applications with an after-firewall, IPS-style engine, then I agree – the firewall aspect is useless. But identifying the application PRIOR to allowing the traffic – in effect, classifying the traffic not by port but by application, and then making an access decision – is fundamentally different. Not only does it reduce the attack surface of the network (allow traffic from/to these twenty applications into my data center; deny all else), it also goes a long way toward addressing the management issues that Grimes and Brazil both highlight – namely, that too many arcane port-based firewall rules exist, and far too many are left alone because nobody understands what they do. The result is poor security, major management headaches across thousands of port-based firewall rules, and countless policies spread across the ever-increasing number of other network security devices that organizations put in place to compensate for the port-based firewall’s irrelevance.
When a firewall rule reads “allow sales to use GoToMeeting,” or “allow IT to use BitTorrent,” there’s no confusion about the intent of the rule due to obscure port assignments. This enables easy understanding, reduced rulesets, and the important “all else deny” statement at the end of the rulebase (which Grimes laments the loss of). It also makes it far easier to stop the kinds of attacks that Grimes and Brazil talk about – the first rule of defense is to control the avenues of attack (which, in today’s world, are applications): not by blocking unwanted applications, but by allowing only the applications the business values, and then scanning those for threats of all sorts. Wade Williamson does a good job of explaining this in detail.
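A rulebase like the one described – readable application-based rules with an explicit “all else deny” at the end – might look like the following sketch. The group names, application labels, and rule format are assumptions made up for illustration; real products express this differently.

```python
# Hypothetical application-based rulebase, evaluated top-down.
# First matching rule wins; the final catch-all is the explicit
# "all else deny" at the end of the rulebase.

RULEBASE = [
    {"group": "sales", "app": "gotomeeting", "action": "allow"},
    {"group": "it",    "app": "bittorrent",  "action": "allow"},
    {"group": "any",   "app": "any",         "action": "deny"},   # all else deny
]

def evaluate(group: str, app: str) -> str:
    """Return the action of the first rule matching this user group and app."""
    for rule in RULEBASE:
        if rule["group"] in (group, "any") and rule["app"] in (app, "any"):
            return rule["action"]
    return "deny"  # implicit default deny, even with an empty rulebase
```

Here `evaluate("sales", "gotomeeting")` returns `"allow"`, while `evaluate("sales", "bittorrent")` falls through to the catch-all and returns `"deny"` – the rule’s intent is visible at a glance, with no port numbers to decode.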
Doing this well, in the firewall, enables organizations to rationalize network security infrastructure investments, but that’s a different topic.