Data Center Summit - Learnings from the Road

Mar 28, 2012
5 minutes

I just got back from our London Data Center Summit. We’ve held multiple data center events in the U.S. and kicked off the international leg in London last week. At these summits, we address the evolution of the data center, data center networking changes and challenges, and finally the implications from a security perspective. I thought it would be interesting to share some of the learnings from the road. What are the top-of-mind issues for our data center audience?


Insider Threats

Many customers expressed concern not only about modern-day attacks, but also about insider threats. This is interesting. We spend a lot of time worrying about the uber hacker in some far-away nation attacking our data centers, when in reality threats are just as likely to come from the people sitting next to you at the office. Compliance folks think about this regularly.

We know this is happening; it’s all over the news. We have seen many examples of insider threats from disgruntled employees, or from employees who gained access to privileged, confidential information they should not have been privy to. Bradley Manning, Phillip Cummings and Orazio Lembo all come to mind.

One interesting question at the Dallas event was how to control a “compromised” internal user (such as a home VPN user, a theoretically trusted user) and prevent them from using legitimate access (like RDP) to get to the data center.

The short answer is to treat insiders just as you would external users like partners and contractors: evaluate them appropriately from a security and risk-analysis view. A home VPN user should not have complete access to the data center, only what policy allows. The firewall should be integrated with the remote-access VPN or placed behind it.

Granular access control can be applied at the firewall level (restrictive access-control policies for a compromised user) or at the user-repository level (creating a new high-risk group in Active Directory). Data filtering options like those on Palo Alto Networks firewalls can ensure data is not flowing out of a segment of the data center. In addition, it is best practice to allow management applications using RDP, Telnet or SSH only for a select number of users, like IT personnel. And finally, the best thing you can do to deal with insider threats is constant logging, monitoring and analysis for early discovery of suspicious insider activity.
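To make the idea concrete, here is a minimal sketch of group-based access control along the lines described above: management applications are allowed only for an IT group, and a hypothetical quarantined high-risk group is denied everything. The group names, rule structure and helper are all illustrative assumptions, not any vendor's actual policy configuration.

```python
# Hypothetical policy sketch: restrict management apps (RDP, Telnet, SSH)
# to IT personnel and quarantine a high-risk user group.
# All names below are illustrative, not a real product's configuration.

MGMT_APPS = {"rdp", "telnet", "ssh"}

# Ordered rules: first matching group wins; no match means default deny.
RULES = [
    {"group": "high-risk", "apps": set(),      "action": "deny"},   # quarantined AD group
    {"group": "it-staff",  "apps": MGMT_APPS,  "action": "allow"},  # only IT gets mgmt apps
    {"group": "employees", "apps": {"https"},  "action": "allow"},  # least-privilege default
]

def evaluate(user_group: str, app: str) -> str:
    """Return 'allow' or 'deny' for a user-group/application pair."""
    for rule in RULES:
        if rule["group"] == user_group:
            if rule["action"] == "deny":
                return "deny"
            return "allow" if app in rule["apps"] else "deny"
    return "deny"  # unknown group: default deny

print(evaluate("it-staff", "rdp"))     # allow
print(evaluate("employees", "rdp"))    # deny
print(evaluate("high-risk", "https"))  # deny
```

The same pattern applies whether the decision point is a firewall rulebase or a directory-group check; the point is that a "compromised" insider falls through to deny rather than inheriting broad access.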

Scalability and Performance

As expected, a number of questions arose around network security and how it would impact scalability and performance in the data center. For example, in a data center with thousands of virtual machines and terabytes or petabytes of data, how do you address latency challenges?

Yes, the latency of a firewall is going to be non-zero, but not having security controls is no longer an option. Every device introduced in the data center, such as a firewall, router or switch, adds latency, but the impact is minimal. We specifically designed the Palo Alto Networks “single pass” software architecture to process each function only once, in one pass, so latency is optimized. The multi-core hardware architecture was purpose-built to optimize performance, with dedicated hardware acceleration for computation-intensive functions like decryption. The consideration of security versus performance ultimately becomes a policy decision for the organization, instead of a tradeoff.

In addition, if servers are grouped according to their risk and trust levels, inspection can be focused on traffic between different trust levels, optimizing the latency and performance of the firewall.
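The decision above reduces to a simple comparison. Here is a minimal sketch, with hypothetical zone names and trust values, of inspecting only traffic that crosses trust levels:

```python
# Illustrative sketch (not vendor code): group servers into zones with
# assigned trust levels, and apply deep inspection only to traffic that
# crosses trust levels. Zone names and values are assumptions.

ZONE_TRUST = {"dmz": 1, "app": 2, "db": 3}

def needs_inspection(src_zone: str, dst_zone: str) -> bool:
    """Inspect when a flow crosses zones with different trust levels."""
    return ZONE_TRUST[src_zone] != ZONE_TRUST[dst_zone]

print(needs_inspection("dmz", "db"))   # True: crosses trust levels
print(needs_inspection("app", "app"))  # False: stays within one trust level
```

Traffic between servers at the same trust level bypasses deep inspection, which is where the latency savings come from.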

Firewall Deployments in an Ethernet Fabric Data Center

We’re happy to see a lot of interest in Ethernet fabrics. With server virtualization, traffic patterns in the access layer are shifting toward an east-west model instead of north-south through the aggregation and core layers. We fielded a number of questions around firewall deployment modes in an Ethernet fabric environment. Should the firewall run at layer 1, layer 2 or layer 3?

In a defense-in-depth approach, you can deploy multiple firewalls in the data center and choose the mode based on the security need. A high-performance firewall at layer 1 (virtual wire) is best positioned at the entry to the data center to filter against threats. Within the data center itself, i.e., for server segmentation, layer 3 mode is ideal for a segmentation firewall inspecting traffic in and out of a “virtual” or physical segment. A layer 2 deployment should be used if you need to filter traffic between servers in the same VLAN. This guidance holds in an Ethernet fabric environment as well.
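The mode choices above can be summarized as a simple lookup. This is a rough decision sketch, with the need categories being illustrative simplifications of the paragraph above rather than formal terminology:

```python
# Rough decision sketch mapping the security need described above to a
# firewall deployment mode. The need labels are illustrative assumptions.

MODE_BY_NEED = {
    "data-center-entry":    "layer 1 (virtual wire)",  # high-throughput threat filtering
    "segment-boundary":     "layer 3",                 # traffic in/out of a segment
    "intra-vlan-filtering": "layer 2",                 # servers in the same VLAN
}

def choose_mode(need: str) -> str:
    """Return the suggested deployment mode for a given security need."""
    return MODE_BY_NEED.get(need, "unknown need")

print(choose_mode("segment-boundary"))  # layer 3
```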

A move toward flat networks like Ethernet fabrics does not mean your security options become limited. Flat networks and virtualized networks should still be segmented for security reasons. John Kindervag of Forrester Research, in his Zero Trust Model, states emphatically that segmentation is key to security and compliance. That means segmentation via next-generation firewalls, NOT VLANs and switch ACLs.

I hope this was useful. We’ll continue to share learnings from the road in the next data center blog. For those who won't be able to attend our Data Center events in person, we have webinars and archives of webinars available that you can view.

