I used to love visiting data centers: the sticky floor mats that ensured you weren’t tracking in dust, the white noise of a thousand fans, the climate control that kept everything at the perfect temperature no matter the time of year. I always marveled at the opportunity to walk through corridors of cages where I could see the machines that make the internet work. Maybe your memories of data centers are not so sweet, but it’s likely many of us will never be in a data center again. Reminiscences aside, I thought this would be a good place to begin a discussion of container security, because we’re in a multi-compute world now, and it’s important to have a working knowledge of what that means.
My first interaction with a container environment came as part of an offensive exercise where I’d exploited a vulnerable application to gain shell access and ended up inside a container. It felt a lot like being in a chroot jail (a *nix mechanism that confines a process to a subtree of the filesystem so it can execute in isolation). While there’s a lot more under the hood from a virtualization perspective with containers, I’ve found this to be a useful reference for helping people understand container environments. Using this analogy, I often describe container images as predefined environments to be jailed: stored in a container registry, then stood up and torn down on demand by the container runtime.
That brings us to an important discussion about Docker versus Kubernetes. Docker can support everything I’ve introduced here: building container images, providing the container registry and serving as the container runtime. Kubernetes provides a distributed container runtime across multiple devices for higher availability, scalability and a better fit with the cloud paradigm; you don’t care where something is executing, as long as it executes. When using containers, it’s common to leverage both of these technologies — Docker in your continuous integration/continuous deployment (CI/CD) pipeline to build and store container images, and a Kubernetes runtime for execution.
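As a minimal sketch of that split (the image name, registry and labels below are illustrative placeholders, not from any real pipeline): Docker builds and pushes the image in CI, and a Kubernetes Deployment manifest like this one tells the cluster to pull that image and keep replicas running wherever capacity exists.

```yaml
# Hypothetical Deployment manifest. The image referenced here would have been
# built and pushed by Docker in the CI/CD pipeline, e.g.:
#   docker build -t registry.example.com/team/example-app:1.0 .
#   docker push registry.example.com/team/example-app:1.0
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app            # placeholder name
spec:
  replicas: 3                  # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: registry.example.com/team/example-app:1.0  # image from CI
```

The point of the division of labor: the manifest never says *where* the app runs — the Kubernetes scheduler decides that, which is exactly the “you don’t care where something is executing” property described above.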
The uniqueness of container environments creates a number of interesting consequences. By applying the principles of Zero Trust, we can better understand how to approach securing these environments.
- Verify all users, devices and applications: You will often hear that “containers are code.” This is because they are typically defined by your developers and contain a combination of internally developed and third-party code. While you will be responsible for vulnerabilities and misconfigurations in these container images, you can scan them offline, in the container registry, using software composition analysis (SCA) and other purpose-built container security tooling.
- Apply context-based access: Protect against both lateral movement and data exfiltration. Ensure the only connections happening in an environment are those authorized based on context. The ephemeral nature of container environments means asset management and administering access control lists become a challenge. Reduce manual labor and improve your security posture by enabling dynamically authorized connections.
- Secure all content: The modular nature of cloud apps gives cloud workloads a unique ability to verify that content is legitimate and safe. Transactional data that traditionally would have been delivered through some manner of interprocess communication (IPC) or a function return is now delivered via sockets, which allow easier inspection. This provides a level of visibility beyond what security teams have historically had access to.
- Continuously monitor and analyze all security infrastructure: Containers are designed to be lightweight workloads, but they are still an exploitable part of your environment and require runtime monitoring. As opposed to the traditional EDR-style agents leveraged in your end-user computing (EUC) and server environments, virtualization allows you to hook in through the container runtime instead of installing an agent in the container itself. This provides performance gains and reduces the risk of negative interaction between the container code and the container security monitoring.
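To make the context-based access point concrete, here is a minimal sketch of a dynamically authorized connection in Kubernetes — a NetworkPolicy that lets only labeled frontend pods reach a database, no matter which nodes the pods land on. All names, labels and the port are illustrative assumptions, not from the original.

```yaml
# Hypothetical NetworkPolicy: authorization follows pod labels (context),
# not IP addresses, so it tracks pods as they are created and destroyed.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-frontend      # illustrative name
spec:
  podSelector:
    matchLabels:
      app: database            # policy applies to database pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend    # only frontend-labeled pods may connect
      ports:
        - protocol: TCP
          port: 5432           # e.g., PostgreSQL
```

Because the rule is expressed in labels rather than static ACL entries, it requires no manual updates as ephemeral pods churn — which is the “reduce manual labor by enabling dynamically authorized connections” idea from the bullet above.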
If you’re early in your cloud adoption journey, you need to ensure you’re securing both your infrastructure and your applications. Our VM-Series virtual next-generation firewalls (NGFWs) or our CN-Series containerized NGFW, which has been purpose-built for Kubernetes environments, can ensure the content and method of access are being enforced. Prisma Cloud can provide the foundational characteristics of CI/CD security, including registry scanning, configuration and posture management, full-stack runtime protection and incident response.
As your security organization matures and wants to collect additional telemetry for your security operations team, Cortex XDR for Cloud can play a critical role in monitoring container execution from the container runtime environment; it stitches both endpoint and non-endpoint events together to provide a holistic view of your entire environment. Finally, because asset management might just be the hardest problem in security, Cortex Xpanse can play a critical role in understanding how your attack surface is changing as a result of your organization's cloud adoption.
If you’re interested in learning more, contact your Palo Alto Networks sales representative and request a call with our subject matter experts to help.