What Is Orchestration Security?
Orchestration security refers to the measures taken to protect container orchestration systems, such as Kubernetes, from potential threats and vulnerabilities. As container orchestration automates the deployment, management, and scaling of containerized applications, ensuring the security of these systems is critical to maintain the integrity, confidentiality, and availability of applications and data. Orchestration security encompasses access control, network segmentation, secure communication, and monitoring.
Orchestration Security Explained
Addressing orchestration security begins with the first overarching layer of a Kubernetes-based environment: the build layer, the set of tools developers use to build code that will run in a Kubernetes environment. Although these tools aren’t directly part of Kubernetes, ensuring the security of the code running on a Kubernetes cluster is a prerequisite to safeguarding all aspects of the platform.
Securing the Build Layer
Automated Scanning of IaC and YAML Files
Most Kubernetes application build and deployment pipelines rely on automated, policy-based configuration management in the form of infrastructure as code (IaC) and YAML files. These approaches let Kubernetes administrators write code to define how a cluster (and the infrastructure that hosts it) should be configured and then apply that code automatically.
In addition to streamlining the process of provisioning a Kubernetes environment, configuration management tools offer an opportunity to scan configuration files for security problems before they’re applied. Tools like Prisma Cloud can do this automatically by checking your IaC and YAML files against libraries of known misconfigurations and insecure settings. Some solutions integrate directly with your source code management system, such as GitHub or GitLab, making it easy to build a fully automated process for securing Kubernetes configuration files that works with existing build pipelines.
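As a rough sketch of what such a scan catches, the Deployment fragment below contains misconfigurations that scanners like Checkov or Prisma Cloud typically flag before the manifest ever reaches a cluster; the workload name and image are hypothetical.

```yaml
# Hypothetical Deployment fragment with issues IaC scanners commonly flag:
# a mutable "latest" image tag, a privileged container, and no resource limits.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-api                 # example name, not from the article
spec:
  replicas: 2
  selector:
    matchLabels:
      app: payments-api
  template:
    metadata:
      labels:
        app: payments-api
    spec:
      containers:
        - name: payments-api
          image: payments-api:latest   # flagged: mutable "latest" tag
          securityContext:
            privileged: true           # flagged: privileged container
          # flagged: no resources.requests / resources.limits defined
```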
Security Policies in Kubernetes
Orchestration environments involve numerous security settings and configurations at the application, network, and infrastructure levels. These settings play a significant role in determining the security posture of these environments. Hunting for vulnerabilities and misconfigurations to exploit, attackers look for hardening flaws, such as:
- Systems running outdated versions
- Systems with overly permissive network access controls
- Self-hosted systems with administrative permissions on the underlying OS
- Poor credential hygiene
Policies that drive security control of container applications in Kubernetes need to account for various areas of risk prevention — access control, network traffic isolation, runtime security, image validation, monitoring, etc.
Orchestration security requires verifying that these risk prevention measures are properly enforced through policy checks. Solutions like Checkov, KubeLinter, Falco, Prisma Cloud, and Terrascan can run automated compliance checks against your configurations. Examples of checks you can perform include:
- Avoid Running Privileged Containers
- CAP_SYS_ADMIN Capability Not Restricted
- CPU Limits Not Configured
- Container Configured to Allow Privilege Escalation
- Container Configured to Run as Root User
- Container Configured to Use the Default Set of Capabilities
- Container Configured with Custom SELinux Options
- Container Configured with Custom Hosts
- Container Could Run Using Outdated Docker Image
Performing these checks helps harden configurations, minimize potential attack surfaces, and maintain system stability.
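Many of these checks can also be enforced at admission time. As one hedged example, assuming a cluster recent enough to ship the built-in Pod Security admission controller, labeling a namespace with the restricted Pod Security Standard rejects pods that run privileged, allow privilege escalation, or run as root; the namespace name below is a placeholder.

```yaml
# Enforce the "restricted" Pod Security Standard on a namespace so that
# privileged pods, privilege escalation, and root users are rejected at
# admission time. The namespace name is illustrative.
apiVersion: v1
kind: Namespace
metadata:
  name: payments                                   # example namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    pod-security.kubernetes.io/warn: restricted    # also surface warnings on apply
```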
Orchestration Access Security
Container orchestration security involves implementing proper access control measures to mitigate risks from over-privileged accounts, network attacks, and unauthorized lateral movement. By using identity access management (IAM) and a least-privileged access model with allowlisted Docker and Kubernetes actions, security and infrastructure teams can limit users' commands based on their roles. The goal is to restrict direct access to Kubernetes nodes while providing the minimum necessary privileges to authorized users.
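A minimal sketch of that least-privileged, allowlist-based model is a namespaced RBAC Role and RoleBinding that grant one group read-only access to pods and their logs; the namespace and group names below are placeholders.

```yaml
# Namespaced Role that allowlists read-only pod access, bound to one group.
# Namespace and group names are placeholders.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: payments
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: payments
subjects:
  - kind: Group
    name: app-developers          # example group from the identity provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```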
Cloud infrastructure entitlement management (CIEM) solutions help secure cloud resources by establishing least-privileged access. IAM for cloud infrastructure controls which actions can be taken on specific resources. Defining roles and permissions according to the least privilege principle is challenging, especially in public and multicloud environments.
Challenges include managing numerous machine identities, understanding users' entitlements, and unique IAM policy models across cloud service providers (CSPs). Controlling access and assigning correct net-effective permissions are crucial for preventing unnecessary access to container environments. Prisma Cloud calculates users' effective permissions, detects overly permissive access, and suggests corrections for least privilege entitlements across AWS, Azure, and GCP.
By leveraging CIEM policies, administrators can detect public exposure, wildcards, and risky permissions. CIEM platforms help remove unnecessary access by detecting overly permissive policies and suggesting rightsizing for least privilege entitlements. With Resource Query Language (RQL), admins can query IAM entities and their relationships and effective permissions across cloud environments.
Examples:
- Which users have access to resource X?
- What accounts, services, and resources does the user name@domain.com have access to?
- Can any user outside of group C access resources in region D?
Various IAM solution providers like Okta, Auth0, PingID, Avatier, My1Login, and SecureAuth can help maintain authentication, authorization, and accounting (AAA) levels. These solutions support centralized access control and compliance with company NetSec requirements. Authentication mechanisms include single sign-on (SSO) and multifactor authentication (MFA).
Many CIEM solutions — like Prisma Cloud, for example — can integrate with identity providers (IdP) to ingest SSO data. Correlating this data with cloud identities, such as IAM users and machine identities, allows for viewing a user's effective permissions or detecting overly permissive roles. Additionally, organizations should protect pod-to-pod communications, prevent lateral movement, and secure frontend and backend services.
Address these areas of concern by using role-based access control (RBAC) and Kubernetes security contexts to define least-privileged access for pods and containers and to keep access to orchestration resources restricted.
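For the pod-to-pod and lateral-movement concerns above, a common starting point is a default-deny NetworkPolicy that blocks all traffic until explicit allow rules are added; this sketch assumes a placeholder namespace and a CNI plugin that enforces NetworkPolicy.

```yaml
# Default-deny NetworkPolicy: selects every pod in the namespace and, because
# no ingress/egress rules are listed, blocks all traffic until allow rules
# are added. Requires a CNI plugin that enforces NetworkPolicy.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: payments             # example namespace
spec:
  podSelector: {}                 # applies to all pods in the namespace
  policyTypes:
    - Ingress
    - Egress
```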
Kubernetes Security Contexts
When designing containers and pods, configuring the security context for pods, containers, and volumes is crucial. Security contexts define the security parameters of a workload, such as running containers as non-root users, limiting Linux capabilities, and enforcing read-only root filesystems, and are set in your deployment YAML files. To enforce these settings cluster-wide and prevent the creation of non-compliant workloads, use Pod Security admission (the built-in successor to PodSecurityPolicies, which were removed in Kubernetes 1.25) or an equivalent policy engine.
Key Security Context Practices
The security context is defined in the deployment YAML and dictates the security parameters assigned to the pods, containers, and volumes. Key settings include:
| Security Context Setting | Description |
| --- | --- |
| SecurityContext -> runAsNonRoot | Indicates that containers should run as a non-root user. |
| SecurityContext -> capabilities | Controls the Linux capabilities assigned to the container. |
| SecurityContext -> readOnlyRootFilesystem | Controls whether a container can write into the root filesystem. |
| PodSecurityContext -> runAsNonRoot | Prevents running any container in the pod as the root user. |

Table: Sample key parameters of security context settings
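A hedged sketch of how these settings come together in a manifest follows; the pod name, image, and user ID are placeholders.

```yaml
# Pod applying the settings from the table above at both pod and container level.
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app               # example name
spec:
  securityContext:                 # pod-level (PodSecurityContext)
    runAsNonRoot: true
    runAsUser: 10001
  containers:
    - name: app
      image: registry.example.com/app@sha256:<digest>   # pin by digest where possible
      securityContext:             # container-level
        runAsNonRoot: true
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
```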
If you’re running containers with elevated privileges (i.e., privileged), consider restricting who can exec into or attach to those pods. Older clusters offered the “DenyEscalatingExec” admission control, which denies exec and attach commands to pods that run with escalated privileges allowing host access, including pods that run as privileged, have access to the host IPC namespace, and have access to the host PID namespace. Note that this admission controller has since been deprecated and removed; on current clusters, enforce the equivalent restriction with RBAC limits on the pods/exec subresource or an admission policy.
Encrypt Your Secrets
Hard-coded credentials in SCM repositories pose a significant risk, as they can be accessed by anyone with read permissions. Insufficient credential hygiene, an OWASP Top 10 CI/CD security risk, involves inadequate management and protection of CI/CD pipeline credentials, making systems vulnerable to attacks.
To protect secrets like passwords, API keys, and database credentials, encrypt them at rest and in transit. Use secrets management tools like HashiCorp Vault, CyberArk, or AWS Secrets Manager to securely store and control access to secrets. Implement secure credential storage, credential rotation, least-privileged access, and audit logging to minimize risks.
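For encryption at rest on self-managed clusters, one option is an EncryptionConfiguration passed to the kube-apiserver through --encryption-provider-config; the sketch below encrypts Secret objects in etcd with AES-CBC, with the key material shown only as a placeholder (an external KMS provider or a dedicated secrets manager is generally preferable in production).

```yaml
# kube-apiserver EncryptionConfiguration: encrypt Secret objects in etcd with
# AES-CBC, falling back to identity (plaintext) only for reading older data.
# The key value is a placeholder, not a real key.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>   # placeholder
      - identity: {}
```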
Key Recommendations
- Continuously map and review credentials across engineering systems, ensuring least privilege and limited permissions.
- Avoid sharing credentials across multiple contexts, and prefer temporary credentials over static ones.
- Periodically rotate static credentials and detect stale ones.
- Configure credentials for specific conditions, such as source IPs or identities, to limit unauthorized usage.
- Detect secrets in code repositories with IDE plugins, automatic scanning, and periodic repository scans.
- Scope secrets in CI/CD systems to provide pipelines and steps with minimal required access (see the workflow sketch after this list).
- Prevent secrets from being printed in console outputs and remove them from artifacts like container images, binaries, and Helm charts.
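As one illustration of that scoping recommendation, assuming a GitHub Actions pipeline (the workflow, script, and secret names are placeholders), the sketch below grants the job minimal token permissions and exposes the deployment credential only to the single step that uses it:

```yaml
# GitHub Actions sketch: least-privilege GITHUB_TOKEN permissions, and the
# deployment secret is exposed only to the step that needs it.
name: deploy
on:
  push:
    branches: [main]
permissions:
  contents: read                  # minimal token permissions for the job
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Deploy
        env:
          DEPLOY_TOKEN: ${{ secrets.DEPLOY_TOKEN }}   # placeholder secret name
        run: ./scripts/deploy.sh   # hypothetical script; never echo the token
```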
At-a-Glance Container Orchestration Security Checklist
- Scan IaC and YAML files for security issues
- Implement security policies in Kubernetes
- Enforce orchestration access security
- Utilize a CIEM solution to ensure least-privileged access
- Configure Kubernetes security contexts
- Enforce pod security standards cluster-wide (via Pod Security admission, or PodSecurityPolicies on older clusters)
- Restrict exec and attach on privileged pods (e.g., via DenyEscalatingExec where still supported)
- Encrypt secrets at rest and in transit
- Adopt secrets management tools
- Establish secure credential storage and rotation
Container Orchestration FAQs
What is API server authorization in Kubernetes?
API server authorization in Kubernetes determines whether a specific authenticated entity (user or service) has the right to perform an action on a particular resource within the cluster. It occurs after successful authentication and involves evaluating the entity's permissions based on predefined policies.
Kubernetes supports several authorization modes, including role-based access control (RBAC), attribute-based access control (ABAC), and node authorization. These mechanisms ensure that entities can only perform actions they’re explicitly permitted to.
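For example, under RBAC mode a cluster-scoped, read-only grant might look like the following sketch; the group name is a placeholder.

```yaml
# RBAC example: cluster-wide read-only access to nodes and namespaces,
# granted to one group. The group name is a placeholder.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-viewer
rules:
  - apiGroups: [""]
    resources: ["nodes", "namespaces"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-viewer-binding
subjects:
  - kind: Group
    name: sre-team                # example group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-viewer
  apiGroup: rbac.authorization.k8s.io
```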
What is secure image management?
Secure image management involves measures for securely handling container images, such as scanning images for vulnerabilities, signing images to verify their integrity, managing image lifecycles, and enforcing policies for image storage and distribution. A critical component of container security, secure image management ensures that container images are free from known vulnerabilities and unauthorized changes and that they comply with security standards.
What is ingress security?
Ingress security involves protecting the entry points into a network; in Kubernetes specifically, an Ingress manages external access to services within a cluster. It includes implementing rules and policies to control incoming traffic, ensuring only authorized and validated requests reach the cluster's applications. Ingress security often integrates SSL/TLS encryption, authentication mechanisms, and rate limiting to safeguard against unauthorized access and attacks like DDoS.
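A minimal sketch of TLS termination at the ingress layer, assuming an NGINX-style ingress controller and a pre-created TLS certificate Secret (hostname, namespace, and service names are placeholders):

```yaml
# Ingress with TLS termination: traffic to the example host is served over
# HTTPS using a certificate stored in a Kubernetes Secret. Authentication and
# rate limiting are typically added via controller-specific annotations.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  namespace: payments              # example namespace
spec:
  ingressClassName: nginx          # assumes an NGINX ingress controller
  tls:
    - hosts:
        - app.example.com          # placeholder hostname
      secretName: app-example-tls  # pre-created TLS certificate Secret
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web          # example backend service
                port:
                  number: 443
```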
What is control plane security?
Control plane security in Kubernetes refers to protecting the components responsible for managing cluster state and configuration, such as the API server, etcd, controller manager, and scheduler. Routine measures involve securing communication channels, authenticating and authorizing access to the control plane, encrypting sensitive data, and monitoring for malicious activities.
Compromising the control plane can lead to cluster-wide security breaches, making its protection a top priority in cloud security strategies.