Intelligently Managing Risk: Multicloud Infrastructure Security

Nov 24, 2020
5 minutes

Multicloud is the new reality for many organizations, whether chosen as a strategy or forced on them by factors like customer preference, mergers and acquisitions, or government regulations. Forward-looking organizations have accepted that this reality will happen, or has already happened, to them. They are making plans to intelligently manage multiple clouds and are proactively putting measures in place to ensure continued compliance.

Some of the risks they need to manage are technical: differences in authentication and authorization solutions, for example, or in how network routing and security are configured. Other risks are on the people side of the organization. A common one, caused by excessive processes and procedures, is that fast-moving parts of the organization go a little rogue and become an IT headache.

Automation Is the New Baseline

It is nearly impossible for one organization to acquire all the skills needed to learn the intricacies of doing everything on every cloud. Finding that expertise requires enormous amounts of ramp-up time and budget. Automation is the cornerstone of supporting multiple clouds.

Starting with a multicloud-friendly automation product will save an immeasurable amount of time, particularly if it can run the same deployments and the same compliance checks on every cloud in the mix. Since automation is repeatable, less time is spent cross-training new team members, whether they are security admins or DevOps engineers. Ansible, Puppet, and Terraform are leading multicloud automation frameworks. The sketch below illustrates the underlying idea of defining a check once and running it everywhere.
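
As a minimal illustration (plain Python, with hypothetical per-cloud inventory functions standing in for real SDK calls or for a framework like Ansible or Terraform), a single compliance policy can be written once and applied identically to every cloud in the mix:

```python
# Minimal sketch: one compliance check, run identically against every cloud.
# The list_*_storage functions are hypothetical placeholders, not real SDK calls;
# in practice an automation framework plays this role.

def list_aws_storage():      # placeholder for an AWS inventory call
    return [{"name": "logs-bucket", "encrypted": True, "public": False}]

def list_azure_storage():    # placeholder for an Azure inventory call
    return [{"name": "artifacts", "encrypted": False, "public": False}]

def list_gcp_storage():      # placeholder for a Google Cloud inventory call
    return [{"name": "backups", "encrypted": True, "public": True}]

CLOUDS = {"aws": list_aws_storage, "azure": list_azure_storage, "gcp": list_gcp_storage}

def check_storage(resource):
    """The policy is written once and applied the same way everywhere."""
    findings = []
    if not resource["encrypted"]:
        findings.append("encryption disabled")
    if resource["public"]:
        findings.append("publicly accessible")
    return findings

if __name__ == "__main__":
    for cloud, inventory in CLOUDS.items():
        for resource in inventory():
            for finding in check_storage(resource):
                print(f"[{cloud}] {resource['name']}: {finding}")
```

Because the policy logic lives in one place, onboarding a new cloud or a new team member means adding an adapter, not rewriting the checks.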

Flexible Policies and Procedures

Use an automation framework to handle most of the implementation. This lets policies and procedures be a little more flexible and allows for the differences between cloud implementations. You are able to focus on the goal rather than the exact procedure. This is a stark contrast to procedures traditionally written for on-premises environments, where the technologies in use are much more tightly controlled.

For example, a single network in Google Cloud can support subnets from multiple regions. In Azure, by contrast, getting subnets in different regions to even know about each other requires multiple virtual networks with peering configured.

Flexibility is even more important in the DevOps space, where there are far more variables and options: how the application is packaged, how it is deployed, and whether it lands on Kubernetes, Fargate, App Engine, or even an existing shared virtual machine.

How teams get access to clouds requires further flexibility. Traditional corporate procurement processes that involve competing bids and other red tape are counterproductive. Teams will work around these barriers. They may put the services they need on a credit card and expense them right next to lunch with a client.

The organization’s control points will be circumvented, which can lead to critical security leaks and technical debt. Such instances can be found only through extensive searches of expense reports. This is “shadow IT,” and it is a real problem.

Standardize Components as Much as Possible 

Even though multiple clouds are in use, it is possible to minimize the number of technologies in play. This is done by standardizing as many of the core technologies as possible, often with third-party components.

Whenever possible, stick to solutions that support multiple clouds. This may mean giving up a nice-to-have feature offered by a very specific point solution. The consistency and efficiency gained more than outweigh the benefits of any one nice-to-have feature. Only when a feature is a must-have requirement are single-cloud management solutions worth the pain.

Approaches to standardization start all the way back in the application development pipeline: using generic, popular IDEs like Visual Studio Code, for example, or a single CI/CD pipeline that builds all applications as containers based on something like Red Hat UBI as a cloud-agnostic base. Moving up the stack, stick with Kubernetes for container orchestration, always use a PostgreSQL-based database engine, and introduce third-party monitoring and log aggregation tools to provide as close to a single pane of glass as possible across all the clouds actively in use.
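
To illustrate the payoff of standardizing on a PostgreSQL-compatible engine, here is a hedged sketch: the host name, database name, and credentials are placeholders, but because the code speaks the standard PostgreSQL protocol, it works unchanged whether the database is Amazon RDS, Azure Database for PostgreSQL, or Cloud SQL.

```python
# Sketch: standardizing on PostgreSQL means application code does not care
# which cloud hosts the database. Connection details below are placeholders.
import psycopg2  # standard PostgreSQL driver; works against any managed Postgres

def open_connection(host: str):
    return psycopg2.connect(
        host=host,              # e.g. an RDS, Azure, or Cloud SQL endpoint
        dbname="inventory",
        user="app_user",
        password="example-password",  # placeholder; use a secrets manager in practice
        sslmode="require",
    )

# The same query logic runs unchanged regardless of the hosting cloud.
with open_connection("db.example.internal") as conn:
    with conn.cursor() as cur:
        cur.execute("SELECT version();")
        print(cur.fetchone())
```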

There are far more examples, but the point is to take a step back and use cloud-specific tooling only where required, avoiding the lock-in that every cloud provider is trying to achieve. For example, get MongoDB directly from MongoDB rather than from a cloud's own in-house deployment of it.

Multicloud Management for Compliance and Cost Tracking

In multicloud environments, it is paramount that insight into costs and security compliance is available on demand to those who require the information. There is a category of tools that can perform one or both of these activities. Their vendors work closely with the largest hyperscale cloud providers to keep the solutions up to date and to include the latest security policy enhancements, so that when a policy is applied and validated it is as consistent as possible across all clouds in the mix.
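
As a rough sketch of what on-demand, consistent visibility means in practice, the Python below rolls per-cloud cost and finding counts into a single report. The fetch functions and their numbers are hypothetical stand-ins for whatever billing or CSPM APIs such a tool actually calls.

```python
# Sketch: aggregate cost and compliance data from each cloud into one report.
# The fetch_* functions and their values are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class CloudSnapshot:
    cloud: str
    monthly_cost_usd: float
    open_findings: int

def fetch_aws() -> CloudSnapshot:
    return CloudSnapshot("aws", 12450.00, 7)      # placeholder data

def fetch_azure() -> CloudSnapshot:
    return CloudSnapshot("azure", 8300.00, 3)     # placeholder data

def fetch_gcp() -> CloudSnapshot:
    return CloudSnapshot("gcp", 5125.00, 1)       # placeholder data

def report(snapshots):
    total_cost = sum(s.monthly_cost_usd for s in snapshots)
    total_findings = sum(s.open_findings for s in snapshots)
    for s in snapshots:
        print(f"{s.cloud:>6}: ${s.monthly_cost_usd:>10,.2f}  findings={s.open_findings}")
    print(f" total: ${total_cost:>10,.2f}  findings={total_findings}")

if __name__ == "__main__":
    report([fetch_aws(), fetch_azure(), fetch_gcp()])
```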

These tools for cloud security posture management, or CSPM, can help organizations intelligently manage risks associated with multicloud environments. Check out Cloud Security and Compliance for Dummies, which explains the importance of CSPM as part of a holistic cloud native security program.

This post originally appeared on TheNewStack.

