
Security experts discuss oft-neglected areas of cloud security and offer guidance to businesses working to strengthen their security posture.

Kelly Sheridan, Former Senior Editor, Dark Reading

May 21, 2021


RSA CONFERENCE 2021 – Enterprise cloud adoption brings myriad benefits, risks, challenges, and opportunities – both for organizations and attackers who target them. Even longtime users of cloud infrastructure and services could still learn a thing or two about strengthening security. 

Given the year that preceded this year's all-virtual RSA Conference, during which businesses grew heavily dependent on cloud services and struggled to secure fully remote teams amid the COVID-19 pandemic, it was little surprise that cloud security was a hot topic. Speakers explored the gaps that are frequently overlooked and offered practical guidance on how to mitigate risks.

One of these blind spots is identity and access management (IAM) in the cloud, said Matthew Chiodi, chief security officer for public cloud at Palo Alto Networks, in his RSAC talk on the topic. A generic cloud account might have two roles and six policies assigned to each, but in most cases the reality is far more complex, making it challenging to determine what someone can and can't do.

In most production accounts Chiodi has seen, "it's usually hundreds of roles and maybe even thousands of policies," he said. "It becomes really difficult to understand what we call net effective permissions." The problem is magnified as organizations use more cloud accounts.
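To see what that complexity looks like in practice, here is a minimal sketch, assuming AWS and the boto3 SDK with credentials already configured, that simply enumerates every role in an account and counts its attached and inline policies, the raw inputs you would need before reasoning about net effective permissions. It is an illustration, not Palo Alto Networks' tooling.

```python
import boto3

iam = boto3.client("iam")

# Walk every role in the account and count managed (attached) and inline policies.
paginator = iam.get_paginator("list_roles")
for page in paginator.paginate():
    for role in page["Roles"]:
        name = role["RoleName"]
        managed = iam.list_attached_role_policies(RoleName=name)["AttachedPolicies"]
        inline = iam.list_role_policies(RoleName=name)["PolicyNames"]
        print(f"{name}: {len(managed)} managed policies, {len(inline)} inline policies")
```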

To get a better sense of how widespread the issue is, Palo Alto Networks collected "a massive, massive data set" of publicly available GitHub data: 283,751 files and 145,623 repos, from which researchers extracted 68,361 role names and 32,987 potential cloud accounts. They took the 500 most common role names, paired them with the list of validated cloud accounts, and tried different combinations to find potential misconfigurations.
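At its core, that methodology amounts to testing whether a role in a given account can be assumed by an outside party. A hypothetical sketch of that kind of check, using boto3 and placeholder account IDs and role names, might look like the following; it should only ever be run against accounts you own or are explicitly authorized to test.

```python
import boto3
from botocore.exceptions import ClientError

sts = boto3.client("sts")

def try_assume(account_id: str, role_name: str) -> bool:
    """Return True if the role's trust policy lets our caller assume it."""
    arn = f"arn:aws:iam::{account_id}:role/{role_name}"
    try:
        sts.assume_role(RoleArn=arn, RoleSessionName="misconfig-audit")
        return True   # assumable from outside: likely a misconfiguration
    except ClientError:
        return False  # access denied (expected) or the role does not exist

accounts = ["111111111111"]        # placeholder account IDs
role_names = ["admin", "deploy"]   # placeholder common role names
for acct in accounts:
    for role in role_names:
        if try_assume(acct, role):
            print(f"Assumable: {role} in account {acct}")
```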

With these misconfigurations, they could have had access to thousands of EC2 snapshots, hundreds of S3 buckets, and a wealth of KMS keys and RDS snapshots, he said.

"When you have a compromised cloud account due to one of these types of misconfigurations, it is almost always much worse than a compromised cloud host," Chiodi said.

An attacker who compromises a single host might be able to exploit a bug and access data, but they are generally limited if network segmentation is in place. In the case of these findings, patches and multi-factor authentication wouldn't matter because "[an attacker] can weave around all of those things when you have an identity-based misconfiguration at the CSP level."

The Risks of Infrastructure-as-Code
Infrastructure-as-code (IaC), a way of managing and provisioning infrastructure through code instead of manual processes, "is really blossoming for most organizations," Chiodi said. While it offers benefits to security teams, the approach also comes with risks.

Palo Alto Networks surveyed nearly one million IaC templates found on GitHub. They learned 42% of AWS CloudFormation template users have at least one insecure configuration, and more than three-quarters of cloud workloads expose SSH. Sixty percent have cloud storage logging disabled, and in 43% of organizations configuring cloud-native databases via IaC, encryption at the database layer is completely disabled.

"We found that when organizations rely on infrastructure-as-code to create their external and even internal security boundaries, 76% of the time they're exposing sensitive ports like SSH directly to the Internet," he said. 

For Terraform, which lets organizations use multi-cloud IaC templates across all major cloud service providers, the numbers were lower but "consistent inconsistencies" persisted. More than 20% of all Terraform configuration files had at least one insecure configuration; in 67%, access logging for S3 buckets was disabled; more than half had object versioning disabled.

Spilling Secrets in the Cloud
While most security practitioners know accidental data exposure is a common cloud security issue, many don't know when it's happening to them. This was the crux of a talk by Jose Hernandez, principal security researcher, and Rod Soto, principal security research engineer, both with Splunk, who explored the ways corporate secrets are exposed on public repositories.

In today's environments, credentials are everywhere: SSH key pairs, Slack tokens, IAM secrets, SAML tokens, API keys for AWS, GCP, and Azure, and many others. A common risk scenario is credentials that aren't properly protected and are left exposed, most often in a public repository – Bitbucket, GitLab, GitHub, Amazon S3, and open databases are the main public sources.

"If you are an attacker and you're trying to find somebody that, either by omission or neglect, embedded credentials that could be reused, these would be your sources of leaked credentials," Soto said, noting these can help attackers pivot between endpoints and the cloud.

Splunk researchers found 276,165 companies with leaked secrets on GitHub. The most commonly leaked were GCP service account tokens, seen in 34% of cases, followed by "password in URL" (30%) and AWS API keys (12.7%). When a secret did leak, it took an average of 52 days for it to be removed from the GitHub project, Hernandez said.
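Secret scanning itself is conceptually simple. The simplified sketch below, which is not Splunk's methodology, walks a checked-out repository and flags strings matching a few well-known credential patterns; production scanners use far more patterns plus entropy analysis.

```python
import os
import re

# A handful of well-known credential patterns; real scanners have many more.
PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Password in URL":   re.compile(r"[a-z][a-z0-9+.-]*://[^/\s:]+:[^@\s]+@"),
    "Private key block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_repo(root: str) -> None:
    """Print every file under root that contains a matching credential pattern."""
    for dirpath, _, filenames in os.walk(root):
        for filename in filenames:
            path = os.path.join(dirpath, filename)
            try:
                with open(path, errors="ignore") as f:
                    text = f.read()
            except OSError:
                continue
            for label, pattern in PATTERNS.items():
                if pattern.search(text):
                    print(f"{label} found in {path}")

scan_repo("path/to/checked-out-repo")  # placeholder path
```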

More organizations have a "converged perimeter," a term Hernandez used to describe environments with assets both behind an Internet gateway, such as DevOps and ITOps systems, and in the cloud. There are several attacker tactics, techniques, and procedures (TTPs) to watch for in these environments.

One is the creation of temporary or permanent keys. "We've seen cases, for example, where developers had root keys on an AWS environment, and that is pretty bad," Soto said. "You should never give root keys; you have to enforce segregation of duties and principle of least privilege … once you have a root key, you can do whatever you want and take over," he added.

Other TTPs include the creation of trust policies and attaching a policy to a role in AWS, and hijacking temporary tokens such as OAuth2 in GCP, the researchers said. Azure users should watch for the creation of a new federated domain and service principal. Those with Active Directory Federation Services, Azure, and AWS should watch for forged SAML assertions, he added.
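On the AWS side, several of these TTPs surface as specific CloudTrail event names. A rough sketch of reviewing them, assuming boto3 and permission to call CloudTrail's LookupEvents API, follows; the event names are real CloudTrail events, but alerting thresholds and response are left to the reader.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# IAM events worth reviewing: new access keys, policies attached to roles,
# and changes to a role's trust policy.
WATCHED_EVENTS = ["CreateAccessKey", "AttachRolePolicy", "UpdateAssumeRolePolicy"]

for event_name in WATCHED_EVENTS:
    response = cloudtrail.lookup_events(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": event_name}],
        MaxResults=50,
    )
    for event in response["Events"]:
        print(event_name, event["EventTime"], event.get("Username", "unknown"))
```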

Attack Detection & Defensive Strategies
It's no secret that detecting malicious activity is tougher in the cloud, partly because fewer actions there are obviously bad, said Alfie Champion, cyber defense consultant with F-Secure Consulting, in an RSAC talk on attack detection.

"Context is ever more key when it comes to cloud detection," Champion said. "With much of this API interaction going on, understanding an action, the intent behind an action, and the context of it can be crucial to building high-fidelity detections."

A common mistake organizations make when pivoting to the cloud is aggregating telemetry with no context: there's no way of knowing which account a log belongs to, and no way for an analyst to pivot into an account to understand what's going on during an investigation. "What is bad in one account could be good in another, and you need that context to figure that out," he noted.
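One way to keep that context is to enrich each record with its source account before indexing. A minimal sketch, using the recipientAccountId field that CloudTrail records carry and a hypothetical mapping of account IDs to friendly names:

```python
# Hypothetical mapping of account IDs to human-friendly names.
ACCOUNT_NAMES = {
    "111111111111": "prod-payments",
    "222222222222": "dev-sandbox",
}

def enrich(record: dict) -> dict:
    """Tag a CloudTrail record with the name of the account it came from."""
    account_id = record.get("recipientAccountId", "unknown")
    record["account_name"] = ACCOUNT_NAMES.get(account_id, "unmapped")
    return record
```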

Many overlook authentication logs, which sit at the interface between on-premises and cloud environments, as well as management interfaces. Larger organizations likely manage multiple cloud accounts, often in a federated way, he added. These logs provide meaningful correlation for the events they see.

It's worth noting that logging and threat detection look different for each of the major cloud providers, and admins may need to take extra steps to ensure they're receiving the data they want. Flow logging, which shows where traffic is coming from, where it's going, and how much data is being transferred, can reveal potentially malicious activity but often isn't enabled, noted Brandon Evans, senior security engineer with Zoom Video Communications, in his RSAC talk.

"None of the big three cloud providers have flow logging enabled by default," he said, noting that customers must explicitly opt in and define a log retention policy. AWS, Azure, and GCP all have varying delays between triggering logs and receiving them, and differences in maximum log retention periods, command line support, and logging of blocked ingress traffic, he said. 

Evans urged businesses to ensure they are capturing cloud API and network flow logs for each cloud provider they use. In the long term, as they find weaknesses in cloud infrastructure and configuration, they should work with engineering to harden permissions and use the principle of least privilege.

"If we can block attacks altogether, we absolutely should," he said. "However, monitoring and alerting will always be necessary to find the weaknesses we have not yet identified and fixed."

It's handy for businesses to design a "cloud detection stack," which can help ingest the right logs and present them in the correct way. Nick Jones, senior security consultant with F-Secure, pointed out in his talk with Champion that while the industry likes to talk about a "single pane of glass" for this practice, he believes this is "useful, but perhaps not necessary." 

"The real critical thing here is attacks rarely happen in isolation in a single environment," he said. "It's likely an attacker is likely to try and pivot or laterally move from your on-premises estate into the cloud, or vice versa, or between two environments."

Given this, he continued, analysts will need to look at logs from one data source and pivot into the next. While there are many data sources to work with, Jones recommended prioritizing control plane audit logs, such as CloudTrail and its counterparts from other providers, for visibility into all administrative actions. Service-specific logs, such as storage access logs, function executions, and KMS key access, are also critical because they show access to and usage of specific resources and services.
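As one concrete example of turning on a service-specific log source, S3 server access logging records requests made against a bucket. A minimal boto3 sketch with placeholder bucket names (the target bucket must be configured to accept access logs):

```python
import boto3

s3 = boto3.client("s3")

# Send access logs for one data bucket to a separate logging bucket.
s3.put_bucket_logging(
    Bucket="example-data-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "example-logging-bucket",
            "TargetPrefix": "access-logs/example-data-bucket/",
        }
    },
)
```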

It's never too early to threat model and test offensive scenarios, said Champion. How would an attacker target one of your assets? How would you subvert your own security controls? He advised identifying the organization's critical data, considering an attacker's likely starting points and ultimate goal, and from there prioritizing the attack paths to test.

About the Author(s)

Kelly Sheridan

Former Senior Editor, Dark Reading

Kelly Sheridan was formerly a Staff Editor at Dark Reading, where she focused on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance & Technology, where she covered financial services. Sheridan earned her BA in English at Villanova University. You can follow her on Twitter @kellymsheridan.
