Using metadata endpoints in the cloud to obtain high privileged credentials from containers
A point of view on how metadata endpoints can be abused in cloud containers
For cloud infrastructure, metadata endpoints provide an accessible way to collect information useful for managing virtual machines and containers. Moreover, whenever a service account is associated with a resource, they also provide a way to obtain access keys for the cloud environment. In this post we introduce the topic, and in the accompanying technical blog we show how insecure applications can allow an attacker to extract AWS, Azure, and Google Cloud credentials.
Go directly to
- Metadata services in the cloud
- Service accounts and least privilege
- The role of application security in service account protection
- Compromising service accounts: the case of vulnerable applications in containers
- So what can be done about metadata endpoints?
Metadata services in the cloud
Although cloud providers name them differently (AWS and Azure call it the instance metadata service; Google Cloud calls it the metadata server), metadata endpoints, the term we use in this post, provide a programmable interface (API) for obtaining useful information about compute resources such as virtual machines and containers. In most cases the metadata includes information such as CPU usage, network configuration, host names, events, and Docker information, but these services also expose a very important piece of information from a security perspective: credentials.
The credentials that can be obtained from these metadata endpoints belong to the service accounts (i.e., non-human accounts) associated with the specific compute resource for which the metadata is served. In many cases these accounts hold high privileges within the cloud environment, so protecting these credentials is critical to ensuring that attackers do not gain a foothold in your cloud environment.
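As an illustration, the sketch below lists the well-known, documented endpoints from which credentials for the attached service account can be read inside each provider's compute resources. The code only builds the request targets and headers; it does not contact any endpoint.

```python
# The documented default credential endpoints of the three major providers.
# Note that Azure and GCP require a specific header on every request.
METADATA_CREDENTIAL_ENDPOINTS = {
    "aws": {
        # The final path segment is the name of the attached IAM role.
        "url": "http://169.254.169.254/latest/meta-data/iam/security-credentials/",
        "headers": {},
    },
    "azure": {
        "url": ("http://169.254.169.254/metadata/identity/oauth2/token"
                "?api-version=2018-02-01&resource=https://management.azure.com/"),
        "headers": {"Metadata": "true"},
    },
    "gcp": {
        "url": ("http://metadata.google.internal/computeMetadata/v1/"
                "instance/service-accounts/default/token"),
        "headers": {"Metadata-Flavor": "Google"},
    },
}

for provider, endpoint in METADATA_CREDENTIAL_ENDPOINTS.items():
    print(provider, endpoint["url"])
```

Note that all of these addresses are link-local or internal: they are only reachable from inside the resource itself, which is exactly why application-level vulnerabilities become the attacker's way in.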
Service accounts and least privilege
The most common reason to associate a service account with a compute resource is to make cloud services, such as databases, available to that resource. However, as is common with cloud providers, the permissions that can be granted to service accounts are quite flexible and granular. For this reason, it is important to grant the service account only the access it actually needs, rather than more privileges than the resource requires. This is often called the "principle of least privilege" and is one of the main tenets of access control. That said, even under least privilege, the access some compute resources require may be quite elevated. For example, if a CI/CD orchestrator virtual machine hosted in the cloud will be used to deploy infrastructure, it may be the case that administrative permissions within the cloud environment are required. The protection of such resources is therefore just as important.
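To make least privilege concrete, a policy for a resource that only needs to read from a single storage bucket might look like the following (AWS IAM syntax; the bucket name is hypothetical). Any credential stolen from this resource's metadata endpoint would then only grant read access to that one bucket.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-app-data",
        "arn:aws:s3:::example-app-data/*"
      ]
    }
  ]
}
```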
The role of application security in service account protection
As we have mentioned before, metadata endpoints return the clear-text credentials of the service accounts assigned to them. Although these endpoints can only be reached from inside the compute resource that uses them, applications vulnerable to attacks such as server-side request forgery (SSRF) or server-side code execution are prime targets for extracting cloud credentials.
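The SSRF pattern can be sketched as follows. The endpoint, function names, and "link preview" feature are hypothetical; the point is that a server fetching user-supplied URLs without host validation can be steered at the metadata endpoint, which is reachable from the server even though it is not reachable from the internet.

```python
from urllib.parse import urlparse

def fetch_preview(user_url: str) -> str:
    # VULNERABLE: the target host is never validated, so internal addresses
    # such as 169.254.169.254 are reachable through the server. A real
    # implementation would do something like requests.get(user_url).text;
    # here we only show what would be fetched.
    return f"would fetch: {user_url}"

def is_allowed(user_url: str) -> bool:
    # A naive "fix" that checks only the scheme. The host is never checked,
    # so the link-local metadata address still passes.
    parsed = urlparse(user_url)
    return parsed.scheme in ("http", "https")

attack = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"
print(is_allowed(attack))  # → True: the attacker's URL passes the check
```

A proper defense validates the resolved destination against an allow-list of hosts, not just the URL's shape, and ideally the metadata endpoint is hardened or disabled as discussed below.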
For these reasons, it is necessary to ensure that all your applications deployed on the cloud are securely developed and routinely tested for security via penetration tests to ensure that they cannot be compromised.
Compromising service accounts: the case of vulnerable applications in containers
Although metadata endpoints are usually described and used in the context of virtual machines, containers are also compute resources and therefore have access to their own metadata endpoints. These endpoints can likewise provide credentials for associated service accounts, so it is also important to ensure that containerized applications are secure, preferably from their development phase onward. Our accompanying technical blog shows how these attacks may play out across different cloud providers.
So what can be done about metadata endpoints?
The usage of service accounts is a necessity as automation takes more prominence in development. Moreover, many applications require service accounts so that permissions are assigned as required. Metadata endpoints thus play an important role in enabling automation within cloud environments.
As of today, the mitigations offered by cloud providers are the requirement of special headers in the request, so that simple SSRF attacks cannot reach the endpoint, and session-based access to the metadata endpoint. However, there are still ways in which these protections can be bypassed. We therefore propose the following recommendations:
- Ensure that the permissions granted to the service account associated with the compute resource are minimal and cover only the cloud resources it must access (i.e., the principle of least privilege).
- Ensure that whenever the metadata endpoint is not required, for example, when explicit service accounts are not used and credentials are stored in third-party solutions such as vaults and secrets managers, the endpoint is disabled and made inaccessible from the compute resource, for example by changing the container configuration.
- Ensure that all applications are securely developed and that security testing is in place to promptly detect vulnerabilities that could lead to compromise of the application and access to the metadata endpoint.
- Ensure that monitoring is in place and that use cases are created to detect anomalous behavior from the service accounts associated with compute resources. For example, a service account making requests to the cloud provider API to list or access resources that have not been assigned to it could be an indicator of compromise.
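The session-based protection mentioned earlier can be sketched for AWS (IMDSv2), whose documented flow requires a PUT request with a TTL header before any metadata can be read. This blocks the most common SSRF shapes: a simple GET-based SSRF can neither issue the PUT nor set the custom header. The sketch below only builds the requests; the token value is hypothetical and nothing is executed against a real endpoint.

```python
# AWS IMDSv2 two-step flow: first obtain a session token via PUT...
TOKEN_URL = "http://169.254.169.254/latest/api/token"
TOKEN_REQUEST_HEADERS = {"X-aws-ec2-metadata-token-ttl-seconds": "21600"}

def metadata_read_headers(token: str) -> dict:
    # ...then echo the token back in this header on every subsequent GET.
    return {"X-aws-ec2-metadata-token": token}

# On a real instance the flow would be (using the requests library, not
# executed here):
#   token = requests.put(TOKEN_URL, headers=TOKEN_REQUEST_HEADERS).text
#   creds = requests.get(
#       "http://169.254.169.254/latest/meta-data/iam/security-credentials/",
#       headers=metadata_read_headers(token),
#   ).text
print(metadata_read_headers("hypothetical-token"))
```

Enforcing this mode (e.g., with the AWS CLI's `aws ec2 modify-instance-metadata-options --http-tokens required`) makes the session flow mandatory, though, as noted above, header-based mitigations can still be bypassed when the attacker controls more than the URL.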
Finally, it is important to understand that proactivity is the best option when it comes to securing applications in cloud environments. Making sure that security is considered in the early stages of cloud application development prevents situations such as the one described above. Moreover, ensuring that the cloud environment is properly monitored will also allow rapid responses to potentially dangerous situations involving metadata endpoints.
We would like to thank the security teams of Azure, AWS and Google Cloud for their useful and helpful comments on the development of the technical blog.