Introduction to Cloud-Native Secrets Management: Part II
This is the second part in a three-part blog series titled “Introduction to Cloud-Native Secrets Management”.
I had a lot to say, so I am giving my readers the option to read the concise version below or the more elaborate version here. Thanks, everyone!
In the first part of the Cloud-Native blog series we introduced the various types of secrets, along with the technical challenges and business implications of misusing secrets in a Cloud-Native environment. In this second part of the series we'll deep-dive into how secrets are protected in Cloud-Native environments and the implications for security and trust.
As many organizations start to realize the damaging potential of a major security breach, a new set of vault-like tools is emerging in the Cloud-Native ecosystem. Logical vaults, like their physical predecessors, securely store secrets while they remain inside the vault: they encrypt the data at rest within the vault and use TLS to encrypt each secret in transit from the vault to the application.
Although using such tools is far better than simply coding secrets into the source code, severe security implications still arise from the way they are typically used, as we'll show below.
When an application needs to use a secret stored in the vault (e.g., a code-signing key, a database encryption key, or a connection string to a database containing sensitive information), it first has to authenticate itself to the vault by providing the proper credentials.
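As a concrete illustration of this request flow, here is a minimal in-memory sketch. The `MiniVault` class, its token scheme, and the secret paths are all hypothetical, loosely modeled on token-based vault APIs; this is a simulation of the access pattern, not a real vault client.

```python
import secrets

class MiniVault:
    """Toy in-memory vault: token-based authentication, path-based lookup."""

    def __init__(self):
        self._secrets = {}    # path -> secret value
        self._tokens = set()  # credentials the vault will accept

    def issue_token(self):
        # The credential the application will later present to the vault.
        token = secrets.token_hex(16)
        self._tokens.add(token)
        return token

    def put(self, path, value):
        self._secrets[path] = value

    def read(self, token, path):
        # The vault authenticates the caller before releasing anything.
        if token not in self._tokens:
            raise PermissionError("invalid credentials")
        return self._secrets[path]

vault = MiniVault()
vault.put("db/connection-string", "postgres://app:hunter2@db:5432/prod")

token = vault.issue_token()                       # the app's vault credential
conn = vault.read(token, "db/connection-string")  # authenticated secret read
```

Note that the application now faces a bootstrap problem of its own: the token used to authenticate to the vault is itself a secret that must be protected.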
Once the vault authenticates the access request, it reads the secret data from its storage and decrypts it with a key stored on that same storage or, more rarely, in an external HSM. For the experienced security professional this already raises a red flag: compromising that single key grants access to the vault's entire content. Obfuscation methods used to protect this key can only slow an attacker down; they cannot prevent the breach.
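The risk of co-locating the master key with the encrypted secrets can be demonstrated with a short sketch. The keystream construction below is purely illustrative (it is not a real cipher, and a production vault would use something like AES-GCM); the point is the access model: whoever can read the storage also reads the key, and with it every secret.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Illustrative keystream built from SHA-256 blocks; NOT real encryption.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_encrypt(key: bytes, data: bytes) -> bytes:
    # XOR with a deterministic keystream: applying it twice decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# At-rest layout: the ciphertexts AND the master key live on the same storage.
master_key = b"master-key-on-the-same-disk"
storage = {
    "api-token":   xor_encrypt(master_key, b"token-abc123"),
    "signing-key": xor_encrypt(master_key, b"-----BEGIN KEY-----"),
}

# An attacker who reads the storage obtains the key too,
# and can decrypt the vault's entire content in one pass:
stolen = {path: xor_encrypt(master_key, ct) for path, ct in storage.items()}
```

Moving the master key into an external HSM breaks exactly this property: the key material never sits on the storage the attacker copied.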
After decrypting the secret (whether with an HSM or a local key), the vault delivers it to the application over TLS, so that only the application can read it. However, once the TLS-secured communication is decrypted at the application, the secret we worked so hard to protect becomes available in clear text and can be harvested, opening an easy and lucrative attack surface for a potential adversary.
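To make the harvesting risk tangible, here is a small hypothetical sketch of one common leak path: after TLS termination the secret is an ordinary plaintext string inside the process, and a careless error handler can echo it straight into a log an attacker may read. The `connect` function and connection string are invented for illustration.

```python
# After TLS termination, the secret is plain text in process memory.
secret = "db-password-s3cr3t"

def connect(conn_string):
    # Hypothetical careless error path that echoes its argument verbatim.
    raise RuntimeError(f"connection failed for {conn_string}")

try:
    connect(f"postgres://app:{secret}@db:5432/prod")
except RuntimeError as exc:
    leaked_log_line = str(exc)  # this string typically lands in a log file
```

The same exposure exists via core dumps, debug endpoints, or environment-variable listings; TLS protected the secret in transit, but nothing protects it in use.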
This is a major security issue with a broad effect on Cloud-Native applications: cryptographic keys, credentials, and secrets are not strongly secured and are exposed while in use. This stands in sharp contrast to traditional environments, where private keys and credentials are typically secured in hardware such as an HSM or TPM, especially in medium- to high-trust use cases.
This broadly practiced modus operandi lets a skilled intruder intercept secrets used by Cloud-Native applications. The security impact of a revealed secret starts at the relatively low end, with an attacker compromising the username and password of a low-importance system, and rises steeply when the attacker gains admin credentials, API tokens, or private encryption keys that can lead to leaking personally identifiable information on a large scale.
In addition, passing secrets directly to applications creates an auditing nightmare, as there is no way to centrally trace which applications used a given secret.
In this part of the Cloud-Native blog series, we have discussed some of the inherent challenges of protecting secrets in cloud-native environments.
In the next post of this series, we’ll dive deeper into the challenges of container identity in a cloud-native environment.
To read the more elaborate version, please click here.