The recently published ROCA (Return of Coppersmith’s Attack) vulnerability continues to generate shock waves in the crypto and security community. Beyond its immediate implications, this severe vulnerability serves as a wake-up call that highlights some of the underlying issues in today’s typical crypto architectures.
The vulnerability stems from a flawed implementation of RSA key generation in a library developed by the German semiconductor manufacturer Infineon Technologies AG. This library runs on Infineon hardware that is then embedded in a range of products, some of them FIPS 140-2 and CC EAL 5+ certified devices. As these chips constitute a building block for numerous products, the range of affected devices is very broad, including many Windows and Chromebook laptops from Fujitsu, HP and Lenovo. Authentication tokens and even citizens’ digital identity cards are impacted as well.
RSA 1024- and 2048-bit keys generated using the aforementioned chips may be vulnerable to a factorization attack, in which the private key can be recovered directly from the public key within practical timespans: factoring an RSA 2048-bit key takes a little more than 2 weeks and about $40k of compute, while less than an hour and $76 will get you a 1024-bit key. Identifying whether a key is vulnerable is very quick, taking less than a millisecond. So far, the researchers have uncovered over 700,000 affected keys, and the full scope is likely to be much broader. This is a devastating vulnerability, exposing keys used for the most sensitive operations such as data encryption, email encryption, code signing and national identity (e.g. eID).
Remediation is likely to take a long time, as fixing the vulnerability requires a firmware upgrade – which depends on availability from the OEM, and is far from straightforward to install. Microsoft has recently released a software patch that works around the vulnerability by defaulting to generating RSA keys in software on applicable devices. This, in turn, means that these keys will not be hardware protected going forward – until a firmware upgrade takes place.
The grim picture described above calls on the crypto community to ask some serious questions:
What are the practical security implications of using crypto hardware?
Using dedicated hardware for managing crypto keys is considered by many the most secure option, protecting sensitive keys from the threats and vulnerabilities of general-purpose operating systems and applications. In theory, this pans out nicely security-wise – until a vulnerability like ROCA hits. Vulnerabilities in hardware are extremely problematic: a fix is likely to require a costly firmware upgrade (or even full hardware replacement), taking months or years to propagate through the supply chain, e.g. chip manufacturer –> device manufacturer –> OEM –> end user. In the meantime, because crypto infrastructure sits at the root of the security architecture, thoughtfully built security systems may be completely broken and left exposed for a very long period of time. Therefore, when relying on crypto hardware as part of the security architecture, the risks of potential vulnerabilities, long update times, hassle to users and high costs must be seriously considered.
As we all realize that vulnerabilities aren’t going anywhere, we should think of better ways to build our crypto infrastructure using agile and software-based methods in which updates take a matter of days to implement and recovery is far easier.
Is “Certified” = “Secure”?
This concerning question arises because the flawed modules behind ROCA had been certified under two internationally recognized certification standards: FIPS 140-2 and Common Criteria. If such a serious vulnerability went undetected in a rigorously tested module – and within a function that is very mature and has been known for decades – what does that say about the core security value of these certifications?
Using crypto hardware on BYOD devices: is it a good idea?
Crypto hardware has become much more common, with secure elements, trusted platform modules (TPM), trusted execution environments (TEE) etc. built into the hardware of many endpoint devices. While the use of such features for protecting sensitive operations like data protection or identity and payment verification is growing steadily, so are the risks. For example, consider a hardware function that is critical to the trust model of a user-facing application turning out to be vulnerable, with no easy remedy. Firstly, the complexity of the supply chain (demonstrated well by ROCA, where a few vulnerable chips are embedded in hundreds of products) slows down detection and realization of the issue’s scope. Furthermore, the app provider does not even own the device and so cannot force the user to upgrade. In fact, in that context the provider has no relationship at all with the device manufacturer and/or the vulnerable chip manufacturer. This leads to an ironic scenario: app providers are often held liable for misdeeds taking place on the devices of their end users, while the app security model is built on third-party vendors that have no business relationship with the app provider.
To be clear, such crypto hardware chips have great benefits, and apps should leverage them when they exist on the device and have no known unpatched vulnerabilities. However, relying on hardware alone can be very problematic. In other words, it’s probably a better idea to treat these as an additional layer of security rather than a critical building block of the app security model. Security must be built into the app you control, rather than assumed to exist and function correctly in environments over which you have absolutely no control.
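One way to make hardware an additional layer rather than a single point of failure is to derive the app’s working key from both a hardware-backed secret and an app-managed software secret, so that a flaw in either layer alone does not expose the data. The sketch below assumes `hardware_secret` stands in for material that would come from a TPM or secure element, and uses an HKDF-style two-step HMAC construction:

```python
# Minimal sketch of layered key derivation (hypothetical function names).
# hardware_secret: stand-in for a secret held by a TPM/secure element.
# software_secret: a secret managed by the app itself.
# Compromise of either secret alone is not enough to recover the derived key.
import hashlib
import hmac


def derive_app_key(hardware_secret: bytes, software_secret: bytes,
                   context: bytes = b"app-data-encryption") -> bytes:
    """Combine both secrets via HMAC-SHA256 (extract), then bind a context."""
    prk = hmac.new(hardware_secret, software_secret, hashlib.sha256).digest()
    return hmac.new(prk, context, hashlib.sha256).digest()
```

If the hardware layer is later found to be flawed, the derived key still depends on the software secret the app controls, and rotating that secret is a software-speed operation rather than a firmware upgrade.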
Lastly, the ROCA story isn’t over yet. Next week at the ACM Conference on Computer and Communications Security (ACM CCS), the full technical details behind the vulnerability will be disclosed. When these details are revealed, we will follow up with a technical analysis and our takeaways on how to build an agile and resilient crypto infrastructure.