Secure Multiparty Computation – a Seasoned Technology with Strong Foundations

What is MPC?

Secure multiparty computation (MPC) is a technology that enables different parties with private inputs to carry out a joint computation on their inputs without revealing them to each other. For example, it is possible for two people to compare their DNA and see if they are related to each other, without revealing anything but that fact.
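To give a toy flavor of the idea, here is a minimal additive secret-sharing sketch in Python, in which three parties learn the sum of their private inputs without any party revealing its own value. The prime modulus, variable names, and three-party setup are my own illustrative choices; real MPC protocols for richer functions (such as the DNA comparison above) are far more involved.

```python
import secrets

P = 2**61 - 1  # a Mersenne prime; all arithmetic is modulo P

def share(x, n):
    """Split x into n random additive shares that sum to x mod P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

# Three parties, each with a private input.
inputs = [42, 17, 99]
n = len(inputs)

# Each party splits its input into n shares, keeps one, and sends
# one to each other party. No single share reveals anything about x.
all_shares = [share(x, n) for x in inputs]

# Party i locally adds up the i-th share of every input.
partials = [sum(all_shares[j][i] for j in range(n)) % P for i in range(n)]

# Combining the partial sums reveals only the total, never the inputs.
total = sum(partials) % P
assert total == sum(inputs) % P
```

The security intuition: each individual share is uniformly random, so a party seeing any strict subset of the shares of another party's input learns nothing about it.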

A special case of MPC is “threshold cryptography” where private keys (for signing, decryption, etc.) are shared amongst parties, and the cryptographic operation can only be carried out when an authorized quorum of those parties agree to carry it out. This is an MPC scenario since the parties’ inputs are “shares of the key”, and the cryptographic operation is carried out without ever bringing these shares together. When used in this way, MPC dismantles the single point of compromise of cryptography; there is no single place where an attacker can steal the key from. MPC protocols come with mathematical proofs of security, guaranteeing that even if an attacker breaches a subset of the parties holding the key shares, and can run any malicious code it desires, it cannot learn anything about the key nor carry out unauthorized operations.
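To make the "shares of the key" idea concrete, here is a minimal sketch of Shamir t-of-n secret sharing in Python, the classic building block behind threshold schemes. One crucial caveat: this toy code reconstructs the key in order to show that a quorum suffices, whereas real threshold-cryptography protocols sign or decrypt using the shares directly and never bring the key back together. The prime, parameters, and function names are illustrative choices of mine, not Unbound's protocol.

```python
import secrets

P = 2**127 - 1  # a Mersenne prime; shares live in the field GF(P)

def split(secret, t, n):
    """Shamir t-of-n sharing: sample a random degree-(t-1) polynomial
    with constant term equal to the secret; shares are points on it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Recover the secret via Lagrange interpolation at x = 0."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse of den, since P is prime
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

key = secrets.randbelow(P)
shares = split(key, t=2, n=3)
assert reconstruct(shares[:2]) == key  # any 2 of the 3 shares suffice
assert reconstruct(shares[1:]) == key  # ...and any other pair works too
```

Any quorum of t shares determines the polynomial and hence the secret, while t-1 shares are consistent with every possible secret, which is exactly the "no single point of compromise" property described above.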

Is MPC a new technology?

MPC is the new kid on the block. Until a few years ago, no one had heard of MPC. However, it now appears in multiple Gartner hype cycles, Unbound's MPC-based cryptographic module is FIPS 140-2 certified (at levels 1 and 2), and a number of startups and established companies now use MPC in their products. Recently, I was even told that "MPC is a buzzword". However, MPC is less a new kid on the block and more an elderly person who has just moved into the neighborhood. The study of MPC was initiated in 1986 by Andrew Yao, and it quickly became an extremely active area of research in academic circles. A quick search for "secure computation" on Google Scholar returns 12,800 research papers (as of July 8, 2019), making MPC a 33-year-old, well-studied science.

The science of MPC

Immediately after Yao introduced the notion of MPC, there was a flurry of foundational results showing that anything that can be computed can be computed securely with MPC. This means that MPC can, in principle, solve any distributed computing problem while guaranteeing the privacy of inputs and the security of the computation. Decades of deep research followed, in which different definitional frameworks for MPC were formulated and studied, and the feasibility of achieving MPC under different cryptographic assumptions and in different settings was explored. I began my research on MPC in 1998, during a golden age of theoretical work on how to define security, construct protocols, and formally prove them secure. Since MPC was not even considered practical at the time, there was no rush to implementation before it was well understood, and this enabled us to develop a rich and beautiful theory, and a deep understanding of the science of MPC.

The move to efficiency

Approximately 10-15 years ago, interest started growing in the question of whether MPC could be made efficient enough to actually use. When we started looking at efficient MPC, it was a relatively small community, and it looked like an effort that would take decades to complete. However, progress came in leaps and bounds, and others quickly recognized that MPC was moving into a new, more practical phase. A vibrant research community grew around this effort: techniques for constructing efficient protocols were developed, and each year saw major advances. The effort was so successful that in less than a decade we had protocols that could solve real problems the world was grappling with, along with a rich toolbox of techniques and expertise that could be applied in practice. Crucially, all of this applied work rests on the strong foundations laid in the theoretical phase of research (which still continues), giving it the scientific rigor needed in the cryptographic space.

The move to practice

The natural next step for MPC was to go out and solve acute problems in industry. This is a non-trivial task, since MPC is still at the stage where scientific expertise is needed. Fortunately, there are enough people from academia willing to make this effort, and some of us (myself included) have even thrown ourselves into the task full time. The ability to apply the theoretical foundations that I, along with others, spent years working on to design practical protocols that are deployed in production worldwide is the fulfillment of a personal dream and a demonstration of the power of scientific research. It is important to stress that MPC is now a mature technology. It is used by some of the world's leading banks and hi-tech companies, and it protects assets worth billions of dollars. We are well beyond the experimental stage of MPC, and its use is growing rapidly. That said, MPC still requires significant expertise to deploy, and as such it is not yet a ubiquitous technology. That phase will take some time to arrive, but I expect to see it sooner rather than later.

The security of MPC

All of the MPC protocols that we use (at Unbound, at least) are proven secure in great detail under the most stringent definitions of security, and are externally verified by independent cryptographers. In addition, the implementations are carefully reviewed to verify that they accurately follow the protocol specifications. Does this mean that MPC is perfect and can have no flaws? Not at all: it is a system, and no system is bulletproof or impenetrable. However, our deep understanding of the domain of applied MPC means that risks can be significantly mitigated. Furthermore, we believe in transparency, and we carry out continuous independent reviews (initiated by us, as well as by customers). Finally, since MPC is software based, if any flaws are discovered (and this has happened), fixes can be deployed quickly.

MPC versus hardware

There have been claims that hardware solutions are more secure, and that relying on "the latest MPC paper would be a bit irresponsible". We argue that hardware is actually a single point of failure, and that it suffers from the same software ailments as the rest of the industry. Does anyone think they can build a perfect software stack for carrying out cryptographic operations? That would be an absurd claim. The problem with hardware is that it is far less transparent than software (at least as long as HSM manufacturers refuse to let customers review their code), and patching after a hardware breach is very hard, and sometimes impossible. There are certainly cases where hardware has its benefits, but we strongly believe that MPC provides a more suitable security model for the modern setting (where physical access can be prevented in other ways), even before considering the functional and operational advantages of software. Some may argue their preference for hardware, and that is certainly their prerogative. However, belittling the rigor of MPC solutions would be to deny its strong scientific foundations, and the value of its being deployed by people who themselves traveled the MPC route from theory to efficiency to practice.

Prof. Yehuda Lindell

Yehuda Lindell is a professor of Computer Science at Bar-Ilan University, and a cryptographer with expertise in secure multiparty computation (MPC), which forms the technological core of Unbound's solutions. Yehuda served as the Chief Scientist of Unbound from its inception until February 2019, when he took over as CEO.
