Blockdaemon’s complete node stack supports the flow of data and value for millions of users. Our customers include top tier financial institutions, crypto native companies, exchanges, and many more…
Secure Multiparty Computation (MPC) has rapidly emerged as the preferred digital asset wallet security scheme. Hundreds of digital asset exchanges, custodians, institutional investors, and more recently payment firms are now using MPC. Analysts at major consulting and advisory firms now advise clients to consider only blockchain wallets and custody solutions based on MPC. The question is no longer whether to use MPC, but which implementation of MPC to use. How will you decide which MPC-based wallet, or MPC technology, to source for your project?
Book a call for your very own demo of the Blockdaemon Wallet™, or read on to get a closer look at what’s new.
Secure MPC describes a family of cryptographic protocols, and as with any technology, results may vary based on implementation. After surveying a number of companies who have implemented MPC-based wallets or developed their own wallets using MPC, we learned that not all MPC implementations are created equal. Following are some areas that experienced MPC wallet developers and users suggest you consider in your selection.
If you require an institutional-grade custodial or operational wallet that is subject to regulation and completely under your control, you may prefer a Do It Yourself (DIY) wallet implementation. This allows you to design and implement your own wallet while sourcing the core MPC technology from an MPC expert such as Blockdaemon. Alternatively, if you require a wallet that is more of a basic utility and your value-add is everything else, you may be better served by a turn-key MPC-based wallet from a third-party MPC wallet provider.
The next question is what form your wallet should take. Will a Wallet as a Service (WaaS) be sufficient, or does your security model require greater control over the domain where the MPC computations and operations occur? Perhaps one or more of the MPC computation nodes should be hosted on-premises, in cloud resources under your control, or in wallet-as-a-service resources hosted by another party. This decision may be shaped by regulatory mandates, whether your wallet is used for self- or third-party custody, whether your clients hold key shares on mobile or laptop devices, and whether offline signing is required, among other considerations. The good news is that MPC can support any of these models. But check with your wallet/MPC technology provider to make sure your particular model is supported.
Every vendor claims to have good performance. Transaction latency is one critical metric for measuring the performance of MPC implementations. The nature of MPC makes it far more computationally intensive than traditional plaintext computing. Applying MPC to generate threshold signatures for secure multiparty approval can require multiple rounds of communication and computation between the parties before a signature is produced. These computations are moderate, especially for today's processing devices; however, the communication latency between geographically distributed MPC parties can add up and result in longer transaction signing latencies.
For blockchain applications such as Bitcoin, the cycle time between blocks is several minutes, so a few seconds of signing latency may not seem concerning. However, the industry quest for faster transactions and healthy competition demand better performance. Application-optimized implementations of MPC can use pre-processing and other performance enhancements to reduce the time between a transaction approval request and signature generation by nearly an order of magnitude. This can reduce signature generation latency from seconds to milliseconds, making it imperceptible to most users and applications.
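The effect of round count on latency can be sketched with a toy model. All figures below (round counts, round-trip times, compute costs) are illustrative assumptions, not measurements of any product or protocol:

```python
# Toy latency model: total signing latency is roughly
# (communication rounds x round-trip time) + local compute cost.
def signing_latency_ms(rounds: int, rtt_ms: float, compute_ms: float) -> float:
    return rounds * rtt_ms + compute_ms

# Fully interactive signing: e.g. 6 rounds at 80 ms RTT between regions.
full = signing_latency_ms(rounds=6, rtt_ms=80.0, compute_ms=50.0)    # 530.0 ms
# With pre-processing, the online phase might need only 1 round.
online = signing_latency_ms(rounds=1, rtt_ms=80.0, compute_ms=20.0)  # 100.0 ms
```

The model makes the point plainly: shaving communication rounds off the online phase, not faster CPUs, is where most of the latency gain comes from.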
Latency is one of the areas where “results may vary” considerably across different vendor implementations. Be sure to do your due diligence and benchmark test for latency if possible.
Another critical dimension of MPC performance is throughput. This indicates how many transactions per second (TPS) the system can support. Throughput is largely a function of MPC computational efficiency, system latency, and compute resources. MPC can be designed to run in containers, virtual machines, or physical servers/appliances. In most cases, these systems have ample compute capacity, and latency can be optimized as noted above. The real gains on MPC throughput are achieved through efficient and effective MPC algorithms and optimized coding. The goal is to minimize the number of rounds of computation while maintaining the integrity and resilience of the overall system.
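A back-of-envelope sketch shows how latency and concurrency bound throughput. The session count and latency figures are illustrative assumptions only:

```python
def max_tps(parallel_sessions: int, signing_latency_s: float) -> float:
    """Rough upper bound on throughput: signatures per second when
    `parallel_sessions` signing sessions run concurrently, each taking
    `signing_latency_s` seconds end to end."""
    return parallel_sessions / signing_latency_s

# e.g. 100 concurrent sessions at 0.5 s per signature:
print(max_tps(100, 0.5))  # 200.0 TPS
```

Cutting per-signature latency in half doubles this bound for the same hardware, which is why algorithmic efficiency matters more than raw compute here.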
Some MPC implementations can achieve excellent latency and throughput at small scale, only to come to a near screeching halt when the system scales up. For example, let’s say you want an MPC implementation that allows each client to hold their share of a private key. Having each client hold their own key share means that as the client base grows, the effective scale of the system must grow.
With many applications supporting thousands to millions of clients, your MPC system design may need to support massive scale. Architecting for widely varying levels of scale also turns up the contrast on other aspects of the system. In short, if a system is not expressly designed and tested to support large scale, you may encounter limitations.
A fundamental premise of MPC is eliminating the single point of failure in which compromising one party holding an entire key results in a complete security breach. Using MPC for key management applications, such as wallets and transaction security, requires at least two parties, each holding a key share. This way, an adversary would have to compromise both parties to gain access to a key.
Taking that concept further, it should be even more difficult to compromise enough parties to reconstruct the key if the key is composed of more than two shares and held by more than two parties. If we considered this parameter alone, one might choose to build an MPC system with eight parties, or maybe even 100 parties. After all, what are the odds that a hacker could penetrate nearly all of those systems concurrently? It would certainly be less than the odds of breaking into a two-party system.
Unfortunately, latency and throughput degrade non-linearly as the number of parties increases, so bigger isn't always better. The typical approach is to design the MPC system with as many parties as the security model requires, and no more. For many applications, two parties are completely sufficient. But for other applications and certain security models, three or more parties may be preferred.
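The t-of-n property itself can be illustrated with a minimal Shamir secret-sharing sketch. This is an illustration only, not a production protocol: real threshold-signature MPC never reconstructs the full key in one place, and the prime modulus here is chosen purely for simplicity.

```python
import random

P = 2**127 - 1  # prime field modulus (a Mersenne prime, for simplicity)

def split(secret: int, t: int, n: int) -> list:
    """Split `secret` into n shares; any t shares reconstruct it,
    while t - 1 shares reveal nothing about the secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares) -> int:
    """Lagrange interpolation at x = 0 over the field GF(P)."""
    total = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

key = random.randrange(P)
shares = split(key, t=3, n=5)
assert reconstruct(shares[:3]) == key   # any 3 of the 5 shares suffice
assert reconstruct(shares[2:]) == key
```

The same math explains the security-versus-cost trade-off: each additional party raises the attacker's bar, but also adds a participant to every signing computation.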
There are schemes to help achieve the required level of security with fewer parties. One such scheme is key share rotation, also known as key share refresh. This is particularly common in two-party MPC systems, because it is entirely feasible that two different systems could each become compromised over time. The concept behind key share refresh is that the key shares held by each party are re-randomized after each transaction. As a result, a hacker would have to compromise both systems within the same refresh window, and do so undetected.
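The refresh idea can be sketched with simple additive secret sharing. This is a toy model under assumed parameters; production proactive-refresh protocols are interactive and verifiable, and the modulus here is illustrative:

```python
import secrets

P = 2**255 - 19  # illustrative prime modulus

def refresh(share_a: int, share_b: int):
    """Re-randomize two additive key shares. The underlying secret
    (share_a + share_b mod P) is unchanged, but a stale copy of one
    old share is useless alongside the other party's new share."""
    r = secrets.randbelow(P)
    return (share_a + r) % P, (share_b - r) % P

secret = secrets.randbelow(P)
share_a = secrets.randbelow(P)
share_b = (secret - share_a) % P

new_a, new_b = refresh(share_a, share_b)
assert (new_a + new_b) % P == secret   # the secret is preserved
```

Because the random offset cancels out, the joint secret never changes, yet an attacker who stole `share_a` before the refresh learns nothing useful after it.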
Adding a key share refresh after every transaction can have implications for the performance and scalability of the system. Ironically, increasing the number of parties moderately, from two to three or four, will materially reduce the probability that multiple parties are compromised concurrently and undetected. This allows key share refresh to be performed less frequently, mitigating the impact on performance and scale. As a result, having more than two parties may yield greater overall performance at scale.
Consideration must also be given to the types of physical or virtual containers or machines that may be hosting your MPC operations. Will they be full scale servers, with cloud compute scale and capacity, or will they be tablets, mobile phones, or perhaps even IoT devices with far more limited compute and memory resources?
MPC systems are typically designed with some minimum assumed resources available to facilitate the computational functions. Take time to consider how your service model may evolve over time and what physical or virtual platforms your system may need to support. Then, verify that your preferred MPC technology supports the performance, scale, and security models you need - on the device types that your solution may have to support.
The default nature of MPC is to execute an operation, such as generating an approval signature, through a computation across multiple parties. The simplest form of such a computation would be to require all parties to be online concurrently, inter-connected, and actively communicating throughout the entire computation and signature generation process.
Certain operational objectives may require a different model. For example, what if one of the approvers is a busy executive who travels frequently? It may be preferable to allow this party to generate her share of the approval using a key share stored on her mobile phone. Network coverage for mobile phones cannot be assured: people drive out of range, enter buildings with poor connectivity, get in elevators, hop on planes, batteries die, and so on. It may be a requirement for that party to approve a transaction, but you may not want to force all approvers to be online concurrently to generate an approval.
Some, but not all, MPC implementations may support asynchronous approvals where one or more of the parties may be offline when another party is generating their share of approval.
Some MPC implementations may be more rigid or flexible than others with regard to the sequence for approval schemes. These attributes may reflect a conscious MPC design decision to support or to not support certain modes of operation, which may prove critical to your security model.
For example, one model might require the explicit approval of a particular party, such as the client of a shared custody service. In such a model, the design might provide a share of the private key to the client, and the algorithm may require either that the client initiate each transaction for approval or that the client be one of the m-of-n approvers.
For other security models, the explicit approval of any particular party may not be relevant and the preference may be to support any 2 out of 3 potential approvers. The preferred model comes down to your particular application, and any changes you anticipate over time.
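One way to picture such a policy is the sketch below. The function, parameters, and party names are hypothetical illustrations, not any vendor's API, and a real system would enforce the policy cryptographically rather than with a simple check:

```python
from typing import Optional

def approved(approvals: set, quorum: int,
             required: Optional[str] = None) -> bool:
    """Hypothetical m-of-n approval check. `required` names a party whose
    explicit approval is mandatory (e.g. the client in shared custody);
    leave it as None for a pure any-m-of-n policy."""
    if required is not None and required not in approvals:
        return False
    return len(approvals) >= quorum

# Shared custody: the client must be among the 2 approvers.
assert approved({"client", "custodian"}, quorum=2, required="client")
assert not approved({"custodian", "exchange"}, quorum=2, required="client")
# Pure 2-of-3: any two approvers suffice.
assert approved({"exchange", "custodian"}, quorum=2)
```

Writing the policy down this explicitly is a useful exercise when evaluating vendors: if your model needs a mandatory approver, verify the MPC implementation can actually enforce it.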
Taking the earlier example of asynchronous approval to an extreme, a security model might require one of the approvers to use an air-gapped device to generate an offline share of the approval. This could provide a more agile and efficient cold storage model in which at least one of the approvers is using MPC on an air-gapped machine. In such a scenario, the machine might generate a partial signature that is transferred via a USB drive or other physical mechanism.
Some MPC implementations can support offline, air-gapped approval models, but historically most do not. As with any technology innovation it is critical to ensure that the fundamental security principles remain intact.
Anyone working in security knows there is no such thing as absolute, unconstrained security. Security is typically defined within a framework of reasonable conditions and operational assumptions believed to be sustainable. In the case of MPC, it might be that no more than a predefined threshold of t parties can become corrupt while still assuring that operations continue securely and with trusted outputs.
Even if we're assured that the implementation is secure within a given set of conditions, how do we really know that a particular implementation is secure? Most organizations will not have the skill sets, tools, or time to conduct a complete audit that anticipates every potential vulnerability. Instead, we must look to the credibility of the MPC cryptographers backing the implementation and to third-party attestations to assure that the security is indeed trustworthy within the specified conditions and constraints.
A historic attestation benchmark for security applications such as key management is FIPS 140-3 (the latest revision, superseding FIPS 140-2). Very few MPC vendors have FIPS validations or in-process listings for MPC. When available, this is one widely recognized benchmark for attestation. In addition to FIPS, attestations by specialized security firms such as NCC Group are well known and offer a valuable third-party assessment.
In summary, there are many things to consider while selecting your MPC-wallet or MPC technology provider. Mapping out a clear list of your considerations and prioritizing to determine which are more important will empower you to identify and select the best solution for your particular application.
Blockdaemon is a well-established pioneer in the application of MPC for key management and protection. Our team was among the first groups of MPC scholars to apply MPC to practical, real-world applications, beginning in 2008. Since then, our team has supported a wide range of MPC implementations with very diverse requirements. We invite you to consider some of our application-specific packages, or approach us for custom development of MPC to optimally support your unique requirements.
If you want more information or are interested in working with us, don't hesitate to contact our sales team.
Fill out the form to connect with one of our product experts and learn how Blockdaemon can help you unlock the power of blockchain.