What Is Verifiable Compute?

As demand for high-value private applications rises, verifiable compute is becoming increasingly important, with use cases ranging from private onchain transactions to confidential AI model training. It is a vital building block of privacy-preserving technology.

In the context of privacy, verifiable compute means being able to verify that a computation was completed correctly without revealing the data it operated on. It can be achieved with two core approaches: Trusted Execution Environments (TEEs) and Zero-Knowledge Proofs (ZKPs). Each approach has its own strengths and trade-offs, and each can complement other cryptographic techniques to make them more trustless.

The Significance of Verifiability

Privacy can be achieved with various cryptographic techniques, such as MPC (Multi-Party Computation), FHE (Fully Homomorphic Encryption), or GC (Garbled Circuits), but in scenarios where computation happens outside a user's control—such as outsourced cloud computing, decentralized applications, or multi-party systems—proving correctness is just as important as ensuring confidentiality.

Verifiability allows users to trust computational results without relying on a central authority. It is especially crucial for:

  • Blockchain-based applications (ensuring smart contracts execute as expected). Many blockchain scaling solutions, such as zk-rollups, use zero-knowledge proofs to batch transactions offchain and submit only a proof onchain. This drastically reduces gas fees while ensuring that all computations and state transitions are valid. Projects like Starknet and zkSync leverage this approach to enhance Ethereum’s scalability without compromising security.

  • Privacy-preserving financial transactions. Cryptographic techniques like ZKPs allow for verifiable yet private transactions. Zcash pioneered this with zk-SNARKs, enabling shielded transactions where neither sender, receiver, nor amount is revealed. More recent advancements like zk-STARKs offer scalability and transparency improvements while ensuring transactions remain confidential and auditable.

  • Confidential AI models (ensuring inference results are correct without leaking data). Machine learning models can be run on encrypted data using privacy-preserving techniques. Recent work like zkCNN explores how zero-knowledge proofs can be used to verify the integrity of neural network inferences. This ensures that AI models function correctly while keeping both the model weights and input data private. Such approaches are vital for applications in sensitive areas like healthcare, finance, and identity verification.

  • Secure data sharing and analytics. Organizations dealing with sensitive data—such as medical research institutions or financial firms—need to perform analytics on shared datasets without exposing raw data. Techniques like secure multi-party computation (MPC) and homomorphic encryption allow collaborative analysis while keeping inputs confidential. Ensuring verifiability in these cases is important to maintain trust in the computed results.
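
To make the secure data sharing scenario concrete, here is a minimal Python sketch of additive secret sharing, one of the simplest MPC building blocks. Each organization splits its private value into random shares that sum to it; the parties add their shares locally, and only the aggregate is ever reconstructed. This is an illustration of the idea, not production cryptography.

```python
import random

MODULUS = 2**61 - 1  # a large prime; all arithmetic is done modulo this value

def share(secret: int, n_parties: int) -> list[int]:
    """Split a secret into n additive shares that sum to it mod MODULUS."""
    shares = [random.randrange(MODULUS) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % MODULUS)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % MODULUS

# Three organizations each hold a private value.
inputs = [120, 45, 300]

# Each party shares its input; any single share looks uniformly random.
all_shares = [share(x, 3) for x in inputs]

# Party i locally adds up the i-th share of every input ...
partial_sums = [sum(s[i] for s in all_shares) % MODULUS for i in range(3)]

# ... and only the combined result is ever reconstructed.
total = reconstruct(partial_sums)
print(total)  # 465: the joint sum, with no raw input revealed
```

Real MPC protocols add authentication of shares and support multiplication, but the additive trick above is the core of how a sum can be computed without any party seeing the raw inputs.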

Verifiability Solutions

Different approaches have emerged to provide verifiability alongside privacy.

Zero-Knowledge Proofs (ZKPs)

ZKPs have emerged as a powerful way to prove correctness. They allow one party to convince another that a computation was performed correctly without revealing the inputs. This makes them ideal for applications like privacy-preserving transactions (Zcash), rollups (Starknet, zkSync), and even verifiable machine learning. However, ZKPs can be computationally expensive, and designing proofs for general-purpose computations isn’t always straightforward.
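
The flavor of a ZKP can be conveyed with a toy Schnorr-style proof of knowledge, made non-interactive via the Fiat-Shamir heuristic. The prover convinces the verifier that they know a secret exponent x with y = g^x mod p, without revealing x. The tiny group parameters below are for illustration only; real systems use groups with roughly 256-bit order.

```python
import hashlib
import random

# Toy group: p is a safe prime (p = 2q + 1) and g generates the order-q
# subgroup. These values are far too small for real security.
p, q, g = 23, 11, 4

def prove(x: int) -> tuple[int, int, int]:
    """Prove knowledge of x such that y = g^x mod p, without revealing x."""
    y = pow(g, x, p)
    r = random.randrange(q)          # fresh random nonce
    t = pow(g, r, p)                 # commitment
    c = int(hashlib.sha256(f"{y}{t}".encode()).hexdigest(), 16) % q  # Fiat-Shamir challenge
    s = (r + c * x) % q              # response
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    c = int(hashlib.sha256(f"{y}{t}".encode()).hexdigest(), 16) % q
    # g^s should equal t * y^c, which holds exactly when s = r + c*x.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

y, t, s = prove(x=7)     # the prover's secret is x = 7
print(verify(y, t, s))   # True: the verifier learns nothing about x
```

SNARKs generalize this idea from one algebraic relation to arbitrary computations, which is where the engineering difficulty (and proving cost) mentioned above comes from.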

Trusted Execution Environments (TEEs)

TEEs, on the other hand, take a hardware-based approach. Technologies like Intel SGX and AMD SEV create isolated environments where computations can be performed securely. These solutions provide attestation, a way to prove that a specific software binary is running inside the trusted enclave. However, this attestation process requires trust in the hardware vendor.

Attestation

Attestation is a mechanism that proves a computation was executed in a specific, trusted environment. It ensures that the code ran as expected and wasn't tampered with; unlike ZKPs, its security rests on hardware and software integrity rather than on cryptographic proofs.

TEEs isolate sensitive computations, ensuring confidentiality and integrity. However, without attestation, there is no guarantee that the enclave has not been compromised or that the expected software is running inside it. Attestation enables the TEE to generate a cryptographic proof that the software loaded within it is the correct and expected one. As a result, all outputs produced by the enclave can be deemed trustworthy.

Remote attestation extends this by allowing external parties to verify the enclave’s state, checking everything from firmware integrity to loaded code before trusting its outputs. This is especially critical in decentralized or multi-party settings, where trust must be established without direct control over the execution environment. By verifying that the enclave is running the intended software, remote attestation ensures that even untrusted parties can rely on the enclave’s outputs without needing to trust the host system.
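
The shape of this flow can be sketched in a few lines of Python. In a real TEE, the attestation key is fused into the hardware and its public part is certified by the vendor; here a shared HMAC key stands in for that certificate chain, so this is a simplified model of the protocol, not an SGX or SEV implementation.

```python
import hashlib
import hmac

# Stand-in for the vendor-certified hardware attestation key.
HARDWARE_KEY = b"simulated-device-root-key"

def enclave_attest(binary: bytes, output: bytes) -> dict:
    """Enclave side: measure the loaded code and sign (measurement, output)."""
    measurement = hashlib.sha256(binary).hexdigest()
    quote = hmac.new(HARDWARE_KEY, f"{measurement}|{output.hex()}".encode(),
                     hashlib.sha256).hexdigest()
    return {"measurement": measurement, "output": output, "quote": quote}

def remote_verify(report: dict, expected_binary: bytes) -> bool:
    """Verifier side: accept the output only if the expected code produced it."""
    expected_measurement = hashlib.sha256(expected_binary).hexdigest()
    if report["measurement"] != expected_measurement:
        return False  # wrong (possibly tampered) code was loaded
    expected_quote = hmac.new(
        HARDWARE_KEY,
        f"{report['measurement']}|{report['output'].hex()}".encode(),
        hashlib.sha256).hexdigest()
    return hmac.compare_digest(report["quote"], expected_quote)

binary = b"trusted-model-inference-v1"
report = enclave_attest(binary, output=b"prediction: 0.93")
print(remote_verify(report, expected_binary=binary))   # True
print(remote_verify(report, expected_binary=b"evil"))  # False
```

The key point survives the simplification: the verifier binds the output to a measurement of the exact code that ran, so a different binary (or a forged quote) is rejected.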

Several projects, including Taiko, Nillion, and Phala, are currently building TEE-based solutions.

Co-SNARKs

MPC-based co-SNARKs (collaborative Succinct Non-Interactive Arguments of Knowledge) are a cryptographic technique that combines secure Multi-Party Computation (MPC) with succinct proofs to enable efficient verification of computations. The core idea is to distribute the computation across multiple parties using MPC, ensuring that no single party learns the full input while the parties still collaboratively compute a result.

This process generates an intermediate proof that is then transformed into a SNARK, allowing for compact and non-interactive verification. Unlike traditional SNARKs, which often require a trusted setup, MPC-based co-SNARKs can reduce or eliminate this requirement by leveraging distributed randomness generation and secure computation techniques. This approach is particularly useful for privacy-preserving applications, such as proving statements about encrypted data or verifying computations in decentralized settings, where trust minimization is critical.
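
The "distributed randomness generation" mentioned above is often built from a commit-then-reveal protocol: each party commits to a random contribution, and only after all commitments are published does everyone reveal. A minimal sketch (illustrative only; real ceremonies also have to handle parties who refuse to reveal):

```python
import hashlib
import secrets

def commit(value: bytes, nonce: bytes) -> str:
    """Hash commitment: hides the value, binds the party to it."""
    return hashlib.sha256(nonce + value).hexdigest()

# Phase 1: each party picks a random contribution and publishes only a commitment.
parties = []
for _ in range(3):
    value, nonce = secrets.token_bytes(32), secrets.token_bytes(16)
    parties.append({"value": value, "nonce": nonce,
                    "commitment": commit(value, nonce)})

# Phase 2: once all commitments are public, parties reveal and everyone checks.
for party in parties:
    assert commit(party["value"], party["nonce"]) == party["commitment"]

# The joint randomness is the XOR of all contributions: unpredictable as long
# as at least one party chose honestly.
joint = bytes(a ^ b ^ c for a, b, c in
              zip(parties[0]["value"], parties[1]["value"], parties[2]["value"]))
print(joint.hex())
```

Because no party can see the others' values before committing to its own, none of them can bias the final output toward a chosen value, which is what lets a trusted setup be replaced by a distributed one.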

Companies like TACEO are building co-SNARK-based products.

Making FHE Verifiable

One of the main challenges with FHE is verifiability: since the computation happens on encrypted data, how do you ensure it was done correctly? One line of work layers ZKPs on top of FHE; most current efforts there focus on proving a single bootstrapping operation as fast as possible.
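
The gap is easy to see with a toy additively homomorphic scheme (a one-time-pad construction invented here for illustration; real FHE schemes such as TFHE or BGV are far richer). The server can add ciphertexts correctly without seeing the plaintexts, but nothing in the scheme lets the client detect a server that returns a wrong result:

```python
import random

MODULUS = 2**32

class Client:
    """Toy additively homomorphic scheme: Enc(m) = m + key (mod n)."""
    def __init__(self, n_values: int):
        self.keys = [random.randrange(MODULUS) for _ in range(n_values)]

    def encrypt(self, values: list[int]) -> list[int]:
        return [(m + k) % MODULUS for m, k in zip(values, self.keys)]

    def decrypt_sum(self, ct_sum: int) -> int:
        # Sum of ciphertexts = sum of plaintexts + sum of keys (mod n).
        return (ct_sum - sum(self.keys)) % MODULUS

client = Client(n_values=3)
ciphertexts = client.encrypt([10, 20, 30])

# The server adds ciphertexts without ever seeing the plaintexts ...
server_result = sum(ciphertexts) % MODULUS

# ... and the client decrypts the correct sum.
print(client.decrypt_sum(server_result))  # 60

# The verifiability gap: a lazy or malicious server can return garbage,
# and decryption alone cannot tell the difference.
print(client.decrypt_sum((server_result + 5) % MODULUS))  # 65 -- silently wrong
```

Closing that gap is exactly what a ZKP of correct FHE evaluation, or a TEE attesting to the computation, is meant to provide.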

An alternative approach is to run FHE computations inside a TEE, leveraging the TEE's verifiable compute guarantees to ensure the integrity of the results without incurring the overhead of ZKPs.

Verifiable FHE is an active research area, and solving it could unlock new privacy-preserving applications with stronger correctness guarantees.

The Road Ahead

The field of verifiable compute is evolving rapidly. ZKPs and TEEs are leading the charge, but integrating these solutions with FHE, MPC, and GC in a practical way is still a work in progress. As more projects tackle these challenges, we’ll likely see better, faster, and more scalable solutions in the near future.
