We’re excited to announce the first developer release of Mina’s zkML library—a powerful tool for generating zero knowledge proofs (ZKPs) from AI models and settling those proofs on the Mina blockchain. This article walks developers through the design of Mina’s zkML library and includes a tutorial to help you get started.
As we catapult towards a world where AI agents are not only our personal assistants, but also our bankers and our doctors, verifiable and private inference will become critical to building an agentic world we can trust.
How can I be sure which model was used? How do I know there wasn’t a code injection attack? How can I get personal health or financial advice without disclosing my most sensitive personal data? You can with zkML on Mina.
Mina’s zkML library enables anyone to generate a zero knowledge proof from an AI inference job using private inputs. This means you can trust the output has not been tampered with and you can keep your private data private. This allows you to:
- Convert AI models (in the widely used ONNX format) into zero knowledge proof circuits
- Generate a Mina proof of the AI inference job of the ONNX model on private inputs, providing both verifiability of execution and privacy
- Submit these proofs to the Mina blockchain, where they can be verified in a trustless environment, ensuring both the privacy of the inputs and the verifiability of the results.
- Verify model execution without exposing any proprietary or sensitive data.
A final release is coming soon with support for more models, performance improvements, better documentation, and more examples.
Ok, with that let’s dive in …
An Introduction to Mina’s zkML Library Design
Mina’s zkML library includes a few key components:
- a prover written in Rust for generating a zero knowledge proof from an ONNX file,
- a command-line interface (CLI) for generating a verifier for your proof and deploying it to the Mina blockchain, and
- a set of examples in Python and Rust demonstrating how to prove and verify various AI models.
Read on to learn more about the high-level design of the library.
A Graph-Based Framework for Neural Network Processing
At its core, the library uses a graph representation for neural networks, where nodes represent operations (e.g., matrix multiplications, convolutions).
The graph structure is encapsulated within the Model struct. A Model comprises a parsed graph (ParsedNodes) together with visibility settings for its inputs and outputs.
#[derive(Clone, Debug, Serialize, Deserialize, Default)]
pub struct Model {
    pub graph: ParsedNodes,
    pub visibility: VarVisibility,
}
Each graph node can be of two types:
- Computation Nodes represent standard neural network operations like matrix multiplication (MatMul) or activation functions (ReLU).
- Subgraph Nodes are used for more complex operations requiring control flow, such as loops. Subgraphs are recursively defined Model objects.
#[derive(Clone, Debug, Serialize, Deserialize)]
pub enum NodeType {
    Node(SerializableNode),
    SubGraph {
        model: Box<Model>,
        inputs: Vec,
        idx: usize,
        out_dims: Vec<Vec>,
        out_scales: Vec,
        output_mappings: Vec<Vec>,
        input_mappings: Vec,
    },
}
The library ensures computational correctness through mechanisms like topological sorting, which orders nodes based on their dependencies, as illustrated by the topological_sort method:
fn topological_sort(&self) -> Result<Vec, GraphError> {
    let mut visited = HashMap::new();
    let mut sorted = Vec::new();
    // Logic to visit nodes in dependency order...
}
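To make the idea concrete, here is a minimal, self-contained sketch of a depth-first topological sort over a dependency map keyed by node index. It illustrates the technique only; the function name and data shapes are hypothetical, and the library's actual method operates on ParsedNodes.

```rust
use std::collections::{HashMap, HashSet};

// Sketch only: `deps[n]` lists the node indices that node `n` consumes.
fn topo_sort_sketch(deps: &HashMap<usize, Vec<usize>>) -> Result<Vec<usize>, String> {
    fn visit(
        node: usize,
        deps: &HashMap<usize, Vec<usize>>,
        visiting: &mut HashSet<usize>,
        visited: &mut HashSet<usize>,
        sorted: &mut Vec<usize>,
    ) -> Result<(), String> {
        if visited.contains(&node) {
            return Ok(());
        }
        if !visiting.insert(node) {
            return Err(format!("cycle detected at node {node}"));
        }
        // Recurse into every dependency before emitting this node.
        for &dep in deps.get(&node).into_iter().flatten() {
            visit(dep, deps, visiting, visited, sorted)?;
        }
        visiting.remove(&node);
        visited.insert(node);
        sorted.push(node); // emitted only after all of its inputs
        Ok(())
    }

    let (mut visiting, mut visited, mut sorted) = (HashSet::new(), HashSet::new(), Vec::new());
    for &node in deps.keys() {
        visit(node, deps, &mut visiting, &mut visited, &mut sorted)?;
    }
    Ok(sorted)
}
```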
ONNX Integration
The library’s integration with ONNX, a widely adopted format for AI models, ensures compatibility with frameworks like TensorFlow and PyTorch. Using the tract_onnx Rust crate, ONNX models are parsed and converted into the library’s internal graph representation.
The library reads ONNX models and extracts their graph structure using the load_onnx_model method:
pub fn load_onnx_model(
    path: &str,
    run_args: &RunArgs,
    visibility: &VarVisibility,
) -> Result<ParsedNodes, GraphError> {
    let (model, symbol_values) = Self::load_onnx_using_tract(path, run_args)?;
    // Further parsing and validation...
}
Attributes like kernel_shape, strides, and padding are parsed for convolutional layers. This process ensures that models retain their operational semantics after being loaded.
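As a rough illustration of what this parsing stage involves (and not the library's actual loader), the sketch below uses the tract_onnx crate directly to open a model and list its graph nodes; the function name is hypothetical, and the file path matches the MNIST model used later in this post.

```rust
use tract_onnx::prelude::*;

// Sketch only: parse an ONNX file with tract_onnx and walk its graph nodes.
// A real converter would additionally read each node's operator attributes
// (e.g. kernel_shape, strides, padding) and emit the corresponding gates.
fn inspect_onnx(path: &str) -> TractResult<()> {
    let model = tract_onnx::onnx().model_for_path(path)?;
    for node in &model.nodes {
        println!("node {}: {}", node.id, node.name);
    }
    Ok(())
}

fn main() -> TractResult<()> {
    inspect_onnx("models/mnist_mlp.onnx")
}
```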
A Cryptographic Proof System for Privacy and Integrity
The library takes the ONNX op codes and converts them into a ZK circuit designed to be executed by Mina’s Kimchi prover.
The ProverSystem translates a graph into circuits, constructs a witness (the intermediate values), and generates a proof:
pub fn prove(&self, inputs: &[Vec]) -> Result<ProverOutput, String> {
    let (witness, outputs) = self.create_witness(inputs)?;
    let proof = ProverProof::create(
        &group_map,
        witness,
        &[],
        &self.prover_index,
        &mut rng,
    )?;
    Ok(ProverOutput { output: Some(outputs), proof, .. })
}
The VerifierSystem checks a proof against its inputs and outputs, which can optionally remain private. This ensures that computations are verifiable while maintaining data privacy.
pub fn verify(
    &self,
    proof: &ProverProof<Vesta, ZkOpeningProof>,
    public_inputs: Option<&[Vec]>,
    public_outputs: Option<&[Vec]>,
) -> Result<bool, String> {
    let result = kimchi::verifier::verify(
        &group_map,
        &self.verifier_index,
        proof,
        &public_values,
    );
    result.map(|_| true)
}
Mapping Neural Network Operations to ZK Circuits
The library translates neural network operations into ZK circuits. This ensures computational logic is preserved while adapting to the constraints of zk-SNARKs. Each operation is mapped to circuit gates using patterns tailored for efficiency.
For example, matrix multiplication is implemented as a series of multiplication and addition gates:
OnnxOperation::MatMul { m, n, k } => {
    for _i in 0..*m {
        for _j in 0..*n {
            for _l in 0..*k {
                gates.push(CircuitGate::new(
                    GateType::ForeignFieldMul,
                    [Wire::new(current_row, 0); 7],
                    vec![],
                ));
                current_row += 1;
            }
        }
    }
    Ok(gates)
}
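With this layout the gate count scales with the product of the three dimensions: for example, multiplying a 4×3 matrix by a 3×2 matrix (m = 4, n = 2, k = 3) emits 4 × 2 × 3 = 24 multiplication gates.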
Other operations, like ReLU, use a combination of range checks and generic gates:

OnnxOperation::Relu => {
    let range_check = CircuitGate::new(
        GateType::RangeCheck0,
        [Wire::new(start_row, 0); 7],
        vec![],
    );
    let generic = CircuitGate::new(
        GateType::Generic,
        [Wire::new(start_row + 1, 0); 7],
        vec![],
    );
    Ok(vec![range_check, generic])
}
Scalability and Extensibility
The library dynamically calculates the required circuit size based on the model’s complexity. By leveraging efficient memory management and domain-specific scaling, it ensures performance even for deep neural networks.
fn calculate_domain_params(circuit_size: usize) -> (usize, usize) {
    let lookup_domain_size = 0;
    let circuit_lower_bound = std::cmp::max(circuit_size, lookup_domain_size + 1);
    // Logic to calculate domain size and zk_rows...
}
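As a simplified sketch of the idea (not the library's exact logic), a polynomial-domain prover such as Kimchi typically wants the row count padded to the next power of two, with a handful of rows reserved for zero-knowledge blinding; the constant below is an assumption for illustration.

```rust
// Illustrative only: pad the circuit to a power-of-two evaluation domain.
fn domain_params_sketch(circuit_size: usize) -> (usize, usize) {
    let zk_rows = 3; // rows reserved for blinding (assumed value)
    let domain_size = (circuit_size + zk_rows).next_power_of_two();
    (domain_size, zk_rows)
}

// e.g. a 1_000-gate circuit would land in a 1_024-row domain.
```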
On-Chain Verification
In this article, we use the CLI tool or the Rust library to verify proofs generated by mina-zkml.
Every ML model requires its own verifier, since each model is a unique circuit; however, anyone can operate that verifier.
To make it easy for anyone to verify a proof and record it on the Mina chain, we are also releasing the zkML Verifier, a tool for deploying smart contracts that verify proofs on chain.
The zkML Verifier lets you verify proofs using o1js smart contracts and push them to chain through a REST API for proof verification. It also allows you to upload arbitrary inputs for validation.
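As a rough sketch of what submitting a proof over HTTP could look like (the endpoint path and payload fields below are placeholders, not the zkML Verifier's actual API; see its repository for the real routes and schema):

```rust
use serde_json::json;

// Hypothetical request shape; consult the zkML Verifier repo for the real API.
fn submit_proof(proof_base64: &str) -> Result<(), Box<dyn std::error::Error>> {
    let client = reqwest::blocking::Client::new();
    let response = client
        .post("http://localhost:3000/verify") // placeholder endpoint
        .json(&json!({ "proof": proof_base64 }))
        .send()?;
    println!("verifier responded: {}", response.status());
    Ok(())
}
```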
To learn more about deploying a verifier and interacting with it, see the zkML Verifier repository.
Library Walkthrough
This section walks you through a practical example of using the Mina zkML library on the MNIST handwritten digits dataset. We will demonstrate how to load a pre-trained ONNX model, preprocess input images, generate predictions, and verify cryptographic proofs using the library’s capabilities. View the repository for this example here.
Prerequisites
Before diving into the code, ensure you have:
- A pre-trained ONNX model for MNIST digit recognition (mnist_mlp.onnx).
- Sample MNIST images saved locally (e.g., 1052.png and 1085.png).
- A local clone of the mina-zkml repository.
Preprocessing Images
To use the model, input images must be preprocessed. The following function:
- Converts the image to grayscale.
- Resizes it to 28×28 pixels (the MNIST format).
- Normalizes the pixel values to match the model’s training distribution.
fn preprocess_image(img_path: &str) -> Result<Vec, Box> {
    let img = image::open(img_path)?.into_luma8();
    let resized = image::imageops::resize(&img, 28, 28, image::imageops::FilterType::Lanczos3);
    let pixels: Vec = resized.into_raw().into_iter().map(|x| x as f32).collect();
    let pixels = pixels
        .into_iter()
        .map(|x| (x / 255.0 - 0.1307) / 0.3081) // Normalize with mean and standard deviation
        .collect();
    Ok(pixels)
}
Setting Up the Model
The Model::new method initializes the neural network model. The RunArgs struct specifies the batch size, and the VarVisibility struct controls whether inputs and outputs are public or private.
let mut variables = HashMap::new();
variables.insert("batch_size".to_string(), 1);
let run_args = RunArgs { variables };

let visibility = VarVisibility {
    input: Visibility::Public,
    output: Visibility::Public,
};

let model = Model::new("models/mnist_mlp.onnx", &run_args, &visibility)?;
Creating the Prover and Verifier Systems
The ProverSystem generates cryptographic proofs for model predictions, and the VerifierSystem validates them.
let prover = ProverSystem::new(&model);
let verifier = prover.verifier();
Generating Predictions and Proofs
For each input image, preprocess the image, generate predictions, and create a proof.
let input1 = preprocess_image("models/data/1052.png")?;
let input_vec1 = vec![input1];
let prover_output1 = prover.prove(&input_vec1)?;
let output1 = prover_output1.output.as_ref().expect("Output should be public");
The print_prediction_info function computes and displays the probabilities for each digit:
fn print_prediction_info(logits: &[f32]) {
    let max_logit = logits.iter().take(10).fold(f32::NEG_INFINITY, |a, &b| a.max(b));
    let exp_sum: f32 = logits.iter().take(10).map(|&x| (x - max_logit).exp()).sum();
    let softmax: Vec = logits
        .iter()
        .take(10)
        .map(|&x| ((x - max_logit).exp()) / exp_sum)
        .collect();

    println!("Probabilities for each digit:");
    for (digit, prob) in softmax.iter().enumerate() {
        println!("Digit {}: {:.4}", digit, prob);
    }
    println!(
        "Predicted digit: {}",
        softmax
            .iter()
            .enumerate()
            .max_by(|(_, a), (_, b)| a.partial_cmp(b).unwrap())
            .map(|(digit, _)| digit)
            .unwrap()
    );
}
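Assuming output1[0] holds the ten raw logits for the first image (as it does in the verification calls below), the helper can be invoked like this:

```rust
// output1[0] is the logit vector for the first (and only) input in the batch.
print_prediction_info(&output1[0]);
```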
Verifying Proofs
The VerifierSystem::verify method ensures that the proof matches the inputs and outputs:
let is_valid1 = verifier.verify(&prover_output1.proof, Some(&input_vec1), Some(output1))?;
println!(
    "Verification result: {}",
    if is_valid1 { "✓ Valid" } else { "✗ Invalid" }
);
Testing Robustness
The example also demonstrates how the system detects invalid proofs. For instance, manipulating the logits (the model’s raw output scores) results in verification failure:
let mut fake_output1 = output1.clone();
for i in 0..10 {
    fake_output1[0][i] = -fake_output1[0][i]; // Invert logits
}

let is_valid_fake = verifier.verify(&prover_output1.proof, Some(&input_vec1), Some(&fake_output1))?;
println!(
    "Verification result: {}",
    if is_valid_fake { "✓ Valid (Unexpected!)" } else { "✗ Invalid (Expected)" }
);
For more examples, please have a look at the examples folder in the mina-zkml repository.
Want to participate in the development? Talented developers and contributors are always welcome.
All work is taking place on GitHub and anyone is welcome to join the conversation on the Mina #zkML Discord channel.
About Mina Protocol
Mina is the world’s lightest blockchain, powered by participants. Rather than apply brute computing force, Mina uses advanced cryptography and recursive zk-SNARKs to design an entire blockchain that is about 22kb, the size of a couple of tweets. It is the first layer-1 to enable efficient implementation and easy programmability of zero knowledge smart contracts (zkApps). With its unique privacy features and ability to connect to any website, Mina is building a private gateway between the real world and crypto—and the secure, democratic future we all deserve.