This paper is available on arxiv under CC BY 4.0 DEED license.
Authors:
(1) Zhipeng Wang, Department of Computing, Imperial College London;
(2) Nanqing Dong, Department of Computer Science, University of Oxford;
(3) Jiahao Sun, Data Science Institute, Imperial College London;
(4) William Knottenbelt, Department of Computing, Imperial College London.
Methodology
zkFL
As shown in Fig. 1, our zkFL system works as follows:

Setup: n clients and 1 aggregator generate their private/public key pairs and set up communication channels. Each client knows the public keys of the other n − 1 clients; this setup can be achieved with a public key infrastructure (PKI).
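The setup step can be sketched as follows. This is a minimal illustration with a toy discrete-log group and made-up parameters (p, g, and the key sizes are assumptions for readability, not the paper's values); a real deployment would use an established signature scheme such as Ed25519 and a proper PKI.

```python
import secrets

# Toy discrete-log key pairs for illustration only. p = 2q + 1 is a small
# safe prime and g = 4 generates its order-q subgroup; real systems use
# large elliptic-curve groups.
p, g = 2027, 4

def keygen():
    sk = secrets.randbelow(p - 2) + 1   # private key
    return sk, pow(g, sk, p)            # (private key, public key)

# n clients and one aggregator each generate a key pair ...
n = 4
client_keys = [keygen() for _ in range(n)]
agg_sk, agg_pk = keygen()

# ... and every client learns the other n - 1 public keys (the PKI's job):
directory = {i: pk for i, (_, pk) in enumerate(client_keys)}
```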

Local Training, Encrypting, and Signing: During each round, the n clients train their models locally to compute the local model updates w1, w2, . . . , wn. Each client encrypts its update as Enc(wi) = g^wi · h^si using a Pedersen commitment, where g and h are public parameters and si is a random blinding value generated by the client. The client signs the encrypted update with its private key to generate a signature sigi.

The client then sends the tuple (wi, si, Enc(wi), sigi), i.e., the local model update, the blinding value, the encrypted update, and the signature, to the aggregator.
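The commitment step above can be sketched as follows. The group parameters are toy values chosen for illustration (a real deployment would use a large prime-order elliptic-curve group and an independently derived h); only the commitment formula Enc(wi) = g^wi · h^si comes from the text.

```python
import secrets

# Toy Pedersen parameters: p = 2q + 1 is a safe prime; g and h are squares
# mod p, so both lie in the order-q subgroup. Illustration only.
p, q = 2027, 1013
g, h = 4, 9

def pedersen_commit(w, s):
    """Enc(w) = g^w * h^s mod p, hiding w behind the random blinding s."""
    return (pow(g, w % q, p) * pow(h, s % q, p)) % p

# Client i commits to its local model update w_i with fresh randomness s_i.
w_i = 123
s_i = secrets.randbelow(q)
enc_wi = pedersen_commit(w_i, s_i)
```

Because s_i is fresh per round, the commitment reveals nothing about w_i on its own, yet binds the client to that exact update.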

Global Aggregation and ZKP Generation: The aggregator aggregates the received local model updates w1, w2, . . . , wn to generate the aggregated global model update w = Σ_{i=1}^n wi. The aggregator also computes the aggregated value of the encrypted global model update Enc(w) = Π_{i=1}^n Enc(wi) and signs it with its private key to generate the signature sig. The aggregator then leverages a zkSNARK to issue a proof π for the following statement and witness:
where the corresponding circuit C(statement, witness) outputs 0 if and only if:
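The relation being proved rests on the additive homomorphism of Pedersen commitments: the product of the clients' commitments is itself a commitment to the sum of their updates. A minimal sketch of that identity, using toy group parameters chosen for illustration (not the paper's values):

```python
# Toy Pedersen parameters for illustration only: p = 2q + 1 is a safe
# prime; g and h generate the order-q subgroup.
p, q, g, h = 2027, 1013, 4, 9

def commit(w, s):
    return (pow(g, w % q, p) * pow(h, s % q, p)) % p

# n clients' local updates and blinding values.
updates = [3, 14, 15]
blinds  = [9, 26, 5]
encs = [commit(w, s) for w, s in zip(updates, blinds)]

# Aggregator side: the product of the individual commitments ...
enc_w = 1
for e in encs:
    enc_w = (enc_w * e) % p

# ... equals a single commitment to the summed update, which is the
# aggregation relation the zkSNARK proof attests to.
assert enc_w == commit(sum(updates), sum(blinds))
```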
Global Model Transmission and Proof Broadcast: The aggregator transfers the aggregated global model update w, its encryption Enc(w), and the proof π to the n clients.

Verification: Upon receiving the proof π and the encrypted global model update Enc(w) from the aggregator, the clients verify that π is valid. Once the verification passes, the clients start their local training based on the aggregated global model update w.
Blockchain-based zkFL
To decrease the computation burden on clients, we incorporate blockchain technology into our zkFL system. In this approach, the verification of proofs generated by the aggregator is entrusted to blockchain miners. As illustrated in Figure 2, the blockchain-based zkFL operates as follows:

Setup: N clients and 1 aggregator generate their private/public key pairs, which correspond to their on-chain addresses.

Selection: For each round, n clients are selected from the N clients via Verifiable Random Functions [24,3]. The n selected clients' public keys are broadcast to the underlying P2P network of the blockchain, where they are received and verified by the miners.
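The selection rule can be sketched as follows. Note this is only a stand-in: a real VRF gives each client a pseudo-random output that anyone can verify against the client's public key, whereas the sketch below approximates the selection logic with a plain hash over a shared round seed (an assumption for illustration, not the construction in [24,3]).

```python
import hashlib

def select_clients(public_keys, round_seed, n):
    """Rank clients by H(seed || pk) and pick the n lowest scores.

    Stand-in for VRF-based selection: deterministic given the seed, and
    no party controls who is chosen, but not publicly verifiable the way
    a real VRF output is.
    """
    ranked = sorted(
        public_keys,
        key=lambda pk: hashlib.sha256(round_seed + pk).digest(),
    )
    return ranked[:n]

# N = 10 registered clients (hypothetical identifiers), n = 3 per round.
pks = [f"client-{i}".encode() for i in range(10)]
selected = select_clients(pks, b"round-42", n=3)
```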

Local Training, Encrypting, and Signing: The n selected clients train their models locally to compute the local model updates w1, w2, . . . , wn. Each client encrypts its update as Enc(wi) = g^wi · h^si using a Pedersen commitment, where g and h are public parameters and si is a random blinding value generated by the client. The client signs the encrypted update with its private key to generate a signature sigi. The client then sends the tuple (wi, si, Enc(wi), sigi), i.e., the local model update, the blinding value, the encrypted update, and the signature, to the aggregator.

Global Aggregation and ZKP Generation: The aggregator aggregates the received local model updates w1, w2, . . . , wn to generate the aggregated global model update w = Σ_{i=1}^n wi. The aggregator also computes the aggregated value of the encrypted global model update Enc(w) = Π_{i=1}^n Enc(wi) and signs it with its private key to generate the signature sig. The aggregator then leverages a zkSNARK to issue a proof π for the following statement and witness:
where the corresponding circuit C(statement, witness) outputs 0 if and only if:
Global Model Transmission and Proof Broadcast: The aggregator transfers the aggregated global model update w and its encryption Enc(w) to the n clients, and broadcasts the proof π and the encrypted global model update Enc(w) to the miners over the P2P network.
On-Chain Verification: Upon receiving the proof π and the encrypted global model update Enc(w) from the aggregator, the miners verify π and append Enc(w) to the blockchain if π is valid.
On-Chain Reading: When the next round starts, the newly selected n clients read the blockchain to check whether Enc(w) has been appended on-chain. If the check succeeds, the clients start their local training based on the aggregated global model update w.