Encryption of AI/ML models
This page describes our ChemXor library, a tool to build privacy-preserving machine learning models for drug discovery. We believe that encrypted assets can foster collaboration.
Privacy preserving AI/ML for drug discovery
We are developing an open-source privacy-preserving machine learning platform for drug discovery. It has been widely argued that artificial intelligence and machine learning (AI/ML) can transform the pharmaceutical industry. However, AI/ML techniques depend on the availability of training datasets, which are often restricted by intellectual property (IP) constraints. As a result, a wealth of proprietary experimental screening results remains inaccessible to researchers and impossible to share without compromising the IP of the companies that own them.
The current project offers a solution to this problem. We propose that sensitive experimental results can be shared securely in the form of AI/ML models, which retain the essential properties of the dataset but do not reveal the identity of the screened compounds. Sharing encrypted AI/ML tools instead of datasets may enable new forms of collaboration between pharma, biotech and academia, and offers a new means of contributing to Open Science.
Fully homomorphic encryption for AI/ML models
Fully homomorphic encryption (FHE) allows computation on encrypted data without leaking any information about that data. More succinctly:

$$\text{Dec}\big(\text{Enc}(a) * \text{Enc}(b)\big) = a * b$$

where $*$ can be either multiplication or addition. The result of the computation can only be decrypted by the party that holds the decryption key.
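As an illustration, here is a minimal sketch using the TenSEAL library (which ChemXor builds on, see below) to add and multiply two encrypted vectors under the CKKS scheme; the parameters are illustrative defaults, not a recommendation:

```python
import tenseal as ts

# Set up a CKKS context; these parameters are illustrative defaults.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2**40

enc_a = ts.ckks_vector(context, [1.0, 2.0])  # Enc(a)
enc_b = ts.ckks_vector(context, [3.0, 4.0])  # Enc(b)

# Addition and multiplication happen on ciphertexts; only the
# holder of the secret key (kept in the context) can decrypt.
print((enc_a + enc_b).decrypt())  # approximately [4.0, 6.0]
print((enc_a * enc_b).decrypt())  # approximately [3.0, 8.0]
```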
The current state of FHE
Fully homomorphic encryption is still a nascent area of research compared to other, more established cryptographic techniques. It was first proposed in the late 1970s, but the first practical breakthrough came in 2009 with Craig Gentry's seminal thesis on the topic. Since then, various new schemes have been proposed to improve usability and performance; CKKS, the scheme ChemXor relies on, is among the most widely used in practice.
Application of FHE in AI/ML
CryptoNets (2016) was the first paper to show that FHE can be successfully applied to machine learning, performing neural-network inference on encrypted data. The model was hand-tuned and used Microsoft's SEAL library to implement the FHE operations.
However, the adoption of FHE in AI/ML applications remains low despite its enormous potential for enabling privacy-preserving ML-as-a-Service systems with strong theoretical security guarantees. FHE still suffers from performance issues that make it impractical for large machine learning models. We discuss the main challenges in the following sections.
The selection of encryption parameters is not trivial
The right encryption parameters depend on the computation being performed, so selecting them takes some trial and error in each case. Some projects, such as Microsoft's EVA compiler, try to solve this problem by deriving parameters from the computation itself.
The time complexity of computation scales poorly with input size
The current generation of FHE libraries suffers from severe performance issues: as the input size increases, evaluation time quickly becomes infeasibly long. This limits the size of the input matrices that can be fed to an ML model.
Poor integration of current FHE libraries with popular ML frameworks
FHE libraries are not well integrated with the rest of the machine learning ecosystem. For example, TenSEAL tensors are not interoperable with PyTorch tensors.
Poor support for hardware accelerator backends in FHE libraries
None of the major FHE libraries implements a CUDA backend, so GPUs cannot be used to speed up computation.
Poor community support
The FHE community is still small, which results in sparse documentation and few worked examples.
Our work
We have created a Python library, ChemXor. It provides a set of pre-tuned model architectures for evaluating FHE-encrypted inputs. These models can be trained as normal PyTorch models. The library also provides convenience functions to quickly query and host these models as a service with strong privacy guarantees for the end user. It is built on top of TenSEAL and PyTorch.
Encryption
The encryption context for input data is based on the third-party TenSEAL library. TenSEAL currently uses the CKKS encryption scheme, but it can be adapted to incorporate other schemes as well. Computational performance is extremely sensitive to the choice of CKKS parameters. For the built-in model architectures available in ChemXor, we already provide manually tuned CKKS parameters. As a result, ChemXor offers a straightforward API for this otherwise laborious FHE step.
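For intuition, the sketch below shows how such parameters are set with TenSEAL. The number of middle primes in coeff_mod_bit_sizes bounds the multiplicative depth, which is why the parameters must be matched to each model; the values shown are illustrative, not the parameters ChemXor ships:

```python
import tenseal as ts

# Illustrative CKKS parameters: three 40-bit middle primes allow
# roughly three ciphertext multiplications before noise overwhelms
# the result; a larger poly_modulus_degree permits a longer chain
# at the cost of slower arithmetic.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=16384,
    coeff_mod_bit_sizes=[60, 40, 40, 40, 60],
)
context.global_scale = 2**40
context.generate_galois_keys()  # rotation keys, needed for conv/matmul
```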
Partitioned models
FHE ciphertexts also have a fixed multiplicative depth: after a certain number of multiplication operations, the accumulated noise grows too large for correct decryption. This limits the number of layers that a neural network can have. To overcome this problem, ChemXor's encrypted models are partitioned. After a certain number of multiplications, the intermediate output is sent back to the user, who decrypts it to recover the plaintext and encrypts it again before sending it back to the model to continue execution. ChemXor provides functions to do all of this automatically.
Getting started with ChemXor
ChemXor is available on PyPI and can be installed using pip.
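Assuming the package is published under the name chemxor:

```
pip install chemxor
```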
Tutorials
Model selection and training
At the moment, one can choose from 3 pre-tuned models:

- OlindaNetZero: the slimmest model, with one convolution and 3 linear layers
- OlindaNet: a model with two convolutions and 4 linear layers
- OlindaOneNet: a model with four convolutions and 4 linear layers
These models accept a 32 x 32 input and can be configured to produce a single output or multiple outputs. Each model is a normal PyTorch Lightning module, which is compatible with PyTorch's nn.Module.
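As a quick sketch, instantiating a model might look like the following; the import path and constructor argument are assumptions for illustration, not ChemXor's confirmed API:

```python
# Import path and argument name are assumptions, for illustration only.
from chemxor.models import OlindaNetZero

model = OlindaNetZero(output_dim=1)  # configured for a single output
```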
Dataset Preparation
ChemXor provides two generic PyTorch Lightning DataModules (regression and classification) that can be used to train and evaluate the models. These DataModules expect raw data as CSV files with two columns (target, SMILES) and take care of converting the SMILES input to 32 x 32 images.
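Usage might look roughly like this; the class name and arguments are assumptions for illustration:

```python
# Class name and arguments are assumptions, for illustration only.
from chemxor.data import RegressionDataModule

dm = RegressionDataModule("train.csv")  # CSV with (target, SMILES) columns
dm.setup()  # standard Lightning hook; converts SMILES to 32 x 32 images
```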
Model training
It is recommended to use a PyTorch Lightning trainer to train the models, although a normal PyTorch training loop can also be used.
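Continuing the sketch above, training with a Lightning trainer uses the standard API:

```python
import pytorch_lightning as pl

# Standard PyTorch Lightning training; model and dm come from the
# sketches above.
trainer = pl.Trainer(max_epochs=10)
trainer.fit(model, datamodule=dm)
```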
FHE models
After training, the models can be wrapped using their model-specific FHE wrappers so that they can process FHE-encrypted inputs. The FHE wrappers take care of managing TenSEAL context parameters and keys.
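A wrapping step might look like this; the wrapper class name is an assumption, for illustration only:

```python
# Wrapper class name is an assumption, for illustration only.
from chemxor.models import FHEOlindaNetZero

fhe_model = FHEOlindaNetZero(model)  # manages TenSEAL context and keys
```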
FHE inputs evaluation
The DataModules can generate PyTorch dataloaders that produce encrypted inputs for the model. Since the FHE models are partitioned to control multiplicative depth, the forward function is modified to accept a step parameter. For testing, an FHE model can be evaluated locally as follows:
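(The sketch below is hypothetical: the encrypted dataloader helper and the steps and context attributes are assumptions, not confirmed ChemXor API.)

```python
import tenseal as ts

# Evaluate a partitioned FHE model locally, decrypting and
# re-encrypting between partitions to reset the noise budget.
enc_loader = dm.enc_dataloader(fhe_model.context)  # assumed helper
enc_x = next(iter(enc_loader))                     # encrypted sample
out = enc_x
for step in range(fhe_model.steps):        # one pass per model partition
    out = fhe_model(out, step=step)        # evaluate partition `step`
    plain = out.decrypt()                  # recover intermediate plaintext
    out = ts.ckks_vector(fhe_model.context, plain)  # re-encrypt, continue
```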
This process can be automated using a utility function provided by ChemXor.
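A single call along the following lines could replace the loop above; the function name here is purely illustrative:

```python
# Purely illustrative name for ChemXor's evaluation utility.
from chemxor.utils import evaluate_fhe_model

prediction = evaluate_fhe_model(fhe_model, enc_x)
```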
Serve models
FHE models can be served in the form of a Flask app as follows:
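(A hypothetical sketch: the route and request format are assumptions, while TenSEAL's serialize and ckks_vector_from calls are real; fhe_model comes from the wrapping step above.)

```python
import tenseal as ts
from flask import Flask, request

app = Flask(__name__)

@app.route("/predict/<int:step>", methods=["POST"])
def predict(step):
    # Deserialise the encrypted input sent by the client.
    enc_x = ts.ckks_vector_from(fhe_model.context, request.get_data())
    enc_out = fhe_model(enc_x, step=step)  # evaluate one partition
    return enc_out.serialize()             # the response stays encrypted

if __name__ == "__main__":
    app.run()
```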
ChemXor's predefined models can also be served using the CLI.
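(Illustrative only: the subcommand and flags below are assumptions, not documented ChemXor CLI syntax.)

```
chemxor serve olinda-net-zero --checkpoint model.ckpt
```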
Query models
We can then query the model with a simple command.
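(Again illustrative: the subcommand and arguments are assumptions.)

```
chemxor query http://localhost:5000 "CC(=O)Nc1ccc(O)cc1"
```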
Future work
It might be possible to offload the encrypted model evaluation to the client with the help of proxy re-encryption schemes. This would eliminate the need to host models.