Accurate AutoML with ZairaChem
We present ZairaChem, Ersilia's modeling pipeline for chemistry data.
ZairaChem offers a relatively complex ensemble modeling pipeline, showing robust performance over a wide set of tasks. If, instead, you want to build quick baseline models, we recommend checking LazyQSAR, Ersilia's lightweight modeling tool.
In brief, in ZairaChem molecules are represented numerically using a combination of distinct descriptors, including physicochemical parameters (Mordred), 2D structural fingerprints (ECFP), inferred bioactivity profiles (Chemical Checker), graph-based embeddings (GROVER), and chemical language models (ChemGPT). Any other descriptor from the Ersilia Model Hub can be selected. The rationale is that combining multiple descriptors will enhance applicability over a broad range of tasks, from aqueous solubility predictions to phenotypic outcomes. Subsequently, an array of AI/ML algorithms is applied using modern AutoML techniques aimed at yielding accurate models without the need for human intervention (e.g. algorithm choice and hyperparameter tuning are handled automatically). The AutoML frameworks FLAML, AutoGluon, Keras Tuner, TabPFN and MolMapNet are incorporated, covering mostly tree-based methods (Random Forest, XGBoost, etc.) and neural network architectures.
ZairaChem can be installed as follows:
git clone https://github.com/ersilia-os/zaira-chem.git
A Conda environment called zairachem will be created. Start by activating this environment:
conda activate zairachem
Check that ZairaChem has been installed properly; running zairachem --help will display the command-line interface (CLI) options.

To get started, you can generate an example classification input file:

zairachem example --classification --file_name input.csv
This file can be split into train and test sets.
zairachem split -i input.csv
The command above will generate two files in the current folder, named train.csv and test.csv. By default, the train:test ratio is 80:20.
You can train a model as follows:
zairachem fit -i train.csv -m model
This command will run the full ZairaChem pipeline and produce a model folder with processed data, model checkpoints, and reports.
You can then run predictions on the test set:
zairachem predict -i test.csv -m model -o test
ZairaChem will run predictions using the checkpoints stored in model and store results in the test directory. Several performance plots will be generated alongside the prediction outputs.
Internally, the ZairaChem pipeline consists of the following steps:
session: a session is initialized pointing to the necessary system paths.
setup: data is processed and stored in a cleaned form.
describe: molecular descriptors are calculated.
estimate: models are trained or predictions are done on trained models.
pool: results from multiple models from the ensemble are aggregated.
report: output data is assembled in a spreadsheet, and plots are created for easy inspection of results.
finish: the session is closed and residual files are deleted.
You can start a ZairaChem training session as follows:
zairachem session --fit -i train.csv -m model
Likewise, you can start a prediction session:
zairachem session --predict -i test.csv -m model -o test
The session command will simply create the necessary folders and a session log.
In the setup step, data preparation is done, including:
- Identification of relevant columns (compound identifier, SMILES, and value) in the input file.
- Chemical structure standardization.
- Data balancing and augmentation using a reference set of molecules (e.g. ChEMBL).
- Binarization when a cutoff is specified.
- Transformation (Gaussianization) of continuous data.
- Fold and cluster assignments.
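To make the binarization and Gaussianization steps concrete, here is a minimal sketch in plain Python. It is illustrative only: the cutoff-direction handling and the rank-based normal transform are assumptions, and ties are not handled:

```python
from statistics import NormalDist

def binarize(values, cutoff, direction="high"):
    # 1 = "active" side of the cutoff; direction says which side that is
    if direction == "high":
        return [int(v >= cutoff) for v in values]
    return [int(v <= cutoff) for v in values]

def gaussianize(values):
    # rank-based inverse normal transform: map empirical quantiles
    # onto a standard normal distribution (ties not handled here)
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    out = [0.0] * n
    nd = NormalDist()
    for rank, i in enumerate(order):
        out[i] = nd.inv_cdf((rank + 0.5) / n)
    return out

ic50 = [0.2, 1.5, 9.8, 0.04, 3.3]
print(binarize(ic50, 1.0, "low"))  # actives: IC50 <= 1.0 → [1, 0, 0, 1, 0]
```

The Gaussianized values follow an approximately standard normal distribution regardless of the shape of the raw data, which is convenient for downstream regressors.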
Given an initialized session (fit or predict), data preparation will be done accordingly as part of the setup step.
Most data generated in the setup step will be stored in the data subfolder of the output directory (for example, test/data for the predict session above). The most important file in this folder is data.csv, containing the result of the data preparation step. Other files are generated, such as mapping.csv, which matches data.csv to the row indices of the input file.
In the describe step, small-molecule descriptors are calculated. ZairaChem provides a set of default descriptors, including the Chemical Checker signaturizer, GROVER embeddings, Morgan fingerprints, and Mordred descriptors.
Several operations are performed for each of the descriptors, including:
- Calculation of descriptors for each molecule using the Ersilia Model Hub.
- Removal of constant-value columns and columns with a high degree of missing values.
- Imputation of the remaining missing values.
- Robust scaling of continuous descriptors.
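The descriptor clean-up operations above can be sketched with NumPy as follows. This is a simplified illustration, not ZairaChem's code; the 20% missing-value threshold is an assumed example value:

```python
import numpy as np

def clean_descriptors(X, max_missing=0.2):
    # X: (n_samples, n_features) descriptor matrix that may contain NaNs
    # 1) drop columns with too many missing values
    keep = np.isnan(X).mean(axis=0) <= max_missing
    X = X[:, keep]
    # 2) drop constant-value columns (ignoring NaNs)
    X = X[:, np.nanstd(X, axis=0) > 0]
    # 3) impute remaining NaNs with the column median
    med = np.nanmedian(X, axis=0)
    rows, cols = np.where(np.isnan(X))
    X[rows, cols] = med[cols]
    # 4) robust scaling: (x - median) / IQR
    q1, q3 = np.percentile(X, [25, 75], axis=0)
    iqr = np.where(q3 - q1 > 0, q3 - q1, 1.0)
    return (X - med) / iqr
```

Robust scaling with the median and interquartile range is far less sensitive to descriptor outliers than standard z-scoring, which matters for heavy-tailed physicochemical features.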
In addition, a reference descriptor is calculated (GROVER). To this reference descriptor, dimensionality reduction techniques such as UMAP are applied.
Optionally, supervised versions of these algorithms are applied:
- Supervised UMAP
All of the above operations are performed as part of the describe step.
Please note that calculating some descriptors (for example, GROVER) may be slow. However, the Ersilia backend is linked to an in-house caching library called Isaura that can access pre-calculated data. At the moment, Isaura works on local caching only; we are currently setting up a cloud-based database to facilitate access to pre-calculations stored online.
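Conceptually, the caching idea can be sketched as a local key-value store indexed by a hash of the molecule string. This is a hypothetical illustration: cached_descriptor and the JSON file layout are not Isaura's actual API:

```python
import hashlib
import json
import os

def cached_descriptor(smiles, compute, cache_dir="descriptor_cache"):
    # look up a precomputed descriptor vector by a hash of the SMILES
    # string; on a cache miss, compute it and store it for next time
    os.makedirs(cache_dir, exist_ok=True)
    key = hashlib.sha1(smiles.encode("utf-8")).hexdigest()
    path = os.path.join(cache_dir, key + ".json")
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    value = compute(smiles)
    with open(path, "w") as f:
        json.dump(value, f)
    return value
```

With a scheme like this, a slow descriptor is computed at most once per molecule; repeated fit or predict runs over overlapping chemical libraries become much faster.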
This step is aimed at training AutoML models based on the descriptors calculated above.
The following supervised models are applied:
- Baseline LazyQSAR models (based on Morgan fingerprints and classic descriptors).
- FLAML models on each of the pre-calculated descriptors.
- AutoGluon model based on the manifolds of the reference embedding.
- Keras Tuner fully-connected network based on the reference embedding.
- MolMap convolutional neural network.
All of these models are trained as part of the estimate step.
In the pooling step, results from the estimators above are aggregated. A weighted average is applied, based on the expected performance of each of the individual estimators.
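A performance-weighted average of classifier probabilities can be sketched as follows. This is illustrative only; the estimator names and the use of cross-validation scores as normalized weights are assumptions:

```python
import numpy as np

def pool_predictions(preds, weights):
    # preds:   {estimator_name: array of predicted probabilities}
    # weights: {estimator_name: expected performance, e.g. a CV AUROC}
    names = list(preds)
    w = np.array([weights[n] for n in names], dtype=float)
    w = w / w.sum()  # normalize weights so they sum to 1
    stacked = np.stack([preds[n] for n in names])
    return (w[:, None] * stacked).sum(axis=0)

# hypothetical estimator names for two molecules
preds = {"flaml_ecfp": np.array([0.9, 0.2]),
         "autogluon_manifold": np.array([0.7, 0.4])}
weights = {"flaml_ecfp": 0.8, "autogluon_manifold": 0.6}
pooled = pool_predictions(preds, weights)
```

Weighting by expected performance lets stronger estimators dominate the consensus while weaker ones still contribute, which typically outperforms a plain unweighted average.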
Pooling corresponds to the pool step of the pipeline.
ZairaChem provides automated performance reports as well as an output table.
- Output table
- Performance table
ZairaChem models are computationally demanding. At the end of the procedure, our goal is to provide a distilled model. This distilled model is stored in an interoperable format (ONNX) and can be deployed as an AWS Lambda function. The Ersilia package for creating distilled models is called Olinda.
The finish command offers options for cleaning up at the end of a run: the session is closed and residual files are deleted.
It is possible to run a specific step from a previous session. In this case, simply initialize the session pointing to the relevant folders:
zairachem session --path model
ZairaChem will automatically identify the session as a training (fit) task or a prediction (predict) task.
Once the session has been set, you can run the command of choice.
In the session file, multiple steps are specified. Each step in ZairaChem has an associated name. You can restart the pipeline at any given step.