Results caching

The Ersilia Model Hub can cache model inference results to save computation time. All users can enable local caching through Redis (it is on by default), and Ersilia maintainers can additionally cache results in the cloud.

Local caching

Ersilia provides built-in caching with Redis to improve performance by storing model results. Caching is intended for deterministic model outputs rather than the variable outputs of generative models. The cache key is generated by computing an MD5 hash of the combination of the model ID and its input (e.g., a SMILES string). Caching is only supported for models packed with the FastAPI-based ersilia-pack server (more on this in Model packaging).
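The exact key layout is internal to Ersilia, but the sketch below illustrates the idea. The make_cache_key helper and the model_id:input layout are illustrative assumptions, not Ersilia's actual implementation.

import hashlib

def make_cache_key(model_id: str, model_input: str) -> str:
    """Illustrative only: derive a deterministic cache key from a model ID
    and its input, mirroring the MD5-based scheme described above."""
    payload = f"{model_id}:{model_input}".encode("utf-8")
    return hashlib.md5(payload).hexdigest()

# The same model/input pair always maps to the same key, so a repeated
# request can be answered from Redis instead of re-running the model.
print(make_cache_key("eos3b5e", "CCO"))  # 32-character hex digest

Because the key is deterministic, any change to either the model ID or the input produces a different key and therefore a fresh computation.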

Caching Features

  • Automatic Caching: By default, Ersilia caches model results.

  • Key Generation: A unique key is created using an MD5 hash based on the model ID and input.

  • Flexibility: Caching can be turned off during server startup if desired.

Setup and installation

Redis is installed and set up automatically if Docker is installed and running on your system. Otherwise, caching is skipped, as if the --no-cache flag had been passed.
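If you are unsure whether caching will be active, you can check that Docker is reachable before serving a model. The snippet below is a convenience check written for this guide; it is not part of the Ersilia API.

import shutil
import subprocess

def docker_is_running() -> bool:
    """Return True if the Docker CLI exists and the daemon responds, which is
    the condition under which Ersilia sets up Redis automatically."""
    if shutil.which("docker") is None:
        return False
    result = subprocess.run(["docker", "info"],
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
    return result.returncode == 0

if not docker_is_running():
    print("Docker is not available: caching will be skipped (as with --no-cache).")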

How to Control Caching

When starting the Ersilia server, you can decide whether to use caching with these command-line options:

ersilia serve eos3b5e --cache      # default
ersilia serve eos3b5e --no-cache   # disables caching

Memory Management

Redis manages its memory usage based on available system resources. By default, it uses 30% of system RAM for caching. You can adjust this limit by passing a different memory fraction: replace <fraction> below with the share of your system's RAM that Redis should use. The recommended range is 0.1-0.7.

ersilia serve eos3b5e --max-cache-memory-frac <fraction>
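To see what a given fraction means in absolute terms, the sketch below converts a --max-cache-memory-frac value into an approximate memory budget from the total RAM reported by the operating system. The helper name and GiB rounding are illustrative; only the flag itself is part of Ersilia.

import os

def redis_memory_budget(fraction: float = 0.3) -> float:
    """Approximate the RAM (in GiB) that Redis would be allowed to use
    for a given --max-cache-memory-frac value (default 0.3)."""
    if not 0.1 <= fraction <= 0.7:
        raise ValueError("recommended range is 0.1-0.7")
    total_bytes = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")  # Linux/macOS
    return fraction * total_bytes / 1024 ** 3

# Example: on a 16 GiB machine, the default fraction of 0.3 allows roughly 4.8 GiB.
print(f"{redis_memory_budget(0.3):.1f} GiB available to the Redis cache")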
