Results caching
The Ersilia Model Hub enables caching of model inference results to save computational time. By default, all users can enable local caching through Redis, and Ersilia maintainers can also cache results in the cloud.
Ersilia provides built-in caching using Redis to improve performance by storing model results. This caching system does not target the variable outputs of generative models; instead, it caches other types of model outputs. The cache key is generated by computing an MD5 hash from the combination of the model ID and its input (e.g., a SMILES string). Caching is only supported for models packed with the FastAPI-based ersilia-pack server.
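As a rough sketch of this scheme, the key for a given model/input pair can be reproduced with a standard MD5 tool. The exact string Ersilia hashes internally is an assumption here, and the model ID and SMILES below are only placeholders:

```bash
# Hypothetical illustration of the cache-key scheme: hash the model ID
# together with the input. The concatenation format is an assumption,
# not the exact internal implementation.
MODEL_ID="eos3b5e"   # example model identifier
INPUT="CCO"          # example SMILES string (ethanol)
printf '%s%s' "$MODEL_ID" "$INPUT" | md5sum
```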
Automatic Caching: By default, Ersilia caches model results.
Key Generation: A unique key is created using an MD5 hash based on the model ID and input.
Flexibility: Caching can be turned off during server startup if desired.
Redis will be installed and set up automatically if Docker is installed and running on your system. Otherwise, caching will be skipped, as if the --no-cache flag had been used.
When starting the Ersilia server, you can decide whether to use caching with these command-line options:
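For example, a minimal sketch of serving a model with and without caching (the --no-cache flag is described above; the model ID is a placeholder):

```bash
# Serve a model with result caching enabled (the default behaviour)
ersilia serve eos3b5e

# Serve the same model with result caching disabled
ersilia serve eos3b5e --no-cache
```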
Redis manages its memory usage based on available system resources. By default, it uses 30% of the system RAM for caching. You can adjust this limit by specifying a different memory usage fraction when serving the model; the recommended range is 0.1-0.7.
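As an illustration, assuming the serve command exposes a flag for this fraction (the flag name below is an assumption and may differ in your Ersilia version; check the CLI help for the exact option):

```bash
# Cap Redis at roughly half of system RAM for result caching.
# NOTE: the flag name is assumed, not confirmed by this page; run
# `ersilia serve --help` to see the exact option available.
ersilia serve eos3b5e --max-cache-memory-frac 0.5
```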