The installation consists of two steps:

Nebullvm installation

There are two ways to install nebullvm:
  • Using PyPI. We suggest installing the library with pip to get the stable version of nebullvm
  • From source code to get the latest features
Installation with PyPI (recommended)
The easiest way to install nebullvm is by using pip, running
pip install nebullvm
Installation from source code
Alternatively, you can install nebullvm from source code by cloning the repository to your local machine using git.
git clone https://github.com/nebuly-ai/nebullvm.git
Then, enter the repo and install nebullvm with pip.
cd nebullvm
pip install .
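With either installation method, you can check that the package is importable and see which version was installed (the `__version__` attribute is an assumption here; most Python packages expose it, but fall back to `pip show nebullvm` if it is missing):

```shell
# Confirm nebullvm is importable and print its version.
python -c "import nebullvm; print(nebullvm.__version__)"

# Alternatively, query pip's metadata for the installed package.
pip show nebullvm
```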

Installation of deep learning compilers

Follow the instructions below to automatically install all deep learning compilers leveraged by nebullvm (OpenVINO, TensorRT, ONNX Runtime, Apache TVM, etc.).
We have prepared three ways to easily install all compilers at once, each described in the sections below:
  • Automatically, at the first optimization run
  • Proactively, by importing nebullvm before the first optimization
  • By downloading a Docker image with all compilers preinstalled

Installation at the first optimization run

The automatic installation of the deep learning compilers is triggered after you import nebullvm and perform your first optimization. You may run into import errors related to the compiler installation, but these errors/warnings can be safely ignored. It is also recommended to restart the Python kernel between the auto-installation and the first optimization; otherwise, not all compilers will be activated.
To avoid any problems, we strongly recommend running the auto-installation before performing the first optimization by running
python -c "import nebullvm"
Any import warnings resulting from this command can be ignored at this stage.
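Since the compilers must be picked up by a fresh interpreter, the recommended flow can be sketched as two separate shell invocations, where separate Python processes stand in for the recommended kernel restart (`optimize_model.py` is a placeholder for your own optimization script):

```shell
# Step 1: trigger the auto-installation of the deep learning compilers.
# Import warnings at this stage can be safely ignored.
python -c "import nebullvm"

# Step 2: run the first optimization in a fresh Python process, so that
# all newly installed compilers are activated.
python optimize_model.py
```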

Download Docker images with preinstalled compilers

Instead of installing the compilers, which may take a long time, you can simply download a Docker container with all compilers preinstalled and start using nebullvm right away.
To pull the Docker image, run
docker pull nebulydocker/nebullvm:cuda11.2.0-nebullvm0.3.1-allcompilers
and you can then run and access the container with
docker run -it nebulydocker/nebullvm:cuda11.2.0-nebullvm0.3.1-allcompilers
After you have compiled the model, you may decide to deploy it to production. Note that some of the components used to optimize the model are also needed to run it, so you must have the compiler installed in the production docker. For this reason, we have created several versions of our Docker container in the Docker Hub, each containing only one compiler. Pull the image with the compiler that has optimized your model!
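As a sketch of the deployment step (the per-compiler tag shown here is an assumption; check the nebulydocker/nebullvm repository on Docker Hub for the actual tag names), pulling and running a single-compiler production image might look like:

```shell
# Pull an image containing only the compiler that optimized your model.
# The <compiler> placeholder (e.g. tensorrt or openvino) is hypothetical;
# verify the real tag names on Docker Hub before use.
docker pull nebulydocker/nebullvm:cuda11.2.0-nebullvm0.3.1-<compiler>

# Run the production container interactively.
docker run -it nebulydocker/nebullvm:cuda11.2.0-nebullvm0.3.1-<compiler>
```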
That's it! You should now be all set to get started with nebullvm.
Have you run into any problems with the installation? Report any issues on the community channels or on GitHub. We are here to help you πŸ™Œ