Advanced options

nebulgym is designed to be fast and extremely user-friendly. It achieves ease of use by requiring no modification of the framework you already use: you simply add class decorators to your training code. As an alternative to decorators, below we also explain how to use nebulgym with standard Python classes.

Class decorators

You can speed up training by simply adding nebulgym class decorators before defining the dataset and model. Through these decorators, nebulgym adds functionality to the model and dataset classes, making data loading more efficient, forward and backward passes faster, and convergence quicker.
The nebulgym class decorators are @accelerate_model and @accelerate_dataset, and they can take additional parameters (a usage sketch follows this list):
  • @accelerate_model:
    • backends. You can specify the backend you prefer instead of the default which is PyTorch (see Supported backends section below). Backends can be specified in order of priority: nebulgym will try to use the first backend in the list, and if it fails, the model will move to the next backend. The available backends are "ONNXRUNTIME," "RAMMER," and "PYTORCH".
    • patch_backprop. An optional boolean parameter that optimizes the backprop phase. It can lead to a huge speedup and at times even an improvement in model performance. However, since it slightly modifies the backprop calculation and its effect on model performance cannot be known beforehand, we leave it to the end user to decide whether to enable this technique.
  • @accelerate_dataset:
    • max_memory_size. This argument defines the maximum amount of memory (in bytes) that the data-loading phase is allowed to use.
    • preloaded_data. This specifies the number of "parallel" workers allowed to pre-load data while the model is training.
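
Below is a minimal sketch of the decorator-based workflow. The import path follows the project README and may differ across nebulgym versions; the model and dataset bodies are placeholders to replace with your own training code.

import torch
from torch.utils.data import Dataset

from nebulgym.decorators.torch_decorators import accelerate_model, accelerate_dataset

# Backends are tried in priority order; nebulgym moves to the next one on failure.
@accelerate_model(backends=["ONNXRUNTIME", "PYTORCH"], patch_backprop=True)
class CustomModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self._layer = torch.nn.Linear(256, 2)

    def forward(self, x):
        return self._layer(x)

# Allow up to ~1 GB of RAM for pre-loading, using 4 pre-loading workers.
@accelerate_dataset(max_memory_size=10**9, preloaded_data=4)
class CustomDataset(Dataset):
    def __init__(self, n: int = 1000):
        self._n = n

    def __getitem__(self, idx):
        return torch.randn(256), torch.tensor(idx % 2)

    def __len__(self):
        return self._n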

nebulgym classes

As an alternative to class decorators, you can use the TrainingLearner and NebulDataset classes directly (a sketch follows this list).
  • TrainingLearner. The TrainingLearner class receives as input:
    • model (Module): The original PyTorch model.
    • backends: The list of backends that can be used for running the model (see Supported backends section below). The available backends are "ONNXRUNTIME," "RAMMER," and "PYTORCH".
  • NebulDataset. The NebulDataset accepts as inputs:
    • input_data (Dataset): The inner dataset, used as-is during the first data iteration.
    • preloaded_data (int, optional): Number of workers that will pre-load the data from the second iteration on.
    • max_memory_size (int, optional): Maximum number of bytes in RAM that the pre-loading process may occupy.
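
The class-based equivalent might look like the sketch below. The import paths are an assumption and may differ across nebulgym versions, and the sketch assumes a TrainingLearner instance is callable like the Module it wraps; the constructor arguments are the ones documented above.

import torch
from torch.utils.data import DataLoader, TensorDataset

# Assumed import paths; check your installed nebulgym version.
from nebulgym import TrainingLearner, NebulDataset

model = torch.nn.Linear(256, 2)  # any plain PyTorch Module
dataset = TensorDataset(torch.randn(100, 256), torch.randint(0, 2, (100,)))

# Wrap the model and dataset instead of decorating their classes.
learner = TrainingLearner(model, backends=["ONNXRUNTIME", "PYTORCH"])
fast_data = NebulDataset(dataset, preloaded_data=4, max_memory_size=10**9)

loader = DataLoader(fast_data, batch_size=8)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

for x, y in loader:
    optimizer.zero_grad()
    loss = loss_fn(learner(x), y)  # assumes learner is callable like a Module
    loss.backward()
    optimizer.step()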

Supported backends

Currently nebulgym supports three different backends, with PyTorch being the default one.
  • PYTORCH. Default compiler for models trained in PyTorch.
  • RAMMER. Compiler that can be used on Nvidia GPUs.
  • ONNXRUNTIME. Training API that leverages techniques developed for inference optimization. It currently supports only Nvidia GPUs.
Note that if the user-selected backend does not work at instantiation time, nebulgym automatically falls back to the default PyTorch backend. Backends other than PyTorch must be installed separately from nebulgym. ONNX Runtime can be installed on Linux platforms that support CUDA GPUs.
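
For example, to prefer RAMMER on an Nvidia GPU while relying on the automatic PyTorch fallback, you could decorate the model as in this short, illustrative sketch:

import torch
from nebulgym.decorators.torch_decorators import accelerate_model

# If RAMMER is not installed or fails at instantiation time,
# nebulgym switches back to the PyTorch backend automatically.
@accelerate_model(backends=["RAMMER", "PYTORCH"])
class GpuModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self._layer = torch.nn.Linear(64, 8)

    def forward(self, x):
        return self._layer(x)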

Installation of ONNX Runtime backend

pip install torch_ort
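
Depending on the torch-ort version, a one-time configuration step may also be required after installation; this step comes from the onnxruntime-training documentation rather than from nebulgym:

python -m torch_ort.configure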

Installation of Rammer backend

Rammer must be installed from source code. Details are available at https://github.com/microsoft/nnfusion.