Loading models
This file contains the methods for loading your underlying model graph and wrapping it in an nbox.Model object.
Source file: GitHub
Using nbox you can load thousands of public pretrained models or bring your own models into the ecosystem. First I will cover how to load public models, and later how to bring your own model.

Public Models

nbox uses a model registry called PRETRAINED_MODELS to fetch a model_builder_fn that initialises the underlying model graph, eg. load_torchvision_models() registers all the models in the torchvision package. At initialisation this populates the registry, throwing an error if it is empty. Each public model has a key like this:
<package>/<key_in_package>::<extra-args>
Currently you can load the following models:
    1. torchvision: to load these models, start your key with torchvision; see the complete list of supported models here.
    2. transformers: 🤗 Transformers has completely revolutionised the NLP landscape. In order to load any model you need the source weights and the architecture, so your key looks like transformers/gpt2::AutoModelForCausalLM. The model_builder_fn will load the weights from gpt2 and the architecture will be AutoModelForCausalLM.
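For example, loading a public model is a single call (a minimal sketch; the exact keys and the pretrained kwarg are illustrative, and extra kwargs are simply forwarded to the builder):

import nbox

# torchvision model: extra kwargs are passed through to the model_builder_fn
resnet = nbox.load("torchvision/resnet18", pretrained = True)

# 🤗 transformers model: weights from gpt2, architecture AutoModelForCausalLM
gpt2 = nbox.load("transformers/gpt2::AutoModelForCausalLM")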

Add your loader to the nbox ecosystem

Model loading happens through a dedicated loader method for each package. The function should be named load_<package>_models() and should return a dict whose keys are the model keys you want and whose values are tuples/lists of (builder_fn, category) (read more about category here). Inside each builder_fn you add your own code for loading the model; it should return two things: (a) the model, ie. the underlying graph, which is what gets loaded into nbox.Model, and (b) any extra args needed when using the model, by default an empty dict {}:
def load_package_models(pop_kwargs = ["model_instr"]):

    def model_builder_fn(**kwargs):
        # define how the model loading is supposed to take place,
        # eg. if this is alexnet from torchvision this is how it would be
        model_fn = torchvision.models.alexnet

        # since this package can load from any function, and not all functions
        # consume all keywords, we have to remove some of them
        kwargs = remove_kwargs(pop_kwargs, **kwargs)

        # now initialise the underlying model
        model = model_fn(**kwargs)

        # sometimes the nbox.Model may require an extra item, eg. a tokenizer
        # when using this for an NLP task
        model_kwargs = {"tokenizer": some_tokenizer}

        return model, model_kwargs

    # the loader has to return a dictionary with your key against
    # a model_builder_fn and some metadata
    return {"my_new_model": (model_builder_fn, "image")}
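To sanity-check a new loader you can call it directly (a hypothetical snippet using the placeholder names from the template above; pretrained is an alexnet argument, not something nbox adds):

registry = load_package_models()
builder_fn, category = registry["my_new_model"]

# the builder returns the underlying graph plus extra kwargs for nbox.Model
model, model_kwargs = builder_fn(pretrained = True)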
You can raise a PR with your own models by adding a simple function to the source of this file.
When your loader function is complete, add it to the model registry by adding your package name to the all_repos list:
all_repos = [..., "package_name", ...]
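For context, this is roughly how the registry described earlier could be populated at initialisation (a hypothetical sketch of the mechanism, not nbox's actual code):

PRETRAINED_MODELS = {}
for package_name in all_repos:
    # by convention each package exposes a load_<package>_models() loader
    loader_fn = globals()[f"load_{package_name}_models"]
    PRETRAINED_MODELS.update(loader_fn())

# per the behaviour described above, an empty registry is an error
assert PRETRAINED_MODELS, "model registry is empty"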

Category

This defines the type of each input to the model: if the model is resnet-18 the category is image, and if it is a transformer model it should be text. If your model consumes more than one input, the category can be defined as a dictionary with a key for each input, eg. for OpenAI/CLIP the category is {"text": "text", "image": "image"} because the torch.nn.Module consumes the keywords text and image in its .forward() method.
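To make the keyword-matching concrete, here is a toy multi-input module next to its category (a hypothetical sketch, not a real CLIP implementation):

import torch

class TinyClip(torch.nn.Module):
    # the .forward() keyword names are exactly the keys of the category dict
    def forward(self, text, image):
        return text.sum() + image.sum()

category = {"text": "text", "image": "image"}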

Bring your Model

You will mostly use NimbleBox.ai's deployment service to put your model in production. In this case you can easily wrap your model in nbox.Model, which takes your model and brings the goodness of using nbox with it. You can read more in the documentation below. All you have to do is pass the model and define the category:
nbox.Model(
    my_clip_model(),
    category = {
        "input_images": "image",
        "input_string": "text"
    }
)
As you can see, you initialise your model and define the category (assuming your .forward() method takes input_images and input_string as inputs).

Documentation

load

load(model_key: str = None, nbx_api_key: str = None, cloud_infer: bool = False, **loader_kwargs)
Returns an nbox.Model from a model_key and can optionally set up a connection to cloud inference on NimbleBox.
Arguments:
    model_key str, optional - key of the model to load; the structure looks as follows:
    source/(source/key)::<pre::task::post>
    Defaults to None.
    nbx_api_key str, optional - your NimbleBox API key. Defaults to None.
    cloud_infer bool, optional - if True, uses NimbleBox deployed inference and logs in using nbx_api_key. Defaults to False.
Raises:
    ValueError - If source is not found
    IndexError - If source is found but source/key is not found
Returns:
    nbox.Model - when using local inference
    nbox.NBXApi - when using cloud inference
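A typical call looks like this (a minimal sketch; the model key is illustrative and the API key is a placeholder):

import nbox

# local inference
model = nbox.load("torchvision/resnet18")

# cloud inference on NimbleBox (placeholder API key)
cloud_model = nbox.load(
    "torchvision/resnet18",
    nbx_api_key = "nbx-...",
    cloud_infer = True,
)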