QuickStart with Deployment
Deploy on managed autoscaling pods on NimbleBox.ai
Deployment is a major bottleneck for DS/ML engineers trying to get their systems into production, and until a model is in production the project is not really complete. The nbox SDK turns deployment into a single command, literally!
Once you are on our platform, the first step is to open a terminal from the App Bar on your left and run the command below. This activates the cuda101 environment and installs our SDK, nbox.
(A) Activating the environment and installing nbox
conda activate cuda101 && pip install nbox
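You can optionally confirm the install worked before moving on. A quick check (assuming the package exposes a __version__ attribute, as most releases do):
python3 -c "import nbox; print(nbox.__version__)"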
Now open VSCode, create a new file called nbox_test.py, and start adding the following:
(B) Add Basic Imports
#!/usr/bin/env python3

import os
import numpy as np

import nbox
from nbox.utils import folder, join, get_image

import warnings
warnings.filterwarnings("ignore")
You can bring your own model, but for this tutorial we will deploy torchvision/resnet18. Loading models is super easy: use a publicly available model or bring in your own (a sketch of wrapping your own model follows the code below). Add the following code to nbox_test.py:
(C) Loading the model
model = nbox.load(
    "torchvision/resnet18",
    pretrained = True,
)

image_url = "https://github.com/NimbleBoxAI/nbox/raw/master/tests/assets/cat.jpg"
out = model(image_url)
print(out[0].topk(5))
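If you would rather bring your own model than load one from a hub, nbox can also wrap a plain torch.nn.Module. The snippet below is a minimal sketch only; the Model wrapper and its category argument are assumptions on our part, so verify the exact class name and signature in the technical documentation.
import torchvision

# a plain PyTorch model you trained or downloaded yourself
my_model = torchvision.models.mobilenet_v2(pretrained = True)

# wrap it for nbox -- `Model` and `category` are assumptions here;
# check the nbox docs for the exact class name and signature
model = nbox.Model(my_model, category = "image")
out = model(image_url)
print(out[0].topk(5))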
Now comes the cool part: deploying models. nbox already supports deploying a tonne of models out of the box. For deployment you need to provide an input_object that can be used to trace the model (TorchScript / ONNX). The deployed model takes its input shape from this object, so for now we resize the image to (244, 244) and then pass it for deployment.
(D) Deploying the model
image = get_image(image_url)  # get the PIL.Image object

# you can skip the resize if the shape is already correct
image = image.resize((244, 244))

# simply provide the input_object and watch the Terminal
model.deploy(input_object = image)
Now wait for the deployment to complete. You can check out the dashboard in the meantime; once deployed, you will see the URL and API key for your model. Copy them and pass them to the nbox.load() method.
(E) Cloud Inference
model = nbox.load(
    "https://api.nimblebox.ai/user/my_big_model/",
    "nbxdeploy_zpO8I8AVzvOetQYAZanzP2mMgJ5oh84LG0wZdgh3U"
)
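Hard-coding the key is fine for a quick test, but you will probably want to keep it out of source control. A minimal sketch, assuming you export the URL and key yourself under the hypothetical NBX_DEPLOY_URL and NBX_API_KEY environment variables (the names are our choice, not an nbox convention):
# read the deployment URL and API key from the environment instead of
# hard-coding them; the variable names below are hypothetical
deploy_url = os.environ["NBX_DEPLOY_URL"]
nbx_api_key = os.environ["NBX_API_KEY"]

model = nbox.load(deploy_url, nbx_api_key)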
Now use the model without worrying about the API calls; nbox handles them internally.
(F) Cloud Inference usage
out = model("https://github.com/NimbleBoxAI/nbox/raw/master/tests/assets/cat.jpg")
print(out.shape)  # == (1, 1000)
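If you want human-readable predictions rather than raw scores, you can post-process the returned array locally. A minimal sketch, assuming the call returns an array-like object of shape (1, 1000) as printed above; the softmax / top-5 step is our own addition, not part of the nbox API:
# illustrative post-processing of the (1, 1000) output -- not an nbox feature
logits = np.asarray(out)[0]            # shape (1000,)
exp = np.exp(logits - logits.max())    # numerically stable softmax
probs = exp / exp.sum()
top5 = np.argsort(probs)[::-1][:5]     # indices of the 5 highest-scoring ImageNet classes
print(top5, probs[top5])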
That is how easy it is to load a model, test it, and deploy it in minutes. Head over to the technical documentation for further reading.
Deploy models in minutes, not days!