This SDK is under active development. If you find a bug or have a feature request, feel free to open an issue in our community repo.
# Requirements

- Python 3.8+
- pip or Poetry
# Installation

Install it directly into an activated virtual environment:
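A typical install looks like the following (this assumes the package is published on PyPI under the name `instill-sdk`; check the project's PyPI page if the name has changed):

```shell
# create and activate a virtual environment, then install the SDK
# (assumed PyPI package name: instill-sdk)
python -m venv .venv
source .venv/bin/activate
pip install instill-sdk
```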
If your host machine uses the arm64 architecture (including Apple silicon machines with M1/M2 processors), grpcio can fail to install within a conda environment. In that case you will have to build and install it manually, as shown below.
```shell
GRPC_PYTHON_LDFLAGS=" -framework CoreFoundation" pip install grpcio --no-binary :all:
```
# Check Import
After installation, you can check if it has been installed correctly:
```shell
python
>>> import instill
>>> instill.__version__
```
# Config Instill Core or Instill Cloud Instance

Before you can start using this SDK, you will need to properly configure your target instance. We support two ways to set up the configs:
# Config file

Create a config file at `${HOME}/.config/instill/sdk/python/config.yml` and fill in some basic parameters for your desired host.[^1]

Within the config file, you can define multiple instances with aliases of your liking; later in the SDK, you can refer to these aliases to switch between instances.[^2]
```yaml
hosts:
  alias1:
    url: str
    secure: bool
    token: str
  alias2:
    url: str
    secure: bool
    token: str
  ...
```
Example:
```yaml
hosts:
  default:
    url: localhost:8080
    secure: false
    token: instill_sk***
  cloud:
    url: api.instill.tech
    secure: true
    token: instill_sk***
```
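Conceptually, each alias maps to one instance's connection settings. A plain-Python sketch of that lookup (the `resolve_host` helper is hypothetical, for illustration only, and not part of the SDK):

```python
# hypothetical illustration of alias-based host resolution; not SDK code
hosts = {
    "default": {"url": "localhost:8080", "secure": False, "token": "instill_sk***"},
    "cloud": {"url": "api.instill.tech", "secure": True, "token": "instill_sk***"},
}

def resolve_host(alias: str = "default") -> dict:
    """Return the connection settings registered under the given alias."""
    if alias not in hosts:
        raise KeyError(f"no instance named {alias!r} in config")
    return hosts[alias]

print(resolve_host("cloud")["url"])  # prints: api.instill.tech
```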
# At runtime

If you do not like the idea of having to create a config file, you can also set up your target instance at the very beginning of your script:
```python
from instill.configuration import global_config

global_config.set_default(
    url="api.instill.tech",
    token="instill_sk***",
    secure=True,
)
```
# Usage

You can find a complete pipeline setup example with the Python SDK in our GitHub repo.
# Create Client

Simply import the `get_client` function to get a client connected to all services, using the config you set up previously.
```python
from instill.clients import get_client

client = get_client()
```
Remember to call `client.close()` at the end of your script to release the channel and the underlying resources.
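To make sure `close()` runs even when the script raises, you can wrap the client in `contextlib.closing`. The sketch below uses a stand-in object to show the pattern; whether the SDK client works unmodified with `closing` is an assumption:

```python
from contextlib import closing

# stand-in for the SDK client, used only to illustrate the cleanup pattern
class DummyClient:
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

# closing() calls .close() on exit, even if the body raises an exception
with closing(DummyClient()) as client:
    pass  # use the client here

print(client.closed)  # prints: True
```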
If you have not set up Instill VDP or Instill Model, you will get a warning like this:
```
2023-09-27 18:49:04,871.871 WARNING  Instill VDP is not serving, VDP functionalities will not work
2023-09-27 18:49:04,907.907 WARNING  Instill Model is not serving, Model functionalities will not work
```
You can check the readiness of each service:
```python
client.mgmt_service.is_serving()
# True
client.connector_service.is_serving()
# True
client.pipeline_service.is_serving()
# True
client.model_service.is_serving()
# True
```
Depending on which project (Instill VDP, Instill Model, or both) you have launched locally, some services might not be available.
After making sure all desired services are serving, we can check the user status:

```python
client.mgmt_service.get_user()
```
If you have a valid `api_token` in your config file, you should see something like this:
```
name: "users/admin"
uid: "4767b74d-640a-4cdf-9c6d-7bb0e36098a0"
id: "admin"
type: OWNER_TYPE_USER
create_time {
  seconds: 1695589596
  nanos: 36522000
}
update_time {
  seconds: 1695589749
  nanos: 544980000
}
email: "hello@instill.tech"
first_name: "Instill"
last_name: "AI"
org_name: "Instill AI"
role: "hobbyist"
newsletter_subscription: true
cookie_token: ""
```
# Create Resource
# Create Model
Let's say we want to serve a `yolov7` model from GitHub with the following configs:
```python
model_name = "yolov7"
model_repo = "instill-ai/model-yolov7-dvc"
model_tag = "v1.0-cpu"
```
Simply import the `GithubModel` resource and fill in the corresponding fields:
```python
from instill.resources.model import GithubModel

yolov7 = GithubModel(
    client=client,
    name=model_name,
    model_repo=model_repo,
    model_tag=model_tag,
)
```
After the creation is done, we can check the state of the model:[^3]
```python
yolov7.get_state()
# 1
# means STATE_OFFLINE
```
Now we can deploy the model:

```python
yolov7.deploy()
```
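Deployment is not instantaneous, so you may want to poll until the model reports online. Below is a sketch of such a wait loop; the helper is hypothetical, and the state codes (1 for `STATE_OFFLINE`, 2 for `STATE_ONLINE`) are the ones shown elsewhere in this guide:

```python
import time

STATE_ONLINE = 2  # code reported by get_state() for an online model, per this guide

def wait_until_online(get_state, timeout_s=60.0, interval_s=2.0):
    """Poll get_state() until it returns STATE_ONLINE or the timeout elapses."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if get_state() == STATE_ONLINE:
            return True
        time.sleep(interval_s)
    return False

# usage with the model created above:
# wait_until_online(yolov7.get_state)
```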
Check the status:

```python
yolov7.get_state()
# 2
# means STATE_ONLINE
```
Trigger the model with the correct task type:[^4]
```python
from instill.resources import model_pb, task_detection

task_inputs = [
    model_pb.TaskInput(
        detection=task_detection.DetectionInput(
            image_url="https://artifacts.instill.tech/imgs/dog.jpg"
        )
    ),
    model_pb.TaskInput(
        detection=task_detection.DetectionInput(
            image_url="https://artifacts.instill.tech/imgs/bear.jpg"
        )
    ),
    model_pb.TaskInput(
        detection=task_detection.DetectionInput(
            image_url="https://artifacts.instill.tech/imgs/polar-bear.jpg"
        )
    ),
]

outputs = yolov7(task_inputs=task_inputs)
```
Now if you print the outputs, you will get a list of task-specific outputs; in this case, a list of `TASK_DETECTION` outputs:
```
[detection {
  objects {
    category: "dog"
    score: 0.958271801
    bounding_box {
      top: 102
      left: 324
      width: 208
      height: 403
    }
  }
  objects {
    category: "dog"
    score: 0.945684791
    bounding_box {
      top: 198
      left: 130
      width: 198
      height: 236
    }
  }
}
, detection {
  objects {
    category: "bear"
    score: 0.968335629
    bounding_box {
      top: 85
      left: 291
      width: 554
      height: 756
    }
  }
}
, detection {
  objects {
    category: "bear"
    score: 0.948612273
    bounding_box {
      top: 458
      left: 1373
      width: 1298
      height: 2162
    }
  }
}
]
```
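To consume these outputs programmatically, you typically iterate over the detections and filter by score. Here is a plain-Python sketch over dict-shaped data mirroring the printout above (the real outputs are protobuf messages; the dict shape and the `confident_categories` helper are for illustration only):

```python
# dict-shaped stand-ins mirroring the TASK_DETECTION printout above
outputs = [
    {"objects": [
        {"category": "dog", "score": 0.958},
        {"category": "dog", "score": 0.946},
    ]},
    {"objects": [{"category": "bear", "score": 0.968}]},
]

def confident_categories(outputs, threshold=0.95):
    """Collect the category of every object scoring at or above the threshold."""
    return [
        obj["category"]
        for detection in outputs
        for obj in detection["objects"]
        if obj["score"] >= threshold
    ]

print(confident_categories(outputs))  # prints: ['dog', 'bear']
```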
# Create Connector

With a similar concept to creating a model, below are the steps to create an Instill Model connector.
First, import our predefined `InstillModelConnector` and the config dataclass `InstillModelConnector1`:[^5]
```python
from instill.resources.schema.instill import InstillModelConnector1
from instill.resources import InstillModelConnector, connector_pb, const
```
Then we set up the connector resource information:[^5]
```python
# create the config dataclass object and fill in necessary fields
instill_model_config = InstillModelConnector1(mode=const.INSTILL_MODEL_INTERNAL_MODE)

instill_model = InstillModelConnector(
    client,
    name="instill",
    config=instill_model_config,
)
```
# Create Pipeline

Since we have created an Instill Model connector that connects to our Instill Model instance, we can now create a pipeline that utilizes both Instill VDP and Instill Model.

First, we import the `Pipeline` class and other helper functions:
```python
from instill.resources.schema import (
    instill_task_detection_input,
    start_task_start_metadata,
    end_task_end_metadata,
)
from instill.resources import (
    const,
    InstillModelConnector,
    Pipeline,
    create_start_operator,
    create_end_operator,
    create_recipe,
    populate_default_value,
)
```
To form a pipeline, it requires a `start` operator and an `end` operator; we have helper functions to create both:
```python
# define start component input spec
# each key you put inside the metadata dict represents a desired input field
start_metadata = {}
start_metadata.update(
    {
        "input_image": start_task_start_metadata.Model1(
            instillFormat="image/*",
            title="Image",
            type="string",
        )
    }
)

# create start component
start_operator_component = create_start_operator(start_metadata)
```
If you wish to define multiple input fields in the start component, simply add more `"key"` and `start_task_start_metadata.Model1` pairs:
```python
start_metadata.update(
    {
        "input_image": start_task_start_metadata.Model1(
            instillFormat="{your input format}",
            title="{input title}",
            type="{input type}",
        )
    }
)
```
Now we can create a model component. From the already defined Instill Model connector, we can utilize the models served on Instill Model by importing them as components.
```python
# first we create the input for the component from the dataclass
# here we need to specify which model we want to use on our `Instill Model` instance
# in this case there is only one model we deployed, which is the yolov7 model
instill_model_input = instill_task_detection_input.Input(
    model_namespace="admin",
    model_id="yolov7",
    image_base64="${start.input_image}",
)

# create a model connector component from the connector resource we created previously
instill_model_connector_component = instill_model.create_component(
    name="yolov7",
    inp=instill_model_input,
)
```
Finally, we create an end component.
```python
# define end component input and metadata spec
end_operator_inp = {}
end_operator_inp.update({"inference_result": "${yolov7.output.objects}"})

end_operator_metadata = {}
end_operator_metadata.update(
    {"inference_result": end_task_end_metadata.Model1(title="result")}
)

# create end component
end_operator_component = create_end_operator(end_operator_inp, end_operator_metadata)
```
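The `${component.field}` strings above are references that wire one component's output to another component's input. A toy illustration of how such references could resolve against a dict of component outputs (this is not the SDK's actual implementation):

```python
import re

def resolve_refs(template: str, context: dict) -> str:
    """Replace each ${a.b.c} reference with the value at context["a"]["b"]["c"]."""
    def lookup(match):
        node = context
        for part in match.group(1).split("."):
            node = node[part]
        return str(node)

    return re.sub(r"\$\{([^}]+)\}", lookup, template)

context = {"start": {"input_image": "<base64 payload>"}}
print(resolve_refs("${start.input_image}", context))  # prints: <base64 payload>
```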
We now have all the components ready for the pipeline. Next, we add them into the recipe and create a pipeline.
```python
# create a recipe to construct the pipeline
recipe = create_recipe(
    [
        start_operator_component,
        instill_model_connector_component,
        end_operator_component,
    ]
)

# create pipeline
instill_model_pipeline = Pipeline(
    client=client,
    name="instill-model-pipeline",
    recipe=recipe,
)
```
The pipeline is done; now let us test it by triggering it!
```python
# we can trigger the pipeline now
import base64

import requests
from google.protobuf.struct_pb2 import Struct

i = Struct()
i.update(
    {
        "input_image": base64.b64encode(
            requests.get(
                "https://artifacts.instill.tech/imgs/dog.jpg", timeout=5
            ).content
        ).decode("ascii")
    }
)

# verify the output
instill_model_pipeline([i])[0][0]["inference_result"][0]["category"] == "dog"
```
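If your input image lives on disk rather than behind a URL, the same base64 payload can be built from local bytes (plain Python; no SDK involved):

```python
import base64

def image_to_b64(data: bytes) -> str:
    """Encode raw image bytes as the ASCII base64 string the start operator expects."""
    return base64.b64encode(data).decode("ascii")

# in practice, read the bytes from a file:
# with open("dog.jpg", "rb") as f:
#     payload = image_to_b64(f.read())
payload = image_to_b64(b"\xff\xd8\xff")  # JPEG magic bytes as a stand-in
```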
# Footnotes

[^1]: You can obtain an `api_token` by simply going to the Settings > API Tokens page from the console, whether on Instill Core or Instill Cloud.
[^2]: The SDK will load the configs for the alias named `default` at start-up, so it is required to have at least one instance named `default`.
[^3]: Check out our supported tasks to learn more, or read our JSON schema directly.
[^4]: The config dataclass is auto-generated from our JSON schema; we will refactor the source JSON to make the dataclass names make more sense.
[^5]: Find the resource definition in our JSON schema.