Prepare post-processing model

#Standardise post-processing output format

You can prepare the post-processing model the same way as the pre-processing model. However, to get the model inference output in a standardised format, you can specify the corresponding AI task when creating the model and prepare the post-processing output as described in the sections below.

If no task is specified when creating a model, the output will be the raw model output in a serialized JSON message.

#Image Classification

INFO

Learn more about Image Classification task

Assume we have a "cat vs. dog" model that infers whether an image contains a cat or a dog. Create a labels.txt file listing all the pre-defined categories, one category label per line, and add the file to the folder of the inference model.

labels.txt example


cat
dog

Include the label file labels.txt in the model configuration of the inference model.

config.pbtxt example


...
output [
  {
    ...
    label_filename: "labels.txt"
  }
]
...
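
For reference, a complete output block could look something like the following sketch. The tensor name, data type, and dimensions here are placeholders for this hypothetical "cat vs. dog" model, not values required by the platform; only the label_filename field is needed to enable the standardised classification output.

output [
  {
    name: "output"            # placeholder tensor name
    data_type: TYPE_FP32      # placeholder data type
    dims: [ 2 ]               # one score per category listed in labels.txt
    label_filename: "labels.txt"
  }
]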

Check the standardised output for Image Classification. Here is an output example:


{
  "task": "TASK_CLASSIFICATION",
  "task_outputs": [
    {
      "classification": {
        "category": "dog",
        "score": 0.9
      }
    }
  ]
}

#Object Detection

INFO

Learn more about Object Detection task

Create a Python file with a structure similar to the one below. The file inherits from the PostDetectionModel class and implements the post_process_per_image abstract method. Then, add the file to the post-processing model folder:

model.py

import numpy as np
from triton_python_model.task.detection import PostDetectionModel


class TritonPythonModel(PostDetectionModel):
    """Your Python model must use the same class name.
    Every Python model that is created must have "TritonPythonModel" as the class name.
    """

    def __init__(self):
        """ Constructor function must be implemented in every model.
        This function initializes the names of the input and output
        variables in the model configuration.
        """
        # super().__init__(input_names=[...], output_names=[...])
        # ...

    def post_process_per_image(self, inputs):
        """`post_process_per_image` must be implemented in every Python model.
        This function detects objects in one image of a batch.
        Args:
            inputs (Tuple[np.ndarray]): a sequence of model input array for one image
        Raises:
            NotImplementedError: all subclasses must implement this function of per-image post-processing for `DETECTION` task.
        Returns:
            Tuple[np.ndarray, np.ndarray]: a tuple of bounding box with label array: (`bboxes`, `labels`).
            - `bboxes`: the bounding boxes detected in this image with shape (n,5) or (0,). The bounding box format is [x1, y1, x2, y2, score] in the original image. `n` is the number of bounding boxes detected in this image.
            - `labels`: the labels corresponding to the bounding boxes with shape (n,) or (0,).
            The length of `bboxes` must be the same as that of `labels`.
        """
        # return np.array([[324, 102, 532, 507, 0.98]]), np.array(["dog"])  # Dummy detection example
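
For illustration, here is a minimal sketch of a concrete implementation. It assumes a hypothetical inference model that outputs, per image, an (n, 5) tensor of [x1, y1, x2, y2, score] boxes and an (n,) tensor of class indices; the tensor names, output names, and the CLASS_NAMES list are assumptions for this example, not part of the library.

import numpy as np
from triton_python_model.task.detection import PostDetectionModel

# Hypothetical category list matching the assumed inference model.
CLASS_NAMES = ["cat", "dog"]


class TritonPythonModel(PostDetectionModel):
    def __init__(self):
        # Tensor names here are assumptions; use the names defined in your
        # own model configuration.
        super().__init__(input_names=["bboxes", "class_ids"],
                         output_names=["output_bboxes", "output_labels"])

    def post_process_per_image(self, inputs):
        bboxes, class_ids = inputs
        if bboxes.size == 0:
            # No detections in this image: return empty arrays as documented.
            return np.zeros((0,)), np.zeros((0,))
        # Map numeric class indices to human-readable category labels.
        labels = np.array([CLASS_NAMES[int(i)] for i in class_ids])
        return bboxes, labels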

Check the standardised output for Object Detection. Note that the [x1, y1, x2, y2, score] box returned by post_process_per_image (e.g. [324, 102, 532, 507, 0.98] above) is reported as top/left/width/height in the standardised output. Here is an output example:


{
  "task": "TASK_DETECTION",
  "task_outputs": [
    {
      "detection": {
        "objects": [
          {
            "category": "dog",
            "score": 0.98,
            "bounding_box": {
              "top": 102,
              "left": 324,
              "width": 208,
              "height": 405
            }
          }
        ]
      }
    }
  ]
}

#Keypoint Detection

INFO

Learn more about Keypoint Detection task

Create a Python file with a structure similar to the one below and add the file to the post-processing model folder:

model.py

import numpy as np
from triton_python_model.task.keypoint import PostKeypointDetectionModel


class TritonPythonModel(PostKeypointDetectionModel):
    """Your Python model must use the same class name.
    Every Python model that is created must have "TritonPythonModel" as the class name.
    """

    def __init__(self):
        """ Constructor function must be implemented in every model.
        This function initializes the names of the input and output
        variables in the model configuration.
        """
        # super().__init__(input_names=[...], output_names=[...])
        # ...

    def post_process_per_image(self, inputs):
        """`post_process_per_image` must be implemented in every Python model.
        This function detects keypoints of objects in one image of a batch.
        Args:
            inputs (Tuple[np.ndarray]): a sequence of model input array for one image
        Raises:
            NotImplementedError: all subclasses must implement this function of per-image post-processing for `KEYPOINT` task.
        Returns:
            np.ndarray: Keypoint Detection score array `scores` for this image. The shape of `scores` should be (n,), where n is the number of categories.
        """

Check the standardised output for Keypoint Detection. Here is an output example:


{
  "task": "TASK_KEYPOINT",
  "task_outputs": [
    {
      "keypoint": {
        "objects": [
          {
            "keypoints": [
              {
                "x": 1052.8419,
                "y": 610.0058,
                "v": 0.84
              },
              {
                "x": 1047.5118,
                "y": 514.04474,
                "v": 0.81
              },
              ...
            ],
            "score": 0.99,
            "bounding_box": {
              "top": 299,
              "left": 185,
              "width": 1130,
              "height": 1210
            }
          }
        ]
      }
    }
  ]
}
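
Like the other standardised outputs, this is plain JSON and can be consumed directly. The sketch below uses the example above as its payload, truncated to two keypoints so it parses as valid JSON; it walks the detected objects and prints each keypoint's x/y coordinates and its v score (the keypoint's visibility/confidence).

import json

# Example payload following the standardised keypoint output (truncated to two keypoints).
payload = """
{
  "task": "TASK_KEYPOINT",
  "task_outputs": [
    {
      "keypoint": {
        "objects": [
          {
            "keypoints": [
              {"x": 1052.8419, "y": 610.0058, "v": 0.84},
              {"x": 1047.5118, "y": 514.04474, "v": 0.81}
            ],
            "score": 0.99,
            "bounding_box": {"top": 299, "left": 185, "width": 1130, "height": 1210}
          }
        ]
      }
    }
  ]
}
"""

response = json.loads(payload)
for output in response["task_outputs"]:
    for obj in output["keypoint"]["objects"]:
        print("object score:", obj["score"], "bounding box:", obj["bounding_box"])
        for kp in obj["keypoints"]:
            # x/y are pixel coordinates; v is the per-keypoint visibility/confidence score.
            print(kp["x"], kp["y"], kp["v"])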

#Instance Segmentation task

INFO

Learn more about Instance Segmentation task

Check the standardised output for the Instance Segmentation task. Here is an output example:


{
  "task": "TASK_INSTANCE_SEGMENTATION",
  "task_outputs": [
    {
      "instance_segmentation": {
        "objects": [
          {
            "rle": "2918,12,382,33,...",
            "score": 0.99,
            "bounding_box": {
              "top": 95,
              "left": 320,
              "width": 215,
              "height": 406
            },
            "category": "dog"
          },
          ...
        ]
      }
    }
  ]
}
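
The rle field is a run-length encoding of the instance mask. As a rough sketch only, assuming the comma-separated values are alternating background/foreground run lengths over the flattened (row-major) bounding-box region (verify the exact convention against your deployment before relying on it), the mask could be reconstructed along these lines:

import numpy as np

def decode_rle(rle: str, height: int, width: int) -> np.ndarray:
    """Decode a comma-separated run-length string into a binary mask.

    Assumption (not confirmed by this guide): counts alternate between
    background (0) and foreground (1) runs, starting with background,
    over the row-major flattened height x width region.
    """
    counts = [int(c) for c in rle.split(",") if c]
    mask = np.zeros(height * width, dtype=np.uint8)
    pos, value = 0, 0
    for count in counts:
        if value == 1:
            mask[pos:pos + count] = 1
        pos += count
        value = 1 - value
    return mask.reshape(height, width)

# Hypothetical usage with the bounding box size from the example above.
# mask = decode_rle("2918,12,382,33", height=406, width=215)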

#Unspecified AI task

INFO

Learn more about Unspecified AI task

If your model is imported without specifying any task metadata, it will be treated as solving an Unspecified AI task, and there is no need to prepare your model outputs to fit any particular format.

Check the standardised output for the Unspecified AI task. Assuming we import the above "cat vs. dog" model without specifying the AI task metadata, here is an output example:


{
  "task": "TASK_UNSPECIFIED",
  "task_outputs": [
    {
      "unspecified": {
        "raw_outputs": [
          {
            "data": [0, 1],
            "data_type": "FP32",
            "name": "output",
            "shape": [2]
          }
        ]
      }
    }
  ]
}
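
Each entry in raw_outputs carries the flattened tensor data together with its name, data type, and shape, so the tensor can be rebuilt on the client side. A minimal sketch follows; the data_type-to-NumPy mapping shown is an assumption for illustration, so extend it for the data types your model actually emits.

import numpy as np

# Assumed mapping from data_type strings to NumPy dtypes; add entries as needed.
DTYPE_MAP = {"FP32": np.float32, "INT64": np.int64}

def rebuild_raw_output(raw_output: dict) -> np.ndarray:
    """Rebuild a tensor from one `raw_outputs` entry of the standardised output."""
    dtype = DTYPE_MAP[raw_output["data_type"]]
    return np.array(raw_output["data"], dtype=dtype).reshape(raw_output["shape"])

# Example with the entry shown above.
tensor = rebuild_raw_output(
    {"data": [0, 1], "data_type": "FP32", "name": "output", "shape": [2]}
)
print(tensor)  # [0. 1.]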
