# Standardise post-processing output format
You can prepare the post-processing model in the same way as the pre-processing model. However, to get the model inference output in a standardised format, you can either

- specify a supported AI task when creating the model card, or
- create a Python model that inherits the corresponding post-processing task class in `triton_python_model`.

If no task is specified when creating a model, the output will be the raw model output in a serialised JSON message.
## Image Classification
Learn more about the Image Classification task.
Assume we have a "cat vs. dog" model to infer whether an image is a cat image or a dog image. Create a `labels.txt` file that lists all the pre-defined categories, with one category label per line, and add the file to the folder of the inference model.
`labels.txt` example:

```
cat
dog
```
Include the label file `labels.txt` in the model configuration of the inference model.
`config.pbtxt` example:

```
...
output [
  {
    ...
    label_filename: "labels.txt"
  }
]
...
```
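For orientation, a fuller `output` block might look like the sketch below; the output name, data type and dimensions here are assumptions for the two-class "cat vs. dog" model and must match your actual inference model's configuration.

```
output [
  {
    name: "output"
    data_type: TYPE_FP32
    dims: [ 2 ]
    label_filename: "labels.txt"
  }
]
```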
Check the standardised output format for Image Classification; here is an output example:
{ "task": "TASK_CLASSIFICATION", "task_outputs": [ { "classification": { "category": "dog", "score": 0.9 } } ]}
## Object Detection
Learn more about the Object Detection task.
Create a Python file with a structure similar to the sketch below. The file inherits the `PostDetectionModel` class and implements the `post_process_per_image` abstract method.
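A minimal sketch is shown below. The base class name and abstract method come from the description above; the import path, method signature and return format are assumptions to adapt to the actual `triton_python_model` interface.

```python
# model.py -- a minimal sketch, not the exact triton_python_model interface.
# The import path, method signature and return format below are assumptions.
import numpy as np

from triton_python_model import PostDetectionModel  # exact import path may differ


class TritonPythonModel(PostDetectionModel):
    """Post-processing model turning raw detector tensors into standardised detections."""

    def post_process_per_image(self, inputs):
        """Post-process the raw output tensors of a single image.

        Assumption: `inputs` is a tuple of (boxes, scores, classes) tensors and the
        method returns per-object bounding boxes with scores plus their labels.
        """
        raw_boxes, raw_scores, raw_classes = inputs
        # Stack [top, left, width, height, score] per detected object.
        boxes = np.hstack([raw_boxes, raw_scores.reshape(-1, 1)]).astype(np.float32)
        labels = raw_classes.astype(str)
        return boxes, labels
```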
Then, add the file to the post-processing model folder.
Check the standardised output format for Object Detection; here is an output example:
{ "task": "TASK_DETECTION", "task_outputs": [ { "detection": { "objects": [ { "category": "dog", "score": 0.98, "bounding_box": { "top": 102, "left": 324, "width": 208, "height": 405 } } ] } } ]}
## Keypoint Detection
Learn more about the Keypoint Detection task.
Create a Python file with a structure similar to the sketch below and add the file to the post-processing model folder.
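A hypothetical sketch is shown below, assuming the keypoint task exposes a base class analogous to `PostDetectionModel`; the class name `PostKeypointDetectionModel`, the import path, the method signature and the return format are assumptions, so check `triton_python_model` for the actual interface.

```python
# model.py -- a hypothetical sketch. The base class name, import path, method
# signature and return format below are assumptions, not the actual
# triton_python_model keypoint interface.
import numpy as np

from triton_python_model import PostKeypointDetectionModel  # hypothetical class name


class TritonPythonModel(PostKeypointDetectionModel):
    """Post-processing model turning raw tensors into standardised keypoint objects."""

    def post_process_per_image(self, inputs):
        """Post-process the raw output tensors of a single image.

        Assumption: `inputs` is a tuple of (keypoints, scores, boxes) tensors; the
        method returns, per detected object, its [x, y, v] keypoints, a confidence
        score and a [top, left, width, height] bounding box.
        """
        raw_keypoints, raw_scores, raw_boxes = inputs
        keypoints = raw_keypoints.astype(np.float32)  # shape: (objects, num_keypoints, 3)
        scores = raw_scores.astype(np.float32)
        boxes = raw_boxes.astype(np.float32)
        return keypoints, scores, boxes
```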
Check the standardised output format for Keypoint Detection; here is an output example:
{ "task": "TASK_KEYPOINT", "task_outputs": [ { "keypoint": { "objects": [ { "keypoints": [ { "x": 1052.8419, "y": 610.0058, "v": 0.84 }, { "x": 1047.5118, "y": 514.04474, "v": 0.81 }, ... ], "score": 0.99, "bounding_box": { "top": 299, "left": 185, "width": 1130, "height": 1210 } } ] } } ]}
## Instance Segmentation task
Learn more about the Instance Segmentation task.
Check the standardised output format for the Instance Segmentation task; here is an output example:
{ "task": "TASK_INSTANCE_SEGMENTATION", "task_outputs": [ { "instance_segmentation": { "objects": [ { "rle": "2918,12,382,33,...", "score": 0.99, "bounding_box": { "top": 95, "left": 320, "width": 215, "height": 406 }, "category": "dog" }, ... ] } } ]}
## Unspecified AI task
Learn more about the Unspecified AI task.
If your model is imported without specifying any task metadata, it will be recognised as solving an Unspecified task. There is no need to prepare your model outputs to fit any particular format.
Check the standardised output format for the Unspecified AI task. Assume we import the above "cat vs. dog" model without specifying the AI task metadata; here is an output example:
{ "task": "TASK_UNSPECIFIED", "task_outputs": [ { "unspecified": { "raw_outputs": [ { "data": [0, 1], "data_type": "FP32", "name": "output", "shape": [2] } ] } } ]}