# Universal AI

The Universal AI component is an AI component that allows users to connect to AI models served on different platforms through standardized input and output formats. It can carry out the following tasks:

- Chat

## Release Stage

Alpha

## Configuration

The component definition and tasks are defined in the `definition.json` and `tasks.json` files, respectively.

## Setup

To communicate with the external application, the following connection details need to be provided. You may specify them directly in a pipeline recipe as key-value pairs within the component's setup block, or you can create a Connection from the Integration Settings page and reference the whole setup as `setup: ${connection.<my-connection-id>}`.

| Field | Field ID | Type | Note |
| :--- | :--- | :--- | :--- |
| API Key | `api-key` | string | Fill in your API key from the vendor's platform. |
| Organization ID | `organization` | string | Specify which organization is used for the requests. Usage will count against the specified organization's subscription quota. |
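
As an illustration, a component's setup block in a pipeline recipe could look like the sketch below. The field IDs follow the table above; the placeholder values and the `my-openai-connection` connection ID are hypothetical.

```yaml
# Minimal sketch of a setup block; field IDs follow the table above.
setup:
  api-key: sk-xxxxxxxxxxxxxxxx   # placeholder API key from the vendor's platform
  organization: org-xxxxxxxx     # optional; usage counts against this organization's quota

# Alternatively, reference a Connection created on the Integration Settings page
# ("my-openai-connection" is a hypothetical connection ID):
# setup: ${connection.my-openai-connection}
```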

## Supported Tasks

### Chat

Generate a response based on conversation input.

| Input | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Task ID (required) | `task` | string | `TASK_CHAT` |
| Chat Data (required) | `data` | object | Input data |
| Input Parameter | `parameter` | object | Input parameter |
Input Objects in Chat

Chat Data

Input data

| Field | Field ID | Type | Note |
| :--- | :--- | :--- | :--- |
| Chat Messages | `messages` | array | List of chat messages |
| Model Name | `model` | string | The model to be used. Currently only OpenAI models are supported; more models will be added in the future. |
Enum values
  • o1-preview
  • o1-mini
  • gpt-4o-mini
  • gpt-4o
  • gpt-4o-2024-05-13
  • gpt-4o-2024-08-06
  • gpt-4-turbo
  • gpt-4-turbo-2024-04-09
  • gpt-4-0125-preview
  • gpt-4-turbo-preview
  • gpt-4-1106-preview
  • gpt-4-vision-preview
  • gpt-4
  • gpt-4-0314
  • gpt-4-0613
  • gpt-4-32k
  • gpt-4-32k-0314
  • gpt-4-32k-0613
  • gpt-3.5-turbo
  • gpt-3.5-turbo-16k
  • gpt-3.5-turbo-0301
  • gpt-3.5-turbo-0613
  • gpt-3.5-turbo-1106
  • gpt-3.5-turbo-0125
  • gpt-3.5-turbo-16k-0613

Chat Messages

List of chat messages

| Field | Field ID | Type | Note |
| :--- | :--- | :--- | :--- |
| Content | `content` | array | The message content |
| Name | `name` | string | An optional name for the participant. Provides the model with information to differentiate between participants of the same role. |
| Role | `role` | string | The message role, i.e. 'system', 'user' or 'assistant' |
Enum values
  • system
  • user
  • assistant

Input Parameter

Input parameter

| Field | Field ID | Type | Note |
| :--- | :--- | :--- | :--- |
| Max New Tokens | `max-tokens` | integer | The maximum number of tokens for the model to generate |
| Number of Choices | `n` | integer | How many chat completion choices to generate for each input message. |
| Seed | `seed` | integer | The seed; defaults to 0. |
| Stream | `stream` | boolean | If set, partial message deltas will be sent. Tokens will be sent as data-only server-sent events as they become available. |
| Temperature | `temperature` | number | The temperature for sampling |
| Top P | `top-p` | number | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with `top-p` probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. |
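
Putting the fields above together, a `TASK_CHAT` input could be shaped as in the sketch below. Field IDs follow the tables above; the message values and parameter settings are illustrative, and the exact shape of each item in the `content` array is an assumption based on the array type noted above.

```yaml
task: TASK_CHAT
data:
  model: gpt-4o-mini                 # one of the enum values listed above
  messages:
    - role: system
      content:
        - text: You are a helpful assistant.   # assumed shape of a content item
    - role: user
      name: alice                              # optional participant name
      content:
        - text: Summarize this conversation in two sentences.
parameter:
  max-tokens: 256
  n: 1
  seed: 0
  stream: false
  temperature: 0.7
  top-p: 0.9
```
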
| Output | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Output Data | `data` | object | Output data |
| Output Metadata (optional) | `metadata` | object | Output metadata |
Output Objects in Chat

Output Data

| Field | Field ID | Type | Note |
| :--- | :--- | :--- | :--- |
| Choices | `choices` | array | List of chat completion choices |

Choices

| Field | Field ID | Type | Note |
| :--- | :--- | :--- | :--- |
| Created | `created` | integer | The Unix timestamp (in seconds) of when the chat completion was created. |
| Finish Reason | `finish-reason` | string | The reason the model stopped generating tokens. |
| Index | `index` | integer | The index of the choice in the list of choices. |
| Message | `message` | object | A chat message generated by the model. |

Message

| Field | Field ID | Type | Note |
| :--- | :--- | :--- | :--- |
| Content | `content` | string | The contents of the message. |
| Role | `role` | string | The role of the author of this message. |

Output Metadata

| Field | Field ID | Type | Note |
| :--- | :--- | :--- | :--- |
| Usage | `usage` | object | Usage statistics for the request. |

Usage

| Field | Field ID | Type | Note |
| :--- | :--- | :--- | :--- |
| Completion Tokens | `completion-tokens` | integer | Number of tokens in the generated response. |
| Prompt Tokens | `prompt-tokens` | integer | Number of tokens in the prompt. |
| Total Tokens | `total-tokens` | integer | Total number of tokens used in the request (prompt + completion). |
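
For reference, a `TASK_CHAT` output could be shaped as in the sketch below; field IDs follow the output tables above, and all values are illustrative.

```yaml
data:
  choices:
    - created: 1727000000            # Unix timestamp (seconds) of creation
      finish-reason: stop
      index: 0
      message:
        role: assistant
        content: The conversation covers setup details and a summary request.
metadata:
  usage:
    completion-tokens: 18
    prompt-tokens: 42
    total-tokens: 60
```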