OpenAI

The OpenAI component is an AI connector that allows users to connect to the AI models served on the OpenAI Platform. It can carry out the following tasks:

- Text Generation
- Text Embeddings
- Speech Recognition
- Text To Speech
- Text To Image

#Release Stage

Alpha

#Configuration

The component configuration is defined and maintained here.

#Connection

| Field | Field ID | Type | Note |
| :--- | :--- | :--- | :--- |
| API Key (required) | `api_key` | string | Fill in your OpenAI API key. To find your keys, visit your OpenAI API Keys page. |
| Organization ID | `organization` | string | Specify which organization is used for the requests. Usage will count against the specified organization's subscription quota. |
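Under the hood, these two connection fields map onto OpenAI's standard HTTP authentication headers. A minimal sketch of that mapping (the helper name is illustrative, not part of the connector's API):

```python
from typing import Optional


def build_openai_headers(api_key: str, organization: Optional[str] = None) -> dict:
    """Build the HTTP headers OpenAI expects for an authenticated request.

    `api_key` corresponds to the required `api_key` field; `organization`
    corresponds to the optional `organization` field and, when set, directs
    usage to that organization's subscription quota.
    """
    headers = {"Authorization": f"Bearer {api_key}"}
    if organization:
        headers["OpenAI-Organization"] = organization
    return headers
```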

#Supported Tasks

#Text Generation

Provide text outputs in response to text inputs.

| Input | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Task ID (required) | `task` | string | `TASK_TEXT_GENERATION` |
| Model (required) | `model` | string | ID of the model to use. See the model endpoint compatibility table for details on which models work with the Chat API. |
| Prompt (required) | `prompt` | string | The prompt text. |
| System Message | `system_message` | string | The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. By default, the model uses a generic system message of "You are a helpful assistant." |
| Image | `images` | array[string] | The images. |
| Chat History | `chat_history` | array[object] | Incorporate external chat history, specifically previous messages within the conversation. Please note that the System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format `{"role": "<'system', 'user' or 'assistant'>", "content": "<message content>"}`. |
| Temperature | `temperature` | number | The sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. |
| N | `n` | integer | How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep `n` as 1 to minimize costs. |
| Max Tokens | `max_tokens` | integer | The maximum number of tokens that can be generated in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length. |
| Response Format | `response_format` | object | An object specifying the format that the model must output. Used to enable JSON mode. |
| Top P | `top_p` | number | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with `top_p` probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or `temperature` but not both. |
| Presence Penalty | `presence_penalty` | number | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. |
| Frequency Penalty | `frequency_penalty` | number | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. |

| Output | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Texts | `texts` | array[string] | Texts |
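The interaction between `prompt`, `system_message`, and `chat_history` can be sketched as a small helper that assembles the Chat API `messages` array. This is an illustrative sketch of the rule described above (the System Message is ignored when `chat_history` is populated), not the connector's actual implementation:

```python
from typing import Optional


def build_messages(prompt: str,
                   system_message: Optional[str] = None,
                   chat_history: Optional[list] = None) -> list:
    """Assemble a Chat API `messages` array from the component's inputs.

    When `chat_history` is populated, the system message is ignored and the
    prior turns lead the conversation; otherwise the system message (or the
    generic default) comes first.
    """
    if chat_history:
        # Prior {"role": ..., "content": ...} turns; system_message has no effect.
        messages = list(chat_history)
    else:
        messages = [{"role": "system",
                     "content": system_message or "You are a helpful assistant."}]
    messages.append({"role": "user", "content": prompt})
    return messages
```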

#Text Embeddings

Turn text into numbers, unlocking use cases like search.

| Input | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Task ID (required) | `task` | string | `TASK_TEXT_EMBEDDINGS` |
| Model (required) | `model` | string | ID of the model to use. You can use the List models API to see all of your available models, or see the Model overview for descriptions of them. |
| Text (required) | `text` | string | The text. |

| Output | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Embedding | `embedding` | array[number] | Embedding of the input text |
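The "search" use case mentioned above typically works by comparing embedding vectors with cosine similarity. A minimal sketch, assuming the embeddings have already been produced by this task:

```python
import math


def cosine_similarity(a: list, b: list) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm


def rank_by_similarity(query_embedding: list, doc_embeddings: list) -> list:
    """Return document indices ordered from most to least similar to the query."""
    scores = [cosine_similarity(query_embedding, e) for e in doc_embeddings]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
```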

#Speech Recognition

Turn audio into text.

| Input | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Task ID (required) | `task` | string | `TASK_SPEECH_RECOGNITION` |
| Model (required) | `model` | string | ID of the model to use. Only `whisper-1` is currently available. |
| Audio (required) | `audio` | string | The audio file object (not file name) to transcribe, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm. |
| Prompt | `prompt` | string | An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language. |
| Language | `language` | string | The language of the input audio. Supplying the input language in ISO-639-1 format will improve accuracy and latency. |
| Temperature | `temperature` | number | The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit. |

| Output | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Text | `text` | string | Generated text |
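Because the audio formats are a fixed list, it can be worth checking a file before sending it. A minimal sketch (the helper is illustrative, not part of the connector):

```python
# Formats Whisper accepts, as listed in the Audio input above.
SUPPORTED_AUDIO_FORMATS = {
    "flac", "mp3", "mp4", "mpeg", "mpga", "m4a", "ogg", "wav", "webm",
}


def is_supported_audio(filename: str) -> bool:
    """Check a file's extension against the supported transcription formats."""
    ext = filename.rsplit(".", 1)[-1].lower()
    return ext in SUPPORTED_AUDIO_FORMATS
```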

#Text To Speech

Turn text into lifelike spoken audio.

| Input | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Task ID (required) | `task` | string | `TASK_TEXT_TO_SPEECH` |
| Model (required) | `model` | string | One of the available TTS models: `tts-1` or `tts-1-hd`. |
| Text (required) | `text` | string | The text to generate audio for. The maximum length is 4096 characters. |
| Voice (required) | `voice` | string | The voice to use when generating the audio. Supported voices are `alloy`, `echo`, `fable`, `onyx`, `nova`, and `shimmer`. Previews of the voices are available in the Text to speech guide. |
| Response Format | `response_format` | string | The format of the generated audio. Supported formats are mp3, opus, aac, and flac. |
| Speed | `speed` | number | The speed of the generated audio. Select a value from 0.25 to 4.0. 1.0 is the default. |

| Output | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Audio (optional) | `audio` | string | AI generated audio |
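The limits in the table above (text length, voice names, formats, speed range) can be validated before a request is made. A minimal sketch under those documented constraints; the function name is illustrative:

```python
# Values taken from the Text To Speech input table above.
TTS_VOICES = {"alloy", "echo", "fable", "onyx", "nova", "shimmer"}
TTS_FORMATS = {"mp3", "opus", "aac", "flac"}


def validate_tts_input(text: str, voice: str,
                       response_format: str = "mp3", speed: float = 1.0) -> None:
    """Raise ValueError if any field violates the documented limits."""
    if len(text) > 4096:
        raise ValueError("text exceeds the 4096-character limit")
    if voice not in TTS_VOICES:
        raise ValueError(f"unsupported voice: {voice}")
    if response_format not in TTS_FORMATS:
        raise ValueError(f"unsupported response format: {response_format}")
    if not 0.25 <= speed <= 4.0:
        raise ValueError("speed must be between 0.25 and 4.0")
```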

#Text To Image

Generate or manipulate images with DALL·E.

| Input | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Task ID (required) | `task` | string | `TASK_TEXT_TO_IMAGE` |
| Model (required) | `model` | string | The model to use for image generation. |
| Prompt (required) | `prompt` | string | A text description of the desired image(s). The maximum length is 1000 characters for `dall-e-2` and 4000 characters for `dall-e-3`. |
| N | `n` | integer | The number of images to generate. Must be between 1 and 10. For `dall-e-3`, only `n=1` is supported. |
| Quality | `quality` | string | The quality of the image that will be generated. `hd` creates images with finer details and greater consistency across the image. This param is only supported for `dall-e-3`. |
| Size | `size` | string | The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024 for `dall-e-2`. Must be one of 1024x1024, 1792x1024, or 1024x1792 for `dall-e-3` models. |
| Style | `style` | string | The style of the generated images. Must be one of `vivid` or `natural`. Vivid causes the model to lean towards generating hyper-real and dramatic images. Natural causes the model to produce more natural, less hyper-real looking images. This param is only supported for `dall-e-3`. |

| Output | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Images | `results` | array[object] | Generated results |
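The per-model constraints above (prompt length, `n`, and `size`) differ between `dall-e-2` and `dall-e-3`, so a pre-flight check can catch invalid combinations early. A minimal sketch based on the limits in the table; the helper is illustrative:

```python
# Per-model size constraints from the Text To Image input table above.
ALLOWED_SIZES = {
    "dall-e-2": {"256x256", "512x512", "1024x1024"},
    "dall-e-3": {"1024x1024", "1792x1024", "1024x1792"},
}


def validate_image_request(model: str, prompt: str,
                           n: int = 1, size: str = "1024x1024") -> None:
    """Raise ValueError if the request violates the documented per-model limits."""
    max_prompt = 4000 if model == "dall-e-3" else 1000
    if len(prompt) > max_prompt:
        raise ValueError(f"prompt exceeds {max_prompt} characters for {model}")
    if not 1 <= n <= 10:
        raise ValueError("n must be between 1 and 10")
    if model == "dall-e-3" and n != 1:
        raise ValueError("dall-e-3 supports only n=1")
    if size not in ALLOWED_SIZES.get(model, set()):
        raise ValueError(f"{size} is not a valid size for {model}")
```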

Last updated: 4/29/2024, 5:53:52 AM