The Groq component is an AI component that allows users to connect to the AI models served on GroqCloud. It can carry out the following tasks:

- Text Generation Chat
## Release Stage

Alpha

## Configuration

The component definition and tasks are defined in the `definition.json` and `tasks.json` files, respectively.
## Setup

In order to communicate with Groq, the following connection details need to be provided. You may specify them directly in a pipeline recipe as key-value pairs within the component's `setup` block, or you can create a Connection from the Integration Settings page and reference the whole `setup` as `setup: ${connection.<my-connection-id>}`.
Field | Field ID | Type | Note |
---|---|---|---|
API Key | api-key | string | Fill in your GroqCloud API key. To find your keys, visit the GroqCloud API Keys page. |
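For example, a `setup` block in a pipeline recipe could be written in either of the two ways described above. The sketch below uses a placeholder secret name and a placeholder connection ID; substitute your own.

```yaml
# Option 1: set the API key directly in the component's setup block,
# here referencing a pipeline secret (the secret name is a placeholder).
setup:
  api-key: ${secret.INSTILL_SECRET}
---
# Option 2: reference a whole pre-configured Connection by its ID
# (the connection ID below is a placeholder).
setup: ${connection.my-groq-connection}
```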
## Supported Tasks

### Text Generation Chat
Groq serves open-source text generation models (often called generative pre-trained transformers or large language models) that have been trained to understand natural language, code, and images. The models provide text outputs in response to their inputs, which are also referred to as "prompts". Designing a prompt is essentially how you "program" a large language model, usually by providing instructions or some examples of how to successfully complete a task.
Input | ID | Type | Description |
---|---|---|---|
Task ID (required) | task | string | TASK_TEXT_GENERATION_CHAT |
Model (required) | model | string | The OSS model to be used. |
Prompt (required) | prompt | string | The prompt text. |
System Message | system-message | string | The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. By default, the model’s behavior is set using a generic message as "You are a helpful assistant.". |
Prompt Images | prompt-images | array[string] | The prompt images (Note: Only a subset of OSS models support image inputs). |
Chat History | chat-history | array[object] | Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: {"role": "The message role, i.e. 'system', 'user' or 'assistant'", "content": "message content"}. |
Seed | seed | integer | The seed. |
Temperature | temperature | number | The temperature for sampling. |
Top K | top-k | integer | Integer to define the top tokens considered within the sample operation to create new text. |
Max New Tokens | max-new-tokens | integer | The maximum number of tokens for the model to generate. |
Top P | top-p | number | Float to define the tokens that are within the sample operation of text generation. Tokens are added to the sample from most probable to least probable until the sum of their probabilities is greater than top-p (default: 0.5). |
User | user | string | The user name passed to the GroqCloud platform. |
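As a minimal illustration of the fields above, a task input could look like the following sketch. The model name is taken from the example recipe at the end of this page; the other values are placeholders.

```yaml
# Minimal task input sketch: the required fields plus a few common options.
task: TASK_TEXT_GENERATION_CHAT
input:
  model: llama3-groq-70b-8192-tool-use-preview   # model name from the example recipe
  prompt: Summarize the following text in one sentence.
  max-new-tokens: 120        # optional: cap the response length
  temperature: 0.2           # optional: lower values give more deterministic output
```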
Input Objects in Text Generation Chat
Chat History
Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: {"role": "The message role, i.e. 'system', 'user' or 'assistant'", "content": "message content"}. An illustrative example follows the tables below.
Field | Field ID | Type | Note |
---|---|---|---|
Content | content | array | The message content. |
Role | role | string | The message role, i.e. 'system', 'user' or 'assistant'. |
Content
The message content.
Field | Field ID | Type | Note |
---|---|---|---|
Image URL | image-url | object | The image URL. |
Text | text | string | The text content. |
Type | type | string | The type of the content part (enum). |
Image URL
The image URL.
Field | Field ID | Type | Note |
---|---|---|---|
URL | url | string | Either a URL of the image or the base64 encoded image data. |
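To make the nested structure above concrete, the sketch below shows how a `chat-history` value might be written inside the task input of a recipe. The conversation content and image URL are placeholders, and the `type` values `text` and `image-url` are assumptions inferred from the field IDs above rather than a confirmed enum list.

```yaml
# Illustrative chat-history value inside the task input (placeholder content).
input:
  chat-history:
    - role: user
      content:
        - type: text            # assumed enum value, inferred from the Text field ID
          text: What is shown in this picture?
        - type: image-url       # assumed enum value, inferred from the Image URL field ID
          image-url:
            url: https://example.com/photo.png   # or base64-encoded image data
    - role: assistant
      content:
        - type: text
          text: The picture shows a whiteboard with a system diagram.
```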
Output | ID | Type | Description |
---|---|---|---|
Text | text | string | Model output. |
Usage (optional) | usage | object | Token usage on the GroqCloud platform text generation models. |
Output Objects in Text Generation Chat
Usage
Field | Field ID | Type | Note |
---|---|---|---|
Input Tokens | input-tokens | number | The input tokens used by GroqCloud OSS models. |
Output Tokens | output-tokens | number | The output tokens generated by GroqCloud OSS models. |
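As a usage note, the generated text is referenced downstream as `${groq-0.output.text}` (see the example recipe below). Assuming nested output fields can be addressed with the same dot notation, the token counts could be surfaced in a similar way; the mapping below is a sketch under that assumption, not confirmed syntax.

```yaml
# Hypothetical output mapping that also exposes token usage,
# assuming nested fields are addressable with dot notation.
output:
  answer:
    title: answer
    value: ${groq-0.output.text}
  output-tokens:
    title: output-tokens
    value: ${groq-0.output.usage.output-tokens}
```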
## Example Recipes
Recipe for the Groq Interview Helper pipeline.
```yaml
version: v1beta
component:
  groq-0:
    type: groq
    task: TASK_TEXT_GENERATION_CHAT
    input:
      max-new-tokens: 300
      model: llama3-groq-70b-8192-tool-use-preview
      prompt: |-
        Rewrite this experience using the STAR (Situation, Task, Action, Result) method for a resume or CV:
        ${variable.experience}
      system-message: You are a helpful resume assistant.
      temperature: 0.05
      top-k: 10
      top-p: 0.5
      user: instill-ai
    setup:
      api-key: ${secret.INSTILL_SECRET}
variable:
  experience:
    title: experience
    description: describe your work experience
    format: string
    instill-ui-multiline: true
output:
  resume_format:
    title: resume_format
    value: ${groq-0.output.text}
```