Instill Credit is designed to ease the adoption of ☁️ Instill Cloud by minimizing the time required to build and set up a pipeline. After setting up your account, you'll receive 10,000 free credits every month, which can be spent on the following actions:
- Run your own pipelines, or any public pipeline available on Explore.
- Execute pre-configured AI components without needing to create accounts or API keys on 3rd party services.
We offer different subscription plans for users who need more monthly credits to fulfill their pipeline run and API consumption needs.
#Model Run
The credit consumption of a model run is calculated as follows: active period × credit pricing.
Here, active period refers to the number of seconds the model takes to process a request, while credit pricing varies based on the hardware used by the model.
The table below lists the available hardware options along with their corresponding credit pricing.
Hardware | Credit Price (per second) | Credit Price (per hour) |
---|---|---|
CPU | 1 | 3600 |
NVIDIA Tesla T4 | 2 | 7200 |
NVIDIA L4 | 3 | 10800 |
NVIDIA A100 | 13 | 46800 |
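For example, here is a minimal sketch of the formula above in Python; the hardware choice and the 2.5-second active period are hypothetical values, and the sketch assumes fractional seconds are billed proportionally:

```python
# Credit price per second for each hardware option, taken from the table above.
CREDIT_PRICE_PER_SECOND = {
    "CPU": 1,
    "NVIDIA Tesla T4": 2,
    "NVIDIA L4": 3,
    "NVIDIA A100": 13,
}


def model_run_credits(hardware: str, active_seconds: float) -> float:
    """Credits consumed by a model run: active period * credit pricing."""
    return active_seconds * CREDIT_PRICE_PER_SECOND[hardware]


# A request that keeps an NVIDIA L4 busy for 2.5 seconds consumes 2.5 * 3 = 7.5 credits.
print(model_run_credits("NVIDIA L4", 2.5))
```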
#Pipeline Run
Each pipeline run will consume 1 credit per component within the pipeline.
Additionally, Instill AI facilitates out-of-the-box configuration for many AI and Application components to consume Instill Credit, meaning you can try out multiple 3rd-party vendors without needing to manually create accounts or API keys with these services. When such components are created, if the selected task and model are supported, you can simply leave the setup field blank to consume Instill Credit. For instance, here is how you can set up an OpenAI component to use Instill Credit:
```yaml
openai-0:
  type: openai
  input:
    model: gpt-4o
    prompt: ${variable.prompt}
    system-message: You are a helpful assistant.
    response-format:
      type: text
  task: TASK_TEXT_GENERATION
```
If you instead wish to bring your own key, you'll need to create a connection with your API key on your account settings page. You can then reference this connection in the recipe with the ${connection.<my-connection-id>} syntax:
```yaml
openai-0:
  type: openai
  input:
    model: gpt-4o
    prompt: ${variable.prompt}
    system-message: You are a helpful assistant.
    response-format:
      type: text
  # Prerequisite: you have created a connection with the ID "openai-instill-pipelines"
  setup: ${connection.openai-instill-pipelines}
  task: TASK_TEXT_GENERATION
```
The following sections detail the supported AI tasks and models, along with their Instill Credit cost per unit.
#Supported AI Tasks and Models
#LLM
Vendor | Model | Credit Cost per 1,000 Input Tokens | Credit Cost per 1,000 Output Tokens | Credit Cost per Request |
---|---|---|---|---|
OpenAI | OpenAI o1-preview | 150 | 600 | |
OpenAI | OpenAI o1-mini | 30 | 120 | |
OpenAI | GPT-4o | 50 | 150 | |
OpenAI | GPT-4o 2024-08-06 | 25 | 100 | |
OpenAI | GPT-4o mini | 1.5 | 6 | |
OpenAI | GPT-4 | 300 | 600 | |
OpenAI | GPT-4 (32K) | 600 | 1,200 | |
OpenAI | GPT-4 (128K) | 100 | 300 | |
OpenAI | GPT-4 VISION | 100 | 300 | |
OpenAI | GPT-3.5 Turbo | 5 | 15 | |
Anthropic | Claude 3.5 Sonnet | 30 | 150 | |
Anthropic | Claude 3 Opus | 150 | 750 | |
Anthropic | Claude 3 Sonnet | 30 | 150 | |
Anthropic | Claude 3 Haiku | 2.5 | 15 | |
Cohere | Command-R+ | 30 | 150 | |
Cohere | Command-R | 5 | 15 | |
Cohere | Command | 10 | 20 | |
Cohere | Command-light | 3 | 6 | |
Mistral AI | open-mixtral-8x22b | 20 | 60 | |
Mistral AI | open-mixtral-8x7b | 7 | 7 | |
Mistral AI | open-mistral-7b | 2.5 | 2.5 | |
Mistral AI | codestral-latest | 10 | 30 | |
Mistral AI | mistral-large-latest | 40 | 120 | |
Mistral AI | mistral-small-latest | 10 | 30 | |
Fireworks AI | llama-v3p1-405b-instruct | 30 | 30 | |
Fireworks AI | llama-v3p1-70b-instruct | 9 | 9 | |
Fireworks AI | llama-v3p1-8b-instruct | 2 | 2 | |
Fireworks AI | llama-v3-70b-instruct | 9 | 9 | |
Fireworks AI | llama-v3-8b-instruct | 2 | 2 | |
Fireworks AI | firellava-13b | 2 | 2 | |
Fireworks AI | firefunction-v2 | 9 | 9 | |
Fireworks AI | deepseek-coder-v2-lite-instruct | 5 | 5 | |
Fireworks AI | starcoder-16b | 2 | 2 | |
Fireworks AI | starcoder-7b | 2 | 2 | |
Fireworks AI | phi-3-vision-128k-instruct | 2 | 2 | |
Fireworks AI | qwen2-72b-instruct | 9 | 9 | |
Fireworks AI | mythomax-l2-13b | 2 | 2 | |
Fireworks AI | yi-large | 30 | 30 | |
Groq | llama3-groq-70b-8192-tool-use-preview | 8.9 | 8.9 | |
Groq | llama3-groq-8b-8192-tool-use-preview | 1.9 | 1.9 | |
Groq | llama3-70b-8192 | 5.9 | 7.9 | |
Groq | llama3-8b-8192 | 0.5 | 0.8 | |
Groq | mixtral-8x7b-32768 | 2.4 | 2.4 | |
Groq | gemma2-9b-it | 2 | 2 | |
Groq | gemma-7b-it | 0.7 | 0.7 | |
Perplexity | llama-3.1-sonar-small-128k-online | 2 | 2 | 50 |
Perplexity | llama-3.1-sonar-large-128k-online | 10 | 10 | 50 |
Perplexity | llama-3.1-sonar-huge-128k-online | 50 | 50 | 50 |
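To put these prices into practice, here is a minimal sketch of how you might estimate the credit cost of a single LLM call; the token counts below are hypothetical, and the per-request charge only applies to vendors that list one (e.g. Perplexity):

```python
def llm_call_credits(
    input_tokens: int,
    output_tokens: int,
    input_cost_per_1k: float,
    output_cost_per_1k: float,
    cost_per_request: float = 0.0,
) -> float:
    """Estimate the credits consumed by one LLM call, using the per-1,000-token prices above."""
    return (
        input_tokens / 1000 * input_cost_per_1k
        + output_tokens / 1000 * output_cost_per_1k
        + cost_per_request
    )


# Hypothetical GPT-4o call with 1,200 input tokens and 400 output tokens:
# 1.2 * 50 + 0.4 * 150 = 120 credits, plus 1 credit for the component run itself.
print(llm_call_credits(1_200, 400, input_cost_per_1k=50, output_cost_per_1k=150) + 1)
```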
#Text Embeddings
Vendor | Model | Credit Cost per 1,000 Input Tokens |
---|---|---|
OpenAI | Text Embedding 3 Small | 0.2 |
OpenAI | Text Embedding 3 Large | 1.3 |
Cohere | Embed 3 | 1 |
Mistral AI | mistral-embed | 1 |
Fireworks AI | nomic-ai/nomic-embed-text-v1.5 | 0.08 |
Fireworks AI | nomic-ai/nomic-embed-text-v1 | 0.08 |
Fireworks AI | WhereIsAI/UAE-Large-V1 | 0.16 |
Fireworks AI | thenlper/gte-large | 0.16 |
Fireworks AI | thenlper/gte-base | 0.08 |
#Image Generation
Vendor | Model | Image Size | Credit Cost per Image |
---|---|---|---|
OpenAI | DALL·E 3 Standard | 1024x1024 | 400 |
OpenAI | DALL·E 3 Standard | 1024x1792, 1792x1024 | 800 |
OpenAI | DALL·E 3 HD | 1024x1024 | 800 |
OpenAI | DALL·E 3 HD | 1024x1792, 1792x1024 | 1,200 |
Stability AI | SDXL 1.0 | | 60 |
Stability AI | SD 1.6 | | 100 |
#Audio Recognition
Vendor | Model | Credit Cost per Second |
---|---|---|
OpenAI | Whisper | 1 |
#Text to Speech
Vendor | Model | Credit Cost per 1M Characters |
---|---|---|
OpenAI | TTS 1 | 150 |
OpenAI | TTS 1 HD | 300 |
#Text Reranking
Vendor | Model | Credit Cost per 1,000 Searches |
---|---|---|
Cohere | Rerank 3 | 20,000 |
#Get Prospect Emails
Vendor | Credit Cost Per Email |
---|---|
LeadIQ | 395 |
#Next Additions
Subscribe to our Newsletter to keep up to date with the latest tasks, models and vendors that are supported by Instill Credit.
We also plan to leverage Instill Credit to ease data storage and model hosting. Stay tuned!
#Organization Credit
Team plans also include monthly Instill Credit for organizations, allowing their members to consume the organization's credit instead of their own.
Users can impersonate an organization and consume its Instill Credit by switching namespaces. Any pipeline accessible by the organization (i.e. any public pipeline or any private pipeline owned by the organization) can be triggered using this method.
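As a rough illustration of namespace switching, the sketch below triggers the same pipeline twice, once under a user namespace and once under an organization namespace, so the organization's credit is consumed in the second call. The endpoint path, payload shape, and IDs are assumptions made for illustration; refer to the Instill Cloud API reference for the exact request format.

```python
import requests

API_TOKEN = "<your-instill-cloud-api-token>"  # placeholder
BASE_URL = "https://api.instill.tech"         # assumed Instill Cloud API host


def trigger_pipeline(namespace: str, pipeline_id: str, prompt: str) -> dict:
    """Trigger a pipeline under the given namespace.

    The namespace in the path decides whose Instill Credit is consumed:
    your own user namespace, or an organization namespace you belong to.
    The path and payload shape below are assumptions for illustration.
    """
    url = f"{BASE_URL}/v1beta/namespaces/{namespace}/pipelines/{pipeline_id}/trigger"
    response = requests.post(
        url,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"data": [{"variable": {"prompt": prompt}}]},
    )
    response.raise_for_status()
    return response.json()


# Consume your personal credit...
trigger_pipeline("my-user-id", "my-pipeline", "Hello!")
# ...or impersonate your organization and consume its credit instead.
trigger_pipeline("my-org-id", "my-pipeline", "Hello!")
```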

#How can I get more credit?
The amount of monthly Instill Credit a user or organization receives depends on their subscription plan.
Additionally, subscribers can purchase extra credit in case the monthly amount doesn't cover their Instill Credit needs. In contrast to monthly credit, purchased credit won't expire at the end of the billing cycle. For that reason, purchased credit is only consumed once the subscription credit has been exhausted.
Our roadmap includes features that cover more complex Instill Credit use cases, such as Credit Auto-billing, where credit is topped up before it is fully exhausted, keeping production environments safe from potential downtime.