# Stability AI

The Stability AI component is an AI component that connects pipelines to the AI models served on the Stability AI Platform. It can carry out the following tasks:

- Text to Image
- Image to Image

## Release Stage

Alpha

## Configuration

The component definition and tasks are defined in the `definition.json` and `tasks.json` files, respectively.

## Setup

In order to communicate with Stability AI, the following connection details need to be provided. You may specify them directly in a pipeline recipe as key-value pairs within the component's `setup` block, or you can create a **Connection** from the **Integration Settings** page and reference the whole setup as `setup: ${connection.<my-connection-id>}`.

| Field | Field ID | Type | Note |
| :--- | :--- | :--- | :--- |
| API Key | `api-key` | string | Fill in your Stability AI API key. To find your keys, visit here. |
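If you call the Stability AI Platform directly rather than through the component, the same key is sent as a Bearer token. A minimal sketch, assuming the platform's standard Bearer authorization scheme; `auth_headers` is a hypothetical helper, not part of the component:

```python
# Sketch: build the HTTP headers a direct request to the Stability AI
# Platform would carry. The Bearer scheme is an assumption about the
# underlying REST API, not something the component exposes.

def auth_headers(api_key: str) -> dict:
    """Return request headers carrying the key configured in `api-key`."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Accept": "application/json",
        "Content-Type": "application/json",
    }

headers = auth_headers("sk-...")  # placeholder key; use your real API key
```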

## Supported Tasks

### Text to Image

Generate a new image from a text prompt.

| Input | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Task ID (required) | `task` | string | `TASK_TEXT_TO_IMAGE` |
| Engine (required) | `engine` | string | Stability AI engine (model) to be used. |
| Prompts (required) | `prompts` | array[string] | An array of prompts to use for generation. |
| Weights | `weights` | array[number] | An array of weights to use for generation. |
| CFG Scale | `cfg-scale` | number | How strictly the diffusion process adheres to the prompt text (higher values keep your image closer to your prompt). |
| Clip Guidance Preset | `clip-guidance-preset` | string | CLIP guidance preset. |
| Height | `height` | integer | The image height. |
| Width | `width` | integer | The image width. |
| Sampler | `sampler` | string | Which sampler to use for the diffusion process. If this value is omitted, an appropriate sampler is selected automatically. |
| Samples | `samples` | integer | Number of images to generate. |
| Seed | `seed` | integer | Random noise seed (omit this option or use 0 for a random seed). |
| Steps | `steps` | integer | Number of diffusion steps to run. |
| Style Preset | `style-preset` | string | Pass in a style preset to guide the image model towards a particular style. This list of style presets is subject to change. |

| Output | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Images | `images` | array[string] | Generated images. |
| Seeds | `seeds` | array[number] | Seeds of generated images. |
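To illustrate how the `prompts` and `weights` arrays pair up, here is a sketch of the kind of request body such a task resolves to. The field names (`text_prompts`, etc.) follow the Stability AI Platform's text-to-image endpoint, and the pairing rule (a default weight of 1.0 for any prompt without a matching weight) is an assumption about the component's behavior, not a documented guarantee:

```python
# Sketch: pair the component's `prompts` and `weights` inputs into a
# text-to-image request body. Assumes prompts without a matching weight
# default to 1.0.

def build_text_to_image_body(prompts, weights=None, cfg_scale=7,
                             samples=1, steps=30, seed=0,
                             width=512, height=512):
    weights = weights or []
    text_prompts = [
        {"text": p, "weight": weights[i] if i < len(weights) else 1.0}
        for i, p in enumerate(prompts)
    ]
    return {
        "text_prompts": text_prompts,
        "cfg_scale": cfg_scale,
        "samples": samples,
        "steps": steps,
        "seed": seed,  # 0 requests a random seed
        "width": width,
        "height": height,
    }

body = build_text_to_image_body(["a red fox", "snow"], weights=[1.0, 0.5])
```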

### Image to Image

Modify an image based on a text prompt.

| Input | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Task ID (required) | `task` | string | `TASK_IMAGE_TO_IMAGE` |
| Engine (required) | `engine` | string | Stability AI engine (model) to be used. |
| Prompts (required) | `prompts` | array[string] | An array of prompts to use for generation. |
| Init Image | `init-image` | string | Image used to initialize the diffusion process, in lieu of random noise. |
| Weights | `weights` | array[number] | An array of weights to use for generation. If unspecified, the model will automatically assign a default weight of 1.0 to each prompt. |
| Clip Guidance Preset | `clip-guidance-preset` | string | CLIP guidance preset. |
| Image Strength | `image-strength` | number | How much influence the `init_image` has on the diffusion process. Values close to 1 will yield images very similar to the `init_image`, while values close to 0 will yield images wildly different from it. The behavior is meant to mirror DreamStudio's "Image Strength" slider.<br/><br/>This parameter is simply an alternate way to set `step_schedule_start`, via the calculation `1 - image_strength`. For example, passing in an Image Strength of 35% (0.35) would result in a `step_schedule_start` of 0.65. |
| CFG Scale | `cfg-scale` | number | How strictly the diffusion process adheres to the prompt text (higher values keep your image closer to your prompt). |
| Init Image Mode | `init-image-mode` | string | Whether to use `image_strength` or `step_schedule_*` to control how much influence the `init_image` has on the result. |
| Sampler | `sampler` | string | Which sampler to use for the diffusion process. If this value is omitted, an appropriate sampler is selected automatically. |
| Samples | `samples` | integer | Number of images to generate. |
| Seed | `seed` | integer | Random noise seed (omit this option or use 0 for a random seed). |
| Step Schedule Start | `step-schedule-start` | number | Skips a proportion of the start of the diffusion steps, allowing the `init_image` to influence the final generated image. Lower values result in more influence from the `init_image`, while higher values result in more influence from the diffusion steps (e.g. a value of 0 would simply return the `init_image`, whereas a value of 1 would return a completely different image). |
| Step Schedule End | `step-schedule-end` | number | Skips a proportion of the end of the diffusion steps, allowing the `init_image` to influence the final generated image. Lower values result in more influence from the `init_image`, while higher values result in more influence from the diffusion steps. |
| Steps | `steps` | integer | Number of diffusion steps to run. |
| Style Preset | `style-preset` | string | Pass in a style preset to guide the image model towards a particular style. This list of style presets is subject to change. |

| Output | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Images | `images` | array[string] | Generated images. |
| Seeds | `seeds` | array[number] | Seeds of generated images. |
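The relationship between Image Strength and `step_schedule_start` described above can be sketched directly (the `1 - image_strength` calculation comes from the field description; the helper name and the range check are illustrative additions):

```python
# Sketch of the documented conversion: step_schedule_start = 1 - image_strength.
# Higher image strength -> lower schedule start -> output closer to init_image.

def step_schedule_start_from_image_strength(image_strength: float) -> float:
    if not 0.0 <= image_strength <= 1.0:
        raise ValueError("image_strength must be in [0, 1]")
    return 1.0 - image_strength

# An Image Strength of 35% (0.35) yields a step_schedule_start of 0.65.
start = step_schedule_start_from_image_strength(0.35)
```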