Image Generation (ImageGenerator)
Generate images using a base model or a previously trained model. You provide prompts, tweak inference settings, and get back high-quality images.
You can use ImageGenerator in Flows to dynamically generate images based on data from prior steps, or generate images directly through the dashboard. Use a trained model by specifying a trainingId (we recommend the same base model used for training), or just provide a baseModel to generate from a publicly available Hugging Face model.
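For example, the simplest possible step generates from a public base model with no trained model at all (the step id and prompt here are illustrative; the step shape follows the examples below):

```javascript
{
  id: 'simpleImageGenerator',
  type: 'ImageGenerator',
  parameters: {
    // Generate from a public Hugging Face model; no trainingId needed.
    baseModel: 'stable-diffusion-v1-5/stable-diffusion-v1-5',
    images: [{ prompt: 'A watercolor painting of a lighthouse at dusk' }],
  },
}
```

To use a trained model instead, add a trainingId (static UUID or JSON Path reference) alongside the baseModel.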
Parameters
images (Image[], required): An array of image definitions describing what to generate. Each item is an object with a prompt and optional inference settings. You can use @Map to dynamically build this array from previous step results.
baseModel (string): A global base model applied to all images unless overridden individually. Must be a valid Hugging Face model ID (e.g., stable-diffusion-v1-5/stable-diffusion-v1-5).
trainingId (string, UUID): A global trained model ID applied to all images unless overridden individually. Can be a static UUID or a JSON Path reference like $.results.myTrainingStep.id.
inferencePackId (string, UUID): The ID of an inference pack created through the dashboard. The system merges pack prompts with any additional images you specify.
Image object properties
prompt (string, required): A textual prompt describing what to generate.
baseModel (string): A valid Hugging Face model ID. Overrides the global baseModel for this specific image.
trainingId (string, UUID): Applies a LoRA trained through lensless on top of the base model. Overrides the global trainingId for this image.
negativePrompt (string): Things to avoid in the image (e.g., “blurry, poorly drawn face, extra limbs”).
steps (number, 1–60): Number of diffusion steps. Higher values produce more refined images but take longer. Default: 25.
width (number, 8–2048): Image width in pixels. Must be a multiple of 8. Default: 1024.
height (number, 8–2048): Image height in pixels. Must be a multiple of 8. Default: 1024.
scale (number, 0–1): LoRA network strength, controlling how strongly the trained model influences the output. Only effective when trainingId is provided. Too high can cause artifacts; too low reduces the subject’s influence. Default: 0.9.
guidanceScale (number, 0–10): Strength of prompt conditioning (CFG scale). Higher values make the image adhere more strictly to your prompt, but very high values can look over-saturated. Default: 7.5.
seed (number): Seed for deterministic outputs. Set a fixed seed when comparing settings. Omit for random results.
batchSize (number, 1–4): Number of images to generate from a single prompt in one pass. Useful for variety without submitting multiple jobs. Default: 1.
precision (enum: fp16 | bf16): Mixed precision mode. fp16 works for most models; use bf16 only if your base model requires it. Default: fp16.
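Putting the properties together, a single fully specified image definition might look like this (values chosen for illustration; any omitted property falls back to the defaults listed above):

```javascript
{
  prompt: 'A studio portrait, 85mm lens, soft lighting',
  negativePrompt: 'blurry, poorly drawn face, extra limbs',
  steps: 25,          // 1-60, default 25
  width: 1024,        // multiple of 8, default 1024
  height: 1024,       // multiple of 8, default 1024
  guidanceScale: 7.5, // CFG scale, default 7.5
  seed: 42,           // fixed for reproducibility; omit for random
  batchSize: 1,       // 1-4 variations per prompt
  precision: 'fp16',  // or 'bf16' if the base model requires it
}
```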
Example
Generating images from user input
This Flow asks the user for a genre, generates 40 prompts with ObjectGenerator, then maps over them to create images:
{
id: 'promptsGenerator',
type: 'ObjectGenerator',
parameters: {
amount: 40,
input: '$.input.promptType',
description:
'You are an experienced artist with knowledge of how diffusion models work. Generate a list of prompts that will result in great images in the context of the prompt type input.',
schema: {
type: 'object',
properties: {
prompts: {
type: 'array',
items: { type: 'string' },
},
},
},
},
},
{
id: 'dynamicImageGenerator',
type: 'ImageGenerator',
parameters: {
baseModel: 'stable-diffusion-v1-5/stable-diffusion-v1-5',
images: [
[
'@Map($.results.promptsGenerator.prompts)',
{
prompt: '$$',
negativePrompt: 'ugly, blurry',
steps: 30,
},
],
],
},
},
Using a trained model with overrides
{
id: 'generateImages',
type: 'ImageGenerator',
parameters: {
trainingId: '$.results.myTrainingStep.id',
baseModel: 'stable-diffusion-v1-5/stable-diffusion-v1-5',
images: [
{
prompt: 'A photo of an astronaut riding a horse on mars',
negativePrompt: 'blurry, deformed',
width: 768,
height: 768,
steps: 20,
guidanceScale: 8.0,
},
{
prompt: 'A portrait in a sunlit garden',
negativePrompt: 'blurry, deformed',
width: 768,
height: 768,
steps: 20,
guidanceScale: 8.0,
// These override the global values for this image only:
trainingId: '4716c93e-03bc-49cf-921d-203971fe5866',
baseModel: 'SG161222/Realistic_Vision_V6.0_B1',
},
],
},
}
Billing: Image generation is billed at $0.02 per inference. Each image definition counts as one inference.
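As a sanity check on the pricing rule, here is a small cost-estimate helper. The function is ours, not part of the API, and it assumes each entry in images counts as exactly one inference; whether batchSize multiplies the count is not specified above, so that behavior is exposed as an explicit flag:

```javascript
// Estimate the cost of an ImageGenerator step at $0.02 per inference.
// Assumption: each image definition is one inference. Whether batchSize
// multiplies the count is unspecified in the docs, so it's opt-in here.
function estimateCost(images, { batchSizeCounts = false } = {}) {
  const PRICE_PER_INFERENCE = 0.02;
  const inferences = images.reduce(
    (sum, img) => sum + (batchSizeCounts ? (img.batchSize ?? 1) : 1),
    0
  );
  return inferences * PRICE_PER_INFERENCE;
}

// Two image definitions, so 2 inferences at $0.02 each: $0.04.
console.log(estimateCost([{ prompt: 'a' }, { prompt: 'b', batchSize: 4 }]));
```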
Inference Packs
Inference Packs let you save a set of prompts and inference settings for reuse — similar to training templates but for image generation. They’re helpful if you frequently generate similar images with different trainings or want your Flow to always include a baseline set.
- Create an Inference Pack in the dashboard.
- Reference it via inferencePackId alongside any extra prompts.
- The system merges them all, generating every prompt in the pack plus any additional ones you specify.
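A merged step might be sketched like this (the pack ID is a placeholder UUID, not a real pack): every prompt saved in the pack is generated, plus the one extra image listed inline:

```javascript
{
  id: 'packedImageGenerator',
  type: 'ImageGenerator',
  parameters: {
    // Placeholder UUID: substitute the ID of a pack from your dashboard.
    inferencePackId: '00000000-0000-0000-0000-000000000000',
    images: [
      // Generated in addition to every prompt in the pack.
      { prompt: 'A bonus image outside the pack' },
    ],
  },
}
```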
Best practices
- Batch your generations. Setting up an inference job has overhead (downloading the model, preparing the environment, processing results), so batching multiple requests into a single step is drastically faster than submitting them one at a time.
- Craft your prompts carefully — detail, style references, and negative prompts can drastically affect outcomes. Experiment!
- Image dimensions must be multiples of 8 — larger images might produce poor results depending on the model.
- Set a fixed seed when comparing hyperparameters or testing. Omit it when you want variety.
- Use @Map for dynamic generation: map over arrays from previous steps to generate one image per item.
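The dimension and range rules above can be checked before submitting a job. A small validation helper (ours, not part of the API) against the documented constraints:

```javascript
// Validate an image definition against the documented constraints.
// Returns a list of problems; an empty list means the definition looks valid.
function validateImage(img) {
  const problems = [];
  if (!img.prompt) problems.push('prompt is required');
  for (const key of ['width', 'height']) {
    const v = img[key] ?? 1024; // documented default
    if (v < 8 || v > 2048) problems.push(`${key} must be 8-2048`);
    if (v % 8 !== 0) problems.push(`${key} must be a multiple of 8`);
  }
  const steps = img.steps ?? 25; // documented default
  if (steps < 1 || steps > 60) problems.push('steps must be 1-60');
  return problems;
}
```

Running it over the images array before the step executes catches bad dimensions early instead of wasting a billed inference.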