Model Training (ModelGenerator)

Fine-tune a base model on your custom dataset. ModelGenerator is the backbone for personalized avatars, advanced product photography, domain-specific image generation, and anything else where you need a model that “knows” your subject.

This is usually the step where you’ll spend the most time tinkering — small hyperparameter changes can make a big difference in output quality. You can also train models directly through the dashboard and generate images from them there.

Models expire. Models trained through Flows expire after 7 days. Models trained through the dashboard expire after 30 days (unless configured otherwise). Plan your image generation accordingly. Currently, trained models can’t be downloaded — they can only be used to generate images through lensless.

Preparing a dataset

High-quality data is the single most important factor in training. For example, if you’re training a model to learn a person’s appearance, all images should have good lighting and resolution, include different angles and poses, and contain only one person.

Since you don’t control uploaded datasets for public Flows, instruct your users on how to best select their images. There are also hyperparameters you can configure to minimize the impact of a suboptimal dataset — but you’ll need to experiment.

Upload via Dashboard:

  • Go to the Subjects section.
  • Create or select a subject and upload your images (.jpg or .png).

Upload via Flow Input:

  • Declare an input property with "dataset": true (e.g., "userDatasetId").
  • When a user runs the Flow, they’ll be prompted to upload files.
  • These are automatically assembled into a dataset under your organization.

Each dataset has a unique ID you can reference in the training parameters.
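The upload-via-Flow steps above can be sketched as an input declaration. The surrounding schema shape here is an assumption; only the "dataset": true flag and the userDatasetId property name come from this page:

```json
{
  "input": {
    "userDatasetId": {
      "type": "string",
      "dataset": true
    }
  }
}
```

At run time, the user's uploaded files are assembled into a dataset, and its ID becomes available at $.input.userDatasetId for the training step.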

Step definition

In your Flow’s steps array, add a ModelGenerator step:

{
  id: 'trainer',
  type: 'ModelGenerator',
  parameters: {
    datasetId: '$.input.userDatasetId',
    settings: {
      baseModel: 'stable-diffusion-v1-5/stable-diffusion-v1-5',
      epochs: 50,
      resolution: 1024,
      normalizeDataset: true,
      maskedLoss: true,
      // ...more hyperparameters below
    },
  },
}

Parameters

datasetId (string, required)

The dataset to train on. Can be a static ID (from the dashboard) or a JSON Path reference like $.input.userDatasetId.

trainingSettingsId (string, UUID)

Reference a training template created in the dashboard instead of providing a full settings object. You can still override individual settings alongside this.

settings (object)

A block of hyperparameters to control training. See the full list below.

Hyperparameters

Core

baseModel (string, required)

A valid Hugging Face model ID to train on top of (e.g., stable-diffusion-v1-5/stable-diffusion-v1-5).

epochs (number, 1–10,000)

Number of training epochs. If not provided, lensless estimates a good value based on the model and dataset size. We recommend experimenting and setting this manually.

repeats (number, 1–10,000)

How many times each sample is repeated per epoch. Default: 1.

resolution (number, 32–2048)

Image resolution used for training. Default: 1024.

batchSize (number, 1–6)

Batch size per training step. Default: 1.

seed (number)

Seed for more deterministic results. Useful when testing different hyperparameters — keeps other variables constant.
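As a minimal sketch of that workflow: hold every setting (including the seed) constant and vary one hyperparameter per run, so differences in output quality can be attributed to that single change. The values below are illustrative:

```javascript
// Hold everything constant except the hyperparameter under test.
// The shared seed keeps the two runs comparable.
const common = {
  baseModel: 'stable-diffusion-v1-5/stable-diffusion-v1-5',
  seed: 1234,
  resolution: 1024,
};

const runA = { ...common, epochs: 50 };
const runB = { ...common, epochs: 80 }; // the only variable that changed
```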

Dataset processing

normalizeDataset (boolean)

If true, detects the main subject in each image using object segmentation, crops to a square around them, and resizes. Recommended for portrait/person training. Default: false.

maskedLoss (boolean)

Enables masked loss training. Generates a grayscale mask per image that focuses the model on the subject. Works best with normalizeDataset: true. Default: false.

headMaskShade (number, 0–255)

Mask intensity for the face/head region. Higher values mean a stronger training signal on the face. Only used when maskedLoss: true. Default: 255.

bodyMaskShade (number, 0–255)

Mask intensity for the body region. Only used when maskedLoss: true. Default: 200.

boundingMaskShade (number, 0–255)

Mask intensity for the bounding box outline. Only used when maskedLoss: true. Default: 40.

backgroundMaskShade (number, 0–255)

Mask intensity for the background area. Lower values mean the model pays less attention to the background. Only used when maskedLoss: true. Default: 40.

flipAug (boolean)

Flip augmentation — useful for very limited datasets. Should only be used for symmetrical subjects. Default: false.
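Putting the dataset-processing options together, a portrait-oriented settings fragment might look like the sketch below. The shade values are the documented defaults; treat the combination as a starting point, not a prescription:

```javascript
settings: {
  normalizeDataset: true,   // crop and resize around the detected subject
  maskedLoss: true,         // works best together with normalizeDataset
  headMaskShade: 255,       // strongest training signal on the face
  bodyMaskShade: 200,
  backgroundMaskShade: 40,  // de-emphasize the background
},
```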

Optimizer

optimizer (enum: Prodigy | AdaFactor | AdamW | AdamW8bit | DAdaptation | DAdaptAdam)

The optimizer algorithm. Default: Prodigy.

learningRate (number)

Main learning rate. Default: 1.0.

biasCorrection (boolean)

Recommended for certain optimizers like Prodigy. Default: false.

weightDecay (number)

Weight decay regularization. Recommended for certain optimizers like Prodigy.

d0 (number)

Initial D value. Recommended for certain optimizers like Prodigy.

decouple (boolean)

Decoupled weight decay. Recommended for certain optimizers like Prodigy. Default: false.

dCoef (number)

D coefficient. Recommended for certain optimizers like Prodigy.
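Since several of these options are recommended specifically for Prodigy, a Prodigy-oriented fragment might look like this. The learningRate of 1.0 is the documented default; the weightDecay value is an illustrative assumption, not a documented default:

```javascript
settings: {
  optimizer: 'Prodigy',
  learningRate: 1.0,   // Prodigy adapts its own effective step size
  biasCorrection: true,
  decouple: true,      // decoupled weight decay
  weightDecay: 0.01,   // illustrative value; tune for your dataset
},
```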

Network

networkDim (number, 0–256)

LoRA network dimension. Default: 64.

alphaDim (number)

LoRA alpha dimension. Defaults to match networkDim. Typically equal to or lower than networkDim.

trainUnetOnly (boolean)

If true, only trains the U-Net portion, ignoring the text encoder. Default: false.

captionPrefix (string)

Prefix added to generated or existing captions (e.g., a unique trigger token like "lvff").

Advanced

precision (enum: fp16 | bf16)

Mixed-precision mode. Default: fp16.

scheduler (enum: Cosine | Constant | CosineAnnealing)

Learning rate scheduler. Default: Cosine.

lossType (enum: L2 | SmoothL1 | Huber)

Loss function. Default: L2.

noiseOffset (number, 0–1)

Adds additional noise during training. 0 adds no noise; 1 adds strong noise.

multiresNoiseIterations (number)

Creates noise at various resolutions and adds them together. Specify how many resolution levels to create.

multiresNoiseDiscount (number, 0–1)

Weakens the noise amount of each resolution level. Lower values mean weaker noise.

timestepSampling (enum: sigma | uniform | sigmoid | shift | flux_shift)

How to sample timesteps during training. sigma: sigma-based; uniform: uniform random; sigmoid: sigmoid of random normal; shift: shifts the sigmoid value; flux_shift: resolution-dependent shift.

guidanceScale (number)

Guidance scale applied during training. Used with certain model architectures that support training-time guidance.

huberC (number)

Huber loss constant. Required when using lossType: 'Huber'. Default: 1.0.
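Because huberC is only meaningful alongside the matching lossType, the two travel together. A minimal fragment, using the documented default of 1.0:

```javascript
settings: {
  lossType: 'Huber',
  huberC: 1.0,  // required whenever lossType is 'Huber'
},
```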

Example

Here’s a real-world configuration for training a portrait model:

{
  id: 'portraitTrainer',
  type: 'ModelGenerator',
  parameters: {
    datasetId: '$.input.userDatasetId',
    settings: {
      baseModel: 'SG161222/Realistic_Vision_V6.0_B1',
      epochs: 80,
      repeats: 1,
      alphaDim: 64,
      decouple: true,
      batchSize: 1,
      optimizer: 'Prodigy',
      precision: 'fp16',
      scheduler: 'Cosine',
      maskedLoss: true,
      networkDim: 64,
      resolution: 1024,
      learningRate: 1,
      bodyMaskShade: 200,
      headMaskShade: 255,
      biasCorrection: true,
      normalizeDataset: true,
      boundingMaskShade: 40,
      backgroundMaskShade: 40,
    },
  },
}

Billing: Training is billed at $0.06 per minute. A 40-minute training job costs $2.40. Watch your logs and organization balance to avoid unexpected costs.
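The per-minute rate makes cost easy to estimate up front. A quick sketch, using the $0.06-per-minute rate stated above (the helper name is ours, not part of the API):

```javascript
// Estimate training cost from the documented $0.06-per-minute rate.
const RATE_PER_MINUTE = 0.06;

function estimateTrainingCost(minutes) {
  // Round to whole cents for display.
  return (minutes * RATE_PER_MINUTE).toFixed(2);
}

console.log(estimateTrainingCost(40)); // "2.40", matching the example above
```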

Training templates

Repeatedly specifying the same training parameters can be tedious. Training Templates let you define a configuration once and reuse it.

  1. Go to Trainings → New training template in the dashboard.
  2. Fill out fields like base model, epochs, etc.
  3. Save it as a template.
  4. Reference it in a Flow:
{
  id: 'trainer',
  type: 'ModelGenerator',
  parameters: {
    trainingSettingsId: 'your-template-uuid',
    datasetId: '$.input.datasetId',
  },
}

You can still override or supplement the template’s settings in your Flow’s settings block.
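Combining a template with a per-Flow override might look like the sketch below. The epochs value here is purely illustrative; everything else comes from the template:

```javascript
{
  id: 'trainer',
  type: 'ModelGenerator',
  parameters: {
    trainingSettingsId: 'your-template-uuid',
    datasetId: '$.input.datasetId',
    settings: {
      epochs: 100, // overrides the template's value for this Flow only
    },
  },
}
```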

Best practices

  • Dataset quality matters most — ensure images are clear, well-lit, diverse in angles/poses, and properly represent your subject.
  • Start with defaults — define only a small subset of hyperparameters at first. As results improve, experiment with more.
  • Use a fixed seed when testing — this keeps other variables constant so you can isolate the effect of each change.
  • Some hyperparameters work in combination — for example, maskedLoss works best with normalizeDataset: true, and Prodigy benefits from biasCorrection and decouple.
  • Verify your base model — it must be a valid Hugging Face repository name. If you see errors, double-check the spelling.

Last updated on March 19, 2026