Generate Image Tools
Automatic detection
You can simply mention @Hana and describe your request. If the intent is clear, she automatically runs the relevant image workflow for you.
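Conceptually, this detection resembles matching trigger phrases in the message to workflows. The sketch below is purely illustrative; the `TRIGGERS` table and `route` function are hypothetical and not Hana's actual implementation:

```python
# Hypothetical trigger-phrase routing. Hana's real intent detection is
# not public; this only illustrates the idea of phrase-to-workflow mapping.
TRIGGERS = {
    "mask edit": "masked_edit",
    "remove background": "background_removal",
    "outpaint": "outpaint",
    "recolor": "image_edit",
    "replace": "image_edit",
    "generate": "new_image",
    "create image": "new_image",
}

def route(message: str) -> str:
    """Return the workflow for the first matching trigger phrase."""
    text = message.lower()
    for phrase, workflow in TRIGGERS.items():
        if phrase in text:
            return workflow
    return "new_image"  # no explicit trigger: fall back to plain generation

print(route("@Hana recolor the curtains to white"))  # image_edit
```

More specific phrases ("mask edit", "remove background") are checked before generic ones ("generate"), so a message like "mask edit this image" is not misrouted to plain generation.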
OpenAI-powered image generation and editing
Hana's image creation and editing workflows are powered by OpenAI image models.
What it does
Hana can generate and edit images directly from chat.
- For new generation: describe the image clearly.
- For edits: attach the source image in the same thread.
- For targeted edits: attach a mask image when you want changes limited to a specific region.
Supported operations/arguments
| Operation | Trigger Phrases | What it does | Supported arguments |
|---|---|---|---|
| New image generation | generate, create image | Creates a fresh image from a text prompt. | model_prompt, file_name, optional: size, quality, style, background, output_format, n |
| Image edit | recolor, replace, edit this image | Edits an attached source image while preserving context. | model_prompt, file_name, source_image_attachment_names, optional: input_fidelity, size, quality, output_format |
| Masked edit | mask edit, change only this part | Edits only the selected region using a mask. | model_prompt, file_name, source_image_attachment_names, mask_attachment_name, optional: input_fidelity, size, quality |
| Outpaint | outpaint, extend canvas | Expands an image beyond its original boundaries. | model_prompt, file_name, source_image_attachment_names, optional: size, quality, background |
| Background removal | remove background | Removes background and isolates subject. | model_prompt, file_name, source_image_attachment_names, optional: background, output_format |
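The required/optional split in the table can be captured as a small validation sketch. The operation names and the `missing_arguments` helper below are illustrative assumptions, not part of Hana's API:

```python
# Required arguments per operation, taken from the table above
# (everything before "optional:" in each row).
REQUIRED = {
    "new_image":          {"model_prompt", "file_name"},
    "image_edit":         {"model_prompt", "file_name",
                           "source_image_attachment_names"},
    "masked_edit":        {"model_prompt", "file_name",
                           "source_image_attachment_names",
                           "mask_attachment_name"},
    "outpaint":           {"model_prompt", "file_name",
                           "source_image_attachment_names"},
    "background_removal": {"model_prompt", "file_name",
                           "source_image_attachment_names"},
}

def missing_arguments(operation: str, provided: dict) -> set:
    """Return the required argument names absent from a request."""
    return REQUIRED[operation] - set(provided)

# A masked edit without attachments is incomplete:
missing_arguments("masked_edit", {"model_prompt": "add a crown",
                                  "file_name": "out.png"})
```

This is why a masked edit fails without both the source image and the mask attached in the thread: two required arguments are still missing.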
Argument Meaning
- model_prompt: What should be generated or changed.
- file_name: Output file name.
- source_image_attachment_names: Source image(s) from the current thread/context.
- mask_attachment_name: Mask image for targeted edits.
- size: Output dimensions.
- quality: Output quality level.
- style: Style steering option, where supported.
- background: Background handling (for example opaque/transparent, where supported).
- output_format: Output type (for example png/jpeg/webp, where supported).
- n: Number of outputs requested.
- input_fidelity: How strongly to preserve source details during edits.
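These arguments echo the parameters of OpenAI's Images API. As a hedged sketch of how a new-generation request might be assembled (the mapping and the `build_generation_payload` helper are assumptions, not Hana's documented behavior):

```python
def build_generation_payload(model_prompt, file_name, *, size=None,
                             quality=None, style=None, background=None,
                             output_format=None, n=None):
    """Assemble a generation request from Hana-style arguments.

    model_prompt is mandatory; each optional argument is included only
    when set, mirroring the "optional:" entries in the operations table.
    The key names follow OpenAI's Images API parameters, but the exact
    mapping Hana performs internally is an assumption.
    """
    payload = {"prompt": model_prompt}
    for key, value in [("size", size), ("quality", quality),
                       ("style", style), ("background", background),
                       ("output_format", output_format), ("n", n)]:
        if value is not None:
            payload[key] = value
    # file_name names the saved output; it is not part of the API payload
    return payload, file_name

payload, name = build_generation_payload(
    "clean product hero image, white background", "hero.png",
    size="1024x1024", n=1)
```

Unset optional arguments are simply omitted rather than sent as nulls, so the model's defaults apply.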
Invocation examples
@Hana generate a clean product hero image with white background and soft studio lighting.
@Hana recolor the curtains to white.
@Hana replace the coffee cup with a teapot.
@Hana outpaint this image to landscape while keeping the subject centered.
@Hana remove background from this image and keep only the product.
@Hana make her wear a crown. Use a mask for the head region only.
Troubleshooting
- Vague prompts: add subject, style, lighting, and composition details.
- Edits not applied: attach the source image and, for targeted edits, provide a mask.
- Wrong framing: specify orientation (square, portrait, or landscape).
- Poor consistency: continue in the same thread so Hana can reuse context.
When to use
Use for new image generation, edits to existing images, and background/mask-based transformations.
Permissions/limits
- Edits that reference an existing image require that image to be available in chat context.
- Complex edits may require iterative prompts for precise control.
High-signal invocation
@Hana generate a product launch banner with clean typography, white background, and export-ready 16:9 composition.
Edge-case invocation
@Hana using this image, replace only the background with a studio setup and keep product color/shape unchanged.