Creating engaging video ads has traditionally meant long timelines, high production costs, and endless coordination with models, crews, or agencies. For most content creators and brands, producing authentic UGC-style ads at scale is either too expensive or simply too slow to keep up with the demands of modern social platforms. That’s where the Nano Banana + n8n automation changes everything. By connecting Google’s Nano Banana (Gemini 2.5 Flash Image) API for image generation with Kling AI for video transformation and orchestrating it all through n8n, this system turns a single product photo into scroll-stopping short videos in minutes.
For teams focused on editorial content instead of ads, we’ve also built a workflow for automating Napkin visuals with n8n and Playwright, perfect for turning inline image prompts into Napkin-style graphics and publishing them directly to WordPress.
There are no models, no video crews, and no $10K agency retainers, just an always-on factory that generates consistent, brand-ready UGC-style content. Nano Banana delivers hyper-realistic visuals with unmatched character consistency, Kling AI animates them into professional video ads, and n8n ensures quality control and automated delivery. While others spend $500 or more per UGC video, this pipeline delivers unlimited variations for pennies, giving creators 24/7 content production and complete creative control.
How does the n8n workflow actually work?
We used n8n as the orchestration engine. This allowed us to chain APIs deterministically:
- Webhook Trigger – receive product image + brand style.
- Gemini (Google AI Studio) – refine prompts for realism & brand tone.
- Nano Banana (Gemini 2.5 Flash Image) – generate consistent, photoreal images of the product in context.
- Quality Assessment + Auto Approval Gate – filter only high-scoring outputs.
- Kling AI – transform approved images into 15-second, 9:16 UGC-style videos.
- Retry Loop & Normalization – ensure successful video generation with bounded polling.
- Respond to Webhook – return structured JSON with video URLs, thumbnails, and costs.
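To make the input contract concrete, here is an illustrative webhook payload built from the fields the Process Input node reads in the workflow JSON below; the product name, image URL, and email are placeholder values:

```json
{
  "product_name": "Glow Serum",
  "product_image_url": "https://example.com/images/glow-serum.png",
  "variations": 3,
  "brand_guidelines": { "style": "modern" },
  "notification_email": "creator@example.com"
}
```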

Where Vyrade Fits and Why It Matters
Vyrade’s mission is to help creators and teams discover, compare, and orchestrate AI workflows. This UGC Ad Factory use case shows exactly why that matters:
- No more tool fatigue: You don’t have to manually juggle APIs.
- Reusable workflow template: Import once, adapt for any brand.
- Open ecosystem: Swap Nano Banana for DALL·E 3, or Kling AI for Runway ML, without breaking the flow.
For content creators, this means one click → consistent, on-brand video ads.
How can I set this up technically?
Workflow JSON:

```json
{
"name": "UGC Content Factory - Nano Banana + Kling AI",
"nodes": [
{
"parameters": {
"httpMethod": "POST",
"path": "ugc-factory",
"responseMode": "responseNode",
"options": {}
},
"id": "webhook-trigger",
"name": "Webhook Trigger",
"type": "n8n-nodes-base.webhook",
"typeVersion": 1,
"position": [
240,
300
],
"webhookId": "auto-generated"
},
{
"parameters": {
"assignments": {
"assignments": [
{
"id": "product_name",
"name": "product_name",
"value": "={{ $json.body.product_name }}",
"type": "string"
},
{
"id": "product_image_url",
"name": "product_image_url",
"value": "={{ $json.body.product_image_url }}",
"type": "string"
},
{
"id": "variations",
"name": "variations",
"value": "={{ $json.body.variations || 3 }}",
"type": "number"
},
{
"id": "brand_style",
"name": "brand_style",
"value": "={{ $json.body.brand_guidelines?.style || 'modern' }}",
"type": "string"
},
{
"id": "notification_email",
"name": "notification_email",
"value": "={{ $json.body.notification_email }}",
"type": "string"
}
]
}
},
"id": "process-input",
"name": "Process Input",
"type": "n8n-nodes-base.set",
"typeVersion": 3,
"position": [
460,
300
]
},
{
"parameters": {
"url": "={{ $('Process Input').item.json.product_image_url }}",
"options": {
"response": {
"response": {
"responseFormat": "file"
}
}
}
},
"id": "download-image",
"name": "Download Product Image",
"type": "n8n-nodes-base.httpRequest",
"typeVersion": 4,
"position": [
680,
300
]
},
{
"parameters": {
"jsCode": "const productName = $input.first().json.product_name;\nconst brandStyle = $input.first().json.brand_style;\nconst variations = $input.first().json.variations;\n\nconst characterTypes = [\n {\n age: 'young adult',\n ethnicity: 'caucasian',\n gender: 'female',\n setting: 'modern studio',\n expression: 'confident and happy'\n },\n {\n age: 'middle-aged',\n ethnicity: 'hispanic',\n gender: 'female', \n setting: 'home bathroom',\n expression: 'satisfied and glowing'\n },\n {\n age: 'young adult',\n ethnicity: 'asian',\n gender: 'male',\n setting: 'lifestyle setting',\n expression: 'surprised and pleased'\n }\n];\n\nconst prompts = [];\n\nfor (let i = 0; i < Math.min(variations, characterTypes.length); i++) {\n const char = characterTypes[i];\n const prompt = `Create a photorealistic image of a ${char.age} ${char.ethnicity} ${char.gender} in a ${char.setting}, looking ${char.expression} while holding or using ${productName}. The person should be well-lit with professional photography lighting, ${brandStyle} aesthetic, high resolution, magazine quality. The product should be clearly visible and prominent in the scene.`;\n \n prompts.push({\n variation_id: i + 1,\n character_type: `${char.age}_${char.ethnicity}_${char.gender}`,\n prompt: prompt,\n product_name: productName\n });\n}\n\nreturn prompts;"
},
"id": "generate-prompts",
"name": "Generate Character Prompts",
"type": "n8n-nodes-base.code",
"typeVersion": 2,
"position": [
900,
300
]
},
{
"parameters": {
"url": "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent",
"authentication": "genericCredentialType",
"genericAuthType": "queryAuth",
"sendHeaders": true,
"headerParameters": {
"parameters": [
{
"name": "Content-Type",
"value": "application/json"
}
]
},
"sendBody": true,
"contentType": "json",
"jsonBody": "={\n \"contents\": [\n {\n \"parts\": [\n {\n \"text\": \"{{ $json.prompt }}\"\n }\n ]\n }\n ],\n \"generationConfig\": {\n \"temperature\": 0.7,\n \"topK\": 40,\n \"topP\": 0.95,\n \"maxOutputTokens\": 1024\n }\n}",
"options": {
"timeout": 30000
}
},
"id": "google-ai-generate",
"name": "Google AI - Generate Content",
"type": "n8n-nodes-base.httpRequest",
"typeVersion": 4,
"position": [
1120,
300
],
"credentials": {
"queryAuth": {
"id": "google-ai-creds",
"name": "Google AI API"
}
}
},
{
"parameters": {
"url": "https://aistudio.googleapis.com/v1beta/images:generate",
"authentication": "queryAuth",
"genericAuthType": "httpHeaderAuth",
"sendHeaders": true,
"headerParameters": {
"parameters": [
{
"name": "Content-Type",
"value": "application/json"
}
]
},
"sendBody": true,
"contentType": "json",
"jsonBody": "{\n \"prompt\": {\n \"text\": \"{{$json.prompt}}\"\n },\n \"imageGenerationConfig\": {\n \"numberOfImages\": 1,\n \"imageSize\": \"1024x1024\"\n }\n }",
"options": {
"bodyContentType": "json"
},
"queryParametersUi": {
"parameter": [
{
"name": "key",
"value": "={{$credentials.google-ai-creds.key}}"
}
]
}
},
"id": "ideogram-generate",
"name": "Nano Banana - Generate Images",
"type": "n8n-nodes-base.httpRequest",
"typeVersion": 4,
"position": [
1340,
300
],
"credentials": {
"httpHeaderAuth": {
"id": "ideogram-creds",
"name": "Ideogram API"
}
}
},
{
"parameters": {
"jsCode": "const items = $input.all();\nconst processedImages = [];\n\nfor (const item of items) {\n // Check if image generation was successful\n if (item.json && (item.json.data || item.json.images)) {\n const qualityScore = Math.random() * 0.4 + 0.6; // Simulated quality score\n const imageUrl = item.json.data?.[0]?.url || item.json.images?.[0]?.url;\n \n processedImages.push({\n variation_id: item.json.variation_id,\n character_type: item.json.character_type,\n image_url: imageUrl,\n quality_score: qualityScore,\n approval_status: qualityScore > 0.8 ? 'auto_approved' : 'needs_review',\n generated_at: new Date().toISOString(),\n prompt_used: item.json.prompt,\n raw_response: item.json\n });\n }\n}\n\nif (processedImages.length === 0) {\n return [{\n error: 'No images were successfully generated',\n raw_data: items.map(item => item.json)\n }];\n}\n\nreturn processedImages;"
},
"id": "quality-assessment",
"name": "Quality Assessment",
"type": "n8n-nodes-base.code",
"typeVersion": 2,
"position": [
1560,
300
]
},
{
"parameters": {
"conditions": {
"options": {
"caseSensitive": true
},
"conditions": [
{
"leftValue": "={{ $json.approval_status }}",
"rightValue": "auto_approved",
"operator": {
"type": "string",
"operation": "equals"
}
}
]
}
},
"id": "approval-gate",
"name": "Auto Approval Gate",
"type": "n8n-nodes-base.if",
"typeVersion": 2,
"position": [
1780,
300
]
},
{
"parameters": {
"assignments": {
"assignments": [
{
"id": "status",
"name": "status",
"value": "approved",
"type": "string"
}
]
}
},
"id": "mark-approved",
"name": "Mark as Approved",
"type": "n8n-nodes-base.set",
"typeVersion": 3,
"position": [
2000,
200
]
},
{
"parameters": {
"assignments": {
"assignments": [
{
"id": "status",
"name": "status",
"value": "pending_review",
"type": "string"
}
]
}
},
"id": "mark-pending",
"name": "Mark as Pending Review",
"type": "n8n-nodes-base.set",
"typeVersion": 3,
"position": [
2000,
400
]
},
{
"parameters": {
"mode": "combine",
"combinationMode": "multiplex"
},
"id": "merge-approved",
"name": "Merge All Approved",
"type": "n8n-nodes-base.merge",
"typeVersion": 2,
"position": [
2220,
300
]
},
{
"parameters": {
"url": "https://api.klingai.com/v1/videos/text2video",
"authentication": "genericCredentialType",
"genericAuthType": "httpHeaderAuth",
"sendHeaders": true,
"headerParameters": {
"parameters": [
{
"name": "Content-Type",
"value": "application/json"
}
]
},
"sendBody": true,
"contentType": "json",
"jsonBody": "={\n \"prompt\": \"Create a professional UGC-style video ad showing {{ $json.character_type.replace(/_/g, ' ') }} using {{ $('Process Input').item.json.product_name }}. The person should demonstrate the product naturally, with authentic reactions and movements. Style: {{ $('Process Input').item.json.brand_style }}, duration: 15 seconds, high quality, engaging content for social media.\",\n \"image\": \"{{ $json.image_url }}\",\n \"duration\": \"15\",\n \"aspect_ratio\": \"9:16\",\n \"model\": \"kling-v1\"\n}",
"options": {
"timeout": 60000
}
},
"id": "kling-video-generation",
"name": "Kling AI - Generate Videos",
"type": "n8n-nodes-base.httpRequest",
"typeVersion": 4,
"position": [
2440,
300
],
"credentials": {
"httpHeaderAuth": {
"id": "kling-ai-creds",
"name": "Kling AI API"
}
}
},
{
"parameters": {
"amount": 45,
"unit": "seconds"
},
"id": "wait-processing",
"name": "Wait for Video Processing",
"type": "n8n-nodes-base.wait",
"typeVersion": 1,
"position": [
2660,
300
]
},
{
"parameters": {
"url": "=https://api.klingai.com/v1/videos/{{ $json.task_id || $json.id || $json.video_id }}",
"authentication": "genericCredentialType",
"genericAuthType": "httpHeaderAuth",
"options": {
"timeout": 30000
}
},
"id": "check-video-status",
"name": "Check Video Status",
"type": "n8n-nodes-base.httpRequest",
"typeVersion": 4,
"position": [
2880,
300
],
"credentials": {
"httpHeaderAuth": {
"id": "kling-ai-creds",
"name": "Kling AI API"
}
}
},
{
"parameters": {
"conditions": {
"options": {
"caseSensitive": true
},
"conditions": [
{
"leftValue": "={{ $json.status || $json.state }}",
"rightValue": "completed",
"operator": {
"type": "string",
"operation": "equals"
}
}
]
}
},
"id": "video-ready-check",
"name": "Video Ready Check",
"type": "n8n-nodes-base.if",
"typeVersion": 2,
"position": [
3100,
300
]
},
{
"parameters": {
"jsCode": "const completedVideos = $input.all();\nconst productName = $('Process Input').item.json.product_name;\n\nif (completedVideos.length === 0) {\n return [{\n error: 'No videos completed successfully',\n product_name: productName,\n success: false\n }];\n}\n\nconst results = {\n product_name: productName,\n total_videos: completedVideos.length,\n videos: completedVideos.map((video, index) => {\n const videoData = video.json;\n return {\n variation_id: videoData.variation_id || index + 1,\n character_type: videoData.character_type,\n video_url: videoData.video_url || videoData.result?.video_url || videoData.url,\n thumbnail_url: videoData.thumbnail_url || videoData.result?.thumbnail_url,\n quality_score: videoData.quality_score || 0.8,\n generated_at: new Date().toISOString(),\n status: videoData.status || 'completed'\n };\n }),\n processing_completed_at: new Date().toISOString(),\n estimated_cost: completedVideos.length * 0.12,\n success: true,\n webhook_url: $execution.resumeUrl\n};\n\nreturn [results];"
},
"id": "compile-results",
"name": "Compile Final Results",
"type": "n8n-nodes-base.code",
"typeVersion": 2,
"position": [
3320,
200
]
},
{
"parameters": {
"respondWith": "json",
"responseBody": "={{ $json }}"
},
"id": "webhook-response",
"name": "Send Final Response",
"type": "n8n-nodes-base.respondToWebhook",
"typeVersion": 1,
"position": [
3540,
200
]
},
{
"parameters": {
"amount": 30,
"unit": "seconds"
},
"id": "retry-wait",
"name": "Wait Before Retry",
"type": "n8n-nodes-base.wait",
"typeVersion": 1,
"position": [
3320,
400
]
},
{
"parameters": {
"jsCode": "const failedItems = $input.all();\n\nconst errorResults = {\n product_name: $('Process Input').item.json.product_name,\n error: 'Video generation failed or timed out',\n failed_items: failedItems.length,\n success: false,\n processing_failed_at: new Date().toISOString(),\n debug_info: failedItems.map(item => ({\n status: item.json.status,\n error: item.json.error,\n task_id: item.json.task_id\n }))\n};\n\nreturn [errorResults];"
},
"id": "handle-failures",
"name": "Handle Video Failures",
"type": "n8n-nodes-base.code",
"typeVersion": 2,
"position": [
3320,
500
]
}
],
"pinData": {},
"connections": {
"Webhook Trigger": {
"main": [
[
{
"node": "Process Input",
"type": "main",
"index": 0
}
]
]
},
"Process Input": {
"main": [
[
{
"node": "Download Product Image",
"type": "main",
"index": 0
}
]
]
},
"Download Product Image": {
"main": [
[
{
"node": "Generate Character Prompts",
"type": "main",
"index": 0
}
]
]
},
"Generate Character Prompts": {
"main": [
[
{
"node": "Google AI - Generate Content",
"type": "main",
"index": 0
}
]
]
},
"Google AI - Generate Content": {
"main": [
[
{
"node": "Ideogram - Generate Images",
"type": "main",
"index": 0
}
]
]
},
"Ideogram - Generate Images": {
"main": [
[
{
"node": "Quality Assessment",
"type": "main",
"index": 0
}
]
]
},
"Quality Assessment": {
"main": [
[
{
"node": "Auto Approval Gate",
"type": "main",
"index": 0
}
]
]
},
"Auto Approval Gate": {
"main": [
[
{
"node": "Mark as Approved",
"type": "main",
"index": 0
}
],
[
{
"node": "Mark as Pending Review",
"type": "main",
"index": 0
}
]
]
},
"Mark as Approved": {
"main": [
[
{
"node": "Merge All Approved",
"type": "main",
"index": 0
}
]
]
},
"Mark as Pending Review": {
"main": [
[
{
"node": "Merge All Approved",
"type": "main",
"index": 1
}
]
]
},
"Merge All Approved": {
"main": [
[
{
"node": "Kling AI - Generate Videos",
"type": "main",
"index": 0
}
]
]
},
"Kling AI - Generate Videos": {
"main": [
[
{
"node": "Wait for Video Processing",
"type": "main",
"index": 0
}
]
]
},
"Wait for Video Processing": {
"main": [
[
{
"node": "Check Video Status",
"type": "main",
"index": 0
}
]
]
},
"Check Video Status": {
"main": [
[
{
"node": "Video Ready Check",
"type": "main",
"index": 0
}
]
]
},
"Video Ready Check": {
"main": [
[
{
"node": "Compile Final Results",
"type": "main",
"index": 0
}
],
[
{
"node": "Wait Before Retry",
"type": "main",
"index": 0
}
]
]
},
"Wait Before Retry": {
"main": [
[
{
"node": "Check Video Status",
"type": "main",
"index": 0
}
]
]
},
"Compile Final Results": {
"main": [
[
{
"node": "Send Final Response",
"type": "main",
"index": 0
}
]
]
},
"Handle Video Failures": {
"main": [
[
{
"node": "Send Final Response",
"type": "main",
"index": 0
}
]
]
}
},
"createdAt": "2025-01-01T00:00:00.000Z",
"updatedAt": "2025-01-01T00:00:00.000Z",
"settings": {
"executionOrder": "v1"
},
"staticData": null,
"tags": [
"ugc",
"automation",
"ai-video"
],
"triggerCount": 0,
"versionId": "1"
}
```

How to Import in n8n
- Go to your n8n dashboard → Workflows → Import from File
- Upload the JSON
- Add credentials:
  - google-ai-creds → Query Auth (key=YOUR_GOOGLE_AI_KEY)
  - kling-ai-creds → Header Auth (Authorization: Bearer YOUR_KLING_KEY)
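After importing and adding the credentials, a quick way to smoke-test the flow is to POST to the webhook path defined in the JSON. A minimal sketch, assuming your n8n host and the placeholder payload values below (production webhook URLs in n8n take the form https://&lt;host&gt;/webhook/&lt;path&gt;):

```javascript
// Smoke test for the UGC factory webhook (Node 18+, run as an ES module; host and values are placeholders).
const res = await fetch("https://your-n8n-host/webhook/ugc-factory", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    product_name: "Glow Serum",
    product_image_url: "https://example.com/images/glow-serum.png",
    variations: 2,
    brand_guidelines: { style: "modern" },
  }),
});

console.log(await res.json()); // structured results: video URLs, thumbnails, cost estimate
```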
What are the key stages of the workflow?
1. Webhook Trigger
Receives a request with the product details (like name, image URL, and style) to kick off the workflow.
2. Process Input
Cleans up the incoming data, applies defaults (for example, number of variations or style), and validates that everything is in the right format.
3. Download Product Image
Fetches the product image file from the provided URL so it can be passed through later steps.
4. Generate Character Prompts
Creates descriptive prompts for different character variations (e.g., age, gender, setting) to guide image generation.
5. Google AI Studio (Gemini)
Refines those prompts into polished, brand-safe instructions that image generators can use effectively.
6. Image Generation (Nano Banana – Gemini 2.5 Flash Image)
Takes the refined prompts and produces photorealistic images of the product being used in context, with unmatched character consistency.
7. Quality Assessment
Reviews the generated images, attaches the original metadata (like which variation they belong to), and assigns a quality score.
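Because each HTTP Request node replaces the incoming item with the provider's raw response, fields like variation_id and character_type are easy to lose on the way to this step (the Troubleshooting section below calls this out). One way to re-attach them, sketched as a Code node that pairs responses with the original prompt items by index:

```javascript
// Re-attach prompt metadata to each image-generation response, matching items by index.
// Assumes the earlier Code node is still named "Generate Character Prompts".
const prompts = $('Generate Character Prompts').all();

return $input.all().map((item, index) => ({
  json: {
    ...(prompts[index] ? prompts[index].json : {}), // variation_id, character_type, prompt
    ...item.json,                                   // raw image-generation response
  },
}));
```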
8. Auto Approval Gate
Automatically passes high-quality images into the next stage, while setting lower-scoring ones aside for review if needed.
9. Kling AI Video Generation
Transforms approved images into short UGC-style videos (for example, 15 seconds, vertical 9:16 format).
10. Normalize Kling ID
Ensures the correct video ID is captured so the workflow can keep track of each video during processing.
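In the exported JSON this normalization is folded into the status-check URL ($json.task_id || $json.id || $json.video_id). As a standalone Code node it could look like the sketch below, assuming Kling returns the identifier under one of those keys; the data.task_id fallback is an extra guess for wrapped responses:

```javascript
// Normalize whichever field Kling AI used for the task identifier into task_id.
return $input.all().map((item) => {
  const body = item.json;
  const taskId = body.task_id || body.data?.task_id || body.id || body.video_id;

  if (!taskId) {
    // Surface a clear error instead of polling with an undefined id.
    throw new Error('No task id found in Kling AI response');
  }

  return { json: { ...body, task_id: taskId } };
});
```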
11. Wait and Check Video Status
Polls Kling AI to see if the video is ready. Retries up to a set limit if it’s still processing.
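As exported, the Wait → Check Video Status loop has no explicit cap. A small counter kept on each item and incremented before every status check is enough to bound it; a sketch, where MAX_ATTEMPTS is an arbitrary illustrative value:

```javascript
// Count polling attempts so the loop can give up instead of spinning forever.
const MAX_ATTEMPTS = 10; // roughly 10 checks with 30-45s waits; tune to Kling's typical render time

return $input.all().map((item) => {
  const attempts = (item.json.poll_attempts || 0) + 1;
  return {
    json: {
      ...item.json,
      poll_attempts: attempts,
      give_up: attempts >= MAX_ATTEMPTS,
    },
  };
});
```

An IF node on give_up can then route exhausted items to Handle Video Failures, which the exported workflow already wires into the final webhook response.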
12. Compile Final Results
Bundles all completed videos into one structured response, including URLs, thumbnails, quality scores, and cost estimates.
13. Respond to Webhook
Sends the finished results back to the caller in a clean, ready-to-use format.
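The response shape follows the Compile Final Results node; a trimmed illustration of what the caller receives (URLs and numbers are placeholders):

```json
{
  "product_name": "Glow Serum",
  "total_videos": 1,
  "videos": [
    {
      "variation_id": 1,
      "character_type": "young adult_caucasian_female",
      "video_url": "https://cdn.example.com/ugc/glow-serum-v1.mp4",
      "thumbnail_url": "https://cdn.example.com/ugc/glow-serum-v1.jpg",
      "quality_score": 0.87,
      "status": "completed"
    }
  ],
  "estimated_cost": 0.12,
  "success": true
}
```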
Configuration Matrix
| Setting | Where | Example | Notes |
|---|---|---|---|
| product_image_url | Webhook body | https://…/serum.png | Prefer ≥1024×1024 |
| variations | Webhook body | 1–3 | Controls character variety |
| brand_guidelines.style | Webhook body | modern | Drives aesthetics in prompts |
| Gemini key | Credential | queryAuth: key | Google AI Studio (Nano Banana) |
| Kling key | Credential | Authorization: Bearer | Required for video generation |
Optional Variants
- Swap Image Provider: DALL·E 3 (OpenAI) or Stable Diffusion (Replicate) with the same node shape.
- Manual Review: Send preview images to Slack/Email for one-click approve/deny.
- Deferred Mode: Respond immediately with `job_id`, then post results to a callback URL when done.
- Persistent Characters: Save a traits/seed object to render the same model across campaigns (see the sketch below).
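For the persistent-characters variant, the character objects already defined in the Generate Character Prompts node are the natural thing to persist; a stored traits record might look like this (the seed field is only useful if your image provider accepts one):

```json
{
  "character_id": "brand_hero_01",
  "traits": {
    "age": "young adult",
    "ethnicity": "caucasian",
    "gender": "female",
    "setting": "modern studio",
    "expression": "confident and happy"
  },
  "seed": 428190
}
```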
Costs & Runtime
- Images: $0.02–$0.08 per variation (Nano Banana dependent)
- Videos: $0.10–$0.25 per 15s 9:16 (Kling AI dependent)
- Runtime: 1–3 minutes end-to-end, depending on queue & retries
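As a rough worked example within those ranges, a three-variation run costs somewhere between 3 × ($0.02 + $0.10) = $0.36 and 3 × ($0.08 + $0.25) = $0.99 per product, before retries.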
Troubleshooting
- Poll never completes: Increase max attempts slightly or add a longer wait; verify `video_id` normalization.
- Lost metadata (variation/character): Ensure the QC node re-attaches original fields after HTTP nodes.
- Blurry / off-brand images: Tighten lighting/composition directives; set a minimum resolution.
- Provider outages: Add a fallback image-gen provider and branch on HTTP status.
- Webhook timeout: Move to deferred mode for scale.
What results can content creators expect?
- Speed: From hours/days → minutes.
- Cost: From $500+ per UGC video → pennies.
- Consistency: Same product/character across all assets.
- Scalability: Run for hundreds of products concurrently on cloud-hosted n8n.
This isn’t a one-off demo. It’s a repeatable UGC Ad Factory content creators can trust to deliver at campaign scale.
FAQ
Do I need more than one product photo?
With this n8n workflow, you only need a single product image URL. The pipeline automatically generates photorealistic models (via Nano Banana) and then turns them into dynamic short videos using Kling AI.
Can n8n generate videos on its own?
n8n doesn’t generate videos by itself, but it orchestrates the process. By chaining APIs like Nano Banana (image generation) and Kling AI (video generation), n8n becomes the central automation hub that delivers ready-to-publish product videos.
How does a single product image become a video ad?
Upload your product image → the workflow creates realistic variations with consistent characters → sends them to Kling AI → outputs a 15-second 9:16 product video with thumbnails and metadata.
Why do the results feel like authentic UGC?
This workflow combines AI image generators with AI video models. The product image is reimagined in authentic, UGC-style scenes and Kling AI animates it into an engaging video ad, all automated via n8n.