Video Generation
OpenRouter supports video generation from text prompts (and optional reference images) via a dedicated asynchronous API. You can find the supported models, their capabilities, and pricing by filtering our model list by video output.
Model Discovery
You can find video generation models in several ways:
Via the Video Models API
Use the dedicated video models endpoint to list all available video generation models along with their supported parameters:
The response returns a data array in which each model entry lists its supported generation parameters.
Use this endpoint to check which resolutions, aspect ratios, and passthrough parameters are supported by each model before submitting a generation request.
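As a sketch, the models list can be fetched with a small helper. The `/videos/models` path is an assumption inferred from this guide's other endpoints; confirm it against the current API reference:

```python
import json
import urllib.request

API_BASE = "https://openrouter.ai/api/v1"

def video_models_url() -> str:
    # Assumed path for the dedicated video models endpoint;
    # verify against the API reference.
    return f"{API_BASE}/videos/models"

def list_video_models(api_key: str) -> list:
    """Return the data array of video models and their supported parameters."""
    req = urllib.request.Request(
        video_models_url(),
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]
```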
Via the Models API
You can also use the output_modalities query parameter on the Models API to discover video generation models:
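For example, the filtered query can be built like this (the `output_modalities` parameter name comes from this guide):

```python
from urllib.parse import urlencode

# Filter the general Models API down to models that can emit video.
query = urlencode({"output_modalities": "video"})
models_url = f"https://openrouter.ai/api/v1/models?{query}"
```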
On the Models Page
Visit the Models page and filter by output modalities to find models capable of video generation. Look for models that list "video" in their output modalities.
How It Works
Unlike text or image generation, video generation is asynchronous because generating video takes significantly longer. The workflow is:
- Submit a generation request to `POST /api/v1/videos`
- Receive a job ID and polling URL immediately
- Poll the polling URL (`GET /api/v1/videos/{jobId}`) until the status is `completed`
- Download the video from the content URL (`GET /api/v1/videos/{jobId}/content`)
API Usage
Submitting a Video Generation Request
Request Parameters
Supported Resolutions
- `480p`
- `720p`
- `1080p`
- `1K`
- `2K`
- `4K`
Supported Aspect Ratios
- `16:9` — Widescreen landscape
- `9:16` — Vertical/portrait
- `1:1` — Square
- `4:3` — Standard landscape
- `3:4` — Standard portrait
- `21:9` — Ultra-wide
- `9:21` — Ultra-tall
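A minimal request body using these parameters might look as follows. The `resolution` and `aspect_ratio` field names mirror the headings above but are assumptions; check the model's supported parameters via the Video Models API:

```python
payload = {
    "model": "google/veo-3.1",  # any model whose output modalities include "video"
    "prompt": (
        "A slow aerial shot over a foggy coastline at sunrise, "
        "soft golden lighting, gentle camera push-in"
    ),
    "resolution": "1080p",   # must be a resolution the model supports
    "aspect_ratio": "16:9",  # widescreen landscape
}
```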
Using Images
There are two ways to provide images, each triggering a different generation mode:
- `frame_images` — Specifies first or last frame images for image-to-video generation. Each entry must include a `frame_type` of `first_frame` or `last_frame`.
- `input_references` — Provides style or content reference images for reference-to-video generation. The model uses these as visual guidance rather than exact frames.
If both fields are provided, `frame_images` takes precedence and the request is treated as image-to-video.
Image-to-Video (frame_images)
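A sketch of an image-to-video request. The `frame_type` values come from this guide; the `image_url` key is an assumption to verify against the API reference:

```python
payload = {
    "model": "google/veo-3.1",
    "prompt": "The landscape slowly transitions from day to night",
    "frame_images": [
        # first_frame anchors the opening of the clip
        {"frame_type": "first_frame", "image_url": "https://example.com/day.png"},
        # last_frame anchors the closing of the clip
        {"frame_type": "last_frame", "image_url": "https://example.com/night.png"},
    ],
}
```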
Reference-to-Video (input_references)
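A sketch of a reference-to-video request; as above, the `image_url` key is an assumed field name:

```python
payload = {
    "model": "google/veo-3.1",
    "prompt": "A character in this art style walks through a busy market",
    "input_references": [
        # Used as visual guidance, not as exact frames
        {"image_url": "https://example.com/style-reference.png"},
    ],
}
```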
Provider-Specific Options
You can pass provider-specific options using the provider parameter. Options are keyed by provider slug, and only the options for the matched provider are forwarded:
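Sketch of the shape; the provider slug and the passthrough option shown are illustrative placeholders, not real option names:

```python
payload = {
    "model": "google/veo-3.1",
    "prompt": "A timelapse of clouds over a mountain range",
    "provider": {
        # Keyed by provider slug; only the matched provider's options are forwarded.
        "google": {
            "example_passthrough_param": "value",  # hypothetical option
        },
    },
}
```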
Use the Video Models API to check which passthrough parameters each model supports via the allowed_passthrough_parameters field.
Response Format
Submit Response (202 Accepted)
When you submit a video generation request, you receive an immediate response with the job details:
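Illustratively, the 202 response carries the job ID and polling URL, expressed here as a Python dict; the exact field names are assumptions to confirm against the API reference:

```python
submit_response = {
    "id": "videogen-abc123",  # job ID (hypothetical value)
    "status": "pending",
    # Polling URL for the job (step 3 of the workflow); assumed field name
    "polling_url": "https://openrouter.ai/api/v1/videos/videogen-abc123",
}
```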
Poll Response
When polling the job status, the response includes additional fields as the job progresses:
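Illustratively, a finished job might look like this; `status`, `unsigned_urls`, and `error` are named elsewhere in this guide, while the remaining details are assumptions:

```python
poll_response = {
    "id": "videogen-abc123",
    "status": "completed",  # or "failed"; earlier polls return an in-progress status
    "unsigned_urls": [
        "https://example.com/output-0.mp4",  # hypothetical download URL
    ],
    "error": None,  # populated when status is "failed"
}
```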
Job Statuses
Downloading the Video
Once the job status is `completed`, the `unsigned_urls` array contains URLs to download the generated video content. You can also use the content endpoint directly:
The `index` query parameter defaults to `0` and can be used if the model generates multiple video outputs.
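A small helper for the content endpoint, grounded in the path and `index` default described above:

```python
def content_url(job_id: str, index: int = 0) -> str:
    # index selects among multiple outputs when the model returns more than one video
    return f"https://openrouter.ai/api/v1/videos/{job_id}/content?index={index}"
```

For example, `content_url("videogen-abc123")` addresses the first (and usually only) output.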
Webhooks
Instead of polling for job status, you can receive a webhook notification when a video generation job completes. There are two ways to configure a callback URL:
- Per-request: Pass `callback_url` in the request body. This takes priority over the workspace default.
- Workspace default: Set a default callback URL in your workspace settings. This applies to all video generation requests that don't specify their own `callback_url`.
Webhook Payload
When the job completes (or fails), a POST request is sent to the callback URL with the job result:
Signing Secret
You can configure a signing secret in your workspace settings to verify that webhook payloads are authentically from OpenRouter. When a signing secret is configured, each webhook delivery includes an X-OpenRouter-Signature header.
The signature includes a timestamp and an HMAC hash:
Verifying Signatures
To verify the signature on your webhook receiver:
- Extract the timestamp (`t`) and signature hash (`v1`) from the header
- Construct the signed payload: `{timestamp},{raw_request_body}` (joined with a comma)
- Compute the HMAC-SHA256 of the signed payload using your signing secret as the key
- Compare the hex-encoded result with the `v1` value
Use the raw request body (the exact bytes received) for verification. Parsing and re-serializing JSON may change key ordering or number formatting, which will cause verification to fail.
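These steps can be implemented in a few lines of Python. The `t=...,v1=...` header layout is inferred from the field names described above; confirm the exact format against the webhook documentation:

```python
import hashlib
import hmac

def verify_signature(signature_header: str, raw_body: bytes, secret: str) -> bool:
    # 1. Extract t (timestamp) and v1 (hash) from a header like "t=1700000000,v1=ab12..."
    parts = dict(item.split("=", 1) for item in signature_header.split(","))
    timestamp, received = parts["t"], parts["v1"]
    # 2. Signed payload is "{timestamp},{raw_request_body}" -- use the exact bytes received
    signed_payload = f"{timestamp},".encode() + raw_body
    # 3. HMAC-SHA256 with the signing secret as the key
    expected = hmac.new(secret.encode(), signed_payload, hashlib.sha256).hexdigest()
    # 4. Constant-time comparison against the v1 value
    return hmac.compare_digest(expected, received)
```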
Best Practices
- Detailed Prompts: Provide specific, descriptive prompts for better video quality. Include details about motion, camera angles, lighting, and scene composition
- Appropriate Resolution: Higher resolutions take longer to generate and cost more. Choose the resolution that fits your use case
- Polling Interval: Use a reasonable polling interval (e.g., 30 seconds) to avoid excessive API calls. Video generation typically takes 30 seconds to several minutes depending on the model and parameters
- Error Handling: Always check the job status for the `failed` state and handle the `error` field appropriately
- Reference Images: When using reference images, ensure they are high quality and relevant to the desired video output
Zero Data Retention
Video generation is not eligible for Zero Data Retention (ZDR). Because video generation is asynchronous, the generated video output must be retained by the provider for a short period of time so that it can be retrieved after generation is complete. This temporary retention is inherent to the async polling workflow and cannot be bypassed.
If you have ZDR enforcement enabled (either via account settings or the per-request zdr parameter), video generation requests will not be routed.
Troubleshooting
Job stays in pending for a long time?
- Video generation can take several minutes depending on the model, resolution, and server load
- Continue polling at regular intervals
Generation failed?
- Check the `error` field in the poll response for details
- Verify the model supports video generation (`output_modalities` includes `"video"`)
- Ensure your prompt is appropriate and within model guidelines
- Check that any reference images are accessible and in supported formats
Model not found?
- Use the Video Models API or the Models page to find available video generation models
- Verify the model slug is correct (e.g., `google/veo-3.1`)