Wan 2.6
Wan 2.6 is Alibaba’s advanced multimodal video generation model, designed to create high-quality, audio-synchronized videos from text or image inputs. It generates videos up to 15 seconds long while maintaining strong narrative flow and visual consistency, and delivers smooth, realistic motion with cinematic camera movement and pacing. Native audio-visual synchronization keeps dialogue, sound effects, and background music aligned with the visuals, and precise lip-sync technology produces natural mouth movements. The model supports multiple output resolutions, including 480p, 720p, and 1080p, making it well-suited for short-form video content across social media platforms.
Learn more
Lyria 3 Clip
Lyria 3 Clip is a lightweight AI music generation capability within Google’s Lyria 3 ecosystem, focused on creating short-form audio tracks from prompts. It generates brief music clips, typically around 30 seconds, from text, image, or video inputs, turning creative ideas into complete soundtracks with vocals, lyrics, and instrumentals. The model is designed for fast, iterative creation, letting users experiment with different styles, moods, and genres. Integrated into platforms such as the Gemini app and developer tools, it is accessible to both creators and developers and requires no musical expertise to produce polished audio outputs. Overall, it offers a quick, intuitive way to generate short, high-quality music clips for creative projects.
Learn more
MediaPET
MediaPET is an AI-powered video advertising platform that turns business ideas into professional-quality video ads, automatically handling script generation, visuals, animation, audio, and editing. It offers over 100 animation styles, automated custom musical scores, advanced lip-syncing and voice cloning, and high-definition export in multiple aspect ratios. Rather than relying solely on prompt-based generation, MediaPET gives users control over key creative variables such as character, environment, and product consistency, and lets them supply reference images to maintain visual continuity across scenes. The platform also builds research-driven creative methodologies, including neurometric data, into the production process, and ads generated on it have been independently validated to deliver impact comparable to premium national-level campaigns at a substantially lower cost.
Learn more
AIReel
AIReel is an AI-powered video generation platform that automatically creates short-form videos from text prompts or uploaded images, with no traditional video editing skills required. It works as an all-in-one AI video creator: users describe an idea or upload an image, and the system generates a complete video with scenes, motion effects, and music. Under the hood, AIReel draws on multiple advanced generative video models, including engines comparable to Sora, Veo, and other multimodal AI systems, to turn text or images into dynamic visual content. Its dual-mode generation system supports both text-to-video and image-to-video workflows, so users can animate static photos or generate entirely new cinematic scenes from written prompts. A built-in prompt assistant helps refine simple ideas into more detailed instructions so the AI can produce higher-quality results.
Learn more