Pika Labs’ powerful AI video generator is now open to everyone



Summary

Pika Labs’ AI video generator is now available to all users.

The app is available on Discord and, more recently, on the web. The web version offers a much more intuitive interface with many features. For example, you can prompt a video with an image, with an image plus text, or with a video combined with an image and text.

A common workflow is to create a high-quality image with an AI image generator and then animate it in Pika Labs. You upload the image and describe the desired motion effect in a short text prompt.

There are also options for camera movement, aspect ratio, and how closely the video should follow the text prompt. You can also edit parts of a generated video or expand the video canvas to a new aspect ratio, e.g. from 1:1 to 16:9.
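The canvas expansion from 1:1 to 16:9 comes down to simple arithmetic: the model has to outpaint extra width on both sides of the original frame. A minimal sketch of that calculation (the function name and frame sizes are illustrative, not Pika's actual implementation):

```python
def expand_canvas(width: int, height: int, target_ratio: float = 16 / 9):
    """Compute the new frame width and the per-side padding needed to
    expand a video canvas to a wider target aspect ratio."""
    target_width = round(height * target_ratio)
    pad_per_side = (target_width - width) // 2  # outpainted on each side
    return target_width, pad_per_side

# A 1024x1024 (1:1) clip expanded to 16:9:
print(expand_canvas(1024, 1024))  # → (1820, 398)
```

In other words, roughly 40 percent of the final 16:9 frame is newly generated content, which is why results are best when the subject sits near the center of the original square video.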


Generate videos up to 15 seconds long

Videos run at 8 to 24 frames per second and are four seconds long by default. With two clicks, they can be extended in four-second increments up to 15 seconds. The longer a scene, the harder it is for the model to maintain consistency, as the following example shows.

“A red-bearded guy in a jiu-jitsu fight with a cow” | Video: Pika v1.0 prompted by THE DECODER
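The extension mechanic means clip lengths follow a short, capped sequence. A quick sketch of the arithmetic (purely illustrative, not Pika's code):

```python
def clip_lengths(default: int = 4, step: int = 4, cap: int = 15):
    """Possible clip lengths (in seconds) when extending a default-length
    clip in fixed increments up to a hard cap."""
    lengths = [default]
    while lengths[-1] < cap:
        lengths.append(min(lengths[-1] + step, cap))
    return lengths

print(clip_lengths())  # → [4, 8, 12, 15]
```

So a clip grows 4 → 8 → 12 seconds, with the final extension truncated to hit the 15-second ceiling.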

A better approach is to prompt many short, high-quality scenes and then cut them together into a longer video in a video editing program. The following video was created this way using Pika Labs.

Video: Pika Labs via X

The web version is available at pika.art. Check out the Explore section for many video examples from the community, including the prompts. For more inspiration, join the Discord community.


You can find an overview of the prompt parameters here, and more good tips in the Prompt Tutorial on Discord.

Video AI generation moves fast

Pika Labs is one of the best-funded video AI startups alongside RunwayML, but is still in its infancy. It recently raised $55 million in pre-seed and seed rounds led by Nat Friedman and Daniel Gross, and a Series A round led by Lightspeed Venture Partners.

Stability AI is also entering the AI video market with Stable Video Diffusion, and Google with its video language model VideoPoet. Meta has also demonstrated a text-to-video model, Emu Video, which significantly outperformed all commercial offerings in user preference studies.


