How to Make AI Videos with Runway (Step-by-Step Tutorial)

I still remember the first time I saw what Runway could do. My friend sent me a video of a cat walking through a living room, except the cat was now a dragon and the living room was a castle. I watched it like six times in a row, trying to figure out if it was real or CGI.

Spoiler: it was Runway Gen-2.

That was in 2023. The platform’s gotten way more powerful since then, and honestly? The learning curve is way lower than you probably think. I taught myself the basics in an afternoon, and I still consider myself “tech curious” at best.

So here’s everything I wish someone had told me when I started.

What Even Is Runway?

Quick overview for the uninitiated: Runway is basically a web-based AI video generation platform. You give it text prompts or images, and it spits out video clips. No fancy hardware required, no download needed—just a browser and a dream.

They’ve got different tools for different things:

  • Gen-2 (and now Gen-3): Text-to-video and image-to-video
  • Motion Brush: Animate specific parts of a static image
  • Infinite Image: Extend images beyond their borders (cool for backgrounds)

There’s a free tier with limited credits, and paid plans if you want to go full creator mode.

Getting Started: The Stuff Before the Fun Stuff

Creating Your Account

Head to runwayml.com and click sign up. You can use email or sign in with Google. Took me maybe two minutes.

Fair warning: the interface has changed a few times since I’ve been using it, so some button names might look different now. The core functionality’s the same though.

The Dashboard Layout

Okay, this confused me at first, so let me break it down:

Left sidebar: Your projects, saved generations, templates
Main area: Where the magic happens (generation tools)
Top right: Your account info, credits remaining

Once you’re logged in, look for “Try Models” or “Create New” — that’s where the generation tools live.

The Basics: Text-to-Video

This is probably what you want to start with. Here’s how:
  1. Click on Gen-2 or Gen-3 (whatever’s currently available)
  2. You’ll see a text input field
  3. Type your prompt

Prompt Writing That Actually Works

Here’s my biggest discovery: vague prompts = boring results.

Instead of: “a dog running”

Try: “a golden retriever running through tall grass in slow motion, morning light, cinematic, the dog’s fur catching the sun”

The more specific, the better. Include:

  • Subject details (what’s in the frame)
  • Action/movement (what’s happening)
  • Environment (where is this)
  • Style/mood (how does it look)
  • Technical specs if you want (cinematic, portrait mode, etc.)
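
If it helps, here's that checklist as a tiny helper script. This is my own sketch, not anything built into Runway: it just assembles the pieces into one comma-separated prompt you can paste into the text field.

```python
# Hypothetical prompt builder: combines the checklist pieces above
# into a single prompt string. Empty pieces are skipped.
def build_prompt(subject, action, environment, mood, technical=""):
    parts = [subject, action, environment, mood, technical]
    return ", ".join(p for p in parts if p)

prompt = build_prompt(
    subject="a golden retriever",
    action="running through tall grass in slow motion",
    environment="open meadow, morning light",
    mood="warm and cinematic",
    technical="shallow depth of field",
)
print(prompt)
```

I keep a version of this in a notes file so every prompt hits all five categories instead of just the first two.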

Settings You’ll Want to Tweak

Motion Intensity: Low for subtle movement, high for dramatic stuff. I usually start at medium.

Seed: A starting number that controls the randomness. Reuse the same seed with the same prompt and you get a nearly identical video, which makes it useful for iterating on one shot.
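
If "seed" is a new concept, here's the general idea in plain Python (nothing Runway-specific): seeding a random generator makes its "random" output repeatable.

```python
import random

# Seed the generator, draw three "random" numbers...
random.seed(42)
first = [random.random() for _ in range(3)]

# ...then reset with the same seed and draw again.
random.seed(42)
second = [random.random() for _ in range(3)]

print(first == second)  # True: same seed, same sequence
```

Runway's seed works on the same principle, just applied to video generation instead of number sequences.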

Duration: Usually around 4 seconds per generation on the free tier. Longer clips and extensions if you upgrade.

Aspect Ratio: Choose based on where you’ll post. 16:9 for YouTube, 9:16 for TikTok/Reels, 1:1 for general use.
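
For planning exports, I find it handy to think in pixel dimensions. This little helper is my own (the platform names and the 1080p base are my choices, not Runway settings), but it shows how each ratio maps to a resolution.

```python
# Hypothetical helper: map a target platform to export dimensions,
# scaling so the short side is 1080 pixels.
RATIOS = {
    "youtube": (16, 9),
    "tiktok": (9, 16),
    "reels": (9, 16),
    "square": (1, 1),
}

def export_size(platform: str, short_side: int = 1080) -> tuple[int, int]:
    w, h = RATIOS[platform]
    scale = short_side / min(w, h)
    return (round(w * scale), round(h * scale))

print(export_size("youtube"))  # (1920, 1080)
print(export_size("tiktok"))   # (1080, 1920)
```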

The Image-to-Video Trick

This is where things get genuinely impressive. Instead of generating from scratch, you can upload an image and animate it.

Here’s my workflow:
  1. Generate a cool image in Midjourney or DALL-E (or upload one you have)
  2. In Runway, select “Image to Video”
  3. Upload your image
  4. Add a motion prompt if you want: “camera slowly pushing in” or “subject blinking and turning head”
  5. Hit generate

The quality and composition of your source image largely determine how good the output looks. Garbage in, garbage out.

I made this surreal short film for a friend’s birthday using like four generated images and some music. Cost me maybe $5 in credits total. Was it Oscar-worthy? Nah. But it made her cry happy tears, so.

The Motion Brush Feature (Game Changer)

Okay, this one blew my mind when I figured it out.

Instead of animating the whole image, you paint which parts you want to move. Let’s say you have a landscape photo and you just want the water to ripple and the leaves to sway. Paint those areas, set the motion direction, and hit generate.

It’s way more controllable than just hoping the AI guesses right about what should move.

I’ve used this for:

  • Making product photos look more dynamic
  • Adding subtle movement to illustrations
  • Creating b-roll for presentations

Things That’ll Go Wrong (And How to Fix Them)

The “AI nightmare” problem

Sometimes you’ll get results that are… cursed. Bodies that don’t quite work, objects that phase in and out of existence. It’s part of the game.

My fix: be more specific in your prompts, especially about what you DON’T want. “Realistic human hands” or “no people in frame” can help.

Consistency between clips

If you’re trying to make a longer video, you’ll notice the AI struggles to keep things consistent between generations. Same subject looks different in clip 2 vs clip 3.

Possible solutions:

  • Use the same seed
  • Use image-to-video with consistent source images
  • Use the new “Consistent Character” feature if available
  • Accept that AI video has an inherent “dreamlike” quality and lean into it stylistically

Credit management

The free tier runs out fast if you’re not careful. I learned this the hard way at 11 PM when I was in the middle of a creative burst.

Tip: batch your generations. Don’t generate one, wait, generate another. Do all your prompting, then generate a bunch in a row while you have credits.
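
The batching idea, sketched in code. To be clear, `generate` here is a stand-in for however you actually submit prompts (for me, pasting into the web UI); it is not a real Runway function. The point is the shape of the workflow: write everything first, then queue it all at once.

```python
# Write the whole batch of prompts up front...
prompts = [
    "a lighthouse in a storm, waves crashing, dramatic lighting",
    "macro shot of dew on a spiderweb, soft focus, dawn",
    "neon-lit alley in the rain, slow camera push-in, cinematic",
]

def generate(prompt: str) -> str:
    # Placeholder: in practice this step is you pasting the prompt
    # into Runway and hitting generate.
    return f"queued: {prompt}"

# ...then fire them off back to back while you still have credits.
results = [generate(p) for p in prompts]
for r in results:
    print(r)
```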

My Actual Creative Process

Here’s how I actually use Runway:

For social content:

  1. Write a batch of prompts during lunch
  2. Generate 10-15 options during coffee break
  3. Pick the best 3-4
  4. Edit together with CapCut or Premiere
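
If you don't want to open a full editor just to stitch clips together, ffmpeg can do a simple concat. This sketch writes the file list ffmpeg expects; the clip filenames are examples, and ffmpeg itself has to be installed separately.

```python
# Write an ffmpeg concat list for the clips you picked.
clips = ["clip1.mp4", "clip2.mp4", "clip3.mp4"]

with open("clips.txt", "w") as f:
    for clip in clips:
        f.write(f"file '{clip}'\n")

# Then, on the command line (clips must share codec/resolution
# for -c copy to work, which Runway outputs from one session do):
#   ffmpeg -f concat -safe 0 -i clips.txt -c copy final.mp4
```

For anything fancier than back-to-back clips, CapCut or Premiere is still the way to go.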

For presentations:

  1. Find relevant stock image
  2. Animate it with Motion Brush
  3. Add it to my slides
  4. Audience thinks I’m way more tech-savvy than I am

For fun/art projects:

  1. Dream up something weird
  2. Generate it
  3. Add dramatic music
  4. Post before I second-guess myself

Exporting and Formatting

When you’re happy with your generation:

  1. Click on the video to preview
  2. Look for download button (usually an arrow pointing down)
  3. Choose quality: Draft (fast, smaller file) or Full Quality
  4. Save to your device

For social media, I usually go 1080p minimum. For personal stuff, draft quality is fine.

The Reality Check

Let me be honest: Runway isn’t going to replace video production. The results are often wonky, the consistency is rough, and you can’t control everything.

But for creators who need quick b-roll, artists who want to bring static images to life, or anyone experimenting with AI art? It’s genuinely exciting technology.

I’ve used it for client presentations, personal art projects, birthday videos, and educational content. Not once did I think “man, I should’ve just hired a video crew.”

Sometimes good enough + fast + cheap beats perfect + slow + expensive.

Get Started

Honestly, just go make something weird. That’s the best way to learn.

Start with the free tier, play around, see what works. The interface is intuitive enough that you’ll figure it out as you go.

If you make something cool, honestly? I’d love to see it. Drop a comment below.

Now if you’ll excuse me, I’m gonna go try to make a video of my cat as a medieval knight.


Quick note: Runway updates frequently. This tutorial reflects my experience using the platform through early 2025. Interface and available features may have changed since then.