Watch the video:
0:00 - Intro/Explanation
1:05 - Models
1:40 - Use Wan2.1 Now
2:50 - Download Wan2.1
3:39 - Install Conda
4:20 - Installing Wan2.1
5:40 - Download Wan2.1 Models
6:58 - Starting Wan2.1 WebUI (Gradio)
8:09 - Generating video (WebUI/Gradio)
8:40 - Generate video with CLI
9:20 - The... Results...
Wan2.1 is popping up everywhere: a new video generator with good physics and more. Want to test it out? This guide explains how to download and use the official models with their WebUI (Gradio) interface. It’s simple.
Wan2.1: https://github.com/Wan-Video/Wan2.1
Use Wan2.1 Now: https://huggingface.co/spaces/Wan-AI/Wan2.1
WSL Setup (for Windows users): https://learn.microsoft.com/en-us/windows/wsl/install
Download Anaconda: https://docs.anaconda.com/miniconda/install/
Commands:
conda create -n wan python=3.11 -y && conda activate wan
conda install gcc_linux-64 gxx_linux-64 -y
conda install cuda -c nvidia -y
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126
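Before moving on, it's worth a quick sanity check that the CUDA-enabled PyTorch build actually sees your GPU (run this inside the activated wan environment):

```shell
# Should print the torch version and True; if it prints False,
# the CUDA wheel didn't install correctly or the driver isn't visible (e.g. WSL setup issue)
python -c "import torch; print(torch.__version__); print(torch.cuda.is_available())"
```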
pip install -r requirements.txt
pip install huggingface_hub (to download models)
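With huggingface_hub installed, one way to fetch the weights is the bundled CLI. A sketch for the 14B text-to-video checkpoint (repo id from the Wan-AI Hugging Face org; swap in the 1.3B repo if you want the smaller model):

```shell
# Download the T2V-14B weights into ./Wan2.1-T2V-14B
# (matches the --ckpt_dir used in the generation command below)
huggingface-cli download Wan-AI/Wan2.1-T2V-14B --local-dir ./Wan2.1-T2V-14B
```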
Use less GPU & offload to CPU: --offload_model True --t5_cpu
Generation command I used: python generate.py --task t2v-14B --size 832*480 --ckpt_dir ./Wan2.1-T2V-14B --prompt "A panda floating in space" --offload_model True --t5_cpu
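If the 14B model is still too heavy for your GPU, the 1.3B checkpoint is the lighter alternative. A sketch of the equivalent command (the --sample_shift 8 --sample_guide_scale 6 values follow the repo's suggested defaults for the small model; treat them as assumptions and check the Wan2.1 README):

```shell
# Lower-VRAM alternative: 1.3B text-to-video model at 480p
python generate.py --task t2v-1.3B --size 832*480 --ckpt_dir ./Wan2.1-T2V-1.3B \
  --offload_model True --t5_cpu --sample_shift 8 --sample_guide_scale 6 \
  --prompt "A panda floating in space"
```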