The Mystery Is (Mostly) Solved
Alibaba’s #1 Video Model, Explained
Here’s what we now know about HappyHorse 1.0:
An anonymous #1, then a 72-hour disappearance. HappyHorse-1.0 surfaced on the Artificial Analysis Video Arena in late March 2026, swept #1 in both Text-to-Video and Image-to-Video, then vanished from the leaderboard roughly three days later — with no API, weights, or paper to point to.
Source: Artificial Analysis (official announcement) →
Alibaba finally claimed it on April 10, 2026. The @AlibabaGroup account publicly congratulated @HappyHorseATH, confirming the model was built by the Future Life Lab team inside Taotian Group’s Alibaba Token Hub (ATH) innovation unit, led by Zhang Di — former VP at Kuaishou and technical lead on Kling AI.
Source: CNBC — Alibaba reveals HappyHorse →
Real specs, verified on the record. 15 billion parameters. A unified 40-layer self-attention Transformer that jointly generates video and synchronized audio. Native 1080p at 24fps. Inference benchmark: 38.4 seconds for a 5-second 1080p clip on a single NVIDIA H100.
Source: WaveSpeedAI technical breakdown →
Still nothing to download — yet. The team states “Everything is open,” but the official GitHub and Model Hub buttons still read “Coming soon.” API access is scheduled to open April 30, 2026. Any GitHub or HuggingFace repo currently using the HappyHorse name is unofficial.
Source: WaveSpeedAI — Is HappyHorse open source? →
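For context on the inference benchmark above, here is a quick back-of-envelope sketch of what it implies. The input figures (5-second clip, 24fps, 38.4 seconds on one H100) come from the reported specs; the derived rates are simple arithmetic, not published numbers.

```python
# Reported benchmark: 38.4 s to generate a 5-second 1080p clip at 24 fps
# on a single NVIDIA H100 (figures from the WaveSpeedAI breakdown).
CLIP_SECONDS = 5
FPS = 24
GEN_SECONDS = 38.4

frames = CLIP_SECONDS * FPS                    # 120 frames per clip
gen_fps = frames / GEN_SECONDS                 # frames generated per wall-clock second
realtime_factor = CLIP_SECONDS / GEN_SECONDS   # fraction of realtime speed

print(f"{frames} frames, {gen_fps:.2f} frames/s generated, "
      f"{realtime_factor:.2f}x realtime")
# → 120 frames, 3.12 frames/s generated, 0.13x realtime
```

In other words, the model runs at roughly one-eighth of realtime on a single H100, so a one-minute video would take on the order of eight minutes of GPU time under the same conditions.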