From breakthrough models to creator tools and enterprise integrations, TwelveLabs brings AI-powered video understanding directly into production workflows
LAS VEGAS, April 20, 2026 /PRNewswire-PRWeb/ -- TwelveLabs, the video understanding company, today announced a series of product and ecosystem advancements at the NAB Show 2026. The releases showcase TwelveLabs' evolution from a model and infrastructure provider to a full-stack platform, delivering end-to-end video intelligence.
With new capabilities and partnerships, TwelveLabs continues to drive the shift in how video is understood, accessed, and used at scale. Video now represents 90% of the world's data, and TwelveLabs lets organizations and creators leverage this data, moving from raw footage to insight and action faster than ever.
Introducing Pegasus 1.5: The Newest Breakthrough in Video Understanding
TwelveLabs is known for developing some of the most advanced video foundation models in the world, and the launch of its Pegasus 1.5 model represents a new category of video intelligence. It introduces Time-Based Metadata Extraction, making it the first model to discover temporal boundaries in video and extract structured metadata matching a customer-defined schema. No re-indexing, no manual annotation, just a single API call. Users simply define what matters, and the model finds every instance with timestamps and structured outputs.
This means Pegasus 1.5 interprets video with a level of contextual understanding similar to how editors review footage, recognizing context, transitions, and key moments, such as the instant a brand appears on screen.
For media companies, decades of archival footage become instantly searchable, structured assets. For sports broadcasters, every play and event can be immediately identified. And for enterprises, Pegasus 1.5 replaces manual video-tagging workflows that previously required thousands of hours of review per year, all with unrivaled performance. In early testing, Pegasus 1.5 outperformed Gemini 2.5 Pro by 30% on aggregate segmentation quality benchmarks, and it is already in production with a major broadcast network.
Rodeo Brings AI Agents Directly into Video Production
TwelveLabs also introduced Rodeo, the company's first application-layer product designed for creators. Rodeo acts as an AI-powered creative co-pilot, enabling creators to find, edit, and assemble footage using natural language. It removes labor-intensive, technically demanding processes and constraints to free users to do what they do best: create.
Rodeo introduces AI agents directly into the workflow with no technical integration. These agents surface relevant clips, suggest edits, help assemble sequences, and more, as directed by the user. This empowers creatives to move from raw footage to finished stories in minutes rather than hours or days.
Embedded in Industry Tools: Autodesk Flow Capture
In addition to its own product innovation, TwelveLabs is bringing its video intelligence capabilities into the tools professionals already use. TwelveLabs has partnered with Autodesk to enhance its digital dailies and review software, Autodesk Flow Capture, powered by PIX. From Hollywood blockbusters to indie breakouts, Flow Capture (formerly Moxion and PIX) is a secure, cloud-based tool that connects production and postproduction teams and workflows. By replacing fragmented workflows with a single, connected solution, Flow Capture helps production teams move faster, stay aligned, and deliver high-quality content on time and on budget.
With the addition of TwelveLabs-powered Smart Search and Smart Actions, Flow Capture unlocks an entirely new level of efficiency. Teams can search video the way they think, instantly jumping to exact moments using natural language and surfacing the right content in seconds. At the same time, automated workflows tag, organize, and route media from the moment it's uploaded, eliminating manual effort and streamlining collaboration at scale. The result: faster discovery, smarter workflows, and more time to focus on the creative.
"Creative teams shouldn't have to hunt for their footage," said Hugh Calveley, Sr. Director of Product Management at Autodesk. "With TwelveLabs powering Flow Capture's Smart Search and Smart Actions, teams can search video the way they think and stay focused on what matters most: the story."
"For years, video has been the most valuable and least accessible form of data. TwelveLabs is changing this at every level," said Jae Lee, CEO and co-founder of TwelveLabs. "With Pegasus 1.5, Rodeo, and our integrations with industry leaders, we're transitioning from understanding video to operationalizing it at scale. This will transform how we use and experience what has become the most prolific medium on the planet."
To experience TwelveLabs' latest innovations, please stop by the company's booth (#W1923) at the NAB Show 2026, April 18-22 at the Las Vegas Convention Center. Or visit TwelveLabs.io to learn more.
About TwelveLabs
TwelveLabs is the world's most powerful video intelligence platform, enabling machines to see, hear, and reason about video like humans do. From semantic search to automated summaries and multimodal embeddings, TwelveLabs empowers developers, enterprises, and creatives to unlock the full potential of video data across industries including media, advertising, government, security, and automotive. For more information, visit www.twelvelabs.io.
Media Contact
Amber Moore, Moore Communications, 1 5039439381, [email protected]
SOURCE Twelve Labs
