EXCLUSIVE: Luma launches creative AI agents powered by its new ‘Unified Intelligence’ models

AI video-generation startup Luma launched Luma Agents on Thursday, a new platform designed to handle end-to-end creative work across text, image, video, and audio. The agents are powered by Luma’s Unified Intelligence family of models, which are built on a single multimodal reasoning architecture.

Luma is pitching these agents as a transformative tool for ad agencies, marketing teams, design studios, and enterprises. The system is capable of planning and generating content across multiple media types. It can also coordinate with other AI models, including Luma’s own Ray 3.14, Google’s Veo 3 and Nano Banana Pro, ByteDance’s Seedream, and ElevenLabs’ voice models.

The foundation of Luma Agents is the startup’s Uni-1 model, the first in its Unified Intelligence family. According to Amit Jain, CEO and co-founder of Luma, Uni-1 has been trained on audio, video, image, language, and spatial reasoning. Jain describes the model as being able to “think in language and imagine and render in pixels or images,” a capability he calls “intelligence in pixels.” He added that other output capabilities like audio and video will come in subsequent model releases.

Luma has already begun rolling out its agentic platform with existing customers. These include global ad agencies Publicis Groupe and Serviceplan, brands like Adidas and Mazda, and the Saudi AI company Humain. Jain stated, “Our customers aren’t buying the tool; they’re redoing how business is done.”

Jain emphasized that Luma Agents are a game changer because they can maintain persistent context across assets, collaborators, and creative iterations. The agents can also evaluate and refine their own outputs, improving results through iterative self-critique. Jain compared this check-your-work capability to what has made coding agents so useful, noting the importance of a loop that evaluates, fixes, and repeats until the result is correct.

He criticized the current typical workflow for AI in creative environments, which often involves learning to prompt numerous individual models. What makes Luma Agents different, according to Jain, is that users do not need to prompt back and forth for each iteration. Instead, the system generates large sets of variations and allows users to steer the direction through conversation.

Jain explained that Unified Intelligence models understand content in addition to generating it, which is what enables this end-to-end work. He drew an analogy to a human architect who, while drawing lines, creates an internal mental representation of the structure and experience. Unified Intelligence, he said, is built on the same principle.

The system is designed to significantly speed up creative workflows. In a demonstration, Jain showed how a 200-word brief and an image of a product, such as a lipstick, could lead the system to generate various ideas for locations, models, and color schemes for an ad campaign.

In a practical example, Jain said Luma Agents turned a brand’s $15 million, year-long ad campaign into multiple localized ads for different countries. This process was completed in 40 hours for under $20,000 and passed the brand’s internal quality and accuracy controls.

Luma Agents is now publicly available via API. Jain said the startup plans to roll out access gradually to ensure users maintain reliable access and avoid disruptions to their workflows.