Adobe launches expanded Firefly tools with new generative video features and partner models

Adobe's Firefly now empowers video creators with enhanced generative Artificial Intelligence tools, including partner model integration and frame-level video editing.

Adobe has unveiled a comprehensive update to its Firefly application, delivering an expanded suite of generative Artificial Intelligence features and new partner model integrations aimed at video content creators. The release is intended to streamline creative workflows with advanced capabilities such as sound effect generation and granular editing controls. Firefly now supports text- and voice-driven sound effect creation, enabling users to generate custom audio assets directly from written prompts or recorded speech. The tool, currently in beta, handles both impact and atmospheric sounds, which can be layered with ambient audio or integrated into user-submitted video content.

In addition to sound design, Adobe has introduced several frame-level video editing features within Firefly. The new Composition Reference feature helps users match shot framing to a reference visual, while Keyframe Cropping permits detailed refinement across sequences. Style Presets help maintain visual consistency throughout generated outputs. Together, these features aim to reduce manual post-production effort and create a smooth path from idea to finished product for both seasoned editors and newcomers to digital video production.

Crucially, Adobe has expanded the pool of Artificial Intelligence models available in Firefly by integrating external partners such as Google’s Veo 3, Moonvalley’s Marey, and Runway’s Gen-4. This builds on existing choices from OpenAI, Ideogram, Black Forest Labs, Pika, Luma AI, and Topaz Labs, all accessible with a single Adobe sign-in. Adobe emphasizes that any content created in its apps will not be used for model training, reinforcing its data privacy stance. As part of its Content Authenticity Initiative, Firefly now attaches Content Credentials to all Artificial Intelligence–generated assets, specifying the model used for creation. Since the launch of Firefly models, Adobe reports more than 26 billion assets have been generated worldwide, signalling the widespread adoption of its generative tools in the creative community.

Impact Score: 75

ARC-AGI-3 exposes limits in Artificial Intelligence reasoning

ARC-AGI-3 introduces interactive, instruction-free environments designed to test whether frontier Artificial Intelligence systems can adapt to genuinely novel situations. Early results show top models performing near zero, highlighting a sharp gap between pattern recognition and open-ended exploration.

NVIDIA Rubin Ultra reportedly hits packaging limits at TSMC

NVIDIA is reportedly running into manufacturing problems with Rubin Ultra as its planned package pushes beyond current TSMC capabilities. The issue centers on CoWoS-L packaging for a much larger multi-die, high-bandwidth memory design.

Intel BOT reshapes code execution through vectorization

Intel’s Binary Optimization Tool is changing how executable applications run on Arrow Lake Refresh systems, with measurable gains in some workloads. Primate Labs found that the tool cuts instruction counts and aggressively shifts execution from scalar code to vector instructions, prompting Geekbench to label BOT-enhanced results.

Replication studies challenge quantum computing claims

Physicists reviewing prominent topological quantum computing results found that signals described as breakthroughs could also be explained by simpler alternatives. Their effort also exposed how hard it can be to publish replication work in high-profile science journals.
