Creators have disrupted Hollywood. Now artificial intelligence is coming to disrupt them.

As influencers hit peak cultural clout, artificial intelligence tools promise push-button video creation that could both supercharge and undercut their work. Creators and activists are sounding alarms over consent, compensation and competition.

As Sora 2 sweeps social media feeds with striking, realistic video clips, a new question looms over the creator economy: what happens to influencers when anyone can conjure high-quality footage in seconds? For many, the rise of artificial intelligence is both opportunity and threat. Automation could help creators communicate faster and at larger scale, yet it also flattens hard-won advantages in shooting and editing. With barriers to production falling, the field may soon be crowded with slick content made by anyone with the right prompt, not just those who spent years mastering the craft.

A growing grassroots push is warning about the costs of this shift. Toronto artist Sam Yang has used his channels to argue that artists are “fed up” because their copyrighted work is being used to train artificial intelligence models without consent, exposing them to reputation damage, forgery and fraud. Model and activist Sinead Bovell, who has built sizable followings on Instagram and TikTok, has raised similar concerns in fashion and modeling circles. She cautions that audiences could come to accept synthetic images as normal and stop asking whether real human models are being compensated for the countless hours of work that now powers the very engines competing for their livelihoods.

Those fears gained credence with an Atlantic investigation that found artificial intelligence models had been trained on at least a million how-to videos from popular influencers, spanning woodworking to beauty. The implication is stark: models built on those tutorials could help anyone replicate the look and know-how of established creators, potentially siphoning audience growth and income from the people who produced the original knowledge. To many influencers, this feels like a replay of earlier battles over unlicensed scraping and monetization, only now extended to the full audiovisual playbook.

Others see upside. Proponents argue that when video creation becomes push-button, the differentiators will be personality, relatability and style, which are precisely where human influencers excel. They also note that virtual influencers such as Aitana López from Barcelona’s The Clueless and Lil Miquela from Vancouver-based Dapper Labs have long coexisted with human creators, with their personas still guided by people. Artificial intelligence will likely increase the “slop factor” and intensify competition, but it could also push human influencers to lean further into the unique attributes only they can deliver.

Impact Score: 62

AMD targets desktop Artificial Intelligence PCs with Copilot+ chips

AMD has introduced the first desktop processors certified for Microsoft Copilot+, aiming to challenge Intel in x86 PCs as demand for on-device Artificial Intelligence computing rises. The company is also balancing that push with export limits that could constrain advanced chip sales in China.

Governance risk highlights from Infosecurity Magazine

Governance and risk coverage centers on regulation, compliance, cybersecurity policy, and the growing role of Artificial Intelligence in enterprise security. Recent headlines point to pressure on critical infrastructure, standards updates, insider threat guidance, and concerns over guardrails for large language models.

Vals publishes public enterprise language model benchmarks

Vals lists a broad set of public enterprise benchmarks spanning law, finance, healthcare, math, education, academics, coding, and beta agent tasks. The index highlights which models currently lead specific enterprise-focused evaluations and how widely each benchmark has been tested.

MIT method spots overconfident Artificial Intelligence models

MIT researchers developed a way to detect when large language models are confidently wrong by comparing their answers with outputs from similar models. The combined uncertainty measure outperformed standard techniques across a range of tasks and may help reduce unreliable responses.
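The core idea of flagging answers by cross-model disagreement can be illustrated with a toy sketch. This is not the MIT team's published measure; the function name, the string-matching comparison, and the majority-vote agreement metric below are all simplifying assumptions for illustration.

```python
from collections import Counter

def disagreement_uncertainty(answers):
    """Toy cross-model uncertainty: fraction of answers that
    disagree with the majority answer.

    `answers` holds one response string per model. A score of 0.0
    means all models agree (low uncertainty); scores near 1.0 mean
    the models diverge, a possible sign that a confident-sounding
    answer is unreliable.
    """
    counts = Counter(answers)
    _, majority_count = counts.most_common(1)[0]
    return 1.0 - majority_count / len(answers)

# Full agreement -> 0.0; one dissenter among three -> ~0.33
print(disagreement_uncertainty(["Paris", "Paris", "Paris"]))
print(disagreement_uncertainty(["Paris", "Paris", "Lyon"]))
```

In practice a real system would compare semantically (paraphrases should count as agreement), but the sketch shows why consulting similar models can catch a single model that is confidently wrong.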

MEPs back delay for parts of Artificial Intelligence Act

European Parliament committees have endorsed targeted delays to parts of the Artificial Intelligence Act while adding a proposed ban on certain non-consensual image manipulation tools. The changes aim to give companies clearer deadlines, reduce overlap with other EU rules, and extend support to small mid-cap enterprises.
