Misinformation and Disinformation in the Digital Age: A Rising Risk

Artificial Intelligence poses increasing risks of misinformation and disinformation for business and society, particularly within the regulatory landscape of Europe.

The proliferation of digital technologies and Artificial Intelligence is accelerating the spread of misinformation and disinformation, posing significant risks to societies and businesses. The European Union has responded to these challenges with a multi-level, risk-based approach in its regulatory framework for Artificial Intelligence systems, aiming to ensure transparency, accountability, and security in the deployment of these technologies.

This regulatory framework applies to any business operating in the EU or offering Artificial Intelligence products and services within the region. Its measures categorize Artificial Intelligence applications based on their potential impact, subjecting high-risk systems to stringent compliance requirements. This approach is intended to mitigate harms associated with manipulated or false information, safeguarding both consumers and the broader public from the dangers of engineered narratives and information operations.

As the digital landscape continues to evolve, the intersection of emerging technologies and regulatory oversight remains at the forefront of efforts to combat online misinformation and disinformation. Businesses will need to adapt continually to remain compliant and to harness the power of Artificial Intelligence ethically while minimizing societal risks in the European market and beyond.

Impact Score: 72

Google Vids opens free video generation to all Google users

Google has made Google Vids available to anyone with a Google account, adding free access to video generation with its latest models. The move expands Google’s end-to-end video workflow and increases pressure on rivals that charge for similar tools.

Court warns against chatbot legal advice in Heppner case

A federal court found that chats with a publicly available generative Artificial Intelligence tool were not protected by attorney-client privilege or the work-product doctrine. The ruling highlights litigation risks when executives or employees use chatbots for legal guidance without lawyer supervision.

Newsom orders California to weigh Artificial Intelligence harms in contract rules

Gov. Gavin Newsom has signed an executive order directing California agencies to account for potential Artificial Intelligence harms in state contracting while expanding approved use of generative tools across government. The move follows a dispute involving Anthropic and reflects a broader split between California and the Trump administration on Artificial Intelligence oversight.
