EU’s New Artificial Intelligence Act Faces Criticism

The EU's new Artificial Intelligence law imposes steep penalties, raising concerns that it will stifle innovation and deter investment.

The European Union’s new Artificial Intelligence Act officially came into effect on April 1, marking a significant regulatory step aimed at protecting privacy and providing legal certainty across member states. Key provisions include banning unacceptable-risk AI practices such as real-time facial recognition in public spaces, classifying other AI systems, including automated CV screening tools, into risk categories, and establishing hefty penalties for non-compliance. These changes seek to curb the misuse of AI technologies and safeguard individual rights within the EU.

Despite the intentions behind this groundbreaking legislation, industry experts in the Czech Republic have expressed concern over its potential negative impact on innovation and investment. Critics argue that the new rules could deter companies from investing in European AI initiatives at a time when the region is struggling to keep pace with the United States and China, both of which invest far more heavily in AI development. Some experts fear the regulation will exacerbate the EU’s existing lag behind other global players in the AI sector.

Moreover, the act has ignited a debate on potential misuse and overregulation. Although supporters like the Czech Association of Artificial Intelligence acknowledge benefits in preventing situations akin to China’s social scoring system, they also call for amendments to support research and development. Regulatory costs might push larger corporations to reconsider operations in Europe, potentially paving the way for increased reliance on non-European AI solutions.

Impact Score: 77

Congress weighs Artificial Intelligence transparency rules

Bipartisan lawmakers are pushing a federal transparency standard for the largest Artificial Intelligence models as Congress works on a broader national framework. The proposal aims to increase public trust while avoiding stricter state-by-state requirements and heavier regulation.

Report finds California creative job losses are not driven by Artificial Intelligence

New research from Otis College of Art and Design finds California’s recent creative industry job losses stem from cost pressures and structural shifts, not direct worker displacement by generative Artificial Intelligence. The technology is changing workflows and expectations, but it is largely replacing tasks rather than entire jobs.

U.S. senators propose broader chip tool export ban for Chinese firms

A bipartisan proposal in the U.S. Senate would shift semiconductor equipment controls from specific fabs to targeted Chinese companies and their affiliates. The measure is aimed at cutting off access to advanced lithography and other wafer fabrication tools for firms such as Huawei, SMIC, YMTC, CXMT, and Hua Hong.

Trump executive order targets state Artificial Intelligence laws

Executive Order 14365 lays out a federal strategy to discourage, challenge, and potentially preempt state Artificial Intelligence laws viewed as burdensome. Employers are advised to keep complying with current state and local rules while preparing for regulatory uncertainty in 2026.

Who decides how America uses Artificial Intelligence in war

Stanford experts are divided over how the United States should govern Artificial Intelligence in defense, surveillance, and warfare. Their views converge on one point: decisions with such high stakes cannot be left to companies alone.
