EU AI Act: what security leaders need to know and how DSPM supports compliance

The EU AI Act reshapes how businesses deploy Artificial Intelligence, mandating transparency, risk assessment, and data governance. Discover best practices for security and the emerging role of DSPM.

The European Union Artificial Intelligence Act (EU AI Act) represents a significant shift in regulatory oversight, aiming to govern the development, deployment, and use of artificial intelligence technologies across Europe. Adopted in 2024 and entering into force in August of that year, the act phases in obligations through August 2026, when most of its provisions apply. It introduces a risk-based framework that classifies artificial intelligence applications as minimal, limited, high, or unacceptable risk. Organizations operating within the EU or offering services to EU citizens must comply, with the first obligations, the bans on prohibited practices, applying from February 2025. The act's principal goal is to manage high-risk artificial intelligence systems, those affecting critical infrastructure, safety, or fundamental rights, through stringent transparency, accountability, and data protection requirements.

To foster ethical artificial intelligence use without smothering innovation, the EU AI Act requires companies to assess and document the risks associated with their systems, ensure transparency in their operation and outputs, and implement mechanisms to secure user data. Some artificial intelligence practices, such as manipulative algorithms or systems that exploit vulnerable individuals, are prohibited outright. Penalties for the most serious violations reach up to €35 million or 7% of global annual turnover, whichever is higher, surpassing even the GDPR's most severe sanctions. These rules not only harmonize artificial intelligence governance across member states but also set a global precedent for responsible artificial intelligence strategy and risk management.

The path to compliance introduces operational complexity. Challenges include resolving ambiguities in how existing laws apply to artificial intelligence, keeping pace with rapidly evolving technology, handling the scale and speed of automated decision-making, and coordinating legal, technical, and business teams internally. Addressing these hurdles calls for robust, ongoing risk management: classifying and continuously monitoring artificial intelligence data, maintaining rigorous documentation for high-risk systems, and enforcing privacy-by-design principles aligned with GDPR. For many organizations, Data Security Posture Management (DSPM) is emerging as a cornerstone of the compliance toolkit. Tools like Zscaler's DSPM provide centralized visibility and control over an organization's data and artificial intelligence landscape, secure data flows, detect and respond to risks, and support transparency and auditability. By integrating data governance, artificial intelligence posture management, and continuous risk assessment, organizations can align with the EU AI Act, minimize compliance risk, and promote responsible, secure artificial intelligence deployment.
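The classification and documentation workflow described above can be sketched in code. The following is a minimal, illustrative example of mapping an AI system's use cases to the act's four risk tiers and the headline obligations each tier carries. All names, keyword rules, and obligation lists here are hypothetical simplifications for illustration, not a real DSPM product API and not an authoritative legal mapping; actual classification requires legal review.

```python
# Hypothetical sketch: map an AI system's use cases to EU AI Act risk tiers.
# The rules and obligation lists are illustrative assumptions, not legal advice.
from dataclasses import dataclass, field

@dataclass
class AISystem:
    name: str
    use_cases: set = field(default_factory=set)

# Example keyword rules (simplified; the act defines these categories in detail).
TIER_RULES = {
    "unacceptable": {"social_scoring", "subliminal_manipulation"},
    "high": {"critical_infrastructure", "biometric_id", "employment_screening"},
    "limited": {"chatbot", "content_generation"},
}

def classify(system: AISystem) -> str:
    """Return the highest-risk tier triggered by any of the system's use cases."""
    for tier in ("unacceptable", "high", "limited"):
        if system.use_cases & TIER_RULES[tier]:
            return tier
    return "minimal"

def obligations(tier: str) -> list:
    """Map a tier to simplified headline duties under the act."""
    return {
        "unacceptable": ["prohibited: must not be deployed"],
        "high": ["risk assessment", "logging and auditability", "human oversight"],
        "limited": ["transparency disclosures to users"],
        "minimal": ["no mandatory obligations"],
    }[tier]

screener = AISystem("resume-screener", {"employment_screening"})
tier = classify(screener)
print(screener.name, tier, obligations(tier))
```

In a real DSPM-driven program, the keyword rules would be replaced by discovered data flows and model inventories, and the output would feed the documentation and audit trail the act requires for high-risk systems.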

Congress weighs Artificial Intelligence transparency rules

Bipartisan lawmakers are pushing a federal transparency standard for the largest Artificial Intelligence models as Congress works on a broader national framework. The proposal aims to increase public trust while avoiding stricter state-by-state requirements and heavier regulation.

Report finds California creative job losses are not driven by Artificial Intelligence

New research from Otis College of Art and Design finds California’s recent creative industry job losses stem from cost pressures and structural shifts, not direct worker displacement by generative Artificial Intelligence. The technology is changing workflows and expectations, but it is largely replacing tasks rather than entire jobs.

U.S. senators propose broader chip tool export ban for Chinese firms

A bipartisan proposal in the U.S. Senate would shift semiconductor equipment controls from specific fabs to targeted Chinese companies and their affiliates. The measure is aimed at cutting off access to advanced lithography and other wafer fabrication tools for firms such as Huawei, SMIC, YMTC, CXMT, and Hua Hong.

Trump executive order targets state Artificial Intelligence laws

Executive Order 14365 lays out a federal strategy to discourage, challenge, and potentially preempt state Artificial Intelligence laws viewed as burdensome. Employers are advised to keep complying with current state and local rules while preparing for regulatory uncertainty in 2026.

Who decides how America uses Artificial Intelligence in war

Stanford experts are divided over how the United States should govern Artificial Intelligence in defense, surveillance, and warfare. Their views converge on one point: decisions with such high stakes cannot be left to companies alone.
