What organisations need to know to comply with the EU AI Act

Preparing for the EU AI Act means mapping how Artificial Intelligence shapes decisions across operations, then embedding clear oversight and responsibilities into everyday workflows.

Zoë Webster argues that organisational oversight matters more than compute power when it comes to responsible use of Artificial Intelligence. The EU AI Act is changing the frame for regulation by covering not just development but also use, applying whenever a system could affect an EU citizen. Many tools in use today were not called Artificial Intelligence when adopted, or have been quietly extended with AI functionality, so they can sit outside formal compliance scopes and escape the attention of risk teams.

The regulation entered into force in August 2024 and uses a phased approach. Early prohibitions targeted unacceptable risks and an initial wave of obligations landed on general-purpose models and foundational systems, emphasising transparency, documentation and responsible model behaviour. By August 2026 a further set of rules arrives for high-risk systems, introducing formal duties around risk management, traceability and model performance. The law is aimed at areas with material or legal impact, such as healthcare, education and employment, and its staged timetable gives organisations time to prepare while increasing the scrutiny businesses must apply to systems already in production.

Webster recommends that businesses focus on tracing influence rather than only cataloguing inventory. A software audit can list tools, but it will not show how those tools influence decisions. Organisations need to map where logic, prioritisation or classification affect outcomes, discover what data feeds models, track update cadence and performance, and ensure there is a clear escalation path when outputs go wrong. Practical questions include who is accountable, who monitors day to day, and whether teams can explain decisions that matter. McKinsey's 2025 survey noted widespread adoption of Artificial Intelligence yet very low perceived maturity, underscoring the gap between tooling and governance.

Trust will come from governance that lives in everyday processes rather than a filed policy. That means bringing operational leads, data owners and technical teams into workflows early, equipping accountable people to act, and building the non-technical skills to test assumptions and work across disciplines. Most barriers are not about infrastructure but about confidence and shared understanding. Organisations that surface how systems behave, assign ownership and create simple, repeatable responses will be best placed to comply and to use Artificial Intelligence with clarity and care.


How Artificial Intelligence is reshaping financial services oversight

Financial services regulators are largely treating Artificial Intelligence as another technology governed by existing rules rather than building new securities-specific frameworks. History suggests that clearer expectations will emerge through examinations, enforcement, and supervisory guidance.

Nvidia faces gamer backlash over Artificial Intelligence shift

Nvidia is facing growing frustration from gamers as memory supply is steered toward data center chips and DLSS 5 becomes more central to game performance. The dispute highlights how far the company’s priorities have shifted toward enterprise Artificial Intelligence.

Executives see limited Artificial Intelligence productivity gains so far

Corporate enthusiasm around Artificial Intelligence has yet to translate into broad gains in employment or productivity, reviving comparisons to the long lag between early computing breakthroughs and measurable economic impact. Recent surveys and studies show mixed results, with strong expectations for future benefits but little consensus on present gains.

Nvidia skips a new GeForce generation as Artificial Intelligence chips dominate

Nvidia is set to go a year without a new GeForce GPU generation for the first time since the 1990s as memory shortages and higher margins in Artificial Intelligence hardware reshape the market. AMD and Intel are also struggling to capitalize because the same supply constraints are hitting gaming products across the industry.

Where GPU debt starts to break

Stress in GPU-backed infrastructure financing is emerging around deals that lack the structural protections seen in the strongest transactions. Oracle, the Abilene Stargate project, and older CoreWeave debt illustrate different ways residual risk can surface when contracts, collateral, and counterparties fall short.
