Zoë Webster argues that organisational oversight matters more than compute power for the responsible use of Artificial Intelligence. The EU AI Act reframes regulation by covering not just development but also use, applying whenever a system's outputs could affect people in the EU. Many tools in use today were not called Artificial Intelligence when they were adopted, or have been quietly extended with AI functionality, so they can sit outside formal compliance scopes and remain invisible to risk teams.
The regulation entered into force in August 2024 and takes a phased approach. Prohibitions on unacceptable-risk practices applied first, from February 2025, followed in August 2025 by an initial wave of obligations on general-purpose and foundation models, emphasising transparency, documentation and responsible model behaviour. By August 2026 a further set of rules arrives for high-risk systems, introducing formal duties around risk management, traceability and model performance. The law targets areas with material or legal impact, such as healthcare, education and employment, and its staged timetable gives organisations time to prepare while increasing the scrutiny businesses must apply to systems already in production.
Webster recommends that businesses focus on tracing influence rather than only cataloguing inventory. A software audit can list tools, but it will not show how those tools shape decisions. Organisations need to map where logic, prioritisation or classification affects outcomes, discover what data feeds their models, track update cadence and performance, and ensure there is a clear escalation path when outputs go wrong; a sketch of what such a record might look like follows below. Practical questions include who is accountable, who monitors the system day to day, and whether teams can explain the decisions that matter. McKinsey's 2025 survey noted widespread adoption of Artificial Intelligence alongside very low perceived maturity, underscoring the gap between tooling and governance.
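To make the influence-tracing idea concrete, here is a minimal sketch in Python of the kind of record an internal AI register might hold. The field names, the 90-day review threshold and the example system are illustrative assumptions, not drawn from Webster's article or from the text of the Act.

```python
# A minimal sketch of an "influence map" record for an internal AI register.
# Field names and the review rule are illustrative assumptions only.
from dataclasses import dataclass
from datetime import date


@dataclass
class AISystemRecord:
    name: str
    owner: str                       # accountable person, not just the vendor
    decisions_influenced: list[str]  # where logic/prioritisation/classification affects outcomes
    data_sources: list[str]          # what data feeds the model
    last_model_update: date          # supports tracking update cadence
    monitored_by: str                # who watches behaviour day to day
    escalation_contact: str          # clear path when outputs go wrong
    explainable: bool                # can the team explain decisions that matter?


def needs_review(record: AISystemRecord, today: date, max_age_days: int = 90) -> bool:
    """Flag systems whose model is stale or whose decisions cannot be explained."""
    stale = (today - record.last_model_update).days > max_age_days
    return stale or not record.explainable


# Hypothetical example: a CV-screening tool quietly extended with AI ranking.
screening = AISystemRecord(
    name="cv-screening",
    owner="Head of Recruitment",
    decisions_influenced=["candidate shortlisting"],
    data_sources=["applicant CVs", "historical hiring outcomes"],
    last_model_update=date(2025, 1, 15),
    monitored_by="Talent operations lead",
    escalation_contact="AI governance board",
    explainable=False,
)
print(needs_review(screening, today=date(2025, 6, 1)))  # True: stale and unexplained
```

Even a simple structure like this answers Webster's practical questions directly: each record names an accountable owner, a day-to-day monitor and an escalation path, and the review check turns governance into a repeatable process rather than a one-off audit.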
Trust will come from governance that lives in everyday processes rather than in a policy filed away. That means bringing operational leads, data owners and technical teams into workflows early, equipping accountable people to act, and building the non-technical skills to test assumptions and work across disciplines. Most barriers are not about infrastructure but about confidence and shared understanding. Organisations that surface how their systems behave, assign ownership and create simple, repeatable responses will be best placed to comply, and to use Artificial Intelligence with clarity and care.