EU Council backs streamlined rules for artificial intelligence implementation

EU governments have endorsed a mandate to streamline implementation of harmonised artificial intelligence rules, adjusting timelines for high-risk systems and adding new safeguards against non-consensual and abusive content while aiming to cut compliance burdens for businesses.

The Council of the EU has agreed its position on a proposal to streamline certain rules regarding artificial intelligence, as part of the “Omnibus VII” legislative package in the EU simplification agenda. The package includes two regulations intended to simplify the EU digital legislative framework and the implementation of harmonised artificial intelligence rules. The initiative is framed as essential for strengthening EU digital sovereignty, providing greater legal certainty, making requirements more proportionate and ensuring more harmonised implementation across member states, with the goal of supporting companies, facilitating innovation and boosting competitiveness in Europe.

The European Commission proposed to adjust the timeline for applying rules on high-risk artificial intelligence systems by up to 16 months, so that the rules start to apply once the Commission confirms the needed standards and tools are available. The Commission also proposed targeted amendments to the artificial intelligence act that would extend certain regulatory exemptions granted to SMEs to small mid-caps (SMCs) as well, reduce requirements in a very limited number of cases, extend the possibility to process sensitive personal data for bias detection and mitigation, reinforce the artificial intelligence office’s powers and reduce governance fragmentation. Member states broadly maintained this approach but expanded it with additional safeguards and clarifications.

The Council mandate adds a new provision in the artificial intelligence act prohibiting artificial intelligence practices regarding the generation of non-consensual sexual and intimate content or child sexual abuse material. The text also introduces a fixed timeline for the delayed application of high-risk rules: the new application dates would be 2 December 2027 for stand-alone high-risk artificial intelligence systems and 2 August 2028 for high-risk artificial intelligence systems embedded in products. The mandate reinstates the obligation for providers to register artificial intelligence systems in the EU database for high-risk systems when they consider their systems to be exempted from classification as high-risk, and it restores the standard of strict necessity for processing special categories of personal data for bias detection and correction.

Further changes postpone the deadline for establishing artificial intelligence regulatory sandboxes by competent national authorities until 2 December 2027. The text clarifies the competences of the artificial intelligence office for supervising artificial intelligence systems based on general-purpose models where the model and system are developed by the same provider, while listing exceptions in which national authorities remain competent, including law enforcement, border management, judicial authorities and financial institutions. The Council mandate also introduces a new obligation for the Commission to provide guidance to help economic operators of high-risk artificial intelligence systems covered by sectoral harmonisation legislation comply with high-risk requirements in a way that minimises compliance burdens. Following approval of the mandate, the Council presidency will open negotiations with the European Parliament, building on broader EU efforts since 2024 to simplify legislation and reduce administrative and reporting burdens under a series of ten “Omnibus” packages.

Impact Score: 55

Indiana launches Artificial Intelligence business portal

Indiana is rolling out IN AI, a statewide portal meant to help employers adopt Artificial Intelligence with practical guidance, workshops and peer support. State leaders and business groups are positioning the effort as a way to raise productivity, wages and job growth while keeping workers at the center.

Goodfire launches model debugging tool for large language models

Goodfire has introduced Silico, a mechanistic interpretability platform designed to let developers inspect and adjust model behavior during development. The company is positioning it as a way to give smaller teams deeper control over open-source models and more trustworthy outputs.

Nvidia launches Nemotron 3 Nano Omni for enterprise agents

Nvidia has introduced Nemotron 3 Nano Omni, a multimodal open model designed to support enterprise agents that reason across vision, speech and language. The launch extends Nvidia’s push beyond hardware into models and services while targeting more efficient agentic workflows.

Intel 18A-P node improves performance and efficiency

Intel plans to present new results for its 18A-P process at the VLSI 2026 Symposium, highlighting gains in performance, power efficiency, and manufacturing predictability. The updated node is positioned as a stronger option for customers seeking 18A density with better operating characteristics.

EA CEO defends broader Artificial Intelligence use in game development

EA CEO Andrew Wilson defended the company’s internal use of Artificial Intelligence after employee claims that the tools were slowing work rather than helping. He framed the technology as an aid for repetitive quality assurance tasks, even as concerns persist over its broader impact on development.
