European Parliament suspends Artificial Intelligence tools over data transfer risks

The European Parliament has temporarily disabled Artificial Intelligence features on official devices over fears that sensitive data could be routed to external cloud servers and foreign jurisdictions. The move highlights growing tension between data protection priorities and the widespread adoption of cloud-based Artificial Intelligence services.

The European Parliament has temporarily disabled Artificial Intelligence features on lawmakers’ official devices because of concerns that sensitive information could be transmitted outside its secure systems. Internal communications from the institution’s IT services indicate that Artificial Intelligence tools have been switched off on corporate laptops, tablets and other equipment after technicians concluded they could not guarantee where user data might ultimately be stored or processed. The step reflects a cautious approach toward emerging Artificial Intelligence capabilities that are increasingly embedded into common productivity software.

According to an email from the technical support desk, some Artificial Intelligence functions depend on cloud processing to handle tasks such as email summarisation and document drafting. The communication warned that this can involve transferring data off the device to external servers, stating that “some of these features use cloud services to carry out tasks that could be handled locally, sending data off the device.” The support team added that as these features evolve, the full extent of data shared with service providers is still being assessed, and that until this is fully clarified it is considered safer to keep such functions disabled. Parliament officials emphasised that everyday applications such as calendars and basic email remain unaffected, and that the suspension targets only Artificial Intelligence features whose data handling practices are not yet fully understood.

The decision comes amid wider European unease about how sensitive information is treated by Artificial Intelligence systems, many of which are run by companies headquartered outside the European Union. Tools such as ChatGPT, Copilot and Claude typically process user requests in remote datacentres, and in some jurisdictions providers may be compelled to hand user data to government authorities for national security or law enforcement purposes. Officials are wary that material uploaded to Artificial Intelligence assistants could be reused to improve underlying models, potentially exposing confidential content beyond its original context. That concern is heightened by studies showing employees often paste internal documents, code or sensitive correspondence into such tools in violation of policies. The move also unfolds against a broader European debate: the bloc is enforcing comprehensive Artificial Intelligence legislation even as the European Commission proposes loosening certain data protection rules to support Artificial Intelligence training. Tensions with the United States add to the unease, as major providers there are subject to domestic laws that have enabled authorities such as the US Department of Homeland Security to issue hundreds of subpoenas, without judicial approval or court orders, to American technology and social media firms, including Meta, Google and Reddit, for data on individuals who criticised the Trump administration.

Impact Score: 55

Google Vids opens free video generation to all Google users

Google has made Google Vids available to anyone with a Google account, adding free access to video generation with its latest models. The move expands Google’s end-to-end video workflow and increases pressure on rivals that charge for similar tools.

Court warns against chatbot legal advice in Heppner case

A federal court found that chats with a publicly available generative Artificial Intelligence tool were not protected by attorney-client privilege or the work-product doctrine. The ruling highlights litigation risks when executives or employees use chatbots for legal guidance without lawyer supervision.

Newsom orders California to weigh Artificial Intelligence harms in contract rules

Gov. Gavin Newsom has signed an executive order directing California agencies to account for potential Artificial Intelligence harms in state contracting while expanding approved use of generative tools across government. The move follows a dispute involving Anthropic and reflects a broader split between California and the Trump administration on Artificial Intelligence oversight.
