The European Parliament has temporarily disabled artificial intelligence (AI) features on lawmakers’ official devices over concerns that sensitive information could be transmitted outside its secure systems. Internal communications from the institution’s IT services indicate that AI tools have been switched off on corporate laptops, tablets and other equipment after technicians concluded they could not guarantee where user data might ultimately be stored or processed. The step reflects a cautious approach toward emerging AI capabilities that are increasingly embedded in common productivity software.
According to an email from the technical support desk, some AI functions depend on cloud processing to handle tasks such as email summarisation and document drafting. The communication warned that this can involve transferring data off the device to external servers, stating that “some of these features use cloud services to carry out tasks that could be handled locally, sending data off the device.” The support team added that, as these features evolve, the full extent of the data shared with service providers is still being assessed, and that until this is clarified it is considered safer to keep such functions disabled. Parliament officials emphasised that everyday applications such as calendars and basic email remain unaffected, and that the suspension targets only AI features whose data-handling practices are not yet fully understood.
The decision comes amid wider European unease about how sensitive information is handled by AI systems, many of which are operated by companies headquartered outside the European Union. Tools such as ChatGPT, Copilot and Claude typically process user requests in remote datacentres, and in some jurisdictions providers can be compelled to hand user data to government authorities for national security or law enforcement purposes. Officials are wary that material uploaded to AI assistants could be reused to improve the underlying models, potentially exposing confidential content beyond its original context. That concern is heightened by studies showing that employees often paste internal documents, code or sensitive correspondence into such tools in violation of company policies. The move also unfolds against a broader European debate: the bloc is enforcing comprehensive AI legislation even as the European Commission proposes loosening certain data protection rules to support AI training. Tensions with the United States add to the unease, since major providers there are subject to domestic laws under which authorities such as the US Department of Homeland Security have issued hundreds of subpoenas, without judicial approval or court orders, to American technology and social media firms including Meta, Google and Reddit, seeking data on individuals who criticised the Trump administration.
