How hackers poison Artificial Intelligence business tools and defences

Researchers report attackers are now planting hidden prompts in emails to hijack enterprise Artificial Intelligence tools and even tamper with Artificial Intelligence-powered security features. With most organisations adopting Artificial Intelligence, email must be treated as an execution environment with stricter controls.

Generative Artificial Intelligence is boosting productivity, efficiency and security for businesses, but attackers are co-opting the same technologies to scale spam and craft highly targeted phishing. Security researchers at Barracuda and elsewhere now see threat actors directly targeting companies’ Artificial Intelligence tools and tampering with Artificial Intelligence security features to steal data and weaken defences. With an estimated 78 percent of organizations using Artificial Intelligence for at least one business function and 45 percent for three or more, this growing dependence is making Artificial Intelligence systems increasingly attractive targets.

Email-borne attacks are emerging as a key vector. Artificial Intelligence assistants and the large language models that power them can be abused through hidden prompts embedded in seemingly legitimate emails. One recent example involved Microsoft 365’s Artificial Intelligence assistant, Copilot, where a now-fixed vulnerability (CVE-2025-32711) could have allowed information to be extracted without standard authorization. The basic playbook: attackers send employees benign-looking emails laced with concealed instructions that require no user interaction. When an employee later asks the assistant for help, it scans historical emails and data for context, ingests the malicious prompt, and can be manipulated to silently exfiltrate sensitive information, execute malicious commands, or alter data.

Beyond prompt injection, attackers can corrupt an assistant’s underlying memory and data pathways. Systems that use retrieval-augmented generation are vulnerable to distortion of the external sources they consult, which can push assistants into making incorrect decisions, providing false information, or performing unintended actions based on corrupted inputs. These techniques expand the options for quietly steering Artificial Intelligence behaviour through ordinary-looking communications.

Adversaries are also learning to tamper with the Artificial Intelligence components inside defensive tools. Many email security platforms now feature Artificial Intelligence-powered conveniences such as auto-replies, smart forwarding, automated spam removal, and ticket creation. If subverted, these features can be turned against the organisation: an intelligent filter might auto-reply with sensitive data, helpdesk tickets could be escalated without verification to gain unauthorized access, and harmful automated actions could be triggered to deploy malware, alter critical records, or disrupt operations. As Artificial Intelligence systems operate with higher autonomy, they can be tricked into impersonating users or trusting impersonators, enabling data leakage or fraudulent emails.

Traditional defences such as legacy gateways, standard authentication protocols, and IP blocklists are no longer sufficient. Organisations need email security platforms that are resilient to generative Artificial Intelligence, capable of understanding context, tone, and behavioural patterns in addition to content, and equipped with Artificial Intelligence-based filters that learn over time to resist manipulation. Artificial Intelligence assistants should operate in isolation and avoid acting on unchecked instructions, with tools configured to verify requests before execution, regardless of sender claims. Looking ahead, as agentic Artificial Intelligence becomes more prevalent, security must shift from passive filtering to proactive threat modelling for Artificial Intelligence agents. Email should be treated not merely as a channel but as an execution environment governed by zero-trust principles and continuous, Artificial Intelligence-aware validation.

64

Impact Score

Meta unveils Business Artificial Intelligence as a 24/7 sales agent

Meta launched Business Artificial Intelligence, a customer assistant that lives across Facebook, Instagram and even third-party sites to answer questions, recommend products and guide checkout. The company is also rolling out generative Artificial Intelligence and creator tools to help brands produce targeted ads and scale influencer campaigns.

Latest Artificial Intelligence news in finance

Finextra’s Artificial Intelligence coverage this week spans central bank pilots, bank deployments, and new vendor products, plus insights from Sibos 2025 and a FinextraTV interview. Here are the key developments and themes.

Contact Us

Got questions? Use the form to contact us.

Contact Form

Clicking next sends a verification code to your email. After verifying, you can enter your message.