Rising Artificial Intelligence prompt injection attacks pose new security risks, warns NCSC

The National Cyber Security Centre warns that rising Artificial Intelligence prompt injection attacks let attackers embed hidden commands in user input, forcing organizations to update defenses, monitoring and testing.

The National Cyber Security Centre (NCSC) has warned that rising Artificial Intelligence prompt injection attacks represent a new, stealthy class of manipulation that targets large language models (LLMs). These attacks use ordinary-looking text to influence model outputs and can bypass assumptions that separate code from data in traditional systems. The warning signals that security teams must rethink architectures, controls and monitoring when LLMs are part of critical systems.

A prompt injection occurs when untrusted user input is combined with developer-provided instructions in a large language model (LLM) prompt. It allows attackers to embed hidden commands within normal content and manipulate the model’s behavior. Unlike traditional vulnerabilities, where code and data are clearly separated, LLMs process all text as part of the same sequence, which makes every input potentially influential and creates unique security challenges. The NCSC explained the distinction concisely: “Under the hood of an LLM, there’s no distinction made between ‘data’ or ‘instructions’; there is only ever ‘next token’. When you provide an LLM prompt, it doesn’t understand the text it in the way a person does. It is simply predicting the most likely next token from the text so far. As there is no inherent distinction between ‘data’ and ‘instruction’, it’s very possible that prompt injection attacks may never be totally mitigated in the way that SQL injection attacks can be.”

Prompt injection differs from classical injection attacks such as SQL injection because LLMs treat all text uniformly, so attackers can hide harmful instructions inside seemingly benign content. That capability can lead models to produce misleading or dangerous outputs, undermining data integrity, system security and user trust. As organizations adopt more Artificial Intelligence tools, understanding these new vectors has become a top priority for defenders.

The article outlines practical mitigation strategies. Organizations should design secure architectures that separate trusted instructions from untrusted inputs, filter or restrict user-generated content, and embed inputs within clearly tagged or bounded prompt segments. Ongoing monitoring, logging of model inputs and outputs, tracking of external API calls and model-triggered actions, proactive red teaming and formal security reviews at each stage of development are recommended to detect anomalies early and reduce risk in enterprise environments.

55

Impact Score

How global R&D spending growth has shifted since 2000

Global research and development spending has nearly tripled since 2000, with China and a group of emerging economies driving the fastest growth. Slower but still substantial expansion in mature economies highlights a world that is becoming more research intensive overall.

Finance artificial intelligence compliance in European financial services

The article explains how financial firms can use artificial intelligence tools while meeting European, United Kingdom, Irish and United States regulatory expectations, focusing on risk, transparency and governance. It details the European Union artificial intelligence act, the role of cybersecurity, and the standards and practices that support compliant deployment across the financial sector.

Artificial intelligence becomes a lever for transformation in Africa

African researchers and institutions are positioning artificial intelligence as a tool to tackle structural challenges in health, education, agriculture and governance, while pushing for data sovereignty and local language inclusion. The continent faces hurdles around skills, infrastructure and control of data but is exploring frugal technological models tailored to its realities.

Microsoft unveils Maia 200 artificial intelligence inference accelerator

Microsoft has introduced Maia 200, a custom artificial intelligence inference accelerator built on a 3 nm process and designed to improve the economics of token generation for large models, including GPT-5.2. The chip targets higher performance per dollar for services like Microsoft Foundry and Microsoft 365 Copilot while supporting synthetic data pipelines for next generation models.

Samsung’s 2 nm node progress could revive foundry business and attract Qualcomm

Samsung Foundry’s 2 nm SF2 process is reportedly stabilizing at around 50% yields, positioning the Exynos 2600 as a key proof of concept and potentially helping the chip division return to profit. New demand from Tesla Artificial Intelligence chips and possible deals with Qualcomm and AMD are seen as central to the turnaround.

Contact Us

Got questions? Use the form to contact us.

Contact Form

Clicking next sends a verification code to your email. After verifying, you can enter your message.