Google Apigee introduces built-in LLM governance with Model Armor

Google Cloud debuts Model Armor for Apigee, offering native governance and security controls for large language model APIs across all subscription tiers.

Google Cloud has unveiled the public preview of Model Armor, a native governance framework for large language models (LLMs) integrated into the Apigee API management platform. Model Armor introduces automated enforcement of LLM-centric policies, including prompt validation, token-level controls, and output filtering, all managed at Apigee's proxy layer. These features allow organizations to enforce safety rules and compliance measures across LLM APIs without altering downstream services or applications.

The solution operates by inspecting both API requests and responses through declarative, XML-based policies. These policies are designed to detect and mitigate threats such as prompt injection, jailbreak attempts, and the exposure of personally identifiable information (PII), with configurable actions for redacting, modifying, or blocking risky outputs. Available across all Apigee subscription tiers, Model Armor enables universal access to LLM governance capabilities, supporting a broad range of architectures and customer needs. Hands-on resources, including tutorials and proxy templates, guide teams in implementing prompt inspection, rate limiting, and integration with platforms like Vertex AI.
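To make the declarative approach concrete, a request-side Model Armor check attached to an Apigee proxy might look roughly like the sketch below. This is an illustrative assumption, not the documented schema: the policy name, element names, template path, and JSONPath expression shown here are hypothetical placeholders for how such a policy could be wired up.

```xml
<!-- Hypothetical sketch of a declarative Model Armor policy in an Apigee
     proxy. Element and attribute names are illustrative assumptions,
     not the documented Apigee schema. -->
<SanitizeUserPrompt name="MA-CheckPrompt" continueOnError="false" enabled="true">
  <!-- A Model Armor template defining which filters apply
       (prompt injection, jailbreak, PII) and their thresholds. -->
  <TemplateName>projects/my-project/locations/us-central1/templates/my-template</TemplateName>
  <!-- Where in the JSON request body the user prompt is found. -->
  <UserPromptSource>{jsonPath('$.contents[-1].parts[0].text', request.content)}</UserPromptSource>
</SanitizeUserPrompt>
```

A matching response-side policy would inspect the model's output the same way, redacting or blocking responses that trip a configured filter before they reach the caller.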

Beyond basic API management, Model Armor offers centralized governance for multiple LLM providers, including Vertex AI, OpenAI, Anthropic, Meta Llama, and self-hosted models. Organizations deploying LLM-enabled services on Google Kubernetes Engine (GKE) can enforce Model Armor policies directly at inference gateways or load balancers. Integration with Google Security Command Center means that any policy violations are surfaced for monitoring, alerting, and remediation, tightening the feedback loop for security operations. Detailed logs from each policy evaluation support advanced monitoring and anomaly detection via Apigee's observability pipelines.

Whereas most API gateways provide only generic traffic controls, Model Armor's native LLM-specific enforcement eliminates the need for custom middleware, simplifying secure LLM deployment. By providing consistent safety rules and observability across diverse services and endpoints, Model Armor positions itself as a robust solution for organizations seeking to manage the unique risks and complexities of generative AI APIs in production environments.
