Elastic brings LLM observability to Azure AI Foundry to optimize AI agents

Elastic has integrated with Azure AI Foundry to provide observability for agentic AI applications and large language models, giving developers and site reliability engineers real-time insight into token usage, latency, costs and content filtering.

Elastic announced a new integration with Azure AI Foundry that adds observability for agentic AI applications and large language models. The integration gives site reliability engineers and developers real-time insight into model usage, generative AI workloads and agentic AI behavior. Elastic positions the integration as a way to help teams build, monitor and optimize intelligent agents on Azure AI Foundry with improved reliability, efficiency and guardrails.

The release addresses common operational challenges organizations face as they deploy agentic AI in production, including uncontrolled token usage, latency bottlenecks and compliance blind spots. Elastic provides pre-built dashboards that present a unified view of model usage, performance, costs and content filtering, so teams can identify bottlenecks, optimize configurations and understand cost drivers in real time. The company says these capabilities let organizations scale AI applications faster without sacrificing reliability, compliance or budget control.
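To make the signals concrete, the sketch below shows the kind of per-call telemetry such dashboards aggregate: token usage and latency recorded around each model invocation. This is a minimal illustrative example, not Elastic's or Azure AI Foundry's API; the `LlmTelemetry` class, the stubbed model call, and its response shape are all assumptions made for the sketch.

```python
# Illustrative sketch only (hypothetical names, not Elastic's API):
# record token usage and latency around each LLM call, the raw signals
# an observability dashboard would aggregate.
import time
from dataclasses import dataclass, field


@dataclass
class LlmTelemetry:
    """Accumulates per-call usage and latency metrics."""
    calls: int = 0
    total_tokens: int = 0
    total_latency_s: float = 0.0
    records: list = field(default_factory=list)

    def record(self, model: str, tokens: int, latency_s: float) -> None:
        self.calls += 1
        self.total_tokens += tokens
        self.total_latency_s += latency_s
        self.records.append(
            {"model": model, "tokens": tokens, "latency_s": latency_s}
        )

    def avg_latency_s(self) -> float:
        return self.total_latency_s / self.calls if self.calls else 0.0


telemetry = LlmTelemetry()


def observed_call(model: str, prompt: str) -> str:
    """Wraps a (stubbed) model call and records telemetry for it."""
    start = time.perf_counter()
    # Stub standing in for a real model invocation; a real client would
    # return token counts in its usage metadata.
    response = {
        "text": f"echo: {prompt}",
        "usage": {"total_tokens": len(prompt.split()) + 2},
    }
    latency = time.perf_counter() - start
    telemetry.record(model, response["usage"]["total_tokens"], latency)
    return response["text"]
```

In practice the accumulated records would be shipped to a backend such as Elastic Observability rather than held in memory, but the per-call capture point is the same.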

Executives quoted in the announcement emphasized operational clarity and safeguards. “Agentic AI is only as strong as the models and infrastructure that power it,” said Santosh Krishnan, general manager of observability and security at Elastic, noting that the integration helps teams fix performance bottlenecks and understand cost drivers. Amanda Silver, corporate vice president at Microsoft Azure CoreAI, said the integration delivers real-time visibility into token usage, latency and costs, and adds built-in safeguards for models hosted in Azure AI Foundry. The Elastic Azure AI Foundry integration is available in technical preview on Elastic Observability.


Apple plans Intel 18A-P for M7 and 14A for A21

Apple is expected to use Intel’s 18A-P process for M7 chips in MacBook models and Intel’s 14A process for A21 chips in iPhones. The shift points to a broader supplier strategy as Apple moves beyond TSMC for parts of its future silicon roadmap.

Google and other chatbots surface real phone numbers

Generative AI chatbots are surfacing real phone numbers and other personal details, sometimes by pulling from obscure public sources and sometimes by inventing plausible but wrong contact information. Privacy experts say users have few reliable ways to find out whether their data is in model training sets or to force its removal.

U.S. and China revisit AI emergency talks

Washington and Beijing are exploring renewed talks on an emergency communication channel for AI as fears grow over the capabilities of Anthropic’s Mythos model. The shift reflects rising concern in both capitals that competitive pressure is outpacing safeguards.

AI divides employers as hiring and headcount shift

U.S. hiring beat expectations in April, but employers remain split on whether AI should drive layoffs, productivity gains, or internal redeployment. At the same time, candidate use of AI is outpacing employer adoption in hiring, adding new pressure to screening and entry-level recruiting.
