Technical approach for classifying human-AI interactions at scale

Discover how Semantic Telemetry uses large language model (LLM) classifiers to extract actionable insights from massive volumes of human–AI conversations at scale.

As large language models rise to prominence in AI deployments, Microsoft Research’s Semantic Telemetry project offers a technical blueprint for categorizing human–AI interactions on an unprecedented scale. Processing hundreds of millions of anonymized Bing Chat conversations weekly, the pipeline employs LLM-based classifiers to extract key features such as user expertise, satisfaction, and conversational topics. These insights feed back into improving the systems themselves, forming a feedback loop essential for iterative development and performance optimization.

To enable this operation at scale, the engineering team devised a high-throughput, high-performance pipeline architecture. Central to the system is a hybrid compute model blending PySpark for distributed processing and Polars for streamlined execution in smaller environments. The transformation layer is model-agnostic and leverages prompt templates adhering to the Prompty specification, enabling consistent classification workflows regardless of the underlying LLM. Robust parsing and cleaning mechanisms enforce schema alignment, correct label ambiguity, and address potential anomalies in LLM output to maintain integrity across batch operations.
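The article does not publish the templates or parsers themselves, so the following is a minimal illustrative sketch of the two ideas described above: rendering a model-agnostic, Prompty-style prompt template per conversation, and enforcing schema alignment on raw LLM output. The label set, template text, and function names are all assumptions, not the project's actual assets; in production the rows would flow through PySpark or Polars rather than plain Python.

```python
import json
import re

# Hypothetical label set and template in the spirit of the Prompty-style
# assets the article describes; the real templates are not shown there.
LABELS = ["novice", "intermediate", "expert"]
TEMPLATE = (
    "Classify the user's expertise as one of {labels}.\n"
    "Conversation:\n{conversation}\n"
    'Respond with JSON: {{"expertise": "<label>"}}'
)

def build_prompt(conversation: str) -> str:
    """Render the model-agnostic template for one conversation."""
    return TEMPLATE.format(labels=LABELS, conversation=conversation)

def parse_label(raw_output: str) -> str:
    """Enforce schema alignment on an LLM response: extract the JSON
    payload, validate the label, and map anomalous output to 'unknown'."""
    match = re.search(r"\{.*\}", raw_output, re.DOTALL)
    if not match:
        return "unknown"
    try:
        label = json.loads(match.group(0)).get("expertise")
    except json.JSONDecodeError:
        return "unknown"
    return label if label in LABELS else "unknown"
```

Because the parser maps any unexpected output to a sentinel value instead of raising, one malformed LLM response cannot poison an entire batch, which is the kind of integrity guarantee the article attributes to the cleaning layer.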

The engineers faced significant challenges related to endpoint latency, rate limits, evolving model behaviors, and dynamic throughput optimization. Mitigation strategies included using multiple rotating LLM endpoints, asynchronous output saving, favoring high tokens-per-minute models, smart timeouts with retries, and comprehensive evaluation workflows for aligning prompts across new LLM iterations. The team’s dynamic concurrency control adapts to real-time task loads and latency data, further stabilizing throughput. Beyond foundational improvements, extensive optimization experiments explored batching strategies, embedding-based classification to minimize redundant calls, prompt compression tools, and intelligent text truncation. Each technique involved nuanced trade-offs between speed, cost, and classification accuracy—requiring careful evaluation to strike the right balance for production reliability.
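Several of the mitigation strategies above combine naturally in an async client: bounding in-flight requests with a semaphore, rotating across an endpoint pool, and retrying timed-out calls with backoff. The sketch below is illustrative only; the article exposes no API, so the endpoint names, the `call_llm` stand-in, and the fixed concurrency limit (where the real pipeline adapts concurrency from live latency data) are all assumptions.

```python
import asyncio
import itertools
import random

# Hypothetical endpoint pool; the article describes rotating multiple
# LLM endpoints but does not name them.
ENDPOINTS = itertools.cycle(["endpoint-a", "endpoint-b", "endpoint-c"])

async def call_llm(endpoint: str, prompt: str) -> str:
    """Stand-in for a real LLM request with variable latency."""
    await asyncio.sleep(random.uniform(0.0, 0.02))
    return f"{endpoint}: classified"

async def classify(prompt: str, semaphore: asyncio.Semaphore,
                   timeout: float = 1.0, retries: int = 3) -> str:
    """Bound in-flight requests, rotate endpoints on each attempt,
    and retry timed-out calls with exponential backoff."""
    async with semaphore:
        for attempt in range(retries):
            endpoint = next(ENDPOINTS)
            try:
                return await asyncio.wait_for(
                    call_llm(endpoint, prompt), timeout=timeout)
            except asyncio.TimeoutError:
                await asyncio.sleep(2 ** attempt * 0.1)  # back off, then retry
        return "failed"

async def run_batch(prompts: list[str], concurrency: int = 8) -> list[str]:
    # A fixed semaphore keeps the sketch simple; the pipeline described
    # in the article tunes this limit dynamically from latency data.
    semaphore = asyncio.Semaphore(concurrency)
    return await asyncio.gather(*(classify(p, semaphore) for p in prompts))

results = asyncio.run(run_batch([f"conversation {i}" for i in range(10)]))
```

Rotating the endpoint on every retry, not just every request, means a single slow or rate-limited endpoint degrades only the attempts routed to it rather than stalling the whole batch.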

Ultimately, Microsoft’s work demonstrates that scaling LLM-powered analysis of human–AI interactions requires not just robust infrastructure, but an agile approach to prompt engineering, model selection, and orchestration. While the current techniques establish a strong operational foundation, the lessons and tooling from Semantic Telemetry set the stage for even more sophisticated, near real-time insights as AI infrastructure matures.

AI is coming for YouTube creators

More than 15.8 million YouTube videos from over 2 million channels appear in at least 13 public data sets used to train generative AI video tools, often without creators’ permission. Creators and legal advocates are contesting whether such mass downloading and training is lawful or ethical.

Netherlands issues new AI Act guidance

Businesses in the Netherlands have been given updated guidance on how the new EU-wide AI Act will affect them. The 21-page guide, available in English as a 287KB PDF, sets out practical steps for assessing scope and compliance obligations.

FSU experts on the role of AI in health care

Florida State University professors Zhe He and Delaney La Rosa are available to discuss how AI is reshaping diagnosis, treatment planning, and access to care in rural communities. Media can contact them for commentary and interviews.
