Uber expands artificial intelligence data platform for global enterprise and lab use

Uber unveils a sweeping expansion of its artificial intelligence data platform, offering enterprise and lab customers advanced data, task, and agent-building tools on a global scale.

Uber Technologies has announced a significant global expansion of its artificial intelligence data services division, Uber AI Solutions. The move opens up Uber's proprietary data and AI tools to enterprises and research labs worldwide, aiming to accelerate the development and deployment of advanced artificial intelligence models and agents. The expanded platform delivers a suite of custom data solutions, a global digital task network, and specialized infrastructure to support the creation, annotation, and evaluation of artificial intelligence systems at scale.

Drawing on a decade of experience in data collection, labeling, and testing from Uber's own global operations, including map optimization, self-driving technology, and generative artificial intelligence for customer support, the company is now making this expertise available to external organizations. Uber's revamped suite includes a worldwide digital task platform, now live in 30 countries, that connects enterprises with skilled contributors in domains ranging from coding to the sciences and linguistics. These contributors handle nuanced annotation, translation, and editing tasks for multilingual, multimodal content, supported by Uber's foundational systems for identity, compliance, and payments that manage global gig engagement within artificial intelligence workflows.

Central to the launch is Uber’s new data foundry, a service offering curated and custom-generated datasets in formats such as audio, video, images, and text. These comprehensive datasets, sourced from contributors globally, enable the training of large artificial intelligence models for generative applications, mapping, speech recognition, and more—prioritizing compliance and privacy by design. Further, Uber AI Solutions provides resources for building ‘agentic’ artificial intelligence with high-quality annotations, realistic workflow simulations, multilingual data, and tools for precise, scenario-driven agent training. Enterprises can also leverage Uber’s internal infrastructure for onboarding, quality checks, smart task routing and decomposition, and ongoing feedback—streamlining the path from data acquisition to tested artificial intelligence outcomes.

With this initiative, Uber positions itself as a ‘human intelligence layer’ in global artificial intelligence development, bridging technical innovation with operational scale. Looking ahead, the company also revealed work on a natural-language interface to further simplify data requests: enterprise clients will be able to describe their needs in plain language and leave the platform to automate project setup, task assignment, and quality assurance. The launch marks Uber's strategic shift from internal artificial intelligence innovation to serving as a broad data and tooling partner for the next era of artificial intelligence.
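For illustration only, the sketch below shows one way a plain-language data request could be decomposed into a structured project plan covering tasks, contributor assignments, and quality-assurance steps. Every name in it (DataRequest, ProjectPlan, plan_from_request) is hypothetical and does not come from Uber AI Solutions' actual product or API.

```python
# Hypothetical sketch: turning a plain-language data request into a project plan.
# None of these names come from Uber AI Solutions; they only illustrate the idea
# of automated project setup, task assignment, and quality assurance described above.
from dataclasses import dataclass, field


@dataclass
class DataRequest:
    """A client's plain-language description of the dataset they need."""
    description: str          # e.g. "annotated Spanish customer-support dialogues"
    modality: str             # "text", "audio", "image", or "video"
    languages: list[str] = field(default_factory=lambda: ["en"])


@dataclass
class ProjectPlan:
    """The structured plan the platform would generate from a request."""
    tasks: list[str]
    assigned_domains: list[str]
    qa_steps: list[str]


def plan_from_request(req: DataRequest) -> ProjectPlan:
    """Decompose a request into annotation tasks, contributor domains, and QA checks."""
    tasks = [f"collect {req.modality} samples", "annotate samples", "review annotations"]
    domains = ["linguistics"] if len(req.languages) > 1 else ["general annotation"]
    qa_steps = ["spot-check 5% of items", "inter-annotator agreement report"]
    return ProjectPlan(tasks=tasks, assigned_domains=domains, qa_steps=qa_steps)


if __name__ == "__main__":
    request = DataRequest(
        description="Annotated customer-support dialogues for intent classification",
        modality="text",
        languages=["en", "es"],
    )
    print(plan_from_request(request))
```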

Impact Score: 73

Nvidia to sell fully integrated Artificial Intelligence servers

A report picked up by Tom's Hardware and discussed on Hacker News says Nvidia is preparing to sell fully built rack and tray assemblies that include Vera CPUs, Rubin GPUs, and integrated cooling, moving beyond supplying only GPUs and components for Artificial Intelligence workloads.

Navigating new age verification laws for game developers

Governments in the UK, European Union, the United States of America and elsewhere are imposing stricter age verification rules that affect game content, social features and personalization systems. Developers must adopt proportionate age-assurance measures such as ID checks, credit card verification or Artificial Intelligence age estimation to avoid fines, bans and reputational harm.

Large language models require a new form of oversight: capability-based monitoring

The paper proposes capability-based monitoring for large language models in healthcare, organizing oversight around shared capabilities such as summarization, reasoning, translation, and safety guardrails. The authors argue this approach is more scalable than task-based monitoring inherited from traditional machine learning and can reveal systemic weaknesses and emergent behaviors across tasks.
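As a rough sketch of the idea rather than code from the paper, the example below groups per-task quality scores by the capability each task exercises, so a weakness in, say, summarization surfaces across every task that depends on it. The task names, capability mapping, and alert threshold are assumptions made for this illustration.

```python
# Illustrative sketch of capability-based monitoring (not taken from the paper):
# per-task metrics are grouped by the capability they exercise, so a systemic
# weakness in one capability shows up across all tasks that depend on it.
from collections import defaultdict
from statistics import mean

# Hypothetical mapping of deployed tasks to the capabilities they rely on.
TASK_CAPABILITIES = {
    "discharge_summary": ["summarization", "safety_guardrails"],
    "triage_note_translation": ["translation", "safety_guardrails"],
    "differential_diagnosis": ["reasoning", "safety_guardrails"],
}


def capability_scores(task_scores: dict[str, float]) -> dict[str, float]:
    """Average each capability's score over every task that exercises it."""
    grouped: dict[str, list[float]] = defaultdict(list)
    for task, score in task_scores.items():
        for capability in TASK_CAPABILITIES.get(task, []):
            grouped[capability].append(score)
    return {capability: mean(scores) for capability, scores in grouped.items()}


if __name__ == "__main__":
    # Hypothetical weekly quality scores per task (0-1 scale).
    weekly = {
        "discharge_summary": 0.74,
        "triage_note_translation": 0.91,
        "differential_diagnosis": 0.69,
    }
    for capability, score in capability_scores(weekly).items():
        flag = "ALERT" if score < 0.75 else "ok"
        print(f"{capability}: {score:.2f} ({flag})")
```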
