Palantir is introducing Anthropic’s Claude Opus 4.6 for non-georestricted enrollments across Anthropic, AWS Bedrock, and Google Vertex; in the US and EU, non-georestricted enrollments receive it through AWS Bedrock and Google Vertex. Claude Opus 4.6 is Anthropic’s latest flagship model for advanced large language model use cases in coding, agentic workflows, and knowledge work. It builds on its predecessor with stronger coding skills, deeper planning, longer agent autonomy, and improved code review and debugging, and it is optimized to operate reliably in large codebases and to handle complex, multi-step tasks with minimal intervention. The model supports a 200,000-token context window, text and image modalities, and capabilities including extended thinking and function calling.
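Outside AIP’s own abstractions, a Bedrock Converse-style request for a Claude model can be sketched as follows. This is a minimal illustration of the request shape only; the model identifier below is a placeholder, not a confirmed Claude Opus 4.6 ID, so check your enrollment’s model catalog for the real value.

```python
# Placeholder identifier -- the actual Claude Opus 4.6 model ID may differ.
MODEL_ID = "anthropic.claude-opus-4-6"

def build_converse_request(prompt: str, max_tokens: int = 1024) -> dict:
    """Build a text-only, Converse-style request body for a Claude model."""
    return {
        "modelId": MODEL_ID,
        "messages": [
            {"role": "user", "content": [{"text": prompt}]},
        ],
        "inferenceConfig": {"maxTokens": max_tokens},
    }

request = build_converse_request("Summarize this maintenance manual section.")
```

The same message structure carries image content blocks as well, which is how the model’s image modality is exercised.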
Model Studio, Palantir’s no-code model development workspace, becomes generally available in all environments the week of February 9, following a public beta that began in October 2025. The tool lets users train forecasting, classification, and regression models through a point-and-click interface, with production-grade trainers, smart defaults, guided workflows, experiment tracking, data lineage, and secure access controls built in. It targets both business users and data scientists, lowering the barrier to machine learning adoption while supporting advanced configuration where needed; the roadmap includes enhanced experiment logging, additional modeling tasks, marketplace support, and direct time series input on datasets.

Developer capabilities expand with support for unscoped Developer Console applications, which unlock features previously unavailable to standalone OAuth clients, such as OSDK usage, documentation access, marketplace integration, website hosting, and metrics. Standalone OAuth clients are now deprecated in favor of scoped or unscoped Developer Console applications, with restrictions managed directly in the console.
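For context on what a scoped application does under the hood, the standard OAuth 2.0 client-credentials token request it issues can be sketched as below. The token URL and scope name are invented placeholders, not Palantir-specific values; only the grant shape follows the OAuth 2.0 specification.

```python
from urllib.parse import urlencode

# Hypothetical endpoint -- substitute your stack's actual token URL.
TOKEN_URL = "https://example.stack.com/oauth2/token"

def build_token_request(client_id: str, client_secret: str, scopes: list[str]) -> dict:
    """Assemble a standard OAuth2 client-credentials token request."""
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": " ".join(scopes),  # space-delimited per the OAuth2 spec
    })
    return {
        "url": TOKEN_URL,
        "headers": {"Content-Type": "application/x-www-form-urlencoded"},
        "body": body,
    }

req = build_token_request("my-app", "s3cret", ["api:read-data"])
```

An unscoped application differs in that its access is governed by restrictions configured in the console rather than by the scope list in this request.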
Workflow presentation and lineage tools are also advancing. A new presentation mode in Workflow Lineage lets users create, capture, and manage visual frames of workflow graphs, including node layout, colors, and zoom level, and navigate between frames via hotkeys for more dynamic presentations. Workflow Lineage now supports multi-ontology graphs, providing unified visualization of resource nodes across ontologies, cross-ontology awareness with grayed-out external nodes and warning icons, and easy ontology switching, while preserving functional limits for action nodes outside the selected ontology.

On the AI model side, Palantir is enabling Claude Sonnet 4.5 and Claude Haiku 4.5 in the Japan region via AWS Bedrock. Both offer a 200,000-token context window, knowledge cutoffs of January 2025 and February 2025 respectively, text and image modalities, and capabilities including tool calling, vision, and coding. Palantir is also adding OpenAI’s GPT-5.2 Codex to AIP for non-georestricted enrollments, with a 400,000-token context window, an August 2025 knowledge cutoff, text and image modalities, and capabilities such as the Responses API, structured outputs, and function calling.
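The tool-calling capability mentioned for both model families rests on the same idea: the caller declares a function as a JSON Schema and the model emits structured arguments against it. A minimal sketch of such a declaration follows; the tool name and fields are invented for illustration, and the exact wrapper object differs slightly between providers.

```python
import json

# Hypothetical tool definition: the parameters object is plain JSON Schema,
# which is the common core of Anthropic and OpenAI function-calling formats.
get_part_status = {
    "name": "get_part_status",
    "description": "Look up the maintenance status of a part by its ID.",
    "parameters": {
        "type": "object",
        "properties": {
            "part_id": {
                "type": "string",
                "description": "Internal part identifier.",
            },
        },
        "required": ["part_id"],
    },
}

serialized = json.dumps(get_part_status)
```

Structured outputs work the same way in reverse: a schema like the `parameters` object above constrains the shape of the model’s response.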
Foundry users gain new runtime flexibility with the general availability of compute modules: containers that scale dynamically with load, letting existing code in any language run inside Foundry without rewriting. Compute modules support custom functions and APIs callable from Workshop, Slate, ontology-based applications, and AIP Logic; data pipelines that ingest and transform external data into Foundry streams, datasets, or media sets; and integration of legacy business-critical code. Features include dynamic horizontal scaling, zero-downtime updates, native connections to Foundry resources, external connectivity over protocols such as REST, WebSockets, and SSE, and marketplace compatibility for sharing modules.

For document-heavy workflows, AIP Document Intelligence becomes generally available on February 4, 2026, enabled by default for all AIP enrollments. It offers a low-code interface to configure and test document extraction strategies that combine OCR and vision-language models, compare them on quality, speed, and cost, and then deploy production-ready Python transforms that convert PDFs and images into structured Markdown. These transforms replace earlier Spark-based flows, significantly reducing end-to-end processing times, and the system adapts to complex layouts across diverse enterprise documents such as maintenance manuals, regulatory filings, and invoices. Upcoming roadmap items focus on entity extraction and broader integration of extraction configurations into AIP Logic and ontology functions.
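The compare-on-quality-speed-cost step can be pictured as a simple weighted scoring pass over candidate strategies. Everything below is invented for illustration, not AIP Document Intelligence’s actual scoring: the strategy names, metric values, and weights are assumptions showing one way such a comparison could be reduced to a single ranking.

```python
# Hypothetical extraction strategies with made-up benchmark metrics.
strategies = {
    "ocr_only":     {"quality": 0.78, "pages_per_min": 120, "cost_per_page": 0.001},
    "vlm_only":     {"quality": 0.93, "pages_per_min": 15,  "cost_per_page": 0.020},
    "ocr_plus_vlm": {"quality": 0.96, "pages_per_min": 40,  "cost_per_page": 0.008},
}

def score(m: dict, w_quality: float = 0.8, w_speed: float = 0.1, w_cost: float = 0.1) -> float:
    """Weighted score: quality and speed count for, cost counts against."""
    speed_norm = m["pages_per_min"] / 120.0   # normalize against the fastest strategy
    cost_norm = m["cost_per_page"] / 0.020    # normalize against the priciest strategy
    return w_quality * m["quality"] + w_speed * speed_norm - w_cost * cost_norm

best = max(strategies, key=lambda name: score(strategies[name]))
```

With these particular weights the hybrid strategy wins; shifting weight toward speed or cost would favor OCR-only, which is the trade-off the comparison view is there to surface.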
