Amazon Web Services used AWS re:Invent 2025 to push deeper into custom large language models, unveiling serverless model customization in Amazon SageMaker AI and a Reinforcement Fine-Tuning flow in Amazon Bedrock. The SageMaker capability removes infrastructure management from the customization process, letting developers build and fine-tune models without provisioning servers. Bedrock's Reinforcement Fine-Tuning offers an automated end-to-end path: teams select either a custom reward function or a preset workflow, and the platform runs the full customization pipeline.
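The piece does not document an API for the new reinforcement flow, but Bedrock already exposes customization jobs through its SDK. The sketch below shows the existing boto3 create_model_customization_job call as a point of reference; the model ID, role, bucket names, and hyperparameters are illustrative assumptions, and the reward-function selection specific to Reinforcement Fine-Tuning is not shown because AWS has not detailed that interface here.

```python
# Minimal sketch: submitting a Bedrock model customization job with boto3.
# create_model_customization_job is an existing call; the identifiers and
# hyperparameters below are placeholders, and the new reinforcement
# fine-tuning options are not represented because their API is not
# described in the article.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

response = bedrock.create_model_customization_job(
    jobName="support-assistant-tuning-demo",            # hypothetical names
    customModelName="support-assistant-custom",
    roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",
    baseModelIdentifier="amazon.nova-lite-v1:0",         # assumed base model
    customizationType="FINE_TUNING",                     # existing job type
    trainingDataConfig={"s3Uri": "s3://example-bucket/train.jsonl"},
    outputDataConfig={"s3Uri": "s3://example-bucket/output/"},
    hyperParameters={"epochCount": "2", "learningRate": "0.00001"},
)
print(response["jobArn"])  # track the job by its ARN
```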
The SageMaker workflow offers two routes tailored to different teams: a self-guided point-and-click interface for teams that already have labeled data and clear requirements, and an agent-led natural-language experience, launching in preview, that lets developers guide tuning by prompting SageMaker conversationally. Ankur Mehrotra, general manager of artificial intelligence platforms at AWS, framed customization as the answer to the question, “If my competitor has access to the same model, how do I differentiate myself?” The announcement emphasizes making specialization practical for vertical use cases ranging from healthcare terminology to blockchain analytics.
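For the self-guided route, "labeled data" in practice usually means prompt/completion pairs uploaded as JSONL. The snippet below is a minimal illustrative sketch of preparing such a file for the vertical examples mentioned above; the exact schema is an assumption, since the required format varies by model and customization recipe.

```python
# Illustrative sketch: labeled fine-tuning data as JSONL prompt/completion
# pairs. The field names are an assumed schema; the format actually required
# by a given SageMaker or Bedrock recipe may differ.
import json

examples = [
    {
        "prompt": "Summarize the clinical note: patient reports dyspnea on exertion...",
        "completion": "Chief complaint: exertional dyspnea. Recommend pulmonary follow-up.",
    },
    {
        "prompt": "Classify this on-chain transaction pattern: rapid fan-out to 40 wallets...",
        "completion": "Likely mixer-style obfuscation; flag for review.",
    },
]

with open("train.jsonl", "w") as f:
    for record in examples:
        f.write(json.dumps(record) + "\n")  # one JSON object per line
```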
The article lists supported targets for customization, including Amazon Nova and certain open-weight models such as DeepSeek and Meta Llama, and highlights Nova Forge as a managed offering for building custom Nova models for enterprise customers. AWS is betting that specialized models trained on proprietary data will let firms differentiate more effectively than relying on generic models from established foundation-model providers. The piece also notes the market context: a July survey from Menlo Ventures found that enterprises favor providers such as Anthropic, OpenAI, and Google's Gemini, underscoring the challenge AWS faces even as it leans on customization, simplified interfaces, and serverless infrastructure to lower the barrier to enterprise adoption.
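As a practical starting point, the existing Bedrock API can already report which base models in an account and Region advertise fine-tuning support. The sketch below uses boto3's list_foundation_models with its byCustomizationType filter, both of which exist today; whether the newly announced open-weight and Nova Forge targets appear in that listing is not stated in the piece.

```python
# List foundation models that currently advertise fine-tuning support in
# this account/Region. list_foundation_models and the byCustomizationType
# filter are existing boto3 calls; newly announced targets may not surface
# here yet.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

models = bedrock.list_foundation_models(byCustomizationType="FINE_TUNING")
for summary in models["modelSummaries"]:
    print(summary["modelId"], "-", summary["providerName"])
```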
