ByteDance open sources Eino, a Golang framework for large language model applications

ByteDance has open sourced Eino, a Golang-based framework for building large language model applications with strong typing, graph orchestration and integrated tooling across the full development lifecycle.

ByteDance has released Eino as an open source project under the CloudWeGo umbrella after more than half a year of internal use and iteration, positioning it as a comprehensive large language model application development framework written in Golang. Eino is built around clear component definitions, powerful orchestration and coverage of the full DevOps lifecycle, aiming to help developers quickly build sophisticated large language model applications while staying aligned with fast-moving research and industry practice. The framework emphasizes a stable core, simple and understandable APIs, an approachable onboarding path and strong extensibility. It is already the preferred full-code framework for internal large language model applications at ByteDance, where business lines such as Doubao, TikTok and Coze have integrated it across hundreds of services.

Eino organizes large language model application logic into domain components, with the Chat Model as a central example for interacting with models such as Doubao through concise Go APIs. The framework is designed around characteristics specific to large language model workloads: providing sufficient and effective context, reliably connecting model outputs to external environments, handling streaming outputs with real-time processing, copying, merging and concatenation, and addressing concurrency, fan-in/fan-out and option distribution on top of directed graph structures. Eino’s orchestration layer offers Chain, Graph and Workflow paradigms, letting developers wire components such as ChatModel, ChatTemplate, Retriever, Document Loader, Transformer and Tools into directed graphs that reflect common patterns like ReAct agents and multi-agent hosts. A ReAct-style agent can be implemented in a few dozen lines of graph orchestration code, while Eino automatically handles type checking, stream wrapping, concurrency safety on shared state, callback aspect injection and flexible option distribution when compiling graphs into runnable executors.

The framework’s design pairs stable core abstractions with agile extension. Each component type supports multiple implementations, such as ChatModel backends for OpenAI, Gemini and Claude, all remaining compatible with orchestration, and developers can introduce custom Lambda nodes when business logic falls outside the predefined components, with full support for declared input and output types, streaming and callbacks. Eino leverages Golang’s strong typing to improve reliability and maintainability, surfacing type mismatches when a graph is compiled rather than at runtime, and the codebase is organized into modular Go modules with minimal dependencies and isomorphic, intuitive APIs.

The framework is practice-driven: features such as field-level data mapping in Workflow and enhanced message structures were shaped by requirements from TikTok and Doubao. It ships with a tooling ecosystem that includes built-in tracing callbacks, integrations with APMPlus and Langfuse, and IDE plugins for visual graph inspection, drag-and-drop construction and export to Eino code. Documentation ranges from quick starts to deep-dive manuals, and the maintainers plan to evolve Eino with the community from a unified internal and external codebase as a production-grade platform for large language model application development.


Adobe plans outcome-based pricing for Artificial Intelligence agents

Adobe is positioning its Artificial Intelligence agents around performance-based pricing, charging only when the software completes useful work. The approach points to a more results-oriented model for selling generative Artificial Intelligence tools to business customers.

Tech firms commit billions to Artificial Intelligence infrastructure

Amazon, OpenAI, Nvidia, Meta, Google and others are signing increasingly large cloud, chip and data center agreements as demand for Artificial Intelligence infrastructure accelerates. The latest wave of deals spans investments, compute purchases, chip supply agreements and data center buildouts.

JEDEC outlines LPDDR6 expansion for data centers

JEDEC has previewed planned updates to LPDDR6 aimed at pushing the memory standard beyond mobile devices and into selected data center and accelerated computing use cases. The roadmap includes higher-capacity packaging options, flexible metadata support, 512 GB densities, and a new SOCAMM2 module standard.

TSMC debuts A13 process technology

TSMC has introduced its A13 process at its 2026 North America Technology Symposium as a tighter version of A14 aimed at next-generation Artificial Intelligence, high performance computing, and mobile designs. The company positions the node as a more compact and efficient option with backward-compatible design rules for faster migration.
