Gemini CLI brings Google’s Artificial Intelligence models to the terminal

Google’s Gemini CLI is an open-source Artificial Intelligence agent that lets developers access Gemini 3 models, automation tools, and GitHub workflows directly from the command line.

Gemini CLI is an open-source Artificial Intelligence agent from Google that brings the power of Gemini models directly into the terminal, offering a lightweight way to interact with the models from a command-line environment. The project is licensed under Apache License 2.0, hosted on GitHub, and designed as a terminal-first experience for developers who prefer working at the command line. It supports custom behavior via context files, checkpointed conversations, and integration with broader tooling through Model Context Protocol (MCP) servers and GitHub Actions.

The tool highlights several core benefits, including a free tier, built-in tools, and extensibility. With a personal Google account, the free tier provides 60 requests per minute and 1,000 requests per day, along with access to Gemini 3 models and a 1M-token context window. The project emphasizes that there is no API key management in this default mode: users simply sign in with a Google account and receive automatic updates to the latest models. For users who prefer direct control or enterprise-grade features, authentication can also use a Gemini API key or Vertex AI, with options tailored for individual developers, paid Gemini Code Assist licenses, and enterprise teams that need integration with Google Cloud infrastructure.
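For the API-key path, configuration is typically just an environment variable. The sketch below uses `GEMINI_API_KEY`, the Gemini API's conventional variable name; the exact precedence when multiple auth methods are configured is an assumption here, so check the CLI documentation for your setup.

```shell
# Use a Gemini API key instead of Google-account sign-in.
# GEMINI_API_KEY is the conventional variable for the Gemini API;
# precedence over other configured auth methods may vary by version.
export GEMINI_API_KEY="your-api-key-here"

# Start the CLI; it should pick up the key from the environment.
gemini
```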

Installation requires Node.js version 20 or higher and works across macOS, Linux, and Windows. Developers can run the tool instantly using npx, install it globally via npm with npm install -g @google/gemini-cli, or use Homebrew with brew install gemini-cli on macOS and Linux. Releases ship on three channels: preview builds each Tuesday at 23:59 UTC, stable builds each Tuesday at 20:00 UTC, and nightly builds each day at 00:00 UTC. The CLI can query and edit large codebases, generate new applications from PDFs, images, or sketches, and help debug and troubleshoot using natural language; it can also automate tasks like querying pull requests and handling complex rebases, and it runs non-interactively in scripts with structured JSON or streaming output formats.
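The install and invocation paths above can be summarized as shell commands. The package name, brew formula, and binary name come from the article; the `--prompt` and `--output-format` flags are illustrative of the headless mode described, not a verified flag reference.

```shell
# Run once without installing (requires Node.js 20+)
npx @google/gemini-cli

# Install globally via npm
npm install -g @google/gemini-cli

# Or via Homebrew on macOS and Linux
brew install gemini-cli

# Non-interactive use in a script; flags shown are illustrative
# of the structured-JSON headless mode described above
gemini --prompt "Summarize the open TODOs in this repo" --output-format json
```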

Gemini CLI includes tools for Google Search grounding, file system operations, shell commands, and web fetching, and it supports media generation via connected servers such as Imagen, Veo, or Lyria. Custom MCP servers can be configured in a settings file to add capabilities like listing open GitHub pull requests, sending summaries into Slack channels, or running database queries. GitHub integration extends into workflows through a dedicated action that can handle pull request reviews with contextual feedback, automated issue triage, and on-demand assistance via @gemini-cli mentions. Documentation covers topics from authentication, configuration, and keyboard shortcuts to headless scripting, IDE integration, sandboxing and security, enterprise deployment, telemetry, and troubleshooting, and the maintainers encourage contributions ranging from bug reports and documentation improvements to new extensions and MCP servers.
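Custom MCP servers are declared in the CLI's settings file. The fragment below is a sketch assuming the `~/.gemini/settings.json` location and an `mcpServers` key; the GitHub server package and token variable are illustrative examples, not verified defaults.

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "your-token-here"
      }
    }
  }
}
```

Each entry names a server and tells the CLI how to launch it; once registered, its tools (for example, listing open pull requests) become available in the session.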

Impact Score: 55

OpenClaw pushes autonomous Artificial Intelligence agents into enterprises

OpenClaw’s rapid growth is accelerating interest in persistent, self-hosted autonomous agents that run continuously instead of waiting for prompts. NVIDIA is positioning NemoClaw as a more secure reference implementation for organizations that want local control, auditability and hardened deployment defaults.

Indiana launches Artificial Intelligence business portal

Indiana is rolling out IN AI, a statewide portal meant to help employers adopt Artificial Intelligence with practical guidance, workshops and peer support. State leaders and business groups are positioning the effort as a way to raise productivity, wages and job growth while keeping workers at the center.

Goodfire launches model debugging tool for large language models

Goodfire has introduced Silico, a mechanistic interpretability platform designed to let developers inspect and adjust model behavior during development. The company is positioning it as a way to give smaller teams deeper control over open-source models and more trustworthy outputs.

Nvidia launches Nemotron 3 Nano Omni for enterprise agents

Nvidia has introduced Nemotron 3 Nano Omni, a multimodal open model designed to support enterprise agents that reason across vision, speech and language. The launch extends Nvidia’s push beyond hardware into models and services while targeting more efficient agentic workflows.

Intel 18A-P node improves performance and efficiency

Intel plans to present new results for its 18A-P process at the VLSI 2026 Symposium, highlighting gains in performance, power efficiency, and manufacturing predictability. The updated node is positioned as a stronger option for customers seeking 18A density with better operating characteristics.
