Connecting to a local LLM with Ollama

Run a local language model with Ollama and connect Mundi for offline, free AI-powered geospatial analysis on your laptop.

Mundi is an open source web GIS that can connect to local language models via Ollama, enabling offline, free use of AI features without data leaving your machine. Mundi is compatible with the Chat Completions API that Ollama exposes, so it can talk to models such as orieg/gemma3-tools. The guide demonstrates running the 1B variant on a MacBook Pro with an M1 Pro chip; a model that small runs quickly on a laptop and fits within modest disk and memory constraints.
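Because Ollama speaks the OpenAI Chat Completions API, any client can address it with the standard request shape. A minimal sketch of that request body, using the endpoint and model name from this guide (`build_chat_request` is a hypothetical helper that only constructs the payload; actually sending it requires a running Ollama server):

```python
import json

# Ollama's OpenAI-compatible endpoint, on its default port (11434).
OLLAMA_BASE_URL = "http://localhost:11434/v1"

def build_chat_request(model, prompt):
    """Return a Chat Completions request body as a JSON-serializable dict."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request(
    "orieg/gemma3-tools:1b",
    "Summarize the layers in my GeoPackage.",
)
print(json.dumps(payload, indent=2))
```

The same payload works against any Chat Completions-compatible server; only the base URL and model name change.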

Preparation requires a self-hosted Mundi instance built with Docker Compose, Ollama installed on the host, and at least one Ollama model downloaded. The tutorial uses orieg/gemma3-tools:1b as an example because it supports tool calling. To wire Mundi to the local server, edit docker-compose.yml: set OPENAI_BASE_URL to http://host.docker.internal:11434/v1, OPENAI_API_KEY to any non-empty string such as ollama, and OPENAI_MODEL to the model name. Adding an extra_hosts entry that maps host.docker.internal to host-gateway ensures the container can reach the Ollama server running on the host at its default port.
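Concretely, the wiring described above might look like this in docker-compose.yml — a sketch, not the full file; the service name `app` is taken from the `docker compose up app` command used later in the guide:

```yaml
services:
  app:
    environment:
      # Point Mundi's OpenAI-compatible client at the host's Ollama server
      - OPENAI_BASE_URL=http://host.docker.internal:11434/v1
      - OPENAI_API_KEY=ollama   # any non-empty string; Ollama does not check it
      - OPENAI_MODEL=orieg/gemma3-tools:1b
    extra_hosts:
      # Let the container resolve the Docker host machine
      - "host.docker.internal:host-gateway"
```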

After saving configuration changes, start the Ollama server on the host with ollama serve and, if needed, launch the model with ollama run orieg/gemma3-tools:1b. Then bring up the Mundi application using docker compose up app from the mundi.ai project directory. Once migrations complete and services report ready, open http://localhost:8000 to access Mundi and its assistant, Kue. Upload a GIS file such as a GeoPackage and ask Kue about your layer; Kue will use the local model to summarize layer type, geometry, feature count, and more, and can invoke tools if the model supports tool calling.
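The startup sequence above can be summarized as shell commands (run from the host, with the final command issued in the mundi.ai project directory):

```shell
# On the host: start the Ollama server, then pull and run the model
ollama serve
ollama run orieg/gemma3-tools:1b

# From the mundi.ai project directory: bring up Mundi
docker compose up app
```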

The documentation also emphasizes caution: local 1B-parameter models will often give useful, concise summaries but may hallucinate details or misinterpret data relationships, as shown in an example where the model incorrectly suggested a PostGIS connection for a standalone vector file. Users should verify outputs and treat model responses as aids rather than authoritative answers.

Impact Score: 70

Anumana wins FDA clearance for pulmonary hypertension ECG AI tool

Anumana has received FDA 510(k) clearance for an AI-enabled pulmonary hypertension algorithm designed for use with standard 12-lead electrocardiograms. The company says the software can help clinicians spot early signs of disease within existing workflows, without moving patient data outside the health system environment.

Anu Bradford on tech sovereignty and regulatory fragmentation

Anu Bradford argues that Europe is wavering in its role as the world's digital rule-setter just as governments everywhere move toward more state control over technology. Global companies are being pushed to treat geopolitical risk, data sovereignty, and AI governance as core strategic issues.

Mistral launches text-to-speech model

Mistral has expanded its Voxtral family with a text-to-speech system aimed at enterprise voice applications. The company is positioning the open-weights model as a flexible alternative for organizations that want more control over deployment, cost and customization.

UK Parliament opens workforce inquiry on AI

A UK Parliament committee is examining how AI is changing business and work, with a focus on both economic opportunity and labour disruption. The inquiry is seeking evidence on government priorities as adoption expands across the economy.
