Model Management
ollama pull <model_name>
Downloads a model from the Ollama library.
Example:
ollama pull llama3

ollama list
Lists all models that you have downloaded.
Example:
ollama list

ollama rm <model_name>
Deletes a model from your local machine.
Example:
ollama rm llama3

ollama cp <source_model> <new_name>
Creates a copy of a model.
Example:
ollama cp llama3 my-llama3-copy

Running Models
ollama run <model_name>
Starts a conversation with a model.
Example:
ollama run llama3

ollama run <model_name> "Your prompt"
Runs a model with a single prompt and exits.
Example:
ollama run llama3 "What is the capital of France?"

/set verbose
Inside a chat, toggles verbose mode to see more details.
Example:
/set verbose

/show info
Inside a chat, shows information about the current model.
Example:
/show info

Modelfile Commands
ollama create <model_name> -f ./Modelfile
Creates a model from a Modelfile.
Example:
ollama create my-custom-model -f ./Modelfile

FROM <base_model>
(In Modelfile) Specifies the base model to use.
Example:
FROM llama3

PARAMETER <name> <value>
(In Modelfile) Sets a parameter for the model.
Example:
PARAMETER temperature 0.7

SYSTEM """..."""
(In Modelfile) Sets a system-level message.
Example:
SYSTEM """You are a helpful AI assistant."""

API & Server

ollama serve
Starts the Ollama server. It usually runs in the background.
Example:
ollama serve

curl http://localhost:11434/api/generate -d '{ ... }'
Generates a response via the REST API. (Check the docs for the full JSON format.)
Example:
curl http://localhost:11434/api/generate -d '{ "model": "llama3", "prompt": "Why is the sky blue?" }'

curl http://localhost:11434/api/tags
Fetches the list of local models via the API.
Example:
curl http://localhost:11434/api/tags
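These endpoints are plain HTTP, so any language can call them. A minimal sketch in Python, assuming the requests package is installed; note that /api/generate streams by default, so the sketch passes "stream": false to get one JSON reply:
import requests

# Non-streaming generation request against the local Ollama server.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Why is the sky blue?", "stream": False},
)
print(resp.json()["response"])

# List the locally available models.
tags = requests.get("http://localhost:11434/api/tags")
print([m["name"] for m in tags.json()["models"]])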

Advanced Tips & Useful Tools

Tool Calling / Internet Access
Models can't access the internet directly. Use tool calling (e.g., with the ollama-python library) to let the model request external data such as search results, which your code then fetches and feeds back into the conversation. See the sketch below.
Example:
# Your Python code detects this request from the model,
# then calls a search API and feeds the results back in.
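A minimal sketch of that loop, assuming the ollama Python package (pip install ollama), a tool-capable model such as llama3.1, and a hypothetical search_web helper you would implement yourself:
import ollama

def search_web(query: str) -> str:
    # Hypothetical helper: call a real search API here and return text results.
    return f"(search results for: {query})"

messages = [{"role": "user", "content": "What is in the news today?"}]

# Describe the tool so the model can ask for it.
response = ollama.chat(
    model="llama3.1",
    messages=messages,
    tools=[{
        "type": "function",
        "function": {
            "name": "search_web",
            "description": "Search the internet for current information.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    }],
)

# If the model requested the tool, run it and feed the result back.
if response.message.tool_calls:
    messages.append(response.message)
    for call in response.message.tool_calls:
        messages.append({
            "role": "tool",
            "content": search_web(**call.function.arguments),
        })
    final = ollama.chat(model="llama3.1", messages=messages)
    print(final.message.content)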

Open WebUI (Docker)
A powerful, self-hosted web UI for Ollama. Supports RAG, multi-model chat, and more. Run it with Docker for an easy setup.
Example:
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

Important Websites
Official resources for documentation, updates, and community tools.
