Local LLM host: download and run open models on your own machine from the command line.
Installation
The install script below covers Linux and WSL. macOS is a direct download from the site, and native Windows support isn't there yet.
```sh
curl https://ollama.ai/install.sh | sh
```
Usage
```sh
ollama run MODEL_NAME
```
This starts an interactive session with the model, downloading it first if it isn't already local. Capabilities are limited by the available models and your hardware; none of them are GPT-4-level (not on my machine, anyway), but it's far more responsive than hosted solutions.
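A minimal example, assuming llama2 as the model name (any model from the library works the same way); `run` also accepts a one-shot prompt as an argument:

```sh
# Pull the model ahead of time (optional; `run` fetches it on demand too)
ollama pull llama2

# Pass a prompt directly instead of opening an interactive session
ollama run llama2 "Why is the sky blue?"

# See which models are downloaded locally
ollama list
```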
The full list of supported models is in the Ollama model library.
Also serves a REST API (on localhost:11434 by default), but I haven't messed with that yet.
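A quick sketch of hitting the generate endpoint with curl, assuming the server is running on the default port and llama2 is already pulled; with `"stream": false` the response comes back as a single JSON object instead of a stream:

```sh
# One non-streaming completion against the local server
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```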
Integrations
- ollama-python: official Python library
- ollama-js: official JavaScript library
- ollama-ai: Ruby support (install commands for all three sketched below)
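For getting started, the Python and JavaScript libraries are both published under the package name ollama, and the Ruby one as the ollama-ai gem; a quick install sketch (check each repo for current instructions):

```sh
pip install ollama        # ollama-python
npm install ollama        # ollama-js
gem install ollama-ai     # ollama-ai
```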
Added to vault 2024-01-19. Updated on 2024-02-01