

Ollama: a local LLM host.


The install script covers Linux and WSL. macOS is a direct download, and native Windows is not there yet.

curl -fsSL https://ollama.com/install.sh | sh


ollama run MODEL_NAME

This starts an interactive session with the model, downloading it first if needed. Capabilities are limited by the available models and your hardware. None of them are GPT-4 or anything — not on my machine, anyway — but it's far more responsive than hosted solutions.
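A few invocations for reference (the model name `llama3` here is just an example — substitute whatever you've pulled):

```shell
# Interactive chat session; pulls the model on first run
ollama run llama3

# One-shot prompt: prints the reply and exits
ollama run llama3 "Why is the sky blue?"

# See which models are already downloaded
ollama list
```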

The full list of supported models is in the Ollama model library.

Ollama also runs a local REST API server (on port 11434 by default), but I haven't messed with that yet.
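As a starting point for later, here's a minimal sketch of calling that REST API from Python, assuming Ollama's documented `/api/generate` endpoint on the default port; the model name `llama3` is a placeholder:

```python
import json
import urllib.request

# Ollama's default local REST endpoint (assumes `ollama serve` is running)
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's /api/generate endpoint.

    stream=False asks for one complete JSON response instead of a
    stream of partial chunks.
    """
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode()
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


if __name__ == "__main__":
    req = build_request("llama3", "Why is the sky blue?")
    # Guarded so the sketch is safe to run without a server up
    try:
        with urllib.request.urlopen(req, timeout=60) as resp:
            print(json.loads(resp.read())["response"])
    except OSError:
        print("Ollama server not reachable on localhost:11434")
```

The request-building part works without a running server; the send is wrapped in a try/except so the script degrades gracefully when nothing is listening.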


Added to vault 2024-01-19. Updated 2024-02-01.