Ollama is a tool for running local LLMs. It exposes an interface to download and run different published LLMs from its library (which seems to be largely Hugging Face-backed).
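The basic workflow looks roughly like this (a quick sketch; the model name here is just an example, check the library for current ones):

    # download a model from the Ollama library
    ollama pull llama3.2
    # start an interactive chat with it
    ollama run llama3.2
    # list the models downloaded locally
    ollama list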

How do I use it?

Currently running on a Windows laptop that I otherwise use exclusively for gaming & VR stuff, since LLMs benefit from lots of memory and a powerful GPU.

Resources

This article gives a really good summary of how to choose models from the Ollama library:

https://www.toolify.ai/ai-news/exploring-the-ollamaai-library-a-guide-to-choosing-the-best-models-1573738

2024-12-18 ollama razerblade proxmox model storage

By default, on Linux, Ollama stores models either in ~/.ollama/models or /usr/share/ollama/.ollama/models, depending on how it's started (run directly as your user vs. run as the systemd service).

I did try replacing those folders with symlinks, but hit issues; instead I just configured /etc/systemd/system/ollama.service with an extra Environment="OLLAMA_MODELS=/mnt/models/ollama/models" line.

Also set OLLAMA_HOST=0.0.0.0:11434 so the API listens on all interfaces rather than just localhost.
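For reference, the relevant part of the service file ends up looking roughly like this (a sketch; the rest of the unit is whatever the installer generated):

    [Service]
    Environment="OLLAMA_MODELS=/mnt/models/ollama/models"
    Environment="OLLAMA_HOST=0.0.0.0:11434"

Followed by a reload and restart so the changes take effect:

    systemctl daemon-reload
    systemctl restart ollama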

Created a new 50 GB disk on the RazerBlade local storage volume, formatted it, and used ls /dev/disk/by-uuid to get the UUID to add to fstab.
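Roughly the following, assuming the new disk showed up as /dev/sdb with a single ext4 partition (which matches the later note about resizing the partition in fdisk), and /mnt/models as the mount point; double-check device names before formatting anything:

    # partition the disk (one partition spanning it), then format
    fdisk /dev/sdb          # n (new partition, accept defaults), w (write)
    mkfs.ext4 /dev/sdb1
    # find the filesystem UUID for fstab
    ls -l /dev/disk/by-uuid
    # fstab entry, using the UUID from above:
    # UUID=<uuid-from-above>  /mnt/models  ext4  defaults  0  2
    mkdir -p /mnt/models
    mount /mnt/models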

So models end up stored in /mnt/models/ollama/models, on the 50 GB sdb device.
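Quick sanity check that pulls are landing on the new disk:

    du -sh /mnt/models/ollama/models
    df -h /mnt/models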

2025-03-12 resize disks and update

Increased the sdb device to 60 GB in Proxmox. Still need to resize the partition in fdisk, then expand the filesystem. Main motivator is the Gemma 3 release, so the plan is probably: stop ollama, unmount the disk, resize the partition and filesystem, update ollama, and probably update open-webui too.
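The resize itself would look roughly like this, assuming ext4 on /dev/sdb1 (growpart, from cloud-guest-utils, stands in for the delete-and-recreate-at-the-same-start-sector dance in fdisk; same effect):

    systemctl stop ollama
    umount /mnt/models
    # grow partition 1 to fill the newly enlarged disk
    growpart /dev/sdb 1
    # a filesystem check is required before an offline resize
    e2fsck -f /dev/sdb1
    # grow the ext4 filesystem to fill the partition
    resize2fs /dev/sdb1
    mount /mnt/models
    systemctl start ollama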