Local LLMs with Ollama — Tuned For My Workstation
I've been spending a lot of evenings testing local models with Ollama on my home workstation. It's been a fun mix of trial and error: figuring out which models actually feel useful in real coding sessions rather than just on benchmark runs.
The goal is simple: keep a setup that feels fast, practical, and private when I need it. Local models now sit beside my cloud tools so I can choose whatever fits the task best in the moment.
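For a sense of what "local" looks like in practice, here is a minimal sketch of talking to a locally running Ollama server from Python. It assumes Ollama is serving on its default port (11434) and that some model has already been pulled; the model name `llama3.1` below is just a placeholder for whatever you actually have installed.

```python
import requests

# Ollama's server listens on localhost:11434 by default.
OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_local_model(prompt: str, model: str = "llama3.1") -> str:
    """Send a single prompt to a locally running Ollama model and return its reply."""
    payload = {
        "model": model,    # assumes this model was already fetched with `ollama pull`
        "prompt": prompt,
        "stream": False,   # request one complete JSON response instead of a token stream
    }
    response = requests.post(OLLAMA_URL, json=payload, timeout=120)
    response.raise_for_status()
    return response.json()["response"]

if __name__ == "__main__":
    print(ask_local_model("Explain what a context window is in one sentence."))
```

Nothing in that request leaves the machine; it all goes to a local port, which is exactly the property I want for the private-by-default cases.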