I stumbled upon Ollama while looking for ways to run large language models (LLMs) locally for research at work sometime last year, during the initial explosion of interest in ChatGPT.
Being a long-time Linux dabbler, my GPUs have almost always been from Team Red (AMD) unless