The daily life of a digital nomad: deploying a local AI large language model with Docker

Ollama and Open WebUI

My graphics card is an NVIDIA P106-100 with 6GB of VRAM, I have 64GB of system RAM, and my CPU is quite old. I am therefore looking both for a local large language model that suits this hardware and for a deployment that works within limited VRAM. I used Docker Compose to deploy Ollama and Open WebUI, and the…
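For reference, a minimal docker-compose.yml for this kind of setup might look like the sketch below. This is only an illustration, not the exact file from this post: the image tags, the host port 3000, and the volume names are assumptions based on the two projects' published defaults, and the GPU reservation assumes the NVIDIA Container Toolkit is installed on the host.

```yaml
services:
  ollama:
    image: ollama/ollama            # official Ollama image (tag is an assumption)
    volumes:
      - ollama:/root/.ollama        # persist downloaded models across restarts
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia        # requires the NVIDIA Container Toolkit on the host
              count: 1
              capabilities: [gpu]

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434   # Ollama's default API port
    ports:
      - "3000:8080"                 # Open WebUI listens on 8080 inside the container
    volumes:
      - open-webui:/app/backend/data
    depends_on:
      - ollama

volumes:
  ollama:
  open-webui:
```

With a file like this in place, `docker compose up -d` starts both services and Open WebUI becomes reachable at http://localhost:3000; given the 6GB card described above, a small quantized model is the realistic choice when pulling models through Ollama.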