Deploy Ollama using the official Docker image
Ollama can certainly be installed directly on your local machine. Follow this tutorial for installing Ollama on your machine.
However, if you do not want to install it locally, you can instead run the official Docker container published by Ollama:
https://hub.docker.com/r/ollama/ollama
Pull and run the Ollama Docker image. Here -d runs the container detached, -v mounts a named volume so downloaded models persist across restarts, and -p publishes the Ollama API on port 11434 -
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
Check that the container is running -
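A quick way to check is with docker ps; the filter below assumes the container was named ollama, as in the run command above:

```shell
# List running containers whose name matches "ollama"
docker ps --filter "name=ollama"
```

If the container appears in the output with a status of "Up", it is running.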
Run a model inside the Docker container -
docker exec -it ollama ollama run llama2
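Because port 11434 is published, you can also talk to the container from the host over Ollama's REST API instead of the interactive CLI. A minimal sketch using the /api/generate endpoint (the prompt is just an example):

```shell
# Send a one-off, non-streaming prompt to the llama2 model over the REST API
curl http://localhost:11434/api/generate \
  -d '{"model": "llama2", "prompt": "Why is the sky blue?", "stream": false}'
```

The response comes back as JSON, with the generated text in the "response" field.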
You can also open a shell inside the Docker container and run ollama commands directly -
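For example, docker exec gives you an interactive shell inside the container, where the ollama CLI behaves exactly as it would in a local install:

```shell
# Open an interactive shell inside the running container
docker exec -it ollama bash

# Inside the container, the usual ollama CLI commands are available, e.g.:
#   ollama pull mistral
#   ollama list
```

Type exit to leave the container shell when you are done.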
You can also call the deployed LLMs using Ollama's official Python library -
import ollama
ollama.pull("mistral")  # download the model into the running Ollama server
response = ollama.chat(model="mistral", messages=[{"role": "user", "content": "Hello!"}])
print(response["message"]["content"])
List all models deployed within the Ollama Docker container -
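You can run ollama list through docker exec without opening a shell first; again this assumes the container was named ollama:

```shell
# List every model currently stored inside the container
docker exec ollama ollama list
```

Each row shows the model name, size, and when it was last modified.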
Basically, whether you install Ollama locally on your machine or deploy it in a Docker container, there is no difference in how you access the deployed LLMs: the CLI, the Python library, and the API on port 11434 all work the same way.
Happy Learning!