# ollama-ui

## Notes

You can spawn Ollama first and then download the desired LLM models via `docker exec`. Alternatively, spawn the whole stack directly and download LLM models from within Open WebUI using a browser.

```bash
# spawn ollama and ui
docker compose up -d

# (optional) download an llm model via docker exec
docker exec ollama ollama run llama3:8b
```
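
If you would rather pre-download models without opening an interactive chat session, `ollama pull` and `ollama list` work through `docker exec` as well. A small sketch, assuming the container is named `ollama` as in the compose file:

```bash
# pull a model without starting a chat session
docker exec ollama ollama pull llama3:8b

# list the models available locally inside the container
docker exec ollama ollama list
```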

Afterwards, we can browse Open WebUI at http://127.0.0.1:8080 and register our first user account. You may want to disable open user registration later on by uncommenting the `ENABLE_SIGNUP` environment variable and restarting the Open WebUI container.
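
As a rough sketch of what that looks like once the variable is active, assuming the Open WebUI service is named `open-webui` (the variable is Open WebUI's own setting; the exact service name and layout depend on this repository's docker-compose.yml):

```yaml
services:
  open-webui:
    environment:
      - 'ENABLE_SIGNUP=false'   # block new account registrations after the first admin exists
```

Running `docker compose up -d` again recreates the container with the new environment.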

> [!TIP]
> You likely want to pass a GPU into the Ollama container. Please read this.
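
For NVIDIA GPUs, a common approach is a device reservation in the compose file. The following is only a sketch, assuming the NVIDIA Container Toolkit is installed on the host and the Ollama service is named `ollama` as above:

```yaml
services:
  ollama:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all            # or a specific number of GPUs
              capabilities: [gpu]
```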