
Integrating Ollama AI Models and Open WebUI with Docker: A Step-by-Step Guide

Integrating Ollama AI Models and Open WebUI on Docker

Introduction

Ollama is an open-source framework that provides a powerful way to run large language models (LLMs) locally and efficiently, while Open WebUI is a user-friendly interface that simplifies interaction with Ollama-hosted models. You can host both on your local machine. By using Docker, we can containerize these components, ensuring a seamless and reproducible setup across different environments. This guide walks you through integrating Ollama and Open WebUI with Docker.

Prerequisites

  • Windows 11 with WSL 2 enabled
  • Docker installed and running within WSL 2
  • Basic knowledge of Docker commands and YAML configuration
  • Basic understanding of how LLMs work, the Ollama framework, and Ollama model hosting

Setting Up Open WebUI

Open WebUI allows users to interact with AI models easily. We will configure Open WebUI within a Docker container to make deployment straightforward. You can install Open WebUI directly on WSL2 or run it inside a Docker container. For the second option, it is easiest to pull the ready-made image from the GitHub Container Registry and run it in the WSL environment.

Steps to Install Open WebUI:

  1. Pull the Open WebUI Docker image:
    docker pull ghcr.io/open-webui/open-webui:main
    
  2. Run the Open WebUI container (the UI listens on port 8080 inside the container):
    docker run -d --name open-webui -p 3000:8080 -v open-webui:/app/backend/data ghcr.io/open-webui/open-webui:main
    
  3. Access the interface at http://localhost:3000 in your browser. Initially you will not see any Ollama model, because Open WebUI connects to an Ollama server over its HTTP API rather than bundling models itself. You can download models through the WebUI frontend, but anything stored only inside the container is lost when you delete the container or image (depending on your setup). The more durable approach is to run Ollama separately, persist its models on the host, and connect Open WebUI to that Ollama instance, as described in the next section.
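As a quick sanity check that the container came up, you can probe the UI over HTTP. Below is a minimal Python sketch using only the standard library; the /health path is an assumption about Open WebUI's health endpoint, so adjust it if your version differs:

```python
import urllib.request

def webui_url(host="localhost", port=3000, path="/health"):
    """Build the Open WebUI URL; /health is an assumed default path."""
    return f"http://{host}:{port}{path}"

def is_up(url, timeout=3):
    """Return True if the endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # connection refused, timeout, DNS failure, ...
        return False

if __name__ == "__main__":
    # Requires the open-webui container from the steps above to be running.
    print("Open WebUI reachable:", is_up(webui_url()))
```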

Configuring Docker for Ollama and Open WebUI

There are two ways to run Ollama for Open WebUI to connect to.

Method 1: Configuring Ollama on WSL2

Ollama can be installed directly on WSL2, allowing seamless integration with Open WebUI running in a Docker container. This method generally offers better performance than running everything inside Docker.

Steps to Install Ollama on WSL2:

  1. Update your system and install required dependencies:
    sudo apt update && sudo apt install -y curl
    
  2. Download and install Ollama:
    curl -fsSL https://ollama.com/install.sh | sh
    
  3. Verify the installation:
    ollama --version
    
  4. Start Ollama in the background:
    ollama serve &
    
  5. Download a model (ollama pull fetches it without starting an interactive session):
    ollama pull openthinker
    
  6. Run the Open WebUI container and point it at the Ollama server running on WSL2:
    docker run -d --name open-webui -p 3000:8080 --add-host=host.docker.internal:host-gateway -e OLLAMA_BASE_URL=http://host.docker.internal:11434 ghcr.io/open-webui/open-webui:main
    
  7. Open WebUI should now detect the models served by the Ollama instance on WSL2.
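To confirm the model actually responds, you can also talk to Ollama directly through its REST API (POST /api/generate). A minimal sketch using only the standard library; the prompt text is just an example:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model="openthinker", prompt="Say hello in one sentence."):
    """Assemble the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(payload, url=OLLAMA_URL):
    """POST the payload to Ollama and return the model's response text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    try:
        print(generate(build_request()))
    except OSError as err:  # Ollama not running / unreachable
        print("Ollama not reachable:", err)
```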

Method 2: Configuring Ollama with Docker

To streamline deployment, we will set up a docker-compose.yml file that runs Ollama alongside Open WebUI. Docker Compose is useful when running multiple containers simultaneously that must talk to each other to serve a common application.

Create a docker-compose.yml File:

version: "3.8"

services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    ports:
      - "11434:11434"
    volumes:
      - ./ollama:/root/.ollama
    restart: unless-stopped

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    ports:
      - "3000:8080"
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama
    volumes:
      - ./open-webui:/app/backend/data
    restart: unless-stopped

Running the Containers:

  1. Navigate to the directory where docker-compose.yml is saved.
  2. Run the following command to start both containers:
    docker compose up -d
    
  3. Pull a model into the running Ollama container:
    docker exec ollama ollama pull openthinker
    
  4. Open WebUI will be available at http://localhost:3000, Ollama will serve its API on port 11434, and the openthinker model will appear in Open WebUI once the pull completes.
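To verify the two containers are wired together, you can list the models the Ollama service exposes through its /api/tags endpoint. A small sketch, assuming the ports published in the compose file:

```python
import json
import urllib.request

TAGS_URL = "http://localhost:11434/api/tags"  # Ollama's model-listing endpoint

def model_names(tags_json):
    """Extract model names from an /api/tags response body."""
    return [m["name"] for m in tags_json.get("models", [])]

def list_models(url=TAGS_URL, timeout=5):
    """Fetch and decode the list of locally available models."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return model_names(json.load(resp))

if __name__ == "__main__":
    try:
        print(list_models())
    except OSError as err:  # compose stack not running
        print("Ollama not reachable:", err)
```

If the pulled model shows up here but not in the WebUI, check the OLLAMA_BASE_URL setting of the open-webui service.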


Conclusion

By using Docker, we can efficiently integrate Ollama AI models with Open WebUI. This setup ensures easy deployment, scalability, and consistent performance. Future enhancements can include setting up persistent storage for models and optimizing resource allocation.

Dr. Hari Thapliyal