Set Up Your Own Local ChatGPT with OpenWebUI and Ollama


This tutorial walks you through setting up your own local ChatGPT-style LLM with OpenWebUI and Ollama.

Setting Up Your Own OpenWebUI and Ollama

Artificial Intelligence (AI) is transforming how we interact with technology, and setting up your own local AI environment can be both empowering and exciting. In this blog post, we’ll walk you through the process of installing and configuring OpenWebUI and Ollama on your machine. Whether you're a developer, a tech enthusiast, or simply curious about AI, this guide will help you get started with a powerful, self-hosted AI platform in no time.


What Are OpenWebUI and Ollama?

Before diving into the setup, let’s briefly introduce the tools we’ll be working with:

  • OpenWebUI: A feature-rich, self-hosted web interface designed to interact with large language models (LLMs). It’s user-friendly, extensible, and works offline, supporting various LLM runners like Ollama and OpenAI-compatible APIs.
  • Ollama: A tool that simplifies running LLMs locally on your machine. It allows you to download, manage, and use models seamlessly, and it integrates perfectly with OpenWebUI.

Together, these tools provide a private, customizable AI experience right on your computer.
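
To get a feel for Ollama on its own: with a standalone install (outside Docker), downloading and chatting with a model takes a single command. As a quick illustration, using the llama3.1 model:

ollama run llama3.1    # downloads the model on first use, then opens an interactive chat

In this guide, though, we'll run Ollama inside Docker alongside OpenWebUI, so no separate install is needed.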


Prerequisites

To follow this guide, you’ll need the following:

  • Docker: The easiest way to set up OpenWebUI and Ollama is by using Docker. If you don’t have it installed, download and install Docker Desktop from docker.com. This works on Windows, macOS, and Linux.
  • Basic command-line knowledge: We’ll use the terminal to run a few commands, so familiarity with your system’s command-line interface is helpful.
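
Before moving on, it's worth confirming that Docker is installed and its daemon is running. These two commands are a quick sanity check:

docker --version    # prints the installed Docker version
docker info         # fails with an error if the Docker daemon isn't running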

Step 1: Install OpenWebUI with Ollama Using Docker

The simplest and most efficient way to get started is by using Docker, which bundles OpenWebUI and Ollama into a single container. Follow these steps:

  1. Open your terminal:
    • On Windows: Use PowerShell or Command Prompt.
    • On macOS/Linux: Use the Terminal app.
  2. Run the Docker command:
    Depending on your hardware, choose one of the following commands:
    • With GPU support (for faster performance if you have a compatible GPU):
docker run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
    • CPU only (if you don’t have a GPU or prefer not to use it):
docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
    What these commands do:
    • -d: Runs the container in the background (detached mode).
    • -p 3000:8080: Maps port 3000 on your machine to port 8080 in the container.
    • --gpus=all: Enables GPU support (omit for CPU-only).
    • -v: Creates persistent storage volumes for Ollama models and OpenWebUI data.
    • --name open-webui: Names the container for easy management.
    • --restart always: Ensures the container restarts automatically if your system reboots.
  3. Wait for the setup:
    Docker will download the necessary images and start the container. This might take a few minutes, depending on your internet speed.
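
Once the command returns, you can confirm the container is up and follow its startup logs from the same terminal:

docker ps --filter name=open-webui    # the container should show a status of "Up"
docker logs -f open-webui             # stream the startup logs (press Ctrl+C to stop following)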

Step 2: Access OpenWebUI

Once the container is running, you can access OpenWebUI in your web browser:

  1. Open your browser: Navigate to http://localhost:3000.
  2. Create an account: On your first visit, you’ll be prompted to set up an account. Follow the instructions to create your login credentials.
  3. Log in: Use your account details to access the OpenWebUI dashboard.

You should now see a clean, intuitive interface ready for AI interaction!
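
If the page doesn't load, a quick way to check whether anything is answering on port 3000 is a HEAD request from the terminal; any HTTP response confirms the port mapping is working:

curl -I http://localhost:3000    # expect an HTTP status line if OpenWebUI is reachable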


Step 3: Select Models and Start Using OpenWebUI

With OpenWebUI up and running, it’s time to explore its capabilities:

  1. Choose a model:
    • At the top of the interface, you’ll find a dropdown menu to select models.
    • Ollama supports a wide range of models, such as Llama 3.1 and Mistral. Pick one that suits your needs; if the dropdown is empty, you'll need to download a model first (see the command-line example after this list).
  2. Start interacting:
    • Type a prompt in the chat box, like “Tell me a joke” or “Explain how recursion works in Python.”
    • The model will respond, and you can continue the conversation with follow-up questions.
  3. Explore features:
    • Contextual conversation history: OpenWebUI keeps track of your chat, making it easy to reference earlier responses.
    • Code formatting: Perfect for developers—code snippets are displayed with proper syntax highlighting.
    • Model switching: Experiment with different models without restarting the app.
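
If no models appear in the dropdown yet, one way to download them is from the terminal. The bundled image includes the Ollama CLI, so you can pull a model into the running container (this assumes the container name open-webui from Step 1; llama3.1 is just an example model tag):

docker exec -it open-webui ollama pull llama3.1    # download the Llama 3.1 model into the container
docker exec -it open-webui ollama list             # list the models now available to OpenWebUI

Refresh the browser page afterwards and the new model should appear in the dropdown.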

Additional Tips

  • Managing models:
    • Want more models? Go to Admin Settings > Connections > Ollama > Manage in OpenWebUI to download additional models or tweak settings.
  • Alternative installation methods:
    • Don’t want to use Docker? OpenWebUI can also be installed with Podman or directly via Python’s pip, but these methods require more manual setup. Check the official OpenWebUI documentation for details.
  • Keeping it updated:
    • To automatically update your Docker container, consider using Watchtower. Run this command:
docker run -d --name watchtower --restart unless-stopped -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --interval 300 open-webui
    • This checks for updates every 5 minutes (300 seconds) and applies them seamlessly.
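
If you'd rather update manually instead of running Watchtower, the standard Docker pattern is to pull the new image and recreate the container. Your models and chat history are safe because they live in the named volumes:

docker pull ghcr.io/open-webui/open-webui:ollama    # fetch the latest image
docker stop open-webui                              # stop the old container
docker rm open-webui                                # remove it (the named volumes are kept)

Then re-run the docker run command from Step 1, and the existing ollama and open-webui volumes will be reattached automatically.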

Conclusion

Congratulations! You’ve just set up your own OpenWebUI and Ollama environment. With this powerful duo, you can explore AI locally, experiment with different models, and even build your own projects—all without relying on cloud services. It’s a fantastic way to dive into the world of AI while keeping full control over your data and setup.

Give it a try today, and let us know in the comments how it went or what cool things you’ve done with it. Happy AI exploring!