Artificial Intelligence (AI) is revolutionizing how we interact with technology, and conversational AI models such as ChatGPT and DeepSeek have led that change.

DeepSeek Artificial Intelligence Co., Ltd. (known as “DeepSeek”) is a Chinese company founded in 2023 with the mission of making Artificial General Intelligence (AGI) a reality. AGI refers to highly autonomous systems that can outperform humans at most economically valuable work. DeepSeek focuses on advancing AI technologies, including natural language processing, machine learning, and other AI-driven solutions.

DeepSeek is an advanced open-source AI model designed for understanding and generating language. With DeepSeek-R1 (here, the 1.5B model), organizations can use local AI to build secure, efficient, and customized solutions without relying on external cloud APIs.

This is particularly valuable for enterprises looking to:

  • Keep data on-premises for compliance and security.
  • Fine-tune AI models using proprietary data.
  • Reduce operational costs by using self-hosted infrastructure.

DeepSeek can run on your own machine using Ollama and be managed through Open WebUI, which provides an accessible web-based interface.

In this article, we will create isolated Docker containers to run DeepSeek locally using Ollama and Open WebUI. Let’s get started.

Prerequisites:

  • Docker: Ensure Docker is installed on your system.
  • Ollama: We’ll use Ollama to manage and run the DeepSeek-R1 model.
  • Open WebUI: We’ll use Open WebUI as a web-based interface for interacting with the model.

You can install Docker Desktop from https://www.docker.com/products/docker-desktop/.

Ensure that Docker is installed:

docker --version

Also check that Docker Desktop is running.
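You can also confirm from the command line that the Docker daemon is reachable; the following command returns an error if it is not running:

docker info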

Deploying DeepSeek-R1 with Ollama & Open WebUI on Docker

The steps to run DeepSeek-R1 inside a Docker container with Ollama and Open WebUI are given below:

Step 1: Pull the Required Docker Images

First, we need to download the Docker images for Ollama (for running AI models) and Open WebUI (for easy access).

Run the following commands:

docker pull ollama/ollama:latest
docker pull ghcr.io/open-webui/open-webui:main
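Optionally, confirm that both images were downloaded:

docker images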

Step 2: Start the Ollama Container

Ollama is the core system that will manage and execute the DeepSeek model. Let’s start it in detached mode:

docker run -d --name ollama --restart unless-stopped -p 11434:11434 ollama/ollama:latest
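Note that models downloaded inside this container are lost if the container is removed. If you want them to persist, you can optionally mount a named volume; the official image stores models under /root/.ollama:

docker run -d --name ollama --restart unless-stopped -p 11434:11434 -v ollama:/root/.ollama ollama/ollama:latest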

Check if it’s running:

docker ps

You should see the ollama container listed in the output, similar to the illustrative example below (your container ID will differ):
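CONTAINER ID   IMAGE                  COMMAND               CREATED          STATUS          PORTS                      NAMES
a1b2c3d4e5f6   ollama/ollama:latest   "/bin/ollama serve"   30 seconds ago   Up 29 seconds   0.0.0.0:11434->11434/tcp   ollama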

Step 3: Access the Ollama Container and Download DeepSeek Model

Before downloading the DeepSeek-R1 model, we need to access the Ollama container to ensure we are inside the environment where the model will be installed.

Step 3.1: Access the Ollama Container

Run the following command to enter the running Ollama container:

docker exec -it ollama bash

Once inside the container, we should see a shell prompt like this:

root@<container-id>:/#

Step 3.2: Download the DeepSeek-R1 Model

Now, within the container, we can download the DeepSeek-R1 model.

To learn more about the deepseek-r1 models, see https://www.ollama.com/library/deepseek-r1.

We will use the deepseek-r1:1.5b model in this article; it is the smallest and most lightweight variant and can run on a basic CPU with minimal RAM.

ollama pull deepseek-r1:1.5b
ollama run deepseek-r1:1.5b

Now, we can write some prompts to check the responses.
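For example (this exchange is purely illustrative; the model’s actual wording will differ):

>>> What is Docker?
Docker is a platform that packages applications and their dependencies into portable containers so they run consistently across environments.

You can type /bye to exit the interactive session.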

This confirms that deepseek-r1 is installed and responding to prompts.

Additionally, you can verify the installed models with the command below:

ollama list
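The output should look roughly like the following (the ID and timestamp will differ on your machine):

NAME                ID              SIZE      MODIFIED
deepseek-r1:1.5b    a42b25d8c10a    1.1 GB    2 minutes ago

When you are done, type exit to leave the container shell before moving on to the next step.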

Step 4: Deploy Open WebUI for Easy Access

Now, let’s set up Open WebUI, which provides a web-based interface for interacting with our DeepSeek AI.

Run this command:

docker run -d --name open-webui -p 3100:8080 -e OLLAMA_API_BASE_URL=http://host.docker.internal:11434 --restart always ghcr.io/open-webui/open-webui:main
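Here, -p 3100:8080 publishes Open WebUI’s internal port 8080 on host port 3100, and OLLAMA_API_BASE_URL tells Open WebUI where to reach the Ollama API. Note that host.docker.internal resolves to the host machine out of the box on Docker Desktop; on a plain Linux host you may need to add --add-host=host.docker.internal:host-gateway to the command above.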

Now open http://localhost:3100 in your browser.

On first launch, Open WebUI asks you to create an account before you can sign in.

Now our own DeepSeek is ready to use. You can enter prompts and get answers.

Here is a sample result.

Congratulations! We now have our own private AI running locally!

Why Use DeepSeek-R1 for Enterprise AI?

DeepSeek-R1 is highly beneficial for enterprises because:

  • Self-hosted & secure: No reliance on external APIs, keeping sensitive data in-house.
  • Customizable & trainable: Can be fine-tuned with local business data.
  • Cost-effective: Eliminates cloud AI API costs.
  • Versatile use cases: Chatbots, document summarization, coding assistance, and more.

Conclusion

In summary, combining Ollama with Open WebUI offers a strong and adaptable solution for businesses that want to run AI-based tools in-house while keeping control over their data and operations. By setting up a local environment with Docker containers, we ensure a smooth deployment process and can fine-tune models on company-specific data. Open WebUI, with its easy-to-use interface, makes it simple to interact with AI models, providing useful insights and increasing productivity. This setup is scalable, secure, and suitable for both internal projects and customer-facing applications, and because everything runs locally, enterprises can adhere to data privacy policies while tailoring the system to their unique requirements. Furthermore, with Ollama’s robust API, businesses can easily integrate and extend their AI capabilities.
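As a minimal illustration of that API, the request below (run from the host, using the container we started earlier on port 11434) asks the model for a single non-streamed completion; the prompt text is just an example:

curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:1.5b",
  "prompt": "Summarize Docker in one sentence.",
  "stream": false
}'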

By following the setup process described in this article, organizations can unlock the full potential of AI, enhance operational efficiency, and stay competitive in today’s fast-paced technological landscape.