The Ultimate Guide to Docker on CentOS


Containerization has revolutionized the way we develop, deploy, and manage applications. At the forefront of this revolution is Docker, an open-source platform that automates the deployment of applications inside lightweight, portable containers. CentOS, a popular, free, enterprise-class Linux distribution derived from Red Hat Enterprise Linux (RHEL), provides a stable and secure foundation for running these containers.

Combining Docker’s flexibility with CentOS’s stability creates a powerful environment for modern application workloads. This guide aims to be your comprehensive resource for understanding, installing, configuring, and effectively using Docker on CentOS (including CentOS 7, CentOS Stream 8, and CentOS Stream 9). We’ll cover everything from the fundamental concepts to advanced usage patterns, security considerations, and troubleshooting tips.

Table of Contents:

  1. Introduction to Docker and Containerization
    • What is Containerization?
    • Virtual Machines vs. Containers
    • What is Docker?
    • Why Use Docker on CentOS?
  2. Prerequisites for Installing Docker on CentOS
    • Supported CentOS Versions
    • System Requirements
    • User Privileges
    • Network Configuration
  3. Understanding Docker Core Concepts
    • Docker Engine
    • Images
    • Containers
    • Dockerfile
    • Registries (Docker Hub)
    • Volumes
    • Networking
  4. Installing Docker Engine on CentOS
    • Uninstalling Old Versions
    • Setting up the Docker Repository (Recommended Method)
    • Installing Docker Engine, CLI, and Containerd
    • Starting and Enabling the Docker Service
    • Post-Installation Steps (Managing Docker as a non-root user)
    • Verifying the Installation
  5. Essential Docker Commands for Daily Use
    • Working with Images (pull, images, rmi, inspect, tag)
    • Running Containers (run, basic options like -d, -it, --name, -p)
    • Managing Containers (ps, stop, start, restart, rm, logs, exec, inspect)
    • Cleaning Up Resources (system prune)
  6. Building Docker Images with Dockerfile
    • Dockerfile Basics and Syntax (FROM, RUN, COPY, ADD, WORKDIR, EXPOSE, CMD, ENTRYPOINT, USER, ARG, ENV)
    • Example: Creating a Simple Nginx Image
    • Building an Image (docker build)
    • Tagging Images
  7. Managing Data in Docker Containers
    • The Ephemeral Nature of Containers
    • Docker Volumes (Recommended)
      • Creating and Managing Volumes (docker volume create/ls/inspect/rm)
      • Using Volumes with Containers
    • Bind Mounts
      • Using Bind Mounts
      • Use Cases and Caveats (Permissions, SELinux)
    • tmpfs Mounts
  8. Docker Networking on CentOS
    • Default Bridge Network
    • Inspecting Networks (docker network ls/inspect)
    • Container Communication on the Default Bridge
    • Exposing Container Ports (-p, -P)
    • User-Defined Bridge Networks (Recommended for multi-container apps)
    • Host Network
    • Overlay Networks (for Swarm)
    • CentOS Firewall (firewalld) Interaction
  9. Orchestrating Multi-Container Applications with Docker Compose
    • What is Docker Compose?
    • Installing Docker Compose
    • The docker-compose.yml File Structure (version, services, volumes, networks)
    • Example: Web Application with a Database
    • Common Docker Compose Commands (up, down, ps, logs, exec, build)
  10. Dockerfile Best Practices for Efficient and Secure Images
    • Use Official Base Images
    • Use Specific Image Tags
    • Minimize Layers (Combine RUN commands)
    • Leverage Build Cache Effectively
    • Use .dockerignore
    • Keep Images Small (Multi-stage builds)
    • Run Containers as Non-Root User (USER instruction)
    • Scan Images for Vulnerabilities
    • Parameterize with ARG and ENV Appropriately
  11. Security Considerations for Docker on CentOS
    • Kernel Namespaces and Cgroups
    • Running Docker Daemon as Root (Security Implications)
    • Running Containers as Non-Root
    • SELinux Integration with Docker on CentOS
      • Understanding SELinux Contexts
      • Volume Labeling (:z, :Z)
      • Troubleshooting SELinux Issues
    • Limiting Container Resources (--memory, --cpus, ulimits)
    • Image Security Scanning (e.g., Trivy, Clair, Docker Scout)
    • Managing Secrets
    • Network Security
  12. Advanced Topics (Brief Overview)
    • Docker Swarm (Native Orchestration)
    • Kubernetes (Alternative Orchestration)
    • Private Docker Registries
    • Docker BuildKit
  13. Troubleshooting Common Docker Issues on CentOS
    • Docker Service Fails to Start
    • Permission Denied Errors (Docker Socket)
    • Cannot Connect to Docker Daemon
    • Network Connectivity Problems (Firewall, Port Conflicts)
    • Volume Permission Issues (SELinux, User Mismatches)
    • “No space left on device” Errors
    • Image Pull Issues
  14. Conclusion

1. Introduction to Docker and Containerization

Before diving into the specifics of Docker on CentOS, let’s establish a foundational understanding of containerization and Docker itself.

What is Containerization?

Containerization is a lightweight form of operating system virtualization. It allows you to package an application along with its dependencies (libraries, configuration files, binaries) into a standardized unit called a container. Unlike traditional virtual machines (VMs), containers share the host operating system’s kernel. This makes them significantly more lightweight, faster to start, and more resource-efficient.

Virtual Machines vs. Containers

  • Virtual Machines (VMs): Each VM includes a full copy of an operating system, the application, necessary binaries, and libraries – taking up tens of GBs. VMs run on a hypervisor, which manages hardware access. They provide strong isolation but are resource-heavy and slow to boot.
  • Containers: Containers package the application and its dependencies but share the host OS kernel. They run as isolated processes in user space. Containers typically measure in tens or hundreds of MBs, start almost instantly, and consume fewer resources (CPU, RAM). Isolation is generally very good but relies on kernel features like namespaces and cgroups, making it less absolute than hypervisor-based isolation.

What is Docker?

Docker is the leading platform for developing, shipping, and running applications using container technology. It provides:

  • Docker Engine: A client-server application comprising a daemon process (the server), a REST API, and a command-line interface (CLI) client. The daemon does the heavy lifting of building, running, and managing containers.
  • Docker Images: Read-only templates used to create containers. Images contain the application code, runtime, libraries, environment variables, and configuration files.
  • Docker Containers: Runnable instances of Docker images. They are isolated environments where applications execute.
  • Docker Hub/Registries: Cloud-based or self-hosted repositories for storing and distributing Docker images.

Why Use Docker on CentOS?

CentOS is renowned for its stability, security, and long-term support, making it a favorite in enterprise environments. Running Docker on CentOS offers several advantages:

  • Stability: CentOS provides a reliable and predictable base OS, crucial for running production container workloads.
  • Security: CentOS benefits from RHEL’s security features and timely updates. Features like SELinux (Security-Enhanced Linux) can be integrated with Docker for enhanced container security.
  • Ecosystem: A large community and extensive documentation support both CentOS and Docker.
  • Consistency: Docker ensures that applications run the same way regardless of the underlying CentOS environment (development, staging, production), eliminating “it works on my machine” problems.
  • Resource Efficiency: Maximize hardware utilization by running multiple isolated containers on a single CentOS host.
  • Rapid Deployment: Quickly deploy and scale applications using pre-built Docker images.

2. Prerequisites for Installing Docker on CentOS

Before you install Docker Engine, ensure your CentOS system meets the following requirements:

Supported CentOS Versions

Docker officially supports the following versions (check the Docker documentation for the most up-to-date list):

  • CentOS 7 (requires centos-extras repository enabled, which is default)
  • CentOS Stream 8
  • CentOS Stream 9

Note: While older versions or other RHEL derivatives (like AlmaLinux, Rocky Linux) often work, using officially supported versions ensures compatibility and access to support.

System Requirements

  • Architecture: Docker Engine on CentOS requires a 64-bit installation (x86_64 or amd64).
  • Kernel: A Linux kernel version 3.10 or higher is required. CentOS 7 and later versions meet this requirement.
  • RAM: At least 2GB of RAM is recommended for running Docker and a few containers. More complex workloads will require more memory.
  • Storage: Sufficient disk space for the Docker Engine, images, and container data. The overlay2 storage driver is recommended and usually the default on supported CentOS versions.
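The architecture and kernel requirements above can be checked from a shell before installing. A small sketch (the version comparison is a rough illustration, not an official installer check):

```shell
#!/usr/bin/env bash
# Pre-flight check: confirm a 64-bit architecture and a kernel >= 3.10.
arch=$(uname -m)
kernel=$(uname -r)
echo "Architecture: $arch"
echo "Kernel:       $kernel"

major=${kernel%%.*}            # e.g. "3" from "3.10.0-1160.el7.x86_64"
rest=${kernel#*.}
minor=${rest%%.*}              # e.g. "10"
if [ "$major" -gt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -ge 10 ]; }; then
    echo "Kernel version OK for Docker"
else
    echo "Kernel too old for Docker (need 3.10+)"
fi
```

On any supported CentOS release the check should report a 64-bit architecture and a sufficiently new kernel.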

User Privileges

You need a user account with sudo privileges to install Docker, manage the Docker service, and perform certain administrative tasks. While you can run every Docker command with sudo (or as root), it’s common to allow a trusted non-root user to manage Docker without sudo (covered in the Post-Installation section).

Network Configuration

  • Internet Access: Required to download Docker packages and pull images from registries like Docker Hub.
  • Firewall: CentOS systems often use firewalld. You might need to configure firewall rules to allow access to container ports from external networks.

3. Understanding Docker Core Concepts

Grasping these fundamental concepts is key to using Docker effectively:

Docker Engine

The core of Docker. It’s a background service (daemon) running on your CentOS host that manages containers, images, networks, and volumes. You interact with the Docker Engine primarily through the Docker CLI client (docker) or via its REST API.

Images

Read-only templates containing instructions for creating a Docker container. An image includes a filesystem snapshot (application code, libraries, dependencies) and metadata (like default command to run). Images are built from a set of instructions defined in a Dockerfile or pulled from a registry. Images are layered; each instruction in a Dockerfile typically creates a new layer.

Containers

Runnable, isolated instances created from Docker images. When you run an image, you create a container. Each container has its own filesystem (based on the image layers plus a writable top layer), process space, and network interface. Containers are ephemeral by default – changes made inside a container are lost when it’s removed unless persisted using volumes or bind mounts.

Dockerfile

A text file containing a series of instructions that Docker uses to build an image automatically. It defines the base image, commands to install software, files to copy, ports to expose, and the default command to execute when a container starts.

Registries (Docker Hub)

A service for storing and distributing Docker images. Docker Hub is the default public registry, hosting a vast collection of official and community images. You can also set up private registries to store proprietary images.

  • Repository: A collection of related Docker images, usually different versions of the same application (e.g., nginx, which contains tags like latest, 1.25, stable-alpine).

Volumes

The preferred mechanism for persisting data generated by and used by Docker containers. Volumes are managed by Docker (/var/lib/docker/volumes/ on the host) and are independent of the container lifecycle. They offer better performance and easier management than bind mounts for most use cases.

Networking

Docker provides various networking options to enable communication between containers and between containers and the outside world. Key types include:

  • Bridge: The default network. Containers on the same bridge network can communicate with each other using IP addresses (or container names if using user-defined bridges or Compose).
  • Host: Removes network isolation between the container and the host. The container shares the host’s network stack.
  • Overlay: Enables communication between containers running on different Docker hosts (used in Swarm mode).
  • None: Disables networking for the container.

4. Installing Docker Engine on CentOS

The recommended method for installing Docker Engine on CentOS is using Docker’s official package repository. This ensures you get the latest stable version and updates.

Step 1: Uninstall Old Versions

If you have older versions of Docker (often named docker, docker-engine, docker.io, docker-ce, etc.) or associated dependencies installed from the default CentOS repositories, uninstall them first:

```bash
sudo yum remove docker \
    docker-client \
    docker-client-latest \
    docker-common \
    docker-latest \
    docker-latest-logrotate \
    docker-logrotate \
    docker-engine \
    docker-ce \
    docker-ce-cli \
    containerd.io \
    docker-compose-plugin

# For CentOS 8/9 Stream, use dnf instead of yum:
# sudo dnf remove … (same packages)
```

It’s okay if yum or dnf reports that none of these packages are installed. Existing Docker data under /var/lib/docker/ (images, containers, volumes) is usually preserved.

Step 2: Set up the Docker Repository

Install the yum-utils (CentOS 7) or dnf-plugins-core (CentOS 8/9) package, which provides the yum-config-manager or dnf config-manager utility. Then, add the official Docker repository.

For CentOS 7:

```bash
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
```

For CentOS 8/9 Stream:

```bash
sudo dnf install -y dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
```

Step 3: Install Docker Engine, CLI, and Containerd

Install the latest version of Docker Engine, the containerd runtime, and the Docker CLI client:

```bash
# For CentOS 7
sudo yum update -y   # Optional, but recommended
sudo yum install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# For CentOS 8/9 Stream
sudo dnf update -y   # Optional, but recommended
sudo dnf install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
```

  • docker-ce: The Docker Engine (daemon).
  • docker-ce-cli: The Docker command-line interface.
  • containerd.io: The underlying container runtime.
  • docker-buildx-plugin: Enables advanced build features with BuildKit.
  • docker-compose-plugin: Integrates Docker Compose functionality directly into the Docker CLI (using docker compose ...).

Note: If you need a specific version, list available versions first (yum list docker-ce --showduplicates | sort -r or dnf list docker-ce --showduplicates | sort -r) and then install using the version string (e.g., sudo yum install docker-ce-<VERSION_STRING> docker-ce-cli-<VERSION_STRING> ...).

Step 4: Start and Enable the Docker Service

Once the installation is complete, start the Docker daemon and enable it to start automatically on system boot:

```bash
sudo systemctl start docker
sudo systemctl enable docker
```

Check the status to ensure it’s running:

```bash
sudo systemctl status docker
```

You should see output indicating the service is active (running).

Step 5: Post-Installation Steps (Manage Docker as a non-root user)

By default, the Docker daemon binds to a Unix socket owned by the root user, and other users can only access it using sudo. To allow your regular user to run Docker commands without sudo:

  1. Create the docker group (if it doesn’t exist):
    ```bash
    sudo groupadd docker
    ```

    (This group is usually created during Docker installation, but it’s good practice to ensure it exists).

  2. Add your user to the docker group:
    Replace your_user with your actual username.
    ```bash
    sudo usermod -aG docker your_user
    ```

  3. Activate the group changes:
    You need to either log out and log back in for the group membership changes to take effect, or run the following command (which starts a new shell with the updated group):
    ```bash
    newgrp docker
    ```

    Logging out and back in is generally the more reliable method.
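To verify that the membership took effect in your current session, a quick check (purely illustrative):

```shell
# Print whether the current session's groups include 'docker'.
# Group membership is read at login, so run this after logging back in.
if id -nG | tr ' ' '\n' | grep -qx docker; then
    echo "in docker group"
else
    echo "not in docker group"
fi
```

If it still reports "not in docker group" after you log back in, double-check the usermod step above.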

Security Warning: Adding a user to the docker group grants them privileges equivalent to the root user. This is because they can use Docker to run containers that can interact with the host system in privileged ways (e.g., mounting host directories). Only add trusted users to this group. For details, refer to the Docker documentation on daemon attack surface.

Step 6: Verifying the Installation

Run the classic hello-world image to confirm Docker Engine is installed and running correctly:

```bash
docker run hello-world
```

If successful, Docker will:
1. Check if the hello-world:latest image exists locally.
2. If not, pull the image from Docker Hub.
3. Create and run a new container from the image.
4. The container will print a confirmation message and exit.

You should see output similar to:

```
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
[…]
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.
[…]
```

Congratulations! Docker is now installed and configured on your CentOS system.

5. Essential Docker Commands for Daily Use

Here are some fundamental commands you’ll use frequently:

Working with Images

  • docker pull <image_name>[:<tag>]: Downloads an image from a registry (Docker Hub by default).
    ```bash
    docker pull nginx:latest # Pulls the latest Nginx image
    docker pull ubuntu:22.04 # Pulls Ubuntu version 22.04
    ```
  • docker images or docker image ls: Lists images stored locally on your machine.
    ```bash
    docker images
    ```
  • docker rmi <image_id_or_name>[:<tag>] or docker image rm ...: Removes one or more local images. You usually need to remove containers based on the image first. Use -f to force removal (use with caution).
    ```bash
    docker rmi nginx:latest
    docker rmi fca2e753a3d1 # Remove by Image ID
    ```
  • docker image inspect <image_id_or_name>: Displays detailed information about an image in JSON format (layers, creation date, environment variables, etc.).
    ```bash
    docker image inspect ubuntu:22.04
    ```
  • docker tag <source_image>[:<tag>] <target_image>[:<tag>]: Creates a new tag that refers to an existing image. Useful for renaming or pushing to different repositories/tags.
    ```bash
    docker tag myapp:latest myusername/myapp:v1.0
    ```

Running Containers

  • docker run [OPTIONS] <image_name>[:<tag>] [COMMAND] [ARG...]: Creates and starts a new container from an image.

    • -d or --detach: Runs the container in the background (detached mode).
    • -it: Runs the container in interactive mode with a pseudo-TTY, allowing you to interact with the container’s shell.
    • --name <container_name>: Assigns a specific name to the container. If omitted, Docker assigns a random name.
    • -p <host_port>:<container_port>: Maps a port on the host to a port inside the container (e.g., -p 8080:80).
    • --rm: Automatically removes the container when it exits. Useful for short-lived tasks.
    • -v <host_path_or_volume_name>:<container_path>: Mounts a volume or bind mount into the container.
    • -e <VAR_NAME>=<value>: Sets an environment variable inside the container.

    Examples:
    ```bash
    # Run Nginx in the background, mapping host port 8080 to container port 80
    docker run -d --name my-nginx -p 8080:80 nginx:latest

    # Run an Ubuntu container interactively and get a bash shell
    docker run -it --rm ubuntu:22.04 bash
    ```

Managing Containers

  • docker ps or docker container ls: Lists currently running containers.
    • -a or --all: Shows all containers (running and stopped).
      ```bash
      docker ps -a
      ```
  • docker stop <container_id_or_name>: Gracefully stops one or more running containers (sends SIGTERM, then SIGKILL after a timeout).
    ```bash
    docker stop my-nginx
    ```
  • docker start <container_id_or_name>: Starts one or more stopped containers.
    ```bash
    docker start my-nginx
    ```
  • docker restart <container_id_or_name>: Stops and then starts a container.
    ```bash
    docker restart my-nginx
    ```
  • docker rm <container_id_or_name>: Removes one or more stopped containers. Use -f to force removal of a running container (use with caution).
    ```bash
    docker rm my-stopped-container
    docker rm -f my-nginx # Force remove running container
    ```
  • docker logs <container_id_or_name>: Fetches the logs (stdout/stderr) of a container.
    • -f or --follow: Follows the log output in real-time.
    • --tail <number>: Shows the last N lines.
      ```bash
      docker logs my-nginx
      docker logs -f my-nginx
      ```
  • docker exec [OPTIONS] <container_id_or_name> <command>: Executes a command inside a running container.

    • -it: For interactive commands (like opening a shell).
      ```bash
      # Get a bash shell inside the running 'my-nginx' container
      docker exec -it my-nginx bash

      # List files in /etc inside the container without attaching
      docker exec my-nginx ls /etc
      ```
  • docker container inspect <container_id_or_name>: Displays detailed information about a container in JSON format (state, network settings, mounts, etc.).
    ```bash
    docker container inspect my-nginx
    ```

Cleaning Up Resources

Docker resources (containers, images, volumes, networks) can accumulate over time.

  • docker system prune: Removes all stopped containers, dangling images (those without tags), and unused networks.

    • -a: Removes all unused images (not just dangling ones) and all stopped containers.
    • --volumes: Also removes unused volumes (use with extreme caution, as this deletes data!).
      ```bash
      # Basic prune
      docker system prune

      # Prune including unused images
      docker system prune -a

      # Prune including unused images AND volumes (DANGEROUS!)
      docker system prune -a --volumes
      ```
    Always confirm the prompts carefully before pruning.

6. Building Docker Images with Dockerfile

While you can pull pre-built images, you’ll often need to create custom images for your own applications. This is done using a Dockerfile.

Dockerfile Basics and Syntax

A Dockerfile is a script containing ordered instructions to build a Docker image.

  • FROM <image>[:<tag>]: Specifies the base image for your build. This must be the first instruction.
    ```dockerfile
    FROM ubuntu:22.04
    FROM python:3.9-slim
    ```
  • RUN <command>: Executes a command in a new layer on top of the current image. Used for installing packages, creating directories, etc. Chain commands using && to reduce layers.
    ```dockerfile
    RUN apt-get update && apt-get install -y --no-install-recommends curl ca-certificates && rm -rf /var/lib/apt/lists/*
    ```
  • COPY <src>... <dest>: Copies files or directories from the build context (the directory containing the Dockerfile) into the image’s filesystem.
    ```dockerfile
    COPY ./app /app
    COPY requirements.txt .
    ```
  • ADD <src>... <dest>: Similar to COPY, but with added features like automatically extracting compressed files (tar, gzip, etc.) and supporting URLs as <src>. Generally, COPY is preferred unless you need ADD‘s specific features.
  • WORKDIR /path/to/workdir: Sets the working directory for subsequent RUN, CMD, ENTRYPOINT, COPY, and ADD instructions.
    ```dockerfile
    WORKDIR /app
    ```
  • EXPOSE <port> [<port>/<protocol>...]: Informs Docker that the container listens on the specified network ports at runtime. This is informational; you still need -p or -P with docker run to actually publish the port.
    ```dockerfile
    EXPOSE 80
    EXPOSE 5432/tcp
    ```
  • CMD ["executable","param1","param2"] (exec form, preferred) or CMD command param1 param2 (shell form): Provides the default command and/or parameters for an executing container. There can only be one CMD instruction. If specified in docker run, it overrides the CMD in the Dockerfile.
    ```dockerfile
    CMD ["python", "app.py"]
    CMD ["nginx", "-g", "daemon off;"]
    ```
  • ENTRYPOINT ["executable", "param1", "param2"] (exec form, preferred) or ENTRYPOINT command param1 param2 (shell form): Configures the container to run as an executable. Arguments specified in docker run are appended to the ENTRYPOINT. Often used in combination with CMD to specify default parameters that can be overridden.
    ```dockerfile
    ENTRYPOINT ["/usr/sbin/nginx"]
    CMD ["-g", "daemon off;"]
    # Allows 'docker run my-image -h' to run '/usr/sbin/nginx -h'
    ```
  • USER <user>[:<group>]: Sets the user name (or UID) and optionally the group name (or GID) to use when running subsequent commands and for the final running container. Essential for security (avoiding root).
    ```dockerfile
    RUN groupadd -r myuser && useradd -r -g myuser myuser
    # ... install app files ...
    USER myuser
    CMD ["./my-app"]
    ```
  • ARG <name>[=<default_value>]: Defines a build-time variable. Users can pass values using --build-arg <name>=<value> during docker build.
  • ENV <key>=<value> ...: Sets environment variables within the image and for running containers.
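To make the difference concrete, here is a small hypothetical fragment combining the two: ARG exists only at build time, while ENV persists into running containers (APP_VERSION and APP_ENV are illustrative names):

```dockerfile
# Build-time only; override with: docker build --build-arg APP_VERSION=2.0 .
ARG APP_VERSION=1.0

# Capture the build argument into a runtime environment variable so the
# version is also visible inside running containers
ENV APP_VERSION=${APP_VERSION}
ENV APP_ENV=production

RUN echo "Building version ${APP_VERSION}"
```

Avoid passing secrets via ARG or ENV; both can end up recorded in the image history.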

Example: Creating a Simple Nginx Image with Custom Content

  1. Create a directory, e.g., my-custom-nginx.
  2. Inside this directory, create a file named Dockerfile:

    ```dockerfile
    # Use the official Nginx image based on Alpine Linux as the base
    FROM nginx:stable-alpine

    # Maintainer label (optional good practice)
    LABEL maintainer="Your Name <your.email@example.com>"

    # Remove the default Nginx welcome page
    RUN rm /usr/share/nginx/html/index.html

    # Copy a custom HTML file from our build context to the Nginx web root
    COPY index.html /usr/share/nginx/html/index.html

    # Expose port 80 (standard HTTP port Nginx listens on)
    EXPOSE 80

    # Default command to run Nginx in the foreground
    CMD ["nginx", "-g", "daemon off;"]
    ```

  3. Create a simple index.html file in the same my-custom-nginx directory:

    ```html
    <!DOCTYPE html>
    <html>
    <head>
      <title>My Custom Nginx</title>
    </head>
    <body>
      <h1>Welcome to My Custom Dockerized Nginx on CentOS!</h1>
    </body>
    </html>
    ```

Building an Image (docker build)

Navigate to the directory containing your Dockerfile and index.html (my-custom-nginx) in your terminal and run the build command:

```bash
docker build -t my-custom-nginx:v1.0 .

# Or: docker build --tag my-custom-nginx:v1.0 .
```

  • -t or --tag: Assigns a name and optionally a tag (in the format name:tag) to the image.
  • .: Specifies the build context (the current directory). Docker sends the contents of this directory (respecting .dockerignore) to the Docker daemon for the build process.
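Because the whole build context is sent to the daemon, a .dockerignore file in the same directory keeps the context lean and the cache effective. A typical (illustrative) example:

```
.git
*.log
node_modules
__pycache__
.dockerignore
```

Entries use the same glob-style patterns as the examples above; tailor the list to whatever your project actually contains.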

Docker will execute the instructions in your Dockerfile step by step, creating layers and outputting progress. Once finished, you can see your new image:

```bash
docker images | grep my-custom-nginx
```

Running Your Custom Image

Now run a container from your newly built image:

```bash
docker run -d --name my-webserver -p 8888:80 my-custom-nginx:v1.0
```

Access http://<your_centos_ip>:8888 in your web browser. You should see your custom HTML page.

7. Managing Data in Docker Containers

Containers are ephemeral. Any data written inside a container’s writable layer is lost when the container is removed. To persist data or share data between containers and the host, Docker provides three main mechanisms: Volumes, Bind Mounts, and tmpfs Mounts.

The Ephemeral Nature of Containers

When a container is started, Docker adds a thin writable layer on top of the read-only image layers. Any changes (file creation, modification, deletion) happen in this writable layer. When the container is deleted (docker rm), this writable layer is destroyed along with the data.

Docker Volumes (Recommended)

Volumes are the preferred way to persist data for Docker containers.

  • Docker-Managed: Volumes are created and managed by Docker. Their data is stored in a dedicated area on the host filesystem, managed by Docker (usually /var/lib/docker/volumes/).
  • Independent Lifecycle: Volumes persist even if the container using them is removed. Multiple containers can mount the same volume.
  • Performance: Often offer better performance than bind mounts, especially for write-heavy workloads on certain platforms.
  • Portability: Easier to back up, migrate, and manage as they are handled via the Docker API/CLI.
  • Features: Support volume drivers for storing data on remote hosts or cloud providers.

Creating and Managing Volumes:

```bash
# Create a volume named 'my-data'
docker volume create my-data

# List volumes
docker volume ls

# Inspect a volume (shows mountpoint on host)
docker volume inspect my-data

# Remove a volume (only if not used by any container)
docker volume rm my-data

# Remove all unused volumes (CAUTION!)
docker volume prune
```

Using Volumes with Containers:

Use the -v or --volume flag with docker run or define them in docker-compose.yml.

  • Syntax: -v <volume_name>:<container_path>

```bash
# Run Postgres, storing data in the 'pgdata' volume mounted at /var/lib/postgresql/data
docker run -d --name my-postgres \
    -e POSTGRES_PASSWORD=mysecretpassword \
    -v pgdata:/var/lib/postgresql/data \
    postgres:15

# Even if you 'docker rm my-postgres', the 'pgdata' volume remains.
# You can later start a new container using the same volume:
docker run -d --name new-postgres \
    -e POSTGRES_PASSWORD=mysecretpassword \
    -v pgdata:/var/lib/postgresql/data \
    postgres:15
```

If you provide a name (pgdata in the example) that doesn’t exist, Docker creates the volume automatically.

Bind Mounts

Bind mounts map a file or directory from the host machine directly into the container. The path on the host is specified explicitly.

  • Direct Host Access: Changes made in the container to the mounted path are immediately reflected on the host, and vice-versa.
  • Use Cases: Useful for sharing configuration files from the host, sharing source code during development, or accessing host resources.
  • Caveats:
    • Tightly couples the container to the host’s directory structure.
    • Can cause permission issues if the UID/GID inside the container doesn’t match the ownership on the host path.
    • On SELinux-enabled systems like CentOS, you often need to append :z or :Z to the volume mount option to relabel the host directory so the container can access it.

Using Bind Mounts:

  • Syntax: -v /path/on/host:/path/in/container or --volume /path/on/host:/path/in/container (Note: The first part must be an absolute path on the host).

```bash
# Mount the current directory's 'config' subfolder to /etc/myapp/config in the container
docker run -d --name my-app \
    -v "$(pwd)/config":/etc/myapp/config \
    my-app-image

# On CentOS with SELinux, you might need :z or :Z
#   :z - shared content label (multiple containers can read/write)
#   :Z - private unshared label (only this container can access)
docker run -d --name my-dev-app \
    -v "$(pwd)/src":/app/src:z \
    my-dev-image
```

Choose :z if you expect multiple containers to potentially share access to the bind-mounted host directory. Choose :Z if only this specific container needs access. Misusing :Z can prevent other containers (or even host processes) from accessing the data.
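Before troubleshooting bind-mount permission errors, it helps to know whether SELinux is actually enforcing on your host. A quick check (getenforce ships with CentOS’s SELinux tooling):

```shell
# Report the current SELinux mode; fall back gracefully on systems
# where the SELinux utilities are not installed.
if command -v getenforce >/dev/null 2>&1; then
    getenforce   # prints Enforcing, Permissive, or Disabled
else
    echo "SELinux tools not installed"
fi
```

If the mode is Enforcing, missing :z/:Z labels are a likely cause of "Permission denied" errors inside containers.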

tmpfs Mounts

Store data in the host’s memory (as a temporary filesystem). Data is non-persistent; it’s lost when the container stops. Useful for sensitive temporary data or performance-critical temporary files.

  • Syntax: --tmpfs /path/in/container

```bash
docker run -d --name temp-app --tmpfs /app-cache my-app-image
```

Volumes vs. Bind Mounts: For most common use cases like persisting database data, application state, or user uploads, volumes are the recommended approach. Use bind mounts primarily for development workflows (mounting source code) or providing specific host configuration files/directories to the container, keeping SELinux implications in mind on CentOS.

8. Docker Networking on CentOS

Docker provides robust networking capabilities out of the box.

Default Bridge Network

When Docker Engine starts, it creates a default bridge network named bridge. Unless specified otherwise, new containers connect to this network.

  • Containers on the default bridge can communicate with each other using their IP addresses.
  • Communication by container name is not supported on the default bridge network.
  • Docker assigns an IP address to each container from the subnet associated with the bridge network (e.g., 172.17.0.0/16).

Inspecting Networks

```bash
# List all networks
docker network ls

# Inspect a specific network (e.g., the default bridge)
docker network inspect bridge
# This shows subnet, gateway, connected containers, etc.
```

Container Communication on the Default Bridge

  1. Run two simple containers (e.g., alpine) on the default network:
    ```bash
    docker run -dit --name container1 alpine ash
    docker run -dit --name container2 alpine ash
    ```
  2. Find the IP address of container2:
    ```bash
    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container2
    # Let's say it outputs 172.17.0.3
    ```
  3. Ping container2 from container1 using its IP address:
    ```bash
    docker exec container1 ping -c 3 172.17.0.3
    ```

    This should work. Trying docker exec container1 ping container2 will likely fail on the default bridge.

Exposing Container Ports

To allow external access (from the CentOS host or other machines) to services running inside a container, you need to map container ports to host ports.

  • -p <host_port>:<container_port> (or --publish ...): Maps a specific host_port to a container_port.
    ```bash
    # Map host port 8080 to container port 80
    docker run -d --name web -p 8080:80 nginx
    # Access via http://<centos_ip>:8080
    ```
  • -P (or --publish-all): Maps all ports exposed in the Dockerfile (EXPOSE instruction) to random high-numbered ports on the host. Less predictable but useful sometimes.
    ```bash
    docker run -d --name web-auto -P nginx
    docker ps  # Check which random host port was assigned to container port 80
    ```

User-Defined Bridge Networks (Recommended)

For applications involving multiple containers that need to communicate, it’s highly recommended to create custom bridge networks.

  • Automatic DNS Resolution: Containers on the same user-defined bridge network can resolve each other by container name. This is a major advantage over the default bridge.
  • Better Isolation: Provides better network isolation, as only containers attached to the same custom network can communicate easily.
  • Dynamic Attachment/Detachment: Containers can be connected to or disconnected from user-defined networks on the fly without restarting them.

Creating and Using a User-Defined Bridge:

```bash
# Create a new bridge network
docker network create my-app-net

# Run containers attached to this network
docker run -d --name db --network my-app-net postgres:15
docker run -d --name webapp --network my-app-net -p 8000:80 my-webapp-image

# Now 'webapp' can connect to 'db' using the hostname 'db'
# (e.g., database connection string: postgresql://user:pass@db:5432/mydb)

# Connect an existing container to the network
docker network connect my-app-net some-existing-container

# Disconnect a container
docker network disconnect my-app-net some-existing-container
```

Host Network

Using --network host removes network isolation. The container shares the host’s network namespace directly.

  • The container listens directly on the host’s network interfaces (no port mapping needed via -p).
  • Can be slightly faster for network performance but sacrifices isolation.
  • Can lead to port conflicts if multiple containers try to bind to the same port on the host.
  • Generally used only when specific network performance is critical or direct access to host network interfaces is required.

```bash
docker run -d --name nginx-host --network host nginx
# Nginx will try to bind to port 80 directly on the CentOS host
```

Overlay Networks

Designed for multi-host networking, enabling containers running on different Docker hosts (participating in a Docker Swarm cluster) to communicate securely. Configuration is more complex and tied to Swarm setup.
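Overlay setup is beyond this guide's scope, but for orientation, the general shape looks like this. Treat it as a sketch: the advertise address is a placeholder, and the commands assume two or more hosts that can form a Swarm.

```shell
# On the first host: initialize swarm mode (required for overlay networks)
docker swarm init --advertise-addr 192.0.2.10   # placeholder IP

# On additional hosts: join using the token printed by 'swarm init'
# docker swarm join --token <token> 192.0.2.10:2377

# Create an attachable overlay network so standalone containers can also use it
docker network create -d overlay --attachable my-overlay

# Containers attached to 'my-overlay' on any node can resolve each other by name
docker run -d --name svc-a --network my-overlay nginx
```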

CentOS Firewall (firewalld) Interaction

Docker manipulates iptables rules directly for network isolation and port mapping. On CentOS systems using firewalld, Docker and firewalld generally coexist, but awareness is needed:

  • Port Mapping: When you use -p to map a port, Docker inserts its own iptables rules to allow that traffic. These rules are evaluated independently of firewalld, so a published port may be reachable even without a matching firewalld rule.
  • Exposing Ports: If you want external machines (beyond the host itself) to access a mapped port (e.g., -p 8080:80), you might still need to explicitly open the host port (e.g., 8080) in firewalld:
    ```bash
    sudo firewall-cmd --zone=public --add-port=8080/tcp --permanent
    sudo firewall-cmd --reload
    ```
  • Docker Network Subnets: By default, traffic between Docker networks and the host might be allowed. You might need specific firewalld rules if you want finer control over traffic originating from Docker’s default bridge (172.17.0.0/16 usually) or custom network subnets.

9. Orchestrating Multi-Container Applications with Docker Compose

Running individual containers is fine, but most real-world applications consist of multiple interconnected services (web server, database, caching layer, etc.). Manually managing the docker run commands, networks, and volumes for these becomes complex. Docker Compose simplifies this.

What is Docker Compose?

Docker Compose is a tool for defining and running multi-container Docker applications. You use a YAML file (docker-compose.yml) to configure your application’s services, networks, and volumes. Then, with a single command, you can create and start all the services from your configuration.

Installing Docker Compose

As of recent Docker versions, Compose functionality is included as a plugin (docker-compose-plugin) which gets installed alongside Docker Engine (as shown in the installation section). You can invoke it using docker compose (note the space, not hyphen).

If you need the standalone docker-compose binary (older method, or if the plugin wasn’t installed):

  1. Check the latest release version on the Docker Compose GitHub releases page.
  2. Download the appropriate binary for your system:
    ```bash
    # Example for version 2.24.6 (replace with the latest version)
    VERSION="2.24.6"
    DESTINATION="/usr/local/bin/docker-compose"
    sudo curl -L "https://github.com/docker/compose/releases/download/v${VERSION}/docker-compose-$(uname -s)-$(uname -m)" -o "$DESTINATION"
    sudo chmod +x "$DESTINATION"
    ```
  3. Verify the installation:
    ```bash
    docker-compose --version
    ```

However, using the integrated docker compose plugin is generally recommended if available.

The docker-compose.yml File Structure

A typical docker-compose.yml file looks like this:

```yaml
version: '3.8'  # Specifies the Compose file format version

services:
  # Define the first service (e.g., web application)
  webapp:
    build: .  # Build the image from the Dockerfile in the current directory
    # Or use a pre-built image:
    # image: myusername/mywebapp:latest
    ports:
      - "8000:5000"  # Map host port 8000 to container port 5000
    volumes:
      - ./src:/app/src          # Mount source code (bind mount)
      - webapp-data:/app/data   # Mount a named volume
    networks:
      - app-net  # Connect to the 'app-net' network
    environment:
      - DATABASE_HOST=db
      - REDIS_HOST=cache
    depends_on:  # Start 'db' and 'cache' before starting 'webapp'
      - db
      - cache

  # Define the database service
  db:
    image: postgres:15-alpine
    volumes:
      - db-data:/var/lib/postgresql/data  # Use a named volume for persistence
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: myappdb
    networks:
      - app-net

  # Define a caching service
  cache:
    image: redis:7-alpine
    networks:
      - app-net

# Define named volumes
volumes:
  db-data:      # Data for the postgres container
  webapp-data:  # Other persistent data for the webapp

# Define custom networks
networks:
  app-net:
    driver: bridge  # Use the bridge driver
```

Key elements:

  • version: Specifies the Compose file syntax version. ‘3.8’ is a common modern version, though recent Compose releases treat this field as optional and ignore its value.
  • services: Defines the different containers that make up your application. Each key under services is a service name (e.g., webapp, db).
  • build: Specifies the path to a directory containing a Dockerfile to build the service’s image.
  • image: Specifies the image to use for the service (can be from Docker Hub or a private registry).
  • ports: Defines port mappings (similar to docker run -p).
  • volumes: Defines volume mounts (named volumes or bind mounts).
  • networks: Specifies which networks the service should connect to. Services on the same user-defined network can reach each other by service name.
  • environment: Sets environment variables inside the container.
  • depends_on: Defines startup dependencies between services (Note: only waits for the container to start, not necessarily for the application inside to be ready).
  • volumes (top-level): Declares named volumes used by services. Docker Compose will create these if they don’t exist.
  • networks (top-level): Declares custom networks used by services. Docker Compose will create these.

Example: Web Application with a Database

Using the YAML structure above, you can manage a simple web app (webapp) that depends on a PostgreSQL database (db) and a Redis cache (cache). All services are connected via the custom app-net bridge network, allowing webapp to connect to db and cache using their service names as hostnames.
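Because depends_on only waits for containers to start, Compose also supports gating startup on a health check via the long-form depends_on syntax. A minimal sketch (service names and credentials are illustrative):

```yaml
services:
  webapp:
    build: .
    depends_on:
      db:
        condition: service_healthy  # wait until db's healthcheck passes

  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user"]
      interval: 5s
      timeout: 3s
      retries: 5
```

With this in place, `docker compose up` delays starting webapp until PostgreSQL actually accepts connections, not merely until its container exists.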

Common Docker Compose Commands

Run these commands from the directory containing your docker-compose.yml file. Use docker compose (with space) if using the plugin, or docker-compose (with hyphen) if using the standalone binary.

  • docker compose up: Builds (if necessary), creates, and starts all services defined in the docker-compose.yml.
    • -d: Runs containers in detached mode (background).
    • --build: Forces a rebuild of images before starting.
      ```bash
      docker compose up -d  # Start in background
      ```
  • docker compose down: Stops and removes containers, networks, and (optionally) volumes created by up.
    • --volumes: Removes named volumes declared in the volumes section. Use with caution!
    • --rmi all: Removes images used by the services.
      ```bash
      docker compose down
      docker compose down --volumes  # Stop, remove containers AND volumes
      ```
  • docker compose ps: Lists containers managed by the current Compose project.
  • docker compose logs [service_name]: Displays logs from services.
    • -f: Follows log output.
      ```bash
      docker compose logs webapp
      docker compose logs -f
      ```
  • docker compose exec <service_name> <command>: Executes a command in a running service container.
    ```bash
    docker compose exec webapp bash  # Get a shell in the webapp container
    docker compose exec db psql -U user -d myappdb  # Run psql in the db container
    ```
  • docker compose build [service_name]: Builds or rebuilds images for services.
  • docker compose pull [service_name]: Pulls the latest images for services.
  • docker compose start|stop|restart [service_name]: Manages the lifecycle of services without removing them.

Docker Compose drastically simplifies the management of complex applications on your CentOS Docker host.

10. Dockerfile Best Practices for Efficient and Secure Images

Writing a good Dockerfile is crucial for creating images that are small, fast to build, secure, and easy to maintain.

  1. Use Official Base Images: Start with official images from Docker Hub (e.g., python, node, nginx, alpine) whenever possible. They are regularly updated and scanned for vulnerabilities. Choose minimal variants (like alpine or slim) if appropriate for smaller image sizes.
  2. Use Specific Image Tags: Avoid using the latest tag in FROM. Use specific version tags (e.g., python:3.9.18-slim, nginx:1.25.3-alpine). This ensures predictable builds and avoids unexpected breaking changes when the latest tag gets updated.
  3. Minimize Layers (Combine RUN commands): Each RUN, COPY, and ADD instruction creates a new image layer. More layers can increase image size and potentially slow down builds and container startup. Combine related commands using && and backslashes (\) for readability. Clean up temporary files within the same RUN command.
    ```dockerfile
    # Less efficient: 3 layers
    RUN apt-get update
    RUN apt-get install -y curl
    RUN rm -rf /var/lib/apt/lists/*

    # More efficient: 1 layer
    RUN apt-get update && \
        apt-get install -y --no-install-recommends curl && \
        rm -rf /var/lib/apt/lists/*
    ```
  4. Leverage Build Cache Effectively: Docker reuses layers from previous builds if the instructions and build context haven’t changed. Order your Dockerfile instructions from least frequently changing (e.g., installing dependencies) to most frequently changing (e.g., copying application code). This maximizes cache hits.
    ```dockerfile
    # Example for a Node.js app
    WORKDIR /app

    # Copy dependency files first - cached unless package*.json changes
    COPY package*.json ./

    # Install dependencies - cached unless package*.json changes
    RUN npm install

    # Copy app code last - this layer invalidates the cache most often
    COPY . .
    CMD ["node", "server.js"]
    ```
  5. Use .dockerignore: Create a .dockerignore file in your build context directory (similar to .gitignore) to exclude files and directories not needed for the build (e.g., .git, node_modules, temporary files, local configs, .env files). This reduces the size of the build context sent to the Docker daemon, speeds up builds, and prevents sensitive files from being accidentally copied into the image.
  6. Keep Images Small (Multi-stage builds): Use multi-stage builds to separate the build environment from the final runtime environment. This allows you to use build tools, compilers, and development dependencies in an earlier stage and then copy only the necessary artifacts (compiled binaries, static assets) into a minimal final image.
    ```dockerfile
    # Stage 1: build stage (e.g., using a full SDK image)
    FROM golang:1.21 AS builder
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 GOOS=linux go build -o /app .

    # Stage 2: final stage (using a minimal base image)
    FROM alpine:latest
    WORKDIR /root/

    # Copy only the compiled binary from the builder stage
    COPY --from=builder /app .
    EXPOSE 8080
    CMD ["./app"]
    ```
  7. Run Containers as Non-Root User (USER instruction): Avoid running containers as the root user. Create a dedicated user and group within the Dockerfile and switch to that user using the USER instruction before the final CMD or ENTRYPOINT. This enhances security by limiting the potential impact if the application inside the container is compromised.
    ```dockerfile
    FROM alpine:latest
    RUN addgroup -S appgroup && adduser -S appuser -G appgroup
    WORKDIR /home/appuser
    COPY --chown=appuser:appgroup ./app-binary .
    USER appuser
    CMD ["./app-binary"]
    ```
  8. Scan Images for Vulnerabilities: Integrate image scanning tools (like Trivy, Clair, Docker Scout, or commercial solutions) into your CI/CD pipeline, or manually scan images regularly to identify known vulnerabilities in OS packages and application dependencies.
    ```bash
    # Example using Trivy (needs to be installed separately)
    trivy image my-custom-image:latest
    ```
  9. Parameterize with ARG and ENV Appropriately: Use ARG for variables needed only during the build process (e.g., specifying a version to download). Use ENV for variables needed by the application at runtime within the container.
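A short Dockerfile sketch of the ARG/ENV distinction (the version number and variable names are illustrative):

```dockerfile
FROM alpine:3.19

# Build-time only: APP_VERSION is not present in the running container
ARG APP_VERSION=1.0.0
RUN echo "built version ${APP_VERSION}" > /build-info.txt

# Runtime: APP_ENV is visible to the application inside the container
ENV APP_ENV=production
CMD ["sh", "-c", "echo Running in $APP_ENV"]
```

Build-time values can be overridden with `docker build --build-arg APP_VERSION=2.0.0 .`, while ENV values can be overridden at run time with `docker run -e APP_ENV=staging`.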

11. Security Considerations for Docker on CentOS

Running containers introduces new security considerations. CentOS provides a solid base, especially with SELinux, but vigilance is necessary.

Kernel Namespaces and Cgroups

Docker relies heavily on Linux kernel features for isolation:
  • Namespaces: Provide process isolation (PID namespace), network isolation (net namespace), filesystem isolation (mount namespace), and more. Each container gets its own set of namespaces, making it appear like a separate OS.
  • Control Groups (cgroups): Limit and track resource usage (CPU, memory, block I/O) for a group of processes. Docker uses cgroups to enforce resource limits on containers.

While generally effective, these are OS-level virtualization features. A severe kernel vulnerability could potentially bypass this isolation. Keeping your CentOS host kernel updated is critical.

Running Docker Daemon as Root (Security Implications)

The Docker daemon typically runs as root. This has significant security implications:
  • Privilege Escalation: Anyone with access to the Docker API (e.g., users in the docker group) effectively has root access on the host, as they can run privileged containers, mount sensitive host directories, and so on.
  • Daemon Vulnerabilities: A vulnerability in the Docker daemon itself could be exploited for root access.

Consider exploring Rootless mode for Docker if your use case allows it, although it comes with certain limitations. Otherwise, strictly control access to the Docker daemon socket and the docker group.
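For orientation, the rootless setup generally follows this shape on CentOS-family systems. The package and tool names follow Docker's rootless documentation, but details vary by version, so treat this as a sketch rather than a definitive procedure:

```shell
# Install the rootless extras package (as root)
sudo dnf install -y docker-ce-rootless-extras

# As the unprivileged user, set up a per-user daemon
dockerd-rootless-setuptool.sh install

# Point the Docker CLI at the per-user socket
export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock
docker run --rm hello-world
```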

Running Containers as Non-Root

As emphasized in Best Practices, always run the process inside your container as a non-root user. Use the USER instruction in your Dockerfile. If a vulnerability is exploited in your application, it will run with the limited privileges of the container user, not root, reducing the potential damage.

SELinux Integration with Docker on CentOS

CentOS uses SELinux by default to enforce Mandatory Access Control (MAC). Docker and SELinux can work together to provide an additional layer of security.

  • SELinux Contexts: Every process and file on a SELinux system has a security context (label). SELinux rules define how processes with certain contexts can interact with files or other processes based on their contexts.
  • Container Labels: By default, Docker assigns the container_t SELinux type to containers. SELinux policies typically restrict what container_t processes can do on the host.
  • Volume Labeling (:z, :Z): This is where SELinux most commonly impacts Docker usage. When using bind mounts (-v /host/path:/container/path), the container process (container_t) might not have permission to read/write files from the host path, which have different SELinux labels.
    • Appending :z to the bind mount (-v /host/path:/container/path:z) tells Docker to relabel the host content with a shared content label (container_file_t). Multiple containers can use this label.
    • Appending :Z (-v /host/path:/container/path:Z) tells Docker to relabel the host content with a private, unshared label specific to that container. Only this container can access it.
    • Caution: These options modify the SELinux labels on the host filesystem. Use them carefully. :Z is particularly restrictive.
    • Docker Volumes are generally handled better by Docker regarding SELinux, as the data resides within /var/lib/docker/volumes, which usually has appropriate default labeling. Relabeling options are typically not needed for named volumes.
  • Troubleshooting SELinux Issues: If containers fail to start or access mounted files with “Permission Denied” errors (even when filesystem permissions seem correct), SELinux is a likely culprit. Check the audit log: sudo ausearch -m avc -ts recent. Look for denials related to container_t. You might need to adjust volume labeling (:z/:Z) or, in rare, complex cases, create custom SELinux policy modules (which is an advanced topic). Avoid disabling SELinux globally (setenforce 0) as a workaround in production.

Limiting Container Resources

Prevent denial-of-service (DoS) attacks (intentional or accidental) where one container consumes all host resources. Use docker run flags or Docker Compose settings:

  • Memory:
    • --memory=<limit> (e.g., --memory=512m)
    • --memory-swap=<limit> (Total swap + memory)
  • CPU:
    • --cpus=<limit> (e.g., --cpus="1.5" for 1.5 CPU cores)
    • --cpu-shares, --cpuset-cpus (for finer control)
  • Block I/O:
    • --blkio-weight
    • --device-read-bps, --device-write-bps
  • ulimits:
    • --ulimit <type>=<soft>:<hard> (e.g., --ulimit nofile=1024:2048)
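The same limits can be declared per service in a Compose file. A sketch (the image name and limit values are illustrative; recent `docker compose` releases honor `deploy.resources.limits` outside Swarm as well):

```yaml
services:
  webapp:
    image: my-webapp-image   # illustrative image name
    deploy:
      resources:
        limits:
          cpus: "1.5"        # at most 1.5 CPU cores
          memory: 512M       # at most 512 MiB of RAM
    ulimits:
      nofile:
        soft: 1024
        hard: 2048
```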

Image Security Scanning

Regularly scan your Docker images (especially base images and final application images) for known vulnerabilities (CVEs) using tools like:

  • Trivy: Open-source, easy to use.
  • Clair: Open-source, often integrated with registries.
  • Docker Scout: Docker’s native offering (may require subscription for advanced features).
  • Commercial scanners (Snyk, Aqua Security, etc.).

Integrate scanning into your CI/CD pipeline to catch issues before deployment.

Managing Secrets

Avoid hardcoding sensitive information (passwords, API keys, certificates) directly into Dockerfiles or images. Use proper secret management techniques:

  • Docker Secrets: Native solution for Docker Swarm. Secrets are mounted as files in /run/secrets/ within the container.
  • Environment Variables (with caution): Can be used for less sensitive config, but be aware they can be inspected. Inject them at runtime (docker run -e), not build time (ENV in Dockerfile).
  • Volume Mounting: Mount configuration files containing secrets from a secure host location (using bind mounts with careful permissions and SELinux labels) or Docker Volumes.
  • External Secret Management Tools: HashiCorp Vault, AWS Secrets Manager, GCP Secret Manager, Azure Key Vault. These often have integrations or client libraries to securely fetch secrets at runtime.
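Outside Swarm, Compose can also mount file-based secrets into /run/secrets/ inside the container. A sketch (file paths are illustrative; keep the secrets directory out of version control):

```yaml
services:
  db:
    image: postgres:15-alpine
    environment:
      # The official postgres image reads *_FILE variables for credentials
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt  # illustrative host path
```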

Network Security

  • Use user-defined bridge networks to isolate application components.
  • Limit port exposure (-p) only to necessary ports.
  • Use firewalld on the CentOS host to control external access to exposed ports.
  • Consider network policies if using orchestration tools like Kubernetes or Swarm for finer-grained control over container-to-container communication.
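One concrete isolation technique from the list above: mark a backend network as internal so containers on it get no connectivity outside that network. A sketch (image names are illustrative):

```yaml
services:
  webapp:
    image: my-webapp-image   # illustrative
    networks:
      - frontend
      - backend
    ports:
      - "8080:80"

  db:
    image: postgres:15-alpine
    networks:
      - backend   # reachable only from containers on 'backend'

networks:
  frontend:
  backend:
    internal: true  # no external connectivity for this network
```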

12. Advanced Topics (Brief Overview)

This guide covers the essentials, but the Docker ecosystem is vast. Here are pointers to more advanced areas:

  • Docker Swarm: Docker’s native orchestration tool for managing a cluster of Docker hosts. It allows you to deploy, scale, and manage containerized applications across multiple machines, providing high availability and load balancing. Simpler to set up than Kubernetes but less feature-rich.
  • Kubernetes (K8s): The industry-leading container orchestration platform. Offers powerful features for automated deployment, scaling, service discovery, load balancing, self-healing, and configuration management across clusters of hosts. Steeper learning curve than Swarm. Can run on CentOS.
  • Private Docker Registries: While Docker Hub is convenient, organizations often need private registries (e.g., Docker Trusted Registry, Harbor, GitLab Container Registry, Nexus, AWS ECR, GCP Artifact Registry, Azure ACR) to store proprietary images securely.
  • Docker BuildKit: An improved backend for docker build. Enabled by default in recent Docker versions (via docker-buildx-plugin). Offers better performance, improved caching, concurrent builds, and new Dockerfile syntax features.

13. Troubleshooting Common Docker Issues on CentOS

Even with careful setup, you might encounter issues. Here are common problems and how to approach them:

  • Docker Service Fails to Start (sudo systemctl status docker)
    • Check logs: sudo journalctl -u docker -n 50 --no-pager (shows last 50 log entries). Look for error messages (config file issues, storage driver problems, conflicts).
    • Configuration issues: Check /etc/docker/daemon.json for syntax errors if you customized it.
    • Resource exhaustion: Ensure sufficient disk space and memory.
  • Permission Denied Errors (Docker Socket)
    • Symptom: Got permission denied while trying to connect to the Docker daemon socket... when running docker commands without sudo.
    • Cause: Your user is not in the docker group, or you haven’t logged out/in (or used newgrp docker) since being added.
    • Fix: Verify group membership (groups your_user), add if needed (sudo usermod -aG docker your_user), and log out and log back in.
  • Cannot Connect to Docker Daemon
    • Symptom: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
    • Cause: The Docker service is not running.
    • Fix: Start the service: sudo systemctl start docker. Check status: sudo systemctl status docker. Troubleshoot service start failure if necessary (see first point).
  • Network Connectivity Problems
    • Container cannot reach internet: Check host network connectivity. Check if firewalld is blocking outbound traffic (less common). Check Docker’s configured DNS servers (usually inherited from host, or set in /etc/docker/daemon.json).
    • External cannot reach container port: Ensure correct port mapping (-p host:container). Ensure the application inside the container is listening on 0.0.0.0:<container_port> (not just 127.0.0.1). Ensure firewalld on the CentOS host allows traffic on the host_port (sudo firewall-cmd --list-all, sudo firewall-cmd --add-port=...).
    • Containers cannot reach each other by name: Ensure they are on the same user-defined bridge network (not the default bridge). Verify network attachment (docker network inspect <network_name>).
  • Volume Permission Issues
    • Symptom: “Permission Denied” when container tries to write to a mounted volume/bind mount.
    • Cause (Bind Mounts): UID/GID mismatch between host directory owner and container user. SELinux blocking access.
    • Fix (Bind Mounts): Adjust host directory ownership/permissions (chown, chmod). Use SELinux relabeling flags (:z or :Z) on the mount option (-v /host:/container:z).
    • Cause (Volumes): Less common, but could happen if the image tries to write as root to a volume initialized by a non-root user, or vice-versa (though Docker usually handles volume permissions). Check application logs inside the container.
  • “No space left on device” Errors
    • Cause: Docker’s storage location (/var/lib/docker by default) has run out of disk space, or you’ve hit inode limits. Can be caused by too many images, stopped containers, build cache, or large volumes.
    • Fix: Prune unused resources: docker system prune -a (consider --volumes carefully). Remove old/unused images (docker rmi ...) and volumes (docker volume rm ...). Check disk usage with df -h /var/lib/docker and df -i /var/lib/docker. Consider moving the Docker root directory if needed (requires stopping Docker, editing daemon.json, copying data, restarting).
  • Image Pull Issues
    • Symptom: manifest unknown, rate limit exceeded, connection timed out.
    • Cause: Typo in image name/tag. Network issues blocking access to the registry (Docker Hub). Docker Hub rate limits for anonymous/free users. Incorrect credentials for private registries.
    • Fix: Double-check image name/tag. Verify network/firewall settings. Log in to Docker Hub (docker login). Check Docker Hub status pages. Configure credentials for private registries.
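For the “no space left on device” entry above, docker system df shows where the space is going before you prune anything:

```shell
# Summary of disk usage by images, containers, local volumes, and build cache
docker system df

# Per-object breakdown (sizes of individual images, volumes, etc.)
docker system df -v
```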

14. Conclusion

Docker provides a transformative approach to application deployment, and CentOS offers a robust, secure, and stable platform to run it on. By mastering the concepts outlined in this guide – from installation and core commands to Dockerfiles, networking, volumes, Compose, and security best practices specific to CentOS (like SELinux integration) – you are well-equipped to leverage the full power of containerization.

Containerizing applications on CentOS allows for consistent environments, efficient resource utilization, rapid deployments, and improved security posture when implemented correctly. While the learning curve exists, the benefits in development pipelines and operational stability are substantial.

Remember to always prioritize security, keep your Docker Engine and CentOS host updated, manage resources effectively, and continue exploring the rich Docker ecosystem. Happy Dockering on CentOS!

