
Understanding Docker CE: A Comprehensive Guide

Docker has revolutionized the way software is developed, deployed, and managed. At the heart of this revolution is Docker CE (Community Edition), a free and open-source platform that enables developers and system administrators to build, ship, and run distributed applications anywhere. This article provides an in-depth look at Docker CE, covering its core concepts, architecture, benefits, use cases, and practical examples. We’ll delve into everything from basic commands to more advanced topics like networking, storage, and orchestration.

1. Introduction: What is Docker and Why is it Important?

Before diving into the specifics of Docker CE, let’s understand the fundamental problem Docker solves and why it has become so crucial in modern software development.

1.1. The “It Works on My Machine” Problem

Traditionally, deploying applications has been a complex and error-prone process. Differences in operating systems, libraries, dependencies, and configurations between development, testing, and production environments often lead to the infamous “it works on my machine” problem. An application that runs flawlessly on a developer’s laptop might fail spectacularly on a production server due to subtle (or not-so-subtle) environmental discrepancies.

1.2. Virtual Machines: A Partial Solution

Virtual Machines (VMs) offered a partial solution to this problem. VMs allow you to create isolated environments with their own operating systems and resources, running on top of a hypervisor. This ensures greater consistency between environments. However, VMs have significant drawbacks:

  • Resource Overhead: Each VM runs a full copy of an operating system, consuming significant CPU, memory, and storage resources. This makes them heavy and slow to start.
  • Boot Time: VMs take time to boot up, slowing down the development and deployment process.
  • Management Complexity: Managing a large number of VMs can be complex and require specialized tools.

1.3. Docker: A Lightweight and Efficient Alternative

Docker offers a more lightweight and efficient approach to application packaging and deployment using containers. Instead of virtualizing the entire hardware stack like VMs, Docker utilizes containerization, which leverages the host operating system’s kernel to create isolated user-space environments.

1.4 Key Benefits of Docker

  • Lightweight and Fast: Containers share the host OS kernel, making them significantly smaller and faster to start than VMs. They consume fewer resources, allowing you to run more applications on the same hardware.
  • Consistency and Portability: Docker packages an application and all its dependencies into a single unit called a container image. This image can be run consistently across any environment that supports Docker, eliminating the “it works on my machine” problem.
  • Isolation: Containers provide strong isolation between applications, preventing conflicts and ensuring that one application’s issues don’t affect others.
  • Version Control: Docker images can be versioned, making it easy to track changes, roll back to previous versions, and manage different releases of an application.
  • Scalability: Docker makes it easy to scale applications by creating multiple instances of a container. This can be done manually or automatically using orchestration tools like Docker Swarm or Kubernetes.
  • Efficiency: Docker improves resource utilization and reduces infrastructure costs by allowing you to run more applications on the same hardware.
  • DevOps and CI/CD: Docker is a cornerstone of modern DevOps practices and Continuous Integration/Continuous Delivery (CI/CD) pipelines. It enables faster and more reliable deployments.

2. Core Concepts of Docker CE

Understanding Docker CE requires grasping several key concepts:

2.1. Images

A Docker image is a read-only template that contains everything needed to run an application:

  • Code: The application’s source code or compiled binaries.
  • Runtime: The programming language runtime (e.g., Python, Node.js, Java).
  • Libraries: Any required libraries and dependencies.
  • Environment Variables: Configuration settings for the application.
  • System Tools: Any necessary system tools or utilities.

Images are built from a Dockerfile (explained below) and are layered. Each instruction in the Dockerfile creates a new layer, and Docker uses a copy-on-write mechanism to efficiently manage these layers. This means that if multiple images share common layers, Docker only stores those layers once, saving disk space.

2.2. Containers

A container is a running instance of a Docker image. It’s a lightweight, isolated environment where the application executes. Containers are ephemeral, meaning they can be easily created, started, stopped, and deleted. When a container is deleted, any changes made within the container are lost unless they are explicitly persisted (using volumes, explained later).
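For example, the full lifecycle of a throwaway container looks like this (a sketch using commands covered in Section 5; the nginx image and the name web are just illustrations):

```bash
# Create and start a container from the nginx image, in the background
docker run -d --name web nginx

# Stop it, then remove it; anything written inside the container is now gone
docker stop web
docker rm web
```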

2.3. Dockerfile

A Dockerfile is a text file that contains a set of instructions for building a Docker image. It’s a declarative way to define the environment and steps required to create a reproducible image. Here’s a simple example of a Dockerfile:

```dockerfile
# Use an official Python runtime as a parent image
FROM python:3.9-slim-buster

# Set the working directory to /app
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install any needed packages specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Make port 80 available to the world outside this container
EXPOSE 80

# Define environment variable
ENV NAME World

# Run app.py when the container launches
CMD ["python", "app.py"]
```

Let’s break down the key instructions:

  • FROM: Specifies the base image to use. This is the foundation upon which your image is built.
  • WORKDIR: Sets the working directory inside the container.
  • COPY: Copies files or directories from the host machine to the container.
  • RUN: Executes a command inside the container during the image build process (e.g., installing packages).
  • EXPOSE: Informs Docker that the container listens on the specified network ports at runtime. It doesn’t actually publish the port (that’s done when running the container).
  • ENV: Sets environment variables inside the container.
  • CMD: Specifies the default command to execute when the container starts. There can be only one CMD instruction, and it can be overridden when running the container.
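To see this Dockerfile in action, you would build and run it roughly as follows (a sketch assuming app.py and requirements.txt sit next to the Dockerfile; the image name my-python-app is arbitrary):

```bash
# Build an image from the Dockerfile in the current directory
docker build -t my-python-app .

# Run it, publishing container port 80 as host port 8080
docker run -d -p 8080:80 my-python-app
```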

2.4. Docker Hub (and other Registries)

Docker Hub is a cloud-based registry service provided by Docker. It’s a central repository for storing and sharing Docker images. You can:

  • Pull Images: Download pre-built images from Docker Hub to use as base images or to run directly.
  • Push Images: Upload your own custom-built images to Docker Hub to share them with others or to deploy them to different environments.

While Docker Hub is the most popular registry, other options exist, including:

  • Private Registries: You can set up your own private registry to store images securely within your organization.
  • Cloud Provider Registries: Major cloud providers like AWS (ECR), Google Cloud (GCR), and Azure (ACR) offer their own container registry services.
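As a sketch of the pull/push workflow (my-username is a placeholder for your own Docker Hub namespace):

```bash
# Download a public image from Docker Hub
docker pull nginx:latest

# Re-tag it under your own namespace, then push it
docker tag nginx:latest my-username/nginx:custom
docker login
docker push my-username/nginx:custom
```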

2.5. Docker Engine

The Docker Engine is the core component of Docker CE. It’s a client-server application that consists of:

  • Docker Daemon (dockerd): A persistent background process that manages Docker objects (images, containers, networks, volumes). It listens for requests from the Docker client.
  • Docker CLI (docker): The command-line interface that users interact with to issue commands to the Docker daemon.
  • REST API: An API that the Docker client uses to communicate with the Docker daemon. This API can also be used by other tools and applications to interact with Docker.

2.6. Docker Compose

Docker Compose is a tool for defining and managing multi-container Docker applications. It uses a YAML file (docker-compose.yml) to describe the services that make up your application, their dependencies, and how they should be configured. Docker Compose simplifies the process of building, starting, stopping, and scaling complex applications.

2.7. Docker Swarm

Docker Swarm is a native clustering and orchestration tool for Docker. It allows you to create and manage a cluster of Docker nodes (machines running the Docker Engine) and deploy applications across the cluster. Swarm provides features like:

  • Service Discovery: Containers can easily find and communicate with each other.
  • Load Balancing: Traffic is automatically distributed across multiple instances of a container.
  • Scaling: You can easily scale services up or down to meet demand.
  • Rolling Updates: You can update applications without downtime.

While Docker Swarm is a built-in option, Kubernetes has become the dominant container orchestration platform.

3. Docker Architecture

Docker follows a client-server architecture. The Docker client (CLI) talks to the Docker daemon (dockerd), which does the heavy lifting of building, running, and distributing your Docker containers. The Docker daemon and the client can run on the same host, or you can connect a Docker client to a remote Docker daemon. The Docker client and daemon communicate using a REST API, over UNIX sockets or a network interface.

3.1. Client-Server Interaction

  1. User Input: You use the Docker CLI to issue commands (e.g., docker run, docker build, docker pull).
  2. CLI to Daemon: The Docker CLI translates your commands into API calls and sends them to the Docker daemon.
  3. Daemon Processes Request: The Docker daemon receives the request and performs the necessary actions. This might involve:
    • Pulling an image from a registry.
    • Building an image from a Dockerfile.
    • Creating and starting a container.
    • Managing networks and volumes.
  4. Daemon to Client (Response): The Docker daemon sends a response back to the Docker CLI, indicating the success or failure of the operation, and providing any relevant output.
  5. Client Output: The Docker CLI displays the response to the user.
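You can observe this API directly by querying the daemon’s UNIX socket, for example with curl (a sketch for a default Linux installation; the socket path can differ):

```bash
# Ask the daemon for its version information over the UNIX socket
curl --unix-socket /var/run/docker.sock http://localhost/version

# List running containers -- roughly what `docker ps` does under the hood
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
```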

3.2. Underlying Technologies

Docker leverages several underlying Linux kernel features to achieve containerization:

  • Namespaces: Provide isolation for different aspects of the system, such as:

    • PID Namespace: Isolates process IDs. A process inside a container has its own set of PIDs, separate from the host system.
    • Network Namespace: Isolates network interfaces, IP addresses, routing tables, etc. Each container has its own virtual network stack.
    • Mount Namespace: Isolates mount points. A container has its own file system view, separate from the host.
    • UTS Namespace: Isolates hostname and domain name.
    • IPC Namespace: Isolates inter-process communication resources.
    • User Namespace: Isolates user and group IDs.
  • Control Groups (cgroups): Limit and monitor resource usage (CPU, memory, disk I/O, network) for a group of processes. This prevents a single container from consuming all the resources on the host.

  • Union File Systems (UnionFS): Allow multiple file systems to be layered on top of each other, creating a single, unified view. Docker uses UnionFS (or similar technologies like OverlayFS) to efficiently manage image layers.
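You rarely touch these kernel features directly; Docker surfaces them as flags. For example, the following sketch applies cgroup-backed limits to an illustrative nginx container:

```bash
# Cap the container at 512 MB of RAM and half a CPU core (enforced via cgroups)
docker run -d --name limited --memory 512m --cpus 0.5 nginx

# Confirm the limits Docker recorded for the container
docker inspect --format '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}' limited
```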

4. Installing Docker CE

Docker CE is available for a variety of platforms, including Linux distributions (Ubuntu, Debian, Fedora, CentOS, etc.), macOS, and Windows. The installation process varies slightly depending on the platform. Here’s a general overview:

4.1. Linux (Ubuntu Example)

The following steps are for Ubuntu, but the process is similar for other distributions. Refer to the official Docker documentation for detailed instructions for your specific distribution.

  1. Update Package Index:

     ```bash
     sudo apt-get update
     ```

  2. Install Prerequisite Packages:

     ```bash
     sudo apt-get install \
       apt-transport-https \
       ca-certificates \
       curl \
       gnupg \
       lsb-release
     ```

  3. Add Docker’s GPG Key:

     ```bash
     curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
     ```

  4. Set up the Stable Repository:

     ```bash
     echo \
       "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
       $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
     ```

  5. Update Package Index Again:

     ```bash
     sudo apt-get update
     ```

  6. Install Docker Engine, containerd, and Docker Compose:

     ```bash
     sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
     ```

  7. Verify Installation:

     ```bash
     sudo docker run hello-world
     ```

     This command downloads a test image and runs it in a container. If the installation is successful, you’ll see a “Hello from Docker!” message.

  8. (Optional, but Recommended) Manage Docker as a non-root user:

     ```bash
     sudo groupadd docker
     sudo usermod -aG docker $USER
     newgrp docker   # Or log out and log back in
     ```

     This allows you to run Docker commands without using sudo.

4.2. macOS

  1. Download Docker Desktop for Mac: Download the .dmg installer from the Docker website.
  2. Install: Double-click the .dmg file and drag the Docker icon to the Applications folder.
  3. Launch Docker Desktop: Open Docker Desktop from the Applications folder. It will start the Docker Engine and the Docker CLI.

4.3. Windows

  1. Enable WSL 2 (Windows Subsystem for Linux 2): Docker Desktop for Windows uses WSL 2 as its backend. Make sure WSL 2 is enabled and a Linux distribution is installed (e.g., Ubuntu). You can find instructions for enabling WSL 2 in the Microsoft documentation.
  2. Download Docker Desktop for Windows: Download the installer from the Docker website.
  3. Install: Run the installer and follow the on-screen instructions.
  4. Launch Docker Desktop: Open Docker Desktop. It will start the Docker Engine and the Docker CLI, integrated with WSL 2.

5. Basic Docker Commands

Now that you have Docker CE installed, let’s explore some fundamental commands:

5.1. docker run

The docker run command is used to create and start a container from an image.

  • Basic Usage:

    ```bash
    docker run <image_name>
    ```

    For example:

    ```bash
    docker run ubuntu
    ```

    This will pull the latest ubuntu image from Docker Hub (if it’s not already cached locally) and start a container. Since we didn’t specify a command, the container will start and immediately exit: the default command for the ubuntu image is bash, which exits right away when no interactive terminal is attached.

  • Running a Command:

    ```bash
    docker run ubuntu echo "Hello from inside the container"
    ```

    This will run the echo command inside the container and print the output to your terminal.

  • Interactive Mode (-it):

    ```bash
    docker run -it ubuntu bash
    ```

    The -it flags (short for --interactive and --tty) allocate a pseudo-TTY connected to your terminal and keep STDIN open, allowing you to interact with the container’s shell. This will start a container and drop you into a bash shell inside the container.

  • Detached Mode (-d):

    ```bash
    docker run -d ubuntu sleep 3600
    ```

    The -d flag (short for --detach) runs the container in the background. The sleep 3600 command keeps the container running for an hour.

  • Port Mapping (-p):

    ```bash
    docker run -p 8080:80 nginx
    ```

    The -p flag (short for --publish) maps a port on the host machine to a port inside the container. In this example, port 8080 on the host is mapped to port 80 inside the container (where Nginx is listening). You can now access the Nginx web server by browsing to http://localhost:8080.

  • Naming a Container (--name):

    ```bash
    docker run --name my-nginx -p 8080:80 -d nginx
    ```

    The --name flag gives the container a specific name, making it easier to refer to it later.

  • Environment Variables (-e):

    ```bash
    docker run -e MY_VAR="my value" ubuntu env
    ```

    The -e flag sets environment variables inside the container. This example sets MY_VAR and then runs the env command to show all environment variables.

5.2. docker ps

The docker ps command lists running containers.

  • Basic Usage:

    ```bash
    docker ps
    ```

    This shows a table of currently running containers, including their container ID, image, command, creation time, status, ports, and names.

  • Show All Containers (-a):

    ```bash
    docker ps -a
    ```

    The -a flag (short for --all) shows all containers, including stopped ones.

5.3. docker stop

The docker stop command stops a running container.

```bash
docker stop <container_id_or_name>
```

For example:

```bash
docker stop my-nginx
```

5.4. docker start

The docker start command starts a stopped container.

```bash
docker start <container_id_or_name>
```

5.5. docker rm

The docker rm command removes a stopped container.

```bash
docker rm <container_id_or_name>
```

To remove a running container, you must first stop it, or use the -f (force) flag:

```bash
docker rm -f <container_id_or_name>
```

5.6. docker images

The docker images command lists available Docker images.

```bash
docker images
```

This shows a table of images, including their repository, tag, image ID, creation time, and size.

5.7. docker pull

The docker pull command downloads an image from a registry (e.g., Docker Hub).

```bash
docker pull <image_name>:<tag>
```

For example:

```bash
docker pull nginx:latest   # Pulls the latest version of the nginx image
docker pull ubuntu:20.04   # Pulls a specific version of the ubuntu image
```

5.8. docker build

The docker build command builds an image from a Dockerfile.

```bash
docker build -t <image_name>:<tag> <path_to_build_context>
```

For example:

```bash
docker build -t my-app:1.0 .   # Builds an image named "my-app" with tag "1.0" from the Dockerfile in the current directory
```

The -t flag specifies the image name and tag. The . sets the build context to the current directory, where Docker looks for a Dockerfile by default (use -f to point at a different one). You can also specify a URL to a Git repository containing a Dockerfile.

5.9. docker logs

The docker logs command displays the logs of a container.

```bash
docker logs <container_id_or_name>
```

  • Follow Logs (-f):

    ```bash
    docker logs -f <container_id_or_name>
    ```

    The -f flag (short for --follow) streams the logs in real-time, similar to tail -f.

5.10. docker exec

The docker exec command runs a command inside a running container.

```bash
docker exec -it <container_id_or_name> <command>
```

For example:

```bash
docker exec -it my-nginx bash   # Opens a bash shell inside the running "my-nginx" container
docker exec my-nginx ls /etc    # Lists files in /etc inside the container
```

The -it flags are often used with docker exec to get an interactive shell inside the container.

5.11. docker commit
docker commit creates a new image from a container’s changes.

```bash
docker commit <container_id> <new_image_name>
```

While useful for quick experiments, it is generally recommended to use Dockerfiles for building images to maintain reproducibility and version control. docker commit creates an image that isn’t easily reproducible.

5.12. docker rmi
docker rmi removes one or more images.

```bash
docker rmi <image_id_or_name>
```

5.13. docker system prune
docker system prune is a very useful command for cleaning up unused data. It removes:

  • All stopped containers
  • All networks not used by at least one container
  • All dangling images (images that are not tagged and are not referenced by any container)
  • All build cache

```bash
docker system prune
```

You can add the -a flag to also remove all unused images, not just dangling ones. Be careful with this flag, as it can remove images you might still need.

5.14. docker info and docker version

  • docker info: Displays system-wide information about the Docker installation, including the number of containers and images, the Docker version, and details about the storage driver and networking configuration.
  • docker version: Shows the versions of the Docker client and server components.

6. Docker Volumes

Containers are ephemeral; when a container is removed, any data written inside the container is lost. Docker volumes provide a way to persist data beyond the lifecycle of a container. They also enable data sharing between containers.

6.1. Types of Volumes

  • Bind Mounts: Mount a directory or file from the host machine’s file system into the container. Changes made in the container are reflected on the host, and vice versa. This is useful for development, where you might want to edit code on the host and have the changes immediately reflected in the running container. The path on the host is absolute.
  • Named Volumes: Managed by Docker. Docker creates and manages the storage location for the volume. Named volumes are more portable than bind mounts because you don’t need to worry about the specific path on the host machine. The storage is managed by Docker, usually within /var/lib/docker/volumes/ (on Linux).
  • tmpfs Mounts: (Linux only) Store data in the host system’s memory (RAM). This is useful for temporary data that doesn’t need to be persisted to disk. They are very fast but are lost when the container stops.

6.2. Creating and Using Volumes

  • Bind Mount (Example):
    bash
    docker run -v /path/on/host:/path/in/container <image_name>

    For example:
    bash
    docker run -v $(pwd)/src:/app -it ubuntu bash

    This mounts the src directory in the current working directory on the host to the /app directory inside the container.

  • Named Volume (Example):
    bash
    docker volume create my-data # Create a named volume
    docker run -v my-data:/data <image_name> # Use the named volume

    This creates a named volume called my-data and mounts it to the /data directory inside the container. Docker manages the storage location of my-data.

  • tmpfs Mount (Example):
    bash
    docker run --tmpfs /tmp <image_name>

    This mounts a tmpfs volume at /tmp inside the container.

6.3. docker volume Commands

  • docker volume create: Creates a named volume.
  • docker volume ls: Lists volumes.
  • docker volume inspect: Displays detailed information about a volume.
  • docker volume rm: Removes a volume.
  • docker volume prune: Removes all unused local volumes.
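Put together, a typical named-volume workflow might look like this (a sketch; the volume and container names are arbitrary):

```bash
docker volume create app-data        # Create the volume
docker volume inspect app-data       # Show its driver and mountpoint
docker run -d --name writer -v app-data:/data ubuntu sleep 3600
docker rm -f writer                  # The volume outlives the container
docker volume rm app-data            # Remove it once nothing references it
```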

7. Docker Networking

Docker provides several networking options to allow containers to communicate with each other and with the outside world.

7.1. Default Bridge Network

By default, Docker creates a bridge network called bridge. When you run a container without specifying any networking options, it’s connected to this bridge network. Containers on the same bridge network can communicate with each other using their IP addresses. Note that Docker’s embedded DNS server, which resolves container names to IP addresses, only works on user-defined networks; on the default bridge network, containers must address each other by IP (or rely on the legacy --link option).

7.2. User-Defined Networks

You can create your own custom networks using the docker network command. This is generally recommended for better isolation and control.

  • Creating a Network:
    bash
    docker network create my-network

    This creates a new bridge network called my-network.

  • Connecting a Container to a Network:
    bash
    docker run --network my-network --name container1 <image_name>
    docker run --network my-network --name container2 <image_name>

    These commands create two containers and connect them to the my-network. container1 and container2 can now communicate with each other using their container names (e.g., ping container2 from within container1).

7.3. Other Network Types

  • host: The container shares the host’s network namespace. The container uses the host’s IP address and network interfaces. This provides the best network performance but offers no isolation.
  • none: The container has no network access.
  • overlay: Used for multi-host networking with Docker Swarm. It allows containers on different hosts to communicate with each other.
  • macvlan: Assigns a MAC address to a container, making it appear as a physical device on the network. This is useful for integrating with existing network infrastructure.
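For example, host and none networking are selected with the --network flag (a sketch; the images are just examples):

```bash
# Share the host's network stack: nginx listens directly on the host's port 80
docker run -d --network host nginx

# No network at all: the container gets only a loopback interface
docker run --rm --network none alpine ip addr
```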

7.4. docker network Commands

  • docker network create: Creates a network.
  • docker network ls: Lists networks.
  • docker network inspect: Displays detailed information about a network.
  • docker network connect: Connects a running container to a network.
  • docker network disconnect: Disconnects a container from a network.
  • docker network rm: Removes a network.

8. Docker Compose: Managing Multi-Container Applications

Docker Compose is a tool for defining and running multi-container Docker applications. It uses a YAML file (docker-compose.yml) to define the services that make up your application, their dependencies, and their configurations.

8.1. docker-compose.yml Example

```yaml
version: "3.9"  # Specifies the Compose file version

services:
  web:
    build: .  # Build the image from the Dockerfile in the current directory
    ports:
      - "5000:5000"  # Map port 5000 on the host to port 5000 in the container
    volumes:
      - .:/code  # Mount the current directory to /code in the container
    depends_on:
      - db  # Specifies that the "web" service depends on the "db" service

  db:
    image: postgres:13  # Use the official PostgreSQL image
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: mysecretpassword  # Set the PostgreSQL password
    volumes:
      - db_data:/var/lib/postgresql/data  # Persist database data using a named volume

volumes:
  db_data:  # Define the named volume
```

8.2. Key Sections of docker-compose.yml

  • version: Specifies the version of the Compose file format.
  • services: Defines the services that make up your application. Each service represents a container.
    • build: Specifies the path to the Dockerfile to build the image for the service (or a context object for more advanced builds).
    • image: Specifies the image to use for the service (if you’re not building from a Dockerfile).
    • ports: Defines port mappings.
    • volumes: Defines volume mappings.
    • depends_on: Specifies dependencies between services. Docker Compose will start services in the correct order based on their dependencies.
    • environment: Sets environment variables.
    • networks: Specifies which networks a service is connected to. If omitted, a default network is created.
  • volumes: Defines named volumes that can be used by services.
  • networks: (Optional) Defines custom networks.

8.3. Docker Compose Commands

With the Compose plugin installed (as in Section 4), these commands are invoked as docker compose (with a space); the older standalone binary used the hyphenated docker-compose form shown here.

  • docker-compose up: Builds, (re)creates, starts, and attaches to containers for a service. The -d flag runs the containers in detached mode.

    ```bash
    docker-compose up -d
    ```

  • docker-compose down: Stops and removes the containers and networks created by up (add -v to also remove volumes, or --rmi to remove images).

    ```bash
    docker-compose down
    ```

  • docker-compose ps: Lists containers.
  • docker-compose logs: Views output from containers.
  • docker-compose build: Builds or rebuilds services.
  • docker-compose start: Starts existing containers for a service.
  • docker-compose stop: Stops running containers without removing them.
  • docker-compose restart: Restarts services.
  • docker-compose exec: Executes a command in a running container.
  • docker-compose run: Runs a one-off command against a service.
  • docker-compose scale: Sets the number of containers for a service (deprecated; use docker-compose up --scale instead).
  • docker-compose config: Validates and views the Compose file.
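A typical day-to-day session with the example docker-compose.yml from Section 8.1 might look like this (a sketch):

```bash
docker-compose up -d          # Build and start the web and db services in the background
docker-compose ps             # Check that both services are running
docker-compose logs -f web    # Follow the web service's output
docker-compose exec db psql -U postgres   # Open a psql shell inside the db container
docker-compose down           # Stop and remove everything the file created
```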

9. Docker Security Best Practices

Security is crucial when working with Docker. Here are some essential best practices:

  • Use Official Images: Whenever possible, use official images from trusted sources (e.g., Docker Hub Official Images). These images are regularly scanned for vulnerabilities and maintained by the respective project maintainers.
  • Keep Images Up-to-Date: Regularly update your base images to include the latest security patches. Use specific tags (e.g., ubuntu:20.04) instead of latest to control when updates are applied. Consider using automated image scanning tools.
  • Minimize Image Size: Smaller images have a smaller attack surface. Use multi-stage builds (described below) to create smaller production images. Remove unnecessary packages and files.
  • Don’t Run Containers as Root: By default, processes inside a container run as root. Create a non-root user inside your Dockerfile and use the USER instruction to switch to that user. This limits the potential damage if the container is compromised.
  • Limit Container Capabilities: Docker containers have certain Linux capabilities enabled by default. You can drop unnecessary capabilities using the --cap-drop flag with docker run to further restrict what the container can do.
  • Use Secrets Management: Don’t store sensitive information (passwords, API keys, etc.) directly in your Dockerfile or environment variables. Use Docker secrets or a dedicated secrets management solution.
  • Network Segmentation: Use user-defined networks to isolate containers from each other and from the host network. Only expose the necessary ports.
  • Resource Limits: Use cgroups to limit the resources (CPU, memory) that a container can consume. This prevents a compromised container from overwhelming the host system.
  • Regularly Scan Images: Use image scanning tools (e.g., Clair, Trivy, Docker’s built-in scanning) to identify vulnerabilities in your images.
  • Read-Only Root Filesystem: Consider mounting the container’s root filesystem as read-only (--read-only) with docker run. This prevents attackers from modifying the container’s filesystem if they gain access. Only allow writes to specific directories using volumes.
  • Content Trust: Use Docker Content Trust to ensure that you are pulling images from trusted sources and that the images have not been tampered with.
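Several of these practices can be combined in a single docker run invocation. The flags below are real, but treat the command as a pattern rather than a drop-in recipe — the stock nginx image, for instance, expects to start as root, so a hardened deployment would pair these flags with an image built for a non-root user:

```bash
# Read-only root filesystem, writable tmpfs scratch space, minimal capabilities,
# cgroup resource limits, and a non-root UID/GID -- all on one container
docker run -d --name hardened-web \
  --read-only --tmpfs /tmp \
  --cap-drop ALL --cap-add NET_BIND_SERVICE \
  --memory 256m --cpus 0.5 \
  --user 1000:1000 \
  -p 8080:80 nginx
```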

10. Advanced Docker Topics

10.1. Multi-Stage Builds

Multi-stage builds allow you to use multiple FROM instructions in a single Dockerfile to create smaller and more efficient production images. This is particularly useful for compiled languages (e.g., Go, Java, C++).

```dockerfile
# Stage 1: Build the application
FROM golang:1.18 AS builder
WORKDIR /app
COPY . .
# Disable cgo so the resulting binary is statically linked and runs on musl-based Alpine
RUN CGO_ENABLED=0 go build -o myapp

# Stage 2: Create the production image
FROM alpine:latest
WORKDIR /root/
COPY --from=builder /app/myapp .
CMD ["./myapp"]
```
In this example:

  1. The first stage (builder) uses the golang:1.18 image to build the application.
  2. The second stage uses the much smaller alpine:latest image.
  3. The COPY --from=builder instruction copies only the compiled binary (myapp) from the first stage to the second stage.

The final image only contains the compiled binary and the minimal runtime environment, resulting in a significantly smaller image size.

10.2. Docker and CI/CD

Docker is a fundamental component of modern CI/CD pipelines. Here’s how it integrates:

  1. Source Code Management: Developers commit code changes to a version control system (e.g., Git).
  2. CI Server Trigger: The CI server (e.g., Jenkins, GitLab CI, GitHub Actions, CircleCI) detects the code changes and triggers a build.
  3. Docker Build: The CI server uses a Dockerfile to build a Docker image of the application.
  4. Testing: The CI server runs automated tests inside a Docker container. This ensures that the tests are executed in a consistent environment.
  5. Image Push: If the tests pass, the CI server pushes the Docker image to a container registry (e.g., Docker Hub, ECR, GCR).
  6. Deployment: The CD process deploys the new image to a staging or production environment. This can be done using Docker Compose, Docker Swarm, Kubernetes, or other deployment tools.
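In script form, the Docker-related steps of such a pipeline often reduce to a few commands (a sketch: the registry URL, image name, and GIT_COMMIT variable are placeholders a CI system would supply, and pytest stands in for whatever test command the project uses):

```bash
# Build the image, tagged with the commit that triggered the build
docker build -t registry.example.com/my-app:${GIT_COMMIT} .

# Run the test suite inside the freshly built image
docker run --rm registry.example.com/my-app:${GIT_COMMIT} pytest

# Push the image to the registry once the tests pass
docker push registry.example.com/my-app:${GIT_COMMIT}
```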

10.3. Docker and Kubernetes

Kubernetes (often abbreviated as K8s) is the leading container orchestration platform. While Docker Swarm is a built-in option, Kubernetes has become the dominant choice for managing containerized applications at scale. Kubernetes provides features like:

  • Automated Deployments and Rollbacks: Kubernetes manages the deployment of your application across a cluster of nodes, ensuring high availability and fault tolerance.
  • Self-Healing: Kubernetes automatically restarts failed containers, reschedules containers on different nodes if a node fails, and ensures that the desired number of replicas is maintained.
  • Service Discovery and Load Balancing: Kubernetes gives each group of containers a stable DNS name and IP address, and distributes traffic across the replicas behind it.
