Configure Nginx Reverse Proxy inside Docker


This article provides a deep dive into setting up and configuring an Nginx reverse proxy within a Docker environment. We’ll cover everything from the fundamentals to advanced techniques, enabling you to leverage the power and flexibility of Docker and Nginx for your web applications.

Table of Contents

  1. Introduction: Why Use an Nginx Reverse Proxy with Docker?

    • What is a Reverse Proxy?
    • Benefits of Using Nginx
    • Why Docker?
    • Combining Nginx and Docker: The Synergy
  2. Prerequisites

    • Docker Installation
    • Basic Docker Knowledge (Images, Containers, Networks)
    • Text Editor or IDE
    • Basic Linux Command-Line Familiarity
  3. Basic Setup: Reverse Proxy for a Single Application

    • Scenario: Simple Web Application
    • Creating the Backend Application (Example: Node.js)
      • Dockerfile for the Backend
      • app.js (Simple Node.js Server)
    • Creating the Nginx Configuration
      • nginx.conf
    • Creating the Nginx Dockerfile
      • Dockerfile for Nginx
    • Building and Running the Containers
      • Docker Compose (Recommended)
        • docker-compose.yml
      • Manual docker build and docker run (Alternative)
    • Testing the Setup
    • Explanation of Configuration Directives
  4. Reverse Proxy for Multiple Applications

    • Scenario: Multiple Services on Different Ports/Paths
    • Creating Additional Backend Applications (Example: Python Flask)
      • Dockerfile for the Second Backend
      • app.py (Simple Flask Server)
    • Modifying the Nginx Configuration
      • nginx.conf (Multiple server Blocks)
    • Updating Docker Compose (or Manual Commands)
      • docker-compose.yml (Multiple Services)
    • Testing the Multi-Application Setup
    • Using Server Names (Host Headers)
      • Modifying nginx.conf (Using server_name)
      • Updating /etc/hosts (Local Testing)
      • DNS Configuration (Production)
  5. SSL/TLS Encryption with Let’s Encrypt

    • Scenario: Secure Communication with HTTPS
    • Understanding Let’s Encrypt and Certbot
    • Using the nginx-proxy and nginx-proxy-companion Images
      • docker-compose.yml (with nginx-proxy and acme-companion)
      • Environment Variables (VIRTUAL_HOST, LETSENCRYPT_HOST, LETSENCRYPT_EMAIL)
    • Alternative: Manual Certbot Integration
      • Dockerfile for Certbot
      • Running Certbot to Obtain Certificates
      • Configuring Nginx to Use Certificates
      • Automating Certificate Renewal
    • Best Practices for SSL/TLS
  6. Advanced Nginx Configuration

    • Caching:
      • proxy_cache_path
      • proxy_cache
      • proxy_cache_valid
      • proxy_cache_key
      • proxy_cache_bypass
      • proxy_no_cache
    • Load Balancing:
      • upstream
      • Load Balancing Methods (Round Robin, Least Connections, IP Hash, etc.)
      • Health Checks
    • Request Rewriting:
      • rewrite
      • return
    • Rate Limiting:
      • limit_req_zone
      • limit_req
    • HTTP/2 Support:
      • listen 443 ssl http2;
    • Custom Headers:
      • add_header
      • proxy_set_header
    • WebSockets
      • proxy_http_version 1.1;
      • proxy_set_header Upgrade $http_upgrade;
      • proxy_set_header Connection "Upgrade";
    • Error Handling:
      • error_page
      • Custom Error Pages
  7. Docker Networking Considerations

    • Default Bridge Network
    • User-Defined Networks
    • Linking Containers (Deprecated)
    • Service Discovery with Docker Compose
    • External Network Access
  8. Security Best Practices

    • Keep Nginx and Docker Updated
    • Limit Container Privileges
    • Use a Non-Root User Inside the Container
    • Secure Nginx Configuration
      • Disable Server Tokens
      • Configure Strong Ciphers and Protocols
      • Implement Security Headers (HSTS, X-Frame-Options, etc.)
    • Monitor Logs
    • Use a Web Application Firewall (WAF) (Optional)
  9. Troubleshooting

    • Common Nginx Errors (502 Bad Gateway, 404 Not Found, etc.)
    • Debugging Docker Containers
      • docker logs
      • docker exec
      • docker inspect
    • Network Connectivity Issues
    • Certificate Problems
    • Resource Limits
  10. Conclusion


1. Introduction: Why Use an Nginx Reverse Proxy with Docker?

What is a Reverse Proxy?

A reverse proxy is a server that sits in front of one or more backend servers, forwarding client requests to those servers. It acts as an intermediary, hiding the internal structure and characteristics of your backend infrastructure from the outside world. Clients interact only with the reverse proxy, unaware of the specific servers handling their requests.

Benefits of Using Nginx

Nginx is a high-performance, open-source web server and reverse proxy known for its:

  • Performance: Nginx is designed to handle a large number of concurrent connections with low resource consumption. Its event-driven architecture makes it exceptionally efficient.
  • Stability: Nginx is known for its reliability and stability, even under heavy load.
  • Flexibility: Nginx is highly configurable, supporting a wide range of features, including reverse proxying, load balancing, caching, SSL/TLS termination, and more.
  • Modules: Nginx’s modular architecture allows you to extend its functionality with a variety of modules.
  • Open Source: Being open-source means it’s free to use and has a large, active community.

Why Docker?

Docker is a platform for developing, shipping, and running applications in containers. Containers are lightweight, isolated environments that package everything an application needs to run, including code, runtime, libraries, and system tools. Key benefits of Docker include:

  • Consistency: Docker ensures that your application runs the same way, regardless of the underlying infrastructure.
  • Isolation: Containers isolate applications from each other and the host system, preventing conflicts and improving security.
  • Portability: Docker containers can be easily moved between different environments (development, testing, production).
  • Scalability: Docker makes it easy to scale applications by running multiple instances of containers.
  • Resource Efficiency: Containers share the host operating system kernel, making them more lightweight than virtual machines.

Combining Nginx and Docker: The Synergy

Using Nginx as a reverse proxy within a Docker environment provides a powerful and flexible solution for deploying and managing web applications. Here’s why this combination is so effective:

  • Simplified Deployment: Docker makes it easy to package and deploy both your application and Nginx in a consistent and reproducible manner.
  • Improved Scalability: You can easily scale your application by running multiple backend containers behind the Nginx reverse proxy.
  • Enhanced Security: Nginx can handle SSL/TLS termination, providing a secure connection for your users, and can act as a first line of defense against attacks.
  • Centralized Management: Nginx provides a single point of entry for managing traffic to your applications, making it easier to configure routing, load balancing, and other features.
  • Microservices Architecture: This combination is ideal for microservices architectures, where multiple independent services are deployed as separate containers. Nginx can route traffic to the appropriate service based on the request.

2. Prerequisites

Before you begin, make sure you have the following:

  • Docker Installation: Install Docker Desktop (for Windows/macOS) or Docker Engine (for Linux) on your system. Follow the official Docker documentation for your operating system.

    • Verification: Run docker --version and docker-compose --version (if using Docker Compose) to verify the installation.
  • Basic Docker Knowledge: You should have a basic understanding of:

    • Images: Read-only templates used to create containers.
    • Containers: Running instances of images.
    • Networks: How containers communicate with each other and the outside world.
    • Volumes: Persistent data storage for containers.
  • Text Editor or IDE: You’ll need a text editor or IDE to create and edit configuration files (e.g., nginx.conf, Dockerfile, docker-compose.yml). Popular choices include VS Code, Sublime Text, Atom, and Vim.

  • Basic Linux Command-Line Familiarity: You should be comfortable navigating directories, creating files, and running commands in a terminal or command prompt.
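
For example, you can confirm the tooling is in place from a terminal (newer Docker installations ship Compose as the `docker compose` plugin rather than the standalone `docker-compose` binary):

```bash
docker --version          # e.g. "Docker version 24.x.x, build ..."
docker compose version    # Compose v2 plugin; or: docker-compose --version
```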

3. Basic Setup: Reverse Proxy for a Single Application

Let’s start with a simple scenario: a single web application that we want to expose through an Nginx reverse proxy.

Scenario: Simple Web Application

We’ll use a basic Node.js application as our backend. This application will simply respond with “Hello from Backend!” when accessed.

Creating the Backend Application (Example: Node.js)

  1. Create a directory: Create a directory for your project (e.g., nginx-docker-example).

  2. Create app.js: Inside the project directory, create a file named app.js with the following content:

    ```javascript
    // app.js
    const http = require('http');

    const server = http.createServer((req, res) => {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('Hello from Backend!');
    });

    const port = 3000;
    server.listen(port, () => {
      console.log(`Server running on port ${port}`);
    });
    ```

    This code creates a simple HTTP server that listens on port 3000.

  3. Create Dockerfile for the Backend: Create a file named Dockerfile (no extension) in the same directory as app.js:

    ```dockerfile
    # Dockerfile (for backend)

    # Use a Node.js base image (Alpine Linux for smaller size)
    FROM node:16-alpine

    # Set the working directory inside the container
    WORKDIR /app

    # Copy package.json and package-lock.json (if present) and install dependencies
    COPY package*.json ./
    RUN npm install

    # Copy the rest of the application code
    COPY . .

    # Document that the app listens on port 3000 (doesn't actually publish the port)
    EXPOSE 3000

    # Command to run the application
    CMD ["node", "app.js"]
    ```

    • FROM node:16-alpine: This specifies the base image for the container. We’re using a Node.js image based on Alpine Linux, which is a lightweight Linux distribution. You can choose a different Node.js version if needed.
    • WORKDIR /app: Sets the working directory inside the container to /app. All subsequent commands will be executed relative to this directory.
    • COPY package*.json ./ and RUN npm install: This copies the package.json and package-lock.json files (if you have them) and installs the application’s dependencies. This is done in a separate step to take advantage of Docker’s layer caching. If the package*.json files don’t change, this step won’t be re-run on subsequent builds.
    • COPY . .: Copies the rest of the application code (including app.js) into the container.
    • EXPOSE 3000: Documents that the container listens on port 3000. This doesn’t actually publish the port; it’s primarily for informational purposes.
    • CMD ["node", "app.js"]: Specifies the command to run when the container starts. In this case, it runs node app.js to start the Node.js server.

    Note: This example application has no dependencies, so we haven’t actually created package.json or package-lock.json. In practice it is good to include one (for example, generated with npm init -y) so that the COPY package*.json ./ step has something to copy and Docker’s layer caching works as intended; a minimal example is sketched below.
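
    For reference, a minimal package.json for this backend might look like the following (the name and version values are placeholders; adjust them to your project):

    ```json
    {
      "name": "backend",
      "version": "1.0.0",
      "main": "app.js",
      "scripts": {
        "start": "node app.js"
      }
    }
    ```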

Creating the Nginx Configuration

  1. Create a directory for Nginx: Inside your project directory, create a subdirectory named nginx.

  2. Create nginx.conf: Inside the nginx directory, create a file named nginx.conf with the following content:

    ```nginx
    # nginx.conf

    events {
        worker_connections 1024;  # Maximum number of simultaneous connections per worker
    }

    http {
        server {
            listen 80;  # Listen on port 80 (HTTP)

            location / {
                proxy_pass http://backend:3000;          # Forward requests to the backend container
                proxy_set_header Host $host;             # Pass the original Host header
                proxy_set_header X-Real-IP $remote_addr; # Pass the client's IP address
            }
        }
    }
    ```

    • events { ... }: Configures event-processing settings. worker_connections specifies the maximum number of simultaneous connections that each worker process can handle.
    • http { ... }: Configures the HTTP server.
    • server { ... }: Defines a virtual server.
    • listen 80;: Specifies that the server should listen for incoming connections on port 80 (the standard HTTP port).
    • location / { ... }: Defines how to handle requests for a specific location (in this case, the root path /).
    • proxy_pass http://backend:3000;: This is the core of the reverse proxy configuration. It tells Nginx to forward requests to the backend server running at http://backend:3000. backend is the name we’ll give to our backend container (using Docker Compose or linking).
    • proxy_set_header Host $host;: Passes the original Host header from the client’s request to the backend server. This is important for applications that rely on the Host header (e.g., virtual hosting).
    • proxy_set_header X-Real-IP $remote_addr;: Passes the client’s IP address to the backend server. This is useful for logging and other purposes.

Creating the Nginx Dockerfile

  1. Create Dockerfile for Nginx: Inside the nginx directory, create a file named Dockerfile:

    ```dockerfile
    # Dockerfile (for Nginx)

    # Use the official Nginx image (Alpine Linux)
    FROM nginx:alpine

    # Replace the default Nginx configuration with our own
    COPY nginx.conf /etc/nginx/nginx.conf

    # Document that Nginx listens on port 80
    EXPOSE 80
    ```

    • FROM nginx:alpine: Uses the official Nginx image based on Alpine Linux as the base image.
    • COPY nginx.conf /etc/nginx/nginx.conf: Copies your custom nginx.conf file into the container, replacing the default Nginx configuration.
    • EXPOSE 80: Documents that Nginx listens on port 80.
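
At this point, the project layout (using the directory and file names from above) should look roughly like this:

```
nginx-docker-example/
├── app.js
├── package.json        # optional, see the note above
├── Dockerfile          # backend image
└── nginx/
    ├── Dockerfile      # Nginx image
    └── nginx.conf
```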

Building and Running the Containers

We have two primary methods for building and running our containers: Docker Compose (recommended) and manual docker build and docker run commands.

Docker Compose (Recommended)

Docker Compose simplifies managing multi-container applications.

  1. Create docker-compose.yml: In your project’s root directory (where app.js and the backend Dockerfile live, alongside the nginx directory), create a file named docker-compose.yml:

    ```yaml
    # docker-compose.yml

    version: "3.9"  # Use a compatible Docker Compose version

    services:
      nginx:
        build: ./nginx     # Build the Nginx image from the ./nginx directory
        ports:
          - "80:80"        # Map port 80 on the host to port 80 in the container
        depends_on:
          - backend        # Ensure the backend container starts before Nginx

      backend:
        build: .           # Build the backend image from the current directory (.)
        expose:
          - "3000"         # Expose port 3000 (for communication between containers)
    ```

    • version: "3.9": Specifies the Docker Compose file version.
    • services:: Defines the services (containers) that make up your application.
    • nginx:: Defines the Nginx service.
      • build: ./nginx: Tells Docker Compose to build the Nginx image using the Dockerfile in the ./nginx directory.
      • ports: - "80:80": Maps port 80 on the host machine to port 80 inside the Nginx container. This makes the Nginx server accessible from your browser.
      • depends_on: - backend: Specifies that the nginx service depends on the backend service. Docker Compose will start the backend container before starting the nginx container.
    • backend:: Defines the backend service.
      • build: .: Builds the backend image using the Dockerfile in the current directory (the project root).
      • expose: - "3000": Exposes port 3000. This makes port 3000 accessible to other containers within the same Docker network, but not to the host machine. This is important because we only want Nginx to be accessible from the outside.
  2. Run Docker Compose: Open a terminal in your project’s root directory and run:

    ```bash
    docker-compose up -d
    ```

    • docker-compose up: Builds and starts the services defined in docker-compose.yml.
    • -d: Runs the containers in detached mode (in the background).

    Docker Compose will build the images (if they don’t exist), create a network, and start the containers.
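
    Once the stack is up, a few follow-up commands are handy for checking on it (run them from the project root, where docker-compose.yml lives):

    ```bash
    docker-compose ps            # List the services and their published ports
    docker-compose logs -f nginx # Follow the Nginx container's logs
    docker-compose down          # Stop and remove the containers and the network
    ```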

Manual docker build and docker run (Alternative)

If you prefer not to use Docker Compose, you can build and run the containers manually:

  1. Build the Backend Image:

    ```bash
    docker build -t my-backend .
    ```

    • docker build: Builds a Docker image.
    • -t my-backend: Tags the image with the name my-backend. You can choose a different name.
    • .: Specifies the build context (the current directory).
  2. Build the Nginx Image:

    ```bash
    docker build -t my-nginx ./nginx
    ```

    • ./nginx: Specifies the build context as the nginx directory.
  3. Run the Backend Container:

    ```bash
    docker run -d --name backend -p 3000:3000 my-backend
    ```

    • docker run: Runs a container from the image.
    • -d: Detached mode (runs in the background).
    • --name backend: Assigns the name backend to the container.
    • -p 3000:3000: Publishes port 3000 to the host. This is optional here; it is only needed if you also want to reach the backend directly, bypassing Nginx.

  4. Inspect the backend container’s IP address:

    ```bash
    docker inspect backend
    ```

    Look for the "IPAddress" field inside the network settings, e.g. "IPAddress": "172.17.0.2". You can also extract it directly:

    ```bash
    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' backend
    ```

  5. Run the Nginx Container:

    ```bash
    docker run -d --name nginx -p 80:80 my-nginx
    ```

    • Name resolution: Our nginx.conf points at http://backend:3000, but container names are not resolvable on Docker’s default bridge network. Either add --link backend:backend to the command above (linking is deprecated), replace backend in nginx.conf with the IP address from the previous step and rebuild the image, or, better, put both containers on a user-defined network, which is what Docker Compose does automatically; see the sketch after this list.
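
A minimal sketch of the user-defined network approach (the network name proxy-net is just an example):

```bash
# Create a user-defined bridge network; containers on it can resolve each other by name
docker network create proxy-net

# Run both containers on that network
docker run -d --name backend --network proxy-net my-backend
docker run -d --name nginx   --network proxy-net -p 80:80 my-nginx
```

On a user-defined network, Docker’s embedded DNS resolves the container name backend, so the proxy_pass http://backend:3000; directive works without any changes to nginx.conf.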

Testing the Setup

  1. Open a Web Browser: Open your web browser and go to http://localhost (port 80 is the default, so http://localhost:80 is equivalent).

  2. You should see: “Hello from Backend!”

If you see this message, your Nginx reverse proxy is working correctly! Nginx is receiving the request on port 80 and forwarding it to your backend Node.js application running on port 3000 inside the backend container.
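
You can also verify from the command line (assuming curl is installed):

```bash
curl -i http://localhost/
# The "Server: nginx" response header confirms the request went through the proxy,
# and the body should read "Hello from Backend!".
```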

Explanation of Configuration Directives

Let’s recap the key Nginx configuration directives we used:

  • listen: Specifies the port and (optionally) the IP address that Nginx should listen on.
  • location: Defines how to handle requests for a specific URI path.
  • proxy_pass: Forwards requests to a backend server. The URL specified after proxy_pass can include a hostname or IP address and a port number.
  • proxy_set_header: Sets HTTP headers that are passed to the backend server. This is crucial for preserving information about the original client request.
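
Beyond the two headers used above, reverse proxy configurations commonly forward a little more information about the original request. This is an optional extension of our location block, not something this example strictly needs:

```nginx
location / {
    proxy_pass http://backend:3000;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;  # Full chain of client IPs
    proxy_set_header X-Forwarded-Proto $scheme;                   # http or https, useful once TLS termination is added
}
```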

4. Reverse Proxy for Multiple Applications

Now, let’s extend our setup to handle multiple backend applications.

Scenario: Multiple Services on Different Ports/Paths

We’ll add a second backend application (a simple Python Flask server) and configure Nginx to route requests to the appropriate backend based on the requested path.

Creating Additional Backend Applications (Example: Python Flask)

  1. Create a new directory: In your project directory, create a new directory named backend2.

  2. Create app.py: Inside backend2, create a file named app.py with the following content:

    ```python
    # backend2/app.py
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def hello():
        return "Hello from Backend 2!"

    if __name__ == "__main__":
        app.run(host='0.0.0.0', port=5000)
    ```

    This creates a simple Flask application that listens on port 5000 and responds with “Hello from Backend 2!”. host='0.0.0.0' is important; it makes the server accessible from outside the container.

  3. Create requirements.txt: Inside backend2, create a file named requirements.txt containing the single dependency:

    ```
    Flask==2.0.3
    ```

  4. Create Dockerfile for the Second Backend: Inside backend2, create a Dockerfile:

    ```dockerfile
    # backend2/Dockerfile

    FROM python:3.9-alpine

    WORKDIR /app

    # Install dependencies first to take advantage of layer caching
    COPY requirements.txt ./
    RUN pip install --no-cache-dir -r requirements.txt

    # Copy the application code
    COPY . .

    # Document that the app listens on port 5000
    EXPOSE 5000

    CMD ["python", "app.py"]
    ```

    This is similar to the Node.js Dockerfile, but uses a Python base image and installs Flask using pip.
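
    If you want to smoke-test this service on its own before putting it behind Nginx, you can build and run it directly (the my-backend2 tag matches the one used in the manual commands later; publishing port 5000 here is only for this quick test):

    ```bash
    docker build -t my-backend2 ./backend2
    docker run --rm -d --name backend2-test -p 5000:5000 my-backend2
    curl http://localhost:5000/    # Expect: Hello from Backend 2!
    docker stop backend2-test
    ```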

Modifying the Nginx Configuration

  1. Edit nginx/nginx.conf: Modify your existing nginx/nginx.conf file to include a new location block for the second backend:

    ```nginx
    # nginx/nginx.conf

    events {
        worker_connections 1024;
    }

    http {
        server {
            listen 80;

            location / {
                proxy_pass http://backend:3000;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }

            # New location block for the second backend
            location /backend2/ {
                proxy_pass http://backend2:5000/;  # Trailing slashes strip the /backend2/ prefix before forwarding
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }
    }
    ```

    • location /backend2/ { ... }: This new block handles requests whose path starts with /backend2/ (Nginx will also issue a 301 redirect from /backend2 to /backend2/).
    • proxy_pass http://backend2:5000/;: Forwards requests to the backend2 container on port 5000; the hostname backend2 is resolved by Docker’s internal DNS. Because both the location prefix and the proxy_pass URL end in a slash, Nginx strips the /backend2/ prefix before forwarding, so /backend2/ reaches the Flask app as /, the only route it defines.
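
    The trailing slash matters because proxy_pass with a URI part replaces the matched location prefix. A quick sketch of the mappings under the configuration above:

    ```nginx
    # Request path          ->  Path forwarded to backend2:5000
    #   /backend2               (301 redirect to /backend2/)
    #   /backend2/          ->  /
    #   /backend2/api/info  ->  /api/info
    #
    # Without the trailing slashes (location /backend2 + proxy_pass http://backend2:5000;),
    # the original path /backend2 would be forwarded unchanged, and the Flask app,
    # which only defines "/", would respond with 404.
    ```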

Updating Docker Compose (or Manual Commands)

Using Docker Compose (Recommended):

  1. Edit docker-compose.yml: Update your docker-compose.yml file to include the backend2 service:

    ```yaml
    # docker-compose.yml

    version: "3.9"

    services:
      nginx:
        build: ./nginx
        ports:
          - "80:80"
        depends_on:
          - backend
          - backend2

      backend:
        build: .
        expose:
          - "3000"

      backend2:            # New service definition
        build: ./backend2
        expose:
          - "5000"
    ```

    • backend2:: Adds the definition for the backend2 service.
    • build: ./backend2: Builds the image using the Dockerfile in the ./backend2 directory.
    • The nginx service’s depends_on list now includes backend2, so both backends start before Nginx.
  2. Rebuild and Restart:

    ```bash
    docker-compose up -d --build
    ```

    • --build: Forces Docker Compose to rebuild the images, even if they already exist. This is important because we’ve changed the Dockerfile and nginx.conf.

Using Manual Commands:

  1. Build the backend2 Image:

    ```bash
    docker build -t my-backend2 ./backend2
    ```

  2. Run the backend2 Container (add --network proxy-net if you created the user-defined network in the earlier manual steps, so that Nginx can resolve the backend2 hostname):

    ```bash
    docker run -d --name backend2 my-backend2
    ```

  3. Rebuild and Restart Nginx (if needed):

    If you modified the nginx.conf file, you’ll need to rebuild and restart the Nginx container:

    ```bash
    docker stop nginx
    docker rm nginx
    docker build -t my-nginx ./nginx
    docker run -d --name nginx -p 80:80 my-nginx
    ```

Testing the Multi-Application Setup

  1. http://localhost: Should still show “Hello from Backend!” (from the Node.js application).

  2. http://localhost/backend2/ (or http://localhost/backend2, which Nginx redirects to the trailing-slash form): Should show “Hello from Backend 2!” (from the Flask application).

Nginx is now correctly routing requests to the appropriate backend container based on the URL path.
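
From the command line (the -L flag follows the redirect from /backend2 to /backend2/):

```bash
curl http://localhost/              # Hello from Backend!
curl -L http://localhost/backend2   # Hello from Backend 2!
```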

Using Server Names (Host Headers)

Instead of using different paths, you can use different hostnames (server names) to distinguish between your applications. This is how virtual hosting works.

  1. Modifying nginx.conf (Using server_name):

    ```nginx
    # nginx/nginx.conf

    events {
        worker_connections 1024;
    }

    http {
        server {
            listen 80;
            server_name app1.local;  # Server name for the first application

            location / {
                proxy_pass http://backend:3000;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }

        server {
            listen 80;
            server_name app2.local;  # A different server name for the second application

            location / {
                proxy_pass http://backend2:5000;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }
    }
    ```

    • server_name app1.local;: Specifies that this server block should handle requests for the hostname app1.local.
    • server_name app2.local;: Specifies that this server block should handle requests for the hostname app2.local.
  2. Updating /etc/hosts (Local Testing):

    For local testing, you need to modify your system’s hosts file to map these hostnames to your local machine’s IP address (usually 127.0.0.1).

    • Linux/macOS: Edit /etc/hosts (you’ll need root/administrator privileges).
    • Windows: Edit C:\Windows\System32\drivers\etc\hosts (run Notepad as administrator).

    Add the following lines to your hosts file:

    ```
    127.0.0.1 app1.local
    127.0.0.1 app2.local
    ```

  3. Rebuild and Restart Nginx: If using manual commands, stop, remove, rebuild and run Nginx. If using docker-compose, just run docker-compose up -d --build.

  4. Testing:

    • http://app1.local: Should show “Hello from Backend!”
    • http://app2.local: Should show “Hello from Backend 2!”
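
    You can also test the virtual hosts without editing /etc/hosts at all by setting the Host header explicitly:

    ```bash
    curl -H "Host: app1.local" http://localhost/   # Hello from Backend!
    curl -H "Host: app2.local" http://localhost/   # Hello from Backend 2!
    ```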

DNS Configuration (Production)

In a production environment, you would configure your DNS server to point the actual domain names (e.g., app1.example.com, app2.example.com) to the IP address of your server running the Nginx reverse proxy. You would not modify the /etc/hosts file on the server.

5. SSL/TLS Encryption with Let’s Encrypt

Securing your applications with HTTPS is essential for protecting user data and improving your website’s security and SEO. Let’s Encrypt provides free SSL/TLS certificates, and we can integrate it with our Nginx reverse proxy inside Docker.

Understanding Let’s Encrypt and Certbot

  • Let’s Encrypt: A free, automated, and open certificate authority (CA) that provides digital certificates for enabling HTTPS.
  • Certbot: A command-line tool that automates the process of obtaining and installing Let’s Encrypt certificates.

Using the nginx-proxy and nginx-proxy-companion Images (Recommended)

The nginx-proxy and acme-companion (formerly letsencrypt-nginx-proxy-companion) Docker images provide a highly convenient way to automate Let’s Encrypt certificate management.

  1. docker-compose.yml (with nginx-proxy and acme-companion):

    ```yaml
    version: "3.9"

    services:
      nginx-proxy:
        image: nginxproxy/nginx-proxy
        ports:
          - "80:80"
          - "443:443"
        volumes:
          - /var/run/docker.sock:/tmp/docker.sock:ro
          - certs:/etc/nginx/certs      # Add the certs volume
          - vhost:/etc/nginx/vhost.d
          - html:/usr/share/nginx/html
        labels:
          - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy"

      acme-companion:
        image: nginxproxy/acme-companion
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro
          - certs:/etc/nginx/certs      # Share the certs volume
          - acme:/etc/acme.sh
          - vhost:/etc/nginx/vhost.d
          - html:/usr/share/nginx/html
        depends_on:
          - nginx-proxy
        environment:
          - DEFAULT_EMAIL=you@example.com   # Replace with your email address

      backend:
        build: .
        expose:
          - "3000"
        environment:                        # Environment variables for Let's Encrypt
          - VIRTUAL_HOST=app1.local         # For real certificates, use publicly resolvable domains;
          - LETSENCRYPT_HOST=app1.local     # Let's Encrypt cannot issue certificates for .local names
          - LETSENCRYPT_EMAIL=you@example.com

      backend2:
        build: ./backend2
        expose:
          - "5000"
        environment:                        # Environment variables for Let's Encrypt
          - VIRTUAL_HOST=app2.local
          - LETSENCRYPT_HOST=app2.local
          - LETSENCRYPT_EMAIL=you@example.com

    volumes:    # Named volumes shared by nginx-proxy and acme-companion
      certs:
      vhost:
      html:
      acme:
    ```

    • nginx-proxy Service:
      • image: nginxproxy/nginx-proxy: Uses the nginx-proxy image.
      • ports: - "80:80" - "443:443": Exposes both port 80 (HTTP) and port 443 (HTTPS).
      • volumes::
        • /var/run/docker.sock:/tmp/docker.sock:ro: Allows nginx-proxy to monitor Docker events (container starts/stops) to automatically update its configuration. :ro makes it read-only for security.
        • certs:/etc/nginx/certs: Mount the certs volume to store SSL certificates.
        • vhost:/etc/nginx/vhost.d: This volume is used by the companion to communicate with the proxy.
        • html:/usr/share/nginx/html: Mounts the default HTML directory.
      • labels::
        • com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy: This label marks the container as the Nginx proxy instance that acme-companion should manage; the companion looks for it to locate the proxy and reload its configuration when certificates are issued or renewed.
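
With that in place, bringing the stack up and watching certificate issuance looks like this (substitute your real domains; they must resolve publicly to this host, and ports 80/443 must be reachable, for the ACME challenge to succeed):

```bash
docker-compose up -d --build
docker-compose logs -f acme-companion   # Watch certificate requests and renewals
curl -I https://app1.example.com        # Substitute your domain; expect 200 once a certificate is issued
```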
