Introduction to NGINX: Core Concepts & Setup


Table of Contents

  1. What is NGINX?

    • 1.1 A Brief History
    • 1.2 NGINX vs. Apache: Key Differences
    • 1.3 NGINX Open Source vs. NGINX Plus
  2. Core Concepts

    • 2.1 Event-Driven Architecture
    • 2.2 Master and Worker Processes
    • 2.3 Configuration File Structure
      • 2.3.1 Directives and Contexts
      • 2.3.2 Main Context
      • 2.3.3 Events Context
      • 2.3.4 HTTP Context
      • 2.3.5 Server Context
      • 2.3.6 Location Context
      • 2.3.7 Upstream Context
      • 2.3.8 Mail Context (Less Common)
    • 2.4 Request Processing Flow
    • 2.5 Modules: Extending Functionality
    • 2.6 Regular Expressions in NGINX
  3. Installation and Setup

    • 3.1 Installation on Linux (Various Distributions)
      • 3.1.1 Ubuntu/Debian
      • 3.1.2 CentOS/RHEL/Fedora
      • 3.1.3 Using the official NGINX repository
    • 3.2 Installation on Windows
    • 3.3 Installation from Source (Advanced)
    • 3.4 Verifying Installation
    • 3.5 Basic NGINX Commands (start, stop, reload, status)
  4. Basic Configuration: Serving Static Content

    • 4.1 Default Configuration File
    • 4.2 Creating a Simple Virtual Host
    • 4.3 Setting up root and index Directives
    • 4.4 Testing Static Content Serving
  5. Reverse Proxy Configuration

    • 5.1 What is a Reverse Proxy?
    • 5.2 proxy_pass Directive
    • 5.3 Setting Proxy Headers (proxy_set_header)
    • 5.4 Basic Reverse Proxy Example
    • 5.5 Load Balancing with NGINX
      • 5.5.1 Round Robin
      • 5.5.2 Least Connections
      • 5.5.3 IP Hash
      • 5.5.4 Weighted Load Balancing
      • 5.5.5 Health Checks
  6. HTTPS Configuration (SSL/TLS)

    • 6.1 Obtaining SSL/TLS Certificates (Let’s Encrypt)
    • 6.2 Configuring SSL/TLS in NGINX
      • 6.2.1 ssl_certificate and ssl_certificate_key
      • 6.2.2 Enabling SSL/TLS Protocols and Ciphers
    • 6.3 Redirecting HTTP to HTTPS
  7. Caching

    • 7.1 Benefits of Caching
    • 7.2 Proxy Cache Configuration (proxy_cache_path, proxy_cache)
    • 7.3 Cache Key Customization (proxy_cache_key)
    • 7.4 Cache Bypass and Purging
    • 7.5 Browser Caching Directives
  8. Request and Response Modification

    • 8.1 rewrite
    • 8.2 add_header
    • 8.3 sub_filter
  9. Security Considerations

    • 9.1 Hiding NGINX Version (server_tokens)
    • 9.2 Limiting Request Methods (limit_except)
    • 9.3 Preventing Buffer Overflow Attacks
    • 9.4 Using a Web Application Firewall (WAF) – ModSecurity
    • 9.5 Rate Limiting (limit_req)
    • 9.6 Access Control Using allow and deny
    • 9.7 Using NGINX as a WAF (NGINX App Protect) – NGINX Plus Feature
  10. Logging

    • 10.1 Access Logs (access_log)
    • 10.2 Error Logs (error_log)
    • 10.3 Log Rotation
  11. Troubleshooting

    • 11.1 Common Error Messages
    • 11.2 Debugging Configuration Issues
    • 11.3 Checking NGINX Status and Processes
  12. Advanced Topics

    • 12.1 NGINX as an API Gateway
    • 12.2 Streaming Media with NGINX
    • 12.3 Using NGINX with Docker and Kubernetes
    • 12.4 NGINX Unit
    • 12.5 NGINX Service Mesh
  13. Conclusion


1. What is NGINX?

NGINX (pronounced “engine-x”) is a high-performance, open-source web server, reverse proxy server, load balancer, HTTP cache, and mail proxy. It’s known for its stability, rich feature set, simple configuration, and low resource consumption. Unlike traditional web servers that create a new thread for each request, NGINX uses an asynchronous, event-driven architecture, allowing it to handle thousands of concurrent connections with a minimal memory footprint.

  • 1.1 A Brief History

NGINX was initially developed by Igor Sysoev in 2002 to address the C10k problem – the challenge of handling 10,000 concurrent connections on a single server. The first public release was in 2004. Its event-driven design proved highly effective, and NGINX quickly gained popularity, particularly for serving static content and acting as a reverse proxy in front of other application servers.

  • 1.2 NGINX vs. Apache: Key Differences

While both NGINX and Apache are popular web servers, they have fundamental differences in their architecture and how they handle requests:

| Feature | NGINX | Apache |
| --- | --- | --- |
| Architecture | Event-driven, asynchronous | Process-based or thread-based (MPM) |
| Concurrency | Handles many connections with low memory | Higher memory usage per connection |
| Static Content | Extremely efficient | Less efficient than NGINX |
| Dynamic Content | Typically used with a separate app server | Can handle directly (e.g., mod_php) |
| Configuration | Generally simpler and more concise | Can be more complex |
| Modules | Statically compiled (mostly) | Dynamically loaded (DSO) |
| Flexibility | Highly configurable, reverse proxy focus | Highly flexible, wider range of modules |

In short, NGINX excels at handling high concurrency and serving static content, while Apache offers more flexibility for dynamic content processing and a broader range of modules through its Dynamic Shared Object (DSO) system. Many modern web architectures use NGINX as a reverse proxy in front of Apache to leverage the strengths of both.

  • 1.3 NGINX Open Source vs. NGINX Plus

NGINX is available in two main editions:

  • NGINX Open Source: The free, open-source version. It includes the core web server, reverse proxy, and basic load balancing features. This is the version most users will start with.
  • NGINX Plus: A commercial version with additional features, including advanced load balancing (session persistence, health checks), active monitoring, enhanced security features (like a Web Application Firewall), and commercial support. NGINX Plus is geared towards enterprise deployments.

This guide primarily focuses on NGINX Open Source, but we’ll mention NGINX Plus features where relevant.

2. Core Concepts

Understanding the core concepts of NGINX is crucial for effective configuration and troubleshooting.

  • 2.1 Event-Driven Architecture

NGINX’s event-driven architecture is the key to its performance. Instead of creating a new thread or process for each incoming connection (like Apache’s traditional approach), NGINX uses a small number of worker processes. Each worker process can handle thousands of connections simultaneously by using non-blocking I/O operations and an event loop.

Here’s a simplified explanation:

  1. Non-Blocking I/O: When a worker process needs to read data from a client or send data to a backend server, it doesn’t wait (block) for the operation to complete. Instead, it registers an interest in the event (e.g., “data is available to read”) with the operating system.
  2. Event Loop: The worker process continuously monitors a set of file descriptors (representing connections, sockets, etc.) for events. When an event occurs (e.g., data becomes available, a connection is established), the operating system notifies the worker process.
  3. Event Handling: The worker process handles the event, performs the necessary processing (e.g., reading data, sending a response), and then goes back to monitoring for other events.

This approach allows a single worker process to manage many connections efficiently without the overhead of creating and managing numerous threads or processes.

  • 2.2 Master and Worker Processes

NGINX has a multi-process architecture consisting of a master process and one or more worker processes.

  • Master Process: The master process is responsible for:

    • Reading and validating the configuration file.
    • Creating, binding, and managing listening sockets.
    • Starting, stopping, and managing worker processes.
    • Handling signals (e.g., reload, graceful shutdown).
    • Re-opening log files.
  • Worker Processes: The worker processes are responsible for:

    • Accepting connections from clients.
    • Processing requests.
    • Communicating with backend servers (if configured as a reverse proxy).
    • Caching content (if caching is enabled).

The number of worker processes is typically configured in the NGINX configuration file (usually based on the number of CPU cores).
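
As a minimal sketch, the process model is tuned with a few directives in the main and events contexts. The values below are common packaged defaults, not tuned recommendations, and the nginx user is assumed to exist (as it does with most package installs):

```nginx
# Main (global) context
user  nginx;
worker_processes auto;           # typically one worker per CPU core

events {
    worker_connections 1024;     # max simultaneous connections per worker
}
```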

  • 2.3 Configuration File Structure

NGINX’s configuration is defined in a text-based configuration file, typically located at /etc/nginx/nginx.conf (although the location can vary depending on the installation). The configuration file uses a hierarchical structure with directives and contexts.

  • 2.3.1 Directives and Contexts

    • Directives: A directive is a configuration instruction that specifies a setting. It consists of a name and one or more parameters, ending with a semicolon (;). For example:
      ```nginx
      worker_processes 4;
      ```

      This directive sets the number of worker processes to 4.

    • Contexts: A context is a block of directives that applies to a specific scope. Contexts are enclosed in curly braces {}. Contexts can be nested within other contexts, creating a hierarchy. For example:
      ```nginx
      http {
          server {
              listen 80;
              server_name example.com;
              ...
          }
      }
      ```

      This example shows the http context containing a server context.

  • 2.3.2 Main Context

    The main context (also called the global context) is the outermost context in the configuration file. It contains directives that apply globally to NGINX. Common directives in the main context include:

    • user: Specifies the user and group that worker processes will run as.
    • worker_processes: Specifies the number of worker processes.
    • pid: Specifies the path to the file that stores the process ID (PID) of the master process.
    • include: Includes other configuration files. This is useful for organizing configurations into smaller, more manageable files.
  • 2.3.3 Events Context

    The events context contains directives that affect connection processing. Common directives include:

    • worker_connections: Specifies the maximum number of simultaneous connections that each worker process can handle.
    • use: Specifies the connection processing method (e.g., epoll, kqueue). NGINX usually selects the most efficient method automatically.
    • multi_accept: If enabled, a worker process will accept all new connections at once, rather than one at a time.
  • 2.3.4 HTTP Context

    The http context is the main context for configuring HTTP server behavior. It contains directives that apply to all virtual hosts (server blocks) within it. Common directives include:

    • include mime.types: Includes a file that defines MIME types for different file extensions.
    • default_type: Specifies the default MIME type to use if the type cannot be determined from the file extension.
    • sendfile: Enables or disables the use of the sendfile() system call for efficient file transfer.
    • keepalive_timeout: Specifies the timeout for keep-alive connections.
    • gzip: Enables or disables gzip compression.
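
    For illustration, here is a hedged sketch of an http context using these directives; the values shown are typical examples rather than recommendations from this guide:

    ```nginx
    http {
        include       mime.types;
        default_type  application/octet-stream;

        sendfile          on;
        keepalive_timeout 65;
        gzip              on;

        # server (virtual host) blocks are defined inside this context
    }
    ```
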
  • 2.3.5 Server Context

    The server context defines a virtual host. Each server block configures a specific domain name or IP address. Common directives include:

    • listen: Specifies the IP address and port that the server will listen on.
    • server_name: Specifies the domain name(s) that this server block should handle.
    • root: Specifies the document root directory for this server.
    • index: Specifies the default file(s) to serve when a directory is requested.
  • 2.3.6 Location Context

    The location context defines how NGINX should handle requests for specific URIs (paths) within a server block. location blocks can be nested and use various matching methods:

    • Prefix Match: Matches the beginning of the URI. For example, location /images { ... } matches /images/logo.png.
    • Exact Match: Matches the URI exactly. For example, location = / { ... } matches only the root URI (/).
    • Regular Expression Match: Uses regular expressions to match URIs. For example, location ~ \.php$ { ... } matches any URI ending in .php. Case-insensitive regular expressions use ~*.
    • Priority: NGINX uses a specific order of precedence to determine which location block to use:
      1. Exact matches (=)
      2. Prefix matches with the ^~ modifier (stops searching after a match)
      3. Regular expression matches (~ and ~*), in the order they appear in the configuration file.
      4. Longest matching prefix matches.

    Common directives within a location block include:

    • root: Overrides the root directive from the server context.
    • index: Overrides the index directive from the server context.
    • try_files: Attempts to serve files in the specified order, returning the first one found or a specified fallback.
    • proxy_pass: Passes the request to a backend server (used for reverse proxy configurations).
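
    To make the matching rules concrete, here is a small illustrative sketch; the paths and responses are hypothetical examples:

    ```nginx
    server {
        listen 80;
        server_name example.com;

        location = / {                    # exact match: only the root URI
            return 200 "home\n";
        }

        location ^~ /static/ {            # prefix match; if it wins, regex locations are skipped
            root /var/www/mysite;
        }

        location ~* \.(gif|jpg|png)$ {    # case-insensitive regular expression match
            expires 30d;
        }

        location / {                      # catch-all prefix match (lowest precedence)
            try_files $uri $uri/ =404;
        }
    }
    ```
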
  • 2.3.7 Upstream Context

    The upstream context defines a group of backend servers that NGINX can use for load balancing. It’s used in conjunction with the proxy_pass directive. Common directives include:

    • server: Specifies the address (IP address or domain name) and port of a backend server. You can also specify weights, max_fails, fail_timeout, and other parameters.
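
    A brief sketch of an upstream group referenced from proxy_pass; the group name and backend addresses are placeholders:

    ```nginx
    upstream backend_app {                         # hypothetical group name
        server 10.0.0.11:8080 weight=3;            # receives roughly 3x the traffic
        server 10.0.0.12:8080 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://backend_app;         # forward requests to the group
        }
    }
    ```
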
  • 2.3.8 Mail Context (Less Common)
    The mail context is used to configure NGINX as a mail proxy server.

  • 2.4 Request Processing Flow

A simplified overview of how NGINX processes a request:

  1. Client Connection: A client (e.g., a web browser) establishes a connection to NGINX.
  2. Worker Process Accepts: A worker process accepts the connection.
  3. Request Parsing: The worker process parses the HTTP request headers.
  4. Server Block Selection: NGINX determines which server block should handle the request based on the listen and server_name directives.
  5. Location Block Selection: NGINX determines which location block should handle the request based on the URI and the location matching rules.
  6. Request Handling:
    • Static Content: If the request is for static content (e.g., an image or CSS file), NGINX serves the file directly from the file system (using the root and index directives).
    • Dynamic Content (Reverse Proxy): If the request needs to be processed by a backend server (e.g., a PHP application), NGINX forwards the request to the backend server specified by the proxy_pass directive.
    • Other Actions: NGINX might perform other actions based on the configuration, such as caching, rewriting the URL, or applying security rules.
  7. Response Generation: NGINX receives the response from the backend server (or generates the response itself for static content).
  8. Response Headers: NGINX may modify the response headers (e.g., adding caching headers).
  9. Response Transmission: NGINX sends the response to the client.
  10. Connection Handling: NGINX either closes the connection or keeps it alive for subsequent requests (based on keep-alive settings).
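
As a hedged illustration of steps 4–6, a single server block might route static and dynamic requests like this; the backend address is a placeholder assumption:

```nginx
server {
    listen 80;                              # step 4: selected via listen/server_name
    server_name example.com;

    root /var/www/mysite;

    location /images/ {                     # steps 5-6: static files served from disk
        # e.g., /images/logo.png is read from /var/www/mysite/images/logo.png
    }

    location /app/ {                        # steps 5-6: dynamic requests proxied upstream
        proxy_pass http://127.0.0.1:8080;   # placeholder backend address
    }
}
```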

  • 2.5 Modules: Extending Functionality

NGINX’s functionality is extended through modules. Most modules are compiled directly into the NGINX binary. Some key modules include:

  • Core Modules: Provide fundamental functionality (e.g., HTTP core, event handling).
  • HTTP Modules: Handle HTTP-specific tasks (e.g., ngx_http_core_module, ngx_http_proxy_module, ngx_http_ssl_module, ngx_http_gzip_module).
  • Mail Modules: Handle mail proxy functionality.
  • Stream Modules: Handle TCP and UDP proxying (available since version 1.9.0).
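
In addition to statically compiled modules, NGINX 1.9.11 and later can load dynamic modules at startup via the load_module directive in the main context. The module and path below are only an example and depend on which module packages are installed and how the binary was built:

```nginx
# Main context, near the top of nginx.conf
load_module modules/ngx_http_image_filter_module.so;   # example dynamic module
```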

  • 2.6 Regular Expressions in NGINX
    NGINX makes extensive use of regular expressions, primarily within location blocks and for URL rewriting. NGINX uses the Perl Compatible Regular Expressions (PCRE) library. Here’s a quick overview of common regular expression syntax used in NGINX:

  • . (Dot): Matches any single character (except a newline).

  • * (Asterisk): Matches the preceding character zero or more times.
  • + (Plus): Matches the preceding character one or more times.
  • ? (Question Mark): Matches the preceding character zero or one time.
  • [] (Square Brackets): Matches any single character within the brackets. [a-z] matches any lowercase letter.
  • [^ ] (Caret inside Brackets): Matches any single character not within the brackets.
  • ^ (Caret): Matches the beginning of a string (or line, in multi-line mode).
  • $ (Dollar Sign): Matches the end of a string (or line, in multi-line mode).
  • \ (Backslash): Escapes a special character (e.g., \. matches a literal dot).
  • () (Parentheses): Groups characters together, often used with | for alternation.
  • | (Pipe): Alternation – matches either the expression before or the expression after the pipe.
  • \d: Matches any digit (equivalent to [0-9]).
  • \w: Matches any “word” character (alphanumeric plus underscore – equivalent to [a-zA-Z0-9_]).
  • \s: Matches any whitespace character (space, tab, newline, etc.).

Examples in NGINX:

  • location ~ \.php$ { ... }: Matches any URI ending in .php.
  • location ~* \.(gif|jpg|png)$ { ... }: Matches any URI ending in .gif, .jpg, or .png (case-insensitively).
  • rewrite ^/users/(.*)$ /show.php?user=$1 last;: Captures the part of the URI after /users/ and uses it as a parameter in a rewritten URL. The (.*) part is a capturing group, and $1 refers to the captured value.
  • if ($http_user_agent ~* (mobile|android)) { ... }: Checks whether the User-Agent string contains “mobile” or “android” (case-insensitively).

Important Considerations:

  • Case Sensitivity: The ~ operator performs a case-sensitive match, while ~* performs a case-insensitive match.
  • Capturing Groups: Parentheses () create capturing groups, allowing you to extract parts of the matched string and use them in rewrites or other directives (e.g., $1, $2, etc., refer to the captured groups).
  • Greediness: By default, quantifiers like * and + are “greedy,” meaning they match as much as possible. You can make them “lazy” (match as little as possible) by adding a ? after them (e.g., *?, +?). This is less common in NGINX configurations.
  • Escaping: Remember to escape special characters with a backslash (\) if you want to match them literally.
  • Performance: While powerful, overly complex regular expressions can impact performance. Use them judiciously and test their efficiency. Prefix matches are generally faster than regular expression matches.

3. Installation and Setup

The installation process for NGINX varies depending on the operating system.

  • 3.1 Installation on Linux (Various Distributions)

    • 3.1.1 Ubuntu/Debian

      The easiest way to install NGINX on Ubuntu or Debian is to use the apt package manager:

      ```bash
      sudo apt update
      sudo apt install nginx
      ```

    • 3.1.2 CentOS/RHEL/Fedora

      On CentOS, RHEL, or Fedora, use the yum (or dnf on newer Fedora versions) package manager:

      ```bash
      # CentOS/RHEL 7 and earlier:
      sudo yum install epel-release   # EPEL repository (often needed for NGINX)
      sudo yum install nginx

      # Fedora and newer CentOS/RHEL:
      sudo dnf install nginx
      ```

    • 3.1.3 Using the official NGINX repository

    For the latest stable or mainline version of NGINX, it’s recommended to use the official NGINX repository. This provides more up-to-date packages than the default distribution repositories. The steps involve adding the NGINX repository and then installing:

    Ubuntu/Debian:

    1. Install prerequisites:

       ```bash
       sudo apt install curl gnupg2 ca-certificates lsb-release
       ```

    2. Import the official NGINX signing key:

       ```bash
       curl -fsSL https://nginx.org/keys/nginx_signing.key | sudo apt-key add -
       ```

    3. Verify the key:

       ```bash
       apt-key fingerprint ABF5BD827BD9BF62
       ```

       The output should show the full fingerprint 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62.

    4. Add the repository (replace [CODENAME] with your distribution’s codename, e.g., focal, jammy, bullseye, buster; on Debian, use nginx.org/packages/debian instead of .../ubuntu):

       ```bash
       # For the stable version:
       echo "deb http://nginx.org/packages/ubuntu [CODENAME] nginx" | sudo tee /etc/apt/sources.list.d/nginx.list
       # For the mainline version:
       echo "deb http://nginx.org/packages/mainline/ubuntu [CODENAME] nginx" | sudo tee /etc/apt/sources.list.d/nginx.list
       ```

    5. Update the package index and install NGINX:

       ```bash
       sudo apt update
       sudo apt install nginx
       ```
    
    CentOS/RHEL:

    1. Create a file named /etc/yum.repos.d/nginx.repo with the following content (replace [OS] with rhel or centos and [VERSION] with your OS version, e.g., 7, 8, 9):

       ```
       [nginx-stable]
       name=nginx stable repo
       baseurl=http://nginx.org/packages/[OS]/[VERSION]/$basearch/
       gpgcheck=1
       enabled=1
       gpgkey=https://nginx.org/keys/nginx_signing.key
       module_hotfixes=true

       [nginx-mainline]
       name=nginx mainline repo
       baseurl=http://nginx.org/packages/mainline/[OS]/[VERSION]/$basearch/
       gpgcheck=1
       enabled=0
       gpgkey=https://nginx.org/keys/nginx_signing.key
       module_hotfixes=true
       ```

       To switch between stable (the default) and mainline, use:

       ```bash
       sudo yum-config-manager --enable nginx-mainline   # Enable mainline
       sudo yum-config-manager --disable nginx-stable    # Disable stable
       ```

    2. Install NGINX:

       ```bash
       sudo yum install nginx
       ```
    
  • 3.2 Installation on Windows

    • Download: Download the latest stable version of NGINX for Windows from the official website (nginx.org). It comes as a ZIP archive.

    • Extract: Extract the ZIP archive to a directory of your choice (e.g., C:\nginx).
    • Run: Open a command prompt, navigate to the NGINX directory, and run nginx.exe.

    Important Notes for Windows:

    • NGINX on Windows is less commonly used for production deployments than on Linux.
    • The Windows version has some limitations compared to the Linux version.
    • You may need to adjust firewall settings to allow incoming connections on port 80 (and 443 for HTTPS).
    • NGINX on Windows uses the select() connection-processing method, which limits a worker process to 1024 simultaneous connections.
  • 3.3 Installation from Source (Advanced)

    Installing NGINX from source gives you the most control over the build process and allows you to include specific modules or apply patches. This is generally recommended only for advanced users. Here’s a general outline:

    1. Install Dependencies: You’ll need a C compiler (e.g., GCC), the PCRE library, the zlib library, and the OpenSSL library (for HTTPS support). The specific package names vary depending on your Linux distribution.
    2. Download Source Code: Download the latest stable source code tarball from nginx.org.
    3. Extract: Extract the tarball.
    4. Configure: Run the ./configure script with desired options. Common options include:
      • --prefix: Specifies the installation directory.
      • --with-http_ssl_module: Enables HTTPS support.
      • --with-http_v2_module: Enables HTTP/2 support.
      • --with-http_realip_module: Enables the Real IP module.
      • --with-http_addition_module: Enables adding text before or after a response.
      • --with-http_sub_module: Enables replacing text in a response.
      • --with-http_gzip_static_module: Enables serving pre-compressed (.gz) files.
      • --with-stream: Enables the stream (TCP/UDP) proxy module.
      • --with-mail: Enables the mail proxy module.
      • --add-module: Includes a third-party module.
    5. Compile: Run make.
    6. Install: Run sudo make install.
  • 3.4 Verifying Installation

    After installation, verify that NGINX is running:

    • Systemd (most modern Linux distributions):
      ```bash
      sudo systemctl status nginx
      ```

    • SysVinit (older Linux distributions):
      ```bash
      sudo service nginx status
      ```

    • Windows: Check the Task Manager for the nginx.exe processes.

    You should also be able to access the default NGINX welcome page by opening a web browser and navigating to http://localhost (or your server’s IP address).

  • 3.5 Basic NGINX Commands (start, stop, reload, status)

    • Systemd:

      ```bash
      sudo systemctl start nginx     # Start NGINX
      sudo systemctl stop nginx      # Stop NGINX
      sudo systemctl restart nginx   # Restart NGINX
      sudo systemctl reload nginx    # Reload configuration (graceful)
      sudo systemctl status nginx    # Check status
      sudo systemctl enable nginx    # Enable NGINX to start on boot
      sudo systemctl disable nginx   # Disable NGINX from starting on boot
      ```

    • SysVinit:

      ```bash
      sudo service nginx start
      sudo service nginx stop
      sudo service nginx restart
      sudo service nginx reload
      sudo service nginx status
      ```

    • Windows (from the NGINX directory):

      ```bash
      nginx.exe            # Start NGINX
      nginx.exe -s stop    # Fast shutdown
      nginx.exe -s quit    # Graceful shutdown
      nginx.exe -s reload  # Reload configuration
      nginx.exe -s reopen  # Reopen log files
      ```

    • Using the NGINX binary directly:

      ```bash
      sudo /usr/sbin/nginx -t          # Check config for syntax errors
      sudo /usr/sbin/nginx -s reload   # Gracefully reload configuration
      ```

    Important Note: The reload command is crucial. It applies configuration changes without dropping existing connections, making it ideal for updating NGINX in a production environment. It does so by starting new worker processes with the updated config, and gracefully shutting down the old worker processes after they have finished handling existing requests.

4. Basic Configuration: Serving Static Content

This section covers serving static files (HTML, CSS, JavaScript, images, etc.) – a fundamental use case for NGINX.

  • 4.1 Default Configuration File

    The main configuration file is usually located at /etc/nginx/nginx.conf. Many distributions also use an include directive to load additional configuration files from /etc/nginx/conf.d/. A default virtual host is often configured in /etc/nginx/conf.d/default.conf.

  • 4.2 Creating a Simple Virtual Host

    Let’s create a new virtual host to serve static content from a specific directory. Create a new configuration file:

    ```bash
    sudo nano /etc/nginx/conf.d/mysite.conf
    ```

    Add the following configuration:

    ```nginx
    server {
        listen 80;
        server_name example.com www.example.com;  # Replace with your domain name

        root /var/www/mysite;   # Replace with your document root
        index index.html index.htm;

        location / {
            try_files $uri $uri/ =404;
        }
    }
    ```

    • listen 80;: This tells NGINX to listen for connections on port 80 (the standard HTTP port).
    • server_name example.com www.example.com;: This specifies the domain names that this server block should handle. Replace this with your actual domain name (or use localhost for testing). If you don’t have a domain name, you can omit this line; requests that don’t match any other server_name are handled by the default server for that port (the first server block defined, or the one marked default_server).
    • root /var/www/mysite;: This sets the document root – the directory where NGINX will look for files. Create this directory if it doesn’t exist:
      ```bash
      sudo mkdir /var/www/mysite
      ```
    • index index.html index.htm;: This specifies the default files to serve when a directory is requested. NGINX will try to serve index.html first, then index.htm.
    • location / { ... }: This location block handles all requests to the root URI (/).
      • try_files $uri $uri/ =404;: This is a very common and important directive. It tells NGINX to:
        1. Try to serve the requested URI as a file ($uri).
        2. If that fails, try to serve the requested URI as a directory ($uri/).
        3. If both fail, return a 404 Not Found error.
  • 4.3 Setting up root and index Directives (Explained above)
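
    The directives themselves appear in the virtual host above. One detail worth spelling out is how root combines with the full request URI to build the file path; a minimal sketch using the same example paths:

    ```nginx
    location / {
        root /var/www/mysite;
        # A request for /css/style.css is served from
        # /var/www/mysite/css/style.css (root + request URI).
    }
    ```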

  • 4.4 Testing Static Content Serving

    1. Create an index.html file:
      ```bash
      sudo nano /var/www/mysite/index.html
      ```

      Add some basic HTML content:
      ```html
      <!DOCTYPE html>
      <html>
        <head>
          <title>My Website</title>
        </head>
        <body>
          <h1>Hello from NGINX!</h1>
        </body>
      </html>
      ```
    2. Check Configuration Syntax:
      ```bash
      sudo nginx -t
      ```

      This command checks the NGINX configuration for syntax errors. Fix any errors reported before proceeding.
    3. Reload NGINX:
      ```bash
      sudo systemctl reload nginx
      ```
    4. Access the Website: Open a web browser and navigate to http://example.com (or http://localhost if you’re testing locally). You should see the “Hello from NGINX!” message. If you are using a domain name, make sure it’s properly configured to point to your server’s IP address (in your DNS settings).

5. Reverse Proxy Configuration

  • 5.1 What is a Reverse Proxy?

    A reverse proxy is a server that sits in front of one or more backend servers, forwarding client requests to those servers and returning the responses to the clients. To the client, it appears as if the reverse proxy itself is serving the content.

    Benefits of Using a Reverse Proxy:

    • Load Balancing: Distribute traffic across multiple backend servers to improve performance and availability.
    • Security: Hide the internal network structure and protect backend servers from direct exposure.
    • SSL/TLS Encryption: Handle SSL/TLS encryption and decryption, offloading this task from backend servers.
    • Caching: Cache static content and backend responses at the proxy layer to reduce load on backend servers and improve response times.
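
    To preview the proxy_pass directive covered in the next subsections, here is a minimal, hedged reverse proxy sketch; the hostname, the backend address 127.0.0.1:3000, and the header choices are illustrative assumptions:

    ```nginx
    server {
        listen 80;
        server_name app.example.com;                   # placeholder hostname

        location / {
            proxy_pass http://127.0.0.1:3000;          # placeholder backend address
            proxy_set_header Host $host;               # preserve the original Host header
            proxy_set_header X-Real-IP $remote_addr;   # pass the client IP to the backend
        }
    }
    ```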
