Nginx: Free Download & Easy Installation Instructions – A Comprehensive Guide
Introduction
In the world of web servers, Apache has long held a dominant position. However, in recent years, Nginx (pronounced “engine-x”) has emerged as a powerful, versatile, and increasingly popular alternative. Nginx is known for its high performance, stability, rich feature set, simple configuration, and low resource consumption. It’s not just a web server; it’s also a reverse proxy, load balancer, mail proxy, and HTTP cache. This makes it a crucial component in many modern web architectures.
This comprehensive guide will walk you through everything you need to know about Nginx:
- What is Nginx and why is it so popular? We’ll delve into its architecture and advantages.
- Free Download: Where and how to obtain the correct Nginx version for your operating system.
- Easy Installation Instructions: Step-by-step guides for various platforms (Linux distributions, Windows, macOS).
- Basic Configuration: Understanding the core configuration files and directives.
- Advanced Configuration: Exploring features like reverse proxying, load balancing, SSL/TLS setup, and more.
- Troubleshooting Common Issues: Tips for diagnosing and resolving problems.
- Use Cases: Real-world examples of how Nginx is used.
- Nginx vs. Apache: A direct comparison of these two popular web servers.
- Nginx Plus: A brief look at the commercial version and its added features.
1. What is Nginx and Why is it Popular?
Nginx was created by Igor Sysoev in 2002 and first publicly released in 2004. It was designed to address the “C10k problem” – the challenge of handling ten thousand concurrent connections. Traditional web servers, like Apache, often struggled with this level of load, leading to performance bottlenecks.
Key Differences and Advantages of Nginx:
- Event-Driven Architecture: This is the core of Nginx's performance advantage. Instead of creating a new thread or process for each incoming request (like Apache's traditional model), Nginx uses an asynchronous, event-driven approach. A small number of worker processes handle multiple connections concurrently. Each worker process listens for events (new connections, data ready to be read, etc.) and processes them efficiently. This minimizes overhead and allows Nginx to handle a massive number of connections with minimal resource usage.
- Lightweight and Efficient: Because of its event-driven architecture, Nginx consumes significantly less memory and CPU than process-based servers. This makes it ideal for high-traffic websites, resource-constrained environments, and virtualized setups.
- Reverse Proxy and Load Balancing: Nginx excels as a reverse proxy. It sits in front of one or more backend servers, forwarding client requests to the appropriate server. This offers several benefits:
  - Load Balancing: Distributes traffic across multiple backend servers, preventing overload on any single server and improving overall performance and availability. Nginx supports various load-balancing algorithms (round-robin, least connections, IP hash, etc.).
  - Security: Hides the internal network structure and protects backend servers from direct exposure to the internet.
  - Caching: Caches static content (images, CSS, JavaScript) from backend servers, reducing load and improving response times.
  - SSL/TLS Termination: Handles SSL/TLS encryption and decryption, offloading this computationally intensive task from backend servers.
- Static Content Handling: Nginx is exceptionally fast at serving static content. It can serve files directly from the file system without involving backend application servers, significantly speeding up delivery.
- Modular Architecture: Nginx is built with a modular design, so you can enable or disable specific features (modules) based on your needs. This keeps the core server lightweight and allows for customization.
- Easy Configuration: While powerful, Nginx's configuration files are generally considered easier to read and understand than Apache's. The syntax is clean and consistent.
- Active Community and Support: Nginx has a large and active community, providing ample documentation, tutorials, and support forums.
2. Free Download: Obtaining Nginx
Nginx is open-source software and is available for free download. There are two main versions:
- Nginx Open Source: The free, community-supported version. This is what we’ll focus on in this guide.
- Nginx Plus: A commercial version with additional features, support, and pre-built modules.
You can download Nginx Open Source from several sources:
- Official Nginx Website (nginx.org): This is the primary source. You'll find pre-built packages for various operating systems and source code for compiling it yourself. Go to http://nginx.org/en/download.html.
- Operating System Package Managers: Most Linux distributions include Nginx in their official repositories. This is often the easiest and recommended way to install Nginx. We'll cover specific commands for different distributions in the installation section.
- Third-Party Repositories: Some third-party repositories (like EPEL for CentOS/RHEL) may offer more up-to-date versions of Nginx than the default repositories.
Choosing the Right Version:
- Mainline vs. Stable: Nginx offers two main release branches:
  - Mainline: Contains the latest features and bug fixes. It's updated more frequently but may be less stable.
  - Stable: Considered more stable and recommended for production environments. It receives fewer updates but focuses on bug fixes and security patches. For most users, the Stable branch is the best choice.
- Operating System Compatibility: Ensure you download the correct package for your operating system and architecture (32-bit or 64-bit).
3. Easy Installation Instructions
The installation process varies depending on your operating system. Here are detailed instructions for the most common platforms:
3.1. Linux Distributions
3.1.1. Debian/Ubuntu (and derivatives like Linux Mint, Pop!_OS)
Debian and Ubuntu use the `apt` package manager.
- Update the package index:

  ```bash
  sudo apt update
  ```

- Install Nginx:

  ```bash
  sudo apt install nginx
  ```

- Start Nginx:

  ```bash
  sudo systemctl start nginx
  ```

- Enable Nginx to start on boot:

  ```bash
  sudo systemctl enable nginx
  ```

- Verify the installation: Open a web browser and navigate to your server's IP address or domain name. You should see the default Nginx welcome page.

- Check the Nginx status:

  ```bash
  sudo systemctl status nginx
  ```
3.1.2. CentOS/RHEL/Fedora (and derivatives like Rocky Linux, AlmaLinux)
CentOS/RHEL/Fedora use the `yum` (older versions) or `dnf` (newer versions) package manager.
- Install the EPEL repository (for CentOS/RHEL; not needed on Fedora):

  ```bash
  sudo yum install epel-release   # For older versions
  sudo dnf install epel-release   # For newer versions
  ```

- Install Nginx:

  ```bash
  sudo yum install nginx   # For older versions
  sudo dnf install nginx   # For newer versions
  ```

- Start Nginx:

  ```bash
  sudo systemctl start nginx
  ```

- Enable Nginx to start on boot:

  ```bash
  sudo systemctl enable nginx
  ```

- Verify the installation: Open a web browser and navigate to your server's IP address or domain name.

- Check the Nginx status:

  ```bash
  sudo systemctl status nginx
  ```

- Firewall configuration (CentOS/RHEL): If you have a firewall enabled (like `firewalld`), you need to allow HTTP (port 80) and HTTPS (port 443) traffic:

  ```bash
  sudo firewall-cmd --permanent --add-service=http
  sudo firewall-cmd --permanent --add-service=https
  sudo firewall-cmd --reload
  ```
3.1.3. Arch Linux
Arch Linux uses the `pacman` package manager.
- Install Nginx:

  ```bash
  sudo pacman -S nginx
  ```

- Start Nginx:

  ```bash
  sudo systemctl start nginx
  ```

- Enable Nginx to start on boot:

  ```bash
  sudo systemctl enable nginx
  ```

- Verify the installation: Open a web browser and navigate to your server's IP address.

- Check the Nginx status:

  ```bash
  sudo systemctl status nginx
  ```
3.2. Windows
- Download: Go to the Nginx website (http://nginx.org/en/download.html) and download the Windows version (e.g., `nginx/Windows-x.x.x.zip`).

- Extract: Extract the downloaded ZIP file to a directory of your choice (e.g., `C:\nginx`).

- Start Nginx: Open a command prompt (as administrator), navigate to the Nginx directory, and run:

  ```
  start nginx
  ```

- Verify the installation: Open a web browser and navigate to http://localhost.

- Stop Nginx: In the same command prompt, run:

  ```
  nginx -s stop
  ```

  Other control signals:
  - `nginx -s quit`: graceful shutdown
  - `nginx -s reload`: reload the configuration
  - `nginx -s reopen`: reopen the log files

- Running Nginx as a Windows service: For a more robust setup, you can use a third-party tool like NSSM (the Non-Sucking Service Manager) to run Nginx as a Windows service. This ensures it starts automatically on boot and restarts if it crashes. Download NSSM from https://nssm.cc/.
  - Extract NSSM.
  - Open a command prompt as administrator.
  - Run `nssm install nginx`.
  - In the NSSM GUI, set the "Path" to your `nginx.exe` file (e.g., `C:\nginx\nginx.exe`).
  - Click "Install service".
  - You can now start, stop, and manage the Nginx service through the Windows Services manager (`services.msc`).
3.3. macOS
- Using Homebrew (recommended): Homebrew is a popular package manager for macOS. If you don't have it installed, follow the instructions at https://brew.sh/. Then run:

  ```bash
  brew install nginx
  ```

- Start Nginx:

  ```bash
  brew services start nginx
  ```

- Verify the installation: Open a web browser and navigate to http://localhost:8080. The Homebrew build of Nginx listens on port 8080 by default instead of 80.

- Stop Nginx:

  ```bash
  brew services stop nginx
  ```

- Restart Nginx:

  ```bash
  brew services restart nginx
  ```
3.4. Docker
If you use Docker, running Nginx is incredibly simple:
- Pull the Nginx image:

  ```bash
  docker pull nginx
  ```

- Run the Nginx container:

  ```bash
  docker run -d -p 80:80 nginx
  ```

  - `-d`: runs the container in detached mode (in the background).
  - `-p 80:80`: maps port 80 on your host machine to port 80 inside the container.

- Verify: Open a web browser and go to http://localhost.
This will run a basic Nginx container. For more complex setups, you can use Docker Compose to define your services and configurations. You can also mount your own configuration files and website content into the container.
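Such a setup can also be described declaratively with Docker Compose. The sketch below mounts a custom server configuration and a directory of static content into the official image; the `nginx.conf` and `site/` paths are placeholder names, not from the original text:

```yaml
services:
  web:
    image: nginx:stable
    ports:
      - "80:80"
    volumes:
      # Mount a custom server configuration (read-only)
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
      # Mount static site content into the default document root
      - ./site:/usr/share/nginx/html:ro
```

Running `docker compose up -d` in the same directory then starts the container with your configuration and content in place.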
4. Basic Configuration
After installation, you’ll need to configure Nginx to serve your website or application. Nginx’s configuration is primarily controlled through text files.
4.1. Key Configuration Files:
- `/etc/nginx/nginx.conf` (Linux): This is the main configuration file. It includes global settings and often pulls in other configuration files from the `/etc/nginx/conf.d/` and `/etc/nginx/sites-enabled/` directories.
- `/usr/local/etc/nginx/nginx.conf` (macOS with Homebrew): The main config file location on a Homebrew installation.
- `conf/nginx.conf` (Windows): Located within the Nginx installation directory.
- `/etc/nginx/conf.d/`: This directory typically contains configuration files for specific virtual hosts or applications. Files ending in `.conf` in this directory are usually included automatically by the main `nginx.conf` file.
- `/etc/nginx/sites-available/` and `/etc/nginx/sites-enabled/`: A common convention (especially on Debian/Ubuntu systems) for managing virtual host configurations. `sites-available` contains all available virtual host configurations; `sites-enabled` contains symbolic links to the configurations in `sites-available` that you want to activate. This lets you enable or disable virtual hosts without deleting the configuration files.
4.2. Basic Structure of `nginx.conf`:
Nginx configuration files use a hierarchical structure with directives and blocks.
```nginx
# Global context (outside any blocks)
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

# Events block (connection processing)
events {
    worker_connections 1024;
}

# HTTP block (global HTTP settings)
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;    # Include virtual host configs
    include /etc/nginx/sites-enabled/*;  # Include enabled sites (Debian/Ubuntu)
}
```
Explanation of Key Directives:
- `user`: Specifies the user that worker processes will run as. It's important to use a non-privileged user (like `nginx` or `www-data`) for security reasons.
- `worker_processes`: Sets the number of worker processes. `auto` is usually the best option, as it automatically determines the optimal number based on the number of CPU cores.
- `error_log`: Specifies the path to the error log file.
- `pid`: Specifies the path to the file containing the process ID (PID) of the main Nginx process.
- `events { ... }`: Contains directives related to connection processing.
  - `worker_connections`: Sets the maximum number of simultaneous connections that each worker process can handle.
- `http { ... }`: Contains directives related to HTTP server configuration.
  - `include`: Includes other configuration files.
  - `default_type`: Sets the default MIME type for responses.
  - `log_format`: Defines the format of the access log.
  - `access_log`: Specifies the path to the access log file and the log format to use.
  - `sendfile`: Enables or disables the use of the `sendfile()` system call (for efficient file transfer).
  - `keepalive_timeout`: Sets the timeout for keep-alive connections.
  - `gzip`: Enables or disables gzip compression (to reduce the size of responses).
4.3. Creating a Simple Virtual Host (Server Block):
A virtual host (or server block) allows you to host multiple websites on a single Nginx server. Each virtual host has its own configuration, specifying the domain name, document root, and other settings.
Example (Debian/Ubuntu):
- Create a configuration file in `sites-available`:

  ```bash
  sudo nano /etc/nginx/sites-available/example.com
  ```

- Add the following configuration (replace `example.com` and `/var/www/example.com` with your actual domain name and document root):

  ```nginx
  server {
      listen 80;
      listen [::]:80;

      server_name example.com www.example.com;

      root /var/www/example.com;
      index index.html index.htm index.nginx-debian.html;

      location / {
          try_files $uri $uri/ =404;
      }
  }
  ```

- Create a symbolic link in `sites-enabled`:

  ```bash
  sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/
  ```

- Create the document root directory and an `index.html` file:

  ```bash
  sudo mkdir -p /var/www/example.com
  sudo nano /var/www/example.com/index.html
  ```

  Add some basic HTML content to `index.html`.

- Test the configuration:

  ```bash
  sudo nginx -t
  ```

  This command checks for syntax errors in your configuration files. If there are errors, it will provide details.

- Reload Nginx:

  ```bash
  sudo systemctl reload nginx
  ```

  This applies the configuration changes without restarting the server (a graceful reload).

- Update DNS: Make sure your domain's DNS records point to your server's IP address.

Now, when you visit `example.com` in your browser, you should see the content of your `index.html` file.
Explanation of Server Block Directives:
- `listen`: Specifies the port and IP address that the server block will listen on. `80` is the default HTTP port; `[::]:80` listens on IPv6.
- `server_name`: Specifies the domain name(s) that this server block should handle. You can include multiple domain names (e.g., `example.com www.example.com`).
- `root`: Specifies the document root directory, which is the base directory for serving files for this virtual host.
- `index`: Specifies the default files to serve if a directory is requested (e.g., `index.html`, `index.php`).
- `location / { ... }`: A location block that matches all requests (the root path `/`).
  - `try_files`: Attempts to serve files in the order specified. `$uri` refers to the requested URI, `$uri/` checks for a directory with the same name, and `=404` returns a 404 error if none of the files are found.
5. Advanced Configuration
Nginx offers a wide range of advanced features. Here are some of the most important ones:
5.1. Reverse Proxying
As mentioned earlier, Nginx excels as a reverse proxy. Here’s how to configure it:
```nginx
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://localhost:3000;  # Forward requests to a backend server
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
Explanation:
- `proxy_pass`: The core directive for reverse proxying. It specifies the URL of the backend server (in this case, `http://localhost:3000`). This could be a Node.js application, a Python/Django app, a Java/Tomcat server, or any other web application running on a different port or even a different machine.
- `proxy_set_header`: These directives set HTTP headers that are passed to the backend server. This is crucial for passing information about the original client request, such as the client's IP address, the host name, and the protocol (HTTP or HTTPS).
  - `Host`: Passes the original `Host` header from the client.
  - `X-Real-IP`: Passes the client's IP address.
  - `X-Forwarded-For`: Appends the client's IP address to the `X-Forwarded-For` header (which can contain a chain of proxy servers).
  - `X-Forwarded-Proto`: Passes the protocol used by the client (http or https).
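Proxying WebSocket traffic needs two additional headers so the protocol upgrade handshake reaches the backend. A minimal sketch, assuming a backend on port 3000 and a `/ws/` path (both illustrative):

```nginx
server {
    listen 80;
    server_name example.com;

    location /ws/ {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;                  # WebSockets require HTTP/1.1
        proxy_set_header Upgrade $http_upgrade;  # Pass the client's Upgrade header through
        proxy_set_header Connection "upgrade";   # Ask the backend to switch protocols
        proxy_set_header Host $host;
    }
}
```

Without the `Upgrade` and `Connection` headers, Nginx strips the handshake and the WebSocket connection fails.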
5.2. Load Balancing
Nginx can distribute traffic across multiple backend servers. Here’s a basic example:
```nginx
upstream backend {
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend;  # Use the upstream block
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
Explanation:
- `upstream backend { ... }`: Defines a group of backend servers. You can give it any name (here, it's `backend`).
  - `server`: Specifies each backend server, along with its address (hostname or IP address) and optional parameters (like `weight`, `max_fails`, `fail_timeout`).
- `proxy_pass http://backend;`: In the `server` block, `proxy_pass` now refers to the `upstream` block, indicating that traffic should be distributed among the servers defined there.
Load Balancing Methods:
Nginx supports several load balancing methods:
- Round Robin (default): Requests are distributed sequentially to each server in the `upstream` block.
- Least Connections: Requests are sent to the server with the fewest active connections.
- IP Hash: Requests from the same client IP address are always sent to the same backend server (useful for maintaining session persistence).
- Weighted Round Robin/Least Connections: You assign weights to each server using the `weight` parameter within the `upstream` block:

```nginx
upstream backend {
    server backend1.example.com weight=3;
    server backend2.example.com;
    server backend3.example.com;
}
```
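A non-default method is selected by naming it at the top of the `upstream` block. The sketch below combines that with the optional failure-handling parameters mentioned above; the specific timeout and retry values are illustrative:

```nginx
upstream backend {
    least_conn;  # Or: ip_hash;

    # Mark a server as unavailable for 30s after 3 failed attempts
    server backend1.example.com max_fails=3 fail_timeout=30s;
    server backend2.example.com max_fails=3 fail_timeout=30s;

    # Only receives traffic when the servers above are unavailable
    server backend3.example.com backup;
}
```

This keeps traffic away from unhealthy backends automatically, while the `backup` server acts as a last resort.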
5.3. SSL/TLS Encryption (HTTPS)
To secure your website with HTTPS, you need an SSL/TLS certificate. You can obtain a free certificate from Let’s Encrypt or purchase one from a commercial certificate authority.
Using Let’s Encrypt (Certbot):
- Install Certbot: The installation process varies depending on your operating system. Refer to the Certbot website (https://certbot.eff.org/) for instructions.

- Obtain and install a certificate:

  ```bash
  sudo certbot --nginx -d example.com -d www.example.com
  ```

  This command automatically obtains a certificate for `example.com` and `www.example.com`, installs it, and modifies your Nginx configuration to use it. Certbot also sets up automatic renewal.
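For reference, the lines Certbot typically adds to a server block look roughly like the sketch below. The paths follow the usual `/etc/letsencrypt/live/` layout, but treat the exact contents as an approximation of what the tool generates:

```nginx
server {
    listen 443 ssl;
    server_name example.com www.example.com;

    # Certificate paths written by Certbot
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Shared TLS settings managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf;
}
```

Because Certbot manages these files, renewed certificates take effect without further configuration changes.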
Manual SSL/TLS Configuration:
If you’re not using Certbot, you’ll need to configure SSL/TLS manually:
- Obtain a certificate and key: Obtain your SSL/TLS certificate (`.crt` or `.pem` file) and private key (`.key` file) from your certificate authority.

- Modify the Nginx configuration:

  ```nginx
  server {
      listen 80;
      server_name example.com www.example.com;
      return 301 https://$host$request_uri;  # Redirect HTTP to HTTPS
  }

  server {
      listen 443 ssl;
      server_name example.com www.example.com;

      ssl_certificate /path/to/your/certificate.crt;
      ssl_certificate_key /path/to/your/private.key;

      ssl_protocols TLSv1.2 TLSv1.3;  # Recommended protocols
      ssl_ciphers 'TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384';  # Recommended ciphers
      ssl_prefer_server_ciphers off;

      root /var/www/example.com;
      index index.html index.htm;

      location / {
          try_files $uri $uri/ =404;
      }
  }
  ```
Explanation:
- First `server` block: Listens on port 80 (HTTP) and redirects all requests to HTTPS using a 301 (Permanent Redirect) status code.
- Second `server` block: Listens on port 443 (HTTPS) and handles the secure connection.
  - `ssl_certificate`: Specifies the path to your SSL/TLS certificate file.
  - `ssl_certificate_key`: Specifies the path to your private key file.
  - `ssl_protocols`: Specifies the SSL/TLS protocols to support. It's recommended to use TLSv1.2 and TLSv1.3.
  - `ssl_ciphers`: Specifies the encryption ciphers to use. The example provides a strong set of ciphers.
  - `ssl_prefer_server_ciphers off;`: Tells Nginx to honor the client's preferred cipher order, which is the current recommendation when only strong ciphers are configured.
- Test and reload:

  ```bash
  sudo nginx -t
  sudo systemctl reload nginx
  ```
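Once HTTPS is working, many deployments also send an HSTS header so browsers refuse to downgrade to HTTP. The one-year `max-age` below is a common but illustrative choice; only enable it once you are confident your HTTPS setup is stable, since browsers cache the policy:

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    # Instruct browsers to use HTTPS for this host (and subdomains) for one year
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
}
```

The `always` parameter makes Nginx attach the header to error responses as well, not just 2xx/3xx responses.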
5.4. Caching
Nginx can cache static content (images, CSS, JavaScript) and even dynamic content (with careful configuration) to improve performance and reduce load on backend servers.
Basic Static Content Caching:
```nginx
proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=my_cache:10m max_size=10g inactive=60m use_temp_path=off;

server {
    # ... other configuration ...

    location / {
        proxy_pass http://backend;
        proxy_cache my_cache;
        proxy_cache_valid 200 302 60m;
        proxy_cache_valid 404 1m;
        proxy_cache_use_stale error timeout invalid_header http_500 http_502 http_503 http_504;
        add_header X-Cache-Status $upstream_cache_status;  # For debugging
    }
}
```
Explanation:
- `proxy_cache_path`: Defines a cache zone.
  - `/data/nginx/cache`: The directory where cached content will be stored.
  - `levels=1:2`: Defines the directory structure for the cache (a two-level hierarchy).
  - `keys_zone=my_cache:10m`: Creates a shared memory zone named `my_cache` with a size of 10MB to store cache keys and metadata.
  - `max_size=10g`: Sets the maximum size of the cache to 10GB.
  - `inactive=60m`: Cached content that has not been accessed for 60 minutes is removed from the cache.
  - `use_temp_path=off`: Writes files directly into the cache directory instead of staging them in a temporary directory first.
- `proxy_cache my_cache;`: Enables caching for this location block using the `my_cache` zone.
- `proxy_cache_valid`: Specifies how long to cache responses with different status codes. Here, 200 (OK) and 302 (Found) responses are cached for 60 minutes, and 404 (Not Found) responses are cached for 1 minute.
- `proxy_cache_use_stale`: Allows stale cached content to be served if the backend server is unavailable (due to errors, timeouts, etc.).
- `add_header X-Cache-Status $upstream_cache_status`: Adds a custom header to the response indicating whether the content was served from the cache (HIT), missed the cache (MISS), or bypassed it (BYPASS). This is useful for debugging.
5.5. URL Rewriting (rewrite)
The `rewrite` directive allows you to modify the requested URL before it's processed by Nginx. This is useful for creating clean URLs, redirecting old URLs to new ones, and implementing various URL-based logic.
```nginx
server {
    # ... other configuration ...

    location / {
        rewrite ^/old-page$ /new-page permanent;     # Permanent redirect
        rewrite ^/users/(.*)$ /profile?user=$1 break;  # Internal rewrite
    }
}
```
Explanation:
- `rewrite regex replacement [flag];` is the general syntax.
- `^/old-page$`: This regular expression matches the exact URL `/old-page`. The `^` matches the beginning of the string, and the `$` matches the end.
- `/new-page`: The replacement URL.
- `permanent`: Specifies a permanent redirect (301 status code). Other flags include `redirect` (302 temporary redirect), `break` (stops processing further rewrite rules in the current location block), and `last` (stops processing further rewrite rules and starts a new search for a matching location).
- `^/users/(.*)$`: Matches any URL that begins with `/users/` and captures everything that comes after.
- `/profile?user=$1`: `$1` refers to the first captured group in the regular expression.
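For simple, fixed redirects, `return` is generally preferred over `rewrite` because it avoids regular-expression matching entirely. A minimal sketch of the same `/old-page` redirect:

```nginx
server {
    # Exact-match location plus return: cheaper than a regex rewrite
    location = /old-page {
        return 301 /new-page;
    }
}
```

The `= /old-page` exact-match location is checked before prefix and regex locations, so this rule also short-circuits any other rewrite logic for that URL.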
5.6. Limit Request Rate (limit_req)
Nginx can limit the rate of requests from a particular client to prevent abuse or denial-of-service attacks.
```nginx
limit_req_zone $binary_remote_addr zone=mylimit:10m rate=1r/s;

server {
    # ... other configuration ...

    location / {
        limit_req zone=mylimit burst=5 nodelay;
        # limit_req zone=mylimit burst=5;  # Without nodelay, burst requests are queued and delayed
    }
}
```
Explanation:
- `limit_req_zone $binary_remote_addr zone=mylimit:10m rate=1r/s;`: Defines a shared memory zone (`mylimit`) to track request rates.
  - `$binary_remote_addr`: Uses the client's IP address as the key for tracking.
  - `zone=mylimit:10m`: Creates a zone named `mylimit` with a size of 10MB.
  - `rate=1r/s`: Limits requests to 1 request per second.
- `limit_req zone=mylimit burst=5 nodelay;`: Applies the rate limit to the location block.
  - `zone=mylimit`: Uses the `mylimit` zone.
  - `burst=5`: Allows a burst of up to 5 requests above the rate limit.
  - `nodelay`: Processes requests within the burst allowance immediately instead of queuing them. Without this option, burst requests are delayed so that the overall rate still matches the limit.
5.7. Access Control (allow, deny)
You can control access to specific locations based on IP address or other criteria.
```nginx
location /admin {
    allow 192.168.1.0/24;
    allow 10.0.0.1;
    deny all;
}
```
Explanation:
- `allow`: Allows access from the specified IP address or network.
- `deny`: Denies access from the specified IP address or network. `deny all;` denies access from all other addresses.
- The order of `allow` and `deny` directives is important: Nginx processes them in the order they appear, and the first matching rule is applied.
6. Troubleshooting Common Issues
Here are some common Nginx issues and how to troubleshoot them:
- Nginx won't start:
  - Check for syntax errors: Use `sudo nginx -t` to check for syntax errors in your configuration files.
  - Check the error logs: Look in the error log file (usually `/var/log/nginx/error.log`) for detailed error messages.
  - Check for port conflicts: Make sure another process isn't already using port 80 or 443. You can use `sudo netstat -tulnp | grep :80` (or `:443`) to check.
  - Check permissions: Ensure the Nginx user has the necessary permissions to access the document root directories and configuration files.

- 502 Bad Gateway error:
  - Backend server down: This usually indicates that the backend server (the application Nginx is proxying to) is down or unresponsive. Check the backend server's logs and status.
  - Incorrect proxy configuration: Verify that the `proxy_pass` directive points to the correct backend server address and port.
  - Timeout issues: If the backend server takes too long to respond, Nginx may return a 502 error. You can adjust the proxy timeout settings (e.g., `proxy_connect_timeout`, `proxy_send_timeout`, `proxy_read_timeout`).

- 403 Forbidden error:
  - File permissions: Make sure the Nginx user has read permissions on the files and directories you're trying to serve. Use `ls -l` to check permissions.
  - `index` directive: Ensure that an appropriate index file (e.g., `index.html`, `index.php`) exists in the document root directory.