Introduction to Graylog: Setup and Configuration
Table of Contents
1. Introduction to Centralized Logging and Graylog
- The Importance of Centralized Logging
- What is Graylog?
- Key Features and Benefits
- Graylog Architecture (Single Node vs. Cluster)
- Use Cases
2. Prerequisites and System Requirements
- Hardware Requirements (CPU, RAM, Disk Space)
- Sizing for Small, Medium, and Large Deployments
- Software Requirements
- Operating System (Linux distributions: Ubuntu, CentOS, Debian)
- Java (OpenJDK or Oracle JDK)
- MongoDB
- Elasticsearch (or OpenSearch)
- Network Requirements
- Firewall Ports
3. Installation (Step-by-Step Guide)
- Option 1: Using Package Managers (apt, yum, dnf)
- Ubuntu/Debian (apt)
- CentOS/RHEL (yum/dnf)
- Adding the Graylog Repository
- Installing Graylog, MongoDB, and Elasticsearch/OpenSearch
- Option 2: Manual Installation (from .tar.gz archives)
- Downloading the Archives
- Extracting the Archives
- Setting up System Users and Permissions
- Option 3: Docker and Docker Compose
- Benefits of using Docker
- Example `docker-compose.yml` file
- Starting and Managing the Containers
- Option 4: Kubernetes
- Deploying Graylog Using Helm Charts
4. Initial Configuration
- Graylog Configuration File (`server.conf`)
- `password_secret` (Generating and Setting)
- `root_password_sha2` (Generating and Setting the Admin Password)
- `http_bind_address` (Setting the Listening Interface and Port)
- `elasticsearch_hosts` (Connecting to Elasticsearch/OpenSearch)
- `mongodb_uri` (Connecting to MongoDB)
- Other Important Configuration Options: `timezone`, `message_journal_max_age`, `message_journal_max_size`, `output_batch_size`, `processbuffer_processors`, `outputbuffer_processors`
- MongoDB Configuration
- Basic Security (Setting up Authentication – Optional but Recommended)
- Elasticsearch/OpenSearch Configuration
- `cluster.name`
- `node.name`
- `network.host`
- `http.port`
- `discovery.seed_hosts` (for multi-node clusters)
- Memory Settings (Heap Size)
- Template Optimization
5. Starting and Verifying Graylog
- Starting the Graylog Service (using `systemctl`, `service`, or Docker commands)
- Accessing the Graylog Web Interface
- Initial Login and User Interface Overview
- Checking Logs and Service Status
- Troubleshooting Common Startup Issues
6. Configuring Inputs
- Understanding Inputs
- Common Input Types
- Syslog (UDP, TCP, RELP)
- GELF (Graylog Extended Log Format)
- Beats (Filebeat, Metricbeat, etc.)
- Raw/Plaintext
- Kafka
- AWS CloudTrail, CloudWatch
- HTTP Inputs
- Creating and Configuring Inputs (Step-by-Step Examples)
- Syslog UDP Input
- GELF TCP Input
- Beats Input
- Global vs. Node-Specific Inputs
7. Configuring Extractors
- What are Extractors?
- Why Use Extractors?
- Extractor Types
- Regular Expressions (Regex)
- Grok
- JSON
- Substring
- Split & Index
- Lookup Tables
- Copy Input
- Creating and Managing Extractors (Examples)
- Extracting Fields from Syslog Messages using Regex
- Using Grok to Parse Complex Log Formats
- Extracting Data from JSON Payloads
8. Streams and Routing
- What are Streams?
- Stream Rules
- Matching Conditions (Field Values, Regular Expressions)
- Connecting Streams to Outputs
- Creating and Managing Streams (Examples)
- Creating a Stream for Apache Access Logs
- Routing Error Logs to a Separate Stream
- Using Multiple Stream Rules
9. Pipelines and Processing Rules
- What are Pipelines?
- What are Pipeline Rules?
- Stages and Ordering
- Common Pipeline Functions
- `set_field()`
- `remove_field()`
- `rename_field()`
- `route_to_stream()`
- `drop_message()`
- `grok()`
- `regex()`
- `json()`
- Lookup Table Functions
- Creating and Managing Pipelines (Examples)
- Enriching Log Data with GeoIP Information
- Dropping Unwanted Log Messages
- Converting Field Types
10. Configuring Outputs
- Understanding Outputs
- Common Output Types
- Elasticsearch/OpenSearch (Default)
- GELF Output
- Forwarding to other Graylog Instances
- HTTP Outputs (e.g., Webhooks)
- Script Outputs (for Custom Actions)
- Creating and Configuring Outputs (Examples)
- Setting up a GELF Output
11. Alerting and Notifications
- Alerting Concepts
- Alert Conditions
- Field Content Aggregation
- Message Count
- Field Aggregation
- Notification Types
- HTTP (Webhook)
- Slack
- PagerDuty
- Script
- Creating and Managing Alerts (Examples)
- Creating an Alert for High CPU Usage
- Sending Email Notifications for Critical Errors
12. User Management and Permissions
- Users and Roles
- Built-in Roles (Admin, Reader)
- Creating Custom Roles
- Assigning Permissions to Roles
- Managing Users
- LDAP/Active Directory Integration (Optional)
13. Dashboards and Widgets
- Creating Dashboards
- Adding Widgets
- Message Count
- Field Charts (Histograms, Pie Charts)
- Quick Values
- World Maps (for GeoIP Data)
- Custom Widgets
- Customizing Dashboards
14. Maintenance and Troubleshooting
- Regular Maintenance Tasks
- Monitoring Disk Space
- Managing Indices (Rotation, Deletion)
- Backing up Configuration
- Updating Graylog, MongoDB, and Elasticsearch/OpenSearch
- Troubleshooting Common Issues
- Connectivity Problems
- Performance Issues
- Input/Output Errors
- Processing Errors
- Searching for Specific Errors in Logs
15. Advanced Topics
- Clustering Graylog for High Availability and Scalability
- Setting up a Multi-Node Graylog Cluster
- Load Balancing
- Content Packs
- Creating and Using Content Packs
- Plugins
- API Usage
- Security Best Practices
- Hardening the Operating System
- Securing Network Communication (TLS/SSL)
- Implementing Authentication and Authorization
- Auditing
- Regular Security Updates
- Integrating with other Monitoring Tools
Detailed Sections (Expanded Content)
1. Introduction to Centralized Logging and Graylog
- The Importance of Centralized Logging: In modern IT environments, applications, servers, and network devices generate vast amounts of log data. Managing this data effectively is crucial for several reasons:
- Troubleshooting: Logs are the primary source of information for diagnosing application errors, performance bottlenecks, and system failures. Centralized logging makes it significantly easier to correlate events across different systems and pinpoint the root cause of problems.
- Security Monitoring: Logs contain valuable security-related information, such as login attempts, access violations, and suspicious activity. A centralized logging system enables security teams to monitor for threats, detect intrusions, and respond to incidents effectively.
- Compliance: Many regulatory frameworks (e.g., PCI DSS, HIPAA, GDPR) require organizations to collect, store, and audit log data. Centralized logging simplifies compliance by providing a single, auditable repository for logs.
- Business Intelligence: Log data can be analyzed to gain insights into user behavior, application usage patterns, and system performance trends. This information can be used to improve applications, optimize resource allocation, and make data-driven business decisions.
- Operational Monitoring: Get a clear picture of the state of your infrastructure.
- What is Graylog? Graylog is a powerful, open-source log management platform designed to collect, index, and analyze log data from various sources. It provides a user-friendly web interface for searching, filtering, visualizing, and alerting on log data.
- Key Features and Benefits:
- Centralized Log Collection: Gathers logs from diverse sources (syslog, GELF, Beats, etc.).
- Real-time Processing: Processes logs as they arrive, enabling immediate analysis and alerting.
- Powerful Search and Filtering: Uses a flexible query language to quickly find specific log events.
- Data Visualization: Creates dashboards and charts to visualize log data and identify trends.
- Alerting and Notifications: Triggers alerts based on predefined conditions and sends notifications via email, Slack, etc.
- Extensibility: Supports plugins and content packs to extend functionality and integrate with other tools.
- Scalability: Can be deployed as a single-node instance or a multi-node cluster to handle large volumes of log data.
- Open Source: Free to use and modify, with a large and active community.
- User and Permissions System: Limit and control access to data.
- Graylog Architecture (Single Node vs. Cluster):
- Single Node: Suitable for small deployments or testing environments. All components (Graylog server, MongoDB, Elasticsearch/OpenSearch) run on a single machine.
- Cluster: Recommended for production environments with high log volumes or high availability requirements. Components are distributed across multiple nodes for improved performance, scalability, and fault tolerance. A typical cluster setup involves:
- Multiple Graylog Server Nodes: Handle log processing, web interface, and API requests.
- Multiple Elasticsearch/OpenSearch Nodes: Store and index log data. A cluster typically has at least three nodes for data redundancy.
- MongoDB Replica Set: Stores Graylog configuration data. A replica set provides high availability and data redundancy.
- Message Flow:
  - Logs arrive at an input.
  - The input passes each message on for stream matching.
  - Streams check whether the message matches their rules and, if so, accept it.
  - Pipeline rules attached to the matched streams are processed.
  - The message is written to any connected outputs, which by default store it in Elasticsearch/OpenSearch.
- Use Cases:
- Application Monitoring: Troubleshooting application errors, performance monitoring, and debugging.
- Security Information and Event Management (SIEM): Detecting security threats, investigating incidents, and auditing security events.
- Infrastructure Monitoring: Monitoring server health, network performance, and resource utilization.
- Compliance Auditing: Meeting regulatory requirements for log data collection and retention.
- Business Intelligence: Analyzing user behavior, application usage, and business trends.
2. Prerequisites and System Requirements
- Hardware Requirements:
- Small Deployment (e.g., testing, small business):
- CPU: 2+ cores
- RAM: 4GB+
- Disk Space: 50GB+ (depends heavily on log retention policy)
- Medium Deployment (e.g., medium-sized business, departmental use):
- CPU: 4+ cores
- RAM: 8GB+
- Disk Space: 200GB+
- Large Deployment (e.g., enterprise, high log volume):
- CPU: 8+ cores (per node)
- RAM: 16GB+ (per node)
- Disk Space: 500GB+ (per node, consider using fast storage like SSDs)
- Note: These are general guidelines. Actual requirements will vary based on the volume of logs generated, the complexity of processing, and the desired retention period. It’s essential to monitor resource usage and scale accordingly.
- Software Requirements:
- Operating System: Graylog is primarily designed for Linux. Supported distributions include:
- Ubuntu 20.04 LTS, 22.04 LTS (and later)
- CentOS/RHEL 7, 8, 9
- Debian 10, 11, 12
- Java: Graylog requires Java (OpenJDK or Oracle JDK). Java 17 is the recommended and supported version. You should verify the specific Java version required by your Graylog version. Always consult the official Graylog documentation for the latest recommendations.
- To check the Java version: `java -version`
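Since the required Java version differs between Graylog releases, it can be worth scripting the check; a minimal sketch, assuming `java -version` reports a `version "17.0.x"`-style string on its first line:

```shell
#!/bin/sh
# Extract the Java major version from a `java -version` style string.
java_major() {
  # e.g. 'openjdk version "17.0.8" 2023-07-18' -> 17
  printf '%s\n' "$1" | sed -n 's/.*version "\([0-9][0-9]*\).*/\1/p'
}

VERSION_LINE=$(java -version 2>&1 | head -n 1)
MAJOR=$(java_major "$VERSION_LINE")
if [ "${MAJOR:-0}" -ge 17 ]; then
  echo "Java $MAJOR detected: OK for Graylog 5.x"
else
  echo "Java 17 or newer required (found: ${VERSION_LINE})" >&2
fi
```

The function only parses the first number inside the quotes, so very old `1.8.0_x`-style strings correctly fail the check.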
- MongoDB: Graylog uses MongoDB to store configuration data. MongoDB 4.4, 5.0, and 6.0 are supported. Again, check the Graylog documentation for compatibility.
- Elasticsearch/OpenSearch: Graylog uses Elasticsearch or OpenSearch to store and index log data.
- Elasticsearch: Version 7.10.2 is the last compatible version of Elasticsearch. Later versions are not supported.
- OpenSearch: Graylog supports OpenSearch 1.x and 2.x. OpenSearch is generally the recommended option, especially for new deployments, as it’s the actively developed fork of Elasticsearch.
- Important: Choose either Elasticsearch or OpenSearch, not both.
- Network Requirements:
- Firewall Ports: The following ports need to be open for Graylog to function correctly:
- 9000 (TCP): Graylog web interface and API.
- 514 (UDP/TCP): Standard Syslog port (if using Syslog inputs).
- 12201 (UDP/TCP): GELF default port (if using GELF inputs).
- 9200 (TCP): Elasticsearch/OpenSearch HTTP API.
- 9300 (TCP): Elasticsearch/OpenSearch transport protocol (for inter-node communication in a cluster).
- 27017 (TCP): MongoDB default port.
- Any other custom ports used by your chosen inputs or outputs.
- DNS Resolution: Ensure that all nodes in a cluster can resolve each other’s hostnames.
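On hosts running `firewalld`, the ports above can be opened in one pass. This is a sketch, not a definitive ruleset: trim the list to the inputs you actually use, and keep 9200/9300 and 27017 closed to untrusted networks (they should only be reachable by cluster members).

```shell
# Open the externally-facing Graylog ports listed above (firewalld example).
# 9200/9300 and 27017 are deliberately omitted: expose those only on
# trusted/internal interfaces.
for port in 9000/tcp 514/tcp 514/udp 12201/tcp 12201/udp; do
  sudo firewall-cmd --permanent --add-port="$port"
done
sudo firewall-cmd --reload
```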
3. Installation (Step-by-Step Guide)
This section will cover multiple installation methods. Choose the one that best suits your environment.
- Option 1: Using Package Managers (apt, yum, dnf) (Recommended for most users)
- Ubuntu/Debian (apt):

```bash
# Update package lists
sudo apt update

# Install prerequisite packages
sudo apt install apt-transport-https openjdk-17-jre-headless uuid-runtime pwgen

# Download and install the Graylog repository package
wget https://packages.graylog2.org/repo/packages/graylog-5.1-repository_latest.deb  # Adjust version as needed
sudo dpkg -i graylog-5.1-repository_latest.deb

# Update package lists again
sudo apt update

# Install Graylog, MongoDB, and OpenSearch
# (mongodb-org and opensearch come from the vendors' own repositories,
# which must be configured separately)
sudo apt install graylog-server mongodb-org opensearch

# Enable and start services
sudo systemctl enable --now mongod.service
sudo systemctl enable --now opensearch.service
sudo systemctl enable --now graylog-server.service
```
- CentOS/RHEL (yum/dnf):

```bash
# Install prerequisite packages
sudo yum install java-17-openjdk-headless pwgen  # or dnf

# Install the Graylog repository
sudo rpm -Uvh https://packages.graylog2.org/repo/packages/graylog-5.1-repository_latest.rpm  # Adjust version

# Install Graylog, MongoDB, and OpenSearch
# (mongodb-org and opensearch come from the vendors' own repositories)
sudo yum install graylog-server mongodb-org opensearch  # or dnf

# Enable and start services
sudo systemctl enable --now mongod.service
sudo systemctl enable --now opensearch.service
sudo systemctl enable --now graylog-server.service
```

Note: If `dnf` gives you a GPG key error, you can usually resolve it by running `sudo rpm --import https://packages.graylog2.org/repo/debian/keyring.gpg` before installing the packages.
- Option 2: Manual Installation (from .tar.gz archives)
This method is more involved but provides more control over the installation process.
- Download the Archives: Download the latest Graylog, MongoDB, and OpenSearch archives from their respective websites.
- Extract the Archives: Extract the archives to appropriate directories (e.g., `/opt/graylog`, `/opt/mongodb`, `/opt/opensearch`).
- Create System Users: Create dedicated system users for each service (e.g., `graylog`, `mongod`, `opensearch`). This enhances security. Do not run these services as root.

```bash
sudo useradd -r -M -s /bin/false graylog
sudo useradd -r -M -s /bin/false mongod
sudo useradd -r -M -s /bin/false opensearch
```

- Set Permissions: Set appropriate ownership and permissions for the directories and files.

```bash
sudo chown -R graylog:graylog /opt/graylog
sudo chown -R mongod:mongod /opt/mongodb
sudo chown -R opensearch:opensearch /opt/opensearch
```

- Create Configuration Files: Copy the example configuration files and modify them as needed (see Section 4).
- Create Systemd Service Files: Create systemd service files for each service to manage them easily (start, stop, enable on boot). This involves creating files like `/etc/systemd/system/graylog-server.service`, `/etc/systemd/system/mongod.service`, and `/etc/systemd/system/opensearch.service`. These files define how the services should be started and managed. (See example below.)
- Enable and Start Services:

```bash
sudo systemctl daemon-reload
sudo systemctl enable mongod.service
sudo systemctl enable opensearch.service
sudo systemctl enable graylog-server.service
sudo systemctl start mongod.service
sudo systemctl start opensearch.service
sudo systemctl start graylog-server.service
```

Example Systemd Service File for Graylog (`/etc/systemd/system/graylog-server.service`):

```ini
[Unit]
Description=Graylog server
Documentation=https://docs.graylog.org/
Wants=network-online.target
After=network-online.target

[Service]
Type=simple
User=graylog
Group=graylog
# Adjust both paths to your installation
WorkingDirectory=/opt/graylog/server
ExecStart=/opt/graylog/server/bin/graylog-server
Restart=on-failure
StandardOutput=journal
StandardError=inherit
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```

- Option 3: Docker and Docker Compose (Ideal for testing and development, also suitable for production with proper configuration)
- Benefits of using Docker:
- Simplified Deployment: Easily deploy and manage Graylog and its dependencies.
- Isolation: Each component runs in its own container, preventing conflicts.
- Reproducibility: Consistent environments across different systems.
- Portability: Easily move the deployment to another machine.
- Example `docker-compose.yml` file:

```yaml
version: '3.8'

services:
  mongodb:
    image: mongo:4.4
    container_name: graylog-mongodb
    volumes:
      - mongodb_data:/data/db
    networks:
      - graylog

  opensearch:
    image: opensearchproject/opensearch:2.5.0  # Choose your desired version
    container_name: graylog-opensearch
    environment:
      - cluster.name=graylog
      - node.name=graylog-opensearch
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - plugins.security.disabled=true  # Graylog connects over plain HTTP
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"  # Adjust memory as needed
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - opensearch_data:/usr/share/opensearch/data
    networks:
      - graylog

  graylog:
    image: graylog/graylog:5.1  # Choose your desired version
    container_name: graylog
    environment:
      - GRAYLOG_PASSWORD_SECRET=somepasswordpepper  # Generate with pwgen
      - GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918  # Generate with sha256sum
      - GRAYLOG_HTTP_EXTERNAL_URI=http://127.0.0.1:9000/  # Adjust to your external address
      - GRAYLOG_ELASTICSEARCH_HOSTS=http://graylog-opensearch:9200
      - GRAYLOG_MONGODB_URI=mongodb://graylog-mongodb:27017/graylog
    depends_on:
      - mongodb
      - opensearch
    ports:
      - "9000:9000"
      - "12201:12201/udp"
      - "1514:1514/udp"
      - "514:514/udp"
    networks:
      - graylog

volumes:
  mongodb_data:
  opensearch_data:

networks:
  graylog:
    driver: bridge
```
Important notes about the `docker-compose.yml`:

- `GRAYLOG_PASSWORD_SECRET`: Generate a strong, random secret using `pwgen -N 1 -s 96`. This is used for encrypting sensitive data.
- `GRAYLOG_ROOT_PASSWORD_SHA2`: Generate the SHA256 hash of your desired admin password. You can do this from the command line: `echo -n "yourpassword" | sha256sum`. Do not store the plain-text password in the file.
- `GRAYLOG_HTTP_EXTERNAL_URI`: This is the URL that you will use to access the Graylog web interface. If you are running Graylog on a remote server, replace `127.0.0.1` with the server’s public IP address or domain name.
- `GRAYLOG_ELASTICSEARCH_HOSTS` and `GRAYLOG_MONGODB_URI`: These point to the other containers within the Docker network.
- `OPENSEARCH_JAVA_OPTS`: You should adjust the `-Xms` (initial heap size) and `-Xmx` (maximum heap size) values based on your available memory. Setting them to the same value can improve performance. Start with 512m or 1g and monitor memory usage.
- Volumes: The named volumes (`mongodb_data` and `opensearch_data`) ensure that data persists even if the containers are restarted or removed.
- Starting and Managing the Containers:

```bash
# Navigate to the directory containing the docker-compose.yml file
cd /path/to/your/docker-compose/

# Start the services in detached mode (background)
docker-compose up -d

# Check the status of the containers
docker-compose ps

# Stop the services
docker-compose down

# View logs
docker-compose logs -f graylog  # Or use 'mongodb' or 'opensearch'
```
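Before running `docker-compose up`, the two secret placeholders in the compose file should be replaced with real values. A sketch that generates both; the sample password `admin` is an assumption for illustration (its SHA-256 digest is exactly the `GRAYLOG_ROOT_PASSWORD_SHA2` value shown in the example file), and the `/dev/urandom` pipeline is a stand-in for `pwgen -N 1 -s 96`:

```shell
#!/bin/sh
# 96-character random secret for GRAYLOG_PASSWORD_SECRET
PASSWORD_SECRET=$(LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 96)

# SHA-256 of the admin password for GRAYLOG_ROOT_PASSWORD_SHA2.
# printf avoids the trailing newline that `echo` would hash as well.
ROOT_PASSWORD_SHA2=$(printf '%s' 'admin' | sha256sum | cut -d ' ' -f 1)

echo "GRAYLOG_PASSWORD_SECRET=$PASSWORD_SECRET"
echo "GRAYLOG_ROOT_PASSWORD_SHA2=$ROOT_PASSWORD_SHA2"
```

Paste the two printed values into the `environment:` section (or an `.env` file) before starting the stack.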
- Option 4: Kubernetes
- Deploying Graylog on Kubernetes offers advanced scalability, resilience, and management capabilities. The recommended method is using Helm charts. Helm is a package manager for Kubernetes, simplifying the deployment and management of applications.
- Install Helm: Follow the official Helm installation instructions for your system.
- Add the Graylog Helm Chart Repository:

```bash
helm repo add graylog https://helm.graylog.org/
helm repo update
```
- Create a `values.yaml` file: This file will contain your custom configuration for the Graylog deployment. You can start with the default values from the chart and override them as needed. Here’s a simplified example:

```yaml
graylog:
  replicas: 1  # Number of Graylog instances
  image:
    repository: graylog/graylog
    tag: 5.1.0
  passwordSecret: "your-password-secret"
  rootPasswordSha2: "your-root-password-sha2"
  elasticsearch:
    hosts: "http://elasticsearch-master:9200"

mongodb:
  enabled: true

elasticsearch:  # This is an example of deploying without an external ES cluster.
  enabled: true
  clusterName: "graylog-elasticsearch"
  nodeGroup: "master"
  master:
    replicas: 1
```

- Install Graylog using Helm:

```bash
helm install graylog graylog/graylog -f values.yaml -n graylog --create-namespace
```

  - `graylog`: The release name.
  - `graylog/graylog`: The chart name.
  - `-f values.yaml`: Specifies your custom configuration file.
  - `-n graylog`: Install into the `graylog` namespace.
  - `--create-namespace`: Create the namespace if it does not exist.
* -
Verify Installation:
- Get the pods in the
graylog
namespace:
bash
kubectl get pods -n graylog - Get the services:
bash
kubectl get svc -n graylog- Access Graylog: If you’ve exposed the Graylog service using a LoadBalancer or NodePort, you can access the web interface using the external IP address and port. If you’re using a cluster-internal setup (e.g., for development), you can use port forwarding:
bash
kubectl port-forward svc/graylog 9000:9000 -n graylog
- Access Graylog: If you’ve exposed the Graylog service using a LoadBalancer or NodePort, you can access the web interface using the external IP address and port. If you’re using a cluster-internal setup (e.g., for development), you can use port forwarding:
- Get the pods in the
Important Notes for Kubernetes:
- External Elasticsearch/OpenSearch: For production deployments, it’s highly recommended to use an external, managed Elasticsearch or OpenSearch cluster instead of deploying it within the same Kubernetes cluster as Graylog. This improves performance, scalability, and manageability. Update the `elasticsearch.hosts` setting in your `values.yaml` to point to your external cluster.
- Persistent Volumes: Ensure you configure persistent volumes for MongoDB and Elasticsearch/OpenSearch to prevent data loss.
- Resource Limits: Set appropriate resource requests and limits for the Graylog, MongoDB, and Elasticsearch/OpenSearch pods to prevent resource exhaustion.
- Ingress: For production, you’ll likely want to set up an Ingress controller to manage external access to the Graylog web interface.
4. Initial Configuration
This section covers the essential configuration steps for Graylog, MongoDB, and Elasticsearch/OpenSearch.
- Graylog Configuration File (`server.conf`)

The main configuration file for Graylog is typically located at `/etc/graylog/server/server.conf` (for package installations) or within the `conf` directory of your Graylog installation (for manual installations).
- `password_secret`:
  - Purpose: Used to encrypt sensitive data stored by Graylog (e.g., passwords, API keys).
  - Generation: Generate a strong, random secret using `pwgen`:

```bash
pwgen -N 1 -s 96
```

  - Setting: Paste the generated secret into the `password_secret` setting in `server.conf`. Important: This value must be the same across all Graylog nodes in a cluster.
  - Example: `password_secret = Wx$V!7z#r@8tYp9q%b&f*gNc2s5v$uJ1xL3zN6m8a0kQ`
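If `pwgen` is not installed, any high-entropy source works; a sketch using `openssl` instead, which renders 48 random bytes as a 96-character hex string:

```shell
# Alternative to pwgen: 48 random bytes as 96 hex characters
openssl rand -hex 48
```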
- `root_password_sha2`:
  - Purpose: Sets the password for the default `admin` user.
  - Generation: Generate the SHA256 hash of your desired password:

```bash
echo -n "your_new_admin_password" | shasum -a 256
```

    (Replace `your_new_admin_password` with your actual password.) The `-n` is very important, as it prevents a newline character from being included in the hash.
  - Setting: Paste the generated hash into the `root_password_sha2` setting.
  - Example: `root_password_sha2 = e5e9fa1ba31ecd1ae84f75caaa474f3a663f05f480b7645d57d09456e991886d`
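The missing `-n` is the most common way to end up locked out of the web interface, and is easy to demonstrate. Using the sample password `admin` (an assumption for illustration), only the newline-free digest is valid:

```shell
# Correct: hash exactly the password bytes
printf '%s' 'admin' | sha256sum | cut -d ' ' -f 1
# prints 8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918

# Wrong: echo appends a newline, producing a different digest
echo 'admin' | sha256sum | cut -d ' ' -f 1
```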
- `http_bind_address`:
  - Purpose: Specifies the IP address and port that Graylog will listen on for web interface and API requests.
  - Setting:
    - `0.0.0.0:9000`: Listen on all interfaces on port 9000 (default). Use this for most single-node setups.
    - `127.0.0.1:9000`: Listen only on the localhost interface. Useful for security if you’re using a reverse proxy.
    - `<your_server_ip>:9000`: Listen on a specific IP address.
  - Example: `http_bind_address = 0.0.0.0:9000`
- `http_external_uri`:
  - Purpose: Informs Graylog of its externally accessible URL. This is essential for generating correct links in emails and other features.
  - Setting: `http_external_uri = http://<your_server_ip_or_domain>:9000/`
- `elasticsearch_hosts`:
  - Purpose: Specifies the connection string(s) for your Elasticsearch/OpenSearch cluster.
  - Setting:
    - Single Node: `http://<elasticsearch_ip>:9200`
    - Cluster: `http://<node1_ip>:9200,http://<node2_ip>:9200,http://<node3_ip>:9200` (comma-separated list)
  - Example (OpenSearch Single Node): `elasticsearch_hosts = http://127.0.0.1:9200`
  - Example (OpenSearch Cluster): `elasticsearch_hosts = http://192.168.1.10:9200,http://192.168.1.11:9200,http://192.168.1.12:9200`
- `mongodb_uri`:
  - Purpose: Specifies the connection string for your MongoDB database.
  - Setting:
    - Default (Single Node, No Authentication): `mongodb://localhost/graylog`
    - With Authentication: `mongodb://<username>:<password>@<mongodb_ip>:<port>/graylog?authSource=admin`
  - Example (Single Node, No Authentication): `mongodb_uri = mongodb://localhost/graylog`
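If you enable MongoDB authentication, the user referenced in the authenticated URI has to exist first. A sketch run against `mongosh`; the `changeme` password is a placeholder assumption, and creating the user in the `admin` database corresponds to `?authSource=admin` in the URI above:

```shell
# Create a dedicated Graylog user in the admin database (placeholder password)
mongosh admin --eval '
  db.createUser({
    user: "graylog",
    pwd: "changeme",
    roles: [ { role: "readWrite", db: "graylog" } ]
  })
'
```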
- Other Important Configuration Options:
- `timezone`: Sets the timezone for Graylog. It’s crucial to set this correctly to ensure accurate timestamps in your logs. Use a valid timezone string from the IANA Time Zone Database (e.g., `America/Los_Angeles`, `Europe/London`, `UTC`).

  `timezone = UTC`
- `message_journal_max_age`: Controls how long messages are kept in the Graylog journal (a temporary buffer before being processed). Default is `12h`.

  `message_journal_max_age = 12h`
- `message_journal_max_size`: Sets the maximum size of the Graylog journal. Default is `5gb`.

  `message_journal_max_size = 5gb`
- `output_batch_size`: