Learn RKE2: A Complete Beginner’s Guide
This guide provides a comprehensive introduction to RKE2 (Rancher Kubernetes Engine 2), a lightweight, secure, and production-ready Kubernetes distribution. Whether you’re a system administrator, a DevOps engineer, or a developer looking to deploy and manage containerized applications, this guide will equip you with the knowledge and skills necessary to effectively use RKE2.
Table of Contents
1. Introduction to Kubernetes and RKE2
   - 1.1 What is Kubernetes?
   - 1.2 Why Use Kubernetes?
   - 1.3 What is RKE2?
   - 1.4 Key Features and Benefits of RKE2
   - 1.5 RKE2 vs. RKE vs. K3s
   - 1.6 When to Choose RKE2
2. RKE2 Architecture and Components
   - 2.1 Control Plane Components
     - 2.1.1 kube-apiserver
     - 2.1.2 kube-scheduler
     - 2.1.3 kube-controller-manager
     - 2.1.4 etcd
     - 2.1.5 cloud-controller-manager (optional)
   - 2.2 Node Components
     - 2.2.1 kubelet
     - 2.2.2 kube-proxy
     - 2.2.3 Container Runtime (containerd)
     - 2.2.4 CNI Plugin (Canal, Cilium, Calico, etc.)
   - 2.3 RKE2-Specific Components
     - 2.3.1 rke2-server
     - 2.3.2 rke2-agent
   - 2.4 High-Availability (HA) Architecture
3. Prerequisites and System Requirements
   - 3.1 Operating System Support
   - 3.2 Hardware Requirements (CPU, Memory, Disk)
   - 3.3 Network Requirements (Ports, Firewall)
   - 3.4 Software Dependencies (SELinux, firewalld)
   - 3.5 User Accounts and Permissions
4. Installation and Configuration
   - 4.1 Installing the RKE2 Server (Control Plane)
     - 4.1.1 Single-Node Installation
     - 4.1.2 Multi-Node (HA) Installation
     - 4.1.3 Using the Installation Script
     - 4.1.4 Manual Installation (Advanced)
     - 4.1.5 Air-Gapped Installation
   - 4.2 Installing the RKE2 Agent (Worker Nodes)
   - 4.3 Configuration File (config.yaml)
     - 4.3.1 Server Configuration Options
     - 4.3.2 Agent Configuration Options
     - 4.3.3 Network Configuration (CNI)
     - 4.3.4 Cluster Configuration (etcd, TLS)
   - 4.4 Accessing the Cluster (kubeconfig)
   - 4.5 Verifying the Installation
5. Basic Cluster Operations
   - 5.1 Using kubectl
     - 5.1.1 Installing kubectl
     - 5.1.2 kubectl Basic Commands (get, describe, create, delete)
     - 5.1.3 Contexts and Namespaces
   - 5.2 Deploying a Simple Application (e.g., Nginx)
     - 5.2.1 Creating a Deployment
     - 5.2.2 Creating a Service
     - 5.2.3 Exposing the Application (NodePort, LoadBalancer, Ingress)
   - 5.3 Scaling Applications
   - 5.4 Updating Applications
   - 5.5 Deleting Applications and Resources
6. Networking in RKE2
   - 6.1 Container Network Interface (CNI)
     - 6.1.1 Canal (Default CNI)
     - 6.1.2 Cilium
     - 6.1.3 Calico
     - 6.1.4 Other CNI Plugins
     - 6.1.5 Choosing a CNI Plugin
   - 6.2 Network Policies
     - 6.2.1 Understanding Network Policies
     - 6.2.2 Creating and Applying Network Policies
   - 6.3 Services
     - 6.3.1 ClusterIP
     - 6.3.2 NodePort
     - 6.3.3 LoadBalancer
     - 6.3.4 ExternalName
   - 6.4 Ingress
     - 6.4.1 Setting up an Ingress Controller (e.g., Nginx Ingress)
     - 6.4.2 Configuring Ingress Resources
7. Storage in RKE2
   - 7.1 Persistent Volumes (PVs) and Persistent Volume Claims (PVCs)
   - 7.2 Storage Classes
   - 7.3 Local Path Provisioner
   - 7.4 Integrating with External Storage Providers (e.g., NFS, cloud provider storage)
   - 7.5 Using Longhorn
8. Security in RKE2
   - 8.1 CIS Kubernetes Benchmark Compliance
   - 8.2 Hardening RKE2
     - 8.2.1 System Hardening
     - 8.2.2 Network Security
     - 8.2.3 RBAC (Role-Based Access Control)
     - 8.2.4 Pod Security Policies (deprecated) and Pod Security Admission
     - 8.2.5 Secrets Management
     - 8.2.6 Using Image Scanners
   - 8.3 TLS Certificates
   - 8.4 Authentication and Authorization
9. Monitoring and Logging
   - 9.1 Kubernetes Dashboard
   - 9.2 Prometheus and Grafana
     - 9.2.1 Deploying Prometheus
     - 9.2.2 Deploying Grafana
     - 9.2.3 Configuring Dashboards
   - 9.3 Logging with Fluentd, Elasticsearch, and Kibana (EFK Stack)
10. Upgrading RKE2
    - 10.1 Upgrade Strategies
    - 10.2 Upgrading the Control Plane
    - 10.3 Upgrading Worker Nodes
    - 10.4 Rolling Back Upgrades
11. Troubleshooting
    - 11.1 Common Issues and Solutions
    - 11.2 Debugging Pods and Services
    - 11.3 Examining Logs
    - 11.4 Using kubectl for Troubleshooting
    - 11.5 Cluster Health Checks
12. Advanced Topics
    - 12.1 Customizing the RKE2 Installation
      - 12.1.1 Using Custom Registries
      - 12.1.2 Configuring containerd
    - 12.2 Integrating with Rancher
    - 12.3 Using Helm for Package Management
    - 12.4 Serverless with Knative
    - 12.5 Service Meshes (Istio, Linkerd)
    - 12.6 Backup and Restore
1. Introduction to Kubernetes and RKE2
1.1 What is Kubernetes?
Kubernetes (often abbreviated as K8s) is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. It provides a powerful and flexible framework for running distributed systems reliably. Think of it as an operating system for your cloud-native applications.
1.2 Why Use Kubernetes?
- Automated Deployment and Scaling: Kubernetes automates the process of deploying and scaling applications based on demand, ensuring high availability and efficient resource utilization.
- Service Discovery and Load Balancing: Kubernetes automatically handles service discovery and load balancing, making it easy for applications to communicate with each other.
- Self-Healing: Kubernetes monitors the health of your applications and automatically restarts or replaces failed containers, ensuring high availability.
- Rolling Updates and Rollbacks: Kubernetes allows you to perform rolling updates to your applications with zero downtime, and provides the ability to roll back to previous versions if necessary.
- Resource Management: Kubernetes manages the allocation of resources (CPU, memory) to containers, optimizing resource utilization and preventing resource contention.
- Portability: Kubernetes can run on various infrastructure platforms, including public clouds, private clouds, and on-premises environments.
1.3 What is RKE2?
RKE2, also known as Rancher Kubernetes Engine 2, is a fully conformant Kubernetes distribution built by Rancher (now part of SUSE). It’s designed to be secure, lightweight, and easy to install and manage, particularly in environments with strict security requirements or resource constraints. It’s essentially a carefully curated and packaged version of Kubernetes, optimized for production deployments.
1.4 Key Features and Benefits of RKE2
- Security-Focused: RKE2 prioritizes security by default. It’s designed to meet the requirements of the CIS Kubernetes Benchmark, a set of security best practices. It uses minimal attack surfaces and secure configurations.
- Lightweight and Fast: RKE2 has a small footprint and fast startup times, making it ideal for edge computing, IoT devices, and resource-constrained environments.
- Easy Installation and Upgrades: RKE2 simplifies the installation and upgrade process with a single binary and a user-friendly configuration file.
- Production-Ready: RKE2 is designed for production workloads and includes features like high availability, automated certificate management, and support for various storage and networking options.
- Air-Gapped Support: RKE2 can be installed and operated in environments without internet access (air-gapped).
- Embedded etcd: RKE2 packages etcd (the Kubernetes database) within its binary, simplifying the setup and management of the control plane.
- Containerd Runtime: RKE2 uses containerd as its default container runtime, which is known for its performance and security.
1.5 RKE2 vs. RKE vs. K3s
Rancher offers three main Kubernetes distributions: RKE, RKE2, and K3s. Here’s a comparison:
| Feature | RKE | RKE2 | K3s |
|---|---|---|---|
| Target Use Case | General-purpose Kubernetes | Security-focused, production-ready | Edge, IoT, resource-constrained |
| Security | Requires manual configuration | High security by default (CIS) | Optimized for security, but less strict |
| Ease of Use | More complex configuration | Easy installation and configuration | Very easy installation and configuration |
| Resource Usage | Moderate | Lightweight | Extremely lightweight |
| Components | Separate components (Docker, etcd) | Embedded components (containerd, etcd) | Single binary, embedded components |
| Upgrades | More complex | Simplified | Very simple |
| Installation | CLI based | Single binary, config file | Single binary, config file |
- RKE (Rancher Kubernetes Engine): The original Rancher Kubernetes distribution. It’s a flexible and powerful option, but requires more manual configuration. It relies on Docker as the container runtime.
- RKE2 (Rancher Kubernetes Engine 2): The successor to RKE, designed for security and ease of use. It uses containerd as the container runtime and has embedded components for simpler management.
- K3s: A lightweight Kubernetes distribution specifically designed for edge computing, IoT, and resource-constrained environments. It’s even simpler than RKE2 and has an extremely small footprint.
1.6 When to Choose RKE2
RKE2 is an excellent choice in the following scenarios:
- Production Deployments: When you need a secure, reliable, and production-ready Kubernetes distribution.
- Security-Sensitive Environments: When security is a top priority, and you need a distribution that meets strict compliance requirements (e.g., CIS Benchmark).
- Ease of Management: When you want a Kubernetes distribution that’s easy to install, configure, and upgrade.
- On-Premises Deployments: When you’re deploying Kubernetes in your own data center.
- Hybrid Cloud Deployments: When you’re deploying Kubernetes across multiple environments (on-premises and cloud).
- Air-Gapped Environments: When your cluster must run without internet access and you need built-in support for offline installation.
2. RKE2 Architecture and Components
Understanding the architecture of RKE2 is crucial for effective management and troubleshooting. Like all Kubernetes distributions, RKE2 follows a master-worker (control plane-node) architecture.
2.1 Control Plane Components
The control plane is the “brain” of the Kubernetes cluster. It manages the overall state of the cluster and makes decisions about scheduling, scaling, and resource allocation. In RKE2, the control plane runs on one or more “server” nodes.
- 2.1.1 kube-apiserver: The API server is the front end of the control plane. It exposes the Kubernetes API, which is used by clients (like kubectl) to interact with the cluster. It handles authentication, authorization, and validation of requests.
- 2.1.2 kube-scheduler: The scheduler is responsible for assigning Pods (the smallest deployable units in Kubernetes) to worker nodes. It considers factors like resource availability, node affinity, and constraints to make optimal placement decisions.
- 2.1.3 kube-controller-manager: The controller manager runs various controller processes that regulate the state of the cluster. These controllers watch for changes in the desired state (defined in the cluster configuration) and take actions to reconcile the actual state with the desired state. Examples include the ReplicaSet controller (which maintains the desired number of Pod replicas) and the Node Controller (which monitors node health).
- 2.1.4 etcd: etcd is a distributed key-value store that serves as the database for the Kubernetes cluster. It stores all the cluster configuration data, including information about Pods, Services, Deployments, and other resources. RKE2 embeds etcd within its binary for easier management.
- 2.1.5 cloud-controller-manager (optional): The cloud controller manager is an optional component that integrates Kubernetes with cloud provider-specific services. For example, if you're running RKE2 on AWS, the cloud controller manager can provision load balancers, storage volumes, and other cloud resources.
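On an RKE2 server node, these control plane components run as static Pods managed by the kubelet, with their manifests written under the RKE2 data directory. As a quick illustration (the paths shown are the RKE2 defaults), you can list the generated manifests directly on the node:

```bash
# Static Pod manifests RKE2 generates for the control plane components
ls /var/lib/rancher/rke2/agent/pod-manifests/
# Typically shows kube-apiserver.yaml, kube-scheduler.yaml,
# kube-controller-manager.yaml, etcd.yaml, and kube-proxy.yaml
```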
2.2 Node Components
Node components run on each worker node (also called “agent” nodes in RKE2) and are responsible for running containers and managing their lifecycle.
- 2.2.1 kubelet: The kubelet is the primary node agent. It communicates with the API server and is responsible for ensuring that the containers defined in Pod specifications are running and healthy. It manages the container runtime (containerd) and interacts with the CNI plugin for networking.
- 2.2.2 kube-proxy: kube-proxy is a network proxy that maintains network rules on each node. It handles service discovery and load balancing for applications running within the cluster. It can operate in different modes (e.g., iptables, IPVS) to manage network traffic.
- 2.2.3 Container Runtime (containerd): RKE2 uses containerd as its default container runtime. containerd is a CNCF-graduated project that provides a lightweight and secure way to run containers. It handles image pulling, container execution, and resource isolation.
- 2.2.4 CNI Plugin (Canal, Cilium, Calico, etc.): The Container Network Interface (CNI) plugin is responsible for configuring the network for Pods. It assigns IP addresses to Pods, sets up network routes, and implements network policies. RKE2 uses Canal by default, but you can choose other CNI plugins like Cilium or Calico.
2.3 RKE2-Specific Components
RKE2 includes some components that are specific to its implementation:
- 2.3.1 rke2-server: This is the main process that runs on control plane nodes. It packages and manages the Kubernetes control plane components (kube-apiserver, kube-scheduler, kube-controller-manager, etcd).
- 2.3.2 rke2-agent: This is the main process that runs on worker nodes. It packages and manages the Kubernetes node components (kubelet, kube-proxy).
2.4 High-Availability (HA) Architecture
For production environments, it’s recommended to run RKE2 in a high-availability (HA) configuration. This involves running multiple control plane nodes (typically 3 or 5) to ensure that the cluster remains operational even if one or more control plane nodes fail.
In an HA setup:
- Multiple etcd instances: etcd is run in a clustered mode, with multiple instances replicating data to each other. This ensures data redundancy and fault tolerance.
- Leader Election: The kube-apiserver, kube-scheduler, and kube-controller-manager use leader election to ensure that only one instance of each component is active at any given time. If the active instance fails, another instance will take over.
- Load Balancer (optional but recommended): A load balancer is typically used in front of the kube-apiserver instances to distribute traffic across the control plane nodes. This provides a single point of access to the cluster API.
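To make the HA setup concrete, here is a minimal sketch of the server config.yaml you might use on each control plane node so that clients can reach the API through the load balancer; the hostname, IP address, and token below are placeholders, not values from this guide:

```yaml
# /etc/rancher/rke2/config.yaml on each server node (illustrative values)
token: my-shared-cluster-token      # the same token on every server node
tls-san:
  - rke2-lb.example.com             # load balancer hostname in front of ports 6443/9345
  - 192.0.2.10                      # load balancer IP, if you also connect by IP
```

The token and tls-san options, along with the rest of the configuration file, are covered in more detail in Section 4.3.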
3. Prerequisites and System Requirements
Before installing RKE2, you need to ensure that your system meets the necessary requirements.
3.1 Operating System Support
RKE2 supports a wide range of Linux distributions, including:
- Ubuntu (18.04, 20.04, 22.04)
- CentOS/RHEL (7, 8, 9)
- SLES (15)
- openSUSE Leap (15)
- Rocky Linux (8,9)
- AlmaLinux (8,9)
It’s recommended to use a recent version of a supported distribution with the latest security updates.
3.2 Hardware Requirements (CPU, Memory, Disk)
The hardware requirements for RKE2 depend on the size and complexity of your workload. Here are some general guidelines:
| Component | Minimum | Recommended |
|---|---|---|
| Control Plane | 1 vCPU, 2 GB RAM, 20 GB Disk | 2 vCPU, 4 GB RAM, 40 GB Disk |
| Worker Node | 1 vCPU, 2 GB RAM, 20 GB Disk | 2+ vCPU, 4+ GB RAM, 40+ GB Disk |
- CPU: More CPU cores will allow you to run more containers concurrently.
- Memory: Sufficient memory is crucial for the performance of Kubernetes and your applications.
- Disk: The disk space should be sufficient for storing container images, persistent volumes, and etcd data. SSD storage is recommended for better performance.
3.3 Network Requirements (Ports, Firewall)
RKE2 requires certain ports to be open for communication between the control plane and worker nodes. You’ll need to configure your firewall to allow traffic on these ports:
| Port | Protocol | Component | Description |
|---|---|---|---|
| 6443 | TCP | kube-apiserver | Kubernetes API |
| 2379 | TCP | etcd (server) | etcd client communication |
| 2380 | TCP | etcd (server) | etcd peer communication |
| 10250 | TCP | kubelet | kubelet API |
| 9345 | TCP | rke2-server | RKE2 supervisor port (node registration) |
| 8472 | UDP | Canal (Flannel VXLAN) | Overlay network communication (if using the default Canal CNI) |
| 9099 | TCP | Canal | Canal CNI health checks |
| 443 | TCP | External access | Connections to LoadBalancer/Ingress services, such as an external ingress controller |
| 80 | TCP | External access | Connections to LoadBalancer/Ingress services, such as an external ingress controller |

Make sure your firewall allows inbound and outbound traffic on these ports between the nodes in your cluster.
3.4 Software Dependencies (SELinux, firewalld)
- SELinux: RKE2 is designed to work with SELinux in enforcing mode. It is highly recommended to keep SELinux enabled. If you must disable it, do so with caution and understand the security implications. If it's not enforced, you can set it to permissive mode instead of completely disabling it.
- Firewalld: Firewalld is the default firewall management tool on many Linux distributions. You need to configure Firewalld to allow the necessary ports for RKE2, as described in the previous section. Alternatively, you can use other firewall management tools like iptables directly.
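As an illustration, the following firewalld commands open the ports from the table in Section 3.3 on a server node. Treat this as a starting sketch rather than a hardened firewall policy, and adjust the list for agent nodes and for your chosen CNI:

```bash
# Open the core RKE2/Kubernetes ports on a server node
sudo firewall-cmd --permanent --add-port=6443/tcp        # Kubernetes API
sudo firewall-cmd --permanent --add-port=9345/tcp        # RKE2 supervisor port
sudo firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd client and peer traffic
sudo firewall-cmd --permanent --add-port=10250/tcp       # kubelet API
sudo firewall-cmd --permanent --add-port=8472/udp        # Canal VXLAN overlay
sudo firewall-cmd --permanent --add-port=9099/tcp        # Canal health checks
sudo firewall-cmd --reload
```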
3.5 User Accounts and Permissions
You'll need a user account with sudo privileges to install and manage RKE2. It's recommended to use a dedicated user account for this purpose, rather than the root account. The user account should have permission to:
- Install packages.
- Start and stop system services.
- Modify network settings.
- Access the container runtime (containerd).
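For example, on a distribution that grants sudo access through the wheel group, a dedicated administrative user could be created roughly like this (rke2admin is just an illustrative name):

```bash
# Create a dedicated user for RKE2 administration and grant it sudo access
sudo useradd -m -s /bin/bash rke2admin
sudo passwd rke2admin
sudo usermod -aG wheel rke2admin   # on Debian/Ubuntu, use the 'sudo' group instead
```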
4. Installation and Configuration
This section covers the process of installing and configuring RKE2.
4.1 Installing the RKE2 Server (Control Plane)
There are several ways to install the RKE2 server, including using the installation script, manual installation, and air-gapped installation.
4.1.1 Single-Node Installation

A single-node installation is suitable for development, testing, or small-scale deployments.

1. Download the installation script:

   ```bash
   curl -sfL https://get.rke2.io > install.sh
   ```

2. Make the script executable:

   ```bash
   chmod +x install.sh
   ```

3. Run the installation script:

   ```bash
   sudo INSTALL_RKE2_TYPE="server" ./install.sh
   ```

   The INSTALL_RKE2_TYPE="server" environment variable tells the script to install the RKE2 server. It is passed after sudo so that it survives sudo's environment reset.

4. Enable and start the rke2-server service:

   ```bash
   sudo systemctl enable rke2-server.service
   sudo systemctl start rke2-server.service
   ```

5. (Optional) Set the KUBECONFIG environment variable:

   ```bash
   export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
   ```
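RKE2 bundles its own kubectl binary under /var/lib/rancher/rke2/bin, so you can sanity-check a fresh single-node install without installing anything else; for example:

```bash
# Use the kubectl bundled with RKE2 and the kubeconfig it generated
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
/var/lib/rancher/rke2/bin/kubectl get nodes
/var/lib/rancher/rke2/bin/kubectl get pods -A
```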
4.1.2 Multi-Node (HA) Installation

For a multi-node (HA) installation, you'll need to install the RKE2 server on multiple nodes and configure them to join the same cluster.

1. Install RKE2 on the first server node (same as the single-node installation).

2. Obtain the server token. On the first server node, get the token:

   ```bash
   sudo cat /var/lib/rancher/rke2/server/node-token
   ```

3. Point the additional server nodes at the first one. On each additional server node, create /etc/rancher/rke2/config.yaml (create the directory first with sudo mkdir -p /etc/rancher/rke2 if needed) containing the address of the first server and the token. Replace <FIRST_SERVER_IP> with the IP address or hostname of the first server node, and <YOUR_NODE_TOKEN> with the token you obtained in step 2:

   ```yaml
   server: https://<FIRST_SERVER_IP>:9345
   token: <YOUR_NODE_TOKEN>
   ```

4. Install RKE2 on the other server nodes:

   ```bash
   curl -sfL https://get.rke2.io > install.sh
   chmod +x install.sh
   sudo INSTALL_RKE2_TYPE="server" ./install.sh
   ```

5. Enable and start the rke2-server service on each additional server node:

   ```bash
   sudo systemctl enable rke2-server.service
   sudo systemctl start rke2-server.service
   ```
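Once all server nodes are up, you can confirm from any one of them that the control plane formed correctly; something along these lines:

```bash
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
# Every server node should be listed as Ready with control-plane and etcd roles
/var/lib/rancher/rke2/bin/kubectl get nodes -o wide
```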
4.1.3 Using the Installation Script (Details)

The install.sh script automates several tasks:

- Downloads the RKE2 binary and related artifacts.
- Creates systemd unit files for the rke2-server and rke2-agent services.
- Sets up basic configuration files.

Note that the script does not enable or start the services; you do that yourself with systemctl, as shown in the previous sections.

The script supports various environment variables to customize the installation, such as:

- INSTALL_RKE2_VERSION: Specifies the RKE2 version to install.
- INSTALL_RKE2_CHANNEL: Specifies the release channel (e.g., "stable", "latest").
- INSTALL_RKE2_TYPE: Specifies whether to install the "server" or "agent".

The server URL and join token for agents and additional servers are typically set in /etc/rancher/rke2/config.yaml (see Section 4.3) rather than through the install script.
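For example, to install a specific release from the stable channel you might run something like the following; the version string is illustrative, so pick one from the RKE2 releases page:

```bash
curl -sfL https://get.rke2.io > install.sh
chmod +x install.sh
sudo INSTALL_RKE2_TYPE="server" INSTALL_RKE2_CHANNEL="stable" INSTALL_RKE2_VERSION="v1.28.9+rke2r1" ./install.sh
```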
4.1.4 Manual Installation (Advanced)
For advanced users or specific customization needs, you can install RKE2 manually. This involves downloading the RKE2 binary, creating systemd unit files, and configuring the cluster manually. This approach is not recommended for beginners. Refer to the official RKE2 documentation for detailed instructions on manual installation.
4.1.5 Air-Gapped Installation

In air-gapped environments (no internet access), the installation is a bit more involved. The steps below follow the tarball method; the /path/to/your/files directory is a placeholder.

1. Download the necessary files: On a machine with internet access, download the install.sh script, the RKE2 images tarball (rke2-images.linux-amd64.tar.zst), the RKE2 binary tarball for your architecture (rke2.linux-amd64.tar.gz), and the accompanying sha256sum file. Find these on the RKE2 GitHub releases page.

2. Transfer the files: Transfer the downloaded files to your air-gapped server(s).

3. Stage the images: Place the images tarball where the agent looks for pre-loaded images. With this tarball method no registries.yaml is needed; /etc/rancher/rke2/registries.yaml is only required if you mirror the images to a private registry instead (see the example after this section).

   ```bash
   sudo mkdir -p /var/lib/rancher/rke2/agent/images/
   sudo cp /path/to/your/files/rke2-images.linux-amd64.tar.zst /var/lib/rancher/rke2/agent/images/
   ```

4. Install: Run the installation script with INSTALL_RKE2_ARTIFACT_PATH pointing to the directory containing the downloaded artifacts, so the installer uses the local files instead of reaching out to the internet:

   ```bash
   sudo INSTALL_RKE2_ARTIFACT_PATH=/path/to/your/files INSTALL_RKE2_TYPE="server" ./install.sh
   ```

5. Start the server:

   ```bash
   sudo systemctl enable rke2-server --now
   ```

Repeat a similar process for agent nodes: use INSTALL_RKE2_TYPE="agent", set server and token in /etc/rancher/rke2/config.yaml as in the HA installation, and make sure the images tarball is staged in /var/lib/rancher/rke2/agent/images/ on each agent as well.
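If you use a private registry instead of the tarball method, a registries.yaml along these lines tells containerd to pull RKE2 images through your mirror; registry.example.com and the credentials are placeholders:

```yaml
# /etc/rancher/rke2/registries.yaml (illustrative private registry mirror)
mirrors:
  docker.io:
    endpoint:
      - "https://registry.example.com"
configs:
  "registry.example.com":
    auth:
      username: myuser       # placeholder credentials
      password: mypassword
```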
4.2 Installing the RKE2 Agent (Worker Nodes)
To add worker nodes to your cluster, you need to install the RKE2 agent on each worker node.
1. Obtain the server token (from the server node):

   ```bash
   sudo cat /var/lib/rancher/rke2/server/node-token
   ```

2. Point the worker node at the server. On each worker node, create /etc/rancher/rke2/config.yaml (create the directory first if needed) with the server address and the token from step 1. Replace <SERVER_IP> with the IP address or hostname of your RKE2 server node (for HA setups, this can be the load balancer's address) and <YOUR_NODE_TOKEN> with the token:

   ```yaml
   server: https://<SERVER_IP>:9345
   token: <YOUR_NODE_TOKEN>
   ```

3. Install RKE2 on each worker node:

   ```bash
   curl -sfL https://get.rke2.io > install.sh
   chmod +x install.sh
   sudo INSTALL_RKE2_TYPE="agent" ./install.sh
   ```

4. Enable and start the rke2-agent service:

   ```bash
   sudo systemctl enable rke2-agent.service
   sudo systemctl start rke2-agent.service
   ```
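After the agent service starts, the new worker should register with the control plane within a minute or so. From a server node (or wherever your kubeconfig lives) you can watch for it, roughly:

```bash
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
# The new worker appears with role <none> (a plain worker) once registration completes
kubectl get nodes --watch
```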
4.3 Configuration File (config.yaml)

RKE2 uses a configuration file (/etc/rancher/rke2/config.yaml) to customize various settings. This file is written in YAML format. You can modify this file before starting the rke2-server or rke2-agent service for the first time, or you can modify it and then restart the service to apply the changes.
4.3.1 Server Configuration Options

Some common server configuration options include:

- token: The node token (automatically generated, but you can specify a custom token).
- tls-san: Additional Subject Alternative Names (SANs) for the server's TLS certificate. This is useful if you want to access the API server using different hostnames or IP addresses.
- node-label: Labels to apply to the server node.
- node-taint: Taints to apply to the server node.
- cluster-cidr: The CIDR block for Pod IP addresses.
- service-cidr: The CIDR block for Service IP addresses.
- cluster-dns: The IP address of the cluster DNS server.
- disable: An array listing components to disable (e.g., ["rke2-ingress-nginx"]).

Example:

```yaml
token: my-super-secret-token
tls-san:
  - my-cluster.example.com
node-label:
  - "environment=staging"
cluster-cidr: "10.42.0.0/16"
service-cidr: "10.43.0.0/16"
```

4.3.2 Agent Configuration Options

Some common agent configuration options include:

- server: The URL of the RKE2 server (including the port, usually 9345).
- token: The node token.
- node-label: Labels to apply to the worker node.
- node-taint: Taints to apply to the worker node.

Example:

```yaml
server: https://my-server-ip:9345
token: my-super-secret-token
node-label:
  - "environment=production"
```

4.3.3 Network Configuration (CNI)

RKE2 uses Canal as its default CNI plugin. Canal combines Flannel (for the overlay network) and Calico (for network policies).

To change the CNI, use the cni key:

```yaml
cni: "calico"
```

You can also customize the configuration of the CNI plugin. Refer to the documentation for the specific CNI plugin you're using for details on available configuration options.

4.3.4 Cluster Configuration (etcd, TLS)

You can configure various aspects of etcd and TLS certificate management in the config.yaml file. For example, you can specify custom etcd data directories, snapshot intervals, and TLS certificate settings. Refer to the RKE2 documentation for detailed information on these advanced configuration options.
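As one concrete illustration, etcd snapshot behavior can be tuned from the server's config.yaml; the option names follow the RKE2/K3s etcd snapshot settings and the values below are assumptions you would adjust for your environment:

```yaml
# /etc/rancher/rke2/config.yaml (server node) - illustrative etcd snapshot settings
etcd-snapshot-schedule-cron: "0 */6 * * *"    # take a snapshot every six hours
etcd-snapshot-retention: 10                   # keep the ten most recent snapshots
etcd-snapshot-dir: /var/lib/rancher/rke2/server/db/snapshots   # default snapshot location
```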
4.4 Accessing the Cluster (kubeconfig)

After installing RKE2, a kubeconfig file is generated at /etc/rancher/rke2/rke2.yaml on the server node. This file contains the necessary credentials and configuration information to access the Kubernetes cluster using kubectl.

To use kubectl, you can either:

- Copy the kubeconfig file: Copy the /etc/rancher/rke2/rke2.yaml file to your local machine (e.g., to ~/.kube/config). Make sure to adjust the server address in the kubeconfig if necessary (especially if using a load balancer or accessing the cluster from outside). A short example follows this section.
- Set the KUBECONFIG environment variable: Set the KUBECONFIG environment variable to point to the kubeconfig file:

  ```bash
  export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
  ```

  You can add this line to your shell's configuration file (e.g., ~/.bashrc or ~/.zshrc) to make it permanent.
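A minimal sketch of the copy-to-workstation approach, assuming you can SSH to the server node as a user that can read the file, and using 192.0.2.10 as a placeholder address:

```bash
# Copy the kubeconfig from the server node to your workstation
mkdir -p ~/.kube
scp user@192.0.2.10:/etc/rancher/rke2/rke2.yaml ~/.kube/config

# The file points at https://127.0.0.1:6443 by default; swap in the server's address
sed -i 's/127.0.0.1/192.0.2.10/' ~/.kube/config
kubectl get nodes
```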
4.5 Verifying the Installation

After installing RKE2, you should verify that the cluster is running correctly.

1. Check the status of the RKE2 services:

   ```bash
   sudo systemctl status rke2-server   # On server nodes
   sudo systemctl status rke2-agent    # On agent nodes
   ```

   The services should be in the active (running) state.

2. Use kubectl to get the nodes:

   ```bash
   kubectl get nodes
   ```

   This command should list all the nodes in your cluster, and their status should be Ready.

3. Check the pods in the kube-system namespace:

   ```bash
   kubectl get pods -n kube-system
   ```

   This will show you the core Kubernetes components running. All of these should ideally be in the Running state (RKE2's helm-install jobs will show Completed).

If you encounter any issues, check the RKE2 service logs with journalctl (journalctl -u rke2-server or journalctl -u rke2-agent), the kubelet and containerd logs under /var/lib/rancher/rke2/agent/, and the system logs (/var/log/syslog or journalctl) for error messages.
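For a quick end-to-end check that the bundled add-ons came up cleanly, something like this works (pod names vary slightly between RKE2 versions):

```bash
# CoreDNS, the ingress controller, and metrics-server are deployed by RKE2 as Helm charts
kubectl get pods -n kube-system | grep -E 'coredns|ingress-nginx|metrics-server'

# Follow the server logs if any pod stays in Pending or CrashLoopBackOff
sudo journalctl -u rke2-server -f
```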
5. Basic Cluster Operations
This section covers fundamental operations for interacting with your RKE2 cluster, primarily using the kubectl command-line tool.
5.1 Using kubectl
kubectl is the primary command-line tool for interacting with a Kubernetes cluster.
5.1.1 Installing kubectl
If kubectl is not already installed on your system,