Kubernetes Cluster Setup On Ubuntu 24.04: A Quick Guide


Hey guys! Today, we're diving into setting up a Kubernetes cluster on Ubuntu 24.04. Whether you're a seasoned DevOps engineer or just starting to explore container orchestration, this guide will walk you through the process step-by-step. We'll cover everything from preparing your Ubuntu machines to deploying your first application on the cluster. So, buckle up and let's get started!

Prerequisites

Before we jump into the actual setup, let's make sure you have everything you need:

  • Ubuntu 24.04 Machines: You'll need at least two Ubuntu 24.04 machines – one for the master node and one (or more) for worker nodes. kubeadm expects at least 2 CPUs and 2 GB of RAM per machine. For a production environment, consider having multiple master nodes for high availability.
  • Internet Connection: Ensure all machines have a stable internet connection to download packages and container images.
  • User with Sudo Privileges: You'll need a user account with sudo privileges on all machines.
  • Basic Linux Knowledge: Familiarity with basic Linux commands will be helpful.

Preparing the Ubuntu Machines

First, we need to prepare our Ubuntu machines by updating the package lists and installing necessary dependencies. Log in to each of your Ubuntu machines and follow these steps:

  1. Update Package Lists:

    Open your terminal and run the following command to update the package lists:

    sudo apt update
    

    This command ensures that you have the latest information about available packages.

  2. Upgrade Installed Packages:

    Next, upgrade the installed packages to their newest versions:

    sudo apt upgrade -y
    

    The -y flag automatically answers "yes" to all prompts, making the upgrade process smoother.

  3. Install Docker:

    Kubernetes needs a container runtime to run containers. Since the removal of dockershim in Kubernetes 1.24, the kubelet talks to a CRI runtime such as containerd rather than to Docker Engine directly; on Ubuntu, installing docker.io also pulls in containerd, which kubeadm will use, while the Docker CLI stays handy for building and inspecting images. Install it on all machines:

    sudo apt install docker.io -y
    

    After the installation, start and enable the Docker service:

    sudo systemctl start docker
    sudo systemctl enable docker
    
  4. Install kubeadm, kubelet, and kubectl:

    These are the core Kubernetes components we'll use to set up and manage the cluster. They aren't shipped in Ubuntu 24.04's default repositories, so you first need to add the official Kubernetes apt repository (a sketch of that setup follows below) and can then install them:

    sudo apt install kubeadm kubelet kubectl -y
    

    Hold the package versions to prevent automatic updates that might cause compatibility issues:

    sudo apt-mark hold kubeadm kubelet kubectl
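
    As mentioned above, these packages come from the Kubernetes project's own apt repository (pkgs.k8s.io), not the Ubuntu archives. A minimal sketch of the repository setup – run it before the install and hold commands above; the v1.30 in the URLs is just an example minor release, so substitute the version you want:

    sudo apt install -y apt-transport-https ca-certificates curl gpg
    sudo mkdir -p /etc/apt/keyrings
    # Download the repository signing key (adjust v1.30 to your target release)
    curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
    # Register the repository with apt
    echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
    sudo apt update

    Once apt update succeeds, the install and hold commands above will work as shown.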
    

Configuring the Master Node

Now that all machines are prepared, let's configure the master node. This node will be the control plane for our Kubernetes cluster.
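
Before initializing anything, double-check two host-level settings that kubeadm expects on every machine (master and workers alike): swap must be disabled and IP forwarding enabled. A minimal sketch – the sed pattern is one common way to comment out swap entries in /etc/fstab, so adapt it to your own layout:

# Disable swap now and on every reboot (the kubelet refuses to start with swap enabled by default)
sudo swapoff -a
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab

# Load the kernel modules and sysctls Kubernetes networking relies on
printf "overlay\nbr_netfilter\n" | sudo tee /etc/modules-load.d/k8s.conf
sudo modprobe overlay
sudo modprobe br_netfilter
printf "net.ipv4.ip_forward = 1\nnet.bridge.bridge-nf-call-iptables = 1\n" | sudo tee /etc/sysctl.d/k8s.conf
sudo sysctl --system

If pods crash-loop later because the kubelet (which defaults to the systemd cgroup driver) and containerd disagree on cgroup drivers, setting SystemdCgroup = true in /etc/containerd/config.toml and restarting containerd is the usual fix.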

  1. Initialize the Kubernetes Cluster:

    On the master node, run the following command to initialize the Kubernetes cluster:

    sudo kubeadm init --pod-network-cidr=10.244.0.0/16
    

    The --pod-network-cidr flag specifies the IP address range for pods. 10.244.0.0/16 is a common choice for this range and works with the Calico setup below. Important: make sure to save the kubeadm join command printed at the end of this process – you will need it to join the worker nodes to the cluster.

  2. Configure kubectl:

    To use kubectl with the cluster, you need to configure it to connect to the API server. Run the following commands:

    mkdir -p $HOME/.kube
    sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    

    These commands copy the Kubernetes configuration file to your user's .kube directory and set the correct ownership.

  3. Deploy a Pod Network:

    Kubernetes requires a pod network to enable communication between pods. We'll use Calico, a popular and powerful networking solution. Apply the Calico manifest:

    kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml
    

    This command deploys Calico to your cluster. Give it a few minutes to initialize; the check just below shows one way to watch its progress.
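
    To confirm that Calico and the rest of the control-plane pods are coming up cleanly, you can watch the kube-system namespace (press Ctrl+C to stop watching):

    kubectl get pods -n kube-system -w

    Once the calico-node and coredns pods report Running, you're ready to move on to the workers.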

Joining Worker Nodes

With the master node set up, let's join the worker nodes to the cluster. On each worker node, run the kubeadm join command that was printed during the kubeadm init process on the master node. It should look something like this:

sudo kubeadm join <master-ip>:<master-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>

Replace <master-ip>, <master-port>, <token>, and <hash> with the values from the kubeadm join command you saved earlier.
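
If you didn't save the join command, or its token has expired (bootstrap tokens are only valid for 24 hours by default), you can print a fresh one from the master node:

sudo kubeadm token create --print-join-command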

Verifying the Cluster

After joining the worker nodes, let's verify that the cluster is set up correctly. On the master node, run the following command:

kubectl get nodes

You should see all the nodes listed, with their status as Ready. If any nodes show NotReady, give them a few minutes to initialize and try again. You can also check the logs of the kubelet service on the worker nodes for any errors:

sudo journalctl -u kubelet
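
It can also be useful to confirm that system pods have been scheduled onto every node; for example:

kubectl get pods -n kube-system -o wide

The -o wide output includes the node each pod landed on, which makes it easy to spot a worker that isn't picking up work.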

Deploying Your First Application

Now that the cluster is up and running, let's deploy a simple application to test it out: a basic Nginx deployment.

  1. Create a Deployment:

    Create a file named nginx-deployment.yaml with the following content:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
      labels:
        app: nginx
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:latest
            ports:
            - containerPort: 80
    

    This YAML file defines a deployment with two replicas of the Nginx container.

  2. Apply the Deployment:

    Run the following command to apply the deployment to the cluster:

    kubectl apply -f nginx-deployment.yaml
    
  3. Create a Service:

    To access the Nginx deployment from outside the cluster, we need to create a service. Create a file named nginx-service.yaml with the following content:

    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-service
    spec:
      selector:
        app: nginx
      ports:
      - protocol: TCP
        port: 80
        targetPort: 80
      type: LoadBalancer
    

    This YAML file defines a service of type LoadBalancer, which exposes the Nginx deployment outside the cluster. Keep in mind that a LoadBalancer service only gets an external IP if something can provision one – a cloud provider integration or an add-on such as MetalLB on bare metal; a NodePort alternative is sketched below.

  4. Apply the Service:

    Run the following command to apply the service to the cluster:

    kubectl apply -f nginx-service.yaml
    
  5. Access the Application:

    To access the application, get the external IP address of the service:

    kubectl get service nginx-service
    

    Look for the EXTERNAL-IP field. On a cloud provider it can take a few minutes to be assigned; on a bare-metal kubeadm cluster like this one it will stay <pending> unless you install a load-balancer add-on such as MetalLB (a NodePort alternative is sketched just below). Once you have an external IP address, open it in your web browser and you should see the default Nginx welcome page.
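
    If your cluster has no load-balancer integration and the EXTERNAL-IP never leaves <pending>, a NodePort service is a simple alternative for this kind of test. A sketch, using nginx-nodeport and port 30080 purely as example names and values:

    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-nodeport
    spec:
      selector:
        app: nginx
      type: NodePort
      ports:
      - protocol: TCP
        port: 80
        targetPort: 80
        nodePort: 30080

    Apply it with kubectl apply -f as before, then browse to http://<any-node-ip>:30080 (NodePort values must fall within 30000-32767).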

Troubleshooting

Setting up a Kubernetes cluster can sometimes be tricky. Here are a few common issues and how to resolve them:

  • Nodes Not Joining:

    • Firewall Issues: Ensure that the firewall is not blocking communication between the master and worker nodes. On the master you may need to allow traffic on ports 6443 (Kubernetes API server), 2379-2380 (etcd), 10250 (kubelet API), 10259 (kube-scheduler), and 10257 (kube-controller-manager); on the workers, 10250 and the NodePort range 30000-32767. A ufw sketch follows this list.
    • Incorrect kubeadm join Command: Double-check the kubeadm join command for any typos or incorrect values.
    • Network Connectivity: Verify that the worker nodes can reach the master node via its IP address and port.
  • Pods Not Starting:

    • Insufficient Resources: Ensure that the nodes have enough CPU and memory to run the pods.
    • Image Pull Errors: Check that the container images specified in the pod definitions are available and can be pulled from the registry.
    • Network Issues: Verify that the pod network is correctly configured and that pods can communicate with each other.
  • DNS Resolution Issues:

    • CoreDNS Not Running: Ensure that CoreDNS is running in the cluster. You can check its status with kubectl get pods -n kube-system.
    • DNS Configuration: Verify that the DNS configuration is correct in the /etc/resolv.conf file on the nodes.
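
If you're using ufw, the ports mentioned under Firewall Issues can be opened with rules along these lines – treat it as a sketch and adapt it to your own network policy:

# On the master (control plane) node
sudo ufw allow 6443/tcp        # Kubernetes API server
sudo ufw allow 2379:2380/tcp   # etcd
sudo ufw allow 10250/tcp       # kubelet API
sudo ufw allow 10259/tcp       # kube-scheduler
sudo ufw allow 10257/tcp       # kube-controller-manager

# On the worker nodes
sudo ufw allow 10250/tcp           # kubelet API
sudo ufw allow 30000:32767/tcp     # NodePort services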

Conclusion

And there you have it! You've successfully set up a Kubernetes cluster on Ubuntu 24.04 and deployed your first application. This is just the beginning, though. Kubernetes is a vast and powerful platform, and there's much more to explore. Experiment with different deployments, services, and networking configurations to deepen your understanding. Happy clustering, guys!

Additional Resources

Keep experimenting and have fun learning! Remember to always consult the official Kubernetes documentation (https://kubernetes.io/docs/) for the most up-to-date information and best practices.