Kubernetes setup on a bare-metal hypervisor (VMware ESXi 6.5)

In this post I provide a guide (with scripts) on how to install a 3-node Kubernetes cluster on the bare-metal hypervisor ESXi 6.5.
  • These scripts are for RHEL 8/CentOS 8/Oracle Linux 8 systems

Hypervisor

    Hardware Specs

  • Dell PowerEdge R710
  • 2x Intel(R) Xeon(R) CPU X5570 @ 2.93GHz
  • 48GB memory
  • SSD (1x 500GB, 1x 240GB)
  • HDD (1x 300GB 10K SAS)
  • HDD RAID-1 (2x 300GB 10K SAS)
  • HDD RAID-0 (3x 146GB 10K SAS)

        OS

    • ESXi-6.5.0 (Update 3)

    VM Layout


    Setup sequence ...

    1. Install Docker (on the master and all worker nodes)
    2. Configure prerequisites (SELinux, firewall, iptables, etc.) (on the master and all worker nodes)
    3. Install Kubernetes via the package repo (on the master and all worker nodes)
    4. Configure the master node to achieve the below...
      • Configure firewall ports for Kubernetes and Docker
      • Initialize the master node
      • Install the Calico container network interface (CNI) plugin for Kubernetes
      • Note down the token returned by --> kubeadm init
    5. Join the worker nodes to the master (a sketch of the kubeadm commands is shown below)
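
    For orientation, here is a minimal sketch of the kubeadm commands that steps 4 and 5 boil down to. The pod CIDR and the Calico manifest URL are assumptions on my part; the exact commands live in the scripts.

    # on k8s-master: initialize the control plane (pick a pod CIDR that does not overlap your LAN)
    kubeadm init --pod-network-cidr=10.85.0.0/16
    # on k8s-master: install the Calico CNI plugin
    kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
    # on each worker: join the cluster using the token printed by 'kubeadm init'
    kubeadm join 192.168.1.104:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>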
    If all went fine, you should end up with the below...
     
     kubectl get nodes -o wide
     
    NAME          STATUS   ROLES                  AGE   VERSION   INTERNAL-IP     KERNEL-VERSION                 CONTAINER-RUNTIME
    k8s-master    Ready    control-plane,master   94d   v1.20.2   192.168.1.104   4.18.0-240.22.1.el8_3.x86_64   docker://20.10.5
    k8s-worker1   Ready    <none>                 94d   v1.20.2   192.168.1.105   4.18.0-240.22.1.el8_3.x86_64   docker://20.10.5
    k8s-worker2   Ready    <none>                 94d   v1.20.2   192.168.1.106   4.18.0-240.22.1.el8_3.x86_64   docker://20.10.5
     
    In the next two posts I will walk you through how to change the container runtime from Docker to containerd and then to CRI-O.

    Migrate from Docker to containerd in Kubernetes

    In my previous post I walked you through how to set up a 3-node Kubernetes cluster.

    In this post I will show you how to change the container runtime from Docker to containerd.

    Kubernetes is deprecating Docker as a container runtime after v1.20. Dockershim, the layer that lets the kubelet talk to Docker (which in turn drives containerd), is deprecated and is scheduled for removal in version 1.22+.

    So if you are running Docker you need to switch to a runtime that implements the Container Runtime Interface (CRI). containerd is a good choice; it is already running on your Kubernetes nodes if you are running Docker.

    An extra advantage is less overhead, since the dockershim and Docker translation layers disappear.

     

    We will change one node at a time, first the worker nodes and then the control-plane node. Let's pick the k8s-worker2 node to switch first.

    1. Cordon and drain the node (from the k8s-master node, execute the below commands)
    kubectl cordon k8s-worker2
    kubectl drain k8s-worker2 --ignore-daemonsets
    2. Stop services (on the k8s-worker2 node)
    systemctl stop kubelet
    systemctl stop docker
    3. Remove Docker (optional) (on the k8s-worker2 node)
    yum remove docker-ce docker-ce-cli
    4. Generate the default config --> /etc/containerd/config.toml (on the k8s-worker2 node)
    sudo mkdir -p /etc/containerd
    containerd config default | sudo tee /etc/containerd/config.toml
    To use the systemd cgroup driver in /etc/containerd/config.toml with runc, set
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      ...
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true
    5. Restart containerd (on the k8s-worker2 node)
    systemctl restart containerd
    6. Edit the file /var/lib/kubelet/kubeadm-flags.env and add the containerd runtime flags (on the k8s-worker2 node); a rough example of the resulting file is shown after these steps.
    --container-runtime=remote and --container-runtime-endpoint=unix:///run/containerd/containerd.sock
    7. Start the kubelet (on the k8s-worker2 node) and uncordon the node (from k8s-master)
    systemctl start kubelet
    kubectl uncordon k8s-worker2
    8. Check by running
    kubectl get nodes -o wide
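
    For reference (step 6 above), here is a rough sketch of what /var/lib/kubelet/kubeadm-flags.env can end up looking like. The first two flags are assumptions from a default kubeadm v1.20 install; keep whatever flags are already present on your node and only append the two runtime flags.

    KUBELET_KUBEADM_ARGS="--network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2 --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock"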

    If all went fine, you should end up with...

     
    NAME          STATUS   ROLES                  AGE   VERSION   INTERNAL-IP     KERNEL-VERSION                 CONTAINER-RUNTIME
    k8s-master    Ready    control-plane,master   94d   v1.20.2   192.168.1.104   4.18.0-240.22.1.el8_3.x86_64   docker://20.10.5
    k8s-worker1   Ready    <none>                 94d   v1.20.2   192.168.1.105   4.18.0-240.22.1.el8_3.x86_64   docker://20.10.5
    k8s-worker2   Ready    <none>                 94d   v1.20.2   192.168.1.106   4.18.0-240.22.1.el8_3.x86_64   containerd://1.4.4
     
    In the next post I will walk you through how to change the container runtime from Docker to CRI-O.

    Migrate from Docker to cri-o in Kubernetes

    In my previous post I walked you through how to change the container runtime to containerd.

    In this post I will show you how to change the container runtime from Docker to CRI-O.

    Kubernetes is deprecating Docker as a container runtime after v1.20. Dockershim, the layer that lets the kubelet talk to Docker, is deprecated and is scheduled for removal in version 1.22+.

    So if you are running Docker you need to switch to a runtime that implements the Container Runtime Interface (CRI). CRI-O is a good choice: a Kubernetes-specific, lightweight, high-level runtime.

    An extra advantage is less overhead, since the dockershim and Docker translation layers disappear.

     



    We will change one node at a time, first the worker nodes and then the control-plane node. Let's pick the k8s-worker1 node to switch next.

    1. Cordon and drain the node (from the k8s-master node, execute the below commands)
    kubectl cordon k8s-worker1
    kubectl drain k8s-worker1 --ignore-daemonsets
    2. Stop services (on the k8s-worker1 node)
    systemctl stop kubelet
    systemctl stop docker
    3. Remove Docker (optional) (on the k8s-worker1 node)
    yum remove docker-ce docker-ce-cli
    4. Install CRI-O (on the k8s-worker1 node)
      • Set up 2 environment variables
    OS=CentOS_8
    VERSION=1.20
      • Download the repo files
    curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/devel:kubic:libcontainers:stable.repo
    curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo
      • Install CRI-O
    yum install crio
    5. Enable & start CRI-O (on the k8s-worker1 node)
    systemctl enable crio
    systemctl start crio
    6. Edit the file /var/lib/kubelet/config.yaml and change cgroupDriver (on the k8s-worker1 node).
    cgroupDriver: systemd
    7. Then edit /etc/sysconfig/kubelet to tell the kubelet we are going to use CRI-O (on the k8s-worker1 node).
    KUBELET_EXTRA_ARGS=--container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --runtime-request-timeout=10m --cgroup-driver="systemd"
    8. Start the kubelet (on the k8s-worker1 node) and uncordon the node (from k8s-master)
    systemctl start kubelet
    kubectl uncordon k8s-worker1
    9. Check by running
    kubectl get nodes -o wide

    If all went fine, you should end up with...

     
    NAME          STATUS   ROLES                  AGE   VERSION   INTERNAL-IP     KERNEL-VERSION                 CONTAINER-RUNTIME
    k8s-master    Ready    control-plane,master   94d   v1.20.2   192.168.1.104   4.18.0-240.22.1.el8_3.x86_64   docker://20.10.5
    k8s-worker1   Ready    <none>                 94d   v1.20.2   192.168.1.105   4.18.0-240.22.1.el8_3.x86_64   cri-o://1.20.0
    k8s-worker2   Ready    <none>                 94d   v1.20.2   192.168.1.106   4.18.0-240.22.1.el8_3.x86_64   containerd://1.4.4
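
    As an optional sanity check on k8s-worker1 (assuming crictl is installed there), you can query the CRI-O socket directly:

    crictl --runtime-endpoint unix:///var/run/crio/crio.sock version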

    FreeIPA - Install and Configuration

    In this tutorial, I will show you how to install FreeIPA (an open-source integrated identity and authentication solution for Linux and Unix-based systems) on CentOS 8.

    What is FreeIPA?

    FreeIPA is an open-source integrated identity and authentication solution for Linux and Unix-based systems. It provides centralized authentication by storing data about users, groups, hosts, and other objects, and offers an integrated identity management service for Linux, Mac, and Windows.
    FreeIPA is based on the 389 Directory Server, Kerberos, SSSD, Dogtag, NTP, and DNS.
    It provides a web-based interface to manage Linux users and clients in your realm from a central location.

    The setup and configuration is a 2-step process and requires a minimum of 2 machines...
    • a machine on which to install the FreeIPA server components
    • a machine on which to install the FreeIPA client components, which connects to and authenticates with the server

    Step 1 : Install FreeIPA server

    1. Edit the '/etc/hosts' file and add your server IP and hostname.
      • 192.168.1.107 services.localhost.com    services
    2. By default, the FreeIPA package is not available in the CentOS standard repository.
        Enable it with the below command
      • dnf module enable idm:DL1
      • Sync the repository with the below command
      • dnf distro-sync
      • Install the FreeIPA server
      • dnf install ipa-server ipa-server-dns -y
    3. Run ipa-server-install and answer the prompts
        To accept the default shown in brackets, press the Enter key.
        Do you want to configure integrated DNS (BIND)? [no]:
        Server host name [freeipa.mydomain10.com]:services.localhost.com
        Please confirm the domain name [mydomain10.com]:services.localhost.com
         
        The kerberos protocol requires a Realm name to be defined.
        This is typically the domain name converted to uppercase.
        Please provide a realm name [MYDOMAIN10.COM]:
         
        Certain directory server operations require an administrative user.
        This user is referred to as the Directory Manager and has full access
        to the Directory for system management tasks and will be added to the
        instance of directory server created for IPA.
        The password must be at least 8 characters long.
         
        Directory Manager password:
        Password (confirm): <specify-pwd-here>
         
        The IPA server requires an administrative user, named 'admin'.
        This user is a regular system account used for IPA server administration.
         
        IPA admin password:
        Password (confirm): <specify-pwd-here>
         
        Do you want to configure chrony with NTP server or pool address? [no]:
        Continue to configure the system with these values? [no]: yes
        The ipa-server-install command was successful
    4. Configure firewall rules
      • firewall-cmd --add-service={http,https,dns,ntp,freeipa-ldap,freeipa-ldaps} --permanent
        firewall-cmd --reload
    5. Open a browser and navigate to https://services.localhost.com; you should be redirected to a login page (log in as user 'admin' with the IPA admin password set above)
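    Optionally, run a quick sanity check from the server shell (this assumes the 'admin' account and password created during ipa-server-install):
      • kinit admin
      • ipa user-find admin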





    Step 2 : Install FreeIPA client

    1. Edit the '/etc/hosts' file and add the server and client IPs and hostnames.
      • 192.168.1.107 services.localhost.com    services
      • 192.168.1.103 ol83.localhost.com    ol83
    2. Install the FreeIPA client
      •   yum install ipa-client 
    3. Run ipa-client-install --mkhomedir and answer the prompts
        Provide the domain name of your IPA server (ex: example.com): services.localhost.com
        Provide your IPA server name (ex: ipa.example.com): services.localhost.com
        Proceed with fixed values and no DNS discovery? [no]: yes
        Do you want to configure chrony with NTP server or pool address? [no]:
        Client hostname: ol83.localhost.com
        Realm: LOCALHOST.COM
        DNS Domain: services.localhost.com
        IPA Server: services.localhost.com
        BaseDN: dc=localhost,dc=com

        Continue to configure the system with these values? [no]: yes
        Synchronizing time
        No SRV records of NTP servers found and no NTP server or pool address was provided.
        Using default chrony configuration.
        Attempting to sync time with chronyc.
        Time synchronization was successful.
        User authorized to enroll computers: admin
        Password for admin@LOCALHOST.COM:<admin-pwd-goes-here>

        Successfully retrieved CA cert

        The ipa-client-install command was successful
         
    4. Open a browser, go to https://services.localhost.com, and click on the 'Hosts' tab to see that the client host has been added...
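
    Optionally, verify the enrollment from the client shell; if SSSD is wired up correctly, IPA users should resolve on the client (a quick check, assuming the default configuration):
      • kinit admin
      • id admin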



    Deploying Kubernetes objects using Ansible playbook

    • In this post I will walk you through how to deploy/un-deploy Kubernetes objects using an Ansible playbook

    Pre-requisites

    • Set up an Ansible control machine (from where the playbooks will be run)
    • Define a user and SSH key
    • Allow the user to sudo without a password

    What we will do

    • Create an inventory file called "inventory.ini" in your home directory or any other folder. The default inventory location is '/etc/ansible/hosts' when a custom inventory file is not specified
    • Create an "ansible.cfg" file and point it at the inventory file created in the step above.
    • Create ansible playbook
    • Deploy namespace on Kubernetes using playbook

    Create Inventory file

    Create a file "inventory.ini" specifying the hosts info.
    [master]
    k8s-master kubernetes_role=master

    [workers]
    k8s-worker1 kubernetes_role=node
    k8s-worker2 kubernetes_role=node

    Create "ansible.cfg file"

    In the same folder where "inventory.ini" is created, create a file "ansible.cfg" and specify the below...

    [defaults]
    remote_user = <specify_user_with_SSH_setup>
    host_key_checking = false
    ask_pass = no
    inventory = inventory.ini
    interpreter_python = auto_legacy_silent

    [privilege_escalation]
    become = yes
    become_ask_pass = no
    become_method = sudo
    become_user = root

    Test the setup by trying to ping the hosts : 

        ansible all -m ping 


    The k8s module requires the OpenShift Python client to communicate with the Kubernetes API. So before using the k8s module, you need to install the client. Since it is installed with pip, we need to install pip as well.

    Create file ‘InstallAnsibleK8sModule.yaml’ with the below content

    - hosts: k8s-master

      pre_tasks:
        - name: Ensure Pip is installed.
          package:
            name: python3-pip
            state: present

        - name: Ensure OpenShift client is installed.
          pip:
            name: openshift
            state: present

    To run playbook

    ansible-playbook InstallAnsibleK8sModule.yaml

    Deploying Kubernetes objects can be achieved in 2 ways: from a YAML definition file, or from an inline definition.

    • Create a playbook 'CreateNamespace.yaml'

    Specify the task to read the object definition from a YAML file (an example demo-namespace.yaml is shown after the playbook).
    - hosts: k8s-master

      pre_tasks:
        - name: Ensure Pip is installed.
          package:
            name: python3-pip
            state: present

        - name: Ensure OpenShift client is installed.
          pip:
            name: openshift
            state: present
      tasks:
        - name: Creating Demo namespace by applying YAML definition file.
          k8s:
           state: present
           definition: "{{ lookup('file', 'demo-namespace.yaml') | from_yaml }}"
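
    The playbook above reads the object definition from a local file named 'demo-namespace.yaml'. A minimal example of what that file could contain (the namespace name and label are assumptions, mirroring the inline variant below):

    apiVersion: v1
    kind: Namespace
    metadata:
      name: demo-system
      labels:
        app: demo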

    • Specify the task to run from an inline definition.
    - hosts: k8s-master
      
      pre_tasks:
        - name: Ensure Pip is installed.
          package:
            name: python3-pip
            state: present

        - name: Ensure OpenShift client is installed.
          pip:
            name: openshift
            state: present
      tasks:
        - name: Creating DEMO namespace by applying inline definition.
          k8s:
            state: present
            definition:
              apiVersion: v1
              kind: Namespace
              metadata:
                name: demo-system
                labels:
                 app: demo 
     
    To run playbook

    ansible-playbook CreateNamespace.yaml

    Un-deploying Kubernetes objects...

    To remove the namespace, create another playbook 'RemoveNamespace.yaml' (which can again be inline or YAML-file based) and set the state to absent, as sketched below.
        state: absent
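
    A minimal sketch of 'RemoveNamespace.yaml' using the inline form (same assumed namespace as above):

    - hosts: k8s-master
      tasks:
        - name: Removing the DEMO namespace.
          k8s:
            state: absent
            definition:
              apiVersion: v1
              kind: Namespace
              metadata:
                name: demo-system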

    To run playbook
    ansible-playbook RemoveNamespace.yaml 

    Running VMs on Kubernetes with Kubevirt

    • In this post I will walk you through how to deploy a VM in Kubernetes using KubeVirt

    What is kubevirt?

    KubeVirt is a virtual machine management add-on for Kubernetes. The aim is to provide a common ground for virtualization solutions on top of Kubernetes.
    At its core, KubeVirt extends Kubernetes by adding additional virtualization resource types (especially the VM type) through Kubernetes' Custom Resource Definition (CRD) API. Using this mechanism, the Kubernetes API can be used to manage these VM resources alongside all other resources Kubernetes provides.

    Pre-requisites

    • You require a Kubernetes platform deployed on a cloud environment, a bare-metal instance, or a local computer
    • Ensure that your hosts are capable of running virtualization workloads

    What we will do

    • Download and install the KubeVirt operator
    • Download and install the CDI (Containerized Data Importer) operator
    • Download the virtctl CLI to manage the VMs
    • Deploy a VM using a specified ISO image
    • Download and install noVNC to access the VM via a web browser

    Download and Install KubeVirt operator

    Pick an upstream version of KubeVirt to install
    Deploy the KubeVirt operator
        export RELEASE=v0.46.0
        kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator.yaml

    Create the KubeVirt CR (instance deployment request) which triggers the actual installation

         kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-cr.yaml
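
    Optionally, wait for the KubeVirt deployment to report Available before continuing (per the KubeVirt quickstart):

         kubectl -n kubevirt wait kv kubevirt --for condition=Available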

    Download and Install CDI custom operator

    Pick a CDI version to install, then apply the operator and its CR (see the commands below)
        export VERSION=v0.40.0
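
    The CDI project publishes its operator and CR manifests as release assets; applying them follows the same pattern as the KubeVirt operator above (I'm assuming the standard manifests from the CDI release page were used here):

        kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-operator.yaml
        kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-cr.yaml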

    Download the virtctl CLI to manage the VMs

    Get virtctl to manage the VMs

        wget -O virtctl https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/virtctl-${RELEASE}-linux-amd64

       chmod +x virtctl
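
    Optionally, move the binary onto your PATH so the 'virtctl' commands later in this post work from any directory (the target path is an assumption):

       sudo mv virtctl /usr/local/bin/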

    Deploy a VM using a specified ISO image

    • Create 2 PVs for the VM, which will serve as:
      • the CDROM the ISO will boot from
      • the VM HDD the OS will be installed onto

    apiVersion: v1
    kind: PersistentVolume
    metadata:
        name: pv-nfs-vmdata
        namespace: cdi
        labels:
          type: nfs
          app: vm-system
    spec:
        storageClassName: vm-machine
        capacity:
          storage: 2Gi
        accessModes:
          - ReadWriteMany
        nfs:
          server: 192.168.1.107
          path: "/nfs_shares/data/vm/fedora/netiso"

    apiVersion: v1
    kind: PersistentVolume
    metadata:
        name: pv-nfs-vmdata1
        namespace: cdi
        labels:
          type: nfs
          app: vm-system
    spec:
        storageClassName: vm-machine1
        capacity:
          storage: 5Gi
        accessModes:
          - ReadWriteMany
        nfs:
          server: 192.168.1.107
         path: "/nfs_shares/data/vm/fedora"

    • Create the PVCs; pay attention to the annotation on the CDROM PVC. This is where you specify the ISO image for CDI to import for KubeVirt to use.
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
        name: pvc-fedora-netinst
        namespace: cdi
        labels:
          app: containerized-data-importer
        annotations:
          cdi.kubevirt.io/storage.import.endpoint: "http://mirror.math.princeton.edu/pub/centos/8-stream/isos/x86_64/CentOS-Stream-8-x86_64-20211007-boot.iso"
    spec:
        storageClassName: vm-machine
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: 2Gi

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
        name: pvc-fedora-hd
        namespace: cdi
        labels:
          type: nfs
          app: vm-system
    spec:
        storageClassName: vm-machine1 
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: 5Gi
     
    • Create the VirtualMachine deployment.
    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: fedora-vm
      namespace: cdi
    spec:
      running: false
      template:
        metadata:
          labels:
            kubevirt.io/domain: fedora-vm
            app: vm-system
        spec:
          domain:
            cpu:
              cores: 2
            devices:
              disks:
              - bootOrder: 1
                cdrom:
                  bus: sata
                name: cdromiso
              - disk:
                  bus: virtio
                name: harddrive
            machine:
              type: q35
            resources:
              requests:
                memory: 1G
          volumes:
          - name: cdromiso
            persistentVolumeClaim:
              claimName: pvc-fedora-netinst
          - name: harddrive
            persistentVolumeClaim:
              claimName: pvc-fedora-hd

    To start the VM

    The deployed VM will be in a stopped state (the manifest sets running: false). To start the VM, execute the below command
       [root@K8s-master Downloads]# virtctl start fedora-vm -n cdi
       VM fedora-vm was scheduled to start

    To access the VM via Web Console

    Deploy noVNC

    Since noVNC is exposed as a NodePort service, get the port number exposed by Kubernetes
    kubectl get svc -n kubevirt virtvnc

    Open a web browser and go to http://<node-ip>:<node-port>, using the IP of one of your cluster nodes and the NodePort returned by the previous command.

    If all goes well you should see the below in your web browser


    Clicking on VNC will bring up the install/setup screen for the VM