Migrate from Docker to cri-o in Kubernetes

In my previous post I walked you through changing the container runtime to containerd.

In this post I will show you how to change the container runtime from Docker to CRI-O.

Kubernetes is deprecating Docker as a container runtime after v1.20. Dockershim, the compatibility layer between the kubelet and the Docker Engine, is deprecated and will be removed from version 1.22+.

So if you are running Docker, you need to switch to a runtime that implements the Container Runtime Interface (CRI). CRI-O is a good choice: a Kubernetes-specific, high-level runtime.

An extra advantage is less overhead, since the dockershim and Docker translation layers are gone.
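Before touching anything, it is worth confirming which runtime each node is currently on. A quick check from the control node (node names below are from my lab cluster):

kubectl get nodes -o wide
# the CONTAINER-RUNTIME column shows docker://<version> for nodes still running Docker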

We will change one node at a time: first the worker nodes, then the control node. Let's pick the k8s-worker1 node to switch first.

  1. Cordon and drain the node (run the commands below from the k8s-master node)
kubectl cordon k8s-worker1
kubectl drain k8s-worker1 --ignore-daemonsets
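If you want to double-check that the drain actually emptied the node, you can list whatever is still scheduled there; only DaemonSet pods should remain. A small sanity check, assuming the same node name:

kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=k8s-worker1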
  2. Stop services (on the k8s-worker1 node)
systemctl stop kubelet
systemctl stop docker
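Optionally, verify both services are really stopped before going further:

systemctl is-active kubelet docker
# both should print "inactive" (a non-zero exit code is expected for stopped units)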
  3. Remove Docker (optional) (on the k8s-worker1 node)
yum remove docker-ce docker-ce-cli
  4. Install CRI-O (on the k8s-worker1 node)
    • Set up two environment variables (these must match your OS and Kubernetes version)
OS=CentOS_8
VERSION=1.20
    • Download repos
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/devel:kubic:libcontainers:stable.repo
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo
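Before installing, you can confirm the two new repos are visible to yum (the repo IDs may differ slightly depending on how the .repo files are named):

yum repolist | grep -i kubic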
    • Install cri-o
yum install cri-o
  5. Enable and start CRI-O (on the k8s-worker1 node)
systemctl enable crio
systemctl start crio
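To make sure CRI-O is up and answering on its socket before pointing the kubelet at it, you can query it with crictl (usually already present on a kubeadm-built node via the cri-tools package; the endpoint flag is passed explicitly since crictl has not been configured for CRI-O yet):

crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
crictl --runtime-endpoint unix:///var/run/crio/crio.sock info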
  6. Edit the file /var/lib/kubelet/config.yaml and change the cgroupDriver setting (on the k8s-worker1 node):
cgroupDriver: systemd
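If you prefer a one-liner to opening an editor, something like this should work (a sketch; double-check the file afterwards):

sed -i 's/^cgroupDriver:.*/cgroupDriver: systemd/' /var/lib/kubelet/config.yaml
grep cgroupDriver /var/lib/kubelet/config.yaml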
  7. Then edit /etc/sysconfig/kubelet to tell the kubelet to use CRI-O (on the k8s-worker1 node). Note that KUBELET_EXTRA_ARGS must all be on a single line:
KUBELET_EXTRA_ARGS=--container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --runtime-request-timeout=10m --cgroup-driver="systemd"
  8. Start the kubelet (on the k8s-worker1 node) and uncordon the node (from k8s-master)
systemctl start kubelet
kubectl uncordon k8s-worker1
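The node can take a little while to report Ready again under the new runtime. If you want to wait on it explicitly rather than polling, kubectl can do that from the master:

kubectl wait --for=condition=Ready node/k8s-worker1 --timeout=5m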
  9. Check the result by running (from k8s-master)
kubectl get nodes -o wide

If all went fine, you should end up with something like this:

NAME          STATUS   ROLES                  AGE   VERSION   INTERNAL-IP     KERNEL-VERSION                 CONTAINER-RUNTIME
k8s-master    Ready    control-plane,master   94d   v1.20.2   192.168.1.104   4.18.0-240.22.1.el8_3.x86_64   docker://20.10.5
k8s-worker1   Ready    <none>                 94d   v1.20.2   192.168.1.105   4.18.0-240.22.1.el8_3.x86_64   cri-o://1.20.0
k8s-worker2   Ready    <none>                 94d   v1.20.2   192.168.1.106   4.18.0-240.22.1.el8_3.x86_64   containerd://1.4.4
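As a last sanity check on the migrated node itself, the workloads should now show up as CRI-O containers rather than Docker ones (run on k8s-worker1; the endpoint flag is only needed if crictl has not been configured for CRI-O):

crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps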