Kubernetes Cluster Autoscaler for OpenStack
Introduction
Kubernetes Cluster Autoscaler is a tool that automatically adjusts the size of a Kubernetes cluster. When there are pods that failed to run in the cluster due to insufficient resources, the cluster autoscaler kicks in and adds new Kubernetes worker nodes to the cluster. This article is about Kubernetes Cluster Autoscaler for OpenStack, a learning Golang project of mine. If you are looking for a Kubernetes Cluster Autoscaler for OpenStack to run on a production cluster, visit the official Kubernetes Autoscaler GitHub project. If you would like to learn a little bit about my first Golang project, keep scrolling.
Why Kubernetes Cluster Autoscaler?
There are several ways to install a Kubernetes cluster: kubeadm, Kubernetes Operations (kops), Kubicorn, Kubernetes the Hard Way, and there are even community-developed Terraform and Ansible/Puppet resources to install a Kubernetes cluster. At cluster installation time, we have to decide the worker node count. In a production cluster, the resources in those worker nodes can run out fast. The use of the Horizontal Pod Autoscaler (HPA) also contributes to resource outages. So people either provision more compute resources, which leads to an underutilized Kubernetes cluster, or provision fewer compute resources, which leads to an overutilized cluster. Using an autoscaler is therefore a best practice. Major cloud providers support this feature in their managed clusters. Refer to this link to find all the supported cloud providers. The official Kubernetes Cluster Autoscaler supports OpenStack Magnum as well.
Difference between Official Cluster Autoscaler and mine
As mentioned earlier, this is a learning project, started when I began learning Golang, and the full project is written purely in Golang. The second reason is that the official Kubernetes Cluster Autoscaler uses Magnum. So, to remove Magnum from the picture, I wrote my own version of Kubernetes Cluster Autoscaler for OpenStack. My version works only with Nova, Glance, Neutron, and Horizon.
I used the tools and libraries below to write my autoscaler. The source code is hosted here (https://github.com/Chathuru/kubernetes-cluster-autoscaler):
- Golang 1.14
- Kubernetes SDK client-go
- OpenStack SDK gophercloud
- YAML
- Plugin
The autoscaler listens to pod watch events. When a pod fails to schedule due to insufficient resources, the autoscaler captures that event. Then it spawns a new virtual machine by calling the OpenStack APIs. When pods are deleted from a worker node and the node stays idle for a certain period of time, the autoscaler removes the node from the cluster.
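The scale-up trigger described above can be sketched as a small decision function. This is a simplified sketch, not the project's actual code: the `PodEvent` struct and `needsScaleUp` name are illustrative stand-ins for the `corev1.Pod` data that client-go delivers on the watch channel.

```go
package main

import (
	"fmt"
	"strings"
)

// PodEvent is a simplified stand-in for the pod fields the autoscaler
// inspects on its watch channel (names are illustrative, not the
// project's actual types).
type PodEvent struct {
	Phase   string // pod phase, e.g. "Pending"
	Reason  string // scheduling condition reason, e.g. "Unschedulable"
	Message string // scheduler message, e.g. "0/3 nodes are available: 3 Insufficient cpu."
}

// needsScaleUp reports whether a watched pod failed to schedule because
// of insufficient resources -- the event that would trigger spawning a
// new worker VM through the OpenStack APIs.
func needsScaleUp(e PodEvent) bool {
	return e.Phase == "Pending" &&
		e.Reason == "Unschedulable" &&
		strings.Contains(e.Message, "Insufficient")
}

func main() {
	events := []PodEvent{
		{Phase: "Pending", Reason: "Unschedulable", Message: "0/3 nodes are available: 3 Insufficient cpu."},
		{Phase: "Running"},
	}
	for _, e := range events {
		fmt.Printf("phase=%s scale-up=%v\n", e.Phase, needsScaleUp(e))
	}
}
```

In the real autoscaler this check runs against events from a client-go watch on the pods resource rather than a hard-coded slice.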
A YAML file is used to configure the autoscaler. OpenStack authentication details, the worker node minimum and maximum counts, and the time to wait before removing an idle worker node can all be configured in this conf.yml file.
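For illustration, a conf.yml covering those settings could look roughly like the sketch below. The field names here are my guesses for illustration only; check the sample configuration in the repository for the real keys.

```yaml
# Hypothetical conf.yml sketch -- field names are illustrative,
# see the repository's sample config for the actual keys.
openstack:
  auth_url: https://openstack.example.com:5000/v3
  username: autoscaler
  password: secret
  project_name: kubernetes
  region: RegionOne
cluster:
  min_worker_count: 2
  max_worker_count: 10
  idle_timeout_minutes: 15   # wait this long before removing an idle node
```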
We can write plugins and put them into the <git_repo_source>/bin/plugin directory to add support for other cloud providers and hypervisors. Plugins must include “ModifyEventAnalyzer” and “DeleteEventAnalyzer” functions. When loading a plugin, the autoscaler looks for these two functions.
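On the loading side, the check for those two symbols can be sketched with Go's standard `plugin` package. This is a sketch under assumptions: the plugin path and error handling are illustrative, and the real project defines concrete signatures for the two functions, which are not shown here.

```go
package main

import (
	"fmt"
	"plugin"
)

// checkPlugin opens a compiled plugin (a .so built with
// `go build -buildmode=plugin`) and verifies that it exports the two
// functions the autoscaler requires.
func checkPlugin(path string) error {
	p, err := plugin.Open(path)
	if err != nil {
		return fmt.Errorf("opening plugin %s: %w", path, err)
	}
	for _, name := range []string{"ModifyEventAnalyzer", "DeleteEventAnalyzer"} {
		if _, err := p.Lookup(name); err != nil {
			return fmt.Errorf("plugin %s is missing required function %s: %w", path, name, err)
		}
	}
	return nil
}

func main() {
	// Path is illustrative; plugins live under <git_repo_source>/bin/plugin.
	if err := checkPlugin("bin/plugin/example.so"); err != nil {
		fmt.Println("plugin check failed:", err)
	}
}
```

Note that Go's `plugin` package is only supported on a subset of platforms (notably Linux), which is another reason plugin support stays optional.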
Currently, using Kubernetes Cluster Autoscaler for OpenStack takes a bit of work. First, install a Kubernetes cluster using kubeadm. Then remove kube-proxy from the master node. Create a worker node image by referring to this Kubernetes the Hard Way article from Mumshad. To make it easier, I copied the certificates and configuration from the kube-proxy installed by kubeadm before deleting it. You can also refer to the original Kubernetes the Hard Way article from Kelsey Hightower at this link.