Where Should I Deploy My K8s Cluster?


This week there was an announcement about Amazon Red Hat OpenShift, an enterprise Kubernetes (K8s) service on AWS jointly managed and supported by AWS and Red Hat. Upon reading more about the service, I found out that Red Hat already has two more OpenShift offerings available on AWS. If you count AWS's own managed K8s service, Amazon Elastic Kubernetes Service (EKS), there are four different ways to run a K8s cluster on top of AWS, and I am sure many other companies provide similar managed K8s services on AWS as well.

For a beginner starting to use K8s, this is overwhelming. This brings us to the question: where should I deploy my K8s cluster?

As always, the answer is: It Depends

It depends on:

  • Your team’s K8s expertise
  • Your company’s budget
  • Your data locality requirements
  • Your preferred Cloud Vendor
  • Your company’s existing deals with software vendors
  • and many more.

You could run your K8s cluster in any of the following environments:

  • On Premises
  • Hybrid Cloud
  • IaaS
  • PaaS
  • Others

On Premises

If you already own data centers, or if you have strict privacy/security requirements that the data must not leave your premises, then an on-premises solution is the way to go.

Install a host operating system on the bare-metal servers and then install K8s on top of it. kubeadm is one option. However, be aware that this is a bare-bones solution: you have to build or integrate authentication, authorization, a dashboard, security, networking plugins, a service mesh, storage, and the list goes on.
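To make that concrete, here is a minimal sketch of what a kubeadm-based bootstrap might look like, wrapped in a small Python script purely for illustration. It assumes kubeadm, kubectl, and a container runtime are already installed on the host; the pod CIDR and the Flannel manifest URL are assumptions, so swap in whichever CNI plugin you actually want.

```python
import subprocess

# Assumptions for illustration: adjust the pod CIDR and CNI manifest
# to match the network plugin you actually want to run.
POD_CIDR = "10.244.0.0/16"
CNI_MANIFEST = "https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml"

def run(cmd):
    """Print and execute a command, failing loudly if it errors."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Initialise the first control-plane node
run(["kubeadm", "init", f"--pod-network-cidr={POD_CIDR}"])

# Install a CNI plugin so pods can reach each other
run(["kubectl", "apply", "-f", CNI_MANIFEST])

# Worker nodes would then join using the token printed by `kubeadm init`
```

Even this tiny sketch only gets you a bare cluster; everything else on the list above is still on you.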

You could choose to install OpenShift or Rancher instead; these come fully loaded.

On-premises solutions are usually the slowest to get up and running because you have to deal with hardware: it takes time to order, ship, install, and configure the servers.

Hybrid Cloud

AWS Outposts, Google Anthos, and Azure Stack provide racks full of servers that you can install in your own data center. These racks are connected to the vendor's public cloud, and you manage them just like VMs in the public cloud.

This option gives you the flexibility of cloud deployment with the advantage of not having to manage the hardware yourself.

Keep in mind that this is the costliest option; the bill can easily run into millions.

Once you have an Outposts, Anthos, or Azure Stack rack on premises, you can use the vendor's managed K8s solution on top of it. Google Anthos GKE is one such option.

The timeline depends on the cloud provider, and honestly I have no idea what it looks like.

IaaS

If you need full control of the K8s cluster and you are a pro at managing K8s, then this is the option to go with.

You install K8s on top of Amazon EC2, Google Compute Engine, or Azure Virtual Machines.

Several K8s deployment tools exist, such as kops, Kubespray, and KRIB. You can also install Red Hat OpenShift or Rancher on the virtual machines.

Use this option only when you have experience running K8s clusters.
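As a rough illustration of the IaaS route, the sketch below drives kops from a small Python script to stand up a cluster on AWS. The cluster name, S3 state store, zone, node count, and instance size are all placeholder assumptions, not a recommended production setup.

```python
import os
import subprocess

# Placeholder values: replace the state store, cluster name and zone
# with your own before running anything like this.
env = dict(os.environ, KOPS_STATE_STORE="s3://my-kops-state-store")

subprocess.run(
    [
        "kops", "create", "cluster",
        "--name=dev.k8s.example.com",
        "--zones=us-east-1a",
        "--node-count=2",
        "--node-size=t3.medium",
        "--yes",  # apply immediately instead of only writing the cluster spec
    ],
    env=env,
    check=True,
)
```

You still own upgrades, scaling, etcd backups, and everything else the cluster needs afterwards, which is why this path is best left to teams with real operational experience.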

PaaS

If all you want is a K8s cluster and you don't know, or don't want to know, about K8s cluster management, then this option is for you.

Managed K8s solutions like Google GKE, Amazon EKS, Amazon Red Hat OpenShift, and Azure AKS fit the bill.

You click a button and you get a cluster, along with the kubeconfig/credentials to access it.
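Once you have that kubeconfig, a quick sanity check is to point a client at the new cluster. Below is a minimal sketch using the official Kubernetes Python client; it assumes the kubeconfig sits in the default location (~/.kube/config).

```python
from kubernetes import client, config

# Load the kubeconfig handed out by the managed service (default: ~/.kube/config)
config.load_kube_config()

v1 = client.CoreV1Api()
for node in v1.list_node().items:
    # Each node is a VM (or virtual node) provisioned and managed by the provider
    print(node.metadata.name, node.status.node_info.kubelet_version)
```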

You might want to customize some options: enable logging, move the API server to a private endpoint, and so on.

Usually this is a good place to start for development clusters. Deploy the K8s cluster, tune it, test it, run your applications, and then customize further.

Since this is a managed solution, you will not have full control of the cluster. You have to use whichever versions the provider supports, and you don't get access to the API server or etcd nodes beyond a handful of configuration flags.

Others

Minikube, kind, and k3s are for development purposes. These tools are lightweight and designed to run on your laptop.

These solutions can be used for learning K8s and for local testing of your applications.
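For example, once a local kind or Minikube cluster is up, you can smoke-test the same client code and manifests you plan to ship elsewhere. The sketch below uses the Kubernetes Python client to create a throwaway nginx Deployment; the "kind-kind" context name is an assumption based on kind's default naming.

```python
from kubernetes import client, config

# "kind-kind" is the default context created by `kind create cluster`;
# for Minikube you would use context="minikube" instead.
config.load_kube_config(context="kind-kind")

apps = client.AppsV1Api()
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello", labels={"app": "hello"}),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "hello"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="hello",
                    image="nginx:alpine",
                    ports=[client.V1ContainerPort(container_port=80)],
                )
            ]),
        ),
    ),
)

# Deploy into the default namespace of the local cluster
apps.create_namespaced_deployment(namespace="default", body=deployment)
print("Deployment created; check it with `kubectl get pods`")
```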

K8s distributions like Red Hat OpenShift or Rancher can be installed on bare metal, IaaS, and PaaS. This option is usually useful when you have more than one type of infrastructure and want to use the same K8s distribution everywhere. You could build automation on top of it and deploy it anywhere you want.

Conclusion

In this blog post I have tried to cover the different ways you can deploy your Kubernetes cluster. The list is not exhaustive, and I might have missed some options.

Kubernetes has become the industry standard for running containers and all the major public cloud providers have K8s services.

The purpose of this blog post is to showcase the variety of K8s deployment options available to you, be it a bare-metal server, a virtual machine, or a managed solution like this week's announcement of Amazon Red Hat OpenShift.
