CaaS Part 1 - Kubernetes on CentOS 7.x Bare Metal

I’ve found these talks really helpful in understanding what Kubernetes does and how it works at an architectural level:

“Hello World” with Kubernetes

I’d highly recommend starting your first cluster in Google Container Engine (which is Kubernetes under the hood) by following the Quickstart for Kubernetes on GCE guide, then moving on to deploying a sample app such as a simple PHP/Redis guestbook to get a feel for how to use and interact with Kubernetes at an introductory level. New accounts currently get $300 in credits, so this shouldn’t cost you anything for a good while.
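
If you prefer the command line over the web console, spinning up that first Container Engine cluster takes only a couple of commands. A minimal sketch, assuming the gcloud SDK is installed and authenticated (the cluster name here is made up):

# Create a small three-node cluster and point kubectl at it
$ gcloud container clusters create guestbook-demo --num-nodes=3
$ gcloud container clusters get-credentials guestbook-demo
$ kubectl get nodes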

Deploying with Kubespray

So now that we know what Kubernetes does and a few things about how apps and services get deployed, exposed, and scaled in Kubernetes, it’s time to build our own cluster. For that task, I’m choosing the Kubespray project. It follows a similar pattern to that of the Kolla project: clone a repo, edit your inventory and some vars in a file, then say “go”. Out pops a working cluster.

Requirements

What’s needed:

  • Some spare hardware on the same subnet (e.g. 172.22.10.x/24)
  • All systems running a recent Linux and Docker (e.g. CentOS 7.x, fully updated, with Docker 1.12.3)
  • Kubespray cloned to a deployment host
  • Internet access on all nodes
  • Firewall disabled on all nodes
  • An SSH key authorized for a sudo-enabled account on all nodes (see the sketch after this list)
  • Ansible 2.x and python-netaddr installed on the deploy system
  • CentOS 7 specific: sudo’s requiretty restriction disabled in /etc/sudoers on all nodes (covered under Node Preparation below)
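
For the SSH key requirement, here’s a minimal sketch assuming a bash shell on the deploy host and ssh-copy-id available (it isn’t stock on OS X, but it’s a brew install away); the account name and IPs match the inventory below:

# Generate a key on the deploy host and push it to every node
$ ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa
$ for ip in 172.22.10.{51..58}; do ssh-copy-id admin@$ip; done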

Deploy System

On my OS X system, I did the following:

  • git clone https://github.com/kubernetes-incubator/kargo
  • cd kargo
  • pip install -r requirements.txt
  • cp inventory/inventory.example inventory/hosts
  • Edited inventory/hosts to be:
[nodes]
mm1 ansible_ssh_host=172.22.10.51 ip=172.22.10.51 ansible_user=admin
mm2 ansible_ssh_host=172.22.10.52 ip=172.22.10.52 ansible_user=admin
mm3 ansible_ssh_host=172.22.10.53 ip=172.22.10.53 ansible_user=admin
smpc ansible_ssh_host=172.22.10.54 ip=172.22.10.54 ansible_user=admin
sm1 ansible_ssh_host=172.22.10.55 ip=172.22.10.55 ansible_user=admin
sm2 ansible_ssh_host=172.22.10.56 ip=172.22.10.56 ansible_user=admin
d1 ansible_ssh_host=172.22.10.57 ip=172.22.10.57 ansible_user=admin
d2 ansible_ssh_host=172.22.10.58 ip=172.22.10.58 ansible_user=admin

[kube-master]
mm1
mm2

[etcd]
mm1
mm2
mm3

[kube-node]
mm1
mm2
mm3
smpc
sm1
sm2
d1
d2

[k8s-cluster:children]
kube-node
kube-master

I decided to make all my nodes schedulable as “minions”, run two “masters” on mm1 and mm2, and put the three etcd nodes on the three Mac minis. Added up, this provides over 80 vCPUs and 300+ GB of RAM, all running on top of local SSDs.
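
Before deploying, it’s worth a quick sanity check that Ansible can reach and escalate privileges on every node. From the kargo checkout (the sudo check will only pass once the requiretty tweak in the next section is in place):

# Every node should answer with "pong"
$ ansible -i inventory/hosts all -m ping
# Every node should print "root" if sudo escalation works
$ ansible -i inventory/hosts all -b -m command -a "whoami"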

Node Preparation

On each of the nodes, I ensured that

Defaults    !requiretty

was in my /etc/sudoers file.
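
A quick way to confirm the change took effect is to run a non-interactive sudo over SSH; with requiretty still in place, this fails with “sudo: sorry, you must have a tty to run sudo” instead of printing root:

$ ssh admin@172.22.10.51 'sudo whoami'
root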

I then ran:

systemctl stop firewalld
systemctl disable firewalld

to disable the firewall for now.
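
Instead of SSHing into each node in turn, the same change can be pushed to every node from the deploy host with an ad-hoc Ansible command; a sketch using the service module:

# Stop and disable firewalld on every node in the inventory
$ ansible -i inventory/hosts all -b -m service -a "name=firewalld state=stopped enabled=no"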

Kubespray

Finally, on the deployment host, I ran:

$ ansible-playbook -i inventory/hosts -b --become-user=root cluster.yml

About 15-20 minutes later, I had a working Kubernetes v1.4.3 cluster across all my nodes. To verify, I SSHed into mm1 with ssh admin@172.22.10.51 and ran:

$ kubectl get nodes
NAME      STATUS    AGE
d1        Ready     1d
d2        Ready     1d
mm1       Ready     1d
mm2       Ready     1d
mm3       Ready     1d
sm1       Ready     1d
sm2       Ready     1d
smpc      Ready     1d

and

$ kubectl cluster-info 
Kubernetes master is running at http://localhost:8080
dnsmasq is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/dnsmasq

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

and

$ kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                   
etcd-0               Healthy   {"health": "true"}   
etcd-1               Healthy   {"health": "true"}   
etcd-2               Healthy   {"health": "true"}
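
As a final smoke test (this wasn’t part of my original run), you can launch a throwaway deployment and confirm that pods actually get scheduled across the minions. In this 1.4-era release, kubectl run creates a Deployment; the name here is made up:

# Launch two nginx replicas, watch where they land, then clean up
$ kubectl run smoke-test --image=nginx --replicas=2
$ kubectl get pods -o wide
$ kubectl delete deployment smoke-test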

Next Steps

At this point, I have a working Kubernetes cluster with a 3-node etcd cluster (fully TLS enabled), 2 master nodes, and 8 total worker/minion nodes, all with almost zero additional configuration. This sets me up for following the Multi-Node Kolla-Kubernetes guide to get Kolla/OpenStack running on the cluster. That, however, is for another day.
