This is yet another container-related article. You might ask: do we really need another tool? Aren't things complicated enough already? The answer is not simple.
Containers as a concept and Docker as an implementation have many practical applications, spanning from the development cycle all the way to deployment in production environments. In fact, this is one of Docker's main selling points: you can run in production exactly the same artifact you run in your development environment.
The problem is that in today's world, applications are more complicated than they used to be. We need ways to support auto-scaling and high availability not only for isolated components (containers) but for whole environments as well. We also need self-healing and disaster recovery with as much automation as possible. The microservices paradigm relies on having large numbers of minimal systems that talk to each other. Without automation it is impossible to manage them, and without monitoring and alerting it is impossible to catch glitches in time to provide highly available services of high quality.
Some of the tools we have today cover some of these challenges but leave others unaddressed.
Kubernetes tries to provide an enterprise-level answer to most of these problems, and in this article we will take a glimpse at its functionality.
Using GCE is not mandatory; Kubernetes supports alternative provider configurations (for example Vagrant, AWS, or Azure). The backend used during this investigation was Vagrant.
Kubernetes provides tools to manage containerized workloads across a cluster: scheduling, replication, self-healing, and service discovery.
The main entity in Kubernetes is not the container. It is a Pod.
Pods are the smallest deployable units that can be created, scheduled, and managed. A pod corresponds to a colocated group of applications running with a shared context.
A pod models an application-specific "logical host" in a containerized environment. It may contain one or more applications which are relatively tightly coupled.
The context of the pod can be defined as the conjunction of several Linux namespaces:

- PID namespace: applications within the pod can see each other's processes.
- Network namespace: applications within the pod have access to the same IP and port space.
- IPC namespace: applications within the pod can communicate using SystemV IPC or POSIX message queues.
- UTS namespace: applications within the pod share a hostname.
In terms of Docker constructs, a pod consists of a colocated group of Docker containers with shared volumes. PID namespace sharing is not yet implemented with Docker.
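To make the "shared context" concrete, here is a minimal sketch of a v1 pod manifest with two co-located containers sharing a volume. The names, images, and paths are illustrative assumptions, not taken from the article:

```shell
# Sketch of a v1 pod manifest: an nginx container plus a co-managed helper,
# sharing an emptyDir volume (names and images are assumptions).
cat > /tmp/web-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: content-helper          # helper program co-managed with nginx
    image: busybox
    command: ["sh", "-c", "echo hello > /data/index.html && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
EOF
# kubectl create -f /tmp/web-pod.yaml   # would submit the pod to the API server
```

Both containers land on the same node, share the pod's IP, and see the same volume, which is exactly the "colocated group with shared volumes" described above.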
Pods can be used to host vertically integrated application stacks, but their primary motivation is to support co-located, co-managed helper programs.
Individual pods are not intended to run multiple instances of the same application, in general.
Pods aren't intended to be treated as durable pets. They won't survive scheduling failures, node failures, or other evictions.
Users should almost always use controllers (e.g., replication controller), even for singletons.
Controllers provide self-healing with a cluster scope, as well as replication and rollout management.
The current best practice for pets is to create a replication controller with replicas equal to 1 and a corresponding service.
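A sketch of that best practice follows: a replication controller with replicas equal to 1 plus a corresponding service giving the singleton a stable address. All names and the image are illustrative assumptions:

```shell
# Sketch of the singleton ("pet") pattern: RC with replicas: 1 + a service.
# Names and image are assumptions, not from the article.
cat > /tmp/pet-rc.yaml <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-db
spec:
  replicas: 1                 # a singleton, but still self-healing
  selector:
    app: my-db
  template:
    metadata:
      labels:
        app: my-db
    spec:
      containers:
      - name: my-db
        image: postgres
        ports:
        - containerPort: 5432
EOF
cat > /tmp/pet-svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-db
spec:
  selector:
    app: my-db              # gives the singleton a stable address
  ports:
  - port: 5432
EOF
# kubectl create -f /tmp/pet-rc.yaml
# kubectl create -f /tmp/pet-svc.yaml
```

If the pod dies, the controller replaces it, and the service keeps routing to whichever replacement carries the `app: my-db` label.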
Kubernetes tries to solve 4 distinct networking problems:

1. Highly-coupled container-to-container communications: solved by pods and localhost communication.
2. Pod-to-pod communications.
3. Pod-to-service communications: covered by services.
4. External-to-service communications: also covered by services.
Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies):

- All containers can communicate with all other containers without NAT.
- All nodes can communicate with all containers (and vice versa) without NAT.
- The IP that a container sees itself as is the same IP that others see it as.
What this means in practice is that you cannot just take two computers running Docker and expect Kubernetes to work. You must ensure that the fundamental requirements are met.
In order to meet these requirements, we need to use a software-defined network (SDN) and wire it up to the Docker daemons on all of our nodes.
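As a rough sketch of what that wiring looks like on a single node (the CIDR values here are assumptions, not from the article): each node's Docker bridge is given its own slice of a cluster-wide address range, so every pod IP is routable without NAT, and the SDN routes each slice to the right node.

```shell
# Sketch: give node 1's Docker bridge a unique /24 out of an assumed
# cluster CIDR of 10.244.0.0/16, so pod IPs are routable without NAT.
NODE_ID=1
CLUSTER_CIDR_PREFIX="10.244"
cat > /tmp/docker-bridge.env <<EOF
DOCKER_OPTS="--bridge=cbr0 --bip=${CLUSTER_CIDR_PREFIX}.${NODE_ID}.1/24"
EOF
# The SDN then routes 10.244.<n>.0/24 to node <n>.
cat /tmp/docker-bridge.env
```

Every container on node 1 then gets an address in 10.244.1.0/24 that is directly reachable from containers on other nodes.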
In order to install and run Kubernetes with the Vagrant provider, we select the provider and run the official installation script (Vagrant and VirtualBox need to be installed beforehand):

export KUBERNETES_PROVIDER=vagrant
curl -sS https://get.k8s.io | bash
It creates a master and a minion based on a Fedora box; Kubernetes 1.0.4 was installed. For some reason kubelet.service failed to start a few times before it eventually managed to start. The Docker service also failed on the master. After manually fixing the Docker service, Salt managed to import all the necessary components for Kubernetes (and the API service) to start properly.
The software network switch used by the Vagrant-based Kubernetes setup is Open vSwitch.
After getting Kubernetes up and running, we can interact with it through its command-line tools:
[vagrant@kubernetes-master ~]$ kubectl get nodes
NAME         LABELS                              STATUS
10.245.1.3   kubernetes.io/hostname=10.245.1.3   Ready
[vagrant@kubernetes-master ~]$ kubectl cluster-info
Kubernetes master is running at http://localhost:8080
KubeDNS is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns
KubeUI is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kube-ui
[vagrant@kubernetes-master ~]$ kubectl get services
NAME         LABELS                                    SELECTOR   IP(S)        PORT(S)
kubernetes   component=apiserver,provider=kubernetes   <none>     10.247.0.1   443/TCP
[vagrant@kubernetes-master ~]$ kubectl run my-nginx --image=nginx --replicas=3 --port=80
CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR       REPLICAS
my-nginx     my-nginx       nginx      run=my-nginx   3
[vagrant@kubernetes-master ~]$ kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
my-nginx-e2r0j   0/1     Pending   0          38s
my-nginx-ippdj   0/1     Pending   0          38s
my-nginx-s0nu7   0/1     Pending   0          38s
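The three pods above each get their own IP; to reach them under one stable address we would put a service in front of them. A hedged sketch (the service name is an assumption; the `run=my-nginx` selector is the label kubectl run applied, as visible in the output above):

```shell
# Sketch: a service selecting the run=my-nginx pods, giving them one
# stable cluster IP with load balancing across the three replicas.
cat > /tmp/my-nginx-svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  selector:
    run: my-nginx           # matches the label set by kubectl run
  ports:
  - port: 80
EOF
# kubectl create -f /tmp/my-nginx-svc.yaml
# (roughly equivalent to: kubectl expose rc my-nginx --port=80)
```

Pods come and go, but the service IP stays put, which is what other pods should be pointed at.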
[vagrant@kubernetes-minion-1 ~]$ sudo docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
8dade8cd9b7a   nginx   "nginx -g 'daemon off"   7 seconds ago   Up 4 seconds   k8s_my-nginx.f89646ea_my-nginx-ippdj_default_b9cdbd6d-669e-11e5-beed-080027fdddda_b5daab66
3d0e158e20cb   nginx   "nginx -g 'daemon off"   7 seconds ago   Up 4 seconds   k8s_my-nginx.f89646ea_my-nginx-s0nu7_default_b9cc421e-669e-11e5-beed-080027fdddda_5b8962d4
7a72f793bc01   nginx   "nginx -g 'daemon off"   7 seconds ago   Up 4 seconds   k8s_my-nginx.f89646ea_my-nginx-e2r0j_default_b9ce41d5-669e-11e5-beed-080027fdddda_247b2e9d
3662646b2f7b   gcr.io/google_containers/pause:0.8.0   "/pause"   About a minute ago   Up About a minute   k8s_POD.ef28e851_my-nginx-ippdj_default_b9cdbd6d-669e-11e5-beed-080027fdddda_3c4744cd
d851e7be10aa   gcr.io/google_containers/pause:0.8.0   "/pause"   About a minute ago   Up About a minute   k8s_POD.ef28e851_my-nginx-e2r0j_default_b9ce41d5-669e-11e5-beed-080027fdddda_34da036a
6d3a77f14422   gcr.io/google_containers/pause:0.8.0   "/pause"   About a minute ago   Up About a minute   k8s_POD.ef28e851_my-nginx-s0nu7_default_b9cc421e-669e-11e5-beed-080027fdddda_e1f203dc
781ac93991a8   gcr.io/google_containers/exechealthz:1.0   "/exechealthz '-cmd=n"   5 minutes ago   Up 5 minutes   k8s_healthz.9183a299_kube-dns-v8-hsa7v_kube-system_cc3d0538-669d-11e5-beed-080027fdddda_4708cfdc
3e52ceb86952   gcr.io/google_containers/skydns:2015-03-11-001   "/skydns -machines=ht"   6 minutes ago   Up 5 minutes   k8s_skydns.b5272dfc_kube-dns-v8-hsa7v_kube-system_cc3d0538-669d-11e5-beed-080027fdddda_da045f37
cae8358a9d37   gcr.io/google_containers/kube2sky:1.11   "/kube2sky -domain=cl"   6 minutes ago   Up 6 minutes   k8s_kube2sky.68146a83_kube-dns-v8-hsa7v_kube-system_cc3d0538-669d-11e5-beed-080027fdddda_d2edcdf4
7bce50f6e30e   gcr.io/google_containers/kube-ui:v1.1   "/kube-ui"   6 minutes ago   Up 6 minutes   k8s_kube-ui.593183d8_kube-ui-v1-k7ofv_kube-system_cc391e03-669d-11e5-beed-080027fdddda_0765df4b
93e33799923c   gcr.io/google_containers/etcd:2.0.9   "/usr/local/bin/etcd "   6 minutes ago   Up 6 minutes   k8s_etcd.7b5ebf3b_kube-dns-v8-hsa7v_kube-system_cc3d0538-669d-11e5-beed-080027fdddda_cb8bff75
c776a62bcbd6   gcr.io/google_containers/pause:0.8.0   "/pause"   6 minutes ago   Up 6 minutes   k8s_POD.2688308a_kube-dns-v8-hsa7v_kube-system_cc3d0538-669d-11e5-beed-080027fdddda_a1d945be
8dc82410855c   gcr.io/google_containers/pause:0.8.0   "/pause"   6 minutes ago   Up 6 minutes   k8s_POD.3b46e8b9_kube-ui-v1-k7ofv_kube-system_cc391e03-669d-11e5-beed-080027fdddda_c2866347
[vagrant@kubernetes-minion-1 ~]$
As we can see, multiple Docker containers are started on the minion: some for the nginx pods, others for Kubernetes health monitoring, service discovery, and so on. Note also the pause containers: there is one per pod, and it holds the network namespace that the pod's application containers share.
We can retrieve the cluster state from the master:
[vagrant@kubernetes-master ~]$ kubectl get replicationcontrollers
CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR       REPLICAS
my-nginx     my-nginx       nginx      run=my-nginx   3