Play with Kubernetes!

How to install Kubernetes locally via Docker

In this blog post, I’ll discuss the procedure of installing Kubernetes locally via Docker, by setting up a simple, single-node Kubernetes cluster.

First of all, we should make sure that Docker is installed correctly on our machine. If you need any help with that, refer to one of my previous blog posts, which describes the procedure of installing Docker on a local machine.

Then you are almost ready to install Kubernetes. The final setup of this process is shown below.
[Figure: k8s-singlenode-docker.png — the single-node Kubernetes-in-Docker setup]

As shown in the figure above, the final result is a single machine that acts as both the master and the worker node, with etcd used for storage. The final setup includes the core Kubernetes components (the Kubernetes master, the service proxy and the kubelet), along with the containers they manage.

There are three main steps in the process of installing Kubernetes locally via Docker.


Step 1 : Run etcd

Use the following command to run etcd. If the etcd image does not exist on your local host, it will be pulled from Google Container Registry (gcr.io).

docker run --net=host -d gcr.io/google_containers/etcd:2.0.12 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data

Kubernetes uses etcd to distribute information across the cluster by storing the master state and configuration. Etcd is a key-value store designed for strong consistency and high availability. The various master components watch this data and act accordingly, for example by starting a new container to maintain the desired number of replicas.
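As a quick sanity check (this is a convenience script I am adding here, not part of the official setup), you can poll etcd’s /version endpoint on the client port used above until it answers:

```shell
# Poll etcd's version endpoint a few times and report whether it answered.
ETCD_URL="http://127.0.0.1:4001/version"
ETCD_UP=no
for i in 1 2 3; do
  if curl -fsS "$ETCD_URL" >/dev/null 2>&1; then
    ETCD_UP=yes
    break
  fi
  sleep 1
done
echo "etcd reachable at ${ETCD_URL}: ${ETCD_UP}"
```

If etcd is healthy, the loop exits on the first iteration.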

Step 2 : Run master

Then run the master. As with etcd, if the hyperkube image does not exist on your local host, it will be pulled from Google Container Registry.

docker run \
    --volume=/:/rootfs:ro \
    --volume=/sys:/sys:ro \
    --volume=/dev:/dev \
    --volume=/var/lib/docker/:/var/lib/docker:ro \
    --volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
    --volume=/var/run:/var/run:rw \
    --net=host \
    --pid=host \
    --privileged=true \
    -d \
    gcr.io/google_containers/hyperkube:v1.1.1 \
    /hyperkube kubelet --containerized --hostname-override="127.0.0.1" --address="0.0.0.0" --api-servers=http://localhost:8080 --config=/etc/kubernetes/manifests
    

Kubernetes has a simple master-minion architecture. The master provides all the Kubernetes infrastructure needed to manage containers: the API, the scheduler and the controllers.
  • API Server: provides the RESTful Kubernetes API to manage cluster configuration, backed by the etcd datastore. 
  • Scheduler: determines what should run where, based on capacity and constraints. It places unscheduled pods on nodes according to labels.
  • Controller Manager: manages all cluster-level functions, including creating and updating endpoints, discovering nodes, and managing and monitoring pods. For example, the replication controller ensures that the right number of pod replicas are running. 
Executing the run command above actually starts the kubelet, which in turn runs a pod containing the other master components, the cluster-wide components that form the Kubernetes master.
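Once the master components come up (this can take a minute while images are pulled), a quick way to confirm the API server is alive is its /healthz endpoint on the insecure local port, which returns the plain text "ok". A small sketch I am adding for convenience:

```shell
# Query the API server's health endpoint on the insecure local port.
APISERVER="http://localhost:8080"
if curl -fsS "${APISERVER}/healthz" 2>/dev/null; then
  echo " (master is healthy)"
else
  echo "API server not reachable yet at ${APISERVER}"
fi
```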

Step 3 : Run service proxy

Then run the service proxy. As with etcd and the master, if the image does not exist on your local host, it will be pulled from Google Container Registry.


docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v1.1.1 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2

The service proxy is also known as kube-proxy. It runs on each node and provides simple network proxying and load-balancing capability. The service proxy enables services to be exposed with a stable network address and name. In brief, kube-proxy combines load balancing and service discovery to provide Kubernetes services, which are stable endpoints in front of collections of containers.
That’s it. At this point, you should have a running Kubernetes cluster. Let’s test it out.

Check the containers on your local host by executing the command: docker ps
We have actually started a kubelet, which is responsible for managing all the other containers. On top of it run a variety of containers, such as the API server and the scheduler, which you will see in the docker ps output.


Test it out

We can test the running Kubernetes cluster using the kubectl binary.


  • Download the kubectl binary, which is a command-line tool to interface with Kubernetes. For that, enter the following command.

  • $ wget http://storage.googleapis.com/kubernetes-release/release/v${K8S_VERSION}/bin/linux/amd64/kubectl


  • You may have to make it executable by

  • chmod a+x kubectl
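Note that the wget command above assumes the shell variable K8S_VERSION is already set. A minimal sketch, assuming you want release 1.1.1 to match the hyperkube image used earlier:

```shell
# Pick the release that matches the hyperkube image (1.1.1 here),
# then build the download URL that the wget command expects.
K8S_VERSION=1.1.1
KUBECTL_URL="http://storage.googleapis.com/kubernetes-release/release/v${K8S_VERSION}/bin/linux/amd64/kubectl"
echo "$KUBECTL_URL"
```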


  • Then you can list the nodes in your cluster by executing the following command.
  • ./kubectl get nodes

    This should display the single node in your cluster, 127.0.0.1.
  • The quickest way to start a container in the cluster is to use the kubectl run command. For example, if you want to start an nginx container, use this command.

  • ./kubectl run web --image=nginx

    To run an application against a specific API server and expose a container port:

    ./kubectl -s http://localhost:8080 run nginx --image=nginx --port=80

    Executing docker ps will show nginx running.
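A hedged variant of that check, which I am adding here for convenience; it counts nginx containers and just reports instead of failing when Docker is unavailable:

```shell
# Count running containers whose docker ps line mentions nginx.
if command -v docker >/dev/null 2>&1; then
  RUNNING=$(docker ps 2>/dev/null | grep -c nginx || true)
  echo "nginx containers running: ${RUNNING}"
else
  RUNNING=0
  echo "docker not available on this host"
fi
```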


    Expose it as a service

    ./kubectl expose rc nginx --port=80

    The following command will show you the IP address of this service.

    ./kubectl get svc nginx --template={{.spec.clusterIP}}

    Hit the web server with that IP address.

    curl <insert-cluster-ip-here>

    The result is shown below.

    [Screenshot: Selection_002.png — curl output showing the nginx welcome page]


    Pods are usually created using configuration artifacts. Here is an example of a pod configuration for the nginx pod. Create a file and save it as nginx-pod.yml.

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx

    Then execute the following command.

    ./kubectl create -f nginx-pod.yml


    Kubernetes will pull the container image, start the container and register the pod. The nginx container is now running inside a pod; you can see it with the following command.

    ./kubectl get pods




    [Screenshot: Selection_003.png — kubectl get pods output showing the nginx pod]

    Each pod is assigned an IP address from an internal network for inter-pod communication. These IP addresses are accessible locally.

    Let’s check that nginx is really running by using wget to fetch the index page:

    wget <ip address of the nginx pod>
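The pod’s IP address can be looked up with the same --template mechanism we used for the service. A small sketch, assuming kubectl sits in the current directory and the cluster from the steps above is running; otherwise it just says so:

```shell
# Fetch the nginx pod's IP from its status, then request its index page.
if [ -x ./kubectl ]; then
  POD_IP=$(./kubectl get pod nginx --template='{{.status.podIP}}')
  wget -qO- "http://${POD_IP}/" | head -n 5
else
  POD_IP=""
  echo "kubectl not found in the current directory"
fi
```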


    [Screenshot: Selection_007.png — wget output fetched from the nginx pod]


    We can access the welcome page of nginx using the browser as well, and see the same nginx welcome page there. We have now deployed our first Kubernetes pod!




