Cloud Native Revolution


Have you heard all the buzz around cloud native applications? What do they really mean? During the 2000s, IT companies like Google, eBay and Twitter needed efficient ways of building and deploying their applications in order to provide global services. Along the way, these companies realized that they needed a stable architecture that would not have to be rewritten over and over again. This is what opened the door to cloud native applications. Cloud native refers to applications that are built for and deployed on cloud infrastructure. Cloud native applications use an elastic infrastructure: the underlying infrastructure should allow the applications to scale up and down at a rapid rate, with the ability to offer millions of nodes or instances at the same time. In my view, adopting cloud native applications helps an organization turn its information technology into a force for true agility. 

Virtual Machines

During the early stages of cloud computing, the primary mover was the virtual machine, and many hardware virtualization technologies sprang up.

  • One of the most important open source projects, the Xen Project, was started and now powers some of the largest clouds in production today. Amazon Web Services, Aliyun, Rackspace Public Cloud and many other hosting services use Xen Project software. It is also integrated with cloud orchestration projects like OpenStack. 
  • KVM (Kernel-based Virtual Machine) was introduced as a virtualization infrastructure for the Linux kernel that turns the kernel into a hypervisor. 
  • Microsoft Hyper-V on Windows was introduced to provide software infrastructure and basic management tools to create and manage a virtualized server computing environment.

Multi-tenancy

Multi-tenancy is a type of architecture in which a single instance of a software application serves multiple customers, where each customer is called a tenant. A tenant can customize some parts of the application, but not the application's code. Multi-tenancy can also be seen as a model where a single computer hosts multiple accounts, and users log into their accounts and share the machine's disk space and CPU resources. Multi-tenant computing takes advantage of resource sharing, virtualization and remote access. 

These technologies, originally purpose-built for a small set of companies like Google, eBay and Facebook, are now being brought to the world through open source projects, so that everyone, not just the handful of companies with large budgets and large teams, can benefit from them. 
And that's where this really leads to a revolution: a cloud native revolution! 


Containerization



The concept of "containerization" is one of the most important areas of innovation for modern computing. This has a huge impact on cloud computing and on how enterprises develop their cloud applications. 

Over the last decade, applications were monolithic. They were built on a single stack such as .NET or Java and deployed to a single server. They were, however, long-lived. Then came load balancers, which distributed the workload across multiple computing resources. Still, the applications were monolithic.

Today, after a huge shift, applications are developed continuously: security patches and updates are applied often, and new versions are released frequently. They are built from loosely coupled components and deployed to a multitude of servers. This is where containers come into play.

In brief, a container is a:
                           Lightweight Linux environment
                           Deployable application
                           Introspectable, runnable artifact

So what does a container provide that a virtual machine does not?

  • Simple deployment : A container simplifies the deployment process of the application no matter where you are deploying it. It packages the application as a single addressable, registry stored, deployable component. 
  • Rapid availability : The package can boot faster compared to a modern virtual machine. 
  • Leverage microservices : Containers allow developers and operators to further subdivide compute resources. 
  • Better performance 
  • Faster provisioning of resources : Since a developer's laptop has plenty of compute power to run multiple containers, development is fast and easy. 
  • Quicker availability of new application instances : Release administration is easy, as pushing a new container is a single command. 
  • Easy and cheap testing : With a container, we can do thousands of simple tests at the same cost. 
The significance of containers also lies in the fact that a container looks the same no matter where we run it. Therefore, using containers, we can deploy an application in a production data center and have it work exactly the same way in the development environment. 

From my point of view, the most important thing that containerization ensures is a portable, consistent format for running applications on diverse hosts. Considered as a whole, containerization allows workloads to easily flow to wherever they are cheapest and fastest to run. 

Containers embody the idea of running multiple applications on a single host. Instead of virtualizing the server to create multiple operating systems, containers offer a more lightweight solution by virtualizing the operating system, allowing multiple workloads to run on a single host. 

Recently, containers have been popularized by a containerization mechanism called Docker.

What is Docker ?

Software’s next big thing : a new way to build and ship applications


Docker is a platform for managing Linux containers. In brief, Docker is a containerization mechanism. It is a new open source container technology that makes it easier to get many applications running on the same old servers. It also makes packaging and shipping programs very easy. 

Docker was started in March 2013. It began as an open source implementation of the deployment engine that powers dotCloud. Today, Docker has become a huge industry phenomenon. Over the past few months, many industry giants such as Dell, HP, Google, IBM, Microsoft, Amazon and Red Hat have joined forces with Docker. This has been a great achievement for Docker's founder, Solomon Hykes. I was surprised to learn that he started working on Docker in his mother's Paris basement as a tiny side project that he thought only a handful of other people would ever care about. Yet today, Docker has become the most popular container standard.

Docker has introduced a revolution in packaging and deploying applications on Linux servers. I love open source software, because it allows anyone to view and modify the code, which can then be contributed back to the original project. There is no doubt that contributing to open source software provides a sense of satisfaction, and in turn provides a service for everyone who uses it. With open source software such as Docker, you get the application code, which you can work with if you want. Otherwise you can just download the application and run it. It is really awesome!

Traditional virtualization vs Docker virtualization


[Figure: Traditional virtualization (each app with its own guest operating system on a hypervisor) vs Docker virtualization (apps sharing the host kernel through the Docker engine)]

App-A and App-B are two virtualized applications. In traditional virtualization, each application includes the necessary binaries and libraries along with a guest operating system, which is several gigabytes in size. In contrast, a Docker container comprises just the application and its dependencies. It runs as an isolated process on the host operating system, sharing the kernel with other containers. Therefore it enjoys the benefits of resource allocation, portability and efficiency.

This is how Docker.com describes Docker:
"Docker allows you to package an application with all of its dependencies into a standardized unit for software development"

Docker exhibits three main characteristics.
  • Lightweight : All containers that run on a single machine share the same operating system kernel. They start instantly and make efficient use of RAM. 
  • Open : Docker containers are based on open standards. Docker allows all containers to run on all major Linux distributions and Microsoft operating systems. 
  • Secure : Containers act as a layer of protection for the application by isolating the applications from each other and the underlying infrastructure. 
These features have drawn the community's attention towards Docker.

What's so special?

In Docker's own words:
"Docker aims to enable a new age of agile and creative development, by building 'the button' that enables any code to instantly and consistently run on any server, anywhere.

Docker is an open source engine that enables any application to be deployed as a lightweight, portable, self-sufficient container that will run virtually anywhere. By delivering on the twin promises "Build Once…Run Anywhere” and “Configure Once…Run Anything," Docker has seen explosive growth, and its impact is being seen across devops, PaaS, scale-out, hybrid cloud and other environments that need a lightweight alternative to traditional virtualization."

The Docker project is attracting growing attention from the developer and DevOps communities. These are some of the Docker highlights.

  • Over 140,000 container downloads
  • Over 6,700 GitHub stars and over 800 forks
  • Over 600 GitHub Dockerfiles created in three months
  • Thousands of containerized applications on the Docker public registry
  • Over 150 projects built on top of the open source engine
  • Over 50 Meetups in 30 cities around the world
  • Almost 200 contributors, 92 percent of whom don’t work for Docker, Inc.
It is not surprising that companies like Yandex, eBay, Rackspace and CloudFlare have already started their journey with Docker. Docker is used in many important projects such as Chef, Puppet, Travis and Jenkins.

Before starting my journey with Docker, I tried to analyse its architecture first. 




Understanding Docker's Architecture

Have you ever had the bitter experience where a piece of software works on your friend's machine but not on yours? It is a very common situation. Before an application can run, the required environment has to be prepared and its prerequisites have to be fulfilled.

Sometimes code you have developed may not work as you expected in another environment. This happens because you might be on a different operating system; say your machine is a Windows machine and you are going to push the code out to a Linux server. Many problems arise when the environments are not the same. Docker solves this problem!

Docker is an open source engine that automates the deployment of any application as a lightweight, portable, self-sufficient container that will run virtually anywhere. It helps to deliver applications fast. Docker separates applications from their infrastructure, so the infrastructure can be managed like a managed application. With Docker, we can ship code faster, test code faster and deploy code faster. This shortens the cycle between writing code and running it.

With Docker, applications run securely isolated in containers. Many containers can run simultaneously, as Docker provides isolation and security. The lightweight nature of containers lets you get more out of your hardware.

Docker allows developers to develop on local containers that contain their applications and services, which can then be integrated into a continuous integration and deployment workflow. Developers can share their development code with others, push it to a test environment and, after that, push it into production.

The process of deploying and scaling becomes easier. Docker supports portable workloads: they can run on a developer's local host, on physical or virtual machines in a data center, or in the cloud. With Docker, applications and services can be quickly scaled up and down in near real time.

Major Docker components

There are two major Docker components.
                Docker: the open source container virtualization platform.
                Docker Hub: Software-as-a-Service platform for sharing and managing Docker containers.


Docker's architecture

Docker has a client-server architecture, as described below.


The Docker client talks to the Docker daemon. This communication happens via sockets or through a RESTful API. The Docker client and the Docker daemon can reside on the same system or on different systems.

Docker daemon 

The Docker daemon runs on a host machine. It does the job of building, running and distributing Docker containers. The user interacts with the Docker daemon through the Docker client.

Docker client

This is the primary user interface to Docker. It accepts commands from the user and communicates with the Docker daemon. For example, if the user wants to run a Docker image, the user enters the command 'docker run' along with the image name, and the Docker client passes the message to the Docker daemon to execute the action. The Docker daemon checks for the image locally on the host and, if it doesn't exist, downloads it from the Docker Hub registry.

Docker images



A Docker image is a read-only template. It contains everything that needs to be installed; the complete application is wrapped up in the image. These images are used to create Docker containers.

If a machine has Docker on it, we can run as many containers as we want from a Docker image. Say you want to run an image for a Node.js application on your machine. The whole image, source code and all, will run with a complete environment. You do not need to install Node.js on your machine, because the image has everything. Everything you need to run a particular container is defined in its corresponding image. If Docker is installed, that's all you need.

Each image consists of a series of layers. Docker makes use of union file systems to combine these layers into a single image. These layers are what make Docker lightweight. Suppose you upgrade an application to a new version by updating its image: only a new layer gets built. You don't need to distribute a whole new image, just the update, which makes distributing Docker images faster and simpler.
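
If you want to see these layers for yourself, the docker history command lists them. A minimal example, assuming you have already pulled the ubuntu image (any local image name works):

        sudo docker history ubuntu

Each line of the output corresponds to one layer, along with the instruction that created it and its size.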

Every image starts from a base image. Docker images are built from base images using a set of instructions stored in a file called a Dockerfile. A Dockerfile is usually a small file, and each instruction in it creates a new layer in the image. These instructions can include running a command, adding a file or directory, creating an environment variable and so on. When you request a build of an image, Docker reads the Dockerfile and executes the instructions in order to return an image.
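
To make this concrete, here is a small, hypothetical Dockerfile sketch for serving a static page with nginx; the base image, file name and command are placeholders you would adapt to your own application.

        # every image starts from a base image
        FROM ubuntu:14.04
        # each instruction creates a new layer; this one installs nginx
        RUN apt-get update && apt-get install -y nginx
        # add a file from the build context into the image
        ADD index.html /usr/share/nginx/html/
        # create an environment variable
        ENV NGINX_PORT 80
        # the command to run when a container is started from this image
        CMD ["nginx", "-g", "daemon off;"]

Building an image from it is a single command, run in the directory that contains the Dockerfile:

        sudo docker build -t my-web-image .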

Docker registry

This is where Docker images are stored and shared. The registry includes private as well as public repositories. Images can be uploaded to or downloaded from a Docker registry. Docker Hub, which is like GitHub for images, is a public SaaS platform that provides the public Docker registry. We can use existing images from the registry, rebuild them and push them back to the registry so that others can share them too. Hence we can create our own images or use images that others have created.

Public repositories are searchable and can be downloaded by anyone. Private repositories are excluded from search results; only you and your users can pull those images down and use them to build containers.

Docker containers

A Docker container holds everything that is needed for an application to run. It might contain an operating system, user-added files and metadata. How is a container created from an image? The image tells Docker what the container holds, what process to run when the container is launched, and some configuration data. When Docker runs a container from an image, it adds a read-write layer on top of the image in which the application can then run. Docker containers can be started, run, stopped or deleted. Each container is an isolated and secure application platform. As I see it, a container is the runtime representation of an image.

Underlying technology

Docker is written in a programming language called 'Go'.
It uses several Linux features:
  1. Namespaces : To provide isolation
  2. Control groups : To share and limit hardware resources (see the example after this list)
  3. Union file system : Makes images light and fast by layering
  4. libcontainer : Defines the container format
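
A quick way to see namespaces and control groups at work is to start a container with explicit resource limits; the image and the limit values below are only illustrative.

        sudo docker run -it -m 256m --cpu-shares=512 ubuntu /bin/bash

Inside the resulting shell, the process sees its own isolated process tree, hostname and network interfaces (namespaces), while the kernel caps the memory it may use and its relative CPU share (control groups).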

    SUMMARY : 

    • Build component of Docker is Docker image.
    • Distribution component of Docker is Docker registry.
    • Run component of Docker is Docker container.

    Hello World with Docker !


    To start using Docker, I installed Docker on my machine. At first Docker was only available on Ubuntu, but now the Docker engine is supported on cloud platforms, OS X and Windows as well. Docker can be installed on Mac or Windows using boot2docker, which downloads a tiny core Linux VM.

    Installing Docker for Ubuntu

    Log into the Ubuntu installation as a user with sudo privileges

    Make sure your list of packages from all repositories and PPAs is up to date.
    $ sudo apt-get update
    Download and install Docker.
    $ sudo apt-get install -y docker.io
    Then verify that Docker has been installed correctly.
    $ sudo docker version
    If you have installed Docker successfully, the above command should display details about the client and the server.



    Then I ran the 'Hello-world' image.
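
    For reference, this is the command I used (with the same sudo prefix as in the installation steps):

    $ sudo docker run hello-world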


    So what happens when we run this command? 


    1. Docker checks for the presence of the hello-world image locally on the host. Since the image does not exist locally, Docker pulls the latest hello-world image from Docker Hub. Each line that reads 'Pull complete' indicates that one layer of the image has been pulled successfully. These layers may correspond to installing prerequisites for the application to run.
    2. Then it creates a new container. Once Docker has the image, it uses it to create a container. A container is created in the file system and a read-write layer is added to the image.
    3. Then it creates a network interface that allows the Docker container to talk to the local host. An IP address is attached.
    4. After that, the application is executed. In this scenario, the message 'Hello from Docker' is displayed. If that message pops up, we can be sure that the installation of Docker has completed successfully. 
    The above command has three parts.
                docker : This tells the operating system that you are using the Docker program
                run : This is a sub command that creates and runs a docker container
                hello-world : This tells Docker which image to load into the container

    Working with images

    The key to start working with any Docker container is using images. There are many freely available images shared across the Docker image index, and the command line interface allows simple access to query the image repository and to download new images.

    Docker help : If you are struggling to find out which commands to use with Docker, type docker help or just docker, and it will list all the basic Docker commands as follows.

    List images : If you want to list the images that are locally available on your host, use the command sudo docker images and the details of the available images will be shown. 




    Search images : If you want to search for a specific image, use the command below. For example, if you enter the command docker search ubuntu, it will display a very long list of available images matching the query 'ubuntu'.
                                           sudo docker search <image-name> 

    Download images : To build a container, you need to have an image on the host machine. If the image is not available on the host, Docker has to download (pull) the image from Docker Hub. 
                                           sudo docker pull <image-name> 

    Commit images : Committing a container saves its changes as a new image, so that everything continues from where you left off the next time you use it.
                                           sudo docker commit <container-ID> <image-name>


    Push images : Once we have created our own image, we can sign up at Docker Hub and push our images to Docker Hub so that we can share them with the rest of the world. A hypothetical example follows below.
                                           sudo docker push <user-name / image-name> 
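
    As a hypothetical end-to-end example (the container ID, user name and image name below are only placeholders), committing and sharing an image could look like this:

                                           sudo docker commit 7d9e1f2b3c4d my-user/my-ubuntu
                                           sudo docker login
                                           sudo docker push my-user/my-ubuntu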



    Working with containers

    Create a new container : To create a new container, we need to use a base image and specify a command to run. 
                                          sudo docker run <image-name> <command-to-run> 

    Start a container : We can get a stopped container running again using this command.
                      sudo docker start <container-id> 

    Stop a container : Stop a container's process from running.
                                           sudo docker stop <container-id>


    Delete a container : Delete an existing container. 
                                          sudo docker rm <container-id>


    List the running containers.
                                          sudo docker ps
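
    Note that docker ps only lists running containers. To see stopped containers as well, for example to find a container ID to pass to docker start, add the -a flag.

                                          sudo docker ps -a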


    Docker Pros

    1. Extreme application portability
    2. Very easy to create and work with derivative images.
    3. Fast booting of containers.

    Docker Cons

    1. Host-centric solution
    2. No higher level provisioning
    3. No usage tracking or reporting

      What is Kubernetes?


      Kubernetes has become an interesting topic in the present cloud native revolution. Have you ever used Docker? Docker is an open platform for developing, shipping and running applications by creating containers for software applications. Docker helps to deliver applications fast by separating the application from its underlying infrastructure. The idea is to package once and deploy anywhere: if you have Docker running, on whatever operating system it is, your application will run seamlessly. It has lots of benefits in terms of replicating your development, production and testing environments. Some of the key benefits that Docker provides are listed below.

      • Faster deployment
      • Isolation : Docker is a way of running a process so that it thinks and behaves as if it is the only process running in the computer. 
      • Portability : Docker lets people create and share software through Docker images. Using Docker, you don’t have to worry about whether your computer can run the software in a Docker image; a Docker container can always run it. 
      • Snapshotting : Docker can snapshot your environment by saving the state of a container as an image, tagging it and recreating it later.
      • Security sandbox : Because it runs on Linux containers.
      • Limit resource usage
      • Simplified dependency : Docker allows you to package an application with all of its dependencies into a standardized unit for software development.
      • Sharing 
      It is better to have some prior knowledge of Docker before proceeding with this article. [1], [2] and [3] are some blog posts that I have written about the fundamental concepts of Docker, Docker’s architecture and a simple ‘Hello World’ example with Docker.

      In this blog post, I’ll discuss some basic concepts of Kubernetes and the architecture of the process of running Kubernetes locally via Docker.


      What is Kubernetes?


      Containers are everywhere. So we need some kind of a distributed process manager.




      If you are building an application, you have lots of components as parts of that application. Today we have very complex software applications, and they need to be deployed and updated at a very rapid pace. With lots of containers, it becomes hard work to manage them and keep them running in production. Just think of a web application. You will have some kind of application server, a database server, a web server, a reverse proxy, load balancers and many other things. From a microservices perspective, this web application might be further decomposed into many loosely coupled services as well. Does that mean that everything needs to be a container? Even a simple web application requires a number of containers. On top of that, each of these containers will require replicas for scale-out and high availability. It is going to be a mess if we have to manage an infrastructure with lots of containers like this. And it's not just the number of containers that becomes challenging: services also need to be deployed across various regions. Hence we need some kind of orchestration system to manage the containers. Well, that's where Kubernetes comes in.

      Kubernetes is an open source orchestration system developed for managing containerized applications across multiple hosts in a clustered environment. Orchestration means the automated arrangement, coordination and management of complex computer systems, middleware and services. In brief, Kubernetes handles the execution of a defined workflow.

      The name Kubernetes originates from Greek, meaning “helmsman” or “pilot”. “K8s” is an abbreviation derived by replacing the eight letters “ubernete” with “8”. The Kubernetes project was started by Google in 2014, and it is now supported by many companies such as Microsoft, Red Hat, IBM and Docker. There are almost 400 contributors from across the industry, over 8,000 stars and 12,000+ commits on GitHub.


      Google has been using containerization technology for over ten years; they have containers for everything. Google has an internal system called “Borg” that manages vast server clusters across the globe and runs Google’s entire infrastructure. Google kept it a closely guarded secret that, until not long ago, was never mentioned even by code name. Google then went a step further and started an open source container management system called Kubernetes, inspired by Borg and its predecessors. As they describe it, Kubernetes is even better than Borg. Since Kubernetes is free and available to all of us, it is awesome! With Kubernetes, Google shares its container expertise.


      The significance of Kubernetes lies in the fact that it provides declarative primitives for the ‘desired state’. This is achieved by self-healing, auto-restarting, scheduling across hosts and replicating. Say you tell Kubernetes that you need three servers to be up, no more and no less. Then Kubernetes always makes sure that three servers are up: if a server goes down, it brings one back up, and if an additional server spins up, it kills it. That is exactly what Kubernetes does. Thus Kubernetes actively manages the containers to ensure that the state of the cluster continually matches the user’s intentions.

      With Kubernetes, we get the following benefits.
                    Scale our applications on the fly
                    Roll out new features
                    Optimize use of hardware by using only the required resources

      How is Kubernetes related to Docker?


      Kubernetes also supports Docker. The purpose of Kubernetes is to manage a cluster of Linux containers as a single system. It can be used to manage and run Docker containers across multiple hosts. It also provides co-location of containers, service discovery and replication control. 

      Kubernetes treats groups of Docker containers as single units, with their own addressable IP across hosts, and scales them as you wish while taking care of the details. It provides a means of scaling and balancing Docker containers across multiple Docker hosts, and it adds a higher level API to define how containers are logically grouped and load balanced.

      Kubernetes architecture


      This is a representation of the Kubernetes Master-minion architecture. 

      [Figure: Kubernetes key concepts (master-minion architecture)]

      Let’s try to understand the main components of Kubernetes.

      1. Pod

      Pods are the smallest deployable units that can be created, scheduled and managed. In Kubernetes, containers run inside pods. A pod is a collection of containers that belong to an application. Closely related containers are grouped together in a pod; a pod can contain one or more containers. They are deployed and scaled as a single application, managed and scheduled as a unit, and they share an environment of resources. A pod can contain a main container accompanied by helper containers that facilitate related tasks.
      The containers inside a pod live and die together. So if you have some processes that require the same host or need to interact with each other very tightly, a pod is a way to group those processes together. A pod file is a JSON or YAML file that basically specifies which containers are to be launched by Kubernetes.

      From Docker’s perspective, a pod is a co-located group of Docker containers that share an IP address and storage volumes.


      • Group of containers : Reuse across environments 
      • Settings in a template : Repeatable, manageable
      2. Service
      A key abstraction in Kubernetes is the service. A service is a set of pods that work together, exposed under a single, stable name and network address. With or without an external load balancer, a service provides load balancing to its underlying pods. Kubernetes provides load balancing for all components of the system. Services provide an interface to a group of containers so that users do not have to worry about anything beyond a single access location.

      • Stable address : Clients shielded from implementation details 
      • Decoupled from Controllers : Independently control each, build for resiliency
      3. Replication controllers

      Replication controllers manage the lifecycle of pods by ensuring that a specific number of pod replicas are running. Say you want three replicas of a pod to run: the replication controller defines how many pods or containers need to be running at a particular time, and Kubernetes always ensures it. If one of them fails, a new one is started; Kubernetes' job is to keep three replicas running at all times. When you need more or fewer, you update the definition and the cluster is updated accordingly. A minimal example definition is sketched after the points below.

      • Keeps Pods running : Restarts Pods, desired state 
      • Gives direct control of number of Pods : Fine grained control for scaling
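
      To make the three-replica example above concrete, here is a minimal replication controller sketch; the name, label and image are placeholders (nginx is the same image used later in this post). Saved as nginx-rc.yml, it could be submitted with ./kubectl create -f nginx-rc.yml.

      apiVersion: v1
      kind: ReplicationController
      metadata:
        name: nginx-rc
      spec:
        # desired state: always keep exactly three pod replicas running
        replicas: 3
        selector:
          app: web
        template:
          metadata:
            labels:
              app: web
          spec:
            containers:
            - name: nginx
              image: nginx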
      4. Label

      A label is a simple name-value pair. The components above, such as pods, services and replication controllers, identify and select one another using labels.

      5. Master

      The controlling unit in a Kubernetes cluster is called the master server. It is the main management contact point for administrators. The master server runs several services that are used to manage the cluster’s workload and direct communications across the system. Some components specific to the master server are as follows.

      • API server : This is the main management center for the entire cluster. API server allows a user to configure Kubernetes workloads and organizational units. Further, it makes sure that the etcd store and the service details of containers are in agreement. API server has a RESTful interface in order to communicate with many different tools and libraries. 
      • etcd : Kubernetes should have a globally available configuration store. etcd is used to store configuration data to be used by each of the nodes in the cluster. etcd is a lightweight, distributed key value store that can be distributed across multiple nodes. 
      • Controller manager server : This is used to handle the replication processes. Controller manager server watches for changes and if a change is seen, it reads the new information and implements the replication process that fulfills the desired state by scaling the application group up or down. 
      • Scheduler server : Scheduler assigns workloads to specific nodes in the cluster. Scheduler also looks for the resource utilization on each host to make sure that workloads are not scheduled in excess of the available resources. For that, the scheduler must know the total resources available on each server as well as resources allocated to existing workloads assigned on each server. 
      6. kubelet

      Each minion runs services that run containers, and minions are managed by the master. In addition to Docker, the kubelet is another key service installed there. The kubelet reads container manifests (YAML files that describe a pod) and ensures that the containers defined in those pods are started and continue running.

      7. Minion
      A minion is a node: one of the Docker hosts running the kubelet service, which receives orders from the master and manages the containers running on that host. On a node, or minion, you can have a pod running, and within the pod you have one or more containers running. 

      8. kubectl
      This is the command line tool that controls the Kubernetes cluster manager. We can use different commands with kubectl, for example:
      kubectl get pods
      kubectl create -f <file-name>
      kubectl update
      kubectl delete
      kubectl resize --replicas=3 replicationcontrollers <name>



      Kubernetes Pros


      • Manage related Docker containers as a unit : You can specify what Docker containers need to run and all their dependencies in one file. 
      • Container communication across hosts : Unlike Docker, which runs on a single host, Kubernetes runs across multiple hosts. 
      • Availability and scalability through automated deployment and monitoring of pods and their replicas across hosts. 

      Kubernetes Cons


      • Lifecycle of applications : You still have to build, deploy and manage your applications yourself 
      • No multi-tenancy 
      • On-premise 


      Play with Kubernetes!

      How to install Kubernetes locally via Docker

      In this blog post, I’ll discuss the procedure of installing Kubernetes locally via Docker. I am now going to set up a simple, single-node Kubernetes cluster using Docker. 

      First of all, we should make sure that we have installed Docker on our machine correctly. If you need any help with that, refer to one of my previous blog posts, which describes the procedure for installing Docker on a local machine.

      Then you are almost ready to install Kubernetes. The final setup of this process is shown below.
      [Figure: single-node Kubernetes setup running via Docker]

      As shown in the above figure, the final result is a single machine that acts as both the master and the worker node. etcd is used for storage. The final setup includes Kubernetes components such as the Kubernetes master, the service proxy and the kubelet, along with the containers they run.

      There are three main steps in the process of installing Kubernetes locally via Docker.


      Step 1 : Run etcd

      Use the following command to run etcd. If the etcd image does not exist on your local host, it will be downloaded from Docker Hub.

      docker run --net=host -d gcr.io/google_containers/etcd:2.0.12 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data

      Kubernetes uses etcd to distribute information across the cluster by storing master state and configuration. etcd is a key-value store designed for strong consistency and high availability. The various master components watch this data and act accordingly, for example by starting a new container to maintain a desired number of replicas.
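
      To quickly check that etcd actually came up, you can look for its container and, since etcd serves a /version endpoint on its client port, query it directly (port 4001 matches the --addr flag used above). This is just a sanity check, not part of the official setup.

      docker ps | grep etcd
      curl -L http://127.0.0.1:4001/version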

      Step 2 : Run master

      Then run the master. As with etcd, if the master image does not exist on your local host, it will be downloaded from Docker Hub.

      docker run \
          --volume=/:/rootfs:ro \
          --volume=/sys:/sys:ro \
          --volume=/dev:/dev \
          --volume=/var/lib/docker/:/var/lib/docker:ro \
          --volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
          --volume=/var/run:/var/run:rw \
          --net=host \
          --pid=host \
          --privileged=true \
          -d \
          gcr.io/google_containers/hyperkube:v1.1.1 \
          /hyperkube kubelet --containerized --hostname-override="127.0.0.1" --address="0.0.0.0" --api-servers=http://localhost:8080 --config=/etc/kubernetes/manifests
          

      Kubernetes has a simple master-minion architecture. The master provides all the Kubernetes infrastructure to manage containers: it handles the APIs, the scheduler and the replication controller.
      • API Server: API server provides RESTful Kubernetes API to manage cluster configuration, backed by the etcd datastore. 
      • Scheduler: The scheduler determines what should run where, based on capacity and constraints, and places unscheduled pods on nodes according to labels.
      • Controller Manager: This manages all cluster-level functions, including creating and updating endpoints, discovering nodes, and managing and monitoring pods. The replication controller, for example, ensures that the right number of pod replicas are running. 
      Executing the run command for the master actually starts the kubelet, which in turn runs a pod containing the other cluster-wide components that form the Kubernetes master.

      Step 3 : Run service proxy

      Then run the service proxy. As with etcd and the master, if the service proxy image does not exist on your local host, it will be downloaded from Docker Hub.


      docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v1.1.1 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2

      The service proxy is also known as the kube-proxy. It runs on each node and provides simple network proxying and load balancing. The service proxy enables services to be exposed with a stable network address and name. In brief, the service proxy is a combination of load balancing and DNS-based service discovery that exposes Kubernetes services, which are collections of containers.
      That’s it. At this point, you should have a running Kubernetes cluster. Let’s test it out.

      Check the containers in your local host by executing the command : docker ps  
      We have actually started a kubelet, which is responsible for managing all the containers. It in turn has launched a variety of containers, such as the API server and the scheduler, as listed in the container list above.


      Test it out

      We can test the running Kubernetes cluster using the kubectl binary.


    • Download the kubectl binary, which is a command line tool to interface with Kubernetes. Set K8S_VERSION to the release you want (for example 1.1.1, to match the hyperkube version used above), then enter the following command.

    • $ wget http://storage.googleapis.com/kubernetes-release/release/v${K8S_VERSION}/bin/linux/amd64/kubectl


    • You may have to make it executable with

    • chmod a+x kubectl


    • Then you can list the nodes in your cluster by executing the following command.
    • ./kubectl get nodes

      This should display the node list. In this single-node setup, the only node is 127.0.0.1, the hostname we overrode when starting the kubelet.

    • The quickest way to start a container in the cluster is to use the kubectl run command. For example, if you want to start an nginx container, use this command.

    • ./kubectl run web --image=nginx

      Run an application

      ./kubectl -s http://localhost:8080 run nginx --image=nginx --port=80

      Executing docker ps will show nginx running.


      Expose it as a service

      ./kubectl expose rc nginx --port=80

      The following command will show you the IP address of this service.

      ./kubectl get svc nginx --template={{.spec.clusterIP}}

      Hit the web server with that IP address.

      curl <insert-cluster-ip-here>

      The result is shown below.

      [Screenshot: curl returning the default nginx welcome page]


      Pods are usually created using configuration artifacts. Here is an example of a pod configuration for the nginx pod. Create a file and save it as nginx-pod.yml.

      apiVersion: v1
      kind: Pod
      metadata:
        name: nginx
        labels:
          app: web
      spec:
        containers:
        - name: nginx
          image: nginx

      Then execute the following command.

      ./kubectl create -f nginx-pod.yml


      Kubernetes will pull the container image, start the container and register the pod. You can see the nginx pod running with the following command; the nginx container is now running inside a pod. 

      ./kubectl get pods




      [Screenshot: output of ./kubectl get pods showing the nginx pod running]







      Each pod is assigned an IP address from an internal network for inter-pod communication. These IP addresses are accessible locally.

      Let’s check that nginx is really running by using wget to fetch the index page:

      wget <ip address of the nginx pod>


      [Screenshot: wget fetching the index page from the nginx pod]


      We can access the welcome page of nginx using the browser as well. We will see the nginx welcome page as below. We have now deployed our first Kubernetes pod!





      Bash Programming


      Bash is a Unix shell: an interface that lets you execute commands on different components of the system. It is also a command language. It was written by Brian Fox. Bash is a command processor that typically runs in a text window, where the user types commands that cause actions. 

      This post gives you basic knowledge of writing bash scripts on Unix or Linux. Bash scripts are useful to anyone who uses a Unix or Linux system regularly. 

      Bash can also read commands from a file, called a script. Anything you can run normally on the command line can be put into a script, and it will do exactly the same thing. You don't need to change anything from what you would type at the command line; just write the commands as you normally would and they will behave as they normally do. The only difference is that instead of typing them at the command line, the commands you want to execute are entered sequentially into a plain text file. And that's what bash scripting is!

      We normally give an extension of .sh to files that contain bash scripts.


      A traditional Hello-World script


      #!/bin/bash 
       echo Hello World 


      This is a very simple script which executes a command to print the line 'Hello World'. This has two lines.
      So what do they really mean? 
      The first line tells the system which program to use to run the file. In this case, the system should use bash to run the script file.
      The second line is the only action performed by the script which is to print the 'Hello World' message on the terminal. You can type this command yourself on the command line and it will behave exactly the same.


      How do you run the script?

      Running a bash script is fairly easy. Say you have saved the above file as 'hello.sh'. Type the following command to run the script.

      nanduni@nanduni - TECRA-M11:~$ ./hello.sh
      In order to execute the script, you must have the execute permission set, which is not set by default for safety reasons. Otherwise you will get an error message like the one below.

      nanduni@nanduni - TECRA-M11:~$ ./hello.sh
      bash: ./hello.sh: Permission denied
      You can set the permission levels in the command line as below.

      chmod 755 hello.sh
      Then the output of the executed script is as below.

      nanduni@nanduni - TECRA-M11:~$ chmod 755 hello.sh
      nanduni@nanduni - TECRA-M11:~$ ./hello.sh 
      Hello World 
      'chmod' means 'change mode'. It is a command and system call that changes the access permissions of file system objects such as files and directories.
      Each digit in the mode parameter represents permissions for a class of users.

                                     The first digit refers to the owner of the file.
                                     The second digit refers to the file's group.
                                     The final digit refers to everybody else.

      The different permission levels represented by various digits are as follows.

      0 : deny all
      1 : execute only
      2 : write only
      3 : execute + write
      4 : read only
      5 : read + execute
      6 : read + write
      7 : read + write + execute
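
      As a worked example, the mode 755 used earlier breaks down digit by digit:

      7 = 4 + 2 + 1 : read + write + execute (owner)
      5 = 4 + 1     : read + execute (group)
      5 = 4 + 1     : read + execute (everybody else)

      So chmod 700 hello.sh would make the script readable, writable and executable by its owner only.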