WSO2 MSS - Pet Store sample

This is a fairly long blog post, as it discusses the basic concepts of microservices, WSO2 MSS, and one of its samples, Pet Store. If you wish to get started with WSO2 MSS, this article will definitely help you.

Microservices

Microservices have grown in popularity recently as one of the distinctive methods of designing software applications. In my view, a microservice is a solution to a small, well-defined problem domain that can be built and deployed by itself. It is an independent artifact. The word ‘micro’ implies that the service is specific: it does one specific task. Working with microservices is like modular programming.

Generally, building applications as microservices can be described as a method of developing software applications as independently deployable, small, modular services. Each service handles a unique process and functionality, and the services can be independently developed, tested, built, deployed and scaled. So how do these different services talk to each other? The communication happens via APIs, often HTTP resource APIs. Microservices expose their functionality through HTTP messages.

A complex application is composed of a bunch of independently deployable small services. The traditional way of developing software applications is the monolithic practice, where the whole application is built as a single unit. There is a huge difference between monolithic software and software built as microservices.

Monolithic architecture

Microservices architecture
A monolithic application contains all its functionality in a single process, whereas a microservices architecture puts each piece of functionality in a separate microservice.

If we build an enterprise application using a monolithic architecture, we eventually end up with a single logical executable. For example, this single logical unit would comprise the server-side application, a database and a client-side user interface. But if we use a microservices architecture instead, we will have separate logical units. The server-side microservices will handle HTTP requests, execute the domain logic, retrieve and update data from the database and send HTML views to the browser. The database will be a separate unit, and the client-side user interface will act as another unit, generating HTML pages through JavaScript running in a browser on the user's machine. Compared to the monolithic style of building software applications, this approach makes development, testing and deployment easy and comfortable. With the monolithic approach, a change made to a small part of the application requires the entire monolith to be rebuilt and deployed, and sometimes the working application can be severely affected by the modifications, eventually resulting in the complete failure of the monolith. In contrast, since there are no tight dependencies among microservices, they can be modified and deployed very easily. Moreover, each service can be built using different technologies, without being bound to one specific technology for the overall application. Hence, within a service, you can use any technology under any infrastructure. So, isn't it great?

Microservices architecture has its superpowers for scaling too.

Monolithic architecture
Microservices architecture

A monolithic application scales by replicating the monolith on multiple servers. So if we want to scale a monolith, we have to scale all its components, which wastes resources and increases complexity as well. Instead, a microservices application scales by distributing the services across servers, replicating each as needed. So we can scale certain services while keeping others untouched. I will elaborate further on this scaling scenario through the Pet Store sample discussed later.

As I have emphasized, every microservice is standalone: it is built and run in complete isolation from other services. In any case, it is important that the service be understandable by anybody who needs to pick it up and work with it.

In summary, a microservices architecture allows building independently deployable and scalable applications, where an application consists of a bunch of microservices. Each microservice provides a module boundary. Since different services are deployed independently, there will be fewer system-wide failures when things go wrong. Microservices architecture supports technology diversity too: different microservices can be written in different languages and frameworks and use different data storage technologies. This is useful for building enterprise applications managed by different teams.

WSO2 Microservices Server (WSO2 MSS)

WSO2 Microservices Server is a new addition to the pool of WSO2 products. Its 1.0.0-alpha release came out recently, and it can be downloaded from here. The product is focused on building a programming model for developing Java-based microservices. WSO2 introduces the product with the following description.

“With its lightweight, fast, and easy programming model, WSO2 MSS offers an end-to-end microservices architecture to ensure agile delivery and flexible deployment of complex, service-oriented applications. It enables building and delivering service-oriented applications with RESTful service containers, offering high performance (runtime/startup) and ensuring low resource usage. WSO2 MSS has built-in metrics and analytics APIs with out-of-the-box integration via WSO2 Data Analytics Server (WSO2 DAS), offering a complete solution from development and deployment, to monitoring. It combines SOA best practices with modern application delivery tooling and organizational disciplines.”

WSO2 MSS is lightweight in the sense that it is simple, fast, and agile. With WSO2 MSS, more users can be handled concurrently at an acceptable performance level. On the other hand, it is easier to learn and faster to write code for, so it really is lightweight. It utilizes an easy programming model which can be understood quickly. Since the product facilitates developing software applications as microservices, it is very easy to develop, deploy and monitor each service.

Key features of WSO2 MSS are as follows.

• Lightweight
• High performance
• Fast startup and runtime
• Quick and simple development model using simple annotations. This uses Java annotations as a way of defining microservice APIs as well as metrics.
• Supports widely used methods such as JAX-RS annotations
• Simple ways to develop and deploy microservices
• Custom interceptors
• JWT based security
• Built-in metrics and analytics APIs with WSO2 Data Analytics Server (DAS)
• Metrics gathering & publishing to WSO2 Data Analytics Server
• Dashboard showing metrics as well as microservice request/response statistics
• Request streaming support
• WSO2 DevStudio based tooling for generating microservices projects starting from a Swagger API definition
• Develop, deploy and manage microservices applications using Docker and Kubernetes
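
A couple of these features are easy to poke at from the command line. For instance, the JWT-based security feature works with standard JSON Web Tokens, whose dot-separated segments are just base64url-encoded JSON. The token below is a hypothetical unsigned example for illustration, not one issued by WSO2 MSS or the Identity Server:

```shell
#!/bin/sh
# A hypothetical unsigned JWT: header.payload.signature (empty signature).
# Real tokens from an identity server are signed, but the segment
# encoding is the same.
TOKEN='eyJhbGciOiJub25lIn0.eyJzdWIiOiJhZG1pbiJ9.'

# Grab the header segment and restore the base64 padding that JWTs drop.
SEG=$(printf '%s' "$TOKEN" | cut -d. -f1)
while [ $(( ${#SEG} % 4 )) -ne 0 ]; do SEG="$SEG="; done

# Decode it back into JSON.
printf '%s' "$SEG" | base64 -d
echo
```

Decoding the header of this example yields `{"alg":"none"}`; the same trick on the second segment reveals the token's claims.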

You may be surprised by the implementation of this product. It is based on the new Carbon 5.0 kernel, the next generation of the WSO2 Carbon platform. The transport is based on Netty 4.0, which provides an asynchronous event-driven network application framework with maintainable high performance and scalability. WSO2 MSS is very fast: close to ten times faster than the CXF-based JAX-RS implementation used in WSO2 Application Server. WSO2 MSS starts within about 300 ms. And would you believe the pack size is just 5 MB? It really is.

The following graphs demonstrate the throughput and memory consumption of WSO2 MSS relative to some other products; from them you can easily see how powerful WSO2 MSS is.


In order to experience how WSO2 MSS works, I tried running the samples shipped with the product. I believe they are the best place to understand how WSO2 MSS works. The rest of this article describes how I deployed the Pet Store sample included in the samples directory of the product.

Pet Store Sample

This is a sample of a pet store which uses the microservices architecture to manage its front end, back end and cart transactions. There are separate microservices built for the admin and for customers. The most special feature of this sample is the integration of Docker and Kubernetes to develop, deploy and manage the microservices.

First Approach: Executing the sample by running Kubernetes via Vagrant


In order to deploy this sample, you need to ensure that your machine has the following software installed.
• Vagrant
• VirtualBox

If you want to know about Vagrant, VirtualBox and their installation processes, refer to my blog post. This sample uses Vagrant to run Kubernetes. I have also tried this sample by running Kubernetes locally via Docker without using Vagrant, which requires some modifications to the sample. First, I will describe the Vagrant approach.

If your machine runs Linux, first install Vagrant and VirtualBox to prepare it for deployment. After that, you just need to follow the README file in the deployment directory of the Pet Store sample to get going.

Download the WSO2 Identity Server pack (zip) from here. Then copy it into the /product-mss/samples/petstore/deployment/packs directory. Then navigate to /product-mss/samples/petstore/deployment and run the following command.


That’s all you need to do. Afterwards, everything is up to WSO2 MSS. The sample will create the pods, services and replication controllers as required. It will download and configure CoreOS Vagrant boxes and set up Kubernetes 1.1.1 on 3 CoreOS nodes. Before going into depth, let’s preview the deployed results. If you have deployed the sample successfully, you should get the following outcomes.

How do I know whether the Kubernetes cluster is up and running?

Access the following URL and see whether your Kubernetes service is up and running.

This should display something similar to this. 

Access the Kubernetes UI/dashboard from the following link.

The Kubernetes master runs on , and the minions run on , . Unless you specified NUM_INSTANCES when you ran '', it creates a Kubernetes cluster with 2 minions by default. The above UI shows the resource utilization of the two minions.

Kubernetes CLI (kubectl)

The Kubernetes CLI provides an easy-to-use command-line interface to interact with Kubernetes. You can get information on Kubernetes pods and services using kubectl. You can also view, create, delete and edit Kubernetes artifacts using the CLI. See here for information on kubectl commands.

Try out the following to check whether the pods, services and replication controllers were created successfully.

Get information about the cluster.

nanduni@nanduni-TECRA-M11:~/product-mss/samples/petstore/deployment$ kubectl cluster-info
Kubernetes master is running at
kube-dns is running at
KubeUI is running at

From this you can find the URLs at which the Kubernetes master, kube-dns and KubeUI run.

List all nodes in ps output format.

nanduni@nanduni-TECRA-M11:~/product-mss/samples/petstore/deployment$ kubectl get nodes
NAME   LABELS          STATUS
       disktype=ssd,   Ready
                       Ready

List all pods in ps output format.

nanduni@nanduni-TECRA-M11:~/product-mss/samples/petstore/deployment$ kubectl get pods
NAME                   READY     STATUS    RESTARTS   AGE
admin-fe-tt16p         1/1       Running   0          2m
fileserver-65yq7       1/1       Running   0          2m
pet-5mbds              1/1       Running   0          2m
redis-23deq            1/1       Running   0          3m
redis-i95xd            1/1       Running   0          3m
redis-master-mq3e0     1/1       Running   0          2m
redis-sentinel-0jxkc   1/1       Running   0          3m
redis-sentinel-ndjsg   1/1       Running   0          2m
redis-sentinel-ywnjj   1/1       Running   0          3m
redis-vav33            1/1       Running   0          2m
security-mrxix         1/1       Running   0          2m
store-fe-93ooy         1/1       Running   0          2m
txn-oolob              1/1       Running   0          2m

This displays the status of the different pods. If the sample has deployed properly, all the pods should be running with no restarts.

List all pods that have the namespace kube-system.

nanduni@nanduni-TECRA-M11:~/product-mss/samples/petstore/deployment$ kubectl get pods --namespace=kube-system
NAME               READY     STATUS    RESTARTS   AGE
kube-dns-rz0ba     3/3       Running   0          7m
kube-ui-v2-ndyqk   1/1       Running   0          14m

Return snapshot logs from the kube-dns pod to check whether kube2sky and skydns work properly.

nanduni@nanduni-TECRA-M11:~/product-mss/samples/petstore/deployment$ kubectl logs kube-dns-rz0ba -c kube2sky --namespace=kube-system
I1217 02:47:13.449338       1 kube2sky.go:389] Etcd server found:
I1217 02:47:14.484016       1 kube2sky.go:455] Using for kubernetes master
I1217 02:47:14.484943       1 kube2sky.go:456] Using kubernetes API 

List all services in ps output format.

nanduni@nanduni-TECRA-M11:~/product-mss/samples/petstore/deployment$ kubectl get svc
NAME              LABELS                                    SELECTOR              IP(S)            PORT(S)
admin-fe          name=admin-fe                             name=admin-fe    80/TCP
fileserver        name=fileserver                           name=fileserver    80/TCP
identity-server   <none>                                    <none>         9443/TCP
kubernetes        component=apiserver,provider=kubernetes   <none>             443/TCP
pet               name=pet                                  name=pet       80/TCP
redis-master      name=redis-master                         name=redis-master    6379/TCP
redis-sentinel    name=sentinel,role=service                redis-sentinel=true   26379/TCP
security          name=security                             name=security    80/TCP
store-fe          name=store-fe                             name=store-fe   80/TCP
txn               name=txn                                  name=txn        80/TCP

This displays the set of services created by the deployed sample. There are separate service components for the admin front end, file server, identity server, security, store front end, transactions, etc.

Now let’s look into the admin front end and the store front end to see the corresponding UIs. For that we need the NodePorts of the admin front end and store front end, which can be found by executing the following commands.

Admin front end

nanduni@nanduni-TECRA-M11:~/product-mss/samples/petstore/deployment$ kubectl describe svc/admin-fe
Name:   admin-fe
Namespace:  default
Labels:   name=admin-fe
Selector:  name=admin-fe
Type:   NodePort
Port:   <unnamed> 80/TCP
NodePort:  <unnamed> 30375/TCP
Session Affinity: ClientIP
No events.

This shows that the admin front end service, with the label name ‘admin-fe’, has Port 80/TCP and NodePort 30375/TCP. The endpoints are . Access the following URL to navigate to the admin front end; change the NodePort to yours.
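
If you want the NodePort in a script (for example, to build the front-end URL automatically), it can be pulled out of the describe output with standard text tools. A small sketch, run against the output captured above rather than a live cluster:

```shell
#!/bin/sh
# The relevant lines of `kubectl describe svc/admin-fe`, captured as text.
DESCRIBE='Name:   admin-fe
Namespace:  default
Type:   NodePort
Port:   <unnamed> 80/TCP
NodePort:  <unnamed> 30375/TCP
Session Affinity: ClientIP'

# Pick the NodePort line, take its third field, and strip the protocol.
NODE_PORT=$(printf '%s\n' "$DESCRIBE" | awk '/NodePort:/ {print $3}' | cut -d/ -f1)
echo "$NODE_PORT"
```

On a live cluster you would pipe `kubectl describe svc/admin-fe` straight into the same awk filter.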

This should give an output similar to this.

Use the following username and password to login as admin.

username: admin
password: admin

Now you are logged into the admin account. So you are able to add pet types, add pets and list them.

Add a pet type

Add a pet

Get the pet types

Get the list of pets

Store front end

Get the NodePort of the store front end.

nanduni@nanduni-TECRA-M11:~/product-mss/samples/petstore/deployment$ kubectl describe svc/store-fe
Name:   store-fe
Namespace:  default
Labels:   name=store-fe
Selector:  name=store-fe
Type:   NodePort
Port:   <unnamed> 80/TCP
NodePort:  <unnamed> 30371/TCP
Session Affinity: ClientIP
No events.

This shows that the store front end service, with the label name ‘store-fe’, has Port 80/TCP and NodePort 30371/TCP. The endpoints are . Access the following URL to navigate to the store front end; change the NodePort to yours.

This should give an output similar to this.

Now you are logged into the pet store, so you can view the available pets and add them to your cart.

If you want to clean up all the Kubernetes resources, execute the following command.

If you want to redeploy the Kubernetes resources for the Pet Store sample, execute the following command.

If you want to clean up all Kubernetes resources and stop all CoreOS nodes, execute the following command.
nanduni@nanduni-TECRA-M11:~/product-mss/samples/petstore/deployment$ ./
==> node-02: Running triggers before halt...
==> node-02: Attempting graceful shutdown of VM...
==> node-01: Running triggers before halt...
==> node-01: Attempting graceful shutdown of VM...
==> master: Running triggers before halt...
==> master: Attempting graceful shutdown of VM...


When running the sample, I ran into several situations where it didn’t deploy successfully. Here are the issues and their fixes.

  1. It is not the WSO2 Identity Server Service Pack that you need to copy into the packs folder within the sample. Make sure that you have copied the WSO2 Identity Server Version 5.0 zip file into the relevant folder. You can get it from here.
  2. When I ran "kubectl get pods", I came across many pods that had not reached the Running state, and some pods had a lot of restarts. This may be due to a slow internet connection. You may resolve it by running the following commands.

  3. If you are running the sample for the first time, it will take a lot of time to download the Kubernetes packs, so be patient until the cluster is ready to execute the sample with Kubernetes.
  4. If you are unable to access the pet store front end, something has probably gone wrong. Look into the log files to see if there are any errors. You may execute the following command to access bash inside the admin front end pod, then try to ping the file server from it.

     nanduni@nanduni-TECRA-M11:~/product-mss/samples/petstore/deployment$ kubectl exec -t -i admin-fe-tt16p bash
     root@admin-fe-tt16p:/# ping fileserver
     PING fileserver.default.svc.cluster.local ( 56(84) bytes of data.

  5. When executing kubectl commands in the script file, you may get the following error.

     error: couldn't read version from server: Get http://localhost:8080/api: dial tcp connection refused

     You may resolve it by exporting the Kubernetes master address for kubectl to use. You can set that variable in your bashrc file too.
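
The kubectl master export mentioned in the last point can be sketched as follows. The address here is a placeholder for your own master's host and port, and the assumption is that your kubectl build (the 1.1.x line used by this sample) honours the KUBERNETES_MASTER environment variable:

```shell
#!/bin/sh
# Placeholder address - substitute your Kubernetes master's host and port.
export KUBERNETES_MASTER=http://localhost:8080

# Appending the same export line to ~/.bashrc makes it survive new shells.
echo "$KUBERNETES_MASTER"
```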

Second Approach: Executing the sample by running Kubernetes locally via Docker

This is the other approach I used to run the Pet Store sample. Instead of using Vagrant, you can make use of Docker to run Kubernetes. I have already written a blog post that describes how to run Kubernetes locally on your machine via Docker. You can refer to it here.

We can run this sample using that approach too, but with some modifications. I modified the sample code so that it runs using Vagrant on Mac OS machines and uses the aforementioned Docker approach on Linux machines.

To run Kubernetes locally via Docker, you first have to make sure that your machine has Docker installed. Then you have to run the etcd, master and service proxy images. Here is the relevant code.
if [[ "$OSTYPE" == "darwin"* ]]; then
cp -f $HOME/ $VAGRANT_HOME/docker/ 
NODE_MEM=2048 NODE_CPUS=2 NODES=2 USE_KUBE_UI=true vagrant up
source ~/.bash_profile

elif [[ "$OSTYPE" == "linux-gnu" ]]; then
echo "--------------------------------------------------------------"
echo "Setting up Kubernetes locally via Docker"
echo "--------------------------------------------------------------"

#run etcd
docker run --net=host -d /usr/local/bin/etcd --addr= --bind-addr= --data-dir=/var/etcd/data

#run master
docker run \
--volume=/:/rootfs:ro \
--volume=/sys:/sys:ro \
--volume=/dev:/dev \
--volume=/var/lib/docker/:/var/lib/docker:ro \
--volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
--volume=/var/run:/var/run:rw \
--net=host \
--pid=host \
--privileged=true \
-d \ \
/hyperkube kubelet --containerized --hostname-override="" --address="" --api-servers=http://localhost:8080 --config=/etc/kubernetes/manifests-multi --cluster_dns= --cluster_domain=cluster.local

#run service proxy
docker run -d --net=host --privileged /hyperkube proxy --master= --v=2

echo "--------------------------------------------------------------"
echo "Kubernetes was set up"
echo "--------------------------------------------------------------"

sed -i -e "s@HOME=.*@HOME=$VAGRANT_HOME/docker@g" ./ 
chmod 755 ./
fi
You need to modify the location of the fileserver directory too. For Linux users, I changed the file server directory location to the user’s home directory.
echo "--------------------------------------------------------------"
echo "Deploying FileServer"
echo "--------------------------------------------------------------"
cd $HOME
kubectl label nodes disktype=ssd
cd $FILESERVER_HOME/container/kubernetes/
if [[ "$OSTYPE" == "darwin"* ]]; then
kubectl create -f .
elif [[ "$OSTYPE" == "linux-gnu" ]]; then 
sed -i -e "s@path: \/home\/core\/fileserver@path: \/home\/$USER\/fileserver@g" fileserver-rc.yaml
sed -i -e "s@nodeSelector:@ @g" fileserver-rc.yaml
sed -i -e "s@disktype: ssd@ @g" fileserver-rc.yaml 
kubectl create -f .
fi
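
The effect of those sed edits is easy to see in isolation. Below, a stripped-down stand-in for fileserver-rc.yaml (just the hostPath line, not the full manifest) gets the same path rewrite applied:

```shell
#!/bin/sh
# A stand-in for the relevant line of fileserver-rc.yaml.
printf 'path: /home/core/fileserver\n' > /tmp/fileserver-rc-demo.yaml

# Fall back to a placeholder user name if $USER is unset.
DEMO_USER="${USER:-petstore}"

# Rewrite the CoreOS-specific path to the local user's home directory,
# exactly as the deployment script does on Linux.
sed -i -e "s@path: /home/core/fileserver@path: /home/$DEMO_USER/fileserver@g" /tmp/fileserver-rc-demo.yaml

cat /tmp/fileserver-rc-demo.yaml
```

The same idea applies to the nodeSelector and disktype lines, which are simply blanked out for the single-node case.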

The cluster IP should also be modified in the dns-service.yaml file to support a Kubernetes single-node cluster on your local machine.
kubectl create -f $VAGRANT_HOME/plugins/namespace/kube-system.json
if [[ "$OSTYPE" == "darwin"* ]]; then
kubectl create -f $VAGRANT_HOME/plugins/dns/dns-service.yaml
elif [[ "$OSTYPE" == "linux-gnu" ]]; then 
sed -i -e "s@" $VAGRANT_HOME/plugins/dns/dns-service.yaml 
kubectl create -f $VAGRANT_HOME/plugins/dns/dns-service.yaml
fi
After these modifications, you can execute the sample just as in the previous Vagrant-based approach. It is important to download the images for Kubernetes version 1.1.1. You should also check the logs to see whether the kube2sky images were downloaded properly, so that you can be sure SkyDNS has integrated successfully.

Analysing the sample

By now, you have a general understanding of the structure of the Pet Store sample. It is a combination of several microservices; the following is a simple illustration of that, where the rectangles are the pods that are created.

Services call each other, and the data is published to DAS. So how do the services talk to each other? They talk via APIs; each service has an exposed API. That is how microservices communicate with each other. The microservices architecture brings many benefits to this pet store scenario.
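
The essence of that API-based communication can be sketched locally. Here a stub HTTP server stands in for the pet service (the real one is a Java microservice; the file name, port and JSON shape below are invented for illustration), and a second process "calls" it with a plain HTTP GET:

```shell
#!/bin/sh
# Stand-in for one microservice's API: a static JSON document served
# over HTTP by python3's built-in server.
mkdir -p /tmp/pet-api
printf '{"petId": 1, "name": "Rex"}' > /tmp/pet-api/pet.json
( cd /tmp/pet-api && exec python3 -m http.server 8099 ) >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1

# Another service fetches the pet's data with a plain HTTP GET.
RESPONSE=$(curl -s http://localhost:8099/pet.json)
echo "$RESPONSE"

kill "$SERVER_PID"
```

Inside the Kubernetes cluster the URL would use a service name such as `http://pet/` resolved by SkyDNS, but the HTTP exchange itself is no more complicated than this.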

The most prominent benefit is the diversification of technologies and infrastructure. For example, the admin-fe service is written in PHP, whereas pet is written in Java. So we are able to select the best option for each service.

Another benefit of this approach is easy scaling. If we relate this sample to a business scenario, it needs only one admin, so we can create one pod for that. But there can be millions of customers coming into the pet store, so we need to scale store-fe to create more instances; say about a hundred. Out of them, about fifty would add pets to their cart and proceed to the transaction. Thus we can scale each service as required. This avoids wasting resources and increases performance and throughput.

A Redis database is used in this sample to store the PetID, image and pricing details of each pet. Redis is an in-memory key-value data store; in fact, it is a data structure server. Redis has become the most popular key-value database and has been ranked the #1 NoSQL key-value store. The pet store keeps all its data in Redis as the primary database because Redis is fast and suits this scenario perfectly.

Another advantage of building applications with WSO2 MSS is that different services can be developed and deployed independently. A modification to one service therefore has no effect on the functionality of the other working services. Errors can be pinpointed easily, and testing becomes easier.

So what role does Kubernetes play in this scenario? Suppose the Pet service wants to find the fileserver. It can do so via SkyDNS, another service that provides DNS-based service discovery. This helps locate services in a large environment. SkyDNS works together with kube2sky, which listens to the Kubernetes API for new services. SkyDNS responds to the Pet service's DNS query so that it can find the fileserver. There can be lots of pods sending DNS queries to SkyDNS, and a round-robin algorithm is used to respond to these queries. If you want to know more about how SkyDNS works, see my previous blog post. Behind the scenes, MSS enables building microservices with high performance and easy deployment.

PetStore sample - Deployment view

The sample creates a Kubernetes master and two Kubernetes nodes. After successful deployment of the sample, you will be able to see the master and nodes running through VirtualBox. Click on "Views" in the top right corner of the dashboard to access the Kubernetes pods, services, nodes, etc.



Replication controllers


I hope this article has been useful for understanding how the Pet Store sample works with WSO2 MSS and Kubernetes. Follow the README files inside the WSO2 MSS samples to understand how they work, and try to deploy the samples as a first step to start working with WSO2 MSS.