Next Generation Application Architecture for Cloud Ready Applications – Part I
Applications can represent a business, a business function, a process, or anything that takes certain inputs and produces an output. Large businesses are built upon highly successful applications or the services those applications provide. They can run an entire business or help streamline business functions and processes. The efficiency of generating output, the agility of development, and the complexity of deployment and maintenance are largely governed by an application's design and architecture. Design is the first and most crucial part of developing an application: the design phase not only realizes the requirements but also lays the foundation for the application's architecture. Given the challenges of development, efficiency, and ever-changing business needs, it is essential to architect the application with foresight toward ease of development, agility, and deployment at scale. While the exchange between different functions in the application is very important from an architectural perspective, data governs the application and its flow. Inputs, data processing, and data flow are crucial aspects of any application.
In a nutshell, any application on the internet can be divided into a “Client” and a “Server”. While this is a very basic model, the Client is anything that collects inputs and/or does some processing, and the Server is the main data-processing unit. Because the client’s role is relatively trivial, the major design, development, and maintenance effort goes into the server component of the application. The client–server model has been the conventional model for applications, and all well-known application frameworks are built around it. Building on this conventional architecture, individual components are modularized for easier development and deployment. This leads to a modularized application architecture in which components with specific functionality become modules that interact with each other by exchanging data. The efficiency and complexity of the application are significantly influenced by the level of modularization introduced into it.
This design trait leads to a modular architecture for the application. By contrast, a monolithic application is built as a single unit. In the client–server model, the server can be a monolith: a single module that forms the entire application.
A technical definition of “Service”: a service is the provision of a discrete function within a system environment, where it interacts with other such functions or with different system components via a communication protocol over the network. In other words, a service is a component in a modularized system that provides specific functionality to other modules in the system.
In an application, the Client can be branched into modules like Input Processing, Data Pre-processing, and Server Data Exchange, while the Server can be branched into Data Processing, Business Logic, Data Manager, and so on. Each module performs a very specific function: it accepts inputs, processes data, and generates output. Data flows back and forth through these modules, and each module provides services to other modules while it may also consume services from them.
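To make the chain of modules concrete, here is a minimal sketch in Python. The module names mirror the ones above, but the functions themselves (and the key/value data format) are purely illustrative assumptions, not part of any real application.

```python
# Hypothetical modules: each consumes the previous module's output,
# so data flows through the application one service at a time.

def input_processing(raw: str) -> dict:
    """Client-side: parse raw input into a structured record."""
    key, _, value = raw.partition("=")
    return {"key": key.strip(), "value": value.strip()}

def data_preprocessing(record: dict) -> dict:
    """Client-side: normalize data before handing it to the server."""
    return {**record, "value": record["value"].lower()}

def business_logic(record: dict) -> str:
    """Server-side: apply a rule and produce the application's output."""
    return f"{record['key']} -> {record['value']}"

def run_pipeline(raw: str) -> str:
    # Each module provides a service to the next one in the chain.
    return business_logic(data_preprocessing(input_processing(raw)))

result = run_pipeline("Color = BLUE")
print(result)  # Color -> blue
```

Because each function only depends on the shape of its input, any single module could be swapped out or moved to another process without touching the others, which is exactly the property the modular architecture is after.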
Rise of Micro-Services
Micro-Services Architecture (MSA) is a modern concept that has emerged after years of application development and technological advancement. MSA is a method of developing a software application as a system of modularized services that communicate with each other over the network.
There is neither a formal definition of MSA nor any standard way to practice this style of application development. Essentially, in this approach a single application is developed as a suite of services, each running independently and communicating with the others via some network protocol.
Individual services can be developed as per business needs. As these services are small parts of a bigger application, they can be quickly developed, tested, and deployed. Continuous Integration tools like Jenkins help boost QA, and automated deployment tools like Puppet or Ansible automate the deployment process. Thus the only remaining task is to write code and develop a service that can be easily integrated into the application. This not only encourages standard development practices but also decreases time to market and makes it possible to add or remove features with quick development cycles.
A service runs as an independent process, usually in its own system environment. Thus the choice of programming language is not restricted to one: services can be written in different languages, which opens the opportunity to develop services that run on different platforms.
The most important aspect of this style is modularizing functions into independent services. These services then communicate with each other using REST APIs over HTTP, or over other lightweight protocols such as AMQP or MQTT. This results in a loosely coupled system in which individual components work independently, allowing components to be added or removed without affecting the others. Monolithic applications, by contrast, tend to be tightly coupled, and updating or adding components in such a system is difficult and time-consuming. In a micro-services application, deployment and updating are quick and flexible. If a new service is introduced, it is discovered by the services concerned and integrated into the application with ease. If a service needs to be updated, that can happen without any service disruption; the update may affect some part of the application, but it will never affect the entire application.
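The REST-over-HTTP exchange described above can be sketched with nothing but the Python standard library. The "pricing" service, its endpoint, and its response format are invented for illustration; a real deployment would run each side in its own process or container rather than in threads of one script.

```python
# Two "services" talking over HTTP: one exposes a REST endpoint, the
# other consumes it exactly as an external client would.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class PricingService(BaseHTTPRequestHandler):
    """Hypothetical micro-service: GET /<item> returns a JSON price."""
    def do_GET(self):
        body = json.dumps({"item": self.path.strip("/"), "price": 42}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example's output quiet

server = HTTPServer(("127.0.0.1", 0), PricingService)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The consuming service only knows the URL, nothing about the implementation.
url = f"http://127.0.0.1:{server.server_port}/widget"
with urllib.request.urlopen(url) as resp:
    data = json.loads(resp.read())
server.shutdown()
print(data)  # {'item': 'widget', 'price': 42}
```

Note that the consumer depends only on the URL and the JSON contract, which is what makes the coupling loose: the pricing service could be rewritten in another language without the consumer noticing.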
Tools like Puppet can deploy a service automatically, making deployments faster and error-free. Because services communicate via APIs over a common network, they can be deployed on separate computing systems, which opens up a wide range of deployment options. Services deployed on different systems form a distributed system in which individual, business-critical services can be deployed in High Availability configurations to achieve high application uptime. Services can run on logically separated or geographically separate systems, so the application can run with minimal latency and provide faster output.
This flexibility in the deployment of services helps scale the application as per business demand. If service requests increase, the application can scale by increasing the number of service instances across the network to meet the demand spike. In a monolithic application, scaling means increasing the number of instances of the monolithic unit, which may lead to provisioning unnecessary resources. With micro-services, individual service instances can be scaled with the minimum required resources, improving resource utilization.
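The per-service scaling described above amounts to adding instances to one service's pool and spreading requests across them. The sketch below, with invented instance names, shows the idea with simple round-robin dispatch; real platforms use a load balancer or orchestrator for this.

```python
# Scaling one micro-service independently: only its own instance pool
# grows, and requests are spread round-robin over the current instances.
from itertools import cycle

class ServicePool:
    """Hypothetical pool of instances for a single micro-service."""
    def __init__(self, instances):
        self.instances = list(instances)
        self._rr = cycle(self.instances)

    def scale_out(self, instance):
        # Demand spike: add one more instance of just this service.
        self.instances.append(instance)
        self._rr = cycle(self.instances)

    def handle(self, request):
        return f"{next(self._rr)} served {request}"

pool = ServicePool(["billing-1"])
pool.scale_out("billing-2")
results = [pool.handle(r) for r in ("req-a", "req-b", "req-c")]
print(results)
# ['billing-1 served req-a', 'billing-2 served req-b', 'billing-1 served req-c']
```

Contrast this with a monolith, where meeting the same spike would mean cloning the entire application, including every part that is not under load.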
Micro-Services and IT Infrastructure
Software applications run on dedicated hardware for best performance, although similar performance can be achieved in virtual environments by deploying the application on virtual machines or containers. Moving from dedicated hardware to virtual machines to containers increases the granularity of resource sharing and hardware abstraction. Virtualization, container virtualization, and the Cloud provide scalability, reliability, on-demand service, resource sharing, and service monitoring.
Applications with a micro-services architecture can be easily deployed on the Cloud or in any virtualized environment. Services can be deployed as individual virtual machines and provisioned on demand. These services can scale by deploying more virtual machines, or they can be deployed in selected data centers to ensure high availability of the application.
A service in a micro-services architecture is essentially code written in a programming language, along with its libraries and runtime environment. It can run in either a Windows or a Linux operating environment. If services can share the same Linux kernel, they can run in the same Linux operating system environment; container-based virtualization allows applications, in this case services, to run on a shared Linux operating system in exactly this way.
When services are provisioned in virtual machines, there is the additional overhead of a full operating system and of unnecessary libraries and processes running inside each virtual machine. With containers, services run in isolated environments while sharing a common Linux kernel on a single containerized host.
Kubernetes and Docker Swarm are two container orchestration engines that allow the creation and management of containers. Of the two, Kubernetes, also known as “k8s”, is the popular choice for container orchestration. With Kubernetes, services can be packaged into containers and then provisioned on demand. Kubernetes is written in the Go language; it was initially developed by Google and later open-sourced.
While OpenStack provides machine-level abstraction, Kubernetes provides application-level abstraction. Along with a user-friendly dashboard and a command-line interface for managing containers, Kubernetes also provides application-level health checks.
Kubernetes itself has a modular architecture, comprising the following components:
- API Server, which talks to the controller manager and the Scheduler
- etcd, a distributed key:value database
- kubelet, similar to a nova agent in OpenStack; the kubelet is tightly coupled with Docker
- kube-proxy, which forwards requests to the appropriate application or container via a load balancer
- Pod, a collection of containers sharing the same network, host, or file system
- cAdvisor, which auto-discovers all the containers on a machine and collects CPU, memory, file-system, and network usage statistics
Kubernetes is deployed as a cluster of a master and minions. The k8s master is a centralized node that contains the API Server, etcd, the controller manager, and the scheduler. Minions are the multiple nodes that run the actual containers. Along with the containers, a minion runs the kubelet, kube-proxy, cAdvisor, and Pods.
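To illustrate the role etcd plays in this picture, here is a toy sketch of the pattern it enables: services register their endpoints under hierarchical keys, and components such as kube-proxy discover them by key prefix. The class, key layout, and addresses are all invented for illustration; they are not etcd's real API or Kubernetes' actual object model.

```python
# Toy stand-in for a distributed key:value store such as etcd.
class KeyValueStore:
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get_prefix(self, prefix):
        # etcd-style range read: everything under a key prefix.
        return {k: v for k, v in self._data.items() if k.startswith(prefix)}

store = KeyValueStore()
# Each service instance registers where it can be reached.
store.put("/services/payments/pod-1", "10.0.0.11:8080")
store.put("/services/payments/pod-2", "10.0.0.12:8080")
store.put("/services/orders/pod-1", "10.0.0.21:8080")

# A kube-proxy-like component can now find every payments endpoint
# without knowing in advance how many instances exist.
endpoints = sorted(store.get_prefix("/services/payments/").values())
print(endpoints)  # ['10.0.0.11:8080', '10.0.0.12:8080']
```

This is why etcd sits at the heart of the master: cluster state lives in one watchable place, and the other components react to changes in it.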
Istio is an open platform to connect, manage, and secure micro-services. It provides an easy way to create a network of deployed services with load balancing, service-to-service authentication, monitoring, and more, without requiring any changes to service code. “Service mesh” describes the network of micro-services that makes up the application and the interactions between them. As a service mesh grows in size and complexity, it can become harder to manage. Its requirements can include discovery, load balancing, failure recovery, metrics, and monitoring, and often more complex operational requirements such as A/B testing, canary releases, rate limiting, access control, and end-to-end authentication.
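One of the mesh duties listed above, failure recovery, can be sketched in a few lines: wrap a flaky service call in a retry layer without touching the service code itself, which is the property Istio's sidecar proxies provide at the network level. The decorator and the `fetch_inventory` service below are illustrative assumptions, not Istio's actual mechanism.

```python
# Sidecar-style retry wrapper: adds failure recovery around a call
# without modifying the code being called.
import functools

def with_retries(max_attempts=3):
    def decorator(call):
        @functools.wraps(call)
        def wrapped(*args, **kwargs):
            last_error = None
            for _ in range(max_attempts):
                try:
                    return call(*args, **kwargs)
                except ConnectionError as err:
                    last_error = err  # transient failure: try again
            raise last_error
        return wrapped
    return decorator

attempts = {"count": 0}

@with_retries(max_attempts=3)
def fetch_inventory():
    """Hypothetical flaky service call: fails twice, then recovers."""
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise ConnectionError("upstream unavailable")
    return "42 items"

result = fetch_inventory()
print(result)  # 42 items, after two transparent retries
```

A mesh applies this same idea, plus load balancing, authentication, and metrics, uniformly to every service-to-service call, which is why it scales better than baking such logic into each service.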
Micro-Services Pros & Cons
Micro-services are not a simple and seemingly magical solution to a complicated problem. In fact, implementing them will expose problems in communication, teamwork, and elsewhere that may previously have been implicit but are now forced into the open. On the other hand, API gateways in micro-services can greatly reduce build and QA time and effort. As with anything else, whether or not micro-service architecture is right for you depends on your requirements, because it has both benefits and disadvantages.
Pros:
- Microservice architecture gives developers the freedom to independently develop and deploy services
- A microservice can be developed by a fairly small team
- Code for different services can be written in different languages (though many practitioners discourage it)
- Easy integration and automatic deployment (using open-source continuous integration tools such as Jenkins, Hudson, etc.)
- Easy to understand and modify for developers, thus can help a new team member turn productive quickly
- Developers can make use of the latest technologies
- Code is organized around business capabilities
- A service starts its web container more quickly, so deployment is also faster
- When change is required in a certain part of the application, only the related service can be modified and redeployed—no need to modify and redeploy the entire application
- Better fault isolation: if one microservice fails, the others continue to work (in contrast, one problematic area of a monolithic application can jeopardize the entire system)
- Easy to scale and integrate with third-party services
- No long-term commitment to technology stack
Cons:
- Due to distributed deployment, testing can become complicated and tedious
- Increasing number of services can result in information barriers
- The architecture brings additional complexity, as developers have to handle fault tolerance and network latency, and deal with a variety of message formats as well as load balancing
- Being a distributed system, it can result in duplication of effort
- When the number of services increases, integrating and managing the whole product can become complicated
- In addition to the complexities of monolithic architecture, developers have to deal with the additional complexity of a distributed system
- Developers have to put additional effort into implementing the mechanism of communication between the services
- Handling use cases that span more than one service without using distributed transactions is not only tough but also requires communication and cooperation between different teams
- The architecture usually results in increased memory consumption
- Partitioning the application into micro-services is very much an art