The rise of microservices has been a remarkable advancement in application development and deployment. With microservices, an application is developed, or refactored, into separate services that “speak” to one another in a well-defined way, for instance via APIs. Each microservice is self-contained, each maintains its own data store (which has significant implications), and each can be updated independently of the others.
Moving to a microservices-based approach makes app development faster and easier to manage, requiring fewer people to implement more new features. Changes can be made and deployed more quickly and with less effort. An application designed as a collection of microservices is easier to run on multiple servers with load balancing, making it easy to handle demand spikes and steady growth in demand over time, while reducing downtime caused by hardware or software problems.
Microservices are a critical part of a number of significant advancements that are changing the nature of how we work. Agile software development techniques, moving applications to the cloud, DevOps culture, continuous integration and continuous deployment (CI/CD), and the use of containers are all being used alongside microservices to revolutionize application development and delivery.
AsortCRM is strongly associated with microservices and all of the technologies listed above. Whether deployed as a reverse proxy or as a highly efficient web server, AsortCRM makes microservices-based application development easier and keeps microservices-based solutions running smoothly.
Microservices is an application architectural style in which an application is composed of many discrete, network-connected components, termed microservices. It can be seen as an evolution of the SOA (Service-Oriented Architecture) architectural style. The key difference between the two is that applications built using SOA services tended to become focused on technical integration issues, with services that were often very fine-grained technical APIs, whereas the microservices approach stays focused on implementing clear business capabilities through larger-grained business APIs.
But aside from the service design questions, perhaps the biggest difference is one of deployment style. For many years, applications have been packaged in a monolithic fashion: a team of developers would construct one large application that does everything required for a business need. Once built, that application would be deployed multiple times across a farm of application servers. By contrast, in the microservices architectural style, several smaller applications that each implement only part of the whole are built and packaged independently.
When you look at the implications of this, you see that five simple rules drive the implementation of applications built using the microservices architecture. They are:
- Break large monoliths down into many small services — A single network-accessible service is the smallest deployable unit for a microservices application. Each service should run in its own process. This rule is sometimes stated as “one service per container”, where “container” could mean a Docker container or any other lightweight deployment mechanism such as a Cloud Foundry runtime.
- Optimize services for a single function – In a traditional monolithic SOA approach a single application runtime would perform multiple business functions. In a microservices approach, there should be one and only one business function per service. This makes each service smaller and simpler to write and maintain. Robert Martin calls this the “Single Responsibility Principle”.
- Communicate via REST APIs and message brokers – One of the drawbacks of the SOA approach was the explosion of standards and options for implementing SOA services. The microservices approach instead strictly limits the types of network connectivity a service can implement, to achieve maximum simplicity. A related rule is to avoid the tight coupling introduced by implicit communication through a shared database: all service-to-service communication must go through the service API, or at least use an explicit communication pattern such as the Claim Check Pattern.
- Apply Per-service CI/CD — When building a large application composed of many services, you soon realize that different services evolve at different rates. Giving each service its own Continuous Integration/Continuous Delivery pipeline allows that evolution to proceed at its own natural pace – unlike in the monolithic approach, where every aspect of the system was forced to be released at the speed of its slowest-moving part.
- Apply Per-service HA/clustering decisions – Another realization when building large systems is that when it comes to clustering, one size does not fit all. The monolithic approach of scaling every service in the monolith to the same level often led to overutilization of some servers and underutilization of others – or, even worse, starvation of some services when others monopolized shared resources such as thread pools. The reality is that in a large system not all services need to scale; many can be deployed on a minimal number of servers to conserve resources, while others must scale up to very large numbers of instances.
The power of the combination of these points and the benefits obtained from following them is the primary reason why the microservices architecture has become so popular.
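As a sketch of the first three rules in practice, here is a minimal, hypothetical single-function service written with only the Python standard library: it owns its own in-memory data, runs in its own process, and is reachable only through a small REST API. The service name, route, and data are illustrative assumptions, not part of any particular product.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# In-memory data store owned exclusively by this service (illustrative data).
PRICES = {"widget": 9.99, "gadget": 24.50}

class PriceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Single business function: price lookup via GET /prices/<item>
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "prices" and parts[1] in PRICES:
            body = json.dumps({"item": parts[1], "price": PRICES[parts[1]]}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # silence per-request logging for this sketch

def serve(port=0):
    """Start the service on a background thread; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), PriceHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

A consumer would call GET /prices/widget over HTTP rather than reaching into this service's data store, which keeps the coupling between services explicit.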
The Building Blocks of Microservices
When talking about microservices, it is inevitable to mention containers. Containers are pared down to the minimum viable pieces needed to run the service they host, rather than packing multiple functions into the same physical or virtual machine. That said, containers are simply tools that can ease deployment; it is entirely possible to build an application following the microservices architecture without containerization.
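To illustrate that last point, the “one service per container” rule at its core only requires that each service run as its own OS process; a container merely packages and isolates that process. Below is a hedged sketch, assuming Python and a hypothetical health-check service, that launches a service as a plain subprocess with no container runtime involved.

```python
import subprocess
import sys

# Hypothetical single-function service, kept inline for the sketch.
# In a real system this would live in its own repository and pipeline.
SERVICE_CODE = """
import json, sys
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, fmt, *args):
        pass

HTTPServer(("127.0.0.1", int(sys.argv[1])), HealthHandler).serve_forever()
"""

def start_service_process(port):
    """Launch the service as an independent OS process on the given port."""
    return subprocess.Popen([sys.executable, "-c", SERVICE_CODE, str(port)])
```

Wrapping this same process in a Docker container would change how it is packaged, shipped, and isolated, but not the basic one-process-per-service shape of the deployment.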
To sum up, the goal of microservices is to simplify building, maintaining, and managing an application by breaking it down into smaller, composable pieces that work together and can be independently deployed, upgraded, removed, or scaled whenever the need arises.