More and more applications are deployed in the microservice architecture, in which each service has a single responsibility in the system. In addition, the system is characterized by a heterogeneous environment: it may be composed of back-end applications, file servers, static data servers, relational databases, NoSQL databases, messaging systems, or queue brokers.
Deploying hundreds of separate applications, or even just a few of them, is not an easy task. To cope with this “chaos”, proper tools need to be used. Orchestration is such a tool.
When is orchestration a good idea?
The idea of orchestration emerged on the market several years ago along with cloud solutions.
It is a comprehensive tool, ideal for automation, that pairs naturally with microservice architecture.
The main task of an orchestrator is to manage multiple containers. In order to make use of an orchestrator, services must be packed into containers. To simplify, it may be assumed that one service resides in a single container. Containers themselves are based on images created by service providers or system developers. Once an image is created, it is immutable. This guarantees consistency across all the environments in which it is used.
Orchestration helps avoid many problems, such as tedious and time-consuming manual service launching, securing the right amount of resources on servers, and configuring the network. In complex multi-service systems, managing dozens of services manually is practically impossible.
Are we doomed to the cloud?
When deploying a microservice-based system, one of the available options is to use services of cloud hosting providers such as Amazon, Google, or Heroku.
Nevertheless, public clouds are not free of limitations. One of them is the lack of control over physical machines, especially as the infrastructure of the largest and most popular cloud providers is often located abroad. Some businesses, the banking sector included, usually cannot accept this shortcoming.
An alternative to a public cloud is to introduce this solution on a company’s own physical infrastructure. It must be mentioned, though, that installing a private cloud is not an easy task and requires an experienced DevOps team.
Orchestration brings many benefits and allows for the mitigation of many potential problems. These benefits include:
1. Continuous integration and continuous delivery
Each time a new version of any service is released, a new image of it is created. It is crucial that every image is pushed to a repository from which the orchestrator can pull the required version. Once images of services are in the repository, a deployment can be executed independently of build pipelines. Moreover, exactly the same version of a service can be introduced to numerous environments, such as testing, acceptance, or production.
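The version-pinning described above can be sketched as a Kubernetes Deployment. All names here (the `orders-service` service, the `registry.example.com` registry) are hypothetical; the point is that the manifest references one exact image tag, so every environment runs the same build:

```yaml
# Illustrative sketch: a Deployment that pins an exact image version
# pulled from a repository, so test, acceptance, and production
# environments all receive the identical build.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service        # hypothetical service name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service
    spec:
      containers:
        - name: orders-service
          # exact version tag, never a floating tag such as :latest
          image: registry.example.com/orders-service:1.4.2
```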
2. In pursuit of automation
The orchestrator requires that the whole system, along with all its components, be defined in a manifest file. The manifest itself defines the services that constitute the system and the way they are connected. The system comprises not only business logic but also databases, static content servers, messaging systems, or queue brokers. The role of the orchestrator is to allocate resources and launch the services defined in the manifest. Images of the defined services are fetched, if needed, from a local or remote repository. Many popular products are distributed in the form of images and are easy to obtain. It is also good practice to maintain a copy of publicly distributed images in the local repository.
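As a sketch of such a manifest, the fragment below declares an off-the-shelf message broker as one component of the system. The resource requests are illustrative values, not recommendations:

```yaml
# Illustrative manifest fragment: a message broker declared as one
# component of the system. The orchestrator pulls the image and
# allocates the requested resources on a suitable node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbitmq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
        - name: rabbitmq
          image: rabbitmq:3        # public image, ideally mirrored locally
          resources:
            requests:              # example values only
              cpu: "250m"
              memory: 256Mi
```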
3. Service monitoring
In order to ensure high availability and efficiency, the orchestrator launches additional instances according to pre-set rules. If any monitored service stops responding to requests, the orchestrator automatically launches a replacement instance to guarantee continuity of work. The orchestrator ensures not only that services are launched but also that they work efficiently. In the case of increased load, it may automatically scale up the number of running instances to cope with heavy traffic.
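In Kubernetes terms, the two mechanisms above map to health probes and autoscaling. The sketch below assumes a hypothetical `/health` endpoint on port 8080 and an `orders-service` Deployment:

```yaml
# Illustrative sketch: (1) a liveness probe lets the orchestrator detect
# an unresponsive container and restart it; the fragment below would sit
# inside a Deployment's container spec.
containers:
  - name: orders-service
    image: registry.example.com/orders-service:1.4.2
    livenessProbe:
      httpGet:
        path: /health        # assumed health-check endpoint
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 5
---
# (2) a HorizontalPodAutoscaler adds or removes instances as load changes.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: orders-service
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: orders-service
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70    # example threshold
```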
4. Blue/green deployment
In a blue/green deployment, the new version is launched before the previous one is stopped. The orchestrator itself provides rules for load balancers, allowing a smooth transition between versions.
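One common way to express this in Kubernetes is to run two Deployments side by side, labelled for example `version: blue` and `version: green`, and flip the Service selector to cut traffic over. A hypothetical sketch:

```yaml
# Illustrative blue/green switch: two versions run simultaneously;
# changing the selector below from "blue" to "green" redirects all
# traffic to the new version without downtime.
apiVersion: v1
kind: Service
metadata:
  name: orders-service
spec:
  selector:
    app: orders-service
    version: green       # was "blue"; flipping this performs the cut-over
  ports:
    - port: 80
      targetPort: 8080
```

If the new version misbehaves, setting the selector back to `blue` rolls traffic back just as quickly.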
5. Service discovery
To establish effective communication between services, the orchestrator provides and manages a DNS server. The logical names assigned to services in the manifest are all the orchestrator needs to enable network communication. The orchestrator also provides load-balancing rules across all instances of a service.
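In Kubernetes, the logical name is simply the Service name, which becomes a cluster DNS entry. The names below are hypothetical:

```yaml
# Illustrative sketch: the Service name "orders-db" becomes a DNS name,
# so other services connect by logical name instead of an IP address,
# and requests are load-balanced across matching instances.
apiVersion: v1
kind: Service
metadata:
  name: orders-db        # resolvable as orders-db (or the fully qualified
                         # orders-db.<namespace>.svc.cluster.local)
spec:
  selector:
    app: orders-db
  ports:
    - port: 5432
```

A client service would then use a connection string such as `postgres://orders-db:5432/orders`, with no hard-coded addresses.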
Speed and automation
Managing instances, deploying services, or adding new services in a system controlled by the orchestrator comes down to issuing short commands. The orchestrator can be managed via a dedicated web interface, but most commonly it is done through a command-line interface, which can easily be incorporated into scripts. The orchestrator only needs to receive a command to launch a service together with an adequate manifest.
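With Kubernetes, which the article's authors use as their orchestrator, such commands might look like the following (the manifest filename and service name are hypothetical, and the commands assume access to a running cluster):

```shell
# Deploy, or update in place, everything described in the manifest
kubectl apply -f system-manifest.yaml

# Scale a single service to five instances
kubectl scale deployment orders-service --replicas=5

# Inspect what is currently running
kubectl get pods
```

Because these are ordinary shell commands, they slot directly into CI/CD scripts.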
Because the configuration of all services in the system is described in manifest files, deployment takes little time and requires virtually no manual intervention. This approach offers many automation opportunities, e.g. it enables the quick launch of another instance of the whole system. It may also be useful in restoring the system after a hardware failure.
Systems whose deployment is based on orchestration bring invaluable benefits, and their popularity keeps growing. At Efigence, the orchestrator that has been used successfully for over a year is Google’s Kubernetes. It is the base platform for running services in a web analytics system, EFI4 Analytics.