Tutorials Overview
In the following section we introduce the different versions of MiCADO (Microservices-based Cloud Application-level Dynamic Orchestrator), outline their features and overall architecture, explain why we created these versions, and point you to the corresponding user guidelines.
MiCADO is a framework that enables you to create scalable cloud-based infrastructures in a generic way, so that user applications can be interchanged within the framework. We developed MiCADO to be user-friendly and highly customizable. The user guidelines were created to help you get deeper into the topic and handle MiCADO smoothly.
MiCADO V0
When we started to develop MiCADO we aimed to work with virtual machines (VMs) only, without any other virtualization technology. The first working version (V0/A) used only Linux services, so we could develop and create our configuration files easily in a Linux-based environment. We soon discovered that supporting the latest container technologies and their benefits would make MiCADO better, so we implemented a version using Docker (V0/B). Both versions are highly scalable. Scaling is executed in two layers: first, user requests are distributed by a load balancer layer, and then the requests are executed in the application node cluster where the computational tasks are carried out. MiCADO automatically scales both the load balancers and the compute nodes and makes sure that the deployed application works as expected with the optimum number of resources.
Tutorial V0/A
This version utilizes Linux-based virtual machines where all the services needed for MiCADO to work properly run as Linux services. This is the base infrastructure of all the following versions, and further development aims at improving it while keeping the same concept. In this version user applications are hard-coded into the node configuration files, so knowledge of editing the cloud-init files is required.
Tutorial V0/B
This version extends the previous one by implementing Docker, a well-known container technology. In this version every service is dockerized, which allowed us to create shorter configuration files that are easier for users to understand. Implementing user applications does not require intensive knowledge of the cloud-init files, but the application still has to be written into them. Instead of creating all the configuration files and setting up the runtime environment, you can simply paste your application as a "docker run" command at the end of the application node descriptor file.
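As an illustration only, such a line appended to the end of the application node descriptor could look like the following; the image name, container name and ports are placeholders and not something MiCADO prescribes:

    # hypothetical example: start the user application container on the application node
    # "myuser/myapp", the container name and the ports are placeholders for your own settings
    docker run -d --name myapp -p 8080:80 myuser/myapp:latest

Since cloud-init executes this line when the application node boots, the application is still tied to the node descriptor in this version.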
MiCADO V1
When we released the V0 versions of MiCADO we identified two major things that we wanted to improve. The first was that not every application takes advantage of a fully scalable load balancing layer, and by implementing one we created unnecessary VMs. The second, and probably the bigger one, was to make it easier to change user applications. Both problems were solved by implementing Docker Swarm. With the help of Swarm, users do not have to write their applications into the configuration files before they build up the infrastructure and do not need to know the cloud-init configuration files in advance; instead, they can start applications with a simple command. Swarm also has a built-in load balancer that can deal with user requests when the application is computation-heavy and not used by a huge number of users (V1/A). In other situations the previously implemented load balancing layer is necessary (V1/B).
Tutorial V1/A
This version extends the previous one by implementing Docker Swarm. In this version you will not find a separate load balancing layer; instead, Swarm's built-in load balancer is used, which means we only scale the application layer. From now on you do not have to modify cloud-init files at all; instead, you can start your application as a Docker service on the Swarm node, either by logging in or through the Docker API. This requires less knowledge of the files and the cloud-init technology, and it also lets you replace the user application later, since the application is no longer hard-coded into the cloud-init files. You can start many applications on the same infrastructure and replace them if needed.
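As a sketch only, starting and later removing an application as a Swarm service could look like the following; the service name, image and port are placeholder values, not something MiCADO mandates:

    # hypothetical example: start the user application as a Swarm service on the Swarm node
    docker service create --name myapp --publish 8080:80 myuser/myapp:latest

    # the application can later be removed or replaced without touching the cloud-init files
    docker service rm myapp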
Tutorial V1/B
This version extends the previous one with a scalable load balancing layer to support user-heavy applications where Swarm's load balancer becomes a bottleneck and a fully scalable load balancing layer is required. We implemented the same layer used in V0/B, which means we scale both the load balancing and the application layer. User applications can be started as Swarm services, as in the V1/A version.
Tutorial V2
As a user you may not find major differences in this version compared to the previous version of MiCADO, but in the background we made some major modifications. We implemented container monitoring, and with the help of the collected information the deployed application can now also be scaled at the container level. This means changing the number of containers of the application and scaling the virtual machines only if there are no resources left on the host machines. This gives us faster feedback in the control loop, and with container-level scaling we can fit the demand curve better, in real time. You will also find in this tutorial that deploying multiple applications onto the same infrastructure has improved, and you can limit the resource usage of your applications; you do not have to use Docker's global mode services any more. You will also find application-specific alerts in Prometheus, which are generated automatically when you start your application and deleted when you delete the application.
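For illustration only, an application could be started with an explicit replica count and per-container resource limits instead of a global mode service; the name, image and limit values below are placeholders:

    # hypothetical example: fixed number of replicas plus per-container resource limits
    docker service create --name myapp --replicas 3 \
      --limit-cpu 0.5 --limit-memory 256M \
      --publish 8080:80 myuser/myapp:latest

Setting explicit limits like these is what allows several applications to share the same infrastructure without starving each other of resources.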
Which version should I use?
Depending on your needs and knowledge you can choose any solution from V0/A to V2. If you prefer Linux services and do not have a Docker image for your application, you may go for V0/A. However, if you are familiar with Docker, it is recommended to go for the latest version, since changing and starting user applications is easier and you can benefit from the use of Docker.