How do you choose the right technology to invest in?

Innovation moves fast: new technologies constantly emerge and change the context in which we work. In such a dynamic environment, you cannot stand still - you have to keep learning and developing. At the same time, it is important to decide which technology to bet on today in order to succeed tomorrow. It's a bit of a lottery: right now, those who bet on Kubernetes rather than Docker Swarm five years ago, or chose Azure over Oracle Cloud, are winning.

How do you choose the right technology to invest in? Fortunately, we don't have to rely on luck alone: we can watch, analyze, and focus on global trends to anticipate the future and secure a strong position in it. But before looking ahead, let's review the path we have already traveled.


Where it all began, and how the role of the cloud evolved

A brief flashback to the evolution of the cloud. Pre-cloud was old school: physical servers, routers, storage systems. We handled everything ourselves - installing the OS, fetching libraries and platforms, deploying applications, configuring and automating.

Further - more interesting.

The first virtual machines and hypervisors arrive, and managing infrastructure becomes much easier. A single server can now handle a far larger number of tasks. Engineers gradually shift their focus from tool development and automation to processes: platform configuration and upgrades, application development, infrastructure, and whole solutions. Virtual machines set the direction for what came next.

Cloud 0 - some hosting companies hit on the idea of selling virtual servers instead of physical ones, handling all the necessary automation and infrastructure management themselves. We see the birth of what will later be called IaaS (Infrastructure as a Service). But the service back then was still far from what we would call a cloud platform today.

Cloud 1.0 is the take-off of cloud platforms. It has become clear that most companies use the same OS, platforms, and frameworks (Linux, Apache, PHP, etc.). That means these can be automated too, and tools for using them can be sold as a service. PaaS (Platform as a Service) emerges, and a trend toward the public cloud takes hold. Engineers get new tools for working with platforms without having to deploy and maintain them themselves. The amount of routine technical work shrinks further. The emphasis in DevOps work shifts to platform setup, debugging and deployment, optimizing resource utilization, and keeping the entire process fast, understandable, and flexible.

Cloud 2.0 - automation extends beyond platforms to traditional services such as email and queues. We no longer need to think about deploying, managing, and scaling them: Software as a Service (SaaS) emerges. It shifts the billing paradigm away from paying for raw server resources (CPU/RAM/HDD) toward paying a subscription plus usage-based charges tied to the amount of data processed or the number of transactions.
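The billing shift can be illustrated with a toy cost model. The rates and volumes below are made-up numbers for illustration, not real provider pricing:

```python
def monthly_cost_server(cpu_hours: float, rate_per_hour: float) -> float:
    """Traditional billing: you pay for provisioned capacity
    (CPU/RAM/HDD) whether it is used or not."""
    return cpu_hours * rate_per_hour

def monthly_cost_saas(transactions: int, rate_per_1k: float, base_fee: float) -> float:
    """SaaS-style billing: a flat subscription plus a charge per unit
    of actual usage (here, per 1000 transactions)."""
    return base_fee + (transactions / 1000) * rate_per_1k

# A lightly loaded service: the server sits mostly idle, yet you pay
# for a full month of provisioned capacity; the usage-based model
# charges only for what was actually processed.
server = monthly_cost_server(cpu_hours=720, rate_per_hour=0.10)
saas = monthly_cost_saas(transactions=50_000, rate_per_1k=0.40, base_fee=5.0)
print(f"server: ${server:.2f}, usage-based: ${saas:.2f}")
```

For bursty or low-traffic workloads, the usage-based model is typically cheaper; for constantly saturated workloads, the comparison can flip.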

Day K - June 7, 2014 - Kubernetes' first public release, and a rainy day in the history of Docker Swarm. The era of cloud containers begins: containers replace traditional virtual machines for deploying replicas. Cloud providers rapidly release managed solutions for deploying and operating Kubernetes clusters. This gives rise to the generations of clouds we actively use today.

Cloud 2.5 - container orchestrators develop very actively in response to the global shift toward microservice architecture. Containers as a Service (CaaS) emerges, allowing you to deploy a complex production architecture in the cloud in a few hours or days instead of the weeks or months previously required.

Cloud 3.0 - Functions as a Service (FaaS) is the answer to the next stage of evolution, when microservices become so small that each is responsible for a single function, completely isolated from the others. An application is then deployed as a set of separate functions, each of which can be linked to another function or to any event in the system or infrastructure.
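As a sketch of how small the deployable unit becomes: a single isolated function, shaped like the event handlers serverless platforms such as AWS Lambda expect. The event fields here are assumptions for illustration:

```python
import json

def handler(event: dict, context=None) -> dict:
    """One isolated function with one responsibility. It knows nothing
    about the rest of the system except the event that triggered it."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# The platform wires this function to an event source (an HTTP request,
# a queue message, a file upload); "deployment" is just uploading the code.
print(handler({"name": "DevOps"}))
```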

As CaaS in its time was the answer to the emergence of microservice architecture, FaaS is the answer to the further evolution of this architecture and the emergence of functions.


How the evolution of the cloud has already changed DevOps


When we only had servers, and even after virtualization appeared, a DevOps engineer spent 80% of their time on development and automation and only 20% on platform processes and configuration.

In Generation 3.0, most processes are already automated. The emerging tools free us from the technical routine; the main task is to use them properly. What used to take a team a month can now be done by a single engineer in a day, and basic technical knowledge is enough, without diving into the details. The focus has shifted to process setup and platform configuration, with minimal automation in some cases, and only rarely the need to manually add a specific platform component.


Accordingly, the requirements for DevOps engineers are changing dramatically. In-depth technical knowledge is increasingly no longer a competitive edge or the key to success. Even when you run into limitations of a platform, you can plan the architecture so that they can be worked around. The main task now is to fine-tune processes and connect components; only very rarely do we have to drop down a level to dig into the details.

What is happening now


As of 2019, the cloud is a must-have for enterprise companies: 85% of them already use AWS or plan to adopt it next year, with a slightly smaller share on Azure and GCP. The use of Serverless and Containers as a Service grew by 40-50% over the year: today, one in three companies actively uses Serverless in production, and one in three uses Containers as a Service. Machine learning and IoT are also growing, and in the next 2-3 years the share of these technologies will double or triple.

According to the State of the Cloud Report 2019


What's next

Evolution does not stop. The area of expertise and responsibility of DevOps engineers will keep transforming. Here are the technological directions that will develop and define our profession in the medium to long term.


Service mesh

Service mesh will efficiently manage globally distributed microservice solutions built on Kubernetes, which will keep growing rapidly in the coming years. According to Gartner and IDC research, by 2020 all companies running a global microservice architecture in production will use a service mesh. Time will tell which specific tools win. Istio currently leads, with Linkerd the second most popular, but that may change with the launch of products from giants such as HashiCorp, Red Hat, and VMware.

Hybrid multiclouds and Distributed clouds

Everything we have discussed above represents major changes that happen quickly. Large companies find it difficult to adapt: at each stage they make large investments that do not always have time to pay off, and corporations also need to keep control over their own infrastructure. As a result, when introducing something new, they cannot abandon the solutions they have already invested in; they need to bring everything together in one place. The outcome is hybrid multiclouds - a combination of existing private infrastructure with public cloud providers.

Many enterprises are also considering using several clouds - not just AWS but also Azure, GCP, and others, such as Alibaba Cloud, which is growing as fast in Asia as AWS is in North America.

However, it is inefficient to maintain five different teams, toolsets, processes, and approaches to deploy the same application to different cloud environments. This is why enterprise companies increasingly focus on platforms that unify cloud management and give engineers a single user experience, and why platforms such as Pivotal and OpenShift are developing rapidly. In the near future, this trend will not only continue but intensify.


Everything as a Service (XaaS)

With the emergence of platforms that unify service management and abstract away infrastructure, and with cloud vendors' continued push toward managed solutions, the need for traditional services that must be configured and deployed manually will disappear. As a result, most platforms will turn into managed solutions.


This trend will continue: in the near future, the number of platforms that must be managed manually will shrink to the absolute minimum. Everything will turn into managed services, and integrating such services with each other will become simpler. The need for automation and deployment as we have done it before will disappear, and custom development will occupy a minimal share of our work.

Containers, Serverless, ML, IoT

By 2021, the adoption of Containers and Serverless will double, meaning 3 out of 4 companies will use them, while the use of ML and IoT will triple. Engineers who do not want to lose their jobs in a few years should start studying these technologies now.


DataOps, MLOps, IoTOps

The core ideas of DevOps culture will extend to other competencies, and new movements will emerge: DataOps is already here, MLOps is beginning to emerge and stabilize, and IoTOps is on the rise, though in practice it will take a few more years to reach stable, standardized usage.

Predictive CI/CD and engineering performance

For example, if an error occurs when code is merged, the system will automatically report it along with its cause. Developers will no longer have to spend time figuring out why something doesn't work.

Predictive CI/CD might seem to make life easier for developers rather than DevOps engineers, but it is a win for all of us. Eliminating the human factor in code writing - where one stray comma can bring down an entire system - is one of the major levers for improving overall project performance, and a significant step forward. I think it will take 3-4 years: more and more tools are appearing that collect and process data better and improve ML, which will eventually lead to complete automation.
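As an illustration of the idea (not any existing product's algorithm), a predictive system could score a commit's risk from the failure history of the files it touches - a deliberately tiny sketch with invented data:

```python
from collections import defaultdict

def failure_rates(history):
    """Estimate a per-file failure probability from past builds.
    `history` is a list of (changed_files, build_passed) pairs."""
    fails, total = defaultdict(int), defaultdict(int)
    for files, passed in history:
        for f in files:
            total[f] += 1
            if not passed:
                fails[f] += 1
    return {f: fails[f] / total[f] for f in total}

def risk_score(changed_files, rates):
    """Score a commit by the worst historical failure rate among
    the files it touches; unknown files score 0."""
    return max((rates.get(f, 0.0) for f in changed_files), default=0.0)

history = [
    (["billing.py"], False),
    (["billing.py"], False),
    (["billing.py"], True),
    (["docs.md"], True),
]
rates = failure_rates(history)
print(risk_score(["billing.py", "docs.md"], rates))  # billing.py failed 2 of 3 builds
```

Real systems would use far richer features (diff size, author, test flakiness, coverage), but the principle is the same: learn from build history, warn before the pipeline runs.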



In the next few years, a new stream called AIOps will emerge, combining big data and ML to partially or completely free engineers from the operational tasks they still perform: availability and performance monitoring, event correlation and analysis, and IT service management and automation.

Let's look at a specific example. What does downtime cost Facebook, Instagram, or Netflix? Significant financial losses, falling stock prices, and lost customer loyalty - and it can happen over the smallest mistake. Whatever causes the malfunction, an engineer has to find the cause and then spend time fixing it; if it happens at night, add the time to wake up and boot a laptop. AI-based solutions will accomplish all of this in seconds.
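A minimal sketch of the kind of detection such tooling automates - flagging a latency spike against a trailing baseline. The thresholds and data are invented for illustration; production systems use far more sophisticated models:

```python
import statistics

def detect_anomalies(samples, window=10, threshold=3.0):
    """Flag indices whose value deviates more than `threshold`
    standard deviations from the trailing window's mean."""
    anomalies = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.pstdev(baseline) or 1e-9  # avoid div-by-zero
        if abs(samples[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

# Response times in ms: steady around 100, then a sudden spike.
latencies = [100, 102, 99, 101, 100, 98, 103, 100, 101, 99, 450]
print(detect_anomalies(latencies))  # the spike at index 10 is flagged
```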


DevOps of the future. 5+ year perspective

All the processes and changes discussed above sound as if we were sawing off the branch we sit on. What will be left for me once AI learns to do my work?

Yes, traditional DevOps in the current sense will gradually disappear. But new areas of responsibility will emerge: we will need to streamline processes, rise above the AI, and work more like data scientists, ML engineers, and data analysts.

Similar paradigm shifts have already occurred during the transition from Cloud 1.0 to Cloud 2.0, when we began to focus on process optimization. The next wave came when the need for manual optimization was gone.

The main task of a DevOps engineer will be to assemble all the elements and integrate them into an optimized system that works as efficiently as possible. This means understanding how to deploy applications and platforms, build processes, and integrate them to avoid malfunctions. And how do we verify that such progress is actually happening? Without measurements and concrete figures - no way. All processes will need to be measured, analyzed, and clearly understood, so the work of the DevOps engineer will be closely tied to collecting and analyzing information on how entire platforms operate.
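As a sketch of what "measuring processes" can mean in practice, here are two DORA-style delivery metrics computed from plain event timestamps. The metric choice and data are illustrative assumptions, not a prescribed standard:

```python
from datetime import datetime, timedelta

def mean_time_to_restore(incidents):
    """Average time from failure to restoration.
    `incidents` is a list of (failed_at, restored_at) datetime pairs."""
    durations = [restored - failed for failed, restored in incidents]
    return sum(durations, timedelta()) / len(durations)

def deployments_per_day(deploy_times):
    """Deployment frequency over the observed span."""
    span_days = (max(deploy_times) - min(deploy_times)).days or 1
    return len(deploy_times) / span_days

incidents = [
    (datetime(2019, 3, 1, 10, 0), datetime(2019, 3, 1, 11, 0)),  # 60 min outage
    (datetime(2019, 3, 4, 9, 0), datetime(2019, 3, 4, 9, 30)),   # 30 min outage
]
deploys = [datetime(2019, 3, d) for d in (1, 3, 5, 7, 9)]

print(mean_time_to_restore(incidents))  # 0:45:00
print(deployments_per_day(deploys))     # 0.625
```

The point is not these particular numbers but the habit: every process claim ("we got faster", "outages shrank") becomes a figure computed from logged events rather than an impression.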


We are already starting to do this, and it will only accelerate in the near future. Companies still working the old way, writing a lot of custom code themselves, will also gradually move to the new model. As cloud adoption reaches the industries that now lag behind (healthcare and finance), the need to design and maintain custom automation solutions disappears; what remains is processes and configuration. And for those out in front, ML and AI will be added to help them handle data and train their systems to make decisions from metrics and data automatically.

Enterprise over the next 10 years: in 80% of companies, the traditional IT organization to which DevOps belongs will shrink to a minimum. Operations and development specialists will work with platforms, data, and analytics. Almost no one in the company will need to dig into the platform itself to add or tweak something.

Companies that have already completed these steps come with targeted requests, such as help setting up cost management in the cloud or building a new FaaS/XaaS platform.

In short, we know the environment will change. Those who want to build a successful career and work on innovative projects should start adapting to the new conditions now.