Cloud-native applications on Kubernetes

by @Mmendola


Posted in May 2021

Overview

A lot has been said about cloud-native applications and about how new technologies and DevOps processes accompany them throughout their life cycle.

At netlabs we have helped many companies and startups begin their journey toward these development cycles and, in particular, toward technologies such as Kubernetes that make up this new paradigm. Along the way we have witnessed some common design mistakes that lead to a poor implementation of the platform.

Some problems are related to the application itself, while others concern the underlying infrastructure and its default configuration, or the choice of the right tools for a given problem.

This article seeks to clarify some of these problems and offer suggestions to avoid future ones, in the hope that they will be useful for those who are at the design or construction stage of the path toward cloud-native architectures.

The pattern…

I think there is no doubt that cloud-native applications are, almost by definition, a set of modules, where each module is a gear in a much more complex system.

This design pattern is known as microservices: each of these gears is a service, and together they form the application.

Now, separating an application into logical units or functionalities is not enough to say that we have designed a microservices architecture.

Some of the most common errors that we have encountered are related to the state of the microservice and how it behaves in an error scenario.

In general, the more stateless the services are, the better the cloud-native properties we obtain, in terms of scalability and fault tolerance.

Disclaimer: although stateless services let us better exploit the capabilities of a platform like Kubernetes, this does not mean that we cannot have stateful cloud-native applications.

Let us remember that state refers to the property of keeping track of a given process. Stateless indicates that the process keeps no record of previous executions, which makes automatic recovery easier in a failure scenario, since there is no information stored in memory or on disk that must be retrieved before recovering.

A bad practice we have seen is using relational databases to persist data that may well be temporary (in which case it does not need to be persisted at all) or that could live in services better suited to this kind of data, such as KeyDB, Redis or memcached. Beyond the specific technology, it should be kept in mind that Kubernetes automates the provisioning of volumes that are then presented to the application.

A good practice is to select the type of storage according to the need or, in other words, to make different StorageClasses available so that the best storage backend can be chosen for each use case.
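As a minimal sketch, a fast StorageClass and a PersistentVolumeClaim that selects it could look like the following. The provisioner, parameters and names are assumptions (here, the AWS EBS CSI driver) and will differ depending on your cluster's storage backend:

```yaml
# Hypothetical StorageClass: assumes the AWS EBS CSI driver is installed;
# swap the provisioner/parameters for whatever backend your cluster uses.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
# A PVC that selects the class above; the application mounts the resulting volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cache-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-ssd
  resources:
    requests:
      storage: 10Gi
```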

Insufficient “Low Coupling”

In software engineering, coupling refers to how much knowledge one module has of another, that is, how dependent one microservice is on another; low coupling means keeping that dependency to a minimum.

A common mistake in this type of architecture is a component that depends on another in order to start its own service correctly. Let's take an example:

We have a microservice whose only job is to publish a REST API in NodeJS, but whose container cannot start because it depends on another one, based on Java on an application server, that takes between 40 and 50 seconds to load all the libraries it needs (and those it does not :) ). This dependency between components creates problems at various stages of the application life cycle and makes it impossible to treat each microservice as a fully independent unit.

It is common to find this in legacy applications, where we have a large monolith with strong dependencies among the components that make up the application, and even in cases where a redesign affects only part of the system rather than the whole.

These legacy applications do not exploit all the features of Kubernetes and are in general even more challenging to manage, since we must understand their points of failure and add specific metrics to monitor them.

Is it all about metrics?

Kubernetes, or more precisely the kubelet, automatically restarts a container in a failure scenario, and although the reasons for doing so may vary, it relies on the health checks that are responsible for reporting the failure.

There are three probes we should implement to tell the platform when the POD has finished starting, whether it is healthy, and when it is ready to receive traffic:

  • StartupProbe
  • LivenessProbe
  • ReadinessProbe

Ideally, the application should expose an endpoint that reports these statuses, and the probes should point to it. In any case, we must avoid the mistake of merely checking the service's address and listening port, since an open port is no indication that the application is working correctly.
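As a minimal sketch, assuming the application exposes hypothetical /started, /healthz and /ready HTTP endpoints, the three probes in a Deployment's container spec could look like this (paths and timings should come from the development team):

```yaml
# Hypothetical container spec fragment: the endpoint paths and timings are
# assumptions and should be defined by the application development team.
containers:
  - name: api
    image: registry.example.com/api:1.0.0
    ports:
      - containerPort: 8080
    startupProbe:
      httpGet:
        path: /started
        port: 8080
      failureThreshold: 30     # allow up to 30 * 5s for slow startups
      periodSeconds: 5
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 5
```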

It is the application development team that has the power to define these conditions, and it should tell the platform administrator what to configure in the deployment.

Kubernetes also implements three API endpoints to report the overall health of the platform. Although we will not delve into this (at least not in this article), it is worth mentioning that the platform administration team is responsible for deploying and configuring the appropriate tools to collect the different aspects of the services, since Kubernetes does not come with any out-of-the-box monitoring tools.

The range of possibilities is wide and it is not necessary to implement them all to be at ease; in our case, the Prometheus + Grafana + AlertManager trio is what gives us this peace of mind.
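For example, assuming the Prometheus Operator is installed, scraping an application's metrics can be declared with a ServiceMonitor; the names, labels, namespaces and /metrics path below are assumptions for illustration only:

```yaml
# Hypothetical sketch: requires the Prometheus Operator CRDs; label selectors,
# port name and namespaces are placeholders, not values from any real cluster.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: api-metrics
  namespace: monitoring
spec:
  namespaceSelector:
    matchNames:
      - production
  selector:
    matchLabels:
      app: api
  endpoints:
    - port: http        # named port on the Service
      path: /metrics
      interval: 30s
```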

Where is my POD?

In Kubernetes, the component in charge of POD placement is the scheduler. Through a series of primitives, it collects information and determines which Node best meets the requirements of the POD in question.

As administrators, we can control this behavior using affinity / anti-affinity rules, and in general this is enough for many scenarios, since it lets us separate workloads by their characteristics. For example, we could separate production workloads from non-production ones, or even place some workloads on cheaper hardware and reserve the more expensive hardware for others.
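As a sketch, assuming Nodes carry a hypothetical environment=production label, a POD could require such a Node and prefer not to share a Node with its own replicas:

```yaml
# Hypothetical affinity rules: the "environment" and "app" labels are
# assumptions; use whatever labels actually exist in your cluster.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: environment
              operator: In
              values:
                - production
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          topologyKey: kubernetes.io/hostname   # spread replicas across nodes
          labelSelector:
            matchLabels:
              app: api
```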

There are cases where we have to be more specific and need mechanisms to prevent PODs with certain characteristics from being placed on certain Nodes; for this, we configure Taints and Tolerations.
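For instance, a Node can be tainted so that only PODs that explicitly tolerate the taint are scheduled there; the key and value below (a hypothetical dedicated=production) are only illustrative:

```yaml
# Taint the node first (shown here as a comment for context):
#   kubectl taint nodes worker-01 dedicated=production:NoSchedule
#
# Hypothetical toleration in the POD spec that allows scheduling on that node:
tolerations:
  - key: dedicated
    operator: Equal
    value: production
    effect: NoSchedule
```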

But what happens when we need to guarantee a minimum number of replicas of a POD while the scheduler works its magic? Or what if we need to do maintenance on a pool of nodes?

For this particular case, we can configure a PodDisruptionBudget and specify the number or percentage of replicas that must be available at all times so as not to affect the performance of the application.

It is a recurring problem to have to perform maintenance on one or more cluster nodes and end up moving applications manually; in general, this happens because the cluster lacks these rules.
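A minimal sketch of a PodDisruptionBudget, assuming a Deployment labeled app: api with several replicas:

```yaml
# Hypothetical PDB: keeps at least 2 "api" PODs available during voluntary
# disruptions such as a node drain. policy/v1 requires Kubernetes 1.21+;
# older clusters use policy/v1beta1.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: api-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: api
```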

Cluster Capacity tips

Estimating the required hardware resources is neither an easy task nor one to be taken lightly. So much so that it deserves an article of its own, so I will not go into details here.

If we have rules to separate production workloads, we must ensure that there are enough hardware resources to support the applications when a Node is missing, whether because of maintenance or because of a specific problem with that worker. For example, if we allocate two nodes for production applications and one of them fails, the other must be able to run all the PODs.

It is good practice to configure Requests for our PODs (production and non-production), since this is what gives us enough information to know whether or not we are overselling resources (overcommit). For example, if we have two workers, the total of the configured Requests should not exceed 50% of the available resources.
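As a sketch, resource Requests (and, optionally, Limits) are declared per container; the values below are hypothetical and should come from measuring the application:

```yaml
# Hypothetical container resources: requests are what the scheduler reserves
# on a node; limits cap what the container is allowed to consume.
containers:
  - name: api
    image: registry.example.com/api:1.0.0
    resources:
      requests:
        cpu: "250m"
        memory: "256Mi"
      limits:
        cpu: "500m"
        memory: "512Mi"
```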

Conclusions

Designing the architecture of a scalable platform and taking full advantage of the capabilities of a technology such as Kubernetes is not a trivial task; it takes time, and we must be clear about the needs we have to cover.

It is important to have as much information as possible to avoid surprises and bad experiences, especially for organizations that are in the early stages of adopting scalable platforms.

