What is Kubernetes – Everything You Need to Know

Kubernetes (K8s) is an open-source platform for orchestrating containers and automating application deployment, scaling, and management.

Currently maintained by the Cloud Native Computing Foundation, Kubernetes manages clusters of hosts that run containerized Linux applications.

These clusters can include cloud hosts, so Kubernetes (K8s) is an ideal platform for hosting cloud-native applications that require rapid scalability, such as real-time data streaming with Apache Kafka.

Deploying a new version of an application is always a risky process. There are several manual or semi-automated steps, and if something goes wrong, rolling back to the previous version can be very complicated.

Now imagine that scenario with an application comprised of dozens of microservices, each with a different lifecycle, different release dates, and different technologies.

This would be a nightmare for any development team. Kubernetes eliminates many of the manual processes that a containerized application requires, making microservices projects, for example, easier and faster to deliver.

How Kubernetes Works: Containers

There’s not much of a mystery! Containers follow the same logic as their literal counterpart.

In the same way that we group goods that need to be shipped from one place to another into containers, we group our code into a container, which can then run in many different environments.

This way, we can work with smaller components, using the microservices architecture that, like Kubernetes, is on the rise.

Using microservices and containers simplifies the programmer’s life: it breaks a massive codebase into smaller pieces, preventing your code from turning into a monster.

How Kubernetes works: Cloud-native applications

Cloud native is the term used to classify applications designed to take full advantage of cloud environments, whether private or public.

These applications are based on the microservices architecture and incorporate practices that enable automation of the entire application lifecycle.

There are monolithic applications, where all parts of the application live together (causing tight coupling).

The microservices architecture suggests that applications should be composed of smaller, independent parts called services (resulting in loose coupling).

The idea is that each Service is specialized and offers an API to communicate with other services.

This allows, for example, different teams to own different parts of the same application.

Another advantage is that multiple applications can use the same Service without extra effort.

Features of cloud-native applications

In the case of cloud-native applications, one of the main characteristics is that they use containers to encapsulate each microservice.

As these containers have the Service and all its dependencies, they become independent of the infrastructure and can be easily migrated from one cloud to another, for example.

Another critical point is that using containers greatly facilitates issues such as scalability and deployment of new versions.

If, for example, 3 web interface containers are insufficient, just start one more.

Is a new version out? Just replace the containers with the latest version. Does the new version have a critical bug? Just roll back to the previous version’s containers.

This kind of flexibility brings several advantages but also creates new challenges. When an application comprises many small parts, managing it all manually can become quite complex.

Kubernetes features

To help solve the problem mentioned at the beginning of the text, Kubernetes offers several features. However, before we get into the details, it’s essential to understand a core concept of K8s: application state.

The idea behind this concept is that there are two types of state in an application: current and desired.

The current state of the application describes the reality. For example, how many replicas of a given service are running, which version of each Service is in production, and so on.

The desired state describes how the team or person responsible for the application wants it to be at that moment.

Kubernetes implements a series of loops that constantly check if the current state is the same as the desired state. So-called Controllers play this role.

When a controller identifies that the current state is different from the desired state, it triggers other system components to bring the current state back to the desired state.
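The reconciliation-loop idea can be sketched in a few lines of Python. This is purely illustrative: real controllers watch Objects through the Kubernetes API server, not plain dictionaries, and the state and action names below are invented for the example.

```python
# Illustrative sketch of a reconciliation loop: compare the current state
# with the desired state and produce the actions needed to converge them.

def reconcile(current: dict, desired: dict) -> list:
    """Return the actions needed to move `current` toward `desired`.

    Both arguments map a service name to its replica count.
    """
    actions = []
    for service, want in desired.items():
        have = current.get(service, 0)
        if have < want:
            actions.append(("scale_up", service, want - have))
        elif have > want:
            actions.append(("scale_down", service, have - want))
    return actions

# The current state describes reality; the desired state is what the team wants.
current_state = {"web": 3, "api": 2}
desired_state = {"web": 4, "api": 2}

# A controller would run this check in a loop and trigger the actions.
print(reconcile(current_state, desired_state))  # → [('scale_up', 'web', 1)]
```

When the two states already match, the loop produces no actions and the controller simply keeps watching.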

This entire process of monitoring and managing the state of the application, not counting the execution of the application itself, requires a series of components. That’s why the architecture of a Kubernetes environment is based on a cluster of machines.

The Kubernetes Architecture: How Does It Work?

Kubernetes is made up of several components, each with a different purpose. To ensure separation of responsibilities and make the system resilient, K8s uses a cluster of machines to run.

The machines in a cluster are separated into three types:

Node

The first type is called Node. The role of a Node is to run the containers that encapsulate the applications being managed by K8s.

When you deploy an application on a K8s cluster, that application will run on one of the cluster’s Nodes. The set of Nodes forms what we call Workers.

etcd

The second node type is etcd. etcd is the name of the distributed database used to store everything that happens within the cluster, including the application state.

In production environments, good management of these nodes is essential to ensure that the cluster is always available.

Master

Finally, the last type of Node is what we call a Master. On this type of Node, the main components of Kubernetes run, such as the Scheduler, which is responsible for controlling the allocation of resources in the cluster.

The Master node set forms what can be considered the brain of a Kubernetes cluster: the Control Plane.

Control Plane

The Kubernetes Control Plane can be considered the brain of a cluster. It is responsible for managing the system’s main components and ensuring that everything works according to the desired state of the application.

To facilitate the representation of this state, K8s works with an abstraction called Object.

An Object represents part of the application’s state, and when its current state is not the desired state, changes are applied so that the two states are equal again.

There are several types of Objects in a Kubernetes environment, but some are essential to understanding how a cluster works.

Pod

In the section on cloud-native applications, we saw that one of their main features is the use of containers to encapsulate microservices. However, when we talk about applications running on a Kubernetes cluster, we don’t speak about containers directly, but about Pods.

Pods are the basic unit of a K8s cluster. They encapsulate one or more application containers and represent a process within the cluster. When we deploy an application on K8s, we create one or more Pods.

However, Pods are ephemeral, meaning they are created and destroyed according to the needs of the cluster.

To ensure that access to a microservice is always available, an Object called Service encapsulates one or more Pods and can dynamically find them on any Node in the cluster.
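The relationship between a Pod and a Service can be seen in a manifest. The sketch below is illustrative: the names, labels, image, and ports are invented for the example.

```yaml
# A minimal Pod and a Service that exposes it.
# Names, labels, image, and ports (web, nginx, 8080) are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
    - name: web
      image: nginx:1.25
      ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web          # the Service finds Pods by this label, on any Node
  ports:
    - port: 80
      targetPort: 8080
```

Because the Service matches Pods by label rather than by address, it keeps working even as Pods are destroyed and recreated across the cluster.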

Deployment

This type of Object offers a series of features that automate all the steps of the typical deployment scenario described at the beginning of the text, where application deployments are manual or semi-automated.

Using Deployments, we can describe the desired state of our application, and a Deployment controller will take care of transforming the current state into the desired state in case they are different.

And speaking of describing the desired state of our application, it’s time to understand how this is done in a Kubernetes environment.

Learning Kubernetes: how to create applications?

When using a Kubernetes cluster, there are two ways to apply changes to the state of an application: imperative configuration and declarative configuration.

The traditional approach, which you may be more used to, is Imperative Configuration, where we tell the system how each change should be made.

For example, suppose you want to change the number of replicas for a given Pod from 3 to 4.

In the imperative approach, you would send commands directly to the K8s API saying that you want to change the number of replicas from 3 to 4. But how would you do this in an application with dozens of microservices?
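An imperative change might look like the following kubectl commands. These are illustrative, assume a running cluster, and `web` is a hypothetical Deployment name:

```shell
# Imperatively scale a Deployment named "web" from 3 to 4 replicas
kubectl scale deployment web --replicas=4

# Imperatively roll back after a bad release
kubectl rollout undo deployment web
```

Each command describes a step to perform, not a state to reach, so repeating the process across dozens of microservices means issuing and tracking many such steps by hand.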

What if something happened while changing each of them, and you only had time to apply the changes to half of the Pods? What are the implications that such a change could have? If something starts to go wrong, how will your team members know what has changed and what hasn’t?

Maybe you passed a wrong parameter in one of the commands, and now the application is down.

Declarative configuration

To avoid this kind of problem, Kubernetes supports what is called Declarative Configuration.

In a declarative approach, we don’t say how a change should be made, only what change should be made.

The system, in our case, the K8s Control Plane, will decide the best way to apply that change and make the current state of the application equal to the desired state.

If we were to make the same change as in the previous example, changing the number of replicas of a Pod from 3 to 4, in a declarative way, we would change the value of the “replicas” field in a YAML manifest and send that file to the K8s API.
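Such a declarative manifest might look like the sketch below. The name, labels, and image are illustrative; only the `replicas` field matters for this example:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 4        # the desired state: change 3 to 4 here
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```

After editing the file, sending it to the API (for example with `kubectl apply -f deployment.yaml`) leaves the Control Plane to figure out how to reach the new desired state.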

Understanding the Kubernetes API

The Kubernetes API is one of the main elements of the Control Plane. Through it, we can interact with all components of a K8s cluster, either through the command line or the web interface.

Furthermore, the API defines the different Objects that are part of the K8s ecosystem.

When we send a state change, either imperatively or declaratively, the API creates a Record of Intent. Depending on the Object being changed, a specific Controller will detect that the desired state has changed and react to apply the necessary changes.

As there are many types of Objects in the context of Kubernetes, the API can seem complex at first. To facilitate its management and evolution, Objects have been grouped into categories such as core, apps, and storage.
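These groups show up directly in a manifest’s `apiVersion` field. The fragments below are incomplete manifests, shown only to illustrate the naming:

```yaml
# Objects from the core group use a bare version:
apiVersion: v1
kind: Pod
# ...
---
# Objects from a named group are prefixed with the group name:
apiVersion: apps/v1
kind: Deployment
# ...
```

This is why a Pod and a Deployment, despite living in the same cluster, declare different `apiVersion` values.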

Each group is maintained by developers from the Kubernetes community, who decide how that category will evolve. This shows the open-source nature of the project and how the system’s users have a direct influence on its evolution.

A summary of the benefits of Kubernetes

Kubernetes is a super powerful tool, but it can initially seem quite complex.

There are several concepts and components involved in a cluster’s functioning, but they are what make K8s the most widely used orchestrator today.

To recap the main points we’ve seen about Kubernetes and organize our thinking, here’s a summary:

  • K8s offers a complete platform for applications known as cloud native.
  • Its components run on a cluster composed of three types of nodes: Node, etcd, and Master.
  • The set of all Master nodes forms the so-called Control Plane, which controls everything that happens within the cluster and monitors the state of the application.
  • It uses abstractions such as Pods and Deployments, called Objects, to represent different aspects of an application’s state.
  • This state can be changed in two ways: imperatively or declaratively. The declarative form is considered best practice and uses YAML files that are sent to the API.
  • The Kubernetes API is the gateway to a cluster, used by the command line and the web interface.

Now that you know what Kubernetes is and how its architecture is organized, it’s time to start learning in the best way possible: getting your hands dirty!