What is Kubernetes?
Kubernetes is an extensible, portable, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes is a system for automating application deployment. Modern applications are distributed across clouds, virtual machines, and servers, and administering apps manually is no longer a viable option. K8s transforms virtual and physical machines into a unified API surface. A developer can then use the Kubernetes API to deploy, scale, and manage containerized applications.
Brief Kubernetes History
The term Kubernetes originates from Greek, meaning helmsman or pilot. Google open-sourced the project in 2014. It combines over 15 years of Google's experience running production workloads at scale with best-of-breed ideas and practices from the community.
Kubernetes (also referred to as k8s or "kube") is an open-source container orchestration platform that automates many of the manual processes involved in deploying, managing, and scaling containerized applications. In other words, you can cluster together groups of hosts running Linux containers, and Kubernetes helps you easily and efficiently manage those clusters.
Kubernetes architecture: How Kubernetes works
Its architecture makes use of various concepts and abstractions. A number of these are variations on existing, familiar notions, but others are specific to Kubernetes.
The highest-level Kubernetes concept, the cluster, refers to the group of machines running Kubernetes and the containers managed by it. A Kubernetes cluster must have a master, the system that commands and controls all the other machines in the cluster. A highly available cluster replicates the master's facilities across multiple machines, but only one master at a time runs the scheduler and controller-manager.
Kubernetes nodes and pods
Each cluster contains Kubernetes nodes. Nodes may be physical machines or VMs. Again, the idea is abstraction: whatever the app is running on, Kubernetes handles deployment on that substrate. It even makes it possible to ensure that certain containers run only on VMs or only on bare metal.
Nodes run pods, the most basic objects that can be created or managed. Each pod represents a single instance of an application or running process in Kubernetes, and consists of one or more containers. Kubernetes starts, stops, and replicates all containers in a pod as a group. Pods keep the user's attention on the application, rather than on the containers themselves. Details about how Kubernetes must be configured, from the state of pods on up, are kept in etcd, a distributed key-value store.
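To make the pod concept concrete, here is a minimal sketch of a pod manifest. The name `my-app` and the `nginx` image are illustrative assumptions, not taken from the original text:

```yaml
# A minimal Pod: one running instance of an app, holding a single container.
apiVersion: v1
kind: Pod
metadata:
  name: my-app          # hypothetical name
  labels:
    app: my-app         # label used later by controllers and services
spec:
  containers:
  - name: web
    image: nginx:1.25   # illustrative container image
    ports:
    - containerPort: 80
```

In practice, pods are rarely created directly like this; a controller usually manages them, as described next.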
Pods are created and destroyed on nodes as needed to conform to the desired state specified by the user in the pod definition. Kubernetes provides an abstraction called a controller for handling the logistics of how pods are spun up, rolled out, and spun down. Controllers come in a few different flavors depending on the kind of application being managed. For example, the "StatefulSet" controller is used to deal with applications that need persistent state. Another kind of controller, the Deployment, is used to scale an app up or down, update an app to a new version, or roll an app back to a known-good version if there is a problem.
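A Deployment controller can be sketched as the following manifest. Again, `my-app` and the container image are illustrative assumptions:

```yaml
# A Deployment: keeps 3 replicas of the pod template running,
# and supports rolling updates and rollbacks.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app          # hypothetical name
spec:
  replicas: 3           # desired number of pod copies
  selector:
    matchLabels:
      app: my-app       # manages pods carrying this label
  template:             # the pod definition to replicate
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: web
        image: nginx:1.25
```

Rolling back to a known-good version is then a single command, e.g. `kubectl rollout undo deployment/my-app`.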
A service in Kubernetes describes how a given group of pods (or other Kubernetes objects) can be accessed via the network. As the Kubernetes documentation puts it, the pods that constitute the back end of an application might change, but the front end shouldn't have to be aware of that or track it. Services make this possible.
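A minimal Service manifest for the hypothetical `my-app` pods above might look like this; the names are illustrative:

```yaml
# A Service: a stable network endpoint for whichever pods currently
# match the selector, regardless of individual pods coming and going.
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc      # hypothetical name
spec:
  selector:
    app: my-app         # routes traffic to pods with this label
  ports:
  - port: 80            # port the service exposes inside the cluster
    targetPort: 80      # port on the pods' containers
```

The front end talks to `my-app-svc`; Kubernetes keeps the mapping to healthy back-end pods up to date.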
Ingress: Kubernetes services run within a cluster, but you will often want to access those services from the outside world. Kubernetes has several components that offer varying degrees of simplicity and robustness, including NodePort and LoadBalancer, but the component with the most flexibility is Ingress. Ingress is an API that provides external access to a cluster's services, typically via HTTP.
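An Ingress rule routing external HTTP traffic to a service can be sketched as follows. The hostname and service name are hypothetical, and a cluster also needs an Ingress controller installed for such rules to take effect:

```yaml
# An Ingress: HTTP routing from outside the cluster to a Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  rules:
  - host: app.example.com     # hypothetical external hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-svc  # hypothetical service from earlier
            port:
              number: 80
```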
Dashboard: One Kubernetes component that helps you keep on top of all of these other components is Dashboard, a web-based UI with which you can deploy and troubleshoot apps and manage cluster resources.
Kubernetes features
Storage orchestration – Kubernetes allows mounting a variety of storage systems, including local storage, network storage, and public cloud storage.
Horizontal scaling – The system scales horizontally, allowing organizations to grow easily as their requirements grow. Apps can be scaled up or down in a variety of ways, including based on CPU usage, through a UI or using simple commands.
Load balancing and service discovery – The system ensures efficient load balancing by giving containers their own unique IP addresses and a single DNS name for a group of containers. Users also don't have to modify their apps to use new or unfamiliar service discovery mechanisms.
Automatic bin packing – The system automatically places containers based on the resources they need and other constraints, without compromising availability. This lets businesses mix best-effort and critical workloads, saving resources while improving utilization.
Automatic rollouts/rollbacks – The system is designed to roll out app or configuration changes progressively. It constantly monitors app health and rolls back the changes if something goes wrong, so that all the instances aren't killed at the same time.
Batch execution – Kubernetes provides the ability to manage batch and CI workloads and to replace failing containers if needed.
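The CPU-based horizontal scaling mentioned above is typically expressed as a HorizontalPodAutoscaler. A sketch, assuming the hypothetical `my-app` Deployment from earlier and a cluster with a metrics source available:

```yaml
# A HorizontalPodAutoscaler: scales a Deployment between 2 and 10
# replicas, targeting 80% average CPU utilization across its pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:           # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: my-app            # hypothetical deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```

The same scaling can also be done manually, e.g. with `kubectl scale deployment/my-app --replicas=5`.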
Kubernetes limitations
- Businesses need a certain degree of reorganization when using Kubernetes with an existing app
- Pods sometimes need a manual start/restart before they begin working as intended; this can happen in certain situations, such as when running near full capacity
- Kubernetes uses its own configuration, YAML definitions, and API, because it wasn't designed only for Docker clustering
Kubernetes operates using a very simple model. We declare how we would like our system to function, and Kubernetes compares that desired state to the current state within a cluster. Its services then work to align the two states and to achieve and maintain the desired state.
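This desired-versus-current split is visible in every Kubernetes object: the user writes the `spec`, and Kubernetes reports the observed `status` while its controllers work to reconcile the two. An illustrative excerpt (field values are hypothetical):

```yaml
# Desired state, written by the user:
spec:
  replicas: 3
# Observed state, written back by Kubernetes:
status:
  replicas: 2            # one pod is still starting
  availableReplicas: 2   # controllers keep working until status matches spec
```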
Kubernetes was designed by Google to scale its internal apps, like YouTube and Gmail, and to transform how we build, deploy, and manage apps. It offers more velocity, better efficiency, and the agility companies need in the fast-moving IT world. It enables businesses to scale horizontally and deliver consistent app updates, and it is designed to run anywhere, whether cloud-based, on-premises, or hybrid.