A Rapidly Expanding Kubernetes Universe!!
This article is about how IT companies are using Kubernetes, and why the technology is so beneficial for them.
The Kubernetes project was started by Google in 2014. Kubernetes builds upon a decade and a half of experience that Google has with running production workloads at scale, combined with best-of-breed ideas and practices from the community.
What Actually Is Kubernetes?
Kubernetes, also known as K8s, is an open-source container-orchestration system for automating the deployment, scaling, and management of containerized applications. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation. It aims to provide a “platform for automating deployment, scaling, and operations of application containers across clusters of hosts”. It works with a range of container tools and runs containers in a cluster, often with images built using Docker. Kubernetes originally interfaced with the Docker runtime through a “Dockershim”; however, the shim has since been deprecated in favor of directly interfacing with containerd or another CRI-compliant runtime.
What does Kubernetes mean? K8s?
The name Kubernetes originates from Greek, meaning “helmsman” or “pilot”, and is the root of “governor” and “cybernetic”. K8s is an abbreviation derived by replacing the 8 letters “ubernete” with 8.
Architecture of Kubernetes
Kubernetes is a very flexible and extensible platform. It allows you to consume its functionality à la carte, or use your own solution in lieu of built-in functionality. You can also integrate Kubernetes into your environment and add additional capabilities.
From a high level, a Kubernetes environment consists of a control plane (master), a distributed storage system for keeping the cluster state consistent (etcd), and a number of cluster nodes (Kubelets).
Kubernetes Control Plane:
The control plane is the system that maintains a record of all Kubernetes objects. It continuously manages object states, responding to changes in the cluster; it also works to make the actual state of system objects match the desired state. The control plane is made up of three major components: kube-apiserver, kube-controller-manager and kube-scheduler. These can all run on a single master node, or can be replicated across multiple master nodes for high availability.
The API Server provides APIs to support lifecycle orchestration (scaling, updates, and so on) for different types of applications. It also acts as the gateway to the cluster, so the API server must be accessible by clients from outside the cluster. Clients authenticate via the API Server, and also use it as a proxy/tunnel to nodes, pods and services. The Controller Manager is a daemon that runs the core control loops, watches the state of the cluster, and makes changes to drive status toward the desired state; there are various controllers to drive state for nodes, replication (autoscaling), endpoints (services and pods), and service accounts and tokens (namespaces). The Cloud Controller Manager integrates with each public cloud for optimal support of availability zones, VM instances, storage services, and network services for DNS, routing and load balancing.
The Scheduler is responsible for the scheduling of containers across the nodes in the cluster; it takes various constraints into account, such as resource limitations or guarantees, and affinity and anti-affinity specifications.
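To make those constraints concrete, here is a minimal Pod sketch combining both kinds of scheduling hints: resource requests/limits and a node-affinity rule. The image and the disktype node label are hypothetical placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: constrained-pod            # hypothetical name
spec:
  affinity:
    nodeAffinity:                  # only schedule onto nodes labeled disktype=ssd
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: disktype      # assumes nodes carry this label
                operator: In
                values: ["ssd"]
  containers:
    - name: app
      image: nginx:1.21
      resources:
        requests:                  # capacity the scheduler must find room for
          cpu: "250m"
          memory: "128Mi"
        limits:                    # hard ceilings enforced at runtime
          cpu: "500m"
          memory: "256Mi"
```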
Kubernetes Cluster Node:
Cluster nodes are machines that run containers and are managed by the master nodes. The Kubelet is the primary and most important controller in Kubernetes. It’s responsible for driving the container execution layer, typically Docker.
Pods and Services
Pods are one of the crucial concepts in Kubernetes, as they are the key construct that developers interact with; the previous concepts are infrastructure-focused and internal to the architecture. A pod represents a running process on a cluster. Most pods run a single application, but pods can also be used to host vertically-integrated application stacks, like a WordPress LAMP (Linux, Apache, MySQL, PHP) application.
Pods are ephemeral, with a limited lifespan; when scaling back down or upgrading to a new version, for instance, pods eventually die. Pods support horizontal autoscaling (i.e., growing or shrinking the number of instances), rolling updates and canary deployments. This logical construct packages up a single application, which can consist of multiple containers and storage volumes. Usually, a single container (sometimes with some helper program in an additional container) runs in this configuration, as in the sketch below.
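A minimal sketch of that usual configuration: one application container plus a helper container that share a volume. All names here are hypothetical placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-helper        # hypothetical name
spec:
  volumes:
    - name: shared-logs        # scratch volume shared by both containers
      emptyDir: {}
  containers:
    - name: app                # the main application container
      image: nginx:1.21
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx
    - name: log-helper         # helper container tailing the app’s logs
      image: busybox:1.36
      command: ["sh", "-c", "tail -F /logs/access.log"]
      volumeMounts:
        - name: shared-logs
          mountPath: /logs
```

Because both containers live in the same pod, they also share the pod’s network namespace and could equally talk to each other over localhost.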
◼ Types of Pod Controllers
- ReplicaSet, the default, is a relatively simple type; it ensures the specified number of pods are running.
- Deployment is a declarative way of managing pods via ReplicaSets, and includes rollback and rolling-update mechanisms (see the sketch after this list).
- DaemonSet is a way of ensuring each node will run an instance of a pod. It is used for cluster services, like health monitoring and log forwarding.
- StatefulSet is tailored to managing pods that must persist or maintain state.
- Job and CronJob run short-lived jobs as a one-off or on a schedule.
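For instance, here is a minimal Deployment sketch (all names hypothetical). It declares three replicas; the Deployment controller creates a ReplicaSet to keep that many pods running, and drives a rolling update whenever the pod template changes.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                    # hypothetical name
spec:
  replicas: 3                  # desired number of pods
  selector:
    matchLabels:
      app: web                 # must match the pod template labels below
  strategy:
    type: RollingUpdate        # replace pods gradually on upgrade
    rollingUpdate:
      maxUnavailable: 1
  template:                    # pod template stamped out by the ReplicaSet
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.21    # changing this tag triggers a rolling update
```

Re-applying the manifest with a new image tag triggers a rolling update, and `kubectl rollout undo deployment/web` rolls it back.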
Kubernetes Networking:
Kubernetes has a distinctive networking model for cluster-wide, pod-to-pod networking. In most cases, the Container Network Interface (CNI) uses a simple overlay network (like Flannel) to obscure the underlying network from the pod by using traffic encapsulation (like VXLAN); it can also use a fully-routed solution like Calico. In both cases, pods communicate over a cluster-wide pod network, managed by a CNI provider like Flannel or Calico.
Within a pod, containers can communicate without any restrictions. Containers within a pod exist within the same network namespace and share an IP, which means containers can communicate over localhost. Pods can communicate with each other using the pod IP address, which is reachable across the cluster. Moving from pods to services, or from external sources to services, requires going through kube-proxy.
Kubernetes Services
Services are the Kubernetes way of configuring a proxy to forward traffic to a set of pods. Instead of static IP address-based assignments, Services use selectors (label queries) to define which pods belong to which Service. These dynamic assignments make releasing new versions or adding pods to a service really easy. Any time a pod with the same labels as a Service is spun up, it’s assigned to the Service.
By default, services are only reachable inside the cluster, using the ClusterIP service type. Other service types do allow external access; the LoadBalancer type is the most common in cloud deployments. It will spin up a load balancer per service in the cloud environment, which can be expensive. With many services, it can also become very complex.
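As a sketch, a minimal Service (hypothetical names) that selects the pods from the Deployment above by label; switching the type to LoadBalancer is what provisions the per-service cloud load balancer just mentioned.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web                    # hypothetical name
spec:
  type: ClusterIP              # default: reachable only inside the cluster
  selector:
    app: web                   # traffic is forwarded to pods carrying this label
  ports:
    - port: 80                 # port the Service exposes
      targetPort: 80           # port on the backing pods
```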
To solve that complexity and cost, Kubernetes supports Ingress, a high-level abstraction governing how external users access services running in a Kubernetes cluster using host- or URL-based HTTP routing rules.
There are many different Ingress controllers (Nginx, Ambassador), and there’s support for cloud-native load balancers (from Google, Amazon, and Microsoft). Ingress controllers allow you to expose multiple services under the same IP address, using the same load balancers. Ingress functionality goes beyond simple routing rules, too: it enables configuration of resilience (time-outs, rate limiting), content-based routing, authentication and much more.
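As a sketch, an Ingress with host- and URL-based rules routing to two backend Services (the hostname and service names are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: site                   # hypothetical name
spec:
  rules:
    - host: example.com        # host-based rule
      http:
        paths:
          - path: /            # URL-based rule: default backend
            pathType: Prefix
            backend:
              service:
                name: web      # the Service sketched earlier
                port:
                  number: 80
          - path: /api         # send API traffic elsewhere
            pathType: Prefix
            backend:
              service:
                name: api      # hypothetical second Service
                port:
                  number: 8080
```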
Internal Concepts In Kubernetes
Containers
Each container that you run is repeatable; the standardization from having dependencies included means that you get the same behavior wherever you run it. Containers decouple applications from underlying host infrastructure, which makes deployment easier in different cloud or OS environments.
Container images
A container image is a ready-to-run software package, containing everything needed to run an application: the code and any runtime it requires, application and system libraries, and default values for any essential settings.
By design, a container is immutable: you cannot change the code of a container that is already running. If you have a containerized application and want to make changes, you need to build a new image that includes the change, then recreate the container to start from the updated image.
Container runtimes
The container runtime is the software that is responsible for running containers. Kubernetes supports several container runtimes: Docker, containerd, CRI-O, and any implementation of the Kubernetes CRI (Container Runtime Interface).
Addons
Addons use Kubernetes resources (DaemonSet, Deployment, and so on) to implement cluster features. Because these provide cluster-level features, namespaced resources for addons belong within the kube-system namespace.
Selected addons are described below; for an extended list of available addons, please see the Kubernetes Addons documentation.
DNS
While the other addons are not strictly required, all Kubernetes clusters should have cluster DNS, as many examples rely on it.
Cluster DNS is a DNS server, in addition to the other DNS server(s) in your environment, which serves DNS records for Kubernetes services.
Containers started by Kubernetes automatically include this DNS server in their DNS searches.
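Services become resolvable under names of the form <service>.<namespace>.svc.cluster.local. As a sketch (hypothetical names), a one-off pod that resolves the web Service from earlier through the cluster DNS server:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-check              # hypothetical name
spec:
  restartPolicy: Never         # run once and stop
  containers:
    - name: lookup
      image: busybox:1.36
      # uses the cluster DNS server automatically injected into the pod
      command: ["nslookup", "web.default.svc.cluster.local"]
```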
Web UI (Dashboard)
Dashboard is a general purpose, web-based UI for Kubernetes clusters. It allows users to manage and troubleshoot applications running in the cluster, as well as the cluster itself.
Container Resource Monitoring
Container Resource Monitoring records generic time-series metrics about containers in a central database, and provides a UI for browsing that data.
Cluster-level Logging
A cluster-level logging mechanism is responsible for saving container logs to a central log store with a search/browsing interface.
Orchestration in Kubernetes
Docker’s own container orchestration tool, Docker Swarm, can package and run applications as containers, find existing container images from others, and deploy a container on a laptop, server or cloud (public or private). Kubernetes orchestration goes further: it allows you to build application services that span multiple containers, schedule containers across a cluster, scale those containers, and manage their health over time. Kubernetes eliminates many of the manual processes involved in deploying and scaling containerized applications.
Why Kubernetes??
1. Storage in Kubernetes is persistent:
Kubernetes uses the concept of volumes. At its core, a volume is just a directory, possibly with some data in it, which is accessible to a pod. How that directory comes to be, the medium that backs it, and its contents are determined by the particular volume type used.
Kubernetes has a number of storage types, and these can be mixed and matched within a pod. Storage in a pod can be consumed by any containers in the pod. Storage survives pod restarts, but what happens after pod deletion depends on the specific storage type. PersistentVolumes (PVs) tie into an existing storage resource, and are generally provisioned by an administrator. They’re cluster-wide objects linked to the backing storage provider that make these resources available for consumption.
For each pod, a PersistentVolumeClaim makes a storage consumption request within a namespace. Depending on the current usage of the PV, it can have different phases or states: available, bound (unavailable to others), released (needs manual intervention) and failed (Kubernetes could not reclaim the PV).
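A minimal sketch, assuming the cluster has a default StorageClass that can dynamically provision the PV (all names hypothetical): a PersistentVolumeClaim plus a pod that mounts the claimed storage.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim             # hypothetical name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi             # requested capacity
---
apiVersion: v1
kind: Pod
metadata:
  name: data-user              # hypothetical name
spec:
  containers:
    - name: app
      image: nginx:1.21
      volumeMounts:
        - name: data
          mountPath: /data     # the claimed storage appears here
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-claim  # binds the pod to the PVC above
```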
2. Helps you avoid vendor lock-in:
Kubernetes does not tie your applications to vendor-specific APIs or services, except where it provides its own abstraction, e.g., load balancers and storage.
3. Application-centric management:
Kubernetes works with the network and storage, as discussed in the previous sections. It supports rollback of deployed services, and handles phased rollouts and phased upgrades while the environment continues to run. If it sees that new capacity is needed, it can bring up a new set of containers and deploy the workload associated with the pod. One of the keys to making this work is an excellent configuration language that can also be driven by other tools, given the magnitude of the management technology.
So, when people ask how to manage the application-centric cloud with containers, micro-services, and virtualization, we look to the community that is building the tools because the traditional management systems fail us in this new environment.
4. Containerized applications:
Kubernetes lets you ensure that containerized applications run where and when you want, and helps them find the resources and tools they need to work.
5. Automated scheduling:
kube-scheduler is the default scheduler for Kubernetes and runs as part of the control plane. kube-scheduler is designed so that, if you want and need to, you can write your own scheduling component and use that instead. The scheduler finds feasible nodes for a pod, runs a set of functions to score those feasible nodes, and picks the node with the highest score to run the pod. The scheduler then notifies the API server about this decision in a process called binding.
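Opting a pod into such a custom scheduler is a one-line change in the pod spec; a sketch, where my-scheduler stands for a hypothetical scheduler you would have deployed yourself:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: custom-scheduled       # hypothetical name
spec:
  schedulerName: my-scheduler  # hypothetical; omit to use the default kube-scheduler
  containers:
    - name: app
      image: nginx:1.21
```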
6. You can create predictable infrastructure:
With Kubernetes, you are able to respond to customer demand quickly and efficiently. One could build immutable virtual-machine images to achieve predictability, but Kubernetes provides the infrastructure to build a truly container-centric environment instead. The technical definition of “orchestration” is execution of a defined workflow; Kubernetes goes beyond that by continuously driving the current state toward a declared desired state.
7. Kubernetes is:
- portable: public, private, hybrid, multi-cloud
- extensible: modular, pluggable, hookable, composable
- self-healing: auto-placement, auto-restart, auto-replication, auto-scaling
Kubernetes is not:
◾Kubernetes is not a traditional, all-inclusive PaaS (Platform as a Service) system. We preserve user choice where it is important.
◾Kubernetes does not limit the types of applications supported. It does not dictate application frameworks (e.g., Wildfly), nor restrict the set of supported language runtimes (e.g., Java, Python, Ruby).
◾Kubernetes does not provide middleware, data-processing frameworks (e.g., Spark), databases (e.g., MySQL), nor cluster storage systems (e.g., Ceph) as built-in services. Such applications can, however, run on Kubernetes.
◾Kubernetes does not have a click-to-deploy service marketplace.
◾Kubernetes is unopinionated in the source-to-image space. It does not deploy source code and does not build your application. Continuous Integration (CI) workflow is an area where different users and projects have their own requirements and preferences, so we support layering CI workflows on Kubernetes but don’t dictate how it should work.
◾Kubernetes allows users to choose the logging, monitoring, and alerting systems of their choice. (Though we do provide some integrations as proof of concept.)
◾Kubernetes does not provide nor mandate a comprehensive application configuration language/system (e.g., jsonnet).
Expanding Usage Of Kubernetes in Startups
Kubernetes has taken the industry by storm. The container orchestration technology has quickly progressed from internal use at Google, to open source incubation in the Cloud Native Computing Foundation, to consensus as an infrastructure standard.
As a mature technology, Kubernetes is now in a phase of extremely rapid enterprise adoption.
But adoption trends only paint part of the Kubernetes picture. The question for CIOs choosing a digital transformation strategy more often isn’t whether to deploy Kubernetes as the backbone of their organizations’ cloud-native infrastructure, but how.
The needs of those enterprises and service providers vary greatly based on the characteristics of the IT environments they have implemented or would like to implement, their legacy systems, use cases, multi-cloud strategies and budgets.
Startups, open source foundations and industry giants are jointly contributing to a comprehensive ecosystem mushrooming around the core Kubernetes project that can support the diverse requirements and constraints of all those potential customers. One popular example:
Alcide kAudit
This Israel-based startup is building continuous security into Kubernetes infrastructure. Alcide’s latest product, Alcide kAudit, automatically analyzes multi-cluster deployments in real time for breaches, misuse and anomalous behavior by scanning through Kubernetes audit logs. The solution provides customers with summaries of detected anomalies, as well as access, usage and performance trends.
Case Studies of IBM and BOSE
BOSE
“We needed to provide a mechanism for developers to rapidly prototype and deploy services all the way to production pretty fast,” says Lead Cloud Engineer Josh West. “There were a lot of cloud capabilities we wanted to provide to support our audio equipment and experiences.”
In 2016, the company decided to start building an IoT platform from scratch. The primary goal: “To be one to two steps ahead of the different product groups so that we are never scrambling to catch up with their scale,” says Cloud Architecture Manager Dylan O’Mahony. “If they release a new connected product, we want to be already well ahead of being able to handle whatever scale that they’re going to throw at us.”
From the beginning, the team knew it wanted a microservices architecture and platform as a service. After evaluating and prototyping orchestration solutions, including Mesos and Docker Swarm, the team decided to adopt Kubernetes for its platform running on AWS. Kubernetes was still at version 1.5, but already the technology could do much of what the team wanted and needed for the present and the future. For West, that meant having storage and network handled. O’Mahony points to Kubernetes’ portability in case Bose decides to go multi-cloud.
“Bose is a company that looks out for the long term,” says West. “Going with a quick commercial off-the-shelf solution might’ve worked for that point in time, but it would not have carried us forward, which is what we needed from Kubernetes and the CNCF.”
The team spent time working on choosing tooling to make the experience easier for developers. “Our developers interact with tools provided by our Ops team, and the Ops team run all of their tooling on top of Kubernetes,” says O’Mahony. “We try not to make direct Kubernetes access the only way. In fact, ideally, our developers wouldn’t even need to know that they’re running on Kubernetes.”
The platform, which also incorporated Prometheus monitoring from the beginning, went into production in 2017, serving over 3 million connected products from the get-go. “Even though the speakers and the products that we were designing this platform for were still quite a ways away from being launched, we did have some connected speakers on the market,” says O’Mahony. “We basically started to point certain features of those speakers and the apps that go with those speakers to this platform.”
Today, just one of Bose’s production clusters holds 1,800 namespaces/discrete services and 340 nodes. With about 100 engineers now onboarded, the platform infrastructure is now enabling 30,000 non-production deployments across dozens of microservices per year. In 2018, there were 1,250+ production deployments. It’s a staggering improvement over some of Bose’s previous deployment processes, which supported far fewer deployments and services.
“We had a brand new service deployed from concept through coding and deployment all the way to production, including hardening, security testing and so forth, in less than two and a half weeks,” says O’Mahony. “Everybody thinks in terms of automation, leaning out the processes, getting things done as quickly as possible. When you step back and look at what it means for a 50-plus-year-old speaker company to have that sort of culture, it really is quite incredible, and I think the tools that we use and the foundation that we’ve built is a huge piece of that.”
Many of those technologies — such as Fluentd, CoreDNS, Jaeger, and OpenTracing — come from the CNCF Landscape, which West and O’Mahony have relied upon throughout Bose’s cloud native journey. “The CNCF Landscape quickly explains what’s going on in all the different areas from storage to cloud providers to automation and so forth,” says West. “This is our shopping cart to build a cloud infrastructure. We can go choose from the different aisles.”
And, he adds, “If it weren’t for Kubernetes and the rest of the CNCF projects being free open source software with such a strong community, we would never have achieved scale, or even gotten to launch on schedule.”
Another benefit of going cloud native: “We are even attracting much more talent into Bose because we’re so involved with the CNCF Landscape,” says West. (Yes, they’re hiring.) “It’s just enabled so many people to do so many great things and really brought Bose into the future of cloud.”
In the coming year, the team wants to work on service mesh and serverless, as well as expansion around the world. “Getting our latency down by going multi-region is going to be a big focus for us,” says O’Mahony. “In order to make sure that our customers in Japan, Australia, and everywhere else are having a good experience, we want to have points of presence closer to them. It’s never been done at Bose before.”
That won’t stop them, because the team is all about lofty goals. “We want to get to billions of connected products!” says West. “We have a lot going on to support many more of our business units at Bose in addition to the consumer electronics division, which we currently do. It’s only because of the cloud native landscape and the tools and the features that are available that we can provide such a fantastic cloud platform for all the developers and divisions that are trying to enable some pretty amazing experiences.”
In fact, given the scale the platform is already supporting, says O’Mahony, “doing anything other than Kubernetes, I think, would be folly at this point.”
IBM
International Business Machines Corporation (IBM) is an American multinational technology and consulting company that provides hardware, software, cloud-based services and cognitive computing. Founded in 1911 following the merger of four companies in New York State by Charles Ranlett Flint, it was originally called the Computing-Tabulating-Recording Company.
Challenge: IBM Cloud offers public, private, and hybrid cloud functionality across a diverse set of runtimes, from its OpenWhisk-based function as a service (FaaS) offering and managed Kubernetes and containers, to Cloud Foundry platform as a service (PaaS). These runtimes are combined with the power of the company’s enterprise technologies, such as MQ and DB2, its modern artificial intelligence (AI) Watson, and data analytics services. Users of IBM Cloud can exploit capabilities from more than 170 different cloud native services in its catalog, including capabilities such as IBM’s Weather Company API and data services. In the latter part of 2017, the IBM Cloud Container Registry team wanted to build out an image trust service.
Solution: The work on this new service culminated with its public availability in the IBM Cloud in February 2018. The image trust service, called Portieris, is fully based on the Cloud Native Computing Foundation (CNCF) open source project Notary, according to Michael Hough, a software developer with the IBM Cloud Container Registry team. Portieris is a Kubernetes admission controller for enforcing content trust. Users can create image security policies for each Kubernetes namespace, or at the cluster level, and enforce different levels of trust for different images. Portieris is a key part of IBM’s trust story, since it makes it possible for users to consume the company’s Notary offering from within their IKS clusters. The Notary server runs in IBM’s cloud, and Portieris runs inside the IKS cluster. This enables users to have their IKS cluster verify that the image they’re loading containers from contains exactly what they expect it to, and Portieris is what allows an IKS cluster to apply that verification.
Impact: IBM’s intention in offering a managed Kubernetes container service and image registry is to provide a fully secure end-to-end platform for its enterprise customers. “Image signing is one key part of that offering, and our container registry team saw Notary as the de facto way to implement that capability in the current Docker and container ecosystem,” Hough says. The company had not been offering image signing before, and Notary is the tool it used to implement that capability. “We had a multi-tenant Docker Registry with private image hosting,” Hough says. “The Docker Registry uses hashes to ensure that image content is correct, and data is encrypted both in flight and at rest. But it does not provide any guarantees of who pushed an image. We used Notary to enable users to sign images in their private registry namespaces if they so choose.”
Thanks For Reading the Article!!