This cheat sheet focuses on the most important CLI commands for the kubectl, helm, and minikube CLIs. But what are the differences between them, you ask?

- **kubectl**: The most important CLI to learn. It is used to manage a Kubernetes cluster.
- **helm**: Package manager for Kubernetes. It is somewhat like `apt install` in Linux, but for installing services in Kubernetes clusters.
- **minikube**: CLI for running Kubernetes locally.

## 1. kubectl

### Enable autocomplete

```shell
$ source <(kubectl completion bash)
$ echo "source <(kubectl completion bash)" >> ~/.bashrc
```

### Context and configuration

```shell
$ kubectl config current-context
$ kubectl config view
$ kubectl config set-context mia
$ kubectl config use-context mia
```

### Run YAML files / make infrastructural changes

Make a deployment. Make sure to create a my-manifest.yaml file first!
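As a starting point, a minimal my-manifest.yaml for a Deployment might look like the sketch below (the names, image, and replica count are illustrative, not prescribed by this cheat sheet):

```yaml
# my-manifest.yaml -- a minimal Deployment (names and counts are illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.25
          ports:
            - containerPort: 80
```

You would then apply it with `kubectl apply -f my-manifest.yaml`.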
## What is Kubernetes?

Kubernetes is an open-source platform for running cloud native apps. It's a layer over VMs and provides a rich set of APIs for running cloud native apps.

## What are cloud native apps?

Cloud native apps are built of small interacting services that work together to do something useful. Making them small makes them easy to scale and update.

## Prerequisites to learn Kubernetes

Compulsory requirements:
Kubernetes is a container orchestration platform. Part of its job is to let containers in a cluster communicate with each other in an easy way, and this is where networking in Kubernetes kicks in. To understand how Kubernetes works, knowing its basic underlying networking concepts is a fundamental necessity.
Pods in Kubernetes are ephemeral. That means if a pod crashes or restarts, any data it stored previously is lost. Volumes in Kubernetes decouple storage from pods and provide a method for persisting data. For persistent storage in Kubernetes, we need to know three things.
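To make this concrete, here is a hedged sketch of a PersistentVolumeClaim and a pod that mounts it, so the mounted data outlives container restarts (all names, sizes, and the image are placeholders chosen for illustration):

```yaml
# pvc.yaml -- request 1Gi of persistent storage (illustrative values)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
# pod.yaml -- mount the claim so the data is decoupled from the pod
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: app
      image: nginx:1.25
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-pvc
```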
A typical Kubernetes workflow follows these steps:

1. Write code
2. Containerize the app
3. Send to a container registry
4. Deploy to Kubernetes

```mermaid
graph TD
    A[Write Code] --> B[Containerize app]
    B --> C[Send to container registry]
    C --> D[Deploy to Kubernetes]
```

## 1. Write Code

This step is pretty self-explanatory. We write code that solves a business problem. For microservices/distributed systems, we write the code as multiple small, cohesive services so that they can be deployed independently.
In the previous blog post we deployed an nginx container with 3 replicas. Replicas allow us to scale an application up and increase its availability and performance. However, we generally want the replica count of a pod to be dynamic: when demand increases, the number of pods should increase, and when demand decreases, the number of pods should decrease.
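One standard way to make the replica count dynamic is a HorizontalPodAutoscaler. A minimal sketch is below; it assumes a metrics server is running in the cluster, and the deployment name and thresholds are illustrative:

```yaml
# hpa.yaml -- scale the nginx deployment between 3 and 10 replicas
# based on average CPU utilization (illustrative values)
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

The equivalent imperative shorthand is `kubectl autoscale deployment nginx-deployment --min=3 --max=10 --cpu-percent=70`.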