What is a Deployment in k8s?
A Deployment provides declarative updates for Pods and ReplicaSets.
You describe a desired state in a Deployment, and the Deployment controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, scale the number of replicas, or remove existing Deployments and adopt all their resources with new Deployments.
Create a Deployment file to deploy a sample todo-app on K8s using the "auto-healing" and "auto-scaling" features.
First, verify whether your Minikube is running; if not, start it.
minikube status
minikube start --driver=docker
I have cloned https://github.com/LondheShubham153/django-notes-app.git to the server, built a Docker image, and pushed it to Docker Hub. I'm using the image kshitibartakke/django-todo:latest to create the containers.
Create a deployment.yml file as below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: todo-deploy
  labels:
    app: todo-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: todo-app
  template:
    metadata:
      labels:
        app: todo-app
    spec:
      containers:
      - name: django-todo
        image: kshitibartakke/django-todo:latest
        ports:
        - containerPort: 8000
Let’s create this deployment. Run this command as a root user:
kubectl apply -f deployment.yml
Verify using
kubectl get deployments
Let us test whether the containers we created are working. Connect to the worker node and exec into one of the containers:
kubectl get pods
docker ps -a
docker exec -it <container_id> bash
Let’s connect to the application using the container’s IP. Note that the app listens on the containerPort we defined (8000), not 80:
curl -L http://<container_ip>:8000
So the deployment I created is working successfully.
Let’s check the auto-healing and autoscaling features.
What are auto-healing and auto-scaling features in k8s?
Auto-healing, also known as self-healing, is a feature that automatically detects and recovers from failures within the cluster.
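Beyond the ReplicaSet recreating deleted pods, you can also make the kubelet restart an unresponsive container by adding a liveness probe to the container spec. The sketch below assumes the todo-app above serves HTTP on "/" at port 8000; the delay and period values are illustrative assumptions:

```yaml
# Sketch: livenessProbe fields added under the django-todo container in the
# Deployment above. If the HTTP check fails repeatedly, the kubelet restarts
# the container automatically.
livenessProbe:
  httpGet:
    path: /          # assumes the app responds at the root path
    port: 8000       # matches the containerPort defined above
  initialDelaySeconds: 10
  periodSeconds: 15
```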
Auto-scaling is a feature that dynamically adjusts the number of running instances (pods) based on the current demand or resource utilization.
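In practice, pod auto-scaling is done with a HorizontalPodAutoscaler. A minimal sketch targeting the todo-deploy Deployment from the manifest above (the name "todo-hpa" and the 70% CPU target are assumptions; an HPA on CPU also requires the metrics server to be running):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: todo-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: todo-deploy     # the Deployment created above
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out when average CPU exceeds 70%
```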
To test auto-healing, I will delete two of the pods.
kubectl get pods
kubectl delete pod <podname> <podname>
If we check the pods again, we can observe that the desired number of pods we specified (in this case, 3) is restored.
You can observe that one pod was created only a few seconds ago, proving that it came up automatically after the deletion. This is the k8s auto-healing feature in action.
To delete the Deployment, we use:
kubectl delete -f deployment.yml
We can also observe that, along with the Deployment, the pods it created are deleted as well.
Finally! You created a cluster and deployed the application on it!
1. Step-by-Step: Creating a Deployment Without YAML
At its core, a Kubernetes Deployment is a higher-level construct that manages the desired state for your Pods and ReplicaSets. It ensures that the defined number of pod replicas are maintained. If a Pod goes down, the Deployment ensures that a new one is spun up as a replacement. This ensures the reliability and resilience of your application.
2. Pre-requisites
A running Kubernetes cluster. If you haven’t set one up, consider using a platform like Minikube for local development or a cloud provider's Kubernetes service.
kubectl – the command-line tool used to interact with the Kubernetes cluster.
3. Creating a Deployment
For our example, we'll deploy a simple Nginx server. Here's how you do it:
kubectl create deployment nginx-deployment --image=nginx
This command instructs Kubernetes to create a deployment named "nginx-deployment" using the Nginx image from DockerHub.
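Under the hood, this imperative command generates a Deployment manifest roughly like the following sketch (kubectl defaults to one replica and labels the pods app: nginx-deployment):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx-deployment
spec:
  replicas: 1               # kubectl create deployment defaults to 1 replica
  selector:
    matchLabels:
      app: nginx-deployment
  template:
    metadata:
      labels:
        app: nginx-deployment
    spec:
      containers:
      - name: nginx         # container name derived from the image name
        image: nginx
```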
4. Verifying the Deployment
After creating the deployment, you can use the following command to see details:
kubectl get deployments
You should see your nginx-deployment listed, along with the number of replicas and other relevant information.
5. Exposing Your Deployment
By default, your deployment is only accessible from within the cluster. To access the Nginx server from outside, you need to expose it. One way to do this is using a Service:
kubectl expose deployment nginx-deployment --type=LoadBalancer --port=80
This command exposes your deployment using a LoadBalancer and makes it accessible on port 80.
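For reference, the expose command generates a Service roughly equivalent to this sketch (the selector assumes the default app: nginx-deployment label set by kubectl create deployment; on Minikube, a LoadBalancer Service stays pending unless you use a tunnel):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-deployment
spec:
  type: LoadBalancer
  selector:
    app: nginx-deployment   # routes traffic to pods with this label
  ports:
  - port: 80                # port exposed by the Service
    targetPort: 80          # port the Nginx container listens on
```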
6. Scaling Your Deployment
One of the major benefits of Kubernetes is its ability to easily scale applications. To scale your deployment, use the following:
kubectl scale deployment nginx-deployment --replicas=3
This command scales the nginx-deployment to 3 replicas.
7. Updating Your Deployment
Kubernetes allows for zero-downtime updates. If you wanted to update the Nginx version (or any other parameter), just modify the deployment:
kubectl set image deployment/nginx-deployment nginx=nginx:1.17.9
This updates the Nginx image in the deployment to version 1.17.9.
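The zero-downtime behavior comes from the Deployment's rolling-update strategy. These fields on the Deployment spec control how many pods may be added or taken down at once during an update; the values shown are the Kubernetes defaults:

```yaml
# Sketch: rolling-update strategy fields on a Deployment spec.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%         # extra pods allowed above the desired count
      maxUnavailable: 25%   # pods allowed to be unavailable during the update
```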
8. Get Pods
To list the pods created by the deployment:
kubectl get pods
9. Auto-Scaling and Auto-Healing
Delete a Pod, and Kubernetes will create a replacement by itself. This behavior is called auto-healing.
10. Cleaning Up
Once you're done, or if you want to remove the resources you've created:
kubectl delete service nginx-deployment
kubectl delete deployment nginx-deployment
Conclusion
Kubernetes Deployment is a powerful abstraction that allows you to manage the desired state for your applications in an automated, scalable, and reliable manner. By mastering Deployments, you're well on your way to harnessing the full power of Kubernetes!
Whether you're running microservices, batch jobs, or other containerized applications, Kubernetes Deployments simplify the complexities of ensuring your application is running smoothly. Dive deeper, explore, and happy deploying!