
Deploying Helm Package Manager in Kubernetes

January 28, 2020
14 min read

The following documentation explains the procedure for deploying the Helm package manager in a Kubernetes environment. This tutorial assumes you have a working knowledge of Kubernetes and a basic understanding of Helm.

Helm is a tool that streamlines installing and managing Kubernetes applications. Think of it as apt/yum/homebrew for Kubernetes. Helm has two parts: a client (helm) and a server (tiller). Tiller runs inside your Kubernetes cluster and manages releases (installations) of your charts.

Prerequisites:

  • Linux workstation
  • K8s cluster with no other load balancer installed
  • kubernetes-cli (the kubectl program)
  • Kubernetes v1.15.1 (any recent version should work)
  • Routable IP network with DHCP configured

For a tutorial on Building a K8s Cluster using Vagrant visit: Building a K8s Cluster using Vagrant.

Step 1) Downloading Helm

To install Helm, we first need to download the package file. In your browser, navigate to https://helm.sh/docs/using_helm/. On the page, scroll down to “INSTALLING THE HELM CLIENT” and click “Download your own version”. Find the version you want to download; in this demo, we are downloading “Linux amd64”. Right-click the link and copy the link address.

On your master server, download the Helm package from the copied link address using wget.

wget https://get.helm.sh/helm-v2.16.0-linux-amd64.tar.gz
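
Optionally, you can check the download against its published SHA256 sum before using it. A minimal sketch, assuming get.helm.sh publishes a matching .sha256 file for this release containing only the hex digest:

# Fetch the published digest and verify the tarball against it
# (assumes the .sha256 file holds just the bare digest).
wget https://get.helm.sh/helm-v2.16.0-linux-amd64.tar.gz.sha256
echo "$(cat helm-v2.16.0-linux-amd64.tar.gz.sha256)  helm-v2.16.0-linux-amd64.tar.gz" | sha256sum -c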

Next, uncompress the file, change into the linux-amd64 directory, and list the files.

[vagrant@kmaster ~]$ tar -xvf helm-v2.16.0-linux-amd64.tar.gz
[vagrant@kmaster ~]$ cd linux-amd64/
[vagrant@kmaster linux-amd64]$ ls
helm  LICENSE  README.md  tiller

Step 2) Install Helm

Installing helm is as easy as copying the binary into your path. Copy the helm binary to /usr/local/bin.

[vagrant@kmaster linux-amd64]$ sudo cp helm /usr/local/bin/

Verify the installation and version. You can do so by running:

vagrant@kmaster:~/linux-amd64$ helm version --short --client
Client: v2.16.0+ge13bc94

The configuration for Helm is kept in the .helm directory under your home directory. However, it will not appear until we initialize Helm with "helm init" later on.


Step 3) Installing Tiller

Service accounts provide identities for processes that run in pods, and they allow cluster users to create accounts scoped to specific tasks.

Next, we will install the server-side component (tiller), but first we need to set up a service account and a clusterrolebinding. To create the service account "tiller", type the following:

[vagrant@kmaster linux-amd64]$ kubectl -n kube-system create serviceaccount tiller
serviceaccount/tiller created

Next, we will create the role binding. We create a clusterrolebinding called tiller that grants the cluster-admin clusterrole to the tiller service account we created earlier in the kube-system namespace.

[vagrant@kmaster linux-amd64]$ kubectl -n kube-system create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller created

We can check that the clusterrolebinding was created.

[vagrant@kmaster linux-amd64]$ kubectl get clusterrolebinding tiller
NAME     AGE
tiller   65s
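
As an aside, the same service account and role binding can be created declaratively instead of with the two imperative commands above. A sketch using a heredoc piped into kubectl apply:

# Declarative equivalent of the serviceaccount and clusterrolebinding above.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
EOF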

Step 4) Initializing Helm

Now, the next thing to do is initialize Helm; doing so creates the tiller pod. First, let's look at what pods exist. From the output below, you can see that there is no tiller pod running:

[vagrant@kmaster linux-amd64]$ kubectl get pods -n kube-system
NAME                                          READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-5497l                      1/1     Running   0          5h13m
coredns-5c98db65d4-s868z                      1/1     Running   0          5h13m
etcd-kmaster.example.com                      1/1     Running   0          5h12m
kube-apiserver-kmaster.example.com            1/1     Running   0          5h12m
kube-controller-manager-kmaster.example.com   1/1     Running   0          5h12m
kube-flannel-ds-amd64-fbrpg                   1/1     Running   0          5h13m
kube-flannel-ds-amd64-h47cd                   1/1     Running   0          5h8m
kube-flannel-ds-amd64-p9g2l                   1/1     Running   0          5h11m
kube-proxy-5lwpm                              1/1     Running   0          5h11m

Let's initialize Helm; we should see a tiller pod get created.

vagrant@kmaster:~/linux-amd64$ helm init --service-account tiller
Creating /home/vagrant/.helm
Creating /home/vagrant/.helm/repository
Creating /home/vagrant/.helm/repository/cache
Creating /home/vagrant/.helm/repository/local
Creating /home/vagrant/.helm/plugins
Creating /home/vagrant/.helm/starters
Creating /home/vagrant/.helm/cache/archive
Creating /home/vagrant/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /home/vagrant/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
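
The tiller pod can take a few seconds to become ready. If you want to wait for it rather than polling by hand, kubectl can block until the deployment finishes rolling out:

# Block until the tiller deployment reports all replicas ready.
kubectl -n kube-system rollout status deployment/tiller-deploy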

After initialization, we should see that a pod was created for tiller.

vagrant@kmaster:~/linux-amd64$ kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-7b9dcdcc5-fttk7   1/1     Running   1          43h
calico-node-6jpjv                         1/1     Running   1          43h
calico-node-bc8db                         1/1     Running   1          43h
calico-node-qrxjw                         1/1     Running   1          43h
coredns-5644d7b6d9-8vflt                  1/1     Running   1          43h
coredns-5644d7b6d9-spp2d                  1/1     Running   1          43h
etcd-kmaster                              1/1     Running   1          43h
kube-apiserver-kmaster                    1/1     Running   1          43h
kube-controller-manager-kmaster           1/1     Running   1          43h
kube-proxy-bd7fz                          1/1     Running   1          43h
kube-proxy-hdfwx                          1/1     Running   1          43h
kube-proxy-wdsng                          1/1     Running   1          43h
kube-scheduler-kmaster                    1/1     Running   1          43h
tiller-deploy-68cff9d9cb-46r9w            1/1     Running   0          32s
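
Now that tiller is up, helm version without the --client flag should report both sides of the connection. Illustrative output:

vagrant@kmaster:~/linux-amd64$ helm version --short
Client: v2.16.0+ge13bc94
Server: v2.16.0+ge13bc94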

Step 5) Useful Helm Commands

Now that we have the helm binary installed and the server-side component tiller running in Kubernetes, we can look at some useful commands (a worked release lifecycle follows the list below). helm help summarizes them:

helm help
helm install (--values) (--name)
helm fetch
helm list
helm status
helm search
helm repo update
helm upgrade
helm rollback
helm delete (--purge)
helm reset (--force) (--remove-helm-home)
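
To put these in context, here is a sketch of a typical Helm v2 release lifecycle; the release name my-nginx is illustrative:

helm install stable/nginx-ingress --name my-nginx   # create a release
helm status my-nginx                                # inspect its resources
helm upgrade my-nginx stable/nginx-ingress          # upgrade to the latest chart
helm rollback my-nginx 1                            # roll back to revision 1
helm delete my-nginx --purge                        # delete it and its history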

Before running helm commands, we should update the repo:


[vagrant@kmaster ~]$ helm repo update

Now we're ready to run some useful helm commands. For starters, helm home shows the location of the Helm home directory.

[vagrant@kmaster ~]$ helm home
/home/vagrant/.helm

helm repo list shows the repositories that helm uses. These get added when you initialize Helm.

[vagrant@kmaster ~]$ helm repo list
NAME    URL
stable  https://kubernetes-charts.storage.googleapis.com
local   http://127.0.0.1:8879/charts
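
You are not limited to the default repos; additional chart repositories can be added and refreshed. For example (the Bitnami repo name and URL are shown as an assumption; substitute any chart repo you use):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update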


helm search allows you to search the repo for applications such as nginx.

[vagrant@kmaster ~]$ helm search nginx
NAME                          CHART VERSION   APP VERSION   DESCRIPTION
stable/nginx-ingress          1.24.5          0.26.1        An nginx Ingress controller that uses ConfigMap to store ...
stable/nginx-ldapauth-proxy   0.1.3           1.13.5        nginx proxy with ldapauth
stable/nginx-lego             0.3.1                         Chart for nginx-ingress-controller and kube-lego
stable/gcloud-endpoints       0.1.2           1             DEPRECATED Develop, deploy, protect and monitor your APIs...


If you want to see what's inside the nginx-ingress chart, you can inspect it with "helm inspect".

helm inspect stable/nginx-ingress
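
helm inspect prints the chart metadata, README, and default values together. If you only want the configurable values, Helm v2 also supports a narrower form:

# Show just the chart's default values.
helm inspect values stable/nginx-ingress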


If you want to install nginx, you can fetch the application. This downloads a tar file of the chart that you can then extract and install.

[vagrant@kmaster ~]$ helm fetch stable/nginx-ingress
[vagrant@kmaster ~]$ ls
nginx-ingress-1.24.5.tgz
[vagrant@kmaster ~]$ tar -xvf nginx-ingress-1.24.5.tgz
[vagrant@kmaster ~]$ cd nginx-ingress/
[vagrant@kmaster nginx-ingress]$ ls
Chart.yaml  ci  OWNERS  README.md  templates  values.yaml

If we look in the templates directory, we’ll see all the YAML templates that tiller renders and applies when we install nginx using helm.

[vagrant@kmaster nginx-ingress]$ ls templates/
addheaders-configmap.yaml    controller-metrics-service.yaml      controller-service.yaml                    default-backend-service.yaml
admission-webhooks           controller-poddisruptionbudget.yaml  controller-webhook-service.yaml            _helpers.tpl
clusterrolebinding.yaml      controller-prometheusrules.yaml      default-backend-deployment.yaml            NOTES.txt
clusterrole.yaml             controller-psp.yaml                  default-backend-poddisruptionbudget.yaml   proxyheaders-configmap.yaml
controller-configmap.yaml    controller-rolebinding.yaml          default-backend-psp.yaml                   tcp-configmap.yaml
controller-daemonset.yaml    controller-role.yaml                 default-backend-rolebinding.yaml           udp-configmap.yaml
controller-deployment.yaml   controller-serviceaccount.yaml       default-backend-role.yaml
controller-hpa.yaml          controller-servicemonitor.yaml       default-backend-serviceaccount.yaml
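
From here you could install the chart either from the unpacked local directory or straight from the repo. A sketch; the release name my-ingress and the controller.replicaCount override are illustrative:

# Install from the unpacked local chart...
helm install ./nginx-ingress --name my-ingress
# ...or equivalently straight from the repo, overriding a value inline.
helm install stable/nginx-ingress --name my-ingress --set controller.replicaCount=2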

From the command below, we can see that helm init created the tiller deployment, which created the replicaset, which in turn created the pod. We can also see the service account and clusterrolebinding we added earlier.

vagrant@kmaster:~/linux-amd64$ kubectl -n kube-system get deploy,replicaset,pod,serviceaccount,clusterrolebinding | grep tiller
deployment.apps/tiller-deploy                         1/1   1         1   8m25s
replicaset.apps/tiller-deploy-68cff9d9cb              1     1         1   8m25s
pod/tiller-deploy-68cff9d9cb-46r9w                    1/1   Running   0   8m25s
serviceaccount/tiller                                 1     10m
clusterrolebinding.rbac.authorization.k8s.io/tiller   10m

And there you have it! Helm in Kubernetes.

Final Thoughts

Why Helm?

Kubernetes can become very complex with all of its objects: Services, ConfigMaps, PersistentVolumes, Pods, and more. Helm helps manage these resources and offers a simple way to package everything into a single application. It fills the need to provision container applications quickly and reliably through easy installation, upgrades, and removal, and it gives developers a vehicle to package their applications and share them with the Kubernetes community. Whether you are a developer packaging your application for Kubernetes or a DevOps engineer deploying internal or third-party applications, Helm is the way to go!
