
Helm deployment

Archived (pre-2022)

Preserved for reference only -- likely outdated. Last updated: April 2021

Prerequisites

  • Follow this procedure to obtain credentials for BLN managed AWS accounts, or obtain an AWS key pair for TLV managed accounts.
  • Make sure that your AWS IAM auth entity (assumed role or account) has access to the EKS cluster that you are going to work with. By default, the IAM user/role that was used to provision the EKS cluster is implicitly given admin access to it; other users/roles need to be given explicit access by modifying the aws-auth configmap in the kube-system namespace. For more info on the configuration format, please refer to aws-iam-authenticator (Github)
  • Install tools:
  • Latest version of the AWS CLI with aws eks get-token functionality: Installing
  • Kubectl: Install Kubectl
  • Sops: sops (Github)
  • Helm CLI tool: helm (Github)
    Assuming Tiller was installed in the kube-system namespace, you can find its version with the following query (since the query requires kubectl to be configured for the cluster endpoint, this step can be postponed):

    $ kubectl get deployment.apps/tiller-deploy -n kube-system -o jsonpath='{.spec.template.spec.containers[*].image}'
    

    Note

    • You don't need to install the server-side Helm component called Tiller unless you are provisioning a new EKS cluster or you know what you are doing.
    • Please download the version of the helm CLI that is compatible with the Tiller server-side component already installed in the cluster. You have to use the same major and minor version to satisfy compatibility constraints. Do not upgrade the server-side component unless you know what you are doing.
    • Helmfile: helmfile (Github)
    • Install helm-secrets plugin: helm-secrets (Github)
    • Install helm-diff plugin: helm-diff (Github)
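
Granting an additional IAM role explicit access, as mentioned in the prerequisites, means adding a mapRoles entry to the aws-auth configmap. A minimal sketch (the role ARN, username and group are placeholders; see aws-iam-authenticator for the authoritative format):

```yaml
# Sketch of the aws-auth ConfigMap in kube-system (placeholder role ARN).
# Mapping a role into the system:masters group grants cluster admin.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/SomeTeamRole
      username: some-team-role
      groups:
        - system:masters
```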

Creating kubeconfig for EKS cluster

At this point you should have your AWS credentials configured so that aws sts get-caller-identity returns the identity that you are going to use to access the EKS cluster.

Example for BLN AWS creds:

$ aws sts get-caller-identity --profile=saml
{
    "UserId": "AROAIPFXSQ4FCQFRLHQ2A:sergii.slobodianiuk@fyber.com",
    "Account": "767648288756",
    "Arn": "arn:aws:sts::767648288756:assumed-role/ADFS-Admin/sergii.slobodianiuk@fyber.com"
}

Example for TLV AWS creds (assuming the AWS credentials are exported as env vars):

$ aws sts get-caller-identity
{
    "UserId": "AIDA237PDF4FQOQGHMSQ7",
    "Account": "747288866571",
    "Arn": "arn:aws:iam::747288866571:user/sergii.slobodianiuk@fyber.com"
}

Use the AWS CLI update-kubeconfig command to create or update the .kube/config file for your cluster.

Example for the BLN managed aws-production-eks-common cluster in eu-west-1 (the --dry-run option prints the config instead of saving it to the .kube/config file):

$ aws eks update-kubeconfig --name aws-production-eks-common --region eu-west-1 --profile saml --dry-run
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJ...........................=
    server: https://D59BBBD23A08A9F623D96AD7F05D72DB.yl4.eu-west-1.eks.amazonaws.com
  name: arn:aws:eks:eu-west-1:767648288756:cluster/aws-production-eks-common
contexts:
- context:
    cluster: arn:aws:eks:eu-west-1:767648288756:cluster/aws-production-eks-common
    user: arn:aws:eks:eu-west-1:767648288756:cluster/aws-production-eks-common
  name: arn:aws:eks:eu-west-1:767648288756:cluster/aws-production-eks-common
current-context: arn:aws:eks:eu-west-1:767648288756:cluster/aws-production-eks-common
kind: Config
preferences: {}
users:
- name: arn:aws:eks:eu-west-1:767648288756:cluster/aws-production-eks-common
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - eu-west-1
      - eks
      - get-token
      - --cluster-name
      - aws-production-eks-common
      command: aws
      env:
      - name: AWS_PROFILE
        value: saml

Example for TLV managed accounts, where a separate role is used to grant access to the EKS cluster:

$ aws eks update-kubeconfig --name staging --region us-east-1 --role-arn arn:aws:iam::747288866571:role/EKS_users --dry-run
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBD...........................=
    server: https://43BAC8E5B8779D9F82AB9B9FA5C5D8A4.gr7.us-east-1.eks.amazonaws.com
  name: arn:aws:eks:us-east-1:747288866571:cluster/staging
contexts:
- context:
    cluster: arn:aws:eks:us-east-1:747288866571:cluster/staging
    user: arn:aws:eks:us-east-1:747288866571:cluster/staging
  name: arn:aws:eks:us-east-1:747288866571:cluster/staging
current-context: arn:aws:eks:us-east-1:747288866571:cluster/staging
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-east-1:747288866571:cluster/staging
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-1
      - eks
      - get-token
      - --cluster-name
      - staging
      - --role
      - arn:aws:iam::747288866571:role/EKS_users
      command: aws

Thus, issuing the command(s) above without --dry-run updates your .kube/config file, which allows kubectl to access the EKS cluster using the corresponding configuration.

Kubectl uses the notion of a context for entries in its configuration file, which can contain many contexts and allows switching between them.

For example, after running both commands above without --dry-run, my configuration looks like this:

$ kubectl config get-contexts
CURRENT   NAME                                                                   CLUSTER                                                                AUTHINFO                                                               NAMESPACE
          arn:aws:eks:eu-west-1:767648288756:cluster/aws-production-eks-common   arn:aws:eks:eu-west-1:767648288756:cluster/aws-production-eks-common   arn:aws:eks:eu-west-1:767648288756:cluster/aws-production-eks-common
*         arn:aws:eks:us-east-1:747288866571:cluster/staging                     arn:aws:eks:us-east-1:747288866571:cluster/staging                     arn:aws:eks:us-east-1:747288866571:cluster/staging

The current context is marked with an asterisk (*).

Contexts can be switched as follows:

$ kubectl config use-context arn:aws:eks:eu-west-1:767648288756:cluster/aws-production-eks-common
Switched to context "arn:aws:eks:eu-west-1:767648288756:cluster/aws-production-eks-common".

Now the context I chose is shown to be selected:

$ kubectl config get-contexts
CURRENT   NAME                                                                   CLUSTER                                                                AUTHINFO                                                               NAMESPACE
*         arn:aws:eks:eu-west-1:767648288756:cluster/aws-production-eks-common   arn:aws:eks:eu-west-1:767648288756:cluster/aws-production-eks-common   arn:aws:eks:eu-west-1:767648288756:cluster/aws-production-eks-common
          arn:aws:eks:us-east-1:747288866571:cluster/staging                     arn:aws:eks:us-east-1:747288866571:cluster/staging                     arn:aws:eks:us-east-1:747288866571:cluster/staging

Additionally, a context can be modified to use a specific namespace in the Kubernetes cluster instead of the default one.

This can be very convenient if you work with resources in some specific namespace most of the time.

$ kubectl config set-context arn:aws:eks:us-east-1:747288866571:cluster/staging --namespace=ssr
Context "arn:aws:eks:us-east-1:747288866571:cluster/staging" modified.

At this point you should be able to access the EKS cluster.

Test connection:

$ kubectl get svc
NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes                 ClusterIP   10.100.0.1       <none>        443/TCP          62d

Info

More information on configuring AWS CLI: Cli Chap Getting Started

More Information on configuring kubectl for EKS: Create Kubeconfig

Helm + deploy.sh based deployment structure

[diagram: helm_struct_1.png]

Let's go over the structure:

1) Jenkinsfile holds pipeline code for deployment job

2) charts directory holds helm charts. Those can be self-developed charts or charts copied from some public repo, for example charts (Github)

We change the standard helm structure a bit by adding a directory with the chart version, where the actual chart content is located. This structure is used by Rancher and provides a convenient way of holding several versions of the same chart together. A specific version of a chart is referenced from the deployment configuration, which is explained later on.
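
For a hypothetical chart named test-chart with two versions, the layout would look like this:

```
charts/
└── test-chart/
    ├── v0.0.1/
    │   ├── Chart.yaml
    │   ├── values.yaml
    │   └── templates/
    └── v0.0.2/
        ├── Chart.yaml
        ├── values.yaml
        └── templates/
```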

3) config directory holds a structured directory tree with separate directories for region, environment, cluster endpoint (e.g. k8s API endpoint), service and its components. In each directory a values.yaml file can be placed, providing a respectively scoped variables override (innermost with the highest priority). The component directory can also contain a secrets.yaml file encrypted by the mozilla sops utility, which is then decrypted and passed to the helm command as a regular values file.

The cluster endpoint directory also contains a kube_config file holding exactly what the file's name says.

The component directory also contains a .env file that references a chart and specifies the namespace for a deployment. It looks like this:

NAMESPACE=test
CHART_REF=test-chart/v0.0.1
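
Putting the pieces together, a hypothetical config tree for a single component could look like this (all names illustrative):

```
config/
└── eu-west-1/                 # region
    └── production/            # environment
        └── eks_common/        # cluster endpoint
            ├── kube_config
            └── test-service/  # service
                ├── values.yaml
                └── main/      # component
                    ├── .env
                    ├── values.yaml
                    └── secrets.yaml
```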

4) deploy.sh script binds a configuration to the specific chart that the configuration requires and runs the helm command, passing all the values files in the appropriate order to implement the override, as well as decrypted secrets if such exist.
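
The values-override ordering can be sketched as follows. This is a hypothetical illustration, not the actual deploy.sh: it walks the config hierarchy from region down to component and adds a -f flag for every values.yaml found, outermost first, so the innermost file wins.

```shell
#!/bin/sh
# Hypothetical sketch: collect helm -f flags from the config hierarchy.
# Outermost values files come first, so the innermost file overrides them.
build_values_args() {
  base="$1"; shift   # e.g. helm/config; then: region environment endpoint service component
  path="$base"
  args=""
  for part in "$@"; do
    path="$path/$part"
    if [ -f "$path/values.yaml" ]; then
      args="$args -f $path/values.yaml"
    fi
  done
  printf '%s\n' "$args"
}
```

The resulting flags would then be appended to the helm command, with a sops-decrypted secrets.yaml passed last if one exists.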

A real example of the structure:

[diagram: helm_struct_1_example.png]

deploy.sh usage

Usage of the script looks like this:

Usage: deploy.sh options (-a <Action> -r <Region> -e <Environment> -d <Deployment Endpoint> -s <Service> -c <Component> )

Where the Region, Environment, Deployment/Cluster Endpoint, Service and Component options refer to the corresponding subdirectories in the config directory of the deployment structure.

And Action can be one of these:

  • deploy - invokes the helm upgrade --install command with appropriate variables
  • dry_run - invokes the helm upgrade --install --dry-run command with appropriate variables
  • kubectl_apply - invokes helm template | kubectl apply -f - with appropriate variables. This covers resources that were already deployed without helm and that we do not want to remove (like storageclasses) in order to redeploy them with helm, or any other case when only the templating functionality is needed.
  • template - invokes helm template with appropriate variables. Used for debugging purposes
  • diff - compares the results produced by helm get manifest and helm template against the same release to show the difference between the deployed state and the current repository configuration
  • delete - invokes helm get manifest for the specific release, then kubectl delete against the obtained state, and then helm delete --purge against the same release. This was done due to some helm specifics: not all resources can be cleaned up by helm delete following a failed deploy.
  • helm_init - invokes kubectl apply -f helm_init/rbac-config.yaml and helm init --service-account tiller to install helm into the k8s cluster if needed
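
The mapping from action to base command can be sketched roughly like this. It is a simplified, hypothetical illustration (release name, values flags, and the multi-step diff/delete actions are omitted), not the actual script:

```shell
#!/bin/sh
# Hypothetical sketch of the deploy.sh action dispatch: map the -a option
# to the base helm/kubectl command line.
action_to_command() {
  case "$1" in
    deploy)        echo "helm upgrade --install" ;;
    dry_run)       echo "helm upgrade --install --dry-run" ;;
    template)      echo "helm template" ;;
    kubectl_apply) echo "helm template | kubectl apply -f -" ;;
    helm_init)     echo "helm init --service-account tiller" ;;
    *)             echo "unsupported action: $1" >&2; return 1 ;;
  esac
}
```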

Note

If there is a secrets.yaml file for a specific service/component, the script will try to decrypt it with the sops utility. For this, sops needs access to valid AWS credentials, because the data is encrypted with KMS keys.

If an AWS CLI profile is used, the AWS_PROFILE variable needs to be exported:

$ export AWS_PROFILE=saml

Example:

$ helm/deploy.sh -r eu-west-1 -e production -d eks_common -s ofw-pivot -c main -a dry_run

         Region:  eu-west-1
    Environment:  production
Deploy Endpoint:  eks_common
      Namespace:  ofw-pivot
        Service:  ofw-pivot
      Component:  main
          Chart:  ofw-pivot/latest
         Action:  dry_run

Release "ofw-pivot-main" has been upgraded. Happy Helming!
LAST DEPLOYED: Tue Jul 16 12:42:08 2019
NAMESPACE: ofw-pivot
STATUS: DEPLOYED

RESOURCES:
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
ofw-pivot-main-ofw-pivot-96f49d5fb-lsnkz 1/1 Running 0 15h
ofw-pivot-main-ofw-pivot-96f49d5fb-mljhc 1/1 Running 0 15h

==> v1/Secret
NAME TYPE DATA AGE
ofw-pivot-main-ofw-pivot Opaque 1 122d

==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ofw-pivot-main-ofw-pivot LoadBalancer 172.20.153.2 a1181f4e86811... 443:30198/TCP 122d

==> v1beta1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
ofw-pivot-main-ofw-pivot 2/2 2 2 41d

SOPS usage

To introduce a secrets.yaml file for a deployment or to edit an existing one, direct usage of the sops utility is required.

SOPS encrypts only values in yaml files.

Once a new secrets.yaml file is created, run the following command to encrypt it in place:

$ sops -i -e helm/config/eu-west-1/production/eks_common/ofw-pivot/main/secrets.yaml

To show the decrypted content of the file:

$ sops -d helm/config/eu-west-1/production/eks_common/ofw-pivot/main/secrets.yaml

To decrypt the file in place for editing:

$ sops -i -d helm/config/eu-west-1/production/eks_common/ofw-pivot/main/secrets.yaml

Note

Make sure that KMS keys used to encrypt files are available for other people who need to be able to perform deploy.

If an AWS CLI profile is used, the AWS_PROFILE variable needs to be exported:

$ export AWS_PROFILE=saml

In order to encrypt files, SOPS needs to know which keys to use. This can be configured by placing a .sops.yaml config file or by exporting the appropriate env var.
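
For example, a hypothetical .sops.yaml at the repository root could select keys by path (the path regex is illustrative; the KMS ARN is one of the production keys listed below):

```yaml
# Hypothetical .sops.yaml: choose KMS keys based on the secrets file path
creation_rules:
  - path_regex: config/.*/production/.*secrets\.yaml$
    kms: "arn:aws:kms:eu-west-1:767648288756:key/bf5753b7-a1c4-4f8d-b0b5-ef8be6660fbc"
```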

Production Fyber Core

export SOPS_KMS_ARN="arn:aws:kms:us-west-2:767648288756:key/71621466-2e0b-4d6c-80b3-c0efcc01b607,arn:aws:kms:eu-west-1:767648288756:key/bf5753b7-a1c4-4f8d-b0b5-ef8be6660fbc"

Staging Fyber Core

export SOPS_KMS_ARN="arn:aws:kms:eu-west-1:399797994004:key/e09377d3-ac96-43e4-a992-3ab85ba930aa,arn:aws:kms:us-west-2:399797994004:key/908bd786-e610-436e-b2df-f2379fb30e8b"

Helmfile based deployment structure

This deployment approach uses the helmfile utility instead of the deploy.sh script, which gives more flexibility and simplifies the structure.

[diagram: helmfile_struct.png]

1) Jenkinsfile holds pipeline code for deployment job

2) helmfile.yaml in the helm directory is used for referencing other helmfiles by environment name and/or labels if needed

Currently it looks like this:

environments:
  production_eu_west_1:

helmfiles:
- config/*/helmfile.yaml

This allows triggering a deployment (or other action) against the production_eu_west_1 environment in all service configs, in other words: "Deploy everything".

3) charts directory is identical to the structure used with helm + deploy.sh

4) config directory has separate directories for each service/app for logical separation.

Each of these directories has its own helmfile.yaml, which is templated with the environment name and points to the environment specific values and/or secrets:

$ cat helm/config/jenkins_main/helmfile.yaml

helmDefaults:
  kubeContext: {{ .Environment.Values.kubeContext }}

environments:
  production_eu_west_1:
    values:
    - production_eu_west_1.yaml

releases:
  - name: {{ .Environment.Values.name }}
    namespace: {{ .Environment.Values.namespace }}
    chart: {{ .Environment.Values.chartRef }}
    values:
      - {{ .Environment.Name }}/values.yaml
    secrets:
      - {{ .Environment.Name }}/secrets.yaml

In this case the deployed app has only one environment (a deployed instance in this specific case; do not confuse it with the classic definition of an environment), but we can have as many as we want for different regions (eu-west-1, us-east-1, etc.) or classic envs (prod, stg, dev, etc.).
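
For instance, a second deployed instance would just be another entry under environments; the staging_us_east_1 name and its values file here are hypothetical:

```yaml
environments:
  production_eu_west_1:
    values:
    - production_eu_west_1.yaml
  staging_us_east_1:
    values:
    - staging_us_east_1.yaml
```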

The environment specific config is used to reference the helm chart used by this helmfile environment and to specify the name of the helm release, the default namespace, and the kube context (the k8s cluster to deploy to).

Example:

$ cat helm/config/jenkins_main/production_eu_west_1.yaml

name: jenkins-main
namespace: default
chartRef: "../../charts/jenkins/latest"
kubeContext: "arn:aws:eks:eu-west-1:767648288756:cluster/aws-production-eks-common"

Thus, the helmfile environments structuring feature allows using different charts (versions) for any of them, as well as deploying to an arbitrary k8s cluster and namespace and using a different helm release name.

The helmfile environment directory holds values.yaml and/or secrets.yaml files, which are regular helm values files that override the default values.yaml in the chart used by this environment.

secrets.yaml files are decrypted automatically by helmfile using the sops utility.

Note

If an AWS CLI profile is used, the AWS_PROFILE variable needs to be exported:

$ export AWS_PROFILE=saml

To run deployment for specific helmfile env and service/app:

$ helmfile --interactive --environment production_eu_west_1 --file helm/config/jenkins_main/helmfile.yaml  apply

Building dependency ../../charts/jenkins/latest
No requirements found in ../../charts/jenkins/latest/charts.

Decrypting secret production_eu_west_1/secrets.yaml
Decrypting production_eu_west_1/secrets.yaml

Comparing jenkins-main ../../charts/jenkins/latest

No affected releases

In case of pending changes, helmfile will output a diff of the changes and ask for confirmation.
The diff command can also be run directly:

$ helmfile --interactive --environment production_eu_west_1 --file helm/config/jenkins_main/helmfile.yaml diff

For more information regarding helmfile functionality please refer to helmfile (Github)