Building an image with GitLab CI and GKE-based runner¶
Imported from Confluence. Content may be outdated; verify before following any procedures. Last updated: November 2022

Description¶
We'd like to use GitLab to build an image and upload it to Google Artifact Registry (GAR), using a build-on-push approach and running all the related workloads in a Google Kubernetes Engine (GKE) cluster. To achieve that, an integration must be set up between a GitLab project and both Google Cloud Platform (GCP) entities.
On the GitLab side, a runner is an application that works with GitLab CI/CD to run jobs in a pipeline. The Berlin R&D department uses two types of runners:
- Shared runners, which are available to all projects and are managed by a dedicated DevOps team;
- Kubernetes runners, which are deployed to our clusters and are fully operated by the Berlin DevOps team.
We have a monthly total of 50k CI/CD minutes available globally to all DT projects via shared runners. If we hit the cap we can buy additional minutes, but before that the DevOps team will do its best to migrate projects to internally managed runners. It's fine for your projects to start on the shared runners to experiment and figure things out, but ultimately dedicated runners must be set up.
Setting up GKE runner¶
Follow the steps below to set up a runner in a GKE cluster:
- Create a GitLab project if one does not exist yet;
- Copy a registration token for the project: go to Settings → CI/CD → Runners → Specific Runners → Copy token;
- Done by DevOps team: create a Google Service Account (GSA) with the Artifact Registry Writer role (example);
- Done by DevOps team: deploy the GitLab runner into your GKE cluster via the Helm chart, assigning the token from above to runnerRegistrationToken and annotating the Kubernetes Service Account with the GSA from the previous step (example);
- Once deployed, the runner appears under Settings → CI/CD → Runners → Available specific runners. There you can also press Edit next to your new runner (pencil icon) and add tags if needed; tags let you bind specific runners to specific GitLab CI pipelines. Note that the tag must be equal to the one configured in the Helm chart.
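As a sketch of the Helm deployment step above, a minimal values.yaml for the gitlab-runner chart might look like the following. The GSA name, token placeholder, and tag are hypothetical, and value names can differ between chart versions, so verify them against the chart release you actually deploy:

```yaml
# Hypothetical values.yaml sketch for the gitlab-runner Helm chart.
# All concrete names below are placeholders -- verify against your chart version.
gitlabUrl: https://gitlab.com/

# Registration token copied from Settings -> CI/CD -> Runners
runnerRegistrationToken: "<token-from-project-settings>"

rbac:
  create: true
  # Workload Identity: annotate the Kubernetes Service Account with the GSA
  # that holds the Artifact Registry Writer role (GSA name is a placeholder)
  serviceAccountAnnotations:
    iam.gke.io/gcp-service-account: gitlab-runner@my-project.iam.gserviceaccount.com

runners:
  # Must match the tag referenced by your .gitlab-ci.yml pipelines
  tags: "test-runner"
```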

Building a pipeline¶
Now that our runner is ready for incoming requests, all we have to do is configure a pipeline to use it:
- Create a .gitlab-ci.yml file in the root directory of the GitLab project (see the example below);
- Create the Dockerfile that the pipeline will build;
- Produce the git event required to trigger your pipeline (defined by the rules keyword; the example below runs on a push to the master branch);
- You can now see your pipeline and its jobs running under CI/CD → Jobs.
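For the Dockerfile step, any minimal image definition is enough to exercise the pipeline; the base image and command below are hypothetical placeholders, not a recommendation:

```dockerfile
# Hypothetical minimal Dockerfile, used only to verify the build pipeline
FROM alpine:3.16
COPY . /app
CMD ["cat", "/app/README.md"]
```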
Example¶
Below is an example .gitlab-ci.yml file, which uses Kaniko to build an image from the Dockerfile placed in the root directory:
```yaml
stages:
  - build-push

image:
  name: us-east1-docker.pkg.dev/ag-fyber-offerwall-dev/infra-images/kaniko-executor:v1.9.1-debug
  entrypoint: [""]

variables:
  DEST_REGISTRY: "us-east1-docker.pkg.dev/ag-fyber-offerwall-dev/infra-images"
  DEST_IMAGE: ${CI_PROJECT_NAME}
  DEST_TAG: ${CI_COMMIT_SHORT_SHA}

build-push:
  stage: build-push
  script:
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${DEST_REGISTRY}/${DEST_IMAGE}:${DEST_TAG}"
      --cache=true
  tags:
    - test-runner
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "master"'
      when: on_success
```
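Kaniko accepts the --destination flag multiple times, so the same job can also push a floating tag such as latest alongside the commit tag. The following variant of the job above is a sketch, not part of the original pipeline:

```yaml
build-push:
  stage: build-push
  script:
    # Repeating --destination pushes the same built image under two tags
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${DEST_REGISTRY}/${DEST_IMAGE}:${DEST_TAG}"
      --destination "${DEST_REGISTRY}/${DEST_IMAGE}:latest"
      --cache=true
  tags:
    - test-runner
```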
Kaniko caching¶
Kaniko can cache layers created by RUN (configured by flag --cache-run-layers) and COPY (configured by flag --cache-copy-layers) commands in a remote repository. Before executing a command, kaniko checks the cache for the layer: if it exists, kaniko will pull and extract the cached layer instead of executing the command, if not, kaniko will execute the command and then push the newly created layer to the cache. Users can opt into caching by setting the --cache=true flag. A remote repository for storing cached layers can be provided via the --cache-repo flag. If this flag isn't provided, a cached repo will be inferred from the --destination provided.
Info
Note that kaniko cannot read layers from the cache after a cache miss: once a layer has not been found in the cache, all subsequent layers are built locally without consulting the cache.
Following the above, this combination creates a repository in a dedicated GAR folder (pre-created by the DevOps team) and stores and looks up cached layers there:
```
--cache=true
--destination us-east1-docker.pkg.dev/ag-fyber-offerwall-dev/images/application:latest
--cache-repo us-east1-docker.pkg.dev/ag-fyber-offerwall-dev/cache/application
```
And this combination instead creates a dedicated repository, us-east1-docker.pkg.dev/ag-fyber-offerwall-dev/images/application/cache, in the same folder:
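A sketch of this second combination, assuming the inference rule described above (omit --cache-repo and let kaniko derive the cache repository from --destination):

```
--cache=true
--destination us-east1-docker.pkg.dev/ag-fyber-offerwall-dev/images/application:latest
```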
Useful links¶
kaniko (GitHub)