Today a colleague explained how GitLab pipelines are put together and how they work: a lot of information in a short time. To see whether I've remembered everything correctly, and because it's valuable information, I'll summarize it here.
Pipelines are the base component for setting up Continuous Integration, Delivery, and Deployment for a software project, and a central part of a DevOps way of working. The main part of a pipeline is the configuration file, which describes how the pipeline is set up and which tasks it should perform. The main building blocks of a pipeline are jobs and stages: jobs describe what the pipeline should do, and stages define when it should do it. Pipelines can be set up in many different ways, of which the basic pipeline and the directed acyclic graph (DAG) pipeline are the most common. In this article I will give an example of both pipeline types, show how to configure them, and discuss their parts.
A DAG pipeline is built by combining jobs with the needs: keyword (a newer GitLab feature).
A pipeline has stages. In most cases four stages are used: build, test, staging, and production. Stages run one after another. In the project I currently work on we use many more stages, such as install_dependencies, build, test, sandbox, consolidate_reports, deploy_staging, run-e2e-test, deploy_demo, release, and deploy_live. You can define your own stages, and as many as you need. Stages are declared in the configuration file with the stages: keyword. Each stage is indented and on its own line:
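A minimal sketch of such a declaration in .gitlab-ci.yml, using the four common stages mentioned above:

```yaml
stages:
  - build
  - test
  - staging
  - production
```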
Jobs are one of the most fundamental parts of the configuration file, .gitlab-ci.yml. Jobs contain constraints and conditions; in other words, in a job it is possible to describe what to do and under which conditions. Jobs must have a unique name and should contain at least a script: keyword. There is no limit on the number of jobs. Jobs give instructions to so-called runners, which execute them.
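As a sketch, a minimal job could look like this (the name build-job and its command are illustrative, not from the original article):

```yaml
# Each job needs a unique name and at least a script: keyword.
build-job:
  stage: build
  script:
    - echo "Compiling the application..."
```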
The configuration file for a GitLab pipeline is named .gitlab-ci.yml. In this file the complete behaviour of the pipeline is described using stages, jobs, and many other configuration options. All jobs are configured with keywords; the full list of GitLab keywords can be found on their site.
The keywords stages:, workflow: and include: are not used inside jobs; they control the behaviour of the pipeline as a whole. The stages: keyword defines the stages that contain groups of jobs. The workflow: keyword determines whether or not a pipeline is created; by using built-in variables we can choose whether a pipeline should be created. The include: keyword is used to include external YAML files in the CI/CD configuration.
The first part of the .gitlab-ci.yml is the workflow: section, which determines in which situations a pipeline should be created. This is done with rules:. Rules in a workflow accept the keywords if:, when:, and variables:. The if: keyword determines when a pipeline should run, and the when: keyword specifies what to do if the if: rule matches: when: always lets the pipeline run, while when: never prevents it from running. We can use some predefined variables in the workflow rules, for example in if: clauses:
| Variable | Description |
|---|---|
| $CI_PIPELINE_SOURCE == "merge_request_event" | Control when merge request pipelines run. |
| $CI_PIPELINE_SOURCE == "push" | Control when both branch pipelines and tag pipelines run. |
| $CI_COMMIT_TAG | Control when tag pipelines run. |
| $CI_COMMIT_BRANCH | Control when branch pipelines run. |
Workflow example:
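A sketch of what such a workflow section might look like; the specific rule set below (run merge request pipelines, skip branch pipelines when an open merge request exists, run branch pipelines otherwise) is an assumption based on a common pattern from the GitLab documentation, not the article's original example.

```yaml
workflow:
  rules:
    # Run pipelines for merge requests.
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      when: always
    # Skip branch pipelines when an open merge request already exists.
    - if: '$CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS'
      when: never
    # In all other cases, run branch pipelines.
    - if: '$CI_COMMIT_BRANCH'
      when: always
```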
Pipelines do not run when they match a when: never rule; in all other cases they run if the last matching rule contains a when: always.
The variables: keyword can be used to specify variables needed in the pipeline or for use in workflow:rules:. GitLab's predefined variables can be recognized by the $CI_ prefix at the start of the name (see the table). All other variables can be defined by the user. A very detailed explanation of workflow and variables can be found on the GitLab site.
Variables example:
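A hypothetical sketch of user-defined variables; the names DEPLOY_ENVIRONMENT and APP_VERSION are illustrative assumptions, not from the original article:

```yaml
variables:
  DEPLOY_ENVIRONMENT: "staging"
  APP_VERSION: "1.0.0"
```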
The include: keyword is used to import external YAML files: local (from the project repository), file (from a different project's repository), remote (from a URL), or template (provided by GitLab).
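A sketch of the four include: variants; all paths, project names, and the URL below are placeholders:

```yaml
include:
  - local: '/ci/build.yml'                    # from this repository
  - project: 'my-group/my-other-project'      # from another repository
    file: '/ci/deploy.yml'
  - remote: 'https://example.com/ci/common.yml'
  - template: 'Security/SAST.gitlab-ci.yml'   # provided by GitLab
```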
The image: keyword is used to specify the Docker image in which the jobs will run. An image: defined in a job overrides the default image.
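For example, a default image with a per-job override might look like this (the job name and image tags are illustrative):

```yaml
image: node:18        # default image for all jobs

unit-tests:
  image: node:20      # overrides the default image for this job only
  script:
    - npm test
```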
The script: keyword specifies the commands for the runner; each job requires a script: keyword. Two special script keywords are before_script:, which runs before the other scripts, and after_script:, which runs, as the name already says, after the other scripts.
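All three in one job could be sketched as follows (the job name and commands are illustrative):

```yaml
test-job:
  before_script:
    - echo "Preparing the test environment..."
  script:
    - echo "Running the tests..."
  after_script:
    - echo "Cleaning up..."   # runs even if the script section fails
```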
The stage: keyword bundles the jobs defined in that stage, and those jobs can run in parallel. If no stage is defined, a job uses the test stage by default.
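A sketch with illustrative job names: both jobs below end up in the test stage and can run in parallel, even though lint-job never declares a stage.

```yaml
stages:
  - test

unit-tests:
  stage: test
  script:
    - echo "running unit tests"

lint-job:            # no stage: keyword, so it defaults to test
  script:
    - echo "running the linter"
```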