Last week we had our all-hands meeting: a video call with all the team members. We use these meetings to report what’s going on here at Flugel, to share knowledge, and to define some general guidelines. This time it was our turn to review how we deploy pipelines based on Jenkins.
Recent versions of Jenkins introduced pipelines as first-class citizens, and combined with Pipeline as Code, this lets you fully automate your CI server deployments, leaving the server ready to build artifacts. We use a custom version of the official Jenkins Docker image (https://hub.docker.com/r/jenkinsci/jenkins/).
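As a rough sketch, such a custom image might look like the following Dockerfile. The file names (`plugins.txt`, the `init/` directory) are hypothetical, and the plugin installer tool varies between image versions:

```dockerfile
# Based on the official Jenkins image
FROM jenkins/jenkins:lts

# Pre-install plugins; plugins.txt is a hypothetical list with one
# plugin short name per line (e.g. job-dsl, workflow-aggregator)
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN jenkins-plugin-cli --plugin-file /usr/share/jenkins/ref/plugins.txt

# Groovy init scripts run on startup: authentication, permissions, seed job
COPY init/*.groovy /usr/share/jenkins/ref/init.groovy.d/

# Skip the interactive setup wizard, since all configuration is scripted
ENV JAVA_OPTS="-Djenkins.install.runSetupWizard=false"
```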
What we are doing with the custom image is:
- Adding Groovy scripts to initialize Jenkins configuration: authentication, permissions and the seed job.
- The seed job reads its configuration from a Git repository, so the jobs are configured and updated on each commit. It drives the Job DSL plugin through Groovy scripts; the plugin lets the administrator manage Jenkins job definitions in a declarative language stored in a repository.
- The seed job creates the Pipeline jobs.
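A minimal sketch of one of those initialization Groovy scripts, setting up a local admin account and basic permissions. The account name and the environment variable are hypothetical examples; real credentials would come from a secrets store:

```groovy
// Placed in /usr/share/jenkins/ref/init.groovy.d/, runs at startup.
import jenkins.model.Jenkins
import hudson.security.HudsonPrivateSecurityRealm
import hudson.security.FullControlOnceLoggedInAuthorizationStrategy

def instance = Jenkins.getInstance()

// Local user database with a single admin account (no self sign-up)
def realm = new HudsonPrivateSecurityRealm(false)
realm.createAccount('admin', System.getenv('JENKINS_ADMIN_PASSWORD') ?: 'changeme')
instance.setSecurityRealm(realm)

// Logged-in users get full control; anonymous read access is disabled
def strategy = new FullControlOnceLoggedInAuthorizationStrategy()
strategy.setAllowAnonymousRead(false)
instance.setAuthorizationStrategy(strategy)

instance.save()
```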
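And as a sketch of what the seed job processes, a Job DSL script like this one could define a pipeline job pointing at a Jenkinsfile kept in the application repository. The job name and repository URL are hypothetical:

```groovy
// Job DSL script, re-processed by the seed job on each commit.
pipelineJob('example-app-build') {
    definition {
        cpsScm {
            scm {
                git {
                    remote {
                        url('https://github.com/example/example-app.git')
                    }
                    branch('master')
                }
            }
            // The pipeline definition lives alongside the application code
            scriptPath('Jenkinsfile')
        }
    }
}
```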
The jobs vary from client to client, but some patterns recur. Let’s take a look at one specific example, in this case using Amazon ECS and ECR. We have two jobs, each with its stages:
- Docker Build:
  - Code checkout
  - Run unit tests
  - Docker build
  - Docker registry push (to ECR)
- Docker Deploy:
  - Deploy to ECS cluster
  - Post-deploy tests
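The stages of the first job can be sketched as a declarative pipeline like the one below. The registry URL, image name, test command, and downstream job name are all hypothetical, and it assumes the build agent is already authenticated against ECR:

```groovy
// Sketch of the "Docker Build" job as a declarative pipeline.
pipeline {
    agent any
    environment {
        ECR_REGISTRY = '123456789012.dkr.ecr.us-east-1.amazonaws.com'
        IMAGE        = 'example-app'
    }
    stages {
        stage('Checkout') {
            steps { checkout scm }
        }
        stage('Unit tests') {
            steps { sh 'make test' }
        }
        stage('Docker build') {
            steps { sh "docker build -t ${ECR_REGISTRY}/${IMAGE}:${BUILD_NUMBER} ." }
        }
        stage('Push to ECR') {
            steps { sh "docker push ${ECR_REGISTRY}/${IMAGE}:${BUILD_NUMBER}" }
        }
    }
    post {
        success {
            // Hand off to the deploy job without waiting for it to finish
            build job: 'example-app-deploy', wait: false
        }
    }
}
```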
The first job always triggers the second one, which by default deploys the new container to the development ECS cluster. The deploy job is parameterized, so we can also run it manually to deploy containers to other clusters, such as staging or production.
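The parameterized deploy job could be sketched like this. The cluster names, the service name, and the smoke-test script are hypothetical, and the ECS update is done here with the standard AWS CLI:

```groovy
// Sketch of the parameterized "Docker Deploy" job.
pipeline {
    agent any
    parameters {
        choice(name: 'CLUSTER',
               choices: ['development', 'staging', 'production'],
               description: 'Target ECS cluster')
    }
    stages {
        stage('Deploy to ECS') {
            steps {
                // Force a new deployment so the service pulls the new image
                sh "aws ecs update-service --cluster ${params.CLUSTER} " +
                   "--service example-app --force-new-deployment"
            }
        }
        stage('Post-deploy tests') {
            steps { sh './smoke-tests.sh' }
        }
    }
}
```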
The main point of the meeting was to clarify the process of deploying this system automatically from scratch. The key parts of the automation are the Groovy initialization scripts that set up Jenkins, and the seed job that creates all the other jobs that then run on it.