Jenkins for CI/CD
To deliver software projects successfully in today’s ultra-competitive environments, you need a low cycle time to get changes into production while still maintaining high quality. How can this be achieved?
- Build, deploy, test, and release should be automated and easily repeatable. It should be an engineering discipline, not an art.
- Changes should be frequent; this means smaller deltas between releases, which in turn reduces the risk of each release and makes it easier to roll back.
“If it hurts, do it more frequently, and bring the pain forward.” — Jez Humble
Jenkins is a free, open-source tool that has been around for a long time, but don’t let the dated interface fool you. It is a Swiss army knife in the CI/CD space. One of its biggest features is Pipeline as Code. Every project can commit a Jenkinsfile as part of the project’s git repository that describes the build and deployment pipeline of the project. Jenkins can be configured to continuously scan your organisation’s git repos for Jenkinsfiles and automatically set up pipelines for these projects.
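As a taste of Pipeline as Code, a minimal declarative Jenkinsfile might look like the following sketch; the stage names and shell commands are placeholders you would replace with your project’s own build and deploy steps:

```groovy
// Jenkinsfile committed to the root of the project's git repository
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './gradlew build'   // placeholder: your project's build command
            }
        }
        stage('Deploy') {
            steps {
                sh './deploy.sh'       // placeholder: your project's deployment script
            }
        }
    }
}
```

When Jenkins scans the organisation’s repositories, any branch containing a file like this automatically gets a pipeline.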
In this post we will look at how to set up private Jenkins CI/CD infrastructure on AWS.
Infrastructure as Code
To ensure that the required infrastructure can be created in a repeatable manner this example will make use of Cloudformation and Ansible. All the code should go into a git repository that provides a full audit trail of changes. This makes it easy to rollback changes and ensures that no drift occurs between what is deployed on AWS and what is in the infrastructure code.
Jenkins AMI
Step one is to create an AMI that will form the base of both the Jenkins master and slave instances. Only a handful of packages need to be installed on the AMI, namely: Docker, Docker Compose, Java, Git, and the AWS ECR credential helper, which allows Jenkins to pull Docker images from a private ECR.
This example will make use of Packer and Ansible to build the AMI. The Packer config for the AMI uses Amazon Linux as a base image and Ansible to provision the required packages:
ami.json
{
"builders" : [
{
"region" : "ap-southeast-2",
"type" : "amazon-ebs",
"instance_type" : "t2.micro",
"source_ami_filter": {
"filters": {
"virtualization-type": "hvm",
"architecture": "x86_64",
"root-device-type": "ebs",
"name": "amzn2-ami-*-gp2"
},
"owners": ["137112412989"],
"most_recent": true
},
"ssh_username" : "ec2-user",
"ami_name" : "jenkins-{{isotime \"2006-01-02\"}}",
"ami_description" : "AMI image for Jenkins. Can be used for master and slaves. It contains ansible, docker, git and corretto8",
"encrypt_boot": true
}
],
"provisioners" : [
{
"type": "shell",
"inline": [
"sudo yum update -y",
"sudo amazon-linux-extras enable ansible2 corretto8 docker",
"sudo amazon-linux-extras install ansible2 -y"
]
},
{
"type" : "ansible-local",
"playbook_file" : "{{ user `playbook_file` }}",
"extra_arguments": ["--extra-vars \"aws_account_no={{user `aws_account_no`}}\""]
}
]
}
playbook.yml
---
- name: Setup AMI for jenkins
hosts: 127.0.0.1
connection: local
gather_facts: true
become: yes
tasks:
- name: Install packages
yum:
name:
- amazon-ecr-credential-helper
- docker
- git
- java-1.8.0-amazon-corretto-devel
- python3-pip
state: present
- name: Update awscli
shell:
cmd: pip3 install --upgrade awscli
- name: Add ec2-user to docker group
user:
name: ec2-user
groups: docker
append: true
- name: Enable docker
service:
name: docker
state: started
enabled: true
- name: Create java bin directory
file: path=/usr/local/bin/java/bin state=directory
- name: Symlink java executable
file:
src: /usr/bin/java
dest: /usr/local/bin/java/bin/java
state: link
- name: Docker version command
shell:
cmd: docker --version
register: docker_version_cmd
- name: Docker version regexp
set_fact:
docker_version: "{{ docker_version_cmd.stdout | regex_search('(\\d+\\.\\d+\\.\\d+-ce)') }}"
- name: Assert docker version is not empty
assert:
that:
- docker_version != ""
fail_msg: "docker version is empty"
success_msg: "docker version set {{ docker_version }}"
- name: Create docker config dir
file:
path: /home/ec2-user/.docker
state: directory
owner: ec2-user
group: ec2-user
- name: Docker config content
set_fact:
docker_config_content: |
{
"HttpHeaders": {
"User-Agent": "Docker-Client/{{ docker_version }} (linux)"
},
"credHelpers": {
"{{aws_account_no}}.dkr.ecr.ap-southeast-2.amazonaws.com": "ecr-login"
}
}
- name: Configure docker credentials helper
copy:
dest: "/home/ec2-user/.docker/config.json"
content: "{{ docker_config_content | to_nice_json }}"
owner: ec2-user
group: ec2-user
mode: 0600
You will have to bootstrap this from your local machine the first time, but once Jenkins is up and running this can be placed in a pipeline that automatically builds future changes to the AMI. Mount the directory containing the above files into a Packer container to build the AMI:
$ docker run --rm -it --entrypoint="" -v $(pwd):/workspace \
-v ~/.aws:/root/.aws \
-w /workspace hashicorp/packer:light packer build \
-var 'playbook_file=playbook.yml' \
-var 'aws_account_no=0000000000' \
ami.json
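Once Jenkins is running, the same Packer build can be wrapped in a pipeline stage so that commits to the AMI repository rebuild the image automatically. A sketch, assuming the repository contains the files above (the account number is a placeholder, and the `~/.aws` mount is no longer needed because the build inherits credentials from the Jenkins instance profile):

```groovy
// Jenkinsfile for the AMI repository: rebuild the AMI on every commit
pipeline {
    agent any
    stages {
        stage('Build AMI') {
            steps {
                sh '''
                    docker run --rm --entrypoint="" -v $(pwd):/workspace \
                      -w /workspace hashicorp/packer:light packer build \
                      -var 'playbook_file=playbook.yml' \
                      -var 'aws_account_no=0000000000' \
                      ami.json
                '''
            }
        }
    }
}
```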
Jenkins Master
The Jenkins master can run on a small EC2 instance, since all the jobs will be farmed out to Spot Fleet instances that are spun up and shut down as demand changes. Jenkins itself can run inside a Docker container based on the official image. Running Jenkins inside a Docker container makes upgrades and rollbacks of the Jenkins version easy.
Customisations to the official image are limited to the following:
- Installing a self-signed certificate to ensure TLS between the load balancer and Jenkins.
- Installing the Docker client to allow Jenkins to launch Docker containers. Note that when we start Jenkins, the host EC2 instance’s Docker socket will be mounted into the container. This allows Jenkins to launch containers on the host from inside the container.
- Optionally including a custom.groovy and plugins.txt to configure Jenkins automatically.
Dockerfile
FROM alpine AS cmps
ARG compose_version=1.21.1
RUN apk --no-cache add python py-pip git openssl && \
git clone --depth 1 --branch ${compose_version} https://github.com/docker/compose.git /code/compose && \
cd /code/compose && \
pip --no-cache-dir install -r requirements.txt -r requirements-dev.txt pyinstaller==3.1.1 && \
git rev-parse --short HEAD > compose/GITSHA && \
ln -s /lib /lib64 && ln -s /lib/libc.musl-x86_64.so.1 ldd && \
ln -s /lib/ld-musl-x86_64.so.1 /lib64/ld-linux-x86-64.so.2 && \
pyinstaller docker-compose.spec && \
rm -f /lib64/ld-linux-x86-64.so.2 /lib64 ldd || true && \
mv dist/docker-compose /usr/local/bin/docker-compose && \
pip freeze | xargs pip uninstall -y && \
apk del python py-pip git && \
rm -rf /code /usr/lib/python2.7/ /root/.cache /var/cache/apk/* && \
chmod +x /usr/local/bin/docker-compose && \
mkdir /var/lib/jenkins && \
openssl genrsa -out /var/lib/jenkins/private_key.pem && \
openssl req -new -key /var/lib/jenkins/private_key.pem -out /var/lib/jenkins/csr.pem -subj "/C=US/ST=New Sweden/L=Stockholm /O=Private/OU=Private/CN=jenkins-ec2.gotomytest.site/emailAddress=null@localhost.com" && \
openssl x509 -req -days 9999 -in /var/lib/jenkins/csr.pem -signkey /var/lib/jenkins/private_key.pem -out /var/lib/jenkins/cert.pem
FROM jenkins/jenkins:alpine
ENV JENKINS_USER=jenkins
USER root
RUN apk --no-cache add shadow su-exec
COPY --from=cmps /usr/local/bin/docker-compose /usr/bin/docker-compose
COPY --from=cmps /var/lib/jenkins/* /var/lib/jenkins/
RUN ln -s /lib /lib64 && \
ln -s /lib/ld-musl-x86_64.so.1 /lib64/ld-linux-x86-64.so.2 && \
curl -fsSL https://download.docker.com/linux/static/stable/x86_64/docker-19.03.15.tgz | tar xvz -C /tmp/ && \
mv /tmp/docker/docker /usr/bin/docker && \
chown jenkins. /var/lib/jenkins
ENV JENKINS_OPTS --httpPort=-1 --httpsPort=8083 --httpsCertificate=/var/lib/jenkins/cert.pem --httpsPrivateKey=/var/lib/jenkins/private_key.pem
EXPOSE 8083
For backups, we attach a separate EBS volume to the EC2 instance and mount the EBS mount point into the container under the JENKINS_HOME directory. Taking snapshots of the EBS volume effectively backs up build history and plugin installations.
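Snapshotting can itself be automated with a scheduled pipeline. A sketch, where the volume id is a placeholder for the Jenkins data volume created in the storage stack:

```groovy
// Scheduled pipeline that snapshots the JENKINS_HOME volume nightly
pipeline {
    agent any
    triggers { cron('H 2 * * *') }   // once a night, at a hashed minute
    stages {
        stage('Snapshot Jenkins data volume') {
            steps {
                // vol-0123456789abcdef0 is a placeholder volume id
                sh '''
                    aws ec2 create-snapshot \
                      --volume-id vol-0123456789abcdef0 \
                      --description "jenkins-home-$(date +%Y-%m-%d)" \
                      --region ap-southeast-2
                '''
            }
        }
    }
}
```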
Initial bootstrapping of the Jenkins infrastructure will have to happen from your local PC, but after that, updates to the infrastructure can happen via a Jenkins pipeline: Jenkins updates its own infrastructure. The way this works is that you commit changes to the Jenkins infrastructure Cloudformation, which will be picked up and built by Jenkins. The cfn-hup agent running on the EC2 host will detect the change, re-run cfn-init to perform the update, and then restart the Jenkins Docker container.
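The self-updating flow can be expressed as a pipeline stage that simply pushes the changed template to Cloudformation; cfn-hup on the host does the rest. A sketch with assumed stack and file names:

```groovy
// Jenkinsfile for the infrastructure repository (stack and file names are illustrative)
pipeline {
    agent any
    stages {
        stage('Update Jenkins stack') {
            steps {
                sh '''
                    aws cloudformation deploy \
                      --stack-name jenkins \
                      --template-file cfn-jenkins.yml \
                      --capabilities CAPABILITY_IAM \
                      --region ap-southeast-2
                '''
            }
        }
    }
}
```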
Jenkins EC2 Cloudformation Resource
cfn-jenkins.yml
JenkinsHost:
Type: AWS::EC2::Instance
Metadata:
AWS::CloudFormation::Init:
config:
packages:
yum:
awslogs: []
docker: []
git: []
files:
"/etc/cfn/cfn-hup.conf":
mode: "000444"
owner: root
group: root
content: !Sub |
[main]
stack=${AWS::StackId}
region=${AWS::Region}
interval=2
verbose=true
"/etc/cfn/hooks.d/cfn-auto-reloader.conf":
mode: "000444"
owner: root
group: root
content: !Sub |
[cfn-auto-reloader-hook]
triggers=post.update
path=Resources.JenkinsHost.Metadata.AWS::CloudFormation::Init
action=/opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --resource JenkinsHost --region ${AWS::Region}
runas=root
"/etc/awslogs/awslogs.conf":
mode: "000444"
owner: root
group: root
content: !Sub |
[general]
use_gzip_http_content_encoding = true
state_file = /var/lib/awslogs/agent-state
[/var/log/secure]
file = /var/log/secure
log_group_name = ${JenkinsSecureLogGroup}
log_stream_name = log
datetime_format = %b %d %H:%M:%S
"/etc/awslogs/awscli.conf":
mode: "000444"
owner: root
group: root
content: !Sub |
[plugins]
cwlogs = cwlogs
[default]
region = ${AWS::Region}
"/etc/systemd/system/docker.jenkins.service":
mode: "000444"
owner: root
group: root
content: !Sub |
[Unit]
Description=Jenkins Container
After=docker.service
Requires=docker.service
[Service]
TimeoutStartSec=0
Restart=always
ExecStartPre=-/usr/bin/docker stop %n
ExecStartPre=-/usr/bin/docker rm %n
ExecStart=/usr/bin/docker run --rm --name %n -p 80:8080 -p 443:8083 -u root --privileged=true --log-driver=awslogs --log-opt awslogs-region=${AWS::Region} --log-opt awslogs-group=${JenkinsSecureLogGroup} --log-opt awslogs-stream=log -v /var/run/docker.sock:/var/run/docker.sock -v /mnt/data/jenkins/:/var/jenkins_home -v /mnt/data/jenkins-tmp:/tmp ${JenkinsDockerImage}
[Install]
WantedBy=multi-user.target
"/home/ec2-user/setup-jenkins.sh":
mode: "000550"
owner: root
group: root
content: !Sub |
#!/bin/bash -xe
cd /home/ec2-user
if [ $(file -s /dev/sdf -L | grep ext4 | wc -l) == 0 ]; then
echo "create fs and mount"
mkfs -t ext4 /dev/sdf && mkdir -p /mnt/data && mount /dev/sdf /mnt/data
echo "$(blkid /dev/sdf | cut -d" " -f2 | sed 's/\"//'g) /mnt/data ext4 defaults 0 0" >> /etc/fstab
fi
if [ $(mount | grep "/mnt/data" | wc -l) == 0 ]; then
mkdir -p /mnt/data && mount /dev/sdf /mnt/data
fi
mkdir -p /mnt/data/jenkins
mkdir -p /mnt/data/jenkins-tmp
chown ec2-user.docker /mnt/data/jenkins
chown ec2-user.docker /mnt/data/jenkins-tmp
$(aws ecr get-login --region ap-southeast-2 --no-include-email)
docker pull ${JenkinsDockerImage}
systemctl daemon-reload
systemctl enable docker.jenkins
systemctl restart docker.jenkins
"/etc/cron.hourly/cleanup-docker-images":
mode: "000550"
owner: root
group: root
content: !Sub |
#!/bin/sh
/usr/bin/docker image prune -af
commands:
setup-jenkins:
command: /home/ec2-user/setup-jenkins.sh
services:
sysvinit:
cfn-hup:
enabled: true
ensureRunning: true
files:
- /etc/cfn/cfn-hup.conf
- /etc/cfn/hooks.d/cfn-auto-reloader.conf
awslogsd:
enabled: true
ensureRunning: true
files: /etc/awslogs/awslogs.conf
Properties:
InstanceType: !Ref MasterInstanceType
KeyName: !Ref KeyName
SubnetId: {'Fn::ImportValue': !Sub '${NetworkStackName}:PrivateSubnet1'}
SourceDestCheck: true
SecurityGroupIds:
- !Ref JenkinsSecurityGroup
ImageId: !Ref AMIID
Volumes:
-
Device: "/dev/sdf"
VolumeId: {'Fn::ImportValue': !Sub '${JenkinsStorageStackName}:JenkinsDataVolume'}
UserData:
Fn::Base64: !Sub |
#!/bin/bash -xe
yum update -y
/opt/aws/bin/cfn-init -v -s ${AWS::StackId} --resource JenkinsHost --region ${AWS::Region}
/opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackId} --resource JenkinsHost --region ${AWS::Region}
IamInstanceProfile: !Ref JenkinsInstanceProfile
Tags:
- Key: Name
Value: 'jenkins'
CreationPolicy:
ResourceSignal:
Count: 1
Timeout: PT10M
A security group limits SSH access to Jenkins to a fixed CIDR block (most likely the IP of a bastion host deployed in the public subnet) but allows HTTP/HTTPS access from anywhere (0.0.0.0/0). Since Jenkins will be deployed in a private subnet, this effectively limits access to anything else deployed in the private subnet; external traffic routes to Jenkins via an application load balancer.
Jenkins Security Group
cfn-jenkins-sg.yml
JenkinsSecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: Enable access to the cicd host
VpcId: {'Fn::ImportValue': !Sub '${NetworkStackName}:VpcId'}
SecurityGroupIngress:
- CidrIp: !Join [ "", [ {'Fn::ImportValue': !Sub '${BastionStackName}:PrivateIp'} , "/32"]]
IpProtocol: tcp
ToPort: 22
FromPort: 22
- CidrIp: !Ref HTTPFrom
IpProtocol: tcp
ToPort: 80
FromPort: 80
- CidrIp: !Ref HTTPFrom
IpProtocol: tcp
ToPort: 443
FromPort: 443
Tags:
- Key: Name
Value: !Sub "${AWS::StackName}-JenkinsSecurityGroup"
Since Jenkins will be the central piece of infrastructure that deploys applications onto AWS infrastructure via Cloudformation, it needs certain AWS permissions to perform that action. Jenkins will get its permissions via a permissions policy attached to the IAM role assigned to the EC2 host instance. I recommend starting with minimum permissions and, as you run into permission-related deployment issues, updating the permissions policy on the IAM role (in Cloudformation) and letting the Jenkins pipeline take care of updating the infrastructure.
Last but not least, add a DNS record to Route53 for Jenkins, as well as a load balancer target group and a host-based routing listener rule to forward requests from the ALB to the Jenkins EC2 instance.
Load Balancer & DNS Configuration
cfn-jenkins-alb-route53.yml
JenkinsTargetGroup:
Type: AWS::ElasticLoadBalancingV2::TargetGroup
Properties:
HealthCheckIntervalSeconds: 15
HealthCheckPath: /login
HealthCheckPort: 443
HealthCheckProtocol: HTTPS
HealthCheckTimeoutSeconds: 10
HealthyThresholdCount: 5
Matcher:
HttpCode: '200'
Name: 'JenkinsTargetGroup'
Port: 443
Protocol: HTTPS
Targets:
- Id: !Ref JenkinsHost
Port: 443
TargetType: instance
UnhealthyThresholdCount: 3
VpcId: {'Fn::ImportValue': !Sub '${NetworkStackName}:VpcId'}
JenkinsListenerRule:
Type: AWS::ElasticLoadBalancingV2::ListenerRule
Properties:
Actions:
- Type: forward
TargetGroupArn: !Ref JenkinsTargetGroup
Conditions:
- Field: host-header
Values:
- !Sub "${Hostname}.${DomainName}"
ListenerArn: {'Fn::ImportValue': !Sub '${ALBStackName}:ALBListener'}
Priority: 10
JenkinsALBDNSRecord:
Type: AWS::Route53::RecordSet
Properties:
HostedZoneId: {'Fn::ImportValue': !Sub '${DomainStackName}:HostedZoneID'}
Comment: DNS name for my instance.
Name: !Sub "${Hostname}.${DomainName}"
Type: CNAME
TTL: '60'
ResourceRecords:
- {'Fn::ImportValue': !Sub '${ALBStackName}:ALBDnsName'}
Spot Fleet Slaves
Using AWS Spot Fleet instances as slaves is a good way to keep costs down. The ec2-fleet Jenkins plugin can be used to manage demand for slave nodes: when the build queue grows, the plugin provisions more spot instances (if any are available) up to a defined limit, and when a slave has been idle for a given period it is shut down.
We use the same AMI for the slaves as for the master, with one key difference: the slave agent runs as a Java process directly on the slave EC2 instance, and from there the agent launches Docker containers as needed/defined by whatever pipeline it has been tasked with building.
An EBS volume is also attached to each slave and deleted upon termination. This is required because the default size of 8 GB would quickly be consumed by the local cache of Docker images.
Spot Fleet Request
cfn-jenkins-spotfleet.yml
SlaveSpotFleet:
Type: AWS::EC2::SpotFleet
Properties:
SpotFleetRequestConfigData:
IamFleetRole: !GetAtt [SlaveSpotFleetRole, Arn]
TargetCapacity: !Ref SlaveSpotFleetTargetSize
LaunchSpecifications:
- BlockDeviceMappings:
- DeviceName: '/dev/xvda'
Ebs:
DeleteOnTermination: 'true'
Encrypted: 'true'
VolumeSize: 10
VolumeType: 'gp2'
EbsOptimized: 'true'
InstanceType: !Ref SlaveInstanceType
ImageId: !Ref SlaveAMIID
KeyName: !Ref KeyName
SubnetId: {'Fn::ImportValue': !Sub '${NetworkStackName}:PrivateSubnet1'}
UserData:
Fn::Base64: !Sub |
#!/bin/bash -xe
if [ $(file -s /dev/nvme1n1 -L | grep ext4 | wc -l) == 0 ]; then
echo "create fs and mount"
sudo mkfs -t ext4 /dev/nvme1n1 && sudo mount /dev/nvme1n1 /tmp
chmod 1777 /tmp
fi
systemctl restart cloud-init
SecurityGroups:
- GroupId: !GetAtt JenkinsSecurityGroup.GroupId
IamInstanceProfile:
Arn: !GetAtt [JenkinsInstanceProfile, Arn]
Trigger Pipelines Using Web Hooks
To automatically trigger a pipeline on Jenkins when a change is committed, git webhooks can be used. There may be variations on this depending on whether you are using GitHub or Bitbucket.
Using Docker As Stage Agents
To keep Jenkins robust and easy to upgrade, I suggest limiting Jenkins plugins to an absolute minimum and relying on Docker images configured through the agent directive at the stage level of your pipeline (in the Jenkinsfile). In my experience, some Jenkins plugins can be flaky and break between upgrades, so limiting the installation of plugins will make maintenance easier.
Using Docker is a great way to include any specific build tools (think Java, Node, Maven, etc.) or deployment tools needed for your project. It is easy to upgrade and roll back and does not affect the stability of Jenkins itself. Rely on official images from Docker Hub where possible for security reasons, but you always have the option of rolling your own image if something specific is required. Because the AWS ECR credential helper is installed on the host EC2 instance, custom images can be pulled from a private ECR.
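In a Jenkinsfile this looks like the following, where each stage declares the tooling image it needs; the image names and commands here are examples:

```groovy
pipeline {
    agent none
    stages {
        stage('Build') {
            // official Maven image from Docker Hub
            agent { docker { image 'maven:3-jdk-8' } }
            steps {
                sh 'mvn -B clean package'
            }
        }
        stage('Deploy') {
            // hypothetical custom image from a private ECR, pulled via the credential helper
            agent { docker { image '0000000000.dkr.ecr.ap-southeast-2.amazonaws.com/deploy-tools:latest' } }
            steps {
                sh 'ansible-playbook deploy.yml'
            }
        }
    }
}
```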
Output of a Stage
One common mistake in configuring pipelines is assuming that the output of one stage will be available in the next. This is not always true, since Jenkins may execute each stage on a different slave. I have found an artifact repository like Nexus to be a good way of storing artifacts produced by a stage in a pipeline. It caters to many different repository types, like Maven and npm, and it also supports storing binaries on S3, which keeps storage costs down.
If a later stage depends on something produced earlier in the pipeline, it can simply download it from Nexus. Using the Jenkins build number as the version number for artifacts in Nexus works well.
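For a raw (non-Maven) artifact this can be as simple as an HTTP upload keyed on the build number. A sketch of two stage fragments; the Nexus URL, repository name, and credential variables are assumptions:

```groovy
// Publish in one stage, consume in a later stage (possibly on another slave).
// NEXUS_USER/NEXUS_PASS are assumed to be injected as credentials;
// nexus.example.com and the 'raw' repository are placeholders.
stage('Publish') {
    steps {
        sh '''
            curl --fail -u $NEXUS_USER:$NEXUS_PASS --upload-file build/app.tar.gz \
              https://nexus.example.com/repository/raw/app/${BUILD_NUMBER}/app.tar.gz
        '''
    }
}
stage('Deploy') {
    steps {
        sh '''
            curl --fail -u $NEXUS_USER:$NEXUS_PASS -o app.tar.gz \
              https://nexus.example.com/repository/raw/app/${BUILD_NUMBER}/app.tar.gz
        '''
    }
}
```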
To host your own private Nexus instance, Cloudformation similar to that described here for Jenkins can be used, the biggest difference being that you run a Nexus Docker container instead of a Jenkins one. (Things like the Spot Fleet request can be removed, but you will need to add an S3 bucket for artifact storage.)
Custom Steps For Jenkinsfile
To reuse code between Jenkinsfiles, common steps can be extracted into a git repo and configured globally in Jenkins as a shared library. These custom steps can then be referenced across all pipelines.
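For example, a shared library repository exposes custom steps through its `vars/` directory. A sketch with a hypothetical `deployStack` step (the step, library, and stack names are illustrative):

```groovy
// vars/deployStack.groovy in the shared-library repo (step name is illustrative)
def call(String stackName, String templateFile) {
    sh "aws cloudformation deploy --stack-name ${stackName} " +
       "--template-file ${templateFile} --capabilities CAPABILITY_IAM"
}

// Any project's Jenkinsfile can then load the library and call the step:
// @Library('my-shared-library') _
// ...
// steps { deployStack('my-service', 'cfn-my-service.yml') }
```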
I have found that Cloudformation combined with Ansible is a good way to define infrastructure as code and to describe deployments on AWS. Ansible variables can be used to configure differences between environments (staging vs prod), and these variables can be passed through the Ansible cloudformation module.
In a microservices architecture, the deployment process and required infrastructure will likely look the same across services. This can be extracted into an Ansible role. These common roles and environment variables can then be baked into a custom Docker image that is used by the deployment stage in a pipeline. Each project can then commit its own Ansible playbook alongside the code to describe its deployment and required AWS resources (as a Cloudformation template). With this setup, you retain the flexibility to explicitly define the Cloudformation required for the service or to reference a common Ansible role should it be similar to other deployed services.
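Tying this together, a project’s deployment stage reduces to invoking its playbook with a variable selecting the target environment. A sketch; the image, playbook, and variable names are assumptions:

```groovy
stage('Deploy to staging') {
    // hypothetical custom image with ansible, the common roles, and the awscli baked in
    agent { docker { image '0000000000.dkr.ecr.ap-southeast-2.amazonaws.com/deploy-tools:latest' } }
    steps {
        // env_name selects the staging variable set; the playbook creates or
        // updates the project's Cloudformation stack via the cloudformation module
        sh 'ansible-playbook deploy.yml --extra-vars "env_name=staging"'
    }
}
```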