Dockerize a Spring Boot Application
This post covers the basic concepts of Docker and Docker Compose and how they can be applied to set up the Spring Boot Application created in part one of the series.
What is Docker?
Docker allows an application and all its dependencies to be packaged in a Container that will always run the same regardless of the environment it's deployed in. The term "dependencies" here refers to the components an application needs to run, like Java, OS packages or libraries like ImageMagick, any other files or folders, basically anything that can be installed on a server. The big difference between Docker and a Virtual Machine is that Docker Containers share the host operating system kernel, which makes them lightweight, quick to start and efficient in their use of RAM.
Docker Concepts
Docker Image
A Docker Image acts as a blueprint or template for a Docker Container. For example, docker run ubuntu will create a new Docker Container based on the ubuntu Docker Image. The container does not modify the image in any way unless changes are explicitly committed. New images can easily be created by inheriting from existing images, and online repositories like Docker Hub make it easy to share and find images created by the community.
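As a quick sketch of that workflow (the container and image names here are made up for illustration), a container started from a Docker Hub image can be modified and then committed to a new image:
# pull the official ubuntu image from Docker Hub
docker pull ubuntu
# start an interactive container from the image
docker run -it --name my-ubuntu ubuntu /bin/bash
# after installing packages inside the container and exiting,
# commit the modified container to a new local image
docker commit my-ubuntu my-ubuntu-with-curl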
Dockerfile
The preferred way to create a Docker Image is with a script known as a Dockerfile. Alternatively, the required changes can be made by running a shell in a container and then committing the container to an image. Dockerfiles have the advantage that image creation can be automated, and the syntax is simple, clear and self-explanatory. The example below will create an image based on ubuntu that runs the echo "Hello docker!" command when a container is created from this image.
#Sample Dockerfile
FROM ubuntu
CMD "echo" "Hello docker!"
Docker Container
A Docker Container is a running instance of a Docker Image and there can be many running instances of the same image. A Docker Volume can be used to persist changes to the file system in a Docker Container. Every time a docker run <image> command is executed a new container is created from the given image. Docker Containers that expose ports can have those ports mapped to a port on the host where the Docker Daemon is running, for example docker run -p 8080:80 <name|id>.
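As an example of port mapping (the container name and host port are arbitrary), the official nginx image can be exposed on the host like this:
# map port 80 inside the container to port 8080 on the host
docker run -d -p 8080:80 --name web-test nginx
# the server is now reachable from the host
# (on OSX replace localhost with the Docker Machine VM's IP)
curl http://localhost:8080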
Docker Volumes
Changes an application makes to the filesystem (like writing log or database files) will not persist beyond the lifecycle of a Docker Container unless these files are written to a mounted Volume. A Volume remains available after a Docker Container that uses it is destroyed.
To create a data Volume for Mongo and then start a Mongo Container that uses it, run:
docker volume create --name data-mongo
docker run -v data-mongo:/data/db mongo
Then, to back up the Mongo data into the current working directory:
docker run --rm -v data-mongo:/data/mongo -v $(pwd):/backup busybox tar cvf /backup/mongo-data.tar /data
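Restoring that backup follows the same pattern; a sketch, assuming mongo-data.tar is still in the current working directory:
# extract the archive back into the data-mongo Volume
docker run --rm -v data-mongo:/data/mongo -v $(pwd):/backup busybox tar xvf /backup/mongo-data.tar -C /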
docker volume inspect <name|id> will show the mount point of the Volume. On OSX this mount point refers to a folder inside the VirtualBox VM started by Docker Machine, not a local folder.
Docker Daemon
This is the process that manages Docker Containers. Since Docker makes use of Linux kernel features, the Docker Daemon has to run in a virtual machine when using OSX. The Docker Machine utility can be used to set up such a virtual machine on OSX. It also configures the OSX terminal (or iTerm) with environment variables that tell the docker cli where the Docker Daemon is running.
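A typical Docker Machine session on OSX looks something like this (the machine name default is just the conventional example):
# create a VirtualBox VM that runs the Docker Daemon
docker-machine create --driver virtualbox default
# print the environment variables the docker cli needs
docker-machine env default
# export those variables into the current shell session
eval "$(docker-machine env default)"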
Docker Image Repository
Image Repositories act as the GitHub of Docker Images. Images can be pushed to and pulled from these repositories, and both public and private repositories exist.
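Pushing to Docker Hub is a matter of tagging a local image with a repository name and pushing it; the account and image names below are purely illustrative:
# authenticate against Docker Hub
docker login
# tag a local image as <account>/<repository>:<tag>
docker tag my-ubuntu-with-curl myaccount/my-ubuntu-with-curl:1.0
# push the tagged image to the repository
docker push myaccount/my-ubuntu-with-curl:1.0
# anyone with access can now pull it
docker pull myaccount/my-ubuntu-with-curl:1.0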
Docker Compose
Docker Compose is a tool that simplifies the configuration of multi-container applications. A single YAML file can be used to define all the required Containers, configure the networking between them, and declare the Volumes used for persistence. A docker-compose up command will build and start all the containers defined in the docker-compose.yml file.
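Besides up, a few other Docker Compose commands come in handy day to day:
# build and start all containers in the background
docker-compose up -d
# list the containers managed by the compose file
docker-compose ps
# follow the logs of all services
docker-compose logs -f
# stop and remove the containers (named volumes are kept)
docker-compose down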
Basic Docker Commands
- docker run <image> - Creates a new Container from the image each time it is run.
- docker start <name|id> - Starts an existing Container.
- docker stop <name|id> - Stops an existing Container.
- docker ps [-a] - Lists Containers; -a includes stopped Containers.
- docker rm <name|id> - Removes a Container.
- docker rm $(docker ps -aq) - Removes all Containers.
- docker images - Lists built and downloaded Images.
- docker rmi node - Removes the Image named 'node'.
- docker exec -t -i <name|id> /bin/bash - Opens a shell in a running Container.
- docker volume ls - Lists the created data Volumes.
- docker build -t <name> . - Builds an Image from a Dockerfile in the current directory.
Building a Docker Image With Gradle
To build a Docker Image out of the Spring Boot Application built in part one of this series, the Gradle Docker plugin can be added to the build.gradle file.
...
dependencies {
    ...
    // the gradle-docker plugin is added to the buildscript classpath
    classpath('se.transmode.gradle:gradle-docker:1.2')
    ...
}
...
apply plugin: 'docker'

task buildDocker(type: Docker, dependsOn: build) {
    push = false
    applicationName = "monkey-codes/${jar.baseName}"
    dockerfile = file('src/main/docker/Dockerfile')
    doFirst {
        // stage the application jar so the Dockerfile can ADD it
        copy {
            from jar
            into stageDir
        }
    }
}
The next step is to add the src/main/docker/Dockerfile that contains the instructions Docker needs to build the Image for the application.
FROM java:8
# add the fat jar built by Gradle to the image as app.jar
ADD spring-boot-restful-0.0.1-SNAPSHOT.jar app.jar
# give the jar a modification time, since files added to the image start out "unmodified"
RUN bash -c 'touch /app.jar'
# use /dev/urandom to speed up Tomcat's startup entropy gathering
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
With this setup the application Image can be built with:
./gradlew buildDocker
Then launched with:
docker run --rm monkey-codes/spring-boot-restful
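To reach the application from the host, the container port still has to be mapped; assuming the application listens on Spring Boot's default port 8080:
# map the application's port 8080 to port 8080 on the host
docker run --rm -p 8080:8080 monkey-codes/spring-boot-restful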
The application still needs Mongo, Prometheus and Grafana to function; that is where Docker Compose comes in handy.
Assembling All The Parts
Create a new Docker Compose file in src/main/docker/docker-compose.yml
version: '2'
volumes:
  data-mongo:
    external:
      name: spring-boot-restful-data-mongo
  data-prometheus:
    external:
      name: spring-boot-restful-data-prometheus
  data-grafana:
    external:
      name: spring-boot-restful-data-grafana
  data-logging:
    external:
      name: spring-boot-restful-data-logs
  data-elasticsearch:
    external:
      name: spring-boot-restful-data-elasticsearch
services:
  web:
    image: monkey-codes/spring-boot-restful
    container_name: web
    ports:
      - "80:8080"
    links:
      - db
      - fluentd
  db:
    image: mongo
    container_name: db
    volumes:
      - data-mongo:/data/db
  prometheus:
    build: ./prometheus
    image: prometheus
    container_name: prometheus
    volumes:
      - data-prometheus:/prometheus
    ports:
      - "9090:9090"
    links:
      - web
  grafana:
    image: grafana/grafana
    container_name: grafana
    volumes:
      - data-grafana:/var/lib/grafana
    ports:
      - "3000:3000"
    links:
      - prometheus
  fluentd:
    build: ./fluentd
    container_name: fluentd
    volumes:
      - data-logging:/fluentd/log
    ports:
      - "24224:24224"
    links:
      - elasticsearch
  elasticsearch:
    image: elasticsearch
    container_name: elasticsearch
    volumes:
      - data-elasticsearch:/usr/share/elasticsearch/data
  kibana:
    image: kibana
    container_name: kibana
    ports:
      - "5601:5601"
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    links:
      - elasticsearch
The configuration has two parts: volumes and services. The volumes section lists the data Volumes the application needs to persist data. Launching the application will prompt for these Volumes to be created if they do not exist. The services section defines all the components of the application. The options defined for each service include the ports that need to be forwarded from the host, the Volumes that need to be mounted, and the links that define dependencies between the services. Docker Compose maintains a complete reference of all the available options.
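Because the Volumes are declared as external they have to exist before the stack is started. A sketch of the commands, run from the directory containing the compose file:
# create the external data Volumes referenced in docker-compose.yml
docker volume create --name spring-boot-restful-data-mongo
docker volume create --name spring-boot-restful-data-prometheus
docker volume create --name spring-boot-restful-data-grafana
docker volume create --name spring-boot-restful-data-logs
docker volume create --name spring-boot-restful-data-elasticsearch
# build and start all the services defined in docker-compose.yml
docker-compose up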
Links are by far the most interesting option: a link exposes the dependency under a hostname with the same name as the service. For example, the web service can connect to the db service by using mongodb://db.
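A quick way to see the link in action (assuming the base image ships the usual glibc tools) is to resolve the db hostname from inside the running web container:
# resolve the db hostname from inside the web container
docker exec web getent hosts db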
The components of this application include:
- web - The Spring Boot Application created in part one.
- db - The Mongo Container that persists the application data.
- prometheus - A time series database that collects metrics from Spring Boot Actuator.
- grafana - Creates dashboards from the application data collected by Prometheus.
- fluentd - Collects all application logs and pushes them into Elasticsearch.
- elasticsearch - Stores the application logs.
- kibana - Provides an interface into the log data stored in Elasticsearch.
The combination of Fluentd, Elasticsearch and Kibana acts as an alternative to Splunk.
Conclusion
“But it works on my machine”— Confused Developer
Docker is a powerful tool that reduces issues caused by differences between development and production environments by defining Docker Images that can be shared between those environments. Furthermore, I believe it can be used as a tool to standardize development environments across team members and to reduce the time new recruits need to get up and running.
The code described in this post is available on GitHub.
Sources
- Data volume containers: http://stackoverflow.com/questions/18496940/how-to-deal-with-persistent-storage-e-g-databases-in-docker
- Fluentd logging example with docker-compose: https://github.com/j-fuentes/compose-fluentd and http://www.fluentd.org/architecture
- Docker Machine: https://docs.docker.com/machine/
- Spring Boot Docker: https://spring.io/guides/gs/spring-boot-docker/