Manually running builds, executing tests and deploying releases can become a nightmare and an error-prone process. That's where configuring a CI/CD server (in this post we will use Travis) and combining it with Docker container technology pays off.
This is the second post of the Hello Docker series, the first post is available in this link.
In this post we will use Travis to automatically trigger the following processes on every merge to master or pull request:
We will configure this for both a Front End project and a Back End project.
If you want to dig into the details, keep on reading :)
Steps that we are going to follow:
A chat app will be taken as an example. The application is split into two parts: client (Front End) and server (Back End), both of which will be containerized and deployed using Docker containers.
We already have a couple of repositories that together make up the chat application.
Following our current deployment approach (check the previous post in this series), a third piece will be included as well: a load balancer, whose responsibility is to route traffic to the front end or the back end depending on the requested URL. The load balancer will also be containerized and deployed as a Docker container.
In our last post we took an Ubuntu + Nodejs Docker image as our starting point; it was great to retrieve it from Docker Hub and to have control over which version you're downloading.
Wouldn't it be cool to be able to push our own images to that Docker Hub Registry including versioning? That's what Docker Hub offers you: you can create your own account and upload your Docker Images there.
Advantages of using Docker Hub:
Docker Hub is great to get started: you can create an account for free and upload your Docker images (the free version has a restriction: you get unlimited public repositories and one private repository).
If later on you need to use it for business purposes and restrict access to your images, you can use a private Docker registry instead. Some providers:
Although Docker helps us standardize the creation of a given environment and configuration, building new releases manually can become a tedious and error-prone process:
Imagine doing that manually on every merge to master; you will get sick of this deployment hell... Is there any automated way to do that? Travis to the rescue!
Just by spending some time creating an initial configuration, Travis will automatically:
One of the advantages of Travis is that it's quite easy to setup:
If you want to follow this tutorial you can start by forking the Front End and Back End repos:
By forking these repos, a copy will be created in your github account and you will be able to link them to your Travis account and setup the CI process.
In our previous post from this series we consumed an image container from Docker Hub. If we want to upload our own image containers to the hub (free for public images), we need to create an account, which you can do in the following link.
Travis offers you two portals:
Since we are using Travis for learning purposes, let's hop into travis-ci.org.
The next step that we have to take is to link our Github account to Travis (sign in with Github). By doing this:
In this tutorial, we will be applying automation to our forked chat application's repositories using a Travis Pipeline. Travis will launch a task after every commit where the following tasks will be executed:
Before we start automating stuff, let's give the manual process a try.
Once you have your Docker Hub account, you can interact with it from your shell (open your bash terminal, or Windows cmd).
You can log into Docker Hub:
$ docker login
In order to push your images, they have to be tagged according to the following pattern:
<Docker Hub user name>/<name of the image>:<version>
The version is optional. If none is specified, `latest` will be assigned.
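This naming pattern can be sketched in plain shell (the user name, image name and version below are hypothetical placeholders, not values from this tutorial):

```shell
# Hypothetical values - replace with your own Docker Hub user name, image and version.
DOCKER_HUB_USER="janedoe"
IMAGE_NAME="front"
VERSION="1.0"

# <Docker Hub user name>/<name of the image>:<version>
FULL_NAME="$DOCKER_HUB_USER/$IMAGE_NAME:$VERSION"
echo "$FULL_NAME"

# If no version is specified, Docker assigns the "latest" tag.
DEFAULT_NAME="$DOCKER_HUB_USER/$IMAGE_NAME:latest"
echo "$DEFAULT_NAME"
```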
$ docker tag front <Docker Hub user name>/front
And finally, push it:
$ docker push <Docker Hub user name>/front
From now on, the image will be available for everyone's use.
$ docker pull <Docker Hub user name>/front
To create the rest of the images that we need, we have to follow the exact same steps.
This can become a tedious and error-prone process. In the following steps we will learn how to automate this using Travis CI/CD.
First you have to activate your repositories in Travis.
Once they are activated, you need to go into each project's settings (Back and Front) and add your Docker Hub user and password as environment variables. This has to be done in both repositories.
These variables will be used later to log into Docker Hub (note: the first time you enter these environment variables, their values are shown as clear text; once saved, they are displayed as password fields).
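If you prefer the command line, the same variables can be defined with the official Travis CLI (a sketch, assuming you have the `travis` gem installed and are logged in; replace the placeholders with your own credentials):

```shell
# Run from the repository root; --private hides the values in the build log.
travis env set DOCKER_USER <your Docker Hub user> --private
travis env set DOCKER_PASSWORD <your Docker Hub password> --private
```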
It's possible that the repos you have just forked don't show up in your Travis account yet. You can sync your Travis account with your Github account manually by clicking on the Sync button.
Now we can finally begin to automate our tasks.
To configure Travis, we need to create a file named `.travis.yml` in the root folder of your project. This is where we'll describe the actions that will be executed by Travis.
Since we have already linked both Back End and Front End repositories in Travis, it will automatically check when these yml files are available and parse them.
The steps to create the `.travis.yml` file for the back end application are the following:
A summary of this build process:
Let's create our .travis.yml file at the root of our backend repository:
We will start by indicating that we are going to use nodejs as our language, then we will indicate that we are using nodejs version 12 (more information about languages in this link).
```diff
+ language: node_js
+ node_js:
+   - "12"
```
Next, we will request that the commands run with sudo (administrator privileges), just in case any of the commands we are running needs them.
To use Docker we need to request it as a service in the yml file.
```diff
  language: node_js
  node_js:
    - "12"
+ sudo: required
+ services:
+   - docker
```
So we've got our ubuntu + nodejs machine up and running. Then we indicate that we want to use Docker. Travis has already downloaded our project source code from the repository, so now it's time to execute an npm install before we start running the tests.
To execute commands before the main scripts run (e.g. before running the tests), Travis offers us the `before_script` section. Inside that section we place the `npm install` command.
```diff
  language: node_js
  node_js:
    - "12"
  sudo: required
  services:
    - docker
+ before_script:
+   - npm install
```
All the plumbing is ready, so now we can start defining our main scripts. Let's add a Travis yml section called script and inside that section let's add an npm test command; this will just run all the tests from our test battery.
```diff
  language: node_js
  node_js:
    - "12"
  sudo: required
  services:
    - docker
  before_script:
    - npm install
+ script:
+   - npm test
```
If the tests have passed successfully, we are ready to build the Docker image. In the previous post we created a Dockerfile configuring the build steps. Let's copy the content of that file and place it at the root of your repository (filename: Dockerfile).
```dockerfile
FROM node
WORKDIR /opt/back
COPY . .
RUN npm install
EXPOSE 3000
ENTRYPOINT ["npm", "start"]
```
Just as a reminder about this Dockerfile configuration: it starts from the `node` base image, sets `/opt/back` as the working directory, copies the source code, installs the dependencies, exposes port 3000 and starts the app with `npm start`.
Let's jump back into the Travis yml file: inside the script section, right after the npm test, we add the command to build the Docker Container image.
```diff
  language: node_js
  node_js:
    - "12"
  sudo: required
  services:
    - docker
  before_script:
    - npm install
  script:
    - npm test
+   - docker build -t back .
```
This command will look for the Dockerfile we have just created at the root of the backend repository and follow its steps to build the image.
Hey! I've just realized something strange is going on: you are using different containers to get started; Travis runs the tests on a given Linux instance, while the Dockerfile uses another Linux / Node configuration pulled from the Docker Hub Registry. That's a bad smell, isn't it? You are totally right! Both the Travis yml and the Dockerfile configuration should start from the same image container. We need to make sure that the tests run in the same configuration we would have in production - that's a limitation of the free version of travis-ci.org (here you can find some workarounds). The paid version allows you to configure the image container you want to use as a starting point; more information in this link.
Right after all the scripts have been executed and the Docker image has been generated, we want to upload the Docker image to the Docker Hub Registry.
The Travis yml exposes a section called `after_success`. This section is only executed if all the steps in the `script` section finished successfully. Under this section we are going to take the steps to upload the image to the Docker registry.
The first step is to login into the docker hub (we will make use of the environment variables we added into our project Travis configuration, see section Linking Travis repo and docker hub credentials in this post).
```diff
  language: node_js
  node_js:
    - "12"
  sudo: required
  services:
    - docker
  before_script:
    - npm install
  script:
    - npm test
    - docker build -t back .
+ after_success:
+   - docker login -u $DOCKER_USER -p $DOCKER_PASSWORD
```
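A small hardening note: passing the password with `-p` exposes it on the command line. Docker versions 17.07 and later support reading it from stdin instead; a hedged alternative for the same step would be:

```yml
after_success:
  # Pipe the password via stdin so it never appears on the command line
  - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USER" --password-stdin
```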
The Docker image that we have generated so far has the following name: back. In order to upload it to the Docker Hub registry, we need to give it a more elaborate, unique name:
We will also tag the image as latest to indicate that it is the most recent Docker image available. In a real project, this tagging scheme could vary depending on your needs.
```diff
  language: node_js
  node_js:
    - "12"
  sudo: required
  services:
    - docker
  before_script:
    - npm install
  script:
    - npm test
    - docker build -t back .
  after_success:
    - docker login -u $DOCKER_USER -p $DOCKER_PASSWORD
+   - docker tag back $DOCKER_USER/back:$TRAVIS_BUILD_NUMBER
+   - docker tag back $DOCKER_USER/back:latest
```
Now that we've got unique names, we need to push the Docker Images into the Docker Registry. We will use the docker push command for this.
Note that we first push the $DOCKER_USER/back:$TRAVIS_BUILD_NUMBER image, and then $DOCKER_USER/back:latest. Doesn't this mean that the image will be uploaded twice? The answer is no: Docker is smart enough to identify that it is the same image, so it simply assigns two different tags to the same image in the Docker repository.
```diff
  language: node_js
  node_js:
    - "12"
  sudo: required
  services:
    - docker
  before_script:
    - npm install
  script:
    - npm test
    - docker build -t back .
  after_success:
    - docker login -u $DOCKER_USER -p $DOCKER_PASSWORD
    - docker tag back $DOCKER_USER/back:$TRAVIS_BUILD_NUMBER
+   - docker push $DOCKER_USER/back:$TRAVIS_BUILD_NUMBER
    - docker tag back $DOCKER_USER/back:latest
+   - docker push $DOCKER_USER/back:latest
```
.travis.yml should look like this:
```yml
language: node_js
node_js:
  - "12"
sudo: required
services:
  - docker
before_script:
  - npm install
script:
  - npm test
  - docker build -t back .
after_success:
  - docker login -u $DOCKER_USER -p $DOCKER_PASSWORD
  - docker tag back $DOCKER_USER/back:$TRAVIS_BUILD_NUMBER
  - docker push $DOCKER_USER/back:$TRAVIS_BUILD_NUMBER
  - docker tag back $DOCKER_USER/back:latest
  - docker push $DOCKER_USER/back:latest
```
Now if you push this configuration to your repository, Travis will automatically trigger the build (you can also trigger a build manually from the Travis web UI).
Once finished, you can check if the docker image has been generated successfully (check Travis web console):
And we can check if the image is available in our Docker Hub Registry account:
The steps for creating the front end `.travis.yml` are quite similar to the previous one (backend); the only difference is that we don't have unit tests implemented, so we will skip that step:
The frontend application will be containerized using Docker. Therefore, we will begin by indicating that the service is necessary and sudo is required, just like in the backend configuration.
```diff
+ sudo: required
+ services:
+   - docker
```
Let's build the Docker image inside the `script` section:
```diff
  sudo: required
  services:
    - docker
+ script:
+   - docker build -t front .
```
If the Docker Image was built successfully, the next step is to log into Docker Hub.
```diff
  sudo: required
  services:
    - docker
  script:
    - docker build -t front .
+ after_success:
+   - docker login -u $DOCKER_USER -p $DOCKER_PASSWORD
```
As we did with the backend application, we are going to tag the current version using the Travis build number and define it as latest:
```diff
  sudo: required
  services:
    - docker
  script:
    - docker build -t front .
  after_success:
    - docker login -u $DOCKER_USER -p $DOCKER_PASSWORD
+   - docker tag front $DOCKER_USER/front:$TRAVIS_BUILD_NUMBER
+   - docker tag front $DOCKER_USER/front:latest
```
Now we only need to push the images.
```diff
  sudo: required
  services:
    - docker
  script:
    - docker build -t front .
  after_success:
    - docker login -u $DOCKER_USER -p $DOCKER_PASSWORD
    - docker tag front $DOCKER_USER/front:$TRAVIS_BUILD_NUMBER
+   - docker push $DOCKER_USER/front:$TRAVIS_BUILD_NUMBER
    - docker tag front $DOCKER_USER/front:latest
+   - docker push $DOCKER_USER/front:latest
```
And the final `.travis.yml` should look like the following:
```yml
sudo: required
services:
  - docker
script:
  - docker build -t front .
after_success:
  - docker login -u $DOCKER_USER -p $DOCKER_PASSWORD
  - docker tag front $DOCKER_USER/front:$TRAVIS_BUILD_NUMBER
  - docker push $DOCKER_USER/front:$TRAVIS_BUILD_NUMBER
  - docker tag front $DOCKER_USER/front:latest
  - docker push $DOCKER_USER/front:latest
```
Let's check if our CI configuration is working as expected. First let's make sure that Travis has run at least one successful build (in case it hasn't, you can trigger the build process manually or just push some dummy change to the front end and back end repositories).
You should see in Travis that a build has been launched for both the Front End and Back End repos (log into travis-ci.org):

You should see the images available in the Docker registry (log into Docker Hub):
As we did in our previous post, we can launch our whole system using Docker Compose. However, in this case for the Front End and Back End we are going to consume the image containers that we have uploaded to the Docker Hub Registry.
The changes that we are going to introduce to that docker-compose.yml are:
```diff
  version: '3.7'
  services:
    front:
-     build: ./container-chat-front-example
+     image: <Docker Hub user name>/front:<version>
    back:
-     build: ./container-chat-back-example
+     image: <Docker Hub user name>/back:<version>
    lb:
      build: ./container-chat-lb-example
      depends_on:
        - front
        - back
      ports:
        - '80:80'
```
The resulting `docker-compose.yml` will look like this:
```yml
version: "3.7"
services:
  front:
    image: <Docker Hub user name>/front:<version>
  back:
    image: <Docker Hub user name>/back:<version>
  lb:
    build: ./container-chat-lb-example
    depends_on:
      - front
      - back
    ports:
      - "80:80"
```
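If you need reproducible deployments, you can pin the images to a specific Travis build number instead of relying on latest (a sketch: `42` is a hypothetical build number and the user name is a placeholder):

```yml
services:
  front:
    # Pinning to a concrete build number makes rollbacks straightforward
    image: <Docker Hub user name>/front:42
  back:
    image: <Docker Hub user name>/back:42
```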
We can launch it:
$ docker-compose up
It will download the back and front images from the Docker Hub Registry (the latest available). You can check that it works by opening http://localhost/ in your web browser (more information about how this works in our previous post, Hello Docker).
By introducing this CI/CD step (CI stands for Continuous Integration, CD stands for Continuous Delivery), we get several benefits:
How about deployment? In the next post of this series we will learn how to create automated deploys using Kubernetes, so stay tuned :).
We are a team of Front End developers. If you need training, coaching or consultancy services, don't hesitate to contact us.
Copyright 2018 Basefactor. All Rights Reserved.