Delivering the Expected - Solteq Developer Blog

First Steps Towards Continuous Delivery

Recently we started some greenfield projects with a long-term customer. It was clear from the beginning that the old way was not the best way. The existing software has grown over the years into a monolith which is very hard to maintain. The codebase is only one side of the pain: the chasm from code change to production is huge, and there are few tools to narrow that gap. Even the smallest changes require complex care, and this is what we want to avoid. One solution to this problem is deploying often. By often I mean all the time. This is called Continuous Delivery or Continuous Deployment, depending on the definition. We call it Continuous Delivery (CD).

Projects are very hard to retrofit for CD. So for every new project or service, we try to ensure that the first thing we build is a delivery pipeline. The first deployment must be a hello world pushed through the shortest pipeline possible. As you will see, CD is a natural fit for minimum viable product (MVP) thinking. Once you have the foundations to build on, you can build your pipelines iteratively as well. Once you make a change, your customer gains value right after it, not after waiting for a cumbersome and expensive process to finish.

In this post I’ll show you how simple it is to get started with CD. I will guide you through the first steps to achieve a functional end-to-end CD pipeline for your project. Make sure you have Docker installed. If you don’t, you can easily get it from the Docker website.

To build a working CD pipeline, we need a couple of things: a version control host for the project, an example project to build and test, and a CI server to run the pipeline.

We need to glue all these together, so let’s get started!


Our weapon of choice for version control has been GitHub for a while now. GitHub’s API is very well documented and it integrates easily with other tools. Naturally we chose GitHub for this project too. Public repositories are free of charge, and the paid features are well worth the seven dollars.

Example project

Create a sample Java Spring Boot project to use for this guide. Go to the Spring Initializr, select the ‘web’ dependency and download the project. To run this project you need Java installed. Maven was once required, but since the awesome mvnw wrapper came into existence, it is bundled inside the project. To start the web server run $ ./mvnw spring-boot:run and to execute the tests run $ ./mvnw test. That’s pretty much all we need. Naturally this can be any sort of project written in any language, as long as you can run the tests from the command line, like Ruby’s $ rake test or JavaScript’s $ mocha **/*.spec.js.
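Assuming the project was generated as described above, the whole loop fits in a few commands (a sketch; the port and root endpoint are Spring Boot defaults):

```shell
# Run the test suite from the command line
./mvnw test

# Start the embedded web server (listens on port 8080 by default)
./mvnw spring-boot:run

# In another terminal, check that the server answers
curl -i http://localhost:8080/
```

This command-line test run is exactly what Jenkins will execute later in the pipeline, which is why it matters that the tests need nothing but a shell.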

Jenkins 2.0

Jenkins is an infamous open-source Continuous Integration (CI) tool which automatically tests your code against changes made in version control. Jenkins has grown into a fully fledged platform for CD, and we started to use it right after the 2.0 release.

Running Jenkins is easy with Docker, so why wouldn’t we use it. Make sure that you don’t shut down the container: data is not persisted to the host machine, so restarting the container will cause data loss. If you want to save your configuration, check the documentation for Docker data volumes.

$ docker run -p 8080:8080 -d --name jenkins jenkins:alpine

Let’s break down the command:

$ docker run \                           # let's start the container
    -p 8080:8080 \                       # and expose port 8080
    -d \                                 # and launch as daemon
    --name jenkins \                     # and give it a name
    jenkins:alpine                       # and use alpine image                          
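If you do want the configuration to survive restarts, a named data volume can be mounted over Jenkins’ home directory (a sketch; the volume name jenkins_home is an arbitrary choice):

```shell
# Same as above, but persist /var/jenkins_home in a named volume
docker run -p 8080:8080 -d \
    -v jenkins_home:/var/jenkins_home \
    --name jenkins jenkins:alpine
```

Docker creates the volume on first use, and a fresh container started with the same -v flag picks up the old configuration.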

Now Jenkins will take a couple of minutes to start up. Let’s dig up the initial admin password in the meantime.

$ docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword  

This prints the initial admin password needed to set up Jenkins.

Next up, open localhost:8080 and follow the wizard:

The only configuration we want to do in Jenkins is to set up our GitHub organization. So let’s add a job from the landing page:

Next up, we need to enter our GitHub credentials:

Enter credentials

Add either a username and password or the public SSH key which you use for GitHub. Owner will be your organization name or user name, depending on your GitHub account.

Pipeline as Code

In the Jenkins 2.0 release, Pipeline as Code was announced. This is a great feature which eliminates the burden of configuring Jenkins: you now commit your pipeline configuration into your project repository. Once you change it, the pipeline will change accordingly. Most importantly, when working with branches and pull requests, you can modify the pipeline without affecting the current pipeline which resides in the master branch. The Jenkins GitHub organization job will take care of building your branches and pull requests with no extra configuration.

Let’s take a look at a minimal Jenkinsfile configuration:

node {

  stage "git"
  checkout scm

  stage "build"
  sh "./mvnw package -DskipTests"

  stage "test"
  sh "./mvnw test"

  if (env.BRANCH_NAME == 'master') {
    stage "deploy"
    echo "deployment commands!"
  }
}

The language is Groovy-based, and the file is read directly from the GitHub repository. There are ready-made steps to get you started; check the Pipeline as Code step reference. Once you commit this file to your repository, Jenkins should pick up your changes via automatically created webhooks. Quite often, though, your Jenkins resides inside a corporate network and GitHub will not know of it. In that case, like in this tutorial, you must configure Jenkins to poll GitHub:
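If you prefer to keep even the trigger in version control, a poll trigger can also be declared in the Jenkinsfile itself (a sketch using the properties step; the five-minute cron string is an assumption, and for organization jobs the scan interval is typically configured in the Jenkins UI instead):

```groovy
// Poll the repository every five minutes for new commits
properties([
  pipelineTriggers([pollSCM('H/5 * * * *')])
])
```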

Build often

Once Jenkins notices the branch you’re working on, you should be able to see the results under your organization:


Hooray, working pipelines! One neat trick is that deployment happens only in the master branch. This way we ensure that other branches won’t get deployed, and you can actually modify the pipeline in separate branches. The image above demonstrates nicely how we incrementally build our pipelines.

Next steps

From this point on, you can enrich your pipeline. Just make sure that you don’t over-commit to anything. Build whatever is needed for your project to deliver value and focus on making those changes. When you fail, improve just enough to cover that particular problem; fail fast and often. This creates an effective feedback loop which creates value (money) for you and your client.

Deployment was not covered in this post; I’ll cover it later, so be sure to check our blog often. We’ve been using Ansible for configuration management and deployments. More recently we’ve studied Docker, and especially Docker Swarm based pipelines and deployments, and oh boy, that thing is interesting!