Tuesday, March 31, 2015

Job DSL Part II

In the first part of this little series I talked about some of the difficulties you have to tackle when dealing with microservices, and how the Job DSL Plugin can help you automate the creation of Jenkins jobs. In today’s installment I will show you some of the benefits in maintenance. We will also automate the job creation itself, and create some views.

Let’s recap what we have so far. We created our own DSL to describe the microservices. Our Groovy build script iterates over the microservices and creates a build job for each using the Job DSL. So what if we want to alter our existing jobs? Just give it a try: we’d like to have JUnit test reports in our jobs. All we have to do is extend our job DSL a little by adding a JUnit publisher:
freeStyleJob("${name}-build") {

  ...

  steps {
    maven {
      mavenInstallation('3.1.1')
      goals('clean install')
    }
  }

  publishers {
    archiveJunit('**/target/surefire-reports/*.xml')
  }

}

Run the seed job again. All existing jobs have been extended with the JUnit report. The great thing about the Job DSL is that it alters only the config. The job’s history and all other data remain, just as if you had edited the job using the UI. So maintenance of all our jobs is a breeze using the Job DSL. Note: Be aware that the report does not show up until you have run the tests twice.

test-report

Automating the job generation itself

Wouldn’t it be cool if the jobs were automatically re-generated whenever we change our job description or add another microservice? Quite easy. Currently our microservice and job DSL are hardcoded into the seed job. But we can move them into a separate repository, watch and check out that repository in our seed job, and use it instead of the hardcoded DSL. So first we put our microservice and job DSL into two files called microservices.dsl and job.dsl.

microservices.dsl:
microservices {
  ad {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'ad'
  }
  billing {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'billing'
  }
  cart {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'cart'
  }
  config {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'config'
  }
  controlling {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'controlling'
  }
  customer {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'customer'
  }
  datastore {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'datastore'
  }
  help {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'help'
  }
  logon {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'logon'
  }
  order {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'order'
  }
  preview {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'preview'
  }
  security {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'security'
  }
  shipping {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'shipping'
  }
  shop {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'shop'
  }
  statistics {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'statistics'
  }
  warrenty {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'warrenty'
  }
}

job.dsl:
def slurper = new ConfigSlurper()
// fix classloader problem using ConfigSlurper in job dsl
slurper.classLoader = this.class.classLoader
def config = slurper.parse(readFileFromWorkspace('microservices.dsl'))

// create job for every microservice
config.microservices.each { name, data ->
  createBuildJob(name,data)
}


def createBuildJob(name,data) {
  
  freeStyleJob("${name}-build") {
  
    scm {
      git {
        remote {
          url(data.url)
        }
        branch(data.branch)
        createTag(false)
      }
    }
  
    triggers {
       scm('H/15 * * * *')
    }

    steps {
      maven {
        mavenInstallation('3.1.1')
        goals('clean install')
      }
    }

    publishers {
      archiveJunit('**/target/surefire-reports/*.xml')
    }
  
  }

}

We now check them into a repository so we can reference them in our seed build (you don’t have to do this, I have already prepared it for you in the master branch of our jobdsl-sample repository on GitHub).

Finally we have to adapt our seed build to watch and check out the jobdsl-sample repository

dsl-scm-section

… and use the checked out job.dsl instead of the hardcoded one:

dsl-groovy-section

That’s it. Now the seed job polls for changes on our sample repository, so if somebody adds a new microservice or alters our job.dsl, all jobs will be (re-)created automatically without any manual intervention.
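The screenshots only hint at the seed job’s configuration, so here is a sketch of what the seed job itself could look like if you scripted it with the Job DSL as well. The `dsl` step with `external()` and `removeAction()` is part of the Job DSL API; the trigger schedule and the `DELETE` policy are assumptions, not taken from the post:

```groovy
// Hypothetical: the seed job itself, expressed in Job DSL.
// Poll schedule and removeAction are assumed values.
freeStyleJob('seed') {
  scm {
    git {
      remote {
        url('https://github.com/ralfstuckert/jobdsl-sample.git')
      }
      branch('master')
    }
  }
  triggers {
    scm('H/15 * * * *')        // poll the DSL repository for changes
  }
  steps {
    dsl {
      external('job.dsl')      // run the checked-out job.dsl
      removeAction('DELETE')   // delete jobs whose DSL was removed
    }
  }
}
```

With `removeAction('DELETE')`, removing a microservice from microservices.dsl also removes its generated jobs on the next seed run; the default is to ignore them.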

Note: We could have put both the microservices.dsl and the job.dsl in one file, as we had it in the first place. But this way you can use your microservices.dsl independently of the job.dsl to automate all kinds of stuff. In our current project we use it e.g. for deployment and for tooling like monitoring.
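To illustrate that independence, here is a hypothetical standalone script (plain Groovy, no Jenkins required) that reuses the same microservices.dsl, e.g. to generate clone commands for all services. The file name matches the one used by the seed job; everything else is an assumption:

```groovy
// Hypothetical reuse of microservices.dsl outside Jenkins:
// parse the same file and emit a git clone command per service.
def config = new ConfigSlurper()
        .parse(new File('microservices.dsl').toURI().toURL())

config.microservices.each { name, data ->
    println "git clone -b ${data.branch} ${data.url} ${name}"
}
```

The same pattern works for generating deployment descriptors, monitoring configs, or anything else that needs the full list of services.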


Creating Views

In the post Bringing in the herd I described how helpful views can be to get an overview of all your jobs, or even to aggregate information. The Job DSL allows you to generate views just like jobs, so it’s a perfect fit for that need. In order to have some examples to play with, we will increase our set of jobs by adding an integration-test and a deploy job for every microservice. These jobs don’t do anything at all (in other words, they are placeholders); we just use them to set up a build pipeline.

Note: in order to use the build pipeline, you have to install the Build Pipeline Plugin.

We set up a new DSL file called pipeline.dsl and push it to our sample repository (again, you don’t have to do this, it’s already there). We add two additional jobs per microservice and set up a downstream cascade, meaning that at the end of each build (pipeline step) the next one is triggered:

pipeline.dsl:
def slurper = new ConfigSlurper()
// fix classloader problem using ConfigSlurper in job dsl
slurper.classLoader = this.class.classLoader
def config = slurper.parse(readFileFromWorkspace('microservices.dsl'))

// create job for every microservice
config.microservices.each { name, data ->
  createBuildJob(name,data)
  createITestJob(name,data)
  createDeployJob(name,data)
}


def createBuildJob(name,data) {
  
  freeStyleJob("${name}-build") {
  
    scm {
      git {
        remote {
          url(data.url)
        }
        branch(data.branch)
        createTag(false)
      }
    }
  
    triggers {
       scm('H/15 * * * *')
    }

    steps {
      maven {
        mavenInstallation('3.1.1')
        goals('clean install')
      }
    }

    publishers {
      archiveJunit('**/target/surefire-reports/*.xml')
      downstream("${name}-itest", 'SUCCESS')
    }
  }

}

def createITestJob(name,data) {
  freeStyleJob("${name}-itest") {
    publishers {
      downstream("${name}-deploy", 'SUCCESS')
    }
  }
}

def createDeployJob(name,data) {
  freeStyleJob("${name}-deploy") {}
}

Now change your seed job to use the pipeline.dsl instead of the job.dsl and let it run. We now have three jobs for each microservice, cascaded as a build pipeline.

all-build

The Build Pipeline Plugin comes with its own view, the build pipeline view. If you set up this view and provide a build job, the view will render all cascading jobs as a pipeline. So now we are gonna generate a pipeline view for each microservice. As before, I have already provided the DSL for you in the repository.

pipeline-view.dsl:
...

// create build pipeline view for every service
config.microservices.each { name, data ->
   buildPipelineView(name) {
     selectedJob("${name}-build")
   }
}

...

Not that complicated, eh? We just iterate over the microservices and create a build pipeline view for each. All we have to specify is the name of the first job in the pipeline; the others are found by following the downstream cascade. Ok, so configure your seed job to use the pipeline-view.dsl and let it run. Now we have created a pipeline view for every microservice:

all-pipelines

If you select one view, you will see the state of all steps in the pipeline:

one-pipeline

Having a single view for each microservice will soon become confusing, but as described in Bringing in the herd, nested views will help you by aggregating information. So we are gonna group all our pipeline views together by nesting them in one view: we are going to generate a nested view containing the build pipeline views of all our microservices. I will list only the difference to the previous example; the complete script is provided for you on GitHub ;-)

pipeline-nested-view.dsl:
...
// create nested build pipeline view
nestedView('Build Pipeline') { 
   description('Shows the service build pipelines')
   columns {
      status()
      weather()
   }
   views {
      config.microservices.each { name,data ->
         println "creating build pipeline subview for ${name}"
          buildPipelineView("${name}") {
            selectedJob("${name}-build")
            triggerOnlyLatestJob(true)
            alwaysAllowManualTrigger(true)
            showPipelineParameters(true)
            showPipelineParametersInHeaders(true)
            showPipelineDefinitionHeader(true)
            startsWithParameters(true)
         }
      }
   }
}
...

So what do we do here? We create a nested view with the columns status and weather, and inside its views block we create a build pipeline view for each microservice. A difference you might notice compared to our previous example is that we are setting some additional properties on the build pipeline view. The point is how we create the view: before, the standalone build pipeline view DSL set some property values by default, whereas here we have to set these values explicitly in order to get the same result. Enough of the big words, configure your seed job to use the pipeline-nested-view.dsl, and let it run.

Note: You need to install the Nested View Plugin into your Jenkins in order to run this example.

pipeline-overview 

Cool. This gives us a nice overview of the state of all our build pipelines. And you can still watch every single pipeline by selecting one of the nested views:

pipeline-overview-one

So what have we got so far? Instead of using a hardcoded DSL in the job, we moved it to a dedicated repository. Our seed job watches this repository and automatically runs on any change. That means if we alter our job configuration or add a new microservice, the corresponding build jobs are automatically (re-)created. We also created some views to get more insight into the health of our build system.

That’s it for today. In the next and last installment I’d like to give you some hints on how to dig deeper into the Job DSL: where you will find some more information, where to look if the documentation is missing something, how to get faster turnaround using the playground, and some pitfalls I’ve already fallen into.

Regards
Ralf
I don't know that there are any short cuts to doing a good job.
Sandra Day O'Connor
Update 08/17/2015: Added fixes by rhinoceros in order to adapt to Job DSL API changes

Sunday, March 29, 2015

Job DSL Part I

Jenkins CI is a great tool for automating your build and deployment pipeline. You set up jobs for build, test, deployment and whatever else, and let Jenkins do the work. But there’s a catch. In the recent blog post Bringing in the herd I already talked a bit about the difficulties you have to tackle if you are dealing with microservices: they are like rabbits! When you start with a project, there may be only a couple of microservices, but soon there will be a few dozen or even hundreds. Setting up jobs for these herds is a growing pain you have to master, and that’s where the Job DSL comes to the rescue. This post is the start of a small series on the Job DSL.

One lesson we have already learned about microservices is that you have to automate everything. Even – or especially – the configuration of the tooling used to build, deploy, and monitor your application. Not to mention the things you have to do to run the application, like distribution, load balancing etc. But let’s start with the build pipeline for the moment. Setting up a Jenkins job is an easy task. When you create a job using the UI, you just have to select the things Jenkins is supposed to do, like check out source code from Git, run the Maven or Gradle build, and publish the test results. Once the job does what you want, it is easy to set up this job for another project: Jenkins allows you to make copies. Just adapt some data like names and paths, and that’s it. So there’s no challenge in creating jobs for new microservices. If you have to create multiple jobs for each microservice – let’s say integration and acceptance tests, release builds, deployment to various environments – things start to get annoying. And one day you recognize that you – just for example – forgot to publish the Checkstyle results, and you will have to change all your existing jobs… manually :-0

Don’t do it. Not even once! What do developers do in order to avoid repetitive, boring, annoying, error-prone tasks? They write a script, yep. We are lazybones, so instead of doing stuff, we tell the machine what to do and have a cup of coffee while the work is being done. Jenkins job definitions are nothing but a little XML, so we could easily write a little script – Groovy has great built-in support for processing XML – and generate that. We could even invent a DSL using Groovy, so our script will be more readable. And since all that is so obvious, somebody already had this idea: the Jenkins Job DSL Plugin.

The development of this plugin was driven by one of the protagonists of microservices: Netflix. They currently have about 600 microservices, so they really need to automate everything. And that’s why they invented the Job DSL: it allows you to describe your Jenkins jobs using a predefined DSL. It is implemented as a Jenkins plugin, so the creation of Jenkins jobs is performed by a Jenkins job itself. Let’s start with a basic example. First we create a seed job – that’s the common term for the job that generates other jobs using the Job DSL:

seed-create

We will need only a single build step: the Job DSL:

seed-create-jobdsl

Now copy the following sample DSL to the editor field:

freeStyleJob('order-build') { 
  scm { 
    git { 
      remote { 
        url('https://github.com/ralfstuckert/jobdsl-sample.git') 
      } 
      branch('order')
      createTag(false) 
    } 
  } 
  triggers { 
     scm('H/15 * * * *') 
  } 

  steps { 
    maven { 
      mavenInstallation('3.1.1') 
      goals('clean install') 
    } 
  } 
}

Let’s go through it step by step. We define a freestyle build job named order-build. Next is a source control block with a Git repository. I didn’t want to set up a dozen repositories for the projects used in this example, so I used different branches instead. So to check out the order project, we select the branch named order. We don’t want Jenkins to create a tag (with the build number) after the checkout, so we set this property to false. In the triggers block we watch the source control system for changes every 15 minutes. In the following (build) steps block, we define just one step: Maven. A Maven installation is selected (as predefined in Jenkins) and the goals clean and install are executed. Save and run. Now we have generated a new job order-build:

seed-and-order

Looks good, so let’s run the order-build. Yep, it builds :-)

order-run

Ok, so we generated a build job. But we could have done the same thing using the Jenkins UI, so what’s the big deal? The benefit of generating jobs pays off when you generate the same class of job for multiple projects. Let’s say we have some projects named customer, order, datastore etc. Now we will extend our DSL with a little Groovy code that iterates over these projects and creates a build job for each. So (re-)configure your seed build, and replace the DSL part with the following:

def microservices = '''
microservices {
  ad {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'ad'
  }
  billing {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'billing'
  }
  cart {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'cart'
  }
  config {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'config'
  }
  controlling {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'controlling'
  }
  customer {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'customer'
  }
  datastore {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'datastore'
  }
  help {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'help'
  }
  logon {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'logon'
  }
  order {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'order'
  }
  preview {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'preview'
  }
  security {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'security'
  }
  shipping {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'shipping'
  }
  shop {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'shop'
  }
  statistics {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'statistics'
  }
  warrenty {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'warrenty'
  }
}
'''

def slurper = new ConfigSlurper()
// fix classloader problem using ConfigSlurper in job dsl
slurper.classLoader = this.class.classLoader
def config = slurper.parse(microservices)

// create job for every microservice
config.microservices.each { name, data ->
  createBuildJob(name,data)
}


def createBuildJob(name,data) {
  
  freeStyleJob("${name}-build") {
  
    scm {
      git {
        remote {
          url(data.url)
        }
        branch(data.branch)
        createTag(false)
      }
    }
  
    triggers {
       scm('H/15 * * * *')
    }

    steps {
      maven {
        mavenInstallation('3.1.1')
        goals('clean install')
      }
    }

  }

}

Ok, let’s go through this again step by step. At first, we define a little DSL describing our microservices. After that, we use the Groovy ConfigSlurper to parse the DSL (ignore this classloader stuff for the moment, it works around a bug). Then we iterate over the microservices and pass the name and the data of each service to the method createBuildJob(). This method contains the Job DSL we used in the first example. Well, almost: we parameterized some things like the name, Git URL and branch, so we can reuse the DSL for creating all the build jobs.
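If you want to see what the ConfigSlurper actually hands back, it turns the DSL into a nested ConfigObject you can navigate like a map. A minimal sketch, runnable in plain Groovy outside Jenkins (the tiny inline DSL here is just for illustration):

```groovy
// Minimal illustration of what ConfigSlurper produces:
// each microservice block becomes a map-like entry.
def dsl = '''
microservices {
  order {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'order'
  }
}
'''
def config = new ConfigSlurper().parse(dsl)

config.microservices.each { name, data ->
    assert name == 'order'
    assert data.branch == 'order'
    assert data.url.endsWith('jobdsl-sample.git')
}
```

So `name` is the block name and `data` holds the properties assigned inside it, which is exactly what createBuildJob() consumes.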

Let the seed job run again and watch the output:

all-projects-seed-console

Looks good. Now let’s see the dashboard:

all-projects-overview


Ta-da. It’s done. We generated more than a dozen build jobs using a single script. That’s it for the first installment. In the next part, we will alter our DSL, automate the job creation itself, and create some views.
When you got a job to do, 
you gotta do it well.
Paul McCartney – Live and Let Die

Monday, March 23, 2015

Bringing in the herd

Everybody is doing microservices at the time of writing. They promise to solve the problems we had with monolithic architectures: they are easy to deploy, scale, understand, and throw away; they are resilient and may be implemented using different technologies. That’s a hell of a lot of promises, but there are also downsides: microservices come in herds, and herds are hard to handle ;-) In our current project we use the Jenkins CI server to implement a continuous integration pipeline. For every microservice we have a couple of jobs:
  • Build: Compile the classes, build a jar, run the JUnit tests
  • ITest: Run the integration tests against the built jar
  • Deploy: Deploy the microservice to the environment
These steps are run one after another using the Build Pipeline Plugin. But when it comes to getting an overview of the state of the jobs, you have few choices, and the All view is quite inadequate for that. Even if you have only a couple dozen services, there are three jobs for every service, so the All view is quite crowded:

all

Jenkins provides views that let you filter the job list, either by selecting each job individually or by using a regex. So we could easily create a view providing a nice overview of all build jobs:

all-build
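For the record, such a regex-filtered view can also be generated with the Job DSL described in the posts above, instead of clicking through the UI. A hedged sketch; `listView` with a `jobs { regex(...) }` block is part of the Job DSL API, while the view name and column selection are assumptions:

```groovy
// Hypothetical: a list view showing only the *-build jobs.
listView('All Builds') {
    description('All microservice build jobs')
    jobs {
        regex(/.*-build/)   // same regex you would enter in the view config
    }
    columns {
        status()
        weather()
        name()
        lastSuccess()
        lastFailure()
    }
}
```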

But what about the integration-test and deploy jobs? Well, we could create corresponding views for them in the same manner. But that’s also not very appropriate, since we are interested in the pipeline. The Build Pipeline Plugin brings a special view for visualizing the state of the pipeline, so you are able to see the build health of your microservice in a single view:

one-pipeline

That’s fine for the developers of that microservice: they have all the steps that matter to them in one view. But if we create a build pipeline view for every microservice, that’s still a confusing lot of views. In this example it’s only a couple of microservices; imagine what you will experience with dozens or hundreds of services:

all-pipelines-marked

If you are a team leader, or if you are developing multiple services, it would be perfect if you had an overview of all build pipelines. That’s where the Nested View Plugin comes to the rescue: it allows grouping job views into multiple levels instead of one big list of tabs:

pipeline-overview

You can still get down to the pipeline view by selecting the corresponding link:

pipeline-overview-one

That’s already quite nice, but the really neat thing is that you can aggregate the state of the complete pipeline. Let’s see what happens if one step of the pipeline fails:

pipeline-overview-failed

That’s what we want: you can see the state of the complete pipeline(s) at one glance. And if you step down into the customer pipeline subfolder, you will see which step failed:

pipeline-overview-one-failed

Currently only the status and weather columns are supported in the Nested View Plugin, but there is already an open issue requesting other columns.

That’s it for today
Ralf

Update 24.03.2015

Just to make that point clear: you can also nest nested views. If you have just a couple of microservices, it is ok to have all build pipelines in one overview. But if they don’t fit in one view, you can use nested views to create groups:
nested-overview
Here we use three (imaginary) groups: backend, base, and shop. You still have the state aggregation feature, and if you step down into the next level, you’ll see the pipelines contained in that group:
nested-overview-2
Regards
Ralf