Monday, April 13, 2015

Job DSL Part III

The previous part of this little series on the Job DSL gave you some examples of maintenance, of automating the job creation itself, and of creating views. This last installment completes the little round trip through the Job DSL with some hints on documentation, tooling and pitfalls.

Documentation

If you search the internet for the Job DSL, one of the first hits will be the corresponding wiki. This is the most valuable source of information. It is well structured and maintained, so new features and missing pieces are filled in regularly. If you are looking for details on jobs, the job reference is your target. If you would like to generate a view, there is a corresponding view reference.

job-dsl-wiki-png

Job DSL Source

The documentation on the Job DSL is quite extensive, but so is the Job DSL itself. They are steadily closing the gaps, but sometimes a piece of information is missing. A prominent example: enumeration values. There are some attributes that only accept a fixed set of values, so you have to know them. Let’s take the CloneWorkspace Publisher as an example. It has two attributes with enumeration values: String criteria = 'Any', String archiveMethod = 'TAR'

cloneWorkspace-wiki

But what about all the other values that are acceptable for criteria and archiveMethod? The documentation (currently) says nothing about that. In cases like this, the easiest thing is to have a look at the source code of the Job DSL:

cloneWorkspace-source

Ah, there you go: criteria accepts the values Any, Not Failed and Successful, and archiveMethod has TAR and ZIP. But how can I find the appropriate source for the Job DSL? If you have a look at the Job DSL repository, you will find three major packages: helpers, jobs and views. As the names imply, jobs contains all the job types, and views the different view types. All other stuff like publishers, scm, triggers and the like is located in helpers, so that’s usually the place to start your search. Our CloneWorkspace Publisher is - good naming is priceless - a publisher, so if we step down from the helpers package to the publisher package: ta-da, here it is :-)
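Sketched from that signature (and hedged accordingly - please double-check the method name and parameter order against the job reference), archiving the workspace as a ZIP for successful builds only would look something like this:

freeStyleJob('clone-workspace-example') {
  publishers {
    // workspaceGlob, workspaceExcludeGlob, criteria, archiveMethod, overrideDefaultExcludes
    publishCloneWorkspace('**/*', '', 'Successful', 'ZIP', false)
  }
}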

See you at the playground

Sometimes it’s not easy to get your DSL straight. Examples are outdated, you do not get the point, or you just have a typo. Anyway, you type your DSL into the Jenkins editor, save your change and retry again and again, until you fix it. But all this is quite time consuming, and developers are impatient creatures: we are used to syntax highlighting and incremental compilation while-u-write. This kind of typing feels a bit historical, so there should be something more adequate, and here it is: the Job DSL Playground is a web application that lets you type in some DSL (with syntax highlighting) on the left editor side, and shows the corresponding Jenkins config.xml on the other side:

playground

Using the playground has two major benefits. First: no edit-save cycles, so you are much faster. Second: you see the generated configuration XML, which can be useful when you set up a DSL by reverse engineering; means: you have an existing configuration and you want to create a DSL that generates exactly that one. I highly recommend giving it a try, it’s pretty cool.
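For instance, paste a made-up minimal job like this into the left-hand editor:

freeStyleJob('playground-demo') {
  description('just playing around with the Job DSL Playground')
}

The right side immediately shows the config.xml Jenkins would store for it - a <project> element containing the description and the usual (empty) scm, triggers, builders and publishers sections - without ever touching your Jenkins installation.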

Nuts and Bolts

The Job DSL is a mature tool and bugs are rare, but sometimes the devil is in the details. I’d like to introduce you to some pitfalls I fell into when working with the Job DSL… and how to work around ‘em.

ConfigSlurper

The ConfigSlurper is a generic Groovy DSL parser, which we have used in our examples to parse the microservice.dsl.
def microservices = '''
microservices {
  ad {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'ad'
  }
  billing {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'billing'
  }
  // ... more microservices
}
'''

def slurper = new ConfigSlurper()
def config = slurper.parse(microservices)

// create job for every microservice
config.microservices.each { name, data ->
  createBuildJob(name,data)
}


If you try to use the ConfigSlurper like this in Jenkins you will get an error message:

Processing provided DSL script
ERROR: Build step failed with exception
groovy.lang.MissingMethodException: No signature of method: groovy.util.ConfigSlurper.parse() is applicable for argument types: (script14284650953961421329905) values: [script14284650953961421329905@1563f9f]
Possible solutions: parse(groovy.lang.Script), parse(java.lang.Class), parse(java.lang.String), parse(java.net.URL), parse(java.util.Properties), parse(groovy.lang.Script, java.net.URL)

A possible solution is parse(String)?!? Well, that’s what we do, isn’t it? After searching for a while I stumbled over a post which explained that there is a problem with the ConfigSlurper in the Job DSL, and that the workaround is to fix the class loader:

def slurper = new ConfigSlurper()
// fix classloader problem using ConfigSlurper in job dsl
slurper.classLoader = this.class.classLoader
def config = slurper.parse(microservices)

Ah, now it works :-) This problem may have been fixed by the time you read this, but just in case you experience this bug, you now have a workaround.

Losing the DSL context

When I tried out nesting nested views for the post Bringing in the herd, I stumbled over the following problem: you sometimes lose the context of the Job DSL when nesting closures. My first attempt was to just nest some nested views. I invented a new attribute group in the microservice.dsl, so I can assign a microservice to one of the (fictional) groups backend, base or frontend. For each of these groups a nested view is created. These group views are supposed to contain a nested view for each microservice in that group, which in turn contains the build pipeline view. Say what?!? The following pictures show the target situation:

nested-overview

nested-base

nested-base-help

That’s what I wanted to build, so I started straight ahead. I used the Groovy groupBy() method to create a map with the group attribute as keys, and the corresponding microservices as values. Then I iterate over these groups and create a nested view for each. In each group, I iterate over the contained microservices and create a nested Build Pipeline View:

// create nested build pipeline view
def microservicesByGroup = config.microservices.groupBy { name,data -> data.group } 
nestedView('Build Pipeline') { 
   description('Shows the service build pipelines')
   columns {
      status()
      weather()
   }
   views {
      microservicesByGroup.each { group, services ->
         view("${group}", type: NestedView) {
            description('Shows the service build pipelines')
            columns {
               status()
               weather()
            }
            views {
               services.each { name,data ->
                  view("${name}", type: BuildPipelineView) {
                     selectedJob("${name}-build")
                     triggerOnlyLatestJob(true)
                     alwaysAllowManualTrigger(true)
                     showPipelineParameters(true)
                     showPipelineParametersInHeaders(true)
                     showPipelineDefinitionHeader(true)
                     startsWithParameters(true)
                  }
               }
            }
         }
      }
   }   
}

Makes sense, doesn’t it? But this is what came out:

nested-bad

Ooookay. The (nested) Build Pipeline views are on the same nesting level as our intermediate group views backend, base and frontend. If you have a look at the generated config.xml you will see that there is only one <views> element, and all <view> elements are actually children of that element… what happened? Obviously the creation of the Build Pipeline view has been applied to the outer NestedViewsContext. I don’t know too much about Groovy, but closure code is applied to the delegate, so the delegate seems to be wrong here. Let’s see if we can fix that by applying the view creation to the correct delegate:

def microservicesByGroup = config.microservices.groupBy { name,data -> data.group } 
nestedView('Build Pipeline') { 
   description('Shows the service build pipelines')
   columns {
      status()
      weather()
   }
   views {
      microservicesByGroup.each { group, services ->
         view("${group}", type: NestedView) {
            description('Shows the service build pipelines')
            columns {
               status()
               weather()
            }
            views {
               def viewsDelegate = delegate
               services.each { name,data ->
                  // Use the delegate of the 'views' closure 
                  // to create the view.
                  viewsDelegate.view("${name}", type: BuildPipelineView) {
                     selectedJob("${name}-build")
                     triggerOnlyLatestJob(true)
                     alwaysAllowManualTrigger(true)
                     showPipelineParameters(true)
                     showPipelineParametersInHeaders(true)
                     showPipelineDefinitionHeader(true)
                     startsWithParameters(true)
                  }
               }
            }
         }
      }
   }   
}

So now we explicitly use the surrounding views closure’s delegate to create the view, and…yep, now it works:

nested-overview

If you now inspect the config.xml you will actually find an outer <views> element and three inner ones representing the groups, where each group contains the <view> elements for the Build Pipelines. Fixing the delegate is not a cure for cancer, but it will save your day in situations like these.
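If the delegate mechanics seem a bit mysterious, here is a tiny standalone Groovy sketch (not Job DSL code, all names are made up) of what is going on: method calls inside a closure are dispatched to the closure’s delegate, and capturing a delegate in a local variable - as we did with viewsDelegate - pins the call to that context, no matter which delegate the surrounding closure happens to have.

class ViewContext {
  def name
  def view(String viewName) { println "view '${viewName}' added to context '${name}'" }
}

def outer = new ViewContext(name: 'outer')
def inner = new ViewContext(name: 'inner')

// the same call lands on whichever context is the closure's delegate
def addView = { view('pipeline') }
addView.delegate = outer
addView()            // -> view 'pipeline' added to context 'outer'
addView.delegate = inner
addView()            // -> view 'pipeline' added to context 'inner'

// capturing a delegate explicitly pins the call to that context
def innerDelegate = inner
def addPinnedView = { innerDelegate.view('pipeline') }
addPinnedView.delegate = outer
addPinnedView()      // -> view 'pipeline' added to context 'inner'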

Done

That’s all I’ve got to say about the Job DSL :-)

Have a nice day 
Ralf
Sure it's a big job; but I don't know anyone who can do it better than I can.
John F. Kennedy

Tuesday, March 31, 2015

Job DSL Part II

In the first part of this little series I talked about some of the difficulties you have to tackle when dealing with microservices, and how the Job DSL Plugin can help you automate the creation of Jenkins jobs. In today’s installment I will show you some of the benefits in maintenance. Also, we will automate the job creation itself, and create some views.

Let’s recap what we have got so far. We have created our own DSL to describe the microservices. Our build Groovy script iterates over the microservices and creates a build job for each using the Job DSL. So what if we want to alter our existing jobs? Just give it a try: we’d like to have JUnit test reports in our jobs. All we have to do is extend our job DSL a little bit by adding a JUnit publisher:
  freeStyleJob("${name}-build") {
  
    ...
    steps {
      maven {
        mavenInstallation('3.1.1')
        goals('clean install')
      }
    }
  
    publishers {
      archiveJunit('/target/surefire-reports/*.xml')
    }
  
  }

Run the seed job again. All existing jobs have been extended with the JUnit report. The great thing about the Job DSL is that it alters only the config. The job’s history and all other data remain, just as if you had edited the job using the UI. So maintenance of all our jobs is a breeze using the Job DSL. Note: Be aware that the report does not show up until you have run the tests twice.

test-report

Automating the job generation itself

Wouldn’t it be cool if the jobs were automatically re-generated whenever we change our job description or add another microservice? Quite easy. Currently our microservice- and job-DSL are hardcoded into the seed job. But we can move them into a (separate) repository, watch and check it out in our seed job, and use that instead of the hardcoded DSL. So first we put our microservice- and job-DSL into two files called microservice.dsl and job.dsl.

microservice.dsl:
microservices {
  ad {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'ad'
  }
  billing {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'billing'
  }
  cart {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'cart'
  }
  config {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'config'
  }
  controlling {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'controlling'
  }
  customer {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'customer'
  }
  datastore {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'datastore'
  }
  help {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'help'
  }
  logon {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'logon'
  }
  order {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'order'
  }
  preview {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'preview'
  }
  security {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'security'
  }
  shipping {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'shipping'
  }
  shop {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'shop'
  }
  statistics {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'statistics'
  }
  warrenty {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'warrenty'
  }
}

job.dsl
def slurper = new ConfigSlurper()
// fix classloader problem using ConfigSlurper in job dsl
slurper.classLoader = this.class.classLoader
def config = slurper.parse(readFileFromWorkspace('microservices.dsl'))

// create job for every microservice
config.microservices.each { name, data ->
  createBuildJob(name,data)
}


def createBuildJob(name,data) {
  
  freeStyleJob("${name}-build") {
  
    scm {
      git {
        remote {
          url(data.url)
        }
        branch(data.branch)
        createTag(false)
      }
    }
  
    triggers {
       scm('H/15 * * * *')
    }

    steps {
      maven {
        mavenInstallation('3.1.1')
        goals('clean install')
      }
    }

    publishers {
      archiveJunit('/target/surefire-reports/*.xml')
    }
  
  }

}

We now check it into a repository so we can reference it in our seed build (you don’t have to do this, I have already prepared it for you in the master branch of our jobdsl-sample repository on GitHub).

Finally, we have to adapt our seed build to watch and check out the jobdsl-sample repository…

dsl-scm-section

… and use the checked out job.dsl instead of the hardcoded one:

dsl-groovy-section

That’s it. Now the seed job polls for changes on our sample repository, so if somebody adds a new microservice or alters our job.dsl, all jobs will be (re-)created automatically without any manual intervention.

Note: We could have put both the microservice.dsl and job.dsl in one file, as we had it in the first place. But now you can use your microservice.dsl independently of the job.dsl to automate all kinds of stuff. In our current project we use it e.g. for deployment and tooling like monitoring etc.
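As a (purely hypothetical) sketch of that idea, any plain Groovy script can slurp the very same file and do something completely different with it, e.g. print a deployment plan - file name and output are made up:

def slurper = new ConfigSlurper()
def config = slurper.parse(new File('microservice.dsl').toURI().toURL())

config.microservices.each { name, data ->
  println "would deploy ${name} from ${data.url} (branch ${data.branch})"
}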


Creating Views

In the post Bringing in the herd I described how helpful views can be to get an overview of all your jobs, or even to aggregate information. The Job DSL allows you to generate views just like jobs, so it’s a perfect fit for that need. In order to have some examples to play with, we will increase our set of jobs by adding an integration-test and a deploy job for every microservice. These jobs don’t do anything at all (means: they are worthless), we just use them to set up a build pipeline.

Note: in order to use the build pipeline, you have to install the Build Pipeline Plugin.

We set up a new DSL file called pipeline.dsl and push it to our sample repository (again, you don’t have to do this, it’s already there). We add two additional jobs per microservice and set up a downstream cascade. Means: at the end of each build (pipeline step) the next one is triggered:

pipeline.dsl
def slurper = new ConfigSlurper()
// fix classloader problem using ConfigSlurper in job dsl
slurper.classLoader = this.class.classLoader
def config = slurper.parse(readFileFromWorkspace('microservices.dsl'))

// create job for every microservice
config.microservices.each { name, data ->
  createBuildJob(name,data)
  createITestJob(name,data)
  createDeployJob(name,data)
}


def createBuildJob(name,data) {
  
  freeStyleJob("${name}-build") {
  
    scm {
      git {
        remote {
          url(data.url)
        }
        branch(data.branch)
        createTag(false)
      }
    }
  
    triggers {
       scm('H/15 * * * *')
    }

    steps {
      maven {
        mavenInstallation('3.1.1')
        goals('clean install')
      }
    }

    publishers {
      archiveJunit('/target/surefire-reports/*.xml')
      downstream("${name}-itest", 'SUCCESS')
    }
  }

}

def createITestJob(name,data) {
  freeStyleJob("${name}-itest") {
    publishers {
      downstream("${name}-deploy", 'SUCCESS')
    }
  }
}

def createDeployJob(name,data) {
  freeStyleJob("${name}-deploy") {}
}

Now change your seed job to use the pipeline.dsl instead of the job.dsl and let it run. Now we have three jobs for each microservice, cascaded as a build pipeline.

all-build

The Build Pipeline Plugin comes with its own view, the build pipeline view. If you set up this view and provide a build job, the view will render all cascading jobs as a pipeline. So now we are gonna generate a pipeline view for each microservice. As before I already provided the DSL for you in the repository.

pipeline-view.dsl
...

// create build pipeline view for every service
config.microservices.each { name, data ->
   buildPipelineView(name) {
     selectedJob("${name}-build")
   }
}

...

Not that complicated, eh? We just iterate over the microservices, and create a build pipeline view for each. All we have to specify is the name of the first job in the pipeline; the others are found by following the downstream cascade. Ok, so configure your seed job to use the pipeline-view.dsl and let it run. Now we have created a pipeline view for every microservice:

all-pipelines

If you select one view, you will see the state of all steps in the pipeline:

one-pipeline

Having a single view for each microservice will soon become confusing, but as described in Bringing in the herd, nested views will help you by aggregating information. So we are gonna group all our pipeline views together by nesting them in one view. We are going to generate a nested view containing the build pipeline views of all our microservices. I will list only the differences to the previous example; the complete script is provided for you on GitHub ;-)

pipeline-nested-view.dsl
...
// create nested build pipeline view
nestedView('Build Pipeline') { 
   description('Shows the service build pipelines')
   columns {
      status()
      weather()
   }
   views {
      config.microservices.each { name,data ->
         println "creating build pipeline subview for ${name}"
         view("${name}", type: BuildPipelineView) {
            selectedJob("${name}-build")
            triggerOnlyLatestJob(true)
            alwaysAllowManualTrigger(true)
            showPipelineParameters(true)
            showPipelineParametersInHeaders(true)
            showPipelineDefinitionHeader(true)
            startsWithParameters(true)
         }
      }
   }
}
...

So what do we do here? We create a nested view with the columns status and weather, and create a view of type BuildPipelineView for each microservice. A difference you might notice compared to our previous example is that we are setting some additional properties in the build pipeline view. The point is how we create the view: before, we used the dedicated DSL for the build pipeline view, which sets some property values by default. Here we are using the generic view, so in order to get the same result we have to set these values explicitly. Enough of the big words: configure your seed job to use the pipeline-nested-view.dsl, and let it run.

Note: You need to install the Nested View Plugin into your Jenkins in order to run this example.

pipeline-overview 

Cool. This gives us a nice overview of the state of all our build pipelines. And you can still watch every single pipeline by selecting one of the nested views:

pipeline-overview-one

So what have we got so far? Instead of using a hardcoded DSL in the job, we moved it to a dedicated repository. Our seed job watches this repository, and automatically runs on any change. Means: if we alter our job configuration or add a new microservice, the corresponding build jobs are automatically (re-)created. We also created some views to get more insight into the health of our build system.

That’s it for today. In the next and last installment I’d like to give you some hints on how to dig deeper into the Job DSL: Where you will find some more information, where to look if the documentation is missing something, faster turnaround using the playground, and some pitfalls I’ve already fallen into.

Regards
Ralf
I don't know that there are any short cuts to doing a good job.
Sandra Day O'Connor

Sunday, March 29, 2015

Job DSL Part I

Jenkins CI is a great tool for automating your build and deployment pipeline. You set up jobs for build, test, deployment and whatever, and let Jenkins do the work. But there’s a catch. In the recent blog post Bringing in the herd I already talked a bit about the difficulties you have to tackle if you are dealing with microservices: they are like rabbits! When you start with a project, there may be only a couple of microservices, but soon there will be a few dozen or even hundreds. Setting up jobs for these herds is a growing pain you have to master, and that’s where the Job DSL comes to the rescue. This post is the start of a small series on the Job DSL.

One lesson we already learned about microservices is that you have to automate everything. Even – or especially – the configuration of the tooling used to build, deploy, and monitor your application. Not to mention the things you have to do to run the application like distributing, load balancing etc. But let’s start with the build pipeline for the moment. Setting up a Jenkins job is an easy task. When you create a job using the UI, you just have to select the things Jenkins is supposed to do, like check out source code from Git, run the Maven or Gradle build, and publish the test results. Once the job does what you want, it is easy to set up this job for another project: Jenkins allows you to make copies. Just adapt some data like names and paths, and that’s it. So there’s no challenge in creating jobs for new microservices. If you have to create multiple jobs for each microservice – let’s say integration and acceptance tests, release builds, deployment to various environments – things start to get annoying. But one day you recognize that you – just for example - forgot to publish the checkstyle results, and you will have to change all your existing jobs… manually :-0

Don’t do it. Not even once! What do developers do in order to avoid repetitive, boring, annoying, error-prone tasks? They write a script, yep. We are lazy bones, so instead of doing stuff, we tell the machine what to do and have a cup of coffee while the work is being done. Jenkins job definitions are nothing but a little XML, so we could easily write a little script - Groovy has great built-in support for processing XML - and generate that. We could even invent a DSL using Groovy, so our script would be more readable. And since all that is so obvious, somebody already had this idea: the Jenkins Job DSL Plugin.
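Just to illustrate what such a hand-rolled script might look like, here is a purely hypothetical sketch that generates a heavily reduced config.xml fragment with Groovy’s MarkupBuilder:

import groovy.xml.MarkupBuilder

def writer = new StringWriter()
new MarkupBuilder(writer).project {
  description('order build job')
  triggers {
    'hudson.triggers.SCMTrigger' {
      spec('H/15 * * * *')
    }
  }
}
println writer

Workable, but you would end up maintaining all the XML plumbing yourself - which is exactly why a dedicated DSL is the better idea.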

The development of this plugin was driven by one of the protagonists of microservices: Netflix. They currently have about 600 microservices, so they really need to automate everything. And that’s why they invented the Job DSL: it allows you to describe your Jenkins job using a predefined DSL. It is implemented as a Jenkins plugin, so the creation of the Jenkins job is performed as a Jenkins job itself. Let’s start with a basic example. First we create a seed job. That’s the common lingo for the job that generates other jobs using the Job DSL:

seed-create

We will need only a single build step: the Job DSL:

seed-create-jobdsl

Now copy the following sample DSL to the editor field:

freeStyleJob('order-build') { 
  scm { 
    git { 
      remote { 
        url('https://github.com/ralfstuckert/jobdsl-sample.git') 
      } 
      branch('order')
      createTag(false) 
    } 
  } 
  triggers { 
     scm('H/15 * * * *') 
  } 

  steps { 
    maven { 
      mavenInstallation('3.1.1') 
      goals('clean install') 
    } 
  } 
}

Let’s go through it step by step. We define a freestyle build job named order-build. Next is a source control block with a Git repository. I didn’t want to set up a dozen repositories for the projects used in this example, so I used different branches. So to check out the order project, we select the branch named order. We don’t want Jenkins to create a tag (with the build number) after the checkout, so we set this property to false. In the trigger block we watch the source control system for changes every 15 minutes. In the following (build) steps block, we define just one step: Maven. A Maven installation is selected (as predefined in Jenkins) and the goals clean and install are executed. Save and run. Now we have generated a new job, order-build:

seed-and-order

Looks good, so let’s run the order-build. Yep, it builds :-)

order-run

Ok, so we generated a build job. But we could have done the same thing using the Jenkins UI, so what’s the big deal? The benefit of generating jobs pays off when you generate the same class of job for multiple projects. Let’s say we have some projects named customer, order, datastore etc. Now we will extend our DSL with a little Groovy code that iterates over these projects and creates a build job for each. So (re-)configure your seed build, and replace the DSL part with the following stuff:

def microservices = '''
microservices {
  ad {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'ad'
  }
  billing {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'billing'
  }
  cart {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'cart'
  }
  config {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'config'
  }
  controlling {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'controlling'
  }
  customer {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'customer'
  }
  datastore {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'datastore'
  }
  help {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'help'
  }
  logon {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'logon'
  }
  order {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'order'
  }
  preview {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'preview'
  }
  security {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'security'
  }
  shipping {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'shipping'
  }
  shop {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'shop'
  }
  statistics {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'statistics'
  }
  warrenty {
    url = 'https://github.com/ralfstuckert/jobdsl-sample.git'
    branch = 'warrenty'
  }
}
'''

def slurper = new ConfigSlurper()
// fix classloader problem using ConfigSlurper in job dsl
slurper.classLoader = this.class.classLoader
def config = slurper.parse(microservices)

// create job for every microservice
config.microservices.each { name, data ->
  createBuildJob(name,data)
}


def createBuildJob(name,data) {
  
  freeStyleJob("${name}-build") {
  
    scm {
      git {
        remote {
          url(data.url)
        }
        branch(data.branch)
        createTag(false)
      }
    }
  
    triggers {
       scm('H/15 * * * *')
    }

    steps {
      maven {
        mavenInstallation('3.1.1')
        goals('clean install')
      }
    }

  }

}

Ok, let’s go through this again step by step. At first, we define a little DSL describing our microservices. After that, we use the Groovy ConfigSlurper to parse the DSL (ignore the class loader stuff for the moment, that’s a bug). Then we iterate over the microservices and pass the name and the data of each service to the method createBuildJob(). This method contains the Job DSL we used in the first example. Well, almost: we parameterized some things like the name, Git URL and branch, so we can reuse the DSL for creating all the build jobs.

Let the seed job run again and watch the output:

all-projects-seed-console

Looks good. Now let’s see the dashboard:

all-projects-overview


Ta-da. It’s done. We generated more than a dozen build jobs using a single script. That’s it for the first installment. In the next part, we will alter our DSL, automate the job creation itself, and create some views.
When you got a job to do, 
you gotta do it well.
Paul McCartney – Live and let die

Monday, March 23, 2015

Bringing in the herd

Everybody is doing microservices at the time of writing. They promise to solve the problems we had with monolithic architectures: they are easy to deploy, scale, understand, and throw away, they are resilient, and they may be implemented using different technologies. That’s a hell of a lot of promises, but there are also downsides: microservices come in herds, and herds are hard to handle ;-) In our current project we use the Jenkins CI server to implement a continuous integration pipeline. For every microservice we have a couple of jobs:
  • Build: Compile the classes, build a jar, run the JUnit tests
  • ITest: Run the integration tests against the built jar
  • Deploy: Deploy the microservice to the environment
These steps are run one after another using the Build Pipeline Plugin. But when it comes to getting an overview of the state of the jobs, you have few choices. The All view is quite inadequate for that: even if you have only a couple dozen services, there are three jobs for every service, so the All view is quite crowded:

all

Jenkins provides views that let you filter the job list either by selecting each job, or by using regex. So we could easily create a view providing a nice overview of all build jobs:

all-build

But what about the integration-test and deploy jobs? Well, we could create corresponding views for those in the same manner. But that’s also not very appropriate, since we are interested in the pipeline. The Build Pipeline Plugin brings a special view for visualizing the state of the pipeline, so you are able to see the build health of your microservice in a single view:

one-pipeline

That’s fine for the developers of that microservice: they have all steps that matter to them in one view. But if we create a build pipeline view for every microservice, that’s still a confusing lot of views. In this example it’s only a couple of microservices; think what you will experience with dozens or hundreds of services:

all-pipelines-marked

If you are a team leader, or if you are developing multiple services, it would be perfect if you had an overview of all build pipelines. That’s where the Nested View Plugin comes to the rescue: it allows grouping job views into multiple levels instead of one big list of tabs:

pipeline-overview

You can still get down to the pipeline view by selecting the corresponding link:

pipeline-overview-one

That’s already quite nice, but the really neat thing is: you can aggregate the state of the complete pipeline. Let’s see what happens if one step of the pipeline fails:

pipeline-overview-failed

That’s what we want: you can see the state of the complete pipeline(s) at a glance. And if you step down into the customer pipeline subfolder, you will see which step failed:

pipeline-overview-one-failed

Currently only the state and weather columns are supported in the nested view plugin, but there is already an open issue requesting other columns.

That’s it for today
Ralf

Update 24.03.2015

Just to make that point clear: you can also nest nested views. If you have just a couple of microservices, it is ok to have all build pipelines on one overview. But if they don’t fit on one view, you can use nested views to create groups:
nested-overview
Here we use three (imaginary) groups backend, base and shop. You still have the state aggregation feature, and if you step down into the next level, you’ll see pipelines contained in that group:
nested-overview-2
Regards
Ralf

Friday, June 15, 2012

The Covariant Return Type Abyssal

The possibility to define a more specific return type when overriding a method was introduced together with generics a long time ago. The number of casts needed has diminished dramatically since those days, and our code is now a lot more readable. But there are also some nasty pitfalls you won't recognize before you fall in. In this article on DZone I describe some of those traps we stumbled over lately.
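Just as a quick, made-up reminder of the feature itself (not taken from the article):

class Animal {
    Animal reproduce() {
        return new Animal();
    }
}

class Rabbit extends Animal {
    @Override
    Rabbit reproduce() {   // covariant return type: Rabbit instead of Animal
        return new Rabbit();
    }
}

class CovariantDemo {
    public static void main(String[] args) {
        Rabbit bunny = new Rabbit().reproduce();   // no cast needed
    }
}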

Wednesday, June 9, 2010

Give me a ping Vasily, Part I

Writing integration tests is not an easy task. Besides the business complexity, it is also often not easy to set up a suitable test scenario. This article will show an easy way to write integration tests for common problems in Eclipse based client-server projects.

It started in a meeting with project management. The leader of a large-scale SmartClient project asked: "How come we have so many unit tests, but only a few integration tests?". I guess the main problem was that the application was based on large, interwoven (host) database tables, and we were not allowed to alter any data... even in the test environment. So in fact, we could not set up any test data; we were forced to base our test cases on the existing data.

But it was living data, and so it was subject to change. Means: even if you had set up a test based on that data, it was very expensive to maintain. The project leader claimed: "We need a kind of integration test that is easy to write, needs no maintenance and runs in any stage from development to production". That's three wishes at once. I ain't no Jeannie in a bottle, man ;-)

But let's be serious for a second. What kind of common problems do we have in large multi-tier client-server projects?
  • The components of the application are loosely coupled, so a lot of problems do not occur until runtime. Especially in a distributed development environment.
  • The environment-specific part of the application is extracted into stages. So if a developer is not careful, he might forget to provide the needed information in all stages.
  • Infrastructure: In a large distributed system there are a whole bunch of things that can go wrong, e.g. firewall problems, missing entries in the host file, etc.
So, if business-motivated integration tests are hard to write, maybe there's a way to write tests for the problems mentioned above? Well, the core of current multi-tier architectures is usually based on (local and remote) services. By successfully calling a service we can prove that... er, yo what?!? Well, for local services it proves that the service is registered and the corresponding plugin is active. For remote services it also means that we can connect from client to server. Services often need other resources to fulfill their job, e.g. other services, databases, etc. If these resources are not available, the service itself is also not - or only partially - available. So our call to the service should also call all resources it depends on. And finally, if any call fails, the failure message will give you a hint on the cause.

The Eclipse Riena Project provides a little framework for writing and running such tests: the Ping API. The main interface is IPingable, which defines a non-business service dedicated to implementing the test described above:
public interface IPingable {

    PingVisitor ping(PingVisitor visitor);

    PingFingerprint getPingFingerprint();
}

The ping() method defines the dedicated service call. The PingFingerprint is needed for reporting and to avoid cycles. So, any service that wants to get pinged has to implement that interface. The implementation is quite easy:
public PingVisitor ping(final PingVisitor visitor) {
    return visitor.visit(this);
}

public PingFingerprint getPingFingerprint() {
    return new PingFingerprint(this);
}

Not that hard, is it? And for the lazy ones (like me ;-) there is the class DefaultPingable which you can derive from. But here comes the tedious job: all services that wanna get pinged (that's usually ALL services) have to implement IPingable. Means all service interfaces must extend IPingable:
public interface IRidiculousService extends IPingable {

We already discussed the implementation. If you can derive from DefaultPingable, it is just
public class RidiculousServiceImpl extends DefaultPingable implements IRidiculousService {

Until now we've only pinged a single service. What about its dependencies? What about resources like databases or backend servers? Let's take the following silly architecture as an example:


On the client side we have a couple of services. Some of them are local, others (dashed) are stubs for remote services. On the server side there are the service implementations for the client stubs, which themselves may call other services, a database or other backend resources like a mail server. How do they get pinged? That's the PingVisitor's job. Remember the implementation of ping():
public PingVisitor ping(final PingVisitor visitor) {
    return visitor.visit(this);
}

When visitor.visit(this) is called, the PingVisitor inspects the service for member variables that implement IPingable (using introspection), collects and then pings 'em.

Let's take the SeriousService from our silly architecture for example:
public class SeriousServiceImpl extends DefaultPingable implements ISeriousService {

    private IRidiculousService ridiculousService;

    @InjectService
    public void bind(IRidiculousService ridiculousService) {
        this.ridiculousService = ridiculousService;
    }

    public void unbind(IRidiculousService ridiculousService) {
        this.ridiculousService = null;
    }
    ...
}

The IRidiculousService is injected and stored in a member variable. On ping() the visitor will find the ridiculousService and ping it, too. Means: ping() is called recursively on all IPingables found.

But I don't use injection, I don't keep services in member variables. That's evil. I fetch services when I need 'em:
public void doSomeRidiculousStuff() {
    IRidiculousService ridiculousService = Service.get(IRidiculousService.class);
    ridiculousService.dontWorryBeHappy();
}

No problem. Just provide a method getAdditionalPingables() that returns all IPingables you are interested in:
private Iterable<IPingable> getAdditionalPingables() {
    List<IPingable> pingables = new ArrayList<IPingable>();
    pingables.add(Service.get(IRidiculousService.class));
    ...
    return pingables;
}

If the PingVisitor finds a method matching that signature, it will ping all IPingables in that Iterable. The drawback of this approach is that you have to maintain the list of services yourself.

Databases and other resources
By now we can ping all client and server side services. But what about other resources like databases or, say, the mail server? Wouldn't it be nice if you could ping them as well? Sure you can: the PingVisitor inspects the IPingable for methods called void ping...(), where the first character after ping must be an upper case letter, e.g.:
private void pingDatabase() {
...
}

In pingDatabase() you have to check if the database is available, e.g. make a select for a row with a certain ID. It doesn't even matter if that ID exists: If the select returns a result (that might be empty), it proves that:
  • the database exists
  • you can connect to it
  • the table exists
If that's not enough, you can write a stored procedure (named ping ;-) that can perform all kinds of checks. And your pingDatabase() method just calls the stored proc.
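A minimal sketch of such a check could look like this - plain JDBC, and both the dataSource field and the customer table are made-up assumptions, not part of the Riena API:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class CustomerServiceImpl extends DefaultPingable {

    // made-up assumption: the service already holds a DataSource
    private DataSource dataSource;

    private void pingDatabase() {
        // a cheap select proves that the database is up, the connection works and
        // the table exists - whether the row is actually there doesn't matter
        try (Connection connection = dataSource.getConnection();
                PreparedStatement statement = connection.prepareStatement(
                        "SELECT id FROM customer WHERE id = ?")) {
            statement.setLong(1, -1L);
            statement.executeQuery();
        } catch (SQLException e) {
            throw new RuntimeException("database ping failed", e);
        }
    }
}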

In the same way, you can check all kinds of resources, e.g. for a mail server you could check whether a HELO/QUIT succeeds.

Ping'em all
So now all services implement IPingable... but we haven't pinged anything yet. How do we do that? Well, you could write a JUnit test that collects all pingable services, creates a PingVisitor and calls ping() on them. Or you could use the helper class Sonar which does exactly that for you. Or... you could use the Sonar User Interface which will be introduced in the next part.

Regards
Ralf

Give me a ping Vasily, Part II


This is the second part of a two-part article on Ping. The first part gave you an introduction to the Riena Ping API. This second part will show you how to use the Sonar UI to run ping tests.
The first intention was to use ping for automated, JUnit-driven, integration tests. But that did not work out for two reasons:
  • The build server that was supposed to run the tests was a Unix system, but the client was targeted at Windows.
  • The build server was located in the intranet and was not allowed to connect to the application server.
During discussion of these problems, one of the infrastructure guys said: "Wouldn't it be cool, if we could use those ping tests as part of our daily system check?" Bingo! That's the idea. Instead of preparing a client-like test setup, just integrate the tests into the client. And that's what Sonar is all about: a UI dedicated to running ping tests that can easily be integrated into any (Riena-based) client product:


No need to say that it was inspired by the JUnit-Runner, eh? ;-) It consists of a single plugin (org.eclipse.riena.example.ping.client) which provides the Sonar Submodule and the main logic, and also a menu and command handler for bringing up the module. The plugin is currently part of the Riena example application.

So that's what Sonar basically does: If you press the start button...
  • it collects all services that implement the IPingable interface
  • it creates a new PingVisitor and pings all services
  • it renders the result tree and provides any failure messages

So usage is quite simple: just run the tests. If everything is green, your system is basically ok and you can start functional testing. If it's red, you have to analyze the failure message and see what's wrong. Let's do this by example. Take the following silly architecture from the first part:


On the client side we have a couple of services. Some of them are local, others are stubs for remote services (dashed). On the server side, there are the service implementations for the client stubs, which themselves may call other services, databases or other backend systems like, e.g., a mail server. Now let's start a ping and see what happens:


Oops, all remote services are red. What's wrong?!? Let's have a look at the failure message:

org.eclipse.riena.communication.core.RemoteFailure:
Error while invoking remote service at
...
Caused by: java.net.ConnectException: Connection refused


Alright, in our silly example I just forgot to start the server. But in reality, a couple of things could be the cause: firewalls, wrong host names, network failures, whatever. But here the solution is quite simple: just start the server and try again:


Hmm, the remote services have been called successfully, but all pings to the database failed. What does the failure message say?

java.lang.RuntimeException: java.lang.reflect.InvocationTargetException at
...
Caused by: java.sql.SQLNonTransientConnectionException:
Connection authentication failure occurred. Reason: Invalid authentication..


Oh, we've used a wrong database user. Smells like a stages problem. The developer has used the correct database user in the development stage, but forgot to set up the appropriate user for test resp. production. So let's fix the stage by setting up the correct DB user and try again:


Aah, nicely green at last. So all system components from client to database are basically available now ;-)

Conclusion
Sonar provides an easy way to run ping tests directly from your client product. The ping tests don't help you test business functionality, but they give you a tool for checking basic integration and infrastructure problems... at almost no cost :-)

Give me a ping, Vasily.
One ping only, please!
Captain Ramius
Boat Red October