Dynamically spinning up Jenkins slaves on Docker clusters

Introduction:

Being able to dynamically spin up slave containers is great. But if we want to support significant build volumes we need more than a few Docker hosts. Defining a separate Docker cloud instance for each new host is definitely not something we want to do – especially as we’d need to redefine the slave templates for each new host. A much nicer solution is to combine our Docker hosts into a cluster managed by a so-called container orchestrator (or scheduler) and then define that whole cluster as one cloud instance in Jenkins.
This way we can easily expand the cluster by adding new nodes to it without needing to update anything in the Jenkins configuration.

There are four leading container orchestration platforms today:

Kubernetes (open-sourced and maintained by Google)

Docker Swarm (from Docker Inc. – the company behind Docker)

Marathon (a part of the Mesos project)

Nomad (from Hashicorp)

A container orchestrator (or scheduler) is a software tool for the deployment and management of OS containers across a cluster of computers (physical or virtual). Besides running and auditing the containers, orchestrators provide features such as software-defined network routing, service discovery, load balancing, secret management and more.

There are dedicated Jenkins plugins for Kubernetes and Nomad using the Cloud extension point, which means they both provide the same ability to spin up slaves on demand. But instead of doing it on a single Docker host, they talk to the Kubernetes or Nomad master API respectively in order to provision slave containers somewhere in the cluster.
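
From the job’s point of view nothing changes: the build simply requests a node by label, and the cloud plugin provisions a matching slave container somewhere in the cluster and tears it down when the build is done. Here’s a minimal Pipeline sketch, assuming a slave (or pod) template with the hypothetical label 'cluster-slave' has been defined in the cloud configuration:

// 'cluster-slave' is a hypothetical label – it must match a slave template defined in the cloud
node('cluster-slave') {
  stage('Build') {
    // This step runs inside a container provisioned on demand somewhere in the cluster
    sh 'mvn -B clean verify'
  }
}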

Nomad

The Nomad plugin was originally developed by Ivo Verberk and further enhanced by yours truly while doing an exploratory project for Taboola. A detailed post describing our experience will be up on the Taboola engineering blog sometime next month.
Describing Nomad usage is out of the scope of this book, but in general – exactly like the YAD plugin – it allows one to define a Nomad cloud and a number of slave templates. You can also define the resource requirements for each template, so Nomad will only send your slaves to nodes that can provide the necessary amount of resources.
Currently there are no dedicated Pipeline support features in the Nomad plugin.
Here’s a screenshot of the Nomad slave template configuration:

Kubernetes

The Kubernetes Plugin was developed and is still being maintained by Carlos Sanchez. The special thing about Kubernetes is that its basic deployment unit is a pod, which can consist of one or more containers. So here you get to define pod templates, and each pod template can hold multiple container templates. This is definitely great when we want predefined testing resources to be provisioned in a Kubernetes cluster as part of the build.

The Kubernetes plugin has strong support for Jenkins pipelines with things like this available:

podTemplate(label: 'mypod', containers: [
    containerTemplate(name: 'maven', image: 'maven:3.3.9-jdk-8-alpine', ttyEnabled: true, command: 'cat'),
    containerTemplate(name: 'golang', image: 'golang:1.8.0', ttyEnabled: true, command: 'cat')])
{
  // Both container templates run in a single pod, so they share the same build workspace
  node('mypod') {
    stage('Checkout JAVA') {
      git 'https://github.com/jenkinsci/kubernetes-plugin.git'
      // Run the following steps inside the 'maven' container of the pod
      container('maven') {
        stage('Build Maven') {
          sh 'mvn -B clean install'
        }
      }
    }
    stage('Checkout Go') {
      git url: 'https://github.com/hashicorp/terraform.git'
      // And these steps inside the 'golang' container
      container('golang') {
        stage('Build Go') {
          sh """
          mkdir -p /go/src/github.com/hashicorp
          ln -s `pwd` /go/src/github.com/hashicorp/terraform
          cd /go/src/github.com/hashicorp/terraform && make core-dev
          """
        }
      }
    }
  }
}

There’s detailed documentation in the plugin’s README.md on GitHub.

Marathon

There is a Jenkins Marathon plugin, but instead of spinning up build slaves it simply provides support for deploying applications to a Marathon-managed cluster.
It requires a Marathon .json file to be present in the project workspace.
There’s also support for Pipeline code. Here’s an example of its usage:

 marathon(
   url: 'http://otomato-marathon',
   id: 'otoid',
   docker: 'otomato/oto-trigger')
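
And here’s a hedged sketch of how the step could fit into a scripted pipeline. The checkout is there because the Marathon .json definition has to be present in the workspace, and the parameter values are the same hypothetical ones as above:

 node {
   // Bring the project (including the Marathon .json definition) into the workspace
   checkout scm

   stage('Deploy to Marathon') {
     marathon(
       url: 'http://otomato-marathon',
       id: 'otoid',
       docker: 'otomato/oto-trigger')
   }
 }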

 

Docker Swarm

I used to think there was no dedicated plugin for Swarm but then I found this. As declared in the README.md, this plugin doesn’t use the Jenkins cloud API, even though it does connect into it for label-based slave startup. This non-standard approach is probably the reason why the plugin isn’t hosted on the official Jenkins plugin repository.
The last commit on the GitHub repository dates back nine months, so it may also be outdated – Docker and Swarm are changing all the time, and so are their APIs.

Learn More:

This post is a chapter from the ebook ‘Docker, Jenkins, Docker’ which we recently released at Jenkins User Conference TLV 2017.  Follow this link to download the full ebook: http://otomato.link/go/docker-jenkins-docker-download-the-ebook-2/



Continuous Lifecycle London 2017

Last week I had the honour of speaking about ChatOps at the Continuous Lifecycle conference in London. The conference is organised by The Register and heise Developer and is dedicated to all things DevOps and Continuous Software Delivery. There were two days of talks and one day of workshops. Regretfully I couldn’t attend the last day, but I heard some of the workshops were really great.

The Venue


The venue was great! Situated right in the historic centre of London, a few steps away from Big Ben, the QEII Centre has a breathtaking view and a lot of space. The talks took place in three rooms: one large auditorium and two smaller ones. It is quite hard to predict which talks will attract the largest audience, and it was hit and miss this time around too. Some talks were over-crowded while others felt a bit empty.

Between the talks everybody gathered in the recreation area to collect merchandise from the sponsors’ stands and enjoy coffee and refreshments.

The Audience

The participants were mostly engineers, architects and engineering managers. As happens too often at DevOps gatherings, the business folks were relatively few. Which is a pity, because DevOps and CI/CD are a clear business advantage based on better tech and process optimization. The sad part is that the techies understand it, but the business people still too often fail to see the connection.

The Talks

Besides the keynotes (I only attended the first one) there were three tracks running in parallel. I had a chance to attend a few selected talks in between networking, mentally preparing for my talk and relaxing afterwards.

The keynote

The opening keynote was delivered by Dave Farley, the author of the canonical ‘Continuous Delivery’ book. Dave is a great speaker. He talked about the necessity of iterative and incremental approaches to software engineering, bringing some exciting examples from space exploration history. Still, to me it felt a bit like he was recycling his own ideas. The book was published 7 years ago. At that time it was a very important work: it laid out all the concepts and practices many of us were applying (or at least trying to promote) in a clear and concise way. I myself have used many of the examples from the book to explain CI/CD to my managers and employees numerous times over the years.

But time has passed and I feel we need to find new ways of bringing the message. I do realise many IT organisations are still far from real continuous delivery. Some still don’t feel the danger of not doing it, others are afraid of sacrificing quality for speed. But more or less everybody already knows the theory behind it: small batches, process automation, trunk-based development, integrated systems, etc. It’s the implementation of these ideas that organisations are struggling with. The reasons for that are manifold: politics, low trust, inertia, stress, burnout and lack of motivation. And of course the ever-growing tool sprawl. What people really want to hear today is how to navigate this new reality: practical advice on where to start, what to measure and how to communicate about it. Not the beaten story of agile software delivery and how it’s better than other methodologies.

The War Stories

Thankfully there was no lack of success and failure stories, nor of practical tips. There were some great talks on how to do deployments correctly, stories of successful container adoption, and Sarah Wells’ excellent presentation on methodologies for influencing and coordinating the behaviours of distributed autonomous teams.

Focus on Security

As I already said, quite naturally not all the talks got the same level of interest. Still, I think I noticed a certain trend: the talks dedicated to security attracted the largest crowds, which is in itself very interesting. Security wasn’t traditionally on the priority list of DevOps-oriented organisations. Agility, quality, reliability – yes. Security – maybe later.

The disconnect was so obvious that some folks even called for adding the InfoSec professionals into the loop while inventing such clumsy terms as DevOpSec or DevSecOps.

But now it looks like there’s a change in focus. New deployment and orchestration technologies are bringing new challenges, and we suddenly see the DevOps enablers looking for answers to some hard questions that InfoSec is asking. No wonder all the talks on security I attended got a lot of attention. Lianping Chen’s presentation focused on securing our CI/CD pipelines, while Dr. Phil Winder provided a great overview of container security best practices with a live demo and quite a few laughs. And there was also Jordan Taylor’s courageous live demo of using Hashicorp Vault for secret storage.

As a side note — if you’re serious about your web application and API security — you should definitely look at beame.io — they have some great tech for easy provisioning of SSL certificates in large volumes.

And for InfoSec professionals looking to get a grip on container technologies here’s a seminar we’ve recently developed : http://otomato.link/otomato/training/docker-for-information-security-professionals/

ChatOps

My talk was dedicated to the subject that I’ve been passionate about for the last couple of years: ChatOps. The slides are already online, but they only illustrate the ideas I was describing, so it’s better to wait until the video gets edited and uploaded (yes, I’m impatient too). In fact, while preparing for the talk I laid out most of my thoughts in writing, and I’m now thinking of converting that into a blog post. Hope to find some time for editing in the upcoming days. And if you’d like some help or advice enabling ChatOps at your company – drop us a line at contact@otomato.link

There was another talk at the conference somewhat related to the topic. Job van der Voort, GitLab’s product marketing manager, described what he calls ‘Conversational Development’: “a natural evolution of software development that carries a conversation across functional groups throughout the development process.” GitLab is a 100% remote working company and, according to Job, this mode of operation allows them to be effective and ensure good communication across all teams.

GitLab Dinner

At the end of the first day all the speakers got an invitation to a dinner organised by GitLab. There were no sales pitches, only good food and a great opportunity to talk to colleagues from all across Europe. Many thanks go to Richard and Job from GitLab for hosting the event. BTW, I just discovered that Job is coming to Israel and will be speaking at a meetup organised by our friends and partners, the great ALMToolBox. If you’re in Israel it’s a great chance to learn more about GitLab and enjoy some pizza and beer on the 34th floor of Electra Tower. I’ll be there.