Continuous Lifecycle London 2017


Last week I had the honour of speaking about ChatOps at the Continuous Lifecycle conference in London. The conference is organised by The Register and heise Developer and is dedicated to all things DevOps and Continuous Software Delivery. There were two days of talks and one day of workshops. Regretfully I couldn’t attend the last day, but I heard some of the workshops were really great.

The Venue


The venue was great! Situated right in the historical centre of London, a few steps away from Big Ben, the QEII Centre has a breathtaking view and a lot of space. The talks took place in three rooms: one large auditorium and two smaller ones. It is quite hard to predict which talks will attract the largest audience, and it was hit and miss this time around too: some talks were overcrowded while others felt a bit empty.

Between the talks everybody gathered in the recreation area to collect merchandise from the sponsors’ stands and enjoy coffee and refreshments.

The Audience

The participants were mostly engineers, architects and engineering managers. As happens too often at DevOps gatherings, the business folks were relatively few. Which is a pity, because DevOps and CI/CD are a clear business advantage based on better tech and process optimization. The sad part is that the techies understand it, but the business people still too often fail to see the connection.

The Talks

Besides the keynotes (I only attended the first one) there were 3 tracks running in parallel. I had a chance to attend a few selected talks in between networking, mentally preparing for my own talk and relaxing afterwards.

The keynote

The opening keynote was delivered by Dave Farley, the author of the canonical ‘Continuous Delivery’ book. Dave is a great speaker. He talked about the necessity of iterative and incremental approaches to software engineering, bringing some exciting examples from space exploration history. Still, to me it felt a bit like he was recycling his own ideas. The book was published 7 years ago. At the time it was a very important work: it laid out all the concepts and practices many of us were applying (or at least trying to promote) in a clear and concise way. I myself have used many of the examples from the book to explain CI/CD to my managers and employees numerous times over the years. But time has passed and I feel we need to find new ways of bringing the message. I do realise many IT organisations are still far from real continuous delivery. Some still don’t feel the danger of not doing it, others are afraid of sacrificing quality for speed. But more or less everybody already knows the theory behind it: small batches, process automation, trunk-based development, integrated systems, etc. It’s the implementation of these ideas that organisations are struggling with. The reasons for that are many: politics, low trust, inertia, stress, burnout and lack of motivation. And of course the ever-growing tool sprawl. What people really want to hear today is how to navigate this new reality: practical advice on where to start, what to measure and how to communicate about it. Not the beaten story of agile software delivery and how it’s better than other methodologies.

The War Stories

Thankfully there was no lack of both success and failure stories and practical tips. There were some great talks on how to do deployments correctly, stories of successful container adoption and also Sarah Wells’ excellent presentation of the methodologies for influencing and coordinating the behaviours of distributed autonomous teams.

Focus on Security

As I already said — quite naturally not all the talks got the same level of interest. Still I think I noticed a certain trend — the talks dedicated to security attracted the largest crowd. Which is in itself very interesting. Security wasn’t traditionally on the priority list of DevOps-oriented organisations. Agility, quality, reliability — yes. Security — maybe later.

The disconnect was so obvious that some folks even called for adding the InfoSec professionals into the loop while inventing such clumsy terms as DevOpSec or DevSecOps.

But now it looks like there’s a change in focus. New deployment and orchestration technologies are bringing new challenges, and we suddenly see the DevOps enablers looking for answers to some hard questions that InfoSec is asking. No wonder all the talks on security I attended got a lot of attention. Lianping Chen’s presentation was focused on securing our CI/CD pipeline, while Dr. Phil Winder provided a great overview of container security best practices with a live demo and quite a few laughs. And there was also Jordan Taylor’s courageous live demo of using HashiCorp Vault for secret storage.

As a side note — if you’re serious about your web application and API security — you should definitely look at beame.io — they have some great tech for easy provisioning of SSL certificates in large volumes.

And for InfoSec professionals looking to get a grip on container technologies, here’s a seminar we’ve recently developed: http://otomato.link/otomato/training/docker-for-information-security-professionals/

ChatOps

My talk was dedicated to a subject I’ve been passionate about for the last couple of years: ChatOps. The slides are already online, but they only illustrate the ideas I was describing, so it’s better to wait until the video gets edited and uploaded (yes, I’m impatient too). In fact, while preparing for the talk I laid out most of my thoughts in writing, and I’m now thinking of converting that into a blog post. I hope to find some time for editing in the upcoming days. And if you’d like some help or advice enabling ChatOps at your company, drop us a line at contact@otomato.link
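The core mechanics of ChatOps are simple enough to sketch in a few lines. Below is a toy Python command dispatcher; all command names, patterns and handlers are invented for illustration, and real ChatOps bots (Hubot, Lita and friends) are of course far richer:

```python
import re

# Toy ChatOps dispatcher. All commands and handlers are invented for
# illustration - a real bot would hook into Slack/IRC and your CD tooling.
HANDLERS = {}

def command(pattern):
    """Register a handler for chat messages matching the given regex."""
    def register(fn):
        HANDLERS[re.compile(pattern)] = fn
        return fn
    return register

@command(r"deploy (?P<app>\w+) to (?P<env>\w+)")
def deploy(app, env):
    # A real handler would trigger the delivery pipeline here.
    return f"Deploying {app} to {env}..."

@command(r"status of (?P<app>\w+)")
def status(app):
    # A real handler would query monitoring.
    return f"{app}: all instances healthy"

def handle(message):
    """Dispatch an incoming chat message to the first matching handler."""
    for pattern, fn in HANDLERS.items():
        match = pattern.fullmatch(message)
        if match:
            return fn(**match.groupdict())
    return "Sorry, I don't know that command."
```

The point isn’t the dispatcher itself, it’s that `handle("deploy web to staging")` runs in a shared channel, so the whole team sees both the command and its result.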

There was another talk somehow related to the topic at the conference. Job van der Voort — GitLab’s product marketing manager — described what he calls ‘Conversational Development’ — “a natural evolution of software development that carries a conversation across functional groups throughout the development process.” GitLab is a 100% remote working company and according to Job, this mode of operation allows them to be effective and ensure good communication across all teams.

GitLab Dinner

At the end of the first day all the speakers got an invitation to a dinner organised by GitLab. There were no sales pitches, only good food and a great opportunity to talk to colleagues from all across Europe. Many thanks go to Richard and Job from GitLab for hosting the event. BTW, I just discovered that Job is coming to Israel and will be speaking at a meetup organised by our friends and partners, the great ALMToolBox. If you’re in Israel it’s a great chance to learn more about GitLab and enjoy some pizza and beer on the 34th floor of Electra Tower. I’ll be there.



DevOps is a Myth

(Practitioner’s Reflections on The DevOps Handbook)

The Holy Wars of DevOps

Yet another argument explodes online around the ‘true nature of DevOps’, around ‘what DevOps really means’ or around ‘what DevOps is not’. At each conference I attend we talk about DevOps culture, DevOps mindset and DevOps ways. All confirming one single truth: DevOps is a myth.

Now don’t get me wrong – in no way is this a negation of its validity or importance. As Y. N. Harari shows so eloquently in his book ‘Sapiens’, myths were the forming power in the development of humankind. It is in fact our ability to collectively believe in these non-objective, imagined realities that allows us to collaborate at large scale, to coordinate our actions, to build pyramids, temples, cities and roads.

There’s a Handbook!

I am writing this while finishing the exceptionally well written “DevOps Handbook”. If you really want to know what stands behind the all-too-often misinterpreted buzzword, you had better read this cover to cover. It presents an almost-no-bullshit deep dive into the why, how and what of DevOps. And it comes from the folks who invented the term and have been busy developing its main concepts over the last 7 years.


Now notice – I’m only saying you should read the “DevOps Handbook” if you want to understand what DevOps is about. After finishing it I’m pretty sure you won’t have any interest in participating in petty arguments along the lines of ‘is DevOps about automation or not?’. But I’m not saying you should read the handbook if you want to know how to improve and speed up your software manufacturing and delivery processes. And neither if you want to optimize your IT organization for innovation and continuous improvement.

Because the main realization that you, as a smart reader, will arrive at – is just that there is no such thing as DevOps. DevOps is a myth.

So What’s The Story?

It all basically comes down to this: some IT companies achieve better results than others. Better revenues, higher customer and employee satisfaction, faster value delivery, higher quality. There’s no one-size-fits-all formula, there is no magic bullet – but we can learn from these high performers and try to apply certain tools and practices in order to improve the way we work and achieve similar or better results. These tools and processes come from a myriad of management theories and practices. Moreover, they are constantly evolving, so we need to always be learning. But at least we have the promise of a better life. That is, if we get it all right: the people, the architecture, the processes, the mindset, the org structure, etc.

So it’s not about certain tools, because the tools will change. And it’s not about certain practices, because we’re creative and frameworks come and go. I don’t see too many folks using Kanban boards 10 years from now (in the same way that only the laggards use Gantt charts today). And then the speakers at the next fancy conference will tell you it’s mainly about culture. And you know what culture is? It’s just a story, or rather a collection of stories that a group of people share. Stories that tell us something about the world and about ourselves. Stories that have only a very relative connection to the material world. Stories that can easily be proven to be myths by another group of folks who believe them to be wrong.

But Isn’t It True?

Anybody who’s studied management theories knows how the approaches have changed since the beginning of the last century. From Taylor’s scientific management down to McGregor’s Theory X and Theory Y, they’ve all had their followers: managers who applied them and swore they were getting great results thanks to them. And yet most of these theories have been proven wrong by their successors.

In the same way, we see this happening with DevOps and Agile. Agile has been all the buzz since its inception in 2001. Teams were moving to Scrum, then Kanban, now SAFe and LeSS. But Agile didn’t deliver on its promise of a better life. Or rather, it became so commonplace that it lost its edge. Without the hype, we now realize it has its downsides. And we now hope that maybe this new DevOps thing will make us happy.

You may say that the world is changing fast – that’s why we now need new approaches! And I agree – the technology, the globalization, the flow of information – they all change the stories we live in. But this also means that whatever is working for someone else today probably won’t work for you tomorrow, because the world will change yet again.

Which means that the DevOps Handbook – while a great overview and historical document and a source of inspiration – should not be taken as a guide to action. It’s just another step towards establishing the DevOps myth.

And that takes us back to where we started – myths and stories aren’t bad in themselves. They help us collaborate by providing a common semantic system and shared goals. But they only work while we believe in them and until a new myth comes around – one powerful enough to grab our attention.

Your Own DevOps Story

So if we agree that DevOps is just another myth, what are we left with? What do we at Otomato and other DevOps consultants and vendors have to sell? Well, it’s the same thing we’ve been building even before the DevOps buzz: effective software delivery and IT management. Based on tools and processes, automation and effective communication. Relying on common sense and on being experts in whatever myth is currently believed to be true.

As I keep saying – culture is a story you tell. And we make sure to be experts in both the storytelling and the actual tooling and architecture. If you’re currently looking at creating a DevOps transformation or simply want to optimize your software delivery – give us a call. We’ll help to build your authentic DevOps story, to train your staff and to architect your pipeline based on practice, skills and your organization’s actual needs. Not based on myths that other people tell.



Infrastructure As Code Revisited


One cannot talk about modern software delivery without mentioning Infrastructure as Code (IAC). It’s one of the cornerstones of DevOps. It turns ops into part-time coders and devs into part-time ops. IAC is undoubtedly a powerful concept: it has enabled the shift to giant-scale data centers and clouds and has made a lot of lives easier. Numerous tools (generally referred to as DevOps tools) have appeared in the last decade to allow codified infrastructures. And even tools that originally relied on a user-friendly GUI (and probably owe much of their success to the GUI) are now putting more emphasis on switching to codified flows (I am talking about Jenkins 2.0 of course, with its enhanced support for delivery pipeline-as-code).
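To make the pipeline-as-code idea concrete: the delivery pipeline itself lives in the repository as code instead of being clicked together in a GUI. Here is a toy Python sketch of the concept (this is not Jenkins’ actual API; Jenkins 2.0 pipelines are defined in Groovy Jenkinsfiles, and the stage functions below are invented):

```python
# Toy sketch of pipeline-as-code: the pipeline is an ordered list of stages,
# each a plain function, kept in version control next to the application.
# Invented for illustration; not the API of any real CI server.

def run_pipeline(stages):
    """Run stages in order; stop at the first failure, like a CI pipeline."""
    results = []
    for name, stage in stages:
        try:
            stage()
            results.append((name, "ok"))
        except Exception as exc:
            results.append((name, f"failed: {exc}"))
            break  # later stages never run after a failure
    return results

def build():
    pass  # e.g. compile and package the application

def run_tests():
    pass  # e.g. execute the automated test suite

def deploy():
    raise RuntimeError("no credentials")  # a deliberately failing stage

pipeline = [("build", build), ("test", run_tests), ("deploy", deploy)]
```

Because the definition is plain text, it can be reviewed, versioned and diffed like any other code, which is exactly the appeal over GUI-built jobs.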

IAC is easy to explain and has its clear benefits:

  • Automates manual tasks (and thus reduces cost)
  • Speeds up execution
  • Allows version control of infrastructure configuration
  • Reduces human error
  • Brings devs and ops closer together by giving them a common language
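The principle behind all of these benefits is the same: the desired state of the infrastructure is described as data under version control, and a tool converges each machine towards it. A deliberately tiny Python sketch of that convergence loop follows; all hosts, packages and function names are invented, and real tools like Puppet or Ansible are of course far richer:

```python
# Toy sketch of the IAC principle: desired state as version-controlled data,
# applied idempotently. Everything here is invented for illustration.

DESIRED_STATE = {
    "web-01": {"packages": ["nginx"], "services": ["nginx"]},
    "db-01": {"packages": ["postgresql"], "services": ["postgresql"]},
}

def plan_host(host, actual, desired):
    """Compute the actions needed to converge one host to its desired state."""
    actions = []
    for pkg in desired["packages"]:
        if pkg not in actual.get("packages", []):
            actions.append(f"install {pkg} on {host}")
    for svc in desired["services"]:
        if svc not in actual.get("running", []):
            actions.append(f"start {svc} on {host}")
    return actions

def converge(inventory):
    """Plan actions for every host; an empty plan means nothing to do."""
    plan = []
    for host, desired in DESIRED_STATE.items():
        plan.extend(plan_host(host, inventory.get(host, {}), desired))
    return plan
```

Running converge() against an already-converged inventory yields an empty plan, and that idempotency is what makes such automation safe to re-run on thousands of servers.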


But why am I writing about this now? What made me revisit this already quite widely known and accepted pattern? (Even many enterprise organizations are now ready to codify their infrastructure.)

The reason is that I recently got acquainted with an interesting man who has a mind-stimulating agenda. Peter Muryshkin of SIGSPL.org has started a somewhat controversial discussion regarding the future of DevOps in light of overall business digitalisation. One thing he rightfully notices is that software engineering has been learning a lot from industrial engineering: automating production lines, copying Toyotism and the theory of constraints, containerising goods and services, etc. The observation isn’t new. To quote Fred Brooks, as quoted by Peter:

“Techniques proven and routine in other engineering disciplines are considered radical innovations in software engineering.”

This is certainly true for labour automation, which existed long before IAC brought its benefits to software delivery. It’s also true for monitoring and control systems, which have been used in factories since the dawn of the 20th century and for which computers started being used in the 1960s.

But the progress of software delivery disciplines wasn’t incremental and linear. The cloud and virtualization virtually exploded on us. We didn’t have the time to continue slowly adapting the known engineering patterns when the number of our servers suddenly rocketed from dozens to thousands.

In a way – that’s what brought IAC on. There were (and still are) numerous non-IAC visual infrastructure automation tools in the industry. But their vendors couldn’t quite predict the needed scale and speed of operation caused by the black hole of data gravity. So the quick and smart solution of infrastructure-as-code was born.

And that brings us to what I’ve been thinking about quite a lot recently: missing visualization. Visibility and transparency (or measurement and sharing) are written all over the DevOps banners. The classic view of IAC actually insists that “tools that utilize IaC bring visibility to the state and configuration of servers and ultimately provide the visibility to users within the enterprise”. In theory, that is correct. Everybody has access to the configuration files, and developers can use their existing SCM skills to make some sense of system changes over time… But that’s only in theory.

The practice is that with time the amount of IAC code grows in volume and complexity. As with any programming project, ugly hacks get introduced and logic bugs get buried under the pile of object and component interactions. (And just think of the power of a bug that’s been replicated to a thousand servers.) Pretty soon only the people who maintain the infra code are able to really understand why and what configuration gets applied to which machine.
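One concrete reason the applied configuration becomes opaque is layered overrides: default, role, environment and host-level values all compete, and the winning value depends on merge precedence. A toy Python sketch of the problem (the layering scheme and all values are invented; Puppet’s Hiera and Ansible’s group/host variables implement far more elaborate versions of this):

```python
# Toy sketch of layered configuration merging. All layers and values are
# invented; real hierarchies (Hiera, Ansible group/host vars) go much deeper.

LAYERS = ["defaults", "role", "environment", "host"]  # later layers win

CONFIG = {
    "defaults":    {"max_connections": 100, "log_level": "info"},
    "role":        {"max_connections": 500},   # e.g. the 'database' role
    "environment": {"log_level": "warn"},      # e.g. 'production'
    "host":        {"max_connections": 50},    # a host-specific hack
}

def effective_config(layers):
    """Merge layers in precedence order; later layers override earlier ones."""
    merged = {}
    for name in LAYERS:
        merged.update(layers.get(name, {}))
    return merged

def explain(layers, key):
    """Report which layer actually set a key - the question engineers end up asking."""
    winner = None
    for name in LAYERS:
        if key in layers.get(name, {}):
            winner = name
    return winner
```

With just four layers, answering “why is max_connections 50 on this box?” already calls for an explain() helper; multiply that by hundreds of keys and modules and the opacity becomes very real.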

In the past few years I’ve talked to enough desperate engineers trying to decipher Puppet manifests or Chef cookbooks written by their ops colleagues. Which makes me ask if maybe IAC sometimes hinders DevOps instead of enabling it…

Even the YAML-based configurations like those provided by Ansible or SaltStack become very hard to read and analyze beyond simple models.

As is always the problem with code: it’s written by humans for machines, not for other humans.

But on the other hand, machines are becoming ever better at visualizing code so that humans can understand it. So is that happening in the IAC realm?

In my overview of Weapons of Mass Configuration I specifically looked at the GUI options for each of the four ninja turtles of IAC and sadly found that not even one of them got any serious praise from users for its visualization features. Moreover, the GUI solutions were disregarded as “just something the OSS vendors are trying to make a buck from”.


I certainly see this as a sign of infrastructure automation still being in its artisanal state: made by and for the skilled craft workers who prefer to write infra code by hand. But exactly as the artisans had to make way for the factories and labour automation of the industrial revolution, a change is imminent in the current state of IAC. It’s just that usable visualization is still lacking, the tools still require too many special skills, and the artisans of IAC want to stay in control.


Don’t get me wrong, I’m not saying our infra shouldn’t be codified. Creating repeatable, automated infrastructure has been my bread and butter for quite some time, and tools like Puppet and Ansible have made this a much easier and cleaner task. I just feel we still have a long way to go. (Immutable infrastructure and containerisation, while being a great pattern with benefits of its own, also rely heavily on manual codification of both the image definitions and the management layers.)

Infrastructure management and automation is still too much of an issue and still requires too much special knowledge to be effectively handled with the existing tools. Ansible is a step in the direction of simplicity, but it’s a baby step. Composing infrastructure shouldn’t be more complicated than assembling an IKEA bookshelf, and for that new, simpler, ergonomic UIs need to be created.

Large-scale problems need industrial-level solutions. Let’s just wait and see which software vendor provides such a solution first: one that will make even the artisans say ‘Wow!’ and let go of their chisels.

And with that said – I’ll go back to play with my newly installed Jenkins 2.0 instance for some artisanal pipeline-as-code goodness. :)



DevOps Enablers vs. DevOps Engineers

A lot has been said and written in the last 3 years in an attempt to define what DevOps really stands for. One thing most of us agree upon is that DevOps is not a job definition: it’s a culture, a mindset, a software manufacturing practice focused on breaking down the walls between developers and operations. And it is a very cool and hip practice, one that everybody likes and everybody wants a piece of.

So job postings for “DevOps engineers” pop up each day like mushrooms after a summer rain.

And we adapt ourselves to the new realities and start calling ourselves DevOps engineers, even though half a year ago we were called CM, or integrators, or system engineers, or whatever.

I myself just signed a new contract for a “DevOps” role. And yes, I’m going to do DevOps. But I know that if we want DevOps, everybody in the company has to do DevOps. So my natural goal is that every engineer in the company becomes a DevOps engineer. And that got me thinking: if everyone is a DevOps engineer, how will my role be different from all the rest?

I think I have found the right term:

I’ve always liked thinking of what we’re doing at work (you know, providing process automation, building CD pipelines, etc.) as ‘enablement’, as it enables all the other players in the software development life cycle to do their work with more quality, efficiency, visibility and ease.

And that’s exactly what DevOps is for.

So if everybody wants DevOps, we’re going to enable the DevOps.

We’ll be the DevOps Enablers!


Originally posted at http://otomato.wordpress.com




What’s with the DevOps hype?

Category: Tools

Someone asked on one of the forums whether the DevOps hype is justified; after all, “it’s something we’ve been doing for the last 20 years”…

It’s a good question and here’s what I have to say:

DevOps isn’t new, but the hype around it is all about the ever-growing amount and speed of change in the development process. We’ve seen large shifts in development methodologies and release strategies over the last 5-10 years – towards shorter cycles, continuous delivery, automated testing, etc. New tools and practices have been established to deal with these new requirements and organizations now clearly see the competitive advantage they provide.
So yes, the discipline itself isn’t new, but its role in the software manufacturing process is becoming more visible and acknowledged than ever. That’s what the hype is about.
