Jez Humble: Geek of the Week

Jez Humble immersed himself in technology at the start of the millennium: the year the dot-com bubble burst, when the masses stopped believing a new world was upon us and realised the hype hadn’t lived up to its promises.

[Photo: Jez Humble]

Nevertheless, his interest in computers survived, and in the eleven years since he has worked as a developer, system administrator, trainer, consultant, manager, and speaker. He has worked with a variety of platforms and technologies, consulting for non-profits, telecoms, financial services, and online retail companies.

He has worked for ThoughtWorks, the pioneers of Agile, in offices around the world, including Beijing, Bangalore, London, and San Francisco.

He co-wrote ‘Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation’ with David Farley, published by Addison-Wesley in Martin Fowler’s Signature Series. The book reinforced the idea that good software is achieved through a series of managed approaches, carefully planned and rigorously implemented, and it has been influential in focusing attention on one of the more neglected aspects of the development cycle: the delivery and deployment process, the meticulous steps required to ensure the successful creation or update of a working production system. Although it is a general guide to successful software delivery in the business environment, it strongly advocates both collaborative working methods between testers, developers, and operations, and automation of the processes involved. It was an ambitious book that couldn’t always offer solutions, and while it sometimes caused controversy, it has quickly been adopted as a standard text for Agile. Drawing on both experience and research, it describes the principles and technical practices that enable the rapid, incremental delivery of high-quality applications.

In his spare time, he writes and reads music and books. As well as his earlier degree, Jez has a master’s degree in ethnomusicology from the School of Oriental and African Studies, University of London, for which he studied Hindi and Indian classical music. He’s married to Rani and they have a baby daughter.

RM:
You studied physics and philosophy at Oxford. When did your interest in computers start and how did that develop into a career?
JH:
I became interested in computers when my neighbours got a ZX Spectrum. I got one for my 11th birthday, and I was hooked. My favourite computers were Acorns. My parents bought me a BBC Master and then an Archimedes A3000, which were brilliant because they had really nice programming environments (for the time) and great reference manuals. I stopped being interested in computers when I nearly got kicked out of school for illicitly acquiring administrative privileges over the school network, which was coincidentally also around the time the school started admitting girls. I got back into computers after I graduated from university, because I needed to make a living.
RM:
Staying with philosophical questioning for a moment, what makes computer science an empirical science and not merely a branch of pure mathematics?
JH:
The extent to which mathematics tells us anything useful about the real world – and can thus be considered empirical – is still hotly debated (Einstein once said “as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.”) Fundamentally the output of mathematics is created through symbol manipulation, which is a purely intellectual exercise. If you’re doing science, those symbols have to correspond to measurable things in the real world so you can do experiments to see if your models have predictive power. Even complexity theory is arguably empirical science rather than maths inasmuch as it purports to tell us about the real-world properties of algorithms (the amount of time they take to execute, for example).
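To make that point concrete, here is a minimal Python sketch (the sizes and trial counts are invented for illustration) that treats a complexity claim as an empirical prediction: if sorting is O(n log n), doubling the input should roughly double the runtime, and we can measure whether it actually does on a real machine.

```python
import random
import time

def median_time(fn, n, trials=5):
    """Median wall-clock time for fn applied to a random list of length n."""
    samples = []
    for _ in range(trials):
        data = [random.random() for _ in range(n)]
        start = time.perf_counter()
        fn(data)
        samples.append(time.perf_counter() - start)
    return sorted(samples)[len(samples) // 2]

# Theory predicts sorted() is O(n log n): doubling n should roughly double
# the runtime (times a small log factor). Running this tests the prediction
# against measurable reality -- the empirical-science part of the claim.
for n in (100_000, 200_000, 400_000):
    print(f"n={n:>7}: {median_time(sorted, n):.4f}s")
```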

In terms of the activity of programming, once you move beyond the narrow context where formal methods are appropriate, I think test-driven development is a really nice way to expose the fundamentally scientific nature of writing software. First I write a test that makes a prediction about the behaviour of the method or function I’m about to write. The test fails, because I haven’t implemented the function yet. Then I write the code that makes the test pass. That’s science in its purest Popperian form. Of course programming is an art too, because you have to come up with the correct tests to write, and the right order to write them in, if you want to create an elegant and valuable program.
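That red-green loop is easy to show in miniature. Below is a sketch using Python’s unittest; the leap-year rule and all of the names are invented purely for illustration. In practice the test class is written and run first, fails (the falsifiable prediction), and only then is the function written to make it pass.

```python
import unittest

def leap_year(year):
    """Written after the tests below, in the 'green' step of the loop."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTest(unittest.TestCase):
    # The 'red' step: each assertion is a prediction about behaviour, made
    # before the implementation exists. Watching it fail first is what makes
    # the subsequent pass meaningful.
    def test_divisible_by_four(self):
        self.assertTrue(leap_year(2024))

    def test_centuries_are_not_leap_years(self):
        self.assertFalse(leap_year(1900))

    def test_every_fourth_century_is(self):
        self.assertTrue(leap_year(2000))

if __name__ == "__main__":
    unittest.main()
```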

RM:
Do you think we’ll get to the point when machines can be made to think?
JH:
I reject the Cartesian view that there is some kind of thinking “stuff” that is ontologically distinct from matter, so I don’t think there’s any fundamental reason why machines shouldn’t think. However, I think it will be a while before we get there. Some of the bits of philosophy that I continue to find fascinating (and insufficiently examined by computer scientists) are Nietzsche’s, Heidegger’s, and Wittgenstein’s views on consciousness, being, and language. Ultimately the ability to “think” comes from the combination of some innate mechanics with a process of socialization. Historically we’ve focussed a lot of effort on the innate mechanics, and not so much on the process of socialization. That probably says more about computer scientists than it says about the nature of the problem.
RM:
In one of his papers Dijkstra talks about how computer science is a branch of mathematics and how computer science students shouldn’t touch a computer for the first few years of their education and should learn instead to manipulate systems of formal symbols. How much mathematics do you think is required to be a competent programmer?
JH:
I can only speak – as I suspect Dijkstra did – from personal experience. I learned to program computers several years before I studied set theory and formal logic. I won’t deny that learning those things helped my programming, but I think that studying philosophy (and more practically, electronics) helped my programming as well. I think the most important things required to be a competent programmer are a desire always to be learning new things, combined with a masochistic tendency towards delayed gratification that manifests itself in problem solving (I’m thinking here of the hours I spent as a teenager debugging ARM assembler printed out on fanfold paper while my classmates were out getting drunk). You probably need similar personality traits to be a competent mathematician.
RM:
For people coming into the industry, are there other skills, beyond the ability to write code, that they should develop?
JH:
I think that if you want to be a great programmer, you need to be a generalist. You need to have at least a high-level understanding of systems administration, CPU architecture, networking protocols, and so forth, simply because all the abstractions we have developed to make software development faster and more efficient are leaky. Knowing when you can ignore what’s going on under the abstraction layer you’re using at any point in time (even if that abstraction layer is assembler) is as important as making sure that the program satisfies its functional requirements.

It’s also important to think holistically (this is sometimes called systems thinking). Code doesn’t deliver any value until it’s in the hands of users – indeed you can’t know whether or not it’s valuable until you get feedback from users. Until then, any work you’re doing is based purely upon a hypothesis. Understanding what is valuable to your users – which means understanding something about their world – and how to best deliver that value requires much more than just knowing how to write code. Specification documents, for example, are a very leaky abstraction.

RM:
Has programming become a more social activity than it used to be? Do you remember any particular ‘aha!’ moments when you noticed the difference between working on something by yourself and working on a team?
JH:
Writing valuable, high quality software is fundamentally a social activity, if only to the extent that judgements of value and quality are subjective. Too many programmers fail to create valuable software because it’s so easy to focus on the immediate issue – getting some particular part of a program to work, or creating an elegant architecture – at the expense of actually solving a problem. It requires discipline to constantly ask oneself, “is what I’m doing at the moment actually the most valuable thing I could be doing right now?” Other people help keep you honest.

My biggest “aha!” moment came when I joined ThoughtWorks and got to do pair programming and test-driven development with people who were really great programmers. I had been pretty sceptical about the whole XP thing, but I learned more about how to create, deliver, and evolve high quality object-oriented software in my first year at ThoughtWorks than I’d learned in all the years before that. However, I try not to be bigoted about pair programming and TDD, because I don’t believe there’s any way to convince you they are a good idea unless you’ve actually experienced them for yourself. They’re neither necessary nor sufficient conditions for creating and evolving great software, but they make it much easier and much more fun.

RM:
Though the idea itself isn’t new, Continuous Delivery, the book you wrote with David Farley and published in 2010, made the concept much more popular. Roughly speaking, it means that every build goes through the same quality process, which saves time, reduces risk, improves trust, and helps deliver better software to clients.

To look at this laterally, I suppose academics have worked this way for years. Are there areas where industry is ignoring good stuff about how we should build software? Probably the best example I can think of is when Intel had to recall its Series 6 chipsets because of a serious bug, something which cost an estimated $600 million.

JH:
One of the problems in our industry is the divide between academia and commercial software development. It’s hard to move between the two because there is a tendency for each branch to consider itself somewhat superior to the other (you see the same attitude in physics between the theoreticians and the experimentalists). So I’m sure there’s plenty of stuff the two sides could learn from each other.

A great example comes from my co-author, Dave Farley. He works on the world’s highest performance financial exchange, LMAX. The main thing they did to get really blistering performance was to throw away the conventional wisdom on how you create these kinds of systems. So, to pick two particularly counter-intuitive decisions, their core computational engine (which they recently open sourced) is single-threaded and written in Java. They spent a lot of time thinking about how to write algorithms that displayed mechanical sympathy with the JVM and the underlying CPU architecture. Their approach is a striking example of how you can make a giant leap when you harness together theoretical and commercial approaches to writing software.
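The sketch below is not the LMAX code; it is a toy Python illustration, with invented names and events, of the shape of the idea he describes: preallocated slots addressed by sequence number, drained strictly in order by a single-threaded handler, so the hot path needs no locks.

```python
RING_SIZE = 1024  # a power of two, so a bit-mask replaces a modulo

class RingBuffer:
    """Preallocated slots claimed by sequence number. In the real pattern,
    producers and the single consumer coordinate via sequence counters;
    here everything runs in one thread purely to show the structure."""
    def __init__(self):
        self.slots = [None] * RING_SIZE
        self.next_seq = 0

    def publish(self, event):
        # Reuse a preallocated slot rather than allocating per event.
        self.slots[self.next_seq & (RING_SIZE - 1)] = event
        self.next_seq += 1

def run_engine(ring, handler):
    # Single-threaded business logic: events are processed strictly in
    # sequence order, so there is no locking or contention on the hot path.
    for seq in range(ring.next_seq):
        handler(ring.slots[seq & (RING_SIZE - 1)])

# Usage: publish a few illustrative orders, then let the single consumer drain them.
ring = RingBuffer()
for price in (101, 99, 100):
    ring.publish({"type": "order", "price": price})
run_engine(ring, handler=print)
```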

RM:
Do you think programming and building software should be like an engineering discipline? The analogy of building bridges comes to mind, because you can predict how long a bridge will take to build, and bridges for the most part don’t fall down.
JH:
The great thing about bridges is that if you’re building (say) a truss bridge, there is a nice model where you can plug in some variables and out pops the design. A software system of any size and complexity is going to be way too complex to model analytically (which is why formal methods in computer science aren’t widely used in real life). Even when we build things for which there is an applicable model that is tractable analytically, or that can be simulated on a computer, things can go wrong, as the partial collapse of the new terminal at Charles de Gaulle airport showed. But building large software systems is not really comparable to building a bridge.

My wife and I visited Gaudí’s la Sagrada Família in Barcelona recently, and one of the striking things is the extent to which the design and the construction of it were performed iteratively and incrementally. Gaudí built a smaller church using his new hyperbolic approach before he started work on the Sagrada Família. He had a big workshop in the crypt where he was forever creating scale models to test out his ideas. One of his innovations was to create upside-down models of parts of the building with suspended weights simulating the loads on the structure to validate that it was sound (see image below).

[Photo: one of Gaudí’s inverted models, with suspended weights simulating the loads on the structure. Photo credit: Subtle_Devices’ Flickr stream]

Now, the Sagrada Família has been under construction for over 130 years. It is behind schedule, way over budget, and still not done, so in that sense it’s perhaps not a shining beacon of what we should be aiming for (Gaudí once remarked “My client is not in a hurry”). But there are a couple of advantages to working with software. First, it’s much cheaper to experiment. Second, if we approach the design correctly, we can start performing useful work with computer systems early in their lifecycle, and get feedback on what to build next so we don’t waste time building things that people don’t actually want. That’s why the concept of a minimum viable product is important, and where continuous delivery comes into the picture.
RM:
When it was released in 2010, your book was recognized as one of the most important software development books of the year. What books would you recommend to up-and-coming programmers? What technology books do you read? Have you read Don Knuth’s The Art of Computer Programming, for instance?
JH:
I bought The Art of Computer Programming in India when I was working in ThoughtWorks’ Bangalore office and got through most of Volume 1 on a road trip, but I never found the time to finish it. I still dip into it from time to time, because occasionally it feels good to mainline pure computer science from the motherlode.

Among the books that I’ve read, I think my favourites are probably Bob Martin’s Agile Software Development, Michael Nygard’s Release It!, Nat Pryce and Steve Freeman’s Growing Object-Oriented Software, Guided by Tests, and then Don Reinertsen’s The Principles of Product Development Flow for the process wonks.

RM:
Do you think programming, and therefore the kind of people who can succeed as programmers, has changed? Can you be a great programmer operating at a certain level without ever learning assembly or C?
JH:
Certainly there have been serious productivity gains in programming over the last few decades because of the new tools we have available (I am using the term “tool” pretty widely here, to include paradigms such as object-oriented and functional programming). But as I say I don’t think you could really be a great programmer without having at least some understanding and sympathy for what’s going on under the hood.
RM:
I guess that the most important rule of technical writing is to understand your audience – the better you know your reader, the better you can write. What would you say are the other cardinal rules? Is there a way of saying everything twice in a complementary way, so that the reader has a chance to take the ideas in, in ways that reinforce each other?
JH:
Everyone learns differently, so it’s hard to be general. There are a bunch of different ways of organizing your content. Some people like a more discursive style. Some people prefer a terse style where things are laid out more formally (check out Spinoza’s Ethics for an extreme example of this – maybe there’s someone, somewhere for whom this book represents the acme of technical writing). Martin Fowler has pioneered a two-part model where you introduce the core concepts discursively and then the rest of the book is patterns.

Continuous Delivery is written in a discursive style throughout, in which we deliberately trade concision both for readability and so that people can dip into a particular section and get all the context they need to understand it. Some people don’t like it for this reason, and it certainly made the book longer and more irritating if you read it cover to cover, but we tried to achieve the goal you describe of showing how everything fits together in a holistic fashion, and it’s very hard to achieve that without some level of repetition.

RM:
It’s often claimed that there are orders-of-magnitude differences in productivity between programmers. I read an article which debunked these claims, arguing that the studies which found this were done some time ago, and that a lot of things about programming and working have changed since then that could have accounted for the differences – for example, some participants in those studies were using batch processing techniques while others were using timeshared programming environments. What’s your view on this, and how would Lean and Agile methods help with productivity?
JH:
From personal experience, I can tell you that it’s true that there are orders of magnitude differences in productivity between programmers. Some of that is down to familiarity with their environment. Some of it is down to having a wide variety of experience. Some of it is down to being very clever.

I think that lean and agile methods improve productivity by setting up fast, rich feedback loops so you can work out, as fast as possible, whether what you’re doing at any point in time is valuable and high quality. The biggest source of waste in software is functionality that is developed but never used. The Standish Group presented a report at the IEEE XP 2002 conference, based on studies including data gathered at the US Department of Defense, which showed that over 50% of the functionality developed is rarely or never used.

Agile and lean methodologies help eliminate that by trying to get the ultimate feedback loop – from users back to the team – as fast as possible, and also by setting up other feedback loops through techniques such as continuous integration and test-driven development so you can find out quickly if you’ve introduced a regression or if what you’ve written doesn’t actually behave in the way you’re expecting.

However none of this can compensate for having mediocre developers. Having 10 really good programmers is always going to get you better software faster than having 100 mediocre ones.

RM:
What is your process for designing software? Do you fire up Emacs and start writing code, do you use UML as a design tool, or do you just start coding?
JH:
These days when I fire up Emacs it’s to write books rather than code, but I have to say I’ve never used UML formally. I would hazard that even Martin Fowler, whose best-selling book is UML Distilled, would not advocate the model-driven approach to design.

Predictably, I advocate just enough design. Based on the limited information you have before you start writing any code, grab a whiteboard with a bunch of people on your team, and discuss the various options. Come to a decision – without spending too long arguing over minutiae – on what you think is going to be a good possible approach. One of the attributes of a good approach is that it lets you test the functional and cross-functional (performance, security and so forth) characteristics of your system from early on in its lifecycle, and that you can change it without too much trouble if it turns out not to be so good. As necessary, write some throwaway code to test your more risky hypotheses.

Then as you write the real code, write automated tests that assert the behaviour that is important to you so you can validate your architecture as soon as possible when it’s cheaper to change it. Always make decisions based on data, not intuition. Intuition is useful in software development, but computer systems are inherently complex and non-linear, and intuition must always be validated with real data. To quote Knuth, “Premature optimization is the root of all evil”.
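One way to hold yourself to “data, not intuition” is to encode an architectural budget as an automated test. The Python sketch below is hypothetical throughout – the operation, the data, and the 50 ms budget are all invented for illustration; in practice the budget would come from a measured requirement.

```python
import time
import unittest

def lookup(index, key):
    """Stand-in for whatever operation your architecture bets on being fast."""
    return index.get(key)

class LatencyBudgetTest(unittest.TestCase):
    def test_bulk_lookups_stay_within_budget(self):
        index = {i: i * i for i in range(1_000_000)}
        start = time.perf_counter()
        for key in range(0, 1_000_000, 997):
            lookup(index, key)
        elapsed = time.perf_counter() - start
        # The 50 ms budget here is an invented number. When a test like this
        # fails, the architecture conversation happens early, while changing
        # course is still cheap -- a decision driven by data, not intuition.
        self.assertLess(elapsed, 0.05)

if __name__ == "__main__":
    unittest.main()
```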


About the author

Richard Morris


Richard Morris is a journalist, author and public relations/public affairs consultant. He has written for a number of UK and US newspapers and magazines and has offered strategic advice to numerous tech companies including Digital Island, Sony and several ISPs. He now specialises in social enterprise and is, among other things, a member of the Big Issue Invest advisory board. Big Issue Invest is the leading provider to high-performing social enterprises and has a strong brand name based on its parent company, The Big Issue, described by McKinsey & Co as the most well-known and trusted social brand in the UK.
