Bjarne Stroustrup: Geek of the Week

Without Bjarne Stroustrup, object-oriented programming would have taken much longer to gain mainstream acceptance. Bjarne created and popularised ‘C with Classes’, later C++, which changed the way mainstream computer languages worked. It is still the language of choice for systems programmers.


In the 1980s and 1990s, Bjarne Stroustrup designed, implemented, and popularised the C++ programming language, which introduced object-oriented programming to the mainstream and influenced numerous other programming languages, including Java, C#, and C99.

C++ remains the archetypal ‘high-level’ systems programming language – one that provides powerful abstractions while staying close to the hardware – and it is still used by millions of programmers today.

Many of the systems and applications of the PC and Internet eras were written in C++. For all this, the language has its detractors. It has a reputation for being difficult to learn and use because of its large feature set, and because Dr Stroustrup’s design chooses not to enforce a single paradigm on developers, serious programming errors can sometimes be made yet go undetected.

Bjarne Stroustrup and his family now live in Texas. For many years a researcher at AT&T Bell Labs (where he still retains a link), he is now the ‘College of Engineering Chair in Computer Science Professor’ at Texas A&M University, near Houston. He was born in Aarhus, in Jutland, Denmark, in 1950. He studied Mathematics and Computer Science at the University of Aarhus before moving to Churchill College, Cambridge, to obtain his PhD in 1979, studying under David Wheeler and Roger Needham. He remains a member of the college.

His book ‘The C++ Programming Language’ is the most widely read book of its type and has been translated into at least 19 languages. His later book ‘The Design and Evolution of C++’ broke new ground in describing the way a programming language is shaped by ideas, ideals, problems, and practical constraints. In addition to his five books, Stroustrup has published over a hundred academic papers and has contributed to a number of newspapers and magazines.

He has received many honours, including being named one of ‘America’s twelve top young scientists’ by Fortune Magazine in 1990 and being recognised as one of the 20 most influential people in the computer industry over the previous 20 years by BYTE magazine in 1995.

Recently, Bjarne was awarded the Dr. Dobb’s ‘Excellence in Programming’ award for ‘advancing the craft of computer programming’. We now award him the Simple-Talk accolade of ‘Geek of the Week’.


RM:
“Bjarne, given that our technological civilization depends on software, why is most of it so poor?”
BS:
“Hmmm. If software were as bad as its reputation, we’d all be dead by now. It is always easy and entertaining to tell a story of a disaster. People feel competent, comfortable, and superior when describing other people’s failures. On the other hand, spectacular success is often hard to understand, to appreciate, and can – as it should – make us feel humble. I can understand a crashed disk or a simple ‘silly’ programming error, but it is much harder to imagine a million disks not failing for a month and the software that didn’t fail to make that possible. I look at the Mars Rovers, the human genome project, and Google with awe.

So, much software is not just ok. It’s good from the point of view of its users. Unfortunately, part of that success comes from an increased ability to craft functioning systems out of failing parts through layers upon layers of – largely redundant – internal checking and huge amounts of testing. I would strongly have preferred to get to this point through clean, logical, comprehensible, well-analyzed structure, through design and implementation by professionals. That would be safer and cheaper in the long run and certainly avoid the huge bloat we see today. We would also be able to use simpler and less power-hungry hardware. There are examples of this today – often high-end, high-reliability embedded systems – but the PC/Web bloatware dominates in volume and in people’s perception of software.”

RM:
“Do you think education is the answer to developing better software, and that somehow we can get away from the ‘we must do it first, no matter how buggy it is’ way of thinking?”
BS:
“Education is part of the answer, an essential part, but ‘education’ itself is not a solution. We need an education for software developers that combines principles from science and engineering with practical skills. Most likely, we will need several specializations, hopefully with a common base. Unfortunately, I am not at all sure that the fields of computer science, software engineering, IT, whatever, are mature enough to agree on such a principled common base and its specializations. I also suspect that such a degree would be a master’s rather than a bachelor’s.

Currently, we have another problem: students often leave educational establishments with a set of skills that is seriously misaligned with what industry needs. We can argue that maybe industry should ask for something different, but there is a lot of hasty re-training and un-learning going on at the handover from education to industry. I think this is really bad for both sides. It discourages industry from relying on more than basic skills and puts an emphasis on tools and techniques that can be used by relatively unskilled labour. Students know that and therefore pay less attention to higher-level skills, and some of the best students choose what they perceive as more challenging fields, such as physics and biology. It discourages professors, who then concentrate on work that does not directly relate to industry or join the scramble to build tools and processes to develop and maintain software with semi-skilled labour.

I don’t think you have to be first with a product in a given field. In fact, I suspect that typically the first product in a field fails as a slightly later product comes along with facilities – or stability – that allow it to appeal to a larger group of users. Think of Simula, WordPerfect, Netscape, and AltaVista (do you even remember those?). However, I do not think that we can delay and delay until a product reaches “perfection.” My ideal model is something basically sound and potentially beautiful, made available early (warts and all) and then gradually improved over the years under real-world pressures. The real-world pressures will improve a fundamentally sound product and make it better adjusted to real needs than we could make it from first principles. Shipping “no matter how buggy” is something else – that’s unethical and irresponsible. What you ship has to at least deliver what it promises and have a design that allows it to grow to meet future challenges.”

RM:
“Do you think you would ever design a new language from scratch?”
BS:
“Sometimes, I’m tempted, but designing a language for real use is a decade’s worth of work. It requires a firm idea of what problem is to be solved and stable funding. It’s not something that fits the academic model and it’s not something that has an obvious commercial payoff. I doubt it will happen.”
RM:
“How soon after you created C++ did you see it start to take over the industry?”
BS:
“The first real use of ‘C with Classes’ (C++’s direct ancestor) started 6 months after I began the project and then I saw steady exponential growth for a decade. The use of C++ doubled every 7.5 months from 1980 to 1991. After that, I lost count, but the current ‘best guesses’ are in the three million C++ programmers range. For most of the early years, I was simply too busy (designing, implementing, writing, and providing support) to observe what was going on. C++ was not a project with plans, marketing budgets, and consumer surveys.”
RM:
“What’s your opinion about the Microsoft implementation of C++?”
BS:
“It’s getting very good actually, both in terms of standard conformance and in code quality. I use that – and other C++ implementations – weekly, if not daily. I don’t care much about the proprietary extensions (such as C++/CLI), but it’s their OS and their system interfaces, so my opinion is not really relevant. It would be hard to meet their design aims significantly better. For a language to be a systems programming language, it has to deal with real systems, rather than idealized abstractions. However, I do wish that they had interfaced ISO standard C++ to .Net through interface libraries. I do understand why they did not (think: generating metadata), but once you rely heavily on C++/CLI features, you no longer write portable or easy-to-port standard C++; you write for a specific proprietary system. To be fair: just about every implementation provider tries to lock in its users. Apple’s Objective-C++ GUI is a nasty lock-in, and deep in GNU C++ you find quite a few non-standard features. Whenever I can, I prefer to deal with ISO standard C++ and to access system-specific features through libraries with system-independent interfaces.”
RM:
“Do you think C++ has become too ‘expert friendly’?”
BS:
“Yes. In fact, I think it was me who stuck that label on it – echoing the old saying “Unix is expert friendly”. I said it to alert people to my view that the experts were getting too complacent and were not sufficiently sensitive to the needs of novices (of all kinds), casual and occasional programmers – whatever you want to call people who want to use C++ well before they could become experts – and the many who don’t need or want to become C++ experts; they are often quite happy being experts in some other field and just want to use C++ in support of that.

I tried to do something about that with C++0x and had some success – not as much as I would have liked, of course, but enough that there is a large and powerful subset of C++0x that’s easier to learn, to use, and to teach than anything I could carve out of C++98. In particular, I can’t wait to be able to use the simpler, safer, and more flexible initialization mechanisms. For example:

for (auto p = v.begin(); p != v.end(); ++p) cout << *p << '\n';

The auto says that the type of p is to be that of its initializer (v.begin()), so that I don’t have to remember how to write the type of v’s iterator, say, vector<T>::iterator.
That’s actually the oldest C++0x feature by far; I first implemented it in 1983, but was forced to take it out for C compatibility reasons. Now, with “implicit int” banned in both C++ and C, we can have this convenient notation.
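
For comparison, here is a minimal sketch of what that loop looks like when the iterator type has to be spelled out in full; it assumes, purely for illustration, that v is a vector of strings:

#include <iostream>
#include <string>
#include <vector>
using namespace std;

int main()
{
    vector<string> v;
    v.push_back("Simula");
    v.push_back("BCPL");
    // Without auto, the element type appears twice: once in the declaration
    // of v and again in the full name of its iterator type.
    for (vector<string>::iterator p = v.begin(); p != v.end(); ++p)
        cout << *p << '\n';
}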

Another example is variable-length (homogeneous) initializer lists. In C++0x we can write:

vector<string> greats = { "Newton", "Darwin", "Archimedes", "Bohr" };

and get a suitably initialized vector of four strings.

The list of small new features is long (e.g., see my C++0x FAQ), but they are designed to work together and with all other features of the language so that I hope people get to see them as generalizations rather than complications. In particular, the { } initialization syntax can be used for every form of initialization:

void f(const vector<pair<string,int>>& v);

 
f({ {"Simula", 1967}, {"BCPL", 1967}, {"C", 1978}, {"C++", 1985} });

struct Point {
    int x, y;
    Point(int xx, int yy) : x{xx}, y{yy} { }
};

Point* p = new Point{x,y};

If you feel this is not sufficiently advanced to warrant attention and need some really hard technical stuff, you can have a look at the C++0x machine model. If you feel that real language design needs a heavy dose of type theory, have a look at the concepts and concept_maps used to control template arguments.
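
For a concrete (if anachronistic) picture of constraining template arguments, here is a minimal sketch in the concepts notation that eventually shipped in C++20; the draft C++0x concept and concept_map syntax referred to above differed and was dropped before standardization:

#include <concepts>

// A concept states the requirements a template argument must meet.
template<typename T>
concept LessThanComparable = requires(const T& a, const T& b) {
    { a < b } -> std::convertible_to<bool>;
};

// Any attempt to use a type that does not satisfy the concept is rejected
// at the point of use, with a readable diagnostic.
template<LessThanComparable T>
const T& smaller(const T& a, const T& b)
{
    return (b < a) ? b : a;
}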

The area where C++ is still not sufficiently supportive of novices is libraries. There are lots of libraries for people who are experts, for people who can search out libraries on the web, and for people who can afford commercial-quality libraries. However, there is no one place to which a “novice” (of any degree of previous experience) can turn for a linear algebra library, a GUI library (and builder), an XML manipulation and Web service toolset, a 3D graphics library, a computational geometry library, a concurrent programming support library, a set of hard real-time programming facilities, etc. All of these exist, but you have to search for them, choose among alternatives, download and install them, convince yourself of their quality, worry about the timescale of their support, etc. Those are not tasks for which the average novice is well prepared.

My home pages are often a good place to start to look (especially the C++ page and the applications page), but I can only feature links to a few collections of libraries, such as Boost, Poco, and Qt, and they don’t all work together seamlessly.

C++0x does provide a few new libraries, but not as many as I would have liked. I particularly like being able to use regular expressions (the regex library), and to use that library even when teaching my freshman (first-year) students. Finally getting standard and portable threads is a relief.”
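
As a small illustration of the kind of thing the new standard libraries make straightforward, the following sketch uses the C++0x/C++11 regex library; the pattern and the test string are invented for the example:

#include <iostream>
#include <regex>
#include <string>
using namespace std;

int main()
{
    // Match a simple ISO-style date such as 2011-08-12 (illustrative only).
    regex date_pattern{ R"(\d{4}-\d{2}-\d{2})" };
    string line = "2011-08-12";
    if (regex_match(line, date_pattern))
        cout << line << " matches the date pattern\n";
}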

RM:
“Why do you think C++ is so successful?”
BS:
“It met its fairly modest design aims. C++ was meant to be efficient in the hands of good programmers, capable of expressing elegant solutions to hard systems programming problems, suitable for work close to the hardware (without compromises), to allow organization of code in the style of Simula (object-oriented programming), to cope with the complexity of large systems, to fit into existing systems, to be portable across different hardware, operating systems, and linkers (again in the hands of capable programmers), to be cheap (no complicated run-time systems and fairly simple compilers), non-proprietary, and to be teachable on an industry wide scale.

That was actually not easy to achieve given the almost complete ignorance about object-oriented programming at the time and the extreme skepticism about the use of higher-level languages for systems programming. When I started, it was less than 5 years since C and Unix had demolished the idea that an operating system had to be written in assembler for a particular piece of hardware.”

RM:
“In your well-read and well-written book The Design and Evolution of C++, you claim that Kierkegaard was an influence on your conception of the language. What do you mean by this?”
BS:
“I wanted to say something about C++’s intellectual roots. In particular, I wanted to state my opposition to the authoritarian system builders (e.g. Plato and Hegel) and emphasize my concern for the individual and the exceptional. In that context, Kierkegaard fits right in. Maybe referring to philosophers appears a bit pretentious, but I did read a fair bit of philosophy before I got too busy with computing and occasionally still do.

The human aspects of programming, software design, etc. are often overlooked – and when they are not, it is often by someone trying to shoehorn humans into an inhuman development process. A “process” that does not recognize the enormous variations in human abilities is inhumane and suboptimal in that it fails to utilize the best in the people involved. That does not imply that I’m arguing for anarchistic “cowboy programmers.” On the contrary, I’d argue that a firmer foundation of programming addressing areas such as types, interfaces, resource management, invariants, underlying models, etc. is needed for less wasteful collaboration of many developers. A rush to the lowest common denominator is not a great idea.”

RM:
“How do you think more programmers can write quality code (as opposed to quantity) and still keep their jobs?”
BS:
“They have to be more effective than the popular million monkey approaches. Worse, they have to be more productive over a variety of time spans: the first year, the first two years, the first five years, and the first ten years. That means that it should be possible to replace the pioneers and not rely on exceptional talent and exceptional enthusiasm – in other words, the approach has to be a form of sustainable professionalism.

Where upper management sees it as a major aim to ensure that their software development and maintenance can be done primarily by semi-skilled and interchangeable individuals, getting to professionalism and quality code is an up-hill task; where upper management is consistently supportive it’s only difficult. I think we have to base the necessary professionalism on classical computer science core areas such as algorithms, data structures, and machine architecture.

To this we have to add a notion of “software architecture” that must be more than simply showing off the features of a language or two. We have to get into interface design, into invariants, applications of predicate logic, model checking, design for testing, systematic resource management and systematic error-handling strategies. There is much material on these subjects, but not a coherent and widely accepted body of work supported by standard textbooks such as you find in other fundamental fields. People are still stuck arguing language choice (e.g., C vs. functional vs. Java vs. domain specific vs. C++ vs. Python) rather than techniques and principles. I fear we have a lot of work still to do in this area – it is not easy to express and apply principles across languages. What people have to do is to articulate principles for their languages of choice and try to see languages – any language – as an incomplete approximation to the ideals.

The old idea of re-use, of well-specified, well-tested components must be realized (as far as our real-world constraints allow in a given situation). Programming must come much closer to Math so that we can reason (informally and formally) about properties of a program. There is hope, though. We have come a long way since Doug McIlroy’s original 1968 call for software components.”

RM:
“What things in your technological life would you have done differently? And what are you most proud of?”
BS:
“Looking back, there were so many years when I needed 24 months to get all the work done right. I had the choices of shipping with flaws, shipping later, or not shipping (that is, staying out of an application domain). Over the years, I chose each of these alternatives at various times. Most of all, I wish I could have delayed the commercial introduction of C++ by four to six months to allow me to ship it with a significantly larger library. The reason I shipped when I did (early) was simple and idealistic: I couldn’t design and implement a sufficiently general, efficient, and elegant container library – I didn’t yet have a clear idea of what a parameterized type would be. Had I shipped the best I could come up with then (or even a year later), it wouldn’t have been good enough for the long term. However, it would have made the C++ community consider a container library part of any minimally acceptable standard library, and the initial and flawed library’s inevitable replacement would have served well.

What am I proud of? Having helped make object-oriented programming mainstream, and then not stopping there but carrying on to make generic programming mainstream as well. C++0x should do a lot for the latter. In purely technical matters, I consider the destructor and the resource management techniques based on it a major contribution. Many of the most effective modern C++ techniques rely on destructors.”
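
As a closing illustration, here is a minimal sketch of the destructor-based resource management he refers to (what is now usually called RAII); the File class is invented for the example:

#include <cstdio>
#include <stdexcept>

// The constructor acquires the resource; the destructor releases it.
// The file is closed automatically when the handle goes out of scope,
// even if an exception is thrown in between.
class File {
    std::FILE* f;
public:
    explicit File(const char* name) : f(std::fopen(name, "r")) {
        if (!f) throw std::runtime_error("cannot open file");
    }
    ~File() { std::fclose(f); }
    std::FILE* get() const { return f; }
    File(const File&) = delete;            // exactly one owner per open file
    File& operator=(const File&) = delete;
};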