Geek of the Week: Niklaus Wirth

When looking for a suitable Geek of the Week, we wondered whether the obvious candidate might be the man who pioneered structured programming, invented modular programming, and wrote one of the first languages with features for object-oriented programming. Yes, for a second time, Niklaus Wirth gets the accolade of 'Geek of the Week' and shows that he is still a radical thinker with strong views about computer languages.


Niklaus Wirth, the Swiss computer scientist, is one of the most influential thinkers in the science of computer languages. A professor at ETH Zurich, Wirth designed Pascal, Modula-2 and Oberon. In the early 1970s, he was one of the people who proposed program development by stepwise refinement. The author of many significant books, he was awarded the Turing Award in 1984, and has also received five honorary doctorates and several other awards.

‘High-level programming languages
 can not only be used to design
 beautiful programs with much less
 effort but also to hide incompetence
 underneath an impressive coating
 of glamour and frills’

The effectiveness of his systems, and his ability to build complex systems with small teams, relied on his constant search for elegant simplicity, for what could be left out. Over time his language designs and compiler techniques became, in some respects, simpler and more efficient rather than slower and more complex. In 1995 he warned that “The plague of software explosion is not a ‘law of nature.’ It is avoidable, and it is the software engineer’s task to curtail it.”

We last spoke with Professor Wirth in the summer of 2009, and in this interview we talked about his still-eager enthusiasm for technology, the simplicity of code, and his legacy as an academic and a technologist.


RM:
Niklaus, computing software has, perhaps surprisingly, often been based on prior theory. For example FORTRAN was firmly based on the familiar concept of the mathematical formula, and relational databases were based on the theory of sets and relations. Is it a basis in sound theory that enables the quality and structure of software to survive the long process of evolution which follows its first delivery?
NW:
In these cases, it was, quite naturally, the intended area of application that determined the laws and features of the language. A more general “theory” underlay Algol 60: BNF, a formalism for defining the language’s syntax.
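BNF's role is easiest to see with a concrete sketch. The toy grammar and recursive-descent evaluator below are purely illustrative (this is not Algol 60's actual grammar); they show how each BNF production maps directly to one parsing function:

```python
# Toy grammar in BNF (illustrative, not Algol 60's real productions):
#   <expr>   ::= <term> { ("+" | "-") <term> }
#   <term>   ::= <factor> { ("*" | "/") <factor> }
#   <factor> ::= <number> | "(" <expr> ")"
# Each production below becomes one function in a recursive-descent parser.

def parse(tokens):
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat(tok):
        nonlocal pos
        assert peek() == tok, f"expected {tok}, got {peek()}"
        pos += 1

    def expr():
        # <expr> ::= <term> { ("+" | "-") <term> }
        value = term()
        while peek() in ("+", "-"):
            op = peek(); eat(op)
            rhs = term()
            value = value + rhs if op == "+" else value - rhs
        return value

    def term():
        # <term> ::= <factor> { ("*" | "/") <factor> }
        value = factor()
        while peek() in ("*", "/"):
            op = peek(); eat(op)
            rhs = factor()
            value = value * rhs if op == "*" else value / rhs
        return value

    def factor():
        # <factor> ::= <number> | "(" <expr> ")"
        nonlocal pos
        if peek() == "(":
            eat("(")
            value = expr()
            eat(")")
            return value
        value = float(peek())
        pos += 1
        return value

    result = expr()
    assert pos == len(tokens), "trailing input"
    return result

print(parse(["2", "+", "3", "*", "4"]))  # prints 14.0
```

The point of BNF is exactly this correspondence: the grammar that defines the language on paper doubles as a blueprint for its parser.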
RM:
Do you still think that computing has much to learn from other branches of science, which have made occasional ‘breakthrough’ progress as a result of externally, or internally, promoted Grand Challenges?

After all, worldwide collaboration is now the norm in physics and astronomy; and more recently in genetics and molecular biology. It was the international Grand Challenge of the Human Genome project that triggered the necessary shift in the research ethos of biologists towards the larger scale and the longer term accumulation of knowledge. After twenty years, its results are beginning to trickle through to medical practice. Is not a similar breakthrough now in prospect for the verification of software?

NW:
We must distinguish between science and engineering. The breakthroughs you mention were due to discoveries, to deep analyses of the laws of nature. Physics, chemistry, and biology are analytical sciences. Computing is a branch of engineering, sometimes with a mathematical background. Our work is of a synthetic nature, we create artifacts. And we build tools to facilitate our work. The computer engineer is a tool smith. In this endeavor, such breakthroughs are hardly to be expected.
RM:
Is there a risk that research in computer science is too fragmented to make any real impact on the current scale of the problems of software construction?
NW:
It is the community of computer scientists that has become fragmented. This is due to most research on computing having become application specific. Very often now, we specialize in applications, applying computational techniques to diverse subjects.
RM:
While I was researching for this interview I read an article in which you said: ‘High-level programming languages can not only be used to design beautiful programs with much less effort but also to hide incompetence underneath an impressive coating of glamour and frills’. Has the modern trend of overselling software become counterproductive?
NW:
The field of computing is dominated not by concerns of science, but of business, of growth and innovation. Overselling lies in the nature of business.

Innovation has become a buzzword, and it fascinates politicians and granting agencies. But we must not put the innovations due to physics and chemistry, for example the transistor and the laser, on an equal footing with the newest innovations of the computing world. Some of them are indeed very useful, some are seeking customers.

RM:
Is the software industry focusing too much on technical issues, forgetting that software development is mostly a human activity? For example, one of the reasons why BASIC and C enjoy continued popularity is that their relative lack of constraints allows for some patching or local solutions to be introduced late in the development cycle.
NW:
Software development is a technical activity conducted by human beings. It is no secret that human beings suffer from imperfection, limited reliability, and impatience – among other things. Add to this that they have become demanding, which leads to the expectation of rapid, high performance in return for high salaries.

Work under constant time pressure, however, results in unsatisfactory, faulty products. Generally, the hope is that corrections will not only be easy, because software is immaterial, but that the customers will be willing to share the cost.

We know of much better ways to design software than is common practice, but they are rarely followed. I know of a particular, very large software producer that explicitly assumes that design takes 20% of developers’ time, and debugging takes 80%.

Internal advocates of an 80% design time vs. 20% debugging time ratio have shown not only that it is realistic, but also that it would improve the company’s tarnished image.

Why, then, is the 20% design time approach preferred? Because with 20% design time your product is on the market earlier than that of a competitor consuming 80% design time. And surveys show that the customer at large considers a shaky but early product more attractive than a later one, even if it is stable and mature.

Who is to blame for this state of affairs? The programmer turned hacker; the manager under time pressure; the businessman compelled to extol profit wherever possible; or the customer believing in promised miracles?

RM:
On the same theme. Dijkstra has a paper where he says we shouldn’t let computer science students touch a machine for the first few years but spend time manipulating symbols. Do you think he was right?
NW:
I do not. Dijkstra was a mathematician with an analytical mind. He was convinced that theory must precede applications – if these were to be mentioned at all. But most people – and most engineers – function differently. They want to understand individual objects and their functioning. If several have common characteristics, these can be abstracted and condensed into a theory. If you put theory in front of the cart, most students lose interest.
RM:
Did you find with Pascal or Oberon that you had people who would read the whole thing and then send you observations and questions that would make you think that they had misunderstood how the language worked or had no inkling of what structure meant?
NW:
Usually people who had no inkling did not write to me!
RM:
You are quoted as saying that programming languages can not only be used to design beautiful programs with much less effort but also to hide incompetence underneath an impressive coating of glamour and frills. At the other end of the spectrum from code beauty, software is full of painful historical warts that we can’t get rid of. Is there a way to avoid this in future?
NW:
I believe you are pointing out “legacy software”. Old programs have been extended to cope with new demands, until they became messy and unintelligible. They were written in “legacy languages”. In my experience, the time comes when the best solution is to start rewriting them from scratch. This is costly, time-consuming, and risky. But unavoidable in the long run.
RM:
As a language designer, how have your ideas about language design changed over time?
NW:
I had always dealt with general-purpose languages, suited for system construction. And I have concentrated on what I believed to be essential, basic features, leaving out matters of convenience, bells and whistles. But new languages issued by industry have taken the opposite path. They have become huge, and their implementations heavy. They are not designed to be taught and understood in full. The trend is now towards application-specific languages.
RM:
You are one of the most influential academics with a body of work which has had a huge impact on software development.

In many cases the academic world and industry are very different; the way a professor of computer science thinks about software and the way a developer thinks about it are entirely different. As we progress, do you think there will be a coming together of academic thinking and industry thinking, or will they continue to stay apart?

NW:
The division is rather unfortunate and the source of many problems. Professors typically spend their time in meetings about planning, policy, proposals, fund raising, consulting, interviewing, travelling, and so forth, but spend relatively little time at their drawing boards.

As a result, they lose touch with the substance of their rapidly developing subject. They lose the ability to design; they lose sight of what is essential; and they resign themselves to teaching academically challenging puzzles.

I have never designed a language for its own sake. Instead, I’ve always designed a language because I had a practical need that was not satisfied by languages that were already available.

For example, Modula and Oberon were by-products of the designs of the workstations Lilith (1979) and Ceres (1986). My being a teacher had a decisive influence on making language and systems as simple as possible so that in my teaching, I could concentrate on the essential issues of programming rather than on details of language and notation.

RM:
In what ways do you think you have influenced the software industry?
NW:
I would hope that Structured Programming – as embodied by Pascal (1970) – modularization – as represented by Modula-2 (1979) – and object-orientation – as contained in Oberon (1988) – have contributed to a professionalization of the art of programming. I also hope that my belief that programs should be written not just for computers, but for scrutiny by human beings, would become more widely accepted.