Barbara Liskov: Geek of the Week

Barbara Liskov is one of the great pioneers of computer science. She was one of the first US women to be awarded a PhD in computing and is the inventor of two programming languages, as well as the contributor of a number of ideas to system design, especially relating to data abstraction, programming methodology, object-oriented design, fault tolerance, and distributed computing.


In 2009, Barbara Liskov won the Turing Award, the most prestigious prize in computer science; it was the latest of many firsts for this remarkable woman.

She was one of the first US women to be awarded a PhD in computing, and her innovations have become the building blocks for every modern programming language, making software more reliable and easier to maintain.

Her remarkably curious and fertile mind for computer engineering was first developed at Stanford University, where her tutor was the legendary John McCarthy, who by the time Liskov completed her PhD under him in 1968 had already made several towering contributions to the world of computing.

McCarthy had invented the LISP programming language; like Turing, he had far-sighted plans for artificial intelligence; and he had pioneered the modern time-shared operating systems that would become the foundation of interactive computing.

Barbara herself developed a timesharing system for her Venus computer, a bespoke machine designed to support the construction of complex software.

She has invented two programming languages: CLU, a forerunner of modern object-oriented languages such as Java and C++, and Argus, which extended CLU's ideas to distributed computing.

Now 76 years of age, Barbara is an Institute Professor at the Massachusetts Institute of Technology and Ford Professor of Engineering in the School of Engineering's Department of Electrical Engineering and Computer Science.

Barbara can you explain to me the significance of the work that won you the Turing prize in 2009 and what impact this has had on computing?
This was for CLU, which never made it beyond academic circles. It was definitely an implemented language: we used it at MIT, and it was used at a number of other academic institutions, but it did not make it into commercial use.

What happened instead was that the ideas in CLU moved into mainstream languages because they were accepted by the community as being important. So they moved into Ada, a language developed for the US Department of Defense. They moved into C++. Later they moved into Java, and now into C#.

It is one of those questions I have asked of so many people who have shaped the modern computing industry: how do you go about writing good software?
It is not easy to write good software. It is not difficult to write small programs, but it is hard when you try to write a big piece of software, and in our everyday lives we deal with very large pieces of software. For example, Google, behind the scenes, has immense quantities of software making its search engine run.

There are really two parts to it. One of them is understanding the basic techniques that you can use, so this idea of data abstraction and modularity is very important.

The other part of it is more like a craft. You have to think about what the right way is: even when you have the right idea of what the building blocks should be, there is huge flexibility in how you decide to put the whole system together.

It is a craft and some people can learn it, and it has a lot to do with valuing simplicity over complexity. Many people do have a tendency to make things more complicated than they need to be.
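The data abstraction Liskov describes is easy to sketch in a modern language. The toy `IntSet` below is purely illustrative (the names and representation are not taken from CLU): callers use only the operations, never the representation, so the internal list could later be swapped for a hash table without breaking any client code.

```python
class IntSet:
    """A set of integers defined by its operations, not its representation.

    Callers rely only on insert/remove/contains/size; the underlying list
    is a hidden implementation detail. (CLU enforced this hiding with its
    'clusters'; Python relies on the leading-underscore convention.)
    """

    def __init__(self):
        self._elements = []  # hidden representation; could become a hash table

    def insert(self, x: int) -> None:
        # Duplicates are silently ignored, preserving set semantics.
        if x not in self._elements:
            self._elements.append(x)

    def remove(self, x: int) -> None:
        # Removing an absent element is a no-op.
        if x in self._elements:
            self._elements.remove(x)

    def contains(self, x: int) -> bool:
        return x in self._elements

    def size(self) -> int:
        return len(self._elements)


s = IntSet()
s.insert(3)
s.insert(3)  # ignored: already present
s.insert(7)
print(s.contains(3), s.size())  # True 2
```

Because clients can only go through these four operations, the module boundary is exactly the kind of building block she means: the design decision of what the operations are matters far more than how they happen to be implemented today.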

There is another problem that people in the real world have to cope with and that is called feature creep.

What I mean by this is that there is a minimal set of features that a system needs to have in order to be useful.

However, lots of other things that would be nice to have always get piled on top. It's difficult when you're in a commercial environment to stand up against this kind of pressure.

The more stuff you throw into a system, the more complicated it gets and the more likely it is not to work properly.

Thinking about software for a moment, and the ongoing threat from organisations and individuals, do you think there is any way of making code safe from hackers?
I do not know. Right at this moment in time what you see going on is like a game. People who don’t want to let the hackers in develop new techniques that the hackers at that point in time can’t circumvent.

But then before very long the hackers have figured out a way round it. I would like to think there will be a time when a lot of our basic software will be hack-proof but there’s a lot of criminal activity out there. That’s going to go on and you know what they’re going to come up with is going to be hard to predict.

Do you think then the Internet as a whole is insecure?
It is, actually. The Internet was designed in an era when people did not think about this kind of stuff. I mean, "hacker" used to not be a bad word; a hacker used to be somebody who was interested in building programs.

But it’s migrated into this other meaning, which is somebody that’s doing bad stuff on the internet.

There have been quite a few big data breaches this year. How might we go about preventing these from happening?
First of all, the data ought to be encrypted; that's sort of obvious, but it won't solve the problem entirely. That's really the kind of thing I'm working on: what can we do to make it much less likely that the data gets released?

It’s something that the research community is very interested in right now. What’s going on is thinking about, “How do we conceptualize the problem? What can different techniques accomplish?” Security is not only about data, it is also about tracking what you’re doing, everything you do on the Internet can be collected and people can mine that information, so there’s another kind of breach of confidentiality that’s lurking there.

Also, there are immense problems lurking, coming up in the future.

We're talking about things like identity theft, which is already happening. We're talking about government eavesdropping, which as you know is already happening too. These are problems that are clearly ahead of us and need to be dealt with.

How do you think computer languages and software might change over say the next five years?
There is a potential looming crisis, which is that computer manufacturers have been building machines that have many cores inside them. The machine on your desk has maybe two or four cores, but people are talking now about machines with maybe hundreds of cores, thousands of cores, and I just wonder if people really know how to program these machines. Perhaps there will be some interesting advances in programming languages because of the need to figure out how to get programs to run on those machines.

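The programming challenge she describes can be made concrete. One common approach today is to decompose the work into independent pieces and farm them out to a pool of worker processes, ideally one per core. The sketch below (a minimal illustration, not anything from CLU or Argus) sums squares in parallel using Python's standard `concurrent.futures` module:

```python
from concurrent.futures import ProcessPoolExecutor


def square(i: int) -> int:
    # The per-item work. It must be a top-level function so that
    # worker processes can import and run it.
    return i * i


def parallel_sum_of_squares(n: int, workers: int) -> int:
    # Split range(n) across `workers` processes and combine the results.
    # chunksize batches items to keep inter-process overhead low.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(square, range(n), chunksize=max(1, n // workers)))


if __name__ == "__main__":
    # Same answer as a serial loop, computed across several cores.
    print(parallel_sum_of_squares(1_000, 4))
```

Getting this right at hundreds or thousands of cores is exactly the open problem she points to: the decomposition, the cost of moving data between cores, and the synchronization all have to be rethought, which is why she expects pressure on programming-language design.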
You studied under John McCarthy at Stanford University and did your PhD thesis in artificial intelligence. If we look at the basics of the science, are we close to a computer which shows emotion?
Will machines have emotions? Will they be able to interact with people the way that people do? Do we want that? I think there are ethical issues around artificial intelligence too. One thing that's looming in the near future, and I don't know how close this is, is the possibility of fighting wars with robots. We already have the potential of cyberwars, of course.

This is a very scary thing and is probably going to require new international conventions governing their use.

It’s like many of the advances we see in science, like biotechnology and some of the stuff that’s going on there, there are ethical issues that come into play.

Are we any closer to a more intelligent search engine?
I think with artificial intelligence techniques it is possible to come up with a much better way of finding what you are looking for.

That’s something I think we can expect from artificial intelligence in the future. Unfortunately those very same techniques will allow very efficient data mining of the patterns of your use on the internet and so forth, so there’s the good side and the bad side.

When you arrived at Stanford University in the 1960s, you were taught differently from how people are taught fifty years on; in some ways there was more input from government and industry. Should there be more input from industry now, so that there is less focus on theory and more on teaching the practical skills that business demands?
This is a very long-standing argument, where industry keeps saying, "Oh, we want somebody who knows how to program in C++," and universities say you need somebody who understands what computation is about.

I am absolutely on the university side here. It is such a short-sighted view to think that you should get people out of university who are trained to do one specific thing.

The problem is that the field moves and if you do not have somebody coming out of university with general skills that allow them to move with it, then in a few years they are obsolete.

I read an interview with you some time ago in which you said your favourite programming language was Linux. Is this still the case?
Yes it is!