Alan Kay: Geek of the Week

The development of object-oriented programming, the windowing user interface, Ethernet and the laptop all owe essential contributions to a brilliant, visionary former professional jazz and rock guitarist: Alan Kay. His second career as a computer scientist saw him become the creative catalyst at Xerox, Atari and Apple. Alan is driven by a vision of the computer’s potential role in education and in building a better society.

When you want to gain a historical perspective on personal computing and programming languages, there are few better people to talk to than Alan Kay. The winner of numerous awards, including the Turing Award for Smalltalk, which underpinned the new world of computing during the 1970s, Kay was one of the founders of the Xerox Palo Alto Research Center (PARC). Here he led one of several groups that developed modern workstations; he was the first to approach the design of computers from the point of view of an artist rather than that of an engineer, and by doing so influenced the design of the first Apple Mac. Kay’s other firsts include the overlapping window interface, desktop publishing, Ethernet, laser printing, and client-server networking.

For ten years after leaving school, Kay was a professional guitarist in and around Denver. He then joined the Air Force, discovered a natural aptitude for computer science and went to the University of Utah. He received his master’s degree in 1968 for the Flex programming language, suitably enough designed for the small desktop computer of the same name. It was while he was thinking about the software design of the Flex computer that he witnessed Doug Engelbart demonstrating his early Augment/NLS system, which he and his team had been working on since 1962.

In those six years, Engelbart and his team had bundled almost all of the critical components of modern personal computing into one machine, including hypertext, graphics, multiple windows, efficient navigation and command input, collaborative work and a mouse pointing device.

Now this work is acknowledged to be the foundation of everything in the modern PC: previously, teams of people had served a single computer; now the computer would become a personal assistant. The notion flowed directly from Vannevar Bush’s Memex and Kay’s 1968 concept of the Dynabook – what can only be described as a wireless laptop with a nearly eternal battery life.

Kay went on to design a graphical object-oriented personal computer, was a member of the research team that carried out pioneering 3-D graphics work for the Advanced Research Projects Agency (ARPA), and was a ‘slight participant’ in the original design of the ARPANet, which later became the Internet.

Even at the time, Xerox’s decision to cut off resources to PARC was generally seen as bizarre. Now it is acknowledged to be a pivotal moment in the development of the PC. Apple were quick to take up the ideas, and Kay’s work came to fruition with the Apple Macintosh. Kay left Xerox PARC in 1981 to become chief scientist of Atari and later a Fellow of Apple Computer, and in December 1995, while still at Apple, he collaborated with others to start the open-source Squeak dynamic media software.

He then joined Walt Disney Imagineering as a Disney Fellow and remained there until Disney ended its Fellow program. In 2001 he founded Viewpoints Research Institute, a non-profit organization dedicated to children, learning, and advanced software development.

Later, Kay worked with a team at Applied Minds and became a Senior Fellow at Hewlett-Packard Labs.

In November 2005, at the World Summit on the Information Society, the MIT Media Lab unveiled the $100 Laptop, the machine at the heart of the One Laptop per Child program, which was begun and is sustained by Kay’s friend Nicholas Negroponte.

 

RM:
Alan, Doug Engelbart had a singular vision about augmenting the human mind and the interaction between a machine and its user, which led directly to the invention of the PC. Has his vision been realized, do you think?
AK:
Doug is a very special visionary within our field, but he attributes many of his ideas to having read the Vannevar Bush “As We May Think” article in the Atlantic Monthly while a young man serving in the Navy in 1945. What is so interesting about Doug’s vision is that the details of what he meant (and even many parts of the grand demo in 1968) were much more ambitious than the current general state of personal computing – and especially the web – today. For example, he thought that full collaboration of seeing, hearing, screen and file sharing should be as basic as having a mouse and a graphics display. And there are many other such examples having to do with more complex issues such as processes for (a) helping organizations work on their goals, (b) helping organizations improve their goals, and (c) helping organizations improve the processes of improving goals. These three are all critically important, but are rarely found in organizations today – especially (c).
RM:
What are the most important but fundamental things the computing industry can learn from its past?
AK:
One thing is that the industry has been invariably wrong about what computing is, where it should go and how to improve it. Compare and contrast the role of ARPA with what industry thought it should be doing. I’d say this situation is even worse today.
RM:
The One Laptop per Child Association has the laudable aim of extending Internet access across the Third World, but the sticking point seems to be a program that could teach children to read in their native language. It surprises me that the app doesn’t exist. What specific problem stops this technology from becoming a reality?
AK:
This is a really good question. The sticking point for all education everywhere is that the adults who are involved in it are generally not up to what the subject matter and processes require.

But you are definitely right that one of the starting routes to escape the problem of adults would be to be able to teach children to read in their native language without requiring adults to help. I’ve been calling for this as one part of a larger set of new ways to approach the interfaces for personal computing.

I think there is enough known to pull off ‘the computer teaching children to read and write’ today. O.K. Moore at Yale 50 years ago was the trailblazer philosophically, experimentally, and pragmatically. One of his disciples 25 years ago – John Henry Martin – supported by IBM, did a very interesting system based on Moore’s ideas that strained the technology of the time but could be done better and more completely today.

If you like the idea of Grand Challenges, then this would be a good idea – I like the idea of Grand Funding better! – and if this funding were available I would urge researchers and students to make the effort to pull this off. It’s a nice and rich blend of technical, psychological, and theatrical (UI) problems whose solutions have to be blended very comprehensively.

One of the many difficulties here would be how to go beyond the simple social uses of reading and writing – including the simplistic answers to deeper questions that are found all too often in the net cultures today. Providing motivation beyond basic narcissism and the desire to be part of a social network is not easy. This is one of the many roles that are better filled by adults in a society, and the adults used to take this much more seriously than they do now.

The answer to your question could be as simple as a lack of goal-plus-funding. We want a practical, usable result, and this is not within the scope of either the NSF or the current DARPA (Defense Advanced Research Projects Agency).

The DOE (Department of Education) should fund it, but this traditionally would have been almost impossible politically for them. The current Secretary of Education, Arne Duncan, knows about and likes some computer tutoring systems, such as those by Carnegie Learning, but I wonder how politically feasible he would think it would be for the DOE to use substantial funds in the light of current public and teachers’ opinions on where the money should be spent.

RM:
Over the last 20 years the Internet has scaled enormously, but operating systems and other computer software haven’t grown in the same way. Do you think the Internet concept could be imitated and used as a basis for an operating environment that doesn’t have an operating system?
AK:
Of course this is one of my favorite soapbox topics, and I’ve lectured and written extensively about it – and our research institute has a couple of big grants to try to show what could be done here.

So this is an un-neutral topic in which I’m biased (mostly by logic, I hope) towards answering: “Sure! As many things as possible should have a distributed ‘no centers’ architecture. And of course this is tantamount to really getting rid of the concept and the reality of an ‘operating system’.”

The ARPA/PARC research community tried to do as many things ‘no center’ as possible, and this included the Internet, Ethernet, personal computers instead of mainframes, multiple-processor architectures instead of single CPUs, and the Smalltalk system, which was ‘objects all the way down’ and used no OS at all.
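As an illustration of what ‘no centers, objects all the way down’ can mean in practice, here is a minimal sketch (not from the interview, written in Python rather than Smalltalk, and with every name invented for the example) of peer objects that interact only by sending messages to one another, with no central kernel or operating-system object coordinating them.

class Peer:
    """An object that responds to messages; everything it knows is local."""
    def __init__(self, name):
        self.name = name
        self.handlers = {}  # selector -> callable, a bit like a method dictionary

    def on(self, selector, handler):
        self.handlers[selector] = handler

    def send(self, selector, *args):
        # The only way to interact with a Peer is to send it a message.
        handler = self.handlers.get(selector)
        if handler is None:
            return "%s does not understand %r" % (self.name, selector)
        return handler(*args)

# Even "storage" and "display" are just peers offering services; there is no kernel.
store = {}
storage = Peer("storage")
storage.on("put", lambda key, value: store.__setitem__(key, value))
storage.on("get", lambda key: store.get(key))

display = Peer("display")
display.on("show", lambda text: print("[display] " + text))

# A third object composes behaviour purely by messaging the other two.
notebook = Peer("notebook")
notebook.on("note", lambda key, value: (storage.send("put", key, value),
                                        display.send("show", "saved " + key)))

notebook.send("note", "idea", "objects all the way down")
print(storage.send("get", "idea"))   # prints: objects all the way down

The point is not these particular twenty lines but their shape: every capability is an object reachable only through messages, so pieces can be added, replaced or spread across machines without any one component having to act as ‘the system’.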

This could be done much better these days, but very few people are interested in it (we are). We’ve got some nice things to show, not quite halfway through our project. Lots more can be said on this subject.

RM:
The Singularity University, based at NASA’s Silicon Valley campus in California, will offer courses in artificial intelligence, nanotechnology and biotechnology. Is this a throwback to the days when a handful of visionaries set out to redefine what technology could do to help the human race?
AK:
I think ‘university’ here is a marketing term, and at best is metaphorical. As far as I can tell this is a well-thought-out and well-put-together ‘business enlightenment’ organization. It is like the old CSC Index Forums, and has many overlaps with the current TTI Vanguard program.

For example, none of the courses are long enough to really learn the subjects deeply, but some of them (particularly the 9-week summer course) could be thought of as in-depth briefings on “possibilities” in the near and mid futures.

They don’t seem to be doing any research themselves in the areas they proffer, and the famous names listed are visitors, not regular employees.

Still, a little bit of enlightenment is a lot better than none, so I would think that this is probably a little bit of a plus for attendees.

RM:
Most innovations in software have emerged from experience and experiment, without any appeal to an underlying theory. When John Backus invented Fortran, he had no theory to work by. What is stopping the same thing happening now?
AK:
I think you are right about FORTRAN. But not about Algol (and especially LISP), or about most things done by the ARPA/PARC research community. The Internet and Ethernet were not remotely ad hoc, nor were Smalltalk, the PARC GUI, peer-to-peer networking, etc. These were all done by very knowledgeable scientists using theoretical frameworks very deeply.

In fact, the ad hoc nature of FORTRAN was so distressing that it acted as a motivational goad for mathematicians and scientists to do much better – and they did.

I would say that quite a few more recent technologies – especially including the web and things made out of web stuff – are distressingly ad hoc and poorly designed.

RM:
The Viewpoints Research Institute is supporting several technologies and projects aimed at re-inventing programming. How are you planning to redevelop how people use a PC, from the OS upwards?
AK:
We are not trying to reform the PC or how it is used per se. We chose “personal computing” (a) as a wide range of experiences that most people have a sense of, (b) as a body of code to provide these experiences that seems many orders of magnitude larger than it *should* be, and (c) because we had a lot of experience inventing the first versions of personal computing, and this was done with tiny resources compared to what it seems to take now.

So this makes a great target to invent and exhibit improvements (still with some apples and oranges problems, but if the improvements are by factors of 100s or 1000s, then we are back to comparing “just fruit”).

The real aim here is to learn and invent better ways to program systems at all levels of scale, from code that runs very close to the hardware (or even inside it) all the way up to the user experience and further out to the entire Internet. We call it “STEPS Toward the Reinvention of Programming” because we think we have to make a real improvement on today’s (or even Xerox PARC’s generally better) ways to program before we can even see how to really reinvent the genre.
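As a rough and deliberately generic illustration of the kind of leverage being described (this is not code from the STEPS project, and the field names below are invented), compare hand-writing field-by-field parsing code for a packet header with keeping the ‘meaning’ in a small declarative table that a tiny generic engine interprets; the progress reports linked below describe the project’s actual approach, at a much larger scale.

import struct

# The "meaning" lives in a compact, declarative table: field name -> struct format code.
# (Invented, simplified IPv4-style field names; purely illustrative.)
HEADER_LAYOUT = [
    ("version_ihl",    "B"),
    ("service_type",   "B"),
    ("total_length",   "H"),
    ("identification", "H"),
    ("flags_fragment", "H"),
    ("ttl",            "B"),
    ("protocol",       "B"),
    ("checksum",       "H"),
    ("source",         "4s"),
    ("destination",    "4s"),
]

def parse_header(data, layout=HEADER_LAYOUT):
    """Generic engine: interpret any layout table against raw bytes."""
    fmt = "!" + "".join(code for _, code in layout)          # network byte order
    values = struct.unpack(fmt, data[:struct.calcsize(fmt)])
    return dict(zip((name for name, _ in layout), values))

# Usage with 20 bytes of made-up data; changing the format means editing the table, not the engine.
print(parse_header(bytes(range(20))))

Growing the description (more fields, more headers) costs a line per fact while the engine stays the same; that separation of meaning from machinery is the flavour of improvement being pursued, though at a vastly simpler scale in this sketch.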

Our website has several progress reports which will give a more complete picture of how things are going.

The original proposal: http://www.vpri.org/pdf/rn2006002_nsfprop.pdf

2007 Progress: http://www.vpri.org/pdf/tr2007008_steps.pdf

2008 Progress: http://www.vpri.org/pdf/tr2008004_steps08.pdf

More writings: http://www.vpri.org