Ross Anderson: Geek of the Week

Professor Ross Anderson is one of the foremost experts in computer security in the world. He has published widely on the economics of security, cryptology, formal methods, hardware design, and the robustness of distributed systems in general. He is best known for his book 'Security Engineering: A Guide to Building Dependable Distributed Systems'. He has never been shy of controversy, and we were intrigued by the influence he wields at Cambridge University; so intrigued were we that we sent the taciturn Richard Morris to find out more from him.

Ross Anderson is Professor of Security Engineering at the University of Cambridge Computer Laboratory. He is an internationally renowned expert on computer security who is often called on by the media for his views. He graduated from Trinity College, Cambridge in 1978 with a BA in mathematics and natural science and subsequently received a qualification in computer engineering. After working in the avionics and banking industry he moved back to Cambridge in 1992 to begin work on his doctorate under the supervision of the remarkable computer scientist Roger Needham. He received his PhD in 1995, and became a lecturer in the same year.

With the Israeli cryptographer and cryptanalyst Eli Biham, he designed the BEAR, LION and Tiger cryptographic primitives. He has also co-authored, with Biham and Professor Lars Knudsen of the Technical University of Denmark, the block cipher Serpent, one of the finalists in the AES competition.

Professor Anderson is well-known among Cambridge academics as an outspoken defender of academic freedoms, intellectual property, and other matters of university politics. In 2002, he became a vigorous critic of trusted computing proposals, in particular Microsoft’s proposed ‘Next-Generation Secure Computing Base’ (NGSCB), formerly known as the Palladium operating system.

And in January 2004, the student newspaper Varsity gave Anderson the epithet of Cambridge University’s ‘most powerful person.’


Ross Anderson is the author of Security Engineering and was the founder and editor of Computer and Communications Security Reviews. His other research interests include:

  • Peer-to-Peer systems including the Eternity Service and the Cocaine auction protocol
  • Robustness of cryptographic protocols, including 'programming Satan's Computer' – a phrase coined to express the difficulty of designing cryptographic protocols
  • Reliability of security systems
  • Analysis and design of cryptographic algorithms

In 1998 he set up the Foundation for Information Policy Research (FIPR), which has become a thorn in the side of government and claims a number of successes in resisting erosions of liberty, including amendments to the Regulation of Investigatory Powers Act. As the Bill passed through the UK Parliament, the foundation successfully promoted amendments which:

  • Prevented surveillance of web browsing without a warrant (‘Big Browser’ amendment for traffic data)
  • Ensured that those who lose keys or forget passwords would be presumed innocent

‘Ross, I enjoyed reading your latest blog post about chairing the Interdisciplinary Workshop on Security and Human Behaviour at MIT recently. It sounded fascinating, and it had some pretty impressive attendees, including Richard Clarke, the former terrorism adviser to Presidents Clinton and George W Bush, and the magician and scientific sceptic James Randi. I can understand why they were invited to attend, but I noticed a philosopher or two. Why were they there? ‘
‘Dick Clarke in the end didn’t turn up.
The reason we invited the philosopher David Livingstone Smith was because of his writings on psychology and war. One of the reasons human societies demonise an outgroup is to make warfare easier; it’s difficult to kill people whom you regard as people. Hence the drive to dehumanise ‘the fascists’ or ‘the communists’ or ‘the Catholics’ or ‘the Jews’ or ‘the Hutus’ or ‘the colonialists’, depending on what war we’re talking about. There are various ways in which this can be done – to see them as predators (‘the colonialists’), for example, or even as vermin (Hitler and the Jews). If you’re interested in the interaction between security and psychology, this is pretty central stuff. The first session, on deception, was fascinating.
It emphasised the huge range of problems, from detecting deception in interpersonal contexts such as interrogation through the effects of context and misdirection to how we might provide better trust signals to computer users.
Over the past seven years, security economics has gone from nothing to a thriving research field with over 100 active researchers. Over the next seven I believe that security psychology should do at least as well. ‘
‘One session was entitled “How do we Fix the World?”. That must have been one heck of a long session! ‘
‘Most of it was a discussion of how we should organise future workshops. However, before we launched into that, we had two excellent summaries – one by the Harvard economist Richard Zeckhauser, and the other by Nick Humphrey, one of Britain’s leading psychologists, who developed the theory that we evolved intelligence primarily for social rather than technical purposes. You can read a summary of their contributions on my Light Blue Touchpaper blog. ‘
‘You say that security economics has gone from nothing to a thriving research field with over 100 active researchers. What do you expect to happen with security psychology over the same period? ‘
‘I expect it will become at least as important. Since about 2000, people have started to realise that security failure is caused by bad incentives at least as often as by bad design. Systems are particularly prone to failure when the person guarding them does not suffer the full cost of failure. Game theory and microeconomic theory are becoming important to the security engineer, just as the mathematics of cryptography did a quarter century ago.

The growing use of security mechanisms for purposes such as digital rights management and accessory control – which exert power over system owners rather than protecting them from outside enemies – introduces many strategic issues. Where the system owner’s interests conflict with those of her machine’s designer, economic analysis can shine light on policy options.

Even in the week since the workshop, for example, we’ve seen a media storm in Britain about fear of knife crime. How much of this is real and how much of it is just driven by competitive pandering by politicians? Wouldn’t it be nice to understand such mechanisms better? ‘
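The incentive failure Anderson describes – "systems are particularly prone to failure when the person guarding them does not suffer the full cost of failure" – can be sketched with a toy model. The functional form and numbers below are illustrative assumptions, not from the interview: breach probability falls exponentially with security effort, and we compare the effort a defender chooses when bearing the full breach cost against the effort chosen when most of the loss falls on others.

```python
import math

def optimal_effort(breach_cost_borne, effort_cost=1.0):
    """Effort level minimising the defender's expected loss.

    Assume breach probability p(e) = exp(-e), so total expected cost is
    effort_cost * e + breach_cost_borne * exp(-e). The first-order
    condition effort_cost = breach_cost_borne * exp(-e) gives
    e* = ln(breach_cost_borne / effort_cost), floored at zero.
    """
    if breach_cost_borne <= effort_cost:
        return 0.0  # not worth spending anything on security
    return math.log(breach_cost_borne / effort_cost)

# Suppose the full social cost of a breach is 100 units, but the
# defender internalises only 10% of it (the rest falls on customers,
# banks, or other third parties).
social_optimum = optimal_effort(100)   # effort if the full cost were borne
private_choice = optimal_effort(10)    # effort under the 10% share
```

Here `private_choice` comes out well below `social_optimum`: the defender rationally under-invests, which is exactly the kind of result that makes game theory and microeconomics useful to the security engineer.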

‘Let’s return to technology. What would you say is the most dangerous threat to IT security and why? ‘
‘The fastest growing threat is phishing – which involves deceiving users. That crosses the boundaries between technology and psychology, and a purely techie approach is unlikely to work. Engineers, psychologists and economists will have to work together.
Online frauds like phishing are often easier to do, and harder to stop, than similar real-world frauds because most online protection mechanisms are not anything like as intuitively usable, or as difficult to forge convincingly, as their real-world equivalents; it is much easier for crooks to build a bogus bank website that passes casual inspection than it is for them to create a bogus bank in a shopping mall. We’ve evolved social and psychological tools over millions of years to help us deal with deception in face-to-face contexts, but these are of little use to us when we’re presented with an email that asks us to do something. It seems to be harder to create useful asymmetry in usability, by which I mean that good use is easier than bad use.
Deception, of various kinds, is now the greatest threat to online security. It can be used to get passwords, to compromise confidential information, or to manipulate financial transactions directly.
The most common way for private investigators to steal personal information is pretexting – phoning someone who has the information under a false pretext, usually by pretending to be someone authorised to be told it. Such attacks are sometimes known collectively as social engineering.
Hoaxes and frauds have always happened, but the Internet makes some of them easier, and lets others be repackaged in ways that may bypass our existing controls – be they personal intuitions, company procedures or even laws. We will be playing catch-up for some time.
Another driver for the surge in attacks based on social engineering is that people are getting better at technology. As designers learn how to forestall the easier techie attacks, psychological manipulation of system users or operators becomes ever more attractive. So the security engineer simply must understand basic psychology and ‘security usability’, and one of the biggest opportunities facing the research community is to learn more about what works and why. ‘
‘What do you believe to be the biggest threat to IT safety? Or has it happened already and the general public hasn’t heard of it? ‘
‘We’re building ever more complex socio-technical systems that link up millions of people without putting much effort into thinking about the consequences. Now, historically IT has brought much more benefit than harm, so we certainly don’t want to stifle innovation – but it is prudent to ensure that people bear the consequences of their actions. It’s where actors can dump risk on others that things go wrong, and it’s here that governments have a role. ‘
‘So what country breeds the most sophisticated cyber-criminals? ‘
‘A trite answer might be ‘Russia’ but that’s an oversimplification. The problem is that over the past four years or so, criminals have started to specialise. In the old days, online crime was a cottage industry; it was also vertically integrated, and these two features together kept it small.
Every cyber villain had to do just about everything for himself – write the bad code, steal the credit card numbers, take out the money and launder it. Now, however, the bad guys trade with each other. Some guys sell malware while others operate botnets or phishing sites and yet others buy up stolen credentials and cash them out.

This industrialises the business; it lets the bad guys get good at their jobs, and makes the whole operation scale. In short, the villains are doing what the regular economy started doing in the mid-18th century. It’s Adam Smith’s pin factory all over again. ‘

‘In your book Security Engineering you mention how the Swedish government were upset when they learned that their version of Lotus Notes, which they used widely in public service, had its cryptography deliberately weakened to allow US National Security Agency (NSA) access. Why would the NSA want to access Swedish Govt documents? ‘
‘It’s their job! ‘
‘Do you think then that the NSA are scare mongering on terrorism or was there an innocent reason for decrypting data they hacked into? ‘
‘All the agencies are talking up fears about the threats that their operations might help tackle. This is nothing new; read Eisenhower on the dangers of the military-industrial complex. For terrorism and the modern security-industrial complex, read John Mueller’s book ‘Overblown’. ‘
‘How well-encrypted must data be, in order to be safe then? ‘
‘You are in a state of sin. This is a wrong question to ask, for many reasons. ‘Whoever thinks his problem is solved by encryption, doesn’t understand his problem and doesn’t understand encryption’ (Needham and Lampson) ‘
‘How secure are databases on Windows: should we be worried? ‘
‘I don’t use Windows. As for database security I’d recommend reading David Litchfield’s The Database Hacker’s Handbook: Defending Database Servers. ‘
‘OK! Let’s move on to the Foundation for Information Policy Research, which you chair. It’s partly concerned with educating the public on the social effects of IT and has become (or so it’s claimed) Britain’s leading internet policy think-tank. What is its purpose? ‘
‘We bring together engineers, lawyers, economists and others interested in the effects that IT has on society and try to figure out what policy options might be helpful or harmful. ‘
‘Could you give me a couple of examples of how you think the information revolution is eroding our civil liberties. What’s the most ludicrous example in the UK that you have come across? ‘
‘The idea floated by GCHQ (the UK Government Communications Headquarters, the centre for the Government’s signals intelligence (SIGINT) activities) in 2000, and again recently by Gordon Brown, that all the telcos should have to give them copies of everyone’s itemised phone bill, and that all ISPs should have to give them logs of all IP traffic. ‘
‘And an American example? ‘
‘Isn’t the Patriot Act enough? A senior Microsoft guy described this to me at a conference in London yesterday as ‘if you have a database in the USA, then the FBI has access to it’.
[Note: The Patriot Act means that the US Government can gain access to information about computer use including Web surfing by saying that the information obtained by the FBI is relevant to an ongoing criminal investigation. No other proof is required.] ‘
‘When you weigh up all these big minuses what’s the big positive about the net? ‘
‘In the previous generation, culture was broadcast; it was one-to-many. The gentleman from the BBC (or wherever) talked, and we listened. Now culture is becoming interactive: we’re all talking to each other.
Other people are the killer application of the Internet, not ‘content’. This isn’t really new, of course, but a return to how things used to be: before radio and TV, culture was more interactive, and we’re getting back there. However, we can now interact with anyone in the world we want to, not just with the people in the village where we live.
The economics of information security has recently become a thriving and fast-moving discipline. As distributed systems are assembled from machines belonging to principals with divergent interests, incentives are becoming as important to dependability as technical design.
The new field provides valuable insights not just into ‘security’ topics such as privacy, bugs, Spam, and phishing, but into more general areas such as system dependability (the design of peer-to-peer systems and the optimal balance of effort by programmers and testers), and policy (particularly digital rights management).
This research program has been starting to spill over into more general security questions (such as law-enforcement strategy), and into the interface between security and the social sciences. Most recently it has started to interact with psychology, both through the psychology-and-economics tradition and in response to phishing. ‘
‘You came up with the phrase ‘programming Satan’s computer’ to describe the problems faced by computer-security engineers. Tell me a little about the background behind this sinister sounding phrase? ‘
‘We have described the crypto protocols design problem as ‘programming Satan’s computer’ because a network under the control of an adversary is possibly the most obstructive computer which one could build. It may give answers which are subtly and maliciously wrong at the most inconvenient possible moment.
Seen in this light, it is less surprising that so many protocols turned out to contain serious errors, and that these errors often took a long time to discover: twelve years for the bug in Denning-Sacco, and seventeen years for the middleperson attack on Needham-Schroeder. It is hard to simulate the behaviour of the devil; one can always check that a protocol does not commit the old familiar sins, but every so often someone comes up with a new and pernicious twist. It is therefore natural to ask what we must do to be saved.’
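The seventeen-year-old flaw Anderson refers to is Lowe's middleperson attack on the Needham-Schroeder public-key protocol. The toy simulation below illustrates the message flow; the dictionary-based "encryption" and the participant names are illustrative stand-ins, not real cryptography. Alice innocently starts a run with Mallory, and Mallory uses her as an oracle to complete a parallel run with Bob while posing as Alice.

```python
import os

def encrypt(recipient, payload):
    # Toy public-key box: only the named recipient can open it.
    return {"for": recipient, "payload": payload}

def decrypt(identity, box):
    assert box["for"] == identity, "cannot decrypt: wrong recipient"
    return box["payload"]

def nonce():
    return os.urandom(8)

na = nonce()

# 1. Alice -> Mallory: {Na, Alice} encrypted for Mallory
m1 = encrypt("Mallory", (na, "Alice"))

# 2. Mallory re-encrypts the same content for Bob, claiming to be Alice
m2 = encrypt("Bob", decrypt("Mallory", m1))

# 3. Bob replies to "Alice": {Na, Nb} encrypted for Alice
na_seen, claimed_sender = decrypt("Bob", m2)
nb = nonce()
m3 = encrypt("Alice", (na_seen, nb))

# 4. Mallory cannot read m3, but forwards it to Alice unchanged.
# 5. Alice opens it and, believing she is talking to Mallory,
#    returns Nb encrypted for Mallory.
_, nb_received = decrypt("Alice", m3)
m5 = encrypt("Mallory", nb_received)

# 6. Mallory now knows Nb and completes the run with Bob.
stolen_nb = decrypt("Mallory", m5)
m6 = encrypt("Bob", stolen_nb)

# Bob accepts: he believes he shares (Na, Nb) with Alice, yet Mallory
# holds both nonces. Lowe's fix adds the responder's name to message 3,
# so Alice would see "Bob" where she expected "Mallory" and abort.
bob_accepts = decrypt("Bob", m6) == nb and claimed_sender == "Alice"
```

The attack never breaks the encryption itself: every box is opened only by its intended recipient, yet the protocol still fails – a concrete instance of Satan's computer giving "subtly and maliciously wrong" answers.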