Do We Need a New Internet?

The New York Times | February 14, 2009
By John Markoff

Two decades ago a 23-year-old Cornell University graduate student brought the Internet to its knees with a simple software program that skipped from computer to computer at blinding speed, thoroughly clogging the then-tiny network in the space of a few hours.

The program was intended to be a digital “Kilroy Was Here,” just a bit of cybernetic fungus that would unobtrusively wander the Net. A programming error, however, turned it into a harbinger of a darker cyberspace, more a mirror of all the chaos and conflict of the physical world than a utopian refuge from it.

Since then things have gotten much, much worse. 

Bad enough that there is a growing belief among engineers and security experts that Internet security and privacy have become so maddeningly elusive that the only way to fix the problem is to start over.

What a new Internet might look like is still widely debated, but one alternative would, in effect, create a “gated community” where users would give up their anonymity and certain freedoms in return for safety. Today that is already the case for many corporate and government Internet users. As a new and more secure network becomes widely adopted, the current Internet might end up as the bad neighborhood of cyberspace. You would enter at your own risk and keep an eye over your shoulder while you were there.

“Unless we’re willing to rethink today’s Internet,” says Nick McKeown, a Stanford engineer involved in building a new Internet, “we’re just waiting for a series of public catastrophes.”

That was driven home late last year, when a malicious software program thought to have been unleashed by a criminal gang in Eastern Europe suddenly appeared after easily sidestepping the world’s best cyberdefenses. Known as Conficker, it quickly infected more than 12 million computers, ravaging everything from the computer system at a surgical ward in England to the computer networks of the French military.

Conficker remains a ticking time bomb. It now has the power to lash together those infected computers into a vast supercomputer called a botnet that can be controlled clandestinely by its creators. What comes next remains a puzzle. Conficker could be used as the world’s most powerful spam engine, perhaps to distribute software programs to trick computer users into purchasing fake antivirus protection. Or much worse. It might also be used to shut off entire sections of the Internet. But whatever happens, Conficker has demonstrated that the Internet remains highly vulnerable to a concerted attack.

“If you’re looking for a digital Pearl Harbor, we now have the Japanese ships streaming toward us on the horizon,” Rick Wesson, the chief executive of Support Intelligence, a computer consulting firm, said recently.

The Internet’s original designers never foresaw that the academic and military research network they created would one day bear the burden of carrying all the world’s communications and commerce. There was no central control point, and its designers wanted to make it possible for every network to exchange data with every other network. Little attention was given to security. Since then, there have been immense efforts to bolt on security, to little effect.

“In many respects we are probably worse off than we were 20 years ago,” said Eugene Spafford, the executive director of the Center for Education and Research in Information Assurance and Security at Purdue University and a pioneering Internet security researcher, “because all of the money has been devoted to patching the current problem rather than investing in the redesign of our infrastructure.”

In fact, many computer security researchers view the nearly two decades of efforts to patch the existing network as a Maginot Line approach to defense, a reference to France’s series of fortifications that proved ineffective during World War II. The shortcoming in focusing on such sturdy digital walls is that once they are evaded, the attacker has access to all the protected data behind them. “Hard on the outside, with a soft chewy center,” is the way many veteran computer security researchers think of such strategies.

Despite a thriving global computer security industry that is projected to reach $79 billion in revenues next year, and the fact that in 2002 Microsoft itself began an intense corporatewide effort to improve the security of its software, Internet security has continued to deteriorate globally.

Even the most heavily garrisoned military networks have proved vulnerable. Last November, the United States military command in charge of both the Iraq and Afghanistan wars discovered that its computer networks had been purposely infected with software that may have permitted a devastating espionage attack.

That is why scientists armed with federal research dollars and working in collaboration with industry are trying to figure out the best way to start over. At Stanford, where the software protocols for the original Internet were designed, researchers are creating a system to make it possible to slide a more advanced network quietly underneath today’s Internet. By the end of the summer it will be running on eight campus networks around the country.

The idea is to build a new Internet with improved security and the capabilities to support a new generation of not-yet-invented Internet applications, as well as to do some things the current Internet does poorly — such as supporting mobile users.

The Stanford Clean Slate project won’t by itself solve all the main security issues of the Internet, but it will equip software and hardware designers with a toolkit to make security features a more integral part of the network and ultimately give law enforcement officials more effective ways of tracking criminals through cyberspace. That alone may provide a deterrent.

This is not the first time a replacement has been proposed for the current Internet. For example, modern Windows and Macintosh computers already come equipped to support a new Internet protocol known as IPv6, which would fix many of the shortcomings of the current IPv4 version. However, because of cost, performance and compatibility questions, it has languished.
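As a concrete illustration (not part of the original article): IPv6’s central change is widening addresses from 32 bits to 128 bits, which eliminates IPv4’s address exhaustion. The sketch below, using Python’s standard ipaddress and socket modules, compares the two address spaces and checks whether the local machine’s network stack was built with IPv6 support; the specific addresses shown are reserved documentation ranges, chosen purely for illustration.

```python
import ipaddress
import socket

# IPv4 uses 32-bit addresses; IPv6 uses 128-bit addresses.
print("IPv4 addresses:", 2 ** 32)    # about 4.3 billion
print("IPv6 addresses:", 2 ** 128)   # about 3.4e38

# The ipaddress module understands both protocol versions.
old = ipaddress.ip_address("192.0.2.1")     # IPv4 documentation range
new = ipaddress.ip_address("2001:db8::1")   # IPv6 documentation range
print("versions:", old.version, new.version)

# Whether this machine's stack was compiled with IPv6 support.
print("IPv6 supported locally:", socket.has_ipv6)
```

On most modern systems both protocols are available side by side (“dual stack”), which is exactly how a gradual transition is supposed to work: IPv6 traffic flows where both endpoints support it, and IPv4 remains the fallback.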

That has not discouraged the Stanford engineers, who say they are on a mission to “reinvent the Internet.” They argue that their new strategy is intended to allow new ideas to emerge in an evolutionary fashion, making it possible to move data traffic seamlessly to a new networking world. Like the existing Internet, the new network will almost certainly have no central point of control, and no one organization will run it. It is most likely to emerge as new hardware and software are built into the routers that run today’s network and are adopted as Internet standards.

For all those efforts, though, the real limits to computer security may lie in human nature.

The Internet’s current design virtually guarantees anonymity to its users. (As a New Yorker cartoon noted some years ago, “On the Internet, nobody knows you’re a dog.”) But that anonymity is now the most vexing challenge for law enforcement. An attacker can hide his location by routing a connection through many countries, starting, say, from an account at an Internet cafe purchased with a stolen credit card.

“As soon as you start dealing with the public Internet, the whole notion of trust becomes a quagmire,” said Stefan Savage, an expert on computer security at the University of California, San Diego.

A more secure network would almost certainly offer less anonymity and privacy. That is likely to be the great tradeoff for the designers of the next Internet. One idea, for example, would be to require the equivalent of a driver’s license to connect to a public computer network. But that runs against the deeply held libertarian ethos of the Internet.

Proving identity is likely to remain remarkably difficult in a world where it is trivial to take over someone’s computer from half a world away and operate it as your own. As long as that remains true, building a completely trustable system will remain virtually impossible.