The next internet: will it trade privacy for security?

Network World examines ongoing efforts to change the basic architecture of the internet in an article titled

2020 Vision: Why you won’t recognize the ‘Net in 10 years

The US National Science Foundation is funding some fairly intensive research to re-invent and secure the internet, or perhaps to gain better control over a net that today, despite intense efforts, eludes any single country's power to control.

The NSF has been funding this research with tens of millions of dollars for years, and it leaves no area untouched in the effort to better secure the net and make it more impervious to attack.

NSF is challenging researchers to come up with ideas for creating an Internet that’s more secure and more available than today’s. They’ve asked researchers to develop more efficient ways to disseminate information and manage users’ identities while taking into account emerging wireless and optical technologies. Researchers also must consider the societal impacts of changing the Internet’s architecture.

“One of the things we’re really concerned about is trustworthiness because all of our critical infrastructure is on the Internet,” Fisher says. “The telephone systems are moving from circuits to IP. Our banking system is dependent on IP. And the Internet is vulnerable.”

NSF says it won’t make the same mistake today as was made when the Internet was invented, with security bolted on to the Internet architecture after-the-fact instead of being designed in from the beginning.

The NSF’s multi-year effort enters the prototype-testing phase this year. A testbed, the Global Environment for Network Innovations (GENI), will be used to try out the chosen technological changes in a real network environment.

The GENI program has developed experimental network infrastructure that’s being installed in U.S. universities. This infrastructure will allow researchers to run large-scale experiments of new Internet architectures in parallel with — but separated from — the day-to-day traffic running on today’s Internet.

“What’s distinctive about GENI is its emphasis on having lots and lots of real people involved in the experiments,” Elliott says. “Other countries tend to use traffic generators….We’re looking at hundreds or thousands or millions of people engaged in these experiments.”

Some ideas that are being tested using the GENI testbed:

Software-defined networking

Today’s routers and switches come with software written by the vendor, and customers can’t modify the code. Researchers at Stanford University’s Clean Slate Project are proposing — and the GENI program is trialing — an open system that will allow users to program deep into network devices.

Stanford has demonstrated the OpenFlow protocol running on switches from Cisco, Juniper, HP and NEC. With OpenFlow, an external controller manages these switches and makes all the high-level decisions.

Appenzeller says the OpenFlow architecture has several advantages from an Internet security perspective because the external controller can view which computers are communicating with each other and make decisions about access control.
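To make that split concrete, here is a minimal, self-contained sketch of an OpenFlow-style design in which a switch punts unknown traffic to an external controller, which applies an access-control list and installs the decision as a flow rule. The class names, rule format and ACL are my own illustrative assumptions, not the Clean Slate Project's code or the OpenFlow wire protocol.

```python
# Sketch of an OpenFlow-style split between a simple switch and an external
# controller that makes the high-level access-control decisions.
# All names here are illustrative; this is not the OpenFlow protocol itself.

class Switch:
    """Forwards packets according to flow rules installed by the controller."""
    def __init__(self, controller):
        self.controller = controller
        self.flow_table = {}          # (src, dst) -> action ("forward" / "drop")

    def handle_packet(self, src, dst):
        action = self.flow_table.get((src, dst))
        if action is None:
            # No matching rule: punt the decision to the external controller.
            action = self.controller.packet_in(self, src, dst)
        return action


class Controller:
    """Sees which hosts talk to each other and decides what is allowed."""
    def __init__(self, acl):
        self.acl = acl                # set of (src, dst) pairs allowed to communicate

    def packet_in(self, switch, src, dst):
        action = "forward" if (src, dst) in self.acl else "drop"
        # Install the decision in the switch so later packets match locally.
        switch.flow_table[(src, dst)] = action
        return action


if __name__ == "__main__":
    ctrl = Controller(acl={("10.0.0.1", "10.0.0.2")})
    sw = Switch(ctrl)
    print(sw.handle_packet("10.0.0.1", "10.0.0.2"))  # forward
    print(sw.handle_packet("10.0.0.9", "10.0.0.2"))  # drop
```

The point of the sketch is the vantage point: because every new flow passes through the controller, it sees which computers are talking to each other and can enforce access control centrally rather than box by box.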

Routing Tables

The Rochester Institute of Technology (RIT) is addressing the issue of a routing table that gobbles up much of the available memory in routers – 300,000 entries and growing – with a technology called the Floating Cloud Tiered Internet Architecture, which is being tried out on GENI.

With the Floating Cloud approach, ISPs would not have to keep buying larger routers to handle ever-growing routing tables. Instead, ISPs would use a new technique to forward packets within their own network clouds.

RIT is proposing a flexible peering structure that would be overlaid on the Internet. The architecture forwards packets across network clouds, and each cloud is assigned to a tier with a numeric value. When packets are sent across a cloud, only their tier values are used for forwarding, which eliminates the need for global routing within the cloud.
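As an illustration only (RIT's papers describe the real scheme), the sketch below contrasts a flat routing table that needs one entry per reachable prefix with cloud-internal forwarding on a small tier value. The tier encoding and data structures are assumptions made for this sketch, not the actual Floating Cloud Tiered Internet Architecture.

```python
# Contrast between flat routing-table lookup and tier-value forwarding
# inside a "cloud". Illustrative only.

# Flat model: one entry per destination prefix -> the table grows without bound.
flat_table = {
    "203.0.113.0/24": "port1",
    "198.51.100.0/24": "port2",
    # ... hundreds of thousands more entries in a real core router
}

# Tiered model: inside a cloud, packets carry a small tier value and the
# router only needs one next hop per tier, regardless of destination count.
tier_next_hop = {
    1: "uplink-to-tier-1",
    2: "peer-link-tier-2",
    3: "customer-link-tier-3",
}

def forward_flat(dest_prefix):
    """Classic lookup: the table must contain every reachable prefix."""
    return flat_table.get(dest_prefix, "default-route")

def forward_tiered(packet_tier):
    """Cloud-internal forwarding: only the tier value is consulted."""
    return tier_next_hop.get(packet_tier, "default-route")

if __name__ == "__main__":
    print(forward_flat("203.0.113.0/24"))   # port1
    print(forward_tiered(2))                # peer-link-tier-2
```

The memory argument follows directly: the tiered table's size is bounded by the number of tiers, not by the number of destinations, so ISPs would not have to keep buying larger routers as the global table grows.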

Opportunistic networks

Researchers at Howard University in Washington, D.C. are experimenting with a new type of mobile wireless network focused on networks that aren’t connected all the time – so-called opportunistic networks, which have only intermittent connectivity.

Opportunistic networks would use peer-to-peer communication to relay messages when the network is unavailable. For example, you may want to send an e-mail from a car in a remote location without network access. With an opportunistic wireless network, your PDA might hand that message to a device inside a passing vehicle, which might carry it to a nearby cell tower.

“The most fundamental difference about this architecture is that the network has intermittent connections, as compared to the Internet which assumes you are connected all of the time,” Li says.
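The store-carry-forward behaviour described above can be shown with a toy simulation. The node and message handling below are hypothetical, intended only to show messages hopping opportunistically between devices that happen to come into contact; they are not the Howard University implementation.

```python
# Toy store-carry-forward simulation of an opportunistic network.
# Messages are buffered on devices and handed over whenever two devices meet.

class Node:
    def __init__(self, name, has_uplink=False):
        self.name = name
        self.has_uplink = has_uplink   # e.g. a device near a cell tower
        self.buffer = []               # messages carried while disconnected

    def send(self, message):
        self.buffer.append(message)

    def meet(self, other):
        """When two nodes come into radio range, hand over buffered messages."""
        other.buffer.extend(self.buffer)
        self.buffer = []
        other.try_deliver()

    def try_deliver(self):
        if self.has_uplink and self.buffer:
            for msg in self.buffer:
                print(f"{self.name} delivers: {msg}")
            self.buffer = []


if __name__ == "__main__":
    pda = Node("PDA in remote car")
    passing_car = Node("passing vehicle")
    tower_car = Node("vehicle near cell tower", has_uplink=True)

    pda.send("e-mail to the office")
    pda.meet(passing_car)        # first hop: message rides along with the passing vehicle
    passing_car.meet(tower_car)  # second hop reaches a connected device and is delivered
```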

Facebook as a model

Davis Social Links uses the format of Facebook — with its friends-based ripple effect of connectivity — to propagate connections on the Internet. That’s how it creates connections based on trust and true identities, according to S. Felix Wu, a professor in the Computer Science Department at UC Davis.

Although modeled on the popular Facebook service, Davis Social Links represents a radical change from today’s Internet. The current Internet is built on the idea that every user is globally addressable; Davis Social Links replaces that idea with social rather than network connectivity.

“This is revolutionary change,” Wu says. “One of the fundamental principles of today’s Internet is that it provides global connectivity. If you have an IP address, you by default can connect to any other IP address. In our architecture, we abandon that concept. We think it’s not only unnecessary but also harmful. We see [distributed denial-of-service] attacks as well as some of the spamming activity as a result of global connectivity.”
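A hedged sketch of the core idea: reachability follows the social graph rather than a global address space, so a message is deliverable only if a chain of friend links connects sender and receiver. The graph and hop limit below are invented for illustration; Davis Social Links' actual mechanisms (keyword routing, trust metrics) are more elaborate.

```python
# Sketch of "social connectivity instead of global connectivity": delivery
# requires a path of friend links. Illustrative, not Davis Social Links itself.

from collections import deque

friends = {
    "alice":    {"bob", "carol"},
    "bob":      {"alice", "dave"},
    "carol":    {"alice"},
    "dave":     {"bob"},
    "stranger": set(),            # no social path -> effectively unreachable
}

def social_path(src, dst, max_hops=4):
    """Breadth-first search over friend links; returns a path or None."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        if len(path) > max_hops:
            continue
        for nxt in friends.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

if __name__ == "__main__":
    print(social_path("alice", "dave"))      # ['alice', 'bob', 'dave']
    print(social_path("stranger", "alice"))  # None: no trust path, no connectivity
```

The design choice this illustrates is exactly Wu's point: a host with no social path to you simply cannot reach you, which removes the default global reachability that DDoS attacks and spam depend on.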

Content-centric networks

Instead of using IP addresses to identify the machines that store content, content-centric networking uses file names and URLs to identify the content itself. The underlying idea is that knowing the content users want to access is more important than knowing the location of the machines used to store it.

Jacobson proposes that content — such as a movie, a document or an e-mail message — would receive a structured name that users can search for and retrieve. The data has a name, but not a location, so that end users can find the nearest copy.

In this model, trust comes from the data itself, not from the machine it’s stored on. Jacobson says this approach is more secure because end users decide what content they want to receive rather than having lots of unwanted content and e-mail messages pushed at them.

“TCP was designed so it didn’t know what it was carrying. It didn’t know what the bits were in the pipe,” Jacobson explains. “We came up with a security model that we’ll armor the pipe, or we’ll wrap the bits in SSL, but we still don’t know the bits. The attacks are on the bits, not the pipes carrying them. In general, we know that perimeter security doesn’t work. We need to move to models where the security and trust come from the data and not from the wrappers or the pipes.”

Jacobson says the evolution to content-centric networking would be fairly painless because it would be like middleware, mapping between connection-oriented IP below and the content above. The approach uses multi-point communications and can run over anything: Ethernet, IP, optical or radio.
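To ground the "trust comes from the data" idea, here is a simplified sketch in which content is requested by name and verified against a digest bound to the name itself, so the consumer does not care which machine served the copy. Real content-centric networking uses hierarchical names and cryptographic signatures on data packets; the hash-only scheme and store layout below are simplifications of my own.

```python
# Simplified name-based retrieval where integrity travels with the data:
# the consumer verifies the content against a digest bound to the name,
# no matter which machine served the copy. Illustration only.

import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Any node can hold a copy; the consumer takes the nearest one that verifies.
content_store_near = {}
content_store_far = {}

def publish(store, name, data):
    store[name] = data
    return f"{name}/sha256={digest(data)}"    # self-certifying name

def fetch(full_name, stores):
    name, _, expected = full_name.rpartition("/sha256=")
    for store in stores:                      # try the nearest copies first
        data = store.get(name)
        if data is not None and digest(data) == expected:
            return data
    raise LookupError("no verifiable copy found")

if __name__ == "__main__":
    movie = b"...movie bytes..."
    full_name = publish(content_store_far, "/videos/launch.mp4", movie)
    content_store_near["/videos/launch.mp4"] = movie   # copy cached closer to the user
    print(fetch(full_name, [content_store_near, content_store_far]) == movie)  # True
```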

This leaves us with an open question: are we about to give up privacy in the name of security? The debates around full-body scanners being installed in airports around the world come to mind.

Certainly all the social implications – societal impacts, as the NSF calls them – should be considered before committing to changes as deep and potentially irreversible as altering the architecture of the internet.
