Can the internet do without servers? A group of scientists at Cambridge University thinks it can and should.
One of the major bottlenecks in delivering data and content over the net is the capacity of servers to respond to heavy traffic. The change being proposed to eliminate that bottleneck, and connect us all more directly, is for our own computers to take a more active role: when we view some data, a local copy is kept and made available to others looking for the same thing, in the manner of peer-to-peer file sharing.
Data would no longer be identified by a pointer to the location where it is stored – the current system, using URLs – but by its own unique “tag”, a URI or Uniform Resource Identifier, that names that particular package of data, be it an article, an image, a video or a presentation. Data can then live on any computer and be shared from there. The more popular a piece of data, the more copies of it will exist around the net.
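The idea of a tag that names the data itself, rather than where it lives, can be illustrated with content addressing. The sketch below is not the researchers' actual scheme; it simply assumes, for illustration, that a SHA-256 digest of the content serves as the URI-style tag, so two copies on different machines carry the same identifier.

```python
# Minimal sketch of content-addressed naming (illustrative, not the
# researchers' actual scheme). The tag depends only on the bytes of
# the content, never on which machine stores it.
import hashlib

def content_id(data: bytes) -> str:
    # Identical copies anywhere on the net yield the same tag.
    return hashlib.sha256(data).hexdigest()

video_copy_a = b"the same video bytes"
video_copy_b = b"the same video bytes"
print(content_id(video_copy_a) == content_id(video_copy_b))  # same tag
```

Because the tag is derived from the content, any machine holding a copy can answer a request for it, which is what lets popular data spread across the network.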
URLs have made the internet location-centric, while this new way of doing things can be described as an information-centric architecture.
Essentially, the idea is to remove the concept of the URL from the internet. The researchers explain that online searches would stop looking for URLs (Uniform Resource Locators) and start looking for URIs (Uniform Resource Identifiers). A URI specifies what the data is rather than where it is stored, so there is no longer a single point of call. Trossen, one of the researchers, explains what that means for the user:
“Under our system, if someone near you had already watched [a] video or show, then in the course of getting it their computer or platform would republish the content. That would enable you to get the content from their network, as well as from the original server… Widely used content that millions of people want would end up being widely diffused across the network. Everyone who has republished the content could give you some, or all of it. So essentially we are taking dedicated servers out of the equation.”
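The republishing idea in the quote can be sketched as a toy model. All class and method names below are hypothetical, chosen for illustration: a peer that fetches content caches a copy and can serve later requests, so popular content diffuses across peers instead of every request hitting one dedicated server.

```python
# Toy model (hypothetical names) of peers republishing content they
# have watched, as described in the quote above.
class Peer:
    def __init__(self, name):
        self.name = name
        self.cache = {}          # tag -> content bytes

    def watch(self, tag, network):
        data = network.fetch(tag, requester=self)
        self.cache[tag] = data   # republish: this peer now serves the tag
        return data

class Network:
    def __init__(self, origin_store):
        self.origin = origin_store   # the original server's content
        self.peers = []

    def fetch(self, tag, requester):
        # Prefer any peer that has already republished the content.
        for peer in self.peers:
            if peer is not requester and tag in peer.cache:
                return peer.cache[tag]
        return self.origin[tag]      # fall back to the origin server

net = Network({"vid1": b"video bytes"})
alice, bob = Peer("alice"), Peer("bob")
net.peers += [alice, bob]
alice.watch("vid1", net)   # served by the origin; alice keeps a copy
bob.watch("vid1", net)     # served from alice's copy, not the origin
```

Once one nearby peer has watched the video, every later request can be answered from its cache, which is the sense in which dedicated servers are "taken out of the equation".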