MIT Researchers Promise an Internet 100 Times as Fast
A new network design developed by MIT researchers avoids the need to convert optical signals into electrical ones and could boost capacity while reducing power consumption.
The heart of the Internet is a network of high-capacity optical fibers that spans continents. But while optical signals transmit information much more efficiently than electrical signals, they're harder to control. The routers that direct traffic on the Internet typically convert optical signals to electrical ones for processing, then convert them back for transmission, a process that consumes time and energy.
In recent years, however, a group of MIT researchers led by Vincent Chan, the Joan and Irwin Jacobs Professor of Electrical Engineering and Computer Science, has demonstrated a new way of organizing optical networks that, in most cases, would eliminate this inefficient conversion process. As a result, it could make the Internet 100 or even 1,000 times faster while actually reducing the amount of energy it consumes.
One of the reasons that optical data transmission is so efficient is that different wavelengths of light loaded with different information can travel over the same fiber. But problems arise when optical signals coming from different directions reach a router at the same time. Converting them to electrical signals allows the router to store them in memory until it can get to them. The wait may be a matter of milliseconds, but there's no cost-effective way to hold an optical signal still for even that short a time.
Chan's approach, called "flow switching," solves this problem in a different way. Between locations that exchange large volumes of data (say, Los Angeles and New York City), flow switching would establish a dedicated path across the network. For certain wavelengths of light, routers along that path would accept signals coming in from only one direction and send them off in only one direction. Since there's no possibility of signals arriving from multiple directions, there's never a need to store them in memory.
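To make the contrast concrete, here is a minimal illustrative sketch, not drawn from Chan's work, with all class and port names invented: a conventional router must hold contending arrivals in an electrical-domain buffer, while a node on a flow-switched wavelength accepts traffic from exactly one upstream neighbor and passes it straight to one downstream neighbor, so nothing ever has to wait in memory.

```python
from collections import deque

class ConventionalRouter:
    """Illustrative only: converts optical signals to electrical form so that
    packets arriving from several directions at once can wait in memory."""
    def __init__(self):
        self.buffer = deque()                 # electrical-domain queue

    def receive(self, packet, input_port):
        self.buffer.append(packet)            # any input port may deliver traffic

    def forward(self):
        return self.buffer.popleft() if self.buffer else None   # one packet per cycle

class FlowSwitchedNode:
    """Illustrative only: on a dedicated wavelength, each node accepts traffic
    from a single upstream neighbor and sends it to a single downstream neighbor,
    so there is no contention and therefore no buffer at all."""
    def __init__(self, upstream, downstream):
        self.upstream = upstream
        self.downstream = downstream

    def forward(self, packet, input_port):
        assert input_port == self.upstream    # only one legal input direction
        return (self.downstream, packet)      # passed straight through, no storage
```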
To some extent, something like this already happens in today's Internet. A large Web company like Facebook or Google, for instance, might maintain huge banks of Web servers at a few different locations in the United States. The servers might exchange so much data that the company will simply lease a particular wavelength of light from one of the telecommunications companies that maintains the country's fiber-optic networks. Across a designated pathway, no other Internet traffic can use that wavelength.
In this case, however, the allotment of bandwidth between the two endpoints is fixed. If for some reason the company's servers aren't exchanging much data, the bandwidth of the dedicated wavelength is being wasted. If the servers are exchanging a lot of data, they might exceed the capacity of the link.
In a flow-switching network, the allotment of bandwidth would change constantly. As traffic between New York and Los Angeles increased, new, dedicated wavelengths would be recruited to handle it; as the traffic tailed off, the wavelengths would be relinquished. Chan and his colleagues have developed network management protocols that can perform these reallocations in a matter of seconds.
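The following toy sketch is my own illustration of that bookkeeping, not the researchers' protocol; the per-wavelength capacity and all names are assumptions. It simply recruits wavelengths from a shared pool as measured demand grows and returns them as traffic tails off.

```python
WAVELENGTH_CAPACITY_GBPS = 40                 # hypothetical per-wavelength capacity

class FlowAllocator:
    """Illustrative only: grows or shrinks the set of dedicated wavelengths
    between two endpoints as measured traffic changes."""
    def __init__(self, free_wavelengths):
        self.free = list(free_wavelengths)    # pool shared with other flows
        self.assigned = []

    def rebalance(self, demand_gbps):
        # Wavelengths needed to carry the observed demand (ceiling division).
        needed = max(1, -(-int(demand_gbps) // WAVELENGTH_CAPACITY_GBPS))
        while len(self.assigned) < needed and self.free:
            self.assigned.append(self.free.pop())     # recruit a wavelength
        while len(self.assigned) > needed:
            self.free.append(self.assigned.pop())     # relinquish a wavelength
        return list(self.assigned)

# Example: traffic between New York and Los Angeles ramps up, then tails off.
alloc = FlowAllocator(free_wavelengths=["wl1", "wl2", "wl3", "wl4"])
print(alloc.rebalance(35))     # 1 wavelength assigned
print(alloc.rebalance(150))    # all 4 wavelengths assigned
print(alloc.rebalance(60))     # back down to 2
```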
In a series of papers published over a span of 20 years, the latest of which will be presented at the OptoElectronics and Communications Conference in Japan next month, they've also performed mathematical analyses of flow-switched networks' capacity and reported the results of extensive computer simulations. They've even tried out their ideas on a small experimental optical network that runs along the Eastern Seaboard.
Their conclusion is that flow switching can easily increase the data rates of optical networks 100-fold and possibly 1,000-fold, with further improvements of the network management scheme. Their recent work has focused on the power savings that flow switching offers: In most applications of information technology, power can be traded for speed and vice versa, but the researchers are trying to quantify that relationship. Among other things, they've shown that even with a 100-fold increase in data rates, flow switching could still reduce the Internet's power consumption.