Newswise — Researchers in North Carolina State University's Department of Computer Science have developed a new data transfer protocol for the Internet that makes today's high-speed Digital Subscriber Line (DSL) connections seem lethargic.
The protocol is named BIC-TCP, which stands for Binary Increase Congestion Transmission Control Protocol. In a recent comparative study run by the Stanford Linear Accelerator Center (SLAC), BIC consistently topped the rankings in experiments measuring its stability, scalability and fairness relative to other protocols. The study tested six other protocols developed by researchers at institutions around the world, including the California Institute of Technology and University College London.
Dr. Injong Rhee, associate professor of computer science, said BIC can achieve speeds roughly 6,000 times that of DSL and 150,000 times that of current modems. While this might translate into music downloads in the blink of an eye, the true value of such a super-powered protocol is a real eye-opener.
Rhee and NC State colleagues Dr. Khaled Harfoush, assistant professor of computer science, and Lisong Xu, a postdoctoral researcher, presented a paper on their findings in Hong Kong at Infocom 2004, the 23rd meeting of the Institute of Electrical and Electronics Engineers Communications Society, on Thursday, March 11.
Many national and international computing labs are now involved in large-scale scientific studies of nuclear and high-energy physics, astronomy, geology and meteorology. Typically, Rhee said, "Data are collected at a remote location and need to be shipped to labs where scientists can perform analyses and create high-performance visualizations of the data." Visualizations might include satellite images or climate models used in weather predictions. Receiving the data and sharing the results can lead to massive congestion of current networks, even on the newest wide-area high-speed networks such as ESNet (Energy Sciences Network), which was created by the U.S. Department of Energy specifically for these types of scientific collaborations.
The problem, Rhee said, is the inherent limitations of regular TCP. "TCP was originally designed in the 1980s when Internet speeds were much slower and bandwidths much smaller," he said. "Now we are trying to apply it to networks that have several orders of magnitude more available bandwidth." Essentially, we're using an eyedropper to fill a water main. BIC, on the other hand, would open the floodgate.
Along with Xu, Rhee has been developing BIC for the past year, although Rhee said he has been researching network congestion solutions for at least a decade. The key to BIC's speed is that it uses a binary search approach, a fairly common way to search databases, that allows for rapid detection of maximum network capacity with minimal loss of information. "What takes TCP two hours to determine, BIC can do in less than one second," Rhee said. The greatest challenge for the new protocol, he added, was to fill the pipe quickly without starving out other protocols. "It's a tough balance," he said.
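To make that speed-up concrete, here is a minimal Python sketch of binary-search capacity probing. It is not the authors' implementation; the oracle, function names and numbers are illustrative assumptions, and each loop iteration stands in for one round trip.

```python
# Minimal sketch of binary-search capacity probing, assuming a perfect
# oracle (causes_loss) that reports whether a given congestion window
# overruns the link. Names are illustrative, not from the BIC-TCP paper
# or any real TCP stack.

def probe_capacity(w_min, w_max, causes_loss, tolerance=1.0):
    """Binary-search for the largest loss-free window in [w_min, w_max].

    w_min: a window size known to be safe.
    w_max: a window size known to cause loss.
    """
    while w_max - w_min > tolerance:
        mid = (w_min + w_max) / 2.0
        if causes_loss(mid):
            w_max = mid   # midpoint overran the link: capacity lies below
        else:
            w_min = mid   # midpoint was safe: capacity lies at or above
    return w_min

# If the true capacity is 8,000 packets, the search pins it down in about
# log2(100,000) ~= 17 "round trips" rather than ~8,000 additive steps.
print(probe_capacity(1, 100_000, lambda w: w > 8000))
```

The logarithmic number of probes is what underlies the "two hours versus one second" comparison; the real protocol must also cope with noisy loss signals and competing traffic, which this sketch ignores.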
By allowing the rapid transfer of increasingly large packets of information over long distances, the new protocol could boost the efficacy of cutting-edge applications ranging from telemedicine and real-time environmental monitoring to business operations and multi-user gaming. At NC State, researchers could more readily visualize, monitor and control real-time simulations and experiments conducted at remote computing clusters. BIC might even help avert a national disaster: The recent blackout that affected large areas of the eastern United States and Canada underscored the need to spread data-rich backup systems across hundreds or thousands of miles.
With network speeds doubling roughly annually, Rhee said the performance demonstrated by the new protocol could become commonly available within the next few years, setting a new standard for full utilization of the Internet.
Note to editors: An abstract of the paper follows.
"Binary Increase Congestion Control for Fast, Long-Distance Networks" Authors: Lisong Xu, Khaled Harfoush and Injong Rhee, North Carolina State UniversityPresented: March 11, 2004, at Infocom 2004
Abstract: High-speed networks with large delays present a unique environment where TCP may have a problem utilizing the full bandwidth. Several congestion control proposals have been suggested to remedy this problem. The protocols consider mainly two properties: TCP friendliness and bandwidth scalability. That is, a protocol should not take away too much bandwidth from TCP while utilizing the full bandwidth of high-speed networks. This paper presents another important constraint, namely RTT (round trip time) unfairness, where competing flows with different RTTs may consume vastly unfair bandwidth shares. Existing schemes have a severe RTT unfairness problem because the window increase rate gets larger as the window grows, ironically the very reason that makes them more scalable. RTT unfairness for high-speed networks occurs distinctly with drop tail routers, where packet loss can be highly synchronized. After recognizing the RTT unfairness problem of existing protocols, this paper presents a new congestion control protocol that ensures linear RTT fairness under large windows while offering both scalability and TCP-friendliness. The protocol combines two schemes called additive increase and binary search increase. When the congestion window is large, additive increase with a large increment ensures linear RTT fairness as well as good scalability. Under small congestion windows, binary search increase is designed to provide TCP friendliness. The paper presents a performance study of the new protocol.
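As a rough illustration of the combination the abstract describes, the sketch below shows one possible per-round-trip window update: far below the last loss point the window grows by a fixed large increment (additive increase), and near it the window jumps toward the midpoint (binary search increase). The parameter S_MAX and the exact update rule here are simplified assumptions for illustration, not the paper's specification.

```python
# Hedged sketch of a BIC-style per-round-trip window update combining
# additive increase with binary search increase. S_MAX and the update
# rule are simplified assumptions, not taken from the paper.

S_MAX = 32.0  # illustrative cap on window growth per round trip

def next_window(cwnd, w_max):
    """Grow the congestion window toward w_max, the window size at the
    last packet loss. Called once per round trip."""
    step = (w_max - cwnd) / 2.0   # distance to the binary-search midpoint
    if step > S_MAX:
        return cwnd + S_MAX       # far from the target: additive increase
    return cwnd + max(step, 0.0)  # near the target: binary search increase

# The window climbs linearly by S_MAX until it is within 2 * S_MAX of
# w_max, then halves the remaining gap each round trip.
w = 100.0
for rtt in range(10):
    w = next_window(w, 1000.0)
    print(round(w, 1))
```

Capping each step at a large fixed increment is what the abstract credits with linear RTT fairness at large windows, while the midpoint-halving phase provides the TCP friendliness near the saturation point.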