CCR Papers from July 2007

  • Sara Landström and Lars-Åke Larzon

    Delayed acknowledgments were introduced to conserve network and host resources. Further reduction of the acknowledgment frequency can be motivated in the same way. However, reducing the dependency on frequent acknowledgments in TCP is difficult because acknowledgments support reliable delivery and loss recovery, clock out new segments, and serve as input when determining an appropriate sending rate. Our results show that in scenarios where there are no obvious advantages to reducing the acknowledgment frequency, performance can be maintained although fewer acknowledgments are sent. Hence, there is potential for reducing the acknowledgment frequency more than is done through delayed acknowledgments today. Advancements in TCP loss recovery are one of the key reasons that the dependence on frequent acknowledgments has decreased. We propose and evaluate an end-to-end solution in which four acknowledgments per send window are sent. The sender compensates for the reduced acknowledgment frequency using a form of Appropriate Byte Counting. The proposal also includes a modification of fast loss recovery to avoid frequent timeouts.
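
    The compensation the abstract mentions can be pictured with a small sketch. Below is a minimal, hypothetical illustration of Appropriate Byte Counting-style congestion-window growth in Python: the window grows by the number of bytes acknowledged rather than by the number of acknowledgments received, so four acknowledgments per send window can still open the window at roughly the usual rate. The constants and function names are assumptions for this sketch, not the paper's implementation.

        SMSS = 1460           # sender maximum segment size, in bytes
        ABC_LIMIT = 2 * SMSS  # per-ACK growth cap during slow start (RFC 3465 style)

        def on_ack(cwnd, ssthresh, bytes_acked):
            """Grow cwnd from the bytes covered by one ACK, not the ACK count.

            With only four ACKs per send window, each ACK covers many segments,
            so byte counting keeps window growth comparable to per-segment ACKing.
            """
            if cwnd < ssthresh:
                # Slow start: open by the bytes acked, capped to limit bursts.
                cwnd += min(bytes_acked, ABC_LIMIT)
            else:
                # Congestion avoidance: about one SMSS per window's worth of bytes.
                cwnd += SMSS * bytes_acked // cwnd
            return cwnd

        # Example: one ACK covering a quarter of a 16-segment send window.
        print(on_ack(cwnd=16 * SMSS, ssthresh=64 * SMSS, bytes_acked=4 * SMSS))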

    Dina Katabi
  • DongJin Lee and Nevil Brownlee

    Flow-based analysis has been considered a simple and effective approach in network analysis. 5-tuple (unidirectional) flows are used in many network traffic analyses; however, these analyses often require bidirectional packet matching to observe the interactions. Separating the flows into two categories, one-way (packets in one direction only) and two-way (packets in both directions), can yield further insight. We have examined traces of Auckland traffic for 2000, 2003 and 2006, and analyzed their one-way and two-way flows. We observed several behaviors and the changes in flow sizes and lifetimes over time. In our traces, we observe that one-way flows are mostly malicious or retransmissions, and some are long-lived. Two-way flows are mostly normal end-to-end transmissions, with their lifetimes/RTTs decreasing and their sizes increasing; many short-lived two-way flows mostly reflect errors in TCP. We also observe similarity between one-way and two-way flow sizes across their lifetimes.
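
    As a rough illustration of the split described above, the following sketch groups packets into 5-tuple flows and labels a flow two-way if the reverse 5-tuple also appears in the trace, and one-way otherwise. The packet-record layout is an assumption for this example, not the format of the authors' traces or tools.

        from collections import defaultdict

        def classify_flows(packets):
            """Split unidirectional 5-tuple flows into one-way and two-way sets.

            `packets` is an iterable of (src_ip, src_port, dst_ip, dst_port, proto)
            tuples; this record layout is assumed for the illustration.
            """
            flows = defaultdict(int)  # 5-tuple -> packet count
            for src_ip, src_port, dst_ip, dst_port, proto in packets:
                flows[(src_ip, src_port, dst_ip, dst_port, proto)] += 1

            one_way, two_way = {}, {}
            for key, count in flows.items():
                src_ip, src_port, dst_ip, dst_port, proto = key
                reverse = (dst_ip, dst_port, src_ip, src_port, proto)
                # A flow is two-way if packets were also seen on the reverse tuple.
                (two_way if reverse in flows else one_way)[key] = count
            return one_way, two_way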

    Dina Papagiannaki
  • Vinay Aggarwal, Anja Feldmann, and Christian Scheideler

    Peer-to-peer (P2P) systems, which are realized as overlays on top of the underlying Internet routing architecture, contribute a significant portion of today’s Internet traffic. While the P2P users are a good source of revenue for the Internet Service Providers (ISPs), the immense P2P traffic also poses a significant traffic engineering challenge to the ISPs. This is because P2P systems either implement their own routing in the overlay topology or may use a P2P routing underlay [1], both of which are largely independent of the Internet routing, and thus impede the ISP’s traffic engineering capabilities. On the other hand, P2P users are primarily interested in finding their desired content quickly, with good performance. But as the P2P system has no access to the underlying network, it either has to measure the path performance itself or build its overlay topology agnostic of the underlay. This situation is disadvantageous for both the ISPs and the P2P users. To overcome this, we propose and evaluate the feasibility of a solution where the ISP offers an “oracle” to the P2P users. When the P2P user supplies the oracle with a list of possible P2P neighbors, the oracle ranks them according to certain criteria, such as their proximity to the user or higher-bandwidth links. This can be used by the P2P user to choose appropriate neighbors, and therefore improve its performance. The ISP can use this mechanism to better manage the immense P2P traffic, e.g., to keep it inside its network, or to direct it along a desired path. The improved network utilization will also enable the ISP to provide better service to its customers.
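
    The oracle's ranking step can be sketched roughly as follows: given a list of candidate peers supplied by the P2P user, the ISP orders them by criteria it already knows, such as whether a peer sits inside its own network, how topologically close it is, and the bandwidth of its access link. The field names, criteria, and their ordering below are illustrative assumptions, not the interface proposed in the paper.

        def rank_candidates(candidates, own_as):
            """Return candidate peers ordered from most to least preferred.

            Each candidate is a dict with 'ip', 'as_number', 'as_hops', and
            'link_bandwidth' fields; these fields and the sort criteria are
            assumptions made for this sketch.
            """
            def preference(peer):
                inside_isp = (peer["as_number"] == own_as)
                # Prefer peers inside the ISP, then topologically closer ones,
                # then those behind higher-bandwidth access links.
                return (not inside_isp, peer["as_hops"], -peer["link_bandwidth"])

            return sorted(candidates, key=preference)

        # Example: the P2P client hands the oracle three possible neighbors.
        peers = [
            {"ip": "10.0.0.1", "as_number": 65001, "as_hops": 0, "link_bandwidth": 10},
            {"ip": "192.0.2.7", "as_number": 65077, "as_hops": 3, "link_bandwidth": 100},
            {"ip": "10.0.0.9", "as_number": 65001, "as_hops": 0, "link_bandwidth": 100},
        ]
        print([p["ip"] for p in rank_candidates(peers, own_as=65001)])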

    Michalis Faloutsos
  • Dmitri Krioukov, k c claffy, Kevin Fall, and Arthur Brady

    The Internet’s routing system is facing stresses due to its poor fundamental scaling properties. Compact routing is a research field that studies fundamental limits of routing scalability and designs algorithms that try to meet these limits. In particular, compact routing research shows that shortest-path routing, forming a core of traditional routing algorithms, cannot guarantee routing table (RT) sizes that on all network topologies grow slower than linearly as functions of the network size. However, there are plenty of compact routing schemes that relax the shortest-path requirement and allow for improved, sublinear RT size scaling that is mathematically provable for all static network topologies. In particular, there exist compact routing schemes designed for grids, trees, and Internet-like topologies that offer RT sizes that scale logarithmically with the network size. In this paper, we demonstrate that in view of recent results in compact routing research, such logarithmic scaling on Internet-like topologies is fundamentally impossible in the presence of topology dynamics or topology-independent (flat) addressing. We use analytic arguments to show that the number of routing control messages per topology change cannot scale better than linearly on Internet-like topologies. We also employ simulations to confirm that logarithmic RT size scaling gets broken by topology-independent addressing, a cornerstone of popular locator-identifier split proposals aiming at improving routing scaling in the presence of network topology dynamics or host mobility. These pessimistic findings lead us to the conclusion that a fundamental re-examination of assumptions behind routing models and abstractions is needed in order to find a routing architecture that would be able to scale “indefinitely.”
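
    To make the scaling gap concrete, the toy sketch below merely tabulates how many routing-table entries linear and logarithmic growth would imply at a few network sizes; the constants are arbitrary and only the asymptotic shapes matter.

        import math

        # Illustrative only: compare linear (shortest-path style) and logarithmic
        # (compact-routing style) routing-table growth at a few network sizes.
        for n in (1_000, 10_000, 100_000, 1_000_000):
            linear_rt = n                      # one entry per destination
            log_rt = math.ceil(math.log2(n))   # entries growing as log of network size
            print(f"n={n:>9}  linear~{linear_rt:>9}  logarithmic~{log_rt:>3}")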

    Dina Katabi
  • Jiayue He, Jennifer Rexford, and Mung Chiang

    As networks grow in size and complexity, network management has become an increasingly challenging task. Many protocols have tunable parameters, and optimization is the process of setting these parameters to optimize an objective. In recent years, optimization techniques have been widely applied to network management problems, albeit with mixed success. Realizing that optimization problems in network management are induced by assumptions adopted in protocol design, we argue that instead of optimizing existing protocols, protocols should be designed with optimization in mind from the beginning. Using examples from our past research on traffic management, we present principles that guide how changes to existing protocols and architectures can lead to optimizable protocols. We also discuss the trade-offs between making network optimization easier and the overhead these changes impose.

  • Anja Feldmann

    Many believe that it is impossible to resolve the challenges facing today’s Internet without rethinking the fundamental assumptions and design decisions underlying its current architecture. Therefore, a major research effort has been initiated on the topic of Clean Slate Design of the Internet’s architecture. In this paper we first give an overview of the challenges that a future Internet has to address and then discuss approaches for finding possible solutions, including Clean Slate Design. Next, we discuss how such solutions can be evaluated and how they can be retrofitted into the current Internet. Then, we briefly outline the upcoming research activities both in Europe and the U.S. Finally, we end with a perspective on how network and service operators may benefit from such an initiative.

  • Luca Salgarelli, Francesco Gringoli, and Thomas Karagiannis

    Many reputable research groups have published several interesting papers on traffic classification, proposing mechanisms of different nature. However, it is our opinion that this community should now find an objective and scientific way of comparing results coming out of different groups. We see at least two hurdles before this can happen. A major issue is that we need to find ways to share full-payload data sets, or, if that does not prove to be feasible, at least anonymized traces with complete application layer meta-data. A relatively minor issue refers to finding an agreement on which metric should be used to evaluate the performance of the classifiers. In this note we argue that these are two important issues that the community should address, and sketch a few solutions to foster the discussion on these topics.
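
    As one concrete candidate for the common metric discussed above, the sketch below computes per-application precision, recall, and F1 score from a classifier's output against ground truth. It is offered only to anchor the discussion; it is not a metric the authors prescribe, and the label format is assumed.

        from collections import Counter

        def per_class_scores(true_labels, predicted_labels):
            """Compute precision, recall and F1 for each application class.

            `true_labels` and `predicted_labels` are equal-length sequences of
            application names (e.g. derived from payload-verified ground truth).
            """
            tp, fp, fn = Counter(), Counter(), Counter()
            for truth, guess in zip(true_labels, predicted_labels):
                if truth == guess:
                    tp[truth] += 1
                else:
                    fp[guess] += 1
                    fn[truth] += 1

            scores = {}
            for app in set(true_labels) | set(predicted_labels):
                precision = tp[app] / (tp[app] + fp[app]) if tp[app] + fp[app] else 0.0
                recall = tp[app] / (tp[app] + fn[app]) if tp[app] + fn[app] else 0.0
                f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
                scores[app] = (precision, recall, f1)
            return scores

        # Example: three flows, one misclassified p2p flow labeled as http.
        print(per_class_scores(["http", "p2p", "p2p"], ["http", "p2p", "http"]))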

  • Vivek Mhatre

    The ns-2 simulator has limited support for simulating 802.11-based wireless mesh networks. We have added the following new features at the MAC and PHY layer of ns-2: (i) cumulative interference in SINR (Signal to Interference and Noise Ratio) computation, (ii) an accurate and combined shadow-fading module, (iii) multi-SINR and multi-rate link support, (iv) auto rate fallback (ARF) for rate adaptation, and (v) a framework for link probing and link quality estimation as required by most mesh routing protocols. We have made these modules publicly available. In this paper, we present an overview of these new features.
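
    Feature (i), cumulative interference in the SINR computation, can be illustrated in a few lines: instead of comparing the received power against the strongest interferer alone, all concurrent transmissions are summed into the denominator. The fragment below is a standalone Python sketch with placeholder power values, not the authors' ns-2 C++ modules.

        import math

        def sinr_db(signal_mw, interferers_mw, noise_mw=1e-9):
            """SINR in dB with cumulative interference.

            Powers are in milliwatts; every concurrent transmission is summed
            into the denominator rather than using only the strongest interferer.
            """
            interference = sum(interferers_mw)
            return 10 * math.log10(signal_mw / (noise_mw + interference))

        # Example: one intended sender heard at 1 uW plus two concurrent interferers.
        print(round(sinr_db(signal_mw=1e-6, interferers_mw=[3e-8, 2e-8]), 1))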

  • Konstantina Papagiannaki

    This editorial article disseminates the experience gained through the author-feedback experiment performed at the 2007 Passive and Active Measurement (PAM) conference.

  • Michalis Faloutsos, Anirban Banerjee, and Reza Rejaie

    We did it! A few CCR issues back, this column argued in favor of an all open review. At the time, most people thought that it was a joke. Quite frankly, we meant it as a joke. It is a crazy world we live in. No, seriously, it was not so much of a joke, as a wild attempt to think outside the box. Or else, desperate times call for desperate measures. Something like that. Apparently, we were so desperate that we actually tried it. And it worked. Beautifully. But then again, we may be biased.

    "To say it was easy would be a lie." R. Rejaie and M. Faloutsos, early 21st century

  • S. Keshav

    Researchers spend a great deal of time reading research papers. However, this skill is rarely taught, leading to much wasted effort. This article outlines a practical and efficient three-pass method for reading research papers. I also describe how to use this method to do a literature survey.

  • Darleen Fisher

    In spite of the current Internet’s overwhelming success, there are growing concerns about its future and its robustness, manageability, security, openness to innovation, and scalability. As the Internet has become the largest human-made system ever deployed, will we retain the ability to understand or manage it? Will we find ways to secure the current Internet or will we lose the security arms race to hackers and even state-supported attackers as they become more pervasive and sophisticated? Will the Internet continue to incorporate the thousands of new wireless networks currently added daily or encompass millions of embedded sensor systems that are expected to connect to the Internet in the future? There are also increasing societal concerns such as ensuring that an Internet maintains support for an open society, a balance of accountability and privacy, and continued economic viability. Will Internet companies continue to create new services and capabilities for the current Internet, or will economic factors result in “network ossification” as some researchers fear?

    These are questions that concern networking and social science researchers around the world. In the United States, the National Science Foundation (NSF) has challenged the US research community to take a fresh look at the Internet by participating in the Future Internet Design (FIND) part of the Networking Technology and Systems (NeTS) program in the Division of Computer and Network Systems.

  • Anastasius Gavras, Arto Karila, Serge Fdida, Martin May, and Martin Potts

    The research community worldwide has increasingly drawn its attention to the weaknesses of the current Internet. Many proposals address the perceived problems, ranging from new, enhanced protocols that fix specific problems to the most radical proposal of redesigning and deploying a fully new Internet. Most of the problems in the current Internet are rooted in the tremendous pace at which its use has grown. As a consequence, there was little time to address the deficiencies of the Internet from an architectural point of view. Within FP7, the European Commission has facilitated the creation of European expert groups around the theme FIRE "Future Internet Research and Experimentation". FIRE has two related dimensions: on one hand, promoting experimentally driven, long-term, visionary research on new paradigms, networking concepts and architectures for the future Internet; on the other hand, building a large-scale experimentation facility supporting both medium- and long-term research on networks and services by gradually federating existing and new testbeds for emerging or future Internet technologies. By addressing future challenges for the Internet such as mobility, scalability, security and privacy, this new experimentally driven approach challenges the mainstream perceptions of future Internet development. This new initiative is intended to complement the more industrially driven approaches addressed under the FP7 Objective "The Network of the Future" within the FP7-ICT Workprogramme 2007-08. FIRE is focused on exploring new and radically better technological solutions for the future Internet, while preserving the "good" aspects of the current Internet in terms of openness, freedom of expression and ubiquitous access. The FIRE activities are being launched in the 2nd ICT call, which closes in October 2007, under the FP7-ICT Objective 1.6 "New Paradigms and Experimental Facilities" (budget €40m). Projects are envisaged to start in early 2008.
