Computer Communication Review: Papers

  • Yi Wang, Eric Keller, Brian Biskeborn, Jacobus van der Merwe, and Jennifer Rexford

    The complexity of network management is widely recognized as one of the biggest challenges facing the Internet today. Point solutions for individual problems further increase system complexity while not addressing the underlying causes. In this paper, we argue that many network-management problems stem from the same root cause—the need to maintain consistency between the physical and logical configuration of the routers. Hence, we propose VROOM (Virtual ROuters On the Move), a new network-management primitive that avoids unnecessary changes to the logical topology by allowing (virtual) routers to freely move from one physical node to another. In addition to simplifying existing network-management tasks like planned maintenance and service deployment, VROOM can also help tackle emerging challenges such as reducing energy consumption. We present the design, implementation, and evaluation of novel migration techniques for virtual routers with either hardware or software data planes. Our evaluation shows that VROOM is transparent to routing protocols and results in no performance impact on the data traffic when a hardware-based data plane is used.

  • Dave Levin, Katrina LaCurts, Neil Spring, and Bobby Bhattacharjee

    Incentives play a crucial role in BitTorrent, motivating users to upload to others to achieve fast download times for all peers. Though BitTorrent has long been believed to be robust to strategic manipulation, recent work has shown empirically that it does not provide its users an incentive to follow the protocol. We propose an auction-based model to study and improve upon BitTorrent’s incentives. The insight behind our model is that BitTorrent uses, not tit-for-tat as widely believed, but an auction to decide which peers to serve. Our model not only captures known performance-improving strategies, but also shapes our thinking toward new, effective strategies. For example, our analysis demonstrates, counter-intuitively, that BitTorrent peers have an incentive to intelligently under-report what pieces of the file they have to their neighbors. We implement and evaluate a modification to BitTorrent in which peers reward one another with proportional shares of bandwidth. Within our game-theoretic model, we prove that a proportional-share client is strategy-proof. With experiments on PlanetLab, a local cluster, and live downloads, we show that a proportional-share unchoker yields faster downloads against BitTorrent and BitTyrant clients, and that under-reporting pieces yields prolonged neighbor interest.
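
    As a rough illustration of the proportional-share idea (a minimal sketch in Python of the flavor of the mechanism, not the authors' client; all names are invented for the example), a peer could split its upload capacity among neighbors in proportion to what each contributed during the previous round:

      def proportional_share(upload_capacity, received_bytes):
          """Divide upload_capacity among peers in proportion to the
          bytes each peer sent us during the previous round."""
          total = sum(received_bytes.values())
          if total == 0:
              # No history yet: split evenly (optimistic bootstrapping).
              n = len(received_bytes) or 1
              return {peer: upload_capacity / n for peer in received_bytes}
          return {peer: upload_capacity * b / total
                  for peer, b in received_bytes.items()}

      # A peer with 100 KB/s of upload capacity and three neighbors:
      print(proportional_share(100.0, {"a": 40, "b": 10, "c": 0}))
      # -> {'a': 80.0, 'b': 20.0, 'c': 0.0}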

  • Maxim Podlesny and Sergey Gorinsky

    With the Internet offering a single best-effort service, there have been numerous proposals of diversified network services that align better with the divergent needs of different distributed applications. The failure of these innovative architectures to gain wide deployment is primarily due to economic and legacy issues, rather than technical shortcomings. We propose a new paradigm for network service differentiation in which design principles account explicitly for the multiplicity of Internet service providers and users, as well as their economic interests, in environments with partly deployed new services. Our key idea is to base the service differentiation on performance itself, rather than on price. The proposed RD (Rate-Delay) services enable a user to choose between a higher transmission rate and a lower queuing delay at a congested network link. An RD router supports the two services by maintaining two queues per output link and achieves the intended rate-delay differentiation through simple link scheduling and dynamic buffer sizing. After analytically deriving specific rules for RD router operation, we conduct extensive simulations that confirm the effectiveness of the RD services, which are geared for incremental deployment in the Internet.
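
    The abstract does not spell out the router rules, so the following toy sketch (Python; all constants are invented) only illustrates the two-queue structure: a short buffer keeps the delay (D) queue small, a longer buffer lets the rate (R) queue absorb bursts, and a weighted scheduler gives R the larger share of the link:

      from collections import deque

      class RDPort:
          """Toy model of one RD output link with two FIFO classes."""
          def __init__(self, d_buf=10, r_buf=100, r_weight=0.7):
              self.q = {"D": deque(), "R": deque()}
              self.cap = {"D": d_buf, "R": r_buf}  # small D buffer -> low delay
              self.r_weight = r_weight             # R's share of link capacity
              self.credit = 0.0

          def enqueue(self, pkt, cls):
              if len(self.q[cls]) >= self.cap[cls]:
                  return False                     # tail-drop within the class
              self.q[cls].append(pkt)
              return True

          def dequeue(self):
              # Serve R roughly r_weight of the time when both are backlogged.
              self.credit += self.r_weight
              cls = "R" if self.credit >= 1.0 and self.q["R"] else "D"
              if cls == "R":
                  self.credit -= 1.0
              if not self.q[cls]:                  # fall back to the other class
                  cls = "R" if cls == "D" else "D"
              return self.q[cls].popleft() if self.q[cls] else None

    The small D buffer is the point of the design: delay differentiation comes from bounding queue length, while the R class gets throughput from the scheduler weight and a deep buffer.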

  • Sharon Goldberg, Shai Halevi, Aaron D. Jaggard, Vijay Ramachandran, and Rebecca N. Wright

    We study situations in which autonomous systems (ASes) may have incentives to send BGP announcements differing from the AS-level paths that packets traverse in the data plane. Prior work on this issue assumed that ASes seek only to obtain the best possible outgoing path for their traffic. In reality, other factors can influence a rational AS’s behavior. Here we consider a more natural model, in which an AS is also interested in attracting incoming traffic (e.g., because other ASes pay it to carry their traffic). We ask what combinations of BGP enhancements and restrictions on routing policies can ensure that ASes have no incentive to lie about their data-plane paths. We find that protocols like S-BGP alone are insufficient, but that S-BGP does suffice if coupled with additional (quite unrealistic) restrictions on routing policies. Our game-theoretic analysis illustrates the high cost of ensuring that the ASes honestly announce data-plane paths in their BGP path announcements.

  • Ionut Trestian, Supranamaya Ranjan, Aleksandar Kuzmanovic, and Antonio Nucci

    Understanding Internet access trends at a global scale, i.e., what people do on the Internet, is a challenging problem that is typically addressed by analyzing network traces. However, obtaining such traces presents its own set of challenges, owing to privacy concerns or other operational difficulties. The key hypothesis of our work here is that most of the information needed to profile Internet endpoints is already available around us: on the web.

    In this paper, we introduce a novel approach for profiling and classifying endpoints. We implement and deploy a Google-based profiling tool, which accurately characterizes endpoint behavior by collecting and strategically combining information freely available on the web. Our ‘unconstrained endpoint profiling’ approach shows remarkable advances in the following scenarios: (i) even when no packet traces are available, it can accurately predict application and protocol usage trends at arbitrary networks; (ii) when network traces are available, it dramatically outperforms state-of-the-art classification tools; (iii) when sampled flow-level traces are available, it retains high classification capabilities when other schemes literally fall apart. Using this approach, we perform unconstrained endpoint profiling at a global scale, for clients in four world regions (Asia, South America, North America, and Europe). We provide a first-of-its-kind endpoint analysis that reveals fascinating similarities and differences among these regions.

  • Brian Eriksson, Paul Barford, and Robert Nowak

    Understanding the Internet's structure through empirical measurements is important in the development of new topology generators, new protocols, traffic engineering, and troubleshooting, among other things. While prior studies of Internet topology have been based on active (traceroute-like) measurements, passive measurements of packet traffic offer the possibility of a greatly expanded perspective of Internet structure with much lower impact and management overhead. In this paper we describe a methodology for inferring network structure from passive measurements of IP packet traffic. We describe algorithms that enable 1) traffic sources that share network paths to be clustered accurately without relying on IP address or autonomous system information, 2) topological structure to be inferred accurately with only a small number of active measurements, 3) missing information to be recovered, which is a serious challenge in the use of passive packet measurements. We demonstrate our techniques using a series of simulated topologies and empirical data sets. Our experiments show that the clusters established by our method closely correspond to sources that actually share paths. We also show the tradeoffs between selectively applied active probes and the accuracy of the inferred topology between sources. Finally, we characterize the degree to which missing information can be recovered from passive measurements, which further enhances the accuracy of the inferred topologies.

  • Rob Sherwood, Adam Bender, and Neil Spring

    Internet topology discovery consists of inferring the inter-router connectivity (“links”) and the mapping from IP addresses to routers (“alias resolution”). Current topology discovery techniques use TTL-limited “traceroute” probes to discover links and use direct router probing to resolve aliases. The often-ignored record route (RR) IP option provides a source of disparate topology data that could augment existing techniques, but it is difficult to properly align with traceroute-based topologies because router RR implementations are under-standardized. Correctly aligned RR and traceroute topologies have fewer false links, include anonymous and hidden routers, and discover aliases for routers that do not respond to direct probing. More accurate and feature-rich topologies benefit overlay construction and network diagnostics, modeling, and measurement.

    We present DisCarte, a system for aligning and cross-validating RR and traceroute topology data using observed engineering practices. DisCarte uses disjunctive logic programming (DLP), a logical inference and constraint solving technique, to intelligently merge RR and traceroute data. We demonstrate that the resultant topology is more accurate and complete than previous techniques by validating its internal consistency and by comparing to publicly available topologies. We classify irregularities in router implementations and introduce a divide-and-conquer technique used to scale DLP to Internet-sized systems.

  • Marcel Dischinger, Andreas Haeberlen, Ivan Beschastnikh, Krishna P. Gummadi, and Stefan Saroiu

    Planetary-scale network testbeds like PlanetLab and RON have become indispensable for evaluating prototypes of distributed systems under realistic Internet conditions. However, current testbeds lack the heterogeneity that characterizes the commercial Internet. For example, most testbed nodes are connected to well-provisioned research networks, whereas most Internet nodes are in edge networks.

    In this paper, we present the design, implementation, and evaluation of SatelliteLab, a testbed that includes nodes from a diverse set of Internet edge networks. SatelliteLab has a two-tier architecture, in which well-provisioned nodes called planets form the core, and lightweight nodes called satellites connect to the planets from the periphery. The application code of an experiment runs on the planets, whereas the satellites only forward network traffic. Thus, the traffic is subjected to the network conditions of the satellites, which greatly improves the testbed’s network heterogeneity. The separation of code execution and traffic forwarding enables satellites to remain lightweight, which lowers the barrier to entry for Internet edge nodes.

    Our prototype of SatelliteLab uses PlanetLab nodes as planets and a set of 32 volunteered satellites with diverse network characteristics. These satellites consist of desktops, laptops, and handhelds connected to the Internet via cable, DSL, ISDN, Wi-Fi, Bluetooth, and cellular links. We evaluate SatelliteLab’s design, and we demonstrate the benefits of evaluating applications on SatelliteLab.

  • Zheng Zhang, Ying Zhang, Y. Charlie Hu, Z. Morley Mao, and Randy Bush

    IP prefix hijacking remains a major threat to the security of the Internet routing system due to a lack of authoritative prefix ownership information. Despite many efforts in designing IP prefix hijack detection schemes, no existing design can satisfy all the critical requirements of a truly effective system: real-time, accurate, light-weight, easily and incrementally deployable, as well as robust in victim notification. In this paper, we present a novel approach that fulfills all these goals by monitoring network reachability from key external transit networks to one's own network through lightweight prefix-owner-based active probing. Using the prefix-owner's view of reachability, our detection system, ISPY, can differentiate between IP prefix hijacking and network failures based on the observation that hijacking is likely to result in topologically more diverse polluted networks and unreachability. Through detailed simulations of Internet routing, 25-day deployment in 88 ASes (108 prefixes), and experiments with hijacking events of our own prefix from multiple locations, we demonstrate that ISPY is accurate with false negative ratio below 0.45% and false positive ratio below 0.17%. Furthermore, ISPY is truly real-time; it can detect hijacking events within a few minutes.

  • David G. Andersen, Hari Balakrishnan, Nick Feamster, Teemu Koponen, Daekyeong Moon, and Scott Shenker

    This paper presents AIP (Accountable Internet Protocol), a network architecture that provides accountability as a first-order property. AIP uses a hierarchy of self-certifying addresses, in which each component is derived from the public key of the corresponding entity. We discuss how AIP enables simple solutions to source spoofing, denial-of-service, route hijacking, and route forgery. We also discuss how AIP’s design meets the challenges of scaling, key management, and traffic engineering.
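
    To make "self-certifying" concrete, here is a minimal sketch (Python; the hash choice and key encoding are assumptions for illustration, not the paper's exact wire format) of deriving and checking an address component from a public key:

      import hashlib

      def self_certifying_id(public_key: bytes) -> str:
          # The address component *is* a hash of the entity's public key.
          return hashlib.sha1(public_key).hexdigest()

      def verify(component: str, public_key: bytes) -> bool:
          # Anyone can check a claimed key against the address itself,
          # with no external PKI.
          return self_certifying_id(public_key) == component

      ad = self_certifying_id(b"<accountability-domain public key>")
      eid = self_certifying_id(b"<host public key>")
      address = f"{ad}:{eid}"   # hierarchical AD:EID address
      assert verify(eid, b"<host public key>")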

  • Haiyong Xie, Y. Richard Yang, Arvind Krishnamurthy, Yanbin Grace Liu, and Abraham Silberschatz

    As peer-to-peer (P2P) emerges as a major paradigm for scalable network application design, it also exposes significant new challenges in achieving efficient and fair utilization of Internet network resources. Being largely network-oblivious, many P2P applications may lead to inefficient network resource usage and/or low application performance. In this paper, we propose a simple architecture called P4P to allow for more effective cooperative traffic control between applications and network providers. We conducted extensive simulations and real-life experiments on the Internet to demonstrate the feasibility and effectiveness of P4P. Our experiments demonstrated that P4P either improves or maintains the same level of application performance of native P2P applications, while, at the same time, it substantially reduces network provider cost compared with either native or latency-based localized P2P applications.

  • David R. Choffnes and Fabián E. Bustamante

    Peer-to-peer (P2P) systems, which provide a variety of popular services, such as file sharing, video streaming and voice-over-IP, contribute a significant portion of today’s Internet traffic. By building overlay networks that are oblivious to the underlying Internet topology and routing, these systems have become one of the greatest traffic-engineering challenges for Internet Service Providers (ISPs) and the source of costly data traffic flows. In an attempt to reduce these operational costs, ISPs have tried to shape, block or otherwise limit P2P traffic, much to the chagrin of their subscribers, who consistently find ways to eschew these controls or simply switch providers.

    In this paper, we present the design, deployment and evaluation of an approach to reducing this costly cross-ISP traffic without sacrificing system performance. Our approach recycles network views gathered at low cost from content distribution networks to drive biased neighbor selection without any path monitoring or probing. Using results collected from a deployment in BitTorrent with over 120,000 users in nearly 3,000 networks, we show that our lightweight approach significantly reduces cross-ISP traffic and, over 33% of the time, it selects peers along paths that are within a single autonomous system (AS). Further, we find that our system locates peers along paths that have two orders of magnitude lower latency and 30% lower loss rates than those picked at random, and that these high-quality paths can lead to significant improvements in transfer rates. In challenged settings where peers are overloaded in terms of available bandwidth, our approach provides 31% average download-rate improvement; in environments with large available bandwidth, it increases download rates by 207% on average (and improves median rates by 883%).
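
    As a rough sketch of recycling CDN network views (an illustration of the principle, not the authors' deployed system; the similarity measure and all names are assumptions), each peer can record how often a CDN hostname redirects it to each replica cluster, and then prefer candidate peers whose redirection ratios look most like its own:

      import math

      def similarity(ratios_a, ratios_b):
          """Cosine similarity of two CDN-redirection ratio maps."""
          keys = set(ratios_a) | set(ratios_b)
          dot = sum(ratios_a.get(k, 0) * ratios_b.get(k, 0) for k in keys)
          na = math.sqrt(sum(v * v for v in ratios_a.values()))
          nb = math.sqrt(sum(v * v for v in ratios_b.values()))
          return dot / (na * nb) if na and nb else 0.0

      me   = {"replica1": 0.7, "replica2": 0.3}
      near = {"replica1": 0.6, "replica2": 0.4}   # redirected like me: likely close
      far  = {"replica3": 1.0}                    # redirected elsewhere
      print(similarity(me, near), similarity(me, far))   # ~0.98 vs 0.0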

  • Yan Huang, Tom Z.J. Fu, Dah-Ming Chiu, John C.S. Lui, and Cheng Huang

    P2P file downloading and streaming have already become very popular Internet applications. These systems dramatically reduce server load and provide a platform for scalable content distribution, as long as there is interest in the content. P2P-based video-on-demand (P2P-VoD) is a new challenge for P2P technology. Unlike streaming of live content, P2P-VoD has less synchrony among the users sharing video content, so it is much more difficult to alleviate server load while maintaining streaming performance. To compensate, every peer contributes a small amount of storage, and new mechanisms for coordinating content replication, content discovery, and peer scheduling are carefully designed. In this paper, we describe and discuss the challenges and the architectural design issues of a large-scale P2P-VoD system, based on the experiences of a real system deployed by PPLive. The system is designed and instrumented with monitoring capability to measure both system- and component-specific performance metrics (for design improvements) as well as user satisfaction. After analyzing a large amount of collected data, we present a number of results on user behavior and on various system performance metrics, including user satisfaction, and discuss what we observe based on the system design. The study of a real-life system provides valuable insights for the future development of P2P-VoD technology.

  • Ashwin Bharambe, John R. Douceur, Jacob R. Lorch, Thomas Moscibroda, Jeffrey Pang, Srinivasan Seshan, and Xinyu Zhuang

    Without well-provisioned dedicated servers, modern fast-paced action games limit the number of players who can interact simultaneously to 16-32. This is because interacting players must frequently exchange state updates, and high player counts would exceed the bandwidth available to participating machines. In this paper, we describe Donnybrook, a system that enables epic-scale battles without dedicated server resources, even in a fast-paced game with tight latency bounds. It achieves this scalability through two novel components. First, it reduces bandwidth demand by estimating what players are paying attention to, thereby enabling it to reduce the frequency of sending less important state updates. Second, it overcomes resource and interest heterogeneity by disseminating updates via a multicast system designed for the special requirements of games: that they have multiple sources, are latency-sensitive, and have frequent group membership changes. We present user study results using a prototype implementation based on Quake III that show our approach provides a desirable user experience. We also present simulation results that demonstrate Donnybrook’s efficacy in enabling battles of up to 900 players.

  • Sachin Katti, Dina Katabi, Hari Balakrishnan, and Muriel Medard

    This paper describes MIXIT, a system that improves the throughput of wireless mesh networks. MIXIT exploits a basic property of mesh networks: even when no node receives a packet correctly, any given bit is likely to be received by some node correctly. Instead of insisting on forwarding only correct packets, MIXIT routers use physical layer hints to make their best guess about which bits in a corrupted packet are likely correct and forward them to the destination. Even though this approach inevitably lets erroneous bits through, we show that it achieves high throughput without compromising end-to-end reliability.

    The core component of MIXIT is a novel network code that operates on small groups of bits, called symbols. It allows the nodes to opportunistically route correctly-received bits to their destination with low overhead. MIXIT’s network code also incorporates an end-to-end error correction component that the destination uses to correct any errors that might seep through. We have implemented MIXIT on a software radio platform running the Zigbee radio protocol. Our experiments on a 25-node indoor testbed show that MIXIT has a throughput gain of 2.8× over MORE, a state-of-the-art opportunistic routing scheme, and about 3.9× over traditional routing using the ETX metric.
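
    A minimal sketch of the symbol-filtering step (Python; the confidence threshold and data layout are invented for illustration, and the network coding and end-to-end error correction are omitted): a forwarder keeps the symbols it guesses are correct rather than dropping the whole corrupted packet:

      def clean_symbols(symbols, threshold=0.9):
          """symbols: list of (bits, phy_confidence) pairs for one packet.
          Returns (position, bits) for symbols judged likely correct."""
          return [(i, bits) for i, (bits, conf) in enumerate(symbols)
                  if conf >= threshold]

      # One corrupted packet: forward two clean symbols, drop the third.
      rx = [(0b1011, 0.99), (0b0001, 0.42), (0b1110, 0.95)]
      print(clean_symbols(rx))   # -> [(0, 11), (2, 14)]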

  • Yi Li, Lili Qiu, Yin Zhang, Ratul Mahajan, and Eric Rozner

    We present a novel approach to optimize the performance of IEEE 802.11-based multi-hop wireless networks. A unique feature of our approach is that it enables an accurate prediction of the resulting throughput of individual flows. At its heart lies a simple yet realistic model of the network that captures interference, traffic, and MAC-induced dependencies. Unless properly accounted for, these dependencies lead to unpredictable behaviors. For instance, we show that even a simple network of two links with one flow is vulnerable to severe performance degradation. We design algorithms that build on this model to optimize the network for fairness and throughput. Given traffic demands as input, these algorithms compute rates at which individual flows must send to meet the objective. Evaluation using a multi-hop wireless testbed as well as simulations show that our approach is very effective. When optimizing for fairness, our methods result in close to perfect fairness. When optimizing for throughput, they lead to 100-200% improvement for UDP traffic and 10-50% for TCP traffic.

  • Aruna Balasubramanian, Ratul Mahajan, Arun Venkataramani, Brian Neil Levine, and John Zahorjan

    We ask whether WiFi basestations can provide usable connectivity from moving vehicles for common applications such as Web browsing and VoIP. Driven by this question, we conduct a study of connection quality available to vehicular WiFi clients, based on measurements from testbeds in two different cities. We find that current WiFi handoff methods, in which clients communicate with one basestation at a time, lead to frequent disruptions in connectivity. We also find that clients can overcome many disruptions by communicating with multiple basestations simultaneously. These findings lead us to develop ViFi, a protocol that opportunistically exploits basestation diversity to minimize disruptions and support interactive applications for mobile clients. ViFi uses a decentralized and lightweight probabilistic algorithm for coordination between participating basestations. Our evaluation using a two-month-long deployment and trace-driven simulations shows that its link-layer performance comes close to an ideal diversity-based protocol. Using two applications, VoIP and short TCP transfers, we show that the link-layer performance improvement translates to better application performance. In our deployment, ViFi doubles the number of successful short TCP transfers and doubles the length of disruption-free VoIP sessions compared to an existing WiFi-style handoff protocol.
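
    As a rough sketch of the probabilistic coordination (a simplification for illustration; the paper's estimator is more detailed, and all names here are invented), an auxiliary basestation that overhears an unacknowledged packet relays it with a probability chosen so that the expected number of relayed copies across auxiliaries stays near one:

      def relay_probability(my_delivery_prob, other_delivery_probs):
          """Probability with which this auxiliary relays the packet."""
          # Expected help the other auxiliaries are already likely to give.
          expected_others = sum(other_delivery_probs)
          budget = max(0.0, 1.0 - expected_others)   # leftover relaying budget
          return min(1.0, budget * my_delivery_prob)

      # With peers whose delivery probabilities sum to 0.7, an auxiliary
      # that reaches the vehicle 80% of the time relays with p ~ 0.24.
      print(relay_probability(0.8, [0.3, 0.4]))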

  • Ying-Ju Chi, Ricardo Oliveira, and Lixia Zhang

    In this paper we present Cyclops, a system that collects and displays information about AS-level connectivity, extracted from looking glasses, route servers, and the BGP tables and updates of hundreds of routers across the Internet. From an operational standpoint, Cyclops provides ISPs a view of how their connectivity is perceived from the outside, enabling a comparison between their observed and intended connectivity. ISPs can use the tool to detect and diagnose BGP misconfigurations, route leaks, or hijacks based on false AS paths. From a research standpoint, Cyclops provides a quick snapshot of the AS-level connectivity of each network, along with mechanisms to infer the root cause of BGP (de)peerings, an important input to the study of AS-level topology models and inter-domain routing protocols.

    Public review by Dmitri Krioukov
  • Amit Mondal and Aleksandar Kuzmanovic

    The well-accepted wisdom is that TCP’s exponential backoff mechanism, introduced by Jacobson 20 years ago, is essential for preserving the stability of the Internet. In this paper, we show that removing exponential backoff from TCP altogether can be done without inducing any stability side effects. We introduce the implicit packet conservation principle and show that as long as the endpoints uphold this principle, they can only improve their end-to-end performance relative to the exponential backoff case.

    By conducting large-scale simulations, modeling, and network experiments in Emulab and the Internet using a kernel-level FreeBSD TCP implementation, realistic traffic distributions, and complex network topologies, we demonstrate that TCP’s binary exponential backoff mechanism can be safely removed. Moreover, we show that the unsuitability of TCP’s exponential backoff is fundamental, i.e., independent of the currently dominant Internet traffic properties or bottleneck capacities. Surprisingly, our results indicate that a path to incrementally deploying the change does exist.
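
    To see what is at stake, compare the retransmission schedules under the two policies (a toy Python sketch with illustrative numbers; the paper's packet conservation mechanism is more subtle than a flat timer):

      def retransmit_times(policy, rto=1.0, attempts=6):
          """Times at which successive retransmissions are sent."""
          t, times = 0.0, []
          for i in range(attempts):
              times.append(t)
              t += rto * (2 ** i) if policy == "exponential" else rto
          return times

      print(retransmit_times("exponential"))  # 0, 1, 3, 7, 15, 31 (doubling gaps)
      print(retransmit_times("flat"))         # 0, 1, 2, 3, 4, 5 (constant gaps)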

    Public review by Jitendra Padhye
  • Domenico Ficara, Stefano Giordano, Gregorio Procissi, Fabio Vitucci, Gianni Antichi, and Andrea Di Pietro

    Modern network devices need to perform deep packet inspection at high speed for security and application-specific services. Finite Automata (FAs) are used to implement regular expression matching, but they require a large amount of memory. Many recent works have proposed improvements to address this issue. This paper presents a new representation for deterministic finite automata (orthogonal to previous solutions), called Delta Finite Automata (dFA), which considerably reduces states and transitions and requires only one transition per character, thus allowing fast matching. Moreover, a new state encoding scheme is proposed, and the comprehensive algorithm is tested for use in the packet classification area.
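
    A minimal sketch of the delta idea on the lookup side (Python; a guess at the flavor of the scheme rather than the paper's algorithm, with construction omitted): rather than a full per-state transition row, each state stores only the entries in which its row differs from the row currently in effect, and the matcher patches a running row as it goes:

      def dfa_match(start_row, deltas, text):
          """start_row: full char -> state table of the start state.
          deltas[s]: only the entries where state s's table differs
          from the table in effect when s is entered."""
          row, state = dict(start_row), 0
          for ch in text:
              state = row[ch]
              row.update(deltas.get(state, {}))   # apply this state's delta
          return state

      # Two-state toy automaton over {'a','b'}: each state's delta only
      # changes the 'a' entry, so rows are never stored in full.
      start = {"a": 1, "b": 0}
      deltas = {1: {"a": 0}, 0: {"a": 1}}
      print(dfa_match(start, deltas, "aab"))   # -> 0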

    Public review by Chadi Barakat
  • Sebastian Castro, Duane Wessels, Marina Fomenkov, and Kimberly Claffy

    We analyzed the largest simultaneous collection of full-payload packet traces from a core component of the global Internet infrastructure ever made available to academic researchers. Our dataset consists of three large samples of global DNS traffic collected during three annual 'Day in the Life of the Internet' (DITL) experiments in January 2006, January 2007, and March 2008. Building on our previous comparison of DITL 2006 and DITL 2007 DNS datasets [28], we venture to extract historical trends, comparisons with other data sources, and interpretations, including traffic growth, usage patterns, impact of anycast distribution, and persistent problems in the root nameserver system that reflect ominously on the global Internet. Most notably, the data consistently reveals an extraordinary amount of DNS pollution - an estimated 98% of the traffic at the root servers should not be there at all. Unfortunately, there is no clear path to reducing the pollution, so root server operators, and those who finance them, must perpetually overprovision to handle this pollution. Our study presents the most complete characterization to date of traffic reaching the roots, and while the study does not adequately fulfill the 'Day in the Life of the Internet' vision, it does succeed at unequivocally demonstrating that the infrastructure on which we are all now betting our professional, personal, and political lives deserves a closer and more scientific look.

  • Damon Wischik, Mark Handley, and Marcelo Bagnulo Braun

    Since the ARPAnet, network designers have built localized mechanisms for statistical multiplexing, load balancing, and failure resilience, often without understanding the broader implications. These mechanisms are all types of resource pooling, which means making a collection of resources behave like a single pooled resource. We believe that the natural evolution of the Internet is that it should achieve resource pooling by harnessing the responsiveness of multipath-capable end systems. We argue that this approach will solve the problems and limitations of the current piecemeal approaches.

  • Yiyi Huang, Nick Feamster, and Renata Teixeira

    This paper investigates the practical issues in applying network tomography to monitor failures. We outline an approach for selecting paths to monitor, detecting and confirming the existence of a failure, correlating multiple independent observations into a single failure event, and applying existing binary networking tomography algorithms to identify failures. We evaluate the ability of network tomography algorithms to correctly detect and identify failures in a controlled environment on the VINI testbed.
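
    As a sketch of the binary-tomography step (Python; a standard set-cover-style heuristic shown for illustration, while the paper evaluates existing algorithms whose details differ): exonerate every link that appears on a successfully probed path, then greedily blame links that explain the most failed paths:

      def infer_failed_links(paths, probe_ok):
          """paths: path -> set of links; probe_ok: path -> True/False."""
          exonerated = set()
          for p, links in paths.items():
              if probe_ok[p]:
                  exonerated |= links              # links on good paths work
          unexplained = {p: paths[p] - exonerated
                         for p in paths if not probe_ok[p]}
          suspects = []
          while any(unexplained.values()):
              # Greedily blame the link on the most unexplained bad paths.
              counts = {}
              for links in unexplained.values():
                  for l in links:
                      counts[l] = counts.get(l, 0) + 1
              worst = max(counts, key=counts.get)
              suspects.append(worst)
              unexplained = {p: ls for p, ls in unexplained.items()
                             if worst not in ls}
          return suspects

      paths = {"p1": {"a", "b"}, "p2": {"b", "c"}, "p3": {"c", "d"}}
      print(infer_failed_links(paths, {"p1": False, "p2": False, "p3": True}))
      # -> ['b']: it lies on both failed paths and on no healthy one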

  • Yaping Zhu, Andy Bavier, Nick Feamster, Sampath Rangarajan, and Jennifer Rexford

    Conventional wisdom has held that routing protocols cannot achieve both scalability and high availability. Despite scaling relatively well, today's Internet routing system does not react quickly to changing network conditions (e.g., link failures or excessive congestion). Overlay networks, on the other hand, can respond quickly to changing network conditions, but their reliance on aggressive probing does not scale to large topologies. The paper presents a layered routing architecture called UFO (Underlay Fused with Overlays), which achieves the best of both worlds by having the underlay provide explicit notification about network conditions to help improve the efficiency and scalability of routing overlays.

  • Jon Crowcroft, Eiko Yoneki, Pan Hui, and Tristan Henderson

    So what is all this DTN research about anyway? Sceptics ask: 'Why are there no DTN applications?' or 'Why is DTN performance so miserable?' This article attempts to address some of these complaints. We present suggestions of expectations for applications, and metrics for performance, which encourage a more tolerant view of research in the area.

  • Rui L. Aguiar

    The hourglass concept has been the undisputed ruler of networking visions in recent years. As network evolution is now a hot topic, this article aims to reflect on the concept, highlighting its current shortcomings and suggesting how it might evolve.

  • Michalis Faloutsos

    The proof is in the pudding. Never have truer words been spoken. If only research was a lavish meal that concluded with a pudding. But now, what is a researcher to do?

  • Neil Spring

    Sally Floyd wins 2007 SIGCOMM Award

    CALL FOR NOMINATIONS: EDITOR-IN-CHIEF, ACM Computer Communication Review (CCR)

  • Petter Holme, Josh Karlin, and Stephanie Forrest

    Modeling Internet growth is important both for understanding the current network and for predicting and improving its future. To date, Internet models have typically attempted to explain a subset of the following characteristics: network structure, traffic flow, geography, and economy. In this paper we present a discrete, agent-based model that integrates all of them. We show that the model generates networks with topologies, dynamics, and, more speculatively, spatial distributions that are similar to the Internet's.

    Public review by Jon Crowcroft
  • Joon Ahn, Shyam Kapadia, Sundeep Pattem, Avinash Sridharan, Marco Zuniga, Jung-Hyun Jun, Chen Avin, and Bhaskar Krishnamachari

    In the last few years, several studies have analyzed the performance of flooding and random walks as querying mechanisms for unstructured wireless sensor networks. However, most of this work is theoretical in nature and, while providing insights into the asymptotic behavior of these querying mechanisms, does not account for the non-idealities faced by the network in real deployments. In this paper, we propose a 3-way handshake protocol as a reliable implementation of a random walk and compare its performance with flooding in real environments. The metrics considered are delay, reliability, and transmission cost. Our initial results suggest that flooding is better suited to low-interference environments, while random walks might be a better option in networks with high interference. We also present possible research directions to improve the performance of flooding and random walks.

    Public review by Kevin Almeroth
  • Grenville Armitage, Lawrence Stewart, Michael Welzl, and James Healy

    A key requirement for IETF recognition of new TCP algorithms is having an independent, interoperable implementation. This paper describes our BSD-licensed implementation of H-TCP within FreeBSD 7.0, publicly available as a dynamically loadable kernel module. Based on our implementation experience, we provide a summary description of the H-TCP algorithm to assist other groups in building further interoperable implementations. Using data from our live testbed, we demonstrate that our version exhibits expected H-TCP behavior, and we describe a number of implementation-specific issues that influence H-TCP’s dynamic behavior. Finally, we illustrate the actual collateral impact on path latency of using H-TCP instead of NewReno. In particular, we illustrate how, compared to NewReno, H-TCP’s cwnd growth strategy can cause faster fluctuations in queue sizes at congestion points, yet lower median latency through them. We believe these insights will prove valuable predictors of H-TCP’s potential impact if deployed in consumer end-hosts in addition to specialist, high-performance network environments.
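
    For readers unfamiliar with H-TCP, its commonly cited increase rule looks roughly like this (a Python sketch of the published H-TCP specification as we understand it, not code from this implementation; constants are the spec's defaults): the per-RTT additive increase stays at 1 for the first second after a congestion event and then grows polynomially, which is what drives the faster queue-size fluctuations mentioned above.

      def htcp_alpha(delta_secs, delta_l=1.0):
          """Per-RTT cwnd increase as a function of the time (in seconds)
          elapsed since the last congestion event."""
          if delta_secs <= delta_l:
              return 1.0                     # behave like NewReno at first
          d = delta_secs - delta_l
          return 1.0 + 10.0 * d + 0.25 * d * d

      for t in (0.5, 1.0, 2.0, 5.0):
          print(t, htcp_alpha(t))            # 1.0, 1.0, 11.25, 45.0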

    Public review by Jitendra Padhye
  • Christian Henke, Carsten Schmoll, and Tanja Zseby

    A broad spectrum of network measurement applications demand passive multipoint measurements in which data from multiple observation points has to be correlated. Examples are the passive measurement of one-way delay or the observation of the path that a packet takes through a network. Nevertheless, due to high data rates and the need for fine-granular measurements, the resource consumption for passive measurements can be immense. Furthermore, the resource consumption depends on the traffic in the network, which usually is highly dynamic. Packet- and flow-selection methods provide a solution to reduce and control the resource consumption for passive measurements. In order to apply such techniques to multipoint measurements, the selection processes need to be synchronized. Hash-based selection is a deterministic packet selection based on a hash function computed over selected parts of the packet content. This selection decision is consistent throughout the network and enables packet tracing and the measurement of delay between network nodes. Because the selection is based on a deterministic function, it can introduce bias, which leads to wrong estimates of traffic characteristics. In this paper we define a set of quality criteria and select methods to investigate which hash function is most suitable for hash-based packet selection. We analyze 23 non-cryptographic and 2 cryptographic hash functions. Experiments are performed with real traffic traces from different networks. Based on the results, we recommend 2 fast hash functions that show low bias and sample a representative subset of the population.
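
    A minimal sketch of hash-based selection (Python; the field choice, hash function, and sampling rate are placeholders, and the paper's point is precisely that the hash function must be chosen with care): every observation point hashes the same invariant packet fields and keeps the packet iff the value falls below a threshold, so all points pick the same packets without exchanging any state:

      import hashlib

      def select(packet_invariant_bytes: bytes, sampling_fraction=1/64) -> bool:
          """Deterministic selection decision, identical at every point."""
          digest = hashlib.sha256(packet_invariant_bytes).digest()
          value = int.from_bytes(digest[:4], "big")    # 32-bit hash value
          return value < sampling_fraction * 2**32

      # Two observation points make the same decision for the same packet:
      pkt = b"src=10.0.0.1 dst=10.0.0.2 ipid=4711 payload-prefix=..."
      assert select(pkt) == select(pkt)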

    Public review by Chadi Barakat
  • Frank Kelly, Gaurav Raina, and Thomas Voice

    Rate control protocols that utilise explicit feedback from routers are able to achieve fast convergence to an equilibrium which approximates processor-sharing on a single bottleneck link, and hence such protocols allow short flows to complete quickly. For a network, however, processor-sharing is not uniquely defined but corresponds with a choice of fairness criteria, and proportional fairness has a reasonable claim to be the network generalization of processor-sharing.

    In this paper, we develop a variant of RCP (rate control protocol) that achieves alpha-fairness when buffers are small, including proportional fairness as the case alpha = 1. At the level of theoretical abstraction treated, our model incorporates a general network topology, and heterogeneous propagation delays. For our variant of the RCP algorithm, we establish a simple decentralized sufficient condition for local stability.
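
    For reference, the standard alpha-fair utility family behind these terms (textbook background, stated in LaTeX notation, not taken from the paper itself): an allocation of flow rates x_i is alpha-fair when it maximizes

      \sum_i U_\alpha(x_i), \qquad
      U_\alpha(x) =
      \begin{cases}
        \dfrac{x^{1-\alpha}}{1-\alpha}, & \alpha \neq 1, \\[6pt]
        \log x, & \alpha = 1,
      \end{cases}

    so alpha = 1 recovers proportional fairness and alpha -> infinity approaches max-min fairness.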

    Public review by Darryl Veitch
  • Haldane Peterson, Soumya Sen, Jaideep Chandrashekar, Lixin Gao, Roch Guerin, and Zhi-Li Zhang

    With steady improvement in the reliability and performance of communication devices, routing instabilities now contribute to many of the remaining service degradations and interruptions in modern networks. This has led to a renewed interest in centralized routing systems that, compared to distributed routing, can provide greater control over routing decisions and better visibility of the results. One benefit of centralized control is the opportunity to readily eliminate transient routing loops, which arise frequently after network changes because of inconsistent routing states across devices. Translating this conceptual simplicity into a solution with tolerable message complexity is non-trivial. Addressing this issue is the focus of this paper. We identify when and why avoiding transient loops might require a significant number of messages in a centralized routing system, and demonstrate that this is the case under many common failure scenarios. We also establish that minimizing the number of required messages is NP-hard, and propose a greedy heuristic that we show to perform well under many conditions. The paper’s results can facilitate the deployment and evaluation of centralized architectures by leveraging their strengths without incurring unacceptable overhead.
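
    A toy Python sketch of the underlying hazard (illustrative only; the paper's contribution is minimizing the update messages, which this does not address): with per-router next-hop tables, a transient loop appears whenever some routers have installed the new route while others still forward along the old one, so the controller must care about update order:

      def has_loop(next_hop, dest, start):
          """Walk the mixed forwarding state from start toward dest."""
          seen, node = set(), start
          while node != dest:
              if node in seen or node not in next_hop:
                  return True        # revisited a node (loop) or black hole
              seen.add(node)
              node = next_hop[node]
          return False

      # Old state: A->B->D ; new state: B->A->D (say, link B-D failed).
      mixed = {"A": "B", "B": "A"}   # B updated first, A not yet: A<->B loop
      print(has_loop(mixed, "D", "A"))   # -> True: update A before B instead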

    Public review by Dmitri Krioukov
  • Bhaskaran Raman and Kameswari Chebrolu

    This writeup presents a critique of the field of “Wireless Sensor Networks (WSNs)”. Literature in this domain falls into two main, distinct categories: (1) algorithms or protocols, and (2) application-centric system design. A striking observation is that references across these two categories are minimal, and superficial at best. We argue that this is not accidental, and is the result of three main flaws in the former category of work. Going forward, an application-driven, bottom-up approach is required for meaningful articulation and subsequent solution of any networking issues in WSNs.

  • Venkata N. Padmanabhan

    When Jim Kurose invited me to write a piece for the "recommended reading" series in CCR, I thought it would be a fun exercise and accepted the invitation right away. Little did I realize how challenging it would be to put together this list, as much because I was only allowed to pick 10 papers as because I had to steer clear of the many fine papers that had already been picked in previous lists. Rather than focusing on a single topic, I decided to pick papers from across sub-areas of networking that have left a lasting impression on me (and quite possibly on other researchers as well) because of their elegance, the insights they provide, and in many cases both of these. I hope many readers will enjoy reading these papers, if they have not already done so.

  • Tim Strayer, Mark Allman, Grenville Armitage, Steve Bellovin, Shudong Jin, and Andrew W. Moore

    The Internet Research Task Force’s (IRTF) Internet Measurement Research Group (IMRG) continued its practice of sponsoring workshops on current topics in computer network measurement with the Workshop on Application Classification and Identification (WACI) held October 3, 2007, at BBN Technologies in Cambridge, Massachusetts. There has been much general interest within the community recently in finding techniques to identify traffic without examination of the service ports used in the TCP or UDP header, as these increasingly do not accurately indicate the application generating the traffic. The workshop agenda was formed around six abstracts that were selected from an open call by the workshop organizing committee.

  • Michalis Faloutsos

    The goal of this column this time was to address major scientific issues and propose novel scientific methods for doing the same thing under different names. However, it turned out to be too complicated, as it required mastery of the English alphabet. Then, I had an epiphany (a Greek word): Why propose something new, when you can complain about something that already exists? I think I am turning into an angry old man prematurely.

  • Jeffrey C. Mogul and Tom Anderson

    The Workshop on Organizing Workshops, Conferences, and Symposia for Computer Systems (WOWCS) was organized to “bring together conference organizers (past, present, and future) and other interested people to discuss the issues they confront.” In conjunction with WOWCS, we survey some previous publications that discuss open issues related to organizing computer systems conferences, especially concerning conduct and management of the review process. We also list some topics about which we wish WOWCS had received submissions, but did not; these could be good topics for future articles.

  • Xenofontas Dimitropoulos, M. Ángeles Serrano, and Dmitri Krioukov

    Several users of our AS relationship inference data asked us why it contained AS relationship cycles, e.g., cases where AS A is a provider of AS B, B is a provider of C, and C is a provider of A, or other cycle types. Having been answering these questions in private communications, we have eventually decided to write down our answers here for future reference.
