CCR Papers from April 2010

  • S. Keshav

    This editorial is about some changes that will affect CCR and its community in the months ahead.

    Changes in the Editorial Board

    CCR Area Editors serve for a two-year term. Since the last issue, the terms of the following Area Editors have expired:
    • Kevin Almeroth, UC Santa Barbara, USA
    • Chadi Barakat, INRIA Sophia Antipolis, France
    • Dmitri Krioukov, CAIDA, USA
    • Jitendra Padhye, Microsoft Research, USA
    • Pablo Rodriguez, Telefonica, Spain
    • Darryl Veitch, University of Melbourne, Australia

    I would like to thank them for their devotion, time, and effort. They have greatly enhanced the quality and reputation of this publication.

    Taking their place is an equally illustrious team of researchers:
    • Augustin Chaintreau, Thomson Research, France
    • Stefan Saroiu, Microsoft Research, USA
    • Renata Teixeira, LIP6, France
    • Jia Wang, AT&T Research, USA
    • David Wetherall, University of Washington, USA

    Welcome aboard!

    Online Submission System

    This is the first issue of CCR created entirely using an online paper submission system rather than email. A slight variant of Eddie Kohler's HotCRP, the CCR submission site allows authors to submit papers at any time and to receive reviews as they are finalized.

    Moreover, authors can respond to the reviews and conduct an anonymized conversation with their Area Editor. The system is currently batched: reviewer assignments and reviews are done once every three months. Starting shortly, however, papers will be assigned to an Area Editor for review as they are submitted, and the set of accepted papers will be published quarterly in CCR. We hope that this will give authors the benefits of a 'rolling deadline,' similar to the model pioneered by the VLDB Journal.

    Reviewer Pool

    The reviewer pool is a set of volunteer reviewers, usually post-PhD, who are called upon by Area Editors to review papers in their areas of interest. The current set of reviewers in the pool can be found here: http://blizzard.cs.uwaterloo.ca/ccr/reviewers.html. If you would like to join the pool, please send mail to ccr-edit@uwaterloo.ca with your name, affiliation, interests, and contact URL.

    Page Limits

    We have had a six-page limit on submissions for the past year. The purpose of this limit was to prevent CCR from becoming a cemetery for dead papers. The policy has been a success: the technical papers in each issue have been vibrant and well-suited to this venue. However, we recognize that it can be difficult to fit work into six pages. Therefore, from now on, submissions will still be limited to six pages (unless permission is obtained in advance), but if the reviewers suggest additional work, additional pages will be granted automatically.

    I hope that these changes will continue to make CCR a bellwether for our community. As always, your comments and suggestions for improvement are welcome.

  • Hilary Finucane and Michael Mitzenmacher

    We provide a detailed analysis of the Lossy Difference Aggregator, a recently developed data structure for measuring latency in a router environment where packet losses can occur. Our analysis provides stronger performance bounds than those given originally and, using competitive analysis, leads us to a model for optimizing the data structure's parameters when the loss rate is not known in advance.
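
    For readers unfamiliar with the data structure, the following is a minimal Python sketch of the core idea as we understand it (the names and the single-bank simplification are ours, not the authors'): each endpoint keeps, per hash bucket, a sum of packet timestamps and a packet count; buckets whose counts match at sender and receiver were unaffected by loss, so the difference of their timestamp sums, divided by the count, estimates the average latency.

    ```python
    import hashlib

    class LDA:
        """One side of a simplified Lossy Difference Aggregator.

        Per bucket: a sum of packet timestamps and a packet count.
        (Sketch only; the real structure also uses multiple banks with
        different sampling rates to cope with unknown loss rates.)
        """
        def __init__(self, num_buckets=64):
            self.sums = [0.0] * num_buckets
            self.counts = [0] * num_buckets

        def _bucket(self, packet_id):
            digest = hashlib.sha1(packet_id.encode()).digest()
            return int.from_bytes(digest[:4], "big") % len(self.counts)

        def record(self, packet_id, timestamp):
            b = self._bucket(packet_id)
            self.sums[b] += timestamp
            self.counts[b] += 1

    def estimate_mean_latency(sender, receiver):
        """Average latency over buckets whose counts match on both sides;
        buckets with mismatched counts saw a loss and are discarded."""
        delay, packets = 0.0, 0
        for s_sum, s_cnt, r_sum, r_cnt in zip(sender.sums, sender.counts,
                                              receiver.sums, receiver.counts):
            if s_cnt == r_cnt and s_cnt > 0:
                delay += r_sum - s_sum
                packets += s_cnt
        return delay / packets if packets else float("nan")
    ```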

    Dmitri Krioukov
  • Marta Carbone and Luigi Rizzo

    Dummynet is a widely used link emulator, developed long ago to run experiments in user-configurable network environments. Since its original design, our system has been extended in various ways, and it has become very popular in the research community due to its features and its ability to emulate even moderately complex network setups on unmodified operating systems.

    We have recently made a number of extensions to the emulator, including loadable packet schedulers, support for better MAC-layer modeling, its inclusion in PlanetLab, and the development of Linux and Windows versions in addition to the native FreeBSD and Mac OS X ones.

    The goal of this paper is to present in detail the current features of Dummynet, compare it with other emulation solutions, and discuss what operating conditions should be considered and what kind of accuracy to expect when using an emulation system.
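
    As a rough illustration of the pipe abstraction such emulators expose (a bandwidth limit, a propagation delay, and random loss applied to each packet), here is a minimal Python sketch; it is our simplification for exposition, not Dummynet's implementation, and it ignores queue limits and packet schedulers.

    ```python
    import random

    def emulate_pipe(packets, bw_bps, delay_s, plr, seed=0):
        """Apply a fixed-rate, fixed-delay, lossy 'pipe' to a packet trace.

        packets: list of (arrival_time_s, size_bytes); returns a list of
        (arrival_time_s, delivery_time_s) for the packets that survive.
        Simplification: a single unbounded FIFO, no packet scheduler.
        """
        rng = random.Random(seed)
        link_free_at = 0.0
        delivered = []
        for arrival, size in sorted(packets):
            if rng.random() < plr:                 # random loss
                continue
            start = max(arrival, link_free_at)     # wait for the link to free up
            link_free_at = start + size * 8 / bw_bps   # serialization time
            delivered.append((arrival, link_free_at + delay_s))
        return delivered

    # e.g. a 1 Mbit/s link with 50 ms propagation delay and 0.1% loss
    trace = [(i * 0.001, 1500) for i in range(100)]
    print(emulate_pipe(trace, bw_bps=1_000_000, delay_s=0.050, plr=0.001)[:3])
    ```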

    Kevin Almeroth
  • Hamed Haddadi

    Online advertising is currently the richest source of revenue for many Internet giants. The increasing number of online businesses, specialized websites, and modern profiling techniques have all contributed to an explosion in ad brokers' income from online advertising. The single biggest threat to this growth, however, is click-fraud. Click-fraud specialists hire trained botnets and individuals to maximize the revenue certain users earn from the ads they publish on their websites, or to launch attacks between competing businesses.

    In this note we wish to raise the networking research community's awareness of potential research areas within the online advertising field. As an example strategy, we present Bluff ads: a class of ads that join forces in order to increase the effort required of click-fraud spammers. Bluff ads are either targeted ads with irrelevant display text, or ads with highly relevant display text but irrelevant targeting information. They act as a litmus test for the legitimacy of the individual clicking on the ads. Together with standard threshold-based methods, these fake ads help to decrease click-fraud levels.
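
    As one illustration of how bluff ads could feed a standard threshold test (the rule and thresholds below are ours, purely for exposition, not the scheme evaluated in the paper): a legitimate user should rarely click an ad whose display text or targeting is deliberately irrelevant, so a profile whose clicks land disproportionately on bluff ads can be flagged.

    ```python
    def is_suspicious(clicked_ads, bluff_ad_ids, min_clicks=20, bluff_ratio=0.3):
        """Flag a click profile whose clicks land too often on bluff ads.

        clicked_ads: ad identifiers clicked by one profile
        bluff_ad_ids: set of identifiers that were served as bluff ads
        Threshold values here are arbitrary placeholders, not from the paper.
        """
        if len(clicked_ads) < min_clicks:
            return False                       # too little data to judge
        bluff_clicks = sum(1 for ad in clicked_ads if ad in bluff_ad_ids)
        return bluff_clicks / len(clicked_ads) > bluff_ratio

    # e.g. a profile with 30 clicks, 12 of them on bluff ads, gets flagged
    profile = ["ad%d" % i for i in range(18)] + ["bluff%d" % i for i in range(12)]
    print(is_suspicious(profile, {"bluff%d" % i for i in range(12)}))
    ```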

    Adrian Perrig
  • Dirk Trossen, Mikko Sarela, and Karen Sollins

    The current Internet architecture focuses on communicating entities, largely leaving aside the information to be exchanged among them. However, trends in communication scenarios show that WHAT is being exchanged is becoming more important than WHO is exchanging it. Van Jacobson describes this as moving from interconnecting machines to interconnecting information. Any change to this part of the Internet needs a clear argument as to why it should be undertaken in the first place. In this position paper, we identify four key challenges, namely information-centrism of applications, supporting and exposing tussles, increasing accountability, and addressing attention scarcity, that we believe an information-centric internetworking architecture could address better and that would make changing such a crucial part worthwhile. We recognize, however, that a much larger and more systematic debate about such a change is needed, underpinned by factual evidence of the gains it would bring.

    Kevin Almeroth
  • Pei-chun Cheng, Xin Zhao, Beichuan Zhang, and Lixia Zhang

    BGP routing data collected by RouteViews and RIPE RIS have become an essential asset to both the network research and operations communities. However, it has long been speculated that the BGP monitoring sessions between operational routers and the data collectors fail from time to time. Such session failures lead to missing update messages as well as duplicate updates during session re-establishment, making analysis results derived from the data inaccurate. Since there is no complete record of these monitoring session failures, data users either have to sanitize the data at their own discretion with respect to their specific needs or, more commonly, assume that session failures are infrequent enough to be ignored. In this paper, we present the first systematic assessment and documentation of the BGP session failures of RouteViews and RIPE data collectors over the past eight years. Our results show that monitoring session failures are rather frequent: more than 30% of BGP monitoring sessions experienced at least one failure every month. Furthermore, we observed failures that happen to multiple peer sessions on the same collector around the same time, suggesting that the collector's local problems are a major factor in session instability. We have also developed a web site, offered as a community resource, that publishes all session failures detected for the RouteViews and RIPE RIS data collectors, to help users select and clean up BGP data before performing their analysis.
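
    For intuition, here is a hypothetical Python sketch of how session failures might be tallied once BGP state-change events have been extracted from the collectors' archives; the event format and grouping below are illustrative assumptions of ours, and the paper's actual methodology is more involved.

    ```python
    from collections import defaultdict

    ESTABLISHED = 6   # code of the ESTABLISHED state in the BGP finite state machine

    def failed_sessions_by_month(state_changes):
        """Return {month: set of (collector, peer) sessions that failed that month}.

        state_changes: iterable of (collector, peer, month, old_state, new_state)
        tuples, assumed to be pre-extracted from the collectors' archives.
        A session 'fails' when it leaves the ESTABLISHED state.
        """
        failures = defaultdict(set)
        for collector, peer, month, old_state, new_state in state_changes:
            if old_state == ESTABLISHED and new_state != ESTABLISHED:
                failures[month].add((collector, peer))
        return failures

    # e.g. one synthetic event: session rrc00/192.0.2.1 dropped out of ESTABLISHED
    events = [("rrc00", "192.0.2.1", "2009-06", 6, 1)]
    print(failed_sessions_by_month(events))
    ```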

    Jitendra Padhye
  • David R. Choffnes and Fabian E. Bustamante

    Today's open platforms for network measurement and distributed system research, which we collectively refer to as testbeds in this article, provide opportunities for controllable experimentation and evaluation of systems at the scale of hundreds or thousands of hosts. In this article, we identify several issues with extending results from such platforms to Internet-wide perspectives. Specifically, we attempt to quantify the level of inaccuracy and incompleteness of testbed results when applied to the context of a large-scale peer-to-peer (P2P) system. Based on our results, we emphasize the importance of measurements in the appropriate environment when evaluating Internet-scale systems.

    Pablo Rodriguez
  • Diana Joumblatt, Renata Teixeira, Jaideep Chandrashekar, and Nina Taft

    There is an amazing paucity of data collected directly from users' personal computers. One key reason for this is the perception among researchers that users are unwilling to participate in such data collection efforts. To understand the range of opinions on the issues that arise with end-host data tracing, we conducted a survey of 400 computer scientists. In this paper, we summarize and share our findings.

  • k. c. claffy

    On September 23, 2009, CAIDA hosted a virtual Workshop on Internet Economics to bring together network technology and policy researchers, commercial Internet facilities and service providers, and communications regulators to explore a common goal: framing a concrete agenda for the emerging but empirically stunted field of Internet infrastructure economics. With participants stretching from Washington D.C. to Queensland, Australia, we used the electronic conference hosting facilities supported by the California Institute of Technology (Caltech) EVO Collaboration Network. This report describes the workshop discussions and presents relevant open research questions identified by participants.

  • Ratul Mahajan

    This paper is based on a talk that I gave at CoNEXT 2009. Inspired by Hal Varian’s paper on building economic models, it describes a research method for building computer systems. I find this method useful in my work and hope that some readers will find it helpful as well.

  • Matthew Caesar, Martin Casado, Teemu Koponen, Jennifer Rexford, and Scott Shenker

    This paper advocates a different approach to routing convergence: side-stepping the problem by avoiding it in the first place! Rather than recomputing paths after temporary topology changes, we argue for a separation of timescales between the offline computation of multiple diverse paths and the online spreading of load over those paths. We believe that decoupling failure recovery from path computation leads to networks that are inherently more efficient, more scalable, and easier to manage.
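
    The decoupling being argued for can be sketched in a few lines of Python (our own illustration, with hypothetical names): paths are computed offline and stored, and when links fail the forwarding logic merely shifts load onto the surviving precomputed paths instead of triggering a route recomputation.

    ```python
    def surviving_paths(precomputed_paths, failed_links):
        """Keep only the precomputed paths that avoid every failed link.

        precomputed_paths: list of paths, each a list of (node, node) links
        failed_links: set of (node, node) links that are currently down
        No online path computation happens here; we only filter.
        """
        return [p for p in precomputed_paths
                if not any(link in failed_links for link in p)]

    def split_load(precomputed_paths, failed_links):
        """Spread load evenly over the surviving precomputed paths."""
        alive = surviving_paths(precomputed_paths, failed_links)
        if not alive:
            raise RuntimeError("no precomputed path survives this failure")
        share = 1.0 / len(alive)
        return [(path, share) for path in alive]

    # e.g. three diverse paths from A to D; the link (B, D) fails
    paths = [[("A", "B"), ("B", "D")],
             [("A", "C"), ("C", "D")],
             [("A", "E"), ("E", "D")]]
    print(split_load(paths, failed_links={("B", "D")}))
    ```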

  • Constantine Dovrolis and J. Todd Streelman

    There has been significant research interest recently in understanding the evolution of the current Internet, as well as in designing clean-slate Future Internet architectures. Clearly, even when network architectures are designed from scratch, they have to evolve as their environment (technological constraints, service requirements, applications, economic conditions, etc.) continually changes. A key question then is: what makes a network architecture evolvable? What determines the ability of a network architecture to evolve as its environment changes? In this paper, we review some relevant ideas about evolvability from the biological literature. We examine the role of robustness and modularity in evolution, and their relation to evolvability. We also discuss evolutionary kernels and punctuated equilibria, two important concepts that may be relevant to the so-called ossification of the core Internet protocols. Finally, we examine optimality, a design objective that is often of primary interest in engineering but that does not seem to be abundant in biology.

  • Hamed Haddadi, Tristan Henderson, and Jon Crowcroft

    On numerous occasions, trips to the facilities coincide with an important mobile phone call. Due to the sleek and polished nature of modern phones, attempting to deal promptly with such calls can occasionally lead to the phone sliding through the owner's hands, surrendering to the force of gravity and flying down the hole. This is a disaster, and often an expensive incident. It can also be a health and safety hazard, with the owner desperately attempting to retrieve the phone and re-use it.

    This paper provides a first attempt at a cell phone recovery system using the modern functionalities of Toto Japanese toilets. In our approach, the phone is calmly recovered, sanitized, and retrieved by the user. This can all happen without the call even being dropped, with the possibility of secure backup of the user's data via the embedded sensors and Wi-Fi network connectivity in the toilet. We envision that such an approach will increase the collaboration between Japanese, European and American mobile operators, network researchers, and hardware manufacturers.
