Computer Communication Review: Papers

  • Igor Ganichev, Bin Dai, P. Brighten Godfrey, and Scott Shenker

    Multipath routing is a promising technique to increase the Internet’s reliability and to give users greater control over the service they receive. However, past proposals choose paths which are not guaranteed to have high diversity. In this paper, we propose yet another multipath routing scheme (YAMR) for the interdomain case. YAMR provably constructs a set of paths that is resilient to any one inter-domain link failure, thus achieving high reliability in a systematic way. Further, even though YAMR maintains more paths than BGP, it actually requires significantly less control traffic, thus alleviating instead of worsening one of the Internet’s scalability problems. This reduction in churn is achieved by a novel hiding technique that automatically localizes failures leaving the greater part of the Internet completely oblivious.

    J. Wang
  • Niccolo' Cascarano, Pierluigi Rolando, Fulvio Risso, and Riccardo Sisto

    This paper presents iNFAnt, a parallel engine for regular expression pattern matching. In contrast with traditional approaches, iNFAnt adopts non-deterministic automata, allowing the compilation of very large and complex rule sets that are otherwise hard to treat.

    iNFAnt is explicitly designed and developed to run on graphical processing units that provide large amounts of concurrent threads; this parallelism is exploited to handle the non-determinism of the model and to process multiple packets at once, thus achieving high performance levels.
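
    As a rough illustration of the non-deterministic approach the authors describe, the sketch below simulates an NFA over packet payloads by carrying a whole set of active states instead of backtracking; iNFAnt itself maps this kind of per-state, per-packet work onto GPU threads, and the names and the tiny literal-pattern "rule set" here are our own illustrative assumptions, not the iNFAnt code.

    # Minimal CPU sketch of set-based NFA matching (Python); iNFAnt parallelizes
    # the same idea on a GPU with compiled transition tables.
    def compile_nfa(patterns):
        """Build a tiny NFA accepting any input that contains one of the literal
        patterns (a stand-in for a compiled regex rule set)."""
        transitions = {}      # (state, byte) -> set of next states
        accepting = set()
        next_state = 1
        for pat in patterns:
            prev = 0          # state 0 is the start state
            for ch in pat:
                cur = next_state
                next_state += 1
                transitions.setdefault((prev, ch), set()).add(cur)
                prev = cur
            accepting.add(prev)
        return transitions, accepting

    def match(payload, transitions, accepting):
        """Keep the full set of active states, so no backtracking is needed even
        for large, complex rule sets."""
        active = {0}
        for ch in payload:
            nxt = {0}                      # the start state self-loops on any byte
            for s in active:
                nxt |= transitions.get((s, ch), set())
            active = nxt
            if active & accepting:
                return True
        return False

    trans, acc = compile_nfa([b"attack", b"exploit"])
    packets = [b"normal payload", b"payload with attack string"]
    print([match(p, trans, acc) for p in packets])   # [False, True]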

    Y. Zhang
  • Mark Allman

    In this paper we propose a system that will allow people to communicate their status with friends and family when they find themselves caught up in a large disaster (e.g., sending “I’m fine” in the immediate aftermath of an earthquake). Since communication between a disaster zone and the non-affected world is often highly constrained we design the system around lightweight triggers such that people can communicate status with only crude infrastructure (or even sneaker-nets). In this paper we provide the high level system design, discuss the security aspects of the system and study the overall feasibility of a purpose-built social networking system for communication during an emergency.

    S. Saroiu
  • Alisa Devlic

    Context-aware applications need quick access to current context information in order to adapt their behavior before this context changes. To achieve this, the context distribution mechanism has to discover, in a timely manner, context sources that can provide a particular context type, then acquire and distribute context information from these sources to the applications that requested this type of information. This paper reviews state-of-the-art context distribution mechanisms against a set of identified requirements, then introduces a resource list-based subscription/notification mechanism for context sharing. This SIP-based mechanism enables subscriptions to a resource list containing URIs of multiple context sources that can provide the same context type, and delivery of aggregated notifications containing context updates from each of these sources. Aggregation of context is thought to be important because it reduces the network traffic between entities involved in context distribution. However, it introduces an additional delay due to waiting for context updates and aggregating them. To investigate whether this aggregation actually pays off, we measured and compared the time needed by an application to receive context updates after subscribing to a particular resource list (using RLS) versus after subscribing to each of the individual context sources (using SIMPLE), for different numbers of context sources. Our results show that RLS aggregation outperforms the SIMPLE presence mechanism with 3 or more context sources, regardless of the size of their context updates. Database performance was identified as a major bottleneck during aggregation; hence we used in-memory tables and prepared statements, improving database time by up to 57% and reducing aggregation time by up to 34%. With this reduction, and with increasing context size, we pushed the aggregation payoff threshold closer to 2 context sources.
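
    The trade-off measured above (one aggregated notification after a wait timer versus one notification per source) can be seen in a back-of-the-envelope sketch like the one below; the per-message overhead, payload size and wait time are our own illustrative assumptions, not the paper's measured values.

    # Toy model (Python) of RLS-style aggregation vs. per-source SIMPLE notifications.
    def individual_notifications(n_sources, per_msg_overhead=450, payload=200):
        msgs = n_sources                                   # one NOTIFY per source
        return msgs, n_sources * (per_msg_overhead + payload)

    def aggregated_notification(n_sources, per_msg_overhead=450, payload=200,
                                aggregation_wait_ms=100):
        msgs = 1                                           # one aggregated NOTIFY
        return msgs, per_msg_overhead + n_sources * payload, aggregation_wait_ms

    for n in (1, 2, 3, 5, 10):
        i_msgs, i_bytes = individual_notifications(n)
        a_msgs, a_bytes, wait = aggregated_notification(n)
        print(f"{n:2d} sources: {i_msgs} msgs / {i_bytes} B individually, "
              f"{a_msgs} msg / {a_bytes} B aggregated (+{wait} ms wait)")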

    K. Papagiannaki
  • Augusto Ciuffoletti

    Infrastructure as a Service (IaaS) providers keep extending with new features the computing infrastructures they offer on a pay per use basis. In this paper we explore reasons and opportunities to include networking within such features, meeting the demand of users that need composite computing architectures similar to Grids.

    The introduction of networking capabilities within IaaSs would further increase the potential of this technology, and would also foster an evolution of Grids towards a confluence with it, incorporating the experience gained in that environment.

    Network monitoring emerges as a relevant feature of such virtual architectures, which must exhibit the distinguishing properties of the IaaS paradigm: scalability, dynamic configuration, accounting. Monitoring tools developed with the same purpose in Grids provide useful insights on problems and solutions.

  • kc claffy, Emile Aben, Jordan Auge, Robert Beverly, Fabian Bustamante, Benoit Donnet, Timur Friedman, Marina Fomenkov, Peter Haga, Matthew Luckie, and Yuval Shavitt

    On February 8-10, 2010, CAIDA hosted the second Workshop on Active Internet Measurements (AIMS-2) as part of our series of Internet Statistics and Metrics Analysis (ISMA) workshops. The goals of this workshop were to further our understanding of the potential and limitations of active measurement research and infrastructure in the wide-area Internet, and to promote cooperative solutions and coordinated strategies to addressing future data needs of the network and security research communities. The three-day workshop included presentations, group discussion and analysis, and focused interaction between participating researchers, operators, and policymakers from all over the world. This report describes the motivation and findings of the workshop, and reviews progress on recommendations developed at the 1st Active Internet Measurements Workshop in 2009 [18]. Slides from the workshop presentations are available at [9].

  • Anthony Rutkowski, Youki Kadobayashi, Inette Furey, Damir Rajnovic, Robert Martin, Takeshi Takahashi, Craig Schultz, Gavin Reid, Gregg Schudel, Mike Hird, and Stephen Adegbite

    The cybersecurity information exchange framework, known as CYBEX, is currently undergoing its first iteration of standardization efforts within ITU-T. The framework describes how cybersecurity information is exchanged between cybersecurity entities on a global scale and how the exchange is assured. The worldwide implementation of the framework will eventually minimize the disparate availability of cybersecurity information. This paper provides a specification overview, use cases, and the current status of CYBEX.

  • Balachander Krishnamurthy

    This is a brief journey across the Internet privacy landscape. After trying to convince you about the importance of the problem I will try to present questions of interest and how you might be able to apply your expertise to them.

  • Andreas Maeder and Nader Zein

    OFDMA will be the predominant technology for the air interface of broadband mobile wireless systems for the next decades. In recent years, OFDMA-based networks built on IEEE 802.16 and, increasingly, on 3GPP LTE have been rolled out for commercial use. This article gives an overview of the main challenges for the deployment and operation of state-of-the-art OFDMA networks, along with an outlook on future developments for 4G and beyond-4G networks.

  • S. Keshav

    Every scientific discipline builds on the past: new ideas invariably appear from the analysis, synthesis, and repudiation of prior work. Even an innovator like Sir Isaac Newton wrote to Robert Hooke on 15 February 1676: “If I have seen further it is only by standing on the shoulders of giants.” A necessary prerequisite for building on the past is for the body of archival work to be of the highest possible quality. Work that enters the communal memory should have no errors that the authors are aware of, or that can be rectified by careful peer review. Of course, no process can hope to eliminate errors altogether, but archival work should be free from errors that can be avoided with reasonable care.

    Conference publications, by their very nature, are susceptible to errors. The process is driven by strict deadlines, preventing authors from having a back-and-forth exchange with the reviewers in an attempt to fix problems. Program committee members, faced with a stack of 15 to 25 papers to review, naturally limit the depth of their reviews. Moreover, the selection of a paper for publication means only that the paper ranked amongst the best of those submitted for consideration by the program committee, rather than being a guarantee of absolute quality. Although shepherding does improve the quality of an accepted paper, it is only mildly effective when faced with the natural reluctance of authors to do additional work for a paper that has already been accepted for publication. For these reasons, a field that treats conferences as archival publications is building on a foundation of sand.

    The Computer Research Association (CRA), however, arguing on behalf of the entire field of computer science, states that: “The reason conference publication is preferred to journal publication, at least for experimentalists, is the shorter time to print (7 months vs 1-2 years), the opportunity to describe the work before one’s peers at a public presentation, and the more complete level of review (4-5 evaluations per paper compared to 2-3 for an archival journal) [Academic Careers, 94]. Publication in the prestige conferences is inferior to the prestige journals only in having significant page limitations and little time to polish the paper. In those dimensions that count most, conferences are superior.” [1]

    The two negatives for conferences identified by the CRA, page limits and ‘lack of polish’, are worth examining. Today, the IEEE/ACM Transactions on Networking (ToN) limits papers to ten free pages and a maximum of 14 pages [2]. This is scarcely longer than many conference papers, so the situation for journal papers is even worse than what the CRA states. On the other hand, what the CRA dismissively calls a ‘lack of polish’ sweeps many issues under the metaphorical carpet: inadequate experimental design, lack of rigour in statistical analysis, and incorrect proofs. It seems unwise to permit these severe problems in papers that we admit to archival status. Unfortunately, given the conference publication process, these errors are unavoidable. Perhaps it would be better to think of ways of improving the journal publication process instead.

    Let's start by considering the reasons why the CRA thinks conference publications are superior to journal publications. Two of the three reasons – number of reviews and time to publication – are easily remedied. There is no reason why journal editors could not ask for more reviews. Few conference papers receive more than three reviews and this number could be easily matched by journal editors. Second, the two-to-three year publication delay for a journal paper, according to Henning Schulzrinne, who has had a long history of dealing with this process at ToN, arises primarily from the delay in assigning papers to reviewers and the delay in the authors’ responses to the first round of reviewer comments. The equivalent processes at conferences take only a few weeks. Why can’t journals match that? As a contrasting data point, journals in civil engineering have review times of 90 days and publication delays of only three to five months [3], which is shorter than even conference publication delays in computer science.

    This leaves conferences with just one advantage over journals: that of permitting face-to-face meetings. Specifically, in his recent article in CACM [4], Lance Fortnow argues that conferences allow the community:
    * To rate publications and researchers.
    * To disseminate new research results and ideas.
    * To network, gossip, and recruit.
    * To discuss controversial issues in the community.

    These are tangible and valuable benefits. However, as Fortnow and others have argued, we could organize conferences where not all accepted papers are presented on stage, leaving some to be presented in the form of posters. This would result in better-attended, more inclusive conferences that meet the needs Fortnow identifies while not detracting from the archival value of journals. The informal poster format would also allow the presentation of early-stage ideas, which is valuable both to authors and to the research community. If posters are clearly marked, this would not detract from the prestige of full papers already published in the conference.

    I believe that we can begin to restore the integrity of archival publications by taking the following steps. First, we should increase the number and perceived prestige of posters at SIGCOMM-sponsored conferences, with more time set aside in the technical program for attendees to view posters. This would boost conference participation and better disseminate early-stage ideas. Second, we should re-engineer the journal publication process to cap publication delay at six months. Third, journal editors should allow papers to be as long as they need to be, instead of imposing an artificial page limit. Fourth, a greater emphasis on journal publications will be possible only if journals themselves are economically viable. If it turns out that print journals are unviable (a debatable point), we should consider moving to electronic-only journals or subsidizing the production cost from conference fees.

    As these changes are made, other synergies may also present themselves. For example, reducing the conference review load could free up resources for journal reviews. Similarly, increased conference attendance from a more generous poster acceptance policy could increase journal subsidies, and moving to electronic journals would not only reduce costs, but would also cut publication delay.

    The net result of these changes will be to restore the integrity of our archival work. We cannot afford to let this slip much longer: the time to act is now!

    [1] D. Patterson, J. Snyder, and J. Ullman. Evaluating computer scientists and engineers for promotion and tenure. http://www.cra.org/reports/tenure_review.html, August 1999.
    [2] http://www.ton.seas.upenn.edu/submissions.html#format
    [3] http://pubs.asce.org/editors/journal/resourceeditor/editorresponsibiliti...
    [4] Lance Fortnow. Viewpoint: Time for computer science to grow up. Communications of the ACM, Vol. 52, No. 8, pages 33-35.

  • Sardar Ali, Irfan Ul Haq, Sajjad Rizvi, Naurin Rasheed, Unum Sarfraz, Syed Ali Khayam, and Fauzan Mirza

    Real-time Anomaly Detection Systems (ADSs) use packet sampling to realize traffic analysis at wire speeds. While recent studies have shown that a considerable loss of anomaly detection accuracy is incurred due to sampling, solutions to mitigate this loss are largely unexplored. In this paper, we propose a Progressive Security-Aware Packet Sampling (PSAS) algorithm which enables a real-time inline anomaly detector to achieve higher accuracy by sampling larger volumes of malicious traffic than random sampling, while adhering to a given sampling budget. High malicious sampling rates are achieved by deploying inline ADSs progressively on a packet's path. Each ADS encodes a binary score (malicious or benign) of a sampled packet into the packet before forwarding it to the next hop node. The next hop node then samples packets marked as malicious with a higher probability. We analytically prove that under certain realistic conditions, irrespective of the intrusion detection algorithm used to formulate the packet score, PSAS always provides higher malicious packet sampling rates. To empirically evaluate the proposed PSAS algorithm, we simultaneously collect an Internet traffic dataset containing DoS and portscan attacks at three different deployment points in our university's network. Experimental results using four existing anomaly detectors show that PSAS, while having no extra communication overhead and extremely low complexity, allows these detectors to achieve significantly higher accuracies than those operating on random packet samples.
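
    The marking-and-boosting step is simple enough to sketch; the simulation below is only a toy illustration under our own assumed parameters (detector hit rate, budget, boost factor), not the paper's algorithm settings or analysis.

    # Toy sketch (Python) of progressive security-aware sampling: an upstream ADS
    # sets a one-bit malicious mark, and the next hop samples marked packets with
    # a boosted probability while keeping the overall rate near the budget.
    import random

    def upstream_ads(packet):
        # Imperfect stand-in detector: flags 70% of attack packets.
        packet["marked"] = packet["is_attack"] and random.random() < 0.7
        return packet

    def downstream_sampler(packet, budget=0.10, boost=5.0):
        # Marked packets are sampled `boost` times more often; unmarked ones a
        # bit less often, so the average rate stays close to the budget when
        # only a small share of traffic is marked.
        p = min(1.0, budget * boost) if packet["marked"] else budget * 0.9
        return random.random() < p

    random.seed(1)
    traffic = [{"is_attack": i % 50 == 0} for i in range(100_000)]   # ~2% attack
    sampled = [p for p in map(upstream_ads, traffic) if downstream_sampler(p)]
    share = sum(p["is_attack"] for p in sampled) / len(sampled)
    print(f"sampled {len(sampled)} packets; attack share in sample: {share:.1%}")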

    R. Teixeira
  • Roch Guérin and Kartik Hosanagar

    Although IPv6 has been the next generation Internet protocol for nearly 15 years, new evidence indicates that transitioning from IPv4 to IPv6 is about to become a more pressing issue. This paper attempts to quantify if and how such a transition may unfold. The focus is on “connectivity quality,” e.g., as measured by users’ experience when accessing content, as a possible incentive (or disincentive) for migrating to IPv6, and on “translation costs” (between IPv6 and IPv4) that Internet Service Providers will incur during this transition. The paper develops a simple model that captures some of the underlying interactions, and highlights the ambiguous role of translation gateways, which can either help or discourage IPv6 adoption. The paper is an initial foray into the complex and often puzzling issue of migrating the current Internet to a new version with which it is incompatible.

    S. Saroiu
  • Nandita Dukkipati, Tiziana Refice, Yuchung Cheng, Jerry Chu, Tom Herbert, Amit Agarwal, Arvind Jain, and Natalia Sutin

    TCP flows start with an initial congestion window of at most four segments, or approximately 4KB of data. Because most Web transactions are short-lived, the initial congestion window is a critical TCP parameter in determining how quickly flows can finish. While average network access speeds have increased dramatically over the past decade, the standard value of TCP’s initial congestion window has remained unchanged.

    In this paper, we propose to increase TCP’s initial congestion window to at least ten segments (about 15KB). Through large-scale Internet experiments, we quantify the latency benefits and costs of using a larger window, as functions of network bandwidth, round-trip time (RTT), bandwidth-delay product (BDP), and nature of applications. We show that the average latency of HTTP responses improved by approximately 10% with the largest benefits being demonstrated in high RTT and BDP networks. The latency of low bandwidth networks also improved by a significant amount in our experiments. The average retransmission rate increased by a modest 0.5%, with most of the increase coming from applications that effectively circumvent TCP’s slow start algorithm by using multiple concurrent connections. Based on the results from our experiments, we believe the initial congestion window should be at least ten segments, and that this change be investigated for standardization by the IETF.
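
    A crude slow-start calculation (ours, not the paper's model) already shows where the savings come from for short transfers: with an idealized doubling window and no losses, a larger initial window removes one or more round trips from a small HTTP response.

    # Rough arithmetic (Python): RTTs needed to deliver a short response under
    # idealized slow start, for initial windows of 4 vs. 10 segments.
    import math

    def rtts_to_deliver(segments, init_cwnd):
        delivered, cwnd, rtts = 0, init_cwnd, 0
        while delivered < segments:
            delivered += cwnd
            cwnd *= 2            # idealized slow start: no loss, no delayed ACKs
            rtts += 1
        return rtts

    response_kb = 30
    segments = math.ceil(response_kb * 1024 / 1460)   # ~1460-byte segments
    for iw in (4, 10):
        print(f"IW={iw:2d}: {rtts_to_deliver(segments, iw)} RTTs "
              f"for a {response_kb} KB response")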

    Y. Zhang
  • Zuoning Yin, Matthew Caesar, and Yuanyuan Zhou

    Software errors and vulnerabilities in core Internet routers have led to several high-profile attacks on the Internet infrastructure and numerous outages. Building an understanding of bugs in open-source router software is a first step towards addressing these problems. In this paper, we study router bugs found in two widely-used open-source router implementations. We evaluate the root cause of bugs, ease of diagnosis and detectability, ease of prevention and avoidance, and their effect on network behavior.

    S. Saroiu
  • Michael Buettner and David Wetherall

    We have developed a low cost software radio based platform for monitoring EPC Gen 2 RFID traffic. The Gen 2 standard allows for a range of PHY layer configurations and does not specify exactly how to compose protocol messages to inventory tags. This has made it difficult to know how well the standard works, and how it is implemented in practice. Our platform provides much needed visibility into Gen 2 systems by capturing reader transmissions using the USRP2 and decoding them in real-time using software we have developed and released to the public. In essence, our platform delivers much of the functionality of expensive (> $50,000) conformance testing products, with greater extensibility at a small fraction of the cost. In this paper, we present the design and implementation of the platform and evaluate its effectiveness, showing that it has better than 99% accuracy up to 3 meters. We then use the platform to study a commercial RFID reader, showing how the Gen 2 standard is realized, and indicate avenues for research at both the PHY and MAC layers.

    A. Chaintreau
  • Jon Crowcroft

    I’m so Bored of the Future Internet (FI). There are so many initiatives to look at the Internet’s Future, anyone would think that there was some tremendous threat like global warming, about to bring about its immediate demise, and that this would bring civilisation crashing down around our ears.

    The Internet has a great future behind it, of course. However, my thesis is that the Future Internet is about as relevant as Anthropogenic Global Warming (AGW), in the way it is being used to support various inappropriate activities. Remember that the start of all this was not the exhaustion of IPv4 address space, or the incredibly slow convergence time of BGP routes, or the problem of scaling router memory for FIBs. It was the US research community reacting to a minor (as in parochial) temporary problem of funding in Communications due to a slowdown within NSF and differing agendas within DARPA.

    It is not necessary to invoke all the hype and hysteria - it is both necessary and sufficient to talk about sustainable energy, and good technical communications research, development, deployment and operations.

    To continue the analogy between FI and AGW, what we really do not need is yet more climatologists with dodgy data curation methodologies (or ethnographers studying Internet governance).

    What we do need is some solid engineering, to address a number of problems the Internet has. However, this is in fact happening, and would not stop happening if the entire Future Internet flagship was kidnapped by aliens.
    “We don’t need no” government agency issuing top-down diktats about what to do and when. It won’t work, and it will be a massive waste of time, energy and other resources - i.e., like AGW, it will be a load of hot air :)

    On the other hand, there are a number of deeper lessons from the Internet Architecture which might prove useful in other domains, and in the bulk of this opinion piece, I give examples of these, applying the Postel and End-to-end principles to transport, energy, government information/vices.

  • Constantine Dovrolis, Krishna Gummadi, Aleksandar Kuzmanovic, and Sascha D. Meinrath

    Measurement Lab (M-Lab) is an open, distributed server platform for researchers to deploy active Internet measurement tools. The goal of M-Lab is to advance network research and empower the public with useful information about their broadband connections. By enhancing Internet transparency, M-Lab helps sustain a healthy, innovative Internet. This article describes M-Lab’s objectives, administrative organization, software and hardware infrastructure. It also provides an overview of the currently available measurement tools and datasets, and invites the broader networking research community to participate in the project.

  • S. Keshav

    This editorial is about some changes that will affect CCR and its community in the months ahead.

    Changes in the Editorial Board

    CCR Area Editors serve for a two-year term. Since the last issue, the terms of the following Area Editors have expired:
    • Kevin Almeroth, UC Santa Barbara, USA
    • Chadi Barakat, INRIA Sophia Antipolis, France
    • Dmitri Krioukov, CAIDA, USA
    • Jitendra Padhye, Microsoft Research, USA
    • Pablo Rodriguez, Telefonica, Spain
    • Darryl Veitch, University of Melbourne, Australia

    I would like to thank them for their devotion, time, and effort. They have greatly enhanced the quality and reputation of this publication.

    Taking their place is an equally illustrious team of researchers:
    • Augustin Chaintreau, Thomson Research, France
    • Stefan Saroiu, Microsoft Research, USA
    • Renata Teixeira, LIP6, France
    • Jia Wang, AT&T Research, USA
    • David Wetherall, University of Washington, USA

    Welcome aboard!

    Online Submission System

    This is the first issue of CCR created entirely with an online paper submission system rather than email. A slight variant of Eddie Kohler's HotCRP, the CCR submission site allows authors to submit papers at any time and to receive reviews as they are finalized.

    Moreover, they can respond to the reviews and conduct an anonymized conversation with their Area Editor. The system is currently batched: reviewer assignments and reviews are done once every three months. However, starting shortly, papers will be assigned to an Area Editor for review as they are submitted and the set of accepted papers will be published quarterly in CCR. We hope that this will allow authors to have the benefits of a 'rolling deadline,' similar to that pioneered by the VLDB journal.

    Reviewer Pool

    The reviewer pool is a set of volunteer reviewers, usually post-PhD, who are called upon by Area Editors to review papers in their special interests. The current set of reviewers in the pool can be found here: http://blizzard.cs.uwaterloo.ca/ccr/reviewers.html. If you would like to join the pool, please send mail to ccr-edit@uwaterloo.ca with your name, affiliation, interests, and contact URL.

    Page Limits

    We have had a six-page limit for the last year. The purpose of this limit was to prevent CCR from becoming a cemetery for dead papers. This policy has been a success: the set of technical papers in each issue has been vibrant and well-suited to this venue. However, we recognize that it is difficult to fit work into six pages. Therefore, from now on, although submissions will still be limited to six pages (unless permission is obtained in advance), if the reviewers suggest additional work, additional pages will be automatically granted.

    I hope that these changes will continue to make CCR a bellwether for our community. As always, your comments and suggestions for improvement are welcome.

  • Hilary Finucane and Michael Mitzenmacher

    We provide a detailed analysis of the Lossy Difference Aggregator, a recently developed data structure for measuring latency in a router environment where packet losses can occur. Our analysis provides stronger performance bounds than those given originally, and leads us to a model for how to optimize the parameters for the data structure when the loss rate is not known in advance by using competitive analysis.
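
    For readers unfamiliar with the data structure, the sketch below captures its core mechanism: sender and receiver keep matching arrays of (timestamp-sum, packet-count) buckets, and only buckets whose counts still agree at the end of an interval, i.e. buckets untouched by loss, contribute to the delay estimate. Bucket count, loss model and delay distribution are our own illustrative choices, and the sketch omits the multi-bank sampling the real LDA uses to tolerate higher loss rates.

    # Minimal single-bank Lossy Difference Aggregator sketch (Python).
    import random
    import zlib

    NUM_BUCKETS = 64

    def bucket_of(pkt_id):
        return zlib.crc32(str(pkt_id).encode()) % NUM_BUCKETS

    def run_lda(n_packets=10_000, loss_rate=0.01, mean_delay=5.0):
        tx = [[0.0, 0] for _ in range(NUM_BUCKETS)]   # [timestamp sum, count]
        rx = [[0.0, 0] for _ in range(NUM_BUCKETS)]
        for pkt in range(n_packets):
            b, t_send = bucket_of(pkt), float(pkt)
            tx[b][0] += t_send
            tx[b][1] += 1
            if random.random() >= loss_rate:          # packet not lost
                rx[b][0] += t_send + random.expovariate(1.0 / mean_delay)
                rx[b][1] += 1
        usable = [(r[0] - t[0], t[1])
                  for t, r in zip(tx, rx) if t[1] == r[1] and t[1] > 0]
        return sum(d for d, _ in usable) / sum(c for _, c in usable)

    random.seed(0)
    print(f"estimated mean delay: {run_lda():.2f} (true mean 5.00)")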

    Dmitri Krioukov
  • Marta Carbone and Luigi Rizzo

    Dummynet is a widely used link emulator, developed long ago to run experiments in user-configurable network environments. Since its original design, our system has been extended in various ways, and has become very popular in the research community due to its features and to the ability to emulate even moderately complex network setups on unmodified operating systems.

    We have recently made a number of extensions to the emulator, including loadable packet schedulers, support for better MAC layer modeling, its inclusion in PlanetLab, and the development of Linux and Windows versions in addition to the native FreeBSD and Mac OS X ones.

    The goal of this paper is to present in detail the current features of Dummynet, compare it with other emulation solutions, and discuss what operating conditions should be considered and what kind of accuracy to expect when using an emulation system.

    Kevin Almeroth
  • Hamed Haddadi

    Online advertising is currently the richest source of revenue for many Internet giants. The increased number of online businesses, specialized websites and modern profiling techniques have all contributed to an explosion in the income ad brokers derive from online advertising. The single biggest threat to this growth, however, is click-fraud. Trained botnets and individuals are hired by click-fraud specialists in order to maximize the revenue certain users earn from the ads they publish on their websites, or to launch an attack between competing businesses.

    In this note we wish to raise the networking research community's awareness of potential research areas within the online advertising field. As an example strategy, we present Bluff ads: a class of ads that join forces in order to increase the effort level for click-fraud spammers. Bluff ads are either targeted ads with irrelevant display text, or ads with highly relevant display text but irrelevant targeting information. They act as a litmus test for the legitimacy of the individual clicking on the ads. Together with standard threshold-based methods, fake ads help to decrease click-fraud levels.

    Adrian Perrig
  • Dirk Trossen, Mikko Sarela, and Karen Sollins

    The current Internet architecture focuses on communicating entities, largely leaving aside the information to be exchanged among them. However, trends in communication scenarios show that WHAT is being exchanged is becoming more important than WHO is exchanging it. Van Jacobson describes this as moving from interconnecting machines to interconnecting information. Any change to this part of the Internet needs argumentation as to why it should be undertaken in the first place. In this position paper, we identify four key challenges, namely information-centrism of applications, supporting and exposing tussles, increasing accountability, and addressing attention scarcity, that we believe an information-centric internetworking architecture could address better and that would make changing such a crucial part worthwhile. We recognize, however, that a much larger and more systematic debate about such a change is needed, underlined by factual evidence on the gains from such a change.

    Kevin Almeroth
  • Pei-chun Cheng, Xin Zhao, Beichuan Zhang, and Lixia Zhang

    BGP routing data collected by RouteViews and RIPE RIS have become an essential asset to both the network research and operation communities. However, it has long been speculated that the BGP monitoring sessions between operational routers and the data collectors fail from time to time. Such session failures lead to missing update messages as well as duplicate updates during session re-establishment, making analysis results derived from such data inaccurate. Since there is no complete record of these monitoring session failures, data users either have to sanitize the data discretionarily with respect to their specific needs or, more commonly, assume that session failures are infrequent enough and simply ignore them. In this paper, we present the first systematic assessment and documentary on BGP session failures of RouteViews and RIPE data collectors over the past eight years. Our results show that monitoring session failures are rather frequent: more than 30% of BGP monitoring sessions experienced at least one failure every month. Furthermore, we observed failures that happen to multiple peer sessions on the same collector around the same time, suggesting that the collector’s local problems are a major factor in the session instability. We also developed a web site as a community resource to publish all session failures detected for RouteViews and RIPE RIS data collectors to help users select and clean up BGP data before performing their analysis.
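
    One simple way such failures could surface in the update stream itself, sketched below purely as an illustration (the detection rule and thresholds are our own assumptions, not the method used in the paper), is to watch for bursts of announcements that exactly duplicate routes a peer has already advertised, the signature of a table re-dump after session re-establishment.

    # Illustrative heuristic (Python): flag bursts of duplicate announcements.
    from collections import defaultdict

    def find_reset_bursts(updates, window=60, min_dups=1000):
        """updates: time-ordered (timestamp, peer, prefix, attributes) tuples.
        Returns (peer, start_time) pairs where a duplicate burst crosses the
        threshold, hinting at a monitoring-session reset."""
        rib = defaultdict(dict)           # peer -> prefix -> last attributes
        recent = defaultdict(list)        # peer -> timestamps of recent duplicates
        bursts = []
        for ts, peer, prefix, attrs in updates:
            if rib[peer].get(prefix) == attrs:        # identical re-announcement
                times = recent[peer]
                times.append(ts)
                while times and times[0] < ts - window:
                    times.pop(0)
                if len(times) == min_dups:            # threshold just crossed
                    bursts.append((peer, times[0]))
            rib[peer][prefix] = attrs
        return bursts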

    Jitendra Padhye
  • David R. Choffnes and Fabian E. Bustamante

    Today’s open platforms for network measurement and distributed system research, which we collectively refer to as testbeds in this article, provide opportunities for controllable experimentation and evaluations of systems at the scale of hundreds or thousands of hosts. In this article, we identify several issues with extending results from such platforms to Internet wide perspectives. Specifically, we try to quantify the level of inaccuracy and incompleteness of testbed results when applied to the context of a large-scale peer-to-peer (P2P) system. Based on our results, we emphasize the importance of measurements in the appropriate environment when evaluating Internet-scale systems.

    Pablo Rodriguez
  • Diana Joumblatt, Renata Teixeira, Jaideep Chandrashekar, and Nina Taft

    There is an amazing paucity of data that is collected directly from users’ personal computers. One key reason for this is the perception among researchers that users are unwilling to participate in such a data collection effort. To understand the range of opinions on matters that occur with end-host data tracing, we conducted a survey of 400 computer scientists. In this paper, we summarize and share our findings.

  • k. c. claffy

    On September 23, 2009, CAIDA hosted a virtual Workshop on Internet Economics to bring together network technology and policy researchers, commercial Internet facilities and service providers, and communications regulators to explore a common goal: framing a concrete agenda for the emerging but empirically stunted field of Internet infrastructure economics. With participants stretching from Washington D.C. to Queensland, Australia, we used the electronic conference hosting facilities supported by the California Institute of Technology (CalTech) EVO Collaboration Network. This report describes the workshop discussions and presents relevant open research questions identified by participants.

  • Ratul Mahajan

    This paper is based on a talk that I gave at CoNEXT 2009. Inspired by Hal Varian’s paper on building economic models, it describes a research method for building computer systems. I find this method useful in my work and hope that some readers will find it helpful as well.

  • Matthew Caesar, Martin Casado, Teemu Koponen, Jennifer Rexford, and Scott Shenker

    This paper advocates a different approach to the routing convergence problem: side-stepping it by avoiding it in the first place! Rather than recomputing paths after temporary topology changes, we argue for a separation of timescales between offline computation of multiple diverse paths and online spreading of load over these paths. We believe decoupling failure recovery from path computation leads to networks that are inherently more efficient, more scalable, and easier to manage.
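
    A small sketch of that separation, under our own toy topology and hashing scheme (not the authors' system), is shown below: the offline step peels off link-disjoint paths, and the online step merely hashes a flow onto the first precomputed path that avoids currently failed links, with no recomputation.

    # Offline diverse-path computation plus online path selection (Python sketch).
    import copy
    from collections import deque

    def bfs_path(graph, src, dst):
        prev, q = {src: None}, deque([src])
        while q:
            u = q.popleft()
            if u == dst:
                path = []
                while u is not None:
                    path.append(u)
                    u = prev[u]
                return path[::-1]
            for v in graph.get(u, []):
                if v not in prev:
                    prev[v] = u
                    q.append(v)
        return None

    def diverse_paths(graph, src, dst, k=2):
        """Offline: greedily peel off up to k link-disjoint paths."""
        g, paths = copy.deepcopy(graph), []
        for _ in range(k):
            p = bfs_path(g, src, dst)
            if p is None:
                break
            paths.append(p)
            for a, b in zip(p, p[1:]):        # remove used links (both directions)
                g[a].remove(b)
                g[b].remove(a)
        return paths

    def pick_path(paths, flow_id, failed_links=frozenset()):
        """Online: hash the flow over the precomputed paths, skipping any path
        that crosses a currently failed link; no route recomputation."""
        start = flow_id % len(paths)
        for p in paths[start:] + paths[:start]:
            if not any((a, b) in failed_links or (b, a) in failed_links
                       for a, b in zip(p, p[1:])):
                return p
        return None

    topo = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
    paths = diverse_paths(topo, "A", "D")
    print(paths)                                          # [['A','B','D'], ['A','C','D']]
    print(pick_path(paths, 0, failed_links={("B", "D")})) # falls back to ['A','C','D']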

  • Constantine Dovrolis and J. Todd Streelman

    There has recently been significant research interest in understanding the evolution of the current Internet, as well as in designing clean-slate Future Internet architectures. Clearly, even when network architectures are designed from scratch, they have to evolve as their environment (technological constraints, service requirements, applications, economic conditions, etc.) always changes. A key question then is: what makes a network architecture evolvable? What determines the ability of a network architecture to evolve as its environment changes? In this paper, we review some relevant ideas about evolvability from the biological literature. We examine the role of robustness and modularity in evolution, and their relation to evolvability. We also discuss evolutionary kernels and punctuated equilibria, two important concepts that may be relevant to the so-called ossification of the core Internet protocols. Finally, we examine optimality, a design objective that is often of primary interest in engineering but that does not seem to be abundant in biology.

  • Hamed Haddadi, Tristan Henderson, and Jon Crowcroft

    On numerous occasions, trips to the facilities coincide with an important mobile phone call. Due to the sleek and polished nature of modern phones, attempting to promptly deal with such calls can occasionally lead to the phone sliding through the owner’s hands, surrendering to the force of gravity and flying down the hole. This is a disaster, and often an expensive incident. It can also be a health and safety hazard, with the owner desperately attempting to retrieve their phone and re-using it.

    This paper provides a first attempt at a cell phone recovery system using the modern functionalities of Toto Japanese toilets. In our approach, the phone is calmly recovered, sanitized and returned to the user. This can all happen without the call even being dropped, with the possibility of secure backup of the user data via embedded sensor and Wi-Fi network connectivity in the toilet. We envision that such an approach will increase the collaboration between Japanese, European and American mobile operators, network researchers and hardware manufacturers.

  • Martin Burkhart, Dominik Schatzmann, Brian Trammell, Elisa Boschi, and Bernhard Plattner

    In recent years, academic literature has analyzed many attacks on network trace anonymization techniques. These attacks usually correlate external information with anonymized data and successfully de-anonymize objects with distinctive signatures. However, analyses of these attacks still underestimate the real risk of publishing anonymized data, as the most powerful attack against anonymization is traffic injection. We demonstrate that performing live traffic injection attacks against anonymization on a backbone network is not difficult, and that potential countermeasures against these attacks, such as traffic aggregation, randomization or field generalization, are not particularly effective. We then discuss tradeoffs of the attacker and defender in the so-called injection attack space. An asymmetry in the attack space significantly increases the chance of a successful de-anonymization through lengthening the injected traffic pattern. This leads us to re-examine the role of network data anonymization. We recommend a unified approach to data sharing, which uses anonymization as a part of a technical, legal, and social approach to data protection in the research and operations communities.

    Dmitri Krioukov
  • Yao Liang and Wei Peng

    We present a sophisticated framework to systematically explore the temporal correlation in environmental monitoring wireless sensor networks. The presented framework optimizes lossless data compression in communications given the resource constraints of sensor nodes. The insights and analyses obtained from the framework can directly lead to innovative and better designs of data gathering protocols for wireless sensor networks operated in noisy environments, dramatically reducing energy consumption.
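
    As a tiny illustration of the underlying intuition (our own toy example, not the paper's framework), delta-encoding a slowly varying reading before handing it to a generic lossless compressor already shrinks the payload substantially, because temporally correlated samples produce small, highly compressible differences.

    # Delta-encode a synthetic temperature-like series, then compress (Python).
    import math
    import struct
    import zlib

    readings = [int(2000 + 50 * math.sin(i / 40) + (i % 3)) for i in range(1000)]

    raw = b"".join(struct.pack(">h", r) for r in readings)
    deltas = [readings[0]] + [b - a for a, b in zip(readings, readings[1:])]
    encoded = b"".join(struct.pack(">h", d) for d in deltas)

    # Prints raw size, compressed raw, compressed deltas; the last is smallest.
    print(len(raw), len(zlib.compress(raw)), len(zlib.compress(encoded)))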

    Martin May
  • Daniel Halperin, Wenjun Hu, Anmol Sheth, and David Wetherall

    The use of multiple antennas and MIMO techniques based on them is the key feature of 802.11n equipment that sets it apart from earlier 802.11a/g equipment. It is responsible for superior performance, reliability and range. In this tutorial, we provide a brief introduction to multiple antenna techniques. We describe the two main classes of those techniques, spatial diversity and spatial multiplexing. To ground our discussion, we explain how they work in 802.11n NICs in practice.

  • Parag Kulkarni, Woon Hau Chin, and Tim Farnham

    Femtocell access points (FAPs), also popularly known as Home Base Stations, are small base stations for use within indoor environments to improve coverage and capacity. FAPs have a limited range (e.g., limited to a home or office area) but offer immense capacity improvements for the network, owing to the ability to reuse a frequency more often as a result of smaller coverage areas. Because there may be thousands of these devices and the nature of their deployment is ad hoc, it may not be possible to carry out elaborate frequency planning like that in traditional cellular networks. This paper aims to outline the radio resource management considerations within the context of femtocells, the broader objective being to initiate a discussion and encourage research in the areas highlighted.

  • Brian E. Carpenter and Craig Partridge

    This note describes the various peer review processes applied to Internet Requests for Comments (RFCs) over a number of years, and suggests that these have been up to normal scholarly standards since at least 1992. The authors believe that these documents should be considered the equivalent of scholarly publications.

  • Dah Ming Chiu and Tom Z.J. Fu

    This study takes papers from a selected set of computer networking conferences and journals spanning the past twenty years (1989-2008) and produces various statistics to show how our community publishes papers, and how this process is changing over the years. We observe rapid growth in the rate of publications, venues, citations, authors, and number of co-authors. We explain how these quantities are related, in particular how they change over time and the reasons behind these changes. The widely accepted model to explain the power-law distribution of paper citations is preferential attachment. We propose an extension and refinement that suggests elapsed time is also a factor in determining which papers get cited. We compare the selected venues based on citation count, and discuss how we might interpret these comparisons in terms of the roles played by different venues and the ability of venues and citation counts to predict impact. The treatment of these issues is general and can be applied to study publication patterns in other research communities. The larger goal of this study is to generate discussion about our publication system, and to work towards a vision that transforms our publication system for better scalability and effectiveness.

  • Nathan Farrington, Nikhil Handigol, Christoph Mayer, Kok-Kiong Yap, and Jeffrey C. Mogul

    WREN 2009, the Workshop on Research on Enterprise Networking, was held on August 21, 2009, in conjunction with SIGCOMM 2009 in Barcelona. WREN focussed on research challenges and results specific to enterprise and data-center networks. Details about the workshop, including the organizers and the papers presented, are at http://conferences.sigcomm.org/sigcomm/2009/workshops/wren/index.php. Approximately 48 people registered to attend WREN.

    The workshop was structured to encourage a lot of questions and discussion. To record what was said, four volunteer scribes (Nathan Farrington, Nikhil Handigol, Christoph Mayer, and Kok-Kiong Yap) took notes. This report is a merged and edited version of their notes. Please realize that the result, while presented in the form of quotations, is at best a paraphrasing of what was actually said, and in some cases may be mistaken. Also, some quotes might be mis-attributed, and some discussion has been lost, due to the interactive nature of the workshop.

    The second instance of WREN will be combined with the Internet Network Management Workshop (INM), in conjunction with NSDI 2010; see http://www.usenix.org/event/inmwren10/cfp/ for deadlines and additional information.

    Also note that two papers from WREN were re-published in the January 2010 issue of Computer Communication Review: “Understanding Data Center Traffic Characteristics,” by Theophilus A Benson, Ashok Anand, Aditya Akella, and Ming Zhang, and “Remote Network Labs: An On-Demand Network Cloud for Configuration Testing,” by Huan Liu and Dan Orban.

  • Ken Keys

    The well-known traceroute probing method discovers links between interfaces on Internet routers. IP alias resolution, the process of identifying IP addresses belonging to the same router, is a critical step in producing Internet topology maps. We compare the performance and accuracy of known alias resolution techniques, propose some enhancements, and suggest a practical combination of techniques that can produce the most accurate and complete IP-to-router mapping at macroscopic scale.

  • James Kelly, Wladimir Araujo, and Kallol Banerjee

    The creation of services on IP networks is a lengthy process. The development time is further increased if this involves the equipment manufacturer adding third-party technology in their product. In this work we describe how the JUNOS SDK (part of Juniper Networks Partner Solution Development Platform) facilitates innovation and can be used to considerably shorten the development cycle for the creation of services based on embedding third-party software into Juniper Networks routers. We describe how the JUNOS SDK exposes programmatic interfaces to enable packet manipulation by third-party software and how it can be used as a common platform for deploying unique services through the combination of multiple components from multiple parties.

  • Xu Chen, Yun Mao, Z. Morley Mao, and Jacobus Van der Merwe

    Network management operations are complicated, tedious and error-prone, requiring significant human involvement and expert knowledge. In this paper, we first examine the fundamental components of management operations and argue that the lack of automation is due to a lack of programmability at the right level of abstraction. To address this challenge, we present DECOR, a database-oriented, declarative framework for automated network management. DECOR models router configuration and any generic network status as relational data in a conceptually centralized database. As such, network management operations can be represented as a series of transactional database queries, which provide the benefits of atomicity, consistency and isolation. The rule-based language in DECOR provides the flexible programmability to specify and enforce network-wide management constraints and to achieve high-level task scheduling. We describe the design rationale and architecture of DECOR and present some preliminary examples applying our approach to common network management tasks.
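
    Our reading of this design can be sketched with an ordinary relational database standing in for DECOR's conceptually centralized store; the schema, constraint and helper function below are illustrative assumptions, not DECOR's actual language or data model.

    # Router state as relational rows; a management operation as a transaction
    # that a violated network-wide constraint rolls back (Python + sqlite3).
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE interface (router TEXT, name TEXT, ip TEXT UNIQUE)")
    db.executemany("INSERT INTO interface VALUES (?, ?, ?)", [
        ("r1", "ge-0/0/0", "10.0.0.1"),
        ("r2", "ge-0/0/0", "10.0.0.2"),
    ])

    def assign_address(router, ifname, ip):
        # The UNIQUE constraint plays the role of a declarative, network-wide
        # "no duplicate addresses" rule; the transaction gives atomicity.
        try:
            with db:   # commits on success, rolls back on any exception
                db.execute("INSERT INTO interface VALUES (?, ?, ?)",
                           (router, ifname, ip))
            return True
        except sqlite3.IntegrityError:
            return False

    print(assign_address("r3", "ge-0/0/0", "10.0.0.3"))   # True: committed
    print(assign_address("r4", "ge-0/0/0", "10.0.0.1"))   # False: rolled back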
