Computer Communication Review: Papers

  • Supasate Choochaisri, Kittipat Apicharttrisorn, Kittiporn Korprasertthaworn, Pongpakdi Taechalertpaisarn, Chalermek Intanagonwiwat

    Desynchronization is useful for scheduling nodes to perform tasks at different times. This property is desirable for resource sharing, TDMA scheduling, and collision avoidance. Inspired by robotic circular formation, we propose DWARF (Desynchronization With an ARtificial Force field), a novel technique for desynchronization in wireless networks. Each node exerts artificial forces that repel its neighbors, so that nodes perform tasks at different time phases. Nodes with closer time phases exert stronger repelling forces on each other in the time domain. Each node adjusts its time phase proportionally to the forces it receives. Once the received forces are balanced, the nodes are desynchronized. We evaluate our implementation of DWARF on TOSSIM, a simulator for wireless sensor networks. The simulation results indicate that DWARF incurs significantly lower desynchronization error and scales much better than existing approaches.

    Bhaskaran Raman
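
    The force-field adjustment described in the abstract above can be pictured with a small sketch. The Python fragment below is only an approximation of the idea, not DWARF itself: the inverse-distance force law, the step size K, the step clamp, and the all-to-all neighborhood are assumptions made for illustration.

      def phase_diff(a, b):
          # Signed shortest distance from phase b to phase a on the unit circle [0, 1).
          d = a - b
          if d > 0.5:
              d -= 1.0
          elif d < -0.5:
              d += 1.0
          return d

      def desync_step(phases, K=0.001, max_step=0.02):
          # One round: each node moves in proportion to the sum of repulsive forces,
          # with closer neighbors pushing harder (force ~ 1 / phase distance).
          updated = []
          for i, p in enumerate(phases):
              force = 0.0
              for j, q in enumerate(phases):
                  d = phase_diff(p, q)
                  if i != j and d != 0.0:
                      force += 1.0 / d                        # sign pushes p away from q
              step = max(-max_step, min(max_step, K * force)) # keep updates stable
              updated.append((p + step) % 1.0)
          return updated

      # Repeated application spreads the phases roughly evenly around the cycle.
      phases = [0.00, 0.02, 0.05, 0.60]
      for _ in range(2000):
          phases = desync_step(phases)
      print(sorted(phases))
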
  • André Zúquete, Carlos Frade

    The IPv4 address space is quickly getting exhausted, putting a tremendous pressure on the adoption of even more NAT levels or IPv6. On the other hand, many authors propose the adoption of new Internet addressing capabilities, namely content-based addressing, to complement the existing IP host-based addressing. In this paper we propose the introduction of a location layer, between transport and network layers, to address both problems. We keep the existing IPv4 (or IPv6) host-based core routing functionalities, while we enable hosts to become routers between separate address spaces by exploring the new location header. For a proof of concept, we modified the TCP/IP stack of a Linux host to handle our new protocol layer and we designed and conceived a novel NAT box to enable current hosts to interact with the modified stack.

    David Wetherall
  • Kate Lin, Yung-Jen Chuang, Dina Katabi

    In many wireless systems, it is desirable to precede a data transmission with a handshake between the sender and the receiver. For example, RTS-CTS is a handshake that prevents collisions due to hidden terminals. Past work, however, has shown that the overhead of such a handshake is too high for practical deployments. We present a new approach to wireless handshake that is almost overhead-free. The key idea underlying the design is to separate a packet's PLCP header and MAC header from its body and have the sender and receiver first exchange the data and ACK headers, then exchange the bodies of the data and ACK packets without additional headers. The header exchange provides a natural handshake at almost no extra cost. We empirically evaluate the feasibility of such a lightweight handshake and some of its applications. Our testbed evaluation shows that header-payload separation does not hamper packet decodability. It also shows that a light handshake enables hidden terminals, i.e., nodes that interfere with each other without RTS/CTS, to experience a collision rate of less than 4%. Furthermore, it improves the accuracy of bit rate selection in bursty and mobile environments, producing a throughput gain of about 2x.

    Bhaskaran Raman
  • Cheng Huang, Ivan Batanov, Jin Li

    Internet services are often deployed in multiple (tens to hundreds of) geographically distributed data centers. They rely on Global Traffic Management (GTM) solutions to direct clients to the optimal data center based on a number of criteria such as network performance, geographic location, and availability. The GTM solutions, however, have a fundamental design limitation in their ability to accurately map clients to data centers: they use the IP address of the local DNS resolver (LDNS) used by a client as a proxy for the true client identity, which in some cases causes suboptimal performance. This issue is known as the client-LDNS mismatch problem. We argue that recent proposals to address the problem suffer from serious limitations. We then propose a simple new solution, named ``FQDN extension'', which can solve the client-LDNS mismatch problem completely. We build a prototype system and demonstrate the effectiveness of the proposed solution. Using JavaScript, the solution can be deployed immediately for some online services, such as Web search, without modifying either the client or the local resolver.

    Renata Teixeira
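
    As a rough illustration of how an "FQDN extension" might convey the true client to the authoritative GTM name server despite the LDNS indirection, the sketch below encodes the client's address as an extra DNS label. The label format, domain name, and deployment details are assumptions made for illustration only, not the paper's exact scheme.

      def extend_fqdn(client_ip: str, service_fqdn: str = "search.example.com") -> str:
          # e.g. "203.0.113.7" -> "203-0-113-7.search.example.com"; a page served to the
          # client could have JavaScript fetch resources from this extended name, so the
          # authoritative server learns the client even though the query arrives via the LDNS.
          label = client_ip.replace(".", "-")
          return f"{label}.{service_fqdn}"

      print(extend_fqdn("203.0.113.7"))   # 203-0-113-7.search.example.com
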
  • Shane Alcock, Perry Lorier, Richard Nelson

    This paper introduces libtrace, an open-source software library for reading and writing network packet traces. Libtrace offers performance and usability enhancements compared to other libraries that are currently used. We describe the main features of libtrace and demonstrate how the libtrace programming API enables users to easily develop portable trace analysis tools without needing to consider the details of the capture format, file compression or intermediate protocol headers. We compare the performance of libtrace against other trace processing libraries to show that libtrace offers the best compromise between development effort and program run time. As a result, we conclude that libtrace is a valuable contribution to the passive measurement community that will aid the development of better and more reliable trace analysis and network monitoring tools.

    AT&T Labs
  • Pamela Zave

    Correctness of the Chord ring-maintenance protocol would mean that the protocol can eventually repair all disruptions in the ring structure, given ample time and no further disruptions while it is working. In other words, it is "eventual reachability." Under the same assumptions about failure behavior as made in the Chord papers, no published version of Chord is correct. This result is based on modeling the protocol in Alloy and analyzing it with the Alloy Analyzer. By combining the right selection of pseudocode and textual hints from several papers, and fixing flaws revealed by analysis, it is possible to get a version that may be correct. The paper also discusses the significance of these results, describes briefly how Alloy is used to model and reason about Chord, and compares Alloy analysis to model-checking.

    David Wetherall
  • Juan Camilo Cardona Restrepo, Rade Stanojevic

    In spite of the tremendous amount of measurement efforts on understanding the Internet as a global system, little is known about the 'local' Internet (among ISPs inside a region or a country) due to limitations of the existing measurement tools and scarce data. In this paper, empirical in nature, we characterize the evolution of one such ecosystem of local ISPs by studying the interactions between ISPs happening at the Slovak Internet eXchange (SIX). By crawling the web archive we collect 158 snapshots (spanning 14 years) of the SIX website, with the relevant data that allows us to study the dynamics of the Slovak ISPs in terms of: the local ISP peering, the traffic distribution, the port capacity/utilization and the local AS-level traffic matrix. Examining our data revealed a number of invariant and dynamic properties of the studied ecosystem that we report in detail.

    Yin Zhang
  • Eric Keller, Michael Schapira, Jennifer Rexford

    Traditional traffic engineering adapts the routing of traffic within the network to maximize performance. We propose a new approach that also adaptively changes where traffic enters and leaves the network—changing the “traffic matrix”, and not just the intradomain routing configuration. Our approach does not affect traffic patterns and BGP routes seen in neighboring networks, unlike conventional inter-domain traffic engineering where changes in BGP policies shift traffic and routes from one edge link to another. Instead, we capitalize on recent innovations in edge-link migration that enable seamless rehoming of an edge link to a different internal router in an ISP backbone network—completely transparent to the router in the neighboring domain. We present an optimization framework for traffic engineering with migration and develop algorithms that determine which edge links should migrate, where they should go, and how often they should move. Our experiments with Internet2 traffic and topology data show that edge-link migration allows the network to carry 18.8% more traffic (at the same level of performance) over optimizing routing alone.

    Telefonica Research
  • Craig A. Shue, Andrew J. Kalafut, Mark Allman, Curtis R. Taylor

    There are many deployed approaches for blocking unwanted traffic, either once it reaches the recipient's network, or closer to its point of origin. One of these schemes is based on the notion of traffic carrying capabilities that grant access to a network and/or end host. However, leveraging capabilities results in added complexity and additional steps in the communication process: before communication starts, a remote host must be vetted and given a capability to use in the subsequent communication. In this paper, we propose a lightweight mechanism that turns the answers provided by DNS name resolution - which Internet communication broadly depends on anyway - into capabilities. While not achieving an ideal capability system, we show the mechanism can be built from commodity technology and is therefore a pragmatic way to gain some of the key benefits of capabilities without requiring new infrastructure.

    Stefan Saroiu
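
    One way to picture the mechanism sketched in the abstract above: a gateway that observes host H resolving a name to address A treats that DNS answer as a capability and admits traffic between H and A for the record's lifetime. The data structure and policy below are illustrative assumptions, not the authors' design.

      import time

      class DnsCapabilityTable:
          def __init__(self):
              self._allowed = {}                      # (client_ip, server_ip) -> expiry time

          def record_dns_answer(self, client_ip, server_ip, ttl):
              # Called when the gateway sees a DNS response delivered to client_ip.
              self._allowed[(client_ip, server_ip)] = time.time() + ttl

          def permits(self, client_ip, server_ip):
              # Should a packet between client_ip and server_ip be forwarded?
              expiry = self._allowed.get((client_ip, server_ip))
              return expiry is not None and expiry > time.time()

      caps = DnsCapabilityTable()
      caps.record_dns_answer("192.0.2.10", "198.51.100.5", ttl=300)
      assert caps.permits("192.0.2.10", "198.51.100.5")
      assert not caps.permits("192.0.2.10", "203.0.113.99")
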
  • Yingdi Yu, Duane Wessels, Matt Larson, Lixia Zhang

    Operators of high-profile DNS zones utilize multiple authority servers for performance and robustness. We conducted a series of trace-driven measurements to understand how current caching resolver implementations distribute queries among a set of authority servers. Our results reveal areas for improvement in the ``apparently sound'' server selection schemes used by some popular implementations. In some cases, the selection schemes lead to sub-optimal behavior of caching resolvers, e.g., sending a significant amount of queries to unresponsive servers. We believe that most of these issues are caused by careless implementations, such as continuing to decrease a server's SRTT after the server has been selected, treating unresponsive servers as responsive ones, and using a constant SRTT decay factor. For the problems identified in this work, we recommend corresponding solutions.

    Renata Teixeira
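
    The selection behavior the abstract examines can be pictured with a simplified SRTT-based selector: query the authority server with the lowest smoothed RTT, update its estimate from the measured response time, and decay the estimates of the servers that were not chosen so they are occasionally retried. The EWMA weight and decay constant below are illustrative assumptions, not values from any particular resolver implementation.

      class ServerSelector:
          def __init__(self, servers, alpha=0.3, decay=0.98):
              self.srtt = {s: 0.0 for s in servers}    # 0.0 means "not yet measured"
              self.alpha = alpha                        # EWMA weight for new RTT samples
              self.decay = decay                        # multiplicative decay for unchosen servers

          def pick(self):
              # Prefer unmeasured servers (SRTT 0.0), then the lowest SRTT.
              return min(self.srtt, key=lambda s: self.srtt[s])

          def update(self, chosen, measured_rtt):
              old = self.srtt[chosen]
              self.srtt[chosen] = measured_rtt if old == 0.0 else \
                  (1 - self.alpha) * old + self.alpha * measured_rtt
              for s in self.srtt:
                  if s != chosen:
                      self.srtt[s] *= self.decay        # lets other servers become attractive again
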
  • Benoit Donnet, Matthew Luckie, Pascal Mérindol, Jean-Jacques Pansiot

    Operators have deployed Multiprotocol Label Switching (MPLS) in the Internet for over a decade. However, its impact on Internet topology measurements is not well known, and it is possible for some MPLS configurations to lead to false router-level links in maps derived from traceroute data. In this paper, we introduce a measurement-based classification of MPLS tunnels, identifying tunnels where IP hops are revealed but not explicitly tagged as label switching routers, as well as tunnels that obscure the underlying path. Using a large-scale dataset we collected, we show that paths frequently cross MPLS tunnels in today's Internet: in our data, at least 30% of the paths we tested traverse an MPLS tunnel. We also propose and evaluate several methods to reveal MPLS tunnels that are not explicitly flagged as such: we discover that their fraction is significant (up to half the explicit tunnel quantity) but most of them do not obscure IP-level topology discovery.

    Yin Zhang
  • Hamed Haddadi, Richard Mortier, Steven Hand

    People everywhere are generating ever-increasing amounts of data, often without being fully aware of who is recording what about them. For example, initiatives such as mandated smart metering, expected to be widely deployed in the UK in the next few years and already attempted in countries such as the Netherlands, will generate vast quantities of detailed, personal data about huge segments of the population. Neither the impact nor the potential of this society-wide data gathering is well understood. Once data is gathered, it will be processed -- and society is only now beginning to grapple with the consequences for privacy, both legal and ethical, of these actions, e.g., Brown et al. There is the potential for great harm through, e.g., invasion of privacy; but also the potential for great benefits by using this data to make more efficient use of resources, as well as releasing its vast economic potential. In this editorial we briefly discuss work in this area, the challenges still faced, and some potential avenues for addressing them.

  • Martin Arlitt

    Time tends to pass more quickly than we would like. Sometimes it is helpful to reflect on what you have accomplished, and to derive what you have learned from the experiences. These "lessons learned" may then be leveraged by yourself or others in the future. Occasionally, an external event will motivate this self reflection. For me, it was the 50th anniversary reunion of the St. Walburg Eagles, held in July 2011. The Eagles are a full-contact (ice) hockey team I played with between 1988 and 1996 (the Eagles ceased operations twice during this period, which limited me to four seasons playing with them), while attending university. What would I tell my friends and former teammates that I had been doing for the past 15 years? After some thought, I realized that my time as an Eagle had prepared me for a research career, in ways I would never have imagined. This article (an extended version with color photos is available in [1]) shares some of these similarities, to motivate others to reflect on their own careers and achievements, and perhaps make proactive changes as a result.

  • Jon Crowcroft

    The Internet is not a Universal service, but then neither is democracy. So should the Internet be viewed as a right? It's certainly sometimes wrong. In this brief article, we depend on the Internet to reach our readers, and we hope that they don't object to our doing that.

  • Charles Kalmanek

    It has become a truism that innovation in the information and communications technology (ICT) fields is occurring faster than ever before. This paper posits that successful innovation requires three essential elements: a need, know-how or knowledge, and favorable economics. The paper examines this proposition by considering three technical areas in which there has been significant innovation in recent years: server virtualization and the cloud, mobile application optimization, and mobile speech services. An understanding of the elements that contribute to successful innovation is valuable to anyone that does either fundamental or applied research in fields of information and communication technology.

  • kc claffy

    The second Workshop on Internet Economics [2], hosted by CAIDA and Georgia Institute of Technology on December 1-2, 2011, brought together network technology and policy researchers with providers of commercial Internet facilities and services (network operators) to further explore the common objective of framing an agenda for the emerging but empirically stunted field of Internet infrastructure economics. This report describes the workshop discussions and presents relevant open research questions identified by its participants.

  • S. Keshav

    I'd like to devote this editorial to a description of the process we use to select and publish papers submitted to CCR. CCR publishes two types of papers: technical papers and editorials.  I'll first describe the process for technical papers then for editorials.

    Technical papers are submitted to the CCR online website, which runs a modified version of Eddie Kohler's HOTCRP system. Authors are required to submit a paper in the standard SIGCOMM format with the subject classifiers and keywords required by the ACM Digital Library. We restrict technical papers to six pages for two reasons. First, it prevents rejected conference papers from being trivially resubmitted to CCR. Second, it limits the load on area editors and reviewers, which is important given the quick turnaround we'd like for CCR. Some papers do need more than six pages. If so, authors should write to me and, if I find their argument convincing, I usually grant the request immediately. I also add a note to the online system so that Area Editors do not reject the paper for being over-length.
    Once a paper is in the system, I assign it to an Area Editor for review. If I have free time, I do this immediately after paper submission. If I'm backed up, which is true more often than I'd like, this happens in the week following the quarterly submission deadlines of March 1, June 1, September 1, and December 1. Area Editors are given seven weeks to obtain up to five reviews. Most papers receive comments from at least three reviewers but papers that are clearly not acceptable may be rejected with a single review.
    Reviewers judge papers along three axes: timeliness, clarity, and novelty; the range of scores is from one to five. Reviewers also summarize the contribution of the paper and provide detailed comments to improve paper quality. Finally, each reviewer suggests a potential paper outcome: accept, revise-and-resubmit, or reject. CCR's goal is to accept high-quality papers that are both novel and timely. Technical accuracy is necessary, of course, but we do not require papers to be as thorough in their evaluation as a flagship conference or a journal.
    Reviewers use the CCR online system to submit their reviews. After finalizing their own review, they are permitted to read other reviews and, if they wish, update their review. This tends to dampen outliers in review scores.
    Authors are automatically informed when each review is finalized and are permitted to rebut the review online. Some authors immediately rebut each review; others wait for all their reviews before responding. Authors typically respond to reviews with painstakingly detailed responses; it is truly remarkable to see how carefully each reviewer criticism is considered in these responses! Author rebuttals are viewable both by reviewers and the assigned Area Editor. Although reviewers are free to comment on the rebuttals or even modify their reviews based on the rebuttal, this option is seldom exercised.
    After seven to eight weeks it is time for Area Editors to make editorial decisions. The Area Editor reads the paper, its reviews, and the author rebuttals and decides whether the paper is to be rejected, accepted, or revised and resubmitted. The decision is entered as a comment to the paper. This decision may or may not be signed by the Area Editor, as they wish.
    If the paper is rejected, I send a formal rejection letter to the authors and the paper is put to rest. If the paper is accepted and the revisions are minor, then the authors are asked to prepare the camera-ready copy and upload it to the publisher's website for publication. On the other hand, if the revisions are major, then the Area Editor typically asks the authors to revise the paper for re-review before the author is allowed to generate camera-ready copy. In either case, the Area Editor writes a public review for publication with the paper.
    Revise-and-resubmit decisions can be tricky. If the revisions are minor and the authors can turn things around, they are allowed to resubmit the paper for review in the same review cycle. This needs my careful attention to ensure that the authors and the Area Editor are moving things along in time for publication. Major revisions are usually submitted to the next issue. I try to ensure that the paper is sent back for re-review by the same Area Editor (in some cases, this may happen after the Area Editor has stepped off the Editorial Board).
    Editorials are handled rather differently: I read and approve editorials myself. If I approve the editorial, it is published; if not, I send the authors a non-anonymous review telling them why I rejected the paper. I judge editorials on timeliness, breadth, potential for controversy, and whether they are instructive. As a rule, however, given the role of CCR as a newsletter, all reports on conferences and workshops are automatically accepted; this is an easy way for you to pile up your CCR publications.
    About a month and a half before the issue publication date, we have a full set of papers approved for publication. My admin assistant, the indefatigable Gail Chopiak, uses this list to prepare a Table of Contents to send to Lisa Tolles, our contact with the ACM-Sheridan service, Sheridan Printing Co. Lisa contacts authors with a URL where they upload their camera-ready papers. Lisa and her associates work individually with authors to make sure that their papers meet CCR and ACM publication standards. When all is in order, the issue is composited and the overall PDF is ready.
    At this point, I am sent the draft issue to suggest minor changes, such as in paper ordering, or in the choice of advertisements that go into the issue. I also approve any changes to the masthead and the boilerplate that goes in the inside front and back covers. Once the PDFs are finalized, the SIGCOMM online editor uploads these PDFs to the ACM Digital Library for CCR Online. Finally, the issue is sent to print and, after about a month or so, it is mailed to SIGCOMM members.
    I hope this glimpse into the publication process helps you understand the roles played by the Area Editors, the reviewers, Sheridan staff, the SIGCOMM online editor, and myself, in bringing each issue to you. My sincere thanks to everyone who volunteers their valuable time to make CCR one of the best and also one of the best-read newsletters in ACM!
  • Partha Kanuparthy, Constantine Dovrolis, Konstantina Papagiannaki, Srinivasan Seshan, Peter Steenkiste

    Common Wireless LAN (WLAN) pathologies include low signal-to-noise ratio, congestion, hidden terminals or interference from non-802.11 devices and phenomena. Prior work has focused on the detection and diagnosis of such problems using layer-2 information from 802.11 devices and special purpose access points and monitors, which may not be generally available. Here, we investigate a user-level approach: is it possible to detect and diagnose 802.11 pathologies with strictly user-level active probing, without any cooperation from, and without any visibility in, layer-2 devices? In this paper, we present preliminary but promising results indicating that such diagnostics are feasible.

    Renata Teixeira
  • Nadi Sarrar, Steve Uhlig, Anja Feldmann, Rob Sherwood, Xin Huang

    Internet traffic has Zipf-like properties at multiple aggregation levels. These properties suggest the possibility of offloading most of the traffic from a complex controller (e.g., a software router) to a simple forwarder (e.g., a commodity switch), by letting the forwarder handle a very limited set of flows: the heavy hitters. As the volume of traffic from a set of flows is highly dynamic, maintaining a reliable set of heavy hitters over time is challenging. This is especially true when we face a volume limit on the non-offloaded traffic in combination with a constraint on the size of the heavy hitter set or its rate of change. We propose a set selection strategy that takes advantage of the properties of heavy hitters at different time scales. Based on real Internet traffic traces, we show that our strategy is able to offload most of the traffic while limiting the rate of change of the heavy hitter set, suggesting the feasibility of alternative router designs.

    Jia Wang
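
    The selection problem described above can be sketched as follows: keep an offloaded set of at most `capacity` flows, and at each interval replace at most `max_changes` entries, favoring the flows with the largest recent volume. This is only a schematic of the constraint, not the paper's strategy, which also exploits heavy-hitter behavior at multiple time scales.

      def update_offload_set(current, volumes, capacity, max_changes):
          # current: set of flow ids currently offloaded to the forwarder.
          # volumes: dict flow_id -> bytes observed in the last interval.
          ranked = sorted(volumes, key=volumes.get, reverse=True)
          ideal = set(ranked[:capacity])               # unconstrained heavy-hitter set
          keep = current & ideal                       # entries retained for free
          newcomers = [f for f in ranked if f in ideal and f not in current][:max_changes]
          new_set = keep | set(newcomers)
          # If capacity remains, retain the largest of the old entries that fell out of `ideal`.
          leftovers = sorted(current - ideal, key=lambda f: volumes.get(f, 0), reverse=True)
          for f in leftovers:
              if len(new_set) >= capacity:
                  break
              new_set.add(f)
          return new_set

      print(update_offload_set({"a", "b"}, {"a": 900, "c": 800, "d": 700, "b": 10},
                               capacity=2, max_changes=1))   # {'a', 'c'}
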
  • Thomas Bonald, James W. Roberts

    We demonstrate that the Internet has a formula linking demand, capacity and performance that in many ways is the analogue of the Erlang loss formula of telephony. Surprisingly, this formula is none other than the Erlang delay formula. It provides an upper bound on the probability a flow of given peak rate suffers degradation when bandwidth sharing is max-min fair. Apart from the flow rate, the only relevant parameters are link capacity and overall demand. We explain why this result is valid under a very general and realistic traffic model and discuss its significance for network engineering.

    Augustin Chaintreau
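
    For readers who want to experiment with the formula, the sketch below computes the classical Erlang delay (Erlang C) formula. The mapping from link parameters to "servers" and "offered load" in the example is our own reading of the abstract (an assumption), not the paper's derivation: a link of capacity C shared by flows of peak rate p is treated as m = C/p servers with offered load a = demand/p erlangs.

      from math import factorial

      def erlang_c(m: int, a: float) -> float:
          # Erlang delay formula: probability of waiting with m servers and
          # offered load a (in erlangs), valid for a < m.
          top = a ** m / factorial(m)
          partial = sum(a ** k / factorial(k) for k in range(m))
          return top / ((1 - a / m) * partial + top)

      C, p, demand = 1e9, 50e6, 700e6        # 1 Gb/s link, 50 Mb/s flows, 0.7 Gb/s demand
      m, a = int(C / p), demand / p          # m = 20 "servers", a = 14 erlangs
      print(f"bound on P(rate degradation) = {erlang_c(m, a):.4f}")
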
  • Alberto Dainotti, Roman Amman, Emile Aben, Kimberly C. Claffy

    Unsolicited one-way Internet traffic, also called Internet background radiation (IBR), has been used for years to study malicious activity on the Internet, including worms, DoS attacks, and scanning address space looking for vulnerabilities to exploit. We show how such traffic can also be used to analyze macroscopic Internet events that are unrelated to malware. We examine two phenomena: country-level censorship of Internet communications described in recent work, and natural disasters (two recent earthquakes). We introduce a new metric of local IBR activity based on the number of unique IP addresses per hour contributing to IBR. The advantage of this metric is that it is not affected by bursts of traffic from a few hosts. Although we have only scratched the surface, we are convinced that IBR traffic is an important building block for comprehensive monitoring, analysis, and possibly even detection of events unrelated to the IBR itself. In particular, IBR offers the opportunity to monitor the impact of events such as natural disasters on network infrastructure, and in particular reveals a view of events that is complementary to many existing measurement platforms based on (BGP) control-plane views or targeted active ICMP probing.

    Sharad Agarwal
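
    The activity metric described above is straightforward to compute; the sketch below counts unique source addresses per hour from a sequence of (timestamp, source IP) observations. The input representation is an assumption; a real pipeline would read it from darknet packet captures.

      from collections import defaultdict

      def unique_sources_per_hour(observations):
          # observations: iterable of (unix_timestamp, source_ip) pairs from the darknet.
          buckets = defaultdict(set)
          for ts, src in observations:
              buckets[int(ts // 3600)].add(src)
          return {hour * 3600: len(srcs) for hour, srcs in sorted(buckets.items())}

      sample = [(0, "198.51.100.1"), (10, "198.51.100.1"), (20, "203.0.113.9"),
                (3700, "198.51.100.1")]
      print(unique_sources_per_hour(sample))   # {0: 2, 3600: 1}
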
  • Phillipa Gill, Michael Schapira, Sharon Goldberg

    Researchers studying the interdomain routing system, its properties and new protocols, face many challenges in performing realistic evaluations and simulations. Modeling decisions with respect to AS-level topology, routing policies and traffic matrices are complicated by a scarcity of ground truth for each of these components. Moreover, scalability issues arise when attempting to simulate over large (although still incomplete) empirically-derived AS-level topologies. In this paper, we discuss our approach for analyzing the robustness of our results to incomplete empirical data. We do this by (1) developing fast simulation algorithms that enable us to (2) run multiple simulations with varied parameters that test the sensitivity of our research results.

    Yin Zhang
  • Francesco Fusco, Xenofontas Dimitropoulos, Michail Vlachos, Luca Deri

    Long-term historical analysis of captured network traffic is a topic of great interest in network monitoring and network security. A critical requirement is the support for fast discovery of packets that satisfy certain criteria within large-scale packet repositories. This work presents the first indexing scheme for network packet traces based on compressed bitmap indexing principles. Our approach supports very fast insertion rates and results in compact index sizes. The proposed indexing methodology builds upon libpcap, the de-facto reference library for accessing packet-trace repositories. Our solution is therefore backward compatible with any solution that uses the original library. We experience impressive speedups on packet-trace search operations: our experiments suggest that the index-enabled libpcap may reduce the packet retrieval time by more than 1100 times.

    Philip Levis
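
    The core idea of bitmap indexing can be shown with a toy example: keep one bitmap per indexed attribute value, with bit i set when packet i carries that value, and answer queries by intersecting bitmaps. Production schemes compress these bitmaps and integrate with libpcap; the uncompressed, in-memory version below is only an illustration of the principle.

      from collections import defaultdict

      class PacketIndex:
          def __init__(self):
              self.bitmaps = defaultdict(lambda: defaultdict(int))   # field -> value -> bitmap
              self.count = 0

          def add(self, fields):
              # fields: dict of indexed attributes, e.g. {'src': '10.0.0.1', 'dport': 80}.
              for field, value in fields.items():
                  self.bitmaps[field][value] |= 1 << self.count
              self.count += 1

          def query(self, **criteria):
              # Return packet numbers matching all criteria by ANDing the relevant bitmaps.
              result = (1 << self.count) - 1
              for field, value in criteria.items():
                  result &= self.bitmaps[field].get(value, 0)
              return [i for i in range(self.count) if (result >> i) & 1]

      idx = PacketIndex()
      idx.add({'src': '10.0.0.1', 'dport': 80})
      idx.add({'src': '10.0.0.2', 'dport': 80})
      idx.add({'src': '10.0.0.1', 'dport': 443})
      print(idx.query(src='10.0.0.1', dport=80))   # [0]
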
  • Murtaza Motiwala, Amogh Dhamdhere, Nick Feamster, Anukool Lakhina

    We develop a holistic cost model that operators can use to help evaluate the costs of various routing and peering decisions. Using real traffic data from a large carrier network, we show how network operators can use this cost model to significantly reduce the cost of carrying traffic in their networks. We find that adjusting the routing for a small fraction of total flows (and total traffic volume) significantly reduces cost in many cases. We also show how operators can use the cost model both to evaluate potential peering arrangements and for other network operations problems.

    Augustin Chaintreau
  • Maxim Podlesny, Carey Williamson

    ADSL and cable connections are the prevalent technologies available from Internet Service Providers (ISPs) for residential Internet access. Asymmetric access technologies such as these offer high download capacity, but moderate upload capacity. When the Transmission Control Protocol (TCP) is used on such access networks, performance degradation can occur. In particular, sharing a bottleneck link with different upstream and downstream capacities among competing TCP flows in opposite directions can degrade the throughput of the higher speed link. Despite many research efforts to solve this problem in the past, there is no solution that is both highly effective and easily deployable in residential networks. In this paper, we propose an Asymmetric Queueing (AQ) mechanism that enables full utilization of the bottleneck access link in residential networks with asymmetric capacities. The extensive simulation evaluation of our design shows its effectiveness and robustness in a variety of network conditions. Furthermore, our solution is easy to deploy and configure in residential networks.

    Renata Teixeira
  • Yoo Chung

    Distributed denial of service attacks are often considered just a security problem. While this may be the way to view the problem with the Internet of today, perhaps new network architectures attempting to address the issue should view it as a scalability problem. In addition, they may need to approach the problem based on a rigorous foundation.

  • Jonathon Duerig, Robert Ricci, Leigh Stoller, Matt Strum, Gary Wong, Charles Carpenter, Zongming Fei, James Griffioen, Hussamuddin Nasir, Jeremy Reed, Xiongqi Wu
  • John W. Byers, Jeffrey C. Mogul, Fadel Adib, Jay Aikat, Danai Chasaki, Ming-Hung Chen, Marshini Chetty, Romain Fontugne, Vijay Gabale, László Gyarmati, Katrina LaCurts, Qi Liao, Marc Mendonca, Trang Cao Minh, S.H. Shah Newaz, Pawan Prakash, Yan Shvartzshnaider, Praveen Yalagandula, Chun-Yu Yang

    This document provides reports on the presentations at the SIGCOMM 2011 Conference, the annual conference of the ACM Special Interest Group on Data Communication (SIGCOMM).

  • S. Keshav

    This editorial was motivated by a panel on the relationship between academia and industry at the SIGCOMM 2011 conference that was moderated by Bruce Davie. I can claim some familiarity with the topic having spent roughly ten years each in academia and industry during the last twenty years.

    My thesis is that although industry can make incremental gains, real technical breakthroughs can only come from academia. However, to have any impact, these academic breakthroughs must be motivated, at some level, by a real-world problem and the proposed solutions should be feasible, even if implausible. Therefore, it is in the self-interest of industry to fund risky, long-term, curiosity-driven academic research rather than sure-shot, short-term, practical research with well-defined objectives. Symmetrically, it is in the self-interest of academic researchers to tackle real-world problems motivated by the problems faced in industry and propose reasonable solutions.

    There are many underlying reasons why technological revolutions today can only come from academia. Perhaps the primary reason is that, unlike most industrial research labs of today (and I am purposely excluding the late, great, Bell Labs of yore), academia still supports long-term, curiosity-driven research. This is both risky and an inherently ‘wasteful’ use of time. Yet, this apparently wasteful work is the basis for many of today’s technologies, ranging from Google search to the World Wide Web to BSD Unix and Linux. On closer thought, this is not too surprising. Short-term, practical research requires the investigator to have well-defined goals. But revolutionary ideas cannot be reduced to bullet items on PowerPoint slides: they usually arise as unexpected outcomes of curiosity-driven research. Moreover, it takes time for ideas to mature and for the inevitable missteps to be detected and corrected. Industrial funding cycles of six months to a year are simply not set up to fund ideas whose maturation can take five or even ten years. In contrast, academic research, built on the basis of academic tenure and unencumbered by the demands of the marketplace, is the ideal locus for long-term work.

    Long-term, curiosity-driven research alone does not lead to revolutions. It must go hand-in-hand with an atmosphere of intellectual openness and rigour. Ideas must be freely exchanged and freely shot down.

    The latest work should be widely disseminated and incorporated into one’s thinking. This openness is antithetical to the dogma of ‘Intellectual Property’ by which most corporations are bound. Academia, thankfully, has mostly escaped from this intellectual prison. Moreover, industry is essentially incompatible with intellectual rigour: corporate researchers, by and large, cannot honestly comment on the quality of their own company’s products and services.

    A third ingredient in the revolutionary mix is the need for intense thinking by a dedicated group of researchers. Hands-on academic research tends to be carried out by young graduate students (under the supervision of their advisors) who are unburdened by either responsibilities or by knowing that something just cannot be done. Given training and guidance, given challenging goals, and given a soul-searing passion to make a difference in the world, a mere handful of researchers can do what corporate legions cannot.

    These three foundations of curiosity-driven research, intellectual openness, and intense thinking set academic research apart from the industrial research labs of today and are also the reason why the next technological revolution is likely to come from academia, not industry.

    In the foregoing, I admit that I have painted a rather rosy picture of academic research. It is important to recognize, however, that the same conditions that lead to breakthrough research also are susceptible to abuse. The freedom to pursue long-term ideas unconstrained by the marketplace can also lead to work that is shoddy and intellectually dishonest. For instance, I believe that it may be intellectually honest for a researcher to make assumptions that do not match current technology, but it is intellectually dishonest to make assumptions that violate the laws of physics. In a past editorial, I have written in more depth about these assumptions, so I will not belabour the point. I will merely remark here that it is incumbent on academic researchers not to abuse their freedom.

    A second inherent problem with academic research, especially in the field of computer networking, is that it is difficult, perhaps impossible, to do large-scale data-driven research. As a stark example, curiosity-driven work on ISP topology is impossible if ISPs sequester this data. Similarly, studying large-scale data centre topology is challenging when the largest data centre one can build in academia has only a few hundred servers.

    Finally, academic research tends to be self-driven and sometimes far removed from real-world problems. These real-world problems, which are faced daily by industrial researchers, can be intellectually demanding and their solution can be highly impactful. Academic researchers would benefit from dialogue with industrial researchers in posing and solving such problems.

    Given this context, the relationship between academia and industry becomes relatively clear. What academia has and industry needs is committed, focussed researchers and the potential for long-term, revolutionary work. What industry has and academia needs is exposure to real-world problems, large-scale data and systems, and funding. Therefore, it would be mutually beneficial for each party to contribute to the other. Here are a few specific suggestions for how to do so.

    First, industry should fund academic research without demanding concrete deliverables and unnecessary constraints. Of course, the research (and, in particular, the research assumptions) should be adequately monitored. But the overall expectation should be that academic work would be curiosity-driven, open, and long-term.

    Second, industry should try to expose academic researchers to fundamental real-world problems and put at their disposal the data that is needed for their solution. If necessary, academic researchers should be given access to large-scale systems to try out their solutions. This can be done without loss of intellectual property by having students and PIs visit industrial research labs as interns or during sabbaticals. It could also be done by having industrial researchers spend several weeks or months as visitors to university research labs.

    Third, industry should spend resources not only on funding, but on internal resources to match the output of academic research (papers and prototypes) to their own needs (products and systems).

    Fourth, academic researchers should choose research problems based not just on what is publishable, but (also) based on the potential for real-world impact. This would naturally turn them to problems faced by industry.

    Fifth, academic researchers should ensure that their solutions are feasible, even if implausible. For instance, a wireless system for cognitive radio built on USRP boards is implausible but feasible. In contrast, a wireless system that assumes that all radio coverage areas are perfectly circular is neither plausible nor feasible. This distinction should be emphasized in the academic review of technical papers.

    Finally, academic researchers should recognize the constraints under which industry operates and, to the extent possible, accommodate them. For instance, they should encourage students to take on internships, fight the inevitable battles with the university office of research to negotiate IP terms, and understand that their points of contact will change periodically due to the nature of corporate (re-)organizations.

    The SIG can also help this interaction. Industry-academic fora such as the panel at SIGCOMM, industry-specific workshops, and industry desks at conferences allow academic researchers to interact with representatives from industry. SIGCOMM could also offer tutorials focussed on topics of current interest to industry. These two efforts would certainly make deep collaboration between academia and industry more likely.

    I hope that these steps will move our community towards a future where academic research, though curiosity-driven, continues to drive real-world change because of its symbiotic relationship with industrial partners.

    This editorial benefited from comments from Bruce Davie and Gail Chopiak. 

  • Giuseppe Bianchi, Nico d'Heureuse, and Saverio Niccolini

    Several traffic monitoring applications may benefit from the availability of efficient mechanisms for approximately tracking smoothed time averages rather than raw counts. This paper provides two contributions in this direction. First, our analysis of Time-decaying Bloom filters, formerly proposed data structures devised to perform approximate Exponentially Weighted Moving Averages on streaming data, reveals two major shortcomings: biased estimation when measurements are read at arbitrary time instants, and slow operation resulting from the need to periodically update all the filter's counters at once. We thus propose a new construction, called On-demand Time-decaying Bloom filter, which relies on a continuous-time operation to overcome the accuracy/performance limitations of the original window-based approach. Second, we show how this new technique can be exploited in the design of high-performance stream-based monitoring applications, by developing VoIPSTREAM, a proof-of-concept real-time analysis version of a formerly proposed system for telemarketing call detection. Our validation results, carried out over real telephony data, show how VoIPSTREAM closely mimics the feature extraction process and traffic analysis techniques implemented in the offline system, at a significantly higher processing speed, and without requiring any storage of per-user call detail records.

    Augustin Chaintreau
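
    The "on-demand" decay idea can be sketched as follows: instead of periodically rescaling every counter, each cell stores a value together with its last update time, and the exponential decay is applied lazily whenever the cell is read or written. The hashing scheme, cell layout, and time constant below are simplified assumptions, not the paper's construction.

      import math, time, hashlib

      class OnDemandDecayingFilter:
          def __init__(self, size=1024, hashes=3, tau=60.0):
              self.size, self.hashes, self.tau = size, hashes, tau
              self.cells = [(0.0, 0.0)] * size          # (value, last update time)

          def _positions(self, key):
              digest = hashlib.sha256(key.encode()).digest()
              return [int.from_bytes(digest[4 * i:4 * i + 4], 'big') % self.size
                      for i in range(self.hashes)]

          def _decayed(self, idx, now):
              value, last = self.cells[idx]
              return value * math.exp(-(now - last) / self.tau)

          def add(self, key, amount=1.0, now=None):
              # Decay the touched cells up to `now`, then add the new contribution.
              now = time.time() if now is None else now
              for idx in self._positions(key):
                  self.cells[idx] = (self._decayed(idx, now) + amount, now)

          def estimate(self, key, now=None):
              # Conservative (minimum) estimate over the cells the key maps to.
              now = time.time() if now is None else now
              return min(self._decayed(idx, now) for idx in self._positions(key))
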
  • Tom Callahan, Mark Allman, Michael Rabinovich, and Owen Bell

    The Internet has changed dramatically in recent years. In particular, fundamental changes have occurred in who generates most of the content, in the variety of applications used, and in the diverse ways normal users connect to the Internet. These factors have led to an explosion in the amount of user-specific meta-information that is required to access Internet content (e.g., email addresses, URLs, social graphs). In this paper we describe a foundational service for storing and sharing user-specific meta-information and describe how this new abstraction could be utilized in current and future applications.

    Stefan Saroiu
  • Craig Partridge

    About ten years ago, Bob Lucky asked me for a list of open research questions in networking. I didn't have a ready list and reacted that it would be good to have one. This essay is my (long-belated) reply.

  • Soumya Sen, Roch Guerin, and Kartik Hosanagar

    Should a new "platform" target a functionality-rich but complex and expensive design or instead opt for a bare-bone but cheaper one? This is a fundamental question with profound implications for the eventual success of any platform. A general answer is, however, elusive as it involves a complex trade-off between benefits and costs. The intent of this paper is to introduce an approach based on standard tools from the field of economics, which can offer some insight into this difficult question. We demonstrate its applicability by developing and solving a generic model that incorporates key interactions between platform stakeholders. The solution confirms that the "optimal" number of features a platform should offer strongly depends on variations in cost factors. More interestingly, it reveals a high sensitivity to small relative changes in those costs. The paper's contribution and motivation are in establishing the potential of such a cross-disciplinary approach for providing qualitative and quantitative insights into the complex question of platform design.

  • kc claffy

    In June 2011 I participated on a panel on network neutrality hosted at the June cybersecurity meeting of the DHS/SRI Infosec Technology Transition Council (ITTC), where "experts and leaders from the government, private, financial, IT, venture capitalist, and academia and science sectors came together to address the problem of identity theft and related criminal activity on the Internet." I recently wrote up some of my thoughts on that panel, including what network neutrality has to do with cybersecurity.

  • kc claffy

    I recently published this essay on CircleID on my thoughts on ICANN's recent decision to launch .XXX and the larger new gTLD program this year. Among other observations, I describe how .XXX marks a historical inflection point, where ICANN's board formally abandoned any responsibility to present an understanding of the ramifications of probable negative externalities ("harms") in setting its policies. That ICANN chose to relinquish this responsibility puts the U.S. government in the awkward position of trying to tighten the few inadequate controls that remain over ICANN, and leaves individual and responsible corporate citizens in the unenviable yet familiar position of bracing for the consequences.

  • S. Keshav

    This edition of CCR bears a dubious distinction of having no technical articles, only editorial content. This is not because no technical articles were submitted: in fact, there were 13 technical submissions. However, all of them were rejected by the Area Editors on the advice of the reviewers, a decision that I did express concern with, but could not, in good conscience, overturn.

    One could ask: were all the papers so terrible? Certainly some papers were unacceptably bad and some were simply out of scope. However, the fate of most papers was to be judged to be not good enough to publish. Some submissions were too broad, others too narrow, many were too incremental, some too radical, and some were just not interesting enough. The opposite of a Procrustean bed, CCR has become a bed that no paper seems to fit!

    This, by itself, would normally not cause me too much concern. However, I feel that this attitude has permeated our community at large. A similar spirit of harsh criticism is used to judge papers at SIGCOMM, MOBICOM, CoNEXT, and probably every other top-tier computer science conference. Reviewers seem only to want to find fault with papers, rather than appreciate insights despite inevitable errors and a lack of technical completeness.

    I think that a few all-too-human foibles lie at the bottom of this hyper-critical attitude of paper reviewers. First, a subconscious desire to get one’s own back: if my paper has been rejected from a venue due to sharp criticism, why not pay this back with sharp criticism of my own? Second, a desire to prove one’s expertise: if I can show that a paper is not perfect, that shows how clever I am. Third, a biased view of what papers in a particular area should look like: I’m the expert in my field, so I think I know what every paper in my field should look like! Finally, unrealistic expectations: I may not write perfect papers but I expect to read only perfect ones. I think I have a good understanding of the psychological basis of reviewer nitpicking because I too am guilty of these charges.

    These subconscious attitudes are exacerbated by two other factors: a ballooning of reviewer workloads, and, with journals in computer science languishing in their roles, conference papers being held to archival standard. These factors force reviewers into looking for excuses to reject papers, adding momentum to the push towards perfection. As the quote from Voltaire shows, this has negative consequences.

    One negative consequence is the stifling of innovation. Young researchers learn that to be successful in publishing in top-tier venues, it pays to stick to well-established areas of research, where reviewers cannot fault them in their assumptions, because these already appear in the published literature. Then, they scale the walls by adding epsilon to delta until the incrementality threshold is breached. This has an opportunity cost in that well-studied areas are further overstudied to the detriment of others.

    A second negative consequence is that it turns some researchers off. They simply do not want to take part in a game where they cannot respect the winners or the system. This has an even greater opportunity cost.

    How can we address this problem? As PC chairs and Area Editors, we need to set the right expectations with reviewers. No paper will be perfect: that is a given. We have to change our mental attitude from finding reasons to reject a paper to finding reasons to accept a paper. We will certainly be trying to do this from now on at CCR.

    We can also remove the notion of a publication bar altogether. An online version of CCR, which will be coming some day, could easily accept all articles submitted to it. Editors and reviewers could rank papers and do public reviews and readers can judge whether or not to read a paper. This is already common practice in physics, using the Arxiv system.

    Finally, I would urge readers to look within. As a reviewer of a paper, it is your duty to critique a paper and point out its flaws. But can you overlook minor flaws and find the greater good? In some cases, I hope your answer will be yes. And with this small change, the system will also change. One review at a time.

  • Jennifer Rexford

    While computer networking is an exciting research field, we are far from having a clear understanding of the core concepts and questions that define our discipline. This position paper, a summary of a talk I gave at the CoNext’10 Student Workshop, captures my current frustrations and hopes about the field.

  • Marcelo Bagnulo, Philip Eardley, Lars Eggert, and Rolf Winter

    The development of new technology is driven by scientific research. The Internet, with its roots in the ARPANET and NSFNet, is no exception. Many of the fundamental, long-term improvements to the architecture, security, end-to-end protocols and management of the Internet originate in the related academic research communities. Even shorter-term, more commercially driven extensions are oftentimes derived from academic research. When interoperability is required, the IETF standardizes such new technology. Timely and relevant standardization benefits from continuous input and review from the academic research community.

    For an individual researcher, it can, however, be quite puzzling how to begin to most effectively participate in the IETF, and arguably to a much lesser degree in the IRTF. The interactions in the IETF are quite different from those in academic conferences, and effective participation follows different rules. The goal of this document is to highlight such differences and provide a rough guideline that will hopefully enable researchers new to the IETF to become successful contributors more quickly.

  • Eiko Yoneki, Jon Crowcroft, Pietro Lio', Neil Walton, Milan Vojnovic, and Roger Whitaker

    Electronic social networks are a relatively new pervasive phenomenon that has changed the way in which we communicate and interact. They are now supporting new applications, leading to new trends and posing new challenges. The workshop titled "Future of Social Networking: Experts from Industry and Academia" took place in Cambridge on November 18, 2010, to explore how the future of social networking may develop and be exploited in new technologies and systems. We provide a summary of this event and some observations on the key outcomes.

  • Teemu Koponen, Scott Shenker, Hari Balakrishnan, Nick Feamster, Igor Ganichev, Ali Ghodsi, P. Brighten Godfrey, Nick McKeown, Guru Parulkar, Barath Raghavan, Jennifer Rexford, Somaya Arianfar, and Dmitriy Kuptsov

    We argue that the biggest problem with the current Internet architecture is not a particular functional deficiency, but its inability to accommodate innovation. To address this problem we propose a minimal architectural “framework” in which comprehensive architectures can reside. The proposed Framework for Internet Innovation (FII) — which is derived from the simple observation that network interfaces should be extensible and abstract — allows for a diversity of architectures to coexist, communicate, and evolve. We demonstrate FII’s ability to accommodate diversity and evolution with a detailed examination of how information flows through the architecture and with a skeleton implementation of the relevant interfaces.
