Computer Communication Review: Papers

  • Craig A. Shue, Andrew J. Kalafut, Mark Allman, Curtis R. Taylor

    There are many deployed approaches for blocking unwanted traffic, either once it reaches the recipient's network, or closer to its point of origin. One of these schemes is based on the notion of traffic carrying capabilities that grant access to a network and/or end host. However, leveraging capabilities results in added complexity and additional steps in the communication process: before communication starts, a remote host must be vetted and given a capability to use in the subsequent communication. In this paper, we propose a lightweight mechanism that turns the answers provided by DNS name resolution - which Internet communication broadly depends on anyway - into capabilities. While not achieving an ideal capability system, we show the mechanism can be built from commodity technology and is therefore a pragmatic way to gain some of the key benefits of capabilities without requiring new infrastructure.

    Stefan Saroiu
  • Yingdi Yu, Duane Wessels, Matt Larson, Lixia Zhang

    Operators of high-profile DNS zones utilize multiple authority servers for performance and robustness. We conducted a series of trace-driven measurements to understand how current caching resolver implementations distribute queries among a set of authority servers. Our results reveal areas for improvement in the "apparently sound" server selection schemes used by some popular implementations. In some cases, the selection schemes lead to sub-optimal behavior of caching resolvers, e.g. sending a significant number of queries to unresponsive servers. We believe that most of these issues are caused by careless implementations, such as continuing to decrease a server's SRTT after the server has been selected, treating unresponsive servers as responsive ones, and using a constant SRTT decay factor. For the problems identified in this work, we recommend corresponding solutions.

    Renata Teixeira
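
    The server selection issues described in the abstract above revolve around how a resolver maintains smoothed RTT (SRTT) estimates. The following is a minimal, hypothetical Python sketch of such a selection loop, not the code of any actual resolver; the decay factor, EWMA weight, and timeout penalty are illustrative assumptions. It queries the authority server with the lowest SRTT and decays only the servers that were not queried, so that slow or unresponsive servers are eventually retried without masking their poor history.

        class AuthorityServer:
            def __init__(self, name, srtt_ms=0.0):
                self.name = name
                self.srtt_ms = srtt_ms        # smoothed RTT estimate for this server

        DECAY = 0.98         # assumed decay factor for servers that were not queried
        ALPHA = 0.7          # assumed EWMA weight kept from the old estimate
        TIMEOUT_MS = 800.0   # assumed penalty applied when a query times out

        def select_server(servers):
            # Query the server with the lowest smoothed RTT.
            return min(servers, key=lambda s: s.srtt_ms)

        def update_after_query(servers, chosen, measured_rtt_ms):
            # measured_rtt_ms is None when the query timed out.
            sample = TIMEOUT_MS if measured_rtt_ms is None else measured_rtt_ms
            chosen.srtt_ms = ALPHA * chosen.srtt_ms + (1 - ALPHA) * sample
            # Decay only the servers that were *not* selected; continuing to
            # decay the chosen server as well (one of the misbehaviors noted
            # above) lets an unresponsive server keep winning the selection.
            for s in servers:
                if s is not chosen:
                    s.srtt_ms *= DECAY
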
  • Benoit Donnet, Matthew Luckie, Pascal Mérindol, Jean-Jacques Pansiot

    Operators have deployed Multiprotocol Label Switching (MPLS) in the Internet for over a decade. However, its impact on Internet topology measurements is not well known, and it is possible for some MPLS configurations to lead to false router-level links in maps derived from traceroute data. In this paper, we introduce a measurement-based classification of MPLS tunnels, identifying tunnels where IP hops are revealed but not explicitly tagged as label switching routers, as well as tunnels that obscure the underlying path. Using a large-scale dataset we collected, we show that paths frequently cross MPLS tunnels in today's Internet: in our data, at least 30% of the paths we tested traverse an MPLS tunnel. We also propose and evaluate several methods to reveal MPLS tunnels that are not explicitly flagged as such: we discover that their fraction is significant (up to half the explicit tunnel quantity) but most of them do not obscure IP-level topology discovery.

    Yin Zhang
  • Hamed Haddadi, Richard Mortier, Steven Hand

    People everywhere are generating ever-increasing amounts of data, often without being fully aware of who is recording what about them. For example, initiatives such as mandated smart metering, expected to be widely deployed in the UK in the next few years and already attempted in countries such as the Netherlands, will generate vast quantities of detailed, personal data about huge segments of the population. Neither the impact nor the potential of this society-wide data gathering is well understood. Once data is gathered, it will be processed -- and society is only now beginning to grapple with the consequences for privacy, both legal and ethical, of these actions, e.g., Brown et al. There is the potential for great harm through, e.g., invasion of privacy; but also the potential for great benefits by using this data to make more efficient use of resources, as well as releasing its vast economic potential. In this editorial we briefly discuss work in this area, the challenges still faced, and some potential avenues for addressing them.

  • Martin Arlitt

    Time tends to pass more quickly than we would like. Sometimes it is helpful to reflect on what you have accomplished, and to derive what you have learned from the experiences. These "lessons learned" may then be leveraged by yourself or others in the future. Occasionally, an external event will motivate this self reflection. For me, it was the 50th anniversary reunion of the St. Walburg Eagles, held in July 2011. The Eagles are a full-contact (ice) hockey team I played with between 1988 and 1996 (the Eagles ceased operations twice during this period, which limited me to four seasons playing with them), while attending university. What would I tell my friends and former teammates that I had been doing for the past 15 years? After some thought, I realized that my time as an Eagle had prepared me for a research career, in ways I would never have imagined. This article (an extended version with color photos is available in [1]) shares some of these similarities, to motivate others to reflect on their own careers and achievements, and perhaps make proactive changes as a result.

  • Jon Crowcroft

    The Internet is not a Universal service, but then neither is democracy. So should the Internet be viewed as a right? It's certainly sometimes wrong. In this brief article, we depend on the Internet to reach our readers, and we hope that they don't object to our doing that.

  • Charles Kalmanek

    It has become a truism that innovation in the information and communications technology (ICT) fields is occurring faster than ever before. This paper posits that successful innovation requires three essential elements: a need, know-how or knowledge, and favorable economics. The paper examines this proposition by considering three technical areas in which there has been significant innovation in recent years: server virtualization and the cloud, mobile application optimization, and mobile speech services. An understanding of the elements that contribute to successful innovation is valuable to anyone who does either fundamental or applied research in fields of information and communication technology.

  • kc claffy

    The second Workshop on Internet Economics [2], hosted by CAIDA and Georgia Institute of Technology on December 1-2, 2011, brought together network technology and policy researchers with providers of commercial Internet facilities and services (network operators) to further explore the common objective of framing an agenda for the emerging but empirically stunted field of Internet infrastructure economics. This report describes the workshop discussions and presents relevant open research questions identified by its participants.

  • S. Keshav

    I'd like to devote this editorial to a description of the process we use to select and publish papers submitted to CCR. CCR publishes two types of papers: technical papers and editorials.  I'll first describe the process for technical papers then for editorials.

    Technical papers are submitted to the CCR online website (currently at http://blizzard.cs.uwaterloo.ca/ccr) which runs a modified version of Eddie Kohler's HOTCRP system. Authors are required to submit a paper in the standard SIGCOMM format with subject classifiers and keywords required by the ACM Digital Library. We restrict technical papers to six pages for two reasons. First, it prevents rejected conference papers from being trivially resubmitted to CCR. Second, it limits the load on area editors and reviewers, which is important given the quick turnaround we'd like for CCR. Some papers do need more than six pages. If so, authors should write to me and, if I find their argument convincing, I usually grant the request immediately. I also add a note to the online system so that Area Editors do not reject the paper for being over-length.
     
    Once a paper is in the system, I assign it to an Area Editor for review. If I have free time, I do this immediately after paper submission. If I'm backed up, which is true more often than I'd like, this happens in the week following the quarterly submission deadlines of March 1, June 1, September 1, and December 1. Area Editors are given seven weeks to obtain up to five reviews. Most papers receive comments from at least three reviewers but papers that are clearly not acceptable may be rejected with a single review.
     
    Reviewers judge papers along three axes: timeliness, clarity, and novelty; the range of scores is from one to five. Reviewers also summarize the contribution of the paper and provide detailed comments to improve paper quality. Finally, each reviewer suggests a potential paper outcome: accept, revise-and-resubmit, or reject. CCR's goal is to accept high-quality papers that are both novel and timely. Technical accuracy is necessary, of course, but we do not require papers to be as thorough in their evaluation as a flagship conference or a journal.
     
    Reviewers use the CCR online system to submit their reviews. After finalizing their own review, they are permitted to read other reviews and, if they wish, update their review. This tends to dampen outliers in review scores.
     
    Authors are automatically informed when each review is finalized and are permitted to rebut the review online. Some authors immediately rebut each review; others wait for all their reviews before responding. Authors typically respond to reviews with painstakingly detailed responses; it is truly remarkable to see how carefully each reviewer criticism is considered in these responses! Author rebuttals are viewable both by reviewers and the assigned Area Editor. Although reviewers are free to comment on the rebuttals or even modify their reviews based on the rebuttal, this option is seldom exercised.
     
    After seven to eight weeks it is time for Area Editors to make editorial decisions. The Area Editor reads the paper, its reviews, and the author rebuttals and decides whether the paper is to be rejected, accepted, or revised and resubmitted. The decision is entered as a comment to the paper. This decision may or may not be signed by the Area Editor, as they wish.
     
    If the paper is rejected, I send a formal rejection letter to the authors and the paper is put to rest. If the paper is accepted and the revisions are minor, then the authors are asked to prepare the camera-ready copy and upload it to the publisher's website for publication. On the other hand, if the revisions are major, then the Area Editor typically asks the authors to revise the paper for re-review before they are allowed to generate the camera-ready copy. In either case, the Area Editor writes a public review for publication with the paper.
     
    Revise-and-resubmit decisions can be tricky. If the revisions are minor and the authors can turn things around, they are allowed to resubmit the paper for review in the same review cycle. This needs my careful attention to ensure that the authors and the Area Editor are moving things along in time for publication. Major revisions are usually submitted to the next issue. I try to ensure that the paper is sent back for re-review by the same Area Editor (in some cases, this may happen after the Area Editor has stepped off the Editorial Board).
     
    Editorials are handled rather differently: I read and approve editorials myself. If I approve the editorial, it is published; if not, I send the authors a non-anonymous review telling them why I rejected the paper. I judge editorials for timeliness, breadth, potential for controversy, and whether they are instructive. As a rule, however, given the role of CCR as a newsletter, all reports on conferences and workshops are automatically accepted; this is an easy way for you to pile up your CCR publications.
     
    About a month and a half before the issue publication date, we have a full set of papers approved for publication. My admin assistant, the indefatigable Gail Chopiak, uses this list to prepare a Table of Contents to send to Lisa Tolles, our contact with the ACM-Sheridan service, Sheridan Printing Co. Lisa contacts authors with a URL where they upload their camera-ready papers. Lisa and her associates work individually with authors to make sure that their papers meet CCR and ACM publication standards. When all is in order, the issue is composited and the overall PDF is ready.
     
    At this point, I am sent the draft issue to suggest minor changes, such as in paper ordering, or in the choice of advertisements that go into the issue. I also approve any changes to the masthead and the boilerplate that goes in the inside front and back covers. Once the PDFs are finalized, the SIGCOMM online editor uploads these PDFs to the ACM Digital Library for CCR Online. Finally, the issue is sent to print and, after about a month or so, it is mailed to SIGCOMM members.
     
    I hope this glimpse into the publication process helps you understand the roles played by the Area Editors, the reviewers, Sheridan staff, the SIGCOMM online editor, and myself in bringing each issue to you. My sincere thanks to everyone who volunteers their valuable time to make CCR one of the best and also one of the best-read newsletters in ACM!
  • Partha Kanuparthy, Constantine Dovrolis, Konstantina Papagiannaki, Srinivasan Seshan, Peter Steenkiste

    Common Wireless LAN (WLAN) pathologies include low signal-to-noise ratio, congestion, hidden terminals or interference from non-802.11 devices and phenomena. Prior work has focused on the detection and diagnosis of such problems using layer-2 information from 802.11 devices and special purpose access points and monitors, which may not be generally available. Here, we investigate a user-level approach: is it possible to detect and diagnose 802.11 pathologies with strictly user-level active probing, without any cooperation from, and without any visibility in, layer-2 devices? In this paper, we present preliminary but promising results indicating that such diagnostics are feasible.

    Renata Teixeira
  • Nadi Sarrar, Steve Uhlig, Anja Feldmann, Rob Sherwood, Xin Huang

    Internet traffic has Zipf-like properties at multiple aggregation levels. These properties suggest the possibility of offloading most of the traffic from a complex controller (e.g., a software router) to a simple forwarder (e.g., a commodity switch), by letting the forwarder handle a very limited set of flows: the heavy hitters. As the volume of traffic from a set of flows is highly dynamic, maintaining a reliable set of heavy hitters over time is challenging. This is especially true when we face a volume limit on the non-offloaded traffic in combination with a constraint on the size of the heavy hitter set or its rate of change. We propose a set selection strategy that takes advantage of the properties of heavy hitters at different time scales. Based on real Internet traffic traces, we show that our strategy is able to offload most of the traffic while limiting the rate of change of the heavy hitter set, suggesting the feasibility of alternative router designs.

    Jia Wang
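
    As a rough illustration of the constrained selection problem described in the abstract above (not the authors' strategy), the sketch below re-selects the offloaded heavy-hitter set each measurement interval from per-flow byte counts while capping the number of flow-table changes, so the forwarder's table stays stable; the set size and change budget are illustrative parameters.

        def update_heavy_hitters(current_set, byte_counts, set_size, max_changes):
            # Rank the last interval's flows by volume, heaviest first.
            ranked = sorted(byte_counts, key=byte_counts.get, reverse=True)
            ideal = set(ranked[:set_size])

            additions = [f for f in ranked[:set_size] if f not in current_set]
            # Stale entries, lightest first, are the eviction candidates.
            evictions = sorted(current_set - ideal, key=lambda f: byte_counts.get(f, 0))

            new_set, changes = set(current_set), 0
            for flow in additions:
                if changes >= max_changes:
                    break
                if len(new_set) >= set_size:
                    if not evictions:
                        break
                    new_set.discard(evictions.pop(0))
                new_set.add(flow)
                changes += 1
            return new_set
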
  • Thomas Bonald, James W. Roberts

    We demonstrate that the Internet has a formula linking demand, capacity and performance that in many ways is the analogue of the Erlang loss formula of telephony. Surprisingly, this formula is none other than the Erlang delay formula. It provides an upper bound on the probability that a flow of a given peak rate suffers degradation when bandwidth sharing is max-min fair. Apart from the flow rate, the only relevant parameters are link capacity and overall demand. We explain why this result is valid under a very general and realistic traffic model and discuss its significance for network engineering.

    Augustin Chaintreau
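
    For reference, the classical Erlang delay (Erlang C) formula mentioned above gives, for m servers and an offered load of a Erlangs (a < m), the probability that an arriving customer must wait:

        E_2(m, a) = \frac{\dfrac{a^m}{m!}\,\dfrac{m}{m - a}}
                         {\displaystyle\sum_{k=0}^{m-1} \frac{a^k}{k!} \;+\; \frac{a^m}{m!}\,\frac{m}{m - a}}

    In the paper's setting, m and a are obtained from the link capacity, the overall demand, and the flow peak rate; the precise mapping and the conditions under which the bound holds are given in the paper itself.
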
  • Alberto Dainotti, Roman Amman, Emile Aben, Kimberly C. Claffy

    Unsolicited one-way Internet traffic, also called Internet background radiation (IBR), has been used for years to study malicious activity on the Internet, including worms, DoS attacks, and scanning address space looking for vulnerabilities to exploit. We show how such traffic can also be used to analyze macroscopic Internet events that are unrelated to malware. We examine two phenomena: country-level censorship of Internet communications described in recent work, and natural disasters (two recent earthquakes). We introduce a new metric of local IBR activity based on the number of unique IP addresses per hour contributing to IBR. The advantage of this metric is that it is not affected by bursts of traffic from a few hosts. Although we have only scratched the surface, we are convinced that IBR traffic is an important building block for comprehensive monitoring, analysis, and possibly even detection of events unrelated to the IBR itself. In particular, IBR offers the opportunity to monitor the impact of events such as natural disasters on network infrastructure, and in particular reveals a view of events that is complementary to many existing measurement platforms based on (BGP) control-plane views or targeted active ICMP probing.

    Sharad Agarwal
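
    The metric described in the abstract above is simple to compute once (timestamp, source address) pairs have been extracted from the telescope traces; a minimal sketch, with the record extraction itself assumed to happen elsewhere:

        from collections import defaultdict

        def unique_sources_per_hour(records):
            """records: iterable of (unix_timestamp, source_ip) pairs from IBR traffic.
            Returns {hour_bucket: number_of_distinct_source_addresses}."""
            buckets = defaultdict(set)
            for ts, src in records:
                buckets[int(ts) // 3600].add(src)
            return {hour: len(ips) for hour, ips in buckets.items()}
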
  • Phillipa Gill, Michael Schapira, Sharon Goldberg

    Researchers studying the interdomain routing system, its properties and new protocols, face many challenges in performing realistic evaluations and simulations. Modeling decisions with respect to AS-level topology, routing policies and traffic matrices are complicated by a scarcity of ground truth for each of these components. Moreover, scalability issues arise when attempting to simulate over large (although still incomplete) empirically-derived AS-level topologies. In this paper, we discuss our approach for analyzing the robustness of our results to incomplete empirical data. We do this by (1) developing fast simulation algorithms that enable us to (2) run multiple simulations with varied parameters that test the sensitivity of our research results.

    Yin Zhang
  • Francesco Fusco, Xenofontas Dimitropoulos, Michail Vlachos, Luca Deri

    Long-term historical analysis of captured network traffic is a topic of great interest in network monitoring and network security. A critical requirement is the support for fast discovery of packets that satisfy certain criteria within large-scale packet repositories. This work presents the first indexing scheme for network packet traces based on compressed bitmap indexing principles. Our approach supports very fast insertion rates and results in compact index sizes. The proposed indexing methodology builds upon libpcap, the de-facto reference library for accessing packet-trace repositories. Our solution is therefore backward compatible with any solution that uses the original library. We experience impressive speedups on packet-trace search operations: our experiments suggest that the index-enabled libpcap may reduce the packet retrieval time by more than 1100 times.

    Philip Levis
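
    As a toy illustration of the bitmap-indexing idea (not the authors' scheme or their libpcap extension), the sketch below keeps one bitmap per (attribute, value) pair over the packets of a trace and answers conjunctive queries with bitwise ANDs; production systems additionally compress these bitmaps to keep the index compact.

        from collections import defaultdict

        class BitmapIndex:
            def __init__(self):
                # Each bitmap is a Python int: bit i is set iff packet i matches.
                self.bitmaps = defaultdict(int)
                self.count = 0

            def add_packet(self, attrs):
                """attrs: e.g. {'src': '10.0.0.1', 'proto': 'udp', 'dport': 53}."""
                for key_value in attrs.items():
                    self.bitmaps[key_value] |= 1 << self.count
                self.count += 1

            def query(self, **criteria):
                """Return positions of packets matching all criteria, e.g. query(dport=53)."""
                result = (1 << self.count) - 1          # start from "all packets"
                for key_value in criteria.items():
                    result &= self.bitmaps.get(key_value, 0)
                return [i for i in range(self.count) if (result >> i) & 1]
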
  • Murtaza Motiwala, Amogh Dhamdhere, Nick Feamster, Anukool Lakhina

    We develop a holistic cost model that operators can use to help evaluate the costs of various routing and peering decisions. Using real traffic data from a large carrier network, we show how network operators can use this cost model to significantly reduce the cost of carrying traffic in their networks. We find that adjusting the routing for a small fraction of total flows (and total traffic volume) significantly reduces cost in many cases. We also show how operators can use the cost model both to evaluate potential peering arrangements and for other network operations problems.

    Augustin Chaintreau
  • Maxim Podlesny, Carey Williamson

    ADSL and cable connections are the prevalent technologies available from Internet Service Providers (ISPs) for residential Internet access. Asymmetric access technologies such as these offer high download capacity, but moderate upload capacity. When the Transmission Control Protocol (TCP) is used on such access networks, performance degradation can occur. In particular, sharing a bottleneck link with different upstream and downstream capacities among competing TCP flows in opposite directions can degrade the throughput of the higher speed link. Despite many research efforts to solve this problem in the past, there is no solution that is both highly effective and easily deployable in residential networks. In this paper, we propose an Asymmetric Queueing (AQ) mechanism that enables full utilization of the bottleneck access link in residential networks with asymmetric capacities. The extensive simulation evaluation of our design shows its effectiveness and robustness in a variety of network conditions. Furthermore, our solution is easy to deploy and configure in residential networks.

    Renata Teixeira
  • Yoo Chung

    Distributed denial of service attacks are often considered just a security problem. While this may be the way to view the problem with the Internet of today, perhaps new network architectures attempting to address the issue should view it as a scalability problem. In addition, they may need to approach the problem based on a rigorous foundation.

  • Jonathon Duerig, Robert Ricci, Leigh Stoller, Matt Strum, Gary Wong, Charles Carpenter, Zongming Fei, James Griffioen, Hussamuddin Nasir, Jeremy Reed, Xiongqi Wu
  • John W. Byers, Jeffrey C. Mogul, Fadel Adib, Jay Aikat, Danai Chasaki, Ming-Hung Chen, Marshini Chetty, Romain Fontugne, Vijay Gabale, László Gyarmati, Katrina LaCurts, Qi Liao, Marc Mendonca, Trang Cao Minh, S.H. Shah Newaz, Pawan Prakash, Yan Shvartzshnaider, Praveen Yalagandula, Chun-Yu Yang

    This document provides reports on the presentations at the SIGCOMM 2011 Conference, the annual conference of the ACM Special Interest Group on Data Communication (SIGCOMM).

  • S. Keshav

    This editorial was motivated by a panel on the relationship between academia and industry at the SIGCOMM 2011 conference that was moderated by Bruce Davie. I can claim some familiarity with the topic having spent roughly ten years each in academia and industry during the last twenty years.

    My thesis is that although industry can make incremental gains, real technical breakthroughs can only come from academia. However, to have any impact, these academic breakthroughs must be motivated, at some level, by a real-world problem and the proposed solutions should be feasible, even if implausible. Therefore, it is in the self-interest of industry to fund risky, long-term, curiosity-driven academic research rather than sure-shot, short-term, practical research with well-defined objectives. Symmetrically, it is in the self-interest of academic researchers to tackle real-world problems motivated by the problems faced in industry and propose reasonable solutions.

    There are many underlying reasons why technological revolutions today can only come from academia. Perhaps the primary reason is that, unlike most industrial research labs of today (and I am purposely excluding the late, great, Bell Labs of yore), academia still supports long-term, curiosity-driven research. This is both risky and an inherently ‘wasteful’ use of time. Yet, this apparently wasteful work is the basis for many of today’s technologies, ranging from Google search to the World Wide Web to BSD Unix and Linux. On closer thought, this is not too surprising. Short-term, practical research requires the investigator to have well-defined goals. But revolutionary ideas cannot be reduced to bullet items on PowerPoint slides: they usually arise as unexpected outcomes of curiosity-driven research. Moreover, it takes time for ideas to mature and for the inevitable missteps to be detected and corrected. Industrial funding cycles of six months to a year are simply not set up to fund ideas whose maturation can take five or even ten years. In contrast, academic research built on the basis of academic tenure and unencumbered by the demands of the marketplace is the ideal locus for long-term work.

    Long-term, curiosity-driven research alone does not lead to revolutions. It must go hand-in-hand with an atmosphere of intellectual openness and rigour. Ideas must be freely exchanged and freely shot down. The latest work should be widely disseminated and incorporated into one’s thinking. This openness is antithetical to the dogma of ‘Intellectual Property’ by which most corporations are bound. Academia, thankfully, has mostly escaped from this intellectual prison. Moreover, industry is essentially incompatible with intellectual rigour: corporate researchers, by and large, cannot honestly comment on the quality of their own company’s products and services.

    A third ingredient in the revolutionary mix is the need for intense thinking by a dedicated group of researchers. Hands-on academic research tends to be carried out by young graduate students (under the supervision of their advisors) who are unburdened by either responsibilities or by knowing that something just cannot be done. Given training and guidance, given challenging goals, and given a soul-searing passion to make a difference in the world, a mere handful of researchers can do what corporate legions cannot.

    These three foundations of curiosity-driven research, intellectual openness, and intense thinking set academic research apart from the industrial research labs of today and are also the reason why the next technological revolution is likely to come from academia, not industry.

    In the foregoing, I admit that I have painted a rather rosy picture of academic research. It is important to recognize, however, that the same conditions that lead to breakthrough research also are susceptible to abuse. The freedom to pursue long-term ideas unconstrained by the marketplace can also lead to work that is shoddy and intellectually dishonest. For instance, I believe that it may be intellectually honest for a researcher to make assumptions that do not match current technology, but it is intellectually dishonest to make assumptions that violate the laws of physics. In a past editorial, I have written in more depth about these assumptions, so I will not belabour the point. I will merely remark here that it is incumbent on academic researchers not to abuse their freedom.

    A second inherent problem with academic research, especially in the field of computer networking, is that it is difficult, perhaps impossible, to do large-scale data-driven research. As a stark example, curiosity-driven work on ISP topology is impossible if ISPs sequester this data. Similarly, studying large-scale data centre topology is challenging when the largest data centre one can build in academia has only a few hundred servers.

    Finally, academic research tends to be self-driven and sometimes far removed from real-world problems. These real-world problems, which are faced daily by industrial researchers, can be intellectually demanding and their solution can be highly impactful. Academic researchers would benefit from dialogue with industrial researchers in posing and solving such problems.

    Given this context, the relationship between academia and industry becomes relatively clear. What academia has and industry needs is committed, focussed researchers and the potential for long-term, revolutionary work. What industry has and academia needs is exposure to real-world problems, large-scale data and systems, and funding. Therefore, it would be mutually beneficial for each party to contribute to the other. Here are a few specific suggestions for how to do so.

    First, industry should fund academic research without demanding concrete deliverables and unnecessary constraints. Of course, the research (and, in particular, the research assumptions) should be adequately monitored. But the overall expectation should be that academic work would be curiosity-driven, open, and long-term.

    Second, industry should try to expose academic researchers to fundamental real-world problems and put at their disposal the data that is needed for their solution. If necessary, academic researchers should be given access to large-scale systems to try out their solutions. This can be done without loss of intellectual property by having students and PIs visit industrial research labs as interns or during sabbaticals. It could also be done by having industrial researchers spend several weeks or months as visitors to university research labs.

    Third, industry should spend resources not only on funding, but on internal resources to match the output of academic research (papers and prototypes) to their own needs (products and systems).

    Fourth, academic researchers should choose research problems based not just on what is publishable, but (also) based on the potential for real-world impact. This would naturally turn them to problems faced by industry.

    Fifth, academic researchers should ensure that their solutions are feasible, even if implausible. For instance, a wireless system for cognitive radio built on USRP boards is implausible but feasible. In contrast, a wireless system that assumes that all radio coverage areas are perfectly circular is neither plausible nor feasible. This distinction should be emphasized in the academic review of technical papers.

    Finally, academic researchers should recognize the constraints under which industry operates and, to the extent possible, accommodate them. For instance, they should encourage students to take on internships, fight the inevitable battles with the university office of research to negotiate IP terms, and understand that their points of contact will change periodically due to the nature of corporate (re-)organizations.

    The SIG can also help this interaction. Industry-academic fora such as the panel at SIGCOMM, industry-specific workshops, and industry desks at conferences allow academic researchers to interact with representatives from industry. SIGCOMM could have tutorials focussed on topics of current interest to industry. These two efforts would certainly make deep collaboration between academia and industry more likely.

    I hope that these steps will move our community towards a future where academic research, though curiosity-driven, continues to drive real-world change because of its symbiotic relationship with industrial partners.

    This editorial benefited from comments from Bruce Davie and Gail Chopiak. 

  • Giuseppe Bianchi, Nico d'Heureuse, and Saverio Niccolini

    Several traffic monitoring applications may benefit from the availability of efficient mechanisms for approximately tracking smoothed time averages rather than raw counts. This paper provides two contributions in this direction. First, our analysis of Time-decaying Bloom filters, formerly proposed data structures devised to perform approximate Exponentially Weighted Moving Averages on streaming data, reveals two major shortcomings: biased estimation when measurements are read at arbitrary time instants, and slow operation resulting from the need to periodically update all the filter's counters at once. We thus propose a new construction, called On-demand Time-decaying Bloom filter, which relies on a continuous-time operation to overcome the accuracy/performance limitations of the original window-based approach. Second, we show how this new technique can be exploited in the design of high performance stream-based monitoring applications, by developing VoIPSTREAM, a proof-of-concept real-time analysis version of a formerly proposed system for telemarketing call detection. Our validation results, carried out over real telephony data, show how VoIPSTREAM closely mimics the feature extraction process and traffic analysis techniques implemented in the offline system, at a significantly higher processing speed, and without requiring any storage of per-user call detail records.

    Augustin Chaintreau
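
    A minimal sketch of the on-demand decay idea described in the abstract above (illustrative only, not the authors' exact construction): every counter stores its value together with the time it was last touched, and the exponential decay is applied lazily whenever a cell is read or written, so no periodic pass over the whole filter is required. The hash construction, number of hash functions, and time constant are placeholder assumptions.

        import math, time, hashlib

        class OnDemandDecayingFilter:
            def __init__(self, size=1024, num_hashes=3, tau=60.0):
                self.size, self.num_hashes, self.tau = size, num_hashes, tau
                self.cells = [(0.0, 0.0)] * size    # (value, last_update_time) per cell

            def _positions(self, key):
                for i in range(self.num_hashes):
                    digest = hashlib.sha1(f"{i}:{key}".encode()).digest()
                    yield int.from_bytes(digest[:8], "big") % self.size

            def _decayed(self, cell, now):
                value, last = cell
                return value * math.exp(-(now - last) / self.tau)

            def add(self, key, amount=1.0, now=None):
                now = time.time() if now is None else now
                for pos in self._positions(key):
                    self.cells[pos] = (self._decayed(self.cells[pos], now) + amount, now)

            def estimate(self, key, now=None):
                now = time.time() if now is None else now
                # Conservative (minimum) estimate over the key's cells.
                return min(self._decayed(self.cells[pos], now)
                           for pos in self._positions(key))
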
  • Tom Callahan, Mark Allman, Michael Rabinovich, and Owen Bell

    The Internet has changed dramatically in recent years. In particular, fundamental changes have occurred in who generates most of the content, the variety of applications used, and the diverse ways ordinary users connect to the Internet. These factors have led to an explosion of the amount of user-specific meta-information that is required to access Internet content (e.g., email addresses, URLs, social graphs). In this paper we describe a foundational service for storing and sharing user-specific meta-information and describe how this new abstraction could be utilized in current and future applications.

    Stefan Saroiu
  • Craig Partridge

    About ten years ago, Bob Lucky asked me for a list of open research questions in networking. I didn't have a ready list and reacted that it would be good to have one. This essay is my (long-belated) reply.

  • Soumya Sen, Roch Guerin, and Kartik Hosanagar

    Should a new "platform" target a functionality-rich but complex and expensive design or instead opt for a bare-bone but cheaper one? This is a fundamental question with profound implications for the eventual success of any platform. A general answer is, however, elusive as it involves a complex trade-off between benefits and costs. The intent of this paper is to introduce an approach based on standard tools from the field of economics, which can offer some insight into this difficult question. We demonstrate its applicability by developing and solving a generic model that incorporates key interactions between platform stakeholders. The solution confirms that the "optimal" number of features a platform should offer strongly depends on variations in cost factors. More interestingly, it reveals a high sensitivity to small relative changes in those costs. The paper's contribution and motivation are in establishing the potential of such a cross-disciplinary approach for providing qualitative and quantitative insights into the complex question of platform design.

  • kc claffy

    In June 2011 I participated on a panel on network neutrality hosted at the June cybersecurity meeting of the DHS/SRI Infosec Technology Transition Council (ITTC), where "experts and leaders from the government, private, financial, IT, venture capitalist, and academia and science sectors came together to address the problem of identity theft and related criminal activity on the Internet." I recently wrote up some of my thoughts on that panel, including what network neutrality has to do with cybersecurity.

  • kc claffy

    I recently published this essay on CircleID on my thoughts on ICANN's recent decision to launch .XXX and the larger new gTLD program this year. Among other observations, I describe how .XXX marks a historical inflection point, where ICANN's board formally abandoned any responsibility to present an understanding of the ramifications of probable negative externalities ("harms") in setting its policies. That ICANN chose to relinquish this responsibility puts the U.S. government in the awkward position of trying to tighten the few inadequate controls that remain over ICANN, and leaves individual and responsible corporate citizens in the unenviable yet familiar position of bracing for the consequences.

  • S. Keshav

    This edition of CCR bears the dubious distinction of having no technical articles, only editorial content. This is not because no technical articles were submitted: in fact, there were 13 technical submissions. However, all of them were rejected by the Area Editors on the advice of the reviewers, a decision that I did express concern with, but could not, in good conscience, overturn.

    One could ask: were all the papers so terrible? Certainly some papers were unacceptably bad and some were simply out of scope. However, the fate of most papers was to be judged to be not good enough to publish. Some submissions were too broad, others too narrow, many were too incremental, some too radical, and some were just not interesting enough. The opposite of a Procrustean bed, CCR has become a bed that no paper seems to fit!

    This, by itself, would normally not cause me too much concern. However, I feel that this attitude has permeated our community at large. A similar spirit of harsh criticism is used to judge papers at SIGCOMM, MOBICOM, CoNEXT, and probably every other top-tier computer science conference. Reviewers seem only to want to find fault with papers, rather than appreciate insights despite inevitable errors and a lack of technical completeness.

    I think that a few all-too-human foibles lie at the bottom of this hyper-critical attitude of paper reviewers. First, a subconscious desire to get one’s own back: if my paper has been rejected from a venue due to sharp criticism, why not pay this back with sharp criticism of my own? Second, a desire to prove one’s expertise: if I can show that a paper is not perfect, that shows how clever I am. Third, a biased view of what papers in a particular area should look like: I’m the expert in my field, so I think I know what every paper in my field should look like! Finally, unrealistic expectations: I may not write perfect papers but I expect to read only perfect ones. I think I have a good understanding of the psychological basis of reviewer nitpicking because I too am guilty of these charges.

    These subconscious attitudes are exacerbated by two other factors: a ballooning of reviewer workloads and, with journals in computer science languishing in their roles, conference papers being held to an archival standard. These factors force reviewers into looking for excuses to reject papers, adding momentum to the push towards perfection. As the quote from Voltaire shows, this has negative consequences.

    One negative consequence is the stifling of innovation. Young researchers learn that to be successful in publishing in top-tier venues, it pays to stick to well-established areas of research, where reviewers cannot fault them in their assumptions, because these already appear in the published literature. Then, they scale the walls by adding epsilon to delta until the incrementality threshold is breached. This has an opportunity cost in that well-studied areas are further overstudied to the detriment of others.

    A second negative consequence is that it turns some researchers off. They simply do not want to take part in a game where they cannot respect the winners or the system. This has an even greater opportunity cost.

    How can we address this problem? As PC chairs and Area Editors, we need to set the right expectations with reviewers. No paper will be perfect: that is a given. We have to change our mental attitude from finding reasons to reject a paper to finding reasons to accept a paper. We will certainly be trying to do this from now on at CCR.

    We can also remove the notion of a publication bar altogether. An online version of CCR, which will be coming some day, could easily accept all articles submitted to it. Editors and reviewers could rank papers and do public reviews, and readers can judge whether or not to read a paper. This is already common practice in physics, using the arXiv system.

    Finally, I would urge readers to look within. As a reviewer of a paper, it is your duty to critique a paper and point out its flaws. But can you overlook minor flaws and find the greater good? In some cases, I hope your answer will be yes. And with this small change, the system will also change. One review at a time.

  • Jennifer Rexford

    While computer networking is an exciting research field, we are far from having a clear understanding of the core concepts and questions that define our discipline. This position paper, a summary of a talk I gave at the CoNext’10 Student Workshop, captures my current frustrations and hopes about the field.

  • Marcelo Bagnulo, Philip Eardley, Lars Eggert, and Rolf Winter

    The development of new technology is driven by scientific research. The Internet, with its roots in the ARPANET and NSFNet, is no exception. Many of the fundamental, long-term improvements to the architecture, security, end-to-end protocols and management of the Internet originate in the related academic research communities. Even shorter-term, more commercially driven extensions are oftentimes derived from academic research. When interoperability is required, the IETF standardizes such new technology. Timely and relevant standardization benefits from continuous input and review from the academic research community.

    For an individual researcher, it can however be quite puzzling how to begin to most effectively participate in the IETF and arguably to a much lesser degree in the IRTF. The interactions in the IETF are quite different from those in academic conferences, and effective participation follows different rules. The goal of this document is to highlight such differences and provide a rough guideline that will hopefully enable researchers new to the IETF to become successful contributors more quickly.

  • Eiko Yoneki, Jon Crowcroft, Pietro Lio', Neil Walton, Milan Vojnovic, and Roger Whitaker

    Electronic social networks are a relatively new pervasive phenomenon that has changed the way in which we communicate and interact. They are now supporting new applications, leading to new trends and posing new challenges. The workshop titled "Future of Social Networking: Experts from Industry and Academia" took place in Cambridge on November 18, 2010 to explore how the future of social networking may develop and be exploited in new technologies and systems. We provide a summary of this event and some observations on the key outcomes.

  • Teemu Koponen, Scott Shenker, Hari Balakrishnan, Nick Feamster, Igor Ganichev, Ali Ghodsi, P. Brighten Godfrey, Nick McKeown, Guru Parulkar, Barath Raghavan, Jennifer Rexford, Somaya Arianfar, and Dmitriy Kuptsov

    We argue that the biggest problem with the current Internet architecture is not a particular functional deficiency, but its inability to accommodate innovation. To address this problem we propose a minimal architectural “framework” in which comprehensive architectures can reside. The proposed Framework for Internet Innovation (FII) — which is derived from the simple observation that network interfaces should be extensible and abstract — allows for a diversity of architectures to coexist, communicate, and evolve. We demonstrate FII’s ability to accommodate diversity and evolution with a detailed examination of how information flows through the architecture and with a skeleton implementation of the relevant interfaces.

  • kc claffy

    On February 10-12, 2011, CAIDA hosted the third Workshop on Active Internet Measurements (AIMS-3) as part of our series of Internet Statistics and Metrics Analysis (ISMA) workshops. As with the previous two AIMS workshops, the goals were to further our understanding of the potential and limitations of active measurement research and infrastructure in the wide-area Internet, and to promote cooperative solutions and coordinated strategies to address future data needs of the network and security research communities. For three years, the workshop has fostered interdisciplinary conversation among researchers, operators, and government, focused on analysis of goals, means, and emerging issues in active Internet measurement projects. The first workshop emphasized discussion of existing hardware and software platforms for macroscopic measurement and mapping of Internet properties, in particular those related to cybersecurity. The second workshop included more performance evaluation and data-sharing approaches. This year we expanded the workshop agenda to include active measurement topics of more recent interest: broadband performance; gauging IPv6 deployment; and measurement activities in international research networks.

  • kc claffy

    Exhaustion of the Internet addressing authority’s (IANA) available IPv4 address space, which occurred in February 2011, is finally exerting exogenous pressure on network operators to begin to deploy IPv6. There are two possible outcomes from this transition. IPv6 may be widely adopted and embraced, causing many existing methods to measure and monitor the Internet to be ineffective. A second possibility is that IPv6 languishes, transition mechanisms fail, or performance suffers. Either scenario requires data, measurement, and analysis to inform technical, business, and policy decisions. We survey available data that have allowed limited tracking of IPv6 deployment thus far, describe additional types of data that would support better tracking, and offer a perspective on the challenging future of IPv6 evolution.

  • S. Keshav

    Twenty years ago, when I was still a graduate student, going online meant firing up a high-speed 1200 baud modem and typing text on a Z19 glass terminal to interact with my university’s VAX 11/780 server. Today, this seems quaint, if not downright archaic. Fast forwarding twenty years from now, it seems very likely that reading newspapers and magazines on paper will seem equally quaint, if not downright wasteful. It is clear that the question is when, not if, CCR goes completely online.

    CCR today provides two types of content: editorials and technical articles. Both are selected to be relevant, novel, and timely. By going online only, we would certainly not give up these qualities. Instead, by not being tied to the print medium, we could publish articles as they were accepted, instead of waiting for a publication deadline. This would reduce the time-to-publication from the current 16 weeks to less than 10 weeks, making the content even more timely.

    Freeing CCR from print has many other benefits. We could publish content that goes well beyond black-and-white print and graphics. For example, graphs and photographs in papers would no longer have to be black-and-white. But that is not all: it would be possible, for example, to publish professional-quality videos of paper presentations at the major SIG conferences. We could also publish and archive the software and data sets for accepted papers. Finally, it would allow registered users to receive alerts when relevant content was published. Imagine the benefits from getting a weekly update from CCR with pointers to freshly-published content that is directly relevant to your research!

    These potential benefits can be achieved at little additional cost, using off-the-shelf technologies. They would, however, significantly change the CCR experience for SIG members. Therefore, before we plunge ahead, we’d like to know what you think. Do send your comments to me at: ccr-edit@uwaterloo.ca

  • Martin Heusse, Sears A. Merritt, Timothy X. Brown, and Andrzej Duda

    Many papers explain the drop of download performance when two TCP connections in opposite directions share a common bottleneck link by ACK compression, the phenomenon in which download ACKs arrive in bursts so that TCP self clocking breaks. Efficient mechanisms to cope with the performance problem exist, so we do not propose yet another solution. We instead thoroughly analyze the interactions between connections and show that ACK compression actually only arises in a perfectly symmetrical setup and has little impact on performance. We provide a different explanation of the interactions—data pendulum, a core phenomenon that we analyze in this paper. In the data pendulum effect, data and ACK segments alternately fill only one of the link buffers (on the upload or download side) at a time, but almost never both of them. We analyze the effect in the case in which buffers are structured as arrays of bytes and derive an expression for the ratio between the download and upload throughput. Simulation results and measurements confirm our analysis and show how appropriate buffer sizing alleviates performance degradation. We also consider the case of buffers structured as arrays of packets and show that it amplifies the effects of data pendulum.

    D. Papagiannaki
  • Nasif Ekiz, Abuthahir Habeeb Rahman, and Paul D. Amer

    While analyzing CAIDA Internet traces of TCP traffic to detect instances of data reneging, we frequently observed seven misbehaviors in the generation of SACKs. These misbehaviors could result in a data sender mistakenly thinking data reneging occurred. With one misbehavior, the worst case could result in a data sender receiving a SACK for data that was transmitted but never received. This paper presents a methodology and its application to test a wide range of operating systems using TBIT to fingerprint which ones misbehave in each of the seven ways. Measuring the performance loss due to these misbehaviors is outside the scope of this study; the goal is to document the misbehaviors so they may be corrected. One can conclude that the handling of SACKs while simple in concept is complex to implement.

    S. Saroiu
  • Shane Alcock and Richard Nelson

    This paper presents the results of an investigation into the application flow control technique utilised by YouTube. We reveal and describe the basic properties of YouTube application flow control, which we term block sending, and show that it is widely used by YouTube servers. We also examine how the block sending algorithm interacts with the flow control provided by TCP and reveal that the block sending approach was responsible for over 40% of packet loss events in YouTube flows in a residential DSL dataset and the retransmission of over 1% of all YouTube data sent after the application flow control began. We conclude by suggesting that changing YouTube block sending to be less bursty would improve the performance and reduce the bandwidth usage of YouTube video streams.

    S. Moon
  • Marcus Lundén and Adam Dunkels

    In low-power wireless networks, nodes need to duty cycle their radio transceivers to achieve a long system lifetime. Counter-intuitively, in such networks broadcast becomes expensive in terms of energy and bandwidth since all neighbors must be woken up to receive broadcast messages. We argue that there is a class of traffic for which broadcast is overkill: periodic redundant transmissions of semi-static information that is already known to all neighbors, such as neighbor and router advertisements. Our experiments show that such traffic can account for as much as 20% of the network power consumption. We argue that this calls for a new communication primitive and present politecast, a communication primitive that allows messages to be sent without explicitly waking neighbors up. We have built two systems based on politecast: a low-power wireless mobile toy and a full-scale low-power wireless network deployment in an art gallery and our experimental results show that politecast can provide up to a four-fold lifetime improvement over broadcast.

    P. Levis
  • Xiang Cheng, Sen Su, Zhongbao Zhang, Hanchi Wang, Fangchun Yang, Yan Luo, and Jie Wang

    Virtualizing and sharing networked resources have become a growing trend that reshapes the computing and networking architectures. Embedding multiple virtual networks (VNs) on a shared substrate is a challenging problem on cloud computing platforms and large-scale sliceable network testbeds. In this paper we apply the Markov Random Walk (RW) model to rank a network node based on its resource and topological attributes. This novel topology-aware node ranking measure reflects the relative importance of the node. Using node ranking we devise two VN embedding algorithms. The first algorithm maps virtual nodes to substrate nodes according to their ranks, then embeds the virtual links between the mapped nodes by finding shortest paths with unsplittable paths and solving the multi-commodity flow problem with splittable paths. The second algorithm is a backtracking VN embedding algorithm based on breadth-first search, which embeds the virtual nodes and links during the same stage using node ranks. Extensive simulation experiments show that the topology-aware node rank is a better resource measure and the proposed RW-based algorithms increase the long-term average revenue and acceptance ratio compared to the existing embedding algorithms.

    S. Agarwal
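
    As a rough sketch of a resource- and topology-aware random-walk ranking of the kind described in the abstract above (a PageRank-style iteration with restarts biased towards resource-rich nodes, not the authors' exact formulation):

        def rank_nodes(adjacency, resource, damping=0.85, iters=100):
            """adjacency: dict node -> list of neighbors (every node has at least one).
            resource: dict node -> scalar resource measure, e.g. CPU capacity times the
            total bandwidth of the node's incident links (an assumed measure)."""
            nodes = list(adjacency)
            total = float(sum(resource[u] for u in nodes))
            restart = {u: resource[u] / total for u in nodes}   # biased restart vector
            rank = dict(restart)
            for _ in range(iters):
                nxt = {u: (1 - damping) * restart[u] for u in nodes}
                for u in nodes:
                    share = damping * rank[u] / len(adjacency[u])
                    for v in adjacency[u]:
                        nxt[v] += share
                rank = nxt
            return rank

    Virtual nodes would then be mapped to substrate nodes roughly in decreasing order of rank, along the lines of the first algorithm described in the abstract.
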