CCR Papers from January 2012

  • S. Keshav

    I'd like to devote this editorial to a description of the process we use to select and publish papers submitted to CCR. CCR publishes two types of papers: technical papers and editorials. I'll first describe the process for technical papers, then for editorials.

    Technical papers are submitted to the CCR online website (currently at http://blizzard.cs.uwaterloo.ca/ccr) which runs a modified version of Eddie Kohler's HOTCRP system. Authors are required to submit a paper in the standard SIGCOMM format with subject classifiers and keywords required by the ACM Digital Library. We restrict technical papers to six pages for two reasons. First, it prevents rejected conference papers from being trivially resubmitted to CCR. Second, it limits the load on area editors and reviewers, which is important given the quick turnaround we'd like for CCR. Some papers do need more than six pages. If so, authors should write to me and, if I find their argument convincing, I usually grant the request immediately. I also add a note to the online system so that Area Editors do not reject the paper for being over-length.
     
    Once a paper is in the system, I assign it to an Area Editor for review. If I have free time, I do this immediately after paper submission. If I'm backed up, which is true more often than I'd like, this happens in the week following the quarterly submission deadlines of March 1, June 1, September 1, and December 1. Area Editors are given seven weeks to obtain up to five reviews. Most papers receive comments from at least three reviewers but papers that are clearly not acceptable may be rejected with a single review.
     
    Reviewers judge papers along three axes: timeliness, clarity, and novelty; the range of scores is from one to five. Reviewers also summarize the contribution of the paper and provide detailed comments to improve paper quality. Finally, each reviewer suggests a potential paper outcome: accept, revise-and-resubmit, or reject. CCR's goal is to accept high-quality papers that are both novel and timely. Technical accuracy is necessary, of course, but we do not require papers to be as thorough in their evaluation as a flagship conference or a journal.
     
    Reviewers use the CCR online system to submit their reviews. After finalizing their own review, they are permitted to read other reviews and, if they wish, update their review. This tends to dampen outliers in review scores.
     
    Authors are automatically informed when each review is finalized and are permitted to rebut the review online. Some authors immediately rebut each review; others wait for all their reviews before responding. Authors typically respond to reviews with painstakingly detailed responses; it is truly remarkable to see how carefully each reviewer criticism is considered in these responses! Author rebuttals are viewable both by reviewers and the assigned Area Editor. Although reviewers are free to comment on the rebuttals or even modify their reviews based on the rebuttal, this option is seldom exercised.
     
    After seven to eight weeks it is time for Area Editors to make editorial decisions. The Area Editor reads the paper, its reviews, and the author rebuttals and decides whether the paper is to be rejected, accepted, or revised and resubmitted. The decision is entered as a comment to the paper. This decision may or may not be signed by the Area Editor, as they wish.
     
    If the paper is rejected, I send a formal rejection letter to the authors and the paper is put to rest. If the paper is accepted and the revisions are minor, then the authors are asked to prepare the camera-ready copy and upload it to the publisher's website for publication. On the other hand, if the revisions are major, then the Area Editor typically asks the authors to revise the paper for re-review before the author is allowed to generate camera-ready copy. In either case, the Area Editor writes a public review for publication with the paper.
     
    Revise-and-resubmit decisions can be tricky. If the revisions are minor and the authors can turn things around, they are allowed to resubmit the paper for review in the same review cycle. This needs my careful attention to ensure that the authors and the Area Editor are moving things along in time for publication. Major revisions are usually submitted to the next issue. I try to ensure that the paper is sent back for re-review by the same Area Editor (in some cases, this may happen after the Area Editor has stepped off the Editorial Board).
     
    Editorials are handled rather differently: I read and approve editorials myself. If I approve an editorial, it is published; if not, I send the authors a non-anonymous review explaining why I rejected it. I judge editorials for timeliness, breadth, potential for controversy, and whether they are instructive. As a rule, however, given the role of CCR as a newsletter, all reports on conferences and workshops are automatically accepted; this is an easy way for you to pile up your CCR publications.
     
    About a month and a half before the issue publication date, we have a full set of papers approved for publication. My admin assistant, the indefatigable Gail Chopiak, uses this list to prepare a Table of Contents to send to Lisa Tolles, our contact with the ACM-Sheridan service, Sheridan Printing Co. Lisa contacts authors with a URL where they upload their camera-ready papers. Lisa and her associates work individually with authors to make sure that their papers meet CCR and ACM publication standards. When all is in order, the issue is assembled and the overall PDF is ready.
     
    At this point, I am sent the draft issue to suggest minor changes, such as in paper ordering, or in the choice of advertisements that go into the issue. I also approve any changes to the masthead and the boilerplate that goes in the inside front and back covers. Once the PDFs are finalized, the SIGCOMM online editor uploads these PDFs to the ACM Digital Library for CCR Online. Finally, the issue is sent to print and, after about a month or so, it is mailed to SIGCOMM members.
     
    I hope this glimpse into the publication process helps you understand the roles played by the Area Editors, the reviewers, Sheridan staff, the SIGCOMM online editor, and myself in bringing each issue to you. My sincere thanks to everyone who volunteers their valuable time to make CCR one of the best and also one of the best-read newsletters in ACM!
  • Partha Kanuparthy, Constantine Dovrolis, Konstantina Papagiannaki, Srinivasan Seshan, Peter Steenkiste

    Common Wireless LAN (WLAN) pathologies include low signal-to-noise ratio, congestion, hidden terminals or interference from non-802.11 devices and phenomena. Prior work has focused on the detection and diagnosis of such problems using layer-2 information from 802.11 devices and special purpose access points and monitors, which may not be generally available. Here, we investigate a user-level approach: is it possible to detect and diagnose 802.11 pathologies with strictly user-level active probing, without any cooperation from, and without any visibility in, layer-2 devices? In this paper, we present preliminary but promising results indicating that such diagnostics are feasible.

    Renata Teixeira
  • Nadi Sarrar, Steve Uhlig, Anja Feldmann, Rob Sherwood, Xin Huang

    Internet traffic has Zipf-like properties at multiple aggregation levels. These properties suggest the possibility of offloading most of the traffic from a complex controller (e.g., a software router) to a simple forwarder (e.g., a commodity switch), by letting the forwarder handle a very limited set of flows: the heavy hitters. As the volume of traffic from a set of flows is highly dynamic, maintaining a reliable set of heavy hitters over time is challenging. This is especially true when we face a volume limit in the non-offloaded traffic in combination with a constraint in the size of the heavy hitter set or its rate of change. We propose a set selection strategy that takes advantage of the properties of heavy hitters at different time scales. Based on real Internet traffic traces, we show that our strategy is able to offload most of the traffic while limiting the rate of change of the heavy hitter set, suggesting the feasibility of alternative router designs.
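    To illustrate the basic idea behind heavy-hitter offloading, here is a minimal sketch (not the paper's algorithm, which also bounds the set's rate of change over time): greedily select the largest flows until a target fraction of the traffic volume is covered. The flow names and byte counts are invented illustrative data.

```python
def heavy_hitters(flow_bytes, target_fraction=0.9, max_set_size=None):
    """Return the largest flows whose cumulative volume reaches the target
    fraction of total traffic, plus the fraction actually covered."""
    total = sum(flow_bytes.values())
    selected, covered = [], 0
    for flow, volume in sorted(flow_bytes.items(), key=lambda kv: -kv[1]):
        if covered / total >= target_fraction:
            break
        if max_set_size is not None and len(selected) >= max_set_size:
            break
        selected.append(flow)
        covered += volume
    return selected, covered / total

# Zipf-like toy traffic: a few flows carry most of the bytes.
flows = {"f1": 5000, "f2": 3000, "f3": 900, "f4": 70, "f5": 30}
hitters, coverage = heavy_hitters(flows, target_fraction=0.85)
# Two of five flows cover about 89% of the volume.
```

    With Zipf-like traffic, a small offloaded set covers most of the volume; the hard part, which the paper addresses, is keeping that set stable as flow volumes shift.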

    Jia Wang
  • Thomas Bonald, James W. Roberts

    We demonstrate that the Internet has a formula linking demand, capacity and performance that in many ways is the analogue of the Erlang loss formula of telephony. Surprisingly, this formula is none other than the Erlang delay formula. It provides an upper bound on the probability a flow of given peak rate suffers degradation when bandwidth sharing is max-min fair. Apart from the flow rate, the only relevant parameters are link capacity and overall demand. We explain why this result is valid under a very general and realistic traffic model and discuss its significance for network engineering.
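    The Erlang delay (Erlang C) formula referenced above has a standard closed form; as a minimal illustration, here is a direct implementation of that classical formula. How the paper maps flow peak rate, link capacity, and overall demand onto the formula's parameters is its contribution and is not reproduced here.

```python
from math import factorial

def erlang_c(servers, load):
    """Erlang delay (Erlang C) formula: probability that an arrival must
    wait in an M/M/c queue with `servers` servers and offered load `load`
    (in Erlangs). An overloaded system (load >= servers) waits w.p. 1."""
    if load >= servers:
        return 1.0
    numerator = (load ** servers / factorial(servers)) * (servers / (servers - load))
    denominator = sum(load ** k / factorial(k) for k in range(servers)) + numerator
    return numerator / denominator

# Sanity check: for a single server, P(wait) equals the utilization.
p1 = erlang_c(1, 0.5)   # 0.5
p2 = erlang_c(2, 1.0)   # two servers at 50% utilization each
```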

    Augustin Chaintreau
  • Alberto Dainotti, Roman Amman, Emile Aben, Kimberly C. Claffy

    Unsolicited one-way Internet traffic, also called Internet background radiation (IBR), has been used for years to study malicious activity on the Internet, including worms, DoS attacks, and scanning address space looking for vulnerabilities to exploit. We show how such traffic can also be used to analyze macroscopic Internet events that are unrelated to malware. We examine two phenomena: country-level censorship of Internet communications described in recent work, and natural disasters (two recent earthquakes). We introduce a new metric of local IBR activity based on the number of unique IP addresses per hour contributing to IBR. The advantage of this metric is that it is not affected by bursts of traffic from a few hosts. Although we have only scratched the surface, we are convinced that IBR traffic is an important building block for comprehensive monitoring, analysis, and possibly even detection of events unrelated to the IBR itself. In particular, IBR offers the opportunity to monitor the impact of events such as natural disasters on network infrastructure, and in particular reveals a view of events that is complementary to many existing measurement platforms based on (BGP) control-plane views or targeted active ICMP probing.
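    The metric described above, unique source IPs per hour, can be sketched in a few lines. This is an illustrative reimplementation under the assumption that packets arrive as (timestamp, source IP) pairs; the sample packets are made up.

```python
from collections import defaultdict

def unique_ips_per_hour(packets):
    """Count distinct source IPs in each hour bucket. Unlike raw packet
    counts, this is insensitive to traffic bursts from a few hosts."""
    buckets = defaultdict(set)
    for ts, src_ip in packets:
        buckets[ts // 3600].add(src_ip)
    return {hour: len(ips) for hour, ips in buckets.items()}

packets = [
    (10, "1.2.3.4"), (20, "1.2.3.4"), (30, "5.6.7.8"),  # hour 0: one host bursts
    (3700, "1.2.3.4"),                                   # hour 1
]
counts = unique_ips_per_hour(packets)  # {0: 2, 1: 1}
```

    Note how the burst from 1.2.3.4 in hour 0 contributes only one unit to the count, which is the property the abstract highlights.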

    Sharad Agarwal
  • Phillipa Gill, Michael Schapira, Sharon Goldberg

    Researchers studying the interdomain routing system, its properties and new protocols, face many challenges in performing realistic evaluations and simulations. Modeling decisions with respect to AS-level topology, routing policies and traffic matrices are complicated by a scarcity of ground truth for each of these components. Moreover, scalability issues arise when attempting to simulate over large (although still incomplete) empirically-derived AS-level topologies. In this paper, we discuss our approach for analyzing the robustness of our results to incomplete empirical data. We do this by (1) developing fast simulation algorithms that enable us to (2) run multiple simulations with varied parameters that test the sensitivity of our research results.

    Yin Zhang
  • Francesco Fusco, Xenofontas Dimitropoulos, Michail Vlachos, Luca Deri

    Long-term historical analysis of captured network traffic is a topic of great interest in network monitoring and network security. A critical requirement is the support for fast discovery of packets that satisfy certain criteria within large-scale packet repositories. This work presents the first indexing scheme for network packet traces based on compressed bitmap indexing principles. Our approach supports very fast insertion rates and results in compact index sizes. The proposed indexing methodology builds upon libpcap, the de-facto reference library for accessing packet-trace repositories. Our solution is therefore backward compatible with any solution that uses the original library. We experience impressive speedups on packet-trace search operations: our experiments suggest that the index-enabled libpcap may reduce the packet retrieval time by more than 1100 times.
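    As a toy illustration of the bitmap-indexing principle behind this work: keep one bitmap per distinct attribute value, with bit i set when packet i matches, so lookups avoid scanning the trace. Real systems, including the one described, compress these bitmaps; this sketch uses plain Python integers and invented addresses.

```python
from collections import defaultdict

class BitmapIndex:
    """One bitmap (arbitrary-precision int) per indexed attribute value."""

    def __init__(self):
        self.bitmaps = defaultdict(int)

    def add(self, position, value):
        # Set bit `position` in the bitmap for this value.
        self.bitmaps[value] |= 1 << position

    def lookup(self, value):
        """Return packet positions whose indexed attribute equals value."""
        bm = self.bitmaps.get(value, 0)
        return [i for i in range(bm.bit_length()) if (bm >> i) & 1]

index = BitmapIndex()
for pos, src in enumerate(["10.0.0.1", "10.0.0.2", "10.0.0.1", "10.0.0.3"]):
    index.add(pos, src)
hits = index.lookup("10.0.0.1")  # positions 0 and 2
```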

    Philip Levis
  • Murtaza Motiwala, Amogh Dhamdhere, Nick Feamster, Anukool Lakhina

    We develop a holistic cost model that operators can use to help evaluate the costs of various routing and peering decisions. Using real traffic data from a large carrier network, we show how network operators can use this cost model to significantly reduce the cost of carrying traffic in their networks. We find that adjusting the routing for a small fraction of total flows (and total traffic volume) significantly reduces cost in many cases. We also show how operators can use the cost model both to evaluate potential peering arrangements and for other network operations problems.

    Augustin Chaintreau
  • Maxim Podlesny, Carey Williamson

    ADSL and cable connections are the prevalent technologies available from Internet Service Providers (ISPs) for residential Internet access. Asymmetric access technologies such as these offer high download capacity, but moderate upload capacity. When the Transmission Control Protocol (TCP) is used on such access networks, performance degradation can occur. In particular, sharing a bottleneck link with different upstream and downstream capacities among competing TCP flows in opposite directions can degrade the throughput of the higher speed link. Despite many research efforts to solve this problem in the past, there is no solution that is both highly effective and easily deployable in residential networks. In this paper, we propose an Asymmetric Queueing (AQ) mechanism that enables full utilization of the bottleneck access link in residential networks with asymmetric capacities. The extensive simulation evaluation of our design shows its effectiveness and robustness in a variety of network conditions. Furthermore, our solution is easy to deploy and configure in residential networks.

    Renata Teixeira
  • Yoo Chung

    Distributed denial of service attacks are often considered just a security problem. While this may be the way to view the problem with the Internet of today, perhaps new network architectures attempting to address the issue should view it as a scalability problem. In addition, they may need to approach the problem based on a rigorous foundation.

  • Jonathon Duerig, Robert Ricci, Leigh Stoller, Matt Strum, Gary Wong, Charles Carpenter, Zongming Fei, James Griffioen, Hussamuddin Nasir, Jeremy Reed, Xiongqi Wu
  • John W. Byers, Jeffrey C. Mogul, Fadel Adib, Jay Aikat, Danai Chasaki, Ming-Hung Chen, Marshini Chetty, Romain Fontugne, Vijay Gabale, László Gyarmati, Katrina LaCurts, Qi Liao, Marc Mendonca, Trang Cao Minh, S.H. Shah Newaz, Pawan Prakash, Yan Shvartzshnaider, Praveen Yalagandula, Chun-Yu Yang

    This document provides reports on the presentations at the SIGCOMM 2011 Conference, the annual conference of the ACM Special Interest Group on Data Communication (SIGCOMM).
