Computer Communication Review: Papers

  • Fernando Garcia Calvo, Javier Lucendo de Gregorio, Fernando Soto de Toro, Joaquin Munoz Lopez, Teo Mayo Muniz, Jose Maria Miranda, Oscar Gavilan Ballesteros

    The Consumer Electronics Show, held every year in Las Vegas in early January, continues to be an important fair in the consumer sector, even though the major manufacturers increasingly prefer to announce their new products at their own dedicated events in order to gain greater impact. Only the leading TV brands still unveil their full arsenal of new models for the coming year there. Despite this, the show continues to break records: there were over 150,000 visitors (from more than 150 countries), the number of new products announced exceeded 20,000, and the fair occupied over 2 million square meters.

  • Roch Guérin, Olivier Bonaventure

    There have been many recent discussions within the computer science community on the relative roles of conferences and journals [1, 2, 3]. They clearly offer different forums for the dissemination of scientific and technical ideas, and much of the debate has been on whether and how to leverage both. These are important questions that every conference and journal ought to carefully consider, and the CoNEXT Steering Committee recently initiated a discussion on this topic. The main focus of the discussion was on how to maintain, on the one hand, the high quality of papers accepted for presentation at CoNEXT, and, on the other hand, improve the conference's ability to serve as a timely forum where new and exciting, but not necessarily polished or fully developed, ideas can be presented. Unfortunately, the stringent "quality control" that prevails during the paper selection process of selective conferences, including CoNEXT, often makes it difficult for interesting new ideas to break through. To make it, a paper needs to excel along three major dimensions: technical correctness and novelty, polish of exposition and motivation, and completeness of the results. Most, if not all, hot-off-the-press papers will fail in at least one of those dimensions. On the other hand, there are conferences and workshops that target short papers. HotNets is one such venue and has attracted short papers presenting new ideas. However, from a community viewpoint, HotNets has several limitations. First, HotNets is an invitation-only workshop. Coupled with a low acceptance rate, this limits the exposure of HotNets papers to the community. Second, HotNets has never been held outside North America. The SIGCOMM and CoNEXT workshops are also venues where short papers can be presented and discussed. However, these workshops are focused on a specific subdomain and usually do not attract a broad audience.
    The IMC short papers are a more interesting model because short and regular papers are mixed in a single-track conference. This ensures broad exposure for the short papers, but the scope of IMC is much narrower than that of CoNEXT. In order to address this intrinsic tension that plagues all selective conferences, CoNEXT 2013 is introducing a short paper category with submissions requested through a logically separate call-for-papers. The separate call-for-papers is meant to clarify to both authors and TPC members that short papers are to be judged using different criteria. Short papers will be limited to six (6) two-column pages in the standard ACM conference format. Most importantly, short papers are not meant to be condensed versions of standard-length papers, and neither are they targeted at traditional "position papers." In particular, papers submitted as regular (long) papers will not be eligible for consideration as short papers. Instead, short paper submissions are intended for high-quality technical works that either target a topical issue that can be covered in 6 pages, or introduce a novel but not fully fleshed out idea that can benefit from the feedback that early exposure can provide. Short papers will be reviewed and selected through a process distinct from that of long papers, based on how good a match they are for the above criteria. As alluded to, this separation is meant to address the inherent dilemma faced by highly selective conferences, where reviewers typically approach the review process looking for reasons to reject a paper (how high are the odds that a paper is in the top 10-15%?). For that purpose, Program Committee members will be reminded that completeness of the results should NOT be a criterion used when assessing short papers. Similarly, while an unreadable paper is obviously not one that should be accepted, polish should not be a major consideration either.
    As long as the paper manages to convey its idea, a choppy presentation should not by itself be grounds for rejecting it. Finally, while technical correctness is important, papers that perhaps claim more than they should are not to be disqualified simply on those grounds. As a rule, the selection process should focus on the "idea" presented in the paper. If the idea is new, or interesting, or unusual, and is not fundamentally broken, the paper should be considered. Eventual acceptance will ultimately depend on logistical constraints (how many such papers can be presented), but the goal is to offer a venue at CoNEXT where new, emerging ideas can be presented and receive constructive feedback. The CoNEXT web site provides additional information on the submission process for short (and regular) papers.

  • Dina Papagiannaki

    A new year begins and a new challenge needs to be undertaken. Life is full of challenges, but those we invite upon ourselves have something special of their own. In that spirit, I am really happy to be taking over as the editor of ACM SIGCOMM Computer Communication Review. Keshav has done a tremendous job making CCR a high-quality publication that unites our community. The combination of peer-reviewed papers and editorial submissions provides the ground not only to publish the latest scientific achievements in our field, but also to position them within the context of our ever-changing technological landscape.

    With that in mind, I would like to continue encouraging the submission of the latest research results to CCR. I would also like to try to broaden its reach. A little less than two years ago, I changed jobs and took the position of scientific director responsible for Internet, systems, and networks at Telefonica Research in Barcelona, Spain*. I am now part of one of the most diverse research groups I have ever known. The team comprises researchers with expertise in multimedia analysis, data mining, machine learning, human-computer interaction, distributed systems, network economics, wireless networking, security, and privacy. One could see this as a research team that could potentially address problems at all layers of the stack. As such, I am learning a great deal from the team every day.
    I would love to bring that broader perspective to CCR and enrich the way we see and use telecommunications infrastructure. I would like to encourage the submission of editorials from other disciplines of computer science that build and deploy technologies over the Internet.
    There are so many questions to be addressed once you start thinking about networking fueling smart cities, smart utilities, and novel services that will enable our younger generation to learn the skills they need and put them into practice in a world that suffers from high unemployment rates. Having three young children makes me wonder about ways we could use all the work we have done in the past 20 years to enable sustainable societies in the future. The Internet will be the skeleton that makes this possible, leading to true globalization. We have so much more to offer.
    In parallel, I would love to use CCR as a vehicle to disseminate lessons learnt and current best practices. With the help of the editorial team, we are going to include an interview section in CCR, where we will be asking prominent members of our community for their perspective on the main lessons they have learnt from their past work, as well as their outlook for the future.
    In this January issue, you will find not only technical contributions but also reports from workshops that took place in recent months. In addition, I have invited an editorial covering the Mobile World Congress 2012. Recent trends in technology influence and enrich our research, and this is a first step in trying to bridge those two worlds. If you happen to attend venues such as MWC, CES, or standardization bodies, please do send me your editorial notes on your impressions from those events.
    Finally, I would like to extend my sincerest thanks to Prof. David Wetherall, who has decided to step down from the editorial board, and to welcome Dr. Katerina Argyraki, Dr. Hitesh Ballani, Prof. Fabián Bustamante, Prof. Marco Mellia, and Prof. Joseph Camp, who are joining our editorial team. With their expertise and motivation we are bound to do great things in 2013! With all that, I sincerely hope that you will enjoy this issue, and I look forward to hearing any further suggestions to make CCR as timely and impactful as possible.
  • Yeonhee Lee, Youngseok Lee

    Internet traffic measurement and analysis has long been used to characterize network usage and user behaviors, but it faces a scalability problem under the explosive growth of Internet traffic and high-speed access. Scalable Internet traffic measurement and analysis is difficult because a large data set requires matching computing and storage resources. Hadoop, an open-source computing platform comprising MapReduce and a distributed file system, has become a popular infrastructure for massive data analytics because it facilitates scalable data processing and storage services on a distributed computing system consisting of commodity hardware. In this paper, we present a Hadoop-based traffic monitoring system that performs IP, TCP, HTTP, and NetFlow analysis of multiple terabytes of Internet traffic in a scalable manner. In experiments on a 200-node testbed, we achieved 14 Gbps throughput for 5 TB files with IP and HTTP-layer analysis MapReduce jobs. We also discuss the performance issues related to traffic-analysis MapReduce jobs.
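    The analyses above are expressed as Hadoop MapReduce jobs. As a rough sketch of that pattern (illustrative only, not the authors' code), the following pure-Python fragment mimics the map, shuffle, and reduce phases for a per-source-IP byte counter over toy flow records:

```python
from collections import defaultdict

# Toy flow records: (source IP, bytes sent). In the paper's system these
# would be packet/flow records stored in HDFS; here they are a plain list.
records = [
    ("10.0.0.1", 1500), ("10.0.0.2", 40),
    ("10.0.0.1", 900),  ("10.0.0.3", 60),
]

def map_phase(record):
    """Map: emit (key, value) pairs -- bytes keyed by source IP."""
    src_ip, nbytes = record
    yield src_ip, nbytes

def reduce_phase(key, values):
    """Reduce: aggregate all values for one key -- total bytes per IP."""
    return key, sum(values)

# Shuffle: group intermediate pairs by key (Hadoop does this automatically
# between the map and reduce phases).
grouped = defaultdict(list)
for rec in records:
    for key, value in map_phase(rec):
        grouped[key].append(value)

totals = dict(reduce_phase(k, vs) for k, vs in grouped.items())
print(totals)  # {'10.0.0.1': 2400, '10.0.0.2': 40, '10.0.0.3': 60}
```

Hadoop parallelizes the map and reduce phases across the cluster; the program structure, however, is exactly this pair of functions.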

    Sharad Agarwal
  • Yaoqing Liu, Syed Obaid Amin, Lan Wang

    The size of the global Routing Information Base (RIB) has been increasing at an alarming rate. This directly leads to the rapid growth of the global Forwarding Information Base (FIB) size, which raises serious concerns for ISPs as the FIB memory in line cards is much more expensive than regular memory modules and it is very costly to increase this memory capacity frequently for all the routers in an ISP. One potential solution is to install only the most popular FIB entries into the fast memory (i.e., a FIB cache), while storing the complete FIB in slow memory. In this paper, we propose an effective FIB caching scheme that achieves a considerably higher hit ratio than previous approaches while preventing the cache-hiding problem. Our experimental results show that with only 20K prefixes in the cache (5.36% of the actual FIB size), the hit ratio of our scheme is higher than 99.95%. Our scheme can also handle cache misses, cache replacement and routing updates efficiently.
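    As an illustration of the general FIB-caching idea (a hypothetical sketch, not the paper's actual scheme, which additionally transforms prefixes to prevent the cache-hiding problem), the following shows a small LRU cache of FIB entries backed by a complete FIB in slow memory:

```python
from collections import OrderedDict

class FIBCache:
    """Minimal LRU cache of FIB entries (prefix -> next hop)."""
    def __init__(self, capacity, full_fib):
        self.capacity = capacity
        self.full_fib = full_fib    # complete FIB in "slow" memory
        self.cache = OrderedDict()  # fast-memory cache, kept in LRU order
        self.hits = self.misses = 0

    def lookup(self, prefix):
        if prefix in self.cache:
            self.hits += 1
            self.cache.move_to_end(prefix)   # mark as most recently used
            return self.cache[prefix]
        self.misses += 1
        next_hop = self.full_fib[prefix]     # slow-path lookup on a miss
        self.cache[prefix] = next_hop
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least recently used
        return next_hop

fib = {"10.0.0.0/8": "A", "10.1.0.0/16": "B", "192.168.0.0/16": "C"}
cache = FIBCache(capacity=2, full_fib=fib)
for p in ["10.0.0.0/8", "10.0.0.0/8", "10.1.0.0/16", "10.0.0.0/8"]:
    cache.lookup(p)
print(cache.hits, cache.misses)  # 2 2
```

With skewed traffic, as the paper's traces show, even a small cache yields a very high hit ratio; the hard part the paper addresses is doing this safely for longest-prefix-match semantics.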

    Fabián E. Bustamante
  • Robert Beverly, Mark Allman

    The computer science research paper review process is largely human and time-intensive. More worrisome, review processes are frequently questioned, and often non-transparent. This work advocates applying computer science methods and tools to the computer science review process. As an initial exploration, we data mine the submissions, bids, reviews, and decisions from a recent top-tier computer networking conference. We empirically test several common hypotheses, including the existence of readability, citation, call-for-paper adherence, and topical bias. From our findings, we hypothesize review process methods to improve fairness, efficiency, and transparency.
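    As an example of the kind of feature such a study might mine from submissions (our assumption for illustration, not necessarily the paper's measure), a classic readability score can be computed with a crude vowel-group syllable heuristic:

```python
import re

def flesch_reading_ease(text):
    """Flesch reading-ease score (higher = easier to read).
    Syllables are approximated by counting vowel groups per word."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(max(1, len(re.findall(r"[aeiouyAEIOUY]+", w)))
                    for w in words)
    return (206.835 - 1.015 * len(words) / len(sentences)
                    - 84.6 * syllables / len(words))

simple = "We test the code. It works well."
dense = ("Notwithstanding methodological heterogeneity, comprehensive "
         "bibliometric characterization necessitates multidimensional "
         "analysis.")
print(flesch_reading_ease(simple) > flesch_reading_ease(dense))  # True
```

Scoring every submission this way, then correlating scores with accept/reject decisions, is one concrete way to test for a readability bias.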

    Sharad Agarwal
  • Mark Allman

    While there has been much buzz in the community about the large depth of queues throughout the Internet—the so-called “bufferbloat” problem—there has been little empirical understanding of the scope of the phenomenon. Yet the supposed problem is being used as input to engineering decisions about the evolution of protocols. While we know from wide-scale measurements that bufferbloat can happen, we have no empirically based understanding of how often it does happen. In this paper we use passive measurements to assess the bufferbloat phenomenon.
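    A common passive heuristic for gauging queueing delay, shown here as a hedged sketch rather than the paper's exact methodology, is to subtract a flow's minimum observed RTT from each RTT sample:

```python
# RTT samples (ms) for one flow, as a passive monitor might observe them.
rtts_ms = [40, 42, 41, 180, 260, 55, 43]

# Heuristic: the flow's minimum RTT approximates the queue-free path delay,
# so each sample minus that minimum estimates the queueing (bufferbloat)
# component of the delay.
base_ms = min(rtts_ms)
queueing_ms = [r - base_ms for r in rtts_ms]
print(max(queueing_ms))  # 220
```

Applied across many flows and vantage points, the distribution of such estimates indicates how often large standing queues actually occur.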

    Nikolaos Laoutaris
  • P. Brighten Godfrey

    This article captures some of the discussion and insights from this year's ACM Workshop on Hot Topics in Networks (HotNets-XI).

  • Yan Grunenberger, Jonathan M. Smith

    We attended the 2012 Mobile World Congress in Barcelona, Spain. This note reports on some of our observations that we believe might be relevant to the SIGCOMM community.

  • Engin Arslan, Murat Yuksel, Mehmet Hadi Gunes

    Management and automated configuration of large-scale networks is one of the crucial issues for Internet Service Providers (ISPs). Since wrong configurations may lead to loss of an enormous amount of customer traffic, highly experienced network administrators are typically the ones who are trusted for the management and configuration of a running ISP network. We frame the management and experimentation of a network as a "game" for training network administrators without having to risk the network operation. The interactive environment treats the trainee network administrators as players of a game and tests them with various network failures or dynamics.

  • Arjuna Sathiaseelan, Jon Crowcroft

    The Computer Laboratory, University of Cambridge hosted a workshop on "Internet on the Move" on September 22, 2012. The objective of the workshop was to bring together academia, industry, and regulators to discuss the challenges in realizing the notion of ubiquitous mobile Internet. This editorial provides a general overview of the issues discussed around enabling universal mobile coverage and summarises some of the solutions that have been proposed to address the problem of achieving ubiquitous mobile connectivity.

  • Jennifer Rexford, Pamela Zave

    A workshop on Abstractions for Network Services, Architecture, and Implementation brought together researchers interested in creating better abstractions for creating and analyzing networked services and network architectures. The workshop took place at DIMACS on May 21-23, 2012. This report summarizes the presentations and discussions that took place at the workshop, organized by areas of abstractions such as layers, domains, and graph properties.

  • S. Keshav

    I considered many ideas for my last CCR editorial but, in the end, decided to write about something that I think I share with every reader of CCR, yet is something we rarely acknowledge even in conversation, let alone in print: the joy of research.

    For me, research is the process of exploring new ideas, formulating problems in areas yet undefined, and then using our ever-expanding toolkit of algorithms, technologies, and theories to solve them. I find this process to be deep, satisfying, and fun. It is fun to explore new ideas, fun to learn new tools, techniques, and theories, and fun to solve puzzles. I'm especially delighted during that brief, sharp, shining moment when it is as if a puzzle piece has clicked into place and confusion is transformed into simplicity. It is this that keeps me fueled as a researcher; it is the direct experience of the fun of research that converts the best of our students to our ranks.
    To be sure, there are many other ways to have fun. One can climb mountains or hike forbidding landscapes; swim the waves or fly from continent to continent in search of exotic cuisines. I have done some of these, but find them all, to some degree, unsatisfying. These experiences are intense but ephemeral. Besides, it is hard to justify that they have any socially redeeming value. In contrast, research, especially the kind of work that is both theoretically challenging yet practically applicable, is not only fun but also worthwhile.
    Of course, not all aspects of research are fun. Behind each sweet moment of success there can be many dreary hours of work, with little guarantee that a hunch will pan out. Each idea carried into practice, each paper accepted for publication, and each research project that benefits society builds on many discarded ideas, rejected papers, and failed projects. Yet, even in the face of these failures, I feel that the process itself is fun. I sympathize with Oscar Wilde, who wrote “We are all in the gutter, but some of us are looking at the stars.”
    I think every researcher, at some level, has a direct understanding of what I mean by the joy of research. I know this because our shared experience binds us despite barriers of geography, culture, and language. I find an instant rapport with other researchers when discussing each other’s work: the barriers to communication drop as we share our experiences, hunches, and ideas. The excitement simply shines through.
    Unfortunately, we do not often share our sense of joy with outsiders. Our ideas are usually hidden behind walls of jargon, inscrutable mathematical notation, and the arcane conventions of academic publishing. This does not serve us well: baffled funders and soporific students do not aid our cause. Instead, we should let our exuberance and joy, tempered with gratitude to our employers, motivate us to share our ideas. By interpreting our work for non-experts we open channels of communication with those who can directly benefit from our ideas and innovations. For many of us, this is one of the deepest motivations for our work.
    So, let the joy of research be your touchstone. Share this joy with your fellow researchers, but share it too with others, that the fire in your work may ignite a light elsewhere, and that your work benefit society at large.
  • Xuetao Wei, Nicholas Valler, B. Aditya Prakash, Iulian Neamtiu, Michalis Faloutsos, Christos Faloutsos

    If a false rumor propagates via Twitter while the truth propagates between friends on Facebook, which one will prevail? This question captures the essence of the problem we address here. We study the intertwined propagation of two competing "memes" (or viruses, rumors, products, etc.) in a composite network. A key novelty is the use of a composite network, which in its simplest model is defined as a single set of nodes with two distinct types of edges interconnecting them. Each meme spreads across the composite network in accordance with an SIS-like propagation model (a flu-like infection-recovery model). To study the epidemic behavior of our system, we formulate it as a non-linear dynamic system (NLDS). We develop a metric for each meme that is based on the eigenvalue of an appropriately constructed matrix and argue that this metric plays a key role in determining the "winning" meme. First, we prove that our metric determines the tipping point at which both memes eventually become extinct. Second, we conjecture that the meme with the strongest metric will most likely prevail over the other, and we show evidence of that via simulations in both real and synthetic composite networks. Our work is among the first to study the interplay between two competing memes in composite networks.
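    For intuition on eigenvalue-based epidemic metrics, here is a sketch of the classical single-network SIS threshold (not the paper's composite-network construction): a meme with infection rate beta and recovery rate delta can persist only if (beta/delta) times the largest adjacency eigenvalue exceeds 1.

```python
import numpy as np

# Toy undirected contact graph: a triangle (nodes 0,1,2) plus a pendant node 3.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Largest eigenvalue of the (symmetric) adjacency matrix.
lambda_max = max(np.linalg.eigvalsh(A))

def survives(beta, delta):
    """Classical SIS threshold: the meme can persist iff
    (beta / delta) * lambda_max > 1; otherwise it dies out."""
    return bool((beta / delta) * lambda_max > 1)

print(survives(beta=0.5, delta=0.5), survives(beta=0.05, delta=0.9))  # True False
```

The paper's metric plays the analogous role for a matrix built from the two edge types of the composite network, with each meme's fate governed by the resulting eigenvalues.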

    Augustin Chaintreau
  • Sebastian Zander, Lachlan L.H. Andrew, Grenville Armitage, Geoff Huston, George Michaelson

    The Teredo auto-tunnelling protocol allows IPv6 hosts behind IPv4 NATs to communicate with other IPv6 hosts. It is enabled by default on Windows Vista and Windows 7. But Windows clients are self-constrained: if their only IPv6 access is Teredo, they are unable to resolve host names to IPv6 addresses. We use web-based measurements to investigate the (latent) Teredo capability of Internet clients, and the delay introduced by Teredo. We compare this with native IPv6 and 6to4 tunnelling capability and delay. We find that only 6--7% of connections are from fully IPv6-capable clients, but an additional 15--16% of connections are from clients that would be IPv6-capable if Windows Teredo was not constrained. However, Teredo increases the median latency to fetch objects by 1--1.5 seconds compared to IPv4 or native IPv6, even with an optimally located Teredo relay. Furthermore, in many cases Teredo fails to establish a tunnel.

    Jia Wang
  • Ingmar Poese, Benjamin Frank, Georgios Smaragdakis, Steve Uhlig, Anja Feldmann, Bruce Maggs

    Today, a large fraction of Internet traffic is originated by Content Delivery Networks (CDNs). To cope with increasing demand for content, CDNs have deployed massively distributed infrastructures. These deployments pose challenges for CDNs, as they have to dynamically map end-users to appropriate servers without being fully aware of network conditions within an Internet Service Provider (ISP) or of the end-user's location. ISPs, in turn, struggle to cope with rapid traffic shifts caused by the dynamic server selection policies of the CDNs. The challenges that CDNs and ISPs face separately can be turned into an opportunity for collaboration. We argue that it is sufficient for CDNs and ISPs to coordinate only in server selection, not routing, in order to perform traffic engineering. To this end, we propose Content-aware Traffic Engineering (CaTE), which dynamically adapts server selection for content hosted by CDNs using ISP recommendations on small time scales. CaTE relies on the observation that by selecting an appropriate server among those available to deliver the content, the path of the traffic in the network can be influenced in a desired way. We present the design and implementation of a prototype to realize CaTE, and show how CDNs and ISPs can jointly take advantage of already deployed distributed hosting infrastructures and path diversity, as well as of the ISP's detailed view of the network status, without revealing sensitive operational information. Relying on tier-1 ISP traces, we show that CaTE allows CDNs to enhance the end-user experience while enabling an ISP to achieve several traffic engineering goals.
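    At its core, CaTE biases the CDN's choice among candidate servers using an ISP-provided ranking. A minimal sketch of that selection step, with a hypothetical `isp_rank` scoring function standing in for the recommendation interface:

```python
def cate_select(candidates, isp_rank):
    """Pick the candidate server whose network path the ISP currently
    prefers (lowest score). The CDN still controls the candidate set;
    the ISP only supplies a ranking, so detailed topology and load
    information need not be revealed to the other party."""
    return min(candidates, key=isp_rank)

# Hypothetical ISP path scores: lower = better path to the requesting user.
path_cost = {"s1": 5, "s2": 2, "s3": 9}
print(cate_select(["s1", "s2", "s3"], path_cost.get))  # s2
```

Because the ranking can change on small time scales, repeated selections steer traffic onto paths the ISP prefers without touching routing.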

    Renata Teixeira
  • Marko Zec, Luigi Rizzo, Miljenko Mikuc

    Can a software routing implementation compete in a field generally reserved for specialized lookup hardware? This paper presents DXR, an IPv4 lookup scheme based on transforming large routing tables into compact lookup structures which easily fit into cache hierarchies of modern CPUs. DXR supports various memory/speed tradeoffs and scales almost linearly with the number of CPU cores. The smallest configuration, D16R, distills a real-world BGP snapshot with 417,000 IPv4 prefixes and 213 distinct next hops into a structure consuming only 782 Kbytes, less than 2 bytes per prefix, and achieves 490 million lookups per second (MLps) in synthetic tests using uniformly random IPv4 keys on a commodity 8-core CPU. Some other DXR configurations exceed 700 MLps at the cost of increased memory footprint. DXR significantly outperforms a software implementation of DIR-24-8-BASIC, has better scalability, and requires less DRAM bandwidth. Our prototype works inside the FreeBSD kernel, which permits DXR to be used with standard APIs and routing daemons such as Quagga and XORP, and to be validated by comparing lookup results against the BSD radix tree.
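    The essence of range-based lookup schemes like DXR (sketched here in Python for illustration; DXR's actual structure is far more compact and cache-conscious) is to flatten the prefix table into sorted address ranges and answer longest-prefix-match queries by binary search:

```python
import bisect
import ipaddress

# A tiny prefix table: prefix -> next hop.
prefixes = {
    "0.0.0.0/0":      "default",
    "10.0.0.0/8":     "hopA",
    "10.1.0.0/16":    "hopB",
    "192.168.0.0/16": "hopC",
}
nets = [(ipaddress.ip_network(p), h) for p, h in prefixes.items()]

def covering_hop(addr):
    """Next hop of the longest prefix containing integer address addr."""
    best_len, best_hop = -1, None
    for n, h in nets:
        if (int(n.network_address) <= addr <= int(n.broadcast_address)
                and n.prefixlen > best_len):
            best_len, best_hop = n.prefixlen, h
    return best_hop

# Range starts: every prefix start, plus the address just past each
# non-default prefix (where the covering prefix changes back).
boundaries = sorted({int(n.network_address) for n, _ in nets} |
                    {int(n.broadcast_address) + 1 for n, _ in nets
                     if n.prefixlen})
starts = boundaries
hops = [covering_hop(b) for b in boundaries]

def lookup(ip):
    """Longest-prefix match via one binary search over range starts."""
    addr = int(ipaddress.ip_address(ip))
    return hops[bisect.bisect_right(starts, addr) - 1]

print(lookup("10.1.2.3"), lookup("10.9.9.9"), lookup("8.8.8.8"))  # hopB hopA default
```

Because the resulting arrays are small and contiguous, they sit well in CPU caches, which is the property DXR exploits to reach hundreds of millions of lookups per second.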

    Nikolaos Laoutaris
  • Jon Whiteaker, Fabian Schneider, Renata Teixeira, Christophe Diot, Augustin Soule, Fabio Picconi, Martin May

    The success of over-the-top (OTT) services reflects users' demand for personalization of digital services at home. ISPs propose fulfilling this demand with a cloud delivery model, which would simplify the management of the service portfolio and bring them additional revenue streams. We argue that this approach has many limitations that can be fixed by turning the home gateway into a flexible execution platform. We define requirements for such a "service-hosting gateway" and build a proof of concept prototype using a virtualized Intel Groveland system-on-a-chip platform. We discuss remaining challenges such as service distribution, security and privacy, management, and home integration.

    David Wetherall
  • Jeffrey C. Mogul, Lucian Popa

    Infrastructure-as-a-Service ("Cloud") data-centers intrinsically depend on high-performance networks to connect servers within the data-center and to the rest of the world. Cloud providers typically offer different service levels, and associated prices, for different sizes of virtual machine, memory, and disk storage. However, while all cloud providers provide network connectivity to tenant VMs, they seldom make any promises about network performance, and so cloud tenants suffer from highly-variable, unpredictable network performance. Many cloud customers do want to be able to rely on network performance guarantees, and many cloud providers would like to offer (and charge for) these guarantees. But nobody really agrees on how to define these guarantees, and it turns out to be challenging to define "network performance" in a way that is useful to both customers and providers. We attempt to bring some clarity to this question.

  • Tanja Zseby, kc claffy

    On May 14-15, 2012, CAIDA hosted the first international Workshop on Darkspace and Unsolicited Traffic Analysis (DUST 2012) to provide a forum for discussion of the science, engineering, and policy challenges associated with darkspace and unsolicited traffic analysis. This report captures threads discussed at the workshop and lists resulting collaborations.

  • Tobias Lauinger, Nikolaos Laoutaris, Pablo Rodriguez, Thorsten Strufe, Ernst Biersack, Engin Kirda

    Named Data Networking architectures have been proposed to improve various shortcomings of the current Internet architecture. A key part of these proposals is the capability of caching arbitrary content in arbitrary network locations. While caching has the potential to improve network performance, the data stored in caches can be seen as transient traces of past communication that attackers can exploit to compromise the users' privacy. With this editorial note, we aim to raise awareness of privacy attacks as an intrinsic and relevant issue in Named Data Networking architectures. Countermeasures against privacy attacks are subject to a trade-off between performance and privacy. We discuss several approaches to countermeasures representing different incarnations of this tradeoff, along with open issues to be looked at by the research community.

  • Sameer S. Tilak, Philip Papadopoulos

    Software Operations and Management (O&M), i.e., installing, configuring, and updating thousands of software components within a conventional data center, is a well-understood issue. Existing frameworks such as the Rocks toolkit have revolutionized the way system administrators deploy and manage large-scale compute clusters, storage servers, and visualization facilities. However, existing tools like Rocks are designed for a "friendly" data-center environment where stable power along with high-performance compute, storage, and networking is the norm. In contrast, sensor networks are embedded deeply within harsh physical environments where node failures, node mobility, and the idiosyncrasies of wireless networks are the norm. In addition, the device heterogeneity and resource-constrained nature (e.g., power, memory, CPU capability) of the sensor cyberinfrastructure (CI) are realities that must be addressed and reconciled. Although sensor CI must be more adaptable and more rapidly reconfigurable than its data-center equivalents, few if any of the existing software O&M tools and techniques have been adapted to the significantly more challenging environment of sensor networks. A more automated approach to software O&M would provide significant benefits to system builders, operators, and sensor network researchers. We argue that by starting with software O&M techniques developed for data centers, and then adapting and extending them to the world of resource-constrained sensor networks, we will be able to provide robust and scientifically reproducible mechanisms for defining the software footprint of individual sensors and networks of sensors. This paper describes the current golden-image-based software O&M practice in the Android world. We then propose an approach that adapts the Rocks toolkit to rapidly and reliably build complete Android environments (firmware flashes) at the individual sensor level and extends to large networks of diverse sensors.

  • Dimitri Papadimitriou, Lluís Fàbrega, Pere Vilà, Davide Careglio, Piet Demeester

    In this paper, we report the results of the workshop organized by the FP7 EULER project on measurement-based research and its associated methodology, experiments, and tools. This workshop aimed at gathering all Future Internet Research and Experimentation (FIRE) experimental research projects working on this theme. Participants were invited to present the usage of measurement techniques in their experiments, their developments of measurement tools, and their foreseeable needs with respect to new domains of research not currently addressed by existing measurement techniques and tools.

  • S. Keshav

    Networking researchers seem to fall into two nearly non-overlapping categories: those whose blood runs with the practical clarity of “rough consensus and running code” (in the words of Dave Clark) and those who worship, instead, at the altar of mathematical analysis. The former build systems that work, even work well, but don’t necessarily know at what level of scaling or load their systems will catastrophically fail. Congestion collapse in the Internet in the mid-1980s, for example, was a direct result of this approach, and similar scaling failures recur periodically (HTTP 1.0, “push” content distribution, and Shoutcast, to name a few), although many pragmatically engineered systems, such as DNS, email, and Twitter, have proved to be incredibly scalable and robust.

    Proponents of mathematical modeling are better able to quantify the performance of their systems, using the powerful tools arising from theories such as model checking, queueing theory, and control theory. However, purely analytical approaches (remember Petri nets?) have had little practical success due to three inherent limitations. To begin with, it is not clear which mathematical approach is the best fit for a given problem. There is a plethora of approaches, each of which can take years to master, and it is nearly impossible to decide, a priori, which one best matches the problem at hand. For example, to optimize a system one can use linear or quadratic optimization, or any number of heuristic approaches, such as hill-climbing, genetic algorithms, and tabu search. Which one to pick? It all depends on the nuances of the problem, the quality of the available tools, and prior experience in using these approaches. That’s pretty daunting for a seasoned researcher, let alone a graduate student. Second, every mathematically sound approach necessarily makes simplifying assumptions. Fitting the square peg of reality into the round hole of mathematical assumptions can lead to impractical, even absurd, designs. As a case in point, the assumptions of individual rationality needed by decision and game theory rarely hold in practice. Third, having spent the time to learn a particular modeling approach, a researcher may be seduced into viewing the approach as more powerful than it really is, ignoring its faults and modeling assumptions. For these reasons, I believe that one should couple a healthy respect for mathematical modeling with a hearty skepticism of its outcomes.
    When mathematical modeling and pragmatic system design come together, it can lead to beautiful systems. The original Ethernet, for example, brought together the elegant mathematics of researchers like Kleinrock, Tobagi, Lam, and Abramson with hands-on implementation by Metcalfe. Similarly, Jacobson and Karels brought a deep understanding of control theory to their inspired design of TCP congestion control. More recently, the Google PageRank algorithm by Page, Brin, Motwani, and Winograd is based on eigenvalue computation in sparse Markov matrices.
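    That eigenvalue computation is less forbidding than it sounds; a minimal power-iteration version of PageRank (the three-page graph and damping factor below are illustrative, not Google's) might look like:

```python
def pagerank(links, d=0.85, iters=100):
    """Power iteration on the link graph's Google matrix.
    links: dict node -> list of out-neighbors.
    Assumes every node has at least one out-link (no dangling nodes)."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        # (1 - d)/n is the random-jump mass given to every page
        new = {u: (1 - d) / n for u in nodes}
        for u, outs in links.items():
            share = d * rank[u] / len(outs)  # u splits its rank among out-links
            for v in outs:
                new[v] += share
        rank = new
    return rank

# Tiny 3-page web: A and B link to C, C links back to A.
r = pagerank({"A": ["C"], "B": ["C"], "C": ["A"]})
```

    Here C accumulates the most rank, since two pages point to it; the iteration converges to the dominant eigenvector of the (sparse) transition matrix.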
    Given these enormous successes, it is no wonder that researchers in our community try hard to combine mathematical modeling with system building. Most papers in SIGCOMM these days build and study real systems applying analytical techniques arising from areas such as optimization, protocol verification, information theory, and communication theory. Although I must confess that the mathematical details of many papers are beyond my understanding (despite my recent attempt to remedy the situation), I think this is a positive development.
    Yet, much needs to be done. As a field, we lack widely accepted abstractions for even relatively simple concepts such as names and addresses, let alone routing and middleboxes. These have stymied our ability to build standard models for networking problems or a standard list of Grand Challenges. The recent emphasis on clean-slate design has renewed focus on these problems, and I look forward to the outcomes of these efforts in the years to come.
  • Vinh The Lam, Sivasankar Radhakrishnan, Rong Pan, Amin Vahdat, George Varghese

    Application performance in cloud data centers often depends crucially on network bandwidth, not just the aggregate data transmitted as in typical SLAs. We describe a mechanism for data center networks called NetShare that requires no hardware changes to routers but allows bandwidth to be allocated predictably across services based on weights. The weights are either specified by a manager, or automatically assigned at each switch port based on a virtual machine heuristic for isolation. Bandwidth unused by a service is shared proportionately by other services, providing weighted hierarchical max-min fair sharing. On a testbed of Fulcrum switches, we demonstrate that NetShare provides bandwidth isolation in various settings, including multipath networks.

    Sharad Agarwal
  • Yosuke Himura, Yoshiko Yasuda

    Multi-tenant datacenter networking, in which multiple customer (tenant) networks are virtualized over a single shared physical infrastructure, is cost-effective but imposes significant manual configuration costs. Such tasks could be alleviated with configuration templates, but a crucial difficulty lies in creating appropriate (i.e., reusable) ones. In this work, we propose a graph-based method of mining the configurations of existing tenants to extract recurrent patterns that can be used as reusable templates for upcoming tenants. The effectiveness of the proposed method is demonstrated with actual configuration files obtained from a business datacenter network.

    Sharad Agarwal
  • Anonymous

    Some ISPs and governments (most notably the Great Firewall of China) use DNS injection to block access to "unwanted" websites. The censorship tools inspect DNS queries near the ISP's boundary routers for sensitive domain keywords and inject forged DNS responses, blocking users from accessing censored sites such as Twitter and Facebook. Unfortunately, this causes collateral damage, affecting communication beyond the censored networks when outside DNS traffic traverses censored links. In this paper, we analyze the causes of the collateral damage and measure the Internet to identify the injecting activities and their effect. We find 39 ASes in China injecting forged DNS replies. Furthermore, 26 of 43,000 measured open resolvers outside China, distributed in 109 countries, may suffer some collateral damage from these forged replies. Unlike previous work, which considered the collateral damage to be limited to queries to root servers (F, I, J) located in China, we find that most collateral damage arises when the paths between resolvers and some TLD name servers transit through ISPs in China.

    Philip Levis
  • kc claffy

    On Monday, 22 August 2011, CAIDA hosted a one-day workshop to discuss scalable measurement and analysis of BGP and traceroute topology data, and practical applications of such data analysis, including tracking of macroscopic censorship and filtering activities on the Internet. Discussion topics included: the surprising stability in the number of BGP updates over time; techniques for improving measurement and analysis of inter-domain routing policies; an update on Colorado State's BGPMon instrumentation; using BGP data to improve the interpretation of traceroute data, both for real-time diagnostics (e.g., AS traceroute) and for large-scale topology mapping; using both BGP and traceroute data to support the detection and mapping of threats to infrastructure integrity, including different types of filtering and censorship; and the use of BGP data to analyze existing and proposed approaches to securing the interdomain routing system. This report briefly summarizes the presentations and the discussions that followed.

  • Jon Crowcroft

    In all seriousness, Differential Privacy is a new technique and set of tools for managing responses to statistical queries over secured data, in such a way that the user cannot reconstruct more precise identification of principals in the dataset beyond a formally well-specified bound. This means that personally sensitive data such as Internet packet traces or social network measurements can be shared between researchers without invading personal privacy, and that assurances can be made with accuracy. With less seriousness, I would like to talk about Differential Piracy, but not without purpose. For sure, while there are legitimate reasons for upstanding citizens to live without fear of eternal surveillance, there is also a segment of society that gets away with things they shouldn't, under a cloak. Perhaps that is the (modest) price we have to pay for a modicum less paranoia in this brave new world. So, there has been a lot of work recently on Piracy Preserving Queries and Differential Piracy. These two related technologies exploit new ideas in statistical security. Rather than security through obscurity, the idea is to offer privacy through lack of differentiation (no, not inability to perform basic calculus, more the inability to distinguish between large numbers of very similar things).

  • kc claffy

    On February 8-10, 2012, CAIDA hosted the fourth Workshop on Active Internet Measurements (AIMS-4) as part of our series of Internet Statistics and Metrics Analysis (ISMA) workshops. As with the previous three AIMS workshops, the goals were to further our understanding of the potential and limitations of active measurement research and infrastructure in the wide-area Internet, and to promote cooperative solutions and coordinated strategies to address future data needs of the network and security operations and research communities. This year we continued to focus on how measurement can illuminate two specific public policy concerns: IPv6 deployment and broadband performance. This report briefly describes topics discussed at this year's workshop. Slides and other materials related to the workshop are available at

  • Rute Sofia, Paulo Mendes, Manuel José Damásio, Sara Henriques, Fabio Giglietto, Erica Giambitto, Alessandro Bogliolo

    This paper provides an interdisciplinary perspective on the role of prosumers in future Internet design, based on the current trend of Internet user empowerment. The paper debates the prosumer role and discusses models for developing a symmetric Internet architecture and supply chain based on the integration of social capital aspects. Its goal is to ignite discussion concerning a socially-driven Internet architectural design.

  • Dirk Trossen

    The late noughties have seen an influx of work in different scientific disciplines, all addressing the question of 'design' and 'architecture'. It is a battle between those advocating the theory of 'emergent properties' and others who strive for a 'theory for architecture'. We provide a particular insight into this battle, represented in the form of a story that focuses on the role of a possibly unusual protagonist and his influence on computer science, the Internet, architecture and beyond. We show his relation to one of the great achievements of system engineering, the Internet, and the possible future as it might unfold. Note from the writer: The tale is placed in a mixture of reality and fiction, while postulating a certain likelihood for this fiction. There is no proof for the assertions made in this tale, leaving the space for a sequel to be told.

  • Marshini Chetty, Nick Feamster

    Managing a home network is challenging because the underlying infrastructure is so complex. Existing interfaces either hide or expose the network's underlying complexity, but in both cases, the information that is shown does not necessarily allow a user to complete desired tasks. Recent advances in software defined networking, however, permit a redesign of the underlying network and protocols, potentially allowing designers to move complexity further from the user and, in some cases, eliminating it entirely. In this paper, we explore whether the choices of what to make visible to the user in the design of today's home network infrastructure, performance, and policies make sense. We also examine whether new capabilities for refactoring the network infrastructure - changing the underlying system without compromising existing functionality - should cause us to revisit some of these choices. Our work represents a case study of how co-designing an interface and its underlying infrastructure could ultimately improve interfaces for that infrastructure.

  • Cheng Yi, Alexander Afanasyev, Lan Wang, Beichuan Zhang, Lixia Zhang

    In Named Data Networking (NDN) architecture, packets carry data names rather than source or destination addresses. This change of paradigm leads to a new data plane: data consumers send out Interest packets, routers forward them and maintain the state of pending Interests, which is used to guide Data packets back to the consumers. NDN routers' forwarding process is able to detect network problems by observing the two-way traffic of Interest and Data packets, and explore multiple alternative paths without loops. This is in sharp contrast to today's IP forwarding process which follows a single path chosen by the routing process, with no adaptability of its own. In this paper we outline the design of NDN's adaptive forwarding, articulate its potential benefits, and identify open research issues.
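    A toy sketch of this stateful forwarding idea (the class and method names are my own illustration, not the NDN codebase) could look like:

```python
class NdnForwarder:
    """Toy model of NDN stateful forwarding: Interests leave breadcrumbs
    (pending-Interest state) that later guide Data packets back downstream."""

    def __init__(self):
        self.pit = {}  # data name -> set of faces the Interest arrived on

    def on_interest(self, name, in_face):
        """Record the incoming face; return True if the Interest should be
        forwarded upstream (i.e., it is not already pending)."""
        pending = name in self.pit
        self.pit.setdefault(name, set()).add(in_face)
        return not pending  # duplicate Interests are aggregated, not re-sent

    def on_data(self, name):
        """Consume the pending state and return the faces to send Data to;
        unsolicited Data (no PIT entry) yields an empty set and is dropped."""
        return self.pit.pop(name, set())

fw = NdnForwarder()
fw.on_interest("/video/seg1", in_face=1)   # forwarded upstream
fw.on_interest("/video/seg1", in_face=2)   # aggregated with the first
faces = fw.on_data("/video/seg1")          # Data fans out to both faces
```

    Because every Data packet must consume a matching PIT entry, a router can observe the two-way Interest/Data exchange per name, which is the hook for detecting problems and trying alternative paths.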

  • S. Keshav

    As a networking researcher, working with computer networks day in and day out, you probably have rarely paused to reflect on the surprisingly difficult question of "What is a network?" For example, would you consider a bio-chemical system to be a network? How about a social network? Or a water supply network? Or the electrical grid? After all, all of these share some aspects in common with a computer network: they can be represented as a graph and they carry a flow (of chemical signals, messages, water, and electrons, respectively) from one or more sources to one or more destinations. So, shouldn't we make them equally objects of study by SIGCOMM members?

    You could argue that some of these networks differ dramatically from the Internet. The water network, for example, does not carry any messages and is unidirectional. So, it is not a communication network, unlike the Internet or, perhaps, a social network. This implicitly takes the position that the only networks we (as computer networking researchers) ought to study are bidirectional communication networks. This is a conservative position that is relatively easy to justify, but it excludes from consideration some interesting and open research questions that arise in the context of these other networks. Choosing the capacity of a water tank or an electrical transformer turns out to be similar in many respects to choosing the capacity of a router buffer or a transmission link. Similarly, one could imagine that the round-trip time on a social network (the time it takes for a rumour you started to get back to you by word of mouth) would inform you about the structure of the social network in much the same way as an ICMP ping. For these reasons, a more open-minded view about the nature of a network may be both pragmatic and conducive to innovation.
    My own view is that a network is any system that can be naturally represented by a graph. Additionally, a communication network is any system where a flow that originates at some set of source nodes is delivered to some set of destination nodes, typically due to the forwarding action of intermediate nodes (although this may not be strictly necessary). This broad definition encompasses water networks, biological networks, and electrical networks, as well as telecommunication networks and the Internet. It seeks to present a unifying abstraction so that techniques developed in one form of network can be adopted by researchers in the others.
    Besides a broad definition of networks, like the one above, the integrative study of networks -- or ‘Network Science’ as its proponents call it -- requires the underlying communities (and there is more than one) to be open to ideas from each other, and the publication fora in these communities to be likewise “liberal in what you accept,” in Jon Postel's famous words. This is essential to allow researchers in Network Science to carry ideas from one community to another, despite their being less than expert in certain aspects of their work. CCR, through its publication of non-peer-reviewed Editorials, is perfectly positioned to follow this principle.
    I will end with a couple of important announcements. First, this issue will mark the end of Stefan Saroiu's tenure as an Area Editor. His steady editorial hand will be much missed. Thanks, Stefan!
    Second, starting September 1, 2012, Dina Papagiannaki will take over as the new Editor of CCR. Dina has demonstrated a breadth of understanding and depth of vision that assures me that CCR will be in very good hands. I am confident that under her stewardship CCR will rise to ever greater heights. I wish her the very best.
  • Supasate Choochaisri, Kittipat Apicharttrisorn, Kittiporn Korprasertthaworn, Pongpakdi Taechalertpaisarn, Chalermek Intanagonwiwat

    Desynchronization is useful for scheduling nodes to perform tasks at different times. This property is desirable for resource sharing, TDMA scheduling, and collision avoidance. Inspired by robotic circular formation, we propose DWARF (Desynchronization With an ARtificial Force field), a novel technique for desynchronization in wireless networks. Each node exerts artificial forces that repel its neighbors toward different time phases. Nodes with closer time phases exert stronger repelling forces on each other in the time domain. Each node adjusts its time phase in proportion to the forces it receives. Once the received forces are balanced, the nodes are desynchronized. We evaluate our implementation of DWARF on TOSSIM, a simulator for wireless sensor networks. The simulation results indicate that DWARF incurs significantly lower desynchronization error and scales much better than existing approaches.
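    The force-field intuition can be illustrated with a tiny simulation; the constants and the update rule below are my own simplification of the idea, not the paper's exact algorithm:

```python
def desync(phases, period=100.0, k=0.3, rounds=200):
    """Toy force-field desynchronization: every node is repelled by every
    other node, more strongly the closer their time phases are."""
    n = len(phases)
    for _ in range(rounds):
        updated = []
        for i, p in enumerate(phases):
            force = 0.0
            for j, q in enumerate(phases):
                if i == j:
                    continue
                # signed phase difference, wrapped into [-period/2, period/2)
                d = (q - p + period / 2) % period - period / 2
                if d != 0:
                    # push away from q; magnitude grows as phases get closer
                    force -= (d / abs(d)) * (period / 2 - abs(d))
            updated.append((p + k * force / n) % period)
        phases = updated
    return sorted(phases)

# Three nodes that start bunched together spread to roughly period/3 apart.
final = desync([0.0, 1.0, 2.0])
```

    After a few rounds the repulsive forces on each node balance and the phases settle into an (approximately) evenly spaced schedule, which is the desynchronized state the abstract describes.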

    Bhaskaran Raman
  • André Zúquete, Carlos Frade

    The IPv4 address space is quickly becoming exhausted, putting tremendous pressure on the adoption of yet more NAT levels or of IPv6. On the other hand, many authors propose the adoption of new Internet addressing capabilities, namely content-based addressing, to complement the existing IP host-based addressing. In this paper we propose the introduction of a location layer, between the transport and network layers, to address both problems. We keep the existing IPv4 (or IPv6) host-based core routing functionality, while enabling hosts to become routers between separate address spaces by exploiting the new location header. As a proof of concept, we modified the TCP/IP stack of a Linux host to handle our new protocol layer, and we designed and built a novel NAT box that enables current hosts to interact with the modified stack.

    David Wetherall
  • Kate Lin, Yung-Jen Chuang, Dina Katabi

    In many wireless systems, it is desirable to precede a data transmission with a handshake between the sender and the receiver. For example, RTS-CTS is a handshake that prevents collisions due to hidden terminals. Past work, however, has shown that the overhead of such a handshake is too high for practical deployments. We present a new approach to wireless handshake that is almost overhead-free. The key idea underlying the design is to separate a packet's PLCP header and MAC header from its body, and to have the sender and receiver first exchange the data and ACK headers, then exchange the bodies of the data and ACK packets without additional headers. The header exchange provides a natural handshake at almost no extra cost. We empirically evaluate the feasibility of such a lightweight handshake and some of its applications. Our testbed evaluation shows that header-payload separation does not hamper packet decodability. It also shows that a light handshake enables hidden terminals, i.e., nodes that interfere with each other without RTS/CTS, to experience a collision rate of less than 4%. Furthermore, it improves the accuracy of bit rate selection in bursty and mobile environments, producing a throughput gain of about 2x.

    Bhaskaran Raman
  • Cheng Huang, Ivan Batanov, Jin Li

    Internet services are often deployed in multiple (tens to hundreds of) geographically distributed data centers. They rely on Global Traffic Management (GTM) solutions to direct clients to the optimal data center based on a number of criteria, such as network performance, geographic location, and availability. The GTM solutions, however, have a fundamental design limitation in their ability to accurately map clients to data centers: they use the IP address of the local DNS resolver (LDNS) used by a client as a proxy for the true client identity, which in some cases causes suboptimal performance. This issue is known as the client-LDNS mismatch problem. We argue that recent proposals to address the problem suffer from serious limitations. We then propose a simple new solution, named "FQDN extension", which can solve the client-LDNS mismatch problem completely. We build a prototype system and demonstrate the effectiveness of the proposed solution. Using JavaScript, the solution can be deployed immediately for some online services, such as Web search, without modifying either the client or the local resolver.

    Renata Teixeira
  • Shane Alcock, Perry Lorier, Richard Nelson

    This paper introduces libtrace, an open-source software library for reading and writing network packet traces. Libtrace offers performance and usability enhancements compared to other libraries that are currently used. We describe the main features of libtrace and demonstrate how the libtrace programming API enables users to easily develop portable trace analysis tools without needing to consider the details of the capture format, file compression or intermediate protocol headers. We compare the performance of libtrace against other trace processing libraries to show that libtrace offers the best compromise between development effort and program run time. As a result, we conclude that libtrace is a valuable contribution to the passive measurement community that will aid the development of better and more reliable trace analysis and network monitoring tools.

    AT&T Labs