Computer Communication Review: Papers

  • Giuseppe Bianchi, Andrea Detti, Alberto Caponi, Nicola Blefari Melazzi

    In some network and application scenarios, it is useful to cache content in network nodes on the fly, at line rate. The resilience of in-network caches can be improved by guaranteeing that all content stored therein is valid. Digital signatures could indeed be used to verify content integrity and provenance. However, their operation may be much slower than the line rate, thus limiting caching of cryptographically verified objects to a small subset of the forwarded ones. How does this affect caching performance? To answer this question, we devise a simple analytical approach that permits us to assess the performance of an LRU caching strategy that stores a randomly sampled subset of requests. A key feature of our model is its ability to handle traffic beyond the traditional Independent Reference Model, thus permitting us to understand how performance varies under different temporal locality conditions. Results, also verified on real-world traces, show that content integrity verification does not necessarily bring about a performance penalty; rather, in some specific (but practical) conditions, performance may even improve.
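
    The strategy analyzed above, an LRU cache that admits only a randomly sampled subset of missed requests, can be pictured with a minimal sketch. This is an illustration only, not the authors' analytical model: the capacity and admission probability below are arbitrary, and the admission probability simply stands in for the fraction of forwarded objects that can be cryptographically verified at line rate.

        import random
        from collections import OrderedDict

        class SampledLRUCache:
            """LRU cache that admits a missed object only with probability p,
            mimicking a node that can verify (and hence cache) only a random
            subset of the objects it forwards."""

            def __init__(self, capacity, admit_prob):
                self.capacity = capacity
                self.admit_prob = admit_prob
                self.store = OrderedDict()        # key -> dummy value, kept in LRU order
                self.hits = 0
                self.requests = 0

            def request(self, key):
                self.requests += 1
                if key in self.store:
                    self.store.move_to_end(key)   # refresh LRU position on a hit
                    self.hits += 1
                    return True
                if random.random() < self.admit_prob:   # sampled admission on a miss
                    self.store[key] = True
                    if len(self.store) > self.capacity:
                        self.store.popitem(last=False)  # evict least recently used
                return False

            def hit_ratio(self):
                return self.hits / self.requests if self.requests else 0.0

    Replaying a synthetic request stream (e.g., Zipf-distributed keys) through such a cache while sweeping the admission probability gives the kind of hit-ratio-versus-sampling-rate curves the abstract discusses, although the paper's contribution is an analytical model rather than a simulation.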

    Sharad Agarwal
  • Bart Braem, Chris Blondia, Christoph Barz, Henning Rogge, Felix Freitag, Leandro Navarro, Joseph Bonicioli, Stavros Papathanasiou, Pau Escrich, Roger Baig Viñas, Aaron L. Kaplan, Axel Neumann, Ivan Vilata i Balaguer, Blaine Tatum, Malcolm Matson

    Community Networks are large scale, self-organized and decentralized networks, built and operated by citizens for citizens. In this paper, we make a case for research on and with community networks, while explaining the relation to Community-Lab. The latter is an open, distributed infrastructure for researchers to experiment with community networks. The goal of Community-Lab is to advance research and empower society by understanding and removing obstacles for these networks and services.

  • Mohamed Ali Kaafar, Shlomo Berkovsky, Benoit Donnet

    During the last decade, we have witnessed a substantial change in content delivery networks (CDNs) and user access paradigms. Whereas previously users consumed content from a central server through their personal computers, nowadays they can reach a wide variety of repositories from virtually everywhere using mobile devices. This results in considerable time-, location-, and event-based volatility of content popularity. In such a context, it is imperative for CDNs to put in place adaptive content management strategies, thus improving the quality of the services provided to users and decreasing costs. In this paper, we introduce predictive content distribution strategies inspired by methods developed in the Recommender Systems area. Specifically, we outline different content placement strategies based on observed user consumption patterns, and advocate their applicability in state-of-the-art CDNs.

  • Mark Allman
  • Sam Burnett, Nick Feamster

    Free and open access to information on the Internet is at risk: more than 60 countries around the world practice some form of Internet censorship, and both the number of countries practicing censorship and the proportion of Internet users who are subject to it are likely to increase. We posit that, although it may not always be feasible to guarantee free and open access to information, citizens have the right to know when their access has been obstructed, restricted, or tampered with, so that they can make informed decisions on information access. We motivate the need for a system that provides accurate, verifiable reports of censorship and discuss the challenges involved in designing such a system. We place these challenges in context by studying their applicability to OONI, a new censorship measurement platform.

  • Jeffrey C. Mogul

    Many people in CS in general, and SIGCOMM in particular, have expressed concerns about an increasingly "hypercritical" approach to reviewing, which can block or discourage the publication of innovative research. The SIGCOMM Technical Steering Committee (TSC) has been addressing this issue, with the goal of encouraging cultural change without undermining the integrity of peer review. Based on my experience as an author, PC member, TSC member, and occasional PC chair, I examine possible causes for hypercritical reviewing, and offer some advice for PC chairs, reviewers, and authors. My focus is on improving existing publication cultures and peer review processes, rather than on proposing radical changes.

  • kc claffy, David Clark

    On December 12-13, 2012, CAIDA and the Massachusetts Institute of Technology (MIT) hosted the (invitation-only) 3rd interdisciplinary Workshop on Internet Economics (WIE) at the University of California's San Diego Supercomputer Center. The goal of this workshop series is to provide a forum for researchers, commercial Internet facilities and service providers, technologists, economists, theorists, policy makers, and other stakeholders to empirically inform current and emerging regulatory and policy debates. The theme for this year's workshop was "Definitions and Data". This report describes the discussions and presents relevant open research questions identified by participants. Slides presented at the workshop and a copy of this final report are available at [2].

  • kc claffy

    On February 6-8, 2013, CAIDA hosted the fifth Workshop on Active Internet Measurements (AIMS-5) as part of our series of Internet Statistics and Metrics Analysis (ISMA) workshops. As with previous AIMS workshops, the goals were to further our understanding of the potential and limitations of active measurement research and infrastructure in the wide-area Internet, and to promote cooperative solutions and coordinated strategies to address future data needs of the network and security operations and research communities. The workshop focus this year was on creating, managing, and analyzing annotations of large longitudinal active Internet measurement data sets. Due to popular demand, we also dedicated half a day to large-scale active measurement (performance/topology) from mobile/cellular devices. This report describes topics discussed at this year's workshop. Materials related to the workshop are available at http://www.caida.org/workshops/.

  • Richard Charles Gass, Damien Saucez

    The ACM 8th international conference on emerging Networking EXperiments and Technologies (CoNEXT) was organized in a lovely hotel in the south of France. Although it was in an excellent location in the city center of Nice with views to the sea, it suffered from poor Internet connectivity. In this paper we describe what happened to the network at CoNEXT and explain why Internet connectivity is usually a problem at small hotel venues. Next we highlight the usual issues with network equipment that lead to the general network dissatisfaction of conference attendees. Finally we describe how we alleviated the problem by offloading network services and all network traffic into the cloud while supporting over 100 simultaneously connected devices on a single ADSL link with a device that is rated to support only around 15-20. Our experience shows that with simple offloading of certain network services, small conference venues with limited budgets no longer have to be plagued by the usual factors that lead to an unsatisfactory Internet connectivity experience.

  • Dina Papagiannaki

    Here is my second issue of CCR, and I am really happy to see that a lot of the things I wrote about in my previous editorial are happening or are on their way! Thanks to the wonderful editorial team, this issue has five technical papers, while some of the area editors have started contacting prominent members of our community to obtain their retrospectives on their past work. In parallel, I have been really fortunate to receive a number of interesting editorials, some of which I solicited and some of which I received through the normal submission process.

    Craig Partridge has provided us with an editorial on the history of CCR, a very interesting read not only for the new members of our community but for everyone. This issue also features an editorial on the challenges that cognitive radio deployments are going to face, and another on a new network paradigm that could be very relevant in developing regions, named "lowest cost denominator networking." I am positive that each one of these editorials is bound to make you think.

    Following my promise in January's editorial note, this issue is also bringing some industrial perspective to CCR. We have two editorial notes on standardization activities at the IETF, 3GPP, ITU, etc. I would like to sincerely thank the authors, since putting structure around such activities to report them in a concise form is not an easy task to say the least.

    Research in the area of networking has seen a tremendous increase in breadth in recent years. Our community now studies core networking technologies, cellular networks, mobile systems, and networked applications. In addition, a large number of consumer electronics products are becoming connected, using wired or wireless technologies. Understanding the trends in the consumer electronics industry is bound to inform interesting related research in our field. With that in mind, I invited my colleagues in the Telefonica Video Unit and Telefonica Digital to submit a report on what they considered the highlights of the Consumer Electronics Show (CES) that took place in Las Vegas in January 2013. I hope that article inspires you towards novel directions.

    I am really pleased to see CCR growing! Please do not hesitate to contact me with comments, and suggestions!

  • Johann Schlamp, Georg Carle, Ernst W. Biersack

    The Border Gateway Protocol (BGP) was designed without security in mind. To this day, this makes the Internet vulnerable to hijacking attacks that intercept or blackhole Internet traffic. So far, significant effort has been put into the detection of IP prefix hijacking, while AS hijacking has received little attention. AS hijacking is more sophisticated than IP prefix hijacking and aims at a long-term benefit, for instance over a duration of months. In this paper, we study a malicious case of AS hijacking, carried out in order to send spam from the victim's network. We thoroughly investigate this AS hijacking incident using live data from both the control and the data plane. Our analysis yields insights into how an attacker proceeded in order to covertly hijack a whole autonomous system, how he misled an upstream provider, and how he used unallocated address space. We further show that state-of-the-art techniques to prevent hijacking are not fully capable of dealing with this kind of attack. We also derive guidelines on how to conduct future forensic studies of AS hijacking. Our findings show that there is a need for preventive measures that would make it possible to anticipate AS hijacking, and we outline the design of an early warning system.

    Fabian E. Bustamante
  • Zhe Wu, Harsha V. Madhyastha

    To minimize user-perceived latencies, webservices are often deployed across multiple geographically distributed data centers. The premise of our work is that webservices deployed across multiple cloud infrastructure services can serve users from more data centers than is possible with a single cloud service, and hence offer lower latencies to users. In this paper, we conduct a comprehensive measurement study to understand the potential latency benefits of deploying webservices across three popular cloud infrastructure services: Amazon EC2, Google Compute Engine (GCE), and Microsoft Azure. We estimate that, compared to deployments on one of these cloud services, users in up to half the IP address prefixes can have their RTTs reduced by over 20% when a webservice is deployed across the three cloud services. When we dig deeper to understand these latency benefits, we make three significant observations. First, when webservices shift from single-cloud to multi-cloud deployments, a significant fraction of prefixes will see latency benefits simply by being served from a different data center in the same location. This is because routing inefficiencies that exist between a prefix and a nearby data center in one cloud service are absent on the path from the prefix to a nearby data center in a different cloud service. Second, despite the latency improvements that a large fraction of prefixes will perceive, users in several locations (e.g., Argentina and Israel) will continue to incur RTTs greater than 100ms even when webservices span three large-scale cloud services (EC2, GCE, and Azure). Finally, we see that harnessing the latency benefits offered by multi-cloud deployments is likely to be challenging in practice; our measurements show that the data center that offers the lowest latency to a prefix often fluctuates between different cloud services, thus necessitating replication of data.
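
    The headline comparison above, best single-cloud latency versus best latency across all three clouds for each IP prefix, boils down to a per-prefix minimum. The sketch below is a toy calculation with fabricated RTT values rather than the authors' measurement pipeline; only the 20% improvement threshold is taken from the abstract.

        # Toy illustration of the multi-cloud latency comparison described above.
        # The RTT values are fabricated; the paper derives them from large-scale
        # measurements toward EC2, GCE, and Azure data centers.

        # rtt_ms[prefix][cloud] = best RTT (ms) from that prefix to the nearest
        # data center of that cloud.
        rtt_ms = {
            "prefix_A": {"EC2": 80.0, "GCE": 55.0, "Azure": 90.0},
            "prefix_B": {"EC2": 40.0, "GCE": 70.0, "Azure": 65.0},
            "prefix_C": {"EC2": 120.0, "GCE": 110.0, "Azure": 95.0},
        }

        single_cloud = "EC2"                         # hypothetical single-cloud deployment
        improved = 0
        for prefix, rtts in rtt_ms.items():
            best_multi = min(rtts.values())          # best data center across all clouds
            baseline = rtts[single_cloud]            # best data center within one cloud
            if (baseline - best_multi) / baseline > 0.20:   # 20% threshold from the abstract
                improved += 1
        print(f"{improved}/{len(rtt_ms)} prefixes improve their RTT by more than 20%")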

    Katerina Argyraki
  • Yao Liang, Rui Liu

    We consider the important problem of wireless sensor network (WSN) routing topology inference/tomography from indirect measurements observed at the data sink. Previous studies on WSN topology tomography are restricted to static routing tree estimation, which is unrealistic for real-world WSNs whose routing varies over time due to wireless channel dynamics. We study general WSN routing topology inference where the routing structure is dynamic. We formulate the problem as a novel compressed sensing problem. We then devise a suite of decoding algorithms to recover the routing path of each aggregated measurement. Our approach is tested and evaluated through simulations with favorable results. WSN routing topology inference capability is essential for routing improvement, topology control, anomaly detection, and load balancing, enabling effective network management and optimized operation of deployed WSNs.

    Augustin Chaintreau
  • Davide Simoncelli, Maurizio Dusi, Francesco Gringoli, Saverio Niccolini

    Recent work in network measurements focuses on scaling the performance of monitoring platforms to 10Gb/s and beyond. Concurrently, the IT community focuses on scaling the analysis of big data over a cluster of nodes. So far, combinations of these approaches have favored flexibility and usability over real-timeliness of results and efficient allocation of resources. In this paper we show how to meet both objectives with BlockMon, a network monitoring platform originally designed to work on a single node, which we extended to run distributed stream-data analytics tasks. We compare its performance against Storm and Apache S4, the state-of-the-art open-source stream-processing platforms, by implementing a phone call anomaly detection system and a Twitter trending algorithm: our enhanced BlockMon has a performance gain of over 2.5x and 23x, respectively. Given the different nature of those applications and the performance of BlockMon as a single-node network monitor [1], we expect our results to hold for a broad range of applications, making distributed BlockMon a good candidate for the convergence of network-measurement and IT-analysis platforms.
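
    Both benchmark workloads mentioned above (phone-call anomaly detection and Twitter trending) are, at their core, keyed streaming aggregations. The sketch below is a single-node, illustrative stand-in for a trending job; it is not BlockMon, Storm, or S4 code, and the window length and interface are invented for the example.

        import time
        from collections import Counter, deque

        class TrendingCounter:
            """Toy sliding-window trending counter: a single-node stand-in for
            the kind of streaming job used as a benchmark in the paper."""

            def __init__(self, window_seconds=300):
                self.window = window_seconds
                self.events = deque()          # (timestamp, tag) in arrival order
                self.counts = Counter()

            def observe(self, tag, now=None):
                now = time.time() if now is None else now
                self.events.append((now, tag))
                self.counts[tag] += 1
                # Expire events that have fallen out of the window.
                while self.events and self.events[0][0] < now - self.window:
                    _, old_tag = self.events.popleft()
                    self.counts[old_tag] -= 1
                    if self.counts[old_tag] == 0:
                        del self.counts[old_tag]

            def top(self, k=10):
                return self.counts.most_common(k)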

    Konstantina Papagiannaki
  • Damien Saucez, Luigi Iannone, Benoit Donnet

    During the last decade, we have seen the rise of discussions regarding the emergence of a Future Internet. One of the proposed approaches leverages the separation of the identifier and locator roles of IP addresses, leading to the LISP (Locator/Identifier Separation Protocol) protocol, currently under development at the IETF (Internet Engineering Task Force). Up to now, research on LISP has been rather theoretical, i.e., based on simulations/emulations often using Internet traffic traces. No work in the literature has attempted to assess the state of its deployment and how this has evolved in recent years. This paper aims at bridging this gap by presenting a first measurement study of the existing worldwide LISP network (lisp4.net). Early results indicate that there is steady growth of the LISP network, but also that network manageability might receive a higher priority than performance in a large-scale deployment.

    Sharad Agarwal
  • Konstantinos Pelechrinis, Prashant Krishnamurthy, Martin Weiss, Taieb Znati

    A large volume of research has been conducted in the cognitive radio (CR) area over the last decade. However, the deployment of a commercial CR network is yet to emerge. A large portion of the existing literature does not build on real-world scenarios, hence neglecting various important aspects of commercial telecommunication networks. For instance, a lot of attention has been paid to spectrum sensing as the front-line functionality that needs to be completed in an efficient and accurate manner to enable an opportunistic CR network architecture. While on the one hand it is necessary to detect the existence of spectrum holes, on the other hand, simply sensing (cooperatively or not) the energy emitted from a primary transmitter cannot enable correct dynamic spectrum access (DSA). For example, the presence of a primary transmitter's signal does not mean that CR network users cannot access the spectrum, since there might not be any primary receiver in the vicinity. Despite the existing solutions to the DSA problem, no robust, implementable scheme has emerged. The set of assumptions that these schemes are built upon does not always hold in realistic wireless environments. Specific settings are assumed, which differ significantly from how existing telecommunication networks work. In this paper, we challenge the basic premises of the proposed schemes. We further argue that addressing the technical challenges we face in deploying robust CR networks can only be achieved if we radically change the way we design their basic functionalities. In support of our argument, we present a set of real-world scenarios, inspired by realistic settings in commercial telecommunications networks, namely TV and cellular, focusing on spectrum sensing as a basic and critical functionality in the deployment of CRs. We use these scenarios to show why existing DSA paradigms are not amenable to realistic deployment in complex wireless environments. The proposed study extends beyond cognitive radio networks, and further highlights the often existing gap between research and commercialization, paving the way to new thinking about how to accelerate commercialization and adoption of new networking technologies and services.

  • Arjuna Sathiaseelan, Jon Crowcroft

    "The Internet is for everyone" claims Vint Cerf, the father of the Internet via RFC 3271. The Internet Society's recent global Internet survey reveals that the Internet should be considered as a basic human birth right. We strongly agree with these and believe that basic access to the Internet should be made free, at least to access the essential services. However the current Internet access model, which is governed by market economics makes it practically infeasible for enabling universal access especially for those with socio-economic barriers. We see enabling benevolence in the Internet (act of sharing resources) as a potential solution to solve the problem of digital exclusion caused due to socio-economic barriers. In this paper, we propose LCD-Net: Lowest Cost Denominator Networking, a new Internet paradigm that architects multi-layer resource pooling Internet technologies to support benevolence in the Internet. LCD-Net proposes to bring together several existing resource pooling Internet technologies to ensure that users and network operators who share their resources are not affected and at the same time are incentivised for sharing. The paper also emphasizes the need to identify and extend the stakeholder value chain to ensure such benevolent access to the Internet is sustainable.

  • Marcelo Bagnulo, Philip Eardley, Trevor Burbridge, Brian Trammell, Rolf Winter

    Over the last few years, we have witnessed the deployment of large measurement platforms that enable measurements from many vantage points. Examples of these platforms include SamKnows and RIPE ATLAS. All told, there are tens of thousands of measurement agents. Most of these measurement agents are located in end-user premises; they can run measurements against other agents located in strategic locations, according to the measurements to be performed. Thanks to the large number of measurement agents, these platforms can provide data about key network performance indicators from the end-user perspective. This data is useful to network operators to improve their operations, as well as to regulators and to end users themselves. Currently deployed platforms use proprietary protocols to exchange information between the different parts. As these platforms grow to become an important tool for understanding network performance, it is important to standardize the protocols between the different elements of the platform. In this paper, we present ongoing standardization efforts in this area as well as the main challenges that these efforts are facing.

  • Xavier Costa-Pérez, Andreas Festag, Hans-Joerg Kolbe, Juergen Quittek, Stefan Schmid, Martin Stiemerling, Joerg Swetina, Hans van der Veen

    Standardization organizations play a major role in the telecommunications industry to guarantee interoperability between vendors and to allow for a common ground where all players can voice their opinion regarding the direction the industry should follow. In this paper we review the current activities in some of the most relevant standardization bodies in the area of communication networks: 3GPP, IEEE 802.11, BBF, IETF, ONF, ETSI ISG NFV, oneM2M and ETSI TC ITS. Major innovations being developed in these bodies are summarized, describing the most disruptive directions taken that are expected to have a remarkable impact on future networks. Finally, some trends common among different bodies are identified, covering different dimensions: i) core technology enhancements, ii) inter-organization cooperation for convergence, iii) consideration of rising disruptive technical concepts, and iv) expansion into emerging use cases aimed at increasing future market size.

  • Craig Partridge

    A brief history of the evolution of ACM SIGCOMM Computer Communication Review as a newsletter and journal is presented.

  • Fernando Garcia Calvo, Javier Lucendo de Gregorio, Fernando Soto de Toro, Joaquin Munoz Lopez, Teo Mayo Muniz, Jose Maria Miranda, Oscar Gavilan Ballesteros

    The Consumer Electronics Show, which is held every year in Las Vegas in early January, continues to be an important fair in the consumer sector, though increasingly the major manufacturers prefer to announce their new products at their own specific events in order to gain greater impact. Only the leading TV brands unveil their artillery of new models for the coming year. Despite this, it continues to break records: there were over 150,000 visitors (from more than 150 countries), the number of new products announced exceeded 20,000 and the fair occupied over 2 million square meters.

  • Roch Guérin, Olivier Bonaventure

    There have been many recent discussions within the computer science community on the relative roles of conferences and journals [1, 2, 3]. They clearly offer different forums for the dissemination of scientific and technical ideas, and much of the debate has been on whether and how to leverage both. These are important questions that every conference and journal ought to carefully consider, and the CoNEXT Steering Committee recently initiated a discussion on this topic. The main focus of the discussion was on how, on the one hand, to maintain the high quality of papers accepted for presentation at CoNEXT and, on the other hand, to improve the conference's ability to serve as a timely forum where new and exciting but not necessarily polished or fully developed ideas can be presented. Unfortunately, the stringent "quality control" that prevails during the paper selection process of selective conferences, including CoNEXT, often makes it difficult for interesting new ideas to break through. To make it, papers need to excel along three major dimensions, namely, technical correctness and novelty, polish of exposition and motivation, and completeness of the results. Most if not all hot-off-the-press papers will fail in at least one of those dimensions.

    On the other hand, there are conferences and workshops that target short papers. Hotnets is one such venue that has attracted short papers presenting new ideas. However, from a community viewpoint, Hotnets has several limitations. First, Hotnets is an invitation-only workshop. Coupled with a low acceptance rate, this limits the exposure of Hotnets papers to the community. Second, Hotnets has never been held outside North America. The SIGCOMM and CoNEXT workshops are also venues where short papers can be presented and discussed. However, these workshops are focused on specific subdomains and usually do not attract a broad audience. The IMC short papers are a more interesting model because short and regular papers are mixed in a single-track conference. This ensures broad exposure for the short papers, but the scope of IMC is much narrower than that of CoNEXT.

    In order to address this intrinsic tension that plagues all selective conferences, CoNEXT 2013 is introducing a short paper category with submissions requested through a logically separate call for papers. The separate call for papers is meant to clarify to both authors and TPC members that short papers are to be judged using different criteria. Short papers will be limited to six (6) two-column pages in the standard ACM conference format. Most importantly, short papers are not meant to be condensed versions of standard-length papers, and neither are they targeted at traditional "position papers." In particular, papers submitted as regular (long) papers will not be eligible for consideration as short papers. Instead, short paper submissions are intended for high-quality technical works that either target a topical issue that can be covered in 6 pages, or introduce a novel but not fully fleshed out idea that can benefit from the feedback that early exposure can provide. Short papers will be reviewed and selected through a process distinct from that of long papers and based on how good a match they are for the above criteria. As alluded to, this separation is meant to address the inherent dilemma faced by highly selective conferences, where reviewers typically approach the review process looking for reasons to reject a paper (how high are the odds that a paper is in the top 10-15%?).

    For that purpose, Program Committee members will be reminded that completeness of the results should NOT be a criterion used when assessing short papers. Similarly, while an unreadable paper is obviously not one that should be accepted, polish should not be a major consideration either. As long as the paper manages to convey its idea, a choppy presentation should not by itself be grounds for rejecting it. Finally, while technical correctness is important, papers that perhaps claim more than they should are not to be disqualified simply on those grounds. As a rule, the selection process should focus on the "idea" presented in the paper. If the idea is new, or interesting, or unusual, etc., and is not fundamentally broken, the paper should be considered. Eventual acceptance will ultimately depend on logistics constraints (how many such papers can be presented), but the goal is to offer a venue at CoNEXT where new, emerging ideas can be presented and receive constructive feedback. The CoNEXT web site provides additional information on the submission process for short (and regular) papers.

  • Dina Papagiannaki

    A new year begins and a new challenge needs to be undertaken. Life is full of challenges, but those we invite ourselves have something special of their own. In that spirit, I am really happy to be taking over as the editor of ACM Computer Communication Review. Keshav has done a tremendous job making CCR a high-quality publication that unites our community. The combination of peer-reviewed papers and editorial submissions provides a venue not only to publish the latest scientific achievements in our field, but also to position them within the context of our ever-changing technological landscape.

    With that in mind, I would like to continue encouraging the submission of the latest research results to CCR. I would also like to try to broaden its reach. A little less than two years ago, I changed jobs and took the position of scientific director responsible for Internet, systems, and networks at Telefonica Research in Barcelona, Spain*. I am now part of one of the most diverse research groups I have ever known. The team comprises researchers with expertise in multimedia analysis, data mining, machine learning, human-computer interaction, distributed systems, network economics, wireless networking, security, and privacy. One could see this as a research team that could potentially address problems at all layers of the stack. As such, I am learning a great deal from the team every day.
     
    I would love to bring that broader perspective to CCR and enrich the way we see and use telecommunications infrastructure. I would like to encourage the submission of editorials from other disciplines of computer science that build and deploy technologies over the Internet. There are so many questions that need to be addressed when you start thinking about networking fueling smart cities, smart utilities, and novel services that will enable our younger generation to learn the skills they need and put them into practice in a world that suffers from high unemployment rates. Having three young children makes me wonder about ways we could use all the work we have done in the past 20 years to enable sustainable societies in the future. The Internet will be the skeleton that makes this possible, leading to true globalization. We have so much more to offer.
     
    In parallel, I would love to use CCR as a vehicle to disseminate lessons learned and current best practices. With the help of the editorial team, we are going to include an interview section in CCR, where we will be asking prominent members of our community for their perspective on the main lessons they have learned from their past work, as well as their outlook for the future.
     
    In this January issue, you are not only going to find technical contributions, but also reports from workshops that took place in recent months. In addition, I have invited an editorial covering the Mobile World Congress 2012. Recent trends in technology influence and enrich our research, and this is a first step in trying to bridge those two worlds. If you do happen to attend venues such as MWC, CES, or standardization bodies, please do send me editorial notes on your impressions from those events.
     
    Finally, I would like to extend my sincerest thanks to Prof. David Wetherall, who has decided to step down from the editorial board, and to welcome Dr. Katerina Argyraki, Dr. Hitesh Ballani, Prof. Fabián Bustamante, Prof. Marco Mellia, and Prof. Joseph Camp, who are joining our editorial team. With their expertise and motivation we are bound to do great things in 2013! With all that, I sincerely hope that you will enjoy this issue, and I am looking forward to hearing any further suggestions to make CCR as timely and impactful as possible.
  • Yeonhee Lee, Youngseok Lee

    Internet traffic measurement and analysis has long been used to characterize network usage and user behaviors, but it faces the problem of scalability under the explosive growth of Internet traffic and high-speed access. Scalable Internet traffic measurement and analysis is difficult because a large data set requires matching computing and storage resources. Hadoop, an open-source computing platform combining MapReduce and a distributed file system, has become a popular infrastructure for massive data analytics because it facilitates scalable data processing and storage services on a distributed computing system consisting of commodity hardware. In this paper, we present a Hadoop-based traffic monitoring system that performs IP, TCP, HTTP, and NetFlow analysis of multiple terabytes of Internet traffic in a scalable manner. In experiments on a 200-node testbed, we achieved 14 Gbps throughput for 5 TB files with IP- and HTTP-layer analysis MapReduce jobs. We also discuss the performance issues related to traffic analysis MapReduce jobs.
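
    The per-layer analysis jobs described above follow the usual MapReduce pattern of keyed aggregation. The sketch below is a Hadoop Streaming style mapper/reducer that totals bytes per source IP; it is an illustration only, assuming one whitespace-separated text record per line with the source IP in the first field and a byte count in the second, whereas the authors' system processes packet traces and NetFlow records directly.

        import sys

        def mapper():
            # Emit "srcIP<TAB>bytes" for every record read from stdin.
            for line in sys.stdin:
                fields = line.split()
                if len(fields) < 2 or not fields[1].isdigit():
                    continue
                print(f"{fields[0]}\t{fields[1]}")

        def reducer():
            # Hadoop sorts map output by key, so records for one IP arrive together.
            current_ip, total = None, 0
            for line in sys.stdin:
                ip, nbytes = line.rstrip("\n").split("\t")
                if ip != current_ip and current_ip is not None:
                    print(f"{current_ip}\t{total}")
                    total = 0
                current_ip = ip
                total += int(nbytes)
            if current_ip is not None:
                print(f"{current_ip}\t{total}")

        if __name__ == "__main__":
            mapper() if "map" in sys.argv[1:] else reducer()

    Run locally, "cat records.txt | python3 bytecount.py map | sort | python3 bytecount.py reduce" emulates the job; under Hadoop Streaming the same script (here hypothetically named bytecount.py) would be supplied as the mapper and reducer commands.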

    Sharad Agarwal
  • Yaoqing Liu, Syed Obaid Amin, Lan Wang

    The size of the global Routing Information Base (RIB) has been increasing at an alarming rate. This directly leads to the rapid growth of the global Forwarding Information Base (FIB) size, which raises serious concerns for ISPs as the FIB memory in line cards is much more expensive than regular memory modules and it is very costly to increase this memory capacity frequently for all the routers in an ISP. One potential solution is to install only the most popular FIB entries into the fast memory (i.e., a FIB cache), while storing the complete FIB in slow memory. In this paper, we propose an effective FIB caching scheme that achieves a considerably higher hit ratio than previous approaches while preventing the cache-hiding problem. Our experimental results show that with only 20K prefixes in the cache (5.36% of the actual FIB size), the hit ratio of our scheme is higher than 99.95%. Our scheme can also handle cache misses, cache replacement and routing updates efficiently.
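
    The basic mechanics of a FIB cache, longest-prefix match against a small fast table with a miss path into the full FIB and LRU eviction, can be sketched as follows. This is not the authors' scheme: their contribution lies precisely in preventing the cache-hiding problem (a cached covering prefix can return the wrong next hop for a more-specific prefix that remains only in the full FIB) and in handling misses, replacement, and route updates efficiently, none of which this naive sketch does; the prefixes and next hops are invented.

        import ipaddress
        from collections import OrderedDict

        # Full FIB in "slow memory": prefix -> next hop (values invented).
        full_fib = {
            ipaddress.ip_network("10.0.0.0/8"): "next_hop_A",
            ipaddress.ip_network("10.1.0.0/16"): "next_hop_B",   # more-specific route
            ipaddress.ip_network("192.0.2.0/24"): "next_hop_C",
        }

        def lpm(fib, addr):
            """Longest-prefix match over a dict of prefix -> next hop."""
            best = None
            for prefix, nh in fib.items():
                if addr in prefix and (best is None or prefix.prefixlen > best[0].prefixlen):
                    best = (prefix, nh)
            return best

        class FibCache:
            """LRU cache of popular FIB entries; naive with respect to cache hiding."""

            def __init__(self, capacity):
                self.capacity = capacity
                self.entries = OrderedDict()          # prefix -> next hop, LRU order

            def lookup(self, addr_str):
                addr = ipaddress.ip_address(addr_str)
                hit = lpm(self.entries, addr)
                if hit:
                    self.entries.move_to_end(hit[0])  # cache hit: refresh LRU position
                    return hit[1]
                miss = lpm(full_fib, addr)            # slow path: consult the full FIB
                if miss:
                    self.entries[miss[0]] = miss[1]   # WARNING: may hide a more-specific prefix
                    if len(self.entries) > self.capacity:
                        self.entries.popitem(last=False)
                    return miss[1]
                return None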

    Fabián E. Bustamante
  • Robert Beverly, Mark Allman

    The computer science research paper review process is largely human and time-intensive. More worrisome, review processes are frequently questioned, and often non-transparent. This work advocates applying computer science methods and tools to the computer science review process. As an initial exploration, we data mine the submissions, bids, reviews, and decisions from a recent top-tier computer networking conference. We empirically test several common hypotheses, including the existence of readability, citation, call-for-paper adherence, and topical bias. From our findings, we hypothesize review process methods to improve fairness, efficiency, and transparency.

    Sharad Agarwal
  • Mark Allman

    While there has been much buzz in the community about the large depth of queues throughout the Internet—the so-called “bufferbloat” problem—there has been little empirical understanding of the scope of the phenomenon. Yet, the supposed problem is being used as input to engineering decisions about the evolution of protocols. While we know from wide-scale measurements that bufferbloat can happen, we have no empirically based understanding of how often bufferbloat does happen. In this paper we use passive measurements to assess the bufferbloat phenomenon.

    Nikolaos Laoutaris
  • P. Brighten Godfrey

    This article captures some of the discussion and insights from this year's ACM Workshop on Hot Topics in Networks (HotNets-XI).

  • Yan Grunenberger, Jonathan M. Smith

    We attended the 2012 Mobile World Congress in Barcelona, Spain. This note reports on some of our observations that we believe might be relevant to the SIGCOMM community.

  • Engin Arslan, Murat Yuksel, Mehmet Hadi Gunes

    Management and automated configuration of large-scale networks is one of the crucial issues for Internet Service Providers (ISPs). Since wrong configurations may lead to loss of an enormous amount of customer traffic, highly experienced network administrators are typically the ones who are trusted for the management and configuration of a running ISP network. We frame the management and experimentation of a network as a "game" for training network administrators without having to risk the network operation. The interactive environment treats the trainee network administrators as players of a game and tests them with various network failures or dynamics.

  • Arjuna Sathiaseelan, Jon Crowcroft

    The Computer Laboratory, University of Cambridge hosted a workshop on "Internet on the Move" on September 22, 2012. The objective of the workshop was to bring together academia, industry and regulators to discuss the challenges in realizing the notion of ubiquitous mobile Internet. This editorial gives a general overview of the issues discussed around enabling universal mobile coverage and of some of the solutions that have been proposed for achieving ubiquitous mobile connectivity.

  • Jennifer Rexford, Pamela Zave

    A workshop on Abstractions for Network Services, Architecture, and Implementation brought together researchers interested in developing better abstractions for creating and analyzing networked services and network architectures. The workshop took place at DIMACS on May 21-23, 2012. This report summarizes the presentations and discussions that took place at the workshop, organized by areas of abstraction such as layers, domains, and graph properties.

  • S. Keshav

    I considered many ideas for my last CCR editorial but, in the end, decided to write about something that I think I share with every reader of CCR, yet is something we rarely acknowledge even in conversation, let alone in print: the joy of research.

    For me, research is the process of exploring new ideas, formulating problems in areas yet undefined, and then using our ever-expanding toolkit of algorithms, technologies, and theories to solve them. I find this process to be deep, satisfying, and fun. It is fun to explore new ideas, fun to learn new tools, techniques, and theories, and fun to solve puzzles. I'm especially delighted during that brief, sharp, shining moment when it is as if a puzzle piece has clicked into place and confusion is transformed into simplicity. It is this that keeps me fueled as a researcher; it is the direct experience of the fun of research that converts the best of our students to our ranks.
     
    To be sure, there are many other ways to have fun. One can climb mountains or hike forbidding landscapes; swim the waves or fly from continent to continent in search of exotic cuisines. I have done some of these, but find them all, to some degree, unsatisfying. These experiences are intense but ephemeral. Besides, it is hard to justify that they have any socially redeeming value. In contrast, research, especially the kind of work that is both theoretically challenging yet practically applicable, is not only fun but also worthwhile.
     
    Of course, not all aspects of research are fun. Behind each sweet moment of success there can be many dreary hours of work, with little guarantee that a hunch will pan out. Each idea carried into practice, each paper accepted for publication, and each research project that benefits society builds on many discarded ideas, rejected papers, and failed projects. Yet, even in the face of these failures, I feel that the process itself is fun. I sympathize with Oscar Wilde, who wrote “We are all in the gutter, but some of us are looking at the stars.”
     
    I think every researcher, at some level, has a direct understanding of what I mean by the joy of research. I know this because our shared experience binds us despite barriers of geography, culture, and language. I find an instant rapport with other researchers when discussing each other’s work: the barriers to communication drop as we share our experiences, hunches, and ideas. The excitement simply shines through.
     
    Unfortunately, we do not often share our sense of joy with outsiders. Our ideas are usually hidden behind walls of jargon, inscrutable mathematical notation, and the arcane conventions of academic publishing. This does not serve us well: baffled funders and soporific students do not aid our cause. Instead, we should let our exuberance and joy--tempered with gratitude to our employers--motivate us to share our ideas. By interpreting our work for non-experts, we open channels of communication with those who can directly benefit from our ideas and innovations. For many of us, this is one of the deepest motivations for our work.
     
    So, let the joy of research be your touchstone. Share this joy with your fellow researchers, but share it too with others, that the fire in your work may ignite a light elsewhere, and that your work benefit society at large.
  • Xuetao Wei, Nicholas Valler, B. Aditya Prakash, Iulian Neamtiu, Michalis Faloutsos, Christos Faloutsos

    If a false rumor propagates via Twitter while the truth propagates between friends on Facebook, which one will prevail? This question captures the essence of the problem we address here. We study the intertwined propagation of two competing "memes" (or viruses, rumors, products, etc.) in a composite network. A key novelty is the use of a composite network, which in its simplest model is defined as a single set of nodes with two distinct types of edges interconnecting them. Each meme spreads across the composite network in accordance with an SIS-like propagation model (a flu-like infection-recovery process). To study the epidemic behavior of our system, we formulate it as a non-linear dynamic system (NLDS). We develop a metric for each meme that is based on the eigenvalue of an appropriately constructed matrix and argue that this metric plays a key role in determining the "winning" meme. First, we prove that our metric determines the tipping point at which both memes eventually become extinct. Second, we conjecture that the meme with the strongest metric will most likely prevail over the other, and we show evidence of this via simulations in both real and synthetic composite networks. Our work is among the first to study the interplay between two competing memes in composite networks.
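
    The abstract does not spell out how the matrix behind each meme's metric is constructed. In classical single-virus SIS analysis, though, the comparable quantity is (beta/delta) * lambda_max(A): the infection-to-recovery rate ratio times the largest eigenvalue of the adjacency matrix of the layer the meme spreads on. The sketch below computes that quantity for two toy layers of a composite network purely to illustrate the kind of eigenvalue metric meant; the matrices and rates are invented and this is not the paper's exact formula.

        import numpy as np

        def sis_strength(adj, beta, delta):
            """Classic SIS footprint of a meme on one layer: (beta/delta) * lambda_max(A).
            In the single-virus SIS model, values above 1 mean the meme can persist."""
            lam_max = float(np.linalg.eigvalsh(adj)[-1])   # eigvalsh returns ascending eigenvalues
            return (beta / delta) * lam_max

        # Toy composite network: the same 4 nodes with two edge types
        # (e.g., Twitter edges vs. Facebook friendship edges).
        layer1 = np.array([[0, 1, 1, 1],
                           [1, 0, 1, 0],
                           [1, 1, 0, 1],
                           [1, 0, 1, 0]], dtype=float)
        layer2 = np.array([[0, 1, 0, 0],
                           [1, 0, 1, 0],
                           [0, 1, 0, 1],
                           [0, 0, 1, 0]], dtype=float)

        s1 = sis_strength(layer1, beta=0.20, delta=0.50)   # meme 1 on layer 1
        s2 = sis_strength(layer2, beta=0.30, delta=0.40)   # meme 2 on layer 2
        print(f"meme 1 strength: {s1:.2f}, meme 2 strength: {s2:.2f}")
        print("stronger meme by this toy metric:", "meme 1" if s1 > s2 else "meme 2")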

    Augustin Chaintreau
  • Sebastian Zander, Lachlan L.H. Andrew, Grenville Armitage, Geoff Huston, George Michaelson

    The Teredo auto-tunnelling protocol allows IPv6 hosts behind IPv4 NATs to communicate with other IPv6 hosts. It is enabled by default on Windows Vista and Windows 7. But Windows clients are self-constrained: if their only IPv6 access is Teredo, they are unable to resolve host names to IPv6 addresses. We use web-based measurements to investigate the (latent) Teredo capability of Internet clients, and the delay introduced by Teredo. We compare this with native IPv6 and 6to4 tunnelling capability and delay. We find that only 6--7% of connections are from fully IPv6-capable clients, but an additional 15--16% of connections are from clients that would be IPv6-capable if Windows Teredo was not constrained. However, Teredo increases the median latency to fetch objects by 1--1.5 seconds compared to IPv4 or native IPv6, even with an optimally located Teredo relay. Furthermore, in many cases Teredo fails to establish a tunnel.

    Jia Wang
  • Ingmar Poese, Benjamin Poese, Georgios Smaragdakis, Steve Uhlig, Anja Feldmann, Bruce Maggs

    Today, a large fraction of Internet traffic is originated by Content Delivery Networks (CDNs). To cope with increasing demand for content, CDNs have deployed massively distributed infrastructures. These deployments pose challenges for CDNs as they have to dynamically map end-users to appropriate servers without being fully aware of the network conditions within an Internet Service Provider (ISP) or of the end-user location. On the other hand, ISPs struggle to cope with rapid traffic shifts caused by the dynamic server selection policies of the CDNs. The challenges that CDNs and ISPs face separately can be turned into an opportunity for collaboration. We argue that it is sufficient for CDNs and ISPs to coordinate only in server selection, not routing, in order to perform traffic engineering. To this end, we propose Content-aware Traffic Engineering (CaTE), which dynamically adapts the server selection for content hosted by CDNs using ISP recommendations on small time scales. CaTE relies on the observation that by selecting an appropriate server among those available to deliver the content, the path of the traffic in the network can be influenced in a desired way. We present the design and implementation of a prototype to realize CaTE, and show how CDNs and ISPs can jointly take advantage of already deployed distributed hosting infrastructures and path diversity, as well as the ISP's detailed view of the network status, without revealing sensitive operational information. Relying on tier-1 ISP traces, we show that CaTE allows CDNs to enhance the end-user experience while enabling an ISP to achieve several traffic engineering goals.
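
    The core idea, choosing among the CDN servers able to serve a request according to an ISP-supplied preference rather than the CDN's view alone, can be pictured in a few lines. Everything below is invented for illustration (server names, the ranking dictionary, its tie to link load); the actual CaTE interface and recommendation mechanism are those described in the paper.

        # Toy ISP-assisted server selection in the spirit of CaTE (illustrative only).
        # candidates: servers that hold the requested content (the CDN's knowledge).
        # isp_rank:   ISP preference for the path from this client to each server,
        #             lower is better (hypothetically derived from current link load).

        def select_server(candidates, isp_rank):
            """Pick the candidate the ISP ranks best; fall back to the CDN's first choice."""
            ranked = [s for s in candidates if s in isp_rank]
            return min(ranked, key=lambda s: isp_rank[s]) if ranked else candidates[0]

        candidates = ["srv-fra-1", "srv-ams-2", "srv-par-3"]    # hypothetical server IDs
        isp_preference = {"srv-fra-1": 3, "srv-ams-2": 1, "srv-par-3": 2}
        print(select_server(candidates, isp_preference))         # -> srv-ams-2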

    Renata Teixeira
  • Marko Zec, Luigi Rizzo, Miljenko Mikuc

    Can a software routing implementation compete in a field generally reserved for specialized lookup hardware? This paper presents DXR, an IPv4 lookup scheme based on transforming large routing tables into compact lookup structures which easily fit into cache hierarchies of modern CPUs. DXR supports various memory/speed tradeoffs and scales almost linearly with the number of CPU cores. The smallest configuration, D16R, distills a real-world BGP snapshot with 417,000 IPv4 prefixes and 213 distinct next hops into a structure consuming only 782 Kbytes, less than 2 bytes per prefix, and achieves 490 million lookups per second (MLps) in synthetic tests using uniformly random IPv4 keys on a commodity 8-core CPU. Some other DXR configurations exceed 700 MLps at the cost of increased memory footprint. DXR significantly outperforms a software implementation of DIR-24-8-BASIC, has better scalability, and requires less DRAM bandwidth. Our prototype works inside the FreeBSD kernel, which permits DXR to be used with standard APIs and routing daemons such as Quagga and XORP, and to be validated by comparing lookup results against the BSD radix tree.
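
    The transformation DXR relies on, flattening a prefix table into a sorted array of disjoint address ranges so that a lookup becomes a binary search, can be sketched as below. This is a deliberately simplistic illustration of the range-transform idea and nothing like DXR's compact encoding; the prefixes and next hops are invented and overlaps are resolved with a naive longest-prefix sweep.

        import bisect
        import ipaddress

        # Toy route table (prefix -> next hop); values invented for illustration.
        routes = {
            "0.0.0.0/0":    "nh_default",
            "10.0.0.0/8":   "nh_A",
            "10.1.0.0/16":  "nh_B",
            "192.0.2.0/24": "nh_C",
        }

        def build_ranges(route_table):
            """Flatten prefixes into disjoint (start_address, next_hop) ranges.
            Naive approach: at every prefix boundary, recompute the longest match."""
            nets = {ipaddress.ip_network(p): nh for p, nh in route_table.items()}
            points = set()
            for net in nets:
                points.add(int(net.network_address))
                end = int(net.broadcast_address) + 1      # address just past the prefix
                if end <= 0xFFFFFFFF:
                    points.add(end)
            starts, next_hops = [], []
            for point in sorted(points):
                addr = ipaddress.ip_address(point)
                best = max((n for n in nets if addr in n),
                           key=lambda n: n.prefixlen, default=None)
                nh = nets[best] if best else None
                if not next_hops or nh != next_hops[-1]:  # merge equal neighboring ranges
                    starts.append(point)
                    next_hops.append(nh)
            return starts, next_hops

        def lookup(starts, next_hops, addr_str):
            """Binary search for the range covering addr_str."""
            i = bisect.bisect_right(starts, int(ipaddress.ip_address(addr_str))) - 1
            return next_hops[i]

        starts, next_hops = build_ranges(routes)
        print(lookup(starts, next_hops, "10.1.2.3"))      # -> nh_B
        print(lookup(starts, next_hops, "203.0.113.9"))   # -> nh_default

    Binary search over sorted ranges trades a logarithmic number of cache-friendly memory accesses for the memory a full trie or direct table would need; the various DXR configurations mentioned in the abstract tune exactly this memory/speed tradeoff.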

    Nikolaos Laoutaris
  • Jon Whiteaker, Fabian Schneider, Renata Teixeira, Christophe Diot, Augustin Soule, Fabio Picconi, Martin May

    The success of over-the-top (OTT) services reflects users' demand for personalization of digital services at home. ISPs propose fulfilling this demand with a cloud delivery model, which would simplify the management of the service portfolio and bring them additional revenue streams. We argue that this approach has many limitations that can be fixed by turning the home gateway into a flexible execution platform. We define requirements for such a "service-hosting gateway" and build a proof of concept prototype using a virtualized Intel Groveland system-on-a-chip platform. We discuss remaining challenges such as service distribution, security and privacy, management, and home integration.

    David Wetherall
  • Jeffrey C. Mogul, Lucian Popa

    Infrastructure-as-a-Service ("Cloud") data-centers intrinsically depend on high-performance networks to connect servers within the data-center and to the rest of the world. Cloud providers typically offer different service levels, and associated prices, for different sizes of virtual machine, memory, and disk storage. However, while all cloud providers provide network connectivity to tenant VMs, they seldom make any promises about network performance, and so cloud tenants suffer from highly-variable, unpredictable network performance. Many cloud customers do want to be able to rely on network performance guarantees, and many cloud providers would like to offer (and charge for) these guarantees. But nobody really agrees on how to define these guarantees, and it turns out to be challenging to define "network performance" in a way that is useful to both customers and providers. We attempt to bring some clarity to this question.

  • Tanja Zseby, kc claffy

    On May 14-15, 2012, CAIDA hosted the first international Workshop on Darkspace and Unsolicited Traffic Analysis (DUST 2012) to provide a forum for discussion of the science, engineering, and policy challenges associated with darkspace and unsolicited traffic analysis. This report captures threads discussed at the workshop and lists resulting collaborations.
