Computer Communication Review: Papers

  • N. Feamster, J. Rexford, E. Zegura

    Software Defined Networking (SDN) is an exciting technology that enables innovation in how we design and manage networks. Although this technology seems to have appeared suddenly, SDN is part of a long history of efforts to make computer networks more programmable. In this paper, we trace the intellectual history of programmable networks, including active networks, early efforts to separate the control and data plane, and more recent work on OpenFlow and network operating systems. We highlight key concepts, as well as the technology pushes and application pulls that spurred each innovation. Along the way, we debunk common myths and misconceptions about the technologies and clarify the relationship between SDN and related technologies such as network virtualization.

  • A. Dainotti, K. Benson, A. King, kc claffy, M. Kallitsis, E. Glatz, X. Dimitropoulos

    This erratum is intended to help readers identify and properly understand our contribution to the SIGCOMM CCR Newsletter, Volume 44, Issue 1 (January 2014), pages 42-49.

  • Dina Papagiannaki
    Happy new year! Welcome to the January 2014 issue of ACM Computer Communications Review. We are starting the new year with one of the largest CCR issues I have had the pleasure to edit. This issue contains 10 papers: 6 technical peer-reviewed contributions and 4 editorial notes.
     
    The technical papers cover a range of areas, such as routing, Internet measurements, WiFi networking, named data networking and online social networks. They should make a very diverse and interesting read for the CCR audience. In the editorial zone, we have had the pleasure to receive 4 contributions, 3 out of which address fundamental issues around how our community works.
     
    In his editorial note, Prof. Nick McKeown, from Stanford University, provides his perspective on what works well and what could be improved in the way our premier conference, ACM SIGCOMM, is organized. Prof. McKeown makes a case for a more inclusive conference, drawing examples from other communities. He further attempts to identify possible directions we could pursue in order to transfer our fundamental contributions to industry and to society as a whole.
     
    Another editorial touches upon some of the issues that Prof. McKeown outlines in his note. Its focus is on identifying ways to bridge the gap between the networking community and the Internet standardization bodies. The authors, from Broadcom, Nokia, University of Cambridge, Aalto University and University of Helsinki, describe the differences and similarities between how the two communities operate. They further provide interesting data on the participation of academic and industrial researchers in standardization bodies, and discuss ways to minimize the friction that may arise as a particular technology makes the leap from the scientific community to industry.
     
    Similarities can also be found in Dr. Partridge’s editorial. Dr. Partridge identifies the difficulties faced in publishing work that challenges the existing Internet architecture. One of the interesting recommendations made in the editorial is that a new Internet architecture should not start off trying to be backwards compatible. He encourages our community to be more receptive when it comes to those contributions.
     
    Lastly, we have the pleasure to host our second interview in this issue of CCR. Prof. Mellia interviewed Dr. Antonio Nucci, the current CTO of Narus, based in the Bay Area. In this interview you will find a description of Dr. Nucci’s journey from academic researcher to Best CTO awardee, along with his recommendations on interesting research directions for current and future PhD candidates.
     
    All in all, this issue of CCR features a number of interesting, thought-provoking articles that we hope you enjoy. The intention behind some of them is that they become the catalyst for a discussion as to how we can make our work more impactful in today’s society, a discussion that I find of critical importance, given our society’s increasing reliance on the Internet.
     
    This issue also marks a number of departures from the editorial board. I would like to thank Dr. Nikolaos Laoutaris and Dr. Jia Wang for their continuous help over the past 2 and 3 years, respectively. We also welcome Prof. Phillipa Gill, from Stony Brook University, and Prof. Joel Sommers, from Colgate University. They both join the editorial board with a lot of passion to contribute to CCR’s continued success.
    I hope this issue stimulates some discussion and I am at your disposal for any questions or suggestions.
  • Ahmed Elmokashfi, Amogh Dhamdhere
    In the mid-2000s there was some concern in the research and operational communities over the scalability of BGP, the Internet’s interdomain routing protocol. The focus was on update churn (the number of routing protocol messages that are exchanged when the network undergoes routing changes) and whether churn was growing too fast for routers to handle. Recent work somewhat allayed those fears, showing that update churn grows slowly in IPv4, but the question of routing scalability has re-emerged with IPv6. In this work, we develop a model that expresses BGP churn in terms of four measurable properties of the routing system. We show why the number of updates normalized by the size of the topology is constant, and why routing dynamics are qualitatively similar in IPv4 and IPv6. We also show that the exponential growth of IPv6 churn is entirely expected, as the underlying IPv6 topology is also growing exponentially.
    Jia Wang
  • Mishari Almishari, Paolo Gasti, Naveen Nathan, Gene Tsudik
    Content-Centric Networking (CCN) is an alternative to today’s Internet IP-style packet-switched host-centric networking. One key feature of CCN is its focus on content distribution, which dominates current Internet traffic and which is not well-served by IP. Named Data Networking (NDN) is an instance of CCN; it is an on-going research effort aiming to design and develop a full-blown candidate future Internet architecture. Although NDN emphasizes content distribution, it must also support other types of traffic, such as conferencing (audio, video) as well as more historical applications, such as remote login and file transfer. However, the suitability of NDN for applications that are not obviously or primarily content-centric is less clear. We believe that such applications are not going away any time soon. In this paper, we explore NDN in the context of a class of applications that involve low-latency bi-directional (point-to-point) communication. Specifically, we propose a few architectural amendments to NDN that provide significantly better throughput and lower latency for this class of applications by reducing routing and forwarding costs. The proposed approach is validated via experiments.
    Katerina Argyraki
  • Mohammad Rezaur Rahman, Pierre-André Noël, Chen-Nee Chuah, Balachander Krishnamurthy, Raissa M. D'Souza, S. Felix Wu
    Online social network (OSN) based applications often rely on user interactions to propagate information or to recruit more users, producing a sequence of user actions called an adoption process or cascade. This paper presents the first attempt to quantitatively study the adoption process or cascade of such OSN-based applications by analyzing detailed user activity data from a popular Facebook gifting application. In particular, due to the challenge of monitoring user interactions over all possible channels on OSN platforms, we focus on characterizing the adoption process that relies only on user-based invitation (which is applicable to most gifting applications). We characterize the adoptions by tracking the invitations sent by the existing users to their friends through the Facebook gifting application and the events when their friends install the application for the first time. We found that a small number of big cascades carry the adoption of most of the application users. Contrary to common beliefs, we did not observe special influential nodes that are responsible for the viral adoption of the application.
    Fabian E. Bustamante
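    As a rough illustration of the cascade characterization described above, the sketch below attributes each new adopter to the earliest already-adopted inviter and treats uninvited installs as the roots of new cascades, then reports cascade sizes. The attribution rule and all names are illustrative assumptions, not the paper's actual methodology.

    ```python
    from collections import defaultdict

    def cascade_sizes(installs, invitations):
        """Compute adoption-cascade sizes from install and invitation events.

        installs: list of (time, user) application-install events.
        invitations: list of (time, inviter, invitee) events.
        A new adopter is attached to the cascade of the earliest already-adopted
        inviter; users who install without any prior invitation start a new cascade.
        """
        invites_to = defaultdict(list)              # invitee -> [(time, inviter), ...]
        for t, inviter, invitee in invitations:
            invites_to[invitee].append((t, inviter))

        root = {}                                   # user -> root of their cascade
        for t, user in sorted(installs):
            parent = None
            for inv_time, inviter in sorted(invites_to.get(user, [])):
                if inv_time <= t and inviter in root:
                    parent = inviter
                    break
            root[user] = root[parent] if parent else user

        sizes = defaultdict(int)
        for r in root.values():
            sizes[r] += 1
        return sorted(sizes.values(), reverse=True)

    # Toy example: "a" adopts spontaneously and invites "b", who invites "c"; "d" adopts alone.
    installs = [(1, "a"), (3, "b"), (5, "c"), (6, "d")]
    invitations = [(2, "a", "b"), (4, "b", "c")]
    print(cascade_sizes(installs, invitations))     # [3, 1]
    ```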
  • Phillipa Gill, Michael Schapira, Sharon Goldberg
    Researchers studying the inter-domain routing system typically rely on models to fill in the gaps created by the lack of information about the business relationships and routing policies used by individual autonomous systems. To shed light on this unknown information, we asked 100 network operators about their routing policies, billing models, and thoughts on routing security. This short paper reports the survey's results and discusses their implications.
    Jia Wang
  • Pablo Salvador, Luca Cominardi, Francesco Gringoli, Pablo Serrano
    The IEEE 802.11aa Task Group has recently standardized a set of mechanisms to efficiently support video multicasting, namely, the Group Addressed Transmission Service (GATS). In this article, we report the implementation of these mechanisms over commodity hardware, which we make publicly available, and conduct a study to assess their performance under a variety of real-life scenarios. To the best of our knowledge, this is the first experimental assessment of GATS, which is performed along three axes: we report their complexity in terms of lines of code, their effectiveness when delivering video traffic, and their efficiency when utilizing wireless resources. Our results provide key insights on the resulting trade-offs when using each mechanism, and pave the way for new enhancements to deliver video over 802.11 Wireless LANs.
    Sharad Agarwal
  • Alberto Dainotti, Karyn Benson, Alistair King, kc claffy, Michael Kallitsis, Eduard Glatz, Xenofontas Dimitropoulos
    One challenge in understanding the evolution of Internet infrastructure is the lack of systematic mechanisms for monitoring the extent to which allocated IP addresses are actually used. Address utilization has been monitored via actively scanning the entire IPv4 address space. We evaluate the potential to leverage passive network traffic measurements in addition to or instead of active probing. Passive traffic measurements introduce no network traffic overhead, do not rely on unfiltered responses to probing, and could potentially apply to IPv6 as well. We investigate two challenges in using passive traffic for address utilization inference: the limited visibility of a single observation point; and the presence of spoofed IP addresses in packets that can distort results by implying faked addresses are active. We propose a methodology for removing such spoofed traffic on both darknets and live networks, which yields results comparable to inferences made from active probing. Our preliminary analysis reveals a number of promising findings, including novel insight into the usage of the IPv4 address space that would expand with additional vantage points.
    Renata Teixeira
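    As a very rough illustration of passive address-utilization inference, the sketch below counts a source /24 as used only if it lies outside obviously unroutable space and some host in it is also seen as a traffic destination, a crude bidirectionality check standing in for the paper's spoofing-removal methodology. The bogon list, data layout, and function names are assumptions for illustration only.

    ```python
    import ipaddress

    # Obviously unroutable (bogon) IPv4 space; an illustrative, non-exhaustive list.
    BOGONS = [ipaddress.ip_network(p) for p in (
        "0.0.0.0/8", "10.0.0.0/8", "127.0.0.0/8", "172.16.0.0/12",
        "192.168.0.0/16", "224.0.0.0/4", "240.0.0.0/4")]

    def is_bogon(addr):
        ip = ipaddress.ip_address(addr)
        return any(ip in net for net in BOGONS)

    def active_slash24s(packets):
        """Estimate used IPv4 /24 blocks from a passive packet trace.

        packets: iterable of (src_ip, dst_ip) strings. A source /24 counts as used
        only if it is outside bogon space and some host in it also appears as a
        traffic destination (a crude bidirectionality check standing in for a real
        spoofing filter).
        """
        seen_as_dst, candidates = set(), set()
        for src, dst in packets:
            if not is_bogon(dst):
                seen_as_dst.add(ipaddress.ip_network(dst + "/24", strict=False))
            if not is_bogon(src):
                candidates.add(ipaddress.ip_network(src + "/24", strict=False))
        return candidates & seen_as_dst

    trace = [("203.0.113.7", "198.51.100.1"),   # forward packet
             ("198.51.100.1", "203.0.113.7"),   # reply packet
             ("10.0.0.5", "198.51.100.1")]      # private source, ignored
    print(len(active_slash24s(trace)))          # 2: both public /24s look used
    ```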
  • Craig Partridge
    Some of the challenges of developing and maturing a future internet architecture (FIA) are described. The note is based on a talk given at the Conference on Future Internet Technologies 2013.
  • Marco Mellia
    Dr. Antonio Nucci is the chief technology officer of Narus and is responsible for setting the company’s direction with respect to technology and innovation. He oversees the entire technology innovation lifecycle, including incubation, research, and prototyping. He also is responsible for ensuring a smooth transition to engineering for final commercialization. Antonio has published more than 100 technical papers and has been awarded 38 U.S. patents. He authored a book, “Design, Measurement and Management of Large-Scale IP Networks: Bridging the Gap Between Theory and Practice”, in 2009 on advanced network analytics. In 2007 he was recognized for his vision and contributions with the prestigious Infoworld CTO Top 25 Award. In 2013, Antonio was honored by InfoSecurity Products Guide’s 2013 Global Excellence Awards as “CTO of the Year” [1] and Gold winner in the “People Shaping Info Security” category. He served as a technical lead member of the Enduring Security Framework (ESF) initiative sponsored by various U.S. agencies to produce a set of recommendations, policies, and technology pilots to better secure the Internet (Integrated Network Defense). He is also a technical advisor for several venture capital firms. Antonio holds a Ph.D. in computer science, and master’s and bachelor’s degrees.
  • Aaron Yi Ding, Jouni Korhonen, Teemu Savolainen, Markku Kojo, Joerg Ott, Sasu Tarkoma, Jon Crowcroft
    The participation of the network research community in the Internet Standards Development Organizations (SDOs) has been relatively low over the recent years, and this has drawn attention from both academics and industry due to its possible negative impact. The reasons for this gap are complex and extend beyond the purely technical. In this editorial we share our views on this challenge, based on the experience we have obtained from joint projects with universities and companies. We highlight the lessons learned, covering both successful and under-performing cases, and suggest viable approaches to bridge the gap between networking research and Internet standardization, aiming to promote and maximize the outcome of such collaborative endeavours.
  • Nick McKeown
    At every Sigcomm conference the corridors buzz with ideas about how to improve Sigcomm. It is a healthy sign that the premier conference in networking keeps debating how to reinvent and improve itself. In 2012 I got the chance to throw my hat into the ring; at the end of a talk I spent a few minutes describing why I think the Sigcomm conference should be greatly expanded. A few people encouraged me to write the ideas down.
    My high-level goal is to enlarge the Sigcomm tent, welcoming in more researchers and more of our colleagues from industry. More researchers because our field has grown enormously in the last two decades, and Sigcomm has not adapted. I believe our small program limits the opportunities for our young researchers and graduate students to publish new ideas, and therefore we are holding back their careers. More colleagues from industry because too few industry thought-leaders are involved in Sigcomm. The academic field of networking has weak ties to the industry it serves, particularly when compared to other fields of systems research. Both sides lose out: there is very little transfer of ideas in either direction, and not enough vigorous debate about the directions networking should be heading.
  • Dina Papagiannaki
    Welcome to the October 2013 issue of ACM Computer Communications Review. This issue includes 1 technical peer-reviewed paper, and 3 editorial notes. The topics include content distribution, SDN, and Internet Exchange Points (IXPs).
     
    One of my goals upon taking over as editor of CCR was to try to make it the place where we would publish fresh, novel ideas, but also where we could exchange perspectives and share lessons. This is the reason why for the past 9 months the editorial board and I have been working on what we call the "interview section" of CCR. This October issue carries our first interview note, captured by Prof. Joseph Camp, from SMU.
     
    Prof. Camp recently interviewed Dr. Ranveer Chandra, from MSR Redmond. The idea was to get Dr. Chandra's view on what has happened in white space networking since his best paper award at ACM SIGCOMM 2009. I find the resulting article very interesting. The amount of progress made in white space networking solutions, which has actually led to an operational deployment in Africa, is truly inspiring, and a clear testament to the amount of impact our community can have. I do sincerely hope that you will be as inspired as I was while reading it. This issue of CCR is also being published after ACM SIGCOMM in Hong Kong. SIGCOMM 2013 was marked by a number of records: 1) it has been the only SIGCOMM, at least that I remember, hit by a natural disaster, typhoon Utor, 2) as a result, two entire sessions were postponed to the afternoon (making it essentially dual track :)), and 3) it has had the highest acceptance rate since 1987, with 38 accepted papers.
     
    During their opening remarks the TPC chairs, Prof. Paul Barford, University of Wisconsin at Madison, and Prof. Srini Seshan, Carnegie Mellon University, presented the following two word clouds, which I found highly interesting. The first word cloud represents the most common words found in the titles of the submitted papers, and the second one the most common words in the titles of the accepted papers. Maybe they could form the input to a future editorial by someone in the community.
     
    As one can tell, Software Defined Networking (SDN) was one major topic in this year's conference. Interestingly, behavior, experience and privacy also appear prominently, confirming the belief of some in the community that SIGCOMM is indeed broadening its reach, covering a diverse set of topics that the Internet touches in today's society.
     
    This year's SIGCOMM also featured an experiment. All sessions were scribed in real time and notes were added in the blog at layer9.org. You can find a lot of additional information on the papers, and the questions asked on that site.
     
    Reaching the end of this note, I would like to welcome Prof. Sanjay Jha, from the University of New South Wales, in Sydney, Australia, to the editorial board. Prof. Jha brings expertise in a wide range of topics in networking, including wireless sensor networks, ad-hoc/community wireless networks, resilience and multicasting in IP networks and security protocols for wired/wireless networks. I hope you enjoy this issue, and its accompanying special issue on ACM SIGCOMM and the best papers of its associated workshops. I am always at your disposal in case of questions, suggestions, and comments.
  • Stefano Traverso, Mohamed Ahmed, Michele Garetto, Paolo Giaccone, Emilio Leonardi, Saverio Niccolini
    The dimensioning of caching systems represents a difficult task in the design of infrastructures for content distribution in the current Internet. This paper addresses the problem of defining a realistic arrival process for the content requests generated by users, due to its critical importance for both analytical and simulative evaluations of the performance of caching systems. First, with the aid of YouTube traces collected inside operational residential networks, we identify the characteristics of real traffic that need to be considered or can be safely neglected in order to accurately predict the performance of a cache. Second, we propose a new parsimonious traffic model, named the Shot Noise Model (SNM), that enables us to natively capture the dynamics of content popularity, whilst still being sufficiently simple to be employed effectively for both analytical and scalable simulative studies of caching systems. Finally, our results show that the SNM presents a much better solution to account for the temporal locality observed in real traffic compared to existing approaches.
    Augustin Chaintreau
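    The abstract above names the Shot Noise Model without detailing it; the sketch below generates a synthetic request trace under one simple instantiation, assuming Poisson content arrivals and a rectangular popularity pulse per content. The parameters and pulse shape are illustrative assumptions; the paper's SNM admits more general pulse shapes.

    ```python
    import numpy as np

    def generate_snm_trace(horizon, content_rate, mean_requests, lifetime, seed=0):
        """Generate a synthetic request trace under a simple Shot Noise Model.

        Assumptions for illustration: new contents appear as a Poisson process of
        rate `content_rate`; each content draws a Poisson number of requests with
        mean `mean_requests`, spread uniformly over a rectangular popularity pulse
        of duration `lifetime`.
        """
        rng = np.random.default_rng(seed)
        events, t, content_id = [], 0.0, 0
        while True:
            t += rng.exponential(1.0 / content_rate)    # birth of a new content
            if t >= horizon:
                break
            content_id += 1
            n_requests = rng.poisson(mean_requests)
            offsets = rng.uniform(0.0, lifetime, size=n_requests)
            events.extend((t + o, content_id) for o in offsets)
        events.sort()                                   # (timestamp, content_id), time ordered
        return events

    trace = generate_snm_trace(horizon=1000.0, content_rate=0.5,
                               mean_requests=20, lifetime=200.0)
    print(len(trace), "requests over", len({c for _, c in trace}), "contents")
    ```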
  • Jon Crowcroft, Markus Fidler, Klara Nahrstedt, Ralf Steinmetz
    Dagstuhl hosted a three-day seminar on the Future Internet on March 25-27, 2013. At the seminar, about 40 invited researchers from academia and industry discussed the promises, approaches, and open challenges of the Future Internet. This report gives a general overview of the presentations and outcomes of discussions of the seminar.
  • Nikolaos Chatzis, Georgios Smaragdakis, Anja Feldmann, Walter Willinger
    Internet eXchange Points (IXPs) are generally considered to be the successors of the four Network Access Points (NAPs) that were mandated as part of the decommissioning of the National Science Foundation Network (NSFNET) in 1994/95 to facilitate the transition from the NSFNET to the “public Internet” as we know it today. While this popular view does not tell the whole story behind the early beginnings of IXPs, what is true is that since around 1994, the number of operational IXPs worldwide has grown to more than 300 (as of May 2013), with the largest IXPs handling daily traffic volumes comparable to those carried by the largest Tier-1 ISPs. However, IXPs have never really attracted much attention from the networking research community. At first glance, this lack of interest seems understandable as IXPs have apparently little to do with current “hot” topic areas such as data centers and cloud services or Software Defined Networking (SDN) and mobile communication. However, we argue in this article that, in fact, IXPs are all about data centers and cloud services and even SDN and mobile communication and should be of great interest to networking researchers interested in understanding the current and future Internet ecosystem. To this end, we survey the existing but largely fragmented sources of publicly available information about IXPs to describe their basic technical and operational aspects and highlight the critical differences among the various IXPs in the different regions of the world, especially in Europe and North America. More importantly, we illustrate the important role that IXPs play in today’s Internet ecosystem and discuss how IXP-driven innovation in Europe is shaping and redefining the Internet marketplace, not only in Europe but increasingly so around the world.
  • Joseph D. Camp

    Ranveer Chandra is a Senior Researcher in the Mobility & Networking Research Group at Microsoft Research. His research is focused on mobile devices, with particular emphasis on wireless communications and energy efficiency. Ranveer is leading the white space networking project at Microsoft Research. He was invited to the FCC to present his work, and spectrum regulators from India, China, Brazil, Singapore and the US (including the FCC chairman) have visited the Microsoft campus to see his deployment of the world's first urban white space network. The following interview captures the essence of his work on white spaces by focusing on his work published in ACM SIGCOMM 2009, which received the Best Paper Award.

  • Dina Papagiannaki
    It is hard to believe it is already July. July marks a few milestones: i) schools are over, ii) most of the paper submission deadlines for the year are behind us, iii) a lot, but not all, of the reviewing duty has been accomplished. July also marks another milestone for CCR and myself: the longest CCR issue I have had the pleasure to edit this year! This issue of CCR features 15 papers in total: 7 technical papers, and 8 editorials. The technical papers cover the areas of Internet measurement, routing, privacy, content delivery, as well as data center networks.
     
    The editorial zone features the report on the workshop on Internet economics, and the workshop on active Internet measurements that took place in early 2013. It also features position papers on empirical Internet measurement, community networks, the use of recommendation engines in content delivery, and censorship on the Internet. I found every single one of them thought-provoking, and with the potential to initiate discussion in our community. Finally, we have two slightly more unusual editorial notes. The first one describes the experience of the CoNEXT 2012 Internet chairs, and the way they found to enable flawless connectivity despite only having access to residential-grade equipment. The second one focuses on the critical attitude that we often display as a community in our major conferences, in particular SIGCOMM, and suggests a number of directions conference organizers could take.
     
    This last editorial has made me think a little more about my experience as an author, reviewer, and TPC member in the past 15 years of my career. It quotes Jeffrey Naughton’s keynote at ICDE 2010 and his statement about the Computer Science community: “Funding agencies believe us when we say we suck.”
     
    Being on a TPC, one actually realizes that criticism is something that we naturally do as a community - it is not personal. For an author who has never been on a TPC, however, the process feels far more personal. I still remember the days when each one of my papers was prepared to what I considered perfection and sent into the “abyss”, sometimes with a positive and sometimes with a negative response. I also remember the disappointment of my first rejection. Some perspective on the process could possibly be of interest.
  • Thomas Callahan, Mark Allman, Michael Rabinovich

    The Internet crucially depends on the Domain Name System (DNS) to both allow users to interact with the system in human-friendly terms and also increasingly as a way to direct traffic to the best content replicas at the instant the content is requested. This paper is an initial study into the behavior and properties of the modern DNS system. We passively monitor DNS and related traffic within a residential network in an effort to understand server behavior--as viewed through DNS responses--and client behavior--as viewed through both DNS requests and traffic that follows DNS responses. We present an initial set of wide-ranging findings.

    Sharad Agarwal
  • Akmal Khan, Hyun-chul Kim, Taekyoung Kwon, Yanghee Choi

    The IRR is a set of globally distributed databases with which ASes can register their routing and address-related information. It is often believed that the quality of the IRR data is not reliable since there are few economic incentives for the ASes to register and update their routing information in a timely manner. To validate these negative beliefs, we carry out a comprehensive analysis of (IP prefix, its origin AS) pairs in BGP against the corresponding information registered with the IRR, and vice versa. Considering the BGP and IRR practices, we propose a methodology to match the (IP prefix, origin AS) pairs between the IRR and BGP. We observe that the practice of registering IP prefixes and origin ASes with the IRR is prevalent. However, the quality of the IRR data can vary substantially depending on routing registries, regional Internet registries (to which ASes belong), and AS types. We argue that the IRR can help improve the security level of BGP routing by making BGP routers selectively rely on the corresponding IRR data considering these observations.

    Bhaskaran Raman
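    A minimal sketch of one matching step of the kind the abstract above alludes to: classify a BGP (prefix, origin AS) pair as exactly registered in the IRR, covered by a less specific route object, or unregistered. The classification labels and data layout are assumptions for illustration, not the paper's full methodology.

    ```python
    import ipaddress

    def classify_bgp_pair(bgp_prefix, bgp_origin, irr_routes):
        """Classify a BGP (prefix, origin AS) pair against IRR route objects.

        irr_routes: dict mapping prefix strings to sets of origin ASNs registered
        for that prefix. Returns a coarse label; a real study would also account
        for registry, RIR, and AS-type effects.
        """
        net = ipaddress.ip_network(bgp_prefix)
        if bgp_prefix in irr_routes:
            return "exact-match" if bgp_origin in irr_routes[bgp_prefix] else "origin-mismatch"
        # Otherwise, look for a registered less specific (covering) route object.
        for irr_prefix, origins in irr_routes.items():
            irr_net = ipaddress.ip_network(irr_prefix)
            if net.version == irr_net.version and net.subnet_of(irr_net):
                return "covered" if bgp_origin in origins else "covered-origin-mismatch"
        return "unregistered"

    irr = {"192.0.2.0/24": {64500}, "198.51.100.0/22": {64501}}
    print(classify_bgp_pair("192.0.2.0/24", 64500, irr))    # exact-match
    print(classify_bgp_pair("198.51.100.0/24", 64501, irr)) # covered
    print(classify_bgp_pair("203.0.113.0/24", 64502, irr))  # unregistered
    ```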
  • Abdelberi Chaabane, Emiliano De Cristofaro, Mohamed Ali Kaafar, Ersin Uzun

    As the Internet struggles to cope with scalability, mobility, and security issues, new network architectures are being proposed to better accommodate the needs of modern systems and applications. In particular, Content-Oriented Networking (CON) has emerged as a promising next-generation Internet architecture: it sets out to decouple content from hosts, at the network layer, by naming data rather than hosts. CON comes with a potential for a wide range of benefits, including reduced congestion and improved delivery speed by means of content caching, simpler configuration of network devices, and security at the data level. However, it remains an interesting open question whether or not, and to what extent, this emerging networking paradigm bears new privacy challenges. In this paper, we provide a systematic privacy analysis of CON and the common building blocks among its various architectural instances in order to highlight emerging privacy threats, and analyze a few potential countermeasures. Finally, we present a comparison between CON and today's Internet in the context of a few privacy concepts, such as anonymity, censoring, traceability, and confidentiality.

    Augustin Chaintreau
  • Benjamin Frank, Ingmar Poese, Yin Lin, Georgios Smaragdakis, Anja Feldmann, Bruce Maggs, Jannis Rake, Steve Uhlig, Rick Weber

    Today a spectrum of solutions is available for distributing content over the Internet, ranging from commercial CDNs to ISP-operated CDNs to content-provider-operated CDNs to peer-to-peer CDNs. Some deploy servers in just a few large data centers while others deploy in thousands of locations or even on millions of desktops. Recently, major CDNs have formed strategic alliances with large ISPs to provide content delivery network solutions. Such alliances show the natural evolution of content delivery today driven by the need to address scalability issues and to take advantage of new technology and business opportunities. In this paper we revisit the design and operating space of CDN-ISP collaboration in light of recent ISP and CDN alliances. We identify two key enablers for supporting collaboration and improving content delivery performance: informed end-user to server assignment and in-network server allocation. We report on the design and evaluation of a prototype system, NetPaaS, that materializes them. Relying on traces from the largest commercial CDN and a large tier-1 ISP, we show that NetPaaS is able to increase CDN capacity on-demand, enable coordination, reduce download time, and achieve multiple traffic engineering goals leading to a win-win situation for both ISP and CDN.

    Fabián E. Bustamante
  • Simone Basso, Michela Meo, Juan Carlos De Martin

    Network users know much less than ISPs, Internet exchanges and content providers about what happens inside the network. Consequently, users can neither easily detect network neutrality violations nor readily exercise their market power by knowledgeably switching ISPs. This paper contributes to the ongoing efforts to empower users by proposing two models to estimate -- via application-level measurements -- a key network indicator, i.e., the packet loss rate (PLR) experienced by FTP-like TCP downloads. Controlled, testbed, and large-scale experiments show that the Inverse Mathis model is simpler and more consistent across the whole PLR range, but less accurate than the more advanced Likely Rexmit model for landline connections and moderate PLR.

    Nikolaos Laoutaris
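    The Inverse Mathis model mentioned above can be illustrated by inverting the classic Mathis throughput formula, rate = (MSS/RTT) * C / sqrt(p), to estimate the packet loss rate p from a measured download rate. The constant C and the exact parametrization below are assumptions for illustration and may differ from the paper's models.

    ```python
    import math

    def inverse_mathis_plr(goodput_bps, rtt_s, mss_bytes=1460, c=math.sqrt(3.0 / 2.0)):
        """Estimate the packet loss rate of an FTP-like TCP download.

        Inverts the Mathis formula  rate = (MSS / RTT) * C / sqrt(p)  into
        p = (MSS * C / (RTT * rate))**2, where rate is the measured goodput in
        bytes per second. The constant C is an assumption for illustration.
        """
        rate_bytes = goodput_bps / 8.0
        p = (mss_bytes * c / (rtt_s * rate_bytes)) ** 2
        return min(p, 1.0)

    # Example: a 2 Mbit/s download over a 100 ms RTT path implies roughly 0.5% loss.
    print(inverse_mathis_plr(goodput_bps=2_000_000, rtt_s=0.1))
    ```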
  • Howard Wang, Yiting Xia, Keren Bergman, T.S. Eugene Ng, Sambit Sahu, Kunwadee Sripanidkulchai

    Not only do big data applications impose heavy bandwidth demands, they also have diverse communication patterns (denoted as *-cast) that mix together unicast, multicast, incast, and all-to-all-cast. Effectively supporting such traffic demands remains an open problem in data center networking. We propose an unconventional approach that leverages physical layer photonic technologies to build custom communication devices for accelerating each *-cast pattern, and integrates such devices into an application-driven, dynamically configurable photonics accelerated data center network. We present preliminary results from a multicast case study to highlight the potential benefits of this approach.

    Hitesh Ballani
  • Giuseppe Bianchi, Andrea Detti, Alberto Caponi, Nicola Blefari Melazzi

    In some network and application scenarios, it is useful to cache content in network nodes on the fly, at line rate. Resilience of in-network caches can be improved by guaranteeing that all content therein stored is valid. Digital signatures could indeed be used to verify content integrity and provenance. However, their operation may be much slower than the line rate, thus limiting caching of cryptographically verified objects to a small subset of the forwarded ones. How does this affect caching performance? To answer this question, we devise a simple analytical approach which permits us to assess the performance of an LRU caching strategy storing a randomly sampled subset of requests. A key feature of our model is the ability to handle traffic beyond the traditional Independent Reference Model, thus permitting us to understand how performance varies under different temporal locality conditions. Results, also verified on real-world traces, show that content integrity verification does not necessarily bring about a performance penalty; rather, in some specific (but practical) conditions, performance may even improve.

    Sharad Agarwal
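    A minimal simulation sketch of the caching strategy analyzed in the abstract above: an LRU cache that, on a miss, verifies and stores an object only with some sampling probability. The class name, the verify_rate parameter, and the synthetic Zipf workload are illustrative assumptions, not the paper's model.

    ```python
    import random
    from collections import OrderedDict

    class SampledLRUCache:
        """LRU cache that verifies and stores only a random sample of missed objects.

        verify_rate models the fraction of forwarded objects whose signatures can be
        checked (and thus cached) at line rate.
        """

        def __init__(self, capacity, verify_rate):
            self.capacity = capacity
            self.verify_rate = verify_rate
            self.store = OrderedDict()      # key -> True, ordered by recency
            self.hits = 0
            self.requests = 0

        def request(self, key):
            self.requests += 1
            if key in self.store:
                self.store.move_to_end(key)             # refresh recency on a hit
                self.hits += 1
                return True
            if random.random() < self.verify_rate:      # verify and cache a sample of misses
                self.store[key] = True
                if len(self.store) > self.capacity:
                    self.store.popitem(last=False)      # evict least recently used
            return False

    # Tiny usage example with a Zipf-like synthetic request stream.
    import numpy as np
    cache = SampledLRUCache(capacity=100, verify_rate=0.1)
    for rank in np.random.zipf(1.2, size=100_000):
        if rank <= 10_000:                              # restrict to a finite catalog
            cache.request(int(rank))
    print("hit ratio:", cache.hits / cache.requests)
    ```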
  • Bart Braem, Chris Blondia, Christoph Barz, Henning Rogge, Felix Freitag, Leandro Navarro, Joseph Bonicioli, Stavros Papathanasiou, Pau Escrich, Roger Baig Viñas, Aaron L. Kaplan, Axel Neumann, Ivan Vilata i Balaguer, Blaine Tatum, Malcolm Matson

    Community Networks are large scale, self-organized and decentralized networks, built and operated by citizens for citizens. In this paper, we make a case for research on and with community networks, while explaining the relation to Community-Lab. The latter is an open, distributed infrastructure for researchers to experiment with community networks. The goal of Community-Lab is to advance research and empower society by understanding and removing obstacles for these networks and services.

  • Mohamed Ali Kaafar, Shlomo Berkovsky, Benoit Donnet

    During the last decade, we have witnessed a substantial change in content delivery networks (CDNs) and user access paradigms. Whereas previously users consumed content from a central server through their personal computers, nowadays they can reach a wide variety of repositories from virtually everywhere using mobile devices. This results in a considerable time-, location-, and event-based volatility of content popularity. In such a context, it is imperative for CDNs to put in place adaptive content management strategies, thus improving the quality of services provided to users and decreasing the costs. In this paper, we introduce predictive content distribution strategies inspired by methods developed in the Recommender Systems area. Specifically, we outline different content placement strategies based on the observed user consumption patterns, and advocate their applicability in state-of-the-art CDNs.

  • Mark Allman
  • Sam Burnett, Nick Feamster

    Free and open access to information on the Internet is at risk: more than 60 countries around the world practice some form of Internet censorship, and both the number of countries practicing censorship and the proportion of Internet users who are subject to it are likely to increase. We posit that, although it may not always be feasible to guarantee free and open access to information, citizens have the right to know when their access has been obstructed, restricted, or tampered with, so that they can make informed decisions on information access. We motivate the need for a system that provides accurate, verifiable reports of censorship and discuss the challenges involved in designing such a system. We place these challenges in context by studying their applicability to OONI, a new censorship measurement platform.

  • Jeffrey C. Mogul

    Many people in CS in general, and SIGCOMM in particular, have expressed concerns about an increasingly "hypercritical" approach to reviewing, which can block or discourage the publication of innovative research. The SIGCOMM Technical Steering Committee (TSC) has been addressing this issue, with the goal of encouraging cultural change without undermining the integrity of peer review. Based on my experience as an author, PC member, TSC member, and occasional PC chair, I examine possible causes for hypercritical reviewing, and offer some advice for PC chairs, reviewers, and authors. My focus is on improving existing publication cultures and peer review processes, rather than on proposing radical changes.

  • kc claffy, David Clark

    On December 12-13 2012, CAIDA and the Massachusetts Institute of Technology (MIT) hosted the (invitation-only) 3rd interdisciplinary Workshop on Internet Economics (WIE) at the University of California's San Diego Supercomputer Center. The goal of this workshop series is to provide a forum for researchers, commercial Internet facilities and service providers, technologists, economists, theorists, policy makers, and other stakeholders to empirically inform current and emerging regulatory and policy debates. The theme for this year's workshop was "Definitions and Data". This report describes the discussions and presents relevant open research questions identified by participants. Slides presented at the workshop and a copy of this final report are available at [2].

  • kc claffy

    On February 6-8, 2013, CAIDA hosted the fifth Workshop on Active Internet Measurements (AIMS-5) as part of our series of Internet Statistics and Metrics Analysis (ISMA) workshops. As with previous AIMS workshops, the goals were to further our understanding of the potential and limitations of active measurement research and infrastructure in the wide-area Internet, and to promote cooperative solutions and coordinated strategies to address future data needs of the network and security operations and research communities. The workshop focus this year was on creating, managing, and analyzing annotations of large longitudinal active Internet measurement data sets. Due to popular demand, we also dedicated half a day to large-scale active measurement (performance/topology) from mobile/cellular devices. This report describes topics discussed at this year's workshop. Materials related to the workshop are available at http://www.caida.org/workshops/.

  • Richard Charles Gass, Damien Saucez

    The ACM 8th international conference on emerging Networking EXperiments and Technologies (CoNEXT) was organized in a lovely hotel in the south of France. Although it was in an excellent location in the city center of Nice with views to the sea, it suffered from poor Internet connectivity. In this paper we describe what happened to the network at CoNEXT and explain why Internet connectivity is usually a problem at small hotel venues. Next we highlight the usual issues with the network equipment that lead to the general network dissatisfaction of conference attendees. Finally we describe how we alleviated the problem by offloading network services and all network traffic into the cloud while supporting over 100 simultaneous connected devices on a single ADSL link with a device that is rated to only support around 15-20. Our experience shows that with simple offloading of certain network services, small conference venues with limited budget no longer have to be plagued by the usual factors that lead to an unsatisfactory Internet connectivity experience.

  • Dina Papagiannaki

    Here is my second issue of CCR, and I am really happy to see that a lot of the things I wrote in my previous editorial are happening or are on their way! Thanks to the wonderful editorial team this issue has five technical papers, while some of the area editors have started contacting prominent members of our community to obtain their retrospective on their past work. In parallel, I have been really fortunate to receive a number of interesting editorials, some of which I solicited and some of which I received through the normal submission process.

    Craig Partridge has provided us with an editorial on the history of CCR. A very interesting read not only for the new members of our community, but for everyone. This issue features an editorial on the challenges that cognitive radio deployments are going to face, and a new network paradigm that could be very relevant in developing regions, named "lowest cost denominator networking." I am positive that each one of those editorials is bound to make you think. 

    Following my promise in January's editorial note, this issue is also bringing some industrial perspective to CCR. We have two editorial notes on standardization activities at the IETF, 3GPP, ITU, etc. I would like to sincerely thank the authors, since putting structure around such activities to report them in a concise form is not an easy task to say the least.

    Research in the area of networking has seen a tremendous increase in breadth in recent years. Our community is now studying core networking technologies, cellular networks, mobile systems, and networked applications. In addition, a large number of consumer electronics products are increasingly becoming connected, using wired or wireless technologies. Understanding the trends in the consumer electronics industry is bound to inform interesting related research in our field. With that in mind, I invited my colleagues in the Telefonica Video Unit and Telefonica Digital to submit their report on what they considered the highlights of the Consumer Electronics Show (CES) that took place in Las Vegas in January 2013. I hope that article inspires you towards novel directions.

    I am really pleased to see CCR growing! Please do not hesitate to contact me with comments, and suggestions!

  • Johann Schlamp, Georg Carle, Ernst W. Biersack

    The Border Gateway Protocol (BGP) was designed without security in mind. To this day, this fact makes the Internet vulnerable to hijacking attacks that intercept or blackhole Internet traffic. So far, significant effort has been put into the detection of IP prefix hijacking, while AS hijacking has received little attention. AS hijacking is more sophisticated than IP prefix hijacking, and is aimed at a long-term benefit, such as use over a duration of months. In this paper, we study a malicious case of AS hijacking, carried out in order to send spam from the victim's network. We thoroughly investigate this AS hijacking incident using live data from both the control and the data plane. Our analysis yields insights into how an attacker proceeded in order to covertly hijack a whole autonomous system, how he misled an upstream provider, and how he used unallocated address space. We further show that state-of-the-art techniques to prevent hijacking are not fully capable of dealing with this kind of attack. We also derive guidelines on how to conduct future forensic studies of AS hijacking. Our findings show that there is a need for preventive measures that would make it possible to anticipate AS hijacking, and we outline the design of an early warning system.

    Fabian E. Bustamante
  • Zhe Wu, Harsha V. Madhyastha

    To minimize user-perceived latencies, webservices are often deployed across multiple geographically distributed data centers. The premise of our work is that webservices deployed across multiple cloud infrastructure services can serve users from more data centers than is possible when using a single cloud service, and hence, offer lower latencies to users. In this paper, we conduct a comprehensive measurement study to understand the potential latency benefits of deploying webservices across three popular cloud infrastructure services - Amazon EC2, Google Compute Engine (GCE), and Microsoft Azure. We estimate that, as compared to deployments on one of these cloud services, users in up to half the IP address prefixes can have their RTTs reduced by over 20% when a webservice is deployed across the three cloud services. When we dig deeper to understand these latency benefits, we make three significant observations. First, when webservices shift from single-cloud to multi-cloud deployments, a significant fraction of prefixes will see latency benefits simply by being served from a different data center in the same location. This is because routing inefficiencies that exist between a prefix and a nearby data center in one cloud service are absent on the path from the prefix to a nearby data center in a different cloud service. Second, despite the latency improvements that a large fraction of prefixes will perceive, users in several locations (e.g., Argentina and Israel) will continue to incur RTTs greater than 100ms even when webservices span three large-scale cloud services (EC2, GCE, and Azure). Finally, we see that harnessing the latency benefits offered by multi-cloud deployments is likely to be challenging in practice; our measurements show that the data center which offers the lowest latency to a prefix often fluctuates between different cloud services, thus necessitating replication of data.

    Katerina Argyraki
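    The kind of comparison described above can be sketched as a small computation: for each prefix, compare the best RTT achievable within one baseline provider against the best RTT across all providers, and report the fraction of prefixes improved by more than a threshold. The data layout and toy numbers below are assumptions for illustration only.

    ```python
    def fraction_improved(rtts, baseline_provider, threshold=0.2):
        """Fraction of prefixes whose RTT to the nearest data center drops by more
        than `threshold` when moving from `baseline_provider` alone to a deployment
        spanning all providers present in `rtts`.

        rtts: dict mapping prefix -> provider -> list of RTTs (ms) to that
        provider's data centers.
        """
        improved = total = 0
        for prefix, by_provider in rtts.items():
            if baseline_provider not in by_provider:
                continue
            total += 1
            single = min(by_provider[baseline_provider])        # best RTT within one cloud
            multi = min(min(v) for v in by_provider.values())   # best RTT across all clouds
            if multi < (1.0 - threshold) * single:
                improved += 1
        return improved / total if total else 0.0

    # Toy example with made-up RTTs for two prefixes and three providers.
    rtts = {
        "192.0.2.0/24":    {"EC2": [80, 95], "GCE": [60], "Azure": [110]},
        "198.51.100.0/24": {"EC2": [40],     "GCE": [42], "Azure": [55]},
    }
    print(fraction_improved(rtts, "EC2"))   # 0.5: only the first prefix improves by >20%
    ```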
  • Yao Liang, Rui Liu

    We consider an important problem of wireless sensor network (WSN) routing topology inference/tomography from indirect measurements observed at the data sink. Previous studies on WSN topology tomography are restricted to static routing tree estimation, which is unrealistic in real-world WSN time-varying routing due to wireless channel dynamics. We study general WSN routing topology inference where routing structure is dynamic. We formulate the problem as a novel compressed sensing problem. We then devise a suite of decoding algorithms to recover the routing path of each aggregated measurement. Our approach is tested and evaluated through simulations with favorable results. WSN routing topology inference capability is essential for routing improvement, topology control, anomaly detection and load balancing to enable effective network management and optimized operations of deployed WSNs.

    Augustin Chaintreau
  • Davide Simoncelli, Maurizio Dusi, Francesco Gringoli, Saverio Niccolini

    Recent work in network measurements focuses on scaling the performance of monitoring platforms to 10Gb/s and beyond. Concurrently, the IT community focuses on scaling the analysis of big data over a cluster of nodes. So far, combinations of these approaches have targeted flexibility and usability over real-timeliness of results and efficient allocation of resources. In this paper we show how to meet both objectives with BlockMon, a network monitoring platform originally designed to work on a single node, which we extended to run distributed stream-data analytics tasks. We compare its performance against Storm and Apache S4, the state-of-the-art open-source stream-processing platforms, by implementing a phone call anomaly detection system and a Twitter trending algorithm: our enhanced BlockMon has a gain in performance of over 2.5x and 23x, respectively. Given the different nature of those applications and the performance of BlockMon as a single-node network monitor [1], we expect our results to hold for a broad range of applications, making distributed BlockMon a good candidate for the convergence of network-measurement and IT-analysis platforms.

    Konstantina Papagiannaki
  • Damien Saucez, Luigi Iannone, Benoit Donnet

    During the last decade, we have seen the rise of discussions regarding the emergence of a Future Internet. One of the proposed approaches leverages the separation of the identifier and the locator roles of IP addresses, leading to the LISP (Locator/Identifier Separation Protocol) protocol, currently under development at the IETF (Internet Engineering Task Force). Up to now, research on LISP has been rather theoretical, i.e., based on simulations/emulations often using Internet traffic traces. There is no work in the literature attempting to assess the state of its deployment and how this has evolved in recent years. This paper aims at bridging this gap by presenting a first measurement study on the existing worldwide LISP network (lisp4.net). Early results indicate that there is a steady growth of the LISP network but also that network manageability might receive a higher priority than performance in a large scale deployment.

    Sharad Agarwal