Computer Communication Review: Papers

  • Alberto Dainotti, Karyn Benson, Alistair King, kc claffy, Michael Kallitsis, Eduard Glatz, Xenofontas Dimitropoulos
    One challenge in understanding the evolution of Internet infrastructure is the lack of systematic mechanisms for monitoring the extent to which allocated IP addresses are actually used. Address utilization has been monitored via actively scanning the entire IPv4 address space. We evaluate the potential to leverage passive network traffic measurements in addition to or instead of active probing. Passive traffic measurements introduce no network traffic overhead, do not rely on unfiltered responses to probing, and could potentially apply to IPv6 as well. We investigate two challenges in using passive traffic for address utilization inference: the limited visibility of a single observation point; and the presence of spoofed IP addresses in packets that can distort results by implying faked addresses are active. We propose a methodology for removing such spoofed traffic on both darknets and live networks, which yields results comparable to inferences made from active probing. Our preliminary analysis reveals a number of promising findings, including novel insight into the usage of the IPv4 address space that would expand with additional vantage points.
    Renata Teixeira
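    A minimal sketch of the passive address-utilization idea in the entry above (my own illustration, not the authors' methodology): count the distinct /24 blocks seen as packet sources at a vantage point, after discarding sources that are trivially spoofed or unroutable. The crude filter below is an assumption; the paper proposes a far more careful technique for removing spoofed traffic.

```python
# Illustrative only: count "active" /24 blocks from passively observed
# source addresses, dropping trivially spoofed/unroutable sources.
import ipaddress

def active_slash24s(observed_sources):
    """observed_sources: iterable of source IPv4 addresses as strings."""
    active = set()
    for src in observed_sources:
        try:
            addr = ipaddress.IPv4Address(src)
        except ipaddress.AddressValueError:
            continue
        # Crude spoofing filter: private, reserved, loopback, link-local and
        # multicast sources should not appear as legitimate public senders.
        if (addr.is_private or addr.is_reserved or addr.is_loopback
                or addr.is_link_local or addr.is_multicast):
            continue
        active.add(ipaddress.ip_network(f"{addr}/24", strict=False))
    return active

# Prints 3: the RFC1918 source is discarded, the rest map to three /24s.
print(len(active_slash24s(["8.8.8.8", "8.8.4.4", "1.1.1.1", "10.0.0.1"])))
```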
  • Craig Partridge
    Some of the challenges of developing and maturing a future internet architecture (FIA) are described. Based on a talk given at the Conference on Future Internet Technologies 2013.
  • Marco Mellia
    Dr. Antonio Nucci is the chief technology officer of Narus and is responsible for setting the company’s direction with respect to technology and innovation. He oversees the entire technology innovation lifecycle, including incubation, research, and prototyping. He is also responsible for ensuring a smooth transition to engineering for final commercialization. Antonio has published more than 100 technical papers and has been awarded 38 U.S. patents. He authored a book, “Design, Measurement and Management of Large-Scale IP Networks: Bridging the Gap Between Theory and Practice”, in 2009 on advanced network analytics. In 2007 he was recognized for his vision and contributions with the prestigious Infoworld CTO Top 25 Award. In 2013, Antonio was honored by InfoSecurity Products Guide’s 2013 Global Excellence Awards as “CTO of the Year” [1] and Gold winner in the “People Shaping Info Security” category. He served as a technical lead member of the Enduring Security Framework (ESF) initiative sponsored by various U.S. agencies to produce a set of recommendations, policies, and technology pilots to better secure the Internet (Integrated Network Defense). He is also a technical advisor for several venture capital firms. Antonio holds a Ph.D. in computer science, and master’s and bachelor’s degrees.
  • Aaron Yi Ding, Jouni Korhonen, Teemu Savolainen, Markku Kojo, Joerg Ott, Sasu Tarkoma, Jon Crowcroft
    The participation of the network research community in the Internet Standards Development Organizations (SDOs) has been relatively low over the recent years, and this has drawn attention from both academics and industry due to its possible negative impact. The reasons for this gap are complex and extend beyond the purely technical. In this editorial we share our views on this challenge, based on the experience we have obtained from joint projects with universities and companies. We highlight the lessons learned, covering both successful and under-performing cases, and suggest viable approaches to bridge the gap between networking research and Internet standardization, aiming to promote and maximize the outcome of such collaborative endeavours.
  • Nick McKeown
    At every Sigcomm conference the corridors buzz with ideas about how to improve Sigcomm. It is a healthy sign that the premier conference in networking keeps debating how to reinvent and improve itself. In 2012 I got the chance to throw my hat into the ring; at the end of a talk I spent a few minutes describing why I think the Sigcomm conference should be greatly expanded. A few people encouraged me to write the ideas down.
    My high level goal is to enlarge the Sigcomm tent, welcoming in more researchers and more of our colleagues from industry. More researchers because our field has grown enormously in the last two decades, and Sigcomm has not adapted. I believe our small program limits the opportunities for our young researchers and graduate students to publish new ideas, and therefore we are holding back their careers. More colleagues from industry because too few industry thought-leaders are involved in Sigcomm. The academic field of networking has weak ties to the industry it serves, particularly when compared to other fields of systems research. Both sides lose out: there is very little transfer of ideas in either direction, and not enough vigorous debate about the directions networking should be heading.
  • Dina Papagiannaki
    Welcome to the October 2013 issue of ACM Computer Communication Review. This issue includes one technical peer-reviewed paper and three editorial notes. The topics include content distribution, SDN, and Internet Exchange Points (IXPs).
     
    One of my goals upon taking over as editor of CCR was to try to make it the place where we would publish fresh, novel ideas, but also where we could exchange perspectives and share lessons. This is the reason why for the past 9 months the editorial board and I have been working on what we call the "interview section" of CCR. This October issue carries our first interview note, captured by Prof. Joseph Camp, from SMU.
     
    Prof. Camp recently interviewed Dr. Ranveer Chandra, from MSR Redmond. The idea was to get Dr. Chandra's view on what has happened in white space networking since his best paper award at ACM SIGCOMM 2009. I find the resulting article very interesting. The amount of progress made in white space networking solutions, which has actually led to an operational deployment in Africa, is truly inspiring, and a clear testament to the amount of impact our community can have. I do sincerely hope that you will be as inspired as I was while reading it. This issue of CCR is also being published after ACM SIGCOMM in Hong Kong. SIGCOMM 2013 was marked by a number of records: 1) it has been the only SIGCOMM, at least that I remember, hit by a natural disaster, typhoon Utor, 2) it had two entire sessions postponed to the afternoon as a result (making it essentially dual track :)), and 3) it has had the highest acceptance rate since 1987, with 38 accepted papers.
     
    During their opening remarks the TPC chairs, Prof. Paul Barford, University of Wisconsin at Madison, and Prof. Srini Seshan, Carnegie Mellon University, presented two word clouds, which I found highly interesting. The first word cloud represents the most common words found in the titles of the submitted papers, and the second one the most common words in the titles of the accepted papers. Maybe they could form the input to a future editorial by someone in the community.
     
    As one can tell, Software Defined Networking (SDN) was one major topic in this year's conference. Interestingly, behavior, experience and privacy also appear boldly, confirming the belief of some in the community that SIGCOMM is indeed broadening its reach, covering a diverse set of topics that the Internet is touching in today's society.
     
    This year's SIGCOMM also featured an experiment. All sessions were scribed in real time and the notes were posted on the blog at layer9.org. You can find a lot of additional information on the papers and the questions asked on that site.
     
    Reaching the end of this note, I would like to welcome Prof. Sanjay Jha, from the University of New South Wales, in Sydney, Australia, to the editorial board. Prof. Jha brings expertise in a wide range of topics in networking, including wireless sensor networks, ad-hoc/community wireless networks, resilience and multicasting in IP networks and security protocols for wired/wireless networks. I hope you enjoy this issue, and its accompanying special issue on ACM SIGCOMM and the best papers of its associated workshops. I am always at your disposal in case of questions, suggestions, and comments.
  • Stefano Traverso, Mohamed Ahmed, Michele Garetto, Paolo Giaccone, Emilio Leonardi, Saverio Niccolini
    The dimensioning of caching systems represents a difficult task in the design of infrastructures for content distribution in the current Internet. This paper addresses the problem of defining a realistic arrival process for the content requests generated by users, due to its critical importance for both analytical and simulative evaluations of the performance of caching systems. First, with the aid of YouTube traces collected inside operational residential networks, we identify the characteristics of real traffic that need to be considered or can be safely neglected in order to accurately predict the performance of a cache. Second, we propose a new parsimonious traffic model, named the Shot Noise Model (SNM), that enables us to natively capture the dynamics of content popularity, whilst still being sufficiently simple to be employed effectively for both analytical and scalable simulative studies of caching systems. Finally, our results show that the SNM presents a much better solution to account for the temporal locality observed in real traffic compared to existing approaches.
    Augustin Chaintreau
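    To make the shot-noise idea concrete, here is a toy request generator in the same spirit (a simplification, not the paper's exact model): each content is a "shot" with a random arrival time, an exponentially distributed lifespan, and a Poisson number of requests spread over that lifespan. The rectangular popularity pulse and all parameter values are assumptions.

```python
# Toy shot-noise-style request trace generator (illustrative simplification).
import numpy as np

def generate_requests(n_contents, horizon, mean_lifespan, mean_volume, seed=0):
    rng = np.random.default_rng(seed)
    requests = []                                     # (timestamp, content_id)
    for cid in range(n_contents):
        t_on = rng.uniform(0.0, horizon)              # when the content appears
        lifespan = rng.exponential(mean_lifespan)     # how long it stays popular
        n_req = rng.poisson(rng.exponential(mean_volume))  # requests for this shot
        # Rectangular popularity pulse: requests spread uniformly over the lifespan.
        times = t_on + rng.uniform(0.0, lifespan, size=n_req)
        requests.extend((float(t), cid) for t in times)
    requests.sort()
    return requests

trace = generate_requests(n_contents=1000, horizon=10_000.0,
                          mean_lifespan=500.0, mean_volume=20.0)
print(len(trace), trace[0])
```

    A synthetic trace like this can be fed to a cache simulator (for example, the LRU sketch that appears further down this page) to study how temporal locality affects hit ratios.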
  • Jon Crowcroft, Markus Fidler, Klara Nahrstedt, Ralf Steinmetz
    Dagstuhl hosted a three-day seminar on the Future Internet on March 25-27, 2013. At the seminar, about 40 invited researchers from academia and industry discussed the promises, approaches, and open challenges of the Future Internet. This report gives a general overview of the presentations and outcomes of discussions of the seminar.
  • Nikolaos Chatzis, Georgios Smaragdakis, Anja Feldmann, Walter Willinger
    Internet eXchange Points (IXPs) are generally considered to be the successors of the four Network Access Points (NAPs) that were mandated as part of the decommissioning of the National Science Foundation Network (NSFNET) in 1994/95 to facilitate the transition from the NSFNET to the “public Internet” as we know it today. While this popular view does not tell the whole story behind the early beginnings of IXPs, what is true is that since around 1994, the number of operational IXPs worldwide has grown to more than 300 (as of May 2013), with the largest IXPs handling daily traffic volumes comparable to those carried by the largest Tier-1 ISPs. However, IXPs have never really attracted much attention from the networking research community. At first glance, this lack of interest seems understandable as IXPs have apparently little to do with current “hot” topic areas such as data centers and cloud services or Software Defined Networking (SDN) and mobile communication. However, we argue in this article that, in fact, IXPs are all about data centers and cloud services and even SDN and mobile communication and should be of great interest to networking researchers interested in understanding the current and future Internet ecosystem. To this end, we survey the existing but largely fragmented sources of publicly available information about IXPs to describe their basic technical and operational aspects and highlight the critical differences among the various IXPs in the different regions of the world, especially in Europe and North America. More importantly, we illustrate the important role that IXPs play in today’s Internet ecosystem and discuss how IXP-driven innovation in Europe is shaping and redefining the Internet marketplace, not only in Europe but increasingly so around the world.
  • Joseph D. Camp

    Ranveer Chandra is a Senior Researcher in the Mobility & Networking Research Group at Microsoft Research. His research is focused on mobile devices, with particular emphasis on wireless communications and energy efficiency. Ranveer is leading the white space networking project at Microsoft Research. He was invited to the FCC to present his work, and spectrum regulators from India, China, Brazil, Singapore and the US (including the FCC chairman) have visited the Microsoft campus to see his deployment of the world's first urban white space network. The following interview captures the essence of his work on white spaces by focusing on his work published in ACM SIGCOMM 2009, which received the Best Paper Award.

  • Dina Papagiannaki
    It is hard to believe it is already July. July marks a few milestones: i) schools are over, ii) most of the paper submission deadlines for the year are behind us, iii) a lot, but not all, of the reviewing duty has been accomplished. July also marks another milestone for CCR and myself: the longest CCR issue I have had the pleasure to edit this year! This issue of CCR features 15 papers in total: 7 technical papers, and 8 editorials. The technical papers cover the areas of Internet measurement, routing, privacy, content delivery, as well as data center networks.
     
    The editorial zone features the report on the Workshop on Internet Economics and the report on the Workshop on Active Internet Measurements, both of which took place in early 2013. It also features position papers on empirical Internet measurement, community networks, the use of recommendation engines in content delivery, and censorship on the Internet. I found every single one of them thought provoking, and with the potential to initiate discussion in our community. Finally, we have two slightly more unusual editorial notes. The first one describes the experience of the CoNEXT 2012 Internet chairs, and the way they found to enable flawless connectivity despite only having access to residential-grade equipment. The second one focuses on the critical attitude that we often display as a community in our major conferences, in particular SIGCOMM, and suggests a number of directions conference organizers could take.
     
    This last editorial has made me think a little more about my experience as an author, reviewer, and TPC member in the past 15 years of my career. It quotes Jeffrey Naughton's keynote at ICDE 2010 and his statement about the Computer Science community: "Funding agencies believe us when we say we suck."
     
    Being on a TPC, one actually realizes that criticism is something that we naturally do as a community, and that it is not personal. Being an author who has never been on a TPC, however, makes this process feel far more personal. I still remember the days when each one of my papers was prepared to what I considered perfection and sent into the "abyss", sometimes with a positive and other times with a negative response. I also remember the disappointment of my first rejection. Some perspective on the process could possibly be of interest.
  • Thomas Callahan, Mark Allman, Michael Rabinovich

    The Internet crucially depends on the Domain Name System (DNS) to both allow users to interact with the system in human-friendly terms and also increasingly as a way to direct traffic to the best content replicas at the instant the content is requested. This paper is an initial study into the behavior and properties of the modern DNS system. We passively monitor DNS and related traffic within a residential network in an effort to understand server behavior--as viewed through DNS responses--and client behavior--as viewed through both DNS requests and traffic that follows DNS responses. We present an initial set of wide-ranging findings.

    Sharad Agarwal
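    The kind of passive DNS-to-traffic association described above can be sketched as follows, assuming the DNS answers and flow starts have already been parsed into simple tuples; the record formats and the TTL handling below are hypothetical, not the authors' pipeline.

```python
# Sketch: label each flow with the most recent DNS answer that resolved its
# destination address (illustrative of the analysis, record formats made up).
# dns_answers: list of (timestamp, qname, resolved_ip, ttl)
# flows:       list of (timestamp, dst_ip)
from collections import defaultdict

def label_flows(dns_answers, flows, respect_ttl=True):
    answers_by_ip = defaultdict(list)            # ip -> [(ts, qname, ttl), ...]
    for ts, qname, ip, ttl in sorted(dns_answers):
        answers_by_ip[ip].append((ts, qname, ttl))

    labeled = []
    for ts, dst in sorted(flows):
        name = None
        for ans_ts, qname, ttl in reversed(answers_by_ip.get(dst, [])):
            if ans_ts <= ts and (not respect_ttl or ts <= ans_ts + ttl):
                name = qname                      # most recent matching answer
                break
        labeled.append((ts, dst, name))           # name is None if unmatched
    return labeled

example = label_flows(
    dns_answers=[(10.0, "example.com", "93.184.216.34", 300)],
    flows=[(12.5, "93.184.216.34"), (400.0, "93.184.216.34")])
print(example)   # the second flow falls outside the TTL window -> unlabeled
```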
  • Akmal Khan, Hyun-chul Kim, Taekyoung Kwon, Yanghee Choi

    The IRR is a set of globally distributed databases with which ASes can register their routing and address-related information. It is often believed that the quality of the IRR data is not reliable since there are few economic incentives for the ASes to register and update their routing information in a timely manner. To validate these negative beliefs, we carry out a comprehensive analysis of (IP prefix, origin AS) pairs in BGP against the corresponding information registered with the IRR, and vice versa. Considering BGP and IRR practices, we propose a methodology to match the (IP prefix, origin AS) pairs between the IRR and BGP. We observe that the practice of registering IP prefixes and origin ASes with the IRR is prevalent. However, the quality of the IRR data can vary substantially depending on routing registries, regional Internet registries (to which ASes belong), and AS types. We argue that the IRR can help improve the security level of BGP routing by making BGP routers selectively rely on the corresponding IRR data, taking these observations into account.

    Bhaskaran Raman
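    A much simplified version of the (IP prefix, origin AS) matching described above: compare BGP pairs against IRR route objects and classify each as an exact match, covered by a less-specific route object with the same origin, registered with a different origin, or not registered at all. The example prefixes and AS numbers are documentation values; the paper's actual methodology handles many more cases.

```python
# Simplified IRR-vs-BGP (prefix, origin AS) comparison (a sketch only).
import ipaddress

def classify(bgp_pairs, irr_routes):
    """bgp_pairs: iterable of (prefix_str, origin_asn) seen in BGP;
       irr_routes: iterable of (prefix_str, origin_asn) from IRR route objects."""
    irr = [(ipaddress.ip_network(p), asn) for p, asn in irr_routes]
    results = {}
    for prefix_str, origin in bgp_pairs:
        prefix = ipaddress.ip_network(prefix_str)
        exact = [asn for net, asn in irr if net == prefix]
        covering = [asn for net, asn in irr if prefix.subnet_of(net)]
        if origin in exact:
            verdict = "exact match"
        elif origin in covering:
            verdict = "covered by less specific, same origin"
        elif exact or covering:
            verdict = "registered, origin mismatch"
        else:
            verdict = "not registered"
        results[(prefix_str, origin)] = verdict
    return results

print(classify(bgp_pairs=[("192.0.2.0/25", 64500), ("198.51.100.0/24", 64501)],
               irr_routes=[("192.0.2.0/24", 64500)]))
```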
  • Abdelberi Chaabane, Emiliano De Cristofaro, Mohamed Ali Kaafar, Ersin Uzun

    As the Internet struggles to cope with scalability, mobility, and security issues, new network architectures are being proposed to better accommodate the needs of modern systems and applications. In particular, Content-Oriented Networking (CON) has emerged as a promising next-generation Internet architecture: it sets out to decouple content from hosts, at the network layer, by naming data rather than hosts. CON comes with a potential for a wide range of benefits, including reduced congestion and improved delivery speed by means of content caching, simpler configuration of network devices, and security at the data level. However, it remains an interesting open question whether or not, and to what extent, this emerging networking paradigm bears new privacy challenges. In this paper, we provide a systematic privacy analysis of CON and the common building blocks among its various architectural instances in order to highlight emerging privacy threats, and analyze a few potential countermeasures. Finally, we present a comparison between CON and today's Internet in the context of a few privacy concepts, such as anonymity, censoring, traceability, and confidentiality.

    Augustin Chaintreau
  • Benjamin Frank, Ingmar Poese, Yin Lin, Georgios Smaragdakis, Anja Feldmann, Bruce Maggs, Jannis Rake, Steve Uhlig, Rick Weber

    Today a spectrum of solutions is available for distributing content over the Internet, ranging from commercial CDNs to ISP-operated CDNs to content-provider-operated CDNs to peer-to-peer CDNs. Some deploy servers in just a few large data centers while others deploy in thousands of locations or even on millions of desktops. Recently, major CDNs have formed strategic alliances with large ISPs to provide content delivery network solutions. Such alliances show the natural evolution of content delivery today, driven by the need to address scalability issues and to take advantage of new technology and business opportunities. In this paper we revisit the design and operating space of CDN-ISP collaboration in light of recent ISP and CDN alliances. We identify two key enablers for supporting collaboration and improving content delivery performance: informed end-user to server assignment and in-network server allocation. We report on the design and evaluation of a prototype system, NetPaaS, that materializes them. Relying on traces from the largest commercial CDN and a large tier-1 ISP, we show that NetPaaS is able to increase CDN capacity on-demand, enable coordination, reduce download time, and achieve multiple traffic engineering goals leading to a win-win situation for both ISP and CDN.

    Fabián E. Bustamante
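    A toy illustration of "informed end-user to server assignment", one of the two enablers mentioned above: the ISP ranks candidate servers by a network cost that only it knows, and the CDN then picks the best-ranked server that still has spare capacity. The interfaces and data below are hypothetical; NetPaaS's actual protocol and algorithms are not shown here.

```python
# Toy ISP-informed user-to-server assignment (conceptual sketch only).
def isp_rank(client_prefix, candidates, network_cost):
    """network_cost: {(client_prefix, server): path cost known only to the ISP}."""
    return sorted(candidates, key=lambda s: network_cost[(client_prefix, s)])

def cdn_assign(client_prefix, candidates, network_cost, load, capacity):
    for server in isp_rank(client_prefix, candidates, network_cost):
        if load.get(server, 0) < capacity[server]:   # respect CDN-side capacity
            load[server] = load.get(server, 0) + 1
            return server
    return None                                      # all candidates saturated

cost = {("192.0.2.0/24", "pop-A"): 5, ("192.0.2.0/24", "pop-B"): 12}
load, capacity = {}, {"pop-A": 1, "pop-B": 10}
print(cdn_assign("192.0.2.0/24", ["pop-A", "pop-B"], cost, load, capacity))  # pop-A
print(cdn_assign("192.0.2.0/24", ["pop-A", "pop-B"], cost, load, capacity))  # pop-B
```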
  • Simone Basso, Michela Meo, Juan Carlos De Martin

    Network users know much less than ISPs, Internet exchanges and content providers about what happens inside the network. Consequently, users can neither easily detect network neutrality violations nor readily exercise their market power by knowledgeably switching ISPs. This paper contributes to the ongoing efforts to empower users by proposing two models to estimate -- via application-level measurements -- a key network indicator, i.e., the packet loss rate (PLR) experienced by FTP-like TCP downloads. Controlled, testbed, and large-scale experiments show that the Inverse Mathis model is simpler and more consistent across the whole PLR range, but less accurate than the more advanced Likely Rexmit model for landline connections and moderate PLR.

    Nikolaos Laoutaris
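    The "Inverse Mathis" idea can be sketched by inverting the classic Mathis et al. TCP throughput formula, B ≈ (MSS/RTT)·C/√p, to estimate the loss rate p from application-level goodput and RTT. The constant C and the absence of any retransmission accounting make this a rough sketch of the approach rather than the paper's actual models.

```python
# Back-of-the-envelope packet-loss-rate estimate obtained by inverting the
# Mathis et al. throughput formula B ~ (MSS/RTT) * C/sqrt(p). Sketch only.
import math

def inverse_mathis_plr(goodput_bps, rtt_s, mss_bytes=1460, c=math.sqrt(1.5)):
    """Estimate the packet loss rate p from measured goodput and RTT."""
    segments_per_s = goodput_bps / (8.0 * mss_bytes)   # throughput in segments/s
    p = (c / (rtt_s * segments_per_s)) ** 2
    return min(p, 1.0)

# Example: a 10 Mbit/s download over a 50 ms RTT path.
print(f"estimated PLR ~ {inverse_mathis_plr(10e6, 0.050):.2e}")
```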
  • Howard Wang, Yiting Xia, Keren Bergman, T.S. Eugene Ng, Sambit Sahu, Kunwadee Sripanidkulchai

    Not only do big data applications impose heavy bandwidth demands, they also have diverse communication patterns (denoted as *-cast) that mix together unicast, multicast, incast, and all-to-all-cast. Effectively supporting such traffic demands remains an open problem in data center networking. We propose an unconventional approach that leverages physical layer photonic technologies to build custom communication devices for accelerating each *-cast pattern, and integrates such devices into an application-driven, dynamically configurable photonics accelerated data center network. We present preliminary results from a multicast case study to highlight the potential benefits of this approach.

    Hitesh Ballani
  • Giuseppe Bianchi, Andrea Detti, Alberto Caponi, Nicola Blefari Melazzi

    In some network and application scenarios, it is useful to cache content in network nodes on the fly, at line rate. Resilience of in-network caches can be improved by guaranteeing that all content stored therein is valid. Digital signatures could indeed be used to verify content integrity and provenance. However, their operation may be much slower than the line rate, thus limiting caching of cryptographically verified objects to a small subset of the forwarded ones. How does this affect caching performance? To answer this question, we devise a simple analytical approach which permits us to assess the performance of an LRU caching strategy storing a randomly sampled subset of requests. A key feature of our model is the ability to handle traffic beyond the traditional Independent Reference Model, thus permitting us to understand how performance varies under different temporal locality conditions. Results, also verified on real world traces, show that content integrity verification does not necessarily bring about a performance penalty; rather, in some specific (but practical) conditions, performance may even improve.

    Sharad Agarwal
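    The sampled-insertion LRU analysed above can be mimicked with a few lines of simulation: the cache only admits objects it has been able to verify, modelled here as an admission probability, while every request still counts towards the hit ratio. The Zipf workload and all parameters are arbitrary choices for illustration, not the paper's analytical model.

```python
# LRU cache that admits only a (verifiable) random sample of objects.
import random
from collections import OrderedDict

class SampledLRU:
    def __init__(self, capacity, admit_prob, seed=0):
        self.capacity, self.admit_prob = capacity, admit_prob
        self.store = OrderedDict()
        self.rng = random.Random(seed)
        self.hits = self.requests = 0

    def request(self, key):
        self.requests += 1
        if key in self.store:
            self.store.move_to_end(key)          # refresh LRU position
            self.hits += 1
            return True
        if self.rng.random() < self.admit_prob:  # only verified objects enter
            self.store[key] = True
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)   # evict least recently used
        return False

def zipf_trace(n_requests, catalogue, alpha=0.8, seed=1):
    rng = random.Random(seed)
    weights = [1.0 / (rank ** alpha) for rank in range(1, catalogue + 1)]
    return rng.choices(range(catalogue), weights=weights, k=n_requests)

for admit in (1.0, 0.1):
    cache = SampledLRU(capacity=100, admit_prob=admit)
    for item in zipf_trace(50_000, catalogue=10_000):
        cache.request(item)
    print(f"admit_prob={admit}: hit ratio = {cache.hits / cache.requests:.3f}")
```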
  • Bart Braem, Chris Blondia, Christoph Barz, Henning Rogge, Felix Freitag, Leandro Navarro, Joseph Bonicioli, Stavros Papathanasiou, Pau Escrich, Roger Baig Viñas, Aaron L. Kaplan, Axel Neumann, Ivan Vilata i Balaguer, Blaine Tatum, Malcolm Matson

    Community Networks are large scale, self-organized and decentralized networks, built and operated by citizens for citizens. In this paper, we make a case for research on and with community networks, while explaining the relation to Community-Lab. The latter is an open, distributed infrastructure for researchers to experiment with community networks. The goal of Community-Lab is to advance research and empower society by understanding and removing obstacles for these networks and services.

  • Mohamed Ali Kaafar, Shlomo Berkovsky, Benoit Donnet

    During the last decade, we have witnessed a substantial change in content delivery networks (CDNs) and user access paradigms. Whereas users previously consumed content from a central server through their personal computers, nowadays they can reach a wide variety of repositories from virtually everywhere using mobile devices. This results in a considerable time-, location-, and event-based volatility of content popularity. In such a context, it is imperative for CDNs to put in place adaptive content management strategies, thus improving the quality of service provided to users and decreasing the costs. In this paper, we introduce predictive content distribution strategies inspired by methods developed in the Recommender Systems area. Specifically, we outline different content placement strategies based on the observed user consumption patterns, and advocate their applicability in state-of-the-art CDNs.
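    One simple way to sketch the recommender-driven placement advocated above: score items not yet requested at an edge location by the consumption observed at similar locations (plain cosine similarity here, just one of many possible techniques) and prefetch the top-scoring items. All location names and numbers below are made up.

```python
# Recommender-style content placement sketch (illustrative only).
import math

def cosine(u, v):
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in keys)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def placement(consumption, location, top_n=2):
    """consumption: {location: {content_id: request_count}}."""
    target = consumption[location]
    scores = {}
    for other, profile in consumption.items():
        if other == location:
            continue
        sim = cosine(target, profile)
        for content, count in profile.items():
            if content not in target:            # only items not requested here yet
                scores[content] = scores.get(content, 0.0) + sim * count
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

demand = {
    "madrid": {"video-a": 40, "video-b": 10},
    "lisbon": {"video-a": 35, "video-c": 20},
    "oslo":   {"video-d": 50},
}
print(placement(demand, "madrid"))   # 'video-c' ranks first (Lisbon is most similar)
```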

  • Mark Allman
  • Sam Burnett, Nick Feamster

    Free and open access to information on the Internet is at risk: more than 60 countries around the world practice some form of Internet censorship, and both the number of countries practicing censorship and the proportion of Internet users who are subject to it are likely to increase. We posit that, although it may not always be feasible to guarantee free and open access to information, citizens have the right to know when their access has been obstructed, restricted, or tampered with, so that they can make informed decisions on information access. We motivate the need for a system that provides accurate, verifiable reports of censorship and discuss the challenges involved in designing such a system. We place these challenges in context by studying their applicability to OONI, a new censorship measurement platform.

  • Jeffrey C. Mogul

    Many people in CS in general, and SIGCOMM in particular, have expressed concerns about an increasingly "hypercritical" approach to reviewing, which can block or discourage the publication of innovative research. The SIGCOMM Technical Steering Committee (TSC) has been addressing this issue, with the goal of encouraging cultural change without undermining the integrity of peer review. Based on my experience as an author, PC member, TSC member, and occasional PC chair, I examine possible causes for hypercritical reviewing, and offer some advice for PC chairs, reviewers, and authors. My focus is on improving existing publication cultures and peer review processes, rather than on proposing radical changes.

  • kc claffy, David Clark

    On December 12-13, 2012, CAIDA and the Massachusetts Institute of Technology (MIT) hosted the (invitation-only) 3rd interdisciplinary Workshop on Internet Economics (WIE) at the University of California's San Diego Supercomputer Center. The goal of this workshop series is to provide a forum for researchers, commercial Internet facilities and service providers, technologists, economists, theorists, policy makers, and other stakeholders to empirically inform current and emerging regulatory and policy debates. The theme for this year's workshop was "Definitions and Data". This report describes the discussions and presents relevant open research questions identified by participants. Slides presented at the workshop and a copy of this final report are available at [2].

  • kc claffy

    On February 6-8, 2013, CAIDA hosted the fifth Workshop on Active Internet Measurements (AIMS-5) as part of our series of Internet Statistics and Metrics Analysis (ISMA) workshops. As with previous AIMS workshops, the goals were to further our understanding of the potential and limitations of active measurement research and infrastructure in the wide-area Internet, and to promote cooperative solutions and coordinated strategies to address future data needs of the network and security operations and research communities. The workshop focus this year was on creating, managing, and analyzing annotations of large longitudinal active Internet measurement data sets. Due to popular demand, we also dedicated half a day to large-scale active measurement (performance/topology) from mobile/cellular devices. This report describes topics discussed at this year's workshop. Materials related to the workshop are available at http://www.caida.org/workshops/.

  • Richard Charles Gass, Damien Saucez

    The ACM 8th international conference on emerging Networking EXperiments and Technologies (CoNEXT) was organized in a lovely hotel in the south of France. Although it was in an excellent location in the city center of Nice with views to the sea, it suffered from poor Internet connectivity. In this paper we describe what happened to the network at CoNEXT and explain why Internet connectivity is usually a problem at small hotel venues. Next we highlight the usual issues with the network equipment that lead to the general network dissatisfaction of conference attendees. Finally we describe how we alleviated the problem by offloading network services and all network traffic into the cloud while supporting over 100 simultaneously connected devices on a single ADSL link with a device that is rated to support only around 15-20. Our experience shows that with simple offloading of certain network services, small conference venues with limited budget no longer have to be plagued by the usual factors that lead to an unsatisfactory Internet connectivity experience.

  • Dina Papagiannaki

    Here is my second issue of CCR, and I am really happy to see that a lot of the things I wrote in my previous editorial are happening or are on their way! Thanks to the wonderful editorial team this issue has five technical papers, while some of the area editors have started contacting prominent members of our community to obtain their retrospective on their past work. In parallel, I have been really fortunate to receive a number of interesting editorials, some of which I solicited and some of which I received through the normal submission process.

    Craig Partridge has provided us with an editorial on the history of CCR. A very interesting read not only for the new members of our community, but for everyone. This issue features an editorial on the challenges that cognitive radio deployments are going to face, and a new network paradigm that could be very relevant in developing regions, named "lowest cost denominator networking." I am positive that each one of those editorials is bound to make you think. 

    Following my promise in January's editorial note, this issue is also bringing some industrial perspective to CCR. We have two editorial notes on standardization activities at the IETF, 3GPP, ITU, etc. I would like to sincerely thank the authors, since putting structure around such activities to report them in a concise form is not an easy task to say the least.

    Research in the area of networking has seen a tremendous increase in breadth in recent years. Our community is now studying core networking technologies, cellular networks, mobile systems, and networked applications. In addition, a large number of consumer electronics products are increasingly becoming connected, using wired or wireless technologies. Understanding the trends in the consumer electronics industry is bound to inform interesting related research in our field. With that in mind, I invited my colleagues in the Telefonica Video Unit and Telefonica Digital to submit their report on what they considered the highlights of the Consumer Electronics Show (CES) that took place in Las Vegas in January 2013. I hope that article inspires you towards novel directions.

    I am really pleased to see CCR growing! Please do not hesitate to contact me with comments, and suggestions!

  • Johann Schlamp, Georg Carle, Ernst W. Biersack

    The Border Gateway Protocol (BGP) was designed without security in mind. To this day, this fact makes the Internet vulnerable to hijacking attacks that intercept or blackhole Internet traffic. So far, significant effort has been put into the detection of IP prefix hijacking, while AS hijacking has received little attention. AS hijacking is more sophisticated than IP prefix hijacking, and is aimed at a long-term benefit, for instance over a duration of months. In this paper, we study a malicious case of AS hijacking, carried out in order to send spam from the victim's network. We thoroughly investigate this AS hijacking incident using live data from both the control and the data plane. Our analysis yields insights into how an attacker proceeded in order to covertly hijack a whole autonomous system, how he misled an upstream provider, and how he used unallocated address space. We further show that state-of-the-art techniques to prevent hijacking are not fully capable of dealing with this kind of attack. We also derive guidelines on how to conduct future forensic studies of AS hijacking. Our findings show that there is a need for preventive measures that would make it possible to anticipate AS hijacking, and we outline the design of an early warning system.

    Fabian E. Bustamante
  • Zhe Wu, Harsha V. Madhyastha

    To minimize user-perceived latencies, webservices are often deployed across multiple geographically distributed data centers. The premise of our work is that webservices deployed across multiple cloud infrastructure services can serve users from more data centers than is possible when using a single cloud service, and hence, offer lower latencies to users. In this paper, we conduct a comprehensive measurement study to understand the potential latency benefits of deploying webservices across three popular cloud infrastructure services - Amazon EC2, Google Compute Engine (GCE), and Microsoft Azure. We estimate that, as compared to deployments on one of these cloud services, users in up to half the IP address prefixes can have their RTTs reduced by over 20% when a webservice is deployed across the three cloud services. When we dig deeper to understand these latency benefits, we make three significant observations. First, when webservices shift from single-cloud to multi-cloud deployments, a significant fraction of prefixes will see latency benefits simply by being served from a different data center in the same location. This is because routing inefficiencies that exist between a prefix and a nearby data center in one cloud service are absent on the path from the prefix to a nearby data center in a different cloud service. Second, despite the latency improvements that a large fraction of prefixes will perceive, users in several locations (e.g., Argentina and Israel) will continue to incur RTTs greater than 100ms even when webservices span three large-scale cloud services (EC2, GCE, and Azure). Finally, we see that harnessing the latency benefits offered by multi-cloud deployments is likely to be challenging in practice; our measurements show that the data center which offers the lowest latency to a prefix often fluctuates between different cloud services, thus necessitating replication of data.

    Katerina Argyraki
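    The headline comparison in the study above can be expressed compactly: for each candidate single-cloud deployment, compute the fraction of prefixes whose RTT would drop by more than 20% if the service were instead deployed across all clouds and each prefix were served from its overall closest data center. The prefix and RTT values below are invented.

```python
# Sketch of the single-cloud vs. multi-cloud latency comparison (made-up data).
def multicloud_benefit(rtts, threshold=0.20):
    """rtts: {prefix: {cloud_name: best RTT (ms) to that cloud's closest DC}}"""
    clouds = sorted({c for per_cloud in rtts.values() for c in per_cloud})
    fractions = {}
    for cloud in clouds:
        improved = 0
        for per_cloud in rtts.values():
            single = per_cloud[cloud]            # stay entirely on this cloud
            multi = min(per_cloud.values())      # best DC across all clouds
            if (single - multi) / single > threshold:
                improved += 1
        fractions[cloud] = improved / len(rtts)
    return fractions

sample = {
    "192.0.2.0/24":    {"EC2": 120.0, "GCE": 80.0, "Azure": 150.0},
    "198.51.100.0/24": {"EC2": 40.0,  "GCE": 45.0, "Azure": 42.0},
}
print(multicloud_benefit(sample))
# {'Azure': 0.5, 'EC2': 0.5, 'GCE': 0.0}
```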
  • Yao Liang, Rui Liu

    We consider an important problem of wireless sensor network (WSN) routing topology inference/tomography from indirect measurements observed at the data sink. Previous studies on WSN topology tomography are restricted to static routing tree estimation, which is unrealistic in real-world WSNs with time-varying routing due to wireless channel dynamics. We study general WSN routing topology inference where the routing structure is dynamic. We formulate the problem as a novel compressed sensing problem. We then devise a suite of decoding algorithms to recover the routing path of each aggregated measurement. Our approach is tested and evaluated through simulations with favorable results. WSN routing topology inference capability is essential for routing improvement, topology control, anomaly detection and load balancing to enable effective network management and optimized operations of deployed WSNs.

    Augustin Chaintreau
  • Davide Simoncelli, Maurizio Dusi, Francesco Gringoli, Saverio Niccolini

    Recent work in network measurements focuses on scaling the performance of monitoring platforms to 10Gb/s and beyond. Concurrently, the IT community focuses on scaling the analysis of big data over a cluster of nodes. So far, combinations of these approaches have targeted flexibility and usability over real-timeliness of results and efficient allocation of resources. In this paper we show how to meet both objectives with BlockMon, a network monitoring platform originally designed to work on a single node, which we extended to run distributed stream-data analytics tasks. We compare its performance against Storm and Apache S4, the state-of-the-art open-source stream-processing platforms, by implementing a phone call anomaly detection system and a Twitter trending algorithm: our enhanced BlockMon has a gain in performance of over 2.5x and 23x, respectively. Given the different nature of those applications and the performance of BlockMon as a single-node network monitor [1], we expect our results to hold for a broad range of applications, making distributed BlockMon a good candidate for the convergence of network-measurement and IT-analysis platforms.

    Konstantina Papagiannaki
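    The Twitter-trending benchmark mentioned above can be approximated by a simple sliding-window counter like the one below; this only illustrates the kind of stream-analytics workload involved, not BlockMon's implementation or its distributed machinery.

```python
# Simple sliding-window "trending topics" counter (workload illustration only).
from collections import Counter, deque

class TrendingWindow:
    def __init__(self, window_seconds=300):
        self.window = window_seconds
        self.events = deque()          # (timestamp, hashtag)
        self.counts = Counter()

    def observe(self, timestamp, hashtag):
        self.events.append((timestamp, hashtag))
        self.counts[hashtag] += 1
        # Expire events that fell out of the window.
        while self.events and self.events[0][0] < timestamp - self.window:
            _, old = self.events.popleft()
            self.counts[old] -= 1
            if self.counts[old] == 0:
                del self.counts[old]

    def top(self, k=3):
        return self.counts.most_common(k)

tw = TrendingWindow(window_seconds=60)
for ts, tag in [(0, "#sigcomm"), (10, "#ccr"), (20, "#sigcomm"), (90, "#conext")]:
    tw.observe(ts, tag)
print(tw.top())   # the earlier hashtags have expired by t=90
```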
  • Damien Saucez, Luigi Iannone, Benoit Donnet

    During the last decade, we have seen the rise of discussions regarding the emergence of a Future Internet. One of the proposed approaches leverages the separation of the identifier and locator roles of IP addresses, leading to the LISP (Locator/Identifier Separation Protocol) protocol, currently under development at the IETF (Internet Engineering Task Force). Up to now, research on LISP has been rather theoretical, i.e., based on simulations/emulations often using Internet traffic traces. There is no work in the literature attempting to assess the state of its deployment and how this has evolved in recent years. This paper aims at bridging this gap by presenting a first measurement study of the existing worldwide LISP network (lisp4.net). Early results indicate that there is a steady growth of the LISP network, but also that network manageability might receive a higher priority than performance in a large scale deployment.

    Sharad Agarwal
  • Konstantinos Pelechrinis, Prashant Krishnamurthy, Martin Weiss, Taieb Znati

    A large volume of research has been conducted in the cognitive radio (CR) area over the last decade. However, the deployment of a commercial CR network is yet to emerge. A large portion of the existing literature does not build on real-world scenarios, hence neglecting various important aspects of commercial telecommunication networks. For instance, a lot of attention has been paid to spectrum sensing as the front line functionality that needs to be completed in an efficient and accurate manner to enable an opportunistic CR network architecture. While on the one hand it is necessary to detect the existence of spectrum holes, on the other hand, simply sensing (cooperatively or not) the energy emitted from a primary transmitter cannot enable correct dynamic spectrum access. For example, the presence of a primary transmitter's signal does not mean that CR network users cannot access the spectrum, since there might not be any primary receiver in the vicinity. Despite the existing solutions to the DSA problem, no robust, implementable scheme has emerged. The assumptions that these schemes are built upon do not always hold in realistic wireless environments. Specific settings are assumed, which differ significantly from how existing telecommunication networks work. In this paper, we challenge the basic premises of the proposed schemes. We further argue that addressing the technical challenges we face in deploying robust CR networks can only be achieved if we radically change the way we design their basic functionalities. In support of our argument, we present a set of real-world scenarios, inspired by realistic settings in commercial telecommunications networks, namely TV and cellular, focusing on spectrum sensing as a basic and critical functionality in the deployment of CRs. We use these scenarios to show why existing DSA paradigms are not amenable to realistic deployment in complex wireless environments. The proposed study extends beyond cognitive radio networks, and further highlights the often existing gap between research and commercialization, paving the way to new thinking about how to accelerate commercialization and adoption of new networking technologies and services.
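
    A basic energy detector, the sensing primitive discussed above, simply compares the average received energy over a window against a threshold, as in the sketch below (the signal model and threshold are arbitrary). The editorial's point is precisely that this decision alone does not tell a cognitive radio whether transmitting would actually harm a nearby primary receiver.

```python
# Basic energy detection for spectrum sensing (toy signal model and threshold).
import numpy as np

def energy_detect(samples, threshold):
    """Declare the channel occupied if mean energy exceeds the threshold."""
    return float(np.mean(np.abs(samples) ** 2)) > threshold

rng = np.random.default_rng(0)
n = 1024
noise = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)   # unit-power noise
primary = 0.7 * np.exp(2j * np.pi * 0.1 * np.arange(n))               # weak primary tone

print(energy_detect(noise, threshold=1.2))            # idle channel -> likely False
print(energy_detect(noise + primary, threshold=1.2))  # primary present -> likely True
```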

  • Arjuna Sathiaseelan, Jon Crowcroft

    "The Internet is for everyone," claims Vint Cerf, the father of the Internet, in RFC 3271. The Internet Society's recent global Internet survey reveals that the Internet should be considered a basic human birthright. We strongly agree and believe that basic access to the Internet should be made free, at least to access essential services. However, the current Internet access model, which is governed by market economics, makes universal access practically infeasible, especially for those facing socio-economic barriers. We see enabling benevolence in the Internet (the act of sharing resources) as a potential solution to the problem of digital exclusion caused by socio-economic barriers. In this paper, we propose LCD-Net: Lowest Cost Denominator Networking, a new Internet paradigm that architects multi-layer resource pooling Internet technologies to support benevolence in the Internet. LCD-Net proposes to bring together several existing resource pooling Internet technologies to ensure that users and network operators who share their resources are not affected and at the same time are incentivised for sharing. The paper also emphasizes the need to identify and extend the stakeholder value chain to ensure such benevolent access to the Internet is sustainable.

  • Marcelo Bagnulo, Philip Eardley, Trevor Burbridge, Brian Trammell, Rolf Winter

    Over the last few years, we have witnessed the deployment of large measurement platforms that enable measurements from many vantage points. Examples of these platforms include SamKnows and RIPE ATLAS. All told, there are tens of thousands of measurement agents. Most of these measurement agents are located in the end-user premises; these can run measurements against other agents located in strategic locations, according to the measurements to be performed. Thanks to the large number of measurement agents, these platforms can provide data about key network performance indicators from the end-user perspective. This data is useful to network operators to improve their operations, as well as to regulators and to end users themselves. Currently deployed platforms use proprietary protocols to exchange information between the different parts. As these platforms grow to become an important tool to understand network performance, it is important to standardize the protocols between the different elements of the platform. In this paper, we present ongoing standardization efforts in this area as well as the main challenges that these efforts are facing.

  • Xavier Costa-Pérez, Andreas Festag, Hans-Joerg Kolbe, Juergen Quittek, Stefan Schmid, Martin Stiemerling, Joerg Swetina, Hans van der Veen

    Standardization organizations play a major role in the telecommunications industry to guarantee interoperability between vendors and allow for a common ground where all players can voice their opinion regarding the direction the industry should follow. In this paper we review the current activities in some of the most relevant standardization bodies in the area of communication networks: 3GPP, IEEE 802.11, BBF, IETF, ONF, ETSI ISG NFV, oneM2M and ETSI TC ITS. Major innovations being developed in these bodies are summarized, describing the most disruptive directions taken and those expected to have a remarkable impact on future networks. Finally, some trends common among different bodies are identified, covering different dimensions: i) core technology enhancements, ii) inter-organization cooperation for convergence, iii) consideration of rising disruptive technical concepts, and iv) expansion into emerging use cases aiming to increase future market size.

  • Craig Partridge

    A brief history of the evolution of ACM SIGCOMM Computer Communication Review as a newsletter and journal is presented.

  • Fernando Garcia Calvo, Javier Lucendo de Gregorio, Fernando Soto de Toro, Joaquin Munoz Lopez, Teo Mayo Muniz, Jose Maria Miranda, Oscar Gavilan Ballesteros

    The Consumer Electronics Show, which is held every year in Las Vegas in early January, continues to be an important fair in the consumer sector, though increasingly the major manufacturers prefer to announce their new products at their own specific events in order to gain greater impact. Only the leading TV brands unveil their artillery of new models for the coming year. Despite this, it continues to break records: there were over 150,000 visitors (from more than 150 countries), the number of new products announced exceeded 20,000, and the fair occupied over 2 million square feet.

  • Roch Guérin, Olivier Bonaventure

    There have been many recent discussions within the computer science community on the relative roles of conferences and journals [1, 2, 3]. They clearly offer different forums for the dissemination of scientific and technical ideas, and much of the debate has been on whether and how to leverage both. These are important questions that every conference and journal ought to carefully consider, and the CoNEXT Steering Committee recently initiated a discussion on this topic. The main focus of the discussion was on how to, on the one hand, maintain the high quality of papers accepted for presentation at CoNEXT, and, on the other hand, improve the conference's ability to serve as a timely forum where new and exciting but not necessarily polished or fully developed ideas could be presented. Unfortunately, the stringent "quality control" that prevails during the paper selection process of selective conferences, including CoNEXT, often makes it difficult for interesting new ideas to break through. To make it, papers need to excel along three major dimensions, namely, technical correctness and novelty, polish of exposition and motivation, and completeness of the results. Most if not all hot-off-the-press papers will fail in at least one of those dimensions.
    On the other hand, there are conferences and workshops that target short papers. Hotnets is one such venue that has attracted short papers presenting new ideas. However, from a community viewpoint, Hotnets has several limitations. First, Hotnets is an invitation-only workshop. Coupled with a low acceptance rate, this limits the exposure of Hotnets papers to the community. Second, Hotnets has never been held outside North America. The SIGCOMM and CoNEXT workshops are also venues where short papers can be presented and discussed. However, these workshops are focused on a specific subdomain and usually do not attract a broad audience. The IMC short papers are a more interesting model because short and regular papers are mixed in a single-track conference. This ensures broad exposure for the short papers, but the scope of IMC is much smaller than that of CoNEXT.
    In order to address this intrinsic tension that plagues all selective conferences, CoNEXT 2013 is introducing a short paper category with submissions requested through a logically separate call for papers. The separate call for papers is meant to clarify to both authors and TPC members that short papers are to be judged using different criteria. Short papers will be limited to six (6) two-column pages in the standard ACM conference format. Most importantly, short papers are not meant to be condensed versions of standard length papers, and neither are they targeted at traditional "position papers." In particular, papers submitted as regular (long) papers will not be eligible for consideration as short papers. Instead, short paper submissions are intended for high-quality technical works that either target a topical issue that can be covered in 6 pages, or introduce a novel but not fully fleshed out idea that can benefit from the feedback that early exposure can provide. Short papers will be reviewed and selected through a process distinct from that of long papers and based on how good a match they are for the above criteria. As alluded to, this separation is meant to address the inherent dilemma faced by highly selective conferences, where reviewers typically approach the review process looking for reasons to reject a paper (how high are the odds that a paper is in the top 10-15%?).
    For that purpose, Program Committee members will be reminded that completeness of the results should NOT be a criterion used when assessing short papers. Similarly, while an unreadable paper is obviously not one that should be accepted, polish should not be a major consideration either. As long as the paper manages to convey its idea, a choppy presentation should not by itself be grounds for rejecting a paper. Finally, while technical correctness is important, papers that perhaps claim more than they should are not to be disqualified simply on those grounds. As a rule, the selection process should focus on the "idea" presented in the paper. If the idea is new, or interesting, or unusual, etc., and is not fundamentally broken, the paper should be considered. Eventual acceptance will ultimately depend on logistics constraints (how many such papers can be presented), but the goal is to offer a venue at CoNEXT where new, emerging ideas can be presented and receive constructive feedback. The CoNEXT web site provides additional information on the submission process of short (and regular) papers.

  • Dina Papagiannaki

    A new year begins and a new challenge needs to be undertaken. Life is full of challenges, but those that we invite ourselves have something special of their own. In that spirit, I am really happy to be taking over as the editor of ACM Computer Communication Review. Keshav has done a tremendous job making CCR a high quality publication that unites our community. The combination of peer-reviewed papers and editorial submissions provides a ground to publish the latest scientific achievements in our field, but also to position them within the context of our ever-changing technological landscape.

    With that in mind, I would like to continue encouraging the submission of the latest research results to CCR. I would also like to try to broaden its reach. A little less than two years ago, I changed jobs and took the position of scientific director responsible for Internet, systems, and networks at Telefonica Research in Barcelona, Spain. I am now part of one of the most diverse research groups I have ever known. The team comprises researchers with expertise in multimedia analysis, data mining, machine learning, human computer interaction, distributed systems, network economics, wireless networking, security and privacy. One could see this as a research team that could potentially address problems at all layers of the stack. As such, I am learning so much from the team on a daily basis.
     
    I would love to bring that broader perspective to CCR and enrich the way we see and use telecommunications infrastructure. I would like to encourage the submission of editorials from other disciplines of computer science that build and deploy technologies over the Internet. There are so many questions that need to be addressed when you start thinking about networking fueling smart cities, smart utilities, and novel services that will enable our younger generation to learn the skills they need and put them into practice in a world that suffers from high unemployment rates. Having three young children makes me wonder about ways we could use all the work that we have done in the past 20 years to enable sustainable societies in the future. And the Internet will be the skeleton that makes this possible, leading to true globalization. We have so much more to offer.
     
    In parallel, I would love to use CCR as a vehicle to disseminate lessons learnt and current best practices. With the help of the editorial team, we are going to include an interview section in CCR, where we will be asking prominent members of our community for their perspective on the main lessons they have learnt from their past work, as well as their outlook for the future.
     
    In this January issue, you are not only going to find technical contributions, but also reports from workshops that took place in recent months. In addition, I have invited an editorial covering the Mobile World Congress 2012. Recent trends in technology influence and enrich our research. This is the first step in trying to bridge those two worlds. If you do happen to attend venues such as MWC, CES, or standardization bodies, please do send me your editorial notes on your impressions from those events.
     
    Finally, I would like to extend my sincerest thank you to Prof. David Wetherall, who has decided to step down from the editorial board, and welcome Dr. Katerina Argyraki, Dr. Hitesh Ballani, Prof. Fabián Bustamante, Prof. Marco Mellia, and Prof. Joseph Camp, who are becoming part of our editorial team. With their expertise and motivation we are bound to do great things in 2013! With all that, I sincerely hope that you will enjoy this issue and I am looking forward to hearing any further suggestions to make CCR as timely and impactful as possible.