CCR Papers from July 2013

  • Dina Papagiannaki
    It is hard to believe it is already July. July marks a few milestones: i) schools are out, ii) most of the paper submission deadlines for the year are behind us, and iii) a lot, but not all, of the reviewing duty has been accomplished. July also marks another milestone for CCR and myself: the longest CCR issue I have had the pleasure to edit this year! This issue of CCR features 15 papers in total: 7 technical papers and 8 editorials. The technical papers cover the areas of Internet measurement, routing, privacy, content delivery, and data center networks.
     
    The editorial zone features the reports on the Workshop on Internet Economics and the Workshop on Active Internet Measurements, which took place in late 2012 and early 2013. It also features position papers on empirical Internet measurement, community networks, the use of recommendation engines in content delivery, and censorship on the Internet. I found every single one of them thought-provoking, with the potential to initiate discussion in our community. Finally, we have two slightly more unusual editorial notes. The first one describes the experience of the CoNEXT 2012 Internet chairs and how they managed to provide flawless connectivity despite having access only to residential-grade equipment. The second one focuses on the criticism that we often exhibit as a community in our major conferences, in particular SIGCOMM, and suggests a number of directions conference organizers could take.
     
    This last editorial has made me think a little more about my experience as an author, reviewer, and TPC member over the past 15 years of my career. It quotes Jeffrey Naughton’s keynote at ICDE 2010 and his statement about the Computer Science community: “Funding agencies believe us when we say we suck.”
     
    Serving on a TPC, one realizes that criticism is something we naturally do as a community - it is not personal. For an author who has never served on a TPC, however, the process feels far more personal. I still remember the days when each one of my papers was prepared to what I considered perfection and sent into the “abyss”, sometimes with a positive, and other times with a negative, response. I also remember the disappointment of my first rejection. Some perspective on the process could therefore be of interest.
  • Thomas Callahan, Mark Allman, Michael Rabinovich

    The Internet crucially depends on the Domain Name System (DNS) both to allow users to interact with the system in human-friendly terms and, increasingly, to direct traffic to the best content replicas at the instant the content is requested. This paper is an initial study into the behavior and properties of the modern DNS system. We passively monitor DNS and related traffic within a residential network in an effort to understand server behavior--as viewed through DNS responses--and client behavior--as viewed through both DNS requests and traffic that follows DNS responses. We present an initial set of wide-ranging findings.

    Sharad Agarwal
  • Akmal Khan, Hyun-chul Kim, Taekyoung Kwon, Yanghee Choi

    The IRR is a set of globally distributed databases with which ASes can register their routing and address-related information. The quality of the IRR data is often believed to be unreliable, since there are few economic incentives for ASes to register and update their routing information in a timely manner. To examine these beliefs, we carry out a comprehensive analysis of (IP prefix, origin AS) pairs in BGP against the corresponding information registered with the IRR, and vice versa. Considering BGP and IRR practices, we propose a methodology to match the (IP prefix, origin AS) pairs between the IRR and BGP. We observe that the practice of registering IP prefixes and origin ASes with the IRR is prevalent. However, the quality of the IRR data can vary substantially depending on the routing registries, the regional Internet registries to which ASes belong, and the AS types. Based on these observations, we argue that the IRR can help improve the security of BGP routing by making BGP routers selectively rely on the corresponding IRR data.
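
    To make the matching idea concrete, here is a minimal Python sketch of classifying BGP (IP prefix, origin AS) pairs against IRR route objects. It is an assumed illustration, not the paper's methodology: the input pair sets are hypothetical, and the sketch ignores real-world complications such as covering (less-specific) prefixes and multiple overlapping registries.

        # Hypothetical inputs; real data would come from BGP RIB dumps
        # and IRR database snapshots.
        def classify_bgp_pairs(bgp_pairs, irr_pairs):
            """Classify each BGP (prefix, origin AS) pair against the IRR."""
            irr_prefixes = {prefix for prefix, _ in irr_pairs}
            results = {"matched": [], "origin_mismatch": [], "unregistered": []}
            for prefix, origin in bgp_pairs:
                if (prefix, origin) in irr_pairs:
                    results["matched"].append((prefix, origin))
                elif prefix in irr_prefixes:
                    # Prefix is registered, but with a different origin AS.
                    results["origin_mismatch"].append((prefix, origin))
                else:
                    # No route object found for this prefix at all.
                    results["unregistered"].append((prefix, origin))
            return results

        bgp = {("198.51.100.0/24", 64500), ("203.0.113.0/24", 64501)}
        irr = {("198.51.100.0/24", 64500), ("203.0.113.0/24", 64999)}
        print(classify_bgp_pairs(bgp, irr))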

    Bhaskaran Raman
  • Abdelberi Chaabane, Emiliano De Cristofaro, Mohamed Ali Kaafar, Ersin Uzun

    As the Internet struggles to cope with scalability, mobility, and security issues, new network architectures are being proposed to better accommodate the needs of modern systems and applications. In particular, Content-Oriented Networking (CON) has emerged as a promising next-generation Internet architecture: it sets out to decouple content from hosts, at the network layer, by naming data rather than hosts. CON comes with a potential for a wide range of benefits, including reduced congestion and improved delivery speed by means of content caching, simpler configuration of network devices, and security at the data level. However, it remains an interesting open question whether or not, and to what extent, this emerging networking paradigm bears new privacy challenges. In this paper, we provide a systematic privacy analysis of CON and the common building blocks among its various architectural instances in order to highlight emerging privacy threats, and analyze a few potential countermeasures. Finally, we present a comparison between CON and today's Internet in the context of a few privacy concepts, such as anonymity, censorship, traceability, and confidentiality.

    Augustin Chaintreau
  • Benjamin Frank, Ingmar Poese, Yin Lin, Georgios Smaragdakis, Anja Feldmann, Bruce Maggs, Jannis Rake, Steve Uhlig, Rick Weber

    Today a spectrum of solutions is available for distributing content over the Internet, ranging from commercial CDNs to ISP-operated CDNs to content-provider-operated CDNs to peer-to-peer CDNs. Some deploy servers in just a few large data centers while others deploy in thousands of locations or even on millions of desktops. Recently, major CDNs have formed strategic alliances with large ISPs to provide content delivery network solutions. Such alliances show the natural evolution of content delivery today, driven by the need to address scalability issues and to take advantage of new technology and business opportunities. In this paper we revisit the design and operating space of CDN-ISP collaboration in light of recent ISP and CDN alliances. We identify two key enablers for supporting collaboration and improving content delivery performance: informed end-user to server assignment and in-network server allocation. We report on the design and evaluation of a prototype system, NetPaaS, that realizes both enablers. Relying on traces from the largest commercial CDN and a large tier-1 ISP, we show that NetPaaS is able to increase CDN capacity on-demand, enable coordination, reduce download time, and achieve multiple traffic engineering goals, leading to a win-win situation for both ISP and CDN.
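
    As a rough illustration of the first enabler, informed end-user to server assignment, here is a minimal Python sketch in which the CDN proposes candidate servers and an ISP-supplied distance function ranks them for a given user. All names, the distance table, and the interface are hypothetical; NetPaaS itself is far richer than this.

        def assign_server(user_subnet, candidates, isp_distance):
            """Pick the candidate the ISP reports as closest to the user."""
            return min(candidates, key=lambda srv: isp_distance(user_subnet, srv))

        # Hypothetical ISP-provided view (e.g., hop count or a latency rank).
        distance_table = {
            ("10.1.0.0/16", "server-a"): 2,
            ("10.1.0.0/16", "server-b"): 7,
        }
        def isp_distance(subnet, srv):
            return distance_table.get((subnet, srv), 99)  # unknown pairs rank last

        print(assign_server("10.1.0.0/16", ["server-a", "server-b"], isp_distance))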

    Fabián E. Bustamante
  • Simone Basso, Michela Meo, Juan Carlos De Martin

    Network users know much less than ISPs, Internet exchanges, and content providers about what happens inside the network. Consequently, users can neither easily detect network neutrality violations nor readily exercise their market power by knowledgeably switching ISPs. This paper contributes to the ongoing efforts to empower users by proposing two models to estimate -- via application-level measurements -- a key network indicator: the packet loss rate (PLR) experienced by FTP-like TCP downloads. Controlled, testbed, and large-scale experiments show that the Inverse Mathis model is simpler and more consistent across the whole PLR range, but less accurate than the more advanced Likely Rexmit model for landline connections and moderate PLR.
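
    The intuition behind the first model can be sketched with the classic Mathis et al. throughput formula, rate ≈ (MSS/RTT) · C/√p with C = √(3/2), inverted to solve for the loss rate p. The Python sketch below is a minimal, assumed illustration of that inversion; the paper's actual models and their calibration may differ.

        from math import sqrt

        C = sqrt(3.0 / 2.0)  # constant from the Mathis et al. model

        def inverse_mathis_plr(rate_bps, mss_bytes, rtt_s):
            """Estimate the packet loss rate from measured rate, MSS, and RTT."""
            mss_bits = mss_bytes * 8
            p = (C * mss_bits / (rtt_s * rate_bps)) ** 2
            return min(p, 1.0)  # clamp: the model breaks down at very high loss

        # Hypothetical measurement: 8 Mbit/s download, 1460-byte MSS, 50 ms RTT.
        print(f"estimated PLR: {inverse_mathis_plr(8e6, 1460, 0.05):.6f}")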

    Nikolaos Laoutaris
  • Howard Wang, Yiting Xia, Keren Bergman, T.S. Eugene Ng, Sambit Sahu, Kunwadee Sripanidkulchai

    Not only do big data applications impose heavy bandwidth demands, they also have diverse communication patterns (denoted as *-cast) that mix together unicast, multicast, incast, and all-to-all-cast. Effectively supporting such traffic demands remains an open problem in data center networking. We propose an unconventional approach that leverages physical layer photonic technologies to build custom communication devices for accelerating each *-cast pattern, and integrates such devices into an application-driven, dynamically configurable, photonics-accelerated data center network. We present preliminary results from a multicast case study to highlight the potential benefits of this approach.

    Hitesh Ballani
  • Giuseppe Bianchi, Andrea Detti, Alberto Caponi, Nicola Blefari Melazzi

    In some network and application scenarios, it is useful to cache content in network nodes on the fly, at line rate. The resilience of in-network caches can be improved by guaranteeing that all content stored therein is valid. Digital signatures could indeed be used to verify content integrity and provenance. However, their operation may be much slower than the line rate, thus limiting caching of cryptographically verified objects to a small subset of the forwarded ones. How does this affect caching performance? To answer this question, we devise a simple analytical approach that permits us to assess the performance of an LRU caching strategy storing a randomly sampled subset of requests. A key feature of our model is the ability to handle traffic beyond the traditional Independent Reference Model, thus permitting us to understand how performance varies under different temporal locality conditions. Results, also verified on real-world traces, show that content integrity verification does not necessarily bring about a performance penalty; rather, in some specific (but practical) conditions, performance may even improve.
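
    For readers who want to experiment with the strategy being modeled, here is a minimal Python sketch of an LRU cache that admits only a random sample of misses, mimicking the case where verification throughput limits insertions to a fraction q of forwarded objects. The capacity, sampling probability, and toy request stream are all hypothetical; the paper analyzes this strategy analytically rather than via such a simulation.

        import random
        from collections import OrderedDict

        class SampledLRU:
            def __init__(self, capacity, sample_prob):
                self.capacity = capacity
                self.q = sample_prob  # fraction of misses admitted to the cache
                self.cache = OrderedDict()

            def request(self, key):
                if key in self.cache:
                    self.cache.move_to_end(key)  # refresh LRU position on a hit
                    return True
                if random.random() < self.q:  # admit only a sampled subset
                    self.cache[key] = True
                    if len(self.cache) > self.capacity:
                        self.cache.popitem(last=False)  # evict least recently used
                return False

        # Toy experiment with a heavy-tailed (Zipf-like) request stream.
        cache = SampledLRU(capacity=100, sample_prob=0.1)
        requests = [int(random.paretovariate(1.0)) for _ in range(100_000)]
        hit_ratio = sum(cache.request(k) for k in requests) / len(requests)
        print(f"hit ratio: {hit_ratio:.3f}")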

    Sharad Agarwal
  • Bart Braem, Chris Blondia, Christoph Barz, Henning Rogge, Felix Freitag, Leandro Navarro, Joseph Bonicioli, Stavros Papathanasiou, Pau Escrich, Roger Baig Viñas, Aaron L. Kaplan, Axel Neumann, Ivan Vilata i Balaguer, Blaine Tatum, Malcolm Matson

    Community Networks are large scale, self-organized and decentralized networks, built and operated by citizens for citizens. In this paper, we make a case for research on and with community networks, while explaining the relation to Community-Lab. The latter is an open, distributed infrastructure for researchers to experiment with community networks. The goal of Community-Lab is to advance research and empower society by understanding and removing obstacles for these networks and services.

  • Mohamed Ali Kaafar, Shlomo Berkovsky, Benoit Donnet

    During the last decade, we have witnessed a substantial change in content delivery networks (CDNs) and user access paradigms. Whereas users previously consumed content from a central server through their personal computers, nowadays they can reach a wide variety of repositories from virtually everywhere using mobile devices. This results in a considerable time-, location-, and event-based volatility of content popularity. In such a context, it is imperative for CDNs to put in place adaptive content management strategies, thereby improving the quality of service provided to users and decreasing costs. In this paper, we introduce predictive content distribution strategies inspired by methods developed in the Recommender Systems area. Specifically, we outline different content placement strategies based on observed user consumption patterns, and advocate their applicability in state-of-the-art CDNs.
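
    As a loose illustration of what such a predictive strategy might look like, the Python sketch below ranks content by recent per-region consumption and prefetches the top-k items to each region's cache. It is a hypothetical toy, not one of the paper's strategies: the log format, region names, and choice of k are all invented.

        from collections import Counter

        def plan_placement(request_log, k=2):
            """request_log: iterable of (region, content_id) consumption events."""
            per_region = {}
            for region, content_id in request_log:
                per_region.setdefault(region, Counter())[content_id] += 1
            # Prefetch each region's k most consumed items to its edge cache.
            return {region: [cid for cid, _ in counts.most_common(k)]
                    for region, counts in per_region.items()}

        log = [("eu", "video-a"), ("eu", "video-a"), ("eu", "video-b"),
               ("us", "video-c"), ("us", "video-a"), ("us", "video-c")]
        print(plan_placement(log))  # {'eu': ['video-a', 'video-b'], 'us': ['video-c', 'video-a']}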

  • Mark Allman
  • Sam Burnett, Nick Feamster

    Free and open access to information on the Internet is at risk: more than 60 countries around the world practice some form of Internet censorship, and both the number of countries practicing censorship and the proportion of Internet users who are subject to it are likely to increase. We posit that, although it may not always be feasible to guarantee free and open access to information, citizens have the right to know when their access has been obstructed, restricted, or tampered with, so that they can make informed decisions on information access. We motivate the need for a system that provides accurate, verifiable reports of censorship and discuss the challenges involved in designing such a system. We place these challenges in context by studying their applicability to OONI, a new censorship measurement platform.

  • Jeffrey C. Mogul

    Many people in CS in general, and SIGCOMM in particular, have expressed concerns about an increasingly "hypercritical" approach to reviewing, which can block or discourage the publication of innovative research. The SIGCOMM Technical Steering Committee (TSC) has been addressing this issue, with the goal of encouraging cultural change without undermining the integrity of peer review. Based on my experience as an author, PC member, TSC member, and occasional PC chair, I examine possible causes for hypercritical reviewing, and offer some advice for PC chairs, reviewers, and authors. My focus is on improving existing publication cultures and peer review processes, rather than on proposing radical changes.

  • kc claffy, David Clark

    On December 12-13, 2012, CAIDA and the Massachusetts Institute of Technology (MIT) hosted the (invitation-only) 3rd interdisciplinary Workshop on Internet Economics (WIE) at the University of California's San Diego Supercomputer Center. The goal of this workshop series is to provide a forum for researchers, commercial Internet facilities and service providers, technologists, economists, theorists, policy makers, and other stakeholders to empirically inform current and emerging regulatory and policy debates. The theme for this year's workshop was "Definitions and Data". This report describes the discussions and presents relevant open research questions identified by participants. Slides presented at the workshop and a copy of this final report are available at [2].

  • kc claffy

    On February 6-8, 2013, CAIDA hosted the fifth Workshop on Active Internet Measurements (AIMS-5) as part of our series of Internet Statistics and Metrics Analysis (ISMA) workshops. As with previous AIMS workshops, the goals were to further our understanding of the potential and limitations of active measurement research and infrastructure in the wide-area Internet, and to promote cooperative solutions and coordinated strategies to address future data needs of the network and security operations and research communities. The workshop focus this year was on creating, managing, and analyzing annotations of large longitudinal active Internet measurement data sets. Due to popular demand, we also dedicated half a day to large-scale active measurement (performance/topology) from mobile/cellular devices. This report describes topics discussed at this year's workshop. Materials related to the workshop are available at http://www.caida.org/workshops/.

  • Richard Charles Gass, Damien Saucez

    The ACM 8th International Conference on emerging Networking EXperiments and Technologies (CoNEXT) was organized in a lovely hotel in the south of France. Although it was in an excellent location in the city center of Nice with views of the sea, it suffered from poor Internet connectivity. In this paper we describe what happened to the network at CoNEXT and explain why Internet connectivity is usually a problem at small hotel venues. Next we highlight the usual issues with the network equipment that lead to the general network dissatisfaction of conference attendees. Finally we describe how we alleviated the problem by offloading network services and all network traffic into the cloud while supporting over 100 simultaneously connected devices on a single ADSL link with a device rated to support only around 15-20. Our experience shows that with simple offloading of certain network services, small conference venues with limited budgets no longer have to be plagued by the usual factors that lead to an unsatisfactory Internet connectivity experience.
