CCR Papers from April 2013

  • Dina Papagiannaki

    Here is my second issue of CCR, and I am really happy to see that a lot of the things I wrote in my previous editorial are happening or are on their way! Thanks to the wonderful editorial team, this issue has five technical papers, and some of the area editors have started contacting prominent members of our community to obtain retrospectives on their past work. In parallel, I have been really fortunate to receive a number of interesting editorials, some of which I solicited and some of which arrived through the normal submission process.

    Craig Partridge has provided us with an editorial on the history of CCR. It is a very interesting read, not only for new members of our community but for everyone. This issue also features an editorial on the challenges that cognitive radio deployments are going to face, and another on a new networking paradigm that could be very relevant in developing regions, named "lowest cost denominator networking." I am positive that each of these editorials is bound to make you think.

    Following my promise in January's editorial note, this issue also brings some industrial perspective to CCR. We have two editorial notes on standardization activities at the IETF, 3GPP, ITU, etc. I would like to sincerely thank the authors, since putting structure around such activities to report them in a concise form is not an easy task, to say the least.

    Research in the area of networking has seen a tremendous increase in breadth in recent years. Our community is now studying core networking technologies, cellular networks, mobile systems, and networked applications. In addition, a large number of consumer electronics products are increasingly becoming connected, using wired or wireless technologies. Understanding the trends in the consumer electronics industry is bound to inform interesting related research in our field. With that in mind, I invited my colleagues in the Telefonica Video Unit and Telefonica Digital to submit their report on what they considered the highlights of the Consumer Electronics Show (CES) that took place in Las Vegas in January 2013. I hope that article inspires you toward novel directions.

    I am really pleased to see CCR growing! Please do not hesitate to contact me with comments and suggestions!

  • Johann Schlamp, Georg Carle, Ernst W. Biersack

    The Border Gateway Protocol (BGP) was designed without security in mind. To this day, this makes the Internet vulnerable to hijacking attacks that intercept or blackhole Internet traffic. So far, significant effort has been put into the detection of IP prefix hijacking, while AS hijacking has received little attention. AS hijacking is more sophisticated than IP prefix hijacking and is aimed at a long-term benefit, for instance over a duration of months. In this paper, we study a malicious case of AS hijacking, carried out in order to send spam from the victim's network. We thoroughly investigate this AS hijacking incident using live data from both the control and the data plane. Our analysis yields insights into how the attacker proceeded in order to covertly hijack a whole autonomous system, how he misled an upstream provider, and how he used unallocated address space. We further show that state-of-the-art techniques to prevent hijacking are not fully capable of dealing with this kind of attack. We also derive guidelines on how to conduct future forensic studies of AS hijacking. Our findings show that there is a need for preventive measures that would make it possible to anticipate AS hijacking, and we outline the design of an early warning system.
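
    As an illustration of the kind of control-plane check such an investigation relies on, the sketch below flags BGP announcements whose origin AS differs from the origin expected for the prefix. It is a minimal sketch, not the authors' method; the prefixes, AS numbers, and expected-origin table are hypothetical.

```python
# Minimal sketch (not the paper's method): flag BGP announcements whose
# origin AS differs from the origin we expect for that prefix, a basic
# signal used when investigating possible AS/prefix hijacking.

# Hypothetical "expected" origins, e.g. derived from an IRR/RPKI snapshot.
expected_origins = {
    "192.0.2.0/24": 64500,
    "198.51.100.0/24": 64501,
}

# Hypothetical observed announcements: (prefix, AS path as seen at a collector).
observed = [
    ("192.0.2.0/24", [64510, 64505, 64500]),    # origin matches
    ("198.51.100.0/24", [64510, 64499, 64666]), # origin differs -> suspicious
]

def suspicious_announcements(observed, expected_origins):
    """Return announcements whose origin AS (last hop of the AS path)
    does not match the expected origin for the prefix."""
    alerts = []
    for prefix, as_path in observed:
        origin = as_path[-1]
        expected = expected_origins.get(prefix)
        if expected is not None and origin != expected:
            alerts.append((prefix, origin, expected))
    return alerts

for prefix, seen, expected in suspicious_announcements(observed, expected_origins):
    print(f"{prefix}: observed origin AS{seen}, expected AS{expected}")
```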

    Fabian E. Bustamante
  • Zhe Wu, Harsha V. Madhyastha

    To minimize user-perceived latencies, web services are often deployed across multiple geographically distributed data centers. The premise of our work is that web services deployed across multiple cloud infrastructure services can serve users from more data centers than is possible with a single cloud service, and hence offer lower latencies to users. In this paper, we conduct a comprehensive measurement study to understand the potential latency benefits of deploying web services across three popular cloud infrastructure services: Amazon EC2, Google Compute Engine (GCE), and Microsoft Azure. We estimate that, compared to deployments on one of these cloud services, users in up to half the IP address prefixes can have their RTTs reduced by over 20% when a web service is deployed across the three cloud services. When we dig deeper to understand these latency benefits, we make three significant observations. First, when web services shift from single-cloud to multi-cloud deployments, a significant fraction of prefixes will see latency benefits simply by being served from a different data center in the same location. This is because routing inefficiencies that exist between a prefix and a nearby data center in one cloud service are absent on the path from the prefix to a nearby data center in a different cloud service. Second, despite the latency improvements that a large fraction of prefixes will perceive, users in several locations (e.g., Argentina and Israel) will continue to incur RTTs greater than 100ms even when web services span three large-scale cloud services (EC2, GCE, and Azure). Finally, we see that harnessing the latency benefits offered by multi-cloud deployments is likely to be challenging in practice; our measurements show that the data center which offers the lowest latency to a prefix often fluctuates between different cloud services, thus necessitating replication of data.
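
    To make the comparison concrete, the sketch below computes, for a handful of prefixes with per-cloud RTT measurements, how much a multi-cloud deployment improves on a single-cloud baseline and which prefixes gain more than 20%. It is a toy sketch with made-up numbers, not the paper's dataset or methodology.

```python
# Toy sketch (hypothetical RTTs in ms, not the paper's data): for each prefix,
# compare the RTT of a single-cloud deployment against the best RTT achievable
# when the web service is replicated across all three clouds.

rtts = {
    # prefix -> {cloud -> best RTT (ms) to any data center of that cloud}
    "203.0.113.0/24": {"EC2": 80.0, "GCE": 55.0, "Azure": 90.0},
    "198.51.100.0/24": {"EC2": 40.0, "GCE": 42.0, "Azure": 39.0},
    "192.0.2.0/24": {"EC2": 120.0, "GCE": 118.0, "Azure": 121.0},
}

def improvement_over(baseline_cloud, per_cloud_rtt):
    """Fractional RTT reduction of a multi-cloud deployment relative to a
    deployment hosted only on `baseline_cloud`."""
    single = per_cloud_rtt[baseline_cloud]
    multi = min(per_cloud_rtt.values())
    return (single - multi) / single

baseline = "EC2"  # hypothetical single-cloud baseline
improved = [p for p, r in rtts.items() if improvement_over(baseline, r) > 0.20]
print(f"{len(improved)}/{len(rtts)} prefixes improve by >20% vs. {baseline}-only")
```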

    Katerina Argyraki
  • Yao Liang, Rui Liu

    We consider the important problem of wireless sensor network (WSN) routing topology inference (tomography) from indirect measurements observed at the data sink. Previous studies on WSN topology tomography are restricted to static routing tree estimation, which is unrealistic for real-world WSNs, where routing varies over time due to wireless channel dynamics. We study general WSN routing topology inference where the routing structure is dynamic. We formulate the problem as a novel compressed sensing problem and then devise a suite of decoding algorithms to recover the routing path of each aggregated measurement. Our approach is tested and evaluated through simulations with favorable results. WSN routing topology inference is essential for routing improvement, topology control, anomaly detection, and load balancing, enabling effective network management and optimized operation of deployed WSNs.
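
    The abstract frames path recovery as a compressed sensing problem. As a generic illustration of that idea (not the authors' decoding algorithms), the sketch below recovers a sparse binary path-indicator vector x from observations y = A x using orthogonal matching pursuit; the sensing matrix, problem sizes, and sparsity level are made up.

```python
# Generic compressed-sensing recovery sketch (not the paper's algorithm):
# recover a sparse 0/1 "path indicator" vector x from y = A @ x using
# orthogonal matching pursuit (OMP).
import numpy as np

def omp(A, y, sparsity):
    """Greedy OMP: pick the column most correlated with the residual,
    refit by least squares on the chosen support, repeat."""
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coeffs
        residual = y - A @ x
    return x

rng = np.random.default_rng(0)
n_links, n_measurements, path_len = 40, 20, 4    # hypothetical sizes
x_true = np.zeros(n_links)
x_true[rng.choice(n_links, size=path_len, replace=False)] = 1.0  # links on the path
A = rng.normal(size=(n_measurements, n_links))   # random sensing matrix
y = A @ x_true
x_hat = omp(A, y, sparsity=path_len)
print("recovered links:", np.flatnonzero(x_hat > 0.5))
print("true links:     ", np.flatnonzero(x_true))
```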

    Augustin Chaintreau
  • Davide Simoncelli, Maurizio Dusi, Francesco Gringoli, Saverio Niccolini

    Recent work in network measurements focuses on scaling the performance of monitoring platforms to 10Gb/s and beyond. Concurrently, the IT community focuses on scaling the analysis of big data over a cluster of nodes. So far, combinations of these approaches have favored flexibility and usability over the timeliness of results and the efficient allocation of resources. In this paper we show how to meet both objectives with BlockMon, a network monitoring platform originally designed to work on a single node, which we extended to run distributed stream-data analytics tasks. We compare its performance against Storm and Apache S4, the state-of-the-art open-source stream-processing platforms, by implementing a phone call anomaly detection system and a Twitter trending algorithm: our enhanced BlockMon achieves a performance gain of over 2.5x and 23x, respectively. Given the different nature of those applications and the performance of BlockMon as a single-node network monitor [1], we expect our results to hold for a broad range of applications, making distributed BlockMon a good candidate for the convergence of network-measurement and IT-analysis platforms.
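
    For a flavor of the kind of workload being compared, the sketch below implements a simple sliding-window trending counter over a stream of hashtags. It is a generic illustration, not the algorithm benchmarked in the paper; the window size and the sample stream are invented.

```python
# Generic sliding-window "trending topics" sketch (not the paper's benchmark):
# count hashtags seen in the last `window_seconds` and report the top-k.
from collections import Counter, deque

class TrendingCounter:
    def __init__(self, window_seconds=300):
        self.window = window_seconds
        self.events = deque()      # (timestamp, hashtag), oldest first
        self.counts = Counter()

    def add(self, timestamp, hashtag):
        self.events.append((timestamp, hashtag))
        self.counts[hashtag] += 1
        self._expire(timestamp)

    def _expire(self, now):
        # drop events that fell out of the window
        while self.events and self.events[0][0] < now - self.window:
            _, old = self.events.popleft()
            self.counts[old] -= 1
            if self.counts[old] == 0:
                del self.counts[old]

    def top(self, k=3):
        return self.counts.most_common(k)

# Hypothetical usage on a tiny stream of (time, hashtag) pairs.
tc = TrendingCounter(window_seconds=60)
for t, tag in [(0, "#ccr"), (10, "#sdn"), (20, "#ccr"), (70, "#bgp"), (75, "#ccr")]:
    tc.add(t, tag)
print(tc.top())
```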

    Konstantina Papagiannaki
  • Damien Saucez, Luigi Iannone, Benoit Donnet

    During the last decade, we have seen the rise of discussions regarding the emergence of a Future Internet. One of the proposed approaches leverages the separation of the identifier and locator roles of IP addresses, leading to the Locator/Identifier Separation Protocol (LISP), currently under development at the IETF (Internet Engineering Task Force). Up to now, research on LISP has been rather theoretical, i.e., based on simulations/emulations often using Internet traffic traces, and no work in the literature has attempted to assess the state of its deployment and how it has evolved in recent years. This paper aims at bridging this gap by presenting a first measurement study of the existing worldwide LISP network (lisp4.net). Early results indicate that there is steady growth of the LISP network, but also that network manageability might receive a higher priority than performance in a large-scale deployment.

    Sharad Agarwal
  • Konstantinos Pelechrinis, Prashant Krishnamurthy, Martin Weiss, Taieb Znati

    A large volume of research has been conducted in the cognitive radio (CR) area over the last decade. However, the deployment of a commercial CR network is yet to emerge. A large portion of the existing literature does not build on real-world scenarios and hence neglects various important aspects of commercial telecommunication networks. For instance, a lot of attention has been paid to spectrum sensing as the front-line functionality that needs to be completed in an efficient and accurate manner to enable an opportunistic CR network architecture. While it is necessary to detect the existence of spectrum holes, simply sensing (cooperatively or not) the energy emitted from a primary transmitter cannot enable correct dynamic spectrum access (DSA). For example, the presence of a primary transmitter's signal does not mean that CR network users cannot access the spectrum, since there might not be any primary receiver in the vicinity. Despite the existing solutions to the DSA problem, no robust, implementable scheme has emerged. The assumptions that these schemes are built upon do not always hold in realistic wireless environments; specific settings are assumed that differ significantly from how existing telecommunication networks work. In this paper, we challenge the basic premises of the proposed schemes. We further argue that addressing the technical challenges we face in deploying robust CR networks can only be achieved if we radically change the way we design their basic functionalities. In support of our argument, we present a set of real-world scenarios, inspired by realistic settings in commercial telecommunication networks, namely TV and cellular, focusing on spectrum sensing as a basic and critical functionality in the deployment of CRs. We use these scenarios to show why existing DSA paradigms are not amenable to realistic deployment in complex wireless environments. The proposed study extends beyond cognitive radio networks and further highlights the often-existing gap between research and commercialization, paving the way to new thinking about how to accelerate the commercialization and adoption of new networking technologies and services.
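
    As a reminder of what the criticized front-line functionality looks like, the sketch below implements textbook energy-detection spectrum sensing: average received power over a window is compared against a threshold. The signal, noise level, and threshold are made up; as the abstract points out, detecting transmitter energy this way says nothing about whether a primary receiver is nearby.

```python
# Textbook energy-detection sketch (illustrative only, parameters are made up):
# declare "channel busy" if the average received power over N samples exceeds
# a threshold. Detecting a primary *transmitter* this way says nothing about
# nearby primary *receivers*, which is the abstract's main criticism.
import numpy as np

def energy_detect(samples, threshold):
    """Return (busy, measured_power): busy is True if average power > threshold."""
    power = float(np.mean(np.abs(samples) ** 2))
    return power > threshold, power

rng = np.random.default_rng(1)
n = 1024
noise = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)  # unit-power noise
tone = 0.7 * np.exp(2j * np.pi * 0.1 * np.arange(n))                 # hypothetical primary signal

threshold = 1.2  # hypothetical threshold, e.g. calibrated for a target false-alarm rate
for label, samples in [("noise only", noise), ("primary present", noise + tone)]:
    busy, p = energy_detect(samples, threshold)
    print(f"{label}: power={p:.2f}, busy={busy}")
```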

  • Arjuna Sathiaseelan, Jon Crowcroft

    "The Internet is for everyone" claims Vint Cerf, the father of the Internet via RFC 3271. The Internet Society's recent global Internet survey reveals that the Internet should be considered as a basic human birth right. We strongly agree with these and believe that basic access to the Internet should be made free, at least to access the essential services. However the current Internet access model, which is governed by market economics makes it practically infeasible for enabling universal access especially for those with socio-economic barriers. We see enabling benevolence in the Internet (act of sharing resources) as a potential solution to solve the problem of digital exclusion caused due to socio-economic barriers. In this paper, we propose LCD-Net: Lowest Cost Denominator Networking, a new Internet paradigm that architects multi-layer resource pooling Internet technologies to support benevolence in the Internet. LCD-Net proposes to bring together several existing resource pooling Internet technologies to ensure that users and network operators who share their resources are not affected and at the same time are incentivised for sharing. The paper also emphasizes the need to identify and extend the stakeholder value chain to ensure such benevolent access to the Internet is sustainable.

  • Marcelo Bagnulo, Philip Eardley, Trevor Burbridge, Brian Trammell, Rolf Winter

    Over the last few years, we have witnessed the deployment of large measurement platforms that enable measurements from many vantage points. Examples of these platforms include SamKnows and RIPE Atlas. All told, there are tens of thousands of measurement agents. Most of these measurement agents are located in end-user premises; they can run measurements against other agents located in strategic locations, according to the measurements to be performed. Thanks to the large number of measurement agents, these platforms can provide data about key network performance indicators from the end-user perspective. This data is useful to network operators to improve their operations, as well as to regulators and to end users themselves. Currently deployed platforms use proprietary protocols to exchange information between their different parts. As these platforms grow to become an important tool for understanding network performance, it is important to standardize the protocols between the different elements of the platform. In this paper, we present ongoing standardization efforts in this area as well as the main challenges that these efforts are facing.
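
    For a flavor of what such an agent does, the sketch below measures the TCP connect time to a target from the vantage point it runs on and emits the result as a JSON record, the kind of data a platform's (currently proprietary) reporting protocol would carry. The target host, agent identifier, and record fields are invented for illustration.

```python
# Minimal measurement-agent sketch (illustrative only): measure TCP connect
# time to a target and emit a JSON record of the kind a measurement platform's
# reporting protocol might carry. Host, port, and record fields are made up.
import json
import socket
import time

def tcp_connect_time(host, port, timeout=3.0):
    """Return the TCP connect time in milliseconds, or None on failure."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000.0
    except OSError:
        return None

if __name__ == "__main__":
    target = ("example.org", 80)           # hypothetical measurement target
    rtt_ms = tcp_connect_time(*target)
    report = {
        "agent_id": "agent-0001",          # hypothetical agent identifier
        "timestamp": int(time.time()),
        "measurement": "tcp-connect",
        "target": f"{target[0]}:{target[1]}",
        "rtt_ms": rtt_ms,
    }
    print(json.dumps(report))              # a real agent would upload this to a collector
```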

  • Xavier Costa-Pérez, Andreas Festag, Hans-Joerg Kolbe, Juergen Quittek, Stefan Schmid, Martin Stiemerling, Joerg Swetina, Hans van der Veen

    Standardization organizations play a major role in the telecommunications industry: they guarantee interoperability between vendors and provide a common ground where all players can voice their opinion regarding the direction the industry should follow. In this paper we review the current activities in some of the most relevant standardization bodies in the area of communication networks: 3GPP, IEEE 802.11, BBF, IETF, ONF, ETSI ISG NFV, oneM2M and ETSI TC ITS. We summarize the major innovations being developed in these bodies, describing the most disruptive directions taken and those expected to have a remarkable impact on future networks. Finally, we identify some trends common across the different bodies, covering several dimensions: i) core technology enhancements, ii) inter-organization cooperation for convergence, iii) consideration of emerging disruptive technical concepts, and iv) expansion into emerging use cases aimed at increasing future market size.

  • Craig Partridge

    A brief history of the evolution of ACM SIGCOMM Computer Communication Review as a newsletter and journal is presented.

  • Fernando Garcia Calvo, Javier Lucendo de Gregorio, Fernando Soto de Toro, Joaquin Munoz Lopez, Teo Mayo Muniz, Jose Maria Miranda, Oscar Gavilan Ballesteros

    The Consumer Electronics Show, held every year in Las Vegas in early January, continues to be an important fair in the consumer sector, even though the major manufacturers increasingly prefer to announce their new products at their own dedicated events in order to gain greater impact; only the leading TV brands still unveil their artillery of new models for the coming year there. Despite this, the show continues to break records: there were over 150,000 visitors (from more than 150 countries), more than 20,000 new products were announced, and the fair occupied over 2 million square feet.

  • Roch Guérin, Olivier Bonaventure

    There have been many recent discussions within the computer science community on the relative roles of conferences and journals [1, 2, 3]. They clearly offer different forums for the dissemination of scientific and technical ideas, and much of the debate has been on whether and how to leverage both. These are important questions that every conference and journal ought to carefully consider, and the CoNEXT Steering Committee recently initiated a discussion on this topic. The main focus of the discussion was on how, on one hand, to maintain the high quality of papers accepted for presentation at CoNEXT and, on the other hand, to improve the conference's ability to serve as a timely forum where new and exciting but not necessarily polished or fully developed ideas can be presented.

    Unfortunately, the stringent "quality control" that prevails during the paper selection process of selective conferences, including CoNEXT, often makes it difficult for interesting new ideas to break through. To make it, papers need to excel along three major dimensions, namely, technical correctness and novelty, polish of exposition and motivation, and completeness of the results. Most if not all hot-off-the-press papers will fail in at least one of those dimensions. On the other hand, there are conferences and workshops that target short papers. Hotnets is one such venue, and it has attracted short papers presenting new ideas. However, from a community viewpoint, Hotnets has several limitations. First, Hotnets is an invitation-only workshop; coupled with a low acceptance rate, this limits the exposure of Hotnets papers to the community. Second, Hotnets has never been held outside North America. The SIGCOMM and CoNEXT workshops are also venues where short papers can be presented and discussed. However, these workshops are focused on specific subdomains and usually do not attract a broad audience. The IMC short papers are a more interesting model because short and regular papers are mixed in a single-track conference. This ensures broad exposure for the short papers, but the scope of IMC is much narrower than that of CoNEXT.

    In order to address this intrinsic tension that plagues all selective conferences, CoNEXT 2013 is introducing a short paper category, with submissions requested through a logically separate call for papers. The separate call for papers is meant to clarify to both authors and TPC members that short papers are to be judged using different criteria. Short papers will be limited to six (6) two-column pages in the standard ACM conference format. Most importantly, short papers are not meant to be condensed versions of standard-length papers, nor are they targeted at traditional "position papers." In particular, papers submitted as regular (long) papers will not be eligible for consideration as short papers. Instead, short paper submissions are intended for high-quality technical works that either target a topical issue that can be covered in six pages or introduce a novel but not fully fleshed out idea that can benefit from the feedback that early exposure can provide. Short papers will be reviewed and selected through a process distinct from that of long papers, based on how good a match they are for the above criteria. As alluded to, this separation is meant to address the inherent dilemma faced by highly selective conferences, where reviewers typically approach the review process looking for reasons to reject a paper (how high are the odds that a paper is in the top 10-15%?).

    For that purpose, Program Committee members will be reminded that completeness of the results should NOT be a criterion used when assessing short papers. Similarly, while an unreadable paper is obviously not one that should be accepted, polish should not be a major consideration either. As long as the paper manages to convey its idea, a choppy presentation should not by itself be grounds for rejecting it. Finally, while technical correctness is important, papers that perhaps claim more than they should are not to be disqualified simply on those grounds. As a rule, the selection process should focus on the "idea" presented in the paper. If the idea is new, or interesting, or unusual, etc., and is not fundamentally broken, the paper should be considered. Eventual acceptance will ultimately depend on logistics constraints (how many such papers can be presented), but the goal is to offer a venue at CoNEXT where new, emerging ideas can be presented and receive constructive feedback. The CoNEXT web site provides additional information on the submission process for short (and regular) papers.
