CCR Papers from April 2015

  • Dina Papagiannaki

    Welcome to the April issue of Computer Communication Review, our community’s quarterly newsletter. Or maybe a workshop?

    Many may not realize it, but CCR actually operates like a workshop with quarterly deadlines. Every quarter we receive 40-60 submissions, which are reviewed by a collective of more than 100 reviewers and handled by our 12-member editorial board, to which I would like to welcome Alberto Dainotti, from CAIDA. Of all submissions, some technical papers are published in the current issue, while others are given feedback for further improvement and are re-evaluated for a later issue. I cannot thank enough the hard-working editorial board that, quarter after quarter, handles its allocated papers, striving to provide the best possible feedback.

    The editorial papers are not peer reviewed, but are reviewed solely by me. Editorial papers fall into two categories: i) position papers, and ii) workshop reports. My task is to ensure that the positions are clearly expressed, and to identify cases where positions are presented through technical arguments, in which case I may engage someone from the editorial board or redirect the paper to the technical track. Fundamentally, CCR is a vehicle to bring our community together and to expose interesting, novel ideas as early as possible. And I believe we do achieve this.

    This issue is an example of the above process. We received 36 papers: 32 technical submissions and 4 editorials. We accepted all editorials and 2 of the technical papers, while 10 papers were recommended for resubmission, with clear recommendations on the changes required. Many authors agree that their papers have improved in clarity and technical accuracy through the process of revise-and-resubmit.

    I hope you enjoy the two technical papers, on rate adaptation in 802.11n and on multipath routing in wireless sensor networks. Three of the editorials cover workshops and community meetings: 1) the 1st Named Data Networking community meeting, 2) the Dagstuhl seminar on distributed cloud computing, and 3) the 1st Data Transparency Lab workshop. Meeting reports are a wonderful way of tracking the state of the art in specific areas and of learning from the findings of the organizers.

    The last editorial is one of my favorites so far. The authors provide a unique historical perspective on how IP address allocation has evolved since the inception of the Internet, and on the implications our community has to deal with. It is a very interesting exposition of IP address scarcity, but also a very valuable perspective on how the Internet as a whole has evolved.

    This issue also brings a novelty. We are establishing a new column, edited by Dr. Renata Teixeira from INRIA. The column aims to present successful examples of technology transfer from our community to the networking industry. The inaugural example is provided by Dr. Paul Francis, discussing Network Address Translation (NAT). It is funny that NAT was first proposed in CCR, and that the non-workshop editorial of this issue also deals with IP scarcity. It is interesting to read Paul’s exposition of the events, along with his own reflections on whether what was transferred was what he actually proposed :-).

    With all this, I hope you enjoy the content of this issue, as well as our second column on graduate advice. Finally, I expect to see you all at the Best of CCR session of ACM Sigcomm in London – the Sigcomm session where we celebrate the best technical and the best editorial papers published by CCR during the past year.

    Dina Papagiannaki
    CCR Editor

  • L. Kriara, M. Marina

    We consider the link adaptation problem in 802.11n wireless LANs, which involves adapting the MIMO mode, channel bonding, modulation and coding scheme, and frame aggregation level to varying channel conditions. Through measurement-based analysis, we find that adapting all available 802.11n features results in higher goodput than adapting only a subset of features, thereby showing that holistic link adaptation is crucial to achieve the best performance. We then design a novel hybrid link adaptation scheme termed SampleLite that adapts all 802.11n features while being efficient compared to sampling-based open-loop schemes and practical relative to closed-loop schemes. SampleLite uses sender-side RSSI measurements to significantly lower the sampling overhead, by exploiting the monotonic relationship between the best setting for each feature and the RSSI. Through analysis and experimentation in a testbed environment, we show that our proposed approach can reduce the sampling overhead by over 70% on average compared to the widely used Minstrel HT scheme. We also experimentally evaluate the goodput performance of SampleLite in a wide range of controlled and real-world interference scenarios. Our results show that SampleLite, while performing close to the ideal, delivers goodput that is 35–100% better than with existing schemes.
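    The core idea the abstract describes – that the best setting for each feature varies monotonically with RSSI, so sampling can be confined to a small neighbourhood instead of the full settings space – can be sketched in a few lines. This is an illustrative sketch only; the threshold values, function names, and the choice of MCS index as the example feature are invented here and are not taken from the paper.

```python
from bisect import bisect_right

# Illustrative RSSI boundaries (dBm): above each threshold, the next
# higher-rate MCS index tends to become the better choice. Values made up.
THRESHOLDS = [-82, -77, -72, -67, -62, -57]

def candidate_mcs(rssi_dbm):
    """Map a sender-side RSSI reading to a candidate MCS index.
    Monotonicity means a simple sorted-threshold lookup suffices."""
    return bisect_right(THRESHOLDS, rssi_dbm)

def sampling_set(rssi_dbm, max_mcs=7):
    """Instead of sampling every setting (as a pure open-loop scheme
    would), probe only the candidate and its immediate neighbours."""
    c = candidate_mcs(rssi_dbm)
    return [m for m in (c - 1, c, c + 1) if 0 <= m <= max_mcs]
```

    With 8 MCS indices, probing at most 3 instead of all 8 illustrates how such a scheme can cut sampling overhead substantially, in the spirit of the 70% reduction the authors report.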

    Aline Carneiro Viana
  • S. Sharma, S. Jena

    A Wireless Sensor Network (WSN) consists of low-power sensor nodes, and energy is the main constraint on these nodes. In this paper, we propose a cluster-based multipath routing protocol, which uses clustering and multipath techniques to reduce energy consumption and increase reliability. The basic idea is to reduce the load on the sensor nodes by giving more responsibility to the base station (sink). We have implemented the protocol, compared it with existing protocols, and found that it is more energy-efficient and reliable.
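    The abstract's central idea – shifting work from the sensor nodes to the sink – can be illustrated with a minimal sketch in which the sink, knowing node positions and residual energies, picks cluster heads and assigns members centrally. Everything here (2-D coordinates, energy-based head selection, the function name) is an assumption made for illustration, not a detail from the paper.

```python
import math

def sink_side_clustering(nodes, energy, k=2):
    """Run clustering at the sink so sensor nodes do no election work.

    nodes:  {node_id: (x, y)} positions known to the sink
    energy: {node_id: residual energy}
    Returns (heads, assignment): the k highest-energy nodes become
    cluster heads; every other node is assigned to its nearest head.
    """
    heads = sorted(energy, key=energy.get, reverse=True)[:k]
    assignment = {}
    for nid, pos in nodes.items():
        if nid in heads:
            continue
        assignment[nid] = min(heads, key=lambda h: math.dist(pos, nodes[h]))
    return heads, assignment
```

    A multipath extension would then have the sink compute several disjoint head-to-sink routes per cluster, so a single link failure does not cost a retransmission from the energy-constrained source node.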

    Joseph Camp
  • P. Richter, M. Allman, R. Bush, V. Paxson

    With the ongoing exhaustion of free address pools at the registries serving the global demand for IPv4 address space, scarcity has become reality. Networks in need of address space can no longer get more address allocations from their respective registries. In this work we frame the fundamentals of the IPv4 address exhaustion phenomenon and its connected issues. We elaborate on how the current ecosystem of IPv4 address space has evolved since the standardization of IPv4, leading to the rather complex and opaque scenario we face today. We outline the evolution in address space management as well as in address space use patterns, identifying key factors of the scarcity issues. We characterize the possible solution space to overcome these issues and introduce the perspective of address blocks as virtual resources, which involves issues such as differentiation between address blocks, the need for resource certification, and issues arising when transferring address space between networks.

  • claffy, J. Polterock, A. Afanasyev, J. Burke, L. Zhang

    This report is a brief summary of the first NDN Community Meeting held at UCLA in Los Angeles, California on September 4-5, 2014. The meeting provided a platform for the attendees from 39 institutions across seven countries to exchange their recent NDN research and development results, to debate existing and proposed functionality in security support, and to provide feedback into the NDN architecture design evolution.

  • Y. Coady, O. Hohlfeld, J. Kempf, R. McGeer, S. Schmid

    A distributed cloud, connecting multiple smaller, geographically distributed datacenters, can be an attractive alternative to today’s massive, centralized datacenters. A distributed cloud can reduce communication overheads, costs, and latencies by offering nearby computation and storage resources. Better data locality can also improve privacy. In this paper, we revisit the vision of distributed cloud computing and identify different use cases as well as research challenges. This article is based on the Dagstuhl Seminar on Distributed Cloud Computing, which took place in February 2015 at Schloss Dagstuhl.

  • R. Gross-Brown, M. Ficek, J. Agundez, P. Dressler, N. Laoutaris

    On November 20 and 21, 2014, Telefonica I+D hosted the Data Transparency Lab ("DTL") Kickoff Workshop on Personal Data Transparency and Online Privacy at its headquarters in Barcelona, Spain. This workshop provided a forum for technologists, researchers, policymakers and industry representatives to share and discuss current and emerging issues around privacy and transparency on the Internet. The objective of this workshop was to kick-start the creation of a community of research, industry, and public interest parties that will work together towards the following objectives:

    - The development of methodologies and user-friendly tools to promote transparency and empower users to understand online privacy issues and consequences;
    - The sharing of datasets and research results; and
    - The support of research through grants and the provision of infrastructure to deploy tools.

    With the above activities, the DTL community aims to improve our understanding of the technical, ethical, economic and regulatory issues related to the use of personal data by online services. It is hoped that successful execution of such activities will help sustain a fair and transparent exchange of personal data online. This report summarizes the presentations, discussions and questions that resulted from the workshop.

  • Bruce Davie, Christophe Diot, Lars Eggert, Nick McKeown, Venkat Padmanabhan, Renata Teixeira

    As networking researchers, we love to work on ideas that improve the practice of networking. In the early pioneering days of the Internet, the link between networking researchers and practitioners was strong; the community was small and everyone knew each other. Not only were there many important ideas from the research community that affected the practice of networking, we were all very aware of them. Today, the networking industry is enormous, and the practice of networking spans many network equipment vendors, operators, chip builders, the IETF, data centers, wireless and cellular, and so on. There continue to be many transfers of ideas, but there isn’t a forum to learn about them. The goal of this series is to create such a forum by presenting articles that shine a spotlight on specific examples; not only on the technology and ideas, but also on the path the ideas took to affect the practice. Sometimes a research paper was picked up by chance; but more often, the researchers worked hand-in-hand with the standards community, the open-source community or industry to develop the idea further and make it suitable for adoption. Their story is here.

    We are seeking CCR articles describing interesting cases of “research affecting the practice,” including ideas transferred from research labs (academic or industrial) that became:

    • Commercial products
    • Internet standards
    • Algorithms and ideas embedded into new or existing products
    • Widely used open-source software
    • Ideas deployed first by startups, by existing companies, or by distribution of free software
    • Communities built around a toolbox, language, or dataset

    We also welcome stories of negative experiences, or of ideas that seemed promising but ended up not taking off. Paul Francis has agreed to start this editorial series. Enjoy it!
    Bruce Davie, Christophe Diot, Lars Eggert, Nick McKeown, Venkat Padmanabhan, and Renata Teixeira SIGCOMM Industrial Liaison Board

  • Aditya Akella

    As some of you may know, I gave a talk at CoNEXT 2014 titled "On future-proofing networks" (see [1] for slides). While most of the talk was focused on my past research projects, I spent a bit of time talking about the kind of problems I like to work on. I got several interesting questions about the latter, both during the talk and in the weeks following the talk. Given this, I thought I would devote this column (and perhaps parts of future columns) to putting my ideas on problem selection into words. I suspect what I write below will generate more questions than answers in your mind. Don’t hold your questions back, though! Write to me at!

  • Paul Francis

    In January of 1993, Tony Eng and I published in CCR the first paper to propose Network Address Translation (NAT), based on work done the previous summer during Tony's internship with me at Bellcore. Early in 1994, according to Wikipedia, John Mayes started development of the PIX (Private Internet Exchange) firewall product, which included NAT as a key feature. In May of 1994 the first RFC on NAT was published (RFC 1631). PIX was a huge success, and was bought by Cisco in November of 1995.

    I was asked by the Sigcomm Industrial Liaison Board to write the first of a series of articles under the theme “Examples of Research Affecting the Practice of Networking.” The goal of the series is to help find ways for the research community to increase its industrial impact. I argued with the board that NAT isn't a very good example, because I don't think the lessons learned apply very well to today's environment. Better to start with a contemporary example like OpenFlow.

    Why isn’t NAT a good example? It's because I as a researcher had nothing to do with the success of NAT, and the reasons for NAT's success had nothing to do with the reason I developed NAT in the first place. Let me explain.

    I conceived of NAT as a solution to the expected depletion of the IPv4 address space. Nobody bought a PIX, however, because they wanted to help slow down the global consumption of IP addresses. People bought PIX firewalls because they needed a way to connect their private networks to the public Internet. At the time, it was relatively common practice to assign random unregistered IP addresses to private networking equipment or hosts. Picking a random address was much easier than, for instance, going to a regional Internet registry to obtain addresses, especially when there was no intent to connect the private network to the public Internet. This obviously became a problem once someone using unregistered private addresses wished to connect to the Internet.

    The PIX firewall saved people the trouble of obtaining an adequate block of IP addresses, and of having to renumber their networks in order to connect to the Internet if they had already been using unregistered addresses. Indeed, PIX was conceived by John Mayes as an IP variant of the telephone PBX (Private Branch Exchange), which allows private phone systems to operate with their own private numbers, often needing only a single public phone number.

    The whole NAT episode, seen from my point of view at the time, boils down to this. I published NAT in CCR (thanks to Dave Oran, who was editor at the time and allowed it). For me, that was the end of it. Some time later, I was contacted by Kjeld Egevang of Cray Communications, who wanted to write an RFC on NAT so that they could legitimize their implementation of it. So I helped a little with that and lent my name to the RFC. (In those days, all you needed to publish an RFC was the approval of one guy, Jon Postel!) The next thing I knew, NAT was everywhere.

    Given that the problem I was trying to solve and the problem PIX solved are different, there is in fact no reason to think that John Mayes or Kjeld Egevang got the idea for NAT from the CCR paper. So what would the lesson for researchers today be? This: solve an interesting problem with no business model, publish a paper about it, and hope that somebody uses the same idea to solve some problem that does have a business model. Clearly not a very interesting lesson.

    I agree with the motivation of this article series. It is very hard to have industrial impact in networking today. The last time I tried was five or six years ago, when I made a serious effort to turn ViAggre (NSDI ’09) into an RFC. Months of effort yielded nothing, and perhaps rightly so. I hope others, especially others with more positive recent results, will contribute to this series.
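    The PBX analogy above – many private endpoints sharing one public identifier – is the essence of the port-multiplexing form of NAT that became ubiquitous. A minimal sketch of such a translation table, with all names, addresses, and the port range invented for illustration (this is not the mechanism of RFC 1631, which translated addresses only, nor any particular product's implementation):

```python
import itertools

class Nat:
    """Toy many-to-one translator: each (private IP, private port) flow
    is mapped to a unique port on a single shared public address."""

    def __init__(self, public_ip, first_port=10000):
        self.public_ip = public_ip
        self._ports = itertools.count(first_port)  # next free public port
        self.out = {}   # (private_ip, private_port) -> public_port
        self.back = {}  # public_port -> (private_ip, private_port)

    def translate_out(self, private_ip, private_port):
        """Rewrite an outbound flow's source; reuse an existing mapping."""
        key = (private_ip, private_port)
        if key not in self.out:
            port = next(self._ports)
            self.out[key] = port
            self.back[port] = key
        return self.public_ip, self.out[key]

    def translate_in(self, public_port):
        """Rewrite an inbound packet's destination back to the private host."""
        return self.back[public_port]
```

    Because unregistered private addresses never leave the box, a site that had picked random internal addresses could connect to the Internet without renumbering – which, as the article explains, is the problem people were actually paying to solve.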
