Computer Communication Review: Papers

  • Walid Benchaita, Samir Ghamri-Doudane, Sébastien Tixeuil

    We present a flexible scheme and an optimization algorithm for request routing in Content Delivery Networks (CDN). Our online approach, which is based on Lyapunov theory, provides a stable quality of service to clients, while improving content delivery delays. It also reduces data transport costs for operators.
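The abstract does not spell out the algorithm, but the drift-plus-penalty pattern that Lyapunov-based controllers typically follow can be sketched as below. All names, the virtual-queue update, and the cost/delay model are illustrative assumptions, not the authors' design.

```python
# Illustrative drift-plus-penalty request router (not the paper's algorithm).
# Each candidate server keeps a virtual queue Q tracking how far its observed
# delay has exceeded a target; each request goes to the server minimising
# V * transport_cost + Q * expected_delay, trading cost against delay debt.

class DriftPenaltyRouter:
    def __init__(self, servers, delay_target, v=1.0):
        self.q = {s: 0.0 for s in servers}  # virtual delay-debt queues
        self.delay_target = delay_target
        self.v = v                          # cost vs. QoS trade-off knob

    def route(self, cost, delay):
        # cost[s], delay[s]: current per-server estimates
        return min(self.q, key=lambda s: self.v * cost[s] + self.q[s] * delay[s])

    def observe(self, server, measured_delay):
        # Standard virtual-queue update: Q <- max(Q + excess delay, 0)
        self.q[server] = max(
            self.q[server] + measured_delay - self.delay_target, 0.0)
```

With equal delays the router first picks the cheapest server; once a server accumulates delay debt, its queue term steers traffic elsewhere, which is the stabilising behaviour Lyapunov schemes aim for.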

  • Mor Sides, Anat Bremler-Barr, Elisha Rosensweig
  • Morteza Kheirkhah, Ian Wakeman, George Parisis

    In this paper, we introduce MMPTCP, a novel transport protocol which aims at unifying the way data is transported in data centres. MMPTCP runs in two phases; initially, it randomly scatters packets in the network under a single congestion window exploiting all available paths. This is beneficial to latency-sensitive flows. During the second phase, MMPTCP runs in Multi-Path TCP (MPTCP) mode, which has been shown to be very efficient for long flows. Initial evaluation shows that our approach significantly improves short flow completion times while providing high throughput for long flows and high overall network utilisation.

  • Neelakandan Manihatty Bojan, Noa Zilberman, Gianni Antichi, Andrew W. Moore
  • Liqiong Chang, Xiaojiang Chen, Dingyi Fang, Ju Wang, Tianzhang Xing, Chen Liu, Zhanyong Tang

    Many emerging applications and the ubiquity of wireless signals have accelerated the development of Device-Free Localization (DFL) techniques, which can localize objects that carry no wireless devices. Most traditional DFL methods share a main drawback: the Received Signal Strength (RSS) measurements (i.e., the fingerprint) obtained in one area cannot be applied directly to a new area, and calibrating each area anew demands exhausting human effort. In this paper, we propose FALE, a fine-grained transferring DFL method that can adaptively work in different areas with little human effort and low energy consumption. FALE employs a rigorously designed transfer function to map the fingerprint into a projected space and reuse it across different areas, greatly reducing human effort. In addition, FALE reduces data volume and energy consumption by exploiting compressive sensing (CS) theory. Extensive real-world experimental results illustrate the effectiveness of FALE.

  • Tobias Markmann, Thomas C. Schmidt, Matthias Wählisch

    Authentication of smart objects is a major challenge for the Internet of Things (IoT), and has been left open in DTLS. Leveraging locally managed IPv6 addresses with identity-based cryptography (IBC), we propose an efficient end-to-end authentication that (a) assigns a robust and deployment-friendly federation scheme to gateways of IoT subnetworks, and (b) has been evaluated with a modern twisted Edwards elliptic curve cryptography (ECC). Our early results demonstrate feasibility and promise efficiency after ongoing optimisations.

  • Hasnain Ali Pirzada, Muhammad Raza Mahboob, Ihsan Ayyub Qazi

    We propose eSDN; a practical approach for deploying new datacenter transports without requiring any changes to the switches. eSDN uses light-weight SDN controllers at the end-hosts for querying network state. It obviates the need for statistics collection by a centralized controller especially on short timescales. We show that eSDN can scale well and allow a range of datacenter transports to be realized.

  • Waleed Reda, Lalith Suresh, Marco Canini, Sean Braithwaite

    A common pattern in the architectures of modern interactive web-services is that of large request fan-outs, where even a single end-user request (task) arriving at an application server triggers tens to thousands of data accesses (sub-tasks) to different stateful backend servers. The overall response time of each task is bottlenecked by the completion time of the slowest sub-task, making such workloads highly sensitive to the tail of the latency distribution of the backend tier. The large number of decentralized application servers and skewed workload patterns exacerbate this problem. We address these challenges through BetteR Batch (BRB). By carefully scheduling requests in a decentralized and task-aware manner, BRB enables low-latency distributed storage systems to deliver predictable performance in the presence of large request fan-outs. Our preliminary simulation results based on production workloads show that our proposed design is within 38% of an ideal system model at the 99th percentile latency, while improving latency over the state of the art by a factor of 2.
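A toy model illustrates why task-aware ordering matters when a task finishes only with its slowest sub-task. This is generic intuition for the problem BRB targets, not BRB's actual scheduler:

```python
# Toy single-backend model (illustrative, not BRB itself): a task completes
# when its last sub-task finishes, so serving sub-tasks grouped by task
# lowers mean task completion time versus interleaving tasks.

def task_completion_times(order, unit=1):
    # order: list of task ids, one entry per unit-time sub-task
    done = {}
    for i, task in enumerate(order, start=1):
        done[task] = i * unit  # a task finishes with its last sub-task
    return done

# Two tasks, two sub-tasks each, on one backend:
interleaved = task_completion_times(["A", "B", "A", "B"])  # A done at 3, B at 4
grouped = task_completion_times(["A", "A", "B", "B"])      # A done at 2, B at 4
```

Grouping finishes task A earlier without delaying B, so the mean (and, with many tasks, the tail) of task completion times improves. That is the kind of effect a task-aware scheduler exploits.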

  • Guoshun Nan, Xiuquan Qiao, Yukai Tu, Wei Tan, Lei Guo, Junliang Chen

    Content-Centric Networking (CCN) has recently emerged as a clean-slate Future Internet architecture with a completely different communication pattern from the existing IP network. Since the World Wide Web has become one of the most popular and important applications on the Internet, effectively supporting the dominant browser- and server-based web applications is key to the success of CCN. However, existing web browsers and servers are designed mainly for the HTTP protocol over TCP/IP networks and cannot directly support CCN-based web applications. Existing research focuses mainly on plug-in or proxy/gateway approaches at the client and server sides, and these schemes seriously degrade service performance due to multiple protocol conversions. To address these problems, we designed and implemented CCNBrowser, a CCN web browser, and CCNxTomcat, a CCN web server, both of which natively support the CCN protocol. To facilitate a smooth evolution from IP networks to CCN, CCNBrowser and CCNxTomcat also support the HTTP protocol. Experimental results show that CCNBrowser and CCNxTomcat outperform existing implementations. Finally, a real CCN-based web application deployed on a CCN experimental testbed validates their applicability.

  • Dávid Szabó, Felicián Németh, Balázs Sonkoly, András Gulyás, Frank H.P. Fitzek

    Many networking visionaries agree that 5G will be much more than an incremental improvement of 4G in terms of data rate. Beyond mobile networks, 5G will fundamentally influence the core infrastructure as well. In our vision, realizing the challenging promises of 5G (e.g. extremely fast, low-overhead, low-delay access to mostly cloudified services and content) will require the massive use of multipathing equipped with low-overhead transport solutions tailored to fast, reliable and secure data retrieval from cloud architectures. In this demo we present a prototype architecture supporting such services by making use of automatically configured multipath service chains implementing network-coding-based transport solutions over off-the-shelf software defined networking (SDN) components.

  • Andreas Reuter, Matthias Wählisch, Thomas C. Schmidt

    The Resource Public Key Infrastructure (RPKI) stores attestation objects for Internet resources. In this demo, we present RPKI MIRO, an open source software framework to monitor and inspect these RPKI objects. RPKI MIRO provides resource owners, RPKI operators, researchers, and lecturers with intuitive access to the content of the deployed RPKI repositories. It helps to optimize the repository structure and to identify failures.

  • Tamás Lévai, István Pelle, Felicián Németh, András Gulyás

    SDN opens a new chapter in network troubleshooting as, besides misconfigurations and firmware/hardware errors, software bugs can occur all over the SDN stack. As an answer to this challenge, the networking community has developed a wealth of piecemeal SDN troubleshooting tools, each aiming to track down misconfigurations or bugs of a specific nature (e.g. in a given SDN layer). In this demonstration we present EPOXIDE, an Emacs-based modular framework which can effectively combine existing network and software troubleshooting tools in a single platform, and which defines a possible way of integrated SDN troubleshooting.

  • Md. Faizul Bari, Shihabur Rahman Chowdhury, Reaz Ahmed, Raouf Boutaba
  • Noa Zilberman, Yury Audzevich, Georgina Kalogeridou, Neelakandan Manihatty-Bojan, Jingyun Zhang, Andrew Moore

    The demand-led growth of datacenter networks has meant that many constituent technologies are beyond the budget of the wider community. In order to make and validate timely and relevant new contributions, the wider community requires accessible evaluation, experimentation and demonstration environments with specification comparable to the subsystems of the most massive datacenter networks. We demonstrate NetFPGA, an open-source platform for rapid prototyping of networking devices with I/O capabilities up to 100Gbps. NetFPGA offers an integrated environment that enables networking research by users from a wide range of disciplines: from hardware-centric research to formal methods.

  • Bob Lantz, Brian O'Connor

    The need for fault tolerance and scalability is leading to the development of distributed SDN operating systems and applications. But how can you develop such systems and applications reliably without access to an expensive testbed? We continue to observe SDN development practices using full system virtualization or heavyweight containers, increasing complexity and overhead while decreasing usability. We demonstrate a simpler and more efficient approach: using Mininet's cluster mode to easily deploy a virtual testbed of lightweight containers on a single machine, an ad hoc cluster, or a dedicated hardware testbed. By adding an open source, distributed network operating system such as ONOS, we can create a flexible and scalable open source development platform for distributed SDN system and application software development.

  • Simon Yau, Liang Ge, Ping-Chun Hsieh, I-Hong Hou, Shuguang Cui, P.R. Kumar, Amal Ekbal, Nikhil Kundargi

    This demo presents WiMAC, a general-purpose wireless testbed for researchers to quickly prototype a wide variety of real-time MAC protocols for wireless networks. As the interface between the link layer and the physical layer, MAC protocols are often tightly coupled with the underlying physical layer, and need to have extremely small latencies. Implementing a new MAC requires a long time. In fact, very few MACs have ever been implemented, even though dozens of new MAC protocols have been proposed. To enable quick prototyping, we employ the mechanism vs. policy separation to decompose the functionality in the MAC layer and the PHY layer. Built on the separation framework, WiMAC achieves the independence of the software from the hardware, offering a high degree of function reuse and design flexibility. Hence, our platform not only supports easy cross-layer design but also allows protocol changes on the fly. Following the 802.11-like reference design, we demonstrate that deploying a new MAC protocol is quick and simple on the proposed platform through the implementation of the CSMA/CA and CHAIN protocols.

  • Ali Raza, Yasir Zaki, Thomas Pötsch, Jay Chen, Lakshmi Subramanian

    Modern web pages are very complex; each web page consists of hundreds of objects that are linked from various servers all over the world. While mechanisms such as caching reduce the overall number of end-to-end requests saving bandwidth and loading time, there is still a large portion of content that is re-fetched - despite not having changed. In this demo, we present Extreme Cache, a web caching architecture that enhances the web browsing experience through a smart pre-fetching engine. Our extreme cache tries to predict the rate of change of web page objects to bring cacheable content closer to the user.

  • Margus Ernits, Johannes Tammekänd, Olaf Maennel

    We present an Intelligent Training Exercise Environment (i-tee), a fully automated Cyber Defense Competition platform. The main features of i-tee are: automated attacks, automated scoring with immediate feedback using a scoreboard, and background traffic generation. The main advantage of the platform is easy integration into existing curricula and suitability for continuous education as well as on-site training at companies. The platform implements a modular approach called learning spaces for implementing different competitions and hands-on labs. The platform is highly automated to enable execution with up to 30 teams by one person using a single server. The platform is publicly available under the MIT license.

  • Matthias Wählisch, Thomas C. Schmidt

    The Resource Public Key Infrastructure (RPKI) allows BGP routers to verify the origin AS of an IP prefix. In this demo, we present a software extension which performs prefix origin validation in the web browser of end users. The browser extension shows the RPKI validation outcome of the web server infrastructure for the requested web domain. It follows the common plug-in concepts and does not require special modifications of the browser software. It operates on live data and helps end users as well as operators to gain better insight into the Internet security landscape.

  • Dan Alistarh, Hitesh Ballani, Paolo Costa, Adam Funnell, Joshua Benjamin, Philip Watts, Benn Thomsen

    We demonstrate an optical switch design that can scale up to a thousand ports with high per-port bandwidth (25 Gbps+) and low switching latency (40 ns). Our design uses a broadcast and select architecture, based on a passive star coupler and fast tunable transceivers. In addition we employ time division multiplexing to achieve very low switching latency. Our demo shows the feasibility of the switch data plane using a small testbed, comprising two transmitters and a receiver, connected through a star coupler.

  • Gianni Antichi, Charalampos Rotsos, Andrew W. Moore

    Despite network monitoring and testing being critical for computer networks, current solutions are both extremely expensive and inflexible. This demo presents OSNT (Open Source Network Tester), a community-driven, high-performance, open-source traffic generator and capture system built on top of the NetFPGA-10G board which enables flexible network testing. The platform supports full line-rate traffic generation regardless of packet size across the four card ports, packet capture filtering and packet thinning in hardware, and sub-microsecond time precision in traffic generation and capture, corrected using an external GPS device. Furthermore, it provides software APIs to test the dataplane performance of multi-10G switches, providing a starting point for a number of different test cases. OSNT's flexibility is further demonstrated through the OFLOPS-turbo platform: an integration of OSNT with the OFLOPS OpenFlow switch performance evaluation platform, enabling control and data plane evaluation of 10G switches. This demo showcases the applicability of the OSNT platform to evaluate the performance of legacy and OpenFlow-enabled networking devices, and demonstrates it using commercial switches.

  • Julius Schulz-Zander, Carlos Mayer, Bogdan Ciobotaru, Stefan Schmid, Anja Feldmann, Roberto Riggio

    The quickly growing demand for wireless networks and the numerous application-specific requirements stand in stark contrast to today's inflexible management and operation of WiFi networks. In this paper, we present and evaluate OpenSDWN, a novel WiFi architecture based on an SDN/NFV approach. OpenSDWN exploits datapath programmability to enable service differentiation and fine-grained transmission control, facilitating the prioritization of critical applications. OpenSDWN implements per-client virtual access points and per-client virtual middleboxes, to render network functions more flexible and support mobility and seamless migration. OpenSDWN can also be used to outsource the control over the home network to a participatory interface or to an Internet Service Provider.

  • Michael Alan Chang, Brendan Tschaen, Theophilus Benson, Laurent Vanbever
  • Jeongkeun Lee, Joon-Myung Kang, Chaithan Prakash, Sujata Banerjee, Yoshio Turner, Aditya Akella, Charles Clark, Yadi Ma, Puneet Sharma, Ying Zhang

    We present Policy Graph Abstraction (PGA), which expresses network policies and service chain requirements graphically, as simply as drawing whiteboard diagrams. Different users independently draw policy graphs that can constrain each other. A PGA graph clearly captures user intents and invariants, and thus facilitates automatic composition of overlapping policies into a coherent policy.

  • Roberto Riggio, Julius Schulz-Zander, Abbas Bradai

    Network Function Virtualization promises to reduce the cost to deploy and operate large networks by migrating various network functions from dedicated hardware appliances to software instances running on general-purpose networking and computing platforms. In this paper we demonstrate Scylla, a programmable network fabric architecture for enterprise WLANs. The framework supports basic Virtual Network Function lifecycle management functionalities such as instantiation, monitoring, and migration. We release the entire platform under a permissive license for academic use.

  • Balázs Sonkoly, János Czentye, Robert Szabo, Dávid Jocha, János Elek, Sahel Sahhaf, Wouter Tavernier, Fulvio Risso

    End-to-end service delivery often includes Network Functions (NFs) transparently inserted in the path. Flexible service chaining will require dynamic instantiation of both NFs and traffic forwarding overlays. Virtualization techniques in compute and networking, like cloud and Software Defined Networking (SDN), promise such flexibility for service providers. However, patching together existing cloud and network control mechanisms necessarily subordinates one to the other, e.g., OpenDaylight under an OpenStack controller. We designed and implemented a joint cloud and network resource virtualization and programming API. In this demonstration, we show that our abstraction is capable of flexible service chaining control across arbitrary technology domains.

  • Ezzeldin Hamed, Hariharan Rahul, Mohammed A. Abdelghany, Dina Katabi

    We present a demonstration of a real-time distributed MIMO system, DMIMO. DMIMO synchronizes transmissions from 4 distributed MIMO transmitters in time, frequency and phase, and performs distributed multi-user beamforming to independent clients. DMIMO is built on top of a Zynq hardware platform integrated with an FMCOMMS2 RF front end. The platform implements a custom 802.11n compatible MIMO PHY layer which is augmented with a lightweight distributed synchronization engine. The demonstration shows the received constellation points, channels, and effective data throughput at each client.

  • Deepak Vasisht, Swarun Kumar, Dina Katabi
  • Dina Papagiannaki

    Welcome to the July issue of Computer Communications Review. Over the past months we have tried to make CCR a resource where our community publishes fresh new ideas, expresses positions on interesting new research directions, and reports back on community activities, like workshops and regional meetings. In addition, we have introduced the student advice column and the column edited by our industrial board, aiming to provide a clearer bridge between scientific practice and technology in commercial products. I am really proud to see all these additions being embraced by the community.

    This issue features three technical papers, two on cloud computing and one on TCP. Our sincerest thanks to all the authors who submit technical contributions to CCR. And my personal thanks to the tireless editorial board that always aims towards outstanding quality while providing constructive feedback for the continuous improvement of the submitted manuscripts.

    The editorial zone features three papers as well. One reports on the workshop on Internet economics that took place late last year. The other two cover (i) research challenges in multi-domain network measurement and monitoring, and (ii) the lessons learnt from using RIPE's ATLAS for measurement research. I hope that both editorials will provide useful insights to the CCR audience.

    Our industrial column features a contribution from Akamai. The authors describe how research in our community has influenced the design of Akamai's content delivery network (CDN), as well as the practical realities they had to deal with to create that bridge between academic output and a scalable, robust content delivery network that serves trillions of requests per day.

    This issue is coming one month before the annual SIGCOMM conference, which will take place in London in August. For the past couple of years, SIGCOMM has been the venue where CCR presents its "best-of" papers for the previous four issues (July 2014 to April 2015).
I am really happy to announce the following two "best of CCR" papers for the year 2014-2015. Technical paper: "Programming Protocol-Independent Packet Processors", by P. Bosshart (Barefoot Networks), D. Daly (Intel), G. Gibb (Barefoot Networks), M. Izzard (Barefoot Networks), N. McKeown (Stanford University), J. Rexford (Princeton University), C. Schlesinger (Barefoot Networks), D. Talayco (Barefoot Networks), A. Vahdat (Google), G. Varghese (Microsoft), and D. Walker (Princeton University). Editorial paper: "A Primer on IPv4 Scarcity", by P. Richter (TU Berlin/ICSI), M. Allman (ICSI), R. Bush (Internet Initiative Japan), and V. Paxson (UC Berkeley/ICSI). Congratulations to the authors of the two best papers and I am really looking forward to seeing all of you in London in August.
    Dina Papagiannaki CCR Editor

  • S. Yaw, E. Howard, B. Mumey, M. Wittie

    Given a set of datacenters and groups of application clients, well-connected datacenters can be rented as traffic proxies to reduce client latency. Rental costs must be minimized while meeting the application specific latency needs. Here, we formally define the Cooperative Group Provisioning problem and show it is NP-hard to approximate within a constant factor. We introduce a novel greedy approach and demonstrate its promise through extensive simulation using real cloud network topology measurements and realistic client churn. We find that multi-cloud deployments dramatically increase the likelihood of meeting group latency thresholds with minimal cost increase compared to single-cloud deployments.

    Phillipa Gill
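The abstract does not give the greedy algorithm itself, but a set-cover-style heuristic of the kind the problem invites can be sketched as follows: repeatedly rent the datacenter that newly satisfies the most still-uncovered clients per unit cost. All names and the data layout are illustrative assumptions, not the authors' design.

```python
# Illustrative greedy proxy-rental sketch (not the paper's exact algorithm).
# latency[d][c] is client c's latency via datacenter d; cost[d] is d's
# rental cost. Rent proxies until every client meets the latency threshold
# or no remaining proxy can help.

def greedy_provision(latency, cost, threshold):
    uncovered = {c for d in latency for c in latency[d]}
    rented = []
    while uncovered:
        def gain(d):
            # Number of still-uncovered clients d would satisfy.
            return sum(1 for c in uncovered
                       if latency[d].get(c, float("inf")) <= threshold)
        best = max(latency, key=lambda d: gain(d) / cost[d])
        if gain(best) == 0:
            break  # remaining clients cannot meet the threshold at all
        rented.append(best)
        uncovered -= {c for c in uncovered
                      if latency[best].get(c, float("inf")) <= threshold}
    return rented, uncovered
```

Greedy ratios of coverage to cost are the classic approximation device for such covering problems, which fits the abstract's note that the problem is NP-hard to approximate within a constant factor.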
  • C. Fuerst, M. Rost, S. Schmid

    It is well-known that cloud application performance can critically depend on the network. In recent years, several systems have been developed which provide the application with the illusion of a virtual cluster: a star-shaped virtual network topology connecting virtual machines to a logical switch with absolute bandwidth guarantees. In this paper, we debunk some of the myths around the virtual cluster embedding problem. First, we show that the virtual cluster embedding problem is not NP-hard, and present the fast and optimal embedding algorithm VC-ACE for arbitrary datacenter topologies. Second, we argue that resources may be wasted by enforcing star-topology embeddings, and alternatively promote a hose embedding approach. We discuss the computational complexity of hose embeddings and derive the HVC-ACE algorithm. Using simulations we substantiate the benefits of hose embeddings in terms of acceptance ratio and resource footprint.

    Hitesh Ballani
  • H. Ding, M. Rabinovich

    This paper examines several TCP characteristics and their effect on existing passive RTT measurement techniques. In particular, using packet traces from three geographically distributed vantage points, we find relatively low use of TCP timestamps and significant presence of stretch acknowledgements. While the former simply affects the applicability of some measurement techniques, the latter may in principle affect the accuracy of RTT estimation. Using these insights, we quantify implications of common methodologies for passive RTT measurement. In particular, we show that, unlike delayed TCP acknowledgement, stretch acknowledgements do not distort RTT estimations.

    Joseph Camp
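One common passive RTT technique of the kind the paper examines can be sketched as follows: at a vantage point near the server, RTT is approximated by the gap between the SYN-ACK and the ACK that completes the handshake. The packet representation below is a simplified assumption for illustration.

```python
# Minimal passive-RTT sketch over the TCP three-way handshake (a generic
# technique, not this paper's specific methodology). Packet records are
# simplified dicts: {"t": timestamp, "flags": "S"/"SA"/"A", "seq": ..., "ack": ...}.

def handshake_rtt(packets):
    # packets: time-ordered records for one connection, seen near the server.
    synack = next(p for p in packets if p["flags"] == "SA")
    # The handshake-completing ACK acknowledges the SYN-ACK's seq + 1.
    ack = next(p for p in packets
               if p["flags"] == "A" and p["ack"] == synack["seq"] + 1)
    return ack["t"] - synack["t"]
```

Handshake-based estimation sidesteps the stretch-acknowledgement problem the paper highlights, because it relies only on the handshake packets rather than on matching data segments to their ACKs.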
  • P. Calyam, M. Swany

    The perfSONAR-based Multi-domain Network Performance Measurement and Monitoring Workshop was held on February 20-21, 2014 in Arlington, VA. The goal of the workshop was to review the state of the perfSONAR effort and catalyze future directions by cross-fertilizing ideas, and distilling common themes among the diverse perfSONAR stakeholders that include: network operators and managers, end-users and network researchers. The timing and organization for the second workshop is significant because there are an increasing number of groups within NSF supported data-intensive computing and networking programs that are dealing with measurement, monitoring and troubleshooting of multi-domain issues. These groups are forming explicit measurement federations using perfSONAR to address a wide range of issues. In addition, the emergence and wide-adoption of new paradigms such as software-defined networking are taking shape to aid in traffic management needs of scientific communities and network operators. Consequently, there are new challenges that need to be addressed for extensible and programmable instrumentation, measurement data analysis, visualization and middleware security features in perfSONAR. This report summarizes the workshop efforts to bring together diverse groups for delivering targeted short/long talks, sharing latest advances, and identifying gaps that exist in the community for solving end-to-end performance problems in an effective, scalable fashion.

  • V. Bajpai, S. Eravuchira, J. Schönwälder

    We reflect upon our experience in using the RIPE Atlas platform for measurement-based research. We show how in addition to credits, control checks using rate limits are in place to ensure that the platform does not get overloaded with measurements. We show how the Autonomous System (AS)-based distribution of RIPE Atlas probes is heavily skewed which limits possibilities of measurements sourced from a specific origin-AS. We discuss the significance of probe calibration and how we leverage it to identify load issues in older hardware versions (38.6% overall as of Sep 2014) of probes. We show how performance measurement platforms (such as RIPE Atlas, SamKnows, BISmark and Dasu) can benefit from each other by demonstrating two example use-cases. We also open discussion on how RIPE Atlas deployment can be made more useful by relaying more probe metadata information back to the scientific community and by strategically deploying probes to reduce the inherent sampling bias embedded in probe-based measurement platforms.

  • kc claffy, D. Clark

    On December 10-11, 2014, we hosted the 4th interdisciplinary Workshop on Internet Economics (WIE) at UC San Diego's Supercomputer Center. This workshop series provides a forum for researchers, Internet facilities and service providers, technologists, economists, theorists, policy makers, and other stakeholders to inform current and emerging regulatory and policy debates. The objective for this year's workshop was a structured consideration of whether and how policy-makers should try to shape the future of the Internet. To structure the discussion about policy, we began the workshop with a list of potential aspirations for our future telecommunications infrastructure (a list we had previously collated), and asked participants to articulate an aspiration or fear they had about the future of the Internet, which we summarized and discussed on the second day. The focus on aspirations was motivated by the high-level observation that before discussing regulation, we must agree on the objective of the regulation, and why the intended outcome is justified. In parallel, we used a similar format as in previous years: a series of focused sessions, where 3-4 presenters each prepared 10-minute talks on issues in recent regulatory discourse, followed by in-depth discussions. This report highlights the discussions and presents relevant open research questions identified by participants. Slides presented and a copy of this report are available at

  • A. Akella

    I've received several interesting, and varied, questions from students all over the world. Thank you for the warm response! In this issue, I have hand-picked a small subset of questions to answer. Many thanks to Brighten Godfrey (UIUC) and Vyas Sekar (CMU) for contributing their thoughts.

  • B. Davie, C. Diot, L. Eggert, N. McKeown, V. Padmanabhan, R. Teixeira

    As networking researchers, we love to work on ideas that improve the practice of networking. In the early pioneering days of the Internet the link between networking researchers and practitioners was strong; the community was small and everyone knew each other. Not only were there many important ideas from the research community that affected the practice of networking, we were all very aware of them. Today, the networking industry is enormous and the practice of networking spans many network equipment vendors, operators, chip builders, the IETF, data centers, wireless and cellular, and so on. There continue to be many transfers of ideas, but there isn’t a forum to learn about them. The goal of this series is to create such a forum by presenting articles that shine a spotlight on specific examples; not only on the technology and ideas, but also on the path the ideas took to affect the practice. Sometimes a research paper was picked up by chance; but more often, the researchers worked hand-in-hand with the standards community, the open-source community or industry to develop the idea further to make it suitable for adoption.

  • B. Maggs, R. Sitaraman

    This paper peeks under the covers at the subsystems that provide the basic functionality of a leading content delivery network. Based on our experiences in building one of the largest distributed systems in the world, we illustrate how sophisticated algorithmic research has been adapted to balance the load between and within server clusters, manage the caches on servers, select paths through an overlay routing network, and elect leaders in various contexts. In each instance, we first explain the theory underlying the algorithms, then introduce practical considerations not captured by the theoretical models, and finally describe what is implemented in practice. Through these examples, we highlight the role of algorithmic research in the design of complex networked systems. The paper also illustrates the close synergy that exists between research and industry where research ideas cross over into products and product requirements drive future research.

  • Dina Papagiannaki

    Welcome to the April issue of Computer Communications Review, our community's quarterly newsletter. Or maybe workshop? Many may not realize it, but CCR actually operates like a workshop with quarterly deadlines. Every quarter we receive 40-60 submissions, which are reviewed by a collective of more than 100 reviewers and handled by our 12-member editorial board, to which I would like to welcome Alberto Dainotti, from CAIDA. Of all submissions, some technical papers are published in the current issue, while others are given feedback for further improvement and re-evaluated for a later issue. I cannot thank enough the hard-working editorial board that, quarter on quarter, handles its allocated papers, aiming to provide the best possible feedback. Editorial papers are not peer reviewed; they are reviewed solely by me. They fall into two categories: i) position papers, or ii) workshop reports. My task is to ensure that the positions are clearly expressed and to identify cases where positions are presented through technical arguments, in which case I may engage someone from the editorial board or redirect the paper to the technical track. Fundamentally, CCR is a vehicle to bring our community together and expose interesting, novel ideas as early as possible. And I believe we do achieve this.

    This issue is an example of the above process. We received 36 papers – 32 technical submissions and 4 editorials. We accepted all editorials and 2 of the technical papers, while 10 papers were recommended for resubmission, with clear recommendations on the required changes. Many authors agree that their papers have improved in clarity and technical accuracy through the revise-and-resubmit process. I hope you enjoy the two technical papers, on rate adaptation in 802.11n and multipath routing in wireless sensor networks.
Three of the editorials cover workshops and community meetings: 1) the 1st named data networking community meeting, 2) the Dagstuhl seminar on distributed cloud computing, and 3) the 1st data transparency lab workshop. Meeting reports are a wonderful way of tracking the state of the art in specific areas, and learning from the findings of the organizers. The last editorial is one of my favorite editorials so far. The authors provide a unique historical perspective on how IP address allocation has evolved since the inception of the Internet, and the implications that our community has to deal with. A very interesting exposition of IP address scarcity, and a very valuable perspective on how the Internet as a whole has evolved. This issue also brings a novelty. We are establishing a new column, edited by Dr. Renata Teixeira from INRIA. The column aims to bring successful examples of technology transfer from our community to the networking industry. The inaugural example is provided by Dr. Paul Francis, discussing Network Address Translation (NAT). Funny how NAT was first proposed in CCR, and that the non-workshop editorial of this issue also deals with IP scarcity. It is interesting to read Paul's exposition of the events, along with his own reflections on whether what was transferred was what he actually proposed :-). With all this, I hope you enjoy the content of this issue, as well as our second column on graduate advice. Finally, I am expecting you all at the best of CCR session of ACM Sigcomm in London – the Sigcomm session where we celebrate the best technical and the best editorial published by CCR during the past year.
    Dina Papagiannaki CCR Editor

  • L. Kriara, M. Marina

    We consider the link adaptation problem in 802.11n wireless LANs that involves adapting MIMO mode, channel bonding, modulation and coding scheme, and frame aggregation level with varying channel conditions. Through measurement-based analysis, we find that adapting all available 802.11n features results in higher goodput than adapting only a subset of features, thereby showing that holistic link adaptation is crucial to achieve best performance. We then design a novel hybrid link adaptation scheme termed SampleLite that adapts all 802.11n features while being efficient compared to sampling-based open-loop schemes and practical relative to closed loop schemes. SampleLite uses sender-side RSSI measurements to significantly lower the sampling overhead, by exploiting the monotonic relationship between best settings for each feature and the RSSI. Through analysis and experimentation in a testbed environment, we show that our proposed approach can reduce the sampling overhead by over 70% on average compared to the widely used Minstrel HT scheme. We also experimentally evaluate the goodput performance of SampleLite in a wide range of controlled and real-world interference scenarios. Our results show that SampleLite, while performing close to the ideal, delivers goodput that is 35–100% better than with existing schemes.

    Aline Carneiro Viana
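The monotonic RSSI-to-best-setting relationship that SampleLite exploits can be illustrated with a simple threshold lookup; the threshold and MCS values below are made up for illustration, not calibrated figures from the paper.

```python
# Sketch of the idea behind RSSI-driven link adaptation: if the best setting
# for each feature is monotone in RSSI, the sender can map measured RSSI to a
# candidate configuration via a threshold table and only sample rates near it,
# instead of sampling the whole rate space. Values are hypothetical.

import bisect

# (rssi_threshold_dBm, mcs_index) pairs in ascending RSSI order; a real
# deployment would calibrate these per device and environment.
RSSI_TO_MCS = [(-82, 0), (-77, 2), (-72, 4), (-67, 8), (-62, 12), (-54, 15)]

def mcs_for_rssi(rssi_dbm):
    thresholds = [t for t, _ in RSSI_TO_MCS]
    # Highest threshold not exceeding the measured RSSI.
    i = bisect.bisect_right(thresholds, rssi_dbm) - 1
    return RSSI_TO_MCS[max(i, 0)][1]
```

Because the table is monotone, a sampling scheme only needs to probe the one or two MCS indices adjacent to the lookup result, which is how an RSSI hint can cut sampling overhead.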