Computer Communication Review: Papers

  • Tamás Lévai, István Pelle, Felicián Németh, András Gulyás

    SDN opens a new chapter in network troubleshooting: besides misconfigurations and firmware/hardware errors, software bugs can occur all over the SDN stack. As an answer to this challenge, the networking community has developed a wealth of piecemeal SDN troubleshooting tools aiming to track down misconfigurations or bugs of a specific nature (e.g., in a given SDN layer). In this demonstration we present EPOXIDE, an Emacs-based modular framework which can effectively combine existing network and software troubleshooting tools in a single platform, and which defines a possible approach to integrated SDN troubleshooting.

  • Md. Faizul Bari, Shihabur Rahman Chowdhury, Reaz Ahmed, Raouf Boutaba
  • Noa Zilberman, Yury Audzevich, Georgina Kalogeridou, Neelakandan Manihatty-Bojan, Jingyun Zhang, Andrew Moore

    The demand-led growth of datacenter networks has meant that many constituent technologies are beyond the budget of the wider community. In order to make and validate timely and relevant new contributions, the wider community requires accessible evaluation, experimentation and demonstration environments with specifications comparable to the subsystems of the most massive datacenter networks. We demonstrate NetFPGA, an open-source platform for rapid prototyping of networking devices with I/O capabilities up to 100 Gbps. NetFPGA offers an integrated environment that enables networking research by users from a wide range of disciplines: from hardware-centric research to formal methods.

  • Bob Lantz, Brian O'Connor

    The need for fault tolerance and scalability is leading to the development of distributed SDN operating systems and applications. But how can you develop such systems and applications reliably without access to an expensive testbed? We continue to observe SDN development practices using full system virtualization or heavyweight containers, increasing complexity and overhead while decreasing usability. We demonstrate a simpler and more efficient approach: using Mininet's cluster mode to easily deploy a virtual testbed of lightweight containers on a single machine, an ad hoc cluster, or a dedicated hardware testbed. By adding an open source, distributed network operating system such as ONOS, we can create a flexible and scalable open source development platform for distributed SDN system and application software development.
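
    As a rough sketch of this approach (not the authors' demo code), the following uses Mininet's Python API to stand up a small software testbed on one machine and attach it to an externally running controller such as ONOS; the controller address and OpenFlow port here are assumptions. Mininet's cluster mode (exposed experimentally as the mn --cluster option in recent releases) extends the same API across several machines.

      # Illustrative sketch only: a single-machine Mininet topology attached to an
      # external SDN controller (e.g., ONOS). Controller IP/port are assumptions.
      from mininet.net import Mininet
      from mininet.node import RemoteController
      from mininet.topo import LinearTopo
      from mininet.log import setLogLevel

      def run():
          setLogLevel('info')
          topo = LinearTopo(k=3)                      # 3 switches, one host each
          net = Mininet(topo=topo, controller=None)   # no local controller
          net.addController('c0', controller=RemoteController,
                            ip='127.0.0.1', port=6653)  # assumed ONOS endpoint
          net.start()
          net.pingAll()                               # sanity-check connectivity
          net.stop()

      if __name__ == '__main__':
          run()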

  • Simon Yau, Liang Ge, Ping-Chun Hsieh, I-Hong Hou, Shuguang Cui, P.R. Kumar, Amal Ekbal, Nikhil Kundargi

    This demo presents WiMAC, a general-purpose wireless testbed for researchers to quickly prototype a wide variety of real-time MAC protocols for wireless networks. As the interface between the link layer and the physical layer, MAC protocols are often tightly coupled with the underlying physical layer and need to have extremely small latencies. Implementing a new MAC therefore takes a long time; in fact, very few MACs have ever been implemented, even though dozens of new MAC protocols have been proposed. To enable quick prototyping, we employ the separation of mechanism and policy to decompose the functionality of the MAC and PHY layers. Built on this separation, WiMAC decouples the software from the hardware, offering a high degree of function reuse and design flexibility. Hence, our platform not only supports easy cross-layer design but also allows protocol changes on the fly. Following an 802.11-like reference design, we demonstrate that deploying a new MAC protocol on the proposed platform is quick and simple through the implementation of the CSMA/CA and CHAIN protocols.
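
    The core design idea, separating the MAC policy from the mechanisms the platform provides, can be illustrated in a few lines of code. The sketch below is a generic illustration, not WiMAC code: the interface names, the simulated carrier sense, and the backoff constants are all invented.

      # Generic illustration of mechanism vs. policy separation for a MAC layer.
      # Not WiMAC code: interfaces and constants are invented for the example.
      import random

      class FakeMechanism:
          """Stand-in for platform-provided primitives (PHY/hardware facing)."""
          def channel_idle(self):
              return random.random() > 0.3      # simulated carrier sense
          def transmit(self, frame):
              print(f"TX {frame!r}")
          def wait(self, slots):
              pass                              # timing omitted in this sketch

      class CsmaCaPolicy:
          """A MAC *policy* written purely against the mechanism interface."""
          def __init__(self, mech, cw_min=16, cw_max=1024):
              self.mech, self.cw = mech, cw_min
              self.cw_min, self.cw_max = cw_min, cw_max

          def send(self, frame):
              while True:
                  self.mech.wait(random.randrange(self.cw))   # random backoff
                  if self.mech.channel_idle():
                      self.mech.transmit(frame)
                      self.cw = self.cw_min                    # reset on success
                      return
                  self.cw = min(self.cw * 2, self.cw_max)      # busy: back off harder

      CsmaCaPolicy(FakeMechanism()).send("hello MAC")

    Swapping in a different policy class changes the protocol without touching the mechanism layer, which is the kind of reuse and on-the-fly protocol change the abstract describes.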

  • Ali Raza, Yasir Zaki, Thomas Pötsch, Jay Chen, Lakshmi Subramanian

    Modern web pages are very complex; each web page consists of hundreds of objects that are linked from various servers all over the world. While mechanisms such as caching reduce the overall number of end-to-end requests, saving bandwidth and loading time, a large portion of content is still re-fetched despite not having changed. In this demo, we present Extreme Cache, a web caching architecture that enhances the web browsing experience through a smart pre-fetching engine. Our extreme cache tries to predict the rate of change of web page objects in order to bring cacheable content closer to the user.
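
    The abstract does not describe the prediction algorithm; purely as an illustration of the idea, the sketch below estimates each object's change rate from its observed modification history and treats a cached copy as worth refreshing only while it is likely still fresh. The class, fields, and the safety factor are invented for the example.

      # Illustrative only: estimate how often an object changes and decide whether
      # a cached copy is worth prefetching/refreshing. Not the Extreme Cache code.
      import time
      from collections import defaultdict

      class ChangeRatePredictor:
          def __init__(self):
              self.history = defaultdict(list)   # url -> timestamps of observed changes

          def record_change(self, url, ts=None):
              self.history[url].append(ts if ts is not None else time.time())

          def mean_change_interval(self, url):
              ts = sorted(self.history[url])
              if len(ts) < 2:
                  return None                    # not enough data to estimate
              gaps = [b - a for a, b in zip(ts, ts[1:])]
              return sum(gaps) / len(gaps)

          def likely_unchanged(self, url, fetched_at, now=None, safety=0.5):
              """True if the cached copy is probably still valid (worth prefetching)."""
              now = now if now is not None else time.time()
              interval = self.mean_change_interval(url)
              if interval is None:
                  return False
              return (now - fetched_at) < safety * interval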

  • Margus Ernits, Johannes Tammekänd, Olaf Maennel

    We present the Intelligent Training Exercise Environment (i-tee), a fully automated Cyber Defense Competition platform. The main features of i-tee are: automated attacks, automated scoring with immediate feedback using a scoreboard, and background traffic generation. The main advantage of the platform is easy integration into existing curricula and suitability for continuous education as well as on-site training at companies. The platform implements a modular approach, called learning spaces, for building different competitions and hands-on labs. It is highly automated, enabling a single person to run exercises with up to 30 teams on a single server. The platform is publicly available under the MIT license.

  • Matthias Wählisch, Thomas C. Schmidt

    The Resource Public Key Infrastructure (RPKI) allows BGP routers to verify the origin AS of an IP prefix. In this demo, we present a software extension which performs prefix origin validation in the web browser of end users. The browser extension shows the RPKI validation outcome of the web server infrastructure for the requested web domain. It follows the common plug-in concepts and does not require special modifications of the browser software. It operates on live data and helps end users as well as operators to gain better insight into the Internet security landscape.
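
    For readers unfamiliar with RPKI, the route origin validation logic that such an extension surfaces is well defined (RFC 6811) and easy to sketch. The example below is a minimal, self-contained illustration with made-up ROA data, restricted to IPv4; it is not the extension's code.

      # Minimal RFC 6811-style origin validation: classify a (prefix, origin AS)
      # pair as "valid", "invalid", or "not-found" against a set of ROAs.
      import ipaddress

      ROAS = [
          # (ROA prefix, max length, authorized origin AS) -- example data only
          ("192.0.2.0/24", 24, 64496),
          ("198.51.100.0/22", 24, 64497),
      ]

      def validate(prefix, origin_as):
          route = ipaddress.ip_network(prefix)
          covered = False
          for roa_prefix, max_len, roa_as in ROAS:
              roa_net = ipaddress.ip_network(roa_prefix)
              if route.version == roa_net.version and route.subnet_of(roa_net):
                  covered = True                       # a ROA covers this route
                  if route.prefixlen <= max_len and origin_as == roa_as:
                      return "valid"
          return "invalid" if covered else "not-found"

      print(validate("192.0.2.0/24", 64496))    # valid
      print(validate("192.0.2.0/24", 64511))    # invalid (covered, wrong AS)
      print(validate("203.0.113.0/24", 64496))  # not-found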

  • Dan Alistarh, Hitesh Ballani, Paolo Costa, Adam Funnell, Joshua Benjamin, Philip Watts, Benn Thomsen

    We demonstrate an optical switch design that can scale up to a thousand ports with high per-port bandwidth (25 Gbps+) and low switching latency (40 ns). Our design uses a broadcast and select architecture, based on a passive star coupler and fast tunable transceivers. In addition we employ time division multiplexing to achieve very low switching latency. Our demo shows the feasibility of the switch data plane using a small testbed, comprising two transmitters and a receiver, connected through a star coupler.

  • Gianni Antichi, Charalampos Rotsos, Andrew W. Moore

    Despite network monitoring and testing being critical for computer networks, current solutions are both extremely expensive and inflexible. This demo presents OSNT, a community-driven, high-performance, open-source traffic generator and capture system built on top of the NetFPGA-10G board which enables flexible network testing. The platform supports full line-rate traffic generation regardless of packet size across the four card ports, packet capture filtering and packet thinning in hardware, and sub-microsecond time precision in traffic generation and capture, corrected using an external GPS device. Furthermore, it provides software APIs to test the dataplane performance of multi-10G switches, providing a starting point for a number of different test cases. OSNT's flexibility is further demonstrated through the OFLOPS-turbo platform: an integration of OSNT with the OFLOPS OpenFlow switch performance evaluation platform, enabling control and data plane evaluation of 10G switches. This demo showcases the applicability of the OSNT platform to evaluate the performance of legacy and OpenFlow-enabled networking devices, and demonstrates it using commercial switches.

  • Julius Schulz-Zander, Carlos Mayer, Bogdan Ciobotaru, Stefan Schmid, Anja Feldmann, Roberto Riggio

    The quickly growing demand for wireless networks and the numerous application-specific requirements stand in stark contrast to today's inflexible management and operation of WiFi networks. In this paper, we present and evaluate OpenSDWN, a novel WiFi architecture based on an SDN/NFV approach. OpenSDWN exploits datapath programmability to enable service differentiation and fine-grained transmission control, facilitating the prioritization of critical applications. OpenSDWN implements per-client virtual access points and per-client virtual middleboxes to render network functions more flexible and to support mobility and seamless migration. OpenSDWN can also be used to outsource control over the home network to a participatory interface or to an Internet Service Provider.

  • Michael Alan Chang, Brendan Tschaen, Theophilus Benson, Laurent Vanbever
  • Jeongkeun Lee, Joon-Myung Kang, Chaithan Prakash, Sujata Banerjee, Yoshio Turner, Aditya Akella, Charles Clark, Yadi Ma, Puneet Sharma, Ying Zhang

    We present Policy Graph Abstraction (PGA), which expresses network policies and service chain requirements graphically, just as simply as drawing whiteboard diagrams. Different users independently draw policy graphs that can constrain each other. A PGA graph clearly captures user intents and invariants, and thus facilitates the automatic composition of overlapping policies into a coherent policy.

  • Roberto Riggio, Julius Schulz-Zander, Abbas Bradai

    Network Function Virtualization promises to reduce the cost to deploy and to operate large networks by migrating various network functions from dedicated hardware appliances to software instances running on general-purpose networking and computing platforms. In this paper we demonstrate Scylla, a programmable network fabric architecture for enterprise WLANs. The framework supports basic Virtual Network Function lifecycle management functionalities such as instantiation, monitoring, and migration. We release the entire platform under a permissive license for academic use.

  • Balázs Sonkoly, János Czentye, Robert Szabo, Dávid Jocha, János Elek, Sahel Sahhaf, Wouter Tavernier, Fulvio Risso

    End-to-end service delivery often includes transparently inserted Network Functions (NFs) in the path. Flexible service chaining will require dynamic instantiation of both NFs and traffic forwarding overlays. Virtualization techniques in compute and networking, like cloud and Software Defined Networking (SDN), promise such flexibility for service providers. However, patching together existing cloud and network control mechanisms necessarily puts one on top of the other, e.g., OpenDaylight under an OpenStack controller. We designed and implemented a joint cloud and network resource virtualization and programming API. In this demonstration, we show that our abstraction is capable of flexible service chaining control across different technology domains.

  • Ezzeldin Hamed, Hariharan Rahul, Mohammed A. Abdelghany, Dina Katabi

    We present a demonstration of a real-time distributed MIMO system, DMIMO. DMIMO synchronizes transmissions from 4 distributed MIMO transmitters in time, frequency and phase, and performs distributed multi-user beamforming to independent clients. DMIMO is built on top of a Zynq hardware platform integrated with an FMCOMMS2 RF front end. The platform implements a custom 802.11n compatible MIMO PHY layer which is augmented with a lightweight distributed synchronization engine. The demonstration shows the received constellation points, channels, and effective data throughput at each client.

  • Deepak Vasisht, Swarun Kumar, Dina Katabi
  • Dina Papagiannaki

    Welcome to the July issue of Computer Communications Review. Over the past months we have tried to make CCR a resource where our community publishes fresh new ideas, expresses positions on interesting new research directions, and reports back on community activities, like workshops and regional meetings. In addition, we have introduced the student advice column and the column edited by our industrial board, aiming to provide a clearer bridge between scientific practice and technology in commercial products. I am really proud to see all these additions being embraced by the community.

    This issue features three technical papers, two on cloud computing and one on TCP. Our sincerest thanks to all the authors who submit technical contributions to CCR. And my personal thanks to the tireless editorial board that always aims for outstanding quality while providing constructive feedback for the continuous improvement of the submitted manuscripts.

    The editorial zone features three papers as well. One reports on the workshop on Internet economics that took place late last year. The other two cover (i) research challenges in multi-domain network measurement and monitoring, and (ii) the lessons learnt from using RIPE's Atlas platform for measurement research. I hope that both editorials will provide useful insights to the CCR audience.

    Our industrial column features a contribution from Akamai. The authors describe how research in our community has influenced the design of Akamai's content delivery network (CDN), as well as the practical realities they had to deal with to create that bridge between academic output and a scalable, robust content delivery network that serves trillions of requests per day.

    This issue comes one month before the annual SIGCOMM conference, which will take place in London in August. For the past couple of years, SIGCOMM has been the venue where CCR presents its "best of" papers for the previous four issues (July 2014 to April 2015). I am really happy to announce the following two "best of CCR" papers for the year 2014-2015. Technical paper: "Programming Protocol-Independent Packet Processors", by P. Bosshart (Barefoot Networks), D. Daly (Intel), G. Gibb (Barefoot Networks), M. Izzard (Barefoot Networks), N. McKeown (Stanford University), J. Rexford (Princeton University), C. Schlesinger (Barefoot Networks), D. Talayco (Barefoot Networks), A. Vahdat (Google), G. Varghese (Microsoft), and D. Walker (Princeton University). Editorial paper: "A Primer on IPv4 Scarcity", by P. Richter (TU Berlin/ICSI), M. Allman (ICSI), R. Bush (Internet Initiative Japan), and V. Paxson (UC Berkeley/ICSI).

    Congratulations to the authors of the two best papers. I am really looking forward to seeing all of you in London in August.
    Dina Papagiannaki CCR Editor

  • S. Yaw, E. Howard, B. Mumey, M. Wittie

    Given a set of datacenters and groups of application clients, well-connected datacenters can be rented as traffic proxies to reduce client latency. Rental costs must be minimized while meeting application-specific latency needs. Here, we formally define the Cooperative Group Provisioning problem and show it is NP-hard to approximate within a constant factor. We introduce a novel greedy approach and demonstrate its promise through extensive simulation using real cloud network topology measurements and realistic client churn. We find that multi-cloud deployments dramatically increase the likelihood of meeting group latency thresholds with minimal cost increase compared to single-cloud deployments.

    Phillipa Gill
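
    The abstract above does not reproduce the greedy algorithm; as a hedged sketch of the general idea, the code below repeatedly rents the proxy datacenter that satisfies the most not-yet-covered clients per unit cost, stopping when all clients meet their latency thresholds or no proxy helps. The data structures and names are invented, and this is not the paper's algorithm.

      # Simplified cost-effectiveness greedy for renting proxy datacenters.
      # Illustrative only; not the algorithm evaluated in the paper.

      def greedy_provision(clients, proxies, latency, cost, threshold):
          """
          clients:   iterable of client ids
          proxies:   iterable of candidate datacenter ids
          latency:   dict (client, proxy) -> latency via that proxy (ms)
          cost:      dict proxy -> rental cost
          threshold: dict client -> latency bound (ms)
          Returns the set of rented proxies (clients may stay uncovered if infeasible).
          """
          unmet, rented = set(clients), set()
          while unmet:
              best, best_score, best_covered = None, 0.0, set()
              for p in set(proxies) - rented:
                  covered = {c for c in unmet if latency[(c, p)] <= threshold[c]}
                  if covered:
                      score = len(covered) / cost[p]   # clients satisfied per unit cost
                      if score > best_score:
                          best, best_score, best_covered = p, score, covered
              if best is None:
                  break                                # no remaining proxy helps
              rented.add(best)
              unmet -= best_covered
          return rented
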
  • C. Fuerst, M. Rost, S. Schmid

    It is well-known that cloud application performance can critically depend on the network. Over the last few years, several systems have been developed which provide the application with the illusion of a virtual cluster: a star-shaped virtual network topology connecting virtual machines to a logical switch with absolute bandwidth guarantees. In this paper, we debunk some of the myths around the virtual cluster embedding problem. First, we show that the virtual cluster embedding problem is not NP-hard, and present the fast and optimal embedding algorithm VC-ACE for arbitrary datacenter topologies. Second, we argue that resources may be wasted by enforcing star-topology embeddings, and alternatively promote a hose embedding approach. We discuss the computational complexity of hose embeddings and derive the HVC-ACE algorithm. Using simulations we substantiate the benefits of hose embeddings in terms of acceptance ratio and resource footprint.

    Hitesh Ballani
  • H. Ding, M. Rabinovich

    This paper examines several TCP characteristics and their effect on existing passive RTT measurement techniques. In particular, using packet traces from three geographically distributed vantage points, we find relatively low use of TCP timestamps and a significant presence of stretch acknowledgements. While the former simply affects the applicability of some measurement techniques, the latter may in principle affect the accuracy of RTT estimation. Using these insights, we quantify the implications of common methodologies for passive RTT measurement. In particular, we show that, unlike delayed TCP acknowledgements, stretch acknowledgements do not distort RTT estimation.

    Joseph Camp
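
    To make the measurement methodology above concrete, here is a minimal, self-contained sketch of passive RTT estimation from the TCP timestamp option, one of the techniques whose applicability the paper examines: the monitor remembers when each outgoing TSval was first seen and closes a sample when that value comes back as TSecr. Packet capture, parsing, and per-flow demultiplexing are omitted, and the record format is an assumption.

      # Illustrative passive RTT estimation from TCP timestamps at a monitor.
      # Each record: (direction, capture_time_s, tsval, tsecr); parsing omitted.

      def rtt_samples(packets):
          """Yield RTT samples (seconds) for the monitor-to-'out'-side half path."""
          pending = {}                            # outgoing TSval -> capture time
          for direction, t, tsval, tsecr in packets:
              if direction == "out":
                  pending.setdefault(tsval, t)    # remember first use of this TSval
              elif direction == "in" and tsecr in pending:
                  yield t - pending.pop(tsecr)    # echoed value closes the sample

      trace = [
          ("out", 0.000, 100, 0),
          ("in",  0.042, 500, 100),               # echoes TSval 100 -> ~42 ms
          ("out", 0.050, 101, 500),
          ("in",  0.093, 501, 101),               # ~43 ms
      ]
      print([round(s * 1000, 1) for s in rtt_samples(trace)])   # [42.0, 43.0]
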
  • P. Calyam, M. Swany

    The perfSONAR-based Multi-domain Network Performance Measurement and Monitoring Workshop was held on February 20-21, 2014 in Arlington, VA. The goal of the workshop was to review the state of the perfSONAR effort and catalyze future directions by cross-fertilizing ideas and distilling common themes among the diverse perfSONAR stakeholders, which include network operators and managers, end-users, and network researchers. The timing and organization of this second workshop are significant because an increasing number of groups within NSF-supported data-intensive computing and networking programs are dealing with measurement, monitoring, and troubleshooting of multi-domain issues. These groups are forming explicit measurement federations using perfSONAR to address a wide range of issues. In addition, the emergence and wide adoption of new paradigms such as software-defined networking are taking shape to aid the traffic management needs of scientific communities and network operators. Consequently, there are new challenges that need to be addressed for extensible and programmable instrumentation, measurement data analysis, visualization, and middleware security features in perfSONAR. This report summarizes the workshop efforts to bring together diverse groups for delivering targeted short and long talks, sharing the latest advances, and identifying gaps that exist in the community for solving end-to-end performance problems in an effective, scalable fashion.

  • V. Bajpai, S. Eravuchira, J. Schönwälder

    We reflect upon our experience in using the RIPE Atlas platform for measurement-based research. We show how in addition to credits, control checks using rate limits are in place to ensure that the platform does not get overloaded with measurements. We show how the Autonomous System (AS)-based distribution of RIPE Atlas probes is heavily skewed which limits possibilities of measurements sourced from a specific origin-AS. We discuss the significance of probe calibration and how we leverage it to identify load issues in older hardware versions (38.6% overall as of Sep 2014) of probes. We show how performance measurement platforms (such as RIPE Atlas, SamKnows, BISmark and Dasu) can benefit from each other by demonstrating two example use-cases. We also open discussion on how RIPE Atlas deployment can be made more useful by relaying more probe metadata information back to the scientific community and by strategically deploying probes to reduce the inherent sampling bias embedded in probe-based measurement platforms.
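
    As a pointer for readers who have not used the platform, the sketch below creates a small one-off ping measurement through the RIPE Atlas API using the ripe.atlas.cousteau client library. The API key, target, and probe count are placeholders, and the exact client interface should be checked against the library's own documentation; the credit and rate-limit controls discussed above still apply.

      # Illustrative one-off RIPE Atlas ping measurement via the cousteau client.
      # API key and parameters are placeholders; see the library docs for details.
      from datetime import datetime
      from ripe.atlas.cousteau import Ping, AtlasSource, AtlasCreateRequest

      ping = Ping(af=4, target="example.org", description="demo measurement")
      source = AtlasSource(type="area", value="WW", requested=5)   # 5 probes, worldwide

      request = AtlasCreateRequest(
          start_time=datetime.utcnow(),
          key="YOUR_ATLAS_API_KEY",          # placeholder
          measurements=[ping],
          sources=[source],
          is_oneoff=True,
      )
      is_success, response = request.create()
      print(is_success, response)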

  • kc claffy, D. Clark

    On December 10-11, 2014, we hosted the 4th interdisciplinary Workshop on Internet Economics (WIE) at UC San Diego's Supercomputer Center. This workshop series provides a forum for researchers, Internet facilities and service providers, technologists, economists, theorists, policy makers, and other stakeholders to inform current and emerging regulatory and policy debates. The objective of this year's workshop was a structured consideration of whether and how policy-makers should try to shape the future of the Internet. To structure the discussion about policy, we began the workshop with a list of potential aspirations for our future telecommunications infrastructure (a list we had previously collated), and asked participants to articulate an aspiration or fear they had about the future of the Internet, which we summarized and discussed on the second day. The focus on aspirations was motivated by the high-level observation that before discussing regulation, we must agree on the objective of the regulation and why the intended outcome is justified. In parallel, we used a similar format as in previous years: a series of focused sessions, where 3-4 presenters each prepared 10-minute talks on issues in recent regulatory discourse, followed by in-depth discussions. This report highlights the discussions and presents relevant open research questions identified by participants. Slides presented and a copy of this report are available at

  • A. Akella

    I've received several interesting, and varied, questions from students all over the world. Thank you for the warm response! In this issue, I have hand-picked a small subset of questions to answer. Many thanks to Brighten Godfrey (UIUC) and Vyas Sekar (CMU) for contributing their thoughts.

  • B. Davie, C. Diot, L. Eggert, N. McKeown, V. Padmanabhan, R. Teixeira

    As networking researchers, we love to work on ideas that improve the practice of networking. In the early pioneering days of the Internet the link between networking researchers and practitioners was strong; the community was small and everyone knew each other. Not only were there many important ideas from the research community that affected the practice of networking, we were all very aware of them. Today, the networking industry is enormous and the practice of networking spans many network equipment vendors, operators, chip builders, the IETF, data centers, wireless and cellular, and so on. There continue to be many transfers of ideas, but there isn’t a forum to learn about them. The goal of this series is to create such a forum by presenting articles that shine a spotlight on specific examples; not only on the technology and ideas, but also on the path the ideas took to affect the practice. Sometimes a research paper was picked up by chance; but more often, the researchers worked hand-in-hand with the standards community, the open-source community or industry to develop the idea further to make it suitable for adoption.

  • B. Maggs, R. Sitaraman

    This paper peeks under the covers at the subsystems that provide the basic functionality of a leading content delivery network. Based on our experiences in building one of the largest distributed systems in the world, we illustrate how sophisticated algorithmic research has been adapted to balance the load between and within server clusters, manage the caches on servers, select paths through an overlay routing network, and elect leaders in various contexts. In each instance, we first explain the theory underlying the algorithms, then introduce practical considerations not captured by the theoretical models, and finally describe what is implemented in practice. Through these examples, we highlight the role of algorithmic research in the design of complex networked systems. The paper also illustrates the close synergy that exists between research and industry where research ideas cross over into products and product requirements drive future research.
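
    One of the best-known pieces of algorithmic research behind CDN cache assignment is consistent hashing, which keeps object-to-server mappings stable as servers join and leave a cluster. The sketch below is a generic textbook version with virtual nodes, not Akamai's implementation.

      # Textbook consistent hashing with virtual nodes (not Akamai's implementation).
      import bisect
      import hashlib

      class ConsistentHashRing:
          def __init__(self, servers, vnodes=100):
              self.ring = []                           # sorted list of (hash, server)
              for s in servers:
                  for i in range(vnodes):
                      self.ring.append((self._h(f"{s}#{i}"), s))
              self.ring.sort()
              self.keys = [h for h, _ in self.ring]

          @staticmethod
          def _h(value):
              return int(hashlib.md5(value.encode()).hexdigest(), 16)

          def server_for(self, obj):
              """Map an object to the first server clockwise from its hash."""
              idx = bisect.bisect(self.keys, self._h(obj)) % len(self.ring)
              return self.ring[idx][1]

      ring = ConsistentHashRing(["cache-a", "cache-b", "cache-c"])
      print(ring.server_for("/images/logo.png"))
      # Adding or removing a server moves only a small fraction of objects.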

  • Dina Papagiannaki

    Welcome to the April issue of Computer Communications Review, our community's quarterly newsletter. Or maybe workshop? Many may not realize it, but CCR actually operates like a workshop with quarterly deadlines. Every quarter we receive 40-60 submissions that are reviewed by a collective of more than 100 reviewers and handled by our 12-member editorial board, to which I would like to welcome Alberto Dainotti, from CAIDA. Out of all submissions, some technical papers are published in the current issue, while others are given feedback for further improvement and are re-evaluated for a later issue. I cannot thank enough the hard-working editorial board that, quarter after quarter, handles its allocated papers, aiming to provide the best possible feedback. Editorial papers are not peer reviewed; they are reviewed solely by me. They fall into two categories: i) position papers, or ii) workshop reports. My task is to ensure that the positions are clearly expressed and to identify cases where positions are presented through technical arguments, in which case I may engage someone from the editorial board or redirect the paper to the technical track. Fundamentally, CCR is a vehicle to bring our community together and expose interesting, novel ideas as early as possible. And I believe we do achieve this.

    This issue is an example of the above process. We received 36 papers: 32 technical submissions and 4 editorials. We accepted all editorials and 2 of the technical papers, while 10 papers have been recommended for resubmission, with clear recommendations on the changes required. A lot of authors agree that their papers have improved in clarity and technical accuracy through the process of revise-and-resubmit.

    I hope you enjoy the two technical papers, on rate adaptation in 802.11n and multipath routing in wireless sensor networks. Three of the editorials cover workshops and community meetings: 1) the 1st Named Data Networking community meeting, 2) the Dagstuhl seminar on distributed cloud computing, and 3) the 1st Data Transparency Lab workshop. Meeting reports are a wonderful way of tracking the state of the art in specific areas and learning from the findings of the organizers. The last editorial is one of my favorite editorials so far. The authors provide a unique historical perspective on how IP address allocation has evolved since the inception of the Internet, and the implications that our community has to deal with. A very interesting exposition of IP address scarcity, but also a very valuable perspective on how the Internet as a whole has evolved.

    This issue also brings a novelty. We are establishing a new column, edited by Dr. Renata Teixeira from INRIA, which aims to present successful examples of technology transfer from our community to the networking industry. The inaugural example is provided by Dr. Paul Francis, discussing Network Address Translation (NAT). Funny how NAT was first proposed in CCR, and that the non-workshop editorial of this issue also deals with IP scarcity. It is interesting to read Paul's exposition of the events, along with his own reflections on whether what was transferred was what he actually proposed :-).

    With all this, I hope you enjoy the content of this issue, as well as our second column on graduate advice. Finally, I am expecting you all at the best of CCR session of ACM SIGCOMM in London, the SIGCOMM session where we celebrate the best technical paper and the best editorial published by CCR during the past year.
    Dina Papagiannaki CCR Editor

  • L. Kriara, M. Marina

    We consider the link adaptation problem in 802.11n wireless LANs that involves adapting MIMO mode, channel bonding, modulation and coding scheme, and frame aggregation level with varying channel conditions. Through measurement-based analysis, we find that adapting all available 802.11n features results in higher goodput than adapting only a subset of features, thereby showing that holistic link adaptation is crucial to achieve best performance. We then design a novel hybrid link adaptation scheme termed SampleLite that adapts all 802.11n features while being efficient compared to sampling-based open-loop schemes and practical relative to closed-loop schemes. SampleLite uses sender-side RSSI measurements to significantly lower the sampling overhead, by exploiting the monotonic relationship between best settings for each feature and the RSSI. Through analysis and experimentation in a testbed environment, we show that our proposed approach can reduce the sampling overhead by over 70% on average compared to the widely used Minstrel HT scheme. We also experimentally evaluate the goodput performance of SampleLite in a wide range of controlled and real-world interference scenarios. Our results show that SampleLite, while performing close to the ideal, delivers goodput that is 35-100% better than with existing schemes.

    Aline Carneiro Viana
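
    The abstract above does not spell out SampleLite's thresholds; purely to illustrate how a monotonic RSSI-to-setting relationship cuts sampling, the sketch below picks the highest MCS whose (hypothetical) RSSI threshold is met and restricts sampling to the two neighbouring rates. The threshold table and the rate set are invented for the example.

      # Illustrative RSSI-threshold rate selection (not SampleLite itself).
      # Thresholds are invented; a real table would be calibrated per NIC/channel.
      MCS_THRESHOLDS = [            # (MCS index, minimum sender-side RSSI in dBm)
          (7, -55), (6, -58), (5, -62), (4, -66),
          (3, -70), (2, -74), (1, -78), (0, -82),
      ]

      def pick_mcs(rssi_dbm):
          """Highest MCS whose RSSI threshold is satisfied (monotonic lookup)."""
          for mcs, threshold in MCS_THRESHOLDS:
              if rssi_dbm >= threshold:
                  return mcs
          return 0

      def candidate_set(rssi_dbm):
          """Restrict sampling to the chosen MCS and its immediate neighbours."""
          best = pick_mcs(rssi_dbm)
          return sorted({max(best - 1, 0), best, min(best + 1, 7)})

      print(pick_mcs(-60), candidate_set(-60))   # 5 [4, 5, 6]
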
  • S. Sharma, S. Jena

    A Wireless Sensor Network (WSN) consists of low-power sensor nodes, and energy is the main constraint on these nodes. In this paper, we propose a cluster-based multipath routing protocol, which uses clustering and multipath techniques to reduce energy consumption and increase reliability. The basic idea is to reduce the load on the sensor nodes by giving more responsibility to the base station (sink). We have implemented the protocol, compared it with existing protocols, and found that it is more energy-efficient and reliable.

    Joseph Camp
  • P. Richter, M. Allman, R. Bush, V. Paxson

    With the ongoing exhaustion of free address pools at the registries serving the global demand for IPv4 address space, scarcity has become reality. Networks in need of address space can no longer get more address allocations from their respective registries. In this work we frame the fundamentals of the IPv4 address exhaustion phenomenon and the issues connected to it. We elaborate on how the current ecosystem of IPv4 address space has evolved since the standardization of IPv4, leading to the rather complex and opaque scenario we face today. We outline the evolution in address space management as well as address space use patterns, identifying key factors behind the scarcity issues. We characterize the possible solution space to overcome these issues and open the perspective of address blocks as virtual resources, which involves issues such as differentiation between address blocks, the need for resource certification, and issues arising when transferring address space between networks.

  • kc claffy, J. Polterock, A. Afanasyev, J. Burke, L. Zhang

    This report is a brief summary of the first NDN Community Meeting held at UCLA in Los Angeles, California on September 4-5, 2014. The meeting provided a platform for the attendees from 39 institutions across seven countries to exchange their recent NDN research and development results, to debate existing and proposed functionality in security support, and to provide feedback into the NDN architecture design evolution.

  • Y. Coady, O. Hohlfeld, J. Kempf, R. McGeer, S. Schmid

    A distributed cloud, connecting multiple smaller and geographically distributed datacenters, can be an attractive alternative to today's massive, centralized datacenters. A distributed cloud can reduce communication overheads, costs, and latencies by offering nearby computation and storage resources. Better data locality can also improve privacy. In this paper, we revisit the vision of distributed cloud computing, and identify different use cases as well as research challenges. This article is based on the Dagstuhl Seminar on Distributed Cloud Computing, which took place in February 2015 at Schloss Dagstuhl.

  • R. Gross-Brown, M. Ficek, J. Agundez, P. Dressler, N. Laoutaris

    On November 20-21, 2014, Telefonica I+D hosted the Data Transparency Lab ("DTL") Kickoff Workshop on Personal Data Transparency and Online Privacy at its headquarters in Barcelona, Spain. This workshop provided a forum for technologists, researchers, policymakers and industry representatives to share and discuss current and emerging issues around privacy and transparency on the Internet. The objective of this workshop was to kick-start the creation of a community of research, industry, and public interest parties that will work together towards the following objectives: the development of methodologies and user-friendly tools to promote transparency and empower users to understand online privacy issues and consequences; the sharing of datasets and research results; and the support of research through grants and the provision of infrastructure to deploy tools. With these activities, the DTL community aims to improve our understanding of technical, ethical, economic and regulatory issues related to the use of personal data by online services. It is hoped that successful execution of such activities will help sustain a fair and transparent exchange of personal data online. This report summarizes the presentations, discussions and questions that resulted from the workshop.

  • Bruce Davie, Christophe Diot, Lars Eggert, Nick McKeown, Venkat Padmanabhan, Renata Teixeira

    As networking researchers, we love to work on ideas that improve the practice of networking. In the early pioneering days of the Internet the link between networking researchers and practitioners was strong; the community was small and everyone knew each other. Not only were there many important ideas from the research community that affected the practice of networking, we were all very aware of them. Today, the networking industry is enormous and the practice of networking spans many network equipment vendors, operators, chip builders, the IETF, data centers, wireless and cellular, and so on. There continue to be many transfers of ideas, but there isn’t a forum to learn about them. The goal of this series is to create such a forum by presenting articles that shine a spotlight on specific examples; not only on the technology and ideas, but also on the path the ideas took to affect the practice. Sometimes a research paper was picked up by chance; but more often, the researchers worked hand-in-hand with the standards community, the open-source community or industry to develop the idea further to make it suitable for adoption. Their story is here.

    We are seeking CCR articles describing interesting cases of “research affecting the practice,” including ideas transferred from research labs (academic or industrial) that became:

      • Commercial products
      • Internet standards
      • Algorithms and ideas embedded into new or existing products
      • Widely used open-source software
      • Ideas deployed first by startups, by existing companies, or by distribution of free software
      • Communities built around a toolbox, language, or dataset

    We also welcome stories of negative experiences, or ideas that seemed promising but ended up not taking off. Paul Francis has agreed to start this editorial series. Enjoy it!
    Bruce Davie, Christophe Diot, Lars Eggert, Nick McKeown, Venkat Padmanabhan, and Renata Teixeira SIGCOMM Industrial Liaison Board

  • Aditya Akella

    As some of you may know, I gave a talk at CoNEXT 2014 titled "On future-proofing networks" (see [1] for slides). While most of the talk was focused on my past research projects, I spent a bit of time talking about the kind of problems I like to work on. I got several interesting questions about the latter, both during the talk and in the weeks following the talk. Given this, I thought I would devote this column (and perhaps parts of future columns) to putting my ideas on problem selection into words. I suspect what I write below will generate more questions than answers in your mind. Don’t hold your questions back, though! Write to me at!

  • Paul Francis

    In January of 1993, Tony Eng and I published in CCR the first paper to propose Network Address Translation (NAT), based on work done the previous summer during Tony's internship with me at Bellcore. Early in 1994, according to Wikipedia, development was started on the PIX (Private Internet Exchange) firewall product by John Mayes, which included NAT as a key feature. In May of 1994 the first RFC on NAT was published (RFC 1631). PIX was a huge success, and was bought by Cisco in November of 1995.

    I was asked by the SIGCOMM Industrial Liaison Board to write the first of a series of articles under the theme “Examples of Research Affecting the Practice of Networking.” The goal of the series is to help find ways for the research community to increase its industrial impact. I argued with the board that NAT isn't a very good example, because I don't think the lessons learned apply very well to today's environment. Better to start with a contemporary example like OpenFlow. Why isn’t NAT a good example? It's because I, as a researcher, had nothing to do with the success of NAT, and the reasons for the success of NAT had nothing to do with the reason I developed NAT in the first place. Let me explain.

    I conceived of NAT as a solution to the expected depletion of the IPv4 address space. Nobody bought a PIX, however, because they wanted to help slow down the global consumption of IP addresses. People bought PIX firewalls because they needed a way to connect their private networks to the public Internet. At the time, it was a relatively common practice for people to assign any random unregistered IP address to private networking equipment or hosts. Picking a random address was much easier than, for instance, going to a regional Internet registry to obtain addresses, especially when there was no intent to connect the private network to the public Internet. This obviously became a problem once someone using unregistered private addresses wished to connect to the Internet. The PIX firewall saved people the trouble of trying to obtain an adequate block of IP addresses, and of having to renumber networks in order to connect to the Internet if they had already been using unregistered addresses. Indeed, PIX was conceived by John Mayes as an IP variant of the telephone PBX (Private Branch Exchange), which allows private phone systems to operate with their own private numbers, often needing only a single public phone number.

    The whole NAT episode, seen from my point of view at the time, boils down to this. I published NAT in CCR (thanks to Dave Oran, who was editor at the time and allowed it). For me, that was the end of it. Some time later, I was contacted by Kjeld Egevang of Cray Communications, who wanted to write an RFC on NAT so that they could legitimize their implementation of NAT. So I helped a little with that and lent my name to the RFC. (In those days, all you needed to publish an RFC was the approval of one guy, Jon Postel!) Next thing I knew, NAT was everywhere. Given that the problem that I was trying to solve and the problem that PIX solved are different, there is in fact no reason to think that John Mayes or Kjeld Egevang got the idea for NAT from the CCR paper.

    So what would the lesson learned for researchers today be? This: solve an interesting problem with no business model, publish a paper about it, and hope that somebody uses the same idea to solve some problem that does have a business model. Clearly not a very interesting lesson.

    I agree with the motivation of this article series. It is very hard to have industrial impact in networking today. The last time I tried was five or six years ago, when I made a serious effort to turn ViAggre (NSDI’09) into an RFC. Months of effort yielded nothing, and perhaps rightly so. I hope others, especially others with more positive recent results, will contribute to this series.

  • Dina Papagiannaki

    Happy New Year, and one more issue of CCR in your inbox or your mailbox. At CCR, we are beginning the new year with a lot of energy and new content that we would like to establish as mainstream in our publication. Starting in January 2015, we are introducing a new column, edited by Prof. Aditya Akella from the University of Wisconsin-Madison. Its goal is to provide research and professional advice to our ever-growing community. Prof. Akella describes his intentions for the column in his own editorial. I sincerely hope that this new column will be tremendously successful and will help many CCR readers navigate academic and research directions or career choices.

    This issue contains one technical paper, which looks into the tail loss recovery mechanisms of TCP, and two editorials. The first editorial is an interesting overview of how Internet Exchange Points have evolved in Europe and the U.S.A. The authors provide technical and business-related reasons for the observed evolution and outline the issues that would require more attention in the future. The second editorial is what I promised in my October 2014 editor's note: Dr. George Varghese has provided CCR with an editorial note that captures his thinking around what he calls "confluences", which was presented during his SIGCOMM keynote speech. Reading his editorial literally brought me back to Chicago and the auditorium where George received his SIGCOMM award. I find the concept of "confluence" very important. Finding such confluences is not easy, but when it happens, research becomes fun, exciting, and certainly far easier to motivate and transfer to actual products. I do hope that PhD candidates try to apply George's framework as they search for their thesis topics.

    With all this, I wanted to wish you a very happy and productive 2015. We are always looking forward to your contributions!

  • M. Rajiullah, P. Hurtig, A. Brunstrom, A. Petlund, M. Welzl

    Interactive applications do not require more bandwidth to go faster; instead, they require less latency. Unfortunately, the current design of transport protocols such as TCP limits possible latency reductions. In this paper we evaluate and compare different loss recovery enhancements that fight tail loss latency. We consider the two recently proposed mechanisms "RTO Restart" (RTOR) and "Tail Loss Probe" (TLP), as well as a new mechanism, TLPR, that applies the logic of RTOR to the TLP timer management. The results show that the relative performance of RTOR and TLP when tail loss occurs is scenario dependent, but TLP has potentially larger gains. The TLPR mechanism reaps the benefits of both approaches and shows the best performance in most scenarios.

    Joel Sommers
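
    To make the timer logic above concrete, the sketch below contrasts classic RTO arming with the RTOR idea of deducting the time the oldest outstanding segment has already waited, and shows a TLP-style probe timeout of roughly two SRTTs. The constants and corner cases in the paper and the relevant IETF documents differ from this simplification, so treat it only as a sketch.

      # Simplified timer arithmetic for RTO Restart (RTOR) and a TLP-style probe
      # timeout. Constants and corner cases are simplified; see the paper/RFCs.

      def classic_rto_timer(rto):
          """Classic behaviour: re-arm a full RTO when a new ACK arrives."""
          return rto

      def rtor_timer(rto, now, oldest_outstanding_sent_at):
          """RTOR: subtract the time the oldest unacked segment has already waited."""
          elapsed = now - oldest_outstanding_sent_at
          return max(rto - elapsed, 0.0)

      def tlp_probe_timeout(srtt, min_pto=0.01):
          """TLP-style probe timer: roughly two SRTTs, floored at a minimum value."""
          return max(2 * srtt, min_pto)

      # Example: RTO = 300 ms, oldest segment already outstanding for 120 ms.
      print(rtor_timer(0.300, now=1.120, oldest_outstanding_sent_at=1.000))  # ~0.18 s
      print(tlp_probe_timeout(srtt=0.040))                                   # 0.08 s
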
  • N. Chatzis, G. Smaragdakis, A. Feldmann, W. Willinger

    The recently launched initiative by the Open-IX Association (OIX) to establish the European-style Internet eXchange Point (IXP) model in the US suggests an intriguing strategy to tackle a problem that some Internet stakeholders in the US consider to be detrimental to their business; i.e., a lack of diversity in available peering opportunities. We examine in this paper the cast of Internet stakeholders that are bound to play a critical role in determining the fate of this Open-IX effort. These include the large content and cloud providers, CDNs, Tier-1 ISPs, the well-established and some of the newer commercial datacenter and colocation companies, and the largest IXPs in Europe. In particular, we comment on these different parties’ current attitudes with respect to public and private peering and discuss some of the economic arguments that will ultimately determine whether or not the currently pursued strategy by OIX will succeed in achieving the main OIX-articulated goal – a more level playing field for private and public peering in the US such that the actual demand and supply for the different peering opportunities will be reflected in the cost structure.
