Computer Communication Review: Papers

  • Olivier Bonaventure
    As announced in the previous issue, this is the last issue of Computer Communication Review to be printed on paper. It could become a collector's item in a few years, so keep it in a safe place once you have read it, of course.
    Starting from the July 2016 issue, CCR will only be available online. The papers will be archived in the ACM Digital Library, and we are exploring other delivery methods to improve your online reading experience. We hope that having an online publication will allow us to better serve the community.
    This issue contains three technical contributions and two editorials. “On the Interplay of Link-Flooding Attacks and Traffic Engineering” discusses a specific type of denial-of-service attack where an attacker tries to disconnect some targets by overloading key links in their neighborhood. “Attacking NTP's Authenticated Broadcast Mode” analyses the security problems that can occur when the Network Time Protocol (NTP) is used in broadcast mode. “Paxos Made Switch-y” proposes an implementation of the Paxos distributed consensus protocol in P4. The first editorial, “Global Measurements: Practice and Experience (Report on Dagstuhl Seminar #16012)”, summarizes the lessons learned from a recent workshop on global Internet measurements. The second editorial, “Towards Considering Relationships between Values and Networks”, looks at the interactions between human rights and the technology that we develop. It reminds us that when we decide to carry out research on a given topic, our research results may have a broader impact than simply a series of papers published in conference proceedings, journals or online libraries. Some of our work can influence, in one direction or another, the evolution of our society, and some of our design choices may have a huge impact in the long term. I encourage you to read this editorial and then take some time to think about your ongoing work and the impact that it could have on values such as human rights.
    Repeatability, Replicability & Reproducibility 
    Scientific papers such as those published in CCR are expected to contain enough information to allow other researchers to obtain similar results. This is what differentiates scientific publications from blog posts or articles that appear in trade magazines.
    In practice, for experimental papers, describing all the experiments in enough detail to ensure that they can be completely reproduced can be challenging given our page limits.
    The ACM publications board recently discussed this problem and came up with an interesting classification that applies to experimental papers. This classification provides precise definitions for three words, Repeatability, Replicability and Reproducibility, which non-native English speakers like me might otherwise treat as synonyms.
    The first level is Repeatability. A measurement described in an article is considered to be repeatable if the same team can obtain the same results with the same setup in multiple trials. For an experimental paper, this implies that the software used for the experiment produces the same results multiple times. This is the basic level and we expect that all CCR papers are repeatable. 
    The second level is Replicability. An article is considered to be replicable if a team other than the authors of the paper can obtain the same results as those stated in the paper by using the same software, datasets, etc. as those used for the paper. Replication of research results is obviously facilitated if the artifacts used to write the paper are available. The ACM Digital Library provides permanent storage for all the papers published in CCR and our conferences. In addition to storing PDF versions of the articles and the associated metadata, it is now possible to associate artifacts with each published article. These artifacts contain additional material related to the article, such as datasets, proofs for some theorems, multimedia sequences, software (source code or binaries), and so on. These artifacts are important to ease the replication and the reproduction of our published research results. Some papers already include links to the authors' web pages for some of these artifacts. However, these links are rarely permanent and they often disappear after a few months or years. Starting with this issue of CCR, the authors of accepted papers will be encouraged to provide artifacts that will be linked to the paper in the ACM Digital Library. Two papers published in this issue already provide such artifacts.
    The third level is Reproducibility. An article is considered to be reproducible if an independent group can implement the solution described in a paper and obtain similar results as those described in the paper without using the paper's artifacts. Reproducing an experimental paper is not a simple job since it often requires an engineering effort to implement the software used for all experiments. However, it is a very important step in the validation of new scientific results. As a community, we do not frequently encourage the reproduction of previous articles since we usually focus on original results. I believe that we could also learn a lot from articles that reproduce important results. I hope that future CCR issues will contain such papers. 
    Last year’s reviewers 
    CCR heavily depends on reviewers who agree to spend time commenting on submitted papers. Their feedback is often very detailed and it clearly contributes to the quality of the papers that you read. While preparing this editorial, I checked the submission site and found that last year 150 members of our community agreed to review one or more papers for CCR: Cedric Adjih, Mohamed Ahmed, Mark Allman, Luigi Atzori, Brice Augustuin, Ihsan Ayyub Qazi, Jingwen Bai, Aruna Balasubramanian, Nicola Baldo, Sujata Banerjee, Theophilus Benson, Robert Beverly, Nevil Brownlee, Ed Bugnion, Giovanna Carofiglio, Antonio Carzaniga, Pedro Casas, Kai Chen, Chih-Chuan Cheng, David Choffnes, Antonio Cianfrani, Jon Crowcroft, Italo Cunha, Alberto Dainotti, Lara Deek, Shuo Deng, Luca Deri, Xenofontas Dimitropoulos, Ning Ding, Yongsheng Ding, Nandita Dukkipati, Alessandro Finamore, Davide Frey, Timur Friedman, Xinwen Fu, Erol Gelenbe, Aaron Gember-Jacobson, Minas Gjoka, Lukasz Golab, Andrea Goldsmith, Sergey Gorbunov, Tim Griffin, Arjun Guha, Saikat Guha, Deniz Gunduz, Chuanxiong Guo, Berk Gurakan, Gonna Gursun, Hamed Haddadi, Emir Halepovic, Sangjin Han, David Hay, Oliver Hohlfeld, Shengchun Huang, Asim Jamshed, R.C. Jin, Abdul Kabbani, Michalis Kallitsis, Naga Katta, Ethan Katz-Bassett, Eric Keller, Manjur Kolhar, Balachander Krishnamurthy, Kun Tan, Mirja Kuhlewind, Anh Le, Jungwoo Lee, Zhenhua Liu, Matthew Luckie, Sajjad Ahmad Madani, Olaf Maennel, John Maheswaran, Petri Mahonen, Saverio Mascolo, Deepak Merugu, Jelena Mirkovic, Vishal Misra, Radhika Mittal, Tal Mizrahi, Amitav Mukherjee, Dragos Niculescu, Nick Nikiforakis, Dave Oran, Chiara Orsini, Patrick P. C. Lee, Christos Papadopoulos, Dimitris Papadopoulos, Craig Partridge, Peter Peresini, Ben Pfaff, Guillaume Pierre, David Plonka, Ingmar Poese, Lucian Popa, Ihsan Qazi, Zafar Qazi, Feng Qian, Costin Raiciu, Bhaskaran Raman, Fernando Ramos, Ashwin Rao, Ravishankar Ravindran, Mark Reitblatt, James Roberts, Franziska Roesner, Dario Rossi, Michele Rossi, Mario Sanchez, Stuart Schechter, Fabian Schneider, Julius Schulz-Zander, Sayandeep Sen, Soumya Sen, Zubair Shafiq, Craig Shue, Georgos Siganos, Georgios Smaragdakis, Joel Sommers, Alex Sprintson, Stephen Strowes, Srikanth Sundaresan, Muhammad Talha Naeem Qureshi, Vamsi Talla, Boon Thau Loo, Brian Trammel, Martino Trevisan, Narseo Vallina-Rodriguez, Roland van Rijswijk-Deij, Matteo Varvello, Aravindan Vijayaraghavan, Stefano Vissicchio, Ashish Vulimiri, Mythili Vutukuru, Nick Weaver, Michael Welzl, James Westall, Erik Wilde, Walter Willinger, Craig Wills, Rolf Winter, Bernard Wong, Wenfei Wu, Matthias Wählisch, Di Xie, Teck Yoong Chai, Yan Zhang and Haitao Zhao.
    As you can see from this list, producing CCR relies on the efforts of a large number of members of our community. These reviewers were selected by our Associate Editors, who usually serve for a period of three years. Dr. Hitesh Ballani has finished his tenure. I would like to thank him for all the efforts he put into handling CCR papers, and I welcome Prof. Costin Raiciu from University Politehnica of Bucharest (Romania), who joins our editorial board.
  • Dimitrios Gkounis, Vasileios Kotronis, Christos Liaskos, Xenofontas Dimitropoulos

    Link-flooding attacks have the potential to disconnect even entire countries from the Internet. Moreover, newly proposed indirect link-flooding attacks, such as “Crossfire”, are extremely hard to expose and, subsequently, mitigate effectively. Traffic Engineering (TE) is the network’s natural way of mitigating link overload events, balancing the load and restoring connectivity. This work poses the question: Do we need a new kind of TE to expose an attack as well? The key idea is that a carefully crafted, attack-aware TE could force the attacker to follow improbable traffic patterns, revealing his target and his identity over time. We show that both existing and novel TE modules can efficiently expose the attack, and study the benefits of each approach. We implement defense prototypes using simulation mechanisms and evaluate them extensively on multiple real topologies.

    Katerina Argyraki
  • Aanchal Malhotra, Sharon Goldberg

    We identify two attacks on the Network Time Protocol (NTP)’s cryptographically-authenticated broadcast mode. First, we present a replay attack that allows an on-path attacker to indefinitely stick a broadcast client to a specific time. Second, we present a denial-of-service (DoS) attack that allows an off-path attacker to prevent a broadcast client from ever updating its system clock; to do this, the attacker sends the client a single malformed broadcast packet per query interval. Our DoS attack also applies to all other NTP modes that are ‘ephemeral’ or ‘preemptable’ (including manycast, pool, etc). We then use network measurements to give evidence that NTP’s broadcast and other ephemeral/preemptable modes are being used in the wild. We conclude by discussing why NTP’s current implementation of symmetric-key cryptographic authentication does not provide security in broadcast mode, and make some recommendations to improve the current state of affairs.

    Alberto Dainotti
  • Huynh Tu Dang, Marco Canini, Fernando Pedone, Robert Soulé

    The Paxos protocol is the foundation for building many fault-tolerant distributed systems and services. This paper posits that there are significant performance benefits to be gained by implementing Paxos logic in network devices. Until recently, the notion of a switch-based implementation of Paxos would have been a daydream. However, new flexible hardware is on the horizon that will provide the customizable packet processing pipelines needed to implement Paxos. While this new hardware is still not readily available, several vendors and consortia have made the programming languages that target these devices public. This paper describes an implementation of Paxos in one of those languages, P4. Implementing Paxos provides a critical use case for P4, and will help drive the requirements for data plane languages in general. In the long term, we imagine that consensus could someday be offered as a network service, just as point-to-point communication is provided today.
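    For readers unfamiliar with the protocol, the sketch below shows, in plain Python rather than the paper's P4, the acceptor logic that a switch-based implementation would move into the packet processing pipeline; it is a minimal illustration of standard Paxos, not code from the paper.

        # Minimal single-acceptor sketch of Paxos phases 1 and 2 (illustrative only).
        class Acceptor:
            def __init__(self):
                self.promised_round = -1   # highest round promised so far
                self.accepted_round = -1   # round of the last accepted value
                self.accepted_value = None

            def on_prepare(self, rnd):
                # Phase 1: promise to ignore proposals from lower rounds.
                if rnd > self.promised_round:
                    self.promised_round = rnd
                    return ("promise", rnd, self.accepted_round, self.accepted_value)
                return ("nack", rnd)

            def on_accept(self, rnd, value):
                # Phase 2: accept the value unless a higher round was already promised.
                if rnd >= self.promised_round:
                    self.promised_round = rnd
                    self.accepted_round = rnd
                    self.accepted_value = value
                    return ("accepted", rnd, value)
                return ("nack", rnd)

    In a switch, the same compare-and-store logic is expressed as match-action tables and register accesses rather than as a Python class.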

    Matteo Varvello
  • Carsten Orwat, Roland Bless

    Many technical systems of the Information and Communication Technology (ICT) sector enable, structure and/or constrain social interactions. Thereby, they influence or implement certain values, including human rights, and affect or raise conflicts among values. The ongoing development toward an “Internet of everything” is likely to lead to further value conflicts. This trend illustrates that a better understanding of the relationships between social values and networks is urgently needed, because it is largely unknown what values lie behind protocols, design principles, or technical and organizational options of the Internet. This paper focuses on the complex steps of realizing human rights in Internet architectures and protocols as well as in Internet-based products and services. Besides direct implementation of values in Internet protocols, there are several other options that can indirectly contribute to realizing human rights via political processes and market choices. Eventually, a better understanding of what values can be realized by networks in general, what technical measures may affect certain values, and where complementary institutional developments are needed may lead toward a methodology for considering technical and institutional systems together.

  • Vaibhav Bajpai, Arthur W. Berger, Philip Eardley, Jörg Ott, Jürgen Schönwälder

    This article summarises a 2.5-day Dagstuhl seminar on Global Measurements: Practice and Experience held in January 2016. This seminar was a follow-up to the seminar on Global Measurement Frameworks held in 2013, which focused on the development of global Internet measurement platforms and associated metrics. The second seminar aimed at discussing the practical experience gained with building these global Internet measurement platforms. It brought together people who are actively involved in the design and maintenance of global Internet measurement platforms and who do research on the data delivered by such platforms. Researchers in this seminar have used data derived from global Internet measurement platforms in order to manage networks or services or as input for regulatory decisions. The entire set of presentations delivered during the seminar is publicly available at [1].

  • Aditya Akella

    Dear students: This edition of the Student Mentoring Column focuses on various testbeds (for wired networking research) and datasets. The questions below don't provide comprehensive coverage of either topic; as such, we may revisit them in future editions. I also hope to talk about wireless testbeds and datasets in a future column.
    I got plenty of help in preparing this edition. In particular, many thanks to Aaron Gember-Jacobson (UW-Madison), Brighten Godfrey (UIUC), Ethan Katz-Bassett (USC), and Vyas Sekar (CMU).

  • Dina Papagiannaki

    Welcome to the January issue of CCR. This issue marks the beginning of a new year - 2016 - but also the end of my tenure as editor of Computer Communications Review. The new CCR editor will be Prof. Olivier Bonaventure, from University of Louvain, in Belgium.

    The past three years have been a true learning experience for me. Not only because they allowed me to experience the energy behind a newsletter like CCR, but also because I was given the opportunity to interact with a much broader part of our community. I got to read articles that I would probably not have read otherwise, and to get excited by the increasing number of opportunities that computing and computer networks put at our disposal every day. Networks are the cornerstone of day-to-day discovery, and are an indispensable component of our societies. I am glad that our community continues to innovate and to broaden its reach to encompass wired and wireless networks, social networks, virtual infrastructures, and data centers, while thinking about specific use cases of the underlying capabilities.

    These past three years have also given me a completely new perspective on what it takes to run a professional society based on volunteers. I have attended the SIGCOMM executive committee meetings and realized the tremendous amount of work that happens behind the scenes and that underlies the success of each one of our conferences. The executive committee meetings happen monthly and I have never seen an agenda with fewer than 5-10 items. All the committee members continuously think about how to improve the benefits to our community's members and how to make our conferences exciting, informative, and fun venues where not only collaborations but also friendships form for life.

    The committee has been further working on a number of projects to encourage participation from under-represented areas, create and archive educational material, encourage good research practices, etc. I would greatly encourage all PhD students to go through the executive committee meeting notes that are publicly available online. Through my participation in those meetings I have come to respect and admire (even more than before) the members of the executive committee for their dedication and unconditional service. I will miss our monthly interactions, Keshav, Renata, Jorg, Hamed, Yashar, Olivier, Bruce, and Bruce.

    I am very proud to have had the opportunity to be part of that team and to have effected some of the changes in our conferences' practices. I am particularly excited that the composition of our award committees is now made public, offering increased transparency around our most cherished processes that acknowledge outstanding scientific contributions. I am also quite excited about a recent change by which our SIG conferences will make public the list of papers nominated for best paper awards, beyond the winning one. Networking research is highly selective, and I feel that our community could certainly use a little more recognition in its day-to-day work.

    As I have said in previous issues, CCR would not be possible without the work of a very large number of volunteers. It starts with the editorial board, but it does not end there. Tens of reviewers are involved in the review process of CCR every quarter. Without the work of all these volunteers CCR would not be the same. I want to thank with all my heart all the associate editors I have had the pleasure to work with during the past three years. Special thanks to Prof. Augustin Chaintreau, who is ending his tenure at CCR with this issue. I am also very pleased with the introduction of the ILB column and the student mentoring column, which I believe have given freshness to CCR. My deepest thanks to the Industrial Liaison Board and to Prof. Akella, from the University of Wisconsin.

    Maybe one of the most surprising elements of my tenure is how much I enjoyed reading the editorial submissions. I have to commend all the authors of editorial submissions. The workshop reports take so much work, but they made me feel I was attending venues thousands of miles away. Position papers made me think "why not" and "what if". Taking time out of busy schedules to put one's thoughts on paper is a task that is not to be underestimated. My deepest gratitude to all the authors of editorial submissions.

    Finally, thanks to all the authors of technical submissions who continuously advance the state of the art in computer networking. I have seen papers mature through the revise-and-resubmit process, and submissions addressing important problems through clear, practical solutions. Some of these works have been presented at ACM SIGCOMM, always attracting attention and follow-up work.

    With that, I invite you to read the current issue. It features four editorials, two of which present the reports for the 7th Workshop on Active Internet Measurements (AIMS-7) and the 2nd Named Data Networking Community meeting (NDNcomm). The third editorial looks at an alternative information-centric architecture for the Internet, while the last one presents a really nice exposition of how people think about network neutrality in different countries, and possible ways to get one's head around the topic. I really liked seeing the many different views on network neutrality as instantiated across different countries. Our technical papers cover open networking and SDN, middleboxes, IXPs, and ways to create an IP geolocation database.

    I hope you enjoy this first issue of CCR for 2016. Olivier, the best of luck in this new challenge and I am looking forward to a new era in CCR's history!

  • A. Panda, M. McCauley, A. Tootoonchian, J. Sherry, T. Koponen, S. Ratnasamy, S. Shenker

    With the increasing prevalence of middleboxes, networks today are capable of doing far more than merely delivering packets. In fact, to realize their full potential for both supporting innovation and generating revenue, we should think of carrier networks as service-delivery platforms. This requires providing open interfaces that allow third parties to leverage carrier-network infrastructures in building global-scale services. In this position paper, we take the first steps towards making this vision concrete by identifying a few such interfaces that are both simple-to-support and safe-to-deploy (for the carrier) while being flexibly useful (for third parties).

    David Choffnes
  • Y. Lee, H. Park, Y. Lee

    In this paper, we propose an IP geolocation DB creation method based on crowd-sourced Internet broadband performance measurements tagged with locations, and present an IP geolocation DB based on 7 years of Internet broadband performance data in Korea. Compared with other commercial IP geolocation DBs, our crowd-sourced IP geolocation DB shows increased accuracy with fine-grained granularity. We confirm that the low accuracy of commercial IP geolocation DBs mainly results from selecting a single representative location for a large IP block from the Whois registry DB, parsing city names in a naive way, and resolving the wrong geolocation coordinates. We also found that the geographic location of IP blocks has continuously changed but has been stable. Although our IP geolocation DB is limited to Korea, the 32 million broadband performance test records over 7 years provide wide coverage as well as fine-grained accuracy.

    Fabian Bustamante
  • R. Kloti, B. Ager, V. Kotronis, G. Nomikos, X. Dimitropoulos

    Internet eXchange Points (IXPs) are core components of the Internet infrastructure where Internet Service Providers (ISPs) meet and exchange traffic. During the last few years, the number and size of IXPs have increased rapidly, driving the flattening and shortening of Internet paths. However, understanding the present status of the IXP ecosystem and its potential role in shaping the future Internet requires rigorous data about IXPs, their presence, status, participants, etc. In this work, we perform the first cross-comparison of three well-known publicly available IXP databases, namely PeeringDB, Euro-IX, and PCH. A key challenge we address is linking IXP identifiers across databases maintained by different organizations. We find different AS-centric versus IXP-centric views provided by the databases as a result of their data collection approaches. In addition, we highlight differences and similarities w.r.t. IXP participants, geographical coverage, and co-location facilities. As a side-product of our linkage heuristics, we make publicly available the union of the three databases, which includes 40.2% more IXPs and 66.3% more IXP participants than the commonly-used PeeringDB. We also publish our analysis code to foster reproducibility of our experiments and to offer preliminary insights into the accuracy of the union dataset.

    Fabián E. Bustamante
  • T. Lukovszki, M. Rost, S. Schmid

    The virtualization and softwarization of modern computer networks offers new opportunities for the simplified management and flexible placement of middleboxes such as firewalls and proxies. This paper initiates the study of algorithmically exploiting the flexibilities present in virtualized and software-defined networks. Specifically, we are interested in the initial as well as the incremental deployment of middleboxes. We present a deterministic O(log(min{n, κ})) approximation algorithm for n-node computer networks, where κ is the middlebox capacity. The algorithm is based on optimizing over a submodular function which can be computed efficiently using a fast augmenting-path approach. The derived approximation bound is optimal: the underlying problem is computationally hard to approximate within sublogarithmic factors, unless P = NP. We additionally present an exact algorithm based on integer programming, and complement our formal analysis with simulations. In particular, we consider the number of middleboxes used and highlight the benefits of the approximation algorithm in incremental deployments. Our approach also finds interesting applications, e.g., in the context of incremental deployment of software-defined networks.
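    The abstract does not spell out the algorithm, but the flavor of logarithmic-factor placement can be conveyed by a generic greedy covering heuristic, sketched below in Python. This is my own illustration under simplified assumptions (each candidate location hosts at most one middlebox serving at most κ demands), not the paper's augmenting-path algorithm.

        # Illustrative greedy covering sketch: place middleboxes so that every demand
        # node is served by some middlebox, at most `kappa` demands per box.
        # Generic greedy covering of this kind yields logarithmic approximation factors;
        # it is NOT the paper's algorithm, only an illustration of the problem flavor.
        def greedy_place(candidates, demands, kappa):
            """candidates: {location: set of demand nodes that location can serve}."""
            uncovered = set(demands)
            remaining = dict(candidates)          # locations not yet used
            placement = {}                        # location -> demands assigned to it
            while uncovered and remaining:
                # Pick the location that can serve the most still-uncovered demands.
                loc = max(remaining, key=lambda c: len(remaining[c] & uncovered))
                gain = remaining.pop(loc) & uncovered
                if not gain:
                    break                         # nothing left that can be served
                assigned = set(sorted(gain)[:kappa])   # respect per-box capacity
                placement[loc] = assigned
                uncovered -= assigned
            return placement

        demo = greedy_place({"a": {1, 2, 3}, "b": {3, 4}}, demands={1, 2, 3, 4}, kappa=2)
        print(demo)   # -> {'a': {1, 2}, 'b': {3, 4}} (two middleboxes suffice here)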

    Fabián E. Bustamante
  • L. Schiff, S. Schmid, P. Kuznetsov

    Control planes of forthcoming Software-Defined Networks (SDNs) will be distributed: to ensure availability and fault-tolerance, to improve load-balancing, and to reduce overheads, modules of the control plane should be physically distributed. However, in order to guarantee consistency of network operation, actions performed on the data plane by different controllers may need to be synchronized, which is a nontrivial task. In this paper, we propose a synchronization framework for control planes based on atomic transactions, implemented in-band, on the data-plane switches. We argue that this in-band approach is attractive as it keeps the failure scope local and does not require additional out-of-band coordination mechanisms. It allows us to realize fundamental consensus primitives in the presence of controller failures, and we discuss their applications for consistent policy composition and fault-tolerant control planes. Interestingly, by using part of the data plane configuration space as a shared memory and leveraging the match-action paradigm, we can implement our synchronization framework in today’s standard OpenFlow protocol, and we report on our proof-of-concept implementation.

    Katerina Argyraki
  • D. Trossen, A. Sathiaseelan, J. Ott

    Enabling universal Internet access has been recognized as a key issue to enabling sustained economic prosperity, evidenced by the myriad of initiatives in this space. However, the existing Internet architecture is seriously challenged to ensure universal service provisioning at economically sustainable price points, largely due to the costs associated with providing services in a perceived always-on manner. This paper puts forth our vision to provide global access to the Internet through a universal communication architecture that combines two emerging paradigms, namely that of Information Centric Networking (ICN) and Delay Tolerant Networking (DTN). The decoupling in space and time, achieved through these underlying paradigms, is key to aggressively widen the connectivity options and provide flexible service models beyond what is currently pursued in the game around universal service provisioning. In this paper, we provide an outlook on the main concepts underlying our universal architecture and the opportunities arising from it. We also offer some insight into ongoing work to realize our vision in a concrete test bed and trial setting.

  • kc claffy

    On 31 March - 2 April 2015, CAIDA hosted the seventh Workshop on Active Internet Measurements (AIMS-7) as part of our series of Internet Statistics and Metrics Analysis (ISMA) workshops. As with previous AIMS workshops, the goals were to further our understanding of the potential and limitations of active measurement research and infrastructure in the wide-area Internet, and to promote cooperative solutions and coordinated strategies between academics, industry, policymakers, and funding agencies. This report describes topics discussed at the workshop, including the current state of Ark and related infrastructure, current and proposed experiments using these infrastructures, and participants’ views of challenges and priorities. Materials related to the workshop are available online.

  • claffy, polterock, afanasyev, zhang, burke

    This report is a brief summary of the second NDN Community Meeting held at UCLA in Los Angeles, California on September 28–29, 2015. The meeting provided a platform for the attendees from 49 institutions across 13 countries to exchange their recent NDN research and development results, to debate existing and proposed functionality in NDN forwarding, routing, and security, and to provide feedback to the NDN architecture design evolution.

  • Hassan Habibi Gharakheili, Arun Vishwanath, Vijay Sivaraman

    "Net neutrality" and Internet "fast-lanes" have been the subject of raging debates for several years now, with various viewpoints put forth by stakeholders (Internet Service Providers, Content Service Providers, and consumers) seeking to influence how the Internet is regulated. In this paper we summarize the perspectives on this debate from multiple angles, and propose a fresh direction to address the current stalemate. Our first contribution is to highlight the contentions in the net neutrality debate from the viewpoints of technology (what mechanisms do or do not violate net neutrality?), economics (how does net neutrality help or hurt investment and growth?), and society (do fast-lanes disempower consumers?). Our second contribution is to survey the state-of-play of net neutrality in various regions of the world, highlighting the influence of factors such as consumer choice and public investment on the regulatory approach taken by governments. Our final contribution is to propose a new model that engages consumers in fast-lane negotiations, allowing them to customize fast-lane usage on their broadband link. We believe that our approach can provide a compromise solution that can break the current stalemate and be acceptable to all parties.

  • Aditya Akella

    Dear students: This edition of the Student Mentoring Column focuses on program committees (their composition and how they work) and the importance of social networking at conferences. The questions below don’t provide comprehensive coverage of either topic; as such, we may revisit them in future editions. I got plenty of help in preparing this edition. In particular, many thanks to Kyle Jamieson (Princeton), Ethan Katz-Bassett (USC), George Porter (UCSD), Vyas Sekar (CMU), and Minlan Yu (USC) for contributing answers.

  • Dina Papagiannaki

    Welcome to the last CCR issue for the year 2015. We have had quite an intensive year. Throughout the entire year of 2015 we received 105 submissions and published 19 papers (both technical and editorial). In August, we held our most successful “best of CCR” session ever, which saw tremendous attendance and very positive feedback. I am very excited about what CCR has achieved so far and thank all the authors for their interest in CCR.

    This last issue of 2015 features five papers, out of which four are technical contributions and one is an editorial. The technical papers cover topics such as secure DNS, OpenFlow, network analytics, and transparency in the Web. The editorial presents a position around what the authors call edge-centric computing. I hope you do enjoy all the articles.

    Our industrial liaison board column features Dr. George Varghese. Dr. Varghese discusses his experience in moving academic knowledge to commercial products. He uses three different examples and clearly demonstrates that there is no “one size fits all” approach to technology transfer. I find his “lessons learnt” a useful guideline to consider before embarking on such a journey.

    Finally, this issue sees the end of term for two of our associate editors. Prof. Phillipa Gill, from Stony Brook University, and Prof. Joel Sommers, from Colgate University, are ending their tenure at CCR. I would like to thank them both for having produced some of the most thought-provoking public reviews and for always having provided considerate, constructive feedback on all the papers they handled over the past two years.

    Our farewell to Joel and Phillipa comes with our welcome to two new associate editors. It is my great pleasure to welcome to the CCR editorial board Prof. Fahad Dogar, from Tufts University, and Prof. David Choffnes, from Northeastern University. They both join with tremendous energy. I am delighted they will lend us their expertise for the next two years.

    With this, it was a great pleasure to see some of you in London. I found SIGCOMM to be a stimulating, vibrant venue with ever increasing reach. I left London with lots of new ideas, and a sense that our community has the potential to make a true difference in the world. Let’s keep it up!
    Dina Papagiannaki, CCR Editor

  • Hassan Metwalley, Stefano Traverso, Marco Mellia (Politecnico di Torino), Stanislav Miskovic (Symantec Corp.), Mario Baldi (Politecnico di Torino)

    Individuals lack proper means to supervise the services they contact and the information they exchange when surfing the web. This security task has become challenging due to the complexity of the modern web, of the data delivering technology, and even to the adoption of encryption, which, while improving privacy, makes in-network services ineffective. The implications are serious, from a person contacting undesired services or unwillingly exposing private information, to a company being unable to control the flow of its information to the outside world. To empower transparency and the capability of making informed choices on the web, we propose CROWDSURF, a system for comprehensive and collaborative auditing of data exchanged with Internet services. Similarly to crowdsourced efforts, we enable users to contribute to building awareness, supported by the semi-automatic analysis of data offered by a cloud-based system. The result is the creation of “suggestions” that individuals can transform into enforceable “rules” to customize their web browsing policy. CROWDSURF provides the core infrastructure to let individuals and enterprises regain visibility and control over their web activity. Preliminary results obtained executing a prototype implementation demonstrate the feasibility and potential of CROWDSURF.

    Joseph Camp
  • Roland van Rijswijk-Deij (University of Twente and SURFnet), Anna Sperotto, Aiko Pras (University of Twente)

    The Domain Name System Security Extensions (DNSSEC) add authenticity and integrity to the DNS, improving its security. Unfortunately, DNSSEC is not without problems. DNSSEC adds digital signatures to the DNS, significantly increasing the size of DNS responses. This means DNSSEC is more susceptible to packet fragmentation and makes DNSSEC an attractive vector to abuse in amplification-based denial-of-service attacks. Additionally, key management policies are often complex. This makes DNSSEC fragile and leads to operational failures. In this paper, we argue that the choice of RSA as the default cryptosystem in DNSSEC is a major factor in these three problems. Alternative cryptosystems, based on elliptic curve cryptography (ECDSA and EdDSA), exist but are rarely used in DNSSEC. We show that these are highly attractive for use in DNSSEC, although they also have disadvantages. To address these, we have initiated research that aims to investigate the viability of deploying ECC at a large scale in DNSSEC.

    Phillipa Gill
  • Dimitrios Sarlis, Nikolaos Papailiou, Ioannis Konstantinou (CSLAB, NTUA), Georgios Smaragdakis (MIT & TU Berlin), Nectarios Koziris (CSLAB, NTUA)

    The ever-increasing Internet traffic poses challenges to network operators and administrators, who have to analyze large network datasets in a timely manner to make decisions regarding network routing, dimensioning, accountability and security. Network datasets collected at large networks such as Internet Service Providers (ISPs) or Internet Exchange Points (IXPs) can be in the order of Terabytes per hour. Unfortunately, most current network analysis approaches are ad-hoc and centralized, and thus not scalable. In this paper, we present Datix, a fully decentralized, open-source analytics system for network traffic data that relies on smart partitioning storage schemes to support fast join algorithms and efficient execution of filtering queries. We outline the architecture and design of Datix and present an evaluation of Datix using real traces from an operational IXP. Datix deals with an important problem at the intersection of data management and network monitoring while utilizing state-of-the-art distributed processing engines. In brief, Datix manages to answer queries within minutes, compared to more than 24 hours of processing when executing existing Python-based code in single-node setups. Datix also achieves nearly 70% speedup compared to baseline query implementations of popular big data analytics engines such as Hive and Shark.

    Marco Mellia
  • Sajad Shirali-Shahreza, Yashar Ganjali (University of Toronto)

    The ability to manage individual flows is a major benefit of Software-Defined Networking. The overheads of this fine-grained control, e.g. the initial flow setup delay, can outweigh the benefits, for example when we have many time-sensitive short flows. Coarse-grained control of groups of flows, on the other hand, can be very complex: each packet may match multiple rules, which requires conflict resolution. In this paper, we present ReWiFlow, a restricted class of OpenFlow wildcard rules (the fundamental way to control groups of flows in OpenFlow), which allows managing groups of flows with flexibility and without loss of performance. We demonstrate how ReWiFlow can be used to implement applications such as dynamic proactive routing. We also present a generalization of ReWiFlow, called MultiReWiFlow, and show how it can be used to efficiently represent access control rules collected from Stanford’s backbone network.

    Hitesh Ballani
  • Pedro Garcia Lopez (Universitat Rovira i Virgili), Alberto Montresor (University of Trento), Dick Epema (Delft University of Technology), Anwitaman Datta (Nanyang Technological University), Teruo Higashino (Osaka University), Adriana Iamnitchi (University of South Florida), Marinho Barcellos (Universidade do Vale do Rio dos Sinos), Pascal Felber, Etienne Riviere (University of Neuchatel)

    In many aspects of human activity, there has been a continuous struggle between the forces of centralization and decentralization. Computing exhibits the same phenomenon; we have gone from mainframes to PCs and local networks in the past, and over the last decade we have seen a centralization and consolidation of services and applications in data centers and clouds. We position that a new shift is necessary. Technological advances such as powerful dedicated connection boxes deployed in most homes, high-capacity mobile end-user devices and powerful wireless networks, along with growing user concerns about trust, privacy, and autonomy, require taking the control of computing applications, data, and services away from some central nodes (the “core”) to the other logical extreme (the “edge”) of the Internet. We also position that this development can help blur the boundary between man and machine, and embrace social computing in which humans are part of the computation and decision-making loop, resulting in a human-centered system design. We refer to this vision of human-centered, edge-device-based computing as Edge-centric Computing. In this position paper we elaborate on this vision and present the research challenges associated with its implementation.

  • Aditya Akella (University of Wisconsin-Madison)

    Dear students: I hope you have been enjoying reading the column. This column is similar to the previous one: it attempts to address a few more of your questions. Thanks for the great questions; keep them coming! Again, many thanks to Brighten Godfrey (UIUC) and Vyas Sekar (CMU) for contributing their thoughts.

  • Bruce Davie, Christophe Diot, Lars Eggert, Nick McKeown, Venkat Padmanabhan, Renata Teixeira (SIGCOMM Industrial Liaison Board)

    As networking researchers, we love to work on ideas that improve the practice of networking. In the early pioneering days of the Internet, the link between networking researchers and practitioners was strong; the community was small and everyone knew each other. Not only were there many important ideas from the research community that affected the practice of networking, we were all very aware of them. Today, the networking industry is enormous and the practice of networking spans many network equipment vendors, operators, chip builders, the IETF, data centers, wireless and cellular, and so on. There continue to be many transfers of ideas, but there isn’t a forum to learn about them. The goal of this series is to create such a forum by presenting articles that shine a spotlight on specific examples; not only on the technology and ideas, but also on the path the ideas took to affect the practice. Sometimes a research paper was picked up by chance; but more often, the researchers worked hand-in-hand with the standards community, the open-source community or industry to develop the idea further to make it suitable for adoption. Their story is here. We are seeking CCR articles describing interesting cases of “research affecting the practice,” including ideas transferred from research labs (academic or industrial) that became:
    • Commercial products
    • Internet standards
    • Algorithms and ideas embedded into new or existing products
    • Widely used open-source software
    • Ideas deployed first by startups, by existing companies or by distribution of free software
    • Communities built around a toolbox, language, or dataset
    We also welcome stories of negative experiences, or ideas that seemed promising but ended up not taking off. In this issue, George Varghese describes his experience with technology transfer of network algorithms.

  • George Varghese (Microsoft Research)

    I agree with the editors of this series that technology transfer from academia occurs in a variety of ways with different tradeoffs. I describe, to the best of my recollection, some of my experiences with ideas that originated or were described in academic papers, from Deficit Round Robin to Conga.

  • Paul Tune, Matthew Roughan

    Traffic matrices describe the volume of traffic between a set of sources and destinations within a network. These matrices are used in a variety of tasks in network planning and traffic engineering, such as the design of network topologies. Traffic matrices naturally possess complex spatiotemporal characteristics, but their proprietary nature means that little data about them is available publicly, and this situation is unlikely to change. Our goal is to develop techniques to synthesize traffic matrices for researchers who wish to test new network applications or protocols. The paucity of available data and the desire to build a general framework for synthesis that could work in various settings require a new look at this problem. We show how the principle of maximum entropy can be used to generate a wide variety of traffic matrices constrained by the needs of a particular task, and the available information, but otherwise avoiding hidden assumptions about the data. We demonstrate how the framework encompasses existing models and measurements, and we apply it in a simple case study to illustrate the value.
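    As a concrete, hedged illustration of the principle (generic notation, not the paper's): if the only information available about a traffic matrix T with total volume T_tot is its row sums R_i and column sums C_j, maximizing the entropy of the normalized matrix subject to those constraints recovers the classic gravity model.

        \max_{T_{ij} \ge 0} \; -\sum_{i,j} \frac{T_{ij}}{T_{\mathrm{tot}}} \log \frac{T_{ij}}{T_{\mathrm{tot}}}
        \quad \text{s.t.} \quad \sum_{j} T_{ij} = R_i, \;\; \sum_{i} T_{ij} = C_j
        \qquad \Longrightarrow \qquad T_{ij} = \frac{R_i \, C_j}{T_{\mathrm{tot}}}

    Supplying additional constraints (temporal, topological, or task-specific) moves the maximum-entropy solution away from the gravity model while still avoiding assumptions beyond the information actually provided.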

  • Arjun Roy, Hongyi Zeng, Jasmeet Bagga, George Porter, Alex C. Snoeren

    Large cloud service providers have invested in increasingly larger datacenters to house the computing infrastructure required to support their services. Accordingly, researchers and industry practitioners alike have focused a great deal of effort designing network fabrics to efficiently interconnect and manage the traffic within these datacenters in performant yet efficient fashions. Unfortunately, datacenter operators are generally reticent to share the actual requirements of their applications, making it challenging to evaluate the practicality of any particular design. Moreover, the limited large-scale workload information available in the literature has, for better or worse, heretofore largely been provided by a single datacenter operator whose use cases may not be widespread. In this work, we report upon the network traffic observed in some of Facebook's datacenters. While Facebook operates a number of traditional datacenter services like Hadoop, its core Web service and supporting cache infrastructure exhibit a number of behaviors that contrast with those reported in the literature. We report on the contrasting locality, stability, and predictability of network traffic in Facebook's datacenters, and comment on their implications for network architecture, traffic engineering, and switch design.

  • Liang Zheng, Carlee Joe-Wong, Chee Wei Tan, Mung Chiang, Xinyu Wang

    Amazon's Elastic Compute Cloud (EC2) uses auction-based spot pricing to sell spare capacity, allowing users to bid for cloud resources at a highly reduced rate. Amazon sets the spot price dynamically and accepts user bids above this price. Jobs with lower bids (including those already running) are interrupted and must wait for a lower spot price before resuming. Spot pricing thus raises two basic questions: how might the provider set the price, and what prices should users bid? Computing users' bidding strategies is particularly challenging: higher bid prices reduce the probability of, and thus the extra time needed to recover from, interruptions, but may increase users' cost. We address these questions in three steps: (1) modeling the cloud provider's setting of the spot price and matching the model to historically offered prices, (2) deriving optimal bidding strategies for different job requirements and interruption overheads, and (3) adapting these strategies to MapReduce jobs with master and slave nodes having different interruption overheads. We run our strategies on EC2 for a variety of job sizes and instance types, showing that spot pricing reduces user cost by 90% with a modest increase in completion time compared to on-demand pricing.
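    To make the bidding tradeoff concrete, here is a minimal back-of-the-envelope sketch (my own simplification, not the authors' model): given a historical trace of spot prices, a candidate bid determines how often the job would be interrupted and what it would pay while running, since the user is charged the current spot price rather than the bid.

        # Illustrative only: evaluate a candidate bid against a historical price trace
        # (one sample per pricing interval). The job runs whenever spot price <= bid.
        def evaluate_bid(price_trace, bid):
            running = [p for p in price_trace if p <= bid]
            interrupted = len(price_trace) - len(running)
            avg_paid = sum(running) / len(running) if running else float("inf")
            return {
                "fraction_interrupted": interrupted / len(price_trace),
                "avg_price_paid": avg_paid,   # charged the spot price, not the bid
            }

        trace = [0.08, 0.09, 0.12, 0.07, 0.25, 0.08]   # hypothetical $/hour samples
        print(evaluate_bid(trace, bid=0.10))
        # -> {'fraction_interrupted': 0.333..., 'avg_price_paid': 0.08}

    Raising the bid lowers the interruption fraction (and the recovery overhead it causes) but exposes the job to the occasional high prices it would otherwise have avoided, which is exactly the tension the paper's strategies optimize.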

  • Hirochika Asai, Yasuhiro Ohara

    The Internet of Things leads to routing table explosion. An inexpensive approach to IP routing table lookup is required to cope with the ever-growing size of the Internet. We contribute a fast and scalable software routing lookup algorithm based on a multiway trie, called Poptrie. Named after our approach to traversing the tree, it leverages the population count instruction on bit-vector indices for the descendant nodes to compress the data structure within the CPU cache. Poptrie outperforms the state-of-the-art technologies, Tree BitMap, DXR and SAIL, in all of our evaluations using random and real destination queries on 35 routing tables, including a real global tier-1 ISP's full-route routing table. Poptrie peaks between 174 and over 240 million lookups per second (Mlps) with a single core on tables with 500-800k routes, consistently 4-578% faster than all competing algorithms in all the tests we ran. We provide a comprehensive performance evaluation, including a CPU cycle analysis. This paper shows the suitability of Poptrie for the future Internet, including IPv6, where larger routing tables with longer prefixes are expected.
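    The core trick can be sketched as follows (a minimal Python illustration of popcount-based indexing in general, not the authors' C implementation): each trie node stores a bitmap marking which child branches exist plus a base offset into one packed child array, and a population count over the bits below the branch value gives the child's position in that array.

        # Illustrative sketch of popcount-based child indexing in a multiway trie node.
        def child_slot(bitmap, base, branch):
            """Return the packed-array slot for child `branch`, or None if absent."""
            if not (bitmap >> branch) & 1:
                return None                        # no child exists for this branch value
            below = bitmap & ((1 << branch) - 1)   # bits of children preceding `branch`
            return base + bin(below).count("1")    # population count = relative index

        # Example: children exist for branch values 1, 4 and 6, packed from offset 10.
        bitmap = (1 << 1) | (1 << 4) | (1 << 6)
        print(child_slot(bitmap, base=10, branch=4))   # -> 11 (second present child)
        print(child_slot(bitmap, base=10, branch=3))   # -> None

    On modern x86 CPUs the population count is a single instruction, which is what keeps each lookup step cheap and the nodes compact enough to stay in cache.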

  • Matthew K. Mukerjee, David Naylor, Junchen Jiang, Dongsu Han, Srinivasan Seshan, Hui Zhang
  • Brandon Schlinker, Radhika Niranjan Mysore, Sean Smith, Jeffrey C. Mogul, Amin Vahdat, Minlan Yu, Ethan Katz-Bassett, Michael Rubin

    The design space for large, multipath datacenter networks is large and complex, and no one design fits all purposes. Network architects must trade off many criteria to design cost-effective, reliable, and maintainable networks, and typically cannot explore much of the design space. We present Condor, our approach to enabling a rapid, efficient design cycle. Condor allows architects to express their requirements as constraints via a Topology Description Language (TDL), rather than having to directly specify network structures. Condor then uses constraint-based synthesis to rapidly generate candidate topologies, which can be analyzed against multiple criteria. We show that TDL supports concise descriptions of topologies such as fat-trees, BCube, and DCell; that we can generate known and novel variants of fat-trees with simple changes to a TDL file; and that we can synthesize large topologies in tens of seconds. We also show that Condor supports the daunting task of designing multi-phase network expansions that can be carried out on live networks.

  • Pan Hu, Pengyu Zhang, Deepak Ganesan

    Backscatter provides the dual benefits of energy harvesting and low-power communication, making it attractive to a broad class of wireless sensors. But the design of a protocol that enables extremely power-efficient radios for harvesting-based sensors as well as high-rate data transfer for data-rich sensors presents a conundrum. In this paper, we present a new fully asymmetric backscatter communication protocol where nodes blindly transmit data as and when they sense. This model enables fully flexible node designs, from extraordinarily power-efficient backscatter radios that consume barely a few micro-watts to high-throughput radios that can stream at hundreds of Kbps while consuming a paltry tens of micro-watts. The challenge, however, lies in decoding concurrent streams at the reader, which we achieve using a novel combination of time-domain separation of interleaved signal edges and phase-domain separation of colliding transmissions. We provide an implementation of our protocol, LF-Backscatter, and show that it can achieve an order of magnitude or more improvement in throughput, latency and power over state-of-the-art alternatives.

  • Alok Kumar, Sushant Jain, Uday Naik, Anand Raghuraman, Nikhil Kasinadhuni, Enrique Cauich Zermeno, C. Stephen Gunn, Jing Ai, Björn Carlin, Mihai Amarandei-Stavila, Mathieu Robin, Aspi Siganporia, Stephen Stuart, Amin Vahdat

    WAN bandwidth remains a constrained resource that is economically infeasible to substantially overprovision. Hence, it is important to allocate capacity according to service priority and based on the incremental value of additional allocation. For example, it may be the highest priority for one service to receive 10Gb/s of bandwidth but upon reaching such an allocation, incremental priority may drop sharply, favoring allocation to other services. Motivated by the observation that individual flows with fixed priority may not be the ideal basis for bandwidth allocation, we present the design and implementation of Bandwidth Enforcer (BwE), a global, hierarchical bandwidth allocation infrastructure. BwE supports: i) service-level bandwidth allocation following prioritized bandwidth functions where a service can represent an arbitrary collection of flows, ii) independent allocation and delegation policies according to user-defined hierarchy, all accounting for a global view of bandwidth and failure conditions, iii) multi-path forwarding common in traffic-engineered networks, and iv) a central administrative point to override (perhaps faulty) policy during exceptional conditions. BwE has delivered more service-efficient bandwidth utilization and simpler management in production for multiple years.

  • Keon Jang, Justine Sherry, Hitesh Ballani, Toby Moncaster

    Many cloud applications can benefit from guaranteed latency for their network messages; however, providing such predictability is hard, especially in multi-tenant datacenters. We identify three key requirements for such predictability: guaranteed network bandwidth, guaranteed packet delay and guaranteed burst allowance. We present Silo, a system that offers these guarantees in multi-tenant datacenters. Silo leverages the tight coupling between bandwidth and delay: controlling tenant bandwidth leads to deterministic bounds on network queuing delay. Silo builds upon network calculus to place tenant VMs with competing requirements such that they can coexist. A novel hypervisor-based policing mechanism achieves packet pacing at sub-microsecond granularity, ensuring tenants do not exceed their allowances. We have implemented a Silo prototype comprising a VM placement manager and a Windows filter driver. Silo does not require any changes to applications, guest OSes or network switches. We show that Silo can ensure predictable message latency for cloud applications while imposing low overhead.
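    The bandwidth-delay coupling mentioned above can be illustrated with a standard network-calculus bound (generic notation, not taken from the paper): if a tenant's traffic is shaped to a token bucket and the network guarantees it a minimum service rate, the queuing delay is deterministically bounded.

        \alpha(t) = \sigma + \rho\, t, \qquad \beta(t) = R\,(t - T)^{+}, \qquad \rho \le R
        \;\Longrightarrow\; D_{\max} \le T + \frac{\sigma}{R}

    Here \sigma is the permitted burst, \rho the sustained rate, and R and T the rate and latency of the guaranteed service curve; capping the burst and guaranteeing a rate is what turns a bandwidth guarantee into a delay guarantee.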

  • Mosharaf Chowdhury, Ion Stoica

    Inter-coflow scheduling improves application-level communication performance in data-parallel clusters. However, existing efficient schedulers require a priori coflow information and ignore cluster dynamics like pipelining, task failures, and speculative executions, which limit their applicability. Schedulers without prior knowledge compromise on performance to avoid head-of-line blocking. In this paper, we present Aalo, which strikes a balance and efficiently schedules coflows without prior knowledge. Aalo employs Discretized Coflow-Aware Least-Attained Service (D-CLAS) to separate coflows into a small number of priority queues based on how much they have already sent across the cluster. By performing prioritization across queues and by scheduling coflows in FIFO order within each queue, Aalo's non-clairvoyant scheduler reduces coflow completion times while guaranteeing starvation freedom. EC2 deployments and trace-driven simulations show that communication stages complete 1.93x faster on average and 3.59x faster at the 95th percentile using Aalo in comparison to per-flow mechanisms. Aalo's performance is comparable to that of solutions using prior knowledge, and Aalo outperforms them in the presence of cluster dynamics.
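    A minimal sketch of the discretization idea follows (the queue thresholds below are illustrative assumptions, not Aalo's actual parameters): a coflow is demoted to a lower-priority queue once the bytes it has already sent cross exponentially spaced thresholds, and queues are served in strict priority order with FIFO inside each queue.

        # Illustrative sketch of Discretized Coflow-Aware Least-Attained Service (D-CLAS).
        # Threshold values here are assumptions for illustration, not Aalo's settings.
        THRESHOLDS = [10e6, 100e6, 1e9, 10e9]      # bytes sent: queue boundaries

        def queue_for(bytes_sent):
            """Return the priority-queue index for a coflow (0 = highest priority)."""
            for q, limit in enumerate(THRESHOLDS):
                if bytes_sent < limit:
                    return q
            return len(THRESHOLDS)                 # the largest coflows sink to the last queue

        # Queues are served in strict priority order, FIFO within each queue, so small or
        # newly started coflows finish quickly without prior knowledge of their sizes.
        print(queue_for(5e6), queue_for(2e9))      # -> 0 3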

  • Xiaoqi Ren, Ganesh Ananthanarayanan, Adam Wierman, Minlan Yu

    As clusters continue to grow in size and complexity, providing scalable and predictable performance is an increasingly important challenge. A crucial roadblock to achieving predictable performance is stragglers, i.e., tasks that take significantly longer than expected to run. At this point, speculative execution has been widely adopted to mitigate the impact of stragglers. However, speculation mechanisms are designed and operated independently of job scheduling when, in fact, scheduling a speculative copy of a task has a direct impact on the resources available for other jobs. In this work, we present Hopper, a job scheduler that is speculation-aware, i.e., that integrates the tradeoffs associated with speculation into job scheduling decisions. We implement both centralized and decentralized prototypes of the Hopper scheduler and show that 50% (66%) improvements over state-of-the-art centralized (decentralized) schedulers and speculation strategies can be achieved through the coordination of scheduling and speculation.

  • David Naylor, Kyle Schomp, Matteo Varvello, Ilias Leontiadis, Jeremy Blackburn, Diego R. López, Konstantina Papagiannaki, Pablo Rodriguez Rodriguez, Peter Steenkiste

    A significant fraction of Internet traffic is now encrypted and HTTPS will likely be the default in HTTP/2. However, Transport Layer Security (TLS), the standard protocol for encryption in the Internet, assumes that all functionality resides at the endpoints, making it impossible to use in-network services that optimize network resource usage, improve user experience, and protect clients and servers from security threats. Re-introducing in-network functionality into TLS sessions today is done through hacks, often weakening overall security. In this paper we introduce multi-context TLS (mcTLS), which extends TLS to support middleboxes. mcTLS breaks the current "all-or-nothing" security model by allowing endpoints and content providers to explicitly introduce middleboxes in secure end-to-end sessions while controlling which parts of the data they can read or write. We evaluate a prototype mcTLS implementation in both controlled and "live" experiments, showing that its benefits come at the cost of minimal overhead. More importantly, we show that mcTLS can be incrementally deployed and requires only small changes to client, server, and middlebox software.

  • Yibo Zhu, Nanxi Kang, Jiaxin Cao, Albert Greenberg, Guohan Lu, Ratul Mahajan, Dave Maltz, Lihua Yuan, Ming Zhang, Ben Y. Zhao, Haitao Zheng

    Debugging faults in complex networks often requires capturing and analyzing traffic at the packet level. In this task, datacenter networks (DCNs) present unique challenges with their scale, traffic volume, and diversity of faults. To troubleshoot faults in a timely manner, DCN administrators must a) identify affected packets inside a large volume of traffic; b) track them across multiple network components; c) analyze traffic traces for fault patterns; and d) test or confirm potential causes. To our knowledge, no tool today can achieve both the specificity and scale required for this task. We present Everflow, a packet-level network telemetry system for large DCNs. Everflow traces specific packets by implementing a powerful packet filter on top of the "match and mirror" functionality of commodity switches. It shuffles captured packets to multiple analysis servers using load balancers built on switch ASICs, and it sends "guided probes" to test or confirm potential faults. We present experiments that demonstrate Everflow's scalability, and share experiences of troubleshooting network faults gathered from running it for over 6 months in Microsoft's DCNs.
