Computer Communication Review: Papers

  • Dina Papagiannaki

    Welcome to the July issue of CCR, an issue that should hopefully inspire a number of discussions that we can continue in person during Sigcomm, in Chicago. This issue features 17 papers: 5 editorial notes and 12 technical contributions from our community. The technical part features novel contributions in the areas of router location inference, performance of fiber-to-the-home networks, BGP, programmable middleboxes, and a programming language for protocol-independent packet processors. Each one of them advances the state of the art and should be a useful building block for future research.

    The research community is increasingly becoming multidisciplinary. One cannot help but get inspired by the elegance of solutions that address real problems in one discipline while exploiting knowledge produced in another. This is the mission of the fifth technical submission in this issue. The core of the contribution is to adopt the concept of design contests and apply it to the area of congestion control protocols in wireless networks. The authors point out that one of the key requirements in any design contest is to "have an unambiguous, measurable objective that will allow one to compare protocols". And this is exactly what the authors do in their work. The article concludes that design contests can benefit networking research, if designed properly, and the authors encourage others to explore their strengths and weaknesses.

    The remaining papers of the technical part are devoted to one of the largest efforts undertaken in recent years to rethink the architecture of the Internet: the Future Internet Architecture (FIA) program of the U.S. National Science Foundation. FIA targets the design of a trustworthy Internet that incorporates societal, economic, and legal constraints, while following a clean-slate approach. It was the initiative of Prof. David Wetherall, from the University of Washington, to bring the four FIA proposals, and the affiliated project ChoiceNet, to CCR, and provide a very comprehensive exposition of the different avenues taken by the different consortia. I have to thank David for all the hard work he did to bring all the pieces together, something that will undoubtedly help our community understand the FIA efforts to a greater extent. The FIA session is preceded by a technical note by Dr. Darleen Fisher, FIA program director at the U.S. National Science Foundation. It is inspiring to see how a long-term (11-year) funding effort has led to a number of functioning components that may define the Internet of the future. Thank you Darleen for a wonderful introductory note!

    Our editorial session comprises 5 papers. Two of them are workshop reports: i) the Workshop on Internet Economics 2013, and ii) the roundtable on real-time communications research, held along with IPTComm in October 2013. We have an article introducing ProtoRINA, a user-space prototype of the Recursive InterNetwork Architecture (RINA), and a qualitative study of the Internet census data that was collected in March 2013 and has attracted significant attention in our community. The last editorial appears in CCR at my own invitation to its author, Daniel Stenberg. By the end of this year the Internet Engineering Task Force (IETF) is aiming to standardize the second version of HTTP, i.e. HTTP 2.0. This new version is going to be a very significant change compared to HTTP v1, aiming to provide better support for mobile browsing.
Daniel is a Mozilla engineer participating in the standardization of HTTP 2.0 and has kindly agreed to publish his thoughts on HTTP 2.0 in CCR. This issue also marks the start of the tenure of Dr. Aline Carneiro Viana, from INRIA. Aline brings a lot of energy to the editorial board, along with her expertise in ad hoc networks, sensor networks, delay-tolerant networks, and cognitive radio networks. With all that, I hope to see most of you in Chicago in August, and please feel free to send me any suggestions on things you would like to see published in CCR in the future.

  • B. Huffaker, M. Fomenkov, K. Claffy

    In this paper we focus on geolocating Internet routers, using a methodology for extracting and decoding geography-related strings from fully qualified domain names (hostnames). We first compiled an extensive dictionary associating geographic strings (e.g., airport codes) with geophysical locations. We then searched a large set of router hostnames for these strings, assuming each autonomous naming domain uses geographic hints consistently within that domain. We used topology and performance data continually collected by our global measurement infrastructure to discern whether a given hint appears to co-locate different hostnames in which it is found. Finally, we generalized geolocation hints into domain-specific rule sets. We generated a total of 1,711 rules covering 1,398 different domains and validated them using domain-specific ground truth we gathered for six domains. Unlike previous efforts which relied on labor-intensive domain-specific manual analysis, we automate our process for inferring the domain specific heuristics, substantially advancing the state-of-the-art of methods for geolocating Internet resources.
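    To make the rule idea concrete, here is a minimal Python sketch of hostname-hint matching; the hint dictionary, the example domain, and the rule format are illustrative assumptions, not the paper's actual 1,711 rules.

    ```python
    # A minimal sketch of hostname-hint geolocation; entries are toy examples.
    import re

    # Toy hint dictionary: geographic string -> (lat, lon), e.g. IATA airport codes.
    HINTS = {
        "ord": (41.98, -87.90),   # Chicago O'Hare
        "lax": (33.94, -118.41),  # Los Angeles
        "fra": (50.03, 8.57),     # Frankfurt
    }

    # Hypothetical per-domain rule: in example.net hostnames, the hint is the
    # token before the domain suffix (e.g. "ae-1.r2.ord.example.net" -> "ord").
    RULE = re.compile(r"\.([a-z]{3})\d*\.example\.net$")

    def geolocate(hostname: str):
        """Return (lat, lon) if a known geographic hint matches, else None."""
        m = RULE.search(hostname.lower())
        if m and m.group(1) in HINTS:
            return HINTS[m.group(1)]
        return None

    print(geolocate("ae-1.r2.ord.example.net"))  # (41.98, -87.9)
    ```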

    Joel Sommers
  • M. Luckie

    Researchers depend on public BGP data to understand the structure and evolution of the AS topology, as well as the operational security and resiliency of BGP. BGP data is provided voluntarily by network operators who establish BGP sessions with route collectors that record this data. In this paper, we show how trivial it is for a single vantage point (VP) to introduce thousands of spurious routes into the collection by providing examples of five VPs that did so. We explore the impact these misbehaving VPs had on AS relationship inference, showing these misbehaving VPs introduced thousands of AS links that did not exist, and caused relationship inferences for links that did exist to be corrupted. We evaluate methods to automatically identify misbehaving VPs, although we find the result unsatisfying because the limitations of real-world BGP practices and AS relationship inference algorithms produce signatures similar to those created by misbehaving VPs. The most recent misbehaving VP we discovered added thousands of spurious routes for nine consecutive months until 8 November 2012. This misbehaving VP barely impacts (0.1%) our validation of our AS relationship inferences, but this number may be misleading since most of our validation data relies on BGP and RPSL which validates only existing links, rather than asserting the non-existence of links. We have only a few assertions of non-existent routes, all received via our public-facing website that allows operators to provide validation data through our interactive feedback mechanism. We only discovered this misbehavior because two independent operators corrected some inferences, and we noticed that the spurious routes all came from the same VP. This event highlights the limitations of even the best available topology data, and provides additional evidence that comprehensive ground truth validation from operators is essential to scientific research on Internet topology.
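    As a toy illustration of one signature the paper examines, the following Python sketch (our own simplification, not the authors' algorithm) flags AS links observed by only a single VP:

    ```python
    # Find AS links contributed by exactly one vantage point (VP).
    from collections import defaultdict

    # Hypothetical input: VP name -> list of AS paths it exported.
    paths_by_vp = {
        "vp1": [[3356, 174, 64500], [3356, 2914, 64501]],
        "vp2": [[1299, 174, 64500]],
        "vp3": [[1299, 2914, 64501], [1299, 9999, 64502]],
    }

    link_vps = defaultdict(set)
    for vp, paths in paths_by_vp.items():
        for path in paths:
            for a, b in zip(path, path[1:]):          # adjacent AS pairs
                link_vps[tuple(sorted((a, b)))].add(vp)

    # Links seen by one VP are only candidates for inspection; as the paper
    # notes, many such links are legitimate, so this signature alone is weak.
    suspect = {link: vps for link, vps in link_vps.items() if len(vps) == 1}
    print(suspect)
    ```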

    Renata Teixeira
  • M. Sargent, M. Allman

    Fiber-To-The-Home (FTTH) networks are on the brink of bringing significantly higher capacity to residential users compared to today's commercial residential options. There are several burgeoning FTTH networks that provide capacities of up to 1 Gbps. We have been monitoring one such operational network - the Case Connection Zone - for 23 months. In this paper we seek to understand the extent to which the users in this network are in fact making use of the provided bi-directional 1 Gbps capacity. We find that even when given virtually unlimited capacity, the majority of the time users do not retrieve information from the Internet in excess of commercially available data rates and transmit at only modestly higher rates than commodity networks support. Further, we find that end host issues - most prominently buffering at both end points - are often the cause of the lower-than-expected performance.

    Fabián E. Bustamante
  • K. Khan, Z. Ahmed, S. Ahmed, A. Syed, S. Khayam

    With access billing alone no longer ensuring profits, an ISP's growth now relies on rolling out new and differentiated services. However, ISPs currently do not have a well-defined architecture for rapid, cost-effective, and scalable dissemination of new services. We present iSDF, a new SDN-enabled framework that can meet an ISP's service delivery constraints concerning cost, scalability, deployment flexibility, and operational ease. We show that meeting these constraints necessitates an SDN philosophy: a centralized management plane, a control plane decoupled from the data plane, and a programmable data plane at customer premises. We present an ISP service delivery framework (iSDF) that provides ISPs a domain-specific API for network function virtualization by leveraging a programmable middlebox built from commodity home-routers. It also includes an application server to disseminate, configure, and update ISP services. We develop and report results for three diverse ISP applications that demonstrate the practicality and flexibility of iSDF, namely distributed VPN (control plane decisions), pay-per-site (rapid deployment), and BitTorrent blocking (data plane processing).

    Katerina Argyraki
  • P. Bosshart, D. Daly, G. Gibb, M. Izzard, N. McKeown, J. Rexford, C. Schlesinger, D. Talayco, A. Vahdat, G. Varghese, D. Walker

    P4 is a high-level language for programming protocol-independent packet processors. P4 works in conjunction with SDN control protocols like OpenFlow. In its current form, OpenFlow explicitly specifies protocol headers on which it operates. This set has grown from 12 to 41 fields in a few years, increasing the complexity of the specification while still not providing the flexibility to add new headers. In this paper we propose P4 as a strawman proposal for how OpenFlow should evolve in the future. We have three goals: (1) Reconfigurability in the field: Programmers should be able to change the way switches process packets once they are deployed. (2) Protocol independence: Switches should not be tied to any specific network protocols. (3) Target independence: Programmers should be able to describe packet processing functionality independently of the specifics of the underlying hardware. As an example, we describe how to use P4 to configure a switch to add a new hierarchical label.
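    For flavor, here is a minimal Python model of the protocol-independence idea: the programmer declares header formats and a parse graph rather than relying on fixed, built-in protocols. The structures below are our own illustration (loosely inspired by the paper's hierarchical-label example), not P4 syntax:

    ```python
    # Declarative header formats: header name -> ordered (field, bit-width) list.
    HEADERS = {
        "ethernet": [("dst", 48), ("src", 48), ("ethertype", 16)],
        # A hypothetical new hierarchical label, akin to the paper's example:
        "mtag": [("up1", 8), ("up2", 8), ("down1", 8), ("down2", 8), ("ethertype", 16)],
    }

    # Parse graph: header -> (select field, {field value: next header}).
    PARSER = {
        "ethernet": ("ethertype", {0xAAAA: "mtag"}),
        "mtag":     ("ethertype", {}),
    }

    def parse(pkt_headers):
        """Walk the parse graph over already-extracted field values."""
        state, parsed = "ethernet", []
        while state:
            parsed.append(state)
            field, transitions = PARSER[state]
            state = transitions.get(pkt_headers[state][field])
        return parsed

    pkt = {"ethernet": {"ethertype": 0xAAAA}, "mtag": {"ethertype": 0x0800}}
    print(parse(pkt))  # ['ethernet', 'mtag']
    ```

    Adding a new label then only requires new dictionary entries, not a new version of the switch API, which is the reconfigurability point the paper argues for.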

    Marco Mellia
  • A. Sivaraman, K. Winstein, P. Varley, J. Batalha, A. Goyal, S. Das, J. Ma, H. Balakrishnan

    In fields like data mining and natural language processing, design contests have been successfully used to advance the state of the art. Such contests offer an opportunity to bring the excitement and challenges of protocol design - one of the core intellectual elements of research and practice in networked systems - to a broader group of potential contributors, whose ideas may prove important. Moreover, it may lead to an increase in the number of students, especially undergraduates or those learning via online courses, interested in pursuing a career in the field. We describe the creation of the infrastructure and our experience with a protocol design contest conducted in MIT's graduate Computer Networks class. This contest involved the design and evaluation of a congestion-control protocol for paths traversing cellular wireless networks. One key to the success of a design contest is an unambiguous, measurable objective to compare protocols. In practice, protocol design is the art of trading off conflicting goals with each other, but in this contest, we specified that the goal was to maximize log(throughput/delay). This goal is a good match for applications such as video streaming or videoconferencing that care about high throughput and low interactive delays. Some students produced protocols whose performance was better than published protocols tackling similar goals. Furthermore, the convex hull of the set of all student protocols traced out a tradeoff curve in the throughput-delay space, providing useful insights into the entire space of possible protocols. We found that student protocols diverged in performance between the training and testing traces, indicating that some students had overtrained ("overfitted") their protocols to the training trace. Our conclusion is that, if designed properly, such contests could benefit networking research by making new proposals more easily reproducible and amenable to such "gamification," improve networked systems, and provide an avenue for outreach.
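    The stated contest objective is easy to compute; below is a small sketch using the formula from the abstract (the trace-driven measurement of throughput and delay is abstracted away):

    ```python
    # The contest's scoring objective, log(throughput/delay), as stated above.
    import math

    def score(throughput_bps: float, delay_s: float) -> float:
        """Higher is better: rewards high throughput and low delay."""
        return math.log(throughput_bps / delay_s)

    # Doubling throughput and doubling delay leaves the score unchanged,
    # so the objective rewards balanced improvement on both axes.
    print(score(4e6, 0.100), score(8e6, 0.200))
    ```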

    Augustin Chaintreau
  • G. Kalejaiye, J. Rondina, L. Albuquerque, T. Pereira, L. Campos, R. Melo, D. Mascarenhas, M. Carvalho

    This paper describes a strategy that was designed, implemented, and presented at the Mobile Ad Hoc Networking Interoperability and Cooperation (MANIAC) Challenge 2013. The theme of the challenge was "Mobile Data Offloading" and consisted of developing and comparatively evaluating strategies to offload infrastructure access points via customer ad hoc forwarding using handheld devices. According to the challenge rules, a hop-by-hop bidding contest decides the path of each data packet towards its destination. Consequently, each team must rely on other teams' willingness to forward packets for them in order to get their traffic across the network. Following these rules, this paper proposes a strategy based on the concept of how "tight" a node is to successfully deliver a packet to its destination within a given deadline. This "tightness" idea relies on a shortest-path analysis of the underlying network graph, and it is used to define three sub-strategies that specify a) how to participate in an auction; b) how to announce an auction; and c) how to decide who wins the announced auction. The proposed strategy seeks to minimize network resource utilization and to promote cooperative behavior among participant nodes.
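    A rough Python sketch of the "tightness" notion as we read it from the abstract: the slack between a packet's deadline and a shortest-path delay estimate. The graph, delays, and deadline below are hypothetical:

    ```python
    # Tightness = remaining time budget minus shortest-path delay estimate.
    import heapq

    def shortest_delay(graph, src, dst):
        """Dijkstra over per-hop delay estimates (graph: node -> {nbr: delay})."""
        dist, pq = {src: 0.0}, [(0.0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == dst:
                return d
            if d > dist.get(u, float("inf")):
                continue
            for v, w in graph[u].items():
                if d + w < dist.get(v, float("inf")):
                    dist[v] = d + w
                    heapq.heappush(pq, (d + w, v))
        return float("inf")

    def tightness(graph, node, dst, deadline, elapsed):
        """Smaller slack = tighter; a tight node might bid more cautiously."""
        return (deadline - elapsed) - shortest_delay(graph, node, dst)

    g = {"a": {"b": 1.0}, "b": {"a": 1.0, "c": 2.0}, "c": {"b": 2.0}}
    print(tightness(g, "a", "c", deadline=5.0, elapsed=1.0))  # 1.0
    ```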

    Sanjay Jha
  • Darleen Fisher

    The Future Internet Architectures (FIA) program constitutes a 10-year effort by the U.S. National Science Foundation (NSF) that was launched in 2006, with the announcement of the Future INternet Design (FIND) research area within a Network Technologies and Systems (NeTS) program solicitation. This solicitation outlined a three-phase program to "rethink" the Internet, beginning with FIND and culminating in the recently announced two-year awards for Future Internet Architecture-Next Phase (FIA-NP). Because many readers may not be familiar with the thinking behind this effort, this article aims to provide historical context and background for the technical papers included in this issue.

    David Wetherall
  • D. Naylor, M. Mukerjee, P. Agyapong, R. Grandl, R. Kang

    Motivated by limitations in today's host-centric IP network, recent studies have proposed clean-slate network architectures centered around alternate first-class principals, such as content, services, or users. However, much like the host-centric IP design, elevating one principal type above others hinders communication between other principals and inhibits the network's capability to evolve. This paper presents the eXpressive Internet Architecture (XIA), an architecture with native support for multiple principals and the ability to evolve its functionality to accommodate new, as yet unforeseen, principals over time. We present the results of our ongoing research motivated by and building on the XIA architecture, ranging from topics at the physical level ("how fast can XIA go") up to the user level.

  • T. Wolf, J. Griffioen, K. Calvert, R. Dutta, G. Rouskas, I. Baldin, A. Nagurney

    The Internet has been a key enabling technology for many new distributed applications and services. However, the deployment of new protocols and services in the Internet infrastructure itself has been sluggish, especially where economic incentives for network providers are unclear. In our work, we seek to develop an "economy plane" for the Internet that enables network providers to offer new network-based services (QoS, storage, etc.) for sale to customers. The explicit connection between economic relationships and network services across various time scales enables users to select among service alternatives. The resulting competition among network service providers will lead to overall better technological solutions and more competitive prices. In this paper, we present the architectural aspects of our ChoiceNet economy plane as well as some of the technological problems that need to be addressed in a practical deployment.

  • A. Afanasyev, J. Burke, L. Zhang, claffy, L. Wang, V. Jacobson, P. Crowley, C. Papadopoulos, B. Zhang

    Named Data Networking (NDN) is one of five projects funded by the U.S. National Science Foundation under its Future Internet Architecture Program. NDN has its roots in an earlier project, Content-Centric Networking (CCN), which Van Jacobson first publicly presented in 2006. The NDN project investigates Jacobson's proposed evolution from today's host-centric network architecture (IP) to a data-centric network architecture (NDN). This conceptually simple shift has far-reaching implications for how we design, develop, deploy, and use networks and applications. We describe the motivation and vision of this new architecture, and its basic components and operations. We also provide a snapshot of its current design, development status, and research challenges. More information about the project, including prototype implementations, publications, and annual reports, is available on named-data.net.
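    For readers new to NDN, here is a toy Python model of its basic forwarding state (Content Store, Pending Interest Table, FIB) to make the data-centric operation concrete; the names and structures are illustrative, not the project's implementation:

    ```python
    # Toy NDN node state: Content Store, Pending Interest Table, FIB.
    CS, PIT, FIB = {}, {}, {"/video": "face2"}

    def on_interest(name, in_face):
        if name in CS:                                # cached data satisfies it
            return ("data", CS[name], in_face)
        PIT.setdefault(name, set()).add(in_face)      # remember who asked
        prefix = "/" + name.split("/")[1]             # toy longest-prefix match
        return ("forward", FIB[prefix])

    def on_data(name, payload):
        CS[name] = payload                            # opportunistic caching
        faces = PIT.pop(name, set())                  # satisfy pending requesters
        return [("data", payload, f) for f in faces]

    print(on_interest("/video/intro", "face1"))   # ('forward', 'face2')
    print(on_data("/video/intro", b"..."))        # delivered back to face1
    print(on_interest("/video/intro", "face3"))   # now served from cache
    ```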

  • A. Venkataramani, J. Kurose, D. Raychaudhuri, K. Nagaraja, M. Mao, S. Banerjee

    MobilityFirst is a future Internet architecture with mobility and trustworthiness as central design goals. Mobility means that all endpoints - devices, services, content, and networks - should be able to frequently change network attachment points in a seamless manner. Trustworthiness means that the network must be resilient to the presence of a small number of malicious endpoints or network routers. MobilityFirst enhances mobility by cleanly separating names or identifiers from addresses or network locations, and enhances security by representing both in an intrinsically verifiable manner, relying upon a massively scalable, distributed, global name service to bind names and addresses, and to facilitate services including device-to-service, multicast, anycast, and context-aware communication, content retrieval, and more. A key insight emerging from our experience is that a logically centralized global name service can significantly enhance mobility and security and transform network-layer functionality. Recognizing and validating this insight is the key contribution of the MobilityFirst architectural effort.

  • T. Anderson, K. Birman, R. Broberg, M. Caesar, D. Comer, C. Cotton, M. Freedman, A. Haeberlen, Z. Ives, A. Krishnamurthy, W. Lehr, B. Loo, D. Mazières, A. Nicolosi, J. Smith, I. Stoica, R. van Renesse, M. Walfish, H. Weatherspoon, C. Yoo

    NEBULA is a proposal for a Future Internet Architecture. It is based on the assumptions that: (1) cloud computing will comprise an increasing fraction of the application workload offered to an Internet, and (2) that access to cloud computing resources will demand new architectural features from a network. Features that we have identified include dependability, security, flexibility and extensibility, the entirety of which constitutes resilience. NEBULA provides resilient networking services using ultrareliable routers, an extensible control plane and use of multiple paths upon which arbitrary policies may be enforced. We report on a prototype system, Zodiac, that incorporates these latter two features.

  • T. Krenc, O. Hohlfeld, A. Feldmann

    On March 17, 2013, an Internet census data set and an accompanying report were released by an anonymous author or group of authors. It created an immediate media buzz, mainly because of the unorthodox and unethical data collection methodology (i.e., exploiting default passwords to form the Carna botnet), but also because of the alleged unprecedented large scale of this census (even though legitimate census studies of similar and even larger sizes have been performed in the past). Given the unknown source of this released data set, little is known about it. For example, can it be ruled out that the data is faked? Or if it is indeed real, what is the quality of the released data? The purpose of this paper is to shed light on these and related questions and put the contributions of this anonymous Internet census study into perspective. Indeed, our findings suggest that the released data set is real and not faked, but that the measurements suffer from a number of methodological flaws and also lack adequate meta-data information. As a result, we have not been able to verify several claims that the anonymous author(s) made in the published report. In the process, we use this study as an educational example for illustrating how to deal with a large data set of unknown quality, hint at pitfalls in Internet-scale measurement studies, and discuss ethical considerations concerning third-party use of this released data set for publications.

  • C. Davids, G. Ormazabal, R. State

    In this article we describe the discussion and conclusions of the "Roundtable on Real-Time Communications Research: What to Study and How to Collaborate" held at the Illinois Institute of Technology's Real-Time Communications Conference and Expo, co-located with the IPTComm Conference, October 15-17, 2013.

  • Y. Wang, I. Matta, F. Esposito, J. Day

    ProtoRINA is a user-space prototype of the Recursive InterNetwork Architecture. RINA is a new architecture that builds on the fundamental principle that networking is interprocess communication. As a consequence, RINA overcomes inherent weaknesses of the current Internet, e.g., security, mobility support, and manageability. ProtoRINA serves not only as a prototype that demonstrates the advantages of RINA, but also as a network experimental tool that enables users to program different policies using its built-in mechanisms. In this note, we introduce ProtoRINA as a vehicle for making RINA concepts concrete and for encouraging researchers to use and benefit from the prototype.

  • kc claffy, D. Clark

    On December 12-13, 2013, CAIDA and the Massachusetts Institute of Technology (MIT) hosted the (invitation-only) 4th interdisciplinary Workshop on Internet Economics (WIE) at the University of California's San Diego Supercomputer Center. This workshop series provides a forum for researchers, commercial Internet facilities and service providers, technologists, economists, theorists, policy makers, and other stakeholders to inform current and emerging regulatory and policy debates. The theme for this year's workshop was the economic health of the Internet ecosystem, including emphasis on the cost of and revenue sources to support content delivery, the quality of user experience, economic and policy influences on and effects of emerging specialized services, and the role of data in evaluating ecosystem health. This report describes the discussions and presents relevant open research questions identified by participants. Slides presented at the workshop and a copy of this final report are available at http://www.caida.org/workshops/wie/1312/.

  • D. Stenberg

    A detailed description of the background and problems with the current HTTP that have led to the development of the next-generation HTTP protocol: HTTP 2. It also describes and elaborates on the new protocol's design and functionality, including some implementation specifics and a few words about the future. This article is an editorial note submitted to CCR. It has NOT been peer reviewed. The author takes full responsibility for this article's technical content. Comments can be posted through CCR Online.

  • Dina Papagiannaki

    Welcome to the April 2014 issue of Computer Communication Review. I am really happy to see CCR increasing its presence in our community and serving as a venue where we express our opinions on the way our community is evolving, discuss its future, and publish papers that advance the state of the art in data communications. In the past 3 months, I have received a number of comments from members of the community on previously published articles, expressing their willingness to contribute to CCR's continued success. Thank you very much!

    This issue of CCR features 13 papers, out of which 6 are editorial notes. The technical papers cover wireless and wired networking solutions, as well as SDN. Our editorials cover workshop reports, but also opinion papers. Lastly, I am very happy to also include an editorial on MCKit, the smartphone app that was launched for SIGCOMM 2013, and the organizers’ thoughts on how well it worked, how it was built, and results on how it was used. I hope it proves to be useful as we are getting close to this year’s SIGCOMM in Chicago.

    One of the discussions we have started in the community has to do with our actual impact on commercial products. March was the month of the Mobile World Congress (MWC), in Barcelona, the premier industry venue in mobile communications and products. It was really exciting to see one of our community's outcomes presented at the venue and receiving tremendous coverage by the media. I am referring to Kumu Networks, a startup company founded by Sachin Katti, Steven Hong, Jeffrey Mehlman, and Mayank Jain, whose seeds were sown at Stanford University, and that aims to commercialize full duplex radio technology. The technology behind Kumu Networks was published at SIGCOMM 2012 and SIGCOMM 2013, as well as at NSDI, Mobicom and Hotnets over the past 4 years. Kumu Networks is a clear testament to the quality of work done in our community, and its relevance to the market. A tremendous achievement by all standards.

    This issue also marks the end of term for Sharad Agarwal, from Microsoft Research in Redmond. I really want to thank Sharad for his contributions throughout his tenure at CCR. We will miss your perspective, as well as some of the greatest public reviews CCR has ever seen!

    We also say goodbye to Matteo Varvello, from Bell Labs. Matteo has been the heart behind the online version of CCR. I would really like to thank him for all his help throughout the past year, and welcome Prof. Mike Wittie, from Montana State University, who joins full of energy as the new CCR publications chair.

    With all that, I hope you enjoy this issue and I am always at your disposal in case of questions or comments.

  • X. Yao, W. Wang, S. Yang, Y. Cen, X. Yao, T. Pan

    This paper proposes an IPB-frame Adaptive Mapping Mechanism (AMM) to improve video transmission quality over IEEE 802.11e Wireless Local Area Networks (WLANs). Based on the frame structure of hierarchical coding technology, the probability of each frame being allocated to the most appropriate Access Category (AC) is dynamically updated according to the frame's importance and the traffic load of each AC. Simulation results show the superior performance of the proposed AMM compared with three other existing mechanisms in terms of three objective metrics.
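    The abstract leaves the update rule unspecified; the following sketch is only our reading of the idea, with made-up weights: frames are mapped probabilistically to access categories, favoring priority levels that match frame importance and discounting loaded ACs.

    ```python
    # Hedged sketch of importance- and load-aware frame-to-AC mapping
    # (illustrative only; not the authors' algorithm).
    import random

    IMPORTANCE = {"I": 3.0, "P": 2.0, "B": 1.0}
    ACS = ["AC_VO", "AC_VI", "AC_BE"]            # high- to low-priority (subset)
    AC_RANK = {"AC_VO": 3.0, "AC_VI": 2.0, "AC_BE": 1.0}

    def map_frame(frame_type, queue_load):
        """queue_load: AC -> occupancy in [0, 1]. Prefer priority matching the
        frame's importance, discounted by how loaded each AC already is."""
        weights = [IMPORTANCE[frame_type] * AC_RANK[ac] * (1.0 - queue_load[ac])
                   for ac in ACS]
        if sum(weights) <= 0:
            return "AC_BE"                       # everything saturated: best effort
        return random.choices(ACS, weights=weights)[0]

    load = {"AC_VO": 0.9, "AC_VI": 0.3, "AC_BE": 0.1}
    print(map_frame("I", load))  # I-frames usually avoid the saturated AC_VO
    ```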

    Joseph Camp
  • F. Ge, L. Tan

    A communication network usually has data packets and acknowledgment (ACK) packets being transmitted in opposite directions. ACK packet flows may affect the performance of data packet flows, which is unfortunately not considered in the usual network utility maximization (NUM) model. This paper presents a NUM model for networks with two-way flows (NUMtw) by adding a routing matrix to cover ACK packet flows. The source rates are obtained by solving the dual model, and their relation to the routing matrix of ACK packet flows is revealed. Furthermore, the source rates in networks with one-way flows under the usual NUM model are compared to those in networks with two-way flows under the NUMtw model.
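    A plausible rendering of the NUMtw formulation in our own notation (the paper's exact model may differ): the standard capacity constraint is extended with a second routing matrix $R'$ for ACK flows, whose rates are proportional to the data rates (one ACK every $k$ data packets).

    ```latex
    % Illustrative two-way NUM formulation; R routes data, R' routes ACKs.
    \begin{align*}
    \max_{x \ge 0} \quad & \sum_{s} U_s(x_s) \\
    \text{s.t.} \quad & \sum_{s} R_{ls}\, x_s + \sum_{s} R'_{ls}\, \frac{x_s}{k} \le c_l \qquad \forall\, l
    \end{align*}
    ```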

    Nikolaos Laoutaris
  • A. Lodhi, N. Larson, A. Dhamdhere, C. Dovrolis, K. Claffy

    In this study we mine one of the few sources of public data available about the interdomain peering ecosystem: PeeringDB [1], an online database where participating networks contribute information about their peering policies, traffic volumes and presence at various geographic locations. Although established to support the practical needs of operators, this data also provides a valuable source of information to researchers. Using BGP data to cross-validate three years of PeeringDB snapshots, we find that PeeringDB membership is reasonably representative of the Internet's transit, content, and access providers in terms of business types and geography of participants, and PeeringDB data is generally up-to-date. We find strong correlations among different measures of network size - BGP-advertised address space, PeeringDB-reported traffic volume and presence at peering facilities - and between these size measures and advertised peering policies.

    Renata Teixeira
  • M. Raju, A. Wundsam, M. Yu

    In spite of the standardization of the OpenFlow API, it is very difficult to write an SDN controller application that is portable (i.e., guarantees correct packet processing over a wide range of switches) and achieves good performance (i.e., fully leverages switch capabilities). This is because the switch landscape is fundamentally diverse in performance, feature set and supported APIs. We propose to address this challenge via a lightweight portability layer that acts as a rendezvous point between the requirements of controller applications and the vendor knowledge of switch implementations. Above, applications specify rules in virtual flow tables annotated with semantic intents and expectations. Below, vendor-specific drivers map them to optimized switch-specific rule sets. NOSIX represents a first step towards achieving both portability and good performance across a diverse set of switches.
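    An assumed, minimal shape for this idea in Python: applications write intent-annotated rules into virtual flow tables, and a hypothetical vendor driver compiles them for a specific switch (the intent names and driver logic are our inventions for illustration):

    ```python
    # Illustrative virtual flow table rule plus a toy vendor driver.
    from dataclasses import dataclass

    @dataclass
    class VirtualRule:
        match: dict      # e.g. {"ip_dst": "10.0.0.0/24"}
        actions: list    # e.g. ["fwd:port2"]
        intent: str      # semantic annotation, e.g. "exact-counters"

    def driver_compile(rule: VirtualRule, switch_model: str) -> dict:
        """Hypothetical driver: pick a table/format suited to this switch."""
        table = "tcam" if rule.intent == "exact-counters" else "hash"
        return {"model": switch_model, "table": table,
                "match": rule.match, "actions": rule.actions}

    r = VirtualRule({"ip_dst": "10.0.0.0/24"}, ["fwd:port2"], "exact-counters")
    print(driver_compile(r, "vendorX-v1"))
    ```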

    Hitesh Ballani
  • R. Singh, T. Brecht, S. Keshav

    The number of publicly accessible virtual execution environments (VEEs) has been growing steadily in the past few years. To be accessible by clients, such VEEs need either a public IPv4 or a public IPv6 address. However, the pool of available public IPv4 addresses is nearly depleted and the low rate of adoption of IPv6 precludes its use. Therefore, what is needed is a way to share precious IPv4 public addresses among a large pool of VEEs. Our insight is that if an IP address is assigned at the time of a client DNS request for the VEE’s name, it is possible to share a single public IP address amongst a set of VEEs whose workloads are not network intensive, such as those hosting personal servers or performing data analytics. We investigate several approaches to multiplexing a pool of global IP addresses among a large number of VEEs, and design a system that overcomes the limitations of current approaches. We perform a qualitative and quantitative comparison of these solutions. We find that upon receiving a DNS request from a client, our solution has a latency as low as 1 ms to allocate a public IP address to a VEE, while keeping the size of the required IP address pool close to the minimum possible.
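    A toy Python sketch of the core insight under our own assumptions (bind a public address on DNS query, reclaim idle bindings); the actual system's allocation policy is more sophisticated:

    ```python
    # DNS-triggered sharing of a small public IPv4 pool among many VEEs.
    import time

    POOL = ["198.51.100.1", "198.51.100.2"]   # free public addresses (toy pool)
    bound = {}                                 # VEE name -> (ip, last_used)

    def on_dns_query(vee: str, idle_timeout: float = 60.0) -> str:
        now = time.time()
        if vee in bound:                       # already mapped: refresh and reuse
            ip, _ = bound[vee]
            bound[vee] = (ip, now)
            return ip
        for v, (ip, last) in list(bound.items()):   # reclaim idle bindings
            if now - last > idle_timeout:
                POOL.append(ip)
                del bound[v]
        ip = POOL.pop()                        # allocate (assumes pool non-empty)
        bound[vee] = (ip, now)
        return ip                              # returned in the DNS answer

    print(on_dns_query("alice-vee.example.org"))
    ```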

    Phillipa Gill
  • G. Bianchi, M. Bonola, A. Capone, C. Cascone

    Software Defined Networking envisions smart centralized controllers governing the forwarding behavior of dumb low-cost switches. But are “dumb” switches an actual strategic choice, or (at least to some extent) are they a consequence of the lack of viable alternatives to OpenFlow as programmatic data plane forwarding interface? Indeed, some level of (programmable) control logic in the switches might be beneficial to offload logically centralized controllers (de facto complex distributed systems) from decisions just based on local states (versus network-wide knowledge), which could be handled at wire speed inside the device itself. Also, it would reduce the amount of flow processing tasks currently delegated to specialized middleboxes. The underlying challenge is: can we devise a stateful data plane programming abstraction (versus the stateless OpenFlow match/action table) which still entails high performance and remains consistent with the vendors’ preference for closed platforms? We posit that a promising answer revolves around the usage of extended finite state machines, as an extension (super-set) of the OpenFlow match/action abstraction. We concretely turn our proposed abstraction into an actual table-based API, and, perhaps surprisingly, we show how it can be supported by (mostly) reusing core primitives already implemented in OpenFlow devices.
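    To make the abstraction concrete, here is a minimal Python rendering of an extended-finite-state-machine table: (state, match) -> (action, next state). Port knocking, used below, is a natural example of per-flow logic that such a stateful data plane could handle locally without the controller; the table format is our simplification:

    ```python
    # Toy XFSM table: a port-knocking filter expressed as (state, event) rules.
    XFSM = {
        ("DEFAULT", "knock:1234"): ("drop", "STAGE1"),
        ("STAGE1",  "knock:4321"): ("drop", "OPEN"),
        ("OPEN",    "tcp:22"):     ("forward", "OPEN"),
    }

    flow_state = {}   # per-flow state, kept inside the "switch"

    def process(flow_id, event):
        state = flow_state.get(flow_id, "DEFAULT")
        action, nxt = XFSM.get((state, event), ("drop", "DEFAULT"))
        flow_state[flow_id] = nxt
        return action

    for ev in ["knock:1234", "knock:4321", "tcp:22"]:
        print(process("10.0.0.7", ev))   # drop, drop, forward
    ```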

    Hitesh Ballani
  • M. Honda, F. Huici, C. Raiciu, J. Araujo, L. Rizzo

    Recent studies show that more than 86% of Internet paths allow well-designed TCP extensions, meaning that it is still possible to deploy transport layer improvements despite the existence of middleboxes in the network. Hence, the blame for the slow evolution of protocols (with extensions taking many years to become widely used) should be placed on end systems. In this paper, we revisit the case for moving protocol stacks up into user space in order to ease the deployment of new protocols, extensions, or performance optimizations. We present MultiStack, operating system support for user-level protocol stacks. MultiStack runs within commodity operating systems, can concurrently host a large number of isolated stacks, has a fall-back path to the legacy host stack, and is able to process packets at rates of 10 Gb/s. We validate our design by showing that our mux/demux layer can validate and switch packets at line rate (up to 14.88 Mpps) on a 10 Gbit port using 1-2 cores, and that a proof-of-concept HTTP server running over a basic userspace TCP outperforms by 18–90% both the same server and nginx running over the kernel's stack.
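    A toy model of the mux/demux layer's role (not MultiStack's code): packets are switched to registered user-level stacks by 3-tuple, with a fall-back to the legacy host stack for unregistered traffic:

    ```python
    # Demux packets to isolated user-level stacks by (proto, dst_ip, dst_port).
    registry = {}   # 3-tuple -> stack name

    def register(stack, proto, ip, port):
        registry[(proto, ip, port)] = stack

    def demux(pkt):
        key = (pkt["proto"], pkt["dst_ip"], pkt["dst_port"])
        return registry.get(key, "kernel-stack")   # fall back to host stack

    register("userspace-tcp-1", "tcp", "10.0.0.2", 80)
    print(demux({"proto": "tcp", "dst_ip": "10.0.0.2", "dst_port": 80}))  # userspace-tcp-1
    print(demux({"proto": "udp", "dst_ip": "10.0.0.2", "dst_port": 53}))  # kernel-stack
    ```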

    Sharad Agarwal
  • L. Zhan, D. Chiu

    Smartphones have become very popular. Most people attending a conference have a smartphone with them, so it is natural to think about how to build a mobile application to support a conference. In the process of organizing ACM Sigcomm 2013, we initiated a student project to build such a conference app. As conference organizers, we had good motivation and inspiration to design the functions we would like to support. In this paper, we share our experiences in both functional design and implementation, as well as our experience in trying the app out during Sigcomm 2013.

  • B. Carpenter

    This note describes how the Internet has got itself into deep trouble by over-reliance on IP addresses and discusses some possible ways forward.

  • S. Vissicchio, L. Vanbever, O. Bonaventure

    Software Defined Networking (SDN) promises to ease design, operation and management of communication networks. However, SDN comes with its own set of challenges, including incremental deployability, robustness, and scalability. Those challenges make a full SDN deployment difficult in the short-term and possibly inconvenient in the longer-term. In this paper, we explore hybrid SDN models that combine SDN with a more traditional networking approach based on distributed protocols. We show a number of use cases in which hybrid models can mitigate the respective limitations of traditional and SDN approaches, providing incentives to (partially) transition to SDN. Further, we expose the qualitatively diverse tradeoffs that are naturally achieved in hybrid models, making them convenient for different transition strategies and long-term network designs. For those reasons, we argue that hybrid SDN architectures deserve more attention from the scientific community.

  • E. Kenneally, M. Bailey

    The inaugural Cyber-security Research Ethics Dialogue & Strategy Workshop (CREDS) was held on May 23, 2013, in conjunction with the IEEE Security & Privacy Symposium in San Francisco, California. CREDS embraced the theme of ethics-by-design in the context of cyber security research, and aimed to:
    - Educate participants about underlying ethics principles and applications;
    - Discuss ethical frameworks and how they are applied across the various stakeholders and respective communities who are involved;
    - Impart recommendations about how ethical frameworks can be used to inform policymakers in evaluating the ethical underpinnings of critical policy decisions;
    - Explore cyber security research ethics techniques, tools, standards and practices so researchers can apply ethical principles within their research methodologies; and
    - Discuss specific case vignettes and explore the ethical implications of common research acts and omissions.

  • Mat Ford

    This paper reports on a workshop convened to develop an action plan to reduce Internet latency. Internet latency has become a focus of attention at the leading edge of the industry as the desire to make Internet applications more responsive outgrows the ability of increased bandwidth to address this problem. There are fundamental limits to the extent to which latency can be reduced, but there is considerable capacity for improvement throughout the system, making Internet latency a multifaceted challenge. Perhaps the greatest challenge of all is to re-educate the mainstream of the industry to understand that bandwidth is not the panacea, and other optimizations, such as reducing packet loss, are at odds with latency reduction.

    For Internet applications, reducing the latency impact of sharing the communications medium with other users and applications is key. Current Internet network devices were often designed with a belief that additional buffering would reduce packet loss. In practice, this additional buffering leads to intermittently excessive latency and even greater packet loss under saturating load. For this reason, getting smarter queue management techniques more widely deployed is a high priority. We can reduce these intermittent increases in delay, sometimes by up to two orders of magnitude, by shifting the focus from packet loss avoidance to delay avoidance using technology that we already have developed, tested, implemented and deployed today.

    There is also plenty of scope for removing other major sources of delay. For instance, connecting to a website could be completed in one roundtrip (the time it takes for packets to travel from source to destination and back again) rather than three or four, by folding two or three rounds of flow and security set-up into the first data exchange, without compromising security or efficiency. Motivating the industry to deploy these advances needs to be aided by the availability of mass-market latency testing tools that could give consumers the information they need to gravitate towards low latency services, providers and products. There is no single network latency metric but several alternatives have been identified that compactly express aggregate delay (e.g. as relationships or a constellation), and tools that make use of these will give greater insight into the impact of changes and the diversity of Internet connections around the world.

    In many developing countries (and in rural regions of developed countries), aside from Internet access itself, there are significant structural issues, such as trombone routes through the developed world and a lack of content distribution networks (CDNs), that need to be addressed with more urgency than Active Queue Management (AQM) deployment, but the 'blank slate' of new deployments provides an opportunity to consider latency now. More widespread use of Internet exchange points for hosting local content and fostering local interconnections is key to addressing some of these structural challenges.

  • N. Feamster, J. Rexford, E. Zegura

    Software Defined Networking (SDN) is an exciting technology that enables innovation in how we design and manage networks. Although this technology seems to have appeared suddenly, SDN is part of a long history of efforts to make computer networks more programmable. In this paper, we trace the intellectual history of programmable networks, including active networks, early efforts to separate the control and data plane, and more recent work on OpenFlow and network operating systems. We highlight key concepts, as well as the technology pushes and application pulls that spurred each innovation. Along the way, we debunk common myths and misconceptions about the technologies and clarify the relationship between SDN and related technologies such as network virtualization.

  • A. Dainotti, K. Benson, A. King, kc claffy, M. Kallitsis, E. Glatz, X. Dimitropoulos

    This erratum is intended to help readers find and properly understand our contribution to the SIGCOMM CCR Newsletter, Volume 44, Issue 1 (January 2014), pages 42-49.

  • Dina Papagiannaki
    Happy new year! Welcome to the January 2014 issue of ACM Computer Communication Review. We are starting the new year with one of the largest CCR issues I have had the pleasure to edit. This issue contains 10 papers: 6 technical peer-reviewed contributions and 4 editorial notes.
     
    The technical papers cover a range of areas, such as routing, Internet measurements, WiFi networking, named data networking and online social networks. They should make a very diverse and interesting read for the CCR audience. In the editorial zone, we have had the pleasure of receiving 4 contributions, 3 of which address fundamental issues around how our community works.
     
    In his editorial note, Prof. Nick McKeown, from Stanford University, provides his perspective on what goes right and what could be improved in the way our premier conference, ACM SIGCOMM, is organized. Prof. McKeown makes a case for a more inclusive conference, drawing examples from other communities. He further attempts to identify possible directions we could pursue in order to transfer our fundamental contributions to industry and society as a whole.
     
    One more editorial touches upon some of the issues that Prof. McKeown outlines in his note. Its focus is to identify ways to bridge the gap between the networking community and the Internet standardization bodies. The authors, from Broadcom, Nokia, the University of Cambridge, Aalto University and the University of Helsinki, describe the differences and similarities between how the two communities operate. They further provide interesting data on the participation of academic and industrial researchers in standardization bodies, and discuss ways to minimize the friction that may arise as a particular technology makes the leap from the scientific community into industry.
     
    Similarities can also be found in Dr. Partridge's editorial. Dr. Partridge identifies the difficulties faced in publishing work that challenges the existing Internet architecture. One of the interesting recommendations made in the editorial is that a new Internet architecture should not start off trying to be backwards compatible. He encourages our community to be more receptive when it comes to such contributions.
     
    Lastly, we have the pleasure of hosting our second interview in this issue of CCR. Prof. Mellia interviewed Dr. Antonio Nucci, the current CTO of Narus, based in the Bay Area. In this interview you will find a description of Dr. Nucci's journey from academic researcher to Best CTO awardee, along with his recommendations on interesting research directions for current and future PhD candidates.
     
    All in all, this issue of CCR features a number of interesting, thought-provoking articles that we hope you enjoy. The intention behind some of them is that they become the catalyst for a discussion on how we can make our work more impactful in today's society, a discussion that I find of critical importance given our society's increasing reliance on the Internet.
     
    This issue is also accompanied by a number of departures from the editorial board. I would like to thank Dr. Nikolaos Laoutaris, and Dr. Jia Wang, for their continuous help over the past 2 and 3 years respectively. And we are welcoming Prof. Phillipa Gill, from Stony Brook University, and Prof. Joel Sommers, from Colgate University. They both join the editorial board with a lot of passion to contribute to CCR’s continued success.
    I hope this issue stimulates some discussion and I am at your disposal for any questions or suggestions.
  • Ahmed Elmokashfi, Amogh Dhamdhere
    In the mid-2000s there was some concern in the research and operational communities over the scalability of BGP, the Internet's interdomain routing protocol. The focus was on update churn (the number of routing protocol messages that are exchanged when the network undergoes routing changes) and whether churn was growing too fast for routers to handle. Recent work somewhat allayed those fears, showing that update churn grows slowly in IPv4, but the question of routing scalability has re-emerged with IPv6. In this work, we develop a model that expresses BGP churn in terms of four measurable properties of the routing system. We show why the number of updates normalized by the size of the topology is constant, and why routing dynamics are qualitatively similar in IPv4 and IPv6. We also show that the exponential growth of IPv6 churn is entirely expected, as the underlying IPv6 topology is also growing exponentially.
    Jia Wang
  • Mishari Almishari, Paolo Gasti, Naveen Nathan, Gene Tsudik
    Content-Centric Networking (CCN) is an alternative to today's Internet IP-style packet-switched host-centric networking. One key feature of CCN is its focus on content distribution, which dominates current Internet traffic and which is not well served by IP. Named Data Networking (NDN) is an instance of CCN; it is an ongoing research effort aiming to design and develop a full-blown candidate future Internet architecture. Although NDN emphasizes content distribution, it must also support other types of traffic, such as conferencing (audio, video) as well as more historical applications, such as remote login and file transfer. However, the suitability of NDN for applications that are not obviously or primarily content-centric is unclear. We believe that such applications are not going away any time soon. In this paper, we explore NDN in the context of a class of applications that involve low-latency bi-directional (point-to-point) communication. Specifically, we propose a few architectural amendments to NDN that provide significantly better throughput and lower latency for this class of applications by reducing routing and forwarding costs. The proposed approach is validated via experiments.
    Katerina Argyraki
  • Mohammad Rezaur Rahman, Pierre-André Noël, Chen-Nee Chuah, Balachander Krishnamurthy, Raissa M. D'Souza, S. Felix Wu
    Online social network (OSN) based applications often rely on user interactions to propagate information or to recruit more users, producing a sequence of user actions called an adoption process, or cascade. This paper presents the first attempt to quantitatively study the adoption process of such OSN-based applications by analyzing detailed user activity data from a popular Facebook gifting application. In particular, due to the challenge of monitoring user interactions over all possible channels on OSN platforms, we focus on characterizing the adoption process that relies only on user-based invitations (which is applicable to most gifting applications). We characterize adoptions by tracking the invitations sent by existing users to their friends through the Facebook gifting application and the events when their friends install the application for the first time. We found that a small number of big cascades carry the adoption of most of the application users. Contrary to common belief, we did not observe special influential nodes that are responsible for the viral adoption of the application.
    Fabian E. Bustamante
  • Phillipa Gill, Michael Schapira, Sharon Goldberg
    Researchers studying the inter-domain routing system typically rely on models to fill in the gaps created by the lack of information about the business relationships and routing policies used by individual autonomous systems. To shed light on this unknown information, we asked 100 network operators about their routing policies, billing models, and thoughts on routing security. This short paper reports the survey's results and discusses their implications.
    Jia Wang
  • Pablo Salvador, Luca Cominardi, Francesco Gringoli, Pablo Serrano
    The IEEE 802.11aa Task Group has recently standardized a set of mechanisms to efficiently support video multicasting, namely, the Group Addressed Transmission Service (GATS). In this article, we report the implementation of these mechanisms over commodity hardware, which we make publicly available, and conduct a study to assess their performance under a variety of real-life scenarios. To the best of our knowledge, this is the first experimental assessment of GATS, which is performed along three axes: we report their complexity in terms of lines of code, their effectiveness when delivering video traffic, and their efficiency when utilizing wireless resources. Our results provide key insights on the resulting trade-offs when using each mechanism, and pave the way for new enhancements to deliver video over 802.11 Wireless LANs.
    Sharad Agarwal