Computer Communication Review: Papers

  • Dina Papagiannaki

    Welcome to the April issue of Computer Communication Review, our community's quarterly newsletter. Or maybe workshop? Many may not realize it, but CCR actually operates like a workshop with quarterly deadlines. Every quarter we receive 40-60 submissions, which are reviewed by a collective of more than 100 reviewers and handled by our 12-member editorial board, to which I would like to welcome Alberto Dainotti, from CAIDA. Out of all submissions, some technical papers are published in the current issue, while others receive feedback for further improvement and are re-evaluated for a later issue. I cannot thank enough the hard-working editorial board that, quarter after quarter, handles its allocated papers, aiming to provide the best possible feedback. Editorial papers are not peer reviewed, but are reviewed solely by me. They fall into two categories: i) position papers, and ii) workshop reports. My task is to ensure that the positions are clearly expressed and to identify cases where positions are presented through technical arguments, in which case I may engage someone from the editorial board or redirect the paper to the technical track. Fundamentally, CCR is a vehicle to bring our community together and expose interesting, novel ideas as early as possible. And I believe we do achieve this.

    This issue is an example of the above process. We received 36 papers: 32 technical submissions and 4 editorials. We accepted all editorials and 2 of the technical papers, while 10 papers were recommended for resubmission, with clear recommendations on the changes required. Many authors agree that their papers have improved in clarity and technical accuracy through the process of revise-and-resubmit. I hope you enjoy the two technical papers, on rate adaptation in 802.11n and on multipath routing in wireless sensor networks.

    Three of the editorials cover workshops and community meetings: 1) the 1st Named Data Networking community meeting, 2) the Dagstuhl seminar on distributed cloud computing, and 3) the 1st Data Transparency Lab workshop. Meeting reports are a wonderful way of tracking the state of the art in specific areas and learning from the findings of the organizers. The last editorial is one of my favorites so far. The authors provide a unique historical perspective on how IP address allocation has evolved since the inception of the Internet, and on the implications that our community has to deal with. It is a very interesting exposition of IP address scarcity, but also a very valuable perspective on how the Internet as a whole has evolved.

    This issue also brings a novelty. We are establishing a new column, edited by Dr. Renata Teixeira from INRIA, that aims to present successful examples of technology transfer from our community to the networking industry. The inaugural example is provided by Dr. Paul Francis, discussing Network Address Translation (NAT). It is funny that NAT was first proposed in CCR, and that the non-workshop editorial of this issue also deals with IP address scarcity. It is interesting to read Paul's exposition of the events, along with his own reflections on whether what was transferred was what he actually proposed :-). With all this, I hope you enjoy the content of this issue, as well as our second column on graduate advice.

    Finally, I expect to see you all at the Best of CCR session of ACM Sigcomm in London, the Sigcomm session where we celebrate the best technical paper and the best editorial published by CCR during the past year.
    Dina Papagiannaki CCR Editor

  • L. Kriara, M. Marina

    We consider the link adaptation problem in 802.11n wireless LANs that involves adapting MIMO mode, channel bonding, modulation and coding scheme, and frame aggregation level with varying channel conditions. Through measurement-based analysis, we find that adapting all available 802.11n features results in higher goodput than adapting only a subset of features, thereby showing that holistic link adaptation is crucial to achieve best performance. We then design a novel hybrid link adaptation scheme termed SampleLite that adapts all 802.11n features while being efficient compared to sampling-based open-loop schemes and practical relative to closed-loop schemes. SampleLite uses sender-side RSSI measurements to significantly lower the sampling overhead, by exploiting the monotonic relationship between best settings for each feature and the RSSI. Through analysis and experimentation in a testbed environment, we show that our proposed approach can reduce the sampling overhead by over 70% on average compared to the widely used Minstrel HT scheme. We also experimentally evaluate the goodput performance of SampleLite in a wide range of controlled and real-world interference scenarios. Our results show that SampleLite, while performing close to the ideal, delivers goodput that is 35-100% better than with existing schemes.
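
    To make the claimed monotonicity concrete, here is a minimal sketch (not the authors' implementation; the candidate grid and RSSI thresholds are invented for illustration) of how sender-side RSSI could prune the joint 802.11n setting space before any sampling takes place.

```python
# Illustrative sketch: narrowing the 802.11n sampling space using sender-side
# RSSI, exploiting a monotonic RSSI-to-best-setting relationship.
# The thresholds and the candidate grid below are made-up placeholders.

# Candidate settings: (MIMO streams, channel width MHz, MCS index, aggregation frames)
ALL_SETTINGS = [
    (streams, width, mcs, aggr)
    for streams in (1, 2)
    for width in (20, 40)
    for mcs in range(0, 8)
    for aggr in (1, 8, 32)
]

def candidate_settings(rssi_dbm):
    """Keep only settings plausible at this RSSI, so the rate controller
    samples a handful of them instead of the full cross-product."""
    if rssi_dbm > -55:          # strong signal: only aggressive settings
        keep = lambda s: s[2] >= 5
    elif rssi_dbm > -70:        # medium signal: mid-range MCS, either width
        keep = lambda s: 2 <= s[2] <= 6
    else:                       # weak signal: single stream, 20 MHz, low MCS
        keep = lambda s: s[0] == 1 and s[1] == 20 and s[2] <= 3
    return [s for s in ALL_SETTINGS if keep(s)]

print(len(ALL_SETTINGS), "->", len(candidate_settings(-72.0)), "settings to sample")
```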

    Aline Carneiro Viana
  • S. Sharma, S. Jena

    A Wireless Sensor Network (WSN) consists of low-power sensor nodes, and energy is the main constraint on these nodes. In this paper, we propose a cluster-based multipath routing protocol, which uses clustering and multipath techniques to reduce energy consumption and increase reliability. The basic idea is to reduce the load on the sensor nodes by giving more responsibility to the base station (sink). We have implemented the protocol, compared it with existing protocols, and found that it is more energy-efficient and reliable.

    Joseph Camp
  • P. Richter, M. Allman, R. Bush, V. Paxson

    With the ongoing exhaustion of free address pools at the registries serving the global demand for IPv4 address space, scarcity has become reality. Networks in need of address space can no longer get more address allocations from their respective registries. In this work we frame the fundamentals of the IPv4 address exhaustion phenomenon and connected issues. We elaborate on how the current ecosystem of IPv4 address space has evolved since the standardization of IPv4, leading to the rather complex and opaque scenario we face today. We outline the evolution in address space management as well as address space use patterns, identifying key factors of the scarcity issues. We characterize the possible solution space to overcome these issues and develop the perspective of address blocks as virtual resources, which involves issues such as differentiation between address blocks, the need for resource certification, and issues arising when transferring address space between networks.

  • claffy, J. Polterock, A. Afanasyev, J. Burke, L. Zhang

    This report is a brief summary of the first NDN Community Meeting held at UCLA in Los Angeles, California on September 4-5, 2014. The meeting provided a platform for the attendees from 39 institutions across seven countries to exchange their recent NDN research and development results, to debate existing and proposed functionality in security support, and to provide feedback into the NDN architecture design evolution.

  • Y. Coady, O. Hohlfeld, J. Kempf, R. McGeer, S. Schmid

    A distributed cloud connecting multiple, geographically distributed and smaller datacenters, can be an attractive alternative to today’s massive, centralized datacenters. A distributed cloud can reduce communication overheads, costs, and latencies by offering nearby computation and storage resources. Better data locality can also improve privacy. In this paper, we revisit the vision of distributed cloud computing, and identify different use cases as well as research challenges. This article is based on the Dagstuhl Seminar on Distributed Cloud Computing, which took place in February 2015 at Schloss Dagstuhl.

  • R. Gross-Brown, M. Ficek, J. Agundez, P. Dressler, N. Laoutaris

    On November 20 and 21, 2014, Telefonica I+D hosted the Data Transparency Lab ("DTL") Kickoff Workshop on Personal Data Transparency and Online Privacy at its headquarters in Barcelona, Spain. This workshop provided a forum for technologists, researchers, policymakers and industry representatives to share and discuss current and emerging issues around privacy and transparency on the Internet. The objective of this workshop was to kick-start the creation of a community of research, industry, and public interest parties that will work together towards the following objectives: the development of methodologies and user-friendly tools to promote transparency and empower users to understand online privacy issues and consequences; the sharing of datasets and research results; and the support of research through grants and the provision of infrastructure to deploy tools. With the above activities, the DTL community aims to improve our understanding of technical, ethical, economic and regulatory issues related to the use of personal data by online services. It is hoped that successful execution of such activities will help sustain a fair and transparent exchange of personal data online. This report summarizes the presentations, discussions and questions that resulted from the workshop.

  • Bruce Davie, Christophe Diot, Lars Eggert, Nick McKeown, Venkat Padmanabhan, Renata Teixeira

    As networking researchers, we love to work on ideas that improve the practice of networking. In the early pioneering days of the Internet, the link between networking researchers and practitioners was strong; the community was small and everyone knew each other. Not only were there many important ideas from the research community that affected the practice of networking, we were all very aware of them. Today, the networking industry is enormous and the practice of networking spans many network equipment vendors, operators, chip builders, the IETF, data centers, wireless and cellular, and so on. There continue to be many transfers of ideas, but there isn't a forum to learn about them. The goal of this series is to create such a forum by presenting articles that shine a spotlight on specific examples; not only on the technology and ideas, but also on the path the ideas took to affect the practice. Sometimes a research paper was picked up by chance; but more often, the researchers worked hand-in-hand with the standards community, the open-source community or industry to develop the idea further and make it suitable for adoption. Their story is here.

    We are seeking CCR articles describing interesting cases of "research affecting the practice," including ideas transferred from research labs (academic or industrial) that became:
      • Commercial products
      • Internet standards
      • Algorithms and ideas embedded into new or existing products
      • Widely used open-source software
      • Ideas deployed first by startups, by existing companies or by distribution of free software
      • Communities built around a toolbox, language, or dataset

    We also welcome stories of negative experiences, or ideas that seemed promising but ended up not taking off. Paul Francis has agreed to start this editorial series. Enjoy it!
    Bruce Davie, Christophe Diot, Lars Eggert, Nick McKeown, Venkat Padmanabhan, and Renata Teixeira SIGCOMM Industrial Liaison Board

  • Aditya Akella

    As some of you may know, I gave a talk at CoNEXT 2014 titled "On future-proofing networks" (see [1] for slides). While most of the talk was focused on my past research projects, I spent a bit of time talking about the kind of problems I like to work on. I got several interesting questions about the latter, both during the talk and in the weeks following the talk. Given this, I thought I would devote this column (and perhaps parts of future columns) to putting my ideas on problem selection into words. I suspect what I write below will generate more questions than answers in your mind. Don’t hold your questions back, though! Write to me at guru@sigcomm.org!

  • Paul Francis

    In January of 1993, Tony Eng and I published in CCR the first paper to propose Network Address Translation (NAT), based on work done the previous summer during Tony's internship with me at Bellcore. Early in 1994, according to Wikipedia, John Mayes started development of the PIX (Private Internet Exchange) firewall product, which included NAT as a key feature. In May of 1994 the first RFC on NAT was published (RFC 1631). PIX was a huge success, and was bought by Cisco in November of 1995.

    I was asked by the Sigcomm Industrial Liaison Board to write the first of a series of articles under the theme "Examples of Research Affecting the Practice of Networking." The goal of the series is to help find ways for the research community to increase its industrial impact. I argued with the board that NAT isn't a very good example, because I don't think the lessons learned apply very well to today's environment. Better to start with a contemporary example like OpenFlow. Why isn't NAT a good example? It's because I as a researcher had nothing to do with the success of NAT, and the reasons for the success of NAT had nothing to do with the reason I developed NAT in the first place. Let me explain.

    I conceived of NAT as a solution to the expected depletion of the IPv4 address space. Nobody bought a PIX, however, because they wanted to help slow down the global consumption of IP addresses. People bought PIX firewalls because they needed a way to connect their private networks to the public Internet. At the time, it was a relatively common practice for people to assign any random unregistered IP address to private networking equipment or hosts. Picking a random address was much easier than, for instance, going to a regional internet registry to obtain addresses, especially when there was no intent to connect the private network to the public Internet. This obviously became a problem once someone using unregistered private addresses wished to connect to the Internet. The PIX firewall saved people the trouble of trying to obtain an adequate block of IP addresses, and of having to renumber networks in order to connect to the Internet if they had already been using unregistered addresses. Indeed, PIX was conceived by John Mayes as an IP variant of the telephone PBX (Private Branch Exchange), which allows private phone systems to operate with their own private numbers, often needing only a single public phone number.

    The whole NAT episode, seen from my point of view at the time, boils down to this. I published NAT in CCR (thanks to Dave Oran, who was editor at the time and allowed it). For me, that was the end of it. Some time later, I was contacted by Kjeld Egevang of Cray Communications, who wanted to write an RFC on NAT so that they could legitimize their implementation of NAT. So I helped a little with that and lent my name to the RFC. (In those days, all you needed to publish an RFC was the approval of one guy, Jon Postel!) Next thing I knew, NAT was everywhere. Given that the problem that I was trying to solve and the problem that PIX solved are different, there is in fact no reason to think that John Mayes or Kjeld Egevang got the idea for NAT from the CCR paper. So what would the lesson for researchers today be? This: solve an interesting problem with no business model, publish a paper about it, and hope that somebody uses the same idea to solve some problem that does have a business model. Clearly not a very interesting lesson.

    I agree with the motivation of this article series. It is very hard to have industrial impact in networking today. The last time I tried was five or six years ago, when I made a serious effort to turn ViAggre (NSDI '09) into an RFC. Months of effort yielded nothing, and perhaps rightly so. I hope others, especially others with more positive recent results, will contribute to this series.

  • Dina Papagiannaki

    Happy new year, and one more issue of CCR in your inbox or your mailbox. At CCR, we are beginning the new year with a lot of energy and new content that we would like to establish as mainstream in our publication.

    Starting in January 2015, we are introducing a new column in CCR, edited by Prof. Aditya Akella, from the University of Wisconsin-Madison. Its goal is to provide research and professional advice to our ever-growing community. Prof. Akella describes his intentions for the column in his own editorial. I sincerely hope that this new column will be tremendously successful and will help many CCR readers navigate academic and research directions or career choices.

    This issue contains one technical paper, which looks into the tail loss recovery mechanism of TCP, and two editorials. The first editorial is an interesting overview of how Internet Exchange Points have evolved in Europe and the U.S.A. The authors provide technical and business-related reasons for the observed evolution and outline the issues that will require more attention in the future. The second editorial is what I promised in my October 2014 editor's note. Dr. George Varghese has provided CCR with an editorial note that captures his thinking around what he calls "confluences", which he presented during his SIGCOMM keynote speech. Reading his editorial literally brought me back to Chicago and the auditorium where George received his SIGCOMM award. I find the concept of "confluence" very important. Finding such confluences is not easy, but when it happens, research becomes fun, exciting, and certainly far easier to motivate and transfer to actual products. I do hope that PhD candidates try to apply George's framework as they search for their thesis topics.

    With all this, I wanted to wish you a very happy and productive 2015. We are always looking forward to your contributions!

  • M. Rajiullah, P. Hurtig, A. Brunstrom, A. Petlund, M. Welzl

    Interactive applications do not require more bandwidth to go faster. Instead, they require less latency. Unfortunately, the current design of transport protocols such as TCP limits possible latency reductions. In this paper we evaluate and compare different loss recovery enhancements to fight tail loss latency. The two recently proposed mechanisms "RTO Restart" (RTOR) and "Tail Loss Probe" (TLP) as well as a new mechanism that applies the logic of RTOR to the TLP timer management (TLPR) are considered. The results show that the relative performance of RTOR and TLP when tail loss occurs is scenario dependent, but with TLP having potentially larger gains. The TLPR mechanism reaps the benefits of both approaches and in most scenarios it shows the best performance.
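
    As a rough illustration of the timer differences (my own simplification with invented numbers, not the paper's implementation), the sketch below arms the retransmission and probe timers under classic RTO, RTOR, TLP, and the combined TLPR variant when an ACK arrives while data is still outstanding.

```python
# RTO restart (RTOR): expire RTO seconds after the *earliest* outstanding
# segment was sent, instead of RTO seconds from "now".

def arm_classic_rto(now, rto):
    return now + rto

def arm_rtor(now, rto, earliest_outstanding_sent):
    # fire when the oldest unacked segment has been outstanding for a full RTO
    return earliest_outstanding_sent + rto

def arm_tlp(now, srtt, ack_delay=0.2):
    # tail loss probe timer, roughly 2*SRTT plus a delayed-ACK allowance (simplified)
    return now + max(2 * srtt, ack_delay)

def arm_tlpr(now, srtt, earliest_outstanding_sent, ack_delay=0.2):
    # TLPR: apply the RTOR idea to the TLP timer
    pto = max(2 * srtt, ack_delay)
    return earliest_outstanding_sent + pto

now, srtt, rto, sent = 10.00, 0.05, 0.25, 9.92   # seconds, invented example values
print(arm_classic_rto(now, rto), arm_rtor(now, rto, sent))
print(arm_tlp(now, srtt), arm_tlpr(now, srtt, sent))
```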

    Joel Sommers
  • N. Chatzis, G. Smaragdakis, A. Feldmann, W. Willinger

    The recently launched initiative by the Open-IX Association (OIX) to establish the European-style Internet eXchange Point (IXP) model in the US suggests an intriguing strategy to tackle a problem that some Internet stakeholders in the US consider to be detrimental to their business; i.e., a lack of diversity in available peering opportunities. We examine in this paper the cast of Internet stakeholders that are bound to play a critical role in determining the fate of this Open-IX effort. These include the large content and cloud providers, CDNs, Tier-1 ISPs, the well-established and some of the newer commercial datacenter and colocation companies, and the largest IXPs in Europe. In particular, we comment on these different parties’ current attitudes with respect to public and private peering and discuss some of the economic arguments that will ultimately determine whether or not the currently pursued strategy by OIX will succeed in achieving the main OIX-articulated goal – a more level playing field for private and public peering in the US such that the actual demand and supply for the different peering opportunities will be reflected in the cost structure.

  • George Varghese

    The most striking ideas in systems are abstractions such as virtual memory, sockets, or packet scheduling. Algorithmics is the servant of abstraction, allowing the performance of the system to approach that of the underlying hardware. I survey the trajectory of network algorithmics, starting with a focus on speed and scale in the 1990s to measurement and security in the 2000s, using what I call the confluence lens. Confluence sees interdisciplinary work as a merger of two or more disciplines made compelling by an inflection point in the real world, while also producing genuinely transformed ideas. I attempt to show that Network Algorithmics represented a confluence in the 1990s between computer systems, algorithms, and networking. I suggest Confluence Diagrams as a means to identify future interdisciplinary opportunities, and describe the emerging field of Network Verification as a new confluence between networking and programming languages.

  • Aditya Akella

    Dear networking students everywhere,

    Welcome to the first edition of a new quarterly column aimed at mentoring students in the ACM SIGCOMM community. I'm honored to be the inaugural editor. The primary objective of this column is to offer students advice on general issues pertaining to networking research, teaching, and careers. Most students have excellent advisors, but my hope is that this column will nevertheless help, e.g., by: (a) augmenting advice students get from their advisors, and (b) aiding graduate/undergraduate students who are exploring moving into networking or to a new sub-topic therein.

    Here are some examples of questions this column is suitable for. This is not an exhaustive list; there are many other relevant issues that are not listed here.
      • Research methods: "Is there an optimal way to approach networked systems research? What are some things to keep in mind when embarking on active measurement projects?"
      • Tools, testbeds, datasets: "Where can I do experiments to test my new cool idea for data center networking?" "Are there datasets that will aid me in my research on topic X?"
      • Time management ahead of conference deadlines: "How do I balance writing vs. hacking/experiments?"
      • Career advice: "With so much exciting work happening in the industry, is there much point in seeking an academic job? Is a PhD in networking worth it?"
      • Course choice: "Are there courses I should take to prepare myself for research in topic X? Are there online materials I can use for this?"
      • Teaching networking: "I'm going to teach my first networking course in the near future. How do I prepare myself? Are there online resources I could use?"
      • Getting involved: "How do I become a more integral part of the SIGCOMM community?"

    I will likely not address each individual query; rather, my goal is to collate (a subset of) the questions I get, group them into meaningful topics, and offer advice as best as I can. Of course, I am not an expert on many of these topics. Thus, I may seek the opinion of relevant people in the community in responding to specific queries. This column is likely to be just 1-2 pages long. As such, some answers may be brief; e.g., I may simply point folks to online resources that already offer the relevant advice.

    Note that this is not an advice column of the sort you may find in magazines and Sunday newspapers. I will not address topics that are personal in nature; e.g., if you're having an issue with your advisor I'm not the one to approach for resolution! I will also not address questions comparing venues ("Is X a better conference than Y?") or research topics ("Is X a better topic to work on than Y?" or "Is X dead?"). And don't send me abstracts/papers for feedback!

    Interested students can send queries to guru@sigcomm.org. Names of students asking questions will not be published by default, unless students request otherwise. If you're not a student, and you have feedback on or disagreement with a statement or comment in my column, I'd love to hear from you as well. Looking forward to hearing from you all.

  • Dina Papagiannaki

    Welcome to the October issue of CCR. This issue features the technical and editorial papers that comprise CCR's quarterly content, but also all the papers that appeared at ACM Sigcomm, along with the best papers selected from its affiliated workshops.

    Sigcomm this year was attended by 733 people, making it the second-highest-attendance event ACM Sigcomm has ever had, after Hong Kong last year. More interestingly, 23% of all attendees came from industry, something I also noticed during the presentations and the follow-up questions. Our conference has become a very vibrant venue for an expanding community, a community that seems increasingly interested in solving difficult scientific problems while having impact on the technology landscape as a whole. I am really looking forward to the results of such closer collaboration between academic and industrial researchers.

    The SIGCOMM award was presented to Dr. George Varghese, "for his sustained and diverse contributions to network algorithmics, with far reaching impact in both research and industry." George's talk focused on how he structures his research so that he indeed solves difficult problems that will change the landscape of technology in a fundamental way. I surely hope to secure an editorial submission from him in a future issue of CCR, where he can describe in his own "written" words his thinking around what he calls "confluence."

    The main content of the conference revolved around software defined networks, data centers, wireless and cellular networks, security and privacy. Security and trust are also the topics of our three CCR technical papers for this issue. The editorial section comprises the report from the 6th workshop on active Internet measurements, a position paper aiming to define "fog computing," and the introduction of an interesting open platform for cellular research, OpenAirInterface. I hope you enjoy reading them.

    Finally, this issue marks the end of tenure for two of our editors, Dr. Renata Teixeira and Dr. Sanjay Jha. I want to thank both of them for their service to CCR. With that, I hope you enjoy this extended issue of CCR, and I am always at your disposal in case of questions or suggestions.

    Dina Papagiannaki, CCR Editor

  • S. Coull, K. Dyer

    Instant messaging services are quickly becoming the most dominant form of communication among consumers around the world. Apple iMessage, for example, handles over 2 billion messages each day, while WhatsApp claims 16 billion messages from 400 million international users. To protect user privacy, many of these services typically implement end-to-end and transport layer encryption, which are meant to make eavesdropping infeasible even for the service providers themselves. In this paper, however, we show that it is possible for an eavesdropper to learn information about user actions, the language of messages, and even the length of those messages with greater than 96% accuracy despite the use of state-of-the-art encryption technologies simply by observing the sizes of encrypted packets. While our evaluation focuses on Apple iMessage, the attacks are completely generic and we show how they can be applied to many popular messaging services, including WhatsApp, Viber, and Telegram.
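
    The attack boils down to fitting a simple classifier over observed ciphertext lengths. The toy sketch below shows the flavor of such an inference; the size centroids and action labels are invented for illustration and are not the paper's measurements.

```python
# Toy sketch of the general attack idea: an on-path observer maps observed
# ciphertext payload lengths to likely user actions using a nearest-centroid
# rule learned offline. All numbers below are hypothetical.

OBSERVED_CENTROIDS = {   # hypothetical mean payload sizes, in bytes
    "typing_notification": 120,
    "read_receipt": 150,
    "short_text_message": 320,
    "image_attachment": 1400,
}

def classify_packet(payload_len):
    """Return the action whose learned centroid is closest to the observed size."""
    return min(OBSERVED_CENTROIDS, key=lambda a: abs(OBSERVED_CENTROIDS[a] - payload_len))

for size in (118, 335, 1380):
    print(size, "->", classify_packet(size))
```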

    Joel Sommers
  • C. Ghali, G. Tsudik, E. Uzun

    In contrast to today’s IP-based host-oriented Internet architecture, Information-Centric Networking (ICN) emphasizes content by making it directly addressable and routable. Named Data Networking (NDN) architecture is an instance of ICN that is being developed as a candidate next-generation Internet architecture. By opportunistically caching content within the network, NDN appears to be well-suited for large-scale content distribution and for meeting the needs of increasingly mobile and bandwidth-hungry applications that dominate today’s Internet. One key feature of NDN is the requirement for each content object to be digitally signed by its producer. Thus, NDN should be, in principle, immune to distributing fake (aka "poisoned") content. However, in practice, this poses two challenges for detecting fake content in NDN routers: (1) overhead due to signature verification and certificate chain traversal, and (2) lack of trust context, i.e., determining which public keys are trusted to verify which content. Because of these issues, NDN does not force routers to verify content signatures, which makes the architecture susceptible to content poisoning attacks. This paper explores root causes of, and some cures for, content poisoning attacks in NDN. In the process, it becomes apparent that meaningful mitigation of content poisoning is contingent upon a network-layer trust management architecture, elements of which we construct, while carefully justifying specific design choices. This work represents the initial effort towards comprehensive trust management for NDN.

    Phillipa Gill
  • R. Hofstede, L. Hendriks, A. Sperotto, A. Pras

    Flow-based approaches for SSH intrusion detection have been developed to overcome the scalability issues of host-based alternatives. Although the detection of many SSH attacks in a flow-based fashion is fairly straightforward, no insight is typically provided in whether an attack was successful. We address this shortcoming by presenting a detection algorithm for the flow-based detection of compromises, i.e., hosts that have been compromised during an attack. Our algorithm has been implemented as part of our open-source IDS SSHCure and validated using almost 100 servers, workstations and honeypots, featuring an accuracy close to 100%.
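
    A flow-level compromise detector of this kind typically looks for a flow that deviates from the near-identical flows of the brute-force phase. The sketch below is a hedged illustration of that idea with invented thresholds; it is not the SSHCure algorithm itself.

```python
# During an SSH brute-force attack, login attempts produce near-identical small
# flows; a compromise often shows up as a later flow that deviates (more bytes,
# longer duration) because an interactive session or transfer follows.
# All thresholds below are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class Flow:
    packets: int
    bytes: int
    duration: float  # seconds

def detect_compromise(flows, attack_pkts=(11, 51), deviation_factor=3.0):
    attempts = [f for f in flows if attack_pkts[0] <= f.packets <= attack_pkts[1]]
    if len(attempts) < 20:                       # not enough evidence of brute forcing
        return False
    baseline = sum(f.bytes for f in attempts) / len(attempts)
    # any flow much larger than the attempt baseline hints at a successful login
    return any(f.bytes > deviation_factor * baseline for f in flows)

attack = [Flow(packets=14, bytes=1800, duration=2.0) for _ in range(30)]
print(detect_compromise(attack + [Flow(packets=400, bytes=90_000, duration=310.0)]))  # True
```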

    Hitesh Ballani
  • L. Vaquero, L. Rodero-Merino

    The cloud is migrating to the edge of the network, where routers themselves may become the virtualisation infrastructure, in an evolution labelled as “the fog”. However, many other complementary technologies are reaching a high level of maturity. Their interplay may dramatically shift the information and communication technology landscape in the following years, bringing separate technologies into a common ground. This paper offers a comprehensive definition of the fog, comprehending technologies as diverse as cloud, sensor networks, peer-to-peer networks, network virtualisation functions or configuration management techniques. We highlight the main challenges faced by this potentially breakthrough technology amalgamation.

  • N. Nikaein, M. Marina, S. Manickam, A. Dawson, R. Knopp, C. Bonnet

    Driven by the need to cope with exponentially growing mobile data traffic and to support new traffic types from massive numbers of machine-type devices, academia and industry are thinking beyond the current generation of mobile cellular networks to chalk a path towards fifth generation (5G) mobile networks. Several new approaches and technologies are being considered as potential elements making up such a future mobile network, including cloud RANs, application of SDN principles, exploiting new and unused portions of spectrum, use of massive MIMO and full-duplex communications. Research on these technologies requires realistic and flexible experimentation platforms that offer a wide range of experimentation modes from real-world experimentation to controlled and scalable evaluations while at the same time retaining backward compatibility with current generation systems. Towards this end, we present OpenAirInterface (OAI) as a suitably flexible platform. In addition, we discuss the use of OAI in the context of several widely mentioned 5G research directions.

  • kc claffy

    On 26-27 March 2014, CAIDA hosted the sixth Workshop on Active Internet Measurements (AIMS-6) as part of our series of Internet Statistics and Metrics Analysis (ISMA) workshops. As with previous AIMS workshops, the goals were to further our understanding of the potential and limitations of active measurement research and infrastructure in the wide-area Internet, and to promote cooperative solutions and coordinated strategies between academics, industry, policymakers, and funding agencies in the area of active Internet measurement. This year, we explored capabilities and opportunities for network measurement in the wireless domain, and research infrastructure to support it. Participants found the workshop content challengingly diverse, with substantial knowledge exchange regarding the wireless research infrastructure landscape(s) and existing measurement capabilities. But attendees agreed that the conversation was only beginning, and that some challenges merit further discussion, such as finding consensus on standard metrics to measure, and constructing a road map for wireless measurement research infrastructure and activities for the next decade. This report describes topics discussed at the workshop, and summarizes participants’ views of priorities for future funding as well as follow-on workshops in this area. Materials related to the workshop are available at http://www.caida.org/workshops/aims/1403/.

  • Masoud Moshref, Minlan Yu, Ramesh Govindan, Amin Vahdat

    Software-defined networks can enable a variety of concurrent, dynamically instantiated measurement tasks that provide fine-grain visibility into network traffic. Recently, there have been many proposals to configure TCAM counters in hardware switches to monitor traffic. However, the TCAM memory at switches is fundamentally limited and the accuracy of the measurement tasks is a function of the resources devoted to them on each switch. This paper describes an adaptive measurement framework, called DREAM, that dynamically adjusts the resources devoted to each measurement task, while ensuring a user-specified level of accuracy. Since the trade-off between resource usage and accuracy can depend upon the type of tasks, their parameters, and traffic characteristics, DREAM does not assume an a priori characterization of this trade-off, but instead dynamically searches for a resource allocation that is sufficient to achieve a desired level of accuracy. A prototype implementation and simulations with three network-wide measurement tasks (heavy hitter, hierarchical heavy hitter and change detection) and diverse traffic show that DREAM can support more concurrent tasks with higher accuracy than several other alternatives.
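
    The core loop can be pictured as a per-task feedback controller over counter budgets. The sketch below is a simplified illustration under assumed scaling factors and accuracy estimates; it is not DREAM's actual allocation algorithm.

```python
# Per task: grow the TCAM counter budget when estimated accuracy is below
# target, shrink it when there is headroom, and rescale if the switch budget
# is exceeded. No a priori accuracy/resource curve is assumed.

def rebalance(tasks, switch_capacity):
    """tasks: dict name -> {"counters": int, "accuracy": float, "target": float}"""
    for t in tasks.values():
        if t["accuracy"] < t["target"]:
            t["counters"] = int(t["counters"] * 1.5) + 1        # needs more resources
        elif t["accuracy"] > t["target"] + 0.05:
            t["counters"] = max(1, int(t["counters"] * 0.8))    # give back headroom
    # scale down proportionally if the switch budget is exceeded
    total = sum(t["counters"] for t in tasks.values())
    if total > switch_capacity:
        for t in tasks.values():
            t["counters"] = max(1, t["counters"] * switch_capacity // total)
    return tasks

tasks = {"hh": {"counters": 100, "accuracy": 0.70, "target": 0.90},
         "cd": {"counters": 300, "accuracy": 0.99, "target": 0.90}}
print(rebalance(tasks, switch_capacity=400))
```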

  • Vimalkumar Jeyakumar, Mohammad Alizadeh, Yilong Geng, Changhoon Kim, David Mazières

    This paper presents a practical approach to rapidly introducing new dataplane functionality into networks: End-hosts embed tiny programs into packets to actively query and manipulate a network’s internal state. We show how this “tiny packet program” (TPP) interface gives end-hosts unprecedented visibility into network behavior, enabling them to work with the network to achieve a desired functionality. Our design leverages what each component does best: (a) switches forward and execute tiny packet programs (at most 5 instructions) in-band at line rate, and (b) end-hosts perform arbitrary (and easily updated) computation on network state. By implementing three different research proposals, we show that TPPs are useful. Using a hardware prototype on a NetFPGA, we show our design is feasible at a reasonable cost.
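
    Conceptually, a switch needs only a tiny fixed-function interpreter over the instructions a packet carries. The sketch below illustrates that model; the instruction names and the packet memory layout are invented for this example and are not the paper's actual TPP instruction set.

```python
# A packet carries at most five instructions; each switch executes them in-band,
# reading or writing its own state into the packet, while the end-host does the
# arbitrary computation over the collected values.

MAX_INSTRUCTIONS = 5

def execute_tpp(program, packet_mem, switch_state):
    """program: list of (op, arg); packet_mem: scratch space carried in the packet."""
    assert len(program) <= MAX_INSTRUCTIONS
    for op, arg in program:
        if op == "LOAD":            # copy a switch statistic into the packet
            packet_mem.append(switch_state[arg])
        elif op == "STORE":         # write a value from the packet into switch state
            switch_state[arg] = packet_mem.pop()
        elif op == "SUM":           # example arithmetic over collected values
            packet_mem.append(sum(packet_mem))
    return packet_mem

state = {"queue_len": 42, "link_util": 0.73}       # hypothetical switch statistics
print(execute_tpp([("LOAD", "queue_len"), ("LOAD", "link_util")], [], state))
```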

  • Ethan Heilman, Danny Cooper, Leonid Reyzin, Sharon Goldberg

    The Resource Public Key Infrastructure (RPKI) is a new infrastructure that prevents some of the most devastating attacks on interdomain routing. However, the security benefits provided by the RPKI are accomplished via an architecture that empowers centralized authorities to unilaterally revoke any IP prefixes under their control. We propose mechanisms to improve the transparency of the RPKI, in order to mitigate the risk that it will be used for IP address takedowns. First, we present tools that detect and visualize changes to the RPKI that can potentially take down an IP prefix. We use our tools to identify errors and revocations in the production RPKI. Next, we propose modifications to the RPKI’s architecture to (1) require any revocation of IP address space to receive consent from all impacted parties, and (2) detect when misbehaving authorities fail to obtain consent. We present a security analysis of our architecture, and estimate its overhead using data-driven analysis.

  • Kirill Kogan, Sergey Nikolenko, Ori Rottenstreich, William Culhane, Patrick Eugster

    Efficient packet classification is a core concern for network services. Traditional multi-field classification approaches, in both software and ternary content-addressable memory (TCAMs), entail tradeoffs between (memory) space and (lookup) time. TCAMs cannot efficiently represent range rules, a common class of classification rules confining values of packet fields to given ranges. The exponential space growth of TCAM entries relative to the number of fields is exacerbated when multiple fields contain ranges. In this work, we present a novel approach that identifies properties of many classifiers allowing them to be implemented in linear space with guaranteed worst-case logarithmic lookup time, and that allows the addition of more fields, including range constraints, without impacting space and time complexity. On real-life classifiers from Cisco Systems and additional classifiers from ClassBench [7] (with real parameters), 90-95% of rules are thus handled, and the other 5-10% of rules can be stored in TCAM to be processed in parallel.

  • Jakub Czyz, Mark Allman, Jing Zhang, Scott Iekel-Johnson, Eric Osterweil, Michael Bailey

    After several IPv4 address exhaustion milestones in the last three years, it is becoming apparent that the world is running out of IPv4 addresses, and the adoption of the next generation Internet protocol, IPv6, though nascent, is accelerating. In order to better understand this unique and disruptive transition, we explore twelve metrics using ten global-scale datasets to create the longest and broadest measurement of IPv6 adoption to date. Using this perspective, we find that adoption, relative to IPv4, varies by two orders of magnitude depending on the measure examined and that care must be taken when evaluating adoption metrics in isolation. Further, we find that regional adoption is not uniform. Finally, and perhaps most surprisingly, we find that over the last three years, the nature of IPv6 utilization—in terms of traffic, content, reliance on transition technology, and performance—has shifted dramatically from prior findings, indicating a maturing of the protocol into production mode. We believe IPv6’s recent growth and this changing utilization signal a true quantum leap.

  • Te-Yuan Huang, Ramesh Johari, Nick McKeown, Matthew Trunnell, Mark Watson

    Existing ABR algorithms face a significant challenge in estimating future capacity: capacity can vary widely over time, a phenomenon commonly observed in commercial services. In this work, we suggest an alternative approach: rather than presuming that capacity estimation is required, it is perhaps better to begin by using only the buffer, and then ask when capacity estimation is needed. We test the viability of this approach through a series of experiments spanning millions of real users in a commercial service. We start with a simple design which directly chooses the video rate based on the current buffer occupancy. Our own investigation reveals that capacity estimation is unnecessary in steady state; however using simple capacity estimation (based on immediate past throughput) is important during the startup phase, when the buffer itself is growing from empty. This approach allows us to reduce the rebuffer rate by 10–20% compared to Netflix’s then-default ABR algorithm, while delivering a similar average video rate, and a higher video rate in steady state.
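
    A buffer-based rate map can be surprisingly small. The sketch below shows the shape of the algorithm: buffer occupancy alone picks the rate in steady state, while a crude throughput estimate is consulted only during startup. The rate ladder, reservoir, and thresholds are invented placeholders, not the values used in the deployed system.

```python
# Minimal buffer-based adaptive bitrate sketch (all constants are assumptions).

RATES_KBPS = [235, 375, 560, 750, 1050, 1750, 2350, 3000]   # hypothetical rate ladder

def choose_rate(buffer_s, startup, est_throughput_kbps):
    if startup:
        # during startup, pick the highest rate the recent throughput can sustain
        safe = [r for r in RATES_KBPS if r <= 0.8 * est_throughput_kbps]
        return safe[-1] if safe else RATES_KBPS[0]
    # steady state: map buffer occupancy linearly onto the rate ladder
    low, high = 5.0, 60.0        # reservoir and upper threshold, in seconds (assumed)
    frac = min(max((buffer_s - low) / (high - low), 0.0), 1.0)
    return RATES_KBPS[round(frac * (len(RATES_KBPS) - 1))]

print(choose_rate(8.0, False, 0))      # low buffer -> conservative rate
print(choose_rate(45.0, False, 0))     # large buffer -> high rate, no estimate needed
print(choose_rate(0.5, True, 3000))    # startup -> throughput-guided choice
```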

  • Tong Yang, Gaogang Xie, YanBiao Li, Qiaobin Fu, Alex X. Liu, Qi Li, Laurent Mathy

    The Forwarding Information Base (FIB) of backbone routers has been rapidly growing in size. An ideal IP lookup algorithm should achieve constant, yet small, IP lookup time and on-chip memory usage. However, no prior IP lookup algorithm achieves both requirements at the same time. In this paper, we first propose SAIL, a Splitting Approach to IP Lookup. One splitting is along the dimension of the lookup process, namely finding the prefix length and finding the next hop, and another splitting is along the dimension of prefix length, namely IP lookup on prefixes of length less than or equal to 24 and IP lookup on prefixes of length longer than 24. Second, we propose a suite of algorithms for IP lookup based on our SAIL framework. Third, we implemented our algorithms on four platforms: CPU, FPGA, GPU, and many-core. We conducted extensive experiments to evaluate our algorithms using real FIBs and real traffic from a major ISP in China. Experimental results show that our SAIL algorithms are several times or even two orders of magnitude faster than well known IP lookup algorithms.
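
    The splitting along prefix length can be illustrated with two tables: a directly indexed structure over the top 24 bits for the common short prefixes, and a secondary table for the few longer ones. The sketch below is a much-simplified stand-in (Python dicts instead of bitmaps and arrays, and it ignores overlaps among the long prefixes); it is not the SAIL data structures themselves.

```python
import ipaddress

short_table = {}   # top-24-bit index -> next hop   (stand-in for an on-chip array)
long_table = {}    # (top-24-bit index, low 8 bits) -> next hop (stand-in for off-chip)

def insert(prefix, next_hop):
    net = ipaddress.ip_network(prefix)
    base = int(net.network_address)
    if net.prefixlen <= 24:
        span = 1 << (24 - net.prefixlen)
        for i in range(span):                      # prefix expansion to /24 granularity
            short_table[(base >> 8) + i] = next_hop
    else:
        for addr in net:                           # expand the rare long prefixes fully
            long_table[(int(addr) >> 8, int(addr) & 0xFF)] = next_hop

def lookup(ip):
    a = int(ipaddress.ip_address(ip))
    hit = long_table.get((a >> 8, a & 0xFF))       # longer prefixes win
    return hit if hit is not None else short_table.get(a >> 8)

insert("10.1.0.0/16", "hopA")
insert("10.1.2.128/25", "hopB")
print(lookup("10.1.2.200"), lookup("10.1.9.9"))    # hopB hopA
```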

  • Peng Sun, Ratul Mahajan, Jennifer Rexford, Lihua Yuan, Ming Zhang, Ahsan Arefin

    We present Statesman, a network-state management service that allows multiple network management applications to operate independently, while maintaining network-wide safety and performance invariants. Network state captures various aspects of the network such as which links are alive and how switches are forwarding traffic. Statesman uses three views of the network state. In observed state, it maintains an up-to-date view of the actual network state. Applications read this state and propose state changes based on their individual goals. Using a model of dependencies among state variables, Statesman merges these proposed states into a target state that is guaranteed to maintain the safety and performance invariants. It then updates the network to the target state. Statesman has been deployed in ten Microsoft Azure datacenters for several months, and three distinct applications have been built on it. We use the experience from this deployment to demonstrate how Statesman enables each application to meet its goals, while maintaining network-wide invariants.
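
    The observed/proposed/target split can be summarized in a few lines. The sketch below uses an invented toy invariant (keep at least two links up) to show how conflicting proposals are merged; Statesman's real dependency model and checkers are of course far richer.

```python
# Applications propose changes against the observed state; a merge step accepts
# only proposals that keep the network-wide invariant, producing the target state.

observed = {"link1": "up", "link2": "up", "link3": "up"}

def merge(observed, proposals, min_links_up=2):
    """proposals: list of dicts mapping state variable -> desired value."""
    target = dict(observed)
    for proposal in proposals:
        candidate = {**target, **proposal}
        if sum(v == "up" for v in candidate.values()) >= min_links_up:  # safety invariant
            target = candidate                      # accept the proposal
        # else: reject it; the application observes the outcome and retries later
    return target

app_a = {"link1": "down"}          # e.g. a maintenance application
app_b = {"link2": "down"}          # e.g. a failure-mitigation application
print(merge(observed, [app_a, app_b]))   # only the first proposal can be accepted here
```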

  • Anuj Kalia, Michael Kaminsky, David G. Andersen

    This paper describes the design and implementation of HERD, a key-value system designed to make the best use of an RDMA network. Unlike prior RDMA-based key-value systems, HERD focuses its design on reducing network round trips while using efficient RDMA primitives; the result is substantially lower latency, and throughput that saturates modern, commodity RDMA hardware. HERD has two unconventional decisions: First, it does not use RDMA reads, despite the allure of operations that bypass the remote CPU entirely. Second, it uses a mix of RDMA and messaging verbs, despite the conventional wisdom that the messaging primitives are slow. A HERD client writes its request into the server’s memory; the server computes the reply. This design uses a single round trip for all requests and supports up to 26 million key-value operations per second with 5 µs average latency. Notably, for small key-value items, our full system throughput is similar to native RDMA read throughput and is over 2X higher than recent RDMA-based key-value systems. We believe that HERD further serves as an effective template for the construction of RDMA-based datacenter services.

  • Arpit Gupta, Laurent Vanbever, Muhammad Shahbaz, Sean P. Donovan, Brandon Schlinker, Nick Feamster, Jennifer Rexford, Scott Shenker, Russ Clark, Ethan Katz-Bassett

    BGP severely constrains how networks can deliver traffic over the Internet. Today’s networks can only forward traffic based on the destination IP prefix, by selecting among routes offered by their immediate neighbors. We believe Software Defined Networking (SDN) could revolutionize wide-area traffic delivery, by offering direct control over packet-processing rules that match on multiple header fields and perform a variety of actions. Internet exchange points (IXPs) are a compelling place to start, given their central role in interconnecting many networks and their growing importance in bringing popular content closer to end users. To realize a Software Defined IXP (an “SDX”), we must create compelling applications, such as “application-specific peering”— where two networks peer only for (say) streaming video traffic. We also need new programming abstractions that allow participating networks to create and run these applications and a runtime that both behaves correctly when interacting with BGP and ensures that applications do not interfere with each other. Finally, we must ensure that the system scales, both in rule-table size and computational overhead. In this paper, we tackle these challenges and demonstrate the flexibility and scalability of our solutions through controlled and in-the-wild experiments. Our experiments demonstrate that our SDX implementation can implement representative policies for hundreds of participants who advertise full routing tables while achieving sub-second convergence in response to configuration changes and routing updates.

  • Konstantinos Nikitopoulos, Juan Zhou, Ben Congdon, Kyle Jamieson

    This paper presents the design and implementation of Geosphere, a physical- and link-layer design for access point-based MIMO wireless networks that consistently improves network throughput. To send multiple streams of data in a MIMO system, prior designs rely on a technique called zero-forcing, a way of “nulling” the interference between data streams by mathematically inverting the wireless channel matrix. In general, zero-forcing is highly effective, significantly improving throughput. But in certain physical situations, the MIMO channel matrix can become “poorly conditioned,” harming performance. With these situations in mind, Geosphere uses sphere decoding, a more computationally demanding technique that can achieve higher throughput in such channels. To overcome the sphere decoder’s computational complexity when sending dense wireless constellations at a high rate, Geosphere introduces search and pruning techniques that incorporate novel geometric reasoning about the wireless constellation. These techniques reduce computational complexity of 256-QAM systems by almost one order of magnitude, bringing computational demands in line with current 16- and 64-QAM systems already realized in ASIC. Geosphere thus makes the sphere decoder practical for the first time in a 4 × 4 MIMO, 256-QAM system. Results from our WARP testbed show that Geosphere achieves throughput gains over multi-user MIMO of 2× in 4 × 4 systems and 47% in 2 × 2 MIMO systems.

  • Guan-Hua Tu, Yuanjie Li, Chunyi Peng, Chi-Yu Li, Hongyi Wang, Songwu Lu

    Control-plane protocols are complex in cellular networks. They communicate with one another along three dimensions of cross layers, cross (circuit-switched and packet-switched) domains, and cross (3G and 4G) systems. In this work, we propose signaling diagnosis tools and uncover six instances of problematic interactions. Such control-plane issues span both design defects in the 3GPP standards and operational slips by carriers. They are more damaging than data-plane failures. In the worst-case scenario, users may be out of service in 4G, or get stuck in 3G. We deduce root causes, propose solutions, and summarize learned lessons.

  • Colin Scott, Andreas Wundsam, Barath Raghavan, Aurojit Panda, Andrew Or, Jefferson Lai, Eugene Huang, Zhi Liu, Ahmed El-Hassany, Sam Whitlock, H.B. Acharya, Kyriakos Zarifis, Scott Shenker

    Software bugs are inevitable in software-defined networking control software, and troubleshooting is a tedious, time-consuming task. In this paper we discuss how to improve control software troubleshooting by presenting a technique for automatically identifying a minimal sequence of inputs responsible for triggering a given bug, without making assumptions about the language or instrumentation of the software under test. We apply our technique to five open source SDN control platforms—Floodlight, NOX, POX, Pyretic, ONOS—and illustrate how the minimal causal sequences our system found aided the troubleshooting process.
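
    The minimization step is in the spirit of delta debugging. The sketch below shows that generic technique on an event sequence with a replay oracle; the paper's system additionally has to cope with timing, interleavings, and non-determinism when replaying SDN events, which this illustration omits.

```python
# Repeatedly drop chunks of the input event sequence and keep any smaller
# sequence that still triggers the bug (classic delta-debugging reduction).

def minimize(events, triggers_bug):
    """triggers_bug: callable that replays a sequence and returns True if the
    invariant violation still occurs."""
    n = 2
    while len(events) >= 2:
        chunk = max(1, len(events) // n)
        for start in range(0, len(events), chunk):
            candidate = events[:start] + events[start + chunk:]
            if candidate and triggers_bug(candidate):
                events, n = candidate, max(2, n - 1)   # keep the smaller failing input
                break
        else:
            if n >= len(events):
                break                                   # cannot split any finer
            n = min(len(events), n * 2)
    return events

# toy oracle: the "bug" needs events 3 and 7 to both be present
buggy = lambda seq: {3, 7} <= set(seq)
print(minimize(list(range(10)), buggy))   # -> [3, 7]
```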

  • Ali Munir, Ghufran Baig, Syed M. Irteza, Ihsan A. Qazi, Alex X. Liu, Fahad R. Dogar

    Many data center transports have been proposed in recent times (e.g., DCTCP, PDQ, pFabric, etc). Contrary to the common perception that they are competitors (i.e., protocol A vs. protocol B), we claim that the underlying strategies used in these protocols are, in fact, complementary. Based on this insight, we design PASE, a transport framework that synthesizes existing transport strategies, namely, self-adjusting endpoints (used in TCP style protocols), in-network prioritization (used in pFabric), and arbitration (used in PDQ). PASE is deployment friendly: it does not require any changes to the network fabric; yet, its performance is comparable to, or better than, the state-of-the-art protocols that require changes to network elements (e.g., pFabric). We evaluate PASE using simulations and testbed experiments. Our results show that PASE performs well for a wide range of application workloads and network settings.

  • David Naylor, Matthew K. Mukerjee, Peter Steenkiste

    Though most would agree that accountability and privacy are both valuable, today’s Internet provides little support for either. Previous efforts have explored ways to offer stronger guarantees for one of the two, typically at the expense of the other; indeed, at first glance accountability and privacy appear mutually exclusive. At the center of the tussle is the source address: in an accountable Internet, source addresses undeniably link packets and senders so hosts can be punished for bad behavior. In a privacy-preserving Internet, source addresses are hidden as much as possible. In this paper, we argue that a balance is possible. We introduce the Accountable and Private Internet Protocol (APIP), which splits source addresses into two separate fields, an accountability address and a return address, and introduces independent mechanisms for managing each. Accountability addresses, rather than pointing to hosts, point to accountability delegates, which agree to vouch for packets on their clients’ behalf, taking appropriate action when misbehavior is reported. With accountability handled by delegates, senders are now free to mask their return addresses; we discuss a few techniques for doing so.

  • Xin Jin, Hongqiang Harry Liu, Rohan Gandhi, Srikanth Kandula, Ratul Mahajan, Ming Zhang, Jennifer Rexford, Roger Wattenhofer

    We present Dionysus, a system for fast, consistent network updates in software-defined networks. Dionysus encodes as a graph the consistency-related dependencies among updates at individual switches, and it then dynamically schedules these updates based on runtime differences in the update speeds of different switches. This dynamic scheduling is the key to its speed; prior update methods are slow because they pre-determine a schedule, which does not adapt to runtime conditions. Testbed experiments and data-driven simulations show that Dionysus improves the median update speed by 53–88% in both wide area and data center networks compared to prior methods.
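
    The essence is scheduling switch updates subject to a dependency graph. The sketch below reduces this to greedy batches in topological order with a toy example; Dionysus itself schedules dynamically as individual switches finish, and includes the resource constraints and cycle handling that this illustration omits.

```python
# An update is issued only after every update it depends on has completed, so a
# slow switch delays only its own dependents rather than a fixed global schedule.

def schedule(updates, deps):
    """updates: iterable of update ids; deps: dict child -> set of parents that
    must finish first. Returns batches that can be issued concurrently."""
    remaining = {u: set(deps.get(u, ())) for u in updates}
    done, batches = set(), []
    while remaining:
        ready = [u for u, parents in remaining.items() if parents <= done]
        if not ready:
            raise ValueError("dependency cycle: needs the paper's cycle handling")
        batches.append(ready)
        done.update(ready)
        for u in ready:
            del remaining[u]
    return batches

# toy example: switch S3 must move traffic away before S1 and S2 reconfigure
print(schedule(["S1", "S2", "S3"], {"S1": {"S3"}, "S2": {"S3"}}))  # [['S3'], ['S1', 'S2']]
```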

  • Zhiyong Zhang, Ovidiu Mara, Katerina Argyraki

    When can we reason about the neutrality of a network based on external observations? We prove conditions under which it is possible to (a) detect neutrality violations and (b) localize them to specific links, based on external observations. Our insight is that, when we make external observations from different vantage points, these will most likely be inconsistent with each other if the network is not neutral. Where existing tomographic techniques try to form solvable systems of equations to infer network properties, we try to form unsolvable systems that reveal neutrality violations. We present an algorithm that relies on this idea to identify sets of non-neutral links based on external observations, and we show, through network emulation, that it achieves good accuracy for a variety of network conditions.

  • Jonathan Perry, Amy Ousterhout, Hari Balakrishnan, Devavrat Shah, Hans Fugal

    An ideal datacenter network should provide several properties, including low median and tail latency, high utilization (throughput), fair allocation of network resources between users or applications, deadline-aware scheduling, and congestion (loss) avoidance. Current datacenter networks inherit the principles that went into the design of the Internet, where packet transmission and path selection decisions are distributed among the endpoints and routers. Instead, we propose that each sender should delegate control to a centralized arbiter of when each packet should be transmitted and what path it should follow. This paper describes Fastpass, a datacenter network architecture built using this principle. Fastpass incorporates two fast algorithms: the first determines the time at which each packet should be transmitted, while the second determines the path to use for that packet. In addition, Fastpass uses an efficient protocol between the endpoints and the arbiter and an arbiter replication strategy for fault-tolerant failover. We deployed and evaluated Fastpass in a portion of Facebook’s datacenter network. Our results show that Fastpass achieves high throughput comparable to current networks with a 240× reduction in queue lengths (4.35 Mbytes reducing to 18 Kbytes), achieves much fairer and more consistent flow throughputs than the baseline TCP (5200× reduction in the standard deviation of per-flow throughput with five concurrent connections), scales from 1 to 8 cores in the arbiter implementation with the ability to schedule 2.21 Terabits/s of traffic in software on eight cores, and achieves a 2.5× reduction in the number of TCP retransmissions in a latency-sensitive service at Facebook.
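
    The arbiter's job can be caricatured as matching each packet's source and destination to a free timeslot. The first-fit sketch below conveys the idea with invented inputs; the real arbiter uses a more sophisticated matching algorithm and a separate step assigns paths, neither of which is shown here.

```python
# For each requested packet, find the earliest timeslot in which both the
# source's uplink and the destination's downlink are free, so no queue builds
# up inside the fabric.

def allocate(requests, horizon=32):
    """requests: list of (src, dst); returns list of (src, dst, timeslot)."""
    src_busy, dst_busy, out = {}, {}, []
    for src, dst in requests:
        for t in range(horizon):
            if t not in src_busy.get(src, set()) and t not in dst_busy.get(dst, set()):
                src_busy.setdefault(src, set()).add(t)
                dst_busy.setdefault(dst, set()).add(t)
                out.append((src, dst, t))
                break
    return out

# toy demand: two senders share destination C, so one of them is pushed to slot 1
print(allocate([("A", "C"), ("B", "C"), ("A", "D")]))
```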
