Computer Communication Review: Papers

  • Zuoning Yin, Matthew Caesar, and Yuanyuan Zhou

    Software errors and vulnerabilities in core Internet routers have led to several high-profile attacks on the Internet infrastructure and numerous outages. Building an understanding of bugs in open-source router software is a first step towards addressing these problems. In this paper, we study router bugs found in two widely-used open-source router implementations. We evaluate the root cause of bugs, ease of diagnosis and detectability, ease of prevention and avoidance, and their effect on network behavior.

    S. Saroiu
  • Michael Buettner and David Wetherall

    We have developed a low cost software radio based platform for monitoring EPC Gen 2 RFID traffic. The Gen 2 standard allows for a range of PHY layer configurations and does not specify exactly how to compose protocol messages to inventory tags. This has made it difficult to know how well the standard works, and how it is implemented in practice. Our platform provides much needed visibility into Gen 2 systems by capturing reader transmissions using the USRP2 and decoding them in real-time using software we have developed and released to the public. In essence, our platform delivers much of the functionality of expensive (> $50,000) conformance testing products, with greater extensibility at a small fraction of the cost. In this paper, we present the design and implementation of the platform and evaluate its effectiveness, showing that it has better than 99% accuracy up to 3 meters. We then use the platform to study a commercial RFID reader, showing how the Gen 2 standard is realized, and indicate avenues for research at both the PHY and MAC layers.

    A. Chaintreau
  • Jon Crowcroft

    I’m so Bored of the Future Internet (FI). There are so many initiatives to look at the Internet’s Future, anyone would think that there was some tremendous threat, like global warming, about to bring about its immediate demise, and that this would bring civilisation crashing down around our ears.

    The Internet has a great future behind it, of course. However, my thesis is that the Future Internet is about as relevant as Anthropogenic Global Warming (AGW), in the way it is being used to support various inappropriate activities. Remember that the start of all this was not the exhaustion of IPv4 address space, or the incredibly slow convergence time of BGP routes, or the problem of scaling router memory for FIBs. It was the US research community reacting to a minor (as in parochial) temporary problem of funding in Communications due to a slowdown within NSF and differing agendas within DARPA.

    It is not necessary to invoke all the hype and hysteria - it is both necessary and sufficient to talk about sustainable energy, and good technical communications research, development, deployment and operations.

    To continue the analogy between FI and AGW, what we really do not need is yet more climatologists with dodgy data curation methodologies (or ethnographers studying Internet governance).

    What we do need is some solid engineering, to address a number of problems the Internet has. However, this is in fact happening, and would not stop happening if the entire Future Internet flagship was kidnapped by aliens.

    “We don’t need no” government agency doing top-down diktats about what to do when. It won’t work and it will be a massive waste of time, energy and other resources - i.e. like AGW, it will be a load of hot air :)

    On the other hand, there are a number of deeper lessons from the Internet Architecture which might prove useful in other domains, and in the bulk of this opinion piece, I give examples of these, applying the Postel and End-to-end principles to transport, energy, and government information services.

  • Constantine Dovrolis, Krishna Gummadi, Aleksandar Kuzmanovic, and Sascha D. Meinrath

    Measurement Lab (M-Lab) is an open, distributed server platform for researchers to deploy active Internet measurement tools. The goal of M-Lab is to advance network research and empower the public with useful information about their broadband connections. By enhancing Internet transparency, M-Lab helps sustain a healthy, innovative Internet. This article describes M-Lab’s objectives, administrative organization, software and hardware infrastructure. It also provides an overview of the currently available measurement tools and datasets, and invites the broader networking research community to participate in the project.

  • S. Keshav

    This editorial is about some changes that will affect CCR and its community in the months ahead.

    Changes in the Editorial Board

    CCR Area Editors serve for a two-year term. Since the last issue, the terms of the following Area Editors have expired:
    • Kevin Almeroth, UC Santa Barbara, USA
    • Chadi Barakat, INRIA Sophia Antipolis, France
    • Dmitri Krioukov, CAIDA, USA
    • Jitendra Padhye, Microsoft Research, USA
    • Pablo Rodriguez, Telefonica, Spain
    • Darryl Veitch, University of Melbourne, Australia

    I would like to thank them for their devotion, time, and effort. They have greatly enhanced the quality and reputation of this publication.

    Taking their place is an equally illustrious team of researchers:
    • Augustin Chaintreau, Thomson Research, France
    • Stefan Saroiu, Microsoft Research, USA
    • Renata Teixeira, LIP6, France
    • Jia Wang, AT&T Research, USA
    • David Wetherall, University of Washington, USA

    Welcome aboard!

    Online Submission System

    This is the first issue of CCR created entirely using an online paper submission system rather than email. A slight variant of Eddie Kohler's HotCRP, the CCR submission site allows authors to submit papers at any time and to receive reviews as they are finalized.

    Moreover, they can respond to the reviews and conduct an anonymized conversation with their Area Editor. The system is currently batched: reviewer assignments and reviews are done once every three months. However, starting shortly, papers will be assigned to an Area Editor for review as they are submitted and the set of accepted papers will be published quarterly in CCR. We hope that this will allow authors to have the benefits of a 'rolling deadline,' similar to that pioneered by the VLDB journal.

    Reviewer Pool

    The reviewer pool is a set of volunteer reviewers, usually post-PhD, who are called upon by Area Editors to review papers in their areas of interest. The current set of reviewers in the pool can be found here: If you would like to join the pool, please send mail to with your name, affiliation, interests, and contact URL.

    Page Limits

    We have had a six-page limit for the last year. The purpose of this limit was to prevent CCR from becoming a cemetery for dead papers. This policy has been a success: the set of technical papers in each issue has been vibrant and well-suited to this venue. However, we recognize that it is difficult to fit work into six pages. Therefore, from now on, submissions will still be limited to six pages (unless permission is obtained in advance), but if the reviewers suggest additional work, additional pages will be granted automatically.

    I hope that these changes will continue to make CCR a bellwether for our community. As always, your comments and suggestions for improvement are welcome.

  • Hilary Finucane and Michael Mitzenmacher

    We provide a detailed analysis of the Lossy Difference Aggregator, a recently developed data structure for measuring latency in a router environment where packet losses can occur. Our analysis provides stronger performance bounds than those given originally, and, using competitive analysis, leads us to a model for optimizing the data structure’s parameters when the loss rate is not known in advance.

    Dmitri Krioukov
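For readers unfamiliar with the data structure under analysis, the following sketch simulates the core idea of the Lossy Difference Aggregator: sender and receiver hash each packet into one of m buckets, each holding only a timestamp sum and a packet count, and latency is estimated from the buckets whose counts agree on both sides. The bucket count, loss rate and timing model below are illustrative choices, not parameters from the paper.

```python
import random

class LDA:
    """One side of a Lossy Difference Aggregator: m buckets, each holding
    only a (timestamp_sum, packet_count) pair, indexed by a hash that
    sender and receiver share."""
    def __init__(self, m):
        self.m = m
        self.sums = [0.0] * m
        self.counts = [0] * m

    def record(self, pkt_id, timestamp):
        b = hash(pkt_id) % self.m  # same bucket on both sides
        self.sums[b] += timestamp
        self.counts[b] += 1

def estimate_latency(sender, receiver):
    # Use only buckets whose packet counts agree on both sides; a lost
    # packet desynchronizes its bucket, so that bucket is discarded.
    diff_sum, pkt_sum = 0.0, 0
    for b in range(sender.m):
        if sender.counts[b] == receiver.counts[b] and sender.counts[b] > 0:
            diff_sum += receiver.sums[b] - sender.sums[b]
            pkt_sum += sender.counts[b]
    return diff_sum / pkt_sum if pkt_sum else None

random.seed(0)
snd, rcv = LDA(64), LDA(64)
t = 0.0
for pkt in range(10000):           # 10,000 packets sent 1 ms apart
    t += 0.001
    snd.record(pkt, t)
    if random.random() > 0.01:     # 1% packet loss
        rcv.record(pkt, t + random.gauss(0.010, 0.001))  # ~10 ms latency

est = estimate_latency(snd, rcv)
print(f"estimated mean latency: {est * 1000:.2f} ms")
```

On this toy trace the estimate lands close to the configured 10 ms mean even though every bucket touched by a loss is thrown away.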
  • Marta Carbone and Luigi Rizzo

    Dummynet is a widely used link emulator, developed long ago to run experiments in user-configurable network environments. Since its original design, our system has been extended in various ways, and has become very popular in the research community due to its features and to the ability to emulate even moderately complex network setups on unmodified operating systems.

    We have recently made a number of extensions to the emulator, including loadable packet schedulers, support for better MAC layer modeling, the inclusion in PlanetLab, and development of Linux and Windows versions in addition to the native FreeBSD and Mac OS X ones.

    The goal of this paper is to present in detail the current features of Dummynet, compare it with other emulation solutions, and discuss what operating conditions should be considered and what kind of accuracy to expect when using an emulation system.

    Kevin Almeroth
  • Hamed Haddadi

    Online advertising is currently the richest source of revenue for many Internet giants. The increased number of online businesses, specialized websites and modern profiling techniques have all contributed to an explosion of the income of ad brokers from online advertising. The single biggest threat to this growth, however, is click-fraud. Trained botnets and individuals are hired by click-fraud specialists in order to maximize the revenue of certain users from the ads they publish on their websites, or to launch an attack between competing businesses.

    In this note we wish to raise the networking research community’s awareness of potential research areas within the online advertising field. As an example strategy, we present Bluff ads: a class of ads that join forces in order to increase the effort level for click-fraud spammers. Bluff ads are either targeted ads with irrelevant display text, or ads with highly relevant display text but irrelevant targeting information. They act as a litmus test for the legitimacy of the individual clicking on the ads. Together with standard threshold-based methods, these fake ads help to decrease click-fraud levels.

    Adrian Perrig
  • Dirk Trossen, Mikko Sarela, and Karen Sollins

    The current Internet architecture focuses on communicating entities, largely leaving aside the information to be exchanged among them. However, trends in communication scenarios show that WHAT is being exchanged is becoming more important than WHO is exchanging information. Van Jacobson describes this as moving from interconnecting machines to interconnecting information. Any change to this part of the Internet needs argumentation as to why it should be undertaken in the first place. In this position paper, we identify four key challenges, namely the information-centrism of applications, supporting and exposing tussles, increasing accountability, and addressing attention scarcity, that we believe an information-centric internetworking architecture could address better, and that would make changing such a crucial part worthwhile. We recognize, however, that a much larger and more systematic debate about such a change is needed, underlined by factual evidence of its gains.

    Kevin Almeroth
  • Pei-chun Cheng, Xin Zhao, Beichuan Zhang, and Lixia Zhang

    BGP routing data collected by RouteViews and RIPE RIS have become an essential asset to both the network research and operations communities. However, it has long been speculated that the BGP monitoring sessions between operational routers and the data collectors fail from time to time. Such session failures lead to missing update messages as well as duplicate updates during session re-establishment, making analysis results derived from such data inaccurate. Since there is no complete record of these monitoring session failures, data users either have to sanitize the data at their own discretion with respect to their specific needs or, more commonly, assume that session failures are infrequent enough and simply ignore them. In this paper, we present the first systematic assessment and documentation of BGP session failures of RouteViews and RIPE data collectors over the past eight years. Our results show that monitoring session failures are rather frequent: more than 30% of BGP monitoring sessions experienced at least one failure per month. Furthermore, we observed failures that affect multiple peer sessions on the same collector at around the same time, suggesting that the collector’s local problems are a major factor in the session instability. We have also developed a web site as a community resource that publishes all session failures detected for RouteViews and RIPE RIS data collectors, to help users select and clean up BGP data before performing their analysis.

    Jitendra Padhye
  • David R. Choffnes and Fabian E. Bustamante

    Today’s open platforms for network measurement and distributed system research, which we collectively refer to as testbeds in this article, provide opportunities for controllable experimentation and evaluations of systems at the scale of hundreds or thousands of hosts. In this article, we identify several issues with extending results from such platforms to Internet-wide perspectives. Specifically, we try to quantify the level of inaccuracy and incompleteness of testbed results when applied to the context of a large-scale peer-to-peer (P2P) system. Based on our results, we emphasize the importance of measurements in the appropriate environment when evaluating Internet-scale systems.

    Pablo Rodriguez
  • Diana Joumblatt, Renata Teixeira, Jaideep Chandrashekar, and Nina Taft

    There is an amazing paucity of data collected directly from users’ personal computers. One key reason for this is the perception among researchers that users are unwilling to participate in such a data collection effort. To understand the range of opinions on the issues that arise with end-host data tracing, we conducted a survey of 400 computer scientists. In this paper, we summarize and share our findings.

  • k. c. claffy

    On September 23, 2009, CAIDA hosted a virtual Workshop on Internet Economics to bring together network technology and policy researchers, commercial Internet facilities and service providers, and communications regulators to explore a common goal: framing a concrete agenda for the emerging but empirically stunted field of Internet infrastructure economics. With participants stretching from Washington D.C. to Queensland, Australia, we used the electronic conference hosting facilities supported by the California Institute of Technology (CalTech) EVO Collaboration Network. This report describes the workshop discussions and presents relevant open research questions identified by participants.

  • Ratul Mahajan

    This paper is based on a talk that I gave at CoNEXT 2009. Inspired by Hal Varian’s paper on building economic models, it describes a research method for building computer systems. I find this method useful in my work and hope that some readers will find it helpful as well.

  • Matthew Caesar, Martin Casado, Teemu Koponen, Jennifer Rexford, and Scott Shenker

    This paper advocates a different approach to routing convergence: side-stepping the problem by avoiding it in the first place! Rather than recomputing paths after temporary topology changes, we argue for a separation of timescales between offline computation of multiple diverse paths and online spreading of load over these paths. We believe decoupling failure recovery from path computation leads to networks that are inherently more efficient, more scalable, and easier to manage.

  • Constantine Dovrolis and J. Todd Streelman

    There has been significant research interest recently in understanding the evolution of the current Internet, as well as in designing clean-slate Future Internet architectures. Clearly, even when network architectures are designed from scratch, they have to evolve as their environment (i.e., technological constraints, service requirements, applications, economic conditions, etc.) always changes. A key question then is: what makes a network architecture evolvable? What determines the ability of a network architecture to evolve as its environment changes? In this paper, we review some relevant ideas about evolvability from the biological literature. We examine the role of robustness and modularity in evolution, and their relation with evolvability. We also discuss evolutionary kernels and punctuated equilibria, two important concepts that may be relevant to the so-called ossification of the core Internet protocols. Finally, we examine optimality, a design objective that is often of primary interest in engineering but that does not seem to be abundant in biology.

  • Hamed Haddadi, Tristan Henderson, and Jon Crowcroft

    On numerous occasions, trips to the facilities coincide with an important mobile phone call. Due to the sleek and polished nature of modern phones, attempting to promptly deal with such calls can occasionally lead to the phone sliding through the owner’s hands, surrendering to the force of gravity and flying down the hole. This is a disaster, and often an expensive incident. It can also be a health and safety hazard, with the owner desperately attempting to retrieve their phone and re-using it.

    This paper provides a first attempt at a cell phone recovery system using the modern functionalities of Toto Japanese toilets. In our approach, the phone is calmly recovered, sanitized and retrieved by the user. This can all happen without the call even being dropped, with the possibility of secure backup of the user data via embedded sensors and Wi-Fi network connectivity in the toilet. We envision that such an approach will increase the collaboration between Japanese, European and American mobile operators, network researchers and hardware manufacturers.

  • Martin Burkhart, Dominik Schatzmann, Brian Trammell, Elisa Boschi, and Bernhard Plattner

    In recent years, academic literature has analyzed many attacks on network trace anonymization techniques. These attacks usually correlate external information with anonymized data and successfully de-anonymize objects with distinctive signatures. However, analyses of these attacks still underestimate the real risk of publishing anonymized data, as the most powerful attack against anonymization is traffic injection. We demonstrate that performing live traffic injection attacks against anonymization on a backbone network is not difficult, and that potential countermeasures against these attacks, such as traffic aggregation, randomization or field generalization, are not particularly effective. We then discuss tradeoffs of the attacker and defender in the so-called injection attack space. An asymmetry in the attack space significantly increases the chance of a successful de-anonymization through lengthening the injected traffic pattern. This leads us to re-examine the role of network data anonymization. We recommend a unified approach to data sharing, which uses anonymization as a part of a technical, legal, and social approach to data protection in the research and operations communities.

    Dmitri Krioukov
  • Yao Liang and Wei Peng

    We present a sophisticated framework to systematically explore the temporal correlation in environmental monitoring wireless sensor networks. The presented framework optimizes lossless data compression in communications given the resource constraints of sensor nodes. The insights and analyses obtained from the framework can directly lead to innovative and better designs of data-gathering protocols for wireless sensor networks operated in noisy environments, dramatically reducing energy consumption.

    Martin May
  • Daniel Halperin, Wenjun Hu, Anmol Sheth, and David Wetherall

    The use of multiple antennas and MIMO techniques based on them is the key feature of 802.11n equipment that sets it apart from earlier 802.11a/g equipment. It is responsible for superior performance, reliability and range. In this tutorial, we provide a brief introduction to multiple antenna techniques. We describe the two main classes of those techniques, spatial diversity and spatial multiplexing. To ground our discussion, we explain how they work in 802.11n NICs in practice.
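The spatial-diversity half of the story can be illustrated numerically: with maximal-ratio combining, the post-combining SNR is the sum of the per-antenna SNRs, so adding receive antennas raises the average SNR. The Monte Carlo sketch below assumes independent Rayleigh fading and is only a back-of-the-envelope illustration, not anything 802.11n-specific.

```python
import math, random

def rayleigh():
    # One complex Rayleigh-fading channel coefficient with E|h|^2 = 1.
    return complex(random.gauss(0, 1 / math.sqrt(2)),
                   random.gauss(0, 1 / math.sqrt(2)))

def avg_mrc_snr(n_antennas, snr0, trials=20000):
    """Average post-combining SNR when n_antennas receive independent
    Rayleigh-faded copies of the signal and combine them with
    maximal-ratio combining: post-SNR = snr0 * sum_i |h_i|^2."""
    total = 0.0
    for _ in range(trials):
        total += snr0 * sum(abs(rayleigh()) ** 2 for _ in range(n_antennas))
    return total / trials

random.seed(3)
print(avg_mrc_snr(1, 10.0))  # close to 10: single antenna
print(avg_mrc_snr(2, 10.0))  # close to 20: array gain from a second antenna
```

Spatial multiplexing makes the opposite trade: the extra antennas carry independent data streams, converting this SNR gain into throughput instead.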

  • Parag Kulkarni, Woon Hau Chin, and Tim Farnham

    Femtocell access points (FAPs), also popularly known as Home Base Stations, are small base stations for use within indoor environments to improve coverage and capacity. FAPs have a limited range (e.g. limited to a home or office area) but offer immense capacity improvements for the network, owing to the ability to reuse a frequency more often as a result of the smaller coverage areas. Because there may be thousands of these devices, and since the nature of deployment is ad hoc, it may not be possible to carry out elaborate frequency planning like that in traditional cellular networks. This paper aims to outline the radio resource management considerations within the context of femtocells, the broader objective being to initiate a discussion and encourage research in the areas highlighted.

  • Brian E. Carpenter and Craig Partridge

    This note describes the various peer review processes applied to Internet Requests for Comments (RFCs) over a number of years, and suggests that these have been up to normal scholarly standards since at least 1992. The authors believe that these documents should be considered the equivalent of scholarly publications.

  • Dah Ming Chiu and Tom Z.J. Fu

    This study takes papers from a selected set of computer networking conferences and journals spanning the past twenty years (1989-2008) and produces various statistics to show how our community publishes papers, and how this process is changing over the years. We observe rapid growth in the rate of publications, venues, citations, authors, and number of co-authors. We explain how these quantities are related, and in particular explore how they change over time and the reasons behind the changes. The widely accepted model to explain the power-law distribution of paper citations is preferential attachment. We propose an extension and refinement which suggests that elapsed time is also a factor in determining which papers get cited. We try to compare the selected venues based on citation count, and discuss how we might think about these comparisons, in terms of the roles played by different venues, the ability of venues to predict impact, and citation counts. The treatment of these issues is general and can be applied to study publication patterns in other research communities. The larger goal of this study is to generate discussion about our publication system, and to work towards a vision of transforming our publication system for better scalability and effectiveness.
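The proposed refinement, preferential attachment damped by elapsed time, is easy to simulate. In the sketch below each new paper cites earlier ones with probability proportional to (citations + 1) * decay^age; the paper count, citations per paper and decay factor are made-up illustrative values, not fitted parameters from the study.

```python
import random

def simulate_citations(n_papers, cites_per_paper, decay):
    """Grow a citation network one paper at a time. Each new paper cites
    earlier papers chosen with probability proportional to
    (citations + 1) * decay ** age: preferential attachment damped by
    elapsed time."""
    citations = []
    for t in range(n_papers):
        if citations:
            weights = [(c + 1) * decay ** (t - i)
                       for i, c in enumerate(citations)]
            picks = random.choices(range(len(citations)), weights=weights,
                                   k=min(cites_per_paper, len(citations)))
            for target in picks:
                citations[target] += 1
        citations.append(0)
    return citations

random.seed(1)
cites = simulate_citations(2000, 5, 0.995)
print("most cited:", max(cites), " median:", sorted(cites)[len(cites) // 2])
```

Even with aging the resulting citation counts remain highly skewed, but the most-cited papers are no longer necessarily the oldest ones.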

  • Nathan Farrington, Nikhil Handigol, Christoph Mayer, Kok-Kiong Yap, and Jeffrey C. Mogul

    WREN 2009, the Workshop on Research on Enterprise Networking, was held on August 21, 2009, in conjunction with SIGCOMM 2009 in Barcelona. WREN focussed on research challenges and results specific to enterprise and data-center networks. Details about the workshop, including the organizers and the papers presented, are at Approximately 48 people registered to attend WREN.

    The workshop was structured to encourage a lot of questions and discussion. To record what was said, four volunteer scribes (Nathan Farrington, Nikhil Handigol, Christoph Mayer, and Kok-Kiong Yap) took notes. This report is a merged and edited version of their notes. Please realize that the result, while presented in the form of quotations, is at best a paraphrasing of what was actually said, and in some cases may be mistaken. Also, some quotes might be mis-attributed, and some discussion has been lost, due to the interactive nature of the workshop.

    The second instance of WREN will be combined with the Internet Network Management Workshop (INM), in conjunction with NSDI 2010; see for deadlines and additional information.

    Also note that two papers from WREN were re-published in the January 2010 issue of Computer Communication Review: “Understanding Data Center Traffic Characteristics,” by Theophilus A Benson, Ashok Anand, Aditya Akella, and Ming Zhang, and “Remote Network Labs: An On-Demand Network Cloud for Configuration Testing,” by Huan Liu and Dan Orban.

  • Ken Keys

    The well-known traceroute probing method discovers links between interfaces on Internet routers. IP alias resolution, the process of identifying IP addresses belonging to the same router, is a critical step in producing Internet topology maps. We compare the performance and accuracy of known alias resolution techniques, propose some enhancements, and suggest a practical combination of techniques that can produce the most accurate and complete IP-to-router mapping at macroscopic scale.

  • James Kelly, Wladimir Araujo, and Kallol Banerjee

    The creation of services on IP networks is a lengthy process. The development time is further increased if this involves the equipment manufacturer adding third-party technology in their product. In this work we describe how the JUNOS SDK (part of Juniper Networks Partner Solution Development Platform) facilitates innovation and can be used to considerably shorten the development cycle for the creation of services based on embedding third-party software into Juniper Networks routers. We describe how the JUNOS SDK exposes programmatic interfaces to enable packet manipulation by third-party software and how it can be used as a common platform for deploying unique services through the combination of multiple components from multiple parties.

  • Xu Chen, Yun Mao, Z. Morley Mao, and Jacobus Van der Merwe

    Network management operations are complicated, tedious and error-prone, requiring significant human involvement and expert knowledge. In this paper, we first examine the fundamental components of management operations and argue that the lack of automation is due to a lack of programmability at the right level of abstraction. To address this challenge, we present DECOR, a database-oriented, declarative framework towards automated network management. DECOR models router configuration and any generic network status as relational data in a conceptually centralized database. As such, network management operations can be represented as a series of transactional database queries, which provide the benefit of atomicity, consistency and isolation. The rule-based language in DECOR provides the flexible programmability to specify and enforce network-wide management constraints, and achieve high-level task scheduling. We describe the design rationale and architecture of DECOR and present some preliminary examples applying our approach to common network management tasks.
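The flavor of the database-oriented approach can be conveyed with a toy example: once router state lives in relational tables, a management operation becomes an atomic transaction over them. The schema and the "drain a router" operation below are invented for illustration and are not DECOR's actual data model or language.

```python
import sqlite3

# Router interface state as a relational table (illustrative schema).
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE interface
              (router TEXT, name TEXT, ip TEXT, status TEXT)""")
db.executemany("INSERT INTO interface VALUES (?,?,?,?)",
               [("r1", "ge-0/0/0", "10.0.0.1", "up"),
                ("r1", "ge-0/0/1", "10.0.1.1", "up"),
                ("r2", "ge-0/0/0", "10.0.0.2", "up")])

# "Drain router r1 for maintenance" as one atomic transaction:
# either every interface on r1 goes down, or none does.
with db:
    db.execute("UPDATE interface SET status='down' WHERE router='r1'")

down = db.execute("SELECT count(*) FROM interface "
                  "WHERE router='r1' AND status='down'").fetchone()[0]
print(down)  # 2
```

The point of the transactional framing is exactly this atomicity: a partially applied configuration change never becomes visible.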

  • Fang Hao, T. V. Lakshman, Sarit Mukherjee, and Haoyu Song

    It is envisaged that services and applications will migrate to a cloud-computing paradigm where thin-clients on user devices access, over the network, applications hosted in data centers by application service providers. Examples are cloud-based gaming applications and cloud-supported virtual desktops. For good performance and efficiency, it is critical that these services are delivered from locations that are the best for the current (dynamically changing) set of users. To achieve this, we expect that services will be hosted on virtual machines in interconnected data centers and that these virtual machines will migrate dynamically to locations best-suited for the current user population. A basic network infrastructure need then is the ability to migrate virtual machines across multiple networks without losing service continuity. In this paper, we develop mechanisms to accomplish this using a network-virtualization architecture that relies on a set of distributed forwarding elements with centralized control (borrowing on several recent proposals in a similar vein). We describe a preliminary prototype system, built using Openflow components, that demonstrates the feasibility of this architecture in enabling seamless migration of virtual machines and in enhancing delivery of cloud-based services.

  • Muhammad Bilal Anwer and Nick Feamster

    Network virtualization allows many networks to share the same underlying physical topology; this technology has offered promise both for experimentation and for hosting multiple networks on a single shared physical infrastructure. Much attention has focused on virtualizing the network control plane, but, ultimately, a limiting factor in the deployment of these virtual networks is data-plane performance: Virtual networks must ultimately forward packets at rates that are comparable to native, hardware-based approaches. Aside from proprietary solutions from vendors, hardware support for virtualized data planes is limited. The advent of open, programmable network hardware promises flexibility, speed, and resource isolation, but, unfortunately, hardware does not naturally lend itself to virtualization. We leverage emerging trends in programmable hardware to design a flexible, hardware-based data plane for virtual networks. We present the design, implementation, and preliminary evaluation of this hardware-based data plane and show how the proposed design can support many virtual networks without compromising performance or isolation.

  • Huan Liu and Dan Orban

    Network equipment is difficult to configure correctly. To minimize configuration errors, network administrators typically build a smaller scale test lab replicating the production network and test out their configuration changes before rolling out the changes to production. Unfortunately, building a test lab is expensive and the test equipment is rarely utilized. In this paper, we present Remote Network Labs, which is aimed at leveraging the expensive network equipment more efficiently and reducing the cost of building a test lab. Similar to a server cloud such as Amazon EC2, a user could request network equipment remotely and connect them through a GUI or web services interface. The network equipment is geographically distributed, allowing us to reuse test equipment anywhere. Beyond saving costs, Remote Network Labs brings about many additional benefits, including the ability to fully automate network configuration testing.

  • Theophilus Benson, Ashok Anand, Aditya Akella, and Ming Zhang

    As data centers become more and more central in Internet communications, both research and operations communities have begun to explore how to better design and manage them. In this paper, we present a preliminary empirical study of end-to-end traffic patterns in data center networks that can inform and help evaluate research and operational approaches. We analyze SNMP logs collected at 19 data centers to examine temporal and spatial variations in link loads and losses. We find that while links in the core are heavily utilized, the ones closer to the edge observe a greater degree of loss. We then study packet traces collected at a small number of switches in one data center and find evidence of ON-OFF traffic behavior. Finally, we develop a framework that derives ON-OFF traffic parameters for data center traffic sources that best explain the SNMP data collected for the data center. We show that the framework can be used to evaluate data center traffic engineering approaches. We are also applying the framework to design network-level traffic generators for data centers.
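An ON-OFF source of the kind such a framework parameterizes can be generated in a few lines. The sketch below uses exponentially distributed ON and OFF period lengths purely as a placeholder distribution; the paper's framework instead derives the period and rate parameters that best explain the measured SNMP and packet-trace data.

```python
import random

def on_off_trace(n_periods, on_mean, off_mean, rate):
    """Packet counts per time slot for a single ON-OFF traffic source:
    ON bursts emitting `rate` packets per slot alternate with silent OFF
    gaps, with randomly drawn period lengths (at least one slot each)."""
    trace = []
    for _ in range(n_periods):
        on = max(1, round(random.expovariate(1.0 / on_mean)))
        off = max(1, round(random.expovariate(1.0 / off_mean)))
        trace += [rate] * on + [0] * off
    return trace

random.seed(2)
trace = on_off_trace(100, on_mean=5, off_mean=20, rate=10)
busy = sum(1 for x in trace if x) / len(trace)
print(f"{len(trace)} slots, fraction busy = {busy:.2f}")
```

Aggregating many such independent sources, each with its own fitted parameters, is the usual way to build a synthetic workload generator from this kind of model.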

  • Xuan Bao and Romit Roy Choudhury

    Mobile phones are becoming a convergent platform for sensing, computation, and communication. This paper envisions VUPoints, a collaborative sensing and video-recording system that takes advantage of this convergence. Ideally, when multiple phones in a social gathering run VUPoints, the output is expected to be a short video-highlights of the occasion, created without human intervention. To achieve this, mobile phones must sense their surroundings and collaboratively detect events that qualify for recording. Short video-clips from different phones can be combined to produce the highlights of the occasion. This paper reports exploratory work towards this longer term project. We present a feasibility study, and show how social events can be sensed through mobile phones and used as triggers for video-recording. While false positives cause inclusion of some uninteresting videos, we believe that further research can significantly improve the efficacy of the system.

  • Stephen M. Rumble, Ryan Stutsman, Philip Levis, David Mazières, and Nickolai Zeldovich

    Energy is the critical limiting resource to mobile computing devices. Correspondingly, an operating system must track, provision, and ration how applications consume energy. The emergence of third-party application stores and marketplaces makes this concern even more pressing. A third-party application must not deny service through excessive, unforeseen energy expenditure, whether accidental or malicious. Previous research has shown promise in tracking energy usage and rationing it to meet device lifetime goals, but such mechanisms and policies are still nascent, especially regarding user interaction.

    We argue for a new operating system, called Cinder, which builds on top of the HiStar OS. Cinder’s energy awareness is based on hierarchical capacitors and task profiles. We introduce and explore these abstractions, paying particular attention to the ways in which policies could be generated and enforced in a dynamic system.
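
    The hierarchical-capacitor abstraction can be sketched in a few lines. This is a hypothetical reconstruction, not Cinder's API: the idea is that a task spends energy from its own capacitor, which in turn is refilled from its parent, so a subtree can never consume more than its parent's budget allows.

```python
class Capacitor:
    """Toy model of a hierarchical energy capacitor (names illustrative)."""

    def __init__(self, level, parent=None):
        self.level = level          # joules currently available
        self.parent = parent

    def spend(self, joules):
        """Consume energy; fail if this capacitor is dry."""
        if joules > self.level:
            return False
        self.level -= joules
        return True

    def refill(self, joules):
        """Pull energy down from the parent, bounded by what it holds."""
        if self.parent is not None:
            joules = min(joules, self.parent.level)
            self.parent.level -= joules
        self.level += joules
        return joules
```

    Under this model, rationing a misbehaving third-party application reduces to capping the capacitor its tasks draw from.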

  • Balachander Krishnamurthy and Craig E. Wills

    For purposes of this paper, we define “Personally identifiable information” (PII) as information which can be used to distinguish or trace an individual’s identity either alone or when combined with other information that is linkable to a specific individual. The popularity of Online Social Networks (OSN) has accelerated the appearance of vast amounts of personal information on the Internet. Our research shows that it is possible for third-parties to link PII, which is leaked via OSNs, with user actions both within OSN sites and elsewhere on non-OSN sites. We refer to this ability to link PII and combine it with other information as “leakage”. We have identified multiple ways by which such leakage occurs and discuss measures to prevent it.

  • John Tang, Mirco Musolesi, Cecilia Mascolo, and Vito Latora

    The analysis of social and technological networks has attracted a lot of attention as social networking applications and mobile sensing devices have given us a wealth of real data. Classic studies looked at analysing static or aggregated networks, i.e., networks that do not change over time or that are built by aggregating information over a certain period of time. Given the soaring collections of measurements related to very large, real network traces, researchers are quickly starting to realise that connections inherently vary over time and exhibit more dimensionality than static analysis can capture.

    In this paper we propose new temporal distance metrics to quantify and compare the speed (delay) of information diffusion processes taking into account the evolution of a network from a global view. We show how these metrics are able to capture the temporal characteristics of time-varying graphs, such as delay, duration and time order of contacts (interactions), compared to the metrics used in the past on static graphs. We also characterise network reachability with the concepts of in- and out-components. Then, we generalise them with a global perspective by defining temporal connected components. As a proof of concept we apply these techniques to two classes of time-varying networks, namely connectivity of mobile devices and interactions on an online social network.
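
    The key property behind such temporal metrics is that paths must respect time: a contact can only be used after the information has already reached one of its endpoints. A minimal sketch of temporal reachability (an illustrative reconstruction, not the authors' code) makes this concrete: contacts are (u, v, t) tuples processed in time order.

```python
def temporal_reach(contacts, source):
    """Return the earliest arrival time at each node temporally
    reachable from `source` via time-respecting contact sequences."""
    arrival = {source: 0}
    for u, v, t in sorted(contacts, key=lambda c: c[2]):  # time order
        for a, b in ((u, v), (v, u)):                     # undirected contact
            if a in arrival and arrival[a] <= t:
                if t < arrival.get(b, float("inf")):
                    arrival[b] = t
    return arrival
```

    Note that a node can be connected in the aggregated static graph yet temporally unreachable: if the only path's contacts occur in the wrong time order, no information can flow along it.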

  • Kok-Kiong Yap, Masayoshi Kobayashi, Rob Sherwood, Te-Yuan Huang, Michael Chan, Nikhil Handigol, and Nick McKeown

    We present OpenRoads, an open-source platform for innovation in mobile networks. OpenRoads enables researchers to innovate in their own production networks by providing a wireless extension to OpenFlow. OpenRoads can therefore be thought of as "OpenFlow Wireless".

    The OpenRoads architecture consists of three layers: flow, slicing, and controller. These layers provide flexible control, virtualization, and high-level abstraction, allowing researchers to implement wildly different algorithms and run them concurrently in one network. OpenRoads also incorporates multiple wireless technologies, specifically WiFi and WiMAX. We have deployed OpenRoads and used it as our production network. Our goal is for others to deploy OpenRoads and build their own experiments on it.

  • Norbert Egi, Adam Greenhalgh, Mark Handley, Mickael Hoerdt, Felipe Huici, Laurent Mathy, and Panagiotis Papadimitriou

    Multi-core CPUs, along with recent advances in memory and buses, render commodity hardware a strong candidate for software router virtualization. In this context, we present the design of a new platform for virtual routers on modern PC hardware. We further discuss our design choices in order to achieve both high performance and flexibility for packet processing.

  • Rob Sherwood, Michael Chan, Adam Covington, Glen Gibb, Mario Flajslik, Nikhil Handigol, Te-Yuan Huang, Peyman Kazemian, Masayoshi Kobayashi, Jad Naous, Srinivasan Seetharaman, David Underhill, Tatsuya Yabe, Kok-Kiong Yap, Yiannis Yiakoumis, Hongyi Zeng, Guido Appenzeller, Ramesh Johari, Nick McKeown, and Guru Parulkar

    OpenFlow has been demonstrated as a way for researchers to run networking experiments in their production network. Last year, we demonstrated how an OpenFlow controller running on NOX could move VMs seamlessly around an OpenFlow network. While OpenFlow has the potential to open up control of the network, only one researcher can innovate on the network at a time. What is required is a way to divide, or slice, network resources so that researchers and network administrators can use them in parallel. Network slicing implies that actions in one slice do not negatively affect other slices, even if they share the same underlying physical hardware. A common network slicing technique is VLANs. With VLANs, the administrator partitions the network by switch port, and all traffic is mapped to a VLAN by input port or explicit tag. This coarse-grained type of network slicing complicates more interesting experiments such as IP mobility or wireless handover.

    Here, we demonstrate FlowVisor, a special purpose OpenFlow controller that allows multiple researchers to run experiments safely and independently on the same production OpenFlow network. To motivate FlowVisor’s flexibility, we demonstrate five network slices running in parallel: one slice for the production network and four slices running experimental code. Our demonstration runs on real network hardware deployed on our production network1 at Stanford and a wide-area test-bed with a mix of wired and wireless technologies.
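
    The finer-grained slicing idea can be sketched as a flow-space containment check. This is a hypothetical illustration in the spirit of FlowVisor, not its implementation: each slice owns a region of flow space, and a rule from that slice's controller is admitted only if it stays inside the region. The field names and wildcard convention below are made up for illustration.

```python
WILDCARD = None  # a wildcarded field matches any value

def within(rule, flowspace):
    """True if the rule satisfies every non-wildcard field constraint
    of the slice's flow space (field names are illustrative)."""
    return all(rule.get(field) == value
               for field, value in flowspace.items()
               if value is not WILDCARD)

slices = {
    "http-experiment": {"tcp_dst": 80, "ip_src": WILDCARD},  # web traffic only
    "production":      {},                                   # unconstrained
}

def admit(slice_name, rule):
    """Decide whether a slice's rule may be installed on the switch."""
    return within(rule, slices[slice_name])
```

    Because slices are defined over header fields rather than physical ports, two experiments can safely share a switch while touching disjoint traffic.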

  • Christian Hübsch, Christoph P. Mayer, Sebastian Mies, Roland Bless, Oliver P. Waldhorst, and Martina Zitterbart

    End-to-End connectivity in today's Internet can no longer be taken for granted. Middleboxes, mobility, and protocol heterogeneity complicate application development and often result in application-specific solutions. In our demo we present ariba: an overlay-based approach to handle such network challenges and to provide consistent homogeneous network primitives in order to ease application and service development.

  • Mythili Vutukuru, Hari Balakrishnan, and Kyle Jamieson

    This paper presents SoftRate, a wireless bit rate adaptation protocol that is responsive to rapidly varying channel conditions. Unlike previous work that uses either frame receptions or signal-to-noise ratio (SNR) estimates to select bit rates, SoftRate uses confidence information calculated by the physical layer and exported to higher layers via the SoftPHY interface to estimate the prevailing channel bit error rate (BER). Senders use this BER estimate, calculated over each received packet (even when the packet has no bit errors), to pick good bit rates. SoftRate’s novel BER computation works across different wireless environments and hardware without requiring any retraining. SoftRate also uses abrupt changes in the BER estimate to identify interference, enabling it to reduce the bit rate only in response to channel errors caused by attenuation or fading. Our experiments conducted using a software radio prototype show that SoftRate achieves 2x higher throughput than popular frame-level protocols such as SampleRate [4] and RRAA [24]. It also achieves 20% more throughput than an SNR-based protocol trained on the operating environment, and up to 4x higher throughput than an untrained SNR-based protocol. The throughput gains using SoftRate stem from its ability to react to channel variations within a single packet-time and its robustness to collision losses.
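
    The sender-side decision this enables can be sketched as a threshold lookup. This is an illustrative sketch in the spirit of SoftRate, not the protocol itself: given the per-packet BER estimate fed back by the receiver, the sender picks the highest bit rate whose tolerable BER the channel still satisfies. The rates and thresholds below are made up for illustration, not taken from the paper.

```python
RATE_THRESHOLDS = [   # (bit rate in Mbps, max tolerable BER) -- hypothetical
    (54, 1e-5),
    (24, 1e-4),
    (12, 1e-3),
    (6,  1e-2),
]

def pick_rate(ber_estimate):
    """Pick the highest rate whose BER threshold the channel satisfies."""
    for rate, max_ber in RATE_THRESHOLDS:     # highest rate first
        if ber_estimate <= max_ber:
            return rate
    return RATE_THRESHOLDS[-1][0]             # fall back to most robust rate
```

    The interference heuristic described above would sit in front of this lookup: an abrupt jump in the BER estimate is attributed to a collision and does not trigger a rate decrease.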
