Computer Communication Review: Papers

  • Dirk Trossen, Mikko Sarela, and Karen Sollins

    The current Internet architecture focuses on communicating entities, largely leaving aside the information to be exchanged among them. However, trends in communication scenarios show that WHAT is being exchanged is becoming more important than WHO is exchanging it. Van Jacobson describes this as moving from interconnecting machines to interconnecting information. Any change to this part of the Internet needs argumentation as to why it should be undertaken in the first place. In this position paper, we identify four key challenges, namely information-centrism of applications, supporting and exposing tussles, increasing accountability, and addressing attention scarcity, that we believe an information-centric internetworking architecture could address better and that would make changing such a crucial part worthwhile. We recognize, however, that a much larger and more systematic debate on such a change is needed, underpinned by factual evidence of its benefits.

    Public review by Kevin Almeroth
  • Pei-chun Cheng, Xin Zhao, Beichuan Zhang, and Lixia Zhang

    BGP routing data collected by RouteViews and RIPE RIS have become an essential asset to both the network research and operations communities. However, it has long been speculated that the BGP monitoring sessions between operational routers and the data collectors fail from time to time. Such session failures lead to missing update messages as well as duplicate updates during session re-establishment, making analysis results derived from such data inaccurate. Since there is no complete record of these monitoring session failures, data users either have to sanitize the data at their own discretion with respect to their specific needs or, more commonly, assume that session failures are infrequent enough and simply ignore them. In this paper, we present the first systematic assessment and documentation of the BGP session failures of RouteViews and RIPE data collectors over the past eight years. Our results show that monitoring session failures are rather frequent: more than 30% of BGP monitoring sessions experienced at least one failure per month. Furthermore, we observed failures that affect multiple peer sessions on the same collector around the same time, suggesting that the collector’s local problems are a major factor in session instability. We also developed a website as a community resource that publishes all session failures detected for the RouteViews and RIPE RIS data collectors, to help users select and clean BGP data before performing their analysis.
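
    A rough illustration of how such failures can be flagged from the collected updates alone (a heuristic sketch under assumed thresholds, not the authors' detection method): a long silence from a peer followed by a burst resembling a full table transfer suggests a session reset.

```python
# Heuristic sketch with assumed thresholds: a peer silent for more than
# `gap` seconds that then announces `burst` prefixes within `window`
# seconds probably re-established its monitoring session.
from collections import defaultdict

def find_session_resets(updates, gap=3600, burst=100000, window=120):
    """updates: time-ordered iterable of (timestamp, peer, n_prefixes)."""
    last_seen, watch_start = {}, {}
    counts = defaultdict(int)
    resets = []
    for ts, peer, n in updates:
        if peer in last_seen and ts - last_seen[peer] > gap:
            watch_start[peer] = ts         # silence just ended; watch for a burst
            counts[peer] = 0
        if peer in watch_start:
            if ts - watch_start[peer] <= window:
                counts[peer] += n
                if counts[peer] >= burst:  # looks like a full table transfer
                    resets.append((peer, watch_start.pop(peer)))
            else:
                watch_start.pop(peer)      # no burst followed; false alarm
        last_seen[peer] = ts
    return resets
```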

    Public review by Jitendra Padhye
  • David R. Choffnes and Fabian E. Bustamante

    Today’s open platforms for network measurement and distributed system research, which we collectively refer to as testbeds in this article, provide opportunities for controllable experimentation and evaluation of systems at the scale of hundreds or thousands of hosts. In this article, we identify several issues with extending results from such platforms to Internet-wide perspectives. Specifically, we try to quantify the level of inaccuracy and incompleteness of testbed results when applied in the context of a large-scale peer-to-peer (P2P) system. Based on our results, we emphasize the importance of measurements in the appropriate environment when evaluating Internet-scale systems.

    Public review by Pablo Rodriguez
  • Diana Joumblatt, Renata Teixeira, Jaideep Chandrashekar, and Nina Taft

    There is an amazing paucity of data collected directly from users’ personal computers. One key reason for this is the perception among researchers that users are unwilling to participate in such data collection efforts. To understand the range of opinions on the issues that arise with end-host data tracing, we conducted a survey of 400 computer scientists. In this paper, we summarize and share our findings.

  • k. c. claffy

    On September 23, 2009, CAIDA hosted a virtual Workshop on Internet Economics to bring together network technology and policy researchers, commercial Internet facilities and service providers, and communications regulators to explore a common goal: framing a concrete agenda for the emerging but empirically stunted field of Internet infrastructure economics. With participants stretching from Washington D.C. to Queensland, Australia, we used the electronic conference hosting facilities supported by the California Institute of Technology (CalTech) EVO Collaboration Network. This report describes the workshop discussions and presents relevant open research questions identified by participants.

  • Ratul Mahajan

    This paper is based on a talk that I gave at CoNEXT 2009. Inspired by Hal Varian’s paper on building economic models, it describes a research method for building computer systems. I find this method useful in my work and hope that some readers will find it helpful as well.

  • Matthew Caesar, Martin Casado, Teemu Koponen, Jennifer Rexford, and Scott Shenker

    This paper advocates a different approach to routing convergence: side-stepping the problem by avoiding it in the first place! Rather than recomputing paths after temporary topology changes, we argue for a separation of timescales between offline computation of multiple diverse paths and online spreading of load over these paths. We believe that decoupling failure recovery from path computation leads to networks that are inherently more efficient, more scalable, and easier to manage.

  • Constantine Dovrolis and J. Todd Streelman

    There has been significant research interest recently in understanding the evolution of the current Internet, as well as in designing clean-slate Future Internet architectures. Clearly, even when network architectures are designed from scratch, they have to evolve as their environment (technological constraints, service requirements, applications, economic conditions, etc.) always changes. A key question then is: what makes a network architecture evolvable? What determines the ability of a network architecture to evolve as its environment changes? In this paper, we review some relevant ideas about evolvability from the biological literature. We examine the role of robustness and modularity in evolution, and their relation to evolvability. We also discuss evolutionary kernels and punctuated equilibria, two important concepts that may be relevant to the so-called ossification of the core Internet protocols. Finally, we examine optimality, a design objective that is often of primary interest in engineering but that does not seem to be abundant in biology.

  • Hamed Haddadi, Tristan Henderson, and Jon Crowcroft

    On numerous occasions, trips to the facilities coincide with an important mobile phone call. Due to the sleek and polished nature of modern phones, attempting to promptly deal with such calls can occasionally lead to the phone sliding through the owner’s hands, surrendering to the force of gravity and flying down the hole. This is a disaster, and often an expensive incident. It can also be a health and safety hazard, with the owner desperately attempting to retrieve their phone and re-using it.

    This paper provides a first attempt at a cell phone recovery system using the modern functionalities of Toto Japanese toilets. In our approach, the phone is calmly recovered, sanitized, and returned to the user. This can all happen without the call even being dropped, with the possibility of secure backup of the user data via the embedded sensors and Wi-Fi network connectivity in the toilet. We envision that such an approach will increase collaboration between Japanese, European, and American mobile operators, network researchers, and hardware manufacturers.

  • Martin Burkhart, Dominik Schatzmann, Brian Trammell, Elisa Boschi, and Bernhard Plattner

    In recent years, academic literature has analyzed many attacks on network trace anonymization techniques. These attacks usually correlate external information with anonymized data and successfully de-anonymize objects with distinctive signatures. However, analyses of these attacks still underestimate the real risk of publishing anonymized data, as the most powerful attack against anonymization is traffic injection. We demonstrate that performing live traffic injection attacks against anonymization on a backbone network is not difficult, and that potential countermeasures against these attacks, such as traffic aggregation, randomization, or field generalization, are not particularly effective. We then discuss the tradeoffs between attacker and defender in the so-called injection attack space. An asymmetry in the attack space significantly increases the chance of successful de-anonymization as the injected traffic pattern is lengthened. This leads us to re-examine the role of network data anonymization. We recommend a unified approach to data sharing, which uses anonymization as part of a technical, legal, and social approach to data protection in the research and operations communities.

    Public review by Dmitri Krioukov
  • Yao Liang and Wei Peng

    We present a sophisticated framework to systematically explore the temporal correlation in environmental monitoring wireless sensor networks. The framework optimizes lossless data compression in communications given the resource constraints of sensor nodes. The insights and analyses obtained from the framework can directly lead to innovative and better designs of data-gathering protocols for wireless sensor networks operated in noisy environments, dramatically reducing energy consumption.

    Public review by Martin May
  • Daniel Halperin, Wenjun Hu, Anmol Sheth, and David Wetherall

    The use of multiple antennas and MIMO techniques based on them is the key feature of 802.11n equipment that sets it apart from earlier 802.11a/g equipment. It is responsible for superior performance, reliability and range. In this tutorial, we provide a brief introduction to multiple antenna techniques. We describe the two main classes of those techniques, spatial diversity and spatial multiplexing. To ground our discussion, we explain how they work in 802.11n NICs in practice.
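
    The capacity side of spatial multiplexing can be illustrated with the standard MIMO capacity formula C = log2 det(I + (SNR/Nt) HH*); a minimal numpy sketch (illustrative parameters, i.i.d. Rayleigh fading assumed) shows average capacity growing with the antenna count.

```python
import numpy as np

def avg_capacity(nt, nr, snr_db=20.0, trials=2000, seed=0):
    """Average capacity of an i.i.d. Rayleigh nt-x-nr channel, bits/s/Hz."""
    rng = np.random.default_rng(seed)
    snr = 10.0 ** (snr_db / 10.0)
    total = 0.0
    for _ in range(trials):
        H = (rng.standard_normal((nr, nt)) +
             1j * rng.standard_normal((nr, nt))) / np.sqrt(2.0)
        m = np.eye(nr) + (snr / nt) * (H @ H.conj().T)
        total += np.log2(np.linalg.det(m).real)   # det is real for Hermitian m
    return total / trials

for n in (1, 2, 3):   # capacity grows roughly linearly with min(Nt, Nr)
    print(f"{n}x{n}: {avg_capacity(n, n):.1f} bits/s/Hz")
```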

  • Parag Kulkarni, Woon Hau Chin, and Tim Farnham

    Femtocell access points (FAPs), also popularly known as Home Base Stations, are small base stations for use within indoor environments to improve coverage and capacity. FAPs have a limited range (e.g., limited to a home or office area) but offer immense capacity improvements for the network, owing to the ability to reuse a frequency more often as a result of the smaller coverage areas. Because there may be thousands of these devices, and since the nature of their deployment is ad hoc, it may not be possible to carry out elaborate frequency planning as in traditional cellular networks. This paper aims to outline the radio resource management considerations within the context of femtocells, the broader objective being to initiate a discussion and encourage research in the areas highlighted.

  • Brian E. Carpenter and Craig Partridge

    This note describes the various peer review processes applied to Internet Requests for Comments (RFCs) over a number of years, and suggests that these have been up to normal scholarly standards since at least 1992. The authors believe that these documents should be considered the equivalent of scholarly publications.

  • Dah Ming Chiu and Tom Z.J. Fu

    This study takes papers from a selected set of computer networking conferences and journals spanning the past twenty years (1989-2008) and produces various statistics to show how our community publishes papers, and how this process is changing over the years. We observe rapid growth in the rate of publications, venues, citations, authors, and number of co-authors. We explain how these quantities are related, and in particular explore how they change over time and the reasons behind those changes. The widely accepted model for explaining the power-law distribution of paper citations is preferential attachment. We propose an extension and refinement which suggests that elapsed time is also a factor in determining which papers get cited. We compare the selected venues based on citation counts, and discuss how we might think about such comparisons in terms of the roles played by different venues, and the ability of venues and citation counts to predict impact. The treatment of these issues is general and can be applied to study publication patterns in other research communities. The larger goal of this study is to generate discussion about our publication system, and to work towards a vision that transforms our publication system for better scalability and effectiveness.
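
    A toy simulation of such a refinement (the exponential-aging form below is an assumption for illustration, not necessarily the authors' model): attachment probability proportional to (citations + 1), optionally discounted by paper age.

```python
import math, random

def simulate(n_papers=2000, refs_per_paper=10, tau=None, seed=1):
    """Citations under preferential attachment; tau adds exponential aging."""
    rng = random.Random(seed)
    cites = []
    for t in range(n_papers):
        if t:
            weights = [(c + 1) * (math.exp(-(t - i) / tau) if tau else 1.0)
                       for i, c in enumerate(cites)]
            for target in rng.choices(range(t), weights=weights, k=refs_per_paper):
                cites[target] += 1
        cites.append(0)
    return cites

for tau in (None, 200):   # share of all citations held by the top 10% of papers
    c = sorted(simulate(tau=tau), reverse=True)
    print(tau, round(sum(c[:len(c) // 10]) / sum(c), 2))
```

    With aging enabled, the oldest papers stop absorbing most new citations, which is the kind of elapsed-time effect described above.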

  • Nathan Farrington, Nikhil Handigol, Christoph Mayer, Kok-Kiong Yap, and Jeffrey C. Mogul

    WREN 2009, the Workshop on Research on Enterprise Networking, was held on August 21, 2009, in conjunction with SIGCOMM 2009 in Barcelona. WREN focussed on research challenges and results specific to enterprise and data-center networks. Details about the workshop, including the organizers and the papers presented, are at http://conferences.sigcomm.org/sigcomm/2009/workshops/wren/index.php. Approximately 48 people registered to attend WREN.

    The workshop was structured to encourage a lot of questions and discussion. To record what was said, four volunteer scribes (Nathan Farrington, Nikhil Handigol, Christoph Mayer, and Kok-Kiong Yap) took notes. This report is a merged and edited version of their notes. Please realize that the result, while presented in the form of quotations, is at best a paraphrasing of what was actually said, and in some cases may be mistaken. Also, some quotes might be mis-attributed, and some discussion has been lost, due to the interactive nature of the workshop.

    The second instance of WREN will be combined with the Internet Network Management Workshop (INM), in conjunction with NSDI 2010; see http://www.usenix.org/event/inmwren10/cfp/ for deadlines and additional information.

    Also note that two papers from WREN were re-published in the January 2010 issue of Computer Communication Review: “Understanding Data Center Traffic Characteristics,” by Theophilus A Benson, Ashok Anand, Aditya Akella, and Ming Zhang, and “Remote Network Labs: An On-Demand Network Cloud for Configuration Testing,” by Huan Liu and Dan Orban.

  • Ken Keys

    The well-known traceroute probing method discovers links between interfaces on Internet routers. IP alias resolution, the process of identifying IP addresses belonging to the same router, is a critical step in producing Internet topology maps. We compare the performance and accuracy of known alias resolution techniques, propose some enhancements, and suggest a practical combination of techniques that can produce the most accurate and complete IP-to-router mapping at macroscopic scale.
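
    One of the best-known techniques being compared is the Ally-style IP-ID test; the sketch below (simplified, with an assumed probe-sample format) checks whether two addresses appear to share a single monotonically increasing IP-ID counter.

```python
def likely_aliases(samples_a, samples_b, max_step=200):
    """samples_*: [(probe_order, ip_id), ...] from probes alternating
    between candidate addresses A and B."""
    merged = sorted(samples_a + samples_b)          # by probe send order
    ids = [ip_id for _, ip_id in merged]
    step = lambda x, y: (y - x) % 65536             # 16-bit counter wrap
    # A shared counter yields small positive steps across the merged
    # sequence; independent counters interleave with large jumps.
    return all(0 < step(x, y) < max_step for x, y in zip(ids, ids[1:]))

print(likely_aliases([(0, 100), (2, 105)], [(1, 102), (3, 108)]))    # -> True
print(likely_aliases([(0, 100), (2, 105)], [(1, 9000), (3, 9004)]))  # -> False
```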

  • James Kelly, Wladimir Araujo, and Kallol Banerjee

    The creation of services on IP networks is a lengthy process. The development time is further increased when it involves the equipment manufacturer adding third-party technology to its products. In this work, we describe how the JUNOS SDK (part of the Juniper Networks Partner Solution Development Platform) facilitates innovation and can be used to considerably shorten the development cycle for services based on embedding third-party software in Juniper Networks routers. We describe how the JUNOS SDK exposes programmatic interfaces to enable packet manipulation by third-party software, and how it can be used as a common platform for deploying unique services that combine multiple components from multiple parties.

  • Xu Chen, Yun Mao, Z. Morley Mao, and Jacobus Van der Merwe

    Network management operations are complicated, tedious, and error-prone, requiring significant human involvement and expert knowledge. In this paper, we first examine the fundamental components of management operations and argue that the lack of automation is due to a lack of programmability at the right level of abstraction. To address this challenge, we present DECOR, a database-oriented, declarative framework for automated network management. DECOR models router configuration and any generic network status as relational data in a conceptually centralized database. As such, network management operations can be represented as a series of transactional database queries, which provide the benefits of atomicity, consistency, and isolation. The rule-based language in DECOR provides the flexible programmability needed to specify and enforce network-wide management constraints and achieve high-level task scheduling. We describe the design rationale and architecture of DECOR, and present some preliminary examples applying our approach to common network management tasks.
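
    A minimal sketch of the database-oriented idea, using an invented schema rather than DECOR's actual language: router state as relational rows, and a network-wide constraint expressed as a query that must return no rows.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE iface (router TEXT, name TEXT, ip TEXT, mtu INT)")
db.executemany("INSERT INTO iface VALUES (?, ?, ?, ?)", [
    ("r1", "ge-0/0/0", "10.0.0.1", 1500),
    ("r2", "ge-0/0/0", "10.0.0.2", 9000),   # MTU mismatch on the link
])
db.execute("CREATE TABLE link (a_router TEXT, a_if TEXT, b_router TEXT, b_if TEXT)")
db.execute("INSERT INTO link VALUES ('r1', 'ge-0/0/0', 'r2', 'ge-0/0/0')")

# Constraint: both ends of every link must agree on MTU.
violations = db.execute("""
    SELECT l.a_router, l.b_router FROM link l
    JOIN iface ia ON ia.router = l.a_router AND ia.name = l.a_if
    JOIN iface ib ON ib.router = l.b_router AND ib.name = l.b_if
    WHERE ia.mtu != ib.mtu""").fetchall()
print(violations)   # -> [('r1', 'r2')]
```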

  • Fang Hao, T. V. Lakshman, Sarit Mukherjee, and Haoyu Song

    It is envisaged that services and applications will migrate to a cloud-computing paradigm where thin clients on user devices access, over the network, applications hosted in data centers by application service providers. Examples are cloud-based gaming applications and cloud-supported virtual desktops. For good performance and efficiency, it is critical that these services are delivered from locations that are the best for the current (dynamically changing) set of users. To achieve this, we expect that services will be hosted on virtual machines in interconnected data centers and that these virtual machines will migrate dynamically to locations best suited for the current user population. A basic network infrastructure need then is the ability to migrate virtual machines across multiple networks without losing service continuity. In this paper, we develop mechanisms to accomplish this using a network-virtualization architecture that relies on a set of distributed forwarding elements with centralized control (borrowing on several recent proposals in a similar vein). We describe a preliminary prototype system, built using OpenFlow components, that demonstrates the feasibility of this architecture in enabling seamless migration of virtual machines and in enhancing delivery of cloud-based services.

  • Muhammad Bilal Anwer and Nick Feamster

    Network virtualization allows many networks to share the same underlying physical topology; this technology has offered promise both for experimentation and for hosting multiple networks on a single shared physical infrastructure. Much attention has focused on virtualizing the network control plane, but, ultimately, a limiting factor in the deployment of these virtual networks is data-plane performance: Virtual networks must ultimately forward packets at rates that are comparable to native, hardware-based approaches. Aside from proprietary solutions from vendors, hardware support for virtualized data planes is limited. The advent of open, programmable network hardware promises flexibility, speed, and resource isolation, but, unfortunately, hardware does not naturally lend itself to virtualization. We leverage emerging trends in programmable hardware to design a flexible, hardware-based data plane for virtual networks. We present the design, implementation, and preliminary evaluation of this hardware-based data plane and show how the proposed design can support many virtual networks without compromising performance or isolation.

  • Huan Liu and Dan Orban

    Network equipment is difficult to configure correctly. To minimize configuration errors, network administrators typically build a smaller scale test lab replicating the production network and test out their configuration changes before rolling out the changes to production. Unfortunately, building a test lab is expensive and the test equipment is rarely utilized. In this paper, we present Remote Network Labs, which is aimed at leveraging the expensive network equipment more efficiently and reducing the cost of building a test lab. Similar to a server cloud such as Amazon EC2, a user could request network equipment remotely and connect them through a GUI or web services interface. The network equipment is geographically distributed, allowing us to reuse test equipment anywhere. Beyond saving costs, Remote Network Labs brings about many additional benefits, including the ability to fully automate network configuration testing.

  • Theophilus Benson, Ashok Anand, Aditya Akella, and Ming Zhang

    As data centers become more and more central in Internet communications, both the research and operations communities have begun to explore how to better design and manage them. In this paper, we present a preliminary empirical study of end-to-end traffic patterns in data center networks that can inform and help evaluate research and operational approaches. We analyze SNMP logs collected at 19 data centers to examine temporal and spatial variations in link loads and losses. We find that while links in the core are heavily utilized, those closer to the edge observe a greater degree of loss. We then study packet traces collected at a small number of switches in one data center and find evidence of ON-OFF traffic behavior. Finally, we develop a framework that derives ON-OFF traffic parameters for data center traffic sources that best explain the SNMP data collected for the data center. We show that the framework can be used to evaluate data center traffic engineering approaches. We are also applying the framework to design network-level traffic generators for data centers.
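
    One simple way to extract ON/OFF periods from a packet trace (a thresholding sketch with an assumed cutoff, not necessarily the paper's fitting procedure) is shown below; distributions would then be fit to the resulting samples.

```python
def on_off_periods(arrivals, off_gap=0.001):
    """arrivals: sorted packet timestamps (seconds) from one source."""
    on, off = [], []
    start = arrivals[0]
    for prev, cur in zip(arrivals, arrivals[1:]):
        if cur - prev > off_gap:          # gap long enough to end the ON period
            on.append(prev - start)
            off.append(cur - prev)
            start = cur
    on.append(arrivals[-1] - start)       # close the final ON period
    return on, off

print(on_off_periods([0.0, 0.0002, 0.0004, 0.01, 0.0102]))
# -> ON periods of roughly [0.0004, 0.0002], one OFF gap of roughly 0.0096
```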

  • Xuan Bao and Romit Roy Choudhury

    Mobile phones are becoming a convergent platform for sensing, computation, and communication. This paper envisions VUPoints, a collaborative sensing and video-recording system that takes advantage of this convergence. Ideally, when multiple phones in a social gathering run VUPoints, the output is expected to be a short video-highlights of the occasion, created without human intervention. To achieve this, mobile phones must sense their surroundings and collaboratively detect events that qualify for recording. Short video-clips from different phones can be combined to produce the highlights of the occasion. This paper reports exploratory work towards this longer term project. We present a feasibility study, and show how social events can be sensed through mobile phones and used as triggers for video-recording. While false positives cause inclusion of some uninteresting videos, we believe that further research can significantly improve the efficacy of the system.

  • Stephen M. Rumble, Ryan Stutsman, Philip Levis, David Mazières, and Nickolai Zeldovich

    Energy is the critical limiting resource to mobile computing devices. Correspondingly, an operating system must track, provision, and ration how applications consume energy. The emergence of third-party application stores and marketplaces makes this concern even more pressing. A third-party application must not deny service through excessive, unforeseen energy expenditure, whether accidental or malicious. Previous research has shown promise in tracking energy usage and rationing it to meet device lifetime goals, but such mechanisms and policies are still nascent, especially regarding user interaction.

    We argue for a new operating system, called Cinder, which builds on top of the HiStar OS. Cinder’s energy awareness is based on hierarchical capacitors and task profiles. We introduce and explore these abstractions, paying particular attention to the ways in which policies could be generated and enforced in a dynamic system.

  • Balachander Krishnamurthy and Craig E. Wills

    For the purposes of this paper, we define “personally identifiable information” (PII) as information which can be used to distinguish or trace an individual’s identity, either alone or when combined with other information that is linkable to a specific individual. The popularity of Online Social Networks (OSN) has accelerated the appearance of vast amounts of personal information on the Internet. Our research shows that it is possible for third parties to link PII, which is leaked via OSNs, with user actions both within OSN sites and elsewhere on non-OSN sites. We refer to this ability to link PII and combine it with other information as “leakage”. We have identified multiple ways by which such leakage occurs and discuss measures to prevent it.
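
    A toy detector in the spirit of this measurement, with a hypothetical OSN domain and log fields: flag requests to third-party servers whose Referer carries an OSN user identifier.

```python
from urllib.parse import urlparse, parse_qs

OSN_DOMAINS = {"socialnet.example"}          # assumed OSN domain

def leaks_pii(request_host, referer):
    ref = urlparse(referer)
    if ref.hostname is None or request_host.endswith(tuple(OSN_DOMAINS)):
        return None                          # first-party request: not leakage
    if ref.hostname.endswith(tuple(OSN_DOMAINS)):
        qs = parse_qs(ref.query)
        ids = qs.get("uid") or qs.get("profile_id")   # assumed parameter names
        if ids:
            return ids[0]                    # user id exposed to a third party
    return None

print(leaks_pii("ads.tracker.example",
                "http://socialnet.example/profile?uid=12345"))  # -> 12345
```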

  • John Tang, Mirco Musolesi, Cecilia Mascolo, and Vito Latora

    The analysis of social and technological networks has attracted a lot of attention as social networking applications and mobile sensing devices have given us a wealth of real data. Classic studies looked at analysing static or aggregated networks, i.e., networks that do not change over time or that are built by aggregating information over a certain period. Given the soaring collections of measurements related to very large, real network traces, researchers are quickly starting to realise that connections are inherently time-varying and exhibit more dimensionality than static analysis can capture.

    In this paper we propose new temporal distance metrics to quantify and compare the speed (delay) of information diffusion processes taking into account the evolution of a network from a global view. We show how these metrics are able to capture the temporal characteristics of time-varying graphs, such as delay, duration and time order of contacts (interactions), compared to the metrics used in the past on static graphs. We also characterise network reachability with the concepts of in- and out-components. Then, we generalise them with a global perspective by defining temporal connected components. As a proof of concept we apply these techniques to two classes of time-varying networks, namely connectivity of mobile devices and interactions on an online social network.
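
    A minimal sketch of one such metric, the earliest-arrival (time-respecting) distance: a path may only traverse contacts in non-decreasing time order. The contact representation below is assumed for illustration.

```python
import heapq
from collections import defaultdict

def temporal_distance(contacts, src, t_start):
    """contacts: list of (time, u, v) undirected contact events; returns the
    delay from t_start until each node is first reachable from src."""
    by_node = defaultdict(list)
    for t, u, v in contacts:
        by_node[u].append((t, v))
        by_node[v].append((t, u))
    earliest = {src: t_start}
    heap = [(t_start, src)]
    while heap:
        t, u = heapq.heappop(heap)
        if t > earliest.get(u, float("inf")):
            continue                          # stale heap entry
        for ct, w in by_node[u]:
            # contacts are usable only at or after the arrival time at u
            if ct >= t and ct < earliest.get(w, float("inf")):
                earliest[w] = ct
                heapq.heappush(heap, (ct, w))
    return {v: t - t_start for v, t in earliest.items()}

print(temporal_distance([(1, "a", "b"), (2, "b", "c"), (0, "a", "c")], "a", 0))
# -> {'a': 0, 'b': 1, 'c': 0}
```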

  • Kok-Kiong Yap, Masayoshi Kobayashi, Rob Sherwood, Te-Yuan Huang, Michael Chan, Nikhil Handigol, and Nick McKeown

    We present OpenRoads, an open-source platform for innovation in mobile networks. OpenRoads enables researchers to innovate using their own production networks by providing a wireless extension of OpenFlow. OpenRoads can thus be thought of as "OpenFlow Wireless".

    The OpenRoads architecture consists of three layers: flow, slicing, and controller. These layers provide flexible control, virtualization, and high-level abstraction, allowing researchers to implement wildly different algorithms and run them concurrently in one network. OpenRoads also incorporates multiple wireless technologies, specifically WiFi and WiMAX. We have deployed OpenRoads and used it as our production network. Our goal is for others to deploy OpenRoads and build their own experiments on it.

  • Norbert Egi, Adam Greenhalgh, Mark Handley, Mickael Hoerdt, Felipe Huici, Laurent Mathy, and Panagiotis Papadimitriou

    Multi-core CPUs, along with recent advances in memory and buses, render commodity hardware a strong candidate for software router virtualization. In this context, we present the design of a new platform for virtual routers on modern PC hardware. We further discuss our design choices in order to achieve both high performance and flexibility for packet processing.

  • Rob Sherwood, Michael Chan, Adam Covington, Glen Gibb, Mario Flajslik, Nikhil Handigol, Te-Yuan Huang, Peyman Kazemian, Masayoshi Kobayashi, Jad Naous, Srinivasan Seetharaman, David Underhill, Tatsuya Yabe, Kok-Kiong Yap, Yiannis Yiakoumis, Hongyi Zeng, Guido Appenzeller, Ramesh Johari, Nick McKeown, and Guru Parulkar

    OpenFlow has been demonstrated as a way for researchers to run networking experiments in their production networks. Last year, we demonstrated how an OpenFlow controller running on NOX could move VMs seamlessly around an OpenFlow network. While OpenFlow has the potential to open up control of the network, only one researcher can innovate on the network at a time. What is required is a way to divide, or slice, network resources so that researchers and network administrators can use them in parallel. Network slicing implies that actions in one slice do not negatively affect other slices, even if they share the same underlying physical hardware. A common network slicing technique is VLANs. With VLANs, the administrator partitions the network by switch port and all traffic is mapped to a VLAN by input port or explicit tag. This coarse-grained type of network slicing complicates more interesting experiments such as IP mobility or wireless handover.

    Here, we demonstrate FlowVisor, a special-purpose OpenFlow controller that allows multiple researchers to run experiments safely and independently on the same production OpenFlow network. To motivate FlowVisor’s flexibility, we demonstrate five network slices running in parallel: one slice for the production network and four slices running experimental code. Our demonstration runs on real network hardware deployed on our production network at Stanford and a wide-area test-bed with a mix of wired and wireless technologies.
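
    A minimal sketch of the flowspace idea (invented data structures, not FlowVisor's real API): each slice owns a region of header space, and a rule is admitted only into slices whose region contains it.

```python
def within(region, match):
    # A rule is inside a region if every field the region constrains is
    # fixed by the rule to an allowed value.
    return all(match.get(f) in allowed for f, allowed in region.items())

slices = {
    "production": {"tcp_dport": {80, 443}},   # owns web traffic
    "experiment": {"vlan": {42}},             # owns one VLAN
}

rule = {"tcp_dport": 80, "ip_dst": "10.0.0.1"}
print([name for name, region in slices.items() if within(region, rule)])
# -> ['production']   (the rule would be rejected for the experiment slice)
```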

  • Christian Hübsch, Christoph P. Mayer, Sebastian Mies, Roland Bless, Oliver P. Waldhorst, and Martina Zitterbart

    End-to-end connectivity in today's Internet can no longer be taken for granted. Middleboxes, mobility, and protocol heterogeneity complicate application development and often result in application-specific solutions. In our demo, we present ariba: an overlay-based approach to handle such network challenges and to provide consistent, homogeneous network primitives in order to ease application and service development.

  • Mythili Vutukuru, Hari Balakrishnan, and Kyle Jamieson

    This paper presents SoftRate, a wireless bit rate adaptation protocol that is responsive to rapidly varying channel conditions. Unlike previous work that uses either frame receptions or signal-to-noise ratio (SNR) estimates to select bit rates, SoftRate uses confidence information calculated by the physical layer and exported to higher layers via the SoftPHY interface to estimate the prevailing channel bit error rate (BER). Senders use this BER estimate, calculated over each received packet (even when the packet has no bit errors), to pick good bit rates. SoftRate’s novel BER computation works across different wireless environments and hardware without requiring any retraining. SoftRate also uses abrupt changes in the BER estimate to identify interference, enabling it to reduce the bit rate only in response to channel errors caused by attenuation or fading. Our experiments conducted using a software radio prototype show that SoftRate achieves 2x higher throughput than popular frame-level protocols such as SampleRate [4] and RRAA [24]. It also achieves 20% more throughput than an SNR-based protocol trained on the operating environment, and up to 4x higher throughput than an untrained SNR-based protocol. The throughput gains using SoftRate stem from its ability to react to channel variations within a single packet-time and its robustness to collision losses.
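
    The decision logic described above might look roughly like the following sketch (the BER thresholds and the interference jump factor are illustrative assumptions, not SoftRate's actual values).

```python
# Highest 802.11a/g rate whose BER ceiling the channel estimate satisfies.
RATE_BER_CEILING = [(54, 1e-5), (48, 1e-4), (36, 1e-3), (24, 1e-2), (12, 1e-1)]

def pick_rate(ber_estimate):
    for rate, ceiling in RATE_BER_CEILING:
        if ber_estimate <= ceiling:
            return rate
    return 6                                # most robust fallback rate

def on_packet(history, ber, jump_factor=50):
    """Called with the per-packet BER estimate exported by the PHY."""
    if history and ber > jump_factor * history[-1]:
        # Abrupt jump: likely interference, so keep the rate the channel
        # itself supports instead of backing off.
        return pick_rate(history[-1])
    history.append(ber)
    return pick_rate(ber)

hist = []
print(on_packet(hist, 1e-6), on_packet(hist, 2e-6), on_packet(hist, 1e-3))
# -> 54 54 54   (the sudden 500x BER jump is treated as interference)
```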

  • Aveek Dutta, Dola Saha, Dirk Grunwald, and Douglas Sicker

    Network protocol designers, both at the physical and network level, have long considered interference and simultaneous transmission in wireless protocols as a problem to be avoided. This, coupled with a tendency to emulate wired network protocols in the wireless domain, has led to artificial limitations in wireless networks. In this paper, we argue that wireless protocols can exploit simultaneous transmission to reduce the cost of reliable multicast by orders of magnitude. With an appropriate application interface, simultaneous transmission can also greatly speed up common group communication primitives, such as anycast, broadcast, leader election and others.

    The proposed method fits precisely into the domain of directly reachable nodes, where many group communication mechanisms are commonly used in routing protocols and other physical-layer mechanisms. We demonstrate how simultaneous transmission can be used to implement a reliable broadcast for infrastructure and peer-to-peer networks using prototype reconfigurable hardware. We also validate the notion of using simple spectrum sensing techniques to distinguish multiple transmissions. We then describe how the mechanism can be extended to solve group communication problems, and the challenges inherent in building innovative protocols that are both fast and reliable.

  • Paramvir Bahl, Ranveer Chandra, Thomas Moscibroda, Rohan Murty, and Matt Welsh

    Networking over UHF white spaces is fundamentally different from conventional Wi-Fi along three axes: spatial variation, temporal variation, and fragmentation of the UHF spectrum. Each of these differences gives rise to new challenges for implementing a wireless network in this band. We present the design and implementation of WhiteFi, the first Wi-Fi-like system constructed on top of UHF white spaces. WhiteFi incorporates a new adaptive spectrum assignment algorithm to handle spectrum variation and fragmentation, and proposes a low-overhead protocol to handle temporal variation. WhiteFi builds on a simple technique, called SIFT, that reduces the time to detect transmissions in variable-channel-width systems by analyzing raw signals in the time domain. We provide an extensive system evaluation in terms of a prototype implementation and detailed experimental and simulation results.

  • Radhika Niranjan Mysore, Andreas Pamboris, Nathan Farrington, Nelson Huang, Pardis Miri, Sivasankar Radhakrishnan, Vikram Subramanya, and Amin Vahdat

    This paper considers the requirements for a scalable, easily manageable, fault-tolerant, and efficient data center network fabric. Trends in multi-core processors, end-host virtualization, and commodities of scale are pointing to future single-site data centers with millions of virtual end points. Existing layer 2 and layer 3 network protocols face some combination of limitations in such a setting: lack of scalability, difficult management, inflexible communication, or limited support for virtual machine migration. To some extent, these limitations may be inherent for Ethernet/IP style protocols when trying to support arbitrary topologies. We observe that data center networks are often managed as a single logical network fabric with a known baseline topology and growth model. We leverage this observation in the design and implementation of PortLand, a scalable, fault-tolerant layer 2 routing and forwarding protocol for data center environments. Through our implementation and evaluation, we show that PortLand holds promise for supporting a "plug-and-play" large-scale data center network.
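
    A sketch of the positional pseudo-MAC idea behind such a fabric (the field layout below is assumed for illustration): encoding pod, position, and port into the address lets switches forward on address prefixes instead of flat MAC tables.

```python
def pmac(pod, position, port, vmid):
    """Positional pseudo-MAC: pod.position.port.vmid (assumed 16:8:8:16 bits)."""
    return f"{pod:04x}:{position:02x}:{port:02x}:{vmid:04x}"

def same_pod(my_pod, dst_pmac):
    # Forwarding decisions can match on the pod prefix alone.
    return int(dst_pmac.split(":")[0], 16) == my_pod

addr = pmac(pod=3, position=1, port=2, vmid=7)
print(addr, same_pod(3, addr))   # -> 0003:01:02:0007 True
```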

  • Albert Greenberg, James R. Hamilton, Navendu Jain, Srikanth Kandula, Changhoon Kim, Parantap Lahiri, David A. Maltz, Parveen Patel, and Sudipta Sengupta

    To be agile and cost effective, data centers should allow dynamic resource allocation across large server pools. In particular, the data center network should enable any server to be assigned to any service. To meet these goals, we present VL2, a practical network architecture that scales to support huge data centers with uniform high capacity between servers, performance isolation between services, and Ethernet layer-2 semantics. VL2 uses (1) flat addressing to allow service instances to be placed anywhere in the network, (2) Valiant Load Balancing to spread traffic uniformly across network paths, and (3) end-system-based address resolution to scale to large server pools, without introducing complexity to the network control plane. VL2’s design is driven by detailed measurements of traffic and fault data from a large operational cloud service provider. VL2’s implementation leverages proven network technologies, already available at low cost in high-speed hardware implementations, to build a scalable and reliable network architecture. As a result, VL2 networks can be deployed today, and we have built a working prototype. We evaluate the merits of the VL2 design using measurement, analysis, and experiments. Our VL2 prototype shuffles 2.7 TB of data among 75 servers in 395 seconds, sustaining a rate that is 94% of the maximum possible.
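
    A minimal sketch of the Valiant Load Balancing step (illustrative, not VL2's implementation): hash each flow to a random intermediate switch so traffic is spread uniformly regardless of the traffic matrix.

```python
import hashlib

INTERMEDIATES = ["int-1", "int-2", "int-3", "int-4"]   # assumed switch names

def intermediate_for(flow_5tuple):
    # Hashing keeps all packets of one flow on one path (no reordering)
    # while spreading different flows uniformly over the intermediates.
    h = hashlib.md5(repr(flow_5tuple).encode()).digest()
    return INTERMEDIATES[h[0] % len(INTERMEDIATES)]

flow = ("10.0.1.5", "10.0.9.7", 6, 33412, 443)
print(intermediate_for(flow))   # the same flow always picks the same switch
```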

  • Chuanxiong Guo, Guohan Lu, Dan Li, Haitao Wu, Xuan Zhang, Yunfeng Shi, Chen Tian, Yongguang Zhang, and Songwu Lu

    This paper presents BCube, a new network architecture specifically designed for shipping-container based, modular data centers. At the core of the BCube architecture is its server-centric network structure, where servers with multiple network ports connect to multiple layers of COTS (commodity off-the-shelf) mini-switches. Servers act not only as end hosts, but also as relay nodes for each other. BCube supports various bandwidth-intensive applications by speeding up one-to-one, one-to-several, and one-to-all traffic patterns, and by providing high network capacity for all-to-all traffic.

    BCube exhibits graceful performance degradation as the server and/or switch failure rate increases. This property is of special importance for shipping-container data centers, since once the container is sealed and operational, it becomes very difficult to repair or replace its components.

    Our implementation experiences show that BCube can be seamlessly integrated with the TCP/IP protocol stack and that BCube packet forwarding can be efficiently implemented in both hardware and software. Experiments in our testbed demonstrate that BCube is fault tolerant and load balancing, and that it significantly accelerates representative bandwidth-intensive applications.
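
    A simplified sketch of BCube's digit-based addressing and one-to-one path construction: each hop corrects one digit of the address, crossing one level's mini-switch.

```python
def bcube_path(src, dst):
    """src, dst: equal-length digit tuples, e.g. (0, 1) in a BCube(n, 1);
    returns the server sequence of one shortest path (simplified: digits
    corrected in fixed level order, no failure handling)."""
    path, cur = [src], list(src)
    for level in range(len(src)):
        if cur[level] != dst[level]:
            cur[level] = dst[level]       # hop through this level's switch
            path.append(tuple(cur))
    return path

print(bcube_path((0, 0), (3, 2)))   # -> [(0, 0), (3, 0), (3, 2)]
```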

  • Yinglian Xie, Fang Yu, and Martin Abadi

    Today’s Internet is open and anonymous. While it permits free traffic from any host, attackers that generate malicious traffic cannot typically be held accountable. In this paper, we present a system called HostTracker that tracks dynamic bindings between hosts and IP addresses by leveraging application-level data with unreliable IDs. Using a month-long user login trace from a large email provider, we show that HostTracker can attribute most of the activities reliably to the responsible hosts, despite the existence of dynamic IP addresses, proxies, and NATs. With this information, we are able to analyze the host population, to conduct forensic analysis, and also to blacklist malicious hosts dynamically.
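
    A toy version of the underlying tracking step, with an assumed input format: group time-ordered login events into per-user IP binding intervals, the raw material for a host-IP history.

```python
from collections import defaultdict

def bindings(events, idle_gap=3600):
    """events: time-ordered (timestamp, user_id, ip); returns per-user
    intervals [(ip, t_first, t_last), ...] as a rough binding history."""
    out = defaultdict(list)
    for ts, uid, ip in events:
        hist = out[uid]
        if hist and hist[-1][0] == ip and ts - hist[-1][2] <= idle_gap:
            hist[-1] = (ip, hist[-1][1], ts)     # extend the current binding
        else:
            hist.append((ip, ts, ts))            # a new binding interval
    return out

print(dict(bindings([(0, "u1", "1.2.3.4"), (100, "u1", "1.2.3.4"),
                     (9000, "u1", "5.6.7.8")])))
# -> {'u1': [('1.2.3.4', 0, 100), ('5.6.7.8', 9000, 9000)]}
```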

  • Ashok Anand, Vyas Sekar, and Aditya Akella

    Application-independent Redundancy Elimination (RE), or identifying and removing repeated content from network transfers, has been used with great success for improving network performance on enterprise access links. Recently, there has been growing interest in supporting RE as a network-wide service. Such a network-wide RE service benefits ISPs by reducing link loads and increasing the effective network capacity to better accommodate the increasing number of bandwidth-intensive applications. Further, a network-wide RE service democratizes the benefits of RE to all end-to-end traffic and improves application performance by increasing throughput and reducing latencies.

    While the vision of a network-wide RE service is appealing, realizing it in practice is challenging. In particular, extending single-vantage-point RE solutions designed for enterprise access links to the network-wide case is inefficient and/or requires modifying routing policies. We present SmartRE, a practical and efficient architecture for network-wide RE. We show that SmartRE can enable more effective utilization of the available resources at network devices, and thus can magnify the overall benefits of network-wide RE. We prototype our algorithms using Click and test our framework extensively using several real and synthetic traces.
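
    A sketch of the content-fingerprinting step that RE systems build on, simplified to fixed-size windows and a plain hash (real systems use Rabin fingerprints over sliding windows): repeated regions are found via a fingerprint cache and can then be encoded as pointers.

```python
def find_redundant_regions(payload, cache, window=64):
    hits = []
    for i in range(0, len(payload) - window + 1, window):
        chunk = payload[i:i + window]
        fp = hash(chunk)
        if cache.get(fp) == chunk:        # verify to rule out hash collisions
            hits.append(i)                # region can be encoded as a pointer
        else:
            cache[fp] = chunk
    return hits

cache = {}
find_redundant_regions(b"header" + bytes(122), cache)         # first transfer
print(find_redundant_regions(b"header" + bytes(122), cache))  # -> [0, 64]
```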

  • Aditya Dhananjay, Hui Zhang, Jinyang Li, and Lakshminarayanan Subramanian

    Realizing the full potential of a multi-radio mesh network involves two main challenges: how to assign channels to radios at each node to minimize interference and how to choose high throughput routing paths in the face of lossy links, variable channel conditions and external load. This paper presents ROMA, a practical, distributed channel assignment and routing protocol that achieves good multi-hop path performance between every node and one or more designated gateway nodes in a dual-radio network. ROMA assigns non-overlapping channels to links along each gateway path to eliminate intra-path interference. ROMA reduces inter-path interference by assigning different channels to paths destined for different gateways whenever possible. Evaluations on a 24-node dual-radio testbed show that ROMA achieves high throughput in a variety of scenarios.
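
    A much-simplified sketch of the path channel assignment idea (the rotation scheme below is an illustrative stand-in for ROMA's actual algorithm): give adjacent links on a gateway path different channels, and seed the pattern per gateway so paths to different gateways tend to differ too.

```python
def assign_channels(path_len, gateway_id, channels=(1, 6, 11)):
    """Rotate through non-overlapping channels hop by hop, so no two
    adjacent links on the path share a channel; the gateway id offsets
    the rotation to separate paths toward different gateways."""
    base = gateway_id % len(channels)
    return [channels[(base + hop) % len(channels)] for hop in range(path_len)]

print(assign_channels(4, gateway_id=0))  # -> [1, 6, 11, 1]
print(assign_channels(4, gateway_id=1))  # -> [6, 11, 1, 6]
```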
