Computer Communication Review: Papers

  • kc claffy

    On February 10-12, 2011, CAIDA hosted the third Workshop on Active Internet Measurements (AIMS-3) as part of our series of Internet Statistics and Metrics Analysis (ISMA) workshops. As with the previous two AIMS workshops, the goals were to further our understanding of the potential and limitations of active measurement research and infrastructure in the wide-area Internet, and to promote cooperative solutions and coordinated strategies to address future data needs of the network and security research communities. For three years, the workshop has fostered interdisciplinary conversation among researchers, operators, and government, focused on analysis of goals, means, and emerging issues in active Internet measurement projects. The first workshop emphasized discussion of existing hardware and software platforms for macroscopic measurement and mapping of Internet properties, in particular those related to cybersecurity. The second workshop included more performance evaluation and data-sharing approaches. This year we expanded the workshop agenda to include active measurement topics of more recent interest: broadband performance; gauging IPv6 deployment; and measurement activities in international research networks.

  • kc claffy

    Exhaustion of the IPv4 address space available from the Internet addressing authority (IANA), which occurred in February 2011, is finally exerting exogenous pressure on network operators to begin deploying IPv6. There are two possible outcomes of this transition. IPv6 may be widely adopted and embraced, rendering many existing methods of measuring and monitoring the Internet ineffective. A second possibility is that IPv6 languishes, transition mechanisms fail, or performance suffers. Either scenario requires data, measurement, and analysis to inform technical, business, and policy decisions. We survey available data that have allowed limited tracking of IPv6 deployment thus far, describe additional types of data that would support better tracking, and offer a perspective on the challenging future of IPv6 evolution.

  • S. Keshav

    Twenty years ago, when I was still a graduate student, going online meant firing up a high-speed 1200 baud modem and typing text on a Z19 glass terminal to interact with my university’s VAX 11/780 server. Today, this seems quaint, if not downright archaic. Fast-forward twenty years, and it seems very likely that reading newspapers and magazines on paper will seem equally quaint, if not downright wasteful. It is clear that the question is when, not if, CCR goes completely online.

    CCR today provides two types of content: editorials and technical articles. Both are selected to be relevant, novel, and timely. By going online only, we would certainly not give up these qualities. Instead, by not being tied to the print medium, we could publish articles as they were accepted, instead of waiting for a publication deadline. This would reduce the time-to-publication from the current 16 weeks to less than 10 weeks, making the content even more timely.

    Freeing CCR from print has many other benefits. We could publish content that goes well beyond black-and-white print and graphics. For example, graphs and photographs in papers would no longer have to be black-and-white. But that is not all: it would be possible, for example, to publish professional-quality videos of paper presentations at the major SIG conferences. We could also publish and archive the software and data sets for accepted papers. Finally, it would allow registered users to receive alerts when relevant content was published. Imagine the benefits from getting a weekly update from CCR with pointers to freshly-published content that is directly relevant to your research!

    These potential benefits can be achieved at little additional cost and using off-the-shelf technologies. They would, however, significantly change the CCR experience for SIG members. Therefore, before we plunge ahead, we’d like to know what you think. Do send your comments to me at:

  • Martin Heusse, Sears A. Merritt, Timothy X. Brown, and Andrzej Duda

    Many papers explain the drop in download performance when two TCP connections in opposite directions share a common bottleneck link by ACK compression, the phenomenon in which download ACKs arrive in bursts so that TCP self-clocking breaks. Efficient mechanisms to cope with the performance problem exist, and we do not propose yet another solution. Instead, we thoroughly analyze the interactions between connections and show that ACK compression actually arises only in a perfectly symmetrical setup and has little impact on performance. We provide a different explanation of the interactions: data pendulum, a core phenomenon that we analyze in this paper. In the data pendulum effect, data and ACK segments alternately fill only one of the link buffers (on the upload or download side) at a time, but almost never both of them. We analyze the effect in the case in which buffers are structured as arrays of bytes and derive an expression for the ratio between the download and upload throughput. Simulation results and measurements confirm our analysis and show how appropriate buffer sizing alleviates performance degradation. We also consider the case of buffers structured as arrays of packets and show that it amplifies the effects of data pendulum.

    D. Papagiannaki
  • Nasif Ekiz, Abuthahir Habeeb Rahman, and Paul D. Amer

    While analyzing CAIDA Internet traces of TCP traffic to detect instances of data reneging, we frequently observed seven misbehaviors in the generation of SACKs. These misbehaviors could result in a data sender mistakenly thinking data reneging occurred. With one misbehavior, the worst case could result in a data sender receiving a SACK for data that was transmitted but never received. This paper presents a methodology and its application to test a wide range of operating systems using TBIT to fingerprint which ones misbehave in each of the seven ways. Measuring the performance loss due to these misbehaviors is outside the scope of this study; the goal is to document the misbehaviors so they may be corrected. One can conclude that the handling of SACKs, while simple in concept, is complex to implement.
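    A sender can screen incoming SACK blocks for the grossest misbehaviors with a simple sequence-window check; a minimal sketch (the function and variable names are our own, not from the paper or any real TCP stack):

```python
def sack_blocks_valid(sack_blocks, snd_una, snd_nxt):
    """Sender-side sanity check on SACK blocks.

    Every block [left, right) must lie between the cumulative ACK point
    (snd_una) and the highest sequence number sent (snd_nxt); anything
    outside that window acknowledges bytes never sent or already ACKed.
    Sequence wraparound is ignored for simplicity.
    """
    return all(snd_una <= left < right <= snd_nxt
               for left, right in sack_blocks)

# A block above the highest sequence number sent is clearly bogus:
print(sack_blocks_valid([(2500, 3500)], snd_una=500, snd_nxt=3000))  # False
# A block inside the outstanding window passes:
print(sack_blocks_valid([(1000, 2000)], snd_una=500, snd_nxt=3000))  # True
```

    Notably, the worst-case misbehavior above (a SACK for data that was transmitted but never received) still passes this check, which is exactly why it can mislead a sender's reneging detection.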

    S. Saroiu
  • Shane Alcock and Richard Nelson

    This paper presents the results of an investigation into the application flow control technique utilised by YouTube. We reveal and describe the basic properties of YouTube application flow control, which we term block sending, and show that it is widely used by YouTube servers. We also examine how the block sending algorithm interacts with the flow control provided by TCP and reveal that the block sending approach was responsible for over 40% of packet loss events in YouTube flows in a residential DSL dataset and the retransmission of over 1% of all YouTube data sent after the application flow control began. We conclude by suggesting that changing YouTube block sending to be less bursty would improve the performance and reduce the bandwidth usage of YouTube video streams.

    S. Moon
  • Marcus Lundén and Adam Dunkels

    In low-power wireless networks, nodes need to duty cycle their radio transceivers to achieve a long system lifetime. Counter-intuitively, in such networks broadcast becomes expensive in terms of energy and bandwidth since all neighbors must be woken up to receive broadcast messages. We argue that there is a class of traffic for which broadcast is overkill: periodic redundant transmissions of semi-static information that is already known to all neighbors, such as neighbor and router advertisements. Our experiments show that such traffic can account for as much as 20% of the network power consumption. We argue that this calls for a new communication primitive and present politecast, a communication primitive that allows messages to be sent without explicitly waking neighbors up. We have built two systems based on politecast: a low-power wireless mobile toy and a full-scale low-power wireless network deployment in an art gallery. Our experimental results show that politecast can provide up to a four-fold lifetime improvement over broadcast.

    P. Levis
  • Xiang Cheng, Sen Su, Zhongbao Zhang, Hanchi Wang, Fangchun Yang, Yan Luo, and Jie Wang

    Virtualizing and sharing networked resources have become a growing trend that reshapes the computing and networking architectures. Embedding multiple virtual networks (VNs) on a shared substrate is a challenging problem on cloud computing platforms and large-scale sliceable network testbeds. In this paper we apply the Markov Random Walk (RW) model to rank a network node based on its resource and topological attributes. This novel topology-aware node ranking measure reflects the relative importance of the node. Using node ranking we devise two VN embedding algorithms. The first algorithm maps virtual nodes to substrate nodes according to their ranks, then embeds the virtual links between the mapped nodes by finding shortest paths with unsplittable paths and solving the multi-commodity flow problem with splittable paths. The second algorithm is a backtracking VN embedding algorithm based on breadth-first search, which embeds the virtual nodes and links during the same stage using node ranks. Extensive simulation experiments show that the topology-aware node rank is a better resource measure and the proposed RW-based algorithms increase the long-term average revenue and acceptance ratio compared to the existing embedding algorithms.
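    As a rough illustration of the ranking idea (this is a generic PageRank-style sketch, not the paper's exact RW formulation), a node's rank can blend its normalized resources, acting as the walk's restart distribution, with rank flowing in from its neighbors:

```python
def node_rank(resources, adj, damping=0.85, iters=100):
    """Random-walk node ranking over an undirected graph.

    resources: per-node resource score (e.g. CPU x bandwidth); adj: list
    of neighbor lists. A node's rank combines its own normalized
    resources with the ranks of its neighbors, so both resource and
    topological attributes matter.
    """
    total = float(sum(resources))
    base = [r / total for r in resources]        # resource-biased restart
    rank = base[:]
    for _ in range(iters):                       # power iteration
        rank = [(1 - damping) * base[v]
                + damping * sum(rank[u] / len(adj[u]) for u in adj[v])
                for v in range(len(adj))]
    return rank

# Three nodes in a path (0 - 1 - 2) with equal resources: the middle
# node earns a higher rank purely from its topological position.
ranks = node_rank([1, 1, 1], [[1], [0, 2], [1]])
```

    With equal resources the ranks sum to one and the best-connected node comes out on top, which is the "relative importance" property the embedding algorithms exploit.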

    S. Agarwal
  • Jianping Wu, Jessie Hui Wang, and Jiahai Yang

    Research and promotion of the next generation Internet have drawn the attention of researchers in many countries. In the USA, the FIND initiative takes a clean-slate approach. In the EU, the EIFFEL think tank concludes that both clean-slate and evolutionary approaches are needed. In China, researchers and the country are enthusiastic about the promotion and immediate deployment of IPv6 due to the imminent problem of IPv4 address exhaustion.

    In 2003, China launched a strategic programme called China Next Generation Internet (CNGI). China expects that Chinese industry will be better positioned for future Internet technologies and services than it was for the first generation. With the support of a CNGI grant, the China Education and Research Network (CERNET) started to build an IPv6-only network, CNGI-CERNET2. Currently it provides IPv6 access service for students and staff in many Chinese universities. In this article, we introduce the CNGI programme, the architecture of CNGI-CERNET2, and some aspects of CNGI-CERNET2’s deployment and operation, such as transition, security, charging, and roaming services.

  • Ingmar Poese, Steve Uhlig, Mohamed Ali Kaafar, Benoit Donnet, and Bamba Gueye

    The most widely used technique for IP geolocation consists of building a database that maps IP blocks to geographic locations. Several databases are available and are frequently used by many services and web sites on the Internet. Contrary to widespread belief, geolocation databases are far from being as reliable as they claim. In this paper, we compare several current geolocation databases, both commercial and free, to gain insight into the limitations of their usability.

    First, the vast majority of entries in the databases refer only to a few popular countries (e.g., U.S.). This creates an imbalance in the representation of countries across the IP blocks of the databases. Second, these entries do not reflect the original allocation of IP blocks, nor BGP announcements. In addition, we quantify the accuracy of geolocation databases on a large European ISP based on ground truth information. This is the first study using a ground truth showing that the overly fine granularity of database entries makes their accuracy worse, not better. Geolocation databases can claim country-level accuracy, but certainly not city-level.

  • Wai-Leong Yeow, Cedric Westphal, and Ulas C. Kozat

    In a virtualized infrastructure where physical resources are shared, a single physical server failure will terminate several virtual servers, crippling the virtual infrastructures that contained those virtual servers. In the worst case, more failures may cascade from overloading the remaining servers. To guarantee some level of reliability, each virtual infrastructure, at instantiation, should be augmented with backup virtual nodes and links that have sufficient capacities. This ensures that, when physical failures occur, sufficient computing resources are available and the virtual network topology is preserved. However, in doing so, the utilization of the physical infrastructure may be greatly reduced. This can be circumvented if backup resources are pooled and shared across multiple virtual infrastructures, and intelligently embedded in the physical infrastructure. These techniques can reduce the physical footprint of virtual backups while guaranteeing reliability.

  • S. Keshav

    What is, or ought to be, the goal of systems research? The answer to this question differs for academics and researchers in industry. Researchers in industry usually work either directly or indirectly on a specific commercial project, and are therefore constrained to design and build a system that fits manifest needs. They do not need to worry about a goal beyond this somewhat narrow horizon. For instance, a researcher at Google may be given the task of building an efficient file system: higher level goals beyond this are meaningless to him or her. So, the ‘goal’ of systems research is more or less trivial in the industrial context.

    Many academic researchers in the area, however, are less constrained. Lacking an immediate project to work on, they are often left wondering what set of issues to address.

    One solution is to work with industrial partners to find relevant problems. However, although this results in problems that are well-defined, immediately applicable, and even publishable in the best conferences, it is not clear whether this is the true role of academia. Why should industrial research be carried out for free by academics, in effect subsidized by society? I think that academics may be ‘inspired’ by industrial problems, but should set their sights higher.

    Another easy path is to choose to work in a ‘hot’ area, as defined by the leaders in the community, or a funding agency (more often than not, these are identical). If DARPA declares technology X or Y to be its latest funding goal, it is not too hard to change one’s path to become a researcher of flavour X or Y. This path has the attraction that it guarantees a certain level of funding as well as a community of fellow researchers. However, letting others decide the research program does not sound too appealing. It is not that far from industrial research, except that the person to be satisfied is a program manager or funding agency, instead of your boss.

    I think academic researchers ought to seek their own path relatively unfettered by considerations of industrial projects or the whims of funding agencies. This, therefore, immediately brings up the question of what ought to be the goal of their work. Here are my thoughts.

    I believe that systems research lies in bridging two ‘gaps’: the Problem Selection Gap and the Infrastructure-Device Gap. In a nutshell, the goal of systems research is to satisfy application requirements, as defined by the Problem Selection Gap, by putting together infrastructure from underlying devices, by solving the Infrastructure-Device Gap. Let me explain this next.

    What is the Infrastructure-Device Gap? Systems research results in the creation of systems infrastructure. By infrastructure, I mean a system that is widely used and that serves to improve the daily lives of its users in some way. Think of it as the analogues of water and electricity. By that token, Automatic Teller Machines, Internet Search, airline reservation systems, and satellite remote sensing services are all instances of essential technological infrastructure.

    Infrastructure is built by putting together devices. By devices, I actually mean sub-systems whose behaviour can be well-enough encapsulated to form building blocks for the next level of abstraction and complexity. For instance, from the perspective of a computer network researcher, a host is a single device. Yet, a host is a complex system in itself, with many hundreds of subsystems. So, the definition of device depends on the specific abstraction being considered, and I will take it to be self-evident, for the purpose of this discussion, what a device is.

    An essential aspect of the composition of devices into infrastructure is that the infrastructure has properties that individual devices do not. Consider a RAID system, which provides fault tolerance properties far superior to those of an individual disk. The systems research here is to mask the problems of individual devices, that is, to compose the devices into a harmonious whole, whose group properties, such as functionality, reliability, availability, efficiency, scalability, flexibility etc. are superior to those of each device. This then, is at the heart of systems research: how to take devices, appropriately defined, and compose them to create emergent properties in an infrastructure. We judge the quality of the infrastructure by the level to which it meets its stated goals. Moreover, we can use a standard ‘bag of tricks’ (explained in the networks context in George Varghese’s superb book ‘Network Algorithmics’) to effect this composition.

    Although satisfying, this definition of systems research leaves an important problem unresolved: how should one define the set of infrastructure properties in the first place? After all, for each set of desired properties, one can come up with a system design that best matches it. Are we to be resigned to a set of not just incompatible, but incomparable, system designs?

    Here is where the Problem Selection Gap fits in. Systems are not built in a vacuum. They exist in a social context. In other words, systems are built for some purpose. In the context of industrial research, the purpose is the corporation’s, handed down to the researcher: ‘Thou Shalt Build a File System’, for instance. And along with this edict comes a statement of the performance, efficiency, and ‘ility’ goals for the system. In such situations, there is no choice of problem selection.

    But what of the academic researcher? What are the characteristics of the infrastructure that the academic should seek to build? I believe that the answer is to look to the social context of academia. Universities are supported by the public at large in order to provide a venue for the solution of problems that afflict society. These are problems of health care, education, poverty, global warming, pollution, inner-city crime, and so on. As academics, it behooves us to do our bit to help society solve these problems. Therefore, I claim that as academics, we should choose one or more of these big problems, and then think of what type of system infrastructure we can build to either alleviate or solve them. This will naturally lead to a set of infrastructure requirements. In other words, there is no need to invent artificial problems to work on! There are enough real-world problems already. We only need to open our eyes.

  • Yang Chen, Vincent Borrel, Mostafa Ammar, and Ellen Zegura

    The vast majority of research in wireless and mobile (WAM) networking falls in the MANET (Mobile Ad Hoc Network) category, where end-to-end paths are the norm. More recently, research has focused on a different Disruption Tolerant Network (DTN) paradigm, where end-to-end paths are the exception and intermediate nodes may store data while waiting for transfer opportunities towards the destination. Protocols developed for MANETs are generally not appropriate for DTNs and vice versa, since the connectivity assumptions are so different. We make the simple but powerful observation that MANETs and DTNs fit into a continuum that generalizes these two previously distinct categories. In this paper, building on this observation, we develop a WAM continuum framework that goes further to scope the entire space of Wireless and Mobile networks so that a network can be characterized by its position in this continuum. Certain network equivalence classes can be defined over subsets of this WAM continuum. We instantiate our framework to allow network connectivity classification and show how that classification relates to routing. We illustrate our approach by applying it to networks described by traces and by mobility models. We also outline how our framework can be used to guide network design and operation.

    S. Banerjee
  • Matthew Luckie, Amogh Dhamdhere, kc claffy, and David Murrell

    Data collected using traceroute-based algorithms underpins research into the Internet’s router-level topology, though it is possible to infer false links from this data. One source of false inference is the combination of per-flow load-balancing, in which more than one path is active from a given source to destination, and classic traceroute, which varies the UDP destination port number or ICMP checksum of successive probe packets, which can cause per-flow load-balancers to treat successive packets as distinct flows and forward them along different paths. Consequently, successive probe packets can solicit responses from unconnected routers, leading to the inference of false links. This paper examines the inaccuracies induced from such false inferences, both on macroscopic and ISP topology mapping. We collected macroscopic topology data to 365k destinations, with techniques that both do and do not try to capture load balancing phenomena. We then use alias resolution techniques to infer if a measurement artifact of classic traceroute induces a false router-level link. This technique detected that 2.71% and 0.76% of the links in our UDP and ICMP graphs were falsely inferred due to the presence of load-balancing. We conclude that most per-flow load-balancing does not induce false links when macroscopic topology is inferred using classic traceroute. The effect of false links on ISP topology mapping is possibly much worse, because the degrees of a tier-1 ISP’s routers derived from classic traceroute were inflated by a median factor of 2.9 as compared to those inferred with Paris traceroute.
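    The mechanism is easy to see in a toy model of a per-flow load balancer that hashes the 5-tuple to pick a next hop (the hash function and addresses below are invented for illustration; 33434 is the conventional traceroute base port):

```python
import hashlib

def pick_path(src_ip, dst_ip, proto, sport, dport, n_paths=2):
    """Toy per-flow load balancer: deterministically hash the 5-tuple
    to choose one of n_paths equal-cost next hops."""
    key = f"{src_ip}|{dst_ip}|{proto}|{sport}|{dport}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % n_paths

# Classic traceroute increments the UDP destination port on each probe,
# so successive probes look like distinct flows and can take different paths:
classic_paths = {pick_path("10.0.0.1", "192.0.2.9", 17, 40000, 33434 + i)
                 for i in range(16)}
# Paris traceroute keeps the 5-tuple fixed across probes,
# so every probe follows the same path:
paris_paths = {pick_path("10.0.0.1", "192.0.2.9", 17, 40000, 33434)
               for _ in range(16)}
```

    In this model `paris_paths` always contains exactly one path, while `classic_paths` can span several, which is how successive classic-traceroute probes end up soliciting responses from unconnected routers.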

    R. Teixeira
  • Suchul Lee, Hyunchul Kim, Dhiman Barman, Sungryoul Lee, Chong-kwon Kim, Ted Kwon, and Yanghee Choi

    Recent research on Internet traffic classification has produced a number of approaches for distinguishing types of traffic. However, a rigorous comparison of such proposed algorithms still remains a challenge, since every proposal considers a different benchmark for its experimental evaluation. A lack of clear consensus on an objective and scientific way for comparing results has made researchers uncertain of fundamental as well as relative contributions and limitations of each proposal. In response to the growing necessity for an objective method of comparing traffic classifiers and to shed light on scientifically grounded traffic classification research, we introduce an Internet traffic classification benchmark tool, NeTraMark. Based on six design guidelines (Comparability, Reproducibility, Efficiency, Extensibility, Synergy, and Flexibility/Ease-of-use), NeTraMark is the first Internet traffic classification benchmark where eleven different state-of-the-art traffic classifiers are integrated. NeTraMark allows researchers and practitioners to easily extend it with new classification algorithms and compare them with other built-in classifiers, in terms of three categories of performance metrics: per-whole-trace flow accuracy, per-application flow accuracy, and computational performance.
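    The distinction between the first two metric categories matters most for skewed traces; a minimal sketch of both (the function names are ours, not NeTraMark's API):

```python
def overall_flow_accuracy(predicted, actual):
    """Per-whole-trace flow accuracy: fraction of all flows labelled correctly."""
    return sum(p == a for p, a in zip(predicted, actual)) / len(actual)

def per_application_accuracy(predicted, actual):
    """Per-application flow accuracy: computed separately for each
    application, so rare traffic classes are not drowned out by common ones."""
    acc = {}
    for app in set(actual):
        flows = [i for i, a in enumerate(actual) if a == app]
        acc[app] = sum(predicted[i] == app for i in flows) / len(flows)
    return acc

# A skewed trace: 8 web flows, 2 p2p flows; one p2p flow is mislabelled.
actual    = ["web"] * 8 + ["p2p"] * 2
predicted = ["web"] * 9 + ["p2p"] * 1
```

    Here the whole-trace accuracy is 0.9, yet the classifier only gets half of the p2p flows right, which is exactly the kind of difference a shared benchmark makes visible.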

    R. Teixeira
  • Lei Yang, Zengbin Zhang, Wei Hou, Ben Y. Zhao, and Haitao Zheng

    Proliferation and innovation of wireless technologies require significant amounts of radio spectrum. Recent policy reforms by the FCC are paving the way by freeing up spectrum for a new generation of frequency-agile wireless devices based on software defined radios (SDRs). But despite recent advances in SDR hardware, research on SDR MAC protocols or applications requires an experimental platform for managing physical access. We introduce Papyrus, a software platform for wireless researchers to develop and experiment with dynamic spectrum systems using currently available SDR hardware. Papyrus provides two fundamental building blocks at the physical layer: flexible non-contiguous frequency access and simple and robust frequency detection. Papyrus allows researchers to deploy and experiment with new MAC protocols and applications on USRP GNU Radio, and can also be ported to other SDR platforms. We demonstrate the use of Papyrus using Jello, a distributed MAC overlay for high-bandwidth media streaming applications, and Ganache, an SDR layer for adaptable guardband configuration. Full implementations of Papyrus and Jello are publicly available.

    D. Wetherall
  • Jon Whiteaker, Fabian Schneider, and Renata Teixeira

    This paper performs controlled experiments with two popular virtualization techniques, Linux-VServer and Xen, to examine the effects of virtualization on packet sending and receiving delays. Using a controlled setting allows us to independently investigate the influence on delay measurements when competing virtual machines (VMs) perform tasks that consume CPU, memory, I/O, hard disk, and network bandwidth. Our results indicate that heavy network usage from competing VMs can introduce delays as high as 100 ms to round-trip times. Furthermore, virtualization adds most of this delay when sending packets, whereas packet reception introduces little extra delay. Based on our findings, we discuss guidelines and propose a feedback mechanism to avoid measurement bias under virtualization.

    Y. Zhang
  • Luis M. Vaquero, Luis Rodero-Merino, and Rajkumar Buyya

    Scalability is said to be one of the major advantages brought by the cloud paradigm and, more specifically, the one that makes it different from an “advanced outsourcing” solution. However, some important issues remain to be resolved before the dream of automated scaling for applications comes true. In this paper, the most notable initiatives towards whole application scalability in cloud environments are presented. We present relevant efforts at the edge of state-of-the-art technology, providing an encompassing overview of the trends they each follow. We also highlight pending challenges that will likely be addressed in new research efforts and present an ideal scalable cloud system.

  • Daniel Halperin, Wenjun Hu, Anmol Sheth, and David Wetherall

    We are pleased to announce the release of a tool that records detailed measurements of the wireless channel along with received 802.11 packet traces. It runs on a commodity 802.11n NIC, and records Channel State Information (CSI) based on the 802.11 standard. Unlike Receive Signal Strength Indicator (RSSI) values, which merely capture the total power received at the listener, the CSI contains information about the channel between sender and receiver at the level of individual data subcarriers, for each pair of transmit and receive antennas.

    Our toolkit uses the Intel WiFi Link 5300 wireless NIC with 3 antennas. It works on up-to-date Linux operating systems: in our testbed we use Ubuntu 10.04 LTS with the 2.6.36 kernel. The measurement setup comprises our customized versions of Intel’s closed-source firmware and the open-source iwlwifi wireless driver, userspace tools to enable these measurements, access point functionality for controlling both ends of the link, and Matlab (or Octave) scripts for data analysis. We are releasing the binary of the modified firmware, and the source code to all the other components.

  • Anders Lindgren and Pan Hui

    Research on networks for challenged environments has become a major research area recently. There is, however, a lack of true understanding among networking researchers about what such environments are really like. In this paper we give an introduction to the ExtremeCom series of workshops that were created to overcome this limitation. We discuss the motivation behind why the workshop series was created, give summaries of the two workshops that have been held, and discuss the lessons that we have learned from them.

  • Vinod Kone, Mariya Zheleva, Mile Wittie, Ben Y. Zhao, Elizabeth M. Belding, Haitao Zheng, and Kevin Almeroth

    Accurate measurements of deployed wireless networks are vital for researchers to perform realistic evaluation of proposed systems. Unfortunately, the difficulty of performing detailed measurements limits the consistency in parameters and methodology of current datasets. Using different datasets, multiple research studies can arrive at conflicting conclusions about the performance of wireless systems. Correcting this situation requires consistent and comparable wireless traces collected from a variety of deployment environments. In this paper, we describe AirLab, a distributed wireless data collection infrastructure that uses uniformly instrumented measurement nodes at heterogeneous locations to collect consistent traces of both standardized and user-defined experiments. We identify four challenges in the AirLab platform: consistency, fidelity, privacy, and security, and describe our approaches to addressing them.

  • Shailesh Agrawal, Kavitha Athota, Pramod Bhatotia, Piyush Goyal, Phani Krisha, Kirtika Ruchandan, Nishanth Sastry, Gurmeet Singh, Sujesha Sudevalayam, Immanuel Ilavarasan Thomas, Arun Vishwanath, Tianyin Xu, and Fang Yu

    This document collects together reports of the sessions from the 2010 ACM SIGCOMM Conference, the annual conference of the ACM Special Interest Group on Data Communication (SIGCOMM) on the applications, technologies, architectures, and protocols for computer communication.

  • Kenneth L. Calvert, W. Keith Edwards, Nick Feamster, Rebecca E. Grinter, Ye Deng, and Xuzi Zhou

    In managing and troubleshooting home networks, one of the challenges is in knowing what is actually happening. Availability of a record of events that occurred on the home network before trouble appeared would go a long way toward addressing that challenge. In this position/work-in-progress paper, we consider requirements for a general-purpose logging facility for home networks. Such a facility, if properly designed, would potentially have other uses. We describe several such uses and discuss requirements to be considered in the design of a logging platform that would be widely supported and accepted. We also report on our initial deployment of such a facility.

  • Jeffrey Erman, Alexandre Gerber, and Subhabrata Sen

    HTTP (Hypertext Transport Protocol) was originally primarily used for human-initiated client-server communications launched from web browsers, traditional computers and laptops. However, today it has become the protocol of choice for a bewildering range of applications from a wide array of emerging devices like smart TVs and gaming consoles. This paper presents an initial study characterizing the non-traditional sources of HTTP traffic such as consumer devices and automated updates in the overall HTTP traffic for residential Internet users. Among our findings, 13% of all HTTP traffic in terms of bytes is due to non-traditional sources, with 5% being from consumer devices such as WiFi-enabled smartphones and 8% generated from automated software updates and background processes. Our findings show that 11% of all HTTP requests are caused by communications with advertising servers from as many as 190 countries worldwide, suggesting the widespread prevalence of such activities. Overall, our findings start to answer questions about the state of traffic generated in these smart homes.

  • Mikko Pervilä and Jussi Kangasharju

    Data centers are a major consumer of electricity, and a significant fraction of their energy use is devoted to cooling the data center. Recent prototype deployments have investigated the possibility of using outside air for cooling and have shown large potential savings in energy consumption. In this paper, we push this idea to the extreme by running servers outside in the Finnish winter. Our results show that commercial, off-the-shelf computer equipment can tolerate extreme conditions, such as outside air temperatures below -20 °C, and still function correctly over extended periods of time. Our experiment improves upon other recent results by confirming their findings and extending them to cover a wider range of intake air temperatures and humidity. This paper presents our experimentation methodology and setup, and our main findings and observations.

  • Andrew Krioukov, Prashanth Mohan, Sara Alspaugh, Laura Keys, David Culler, and Randy Katz

    Energy consumption is a major and costly problem in data centers. A large fraction of this energy goes to powering idle machines that are not doing any useful work. We identify two causes of this inefficiency: low server utilization and a lack of power-proportionality. To address this problem we present a design for a power-proportional cluster consisting of a power-aware cluster manager and a set of heterogeneous machines. Our design makes use of currently available energy-efficient hardware, mechanisms for transitioning in and out of low-power sleep states, and dynamic provisioning and scheduling to continually adjust to workload and minimize power consumption. With our design we are able to reduce energy consumption while maintaining acceptable response times for a web service workload based on Wikipedia. With our dynamic provisioning algorithms we demonstrate via simulation a 63% savings in power usage compared to a typically provisioned data center where all machines are left on and awake at all times. Our results show that we are able to achieve close to 90% of the savings a theoretically optimal provisioning scheme would achieve. We have also built a prototype cluster that runs Wikipedia to demonstrate the use of our design in a real environment.
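    The provisioning idea, waking just enough of the most energy-efficient machines to cover the current load, can be sketched as follows. The machine list, capacities, and the 1.2x headroom factor are illustrative assumptions, not the paper's actual scheduler:

    ```python
    def provision(load_rps, machines, headroom=1.2):
        """Choose which machines to wake so capacity covers load * headroom.

        machines: list of (name, capacity_rps, watts). The most efficient
        machines (fewest watts per request/sec) are selected first -- a
        sketch of the power-proportional idea, not the paper's algorithm.
        """
        target = load_rps * headroom
        active, capacity = [], 0.0
        for name, cap, watts in sorted(machines, key=lambda m: m[2] / m[1]):
            if capacity >= target:
                break
            active.append(name)
            capacity += cap
        return active
    ```

    With a mix of, say, low-power "atom" nodes and big "xeon" nodes, light load is served entirely by the efficient nodes and the rest stay asleep.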

  • Srinivasan Keshav and Catherine Rosenberg

    Several powerful forces are gathering to make fundamental and irrevocable changes to the century-old grid. The next-generation grid, often called the ‘smart grid,’ will feature distributed energy production, vastly more storage, tens of millions of stochastic renewable-energy sources, and the use of communication technologies both to allow precise matching of supply to demand and to incentivize appropriate consumer behaviour. These changes will have the effect of reducing energy waste and reducing the carbon footprint of the grid, making it ‘smarter’ and ‘greener.’ In this position paper, we discuss how the concepts and techniques pioneered by the Internet, the fruit of four decades of research in this area, are directly applicable to the design of a smart, green grid. This is because both the Internet and the electrical grid are designed to meet fundamental needs, for information and for energy, respectively, by connecting geographically dispersed suppliers with geographically dispersed consumers. Keeping this and other similarities (and fundamental differences, as well) in mind, we propose several specific areas where Internet concepts and technologies can contribute to the development of a smart, green grid. We also describe some areas where the Internet operations can be improved based on the experience gained in the electrical grid. We hope that our work will initiate a dialogue between the Internet and the smart grid communities.

  • Nicholas FitzRoy-Dale, Ihor Kuz, and Gernot Heiser

    We describe Currawong, a tool to perform system software architecture optimisation. Currawong is an extensible tool which applies optimisations at the point where an application invokes framework or library code. Currawong does not require source code to perform optimisations, effectively decoupling the relationship between compilation and optimisation. We show, through examples written for the popular Android smartphone platform, that Currawong is capable of significant performance improvement to existing applications.

  • Gunho Lee, Niraj Tolia, Parthasarathy Ranganathan, and Randy H. Katz

    This paper proposes an architecture for optimized resource allocation in Infrastructure-as-a-Service (IaaS)-based cloud systems. Current IaaS systems are usually unaware of the hosted application’s requirements and therefore allocate resources independently of its needs, which can significantly impact performance for distributed data-intensive applications.

    To address this resource allocation problem, we propose an architecture that adopts a “what if” methodology to guide allocation decisions taken by the IaaS. The architecture uses a prediction engine with a lightweight simulator to estimate the performance of a given resource allocation and a genetic algorithm to find an optimized solution in the large search space. We have built a prototype for Topology-Aware Resource Allocation (TARA) and evaluated it on an 80-server cluster with two representative MapReduce-based benchmarks. Our results show that TARA reduces the job completion time of these applications by up to 59% when compared to application-independent allocation policies.
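    TARA's search strategy, a genetic algorithm scored by a lightweight simulator, can be illustrated with a toy version. The population size, mutation rate, and objective function below are invented for the sketch; the real system scores candidate allocations with its prediction engine:

    ```python
    import random

    def evolve(score, genome_len, n_racks, pop=30, gens=40, seed=1):
        """Tiny genetic-algorithm search in TARA's spirit: each genome assigns
        one of n_racks to each VM, and `score` stands in for the paper's
        lightweight simulator (lower is better). Parameters are illustrative.
        """
        rng = random.Random(seed)
        population = [[rng.randrange(n_racks) for _ in range(genome_len)]
                      for _ in range(pop)]
        for _ in range(gens):
            population.sort(key=score)
            survivors = population[:pop // 2]          # elitist selection
            children = []
            while len(survivors) + len(children) < pop:
                a, b = rng.sample(survivors, 2)
                cut = rng.randrange(1, genome_len)     # one-point crossover
                child = a[:cut] + b[cut:]
                if rng.random() < 0.2:                 # occasional mutation
                    child[rng.randrange(genome_len)] = rng.randrange(n_racks)
                children.append(child)
            population = survivors + children
        return min(population, key=score)

    # Toy objective: co-locate all 6 VMs (minimise the number of racks used)
    best = evolve(lambda g: len(set(g)) - 1, genome_len=6, n_racks=4)
    ```

    Swapping the toy objective for a network-topology-aware cost model is what turns this generic loop into topology-aware allocation.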

  • Dong Yin, Deepak Unnikrishnan, Yong Liao, Lixin Gao, and Russell Tessier

    Recent FPGA-based implementations of network virtualization represent a significant step forward in network performance and scalability. Although these systems have been shown to provide orders of magnitude higher performance than solutions using software-based routers, straightforward reconfiguration of hardware-based virtual networks over time is a challenge. In this paper, we present the implementation of a reconfigurable network virtualization substrate that combines several partially-reconfigurable hardware virtual routers with software virtual routers. The update of hardware-based virtual networks in our system is supported via real-time partial FPGA reconfiguration. Hardware virtual networks can be dynamically reconfigured in a fraction of a second without affecting other virtual networks operating in the same FPGA. A heuristic has been developed to allocate virtual networks with diverse bandwidth requirements and network characteristics on this heterogeneous virtualization substrate. Experimental results show that the reconfigurable virtual routers can forward packets at line rate. Partial reconfiguration allows for 20x faster hardware reconfiguration than a previous approach which migrated hardware virtual networks to software.
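    The allocation heuristic is not detailed in the abstract; a plausible greedy sketch, with made-up slot counts and per-router capacities, would place the most bandwidth-hungry virtual networks into the scarce hardware slots first and fall back to software routers for the rest:

    ```python
    HW_SLOTS, HW_CAP, SW_CAP = 4, 10.0, 2.0   # illustrative capacities (Gb/s)

    def allocate(vnets):
        """Greedy placement in the spirit of the paper's heuristic: hardware
        virtual-router slots go to the highest-bandwidth virtual networks,
        software routers absorb the light ones. vnets: [(name, bw_gbps)].
        """
        placement, slots = {}, HW_SLOTS
        for name, bw in sorted(vnets, key=lambda v: -v[1]):
            if slots and bw <= HW_CAP:
                placement[name] = "hardware"
                slots -= 1
            elif bw <= SW_CAP:
                placement[name] = "software"
            else:
                placement[name] = "rejected"
        return placement
    ```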

  • Yong He, Ji Fang, Jiansong Zhang, Haichen Shen, Kun Tan, and Yongguang Zhang

    This demonstration shows a novel virtualization architecture, called Multi-Protocol Access Point (MPAP), which exploits software radio technology to virtualize multiple heterogeneous wireless standards on a single radio hardware platform. The basic idea is to deploy a wideband radio front-end to receive radio signals from all wireless standards sharing the same spectrum band, and to use separate software basebands to demodulate the information stream for each wireless standard. Based on software radio, MPAP consolidates multiple wireless devices into a single hardware platform, allowing them to share common general-purpose computing resources. Different software basebands can easily communicate and coordinate via a software coordinator and thus coexist better with one another. As one example, we demonstrate the use of non-contiguous OFDM in the 802.11g PHY to avoid mutual interference with narrow-band ZigBee communication.

  • Keon Jang, Sangjin Han, Seungyeop Han, Sue Moon, and KyoungSoo Park

    SSL/TLS is a standard protocol for secure Internet communication. Despite its great success, today’s SSL deployment is largely limited to security-critical domains. The low adoption rate of SSL is mainly due to high computation overhead on the server side.

    In this paper, we propose Graphics Processing Units (GPUs) as a new source of computing power to reduce the server-side overhead. We have designed and implemented an SSL proxy that opportunistically offloads cryptographic operations to GPUs. The evaluation results show that our GPU implementation of cryptographic operations, RSA, AES, and HMAC-SHA1, achieves high throughput while keeping the latency low. The SSL proxy significantly boosts the throughput of SSL transactions, handling 21.5K SSL transactions per second, and has comparable response time even when overloaded.
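    The opportunistic offloading policy can be caricatured as a batching decision: small queues stay on the CPU for low latency, large ones go to the GPU for throughput. The threshold below is an arbitrary illustration, not the paper's tuned value:

    ```python
    GPU_BATCH_THRESHOLD = 32   # illustrative cutoff; the right value depends on hardware

    def dispatch(queue):
        """Opportunistic offload sketch: under light load, serve one crypto
        request on the CPU; once enough requests queue up, ship the whole
        batch to the GPU, where per-item cost amortises well."""
        if len(queue) >= GPU_BATCH_THRESHOLD:
            batch, queue[:] = list(queue), []   # drain the queue as one batch
            return ("gpu", batch)
        return ("cpu", [queue.pop(0)] if queue else [])
    ```

    The design point this captures is that GPUs only pay off when given large batches, so the proxy trades a little queueing delay for much higher aggregate throughput under load.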

  • Binbin Chen, Ziling Zhou, Yuda Zhao, and Haifeng Yu

    Motivated by recent emerging systems that can leverage partially correct packets in wireless networks, this paper investigates the novel concept of error estimating codes (EEC). Without correcting the errors in the packet, EEC enables the receiver of the packet to estimate the packet’s bit error rate, which is perhaps the most important meta-information of a partially correct packet. Our EEC algorithm provides provable estimation quality, with rather low redundancy and computational overhead. To demonstrate the utility of EEC, we exploit and implement EEC in two wireless network applications, Wi-Fi rate adaptation and real-time video streaming. Our real-world experiments show that these applications can significantly benefit from EEC.
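    One simple way to realize the EEC idea (not necessarily the paper's algorithm) is to append parities over randomly chosen bit groups and invert the known relation between group-parity failures and the bit error rate; the group size and count below are arbitrary:

    ```python
    import random

    GROUP, NGROUPS, SEED = 8, 64, 42   # illustrative parameters, not the paper's

    def parities(bits, rng):
        """Parity of NGROUPS pseudo-randomly chosen groups of GROUP positions."""
        return [sum(bits[i] for i in rng.sample(range(len(bits)), GROUP)) % 2
                for _ in range(NGROUPS)]

    def estimate_ber(received_bits, sent_parities):
        """Estimate the bit error rate without correcting any errors: a group
        parity flips with probability (1 - (1-2p)^GROUP) / 2 when each bit
        flips independently with probability p, so invert that relation on
        the observed fraction of failing parities."""
        rng = random.Random(SEED)               # sender and receiver share the seed
        fail = sum(a != b for a, b in
                   zip(parities(received_bits, rng), sent_parities)) / NGROUPS
        fail = min(fail, 0.5 - 1e-9)            # clamp to the invertible range
        return (1 - (1 - 2 * fail) ** (1 / GROUP)) / 2
    ```

    As in EEC proper, the receiver learns roughly *how* corrupted a packet is, which is exactly the meta-information rate adaptation and video streaming can act on.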

  • Sayandeep Sen, Syed Gilani, Shreesha Srinath, Stephen Schmitt, and Suman Banerjee

    All practical wireless communication systems are prone to errors. At the symbol level such wireless errors have a well-defined structure: when a receiver decodes a symbol erroneously, it is more likely that the decoded symbol is a good “approximation” of the transmitted symbol than a randomly chosen symbol among all possible transmitted symbols. Based on this property, we define approximate communication, a method that exploits this error structure to natively provide unequal error protection to data bits. Unlike traditional (FEC-based) mechanisms of unequal error protection that consume additional network and spectrum resources to encode redundant data, the approximate communication technique achieves this property at the PHY layer without consuming any additional network or spectrum resources (apart from a minimal signaling overhead). Approximate communication is particularly useful to media delivery applications that can benefit significantly from unequal error protection of data bits. We show the usefulness of this method to such applications by designing and implementing an end-to-end media delivery system, called Apex. Our Software Defined Radio (SDR)-based experiments reveal that Apex can improve video quality by 5 to 20 dB (PSNR) across a diverse set of wireless conditions, when compared to traditional approaches. We believe that mechanisms such as Apex can be a cornerstone in designing future wireless media delivery systems under any error-prone channel condition.
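    The error structure being exploited can be seen even in Gray-coded 4-PAM: the first bit position has a single decision boundary (at 0), so it flips less often under noise and is the natural slot for important bits. A self-contained simulation of that asymmetry, with illustrative parameters unrelated to Apex's actual modulation:

    ```python
    import random

    LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}   # Gray-coded 4-PAM

    def demod(x):
        """Nearest-level decision back to the Gray bit pair."""
        return min(LEVELS, key=lambda b: abs(LEVELS[b] - x))

    def bit_error_rates(trials=20000, noise=1.0, seed=3):
        """Per-position error rates under Gaussian noise. The first (MSB)
        position only fails when noise crosses the lone boundary at 0, while
        the second position has boundaries at -2 and +2, so the MSB slot is
        naturally better protected -- the core of unequal error protection."""
        rng = random.Random(seed)
        errs = [0, 0]
        for _ in range(trials):
            bits = rng.choice(list(LEVELS))
            got = demod(LEVELS[bits] + rng.gauss(0, noise))
            for i in range(2):
                errs[i] += bits[i] != got[i]
        return [e / trials for e in errs]
    ```

    Placing the perceptually important media bits in the better-protected positions yields graceful degradation without any extra redundancy, which is the property Apex builds on.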

  • Myungjin Lee, Nick Duffield, and Ramana Rao Kompella

    New applications such as algorithmic trading and high-performance computing require extremely low latency (in microseconds). Network operators today lack sufficient fine-grain measurement tools to detect, localize and repair performance anomalies and delay spikes that cause application SLA violations. A recently proposed solution called LDA provides a scalable way to obtain latency, but only provides aggregate measurements. However, debugging application-specific problems requires per-flow measurements, since different flows may exhibit significantly different characteristics even when they are traversing the same link. To enable fine-grained per-flow measurements in routers, we propose a new scalable architecture called reference latency interpolation (RLI) that is based on our observation that packets potentially belonging to different flows that are closely spaced to each other exhibit similar delay properties. In our evaluation using simulations over real traces, we show that RLI achieves a median relative error of 12% and one to two orders of magnitude higher accuracy than previous per-flow measurement solutions with small overhead.
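    The core interpolation step can be sketched directly: given delays measured for injected reference packets, a regular packet's delay is estimated linearly between the two references bracketing its arrival. This is a simplification of RLI's actual scheme, shown only to illustrate the idea:

    ```python
    def interpolate_delays(packet_times, references):
        """Reference-latency-interpolation sketch. `references` holds
        (arrival_time, measured_delay) pairs for injected reference packets;
        each regular packet's delay is linearly interpolated between the two
        references bracketing its arrival time (assumed to exist)."""
        refs = sorted(references)
        delays = []
        for t in packet_times:
            for (t0, d0), (t1, d1) in zip(refs, refs[1:]):
                if t0 <= t <= t1:
                    frac = (t - t0) / (t1 - t0)
                    delays.append(d0 + frac * (d1 - d0))
                    break
        return delays
    ```

    This works precisely because of the paper's observation: closely spaced packets see similar queueing, so a sparse stream of measured references is enough to attribute delays to every flow.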

  • Kai Chen, Chuanxiong Guo, Haitao Wu, Jing Yuan, Zhenqian Feng, Yan Chen, Songwu Lu, and Wenfei Wu

    Data center networks encode locality and topology information into their server and switch addresses for performance and routing purposes. For this reason, traditional address configuration protocols such as DHCP require a huge amount of manual input, leaving them error-prone.

    In this paper, we present DAC, a generic and automatic Data center Address Configuration system. Starting from an automatically generated blueprint that defines the connections of servers and switches labeled by logical IDs (e.g., IP addresses), DAC first learns the physical topology labeled by device IDs (e.g., MAC addresses). At the core of DAC are its device-to-logical ID mapping and malfunction detection. DAC's key innovation is to abstract the device-to-logical ID mapping as a graph isomorphism problem, which it solves with low time complexity by leveraging the attributes of data center network topologies. Its malfunction detection scheme detects errors such as device and link failures and miswirings, including the most difficult case in which miswirings do not cause any change in node degree.

    We have evaluated DAC via simulation, implementation and experiments. Our simulation results show that DAC can accurately find all the hardest-to-detect malfunctions and can autoconfigure a large data center with 3.8 million devices in 46 seconds. In our implementation, we successfully autoconfigure a small 64-server BCube network within 300 milliseconds and show that DAC is a viable solution for data center autoconfiguration.
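    The device-to-logical mapping at DAC's core amounts to finding a graph isomorphism between the blueprint and the learned physical topology. A plain backtracking version with degree pruning is sketched below; DAC itself exploits data-center topology structure to achieve far lower complexity than generic backtracking:

    ```python
    def map_ids(blueprint, physical):
        """Find a bijection from physical device IDs to blueprint logical IDs
        that preserves adjacency (graph isomorphism), or None if impossible.
        Graphs are given as {node: set(neighbors)}. Illustrative backtracking
        with degree-based pruning, not DAC's optimised algorithm."""
        devs, logicals = list(physical), list(blueprint)

        def extend(mapping):
            if len(mapping) == len(devs):
                return dict(mapping)
            d = devs[len(mapping)]
            used = {l for _, l in mapping}
            for l in logicals:
                if l in used or len(physical[d]) != len(blueprint[l]):
                    continue
                # every already-mapped adjacency relation must be preserved
                if all((n in physical[d]) == (lm in blueprint[l])
                       for n, lm in mapping):
                    result = extend(mapping + [(d, l)])
                    if result:
                        return result
            return None

        return extend([])
    ```

    A mapping failure (None) on a full topology is exactly the situation DAC's malfunction detection must then localize to a miswired device or link.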

  • Hussam Abu-Libdeh, Paolo Costa, Antony Rowstron, Greg O'Shea, and Austin Donnelly

    Building distributed applications that run in data centers is hard. The CamCube project explores the design of a shipping-container-sized data center with the goal of building an easier platform on which to build these applications. CamCube replaces the traditional switch-based network with a 3D torus topology, with each server directly connected to six other servers. As in other proposals, e.g. DCell and BCube, multi-hop routing in CamCube requires servers to participate in packet forwarding. To date, as in existing data centers, these approaches have all provided a single routing protocol for the applications.

    In this paper we explore whether allowing applications to implement their own routing services is advantageous, and whether we can support it efficiently. This is based on the observation that, due to the flexibility offered by the CamCube API, many applications implemented their own routing protocol in order to achieve specific application-level characteristics, such as trading off higher latency for better path convergence. Using large-scale simulations we demonstrate the benefits and network-level impact of running multiple routing protocols. We demonstrate that applications are more efficient and do not generate additional control traffic overhead. This motivates us to design an extended routing service that allows easy implementation of application-specific routing protocols on CamCube. Finally, we demonstrate that the additional performance overhead incurred when using the extended routing service on a prototype CamCube is very low.
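    The baseline routing style on a 3D torus such as CamCube's can be sketched as greedy dimension-order routing with wrap-around; the torus size below is an assumption, and application-specific protocols would replace this default with their own hop choices:

    ```python
    N = 4   # illustrative per-dimension torus size (a 4x4x4 CamCube-like cube)

    def torus_route(src, dst):
        """Greedy dimension-order routing on a 3D torus: for each axis in
        turn, step one hop at a time in whichever wrap-around direction
        reaches the destination coordinate in fewer hops."""
        path, cur = [src], list(src)
        for axis in range(3):
            while cur[axis] != dst[axis]:
                fwd = (dst[axis] - cur[axis]) % N        # hops going "up"
                cur[axis] = (cur[axis] + (1 if fwd <= N - fwd else -1)) % N
                path.append(tuple(cur))
        return path
    ```

    An application wanting, say, better path convergence at the cost of latency would simply order or weight these per-hop choices differently, which is the kind of customisation the extended routing service supports.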

  • Mohammad Alizadeh, Albert Greenberg, David A. Maltz, Jitendra Padhye, Parveen Patel, Balaji Prabhakar, Sudipta Sengupta, and Murari Sridharan

    Cloud data centers host diverse applications, mixing workloads that require small predictable latency with others requiring large sustained throughput. In this environment, today's state-of-the-art TCP protocol falls short. We present measurements of a 6,000-server production cluster and reveal impairments that lead to high application latencies, rooted in TCP's demands on the limited buffer space available in data center switches. For example, bandwidth-hungry “background” flows build up queues at the switches and thus impact the performance of latency-sensitive “foreground” traffic.

    To address these problems, we propose DCTCP, a TCP-like protocol for data center networks. DCTCP leverages Explicit Congestion Notification (ECN) in the network to provide multi-bit feedback to the end hosts. We evaluate DCTCP at 1 and 10 Gbps speeds using commodity shallow-buffered switches. We find DCTCP delivers the same or better throughput than TCP, while using 90% less buffer space. Unlike TCP, DCTCP also provides high burst tolerance and low latency for short flows. In handling workloads derived from operational measurements, we find that DCTCP enables applications to handle 10X the current background traffic without impacting foreground traffic. Further, a 10X increase in foreground traffic does not cause any timeouts, thus largely eliminating incast problems.

  • Craig Labovitz, Scott Iekel-Johnson, Danny McPherson, Jon Oberheide, and Farnam Jahanian

    In this paper, we examine changes in Internet inter-domain traffic demands and interconnection policies. We analyze more than 200 Exabytes of commercial Internet traffic over a two year period through the instrumentation of 110 large and geographically diverse cable operators, international transit backbones, regional networks and content providers. Our analysis shows significant changes in inter-AS traffic patterns and an evolution of provider peering strategies. Specifically, we find the majority of inter-domain traffic by volume now flows directly between large content providers, data center / CDNs and consumer networks. We also show significant changes in Internet application usage, including a global decline of P2P and a significant rise in video traffic. We conclude with estimates of the current size of the Internet by inter-domain traffic volume and rate of annualized inter-domain traffic growth.

  • Sharon Goldberg, Michael Schapira, Peter Hummon, and Jennifer Rexford

    In response to high-profile Internet outages, BGP security variants have been proposed to prevent the propagation of bogus routing information. To inform discussions of which variant should be deployed in the Internet, we quantify the ability of the main protocols (origin authentication, soBGP, S-BGP, and data-plane verification) to blunt traffic-attraction attacks, in which an attacker deliberately attracts traffic in order to drop, tamper with, or eavesdrop on packets.

    Intuition suggests that an attacker can maximize the traffic he attracts by widely announcing a short path that is not flagged as bogus by the secure protocol. Through simulations on an empirically-determined AS-level topology, we show that this strategy is surprisingly effective, even when the network uses an advanced security solution like S-BGP or data-plane verification. Worse yet, we show that these results underestimate the severity of attacks. We prove that finding the most damaging strategy is NP-hard, and show how counterintuitive strategies, like announcing longer paths, announcing to fewer neighbors, or triggering BGP loop-detection, can be used to attract even more traffic than the strategy above. These counterintuitive examples are not merely hypothetical; we searched the empirical AS topology to identify specific ASes that can launch them. Finally, we find that a clever export policy can often attract almost as much traffic as a bogus path announcement. Thus, our work implies that mechanisms that police export policies (e.g., defensive filtering) are crucial, even if S-BGP is fully deployed.
