CCR Papers from January 2011

  • S. Keshav

    What is, or ought to be, the goal of systems research? The answer to this question differs for academics and researchers in industry. Researchers in industry usually work either directly or indirectly on a specific commercial project, and are therefore constrained to design and build a system that fits manifest needs. They do not need to worry about a goal beyond this somewhat narrow horizon. For instance, a researcher at Google may be given the task of building an efficient file system: higher-level goals beyond this are meaningless to him or her. So, the ‘goal’ of systems research is more or less trivial in the industrial context.

    Many academic researchers in the area, however, are less constrained. Lacking an immediate project to work on, they are often left wondering what set of issues to address.

    One solution is to work with industrial partners to find relevant problems. However, although this results in problems that are well-defined, immediately applicable, and even publishable in the best conferences, it is not clear whether this is the true role of academia. Why should industrial research be carried out for free by academics, in effect subsidized by society? I think that academics may be ‘inspired’ by industrial problems, but should set their sights higher.

    Another easy path is to choose to work in a ‘hot’ area, as defined by the leaders in the community, or a funding agency (more often than not, these are identical). If DARPA declares technology X or Y to be its latest funding goal, it is not too hard to change one’s path to become a researcher of flavour X or Y. This path has the attraction that it guarantees a certain level of funding as well as a community of fellow researchers. However, letting others decide the research program does not sound too appealing. It is not that far from industrial research, except that the person to be satisfied is a program manager or funding agency, instead of your boss.

    I think academic researchers ought to seek their own path relatively unfettered by considerations of industrial projects or the whims of funding agencies. This, therefore, immediately brings up the question of what ought to be the goal of their work. Here are my thoughts.

    I believe that systems research lies in bridging two ‘gaps’: the Problem Selection Gap and the Infrastructure-Device Gap. In a nutshell, the goal of systems research is to satisfy application requirements, as defined by the Problem Selection Gap, by putting together infrastructure from underlying devices, thereby bridging the Infrastructure-Device Gap. Let me explain this next.

    What is the Infrastructure-Device Gap? Systems research results in the creation of systems infrastructure. By infrastructure, I mean a system that is widely used and that serves to improve the daily lives of its users in some way. Think of it as analogous to water and electricity. By that token, Automatic Teller Machines, Internet Search, airline reservation systems, and satellite remote sensing services are all instances of essential technological infrastructure.

    Infrastructure is built by putting together devices. By devices, I actually mean sub-systems whose behaviour can be well-enough encapsulated to form building blocks for the next level of abstraction and complexity. For instance, from the perspective of a computer network researcher, a host is a single device. Yet, a host is a complex system in itself, with many hundreds of subsystems. So, the definition of device depends on the specific abstraction being considered, and I will take it to be self-evident, for the purpose of this discussion, what a device is.

    An essential aspect of the composition of devices into infrastructure is that the infrastructure has properties that individual devices do not. Consider a RAID system, which provides fault-tolerance properties far superior to those of an individual disk. The systems research here is to mask the problems of individual devices, that is, to compose the devices into a harmonious whole whose group properties, such as functionality, reliability, availability, efficiency, scalability, and flexibility, are superior to those of each device. This, then, is at the heart of systems research: how to take devices, appropriately defined, and compose them to create emergent properties in an infrastructure. We judge the quality of the infrastructure by the level to which it meets its stated goals. Moreover, we can use a standard ‘bag of tricks’ (explained in the networking context in George Varghese’s superb book ‘Network Algorithmics’) to effect this composition.

    Although satisfying, this definition of systems research leaves an important problem unresolved: how should one define the set of infrastructure properties in the first place? After all, for each set of desired properties, one can come up with a system design that best matches it. Are we to be resigned to a set of not just incompatible, but incomparable, system designs?

    Here is where the Problem Selection Gap fits in. Systems are not built in a vacuum. They exist in a social context. In other words, systems are built for some purpose. In the context of industrial research, the purpose is that of the corporation, and it is handed down to the researcher: ‘Thou Shalt Build a File System’, for instance. And along with this edict comes a statement of the performance, efficiency, and ‘ility’ goals for the system. In such situations, there is no choice in problem selection.

    But what of the academic researcher? What are the characteristics of the infrastructure that the academic should seek to build? I believe that the answer is to look to the social context of academia. Universities are supported by the public in order to provide a venue for the solution of problems that afflict society at large. These are problems of health care, education, poverty, global warming, pollution, inner-city crime, and so on. As academics, it behooves us to do our bit to help society solve these problems. Therefore, I claim that as academics, we should choose one or more of these big problems, and then think about what type of system infrastructure we can build to either alleviate or solve it. This will naturally lead to a set of infrastructure requirements. In other words, there is no need to invent artificial problems to work on! There are enough real-world problems already. We only need to open our eyes.

  • Yang Chen, Vincent Borrel, Mostafa Ammar, and Ellen Zegura

    The vast majority of research in wireless and mobile (WAM) networking falls in the MANET (Mobile Ad Hoc Network) category, where end-to-end paths are the norm. More recently, research has focused on a different Disruption Tolerant Network (DTN) paradigm, where end-to-end paths are the exception and intermediate nodes may store data while waiting for transfer opportunities towards the destination. Protocols developed for MANETs are generally not appropriate for DTNs and vice versa, since the connectivity assumptions are so different. We make the simple but powerful observation that MANETs and DTNs fit into a continuum that generalizes these two previously distinct categories. In this paper, building on this observation, we develop a WAM continuum framework that goes further to scope the entire space of Wireless and Mobile networks so that a network can be characterized by its position in this continuum. Certain network equivalence classes can be defined over subsets of this WAM continuum. We instantiate our framework to allow network connectivity classification and show how that classification relates to routing. We illustrate our approach by applying it to networks described by traces and by mobility models. We also outline how our framework can be used to guide network design and operation.

    S. Banerjee
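
    As a toy illustration of this continuum (an editorial sketch, not the authors’ framework), one can take time-indexed snapshots of a contact graph and measure the fraction of source-destination pairs that have a contemporaneous end-to-end path: values near 1 look MANET-like, values near 0 look DTN-like, and everything in between motivates the continuum view. The snapshot format and the interpretation thresholds below are assumptions.

      from collections import deque

      def connected(nodes, edges, src, dst):
          """BFS over one undirected snapshot of the contact graph."""
          adj = {n: set() for n in nodes}
          for a, b in edges:
              adj[a].add(b)
              adj[b].add(a)
          seen, frontier = {src}, deque([src])
          while frontier:
              n = frontier.popleft()
              if n == dst:
                  return True
              for m in adj[n] - seen:
                  seen.add(m)
                  frontier.append(m)
          return False

      def path_availability(nodes, snapshots):
          """Fraction of (pair, time) samples with a contemporaneous path."""
          pairs = [(a, b) for a in nodes for b in nodes if a < b]
          hits = sum(connected(nodes, edges, a, b)
                     for edges in snapshots for a, b in pairs)
          return hits / (len(pairs) * len(snapshots))

      nodes = ["A", "B", "C", "D"]
      snapshots = [                      # one edge list per time step (made up)
          [("A", "B"), ("B", "C"), ("C", "D")],
          [("A", "B"), ("C", "D")],
          [("B", "C")],
      ]
      rho = path_availability(nodes, snapshots)
      # Crude placement on the continuum: near 1.0 is MANET-like,
      # near 0.0 is DTN-like, and intermediate values fill the continuum.
      print(f"path availability = {rho:.2f}")
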
  • Matthew Luckie, Amogh Dhamdhere, kc claffy, and David Murrell

    Data collected using traceroute-based algorithms underpins research into the Internet’s router-level topology, though it is possible to infer false links from this data. One source of false inference is the combination of per-flow load-balancing, in which more than one path is active from a given source to destination, and classic traceroute, which varies the UDP destination port number or ICMP checksum of successive probe packets, which can cause per-flow load-balancers to treat successive packets as distinct flows and forward them along different paths. Consequently, successive probe packets can solicit responses from unconnected routers, leading to the inference of false links. This paper examines the inaccuracies induced from such false inferences, both on macroscopic and ISP topology mapping. We collected macroscopic topology data to 365k destinations, with techniques that both do and do not try to capture load balancing phenomena. We then use alias resolution techniques to infer if a measurement artifact of classic traceroute induces a false router-level link. This technique detected that 2.71% and 0.76% of the links in our UDP and ICMP graphs were falsely inferred due to the presence of load-balancing. We conclude that most per-flow load-balancing does not induce false links when macroscopic topology is inferred using classic traceroute. The effect of false links on ISP topology mapping is possibly much worse, because the degrees of a tier-1 ISP’s routers derived from classic traceroute were inflated by a median factor of 2.9 as compared to those inferred with Paris traceroute.

    R. Teixeira
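
    The mechanism behind these false links is easy to picture: a per-flow load balancer hashes header fields to pick among parallel paths, classic traceroute changes the UDP destination port on every probe (starting from the conventional base port 33434), while Paris traceroute holds the hashed fields fixed. The sketch below is not the authors’ code; the hash function, addresses, and two-path topology are stand-ins chosen only to show why the two probing styles behave differently.

      import hashlib

      def flow_hash(src_ip, dst_ip, proto, src_port, dst_port, n_paths):
          """Stand-in for a router's per-flow load-balancing hash."""
          key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
          return int(hashlib.sha1(key).hexdigest(), 16) % n_paths

      N_PATHS = 2
      probes = range(6)

      # Classic traceroute: the UDP destination port changes with every probe,
      # so successive probes may hash to different parallel paths.
      classic = [flow_hash("10.0.0.1", "192.0.2.1", "udp", 45000, 33434 + i, N_PATHS)
                 for i in probes]

      # Paris traceroute: the fields used by the load balancer stay constant,
      # so every probe follows the same path.
      paris = [flow_hash("10.0.0.1", "192.0.2.1", "udp", 45000, 33434, N_PATHS)
               for _ in probes]

      print("classic probes ->", classic)   # e.g. a mix of path 0 and path 1
      print("paris probes   ->", paris)     # always the same path
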
  • Suchul Lee, Hyunchul Kim, Dhiman Barman, Sungryoul Lee, Chong-kwon Kim, Ted Kwon, and Yanghee Choi

    Recent research on Internet traffic classification has produced a number of approaches for distinguishing types of traffic. However, a rigorous comparison of such proposed algorithms still remains a challenge, since every proposal considers a different benchmark for its experimental evaluation. A lack of clear consensus on an objective and scientific way for comparing results has made researchers uncertain of fundamental as well as relative contributions and limitations of each proposal. In response to the growing necessity for an objective method of comparing traffic classifiers and to shed light on scientifically grounded traffic classification research, we introduce an Internet traffic classification benchmark tool, NeTraMark. Based on six design guidelines (Comparability, Reproducibility, Efficiency, Extensibility, Synergy, and Flexibility/Ease-of-use), NeTraMark is the first Internet traffic classification benchmark where eleven different state-of-the-art traffic classifiers are integrated. NeTraMark allows researchers and practitioners to easily extend it with new classification algorithms and compare them with other built-in classifiers, in terms of three categories of performance metrics: per-whole-trace flow accuracy, per-application flow accuracy, and computational performance.

    R. Teixeira
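
    For readers unfamiliar with the metrics named above, here is a minimal sketch (not NeTraMark code; the tool’s exact metric definitions may differ) of per-whole-trace and per-application flow accuracy computed from ground-truth and predicted labels.

      from collections import defaultdict

      def flow_accuracy(truth, predicted):
          """Overall and per-application flow accuracy for one trace.

          truth, predicted: parallel lists of application labels, one per flow.
          """
          assert len(truth) == len(predicted)
          overall = sum(t == p for t, p in zip(truth, predicted)) / len(truth)

          per_app_total = defaultdict(int)
          per_app_correct = defaultdict(int)
          for t, p in zip(truth, predicted):
              per_app_total[t] += 1
              per_app_correct[t] += (t == p)
          per_app = {app: per_app_correct[app] / per_app_total[app]
                     for app in per_app_total}
          return overall, per_app

      truth     = ["http", "http", "dns", "bittorrent", "dns", "http"]
      predicted = ["http", "dns",  "dns", "bittorrent", "dns", "http"]
      overall, per_app = flow_accuracy(truth, predicted)
      print(f"per-whole-trace flow accuracy: {overall:.2f}")
      for app, acc in sorted(per_app.items()):
          print(f"  {app:10s} {acc:.2f}")
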
  • Lei Yang, Zengbin Zhang, Wei Hou, Ben Y. Zhao, and Haitao Zheng

    Proliferation and innovation of wireless technologies require significant amounts of radio spectrum. Recent policy reforms by the FCC are paving the way by freeing up spectrum for a new generation of frequency-agile wireless devices based on software defined radios (SDRs). But despite recent advances in SDR hardware, research on SDR MAC protocols or applications requires an experimental platform for managing physical access. We introduce Papyrus, a software platform for wireless researchers to develop and experiment with dynamic spectrum systems using currently available SDR hardware. Papyrus provides two fundamental building blocks at the physical layer: flexible non-contiguous frequency access and simple and robust frequency detection. Papyrus allows researchers to deploy and experiment with new MAC protocols and applications on USRP GNU Radio, and can also be ported to other SDR platforms. We demonstrate the use of Papyrus using Jello, a distributed MAC overlay for high-bandwidth media streaming applications, and Ganache, an SDR layer for adaptable guardband configuration. Full implementations of Papyrus and Jello are publicly available.

    D. Wetherall
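
    The “simple and robust frequency detection” building block can be pictured as energy detection over spectrum bins followed by grouping the free bins into non-contiguous chunks. The sketch below is only that picture, with invented power values and threshold; it is not the Papyrus implementation.

      def free_segments(power_db, threshold_db):
          """Group spectrum bins whose power is below a noise threshold into
          contiguous free segments (start_bin, end_bin inclusive)."""
          segments, start = [], None
          for i, p in enumerate(power_db):
              if p < threshold_db and start is None:
                  start = i
              elif p >= threshold_db and start is not None:
                  segments.append((start, i - 1))
                  start = None
          if start is not None:
              segments.append((start, len(power_db) - 1))
          return segments

      # Hypothetical per-bin power measurements (dB) over a shared band:
      # two narrow-band users occupy the middle and the upper edge.
      spectrum = [-95, -96, -94, -60, -58, -93, -95, -92, -55, -57]
      print(free_segments(spectrum, threshold_db=-80))
      # -> [(0, 2), (5, 7)]: two non-contiguous chunks a frequency-agile
      #    sender could aggregate while leaving the occupied bins alone
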
  • Jon Whiteaker, Fabian Schneider, and Renata Teixeira

    This paper performs controlled experiments with two popular virtualization techniques, Linux-VServer and Xen, to examine the effects of virtualization on packet sending and receiving delays. Using a controlled setting allows us to independently investigate the influence on delay measurements when competing virtual machines (VMs) perform tasks that consume CPU, memory, I/O, hard disk, and network bandwidth. Our results indicate that heavy network usage from competing VMs can introduce delays as high as 100 ms to round-trip times. Furthermore, virtualization adds most of this delay when sending packets, whereas packet reception introduces little extra delay. Based on our findings, we discuss guidelines and propose a feedback mechanism to avoid measurement bias under virtualization.

    Y. Zhang
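
    A minimal version of such a delay measurement, assuming a simple UDP echo target (here a loopback thread stands in for the remote end; this is not the paper’s setup), looks like the following. Running it inside a virtual machine while competing VMs generate load yields the kind of RTT distributions the paper compares.

      import socket, threading, time

      def echo_server(sock):
          """Echo every datagram back to its sender."""
          while True:
              data, addr = sock.recvfrom(2048)
              sock.sendto(data, addr)

      server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      server.bind(("127.0.0.1", 0))        # loopback stands in for a remote peer
      threading.Thread(target=echo_server, args=(server,), daemon=True).start()

      client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      client.settimeout(1.0)
      rtts = []
      for i in range(100):
          t0 = time.monotonic()
          client.sendto(str(i).encode(), server.getsockname())
          client.recvfrom(2048)
          rtts.append((time.monotonic() - t0) * 1000.0)   # milliseconds
          time.sleep(0.01)

      rtts.sort()
      print(f"min {rtts[0]:.3f} ms  median {rtts[len(rtts)//2]:.3f} ms  "
            f"p95 {rtts[int(0.95 * len(rtts))]:.3f} ms")
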
  • Luis M. Vaquero, Luis Rodero-Merino, and Rajkumar Buyya

    Scalability is said to be one of the major advantages brought by the cloud paradigm and, more specifically, the one that makes it different from an “advanced outsourcing” solution. However, some important issues remain to be resolved before the dream of automated scaling for whole applications comes true. In this paper, the most notable initiatives towards whole application scalability in cloud environments are presented. We present relevant efforts at the edge of state-of-the-art technology, providing an encompassing overview of the trends they each follow. We also highlight pending challenges that will likely be addressed in new research efforts and present an ideal scalable cloud system.

  • Daniel Halperin, Wenjun Hu, Anmol Sheth, and David Wetherall

    We are pleased to announce the release of a tool that records detailed measurements of the wireless channel along with received 802.11 packet traces. It runs on a commodity 802.11n NIC, and records Channel State Information (CSI) based on the 802.11 standard. Unlike Receive Signal Strength Indicator (RSSI) values, which merely capture the total power received at the listener, the CSI contains information about the channel between sender and receiver at the level of individual data subcarriers, for each pair of transmit and receive antennas.

    Our toolkit uses the Intel WiFi Link 5300 wireless NIC with 3 antennas. It works on up-to-date Linux operating systems: in our testbed we use Ubuntu 10.04 LTS with the 2.6.36 kernel. The measurement setup comprises our customized versions of Intel’s closed-source firmware and open-source iwlwifi wireless driver, userspace tools to enable these measurements, access point functionality for controlling both ends of the link, and Matlab (or Octave) scripts for data analysis. We are releasing the binary of the modified firmware, and the source code to all the other components.
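
    As a hint of what can be done with such data, the fragment below computes per-subcarrier gain and phase from a hypothetical complex CSI matrix. The actual trace format and the released Matlab/Octave analysis scripts differ, so treat this purely as an illustration.

      import cmath, math

      # Hypothetical CSI snapshot: csi[rx][subcarrier] is the complex channel
      # estimate for one transmit stream (the real toolkit reports CSI for
      # groups of OFDM subcarriers; the numbers below are made up).
      csi = [
          [complex(0.9, 0.2), complex(0.7, -0.4), complex(0.3, 0.8)],   # RX antenna 0
          [complex(0.1, 0.1), complex(0.5, 0.5),  complex(0.6, -0.2)],  # RX antenna 1
      ]

      for rx, row in enumerate(csi):
          gains_db = [20 * math.log10(abs(h)) for h in row]   # per-subcarrier gain
          phases   = [cmath.phase(h) for h in row]            # per-subcarrier phase
          print(f"rx{rx}: gains(dB) = {[round(g, 1) for g in gains_db]}, "
                f"phase(rad) = {[round(p, 2) for p in phases]}")
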

  • Anders Lindgren and Pan Hui

    Research on networks for challenged environments has become a major research area recently. There is, however, a lack of true understanding among networking researchers about what such environments really are like. In this paper we give an introduction to the ExtremeCom series of workshops that were created to overcome this limitation. We will discuss the motivation behind why the workshop series was created, give some summaries of the two workshops that have been held, and discuss the lessons that we have learned from them.

  • Vinod Kone, Mariya Zheleva, Mile Wittie, Ben Y. Zhao, Elizabeth M. Belding, Haitao Zheng, and Kevin Almeroth

    Accurate measurements of deployed wireless networks are vital for researchers to perform realistic evaluation of proposed systems. Unfortunately, the difficulty of performing detailed measurements limits the consistency in parameters and methodology of current datasets. Using different datasets, multiple research studies can arrive at conflicting conclusions about the performance of wireless systems. Correcting this situation requires consistent and comparable wireless traces collected from a variety of deployment environments. In this paper, we describe AirLab, a distributed wireless data collection infrastructure that uses uniformly instrumented measurement nodes at heterogeneous locations to collect consistent traces of both standardized and user-defined experiments. We identify four challenges in the AirLab platform (consistency, fidelity, privacy, and security) and describe our approaches to address them.

  • Shailesh Agrawal, Kavitha Athota, Pramod Bhatotia, Piyush Goyal, Phani Krisha, Kirtika Ruchandan, Nishanth Sastry, Gurmeet Singh, Sujesha Sudevalayam, Immanuel Ilavarasan Thomas, Arun Vishwanath, Tianyin Xu, and Fang Yu

    This document collects reports of the sessions from the 2010 ACM SIGCOMM Conference, the annual conference of the ACM Special Interest Group on Data Communication (SIGCOMM), on the applications, technologies, architectures, and protocols for computer communication.

  • Kenneth L. Calvert, W. Keith Edwards, Nick Feamster, Rebecca E. Grinter, Ye Deng, and Xuzi Zhou

    In managing and troubleshooting home networks, one of the challenges is knowing what is actually happening. A record of events that occurred on the home network before trouble appeared would go a long way toward addressing that challenge. In this position/work-in-progress paper, we consider requirements for a general-purpose logging facility for home networks. Such a facility, if properly designed, would potentially have other uses. We describe several such uses and discuss requirements to be considered in the design of a logging platform that would be widely supported and accepted. We also report on our initial deployment of such a facility.

  • Jeffrey Erman, Alexandre Gerber, and Subhabrata Sen

    HTTP (Hypertext Transfer Protocol) was originally used primarily for human-initiated client-server communications launched from web browsers on traditional computers and laptops. However, today it has become the protocol of choice for a bewildering range of applications from a wide array of emerging devices like smart TVs and gaming consoles. This paper presents an initial study characterizing the non-traditional sources of HTTP traffic, such as consumer devices and automated updates, in the overall HTTP traffic of residential Internet users. Among our findings, 13% of all HTTP traffic in terms of bytes is due to non-traditional sources, with 5% coming from consumer devices such as WiFi-enabled smartphones and 8% generated by automated software updates and background processes. Our findings also show that 11% of all HTTP requests are caused by communications with advertising servers from as many as 190 countries worldwide, suggesting the widespread prevalence of such activities. Overall, our findings start to answer questions about the state of traffic generated in these smart homes.
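
    The bookkeeping behind such a breakdown can be sketched as follows; the categories and User-Agent heuristics here are invented for illustration and are far cruder than the classification used in the study.

      from collections import Counter

      # (user_agent, bytes) pairs for a handful of hypothetical HTTP transactions.
      requests = [
          ("Mozilla/5.0 (Windows NT 6.1) Firefox/3.6", 120_000),
          ("Mozilla/5.0 (iPhone; CPU iPhone OS 4_2)",   80_000),
          ("Xbox Live Client/2.0",                      300_000),
          ("Windows-Update-Agent",                      500_000),
          ("Mozilla/5.0 (Macintosh) Safari/533",         90_000),
      ]

      def category(user_agent):
          """Very rough bucketing by User-Agent substring (illustrative only)."""
          ua = user_agent.lower()
          if any(s in ua for s in ("update", "agent")) and "mozilla" not in ua:
              return "automated update"
          if any(s in ua for s in ("iphone", "xbox", "playstation", "smarttv")):
              return "consumer device"
          return "traditional browser"

      byte_share = Counter()
      for ua, nbytes in requests:
          byte_share[category(ua)] += nbytes
      total = sum(byte_share.values())
      for cat, nbytes in byte_share.most_common():
          print(f"{cat:20s} {100.0 * nbytes / total:5.1f}% of bytes")
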

  • Mikko Pervilä and Jussi Kangasharju

    Data centers are a major consumer of electricity and a significant fraction of their energy use is devoted to cooling the data center. Recent prototype deployments have investigated the possibility of using outside air for cooling and have shown large potential savings in energy consumption. In this paper, we push this idea to the extreme, by running servers outside in the Finnish winter. Our results show that commercial, off-the-shelf computer equipment can tolerate extreme conditions such as outside air temperatures below -20°C and still function correctly over extended periods of time. Our experiment improves upon other recent results by confirming their findings and extending them to cover a wider range of intake air temperatures and humidity. This paper presents our experimentation methodology and setup, and our main findings and observations.

  • Andrew Krioukov, Prashanth Mohan, Sara Alspaugh, Laura Keys, David Culler, and Randy Katz

    Energy consumption is a major and costly problem in data centers. A large fraction of this energy goes to powering idle machines that are not doing any useful work. We identify two causes of this inefficiency: low server utilization and a lack of power proportionality. To address this problem we present a design for a power-proportional cluster consisting of a power-aware cluster manager and a set of heterogeneous machines. Our design makes use of currently available energy-efficient hardware, mechanisms for transitioning in and out of low-power sleep states, and dynamic provisioning and scheduling to continually adjust to the workload and minimize power consumption. With our design we are able to reduce energy consumption while maintaining acceptable response times for a web service workload based on Wikipedia. With our dynamic provisioning algorithms we demonstrate via simulation a 63% savings in power usage relative to a typically provisioned data center where all machines are left on and awake at all times. Our results show that we are able to achieve close to 90% of the savings a theoretically optimal provisioning scheme would achieve. We have also built a prototype cluster which runs Wikipedia to demonstrate the use of our design in a real environment.
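
    A cartoon of the provisioning loop such a cluster manager runs is shown below. The capacity, headroom, and hysteresis values are invented, and the paper’s algorithms, which also deal with heterogeneous machine types and sleep-state transitions, are considerably more involved.

      import math

      def servers_needed(request_rate, per_server_capacity, headroom=1.2, minimum=2):
          """Number of awake servers for a predicted request rate, with spare
          headroom so latency stays acceptable while machines wake up."""
          return max(minimum, math.ceil(request_rate * headroom / per_server_capacity))

      def provision(trace, per_server_capacity=300.0):
          active = servers_needed(trace[0], per_server_capacity)
          plan = []
          for rate in trace:
              target = servers_needed(rate, per_server_capacity)
              if target > active:
                  active = target              # wake machines immediately
              elif target < active:
                  active -= 1                  # put at most one to sleep per step
              plan.append(active)
          return plan

      # Hypothetical diurnal request rates (req/s), sampled every 10 minutes.
      trace = [200, 400, 900, 1500, 2200, 1800, 1200, 600, 300]
      print(provision(trace))
      # Comparing the sum of this plan against running the peak count all day
      # gives a rough feel for the savings a power-proportional cluster targets.
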

  • Srinivasan Keshav and Catherine Rosenberg

    Several powerful forces are gathering to make fundamental and irrevocable changes to the century-old grid. The next-generation grid, often called the ‘smart grid,’ will feature distributed energy production, vastly more storage, tens of millions of stochastic renewable-energy sources, and the use of communication technologies both to allow precise matching of supply to demand and to incentivize appropriate consumer behaviour. These changes will have the effect of reducing energy waste and reducing the carbon footprint of the grid, making it ‘smarter’ and ‘greener.’ In this position paper, we discuss how the concepts and techniques pioneered by the Internet, the fruit of four decades of research in this area, are directly applicable to the design of a smart, green grid. This is because both the Internet and the electrical grid are designed to meet fundamental needs, for information and for energy, respectively, by connecting geographically dispersed suppliers with geographically dispersed consumers. Keeping this and other similarities (and fundamental differences, as well) in mind, we propose several specific areas where Internet concepts and technologies can contribute to the development of a smart, green grid. We also describe some areas where the Internet operations can be improved based on the experience gained in the electrical grid. We hope that our work will initiate a dialogue between the Internet and the smart grid communities.

  • Nicholas FitzRoy-Dale, Ihor Kuz, and Gernot Heiser

    We describe Currawong, a tool to perform system software architecture optimisation. Currawong is an extensible tool which applies optimisations at the point where an application invokes framework or library code. Currawong does not require source code to perform optimisations, effectively decoupling the relationship between compilation and optimisation. We show, through examples written for the popular Android smartphone platform, that Currawong is capable of significant performance improvement to existing applications.

  • Gunho Lee, Niraj Tolia, Parthasarathy Ranganathan, and Randy H. Katz

    This paper proposes an architecture for optimized resource allocation in Infrastructure-as-a-Service (IaaS)-based cloud systems. Current IaaS systems are usually unaware of the hosted application’s requirements and therefore allocate resources independently of its needs, which can significantly impact performance for distributed data-intensive applications.

    To address this resource allocation problem, we propose an architecture that adopts a “what if” methodology to guide allocation decisions taken by the IaaS. The architecture uses a prediction engine with a lightweight simulator to estimate the performance of a given resource allocation and a genetic algorithm to find an optimized solution in the large search space. We have built a prototype for Topology-Aware Resource Allocation (TARA) and evaluated it on an 80-server cluster with two representative MapReduce-based benchmarks. Our results show that TARA reduces the job completion time of these applications by up to 59% when compared to application-independent allocation policies.
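
    The search component can be pictured with a toy genetic algorithm: candidate placements are scored by a stand-in for the lightweight simulator and evolved by crossover and mutation. Everything below (the cost model, rack counts, and GA parameters) is invented for illustration and is not TARA itself.

      import random

      RACKS, VMS, PER_RACK = 4, 8, 4   # toy setting: place 8 VMs onto 4 racks

      def estimate_job_time(placement):
          """Stand-in for the lightweight simulator: fewer racks means less
          cross-rack traffic, and overloading a rack is heavily penalised."""
          racks_used = len(set(placement))
          overload = sum(max(0, placement.count(r) - PER_RACK) for r in set(placement))
          return racks_used + 10 * overload

      def random_placement():
          return [random.randrange(RACKS) for _ in range(VMS)]

      def crossover(a, b):
          cut = random.randrange(1, VMS)
          return a[:cut] + b[cut:]

      def mutate(placement):
          child = list(placement)
          child[random.randrange(VMS)] = random.randrange(RACKS)
          return child

      def genetic_search(generations=200, pop_size=30):
          population = [random_placement() for _ in range(pop_size)]
          for _ in range(generations):
              population.sort(key=estimate_job_time)
              parents = population[: pop_size // 3]      # keep the fittest third
              children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                          for _ in range(pop_size - len(parents))]
              population = parents + children
          return min(population, key=estimate_job_time)

      best = genetic_search()
      print(best, "score:", estimate_job_time(best))
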

  • Dong Yin, Deepak Unnikrishnan, Yong Liao, Lixin Gao, and Russell Tessier

    Recent FPGA-based implementations of network virtualization represent a significant step forward in network performance and scalability. Although these systems have been shown to provide orders of magnitude higher performance than solutions using software-based routers, straightforward reconfiguration of hardware-based virtual networks over time is a challenge. In this paper, we present the implementation of a reconfigurable network virtualization substrate that combines several partially-reconfigurable hardware virtual routers with software virtual routers. The update of hardware-based virtual networks in our system is supported via real-time partial FPGA reconfiguration. Hardware virtual networks can be dynamically reconfigured in a fraction of a second without affecting other virtual networks operating in the same FPGA. A heuristic has been developed to allocate virtual networks with diverse bandwidth requirements and network characteristics on this heterogeneous virtualization substrate. Experimental results show that the reconfigurable virtual routers can forward packets at line rate. Partial reconfiguration allows for 20x faster hardware reconfiguration than a previous approach which migrated hardware virtual networks to software.
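
    One simple, deliberately naive way to picture the allocation problem is a greedy split that gives the scarce, line-rate hardware virtual routers to the most demanding virtual networks and maps the rest onto software routers. The paper’s heuristic also accounts for network characteristics and reconfiguration; the slot counts and capacities below are invented.

      def allocate(vnets, hw_slots=4, hw_capacity_mbps=1000.0, sw_capacity_mbps=100.0):
          """Greedy split of virtual networks between scarce hardware virtual
          routers and slower software routers, highest-bandwidth first."""
          hw, sw = [], []
          hw_left = hw_slots
          for name, demand in sorted(vnets, key=lambda v: v[1], reverse=True):
              if hw_left > 0 and demand <= hw_capacity_mbps:
                  hw.append((name, demand))
                  hw_left -= 1
              elif demand <= sw_capacity_mbps:
                  sw.append((name, demand))
              else:
                  raise ValueError(f"{name}: {demand} Mb/s fits neither substrate")
          return hw, sw

      vnets = [("vnetA", 900), ("vnetB", 40), ("vnetC", 700), ("vnetD", 10),
               ("vnetE", 80), ("vnetF", 500), ("vnetG", 250), ("vnetH", 60)]
      hw, sw = allocate(vnets)
      print("hardware:", hw)   # the four most demanding virtual networks
      print("software:", sw)   # the rest, within software forwarding capacity
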

  • Yong He, Ji Fang, Jiansong Zhang, Haichen Shen, Kun Tan, and Yongguang Zhang

    This demonstration shows a novel virtualization architecture, called Multi-Protocol Access Point (MPAP), which exploits software radio technology to virtualize multiple heterogeneous wireless standards on a single radio hardware platform. The basic idea is to deploy a wideband radio front-end to receive radio signals from all wireless standards sharing the same spectrum band, and to use separate software basebands to demodulate the information stream for each wireless standard. Based on software radio, MPAP consolidates multiple wireless devices into a single hardware platform, allowing them to share common general-purpose computing resources. Different software basebands can easily communicate and coordinate via a software coordinator and coexist better with one another. As one example, we demonstrate the use of non-contiguous OFDM in the 802.11g PHY to avoid mutual interference with narrow-band ZigBee communication.

  • Keon Jang, Sangjin Han, Seungyeop Han, Sue Moon, and KyoungSoo Park

    SSL/TLS is a standard protocol for secure Internet communication. Despite its great success, today’s SSL deployment is largely limited to security-critical domains. The low adoption rate of SSL is mainly due to high computation overhead on the server side.

    In this paper, we propose Graphics Processing Units (GPUs) as a new source of computing power to reduce the server-side overhead. We have designed and implemented an SSL proxy that opportunistically offloads cryptographic operations to GPUs. The evaluation results show that our GPU implementation of cryptographic operations, RSA, AES, and HMAC-SHA1, achieves high throughput while keeping the latency low. The SSL proxy significantly boosts the throughput of SSL transactions, handling 21.5K SSL transactions per second, and has comparable response time even when overloaded.
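
    The opportunistic offloading pattern can be sketched independently of any GPU code: requests are queued and dispatched in batches large enough to amortise transfer and kernel-launch costs. Below, a CPU HMAC-SHA1 loop stands in for a batched GPU kernel, and the key, batch size, and timeout are invented; this shows only the batching idea, not the proxy described in the paper.

      import hmac, hashlib, queue, threading, time

      KEY = b"session-key"                      # illustrative only
      BATCH_SIZE, BATCH_TIMEOUT = 32, 0.005     # tune for latency vs throughput

      def gpu_batch_hmac(messages):
          """Stand-in for a batched GPU kernel: here the CPU just loops, but the
          point is that work arrives in batches large enough to keep a GPU busy."""
          return [hmac.new(KEY, m, hashlib.sha1).digest() for m in messages]

      requests = queue.Queue()

      def dispatcher():
          while True:
              batch = [requests.get()]          # block for the first request
              deadline = time.monotonic() + BATCH_TIMEOUT
              while len(batch) < BATCH_SIZE:
                  remaining = deadline - time.monotonic()
                  if remaining <= 0:
                      break
                  try:
                      batch.append(requests.get(timeout=remaining))
                  except queue.Empty:
                      break
              msgs, replies = zip(*batch)
              for reply, digest in zip(replies, gpu_batch_hmac(list(msgs))):
                  reply.put(digest)             # hand the result back to the caller

      threading.Thread(target=dispatcher, daemon=True).start()

      # A caller submits one message and waits for its digest.
      reply = queue.Queue(maxsize=1)
      requests.put((b"hello record", reply))
      print(reply.get().hex())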
