Computer Communication Review: Papers

  • Mark Allman

    Careless selection of the ephemeral port number portion of a transport protocol’s connection identifier has been shown to potentially degrade security by opening the connection up to injection attacks from “blind” or “off-path” attackers, i.e., attackers that cannot directly observe the connection. This short paper empirically explores a number of algorithms for choosing the ephemeral port number that attempt to obscure the choice from such attackers and hence make mounting these blind attacks more difficult.
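
    The general flavor of such obscuring algorithms can be illustrated with a short sketch. The structure below (a keyed hash of the destination combined with a local counter, in the spirit of RFC 6056-style selection) is an assumption for illustration, not one of the paper’s evaluated algorithms.

      import hashlib, os

      PORT_LO, PORT_HI = 49152, 65535   # a common ephemeral port range
      SECRET = os.urandom(16)           # per-host secret, unknown to off-path attackers
      counter = 0                       # advances once per connection

      def choose_ephemeral_port(dst_ip, dst_port):
          """Pick a port that an off-path attacker cannot easily predict."""
          global counter
          counter += 1
          h = hashlib.sha256(SECRET + dst_ip.encode() + dst_port.to_bytes(2, "big"))
          offset = int.from_bytes(h.digest()[:4], "big")
          return PORT_LO + (offset + counter) % (PORT_HI - PORT_LO + 1)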

    Kevin Almeroth
  • Adam Greenhalgh, Felipe Huici, Mickael Hoerdt, Panagiotis Papadimitriou, Mark Handley, and Laurent Mathy

    The Internet has seen a proliferation of specialized middlebox devices that carry out crucial network functionality such as load balancing, packet inspection and intrusion detection. Recent advances in CPU power, memory, buses and network connectivity have turned commodity PC hardware into a powerful network platform. Furthermore, commodity switch technologies have recently emerged offering the possibility to control the switching of flows in a fine-grained manner. Exploiting these new technologies, we present a new class of network architectures which enables flow processing and forwarding at unprecedented flexibility and low cost.

    Chadi Barakat
  • Chuan Han, Siyu Zhan, and Yaling Yang

    This paper addresses the open problem of locating an attacker that intentionally hides or falsifies its position using advanced radio technologies. A novel attacker localization mechanism, called Access Point Coordinated Localization (APCL), is proposed for IEEE 802.11 networks. APCL actively forces the attacker to reveal its position information by combining access point (AP) coordination with the traditional range-free localization. The optimal AP coordination process is calculated by modeling it as a finite horizon discrete Markov decision process, which is efficiently solved by an approximation algorithm. The performance advantages are verified through extensive simulations.

    Suman Banerjee
  • Arun Vishwanath, Vijay Sivaraman, and Marina Thottan

    The past few years have witnessed a lot of debate on how large Internet router buffers should be. The widely believed rule-of-thumb used by router manufacturers today mandates a buffer size equal to the delay-bandwidth product. This rule was first challenged by researchers in 2004, who argued that if there are a large number of long-lived TCP connections flowing through a router, then the buffer size needed is equal to the delay-bandwidth product divided by the square root of the number of long-lived TCP flows. The publication of this result has since reinvigorated interest in the buffer sizing problem, with numerous other papers exploring the topic in further detail, ranging from papers questioning the applicability of this result to others proposing alternate schemes or developing new congestion control algorithms.

    This paper provides a synopsis of the recently proposed buffer sizing strategies and broadly classifies them according to their desired objective: link utilisation and per-flow performance. We discuss the pros and cons of these different approaches. These prior works study buffer sizing purely in the context of TCP. Subsequently, we present arguments that take into account both real-time and TCP traffic. We also report on the performance studies of various high-speed TCP variants and experimental results for networks with limited buffers. We conclude this paper by outlining some interesting avenues for further research.
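
    As a concrete illustration of the two sizing rules discussed above, the short sketch below computes both buffer sizes for an assumed 10 Gb/s link with a 250 ms round-trip time and 10,000 long-lived flows; the numbers are illustrative and not taken from the paper.

      import math

      link_rate_bps = 10e9    # assumed 10 Gb/s bottleneck link
      rtt_s = 0.250           # assumed 250 ms round-trip time
      n_flows = 10_000        # assumed number of long-lived TCP flows

      bdp_bits = link_rate_bps * rtt_s                    # rule of thumb: C * RTT
      small_buffer_bits = bdp_bits / math.sqrt(n_flows)   # 2004 rule: C * RTT / sqrt(N)

      print(f"delay-bandwidth product: {bdp_bits / 8 / 1e6:.1f} MB")
      print(f"C*RTT/sqrt(N):           {small_buffer_bits / 8 / 1e6:.2f} MB")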

  • Jari Arkko, Bob Briscoe, Lars Eggert, Anja Feldmann, and Mark Handley

    This article summarises the presentations and discussions during a workshop on end-to-end protocols for the future Internet in June 2008. The aim of the workshop was to establish a dialogue at the interface between two otherwise fairly distinct communities working on future Internet protocols: those developing internetworking functions and those developing end-to-end transport protocols. The discussion established near-consensus on some of the open issues, such as the preferred placement of traffic engineering functionality, whereas other questions remained controversial. New research agenda items were also identified.

  • Jon Crowcroft

    It has been proposed that research in certain areas is to be avoided when those areas have gone cold. While previous work concentrated on detecting the temperature of a research topic, this work addresses the question of changing the temperature of said topics. We make suggestions for a set of techniques to re-heat a topic that has gone cold. In contrast to other researchers who propose uncertain approaches involving creativity, lateral thinking and imagination, we concern ourselves with deterministic approaches that are guaranteed to yield results.

  • Jens-Matthias Bohli, Christoph Sorge, and Dirk Westhoff

    One expectation about the future Internet is the participation of billions of sensor nodes, integrating the physical with the digital world. This Internet of Things can offer new and enhanced services and applications based on knowledge about the environment and the entities within. Millions of micro-providers could come into existence, forming a highly fragmented marketplace with new business opportunities to offer commercial services. In the related field of Internet and telecommunication services, the design of markets and pricing schemes has been a vital research area in itself. We discuss how these findings can be transferred to the Internet of Things. Both the appropriate market structure and corresponding pricing schemes need to be well understood to enable the commercial success of sensor-based services. We show some steps that an evolutionary establishment of this market might have to take.

  • Henning Schulzrinne

    In double-blind reviewing (DBR), both reviewers and authors are unaware of each other's identities and affiliations. DBR is said to increase review fairness. However, DBR may only be marginally effective in combating the randomness of the typical conference review process for highly selective conferences. DBR may also make it more difficult to adequately review conference submissions that build on earlier work of the authors and have been partially published in workshops. I believe that DBR mainly increases the perceived fairness of the reviewing process, but that may be an important benefit. Rather than waiting until the final stages, the reviewing process needs to explicitly address the issue of workshop publications early on.

  • Michalis Faloutsos

    They say that music and mathematics are intertwined. I am not sure this is true, but I always wanted to use the word intertwined. The point is that my call for poetry received an overwhelmingly enthusiastic response from at least five people. My mailbox was literally flooded (I have a small mailbox). This article is a tribute to the poetry of science, or, as I like to call it, the Poetry of Science. You will be amazed.

  • S. Keshav

    It is an honour for me to take over the Editorship of CCR from Christophe Diot. In his four years at the helm, Christophe brought in great energy and his inimitable style. More importantly, he put into place policies and processes that streamlined operations, made the magazine a must-read for SIGCOMM members, and improved the visibility of the articles and authors published here. For all this he has our sincere thanks.

    In my term as Editor, I would like to build on Christophe's legacy. I think that the magazine is well-enough established that the Editorial Board can afford to experiment with a few new ideas. Here are some ideas we have in the works.

    First, we are going to limit all future CCR papers, both editorial and peer-reviewed content, to six pages. This will actively discourage making CCR a burial ground for papers that were rejected elsewhere.

    Second, we will be proactive in seeking timely editorial content. We want SIGCOMM members to view CCR as a forum to publish new ideas, working hypotheses, and opinion pieces. Even surveys and tutorials, as long as they are timely, are welcome.

    Third, we want to encourage participation by industry, researchers and practitioners alike. We request technologists and network engineers in the field to submit articles to CCR outlining issues they face in their work, issues that can be taken up by academic researchers, who are always seeking new problems to tackle.

    Fourth, we would like to make use of online technologies to make CCR useful to its readership. In addition to CCR Online, we are contemplating an arXiv-like repository where papers can be submitted and reviewed. Importantly, readers could ask to be notified when papers matching certain keywords are submitted. This idea is still in its early stages: details will be worked out over the next few months.

    Setting these practicalities aside, let me now turn my attention to an issue that sorely needs our thoughts and innovations: the use of computer networking as a tool to solve real-world problems.

    The world today is in the midst of several crises: climate change, the rapidly growing gap between the haves and the have-nots, and the potential for epidemic outbreaks of infectious diseases, to name but a few. As thinking, educated citizens of the world, we cannot but be challenged to do our part in averting the worst effects of these global problems.

    Luckily, computer networking researchers and professionals have an important role to play. For instance: We can use networks to massively monitor weather and to allow high quality videoconferences that avoid air travel. We can use wired and wireless sensors to greatly reduce the inefficiencies of our heating and cooling systems. We can provide training videos to people at the ‘bottom-of-the-pyramid’ that can open new horizons to them and allow them to earn a better living. We can spread information that can lead to the overthrow of endemic power hierarchies through the miracles of cheap cell phones and peer-to-peer communication. We can help monitor infectious diseases in the remote corners of the world and help coordinate rapid responses to them.

    We have in our power the ideas and the technologies that can make a difference. And we must put these to good use.

    CCR can and should become the forum where the brilliant minds of today are exposed to the problems, the real problems, that face us, and where solutions are presented, critiqued, improved, and shared. This must be done. Let’s get started!

  • Ashvin Lakshmikantha, R. Srikant, Nandita Dukkipati, Nick McKeown, and Carolyn Beck

    Buffer sizing has received a lot of attention recently since it is becoming increasingly difficult to use large buffers in highspeed routers. Much of the prior work has concentrated on analyzing the amount of buffering required in core routers assuming that TCP carries all the data traffic. In this paper, we evaluate the amount of buffering required for RCP on a single congested link, while explicitly modeling flow arrivals and departures. Our theoretical analysis and simulations indicate that buffer sizes of about 10% of the bandwidth-delay product are sufficient for RCP to deliver good performance to end-users.

    Darryl Veitch
  • Stefan Frei, Thomas Duebendorfer, and Bernhard Plattner

    Although there is an increasing trend for attacks against popular Web browsers, little is known about the actual patch level of daily used Web browsers on a global scale. We conjecture that users in large part do not actually patch their Web browsers based on recommendations, perceived threats, or any security warnings. Based on HTTP User-Agent header information stored in anonymized logs from Google’s web servers, we measured the patch dynamics of about 75% of the world’s Internet users for over a year. Our focus was on the Web browsers Firefox and Opera. We found that the patch level achieved is mainly determined by the ergonomics and default settings of built-in auto-update mechanisms. Firefox’s auto-update is very effective: most users installed a new version within three days. However, the maximum share of the latest, most secure version never exceeded 80% for Firefox users and 46% for Opera users on any day in 2007. This leaves about 50 million Firefox users with outdated browsers an easy target for attacks. Our study is the result of the first global-scale measurement of the patch dynamics of a popular browser.
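
    The measurement idea lends itself to a simple sketch: extract the browser version from each HTTP User-Agent string and compute what share of requests comes from the latest release. The helper names and the “latest version” constant below are assumptions for illustration, not details from the paper.

      import re
      from collections import Counter

      LATEST_FIREFOX = "2.0.0.11"   # placeholder "latest" version, not from the paper

      def firefox_version(user_agent):
          """Pull the Firefox version string out of a User-Agent header, if any."""
          m = re.search(r"Firefox/([\d.]+)", user_agent)
          return m.group(1) if m else None

      def latest_version_share(user_agents):
          """Fraction of Firefox requests that come from the latest release."""
          versions = Counter(v for v in map(firefox_version, user_agents) if v)
          total = sum(versions.values())
          return versions[LATEST_FIREFOX] / total if total else 0.0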

    Dmitri Krioukov
  • David A. Hayes, Jason But, and Grenville Armitage

    A Stream Control Transmission Protocol (SCTP) capable Network Address Translation (NAT) device is necessary to support the wider deployment of the SCTP protocol. The key issues for an SCTP NAT are SCTP’s control chunk multiplexing and multi-homing features. Control chunk multiplexing can expose an SCTP NAT to possible Denial of Service attacks. These can be mitigated through the use of chunk and parameter processing limits.

    Multiple and changing IP addresses during an SCTP association mean that SCTP NATs cannot operate in the way conventional UDP/TCP NATs operate. Tracking these multiple global IP addresses can help in avoiding lookup table conflicts; however, it can also lead to NAT state inconsistencies. Our analysis shows that tracking global IP addresses is not necessary in most expected practical installations.

    We use our FreeBSD SCTP NAT implementation, alias_sctp, to examine the performance implications of tracking global IP addresses. We find that typical memory usage doubles and that the processing requirements are significant for installations that experience high association arrival rates.

    In conclusion, we provide practical recommendations for a secure, stable SCTP NAT installation.

    Chadi Barakat
  • Tao Ye, Darryl Veitch, and Jean Bolot

    Data confidentiality over mobile devices can be difficult to secure due to a lack of computing power and weak supporting encryption components. However, modern devices often have multiple wireless interfaces with diverse channel capacities and security capabilities. We show that the availability of diverse, heterogeneous links (physical or logical) between nodes in a network can be used to increase data confidentiality, on top of the availability or strength of underlying encryption techniques. We introduce a new security approach that uses multiple channels to transmit data securely, based on the ideas of deliberate corruption and information reduction, and we analyze its security using the information-theoretic concepts of secrecy capacity and the wiretap channel. Our work introduces the idea of channel design with security in mind.

    Suman Banerjee
  • David Andersen

    Jay Lepreau was uncharacteristically early in his passing, but left behind him a trail of great research, rigorously and repeatably evaluated systems, and lives changed for the better.

  • Matthew Mathis

    The current Internet fairness paradigm mandates that all protocols have an equivalent response to packet loss and other congestion signals, allowing relatively simple network devices to attain a weak form of fairness by sending uniform signals to all flows. Our paper [1], which recently received the ACM SIGCOMM Test of Time Award, modeled the reference Additive-Increase-Multiplicative-Decrease algorithm used by TCP. However, in many parts of the Internet ISPs are choosing to explicitly control customer traffic, because the traditional paradigm does not sufficiently enforce fairness in a number of increasingly common situations. This editorial note takes the position that we should embrace this paradigm shift, which will eventually move the responsibility for capacity allocation from the end-systems to the network itself. This paradigm shift might eventually eliminate the requirement that all protocols be “TCP-friendly”.

  • Luis M. Vaquero, Luis Rodero-Merino, Juan Caceres, and Maik Lindner

    This paper discusses the concept of Cloud Computing to achieve a complete definition of what a Cloud is, using the main characteristics typically associated with this paradigm in the literature. More than 20 definitions have been studied, allowing for the extraction of a consensus definition as well as a minimum definition containing the essential characteristics. This paper pays much attention to the Grid paradigm, as it is often confused with Cloud technologies. We also describe the relationships and distinctions between the Grid and Cloud approaches.

  • L. Lily Yang

    60 GHz is considered the most promising technology to deliver gigabit wireless for indoor communications. We propose to integrate 60 GHz radio with the existing Wi-Fi radio in 2.4/5 GHz band to take advantage of the complementary nature of these different bands. This integration presents an opportunity to provide a unified technology for both gigabit Wireless Personal Area Networks (WPAN) and Wireless Local Area Networks (WLAN), thus further reinforcing the technology convergence that is already underway with the widespread adoption of Wi-Fi technology. Many open research questions remain to make this unified solution work seamlessly for WPAN and WLAN.

  • Dola Saha, Dirk Grunwald, and Douglas Sicker

    Advances in networking have been accelerated by the use of abstractions, such as “layering”, and the ability to apply those abstractions across multiple communication media. Wireless communication provides the greatest challenge to these clean abstractions because of the lossy communication media. For many networking researchers, wireless communications hardware starts and ends with WiFi, or 802.11 compliant hardware.

    However, there has been a recent growth in software defined radio, which allows the basic radio medium to be manipulated by programs. This mutable radio layer has allowed researchers to exploit the physical properties of radio communication to overcome some of the challenges of the radio media; in certain cases, researchers have been able to develop mechanisms that are difficult to implement in electrical or optical media. In this paper, we describe the different design variants for software radios, their programming methods and survey some of the more cutting edge uses of those radios.

  • Albert Greenberg, James Hamilton, David A. Maltz, and Parveen Patel

    The data centers used to create cloud services represent a significant investment in capital outlay and ongoing costs. Accordingly, we first examine the costs of cloud service data centers today. The cost breakdown reveals the importance of optimizing work completed per dollar invested. Unfortunately, the resources inside the data centers often operate at low utilization due to resource stranding and fragmentation. To attack this first problem, we propose (1) increasing network agility, and (2) providing appropriate incentives to shape resource consumption. Second, we note that cloud service providers are building out geo-distributed networks of data centers. Geo-diversity lowers latency to users and increases reliability in the presence of an outage taking out an entire site. However, without appropriate design and management, these geo-diverse data center networks can raise the cost of providing service. Moreover, leveraging geo-diversity requires services be designed to benefit from it. To attack this problem, we propose (1) joint optimization of network and data center resources, and (2) new systems and mechanisms for geo-distributing state.

  • Henning Schulzrinne

    While most of us are involved in organizing conferences in some way, we probably do not pay too much attention to the organizational model of these events. This is somewhat surprising, given that conferences are probably the most visible activity of most professional societies and also entail significant expenditures of money and volunteer labor. While the local square dance club with a $500 annual budget probably has bylaws and statutes, most conferences with hundred-thousand-dollar budgets operate more by oral tradition than by formal descriptions of responsibilities. In almost all cases, this works just fine, but this informality can lead to misunderstandings or problems when expectations differ among the volunteers or when there is a crisis. Thus, I believe that it is helpful to have clearer models, so that conferences and volunteers can reach a common understanding of what is expected of everybody who contributes their time to the conference, and also of who is responsible when things go wrong. For long-running conferences, the typical conference organization involves four major actors: the sponsoring professional organization, a steering committee, the general chairs and the technical program chairs. However, the roles and reporting relationships seem to differ rather dramatically between different conferences.

  • Michalis Faloutsos

    It is clearly a time of change. Naturally, I am talking about the change of person in the top post: “Le Boss Grand” Christophe Diot is replaced by “Canadian Chief, Eh?” Srinivasan Keshav. Coincidence? No, my friends. This change of the guards in the most powerful position in CCR, and, by some stretch of the imagination, in SIGCOMM, and, by arbitrary extension, in the scientific world at large, is just the start of a period of change that will be later known as the Great Changes. Taking the cue from that, I say, let’s change.

  • Don Towsley

    This talk overviews some of the highlights and success stories in the mathematical modeling and analysis of the Internet (and other networks). We will begin with Kleinrock’s seminal work on modeling store-and-forward networks and its extensions, and end with the successful development of fluid models for the current Internet. The rest of the talk will focus on lessons learned from these endeavors and how modeling and analysis can and will play a role in the development of new networks (e.g., wireless networks, application-level networks). Finally, we conclude that one can have fun modeling networks while at the same time making a living.

  • Changhoon Kim, Matthew Caesar, and Jennifer Rexford

    IP networks today require massive effort to configure and manage. Ethernet is vastly simpler to manage, but does not scale beyond small local area networks. This paper describes an alternative network architecture called SEATTLE that achieves the best of both worlds: The scalability of IP combined with the simplicity of Ethernet. SEATTLE provides plug-and-play functionality via flat addressing, while ensuring scalability and efficiency through shortest-path routing and hash-based resolution of host information. In contrast to previous work on identity-based routing, SEATTLE ensures path predictability and stability, and simplifies network management. We performed a simulation study driven by real-world traffic traces and network topologies, and used Emulab to evaluate a prototype of our design based on the Click and XORP open-source routing platforms. Our experiments show that SEATTLE efficiently handles network failures and host mobility, while reducing control overhead and state requirements by roughly two orders of magnitude compared with Ethernet bridging.
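
    The hash-based resolution mentioned above can be sketched as follows: every switch applies the same hash to a host’s address to find the switch that stores that host’s location. This is only an illustration of the idea, not SEATTLE’s exact consistent-hashing scheme; the names are assumptions.

      import hashlib

      def resolver_switch(host_mac, switch_ids):
          """Return the switch responsible for storing this host's location."""
          digest = hashlib.sha1(host_mac.encode()).digest()
          return switch_ids[int.from_bytes(digest[:4], "big") % len(switch_ids)]

      # Any switch can compute the same mapping when it needs to look a host up.
      print(resolver_switch("00:16:3e:aa:bb:cc", ["sw1", "sw2", "sw3", "sw4"]))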

  • Kirill Levchenko, Geoffrey M. Voelker, Ramamohan Paturi, and Stefan Savage

    In this paper, we present a new link-state routing algorithm called Approximate Link state (XL) aimed at increasing routing efficiency by suppressing updates from parts of the network. We prove that three simple criteria for update propagation are sufficient to guarantee soundness, completeness and bounded optimality for any such algorithm. We show, via simulation, that XL significantly outperforms standard link-state and distance vector algorithms—in some cases reducing overhead by more than an order of magnitude— while having negligible impact on path length. Finally, we argue that existing link-state protocols, such as OSPF, can incorporate XL routing in a backwards compatible and incrementally deployable fashion.

  • Murtaza Motiwala, Megan Elmore, Nick Feamster, and Santosh Vempala

    We present path splicing, a new routing primitive that allows network paths to be constructed by combining multiple routing trees (“slices”) to each destination over a single network topology. Path splicing allows traffic to switch trees at any hop en route to the destination. End systems can change the path on which traffic is forwarded by changing a small number of additional bits in the packet header. We evaluate path splicing for intradomain routing using slices generated from perturbed link weights and find that splicing achieves reliability that approaches the best possible using a small number of slices, for only a small increase in latency and no adverse effects on traffic in the network. In the case of interdomain routing, where splicing derives multiple trees from edges in alternate backup routes, path splicing achieves near-optimal reliability and can provide significant benefits even when only a fraction of ASes deploy it. We also describe several other applications of path splicing, as well as various possible deployment paths.
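
    A minimal sketch of the forwarding primitive, under an assumed encoding of two splicing bits per hop: the header bits select which precomputed routing tree (“slice”) supplies the next hop, so end systems can switch trees simply by changing those bits. The encoding and names are illustrative, not the paper’s exact format.

      def next_hop(slice_tables, dst, splicing_bits, hop_count):
          """slice_tables: one forwarding table (dst -> next hop) per slice."""
          bits = (splicing_bits >> (2 * hop_count)) & 0b11   # assumed: 2 bits per hop
          return slice_tables[bits % len(slice_tables)][dst]

      # Example with two slices: flipping the header bits steers around a failure.
      tables = [{"d": "r1"}, {"d": "r2"}]
      print(next_hop(tables, "d", splicing_bits=0b01, hop_count=0))   # slice 1 -> "r2"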

  • Franck Le, Geoffrey G. Xie, Dan Pei, Jia Wang, and Hui Zhang

    Recent studies reveal that the routing structures of operational networks are much more complex than a simple BGP/IGP hierarchy, highlighted by the presence of many distinct instances of routing protocols. However, the glue (how routing protocol instances interact and exchange routes among themselves) is still little understood or studied. For example, although Route Redistribution (RR), the implementation of the glue in router software, has been used in the Internet for more than a decade, it was only recently shown that RR is extremely vulnerable to anomalies similar to the permanent route oscillations in BGP. This paper takes an important step toward understanding how RR is used and how fundamental the role RR plays in practice. We developed a complete model and associated tools for characterizing interconnections between routing instances based on analysis of router configuration data. We analyzed and characterized the RR usage in more than 1600 operational networks. The findings are: (i) RR is indeed widely used; (ii) operators use RR to achieve important design objectives not realizable with existing routing protocols alone; (iii) RR configurations can be very diverse and complex. These empirical discoveries not only confirm that the RR glue constitutes a critical component of the current Internet routing architecture, but also emphasize the urgent need for more research to improve its safety and flexibility to support important design objectives.

  • Dilip A. Joseph, Arsalan Tavakoli, and Ion Stoica

    Data centers deploy a variety of middleboxes (e.g., firewalls, load balancers and SSL offloaders) to protect, manage and improve the performance of applications and services they run. Since existing networks provide limited support for middleboxes, administrators typically overload path selection mechanisms to coerce traffic through the desired sequences of middleboxes placed on the network path. These ad-hoc practices result in a data center network that is hard to configure and maintain, wastes middlebox resources, and cannot guarantee middlebox traversal under network churn.

    To address these issues, we propose the policy-aware switching layer or PLayer, a new layer-2 for data centers consisting of inter-connected policy-aware switches or pswitches. Unmodified middleboxes are placed off the network path by plugging them into pswitches. Based on policies specified by administrators, pswitches explicitly forward different types of traffic through different sequences of middleboxes. Experiments using our prototype software pswitches suggest that the PLayer is flexible, uses middleboxes efficiently, and guarantees correct middlebox traversal under churn.
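
    The policy-driven traversal described above can be sketched as a mapping from a traffic class to its required middlebox sequence; a pswitch forwards a frame to the next middlebox in that sequence. The traffic classes and middlebox names below are illustrative assumptions, not the paper’s configuration language.

      POLICY = {
          "web":  ["firewall", "load_balancer"],
          "mail": ["firewall"],
      }

      def next_middlebox(traffic_class, traversed):
          """Return the next middlebox the frame must visit, or None if done."""
          sequence = POLICY.get(traffic_class, [])
          return sequence[len(traversed)] if len(traversed) < len(sequence) else None

      print(next_middlebox("web", traversed=["firewall"]))   # -> "load_balancer"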

  • Mohammad Al-Fares, Alexander Loukissas, and Amin Vahdat

    Today’s data centers may contain tens of thousands of computers with significant aggregate bandwidth requirements. The network architecture typically consists of a tree of routing and switching elements with progressively more specialized and expensive equipment moving up the network hierarchy. Unfortunately, even when deploying the highest-end IP switches/routers, resulting topologies may only support 50% of the aggregate bandwidth available at the edge of the network, while still incurring tremendous cost. Nonuniform bandwidth among data center nodes complicates application design and limits overall system performance.

    In this paper, we show how to leverage largely commodity Ethernet switches to support the full aggregate bandwidth of clusters consisting of tens of thousands of elements. Similar to how clusters of commodity computers have largely replaced more specialized SMPs and MPPs, we argue that appropriately architected and interconnected commodity switches may deliver more performance at less cost than available from today’s higher-end solutions. Our approach requires no modifications to the end host network interface, operating system, or applications; critically, it is fully backward compatible with Ethernet, IP, and TCP.

  • Chuanxiong Guo, Haitao Wu, Kun Tan, Lei Shi, Yongguang Zhang, and Songwu Lu

    A fundamental challenge in data center networking is how to efficiently interconnect an exponentially increasing number of servers. This paper presents DCell, a novel network structure that has many desirable features for data center networking. DCell is a recursively defined structure, in which a high-level DCell is constructed from many low-level DCells and DCells at the same level are fully connected with one another. DCell scales doubly exponentially as the node degree increases. DCell is fault tolerant since it does not have a single point of failure, and its distributed fault-tolerant routing protocol performs near-shortest-path routing even in the presence of severe link or node failures. DCell also provides higher network capacity than the traditional tree-based structure for various types of services. Furthermore, DCell can be incrementally expanded, and a partial DCell provides the same appealing features. Results from theoretical analysis, simulations, and experiments show that DCell is a viable interconnection structure for data centers.
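
    The doubly exponential growth claimed above follows from DCell’s recursive construction: a level-k DCell is built from t_{k-1} + 1 copies of a level-(k-1) DCell, each holding t_{k-1} servers, so t_k = t_{k-1}(t_{k-1} + 1). The short sketch below computes this count; treat it as an illustration, and the port count in the example as an assumption.

      def dcell_servers(n, k):
          """Servers in a level-k DCell built from n-server DCell_0 units."""
          t = n
          for _ in range(k):
              t = t * (t + 1)   # t_k = t_{k-1} * (t_{k-1} + 1)
          return t

      # Illustrative: a 3-level DCell built from 4-server DCell_0 units.
      print(dcell_servers(4, 3))   # 176,820 servers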

  • Richard Alimi, Ye Wang, and Y. Richard Yang

    Configurations for today’s IP networks are becoming increasingly complex. As a result, configuration management is becoming a major cost factor for network providers and configuration errors are becoming a major cause of network disruptions. In this paper, we present and evaluate the novel idea of shadow configurations. Shadow configurations allow configuration evaluation before deployment and thus can reduce potential network disruptions. We demonstrate using real implementation that shadow configurations can be implemented with low overhead.

  • Thomas Karagiannis, Richard Mortier, and Antony Rowstron

    Enterprise network architecture and management have followed the Internet’s design principles despite different requirements and characteristics: enterprise hosts are administered by a single authority, which intrinsically assigns different values to traffic from different business applications.

    We advocate a new approach where hosts are no longer relegated to the network’s periphery, but actively participate in network-related decisions. To enable host participation, network information, such as dynamic network topology and per-link characteristics and costs, is exposed to the hosts, and network administrators specify conditions on the propagated network information that trigger actions to be performed while a condition holds. The combination of a condition and its actions embodies the concept of the network exception handler, defined analogous to a program exception handler. Conceptually, network exception handlers execute on hosts with actions parameterized by network and host state.

    Network exception handlers allow hosts to participate in network management, traffic engineering and other operational decisions by explicitly controlling host traffic under predefined conditions. This flexibility improves overall performance by allowing efficient use of network resources. We outline several sample network exception handlers, present an architecture to support them, and evaluate them using data collected from our own enterprise network.
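
    The condition/action pairing can be sketched directly; the class below and the example policy (including the host-side throttle call) are illustrative assumptions, not code or an API from the paper.

      class NetworkExceptionHandler:
          """Runs on a host: while the condition holds, apply the action."""
          def __init__(self, condition, action):
              self.condition = condition   # f(network_state) -> bool
              self.action = action         # f(network_state, host_state) -> None

          def evaluate(self, network_state, host_state):
              if self.condition(network_state):
                  self.action(network_state, host_state)

      # Example: throttle bulk backup traffic while a WAN link is heavily loaded.
      handler = NetworkExceptionHandler(
          condition=lambda net: net["wan_utilization"] > 0.9,
          action=lambda net, host: host.throttle("backup", mbps=1))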

  • Ranveer Chandra, Ratul Mahajan, Thomas Moscibroda, Ramya Raghavendra, and Paramvir Bahl

    We study a fundamental yet under-explored facet in wireless communication – the width of the spectrum over which transmitters spread their signals, or the channel width. Through detailed measurements in controlled and live environments, and using only commodity 802.11 hardware, we first quantify the impact of channel width on throughput, range, and power consumption. Taken together, our findings make a strong case for wireless systems that adapt channel width. Such adaptation brings unique benefits. For instance, when the throughput required is low, moving to a narrower channel increases range and reduces power consumption; in fixed-width systems, these two quantities are always in conflict.

    We then present SampleWidth, a channel width adaptation algorithm for the base case of two communicating nodes. This algorithm is based on a simple search process that builds on top of existing techniques for adapting modulation. Per the specified policy, it can maximize throughput or minimize power consumption. Evaluation using a prototype implementation shows that SampleWidth correctly identifies the optimal width under a range of scenarios. In our experiments with mobility, it increases throughput by more than 60% compared to the best fixed-width configuration.
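
    The adaptation intuition described above can be sketched as a simple selection rule: among the widths whose measured throughput meets the application’s requirement, prefer the narrowest one, since narrower channels give more range and use less power. The width set and selection rule are assumptions for illustration, not SampleWidth’s actual search process.

      CHANNEL_WIDTHS_MHZ = [5, 10, 20, 40]   # assumed set of selectable widths

      def pick_width(required_mbps, measured_mbps):
          """measured_mbps: width (MHz) -> achievable throughput at that width."""
          feasible = [w for w in CHANNEL_WIDTHS_MHZ
                      if measured_mbps.get(w, 0.0) >= required_mbps]
          # Narrower widths give more range and use less power, so prefer them.
          return min(feasible) if feasible else max(CHANNEL_WIDTHS_MHZ)

      print(pick_width(3.0, {5: 4.1, 10: 8.0, 20: 15.2, 40: 27.5}))   # -> 5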

  • Hariharan Rahul, Nate Kushman, Dina Katabi, Charles Sodini, and Farinaz Edalat

    Wideband technologies in the unlicensed spectrum can satisfy the ever-increasing demands for wireless bandwidth created by emerging rich media applications. The key challenge for such systems, however, is to allow narrowband technologies that share these bands (say, 802.11 a/b/g/n, Zigbee) to achieve their normal performance, without compromising the throughput or range of the wideband network.

    This paper presents SWIFT, the first system where high-throughput wideband nodes are shown in a working deployment to coexist with unknown narrowband devices, while forming a network of their own. Prior work avoids narrowband devices by operating below the noise level and limiting itself to a single contiguous unused band. While this achieves coexistence, it sacrifices the throughput and operating distance of the wideband device. In contrast, SWIFT creates high-throughput wireless links by weaving together non-contiguous unused frequency bands that change as narrowband devices enter or leave the environment. This design principle of cognitive aggregation allows SWIFT to achieve coexistence, while operating at normal power, and thereby obtaining higher throughput and greater operating range. We implement SWIFT on a wideband hardware platform, and evaluate it in the presence of 802.11 devices. In comparison to a baseline that coexists with narrowband devices by operating below their noise level, SWIFT is equally narrowband-friendly but achieves 3.6−10.5× higher throughput and 6× greater range.

  • Shyamnath Gollakota and Dina Katabi

    This paper presents ZigZag, an 802.11 receiver design that combats hidden terminals. ZigZag’s core contribution is a new form of interference cancellation that exploits asynchrony across successive collisions. Specifically, 802.11 retransmissions, in the case of hidden terminals, cause successive collisions. These collisions have different interference-free stretches at their start, which ZigZag exploits to bootstrap its decoding.

    ZigZag makes no changes to the 802.11 MAC and introduces no overhead when there are no collisions. But, when senders collide, ZigZag attains the same throughput as if the colliding packets were a priori scheduled in separate time slots. We build a prototype of ZigZag in GNU Radio. In a testbed of 14 USRP nodes, ZigZag reduces the average packet loss rate at hidden terminals from 72.6% to about 0.7%.

  • Yinglian Xie, Fang Yu, Kannan Achan, Rina Panigrahy, Geoff Hulten, and Ivan Osipkov

    In this paper, we focus on characterizing spamming botnets by leveraging both spam payload and spam server traffic properties. Towards this goal, we developed a spam signature generation framework called AutoRE to detect botnet-based spam emails and botnet membership. AutoRE does not require pre-classified training data or white lists. Moreover, it outputs high quality regular expression signatures that can detect botnet spam with a low false positive rate. Using a three-month sample of emails from Hotmail, AutoRE successfully identified 7,721 botnet-based spam campaigns together with 340,050 unique botnet host IP addresses.

    Our in-depth analysis of the identified botnets revealed several interesting findings regarding the degree of email obfuscation, properties of botnet IP addresses, sending patterns, and their correlation with network scanning traffic. We believe these observations are useful information in the design of botnet detection schemes.

  • Gregor Maier, Robin Sommer, Holger Dreger, Anja Feldmann, Vern Paxson, and Fabian Schneider

    In many situations it can be enormously helpful to archive the raw contents of a network traffic stream to disk, to enable later inspection of activity that becomes interesting only in retrospect. We present a Time Machine (TM) for network traffic that provides such a capability. The TM leverages the heavy-tailed nature of network flows to capture nearly all of the likely-interesting traffic while storing only a small fraction of the total volume. An initial proof-of-principle prototype established the forensic value of such an approach, contributing to the investigation of numerous attacks at a site with thousands of users. Based on these experiences, a rearchitected implementation of the system provides flexible, high-performance traffic stream capture, indexing and retrieval, including an interface between the TM and a real-time network intrusion detection system (NIDS). The NIDS controls the TM by dynamically adjusting recording parameters, instructing it to permanently store suspicious activity for offline forensics, and fetching traffic from the past for retrospective analysis. We present a detailed performance evaluation of both stand-alone and joint setups, and report on experiences with running the system live in high-volume environments.
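
    The storage saving comes from exploiting heavy-tailed flow sizes: keep only the early bytes of each connection and skip the rest. The sketch below illustrates such a per-connection cutoff; the cutoff value and data structures are assumptions, not the TM implementation.

      CUTOFF_BYTES = 15_000          # assumed per-connection cutoff
      bytes_recorded = {}            # flow 5-tuple -> payload bytes stored so far

      def maybe_store(flow_id, payload, archive):
          """Archive packet payload only until the flow's cutoff is reached."""
          stored = bytes_recorded.get(flow_id, 0)
          if stored >= CUTOFF_BYTES:
              return                             # tail of a large flow: drop it
          take = payload[:CUTOFF_BYTES - stored]
          archive.append((flow_id, take))        # keep the likely-interesting early bytes
          bytes_recorded[flow_id] = stored + len(take)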

  • Xin Liu, Xiaowei Yang, and Yanbin Lu

    This paper presents the design and implementation of a filter-based DoS defense system (StopIt) and a comparison study on the effectiveness of filters and capabilities. Central to the StopIt design is a novel closed-control, open-service architecture: any receiver can use StopIt to block the undesired traffic it receives, yet the design is robust to various strategic attacks from millions of bots, including filter exhaustion attacks and bandwidth flooding attacks that aim to disrupt the timely installation of filters. Our evaluation shows that StopIt can block the attack traffic from a few million attackers within tens of minutes with bounded router memory. We compare StopIt with existing filter-based and capability-based DoS defense systems under simulated DoS attacks of various types and scales. Our results show that StopIt outperforms existing filter-based systems and can prevent legitimate communications from being disrupted by various DoS flooding attacks. It also outperforms capability-based systems in most attack scenarios, but a capability-based system is more effective in a type of attack in which the attack traffic does not reach the victim but instead congests a link shared by the victim. These results suggest that both filters and capabilities are highly effective DoS defense mechanisms, but neither is more effective than the other in all types of DoS attacks.

  • Randy Smith, Cristian Estan, Somesh Jha, and Shijin Kong

    Deep packet inspection is playing an increasingly important role in the design of novel network services. Regular expressions are the language of choice for writing signatures, but standard DFA or NFA representations are unsuitable for high-speed environments, requiring too much memory, too much time, or too much per-flow state. DFAs are fast and can be readily combined, but doing so often leads to state-space explosion. NFAs, while small, require large per-flow state and are slow.

    We propose a solution that simultaneously addresses all these problems. We start with a first-principles characterization of state-space explosion and give conditions that eliminate it when satisfied. We show how auxiliary variables can be used to transform automata so that they satisfy these conditions, which we codify in a formal model that augments DFAs with auxiliary variables and simple instructions for manipulating them. Building on this model, we present techniques, inspired by principles used in compiler optimization, that systematically reduce runtime and per-flow state. In our experiments, signature sets from Snort and Cisco Systems achieve state-space reductions of over four orders of magnitude, per-flow state reductions of up to a factor of six, and runtimes that approach DFAs.

  • Ashok Anand, Archit Gupta, Aditya Akella, Srinivasan Seshan, and Scott Shenker

    Many past systems have explored how to eliminate redundant transfers from network links and improve network efficiency. Several of these systems operate at the application layer, while the more recent systems operate on individual packets. A common aspect of these systems is that they apply to localized settings, e.g. at stub network access links. In this paper, we explore the benefits of deploying packet-level redundant content elimination as a universal primitive on all Internet routers. Such a universal deployment would immediately reduce link loads everywhere. However, we argue that far more significant network-wide benefits can be derived by redesigning network routing protocols to leverage the universal deployment. We develop “redundancy-aware” intra- and inter-domain routing algorithms and show that they enable better traffic engineering, reduce link usage costs, and enhance ISPs’ responsiveness to traffic variations. In particular, employing redundancy elimination approaches across redundancy-aware routes can lower intra- and inter-domain link loads by 10-50%. We also address key challenges that may hinder implementation of redundancy elimination on fast routers. Our current software router implementation can run at OC48 speeds.
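
    The underlying primitive, detecting repeated content at the packet level, can be sketched with a simple fingerprint cache. The fixed-size chunking below is an illustrative simplification, not the paper’s redundancy elimination machinery.

      import hashlib

      CHUNK = 64                     # assumed chunk size in bytes
      seen_fingerprints = set()      # cache of recently observed content

      def redundant_bytes(payload):
          """Count how many payload bytes match previously seen chunks."""
          saved = 0
          for i in range(0, len(payload) - CHUNK + 1, CHUNK):
              fp = hashlib.sha1(payload[i:i + CHUNK]).digest()
              if fp in seen_fingerprints:
                  saved += CHUNK
              else:
                  seen_fingerprints.add(fp)
          return saved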
