Computer Communication Review: Papers

  • Nicholas FitzRoy-Dale, Ihor Kuz, and Gernot Heiser

    We describe Currawong, a tool to perform system software architecture optimisation. Currawong is an extensible tool which applies optimisations at the point where an application invokes framework or library code. Currawong does not require source code to perform optimisations, effectively decoupling the relationship between compilation and optimisation. We show, through examples written for the popular Android smartphone platform, that Currawong is capable of significant performance improvements to existing applications.

  • Gunho Lee, Niraj Tolia, Parthasarathy Ranganathan, and Randy H. Katz

    This paper proposes an architecture for optimized resource allocation in Infrastructure-as-a-Service (IaaS)-based cloud systems. Current IaaS systems are usually unaware of the hosted application’s requirements and therefore allocate resources independently of its needs, which can significantly impact performance for distributed data-intensive applications.

    To address this resource allocation problem, we propose an architecture that adopts a “what if” methodology to guide allocation decisions taken by the IaaS. The architecture uses a prediction engine with a lightweight simulator to estimate the performance of a given resource allocation and a genetic algorithm to find an optimized solution in the large search space. We have built a prototype for Topology-Aware Resource Allocation (TARA) and evaluated it on an 80-server cluster with two representative MapReduce-based benchmarks. Our results show that TARA reduces the job completion time of these applications by up to 59% when compared to application-independent allocation policies.
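
    A minimal sketch of the “what if” loop described above, under simplifying assumptions: a genetic algorithm proposes candidate placements of VMs onto racks, and a toy cost model stands in for TARA's lightweight MapReduce simulator. All names and the cost model below are illustrative, not taken from the paper.

    ```python
    # Sketch of a "what if" allocation search in the spirit of TARA: a
    # genetic algorithm proposes candidate placements and a lightweight
    # predictor scores each one. The cost model (cross-rack traffic
    # penalty) is a hypothetical stand-in for TARA's MapReduce simulator.
    import random

    NUM_RACKS = 10
    VMS_NEEDED = 16

    def predict_job_time(placement):
        """Hypothetical predictor: jobs finish faster when VMs share racks,
        because shuffle traffic stays within a rack."""
        rack_counts = {}
        for rack in placement:
            rack_counts[rack] = rack_counts.get(rack, 0) + 1
        intra = sum(c * (c - 1) for c in rack_counts.values())
        total = VMS_NEEDED * (VMS_NEEDED - 1)
        cross_rack_fraction = 1.0 - intra / total
        return 100.0 + 400.0 * cross_rack_fraction      # seconds (toy model)

    def mutate(placement):
        p = list(placement)
        p[random.randrange(len(p))] = random.randrange(NUM_RACKS)
        return p

    def crossover(a, b):
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    def search(generations=200, pop_size=50):
        pop = [[random.randrange(NUM_RACKS) for _ in range(VMS_NEEDED)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=predict_job_time)
            survivors = pop[:pop_size // 2]
            children = [mutate(crossover(random.choice(survivors),
                                         random.choice(survivors)))
                        for _ in range(pop_size - len(survivors))]
            pop = survivors + children
        return min(pop, key=predict_job_time)

    if __name__ == "__main__":
        best = search()
        print(best, predict_job_time(best))
    ```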

  • Dong Yin, Deepak Unnikrishnan, Yong Liao, Lixin Gao, and Russell Tessier

    Recent FPGA-based implementations of network virtualization represent a significant step forward in network performance and scalability. Although these systems have been shown to provide orders of magnitude higher performance than solutions using software-based routers, straightforward reconfiguration of hardware-based virtual networks over time is a challenge. In this paper, we present the implementation of a reconfigurable network virtualization substrate that combines several partially-reconfigurable hardware virtual routers with software virtual routers. The update of hardware-based virtual networks in our system is supported via real-time partial FPGA reconfiguration. Hardware virtual networks can be dynamically reconfigured in a fraction of a second without affecting other virtual networks operating in the same FPGA. A heuristic has been developed to allocate virtual networks with diverse bandwidth requirements and network characteristics on this heterogeneous virtualization substrate. Experimental results show that the reconfigurable virtual routers can forward packets at line rate. Partial reconfiguration allows for 20x faster hardware reconfiguration than a previous approach which migrated hardware virtual networks to software.

  • Yong He, Ji Fang, Jiansong Zhang, Haichen Shen, Kun Tan, and Yongguang Zhang

    This demonstration shows a novel virtualization architecture, called Multi-Protocol Access Point (MPAP), which exploits software radio technology to virtualize multiple heterogeneous wireless standards on a single radio hardware platform. The basic idea is to deploy a wide-band radio front-end to receive radio signals from all wireless standards sharing the same spectrum band, and to use separate software basebands to demodulate the information stream of each wireless standard. Based on software radio, MPAP consolidates multiple wireless devices into a single hardware platform, allowing them to share common general-purpose computing resources. Different software basebands can easily communicate and coordinate via a software coordinator and coexist better with one another. As one example, we demonstrate the use of non-contiguous OFDM in the 802.11g PHY to avoid mutual interference with narrow-band ZigBee communication.

  • Keon Jang, Sangjin Han, Seungyeop Han, Sue Moon, and KyoungSoo Park

    SSL/TLS is a standard protocol for secure Internet communication. Despite its great success, today’s SSL deployment is largely limited to security-critical domains. The low adoption rate of SSL is mainly due to high computation overhead on the server side.

    In this paper, we propose Graphics Processing Units (GPUs) as a new source of computing power to reduce the server-side overhead. We have designed and implemented an SSL proxy that opportunistically offloads cryptographic operations to GPUs. The evaluation results show that our GPU implementation of cryptographic operations, RSA, AES, and HMAC-SHA1, achieves high throughput while keeping the latency low. The SSL proxy significantly boosts the throughput of SSL transactions, handling 21.5K SSL transactions per second, and has comparable response time even when overloaded.

  • Binbin Chen, Ziling Zhou, Yuda Zhao, and Haifeng Yu

    Motivated by recent emerging systems that can leverage partially correct packets in wireless networks, this paper investigates the novel concept of error estimating codes (EEC). Without correcting the errors in the packet, EEC enables the receiver of the packet to estimate the packet’s bit error rate, which is perhaps the most important meta-information of a partially correct packet. Our EEC algorithm provides provable estimation quality, with rather low redundancy and computational overhead. To demonstrate the utility of EEC, we exploit and implement EEC in two wireless network applications, Wi-Fi rate adaptation and real-time video streaming. Our real-world experiments show that these applications can significantly benefit from EEC.
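
    The following is a simplified, single-level sketch of the idea behind error estimating codes: the sender attaches parity bits, each computed over a small random group of data bits, and the receiver inverts the relation between the parity-failure rate and the per-bit error rate to estimate the packet's BER. The real EEC construction is multi-level and also protects the parity bits themselves; group sizes and seeds here are illustrative.

    ```python
    # Simplified single-level sketch of BER estimation from parity samples,
    # in the spirit of error estimating codes. Parity bits are assumed to
    # arrive error-free here, which the real construction does not assume.
    import random

    def make_parities(bits, group_size, num_groups, seed=42):
        rng = random.Random(seed)
        groups = [rng.sample(range(len(bits)), group_size)
                  for _ in range(num_groups)]
        parities = [sum(bits[i] for i in g) % 2 for g in groups]
        return groups, parities

    def estimate_ber(received_bits, groups, sent_parities):
        # Fraction of parity groups whose check fails at the receiver.
        fails = sum(1 for g, p in zip(groups, sent_parities)
                    if sum(received_bits[i] for i in g) % 2 != p)
        q = fails / len(groups)
        # Invert q = (1 - (1 - 2p)^k) / 2 for the per-bit error rate p.
        k = len(groups[0])
        base = max(1.0 - 2.0 * q, 1e-9)
        return (1.0 - base ** (1.0 / k)) / 2.0

    if __name__ == "__main__":
        rng = random.Random(0)
        true_ber = 0.02
        data = [rng.randrange(2) for _ in range(10000)]
        groups, parities = make_parities(data, group_size=8, num_groups=256)
        received = [b ^ (1 if rng.random() < true_ber else 0) for b in data]
        print("estimated BER:", estimate_ber(received, groups, parities))
    ```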

  • Sayandeep Sen, Syed Gilani, Shreesha Srinath, Stephen Schmitt, and Suman Banerjee

    All practical wireless communication systems are prone to errors. At the symbol level such wireless errors have a well-defined structure: when a receiver decodes a symbol erroneously, it is more likely that the decoded symbol is a good “approximation” of the transmitted symbol than a randomly chosen symbol among all possible transmitted symbols. Based on this property, we define approximate communication, a method that exploits this error structure to natively provide unequal error protection to data bits. Unlike traditional (FEC-based) mechanisms of unequal error protection that consume additional network and spectrum resources to encode redundant data, the approximate communication technique achieves this property at the PHY layer without consuming any additional network or spectrum resources (apart from a minimal signaling overhead). Approximate communication is particularly useful to media delivery applications that can benefit significantly from unequal error protection of data bits. We show the usefulness of this method to such applications by designing and implementing an end-to-end media delivery system, called Apex. Our Software Defined Radio (SDR)-based experiments reveal that Apex can improve video quality by 5 to 20 dB (PSNR) across a diverse set of wireless conditions, when compared to traditional approaches. We believe that mechanisms such as Apex can be a cornerstone in designing future wireless media delivery systems under any error-prone channel condition.
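
    The Monte Carlo sketch below illustrates the symbol-level error structure the abstract relies on: with a Gray-mapped 4-PAM constellation (one dimension of 16-QAM), decoding errors mostly land on neighboring points, so the two bit positions in a symbol see different error rates. The constellation, noise level, and mapping are illustrative and are not taken from the Apex design.

    ```python
    # Monte Carlo sketch of the per-bit-position error structure that
    # approximate communication exploits: with Gray-mapped 4-PAM, symbol
    # errors mostly hit neighboring points, so the two bit positions see
    # different error rates. Noise level and mapping are illustrative.
    import random

    GRAY = {(0, 0): -3.0, (0, 1): -1.0, (1, 1): 1.0, (1, 0): 3.0}
    INV = {v: k for k, v in GRAY.items()}
    LEVELS = sorted(INV)

    def nearest(y):
        return min(LEVELS, key=lambda lv: abs(lv - y))

    def simulate(num_symbols=200000, noise_std=0.8, seed=1):
        rng = random.Random(seed)
        errs = [0, 0]                       # errors per bit position
        for _ in range(num_symbols):
            bits = (rng.randrange(2), rng.randrange(2))
            y = GRAY[bits] + rng.gauss(0.0, noise_std)
            decoded = INV[nearest(y)]
            for pos in range(2):
                if decoded[pos] != bits[pos]:
                    errs[pos] += 1
        return [e / num_symbols for e in errs]

    if __name__ == "__main__":
        msb_ber, lsb_ber = simulate()
        print("BER of bit position 0 (better protected):", msb_ber)
        print("BER of bit position 1:", lsb_ber)
    ```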

  • Myungjin Lee, Nick Duffield, and Ramana Rao Kompella

    New applications such as algorithmic trading and high-performance computing require extremely low latency (in microseconds). Network operators today lack sufficient fine-grain measurement tools to detect, localize and repair performance anomalies and delay spikes that cause application SLA violations. A recently proposed solution called LDA provides a scalable way to obtain latency, but only provides aggregate measurements. However, debugging application-specific problems requires per-flow measurements, since different flows may exhibit significantly different characteristics even when they are traversing the same link. To enable fine-grained per-flow measurements in routers, we propose a new scalable architecture called reference latency interpolation (RLI) that is based on our observation that packets potentially belonging to different flows that are closely spaced to each other exhibit similar delay properties. In our evaluation using simulations over real traces, we show that RLI achieves a median relative error of 12% and one to two orders of magnitude higher accuracy than previous per-flow measurement solutions with small overhead.
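
    A minimal sketch of the interpolation step suggested by the abstract: the delay of an ordinary packet is estimated by linearly interpolating between the measured delays of the two reference packets that bracket it in time, and the estimates are then aggregated per flow. How reference delays are obtained, and RLI's actual estimator details, are outside this sketch.

    ```python
    # Sketch of reference-based latency interpolation: ordinary packets get
    # delay estimates interpolated between the two reference packets that
    # bracket them in arrival time, then estimates are aggregated per flow.
    # Reference delay measurement itself is assumed to be given.
    import bisect
    from collections import defaultdict

    def interpolate_delay(t, ref_times, ref_delays):
        """Linear interpolation of delay at time t between bracketing refs."""
        i = bisect.bisect_left(ref_times, t)
        if i == 0:
            return ref_delays[0]
        if i == len(ref_times):
            return ref_delays[-1]
        t0, t1 = ref_times[i - 1], ref_times[i]
        d0, d1 = ref_delays[i - 1], ref_delays[i]
        w = (t - t0) / (t1 - t0)
        return d0 + w * (d1 - d0)

    def per_flow_latency(packets, ref_times, ref_delays):
        """packets: iterable of (flow_id, arrival_time)."""
        sums = defaultdict(float)
        counts = defaultdict(int)
        for flow, t in packets:
            sums[flow] += interpolate_delay(t, ref_times, ref_delays)
            counts[flow] += 1
        return {f: sums[f] / counts[f] for f in sums}

    if __name__ == "__main__":
        refs_t = [0.0, 1.0, 2.0, 3.0]          # reference arrival times (ms)
        refs_d = [0.10, 0.30, 0.20, 0.25]      # measured reference delays (ms)
        pkts = [("flowA", 0.5), ("flowA", 1.5), ("flowB", 2.5)]
        print(per_flow_latency(pkts, refs_t, refs_d))
    ```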

  • Kai Chen, Chuanxiong Guo, Haitao Wu, Jing Yuan, Zhenqian Feng, Yan Chen, Songwu Lu, and Wenfei Wu

    Data center networks encode locality and topology information into their server and switch addresses for performance and routing purposes. For this reason, traditional address configuration protocols such as DHCP require a huge amount of manual input, making them error-prone.

    In this paper, we present DAC, a generic and automatic Data center Address Configuration system. With an automatically generated blueprint, which defines the connections of servers and switches labeled by logical IDs (e.g., IP addresses), DAC first learns the physical topology labeled by device IDs (e.g., MAC addresses). At the core of DAC are its device-to-logical ID mapping and its malfunction detection. DAC abstracts the device-to-logical ID mapping as a graph isomorphism problem and solves it with low time complexity by leveraging the attributes of data center network topologies. Its malfunction detection scheme detects errors such as device and link failures and miswirings, including the most difficult case where miswirings do not cause any node degree change.

    We have evaluated DAC via simulation, implementation and experiments. Our simulation results show that DAC can accurately find all the hardest-to-detect malfunctions and can autoconfigure a large data center with 3.8 million devices in 46 seconds. In our implementation, we successfully autoconfigure a small 64-server BCube network within 300 milliseconds and show that DAC is a viable solution for data center autoconfiguration.
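
    To make the mapping step concrete, the sketch below frames device-to-logical ID assignment as graph isomorphism between the learned physical topology and the blueprint, using networkx's generic VF2 matcher (assumed installed). DAC itself uses a custom low-complexity algorithm that exploits data center topology structure; a generic matcher like this would not scale to millions of devices.

    ```python
    # Sketch of the device-to-logical ID mapping as graph isomorphism,
    # using networkx's generic VF2 matcher for clarity. The topologies and
    # identifiers below are illustrative.
    import networkx as nx
    from networkx.algorithms import isomorphism

    def build_blueprint():
        """Logical topology labeled by logical IDs (e.g., IP addresses)."""
        g = nx.Graph()
        g.add_edges_from([("10.0.0.1", "10.0.1.1"), ("10.0.0.1", "10.0.1.2"),
                          ("10.0.0.2", "10.0.1.1"), ("10.0.0.2", "10.0.1.2")])
        return g

    def build_physical():
        """Learned physical topology labeled by device IDs (e.g., MACs)."""
        g = nx.Graph()
        g.add_edges_from([("aa:01", "bb:01"), ("aa:01", "bb:02"),
                          ("aa:02", "bb:01"), ("aa:02", "bb:02")])
        return g

    def device_to_logical(physical, blueprint):
        gm = isomorphism.GraphMatcher(physical, blueprint)
        if gm.is_isomorphic():
            return dict(gm.mapping)      # device ID -> logical ID
        return None                      # malfunction or miswiring suspected

    if __name__ == "__main__":
        print(device_to_logical(build_physical(), build_blueprint()))
    ```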

  • Hussam Abu-Libdeh, Paolo Costa, Antony Rowstron, Greg O'Shea, and Austin Donnelly

    Building distributed applications that run in data centers is hard. The CamCube project explores the design of a shipping-container-sized data center with the goal of providing an easier platform on which to build these applications. CamCube replaces the traditional switch-based network with a 3D torus topology, with each server directly connected to six other servers. As in other proposals, e.g., DCell and BCube, multi-hop routing in CamCube requires servers to participate in packet forwarding. To date, as in existing data centers, these approaches have all provided a single routing protocol for the applications.

    In this paper we explore whether allowing applications to implement their own routing services is advantageous, and whether we can support it efficiently. This is based on the observation that, due to the flexibility offered by the CamCube API, many applications implemented their own routing protocol in order to achieve specific application-level characteristics, such as trading off higher latency for better path convergence. Using large-scale simulations we demonstrate the benefits and network-level impact of running multiple routing protocols. We demonstrate that applications are more efficient and do not generate additional control traffic overhead. This motivates us to design an extended routing service allowing easy implementation of application-specific routing protocols on CamCube. Finally, we demonstrate that the additional performance overhead incurred when using the extended routing service on a prototype CamCube is very low.

  • Mohammad Alizadeh, Albert Greenberg, David A. Maltz, Jitendra Padhye, Parveen Patel, Balaji Prabhakar, Sudipta Sengupta, and Murari Sridharan

    Cloud data centers host diverse applications, mixing workloads that require small predictable latency with others requiring large sustained throughput. In this environment, today’s state-of-the-art TCP protocol falls short. We present measurements of a 6000-server production cluster and reveal impairments that lead to high application latencies, rooted in TCP’s demands on the limited buffer space available in data center switches. For example, bandwidth-hungry “background” flows build up queues at the switches, and thus impact the performance of latency-sensitive “foreground” traffic.

    To address these problems, we propose DCTCP, a TCP-like protocol for data center networks. DCTCP leverages Explicit Congestion Notification (ECN) in the network to provide multi-bit feedback to the end hosts. We evaluate DCTCP at 1 and 10Gbps speeds using commodity, shallow buffered switches. We find DCTCP delivers the same or better throughput than TCP, while using 90% less buffer space. Unlike TCP, DCTCP also provides high burst tolerance and low latency for short flows. In handling workloads derived from operational measurements, we found DCTCP enables the applications to handle 10X the current background traffic, without impacting foreground traffic. Further, a 10X increase in foreground traffic does not cause any timeouts, thus largely eliminating incast problems.
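
    A sketch of the sender-side reaction implied by the abstract: the sender maintains a running estimate of the fraction of ECN-marked packets and cuts its window in proportion to that estimate, rather than halving it as TCP does. The gain and initial values below are illustrative.

    ```python
    # Sketch of DCTCP's sender-side reaction to ECN marks: estimate the
    # fraction of marked packets (alpha) with an EWMA and cut the
    # congestion window in proportion to it. Gain and initial values are
    # illustrative; switch-side marking is not modeled here.
    class DctcpSender:
        def __init__(self, cwnd=10.0, gain=1.0 / 16.0):
            self.cwnd = cwnd      # congestion window (packets)
            self.alpha = 0.0      # running estimate of marked fraction
            self.gain = gain      # EWMA gain g

        def on_window_acked(self, acked, marked):
            """Called once per window of ACKed packets, `marked` of which
            carried ECN congestion marks."""
            frac = marked / acked if acked else 0.0
            self.alpha = (1.0 - self.gain) * self.alpha + self.gain * frac
            if marked > 0:
                # Scale the cut with the extent of congestion, unlike
                # TCP's fixed halving.
                self.cwnd = max(1.0, self.cwnd * (1.0 - self.alpha / 2.0))
            else:
                self.cwnd += 1.0  # additive increase per window

    if __name__ == "__main__":
        s = DctcpSender()
        for marked in [0, 0, 3, 5, 0, 2]:
            s.on_window_acked(acked=10, marked=marked)
            print(round(s.cwnd, 2), round(s.alpha, 3))
    ```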

  • Craig Labovitz, Scott Iekel-Johnson, Danny McPherson, Jon Oberheide, and Farnam Jahanian

    In this paper, we examine changes in Internet inter-domain traffic demands and interconnection policies. We analyze more than 200 Exabytes of commercial Internet traffic over a two year period through the instrumentation of 110 large and geographically diverse cable operators, international transit backbones, regional networks and content providers. Our analysis shows significant changes in inter-AS traffic patterns and an evolution of provider peering strategies. Specifically, we find the majority of inter-domain traffic by volume now flows directly between large content providers, data center / CDNs and consumer networks. We also show significant changes in Internet application usage, including a global decline of P2P and a significant rise in video traffic. We conclude with estimates of the current size of the Internet by inter-domain traffic volume and rate of annualized inter-domain traffic growth.

  • Sharon Goldberg, Michael Schapira, Peter Hummon, and Jennifer Rexford

    In response to high-profile Internet outages, BGP security variants have been proposed to prevent the propagation of bogus routing information. To inform discussions of which variant should be deployed in the Internet, we quantify the ability of the main protocols (origin authentication, soBGP, S-BGP, and data-plane verification) to blunt traffic-attraction attacks, in which an attacker deliberately attracts traffic in order to drop, tamper with, or eavesdrop on packets.

    Intuition suggests that an attacker can maximize the traffic he attracts by widely announcing a short path that is not flagged as bogus by the secure protocol. Through simulations on an empirically-determined AS-level topology, we show that this strategy is surprisingly effective, even when the network uses an advanced security solution like S-BGP or data-plane verification. Worse yet, we show that these results underestimate the severity of attacks. We prove that finding the most damaging strategy is NP-hard, and show how counterintuitive strategies, like announcing longer paths, announcing to fewer neighbors, or triggering BGP loop-detection, can be used to attract even more traffic than the strategy above. These counterintuitive examples are not merely hypothetical; we searched the empirical AS topology to identify specific ASes that can launch them. Finally, we find that a clever export policy can often attract almost as much traffic as a bogus path announcement. Thus, our work implies that mechanisms that police export policies (e.g., defensive filtering) are crucial, even if S-BGP is fully deployed.

  • Xue Cai and John Heidemann

    Although the Internet is widely used today, we have little information about the edge of the network. Decentralized management, firewalls, and sensitivity to probing prevent easy answers and make measurement difficult. Building on frequent ICMP probing of 1% of the Internet address space, we develop clustering and analysis methods to estimate how Internet addresses are used. We show that adjacent addresses often have similar characteristics and are used for similar purposes (61% of addresses we probe are consistent blocks of 64 neighbors or more). We then apply this block-level clustering to provide data to explore several open questions in how networks are managed. First, we provide information about how effectively network address blocks appear to be used, finding that a significant number of blocks are only lightly used (most addresses in about one-fifth of /24 blocks are in use less than 10% of the time), an important issue as the IPv4 address space nears full allocation. Second, we provide new measurements about dynamically managed address space, showing nearly 40% of /24 blocks appear to be dynamically allocated, and dynamic addressing is most widely used in countries that joined the Internet more recently (more than 80% in China, while less than 30% in the U.S.). Third, we distinguish blocks with low-bitrate last-hops and show that such blocks are often underutilized.

  • Tomas Isdal, Michael Piatek, Arvind Krishnamurthy, and Thomas Anderson

    Privacy, the protection of information from unauthorized disclosure, is increasingly scarce on the Internet. The lack of privacy is particularly acute for popular peer-to-peer data sharing applications such as BitTorrent, where user behavior is easily monitored by third parties. Anonymizing overlays such as Tor and Freenet can improve user privacy, but only at a cost of substantially reduced performance. Most users are caught in the middle, unwilling to sacrifice either privacy or performance.

    In this paper, we explore a new design point in this tradeoff between privacy and performance. We describe the design and implementation of a new P2P data sharing protocol, called OneSwarm, that provides users much better privacy than BitTorrent and much better performance than Tor or Freenet. A key aspect of the OneSwarm design is that users have explicit configurable control over the amount of trust they place in peers and in the sharing model for their data: the same data can be shared publicly, anonymously, or with access control, with both trusted and untrusted peers. OneSwarm’s novel lookup and transfer techniques yield a median factor of 3.4 improvement in download times relative to Tor and a factor of 6.9 improvement relative to Freenet. OneSwarm is publicly available and has been downloaded by hundreds of thousands of users since its release.

  • Frank McSherry and Ratul Mahajan

    We consider the potential for network trace analysis while providing the guarantees of “differential privacy.” While differential privacy provably obscures the presence or absence of individual records in a dataset, it has two major limitations: analyses must (presently) be expressed in a higher level declarative language; and the analysis results are randomized before being returned to the analyst.

    We report on our experiences conducting a diverse set of analyses in a differentially private manner. We are able to express all of our target analyses, though for some of them an approximate expression is required to keep the error level low. By running these analyses on real datasets, we find that the error introduced for the sake of privacy is often (but not always) low even at high levels of privacy. We factor our learning into a toolkit that will likely be useful for other analyses. Overall, we conclude that differential privacy shows promise for a broad class of network analyses.

  • Michael E. Kounavis, Xiaozhu Kang, Ken Grewal, Mathew Eszenyi, Shay Gueron, and David Durham

    End-to-end communication encryption is considered necessary for protecting the privacy of user data in the Internet. Only a small fraction of all Internet traffic, however, is protected today. The primary reason for this neglect is economic, mainly security protocol speed and cost. In this paper we argue that recent advances in the implementation of cryptographic algorithms can make general purpose processors capable of encrypting packets at line rates. This implies that the Internet can be gradually transformed to an information delivery infrastructure where all traffic is encrypted and authenticated. We justify our claim by presenting technologies that accelerate end-to-end encryption and authentication by a factor of 6 and a high performance TLS 1.2 protocol implementation that takes advantage of these innovations. Our implementation is available in the public domain for experimentation.

  • Kun Tan, Ji Fang, Yuanyang Zhang, Shouyuan Chen, Lixin Shi, Jiansong Zhang, and Yongguang Zhang

    Modern communication technologies are steadily advancing the physical layer (PHY) data rate in wireless LANs, from hundreds of Mbps in current 802.11n to over Gbps in the near future. As PHY data rates increase, however, the overhead of media access control (MAC) progressively degrades data throughput efficiency. This trend reflects a fundamental aspect of the current MAC protocol, which allocates the channel as a single resource at a time.

    This paper argues that, in a high data rate WLAN, the channel should be divided into separate subchannels whose width is commensurate with the PHY data rate and typical frame size. Multiple stations can then contend for and use subchannels simultaneously according to their traffic demands, thereby increasing overall efficiency. We introduce FICA, a fine-grained channel access method that embodies this approach to media access using two novel techniques. First, FICA proposes a new PHY architecture based on OFDM that retains orthogonality among subchannels while relying solely on the coordination mechanisms in existing WLANs: carrier sensing and broadcasting. Second, FICA employs a frequency-domain contention method that uses physical layer RTS/CTS signaling and frequency-domain backoff to efficiently coordinate subchannel access. We have implemented FICA, both MAC and PHY layers, using a software radio platform, and our experiments demonstrate the feasibility of the FICA design. Further, our simulation results suggest FICA can improve the efficiency ratio of WLANs by up to 400% compared to existing 802.11.

  • Daniel Halperin, Wenjun Hu, Anmol Sheth, and David Wetherall

    RSSI is known to be a fickle indicator of whether a wireless link will work, for many reasons. This greatly complicates operation because it requires testing and adaptation to find the best rate, transmit power or other parameter that is tuned to boost performance. We show, for the first time, that wireless packet delivery can be accurately predicted for commodity 802.11 NICs from only the channel measurements that they provide. Our model uses 802.11n Channel State Information measurements as input to an OFDM receiver model that we develop using the concept of effective SNR. It is simple, easy to deploy, broadly useful, and accurate. It makes packet delivery predictions for 802.11a/g SISO rates and 802.11n MIMO rates, plus choices of transmit power and antennas. We report testbed experiments that show narrow transition regions.
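
    The sketch below illustrates the effective-SNR idea: per-subcarrier SNRs derived from CSI are mapped to per-subcarrier bit error rates, averaged, and mapped back to the SNR of a flat channel with the same average BER; delivery at a given rate is then predicted by comparing the effective SNR against that rate's threshold. BPSK is used for illustration and the delivery threshold is hypothetical.

    ```python
    # Sketch of an effective-SNR computation from per-subcarrier CSI:
    # convert each subcarrier SNR to a BER for the modulation in use,
    # average the BERs, and invert back to the SNR of a flat channel with
    # the same average BER. BPSK and the 7 dB threshold are illustrative.
    import math

    def ber_bpsk(snr_linear):
        # Q(sqrt(2*SNR)) expressed via erfc.
        return 0.5 * math.erfc(math.sqrt(snr_linear))

    def snr_from_ber_bpsk(ber, lo=1e-4, hi=1e4):
        # Numeric inverse of ber_bpsk by bisection (BER decreases with SNR).
        for _ in range(100):
            mid = (lo + hi) / 2.0
            if ber_bpsk(mid) > ber:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2.0

    def effective_snr_db(subcarrier_snrs_db):
        bers = [ber_bpsk(10.0 ** (s / 10.0)) for s in subcarrier_snrs_db]
        avg_ber = sum(bers) / len(bers)
        return 10.0 * math.log10(snr_from_ber_bpsk(avg_ber))

    if __name__ == "__main__":
        csi_snrs_db = [12.0, 3.0, 8.0, 15.0, 5.0, 10.0]  # per-subcarrier SNRs
        eff = effective_snr_db(csi_snrs_db)
        print("effective SNR: %.1f dB, predict delivery: %s"
              % (eff, eff > 7.0))                        # threshold is illustrative
    ```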

  • Hariharan Rahul, Haitham Hassanieh, and Dina Katabi

    Diversity is an intrinsic property of wireless networks. Recent years have witnessed the emergence of many distributed protocols like ExOR, MORE, SOAR, SOFT, and MIXIT that exploit receiver diversity in 802.11-like networks. In contrast, the dual of receiver diversity, sender diversity, has remained largely elusive to such networks.

    This paper presents SourceSync, a distributed architecture for harnessing sender diversity. SourceSync enables concurrent senders to synchronize their transmissions to symbol boundaries, and cooperate to forward packets at higher data rates than they could have achieved by transmitting separately. The paper shows that SourceSync improves the performance of opportunistic routing protocols. Specifically, SourceSync allows all nodes that overhear a packet in a wireless mesh to simultaneously transmit it to their nexthops, in contrast to existing opportunistic routing protocols that are forced to pick a single forwarder from among the overhearing nodes. Such simultaneous transmission reduces bit errors and improves throughput. The paper also shows that SourceSync increases the throughput of 802.11 last hop diversity protocols by allowing multiple APs to transmit simultaneously to a client, thereby harnessing sender diversity. We have implemented SourceSync on the FPGA of an 802.11-like radio platform. We have also evaluated our system in an indoor wireless testbed, empirically showing its benefits.

  • Muhammad Bilal Anwer, Murtaza Motiwala, Mukarram bin Tariq, and Nick Feamster

    We present SwitchBlade, a platform for rapidly deploying custom protocols on programmable hardware. SwitchBlade uses a pipeline-based design that allows individual hardware modules to be enabled or disabled on the fly, integrates software exception handling, and provides support for forwarding based on custom header fields. SwitchBlade's ease of programmability and wire-speed performance enables rapid prototyping of custom data-plane functions that can be directly deployed in a production network. SwitchBlade integrates common packet-processing functions as hardware modules, enabling different protocols to use these functions without having to resynthesize hardware. SwitchBlade's customizable forwarding engine supports both longest-prefix matching in the packet header and exact matching on a hash value. SwitchBlade's software exceptions can be invoked based on either packet or flow-based rules and updated quickly at runtime, thus making it easy to integrate more flexible forwarding functions into the pipeline. SwitchBlade also allows multiple custom data planes to operate in parallel on the same physical hardware, while providing complete isolation for protocols running in parallel. We implemented SwitchBlade on the NetFPGA board, but SwitchBlade can be implemented with any FPGA. To demonstrate SwitchBlade's flexibility, we use SwitchBlade to implement and evaluate a variety of custom network protocols: we present instances of IPv4, IPv6, Path Splicing, and an OpenFlow switch, all running in parallel while forwarding packets at line rate.

  • Sangjin Han, Keon Jang, KyoungSoo Park, and Sue Moon

    We present PacketShader, a high-performance software router framework for general packet processing with Graphics Processing Unit (GPU) acceleration. PacketShader exploits the massively-parallel processing power of GPU to address the CPU bottleneck in current software routers. Combined with our high-performance packet I/O engine, PacketShader outperforms existing software routers by more than a factor of four, forwarding 64B IPv4 packets at 39 Gbps on a single commodity PC. We have implemented IPv4 and IPv6 forwarding, OpenFlow switching, and IPsec tunneling to demonstrate the flexibility and performance advantage of PacketShader. The evaluation results show that GPU brings significantly higher throughput over the CPU-only implementation, confirming the effectiveness of GPU for computation and memory-intensive operations in packet processing.

  • Balajee Vamanan, Gwendolyn Voskuilen, and T. N. Vijaykumar

    Packet classification is a key function of modern routers. Previous decision-tree algorithms, HiCuts and HyperCuts, cut the multi-dimensional rule space to separate a classifier's rules. Despite their optimizations, the algorithms incur considerable memory overhead due to two issues: (1) many rules in a classifier overlap, and the overlapping rules vary vastly in size, causing the algorithms' fine cuts for separating the small rules to replicate the large rules; (2) because a classifier's rule-space density varies significantly, the algorithms' equi-sized cuts for separating the dense parts needlessly partition the sparse parts, resulting in many ineffectual nodes that hold only a few rules. We propose EffiCuts, which employs four novel ideas. (1) Separable trees: to eliminate overlap among small and large rules, we separate all small and large rules. We define a subset of rules to be separable if all the rules are either small or large in each dimension. We build a distinct tree for each such subset, where each dimension can be cut coarsely to separate the large rules, or finely to separate the small rules without incurring replication. (2) Selective tree merging: to reduce the extra accesses of multiple trees, which degrade throughput, we selectively merge separable trees mixing rules that may be small or large in at most one dimension. (3) Equi-dense cuts: we employ unequal cuts which distribute a node's rules evenly among the children, avoiding ineffectual nodes at the cost of a small processing overhead in the tree traversal. (4) Node co-location: to achieve fewer accesses per node than HiCuts and HyperCuts, we co-locate parts of a node and its children. Using ClassBench, we show that for similar throughput EffiCuts needs a factor of 57 less memory than HyperCuts and a factor of 4 to 8 less power than a TCAM.

  • Franck Le, Geoffrey G. Xie, and Hui Zhang

    Recent studies have shown that the current primitives for connecting multiple routing protocol instances (OSPF 1, OSPF 2, EIGRP 10, etc.) are pervasively deployed in enterprise networks and the Internet. Furthermore, these primitives are extremely vulnerable to routing anomalies (route oscillations, forwarding loops, etc.) and at the same time too rigid to support some of today’s operational objectives. In this paper, we propose a new theory to reason about routing properties across multiple routing instances. The theory directly applies to both link-state and vector routing protocols. Each routing protocol still makes independent routing decisions and may consider a combination of routing metrics, including bandwidth, delay, cost, and reliability. While the theory permits a range of solutions, we focus on a design that requires no changes to existing routing protocols. Guided by the theory, we derive a new set of connecting primitives, which are not only provably safe but also more expressive than the current version. We have implemented and validated the new primitives using XORP. The results confirm that our design can support a large range of desirable operational goals, including those not achievable today, safely and with little manual configuration.

  • Patrick Wendell, Joe Wenjie Jiang, Michael J. Freedman, and Jennifer Rexford

    Geo-replicated services need an effective way to direct client requests to a particular location, based on performance, load, and cost. This paper presents DONAR, a distributed system that can offload the burden of replica selection, while providing these services with a sufficiently expressive interface for specifying mapping policies. Most existing approaches for replica selection rely on either central coordination (which has reliability, security, and scalability limitations) or distributed heuristics (which lead to suboptimal request distributions, or even instability). In contrast, the distributed mapping nodes in DONAR run a simple, efficient algorithm to coordinate their replica-selection decisions for clients. The protocol solves an optimization problem that jointly considers both client performance and server load, allowing us to show that the distributed algorithm is stable and effective. Experiments with our DONAR prototype—providing replica selection for CoralCDN and the Measurement Lab—demonstrate that our algorithm performs well “in the wild.” Our prototype supports DNS- and HTTP-based redirection, IP anycast, and a secure update protocol, and can handle many customer services with diverse policy objectives.

  • Mohammad Hajjat, Xin Sun, Yu-Wei Eric Sung, David Maltz, Sanjay Rao, Kunwadee Sripanidkulchai, and Mohit Tawarmalani

    In this paper, we tackle challenges in migrating enterprise services into hybrid cloud-based deployments, where enterprise operations are partly hosted on-premise and partly in the cloud. Such hybrid architectures enable enterprises to benefit from cloud-based architectures, while honoring application performance requirements, and privacy restrictions on what services may be migrated to the cloud. We make several contributions. First, we highlight the complexity inherent in enterprise applications today in terms of their multi-tiered nature, large number of application components, and interdependencies. Second, we have developed a model to explore the benefits of a hybrid migration approach. Our model takes into account enterprise-specific constraints, cost savings, and increased transaction delays and wide-area communication costs that may result from the migration. Evaluations based on real enterprise applications and Azure-based cloud deployments show the benefits of a hybrid migration approach, and the importance of planning which components to migrate. Third, we provide insight into security policies associated with enterprise applications in data centers. We articulate the importance of ensuring assurable reconfiguration of security policies as enterprise applications are migrated to the cloud. We present algorithms to achieve this goal, and demonstrate their efficacy on realistic migration scenarios.

  • Xin Liu, Xiaowei Yang, and Yong Xia

    Denial of Service (DoS) attacks frequently happen on the Internet, paralyzing Internet services and causing millions of dollars of financial loss. This work presents NetFence, a scalable DoS-resistant network architecture. NetFence uses a novel mechanism, secure congestion policing feedback, to enable robust congestion policing inside the network. Bottleneck routers update the feedback in packet headers to signal congestion, and access routers use it to police senders’ traffic. Targeted DoS victims can use the secure congestion policing feedback as capability tokens to suppress unwanted traffic. When compromised senders and receivers organize into pairs to congest a network link, NetFence provably guarantees a legitimate sender its fair share of network resources without keeping per-host state at the congested link. We use a Linux implementation, ns-2 simulations, and theoretical analysis to show that NetFence is an effective and scalable DoS solution: it reduces the amount of state maintained by a congested router from per-host to at most per-(Autonomous System).

  • Fernando Silveira, Christophe Diot, Nina Taft, and Ramesh Govindan

    When many flows are multiplexed on a non-saturated link, their volume changes over short timescales tend to cancel each other out, making the average change across flows close to zero. This equilibrium property holds if the flows are nearly independent, and it is violated by traffic changes caused by several, potentially small, correlated flows. Many traffic anomalies (both malicious and benign) fit this description. Based on this observation, we exploit equilibrium to design a computationally simple detection method for correlated anomalous flows. We compare our new method to two well known techniques on three network links. We manually classify the anomalies detected by the three methods, and discover that our method uncovers a different class of anomalies than previous techniques do.
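
    A minimal sketch of the equilibrium test described above: per-flow volume changes between consecutive time bins should roughly cancel when flows are independent, so a bin pair whose mean change deviates from zero by many standard errors is flagged. The bin width, threshold, and test form are illustrative simplifications of the paper's detector.

    ```python
    # Sketch of an equilibrium-based detector: compute per-flow volume
    # changes across two consecutive bins and flag the pair if the mean
    # change is many standard errors away from zero (a correlated change
    # across many flows). Bin width and threshold are illustrative.
    import math
    from collections import defaultdict

    def flow_volumes(packets, bin_start, bin_width):
        """packets: iterable of (flow_id, timestamp, bytes)."""
        vols = defaultdict(int)
        for flow, ts, size in packets:
            if bin_start <= ts < bin_start + bin_width:
                vols[flow] += size
        return vols

    def equilibrium_violation(packets, t0, bin_width, threshold=3.0):
        prev = flow_volumes(packets, t0, bin_width)
        curr = flow_volumes(packets, t0 + bin_width, bin_width)
        flows = set(prev) | set(curr)
        deltas = [curr.get(f, 0) - prev.get(f, 0) for f in flows]
        n = len(deltas)
        if n < 2:
            return False
        mean = sum(deltas) / n
        var = sum((d - mean) ** 2 for d in deltas) / (n - 1)
        std_err = math.sqrt(var / n) or 1.0
        return abs(mean) / std_err > threshold

    if __name__ == "__main__":
        pkts = [("f%d" % i, 0.5, 1000) for i in range(50)]           # bin 1
        pkts += [("f%d" % i, 1.5, 1300 + i) for i in range(50)]      # bin 2:
        print(equilibrium_violation(pkts, 0.0, 1.0))                 # many flows grow
    ```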

  • Zhichun Li, Gao Xia, Hongyu Gao, Yi Tang, Yan Chen, Bin Liu, Junchen Jiang, and Yuezhou Lv

    Accuracy and speed are the two most important metrics for Network Intrusion Detection/Prevention Systems (NIDS/NIPSes). Due to emerging polymorphic attacks and the fact that in many cases regular expressions (regexes) cannot capture the vulnerability conditions accurately, the accuracy of existing regex-based NIDS/NIPS systems has become a serious problem. In contrast, the recently-proposed vulnerability signatures [10, 29] (a.k.a. data patches) can exactly describe the vulnerability conditions and achieve better accuracy. However, how to efficiently apply vulnerability signatures to high speed NIDS/NIPS with a large ruleset remains an untouched but challenging issue.

    This paper presents the first systematic design of a vulnerability-signature-based parsing and matching engine, NetShield, which achieves multi-gigabit throughput while offering much better accuracy. In particular, we make the following contributions: (i) we propose a candidate selection algorithm which efficiently matches thousands of vulnerability signatures simultaneously while requiring a small amount of memory; (ii) we propose an automatic lightweight parsing state machine that achieves fast protocol parsing. Experimental results show that the core engine of NetShield achieves 1.9+ Gbps signature matching throughput on a 3.8 GHz single-core PC, and can scale up to 11+ Gbps on an 8-core machine for 794 HTTP vulnerability signatures. We release our prototype and sample signatures at www.nshield.org.

  • Ye Wang, Hao Wang, Ajay Mahimkar, Richard Alimi, Yin Zhang, Lili Qiu, and Yang Richard Yang

    Network resiliency is crucial to IP network operations. Existing techniques to recover from one or a series of failures do not offer performance predictability and may cause serious congestion. In this paper, we propose Resilient Routing Reconfiguration (R3), a novel routing protection scheme that is (i) provably congestion-free under a large number of failure scenarios; (ii) efficient by having low router processing overhead and memory requirements; (iii) flexible in accommodating different performance requirements (e.g., handling realistic failure scenarios, prioritized traffic, and the trade-off between performance and resilience); and (iv) robust to both topology failures and traffic variations. We implement R3 on Linux using a simple extension of MPLS, called MPLS-ff. We conduct extensive Emulab experiments and simulations using realistic network topologies and traffic demands. Our results show that R3 achieves near-optimal performance and is at least 50% better than the existing schemes under a wide range of failure scenarios.

  • Ajay Anil Mahimkar, Han Hee Song, Zihui Ge, Aman Shaikh, Jia Wang, Jennifer Yates, Yin Zhang, and Joanne Emmons

    Networks continue to change to support new applications, improve reliability and performance, and reduce operational cost. The changes are made to the network in the form of upgrades such as software or hardware upgrades, new network or service features, and network configuration changes. It is crucial to monitor the network when upgrades are made because they can have a significant impact on network performance and, if not monitored, may lead to unexpected consequences in operational networks. This can be achieved manually for a small number of devices, but does not scale to large networks with hundreds or thousands of routers and an extremely large number of different upgrades made on a regular basis.

    In this paper, we design and implement a novel infrastructure, MERCURY, for detecting the impact of network upgrades (or triggers) on performance. MERCURY extracts interesting triggers from a large number of network maintenance activities. It then identifies behavior changes in network performance caused by the triggers. It uses statistical rule mining and network configuration to identify commonality across the behavior changes. We systematically evaluate MERCURY using data collected at a large tier-1 ISP network. By comparing to operational practice, we show that MERCURY is able to capture the interesting triggers and behavior changes induced by the triggers. In some cases, MERCURY also discovers previously unknown network behaviors, demonstrating its effectiveness in identifying network conditions that fly under the radar.

  • Daniel Turner, Kirill Levchenko, Alex C. Snoeren, and Stefan Savage

    Of the major factors affecting end-to-end service availability, network component failure is perhaps the least well understood. How often do failures occur, how long do they last, what are their causes, and how do they impact customers? Traditionally, answering questions such as these has required dedicated (and often expensive) instrumentation broadly deployed across a network.

    We propose an alternative approach: opportunistically mining “low-quality” data sources that are already available in modern network environments. We describe a methodology for recreating a succinct history of failure events in an IP network using a combination of structured data (router configurations and syslogs) and semi-structured data (email logs). Using this technique we analyze over five years of failure events in a large regional network consisting of over 200 routers; to our knowledge, this is the largest study of its kind.

  • Guohui Wang, David G. Andersen, Michael Kaminsky, Konstantina Papagiannaki, T.S. Eugene Ng, Michael Kozuch, and Michael Ryan

    Data-intensive applications that operate on large volumes of data have motivated a fresh look at the design of data center networks. The first wave of proposals focused on designing pure packet-switched networks that provide full bisection bandwidth. However, these proposals significantly increase network complexity in terms of the number of links and switches required and the restrictive rules for wiring them up. On the other hand, optical circuit switching technology holds a very large bandwidth advantage over packet switching technology. This fact motivates us to explore how optical circuit switching technology could benefit a data center network. In particular, we propose a hybrid packet and circuit switched data center network architecture (or HyPaC for short) which augments the traditional hierarchy of packet switches with a high speed, low complexity, rack-to-rack optical circuit-switched network to supply high bandwidth to applications. We discuss the fundamental requirements of this hybrid architecture and their design options. To demonstrate the potential benefits of the hybrid architecture, we have built a prototype system called c-Through. c-Through represents a design point where the responsibility for traffic demand estimation and traffic demultiplexing resides in end hosts, making it compatible with existing packet switches. Our emulation experiments show that the hybrid architecture can provide large benefits to unmodified popular data center applications at a modest scale. Furthermore, our experimental experience provides useful insights on the applicability of the hybrid architecture across a range of deployment scenarios.

  • Nathan Farrington, George Porter, Sivasankar Radhakrishnan, Hamid Hajabdolali Bazzaz, Vikram Subramanya, Yeshaiahu Fainman, George Papen, and Amin Vahdat

    The basic building block of ever larger data centers has shifted from a rack to a modular container with hundreds or even thousands of servers. Delivering scalable bandwidth among such containers is a challenge. A number of recent efforts promise full bisection bandwidth between all servers, though with significant cost, complexity, and power consumption. We present Helios, a hybrid electrical/optical switch architecture that can deliver significant reductions in the number of switching elements, cabling, cost, and power consumption relative to recently proposed data center network architectures. We explore architectural trade-offs and challenges associated with realizing these benefits through the evaluation of a fully functional Helios prototype.

  • Minlan Yu, Jennifer Rexford, Michael J. Freedman, and Jia Wang

    Ideally, enterprise administrators could specify fine-grain policies that drive how the underlying switches forward, drop, and measure traffic. However, existing techniques for flow based networking rely too heavily on centralized controller software that installs rules reactively, based on the first packet of each flow. In this paper, we propose DIFANE, a scalable and efficient solution that keeps all traffic in the data plane by selectively directing packets through intermediate switches that store the necessary rules. DIFANE relegates the controller to the simpler task of partitioning these rules over the switches. DIFANE can be readily implemented with commodity switch hardware, since all data-plane functions can be expressed in terms of wildcard rules that perform simple actions on matching packets. Experiments with our prototype on Click-based OpenFlow switches show that DIFANE scales to larger networks with richer policies.

  • Bimal Viswanath, Ansley Post, Krishna P. Gummadi, and Alan Mislove

    Recently, there has been much excitement in the research community over using social networks to mitigate multiple identity, or Sybil, attacks. A number of schemes have been proposed, but they differ greatly in the algorithms they use and in the networks upon which they are evaluated. As a result, the research community lacks a clear understanding of how these schemes compare against each other, how well they would work on real-world social networks with different structural properties, or whether there exist other (potentially better) ways of Sybil defense.

    In this paper, we show that, despite their considerable differences, existing Sybil defense schemes work by detecting local communities (i.e., clusters of nodes more tightly knit than the rest of the graph) around a trusted node. Our finding has important implications for both existing and future designs of Sybil defense schemes. First, we show that there is an opportunity to leverage the substantial amount of prior work on general community detection algorithms in order to defend against Sybils. Second, our analysis reveals the fundamental limits of current social network-based Sybil defenses: We demonstrate that networks with well-defined community structure are inherently more vulnerable to Sybil attacks, and that, in such networks, Sybils can carefully target their links in order to make their attacks more effective.
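
    The sketch below illustrates the local-community view of Sybil defense: starting from a trusted node, a community is grown greedily by adding the frontier node that yields the lowest conductance, and nodes far outside this community are treated as more suspect. This is a simplified stand-in for the schemes the paper analyzes, not any particular one of them.

    ```python
    # Sketch of Sybil defense viewed as local community detection: grow a
    # set around a trusted node by greedily minimizing conductance. The
    # graph and community size below are illustrative.
    from collections import defaultdict

    def conductance(graph, community):
        cut = sum(1 for u in community for v in graph[u] if v not in community)
        vol = sum(len(graph[u]) for u in community)
        return cut / vol if vol else 1.0

    def grow_community(graph, trusted, size):
        community = {trusted}
        while len(community) < size:
            frontier = {v for u in community for v in graph[u]} - community
            if not frontier:
                break
            best = min(frontier,
                       key=lambda v: conductance(graph, community | {v}))
            community.add(best)
        return community

    if __name__ == "__main__":
        edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5), (2, 6)]
        graph = defaultdict(set)
        for u, v in edges:
            graph[u].add(v)
            graph[v].add(u)
        honest_region = grow_community(graph, trusted=0, size=3)
        print(sorted(honest_region))   # nodes ranked as least likely Sybil
    ```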

  • Josep M. Pujol, Vijay Erramilli, Georgos Siganos, Xiaoyuan Yang, Nikos Laoutaris, Parminder Chhabra, and Pablo Rodriguez

    The difficulty of scaling Online Social Networks (OSNs) has introduced new system design challenges that have often caused costly re-architecting for services like Twitter and Facebook. The complexity of interconnection of users in social networks has introduced new scalability challenges. Conventional vertical scaling by resorting to full replication can be a costly proposition. Horizontal scaling by partitioning and distributing data among multiple servers, e.g., using DHTs, can lead to costly inter-server communication.

    We design, implement, and evaluate SPAR, a social partitioning and replication middle-ware that transparently leverages the social graph structure to achieve data locality while minimizing replication. SPAR guarantees that for all users in an OSN, their direct neighbors’ data is co-located in the same server. The gains from this approach are multi-fold: application developers can assume local semantics, i.e., develop as they would for a single server; scalability is achieved by adding commodity servers with low memory and network I/O requirements; and redundancy is achieved at a fraction of the cost.

    We detail our system design and an evaluation based on datasets from Twitter, Orkut, and Facebook, with a working implementation. We show that SPAR incurs minimal overhead, can help a well-known open-source Twitter clone reach Twitter’s scale without changing a line of its application logic, and achieves higher throughput than Cassandra, Facebook’s DHT-based key-value store.
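
    A minimal sketch of SPAR's co-location invariant: after placement, every user's server must also host a primary or replica of each of that user's neighbors, so application reads stay local. The placement below is a naive round-robin used only to exercise the invariant; SPAR's online algorithm instead minimizes replicas while balancing load.

    ```python
    # Sketch of SPAR's co-location invariant: every user's server must
    # also hold (a primary or replica of) each of that user's neighbors.
    # Round-robin placement is illustrative, not SPAR's algorithm.
    from collections import defaultdict

    def place_round_robin(users, num_servers):
        return {u: i % num_servers for i, u in enumerate(users)}

    def add_replicas(primary_of, friends):
        """Return server -> set of users hosted (primaries plus replicas)."""
        hosted = defaultdict(set)
        for u, s in primary_of.items():
            hosted[s].add(u)
        for u, s in primary_of.items():
            for v in friends[u]:
                hosted[s].add(v)      # replica of v on u's server if needed
        return hosted

    def check_local_semantics(primary_of, friends, hosted):
        return all(v in hosted[primary_of[u]]
                   for u in friends for v in friends[u])

    if __name__ == "__main__":
        friends = {"a": {"b", "c"}, "b": {"a"}, "c": {"a", "d"}, "d": {"c"}}
        primary = place_round_robin(sorted(friends), num_servers=2)
        hosted = add_replicas(primary, friends)
        print(primary)
        print({s: sorted(us) for s, us in hosted.items()})
        print("local semantics hold:",
              check_local_semantics(primary, friends, hosted))
    ```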

  • David R. Choffnes, Fabián E. Bustamante, and Zihui Ge

    The user experience for networked applications is becoming a key benchmark for customers and network providers. Perceived user experience is largely determined by the frequency, duration and severity of network events that impact a service. While today’s networks implement sophisticated infrastructure that issues alarms for most failures, there remains a class of silent outages (e.g., caused by configuration errors) that are not detected. Further, existing alarms provide little information to help operators understand the impact of network events on services. Attempts to address this through infrastructure that monitors end-to-end performance for customers have been hampered by the cost of deployment and by the volume of data generated by these solutions.

    We present an alternative approach that pushes monitoring to applications on end systems and uses their collective view to detect network events and their impact on services - an approach we call Crowdsourcing Event Monitoring (CEM). This paper presents a general framework for CEM systems and demonstrates its effectiveness for a P2P application using a large dataset gathered from BitTorrent users and confirmed network events from two ISPs. We discuss how we designed and deployed a prototype CEM implementation as an extension to BitTorrent. This system performs online service-level network event detection through passive monitoring and correlation of performance in end-users’ applications.

  • S. Keshav

    When I took my oral qualifying exam at Berkeley many years ago, my seniors told me that if I made it past the assumptions slide, I would pass. Assumptions are the foundations on which a dissertation is built and the examining committee subjects a candidate’s assumptions to harsh analysis. If the assumptions are correct, then the rest of the dissertation, even if flawed, is correctible. Nothing can rescue a dissertation built on incorrect assumptions.

    What is true for dissertations at Berkeley is just as true for presentations, papers, and indeed your research career. It is supremely important to ensure that assumptions underlying your work are sound.

    Unfortunately, there is an inherent problem in validating your assumptions. If you are both making the assumptions and validating them, then it is likely that you are going to be biased in your evaluation. So, you may overlook a flaw that a more critical eye could easily discern. That is the reason why I think it is a good idea to frequently subject your ideas to open inspection by a critical audience. You can do this by giving talks or by talking to your peers one-on-one. The more directly your assumptions are questioned the better. If your assumptions can survive several rounds of criticism then you can be relatively certain of their validity.

    For our discipline to progress, it is important that both ends of the bargain be kept. Researchers should share their ideas openly and be prepared to defend their assumptions. And, as a listener and critic, you should dissect and criticize the assumptions in the talks that you hear and the papers that you read.

    As an aside, I know that I may not always be a welcome member of some audiences because I pull no punches in my skewering of what, to me, appear to be incorrect assumptions. For the record, this is never personal: I am playing my role as a critic in what I think is the best scientific tradition. I will paraphrase the great Hamming to state that scientists should attend talks not to congratulate the speaker but to tear them apart!

    Which brings me back to the papers in CCR. As you read, do take the time to think through whether the assumptions make sense. Our reviewers and editors do as thorough a job as they can, but the onus is still upon you. Do also read the public review of a paper, where the Area Editor discusses the pros and cons of each technical paper. This will help you understand the assumptions with which an Area Editor agrees or disagrees. Remember that CCR Online is available for your comments and discussion. Of course, you can also write to an author directly with your questions and constructive criticism.

  • László Gyarmati and Tuan Anh Trinh

    Data centers have a crucial role in the current Internet architecture supporting content-centric networking. State-of-the-art data centers have different architectures, like fat-tree, DCell, or BCube. However, their architectures share a common property: symmetry. Due to their symmetric nature, these architectures are hard to extend in small quantities. In contrast to state-of-the-art data center architectures, we propose an asymmetric data center topology generation method called Scafida, inspired by scale-free networks; these data centers not only have small diameters and high fault tolerance, inherited from scale-free networks, but can also be scaled in smaller and less homogeneous increments. We extend the original scale-free network generation algorithm of Barabási and Albert to meet the physical constraints of switches and routers. Despite the fact that our method artificially limits the node degrees in the network, our data center architectures keep the preferable properties of scale-free networks. Based on extensive simulations we present preliminary results that are promising regarding the error tolerance, scalability, and flexibility of the architecture.
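
    The sketch below shows degree-limited preferential attachment in the spirit of Scafida: standard Barabási-Albert growth, except that nodes which have reached a switch's port limit are excluded from receiving further links. The port limit, network size, and attachment details are illustrative.

    ```python
    # Sketch of degree-limited preferential attachment: Barabasi-Albert
    # style growth, but nodes at the switch port limit are no longer
    # eligible to receive new links. Parameters are illustrative.
    import random

    def scale_free_with_degree_cap(n, m=2, max_degree=8, seed=0):
        rng = random.Random(seed)
        degree = {i: 0 for i in range(m + 1)}
        edges = [(i, j) for i in range(m + 1) for j in range(i + 1, m + 1)]
        for i, j in edges:                      # start from a small clique
            degree[i] += 1
            degree[j] += 1
        for new in range(m + 1, n):
            degree[new] = 0
            targets = set()
            while len(targets) < m:
                # Preferential attachment restricted to nodes with free ports.
                eligible = [v for v in degree
                            if v != new and v not in targets
                            and degree[v] < max_degree]
                if not eligible:
                    break
                weights = [degree[v] + 1 for v in eligible]   # +1 avoids zeros
                targets.add(rng.choices(eligible, weights=weights, k=1)[0])
            for t in targets:
                edges.append((new, t))
                degree[new] += 1
                degree[t] += 1
        return edges, degree

    if __name__ == "__main__":
        edges, degree = scale_free_with_degree_cap(200)
        print("max degree:", max(degree.values()), "edges:", len(edges))
    ```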

  • S. Agarwal