CCR Papers from October 2007

  • Martin Casado, Michael J. Freedman, Justin Pettit, Jianying Luo, Nick McKeown, and Scott Shenker

    This paper presents Ethane, a new network architecture for the enterprise. Ethane allows managers to define a single network-wide fine-grain policy, and then enforces it directly. Ethane couples extremely simple flow-based Ethernet switches with a centralized controller that manages the admittance and routing of flows. While radical, this design is backwards-compatible with existing hosts and switches.

    We have implemented Ethane in both hardware and software, supporting both wired and wireless hosts. Our operational Ethane network has supported over 300 hosts for the past four months in Stanford University’s network, and this deployment experience has significantly affected Ethane’s design.
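
    To make the division of labor concrete, here is a toy sketch of a controller checking a new flow against a network-wide policy before any forwarding state is installed. The policy format, host-to-group mapping, and default-deny rule are illustrative assumptions, not Ethane's actual policy language or switch protocol.

      # Toy sketch: simple flow switches send the first packet of each new flow to a
      # central controller, which consults a network-wide policy before installing
      # forwarding state. Policy format and group mapping are hypothetical.
      POLICY = [
          # (source group, destination group, allow?)
          ("students", "web-servers", True),
          ("guests",   "*",           False),
      ]
      GROUPS = {"laptop-17": "students", "visitor-3": "guests", "www1": "web-servers"}

      def admit_flow(src_host, dst_host):
          """Controller decision for a new flow: first matching rule wins, default deny."""
          for src_grp, dst_grp, allow in POLICY:
              if GROUPS.get(src_host) == src_grp and dst_grp in ("*", GROUPS.get(dst_host)):
                  return allow
          return False

      print(admit_flow("laptop-17", "www1"))   # True: students may reach web servers
      print(admit_flow("visitor-3", "www1"))   # False: guests are denied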

  • Paramvir Bahl, Ranveer Chandra, Albert Greenberg, Srikanth Kandula, David A. Maltz, and Ming Zhang

    Localizing the sources of performance problems in large enterprise networks is extremely challenging. Dependencies are numerous, complex and inherently multi-level, spanning hardware and software components across the network and the computing infrastructure. To exploit these dependencies for fast, accurate problem localization, we introduce an Inference Graph model, which is well-adapted to user-perceptible problems rooted in conditions giving rise to both partial service degradation and hard faults. Further, we introduce the Sherlock system to discover Inference Graphs in the operational enterprise, infer critical attributes, and then leverage the result to automatically detect and localize problems. To illuminate strengths and limitations of the approach, we provide results from a prototype deployment in a large enterprise network, as well as from testbed emulations and simulations. In particular, we find that taking into account multi-level structure leads to a 30% improvement in fault localization, as compared to two-level approaches.
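
    To make the multi-level probabilistic reasoning concrete, here is a minimal sketch of a generic noisy-OR combination over a tiny two-parent dependency graph. The propagation rule and the example probabilities are illustrative assumptions, not the exact formulation used in the paper's Inference Graph.

      def noisy_or_failure_prob(parent_fail_probs, edge_weights):
          """Probability that a dependent observation fails, given independent parent
          causes that each fail with their own probability and affect the child
          through a weighted dependency edge (generic noisy-OR)."""
          p_all_ok = 1.0
          for p_fail, weight in zip(parent_fail_probs, edge_weights):
              p_all_ok *= 1.0 - weight * p_fail
          return 1.0 - p_all_ok

      # Example: a web transaction depends weakly on a DNS server and strongly on a
      # back-end database.
      print(noisy_or_failure_prob([0.10, 0.50], [0.2, 0.9]))  # ~0.46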

  • Yu-Chung Cheng, Mikhail Afanasyev, Patrick Verkaik, Péter Benkö, Jennifer Chiang, Alex C. Snoeren, Stefan Savage, and Geoffrey M. Voelker

    Modern enterprise networks are of sufficient complexity that even simple faults can be difficult to diagnose — let alone transient outages or service degradations. Nowhere is this problem more apparent than in the 802.11-based wireless access networks now ubiquitous in the enterprise. In addition to the myriad complexities of the wired network, wireless networks face the additional challenges of shared spectrum, user mobility and authentication management. Not surprisingly, few organizations have the expertise, data or tools to decompose the underlying problems and interactions responsible for transient outages or performance degradations. In this paper, we present a set of modeling techniques for automatically characterizing the source of such problems. In particular, we focus on data transfer delays unique to 802.11 networks—media access dynamics and mobility management latency. Through a combination of measurement, inference and modeling we reconstruct sources of delay—from the physical layer to the transport layer—as well as the interactions among them. We demonstrate our approach using comprehensive traces of wireless activity in the UCSD Computer Science building.

  • Dario Bonfiglio, Marco Mellia, Michela Meo, Dario Rossi, and Paolo Tofanelli

    Skype is a very popular VoIP application that has recently attracted the attention of the research community and network operators. Because Skype follows a closed-source, proprietary design, its protocols and algorithms are unknown. Moreover, Skype adopts strong encryption mechanisms, making it very difficult to even glimpse its presence from a traffic aggregate. In this paper, we propose a framework based on two complementary techniques to reveal Skype traffic in real time. The first approach, based on Pearson’s Chi-Square test and agnostic to VoIP-related traffic characteristics, is used to detect Skype’s fingerprint from the packet framing structure, exploiting the randomness introduced at the bit level by the encryption process. The second approach is based on a stochastic characterization of Skype traffic in terms of packet arrival rate and packet length, which are used as features of a decision process based on Naive Bayesian Classifiers.

    In order to assess the effectiveness of the above techniques, we develop an off-line cross-checking heuristic based on deep-packet inspection and flow correlation, which is interesting per se. This heuristic allows us to quantify the amount of false negatives and false positives gathered by means of the two proposed approaches: results obtained from measurements in different networks show that the technique is very effective in identifying Skype traffic.

    While both Bayesian classification and packet inspection techniques are commonly used, the idea of leveraging randomness to reveal traffic is novel. We adopt it to identify Skype traffic, but the same methodology can be applied to other classification problems as well.
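
    As a concrete illustration of the randomness-based technique, the sketch below applies Pearson's Chi-Square test to a payload's byte histogram to decide whether it looks encrypted. The paper applies the test to specific groups of bits in Skype's message framing; the whole-payload test and the threshold here are illustrative simplifications.

      import os
      from collections import Counter

      def chi_square_uniform(payload):
          """Pearson Chi-Square statistic of the payload's byte histogram against a
          uniform distribution over 256 symbols; random-looking (encrypted) payloads
          score near the 255 degrees of freedom, structured payloads score far higher."""
          n = len(payload)
          expected = n / 256.0
          counts = Counter(payload)
          return sum((counts.get(b, 0) - expected) ** 2 / expected for b in range(256))

      def looks_encrypted(payload, threshold=330.0):
          # Loose cutoff well above the expected value (~255) for random bytes.
          return chi_square_uniform(payload) < threshold

      print(looks_encrypted(os.urandom(4096)))               # True: uniform bytes
      print(looks_encrypted(b"GET / HTTP/1.1\r\n" * 256))    # False: structured text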

  • Wesley W. Terpstra, Jussi Kangasharju, Christof Leng, and Alejandro P. Buchmann

    Peer-to-peer systems promise inexpensive scalability, adaptability, and robustness. Thus, they are an attractive platform for file sharing, distributed wikis, and search engines. These applications often store weakly structured data, requiring sophisticated search algorithms. To simplify the search problem, most scalable algorithms introduce structure to the network. However, churn or violent disruption may break this structure, compromising search guarantees.

    This paper proposes a simple probabilistic search system, BubbleStorm, built on random multigraphs. Our primary contribution is a flexible and reliable strategy for performing exhaustive search. BubbleStorm also exploits the heterogeneous bandwidth of peers. However, we sacrifice some of this bandwidth for high parallelism and low latency. The provided search guarantees are tunable, with success probability adjustable well into the realm of reliable systems.

    For validation, we simulate a network with one million low-end peers and show BubbleStorm handles up to 90% simultaneous peer departure and 50% simultaneous crash.
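
    A back-of-the-envelope calculation shows why bubble-style replication makes exhaustive search probable: if data and queries are each placed on roughly c·sqrt(n) random peers, the chance they meet grows as 1 − exp(−c²). The uniform-placement model below is a simplification of the paper's analysis on random multigraphs.

      import math

      def rendezvous_success_prob(n_peers, data_bubble, query_bubble):
          """Probability that a query replicated to `query_bubble` random peers meets
          data replicated to `data_bubble` random peers, assuming independent uniform
          placement."""
          return 1.0 - math.exp(-data_bubble * query_bubble / n_peers)

      # With one million peers and bubbles of 2*sqrt(n) peers each, a match is found
      # with probability ~0.98.
      n = 1_000_000
      bubble = int(2 * math.sqrt(n))
      print(rendezvous_success_prob(n, bubble, bubble))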

  • Mohamed Ali Kaafar, Laurent Mathy, Chadi Barakat, Kave Salamatian, Thierry Turletti, and Walid Dabbous

    This paper addresses the issue of the security of Internet Coordinate Systems, by proposing a general method for malicious behavior detection during coordinate computations. We first show that the dynamics of a node, in a coordinate system without abnormal or malicious behavior, can be modeled by a Linear State Space model and tracked by a Kalman filter. Then we show that the obtained model can be generalized in the sense that the parameters of a filter calibrated at a node can be used effectively to model and predict the dynamic behavior at another node, as long as the two nodes are not too far apart in the network. This leads to the proposal of a Surveyor infrastructure: Surveyor nodes are trusted, honest nodes that use each other exclusively to position themselves in the coordinate space, and are therefore immune to malicious behavior in the system. During their own coordinate embedding, other nodes can then use the filter parameters of a nearby Surveyor as a representation of normal, clean system behavior to detect and filter out abnormal or malicious activity. A combination of simulations and PlanetLab experiments is used to demonstrate the validity, generality, and effectiveness of the proposed approach for two representative coordinate embedding systems, namely Vivaldi and NPS.
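
    The sketch below shows a one-dimensional version of the idea: a Kalman filter tracks a coordinate component as a noisy random walk, and the normalized innovation it produces is the kind of signal a Surveyor-calibrated filter could threshold to flag abnormal updates. The scalar model and noise values are illustrative; the paper models full coordinate dynamics with a linear state space model.

      def kalman_step(x_est, p_est, z, q=0.01, r=0.1):
          """One scalar Kalman update for a tracked coordinate component. Returns the
          new estimate, its variance, and the normalized innovation that an anomaly
          check could threshold."""
          p_pred = p_est + q              # predict: random walk with process noise q
          innovation = z - x_est          # measurement residual
          s = p_pred + r                  # innovation variance (r = measurement noise)
          gain = p_pred / s
          x_new = x_est + gain * innovation
          p_new = (1 - gain) * p_pred
          return x_new, p_new, innovation / s ** 0.5

      x, p = 0.0, 1.0
      for z in [0.1, -0.05, 0.2, 5.0]:    # the last measurement is anomalous
          x, p, nu = kalman_step(x, p, z)
          print(f"estimate={x:.3f}  normalized_innovation={nu:.2f}")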

  • Jayaram Mudigonda, Harrick M. Vin, and Stephen W. Keckler

    Challenges in addressing the memory bottleneck have made it difficult to design a packet processing platform that simultaneously achieves both ease of programming and high performance. Today’s commercial processors support two architectural mechanisms, namely hardware multithreading and caching, to overcome the memory bottleneck. The configurations of these mechanisms (e.g., cache capacity, number of threads per processor core) are fixed at processor-design time. The relative effectiveness of these mechanisms, however, varies significantly with application, traffic, and system characteristics. Thus, programmers often struggle to achieve high performance from a processor that is not well-suited to a particular deployment.

    To address this challenge, we first make a case for, and then develop, a malleable processor architecture that facilitates the dynamic reconfiguration of cache capacity and the number of threads to best suit the needs of each deployment. We then present an algorithm that can determine the optimal thread-cache balance at run time. The combination of these two allows us to simultaneously achieve the goals of ease of programming and high performance. We demonstrate that our processor outperforms a processor similar to Intel’s IXP2800, a state-of-the-art commercial network processor, in about 89% of the deployments we consider. Further, in about 30% of the deployments our platform improves the throughput by as much as 300%.

  • Lihua Yuan, Chen-Nee Chuah, and Prasant Mohapatra

    Traffic measurements provide critical input for a wide range of network management applications, including traffic engineering, accounting, and security analysis. Existing measurement tools collect traffic statistics based on some predetermined, inflexible concept of “flows”. They do not have sufficient built-in intelligence to understand the application requirements or adapt to the traffic conditions. Consequently, they have limited scalability with respect to the number of flows and the heterogeneity of monitoring applications.

    We present ProgME, a Programmable MEasurement architecture based on the novel concept of a flowset: an arbitrary set of flows defined according to application requirements and/or traffic conditions. Through a simple flowset composition language, ProgME can incorporate application requirements, adapt itself to circumvent the scalability challenges posed by the large number of flows, and achieve better application-perceived accuracy. ProgME can analyze and adapt to traffic statistics in real time. Using sequential hypothesis testing, ProgME can achieve fast and scalable heavy-hitter identification.

  • John R. Douceur and Thomas Moscibroda

    We address a critical deployment issue for network systems, namely motivating people to install and run a distributed service. This work is aimed primarily at peer-to-peer systems, in which the decision and effort to install a service falls to individuals rather than to a central planner. This problem is relevant for bootstrapping systems that rely on the network effect, wherein the benefits are not felt until deployment reaches a significant scale, and also for deploying asymmetric systems, wherein the set of contributors is different from the set of beneficiaries. Our solution is the lottery tree (lottree), a mechanism that probabilistically encourages both participation in the system and also solicitation of new participants. We define the lottree mechanism and formally state seven properties that encourage contribution, solicitation, and fair play. We then present the Pachira lottree scheme, which satisfies five of these seven properties, and we prove this to be a maximal satisfiable subset. Using simulation, we determine optimal parameters for the Pachira lottree scheme, and we determine how to configure a lottree system for achieving various deployment scales based on expected installation effort. We also present extensive sensitivity analyses, which bolster the generality of our conclusions.

  • Cheng Huang, Jin Li, and Keith W. Ross

    Video-on-demand in the Internet has become an immensely popular service in recent years. But due to its high bandwidth requirements and popularity, it is also a costly service to provide. We consider the design and potential benefits of peer-assisted video-on-demand, in which participating peers assist the server in delivering VoD content. The assistance is done in such a way that it provides the same user quality experience as pure client-server distribution. We focus on the single-video approach, whereby a peer only redistributes a video that it is currently watching.

    Using a nine-month trace from a client-server VoD deployment for MSN Video, we assess what the 95th-percentile server bandwidth costs would have been if a peer-assisted deployment had been used instead. We show that peer assistance can dramatically reduce server bandwidth costs, particularly if peers prefetch content when there is spare upload capacity in the system. We consider the impact of peer-assisted VoD on the cross-traffic among ISPs. Although this traffic is significant, if care is taken to localize the P2P traffic within the ISPs, we can eliminate the ISP cross-traffic while still achieving important reductions in server bandwidth. We also develop a simple analytical model which captures many of the critical features of peer-assisted VoD, including its operational modes.
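
    The 95th-percentile comparison can be illustrated with a few lines of arithmetic: the server must cover whatever instantaneous demand the peers' aggregate upload cannot, and the billing metric is the 95th percentile of that residual load. The per-interval numbers and the simple demand-minus-upload model below are illustrative, not the MSN Video trace analysis.

      def percentile_95(samples):
          """95th-percentile value of a list of per-interval bandwidth samples."""
          ordered = sorted(samples)
          return ordered[int(0.95 * (len(ordered) - 1))]

      def server_load(demand_mbps, peer_upload_mbps):
          """Server supplies whatever the peers' aggregate upload cannot cover."""
          return [max(0.0, d - u) for d, u in zip(demand_mbps, peer_upload_mbps)]

      demand = [120, 340, 510, 480, 95, 260, 610, 440]   # per-interval demand (Mbps)
      upload = [80, 200, 300, 310, 60, 150, 350, 280]    # aggregate peer upload (Mbps)
      print(percentile_95(demand))                          # pure client-server cost
      print(percentile_95(server_load(demand, upload)))     # peer-assisted cost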

  • Wolfgang Mühlbauer, Steve Uhlig, Bingjie Fu, Mickael Meulle, and Olaf Maennel

    Routing policies are typically partitioned into a few classes that capture the most common practices in use today [1]. Unfortunately, it is known that the reality of routing policies [2] and peering relationships is far more complex than those few classes [1,3]. We take the next step of searching for the appropriate granularity at which policies should be modeled. For this purpose, we study how and where to configure per-prefix policies in an AS-level model of the Internet, such that the selected paths in the model are consistent with those observed in BGP data from multiple vantage points.

    By comparing business relationships with per-prefix filters, we investigate the role and limitations of business relationships as a model for policies. We observe that popular locations for filtering correspond to valleys where no path should be propagated according to inferred business relationships. This result reinforces the validity of the valley-free property used for business relationships inference. However, given the sometimes large path diversity ASs have, business relationships do not contain enough information to decide which path will be chosen as the best. To model how individual ASs choose their best paths, we introduce a new abstraction: next-hop atoms. Next-hop atoms capture the different sets of neighboring ASs an AS uses for its best routes. We show that a large fraction of next-hop atoms correspond to per-neighbor path choices. A non-negligible fraction of path choices, however, correspond to hot-potato routing and tie-breaking within the BGP decision process, very detailed aspects of Internet routing.

  • Cheng Tien Ee, Vijay Ramachandran, Byung-Gon Chun, Kaushik Lakshminarayanan, and Scott Shenker

    The Border Gateway Protocol (BGP) allows each autonomous system (AS) to select routes to destinations based on semantically rich and locally determined policies. This autonomously exercised policy freedom can cause instability, where unresolvable policy-based disputes in the network result in interdomain route oscillations. Several recent works have established that such instabilities can only be eliminated by enforcing a globally accepted preference ordering on routes (such as shortest path). To resolve this conflict between policy autonomy and system stability, we propose a distributed mechanism that enforces a preference ordering only when disputes resulting in oscillations exist. This preserves policy freedom when possible, and imposes stability when required.

  • Szymon Chachulski, Michael Jennings, Sachin Katti, and Dina Katabi

    Opportunistic routing is a recent technique that achieves high throughput in the face of lossy wireless links. The current opportunistic routing protocol, ExOR, ties the MAC with routing, imposing a strict schedule on routers’ access to the medium. Although the scheduler delivers opportunistic gains, it misses some of the inherent features of the 802.11 MAC. For example, it prevents spatial reuse and thus may underutilize the wireless medium. It also eliminates the layering abstraction, making the protocol less amenable to extensions to alternate traffic types such as multicast.

    This paper presents MORE, a MAC-independent opportunistic routing protocol. MORE randomly mixes packets before forwarding them. This randomness ensures that routers that hear the same transmission do not forward the same packets. Thus, MORE needs no special scheduler to coordinate routers and can run directly on top of 802.11. Experimental results from a 20-node wireless testbed show that MORE’s median unicast throughput is 22% higher than ExOR, and the gains rise to 45% over ExOR when there is a chance of spatial reuse. For multicast, MORE’s gains increase with the number of destinations, and are 35-200% greater than ExOR.
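
    The core mechanism is random mixing: each forwarded packet is a random linear combination of the packets in the current batch, so two routers that heard the same transmission are unlikely to forward identical packets. The sketch below mixes over GF(2) (plain XOR) for brevity; MORE itself codes over a larger finite field.

      import random

      def random_mix(batch, rng=random):
          """Return one coded packet: a random XOR combination of the equal-sized
          packets in the current batch, together with the coefficient vector that
          records which originals were mixed in."""
          coeffs = [rng.randint(0, 1) for _ in batch]
          if not any(coeffs):
              coeffs[rng.randrange(len(batch))] = 1    # avoid the useless all-zero mix
          mixed = bytes(len(batch[0]))
          for c, pkt in zip(coeffs, batch):
              if c:
                  mixed = bytes(a ^ b for a, b in zip(mixed, pkt))
          return coeffs, mixed

      batch = [bytes([i]) * 8 for i in (1, 2, 3)]
      print(random_mix(batch))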

  • Teemu Koponen, Mohit Chawla, Byung-Gon Chun, Andrey Ermolinskiy, Kye Hyun Kim, Scott Shenker, and Ion Stoica

    The Internet has evolved greatly from its original incarnation. For instance, the vast majority of current Internet usage is data retrieval and service access, whereas the architecture was designed around host-to-host applications such as telnet and ftp. Moreover, the original Internet was a purely transparent carrier of packets, but now the various network stakeholders use middleboxes to improve security and accelerate applications. To adapt to these changes, we propose the Data-Oriented Network Architecture (DONA), which involves a clean-slate redesign of Internet naming and name resolution.

  • Saikat Guha and Paul Francis

    We argue that the current model for flow establishment in the Internet (DNS names, IP addresses, and transport ports) is inadequate due to problems that go beyond the small IPv4 address space and the resulting NAT boxes. Even where global addresses exist, firewalls cannot glean enough information about a flow from packet headers, and so often err, typically by being over-conservative: disallowing flows that might otherwise be allowed. This paper presents a novel architecture, protocol design, and implementation for flow establishment in the Internet. The architecture, called NUTSS, takes into account the combined policies of endpoints and network providers. While NUTSS borrows liberally from other proposals (URI-like naming, signaling to manage ephemeral IPv4 or IPv6 data flows), NUTSS is unique in that it couples overlay signaling with data-path signaling. NUTSS requires no changes to existing network protocols, and combined with recent NAT traversal techniques, works with IPv4 and existing NAT/firewalls. This paper describes NUTSS and shows how it satisfies a wide range of “end-middle-end” network requirements, including access control, middlebox steering, multi-homing, mobility, and protocol negotiation.

  • Hitesh Ballani and Paul Francis

    Networks are hard to manage, and in spite of all the so-called holistic management packages, things are getting worse. We argue that the difficulty of network management can partly be attributed to a fundamental flaw in the existing architecture: protocols expose all their internal details and hence, the complexity of the ever-evolving data plane encumbers the management plane. Guided by this observation, in this paper we explore an alternative approach and propose Complexity Oblivious Network Management (CONMan), a network architecture in which the management interface of data-plane protocols includes minimal protocol-specific information. This restricts the operational complexity of protocols to their implementation and allows the management plane to achieve high level policies in a structured fashion. We built the CONMan interface of a few protocols and a management tool that can achieve high-level configuration goals based on this interface. Our preliminary experience with applying this tool to real world VPN configuration indicates the architecture’s potential to alleviate the difficulty of configuration management.

  • Martin Karsten, S. Keshav, Sanjiva Prasad, and Mirza Beg

    The de facto service architecture of today’s communication networks, in particular the Internet, is heterogeneous, complex, ad hoc, and not particularly well understood. With layering as the only means for functional abstraction, and even this violated by middle-boxes, the diversity of current technologies can barely be expressed, let alone analyzed. As a first step to remedying this problem, we present an axiomatic formulation of fundamental forwarding mechanisms in communication networks. This formulation allows us to express precisely and abstractly the concepts of naming and addressing and to specify a consistent set of control patterns and operational primitives, from which a variety of communication services can be composed. Importantly, this framework can be used to (1) formally analyze network protocols based on structural properties, and also to (2) derive working prototype implementations of these protocols. The prototype is implemented as a universal forwarding engine, a general framework and runtime environment based on the Click router.

  • Hao Wang, Yang Richard Yang, Paul H. Liu, Jia Wang, Alexandre Gerber, and Albert Greenberg

    Reliability is a critical requirement of the Internet. The availability and resilience of the Internet under failures can have significant global effects. However, in the current Internet routing architecture, achieving the high level of reliability demanded by many mission-critical activities can be costly. In this paper, we first propose a novel solution framework called reliability as an interdomain service (REIN) that can be incrementally deployed in the Internet and may improve the redundancy of IP networks at low cost. We then present robust algorithms to efficiently utilize network redundancy to improve reliability. We use real IP network topologies and traffic traces to demonstrate the effectiveness of our framework and algorithms.

  • Karthik Lakshminarayanan, Matthew Caesar, Murali Rangan, Tom Anderson, Scott Shenker, and Ion Stoica

    Current distributed routing paradigms (such as link-state, distance-vector, and path-vector) involve a convergence process consisting of an iterative exploration of intermediate routes triggered by certain events such as link failures. The convergence process increases router load, introduces outages and transient loops, and slows reaction to failures. We propose a new routing paradigm where the goal is not to reduce the convergence times but rather to eliminate the convergence process completely. To this end, we propose a technique called Failure-Carrying Packets (FCP) that allows data packets to autonomously discover a working path without requiring completely up-to-date state in routers. Our simulations, performed using real-world failure traces and Rocketfuel topologies, show that: (a) the overhead of FCP is very low; (b) unlike traditional link-state routing (such as OSPF), FCP can provide both low loss rate and low control overhead; and (c) compared to prior work on backup path precomputation, FCP provides better routing guarantees under failures despite maintaining less state at the routers.
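
    The sketch below captures the forwarding step behind FCP: every router shares a consistent network map, and the next hop for a packet is recomputed over that map with the failed links carried in the packet removed. Map inconsistencies, loop suppression, and header encoding are ignored here; this is a minimal illustration, not the protocol itself.

      import heapq

      def fcp_next_hop(topology, src, dst, failed_links):
          """Next hop toward dst on the shared map minus the packet's failed links.
          `topology` maps a node to a list of (neighbor, link_cost) pairs."""
          dead = {frozenset(link) for link in failed_links}
          if src == dst:
              return src
          dist, prev = {src: 0}, {}
          heap = [(0, src)]
          while heap:
              d, u = heapq.heappop(heap)
              if d > dist.get(u, float("inf")):
                  continue
              for v, w in topology.get(u, []):
                  if frozenset((u, v)) in dead:
                      continue                      # skip links the packet reports as failed
                  nd = d + w
                  if nd < dist.get(v, float("inf")):
                      dist[v], prev[v] = nd, u
                      heapq.heappush(heap, (nd, v))
          if dst not in prev:
              return None                           # unreachable on the pruned map
          hop = dst
          while prev[hop] != src:
              hop = prev[hop]
          return hop

      topo = {"A": [("B", 1), ("C", 5)], "B": [("A", 1), ("D", 1)],
              "C": [("A", 5), ("D", 1)], "D": [("B", 1), ("C", 1)]}
      print(fcp_next_hop(topo, "A", "D", []))             # B (cheapest path)
      print(fcp_next_hop(topo, "A", "D", [("A", "B")]))   # C (packet carries the failure)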

  • Khaled Elmeleegy, Alan L. Cox, and T. S. Eugene Ng

    Ethernet is pervasive. This is due in part to its ease of use. Equipment can be added to an Ethernet network with little or no manual configuration. Furthermore, Ethernet is self-healing in the event of equipment failure or removal. However, there are scenarios where a local event can lead to network-wide packet loss and congestion due to slow or faulty reconfiguration of the spanning tree. Moreover, in some cases the packet loss and congestion may persist indefinitely.

    To address these problems, we introduce the EtherFuse, a new device that can be inserted into an existing Ethernet to speed the reconfiguration of the spanning tree and prevent congestion due to packet duplication. EtherFuse is backward compatible and requires no change to the existing hardware, software, or protocols. We describe a prototype EtherFuse implementation and experimentally demonstrate its effectiveness. Specifically, we characterize how quickly it responds to network failures, its ability to reduce packet loss and duplication, and its benefits on the end-to-end performance of common applications.

  • Hitesh Ballani, Paul Francis, and Xinyang Zhang

    There have been many incidents of prefix hijacking in the Internet. The hijacking AS can blackhole the hijacked traffic. Alternatively, it can transparently intercept the hijacked traffic by forwarding it on to the owner. This paper presents a study of such prefix hijacking and interception with the following contributions: (1) we present a methodology for prefix interception; (2) we estimate the fraction of traffic to any prefix that can be hijacked and intercepted in the Internet today; (3) we implement the interception methodology and use it to intercept real traffic to our prefix; and (4) we conduct a detailed study to detect ongoing prefix interception.

    We find that our hijacking estimates are in line with the impact of past hijacking incidents and show that ASes higher up in the routing hierarchy can hijack a significant amount of traffic to any prefix, including popular prefixes. A less apparent result is that the same holds for prefix interception too. Further, our implementation shows that intercepting traffic to a prefix in the Internet is almost as simple as hijacking it. Finally, while we fail to detect ongoing prefix interception, the detection exercise highlights some of the challenges posed by the prefix interception problem.

  • Changxi Zheng, Lusheng Ji, Dan Pei, Jia Wang, and Paul Francis

    As more and more Internet IP prefix hijacking incidents are being reported, the value of hijacking detection services has become evident. Most of the current hijacking detection approaches monitor IP prefixes on the control plane and detect inconsistencies in route advertisements and route qualities. We propose a different approach that utilizes information collected mostly from the data plane. Our method is motivated by two key observations: when a prefix is not hijacked, 1) the hop count of the path from a source to this prefix is generally stable; and 2) the path from a source to this prefix is almost always a super-path of the path from the same source to a reference point along the previous path, as long as the reference point is topologically close to the prefix. By carefully selecting multiple vantage points and monitoring from these vantage points for any departure from these two observations, our method is able to detect prefix hijacking with high accuracy in a light-weight, distributed, and real-time fashion. Through simulations constructed based on real Internet measurement traces, we demonstrate that our scheme is accurate with both false positive and false negative ratios below 0.5%.
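
    The two data-plane invariants translate into a simple per-vantage-point check, sketched below: alarm when the hop count to the prefix drifts from its historical median, or when the path to the prefix no longer extends the path to a nearby reference point. The tolerance value and data structures are illustrative, not the deployed detection system.

      def hijack_alarm(past_hop_counts, current_hop_count,
                       path_to_prefix, path_to_reference, tolerance=2):
          """Return True when either invariant is violated from this vantage point."""
          median = sorted(past_hop_counts)[len(past_hop_counts) // 2]
          hop_count_stable = abs(current_hop_count - median) <= tolerance
          is_superpath = path_to_prefix[:len(path_to_reference)] == path_to_reference
          return not (hop_count_stable and is_superpath)

      print(hijack_alarm([12, 13, 12, 12], 12,
                         ["r1", "r2", "r3", "victim"], ["r1", "r2", "r3"]))  # False: consistent
      print(hijack_alarm([12, 13, 12, 12], 7,
                         ["r1", "x9", "attacker"], ["r1", "r2", "r3"]))      # True: both checks fail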

  • Bryan Parno, Dan Wendlandt, Elaine Shi, Adrian Perrig, Bruce Maggs, and Yih-Chun Hu

    Systems using capabilities to provide preferential service to selected flows have been proposed as a defense against large-scale network denial-of-service attacks. While these systems offer strong protection for established network flows, the Denial-of-Capability (DoC) attack, which prevents new capability-setup packets from reaching the destination, limits the value of these systems.

    Portcullis mitigates DoC attacks by allocating scarce link bandwidth for connection establishment packets based on per-computation fairness. We prove that a legitimate sender can establish a capability with high probability regardless of an attacker’s resources or strategy and that no system can improve on our guarantee. We simulate full and partial deployments of Portcullis on an Internet-scale topology to confirm our theoretical results and demonstrate the substantial benefits of using per-computation fairness.
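
    Per-computation fairness rests on verifiable proofs of work attached to capability-setup packets: routers can then give setup bandwidth in proportion to verified computation. The sketch below uses a generic hash-preimage puzzle to illustrate the idea; the actual puzzle construction and verification in Portcullis differ.

      import hashlib, os

      def solve_puzzle(request, difficulty_bits):
          """Search for a nonce whose SHA-256 over (request || nonce) has at least
          `difficulty_bits` leading zero bits; more bits means more work spent."""
          bound = 1 << (256 - difficulty_bits)
          while True:
              nonce = os.urandom(8)
              if int.from_bytes(hashlib.sha256(request + nonce).digest(), "big") < bound:
                  return nonce

      def verify_puzzle(request, nonce, difficulty_bits):
          bound = 1 << (256 - difficulty_bits)
          return int.from_bytes(hashlib.sha256(request + nonce).digest(), "big") < bound

      req = b"capability-setup:src=10.0.0.1,dst=192.0.2.9"
      nonce = solve_puzzle(req, 12)          # ~4096 hashes on average
      print(verify_puzzle(req, nonce, 12))   # True: priority proportional to work done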

  • Yinglian Xie, Fang Yu, Kannan Achan, Eliot Gillum, Moises Goldszmidt, and Ted Wobber

    This paper introduces a novel algorithm, UDmap, to identify dynamically assigned IP addresses and analyze their dynamics pattern. UDmap is fully automatic, and relies only on application-level server logs. We applied UDmap to a month-long Hotmail user-login trace and identified a significant number of dynamic IP addresses: more than 102 million. This suggests that the fraction of IP addresses that are dynamic is by no means negligible. Using this information in combination with a three-month Hotmail email server log, we were able to establish that 95.6% of mail servers set up on the dynamic IP addresses in our trace sent out solely spam emails. Moreover, these mail servers sent out a large amount of spam, amounting to 42.2% of all spam emails received by Hotmail. These results highlight the importance of being able to accurately identify dynamic IP addresses for spam filtering. We expect similar benefits to arise for phishing site identification and botnet detection. To our knowledge, this is the first successful attempt to automatically identify and understand IP address dynamics.
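
    A crude proxy for the kind of signal UDmap extracts from login traces is sketched below: an IP address that appears with many distinct user accounts over a trace window is a good candidate for a dynamically assigned range. UDmap itself also exploits address-block structure and inter-user timing, which this sketch omits; the threshold is illustrative.

      from collections import defaultdict

      def likely_dynamic_ips(login_events, min_users=5):
          """Flag IP addresses seen with at least `min_users` distinct accounts:
          addresses shared by many unrelated accounts are candidates for
          dynamically assigned ranges."""
          users_per_ip = defaultdict(set)
          for user_id, ip in login_events:          # e.g. parsed from server logs
              users_per_ip[ip].add(user_id)
          return {ip for ip, users in users_per_ip.items() if len(users) >= min_users}

      events = [("alice", "10.0.0.7"), ("bob", "10.0.0.7"), ("carol", "10.0.0.7"),
                ("dave", "10.0.0.7"), ("erin", "10.0.0.7"), ("frank", "192.0.2.1")]
      print(likely_dynamic_ips(events, min_users=5))   # {'10.0.0.7'}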

  • Ricardo V. Oliveira, Beichuan Zhang, and Lixia Zhang

    Characterizing the evolution of Internet topology is important to our understanding of the Internet architecture and its interplay with technical, economic and social forces. A major challenge in obtaining empirical data on topology evolution is to identify real topology changes from the observed topology changes, since the latter can be due to either topology changes or transient routing dynamics. In this paper, we formulate the topology liveness problem and propose a solution based on the analysis of BGP data. We find that the impact of transient routing dynamics on topology observation decreases exponentially over time, and that the real topology dynamics consist of a constant-rate birth process and a constant-rate death process. Our model enables us to infer real topology changes from observation data with a given confidence level. We demonstrate the usefulness of the model by applying it to three applications: providing more accurate views of the topology, evaluating theoretical evolution models, and empirically characterizing the trends of topology evolution. We find that customer networks and provider networks have distinct evolution trends, which can provide an important input to the design of future Internet routing architecture.

  • Priya Mahadevan, Calvin Hubble, Dmitri Krioukov, Bradley Huffaker, and Amin Vahdat

    Researchers involved in designing network services and protocols rely on results from simulation and emulation environments to evaluate correctness, performance and scalability. To better understand the behavior of these applications and to predict their performance when deployed across the Internet, the generated topologies that serve as input to simulated and emulated environments must closely match real network characteristics, not just in terms of graph structure (node interconnectivity) but also with respect to various node and link annotations. Relevant annotations include link latencies, AS membership and whether a router is a peering or internal router. Finally, it should be possible to rescale a given topology to a variety of sizes while still maintaining its essential characteristics.

    In this paper, we propose techniques to generate annotated, Internet router graphs of different sizes based on existing observations of Internet characteristics. We find that our generated graphs match a variety of graph properties of observed topologies for a range of target graph sizes. While the best available data of Internet topology currently remains imperfect, the quality of our generated topologies will improve with the fidelity of available measurement techniques or next generation architectures that make Internet structure more transparent.

  • Barath Raghavan, Kashi Vishwanath, Sriram Ramabhadran, Kenneth Yocum, and Alex C. Snoeren

    Today’s cloud-based services integrate globally distributed resources into seamless computing platforms. Provisioning and accounting for the resource usage of these Internet-scale applications presents a challenging technical problem. This paper presents the design and implementation of distributed rate limiters, which work together to enforce a global rate limit across traffic aggregates at multiple sites, enabling the coordinated policing of a cloud-based service’s network traffic. Our abstraction not only enforces a global limit, but also ensures that congestion-responsive transport-layer flows behave as if they traversed a single, shared limiter. We present two designs—one general purpose, and one optimized for TCP—that allow service operators to explicitly trade off between communication costs and system accuracy, efficiency, and scalability. Both designs are capable of rate limiting thousands of flows with negligible overhead (less than 3% in the tested configuration). We demonstrate that our TCP-centric design is scalable to hundreds of nodes while robust to both loss and communication delay, making it practical for deployment in nationwide service providers.
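
    One simple way to coordinate limiters, sketched below, is to periodically divide the global limit among sites in proportion to their recently observed demand, with each site enforcing its share through an ordinary local limiter. This is only an illustration of the coordination problem; the paper's general-purpose and TCP-specific designs use different estimators and exchange protocols.

      def local_allocations(global_limit_mbps, recent_demand_mbps):
          """Split a global rate limit across sites in proportion to each site's
          recently observed demand; each site enforces its share locally."""
          total = sum(recent_demand_mbps.values()) or 1.0
          return {site: global_limit_mbps * demand / total
                  for site, demand in recent_demand_mbps.items()}

      # Example: a 100 Mbps aggregate limit shared by three sites.
      print(local_allocations(100.0, {"us-east": 40.0, "eu-west": 10.0, "ap-south": 30.0}))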

  • Sumitha Bhandarkar, A. L. Narasimha Reddy, Yueping Zhang, and Dimitri Loguinov

    In this paper, we show that end-host based congestion prediction is more accurate than previously characterized. However, it may not be possible to entirely eliminate the uncertainties in congestion prediction. To address these uncertainties, we propose Probabilistic Early Response TCP (PERT). PERT emulates the behavior of AQM/ECN in the congestion response function of end-hosts. We present fluid-flow analysis of PERT/RED and PERT/PI, versions of PERT that emulate router-based RED and PI controllers. Our analysis shows that PERT/RED has better stability behavior than router-based RED. We also present results from ns-2 simulations to show the practical feasibility of PERT. The scheme presented here is general and can be used for emulating other AQM algorithms.

  • Bryan Ford

    Internet applications currently have a choice between stream and datagram transport abstractions. Datagrams efficiently support small transactions and streams are suited for long-running conversations, but neither abstraction adequately supports applications like HTTP that exhibit a mixture of transaction sizes, or applications like FTP and SIP that use multiple transport instances. Structured Stream Transport (SST) enhances the traditional stream abstraction with a hierarchical hereditary structure, allowing applications to create lightweight child streams from any existing stream. Unlike TCP streams, these lightweight streams incur neither 3-way handshaking delays on startup nor TIME-WAIT periods on close. Each stream offers independent data transfer and flow control, allowing different transactions to proceed in parallel without head-of-line blocking, but all streams share one congestion control context. SST supports both reliable and best-effort delivery in a way that semantically unifies datagrams with streams and solves the classic “large datagram” problem, where a datagram’s loss probability increases exponentially with fragment count. Finally, an application can prioritize its streams relative to each other and adjust priorities dynamically through out-of-band signaling. A user-space prototype shows that SST is TCP-friendly to within 2%, and performs comparably to a user-space TCP and to within 10% of kernel TCP on a WiFi network.
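
    The sketch below renders the structured-stream abstraction as a small object model: any stream can spawn lightweight child streams with no extra handshake, each child delivers data independently, and every stream in the connection shares one congestion-control context. Class and method names are hypothetical, chosen only to illustrate the abstraction, not SST's API.

      class CongestionContext:
          """Shared congestion state for every stream in one connection."""
          def __init__(self):
              self.cwnd = 10

      class Stream:
          def __init__(self, context, parent=None):
              self.context = context      # all streams share one congestion context
              self.parent = parent
              self.children = []
              self.buffer = []            # stand-in for independent per-stream delivery

          def create_child(self):
              """Spawn a lightweight substream: no new handshake, no new congestion
              state, independent data transfer and flow control."""
              child = Stream(self.context, parent=self)
              self.children.append(child)
              return child

          def send(self, data):
              self.buffer.append(data)

      conn = Stream(CongestionContext())          # root stream of a connection
      req1 = conn.create_child()
      req2 = conn.create_child()
      req1.send(b"GET /a")                        # transactions proceed in parallel,
      req2.send(b"GET /b")                        # free of head-of-line blocking
      print(len(conn.children), conn.context is req1.context is req2.context)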

  • Aruna Balasubramanian, Brian Levine, and Arun Venkataramani

    Routing protocols for disruption-tolerant networks (DTNs) use a variety of mechanisms, including discovering the meeting probabilities among nodes, packet replication, and network coding. The primary focus of these mechanisms is to increase the likelihood of finding a path with limited information, and so these approaches have only an incidental effect on such routing metrics as maximum or average delivery delay. In this paper, we present RAPID, an intentional DTN routing protocol that can optimize a specific routing metric such as the worst-case delivery delay or the fraction of packets that are delivered within a deadline. The key insight is to treat DTN routing as a resource allocation problem that translates the routing metric into per-packet utilities, which determine how packets should be replicated in the system.

    We evaluate RAPID rigorously through a prototype deployed over a vehicular DTN testbed of 40 buses and through simulations based on real traces. To our knowledge, this is the first paper to report on a routing protocol deployed on a real DTN at this scale. Our results suggest that RAPID significantly outperforms existing routing protocols for several metrics. We also show empirically that for small loads RAPID is within 10% of the optimal performance.

  • Ramakrishna Gummadi, David Wetherall, Ben Greenstein, and Srinivasan Seshan

    We study the impact on 802.11 networks of RF interference from devices such as Zigbee and cordless phones that increasingly crowd the 2.4GHz ISM band, and from devices such as wireless camera jammers and non-compliant 802.11 devices that seek to disrupt 802.11 operation. Our experiments show that commodity 802.11 equipment is surprisingly vulnerable to certain patterns of weak or narrow-band interference. This enables us to disrupt a link with an interfering signal whose power is 1000 times weaker than the victim’s 802.11 signals, or to shut down a multiple-AP, multiple-channel managed network at a location with a single radio interferer. We identify several factors that lead to these vulnerabilities, ranging from MAC-layer driver implementation strategies to PHY-layer radio frequency implementation strategies. Our results further show that these factors are not overcome by simply changing 802.11 operational parameters (such as CCA threshold, rate and packet size), with the exception of frequency shifts. This leads us to explore rapid channel hopping as a strategy to withstand RF interference. We prototype a channel hopping design using PRISM NICs, and find that it can sustain throughput at levels of RF interference well above those needed to disrupt unmodified links, and at a reasonable cost in terms of switching overheads.

  • Sachin Katti, Shyamnath Gollakota, and Dina Katabi

    Traditionally, interference is considered harmful. Wireless networks strive to avoid scheduling multiple transmissions at the same time in order to prevent interference. This paper adopts the opposite approach; it encourages strategically picked senders to interfere. Instead of forwarding packets, routers forward the interfering signals. The destination leverages network-level information to cancel the interference and recover the signal destined to it. The result is analog network coding because it mixes signals not bits.

    So, what if wireless routers forward signals instead of packets? Theoretically, such an approach doubles the capacity of the canonical 2-way relay network. Surprisingly, it is also practical. We implement our design using software radios and show that it achieves significantly higher throughput than both traditional wireless routing and prior work on wireless network coding.
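
    The arithmetic behind the 2-way relay example is simple, as the idealized sketch below shows: the relay forwards the sum of the two endpoints' signals, and each endpoint subtracts the signal it transmitted to recover the other's. A real implementation must additionally estimate and compensate amplitude, phase, and frequency offsets and cope with noise, which this toy example ignores.

      alice = [0.3, -1.0, 0.7, 0.1]       # samples Alice transmitted
      bob   = [-0.5, 0.4, 0.2, -0.9]      # samples Bob transmitted

      # The relay forwards the interfered (summed) signal without decoding it.
      relay_out = [a + b for a, b in zip(alice, bob)]

      # Each endpoint knows its own signal and cancels it from the relayed sum.
      bob_recovers   = [y - b for y, b in zip(relay_out, bob)]
      alice_recovers = [y - a for y, a in zip(relay_out, alice)]
      print(all(abs(x - y) < 1e-9 for x, y in zip(bob_recovers, alice)))    # True
      print(all(abs(x - y) < 1e-9 for x, y in zip(alice_recovers, bob)))    # True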

  • Kyle Jamieson and Hari Balakrishnan

    Bit errors occur in wireless communication when interference or noise overcomes the coded and modulated transmission. Current wireless protocols may use forward error correction (FEC) to correct some small number of bit errors, but generally retransmit the whole packet if the FEC is insufficient. We observe that current wireless mesh network protocols retransmit a number of packets and that most of these retransmissions end up sending bits that have already been received multiple times, wasting network capacity. To overcome this inefficiency, we develop, implement, and evaluate a partial packet recovery (PPR) system.

    PPR incorporates two new ideas: (1) SoftPHY, an expanded physical layer (PHY) interface that provides PHY-independent hints to higher layers about the PHY’s confidence in each bit it decodes, and (2) a postamble scheme to recover data even when a packet preamble is corrupted and not decodable at the receiver. Finally, we present PP-ARQ, an asynchronous link-layer ARQ protocol built on PPR that allows a receiver to compactly encode a request for retransmission of only those bits in a packet that are likely in error. Our experimental results from a 31-node Zigbee (802.15.4) testbed that includes Telos motes with 2.4 GHz Chipcon radios and GNU Radio nodes implementing the 802.15.4 standard show that PP-ARQ increases end-to-end capacity by a factor of 2× under moderate load.
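
    The sketch below shows the flavor of a PP-ARQ retransmission request: per-bit confidence hints from the PHY are turned into a compact list of (start, length) runs that are likely in error, so only those chunks need to be resent. The threshold and run encoding are illustrative, not the actual PP-ARQ wire format.

      def chunks_to_retransmit(bit_confidences, threshold=0.9):
          """Turn per-bit confidence hints (SoftPHY-style, values in [0, 1]) into a
          compact list of (start, length) runs the receiver asks to have resent."""
          runs, start = [], None
          for i, c in enumerate(bit_confidences):
              if c < threshold and start is None:
                  start = i
              elif c >= threshold and start is not None:
                  runs.append((start, i - start))
                  start = None
          if start is not None:
              runs.append((start, len(bit_confidences) - start))
          return runs

      conf = [0.99] * 40 + [0.42] * 12 + [0.98] * 30 + [0.55] * 6 + [0.97] * 12
      print(chunks_to_retransmit(conf))   # [(40, 12), (82, 6)]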

  • Umar Saif, Ahsan Latif Chudhary, Shakeel Butt, and Nabeel Farooq Butt

    In this paper we present a peer-to-peer dialup architecture for accelerated “Internet access” in the developing world. Our proposed architecture provides a mechanism for multiplexing the scarce and expensive international Internet bandwidth over higher bandwidth p2p dialup connections within a developing country. Our system combines a number of architectural components, such as incentive-driven p2p data transfer, intelligent connection interleaving and content-prefetching. This paper presents a detailed design, implementation and evaluation of our dialup p2p data transfer architecture inspired by Bittorrent.

  • Kun-chan Lan, Zhe Wang, Mahbub Hassan, Tim Moors, Rodney Berriman, Lavy Libman, Maximilian Ott, Bjorn Landfeldt, and Zainab Zaidi

    Wireless mesh networks (WMN) have attracted considerable interest in recent years as a convenient, flexible and low-cost alternative to wired communication infrastructures in many contexts. However, the great majority of research on metropolitan-scale WMN has been centered around maximization of available bandwidth, suitable for non-real-time applications such as Internet access for the general public. On the other hand, the suitability of WMN for mission-critical infrastructure applications remains by and large unknown, as protocols typically employed in WMN are, for the most part, not designed for real-time communications. In this paper, we describe the Smart Transport and Roads Communications (STaRComm) project at National ICT Australia (NICTA), which sets a goal of designing a wireless mesh network architecture to solve the communication needs of the traffic control system in Sydney, Australia. This system, known as SCATS (Sydney Coordinated Adaptive Traffic System) and used in over 100 cities around the world, connects a hierarchy of several thousand devices — from individual traffic light controllers to regional computers and the central Traffic Management Centre (TMC) — and places stringent requirements on the reliability and latency of the data exchanges. We discuss our experience in the deployment of an initial testbed consisting of 7 mesh nodes placed at intersections with traffic lights, and share the results and insights learned from our measurements and initial trials in the process.

  • Yao Shen, Yunze Cai, and Xiaoming Xu

    In this paper, we present a shortest-path-based algorithm, called local shortest path (LSP), for topology control in wireless multihop networks. In this algorithm, each node locally computes the shortest paths connecting itself to nearby nodes based on some link weight function, and then it selects all the second nodes on the shortest paths as its logical neighbors in the final topology. Any energy model can be employed in LSP to design the link weight function whose value represents the power consumption required in the transmission along a link. We analytically prove that such a simple algorithm maintains network connectivity and guarantees that the minimal energy path between any two nodes is preserved in the final topology. Simulation results show that LSP can reduce the energy consumption, especially in heterogeneous networks.

  • Fragkiskos Papadopoulos and Konstantinos Psounis

    It has recently been suggested that uncongested links can be completely ignored when evaluating the Internet’s performance. In particular, based on the observation that only the congested links along the path of each flow introduce sizable queueing delays and dependencies among flows, it has been shown that one can infer the performance of the larger Internet by creating and observing a suitably scaled-down replica consisting of the congested links only. Given that the majority of Internet links are uncongested, it has been demonstrated that this approach can be used to greatly simplify and expedite performance prediction.

    However, an important open problem, directly related to the practicability of such an approach, is whether there exist efficient and scalable ways of identifying uncongested links in large and complex Internet-like networks. Such a question is important not only for scaling down the Internet’s topology, but also in many other contexts, such as traffic engineering and capacity planning.

    In this paper we present simple rules that can be used to efficiently identify uncongested Internet links. In particular, we first identify scenarios under which one can easily deduce whether a link is uncongested by inspecting the network topology. Then, we identify scenarios in which this is not possible, and propose an efficient methodology, based on the large deviations theory and flow-level statistics, to approximate the queue length distribution, and in turn, to deduce the congestion level of a link. We also demonstrate how simple commonly used metrics, such as the link utilization, can be quite misleading in classifying an Internet link.

  • David Salyers, Yingxin Jiang, Aaron Striegel, and Christian Poellabauer

    Network line speeds have increased at a significant rate. Unfortunately, network performance has not been able to keep pace with increases in line speed. This is largely because the majority of packets are 100 bytes or smaller and because network routers do not scale well with the increased number of packets. In this paper we present our solution, JumboGen, an approach that allows for higher utilization of larger packet sizes on a domain-wise basis. Through simulations and experimentation, we show that the dynamic creation of jumbo packets decreases the number of packets processed by core routers and does not have an adverse impact on link utilization or fairness. The final result of JumboGen is a reduction in the number of packets seen by core routers, which directly improves network scalability.

  • Moritz Steiner, Taoufik En-Najjary, and Ernst W. Biersack

    Peer-to-peer systems have seen tremendous growth in the last few years, and peer-to-peer traffic makes up a major fraction of the total traffic seen in the Internet. The dominating application for peer-to-peer is file sharing. Some of the most popular peer-to-peer systems for file sharing have been Napster, FastTrack, BitTorrent, and eDonkey, each counting a million or more users at its peak.

    We became interested in kad because it is the only DHT that has been part of a very popular peer-to-peer system with several million simultaneous users. Studying kad over the course of the last 18 months, we have been both fascinated and frightened by the possibilities it offers. Mounting a Sybil attack is very easy in kad and makes it possible to compromise the privacy of kad users, to compromise the correct operation of the key lookup, and to mount DDoS attacks with very few resources.

    In this paper, we will relate some of our findings and point out how kad can be used and misused.

  • Matthew Roughan and Darryl Veitch

    Keywords: Fractal Renewal Process, Long-range dependence, wavelet estimator, Hurst parameter, sampling, discontinuities.
