CCR Papers from October 2009

  • Mythili Vutukuru, Hari Balakrishnan, and Kyle Jamieson

    This paper presents SoftRate, a wireless bit rate adaptation protocol that is responsive to rapidly varying channel conditions. Unlike previous work that uses either frame receptions or signal-to-noise ratio (SNR) estimates to select bit rates, SoftRate uses confidence information calculated by the physical layer and exported to higher layers via the SoftPHY interface to estimate the prevailing channel bit error rate (BER). Senders use this BER estimate, calculated over each received packet (even when the packet has no bit errors), to pick good bit rates. SoftRate’s novel BER computation works across different wireless environments and hardware without requiring any retraining. SoftRate also uses abrupt changes in the BER estimate to identify interference, enabling it to reduce the bit rate only in response to channel errors caused by attenuation or fading. Our experiments conducted using a software radio prototype show that SoftRate achieves 2x higher throughput than popular frame-level protocols such as SampleRate [4] and RRAA [24]. It also achieves 20% more throughput than an SNR-based protocol trained on the operating environment, and up to 4x higher throughput than an untrained SNR-based protocol. The throughput gains using SoftRate stem from its ability to react to channel variations within a single packet-time and its robustness to collision losses.
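    The rate-selection loop the abstract describes (map a per-packet BER estimate to the fastest bit rate that tolerates it, and ignore abrupt BER spikes that look like collisions rather than fading) can be illustrated with a short sketch. The rate table, thresholds, and spike test below are illustrative placeholders, not SoftRate's actual parameters.

```python
# Illustrative sketch of BER-driven bit rate selection (not the authors' code).
RATE_BER_THRESHOLDS = [            # (rate in Mbit/s, max tolerable BER) -- made-up values
    (54, 1e-5), (48, 1e-4), (36, 1e-3), (24, 3e-3),
    (18, 1e-2), (12, 3e-2), (9, 5e-2), (6, 1e-1),
]

def pick_rate(ber_estimate):
    """Return the fastest rate whose BER threshold tolerates the estimate."""
    for rate, max_ber in RATE_BER_THRESHOLDS:
        if ber_estimate <= max_ber:
            return rate
    return RATE_BER_THRESHOLDS[-1][0]          # fall back to the most robust rate

def next_rate(prev_ber, new_ber, spike_factor=100.0):
    """Treat an abrupt jump in BER as interference and keep rating on the old estimate."""
    if prev_ber > 0 and new_ber / prev_ber > spike_factor:
        return pick_rate(prev_ber)             # likely a collision: do not slow down
    return pick_rate(new_ber)                  # channel change: adapt to the new estimate

print(pick_rate(5e-4), next_rate(5e-4, 9e-2))  # 36 Mbit/s; the spike is treated as a collision
```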

  • Aveek Dutta, Dola Saha, Dirk Grunwald, and Douglas Sicker

    Network protocol designers, both at the physical and network level, have long considered interference and simultaneous transmission in wireless protocols as a problem to be avoided. This, coupled with a tendency to emulate wired network protocols in the wireless domain, has led to artificial limitations in wireless networks. In this paper, we argue that wireless protocols can exploit simultaneous transmission to reduce the cost of reliable multicast by orders of magnitude. With an appropriate application interface, simultaneous transmission can also greatly speed up common group communication primitives, such as anycast, broadcast, leader election and others.

    The proposed method precisely fits into the domain of directly reachable nodes, where many group communication mechanisms are commonly used in routing protocols and other physical-layer mechanisms. We demonstrate how simultaneous transmission can be used to implement a reliable broadcast for an infrastructure and peer-to-peer network using prototype reconfigurable hardware. We also validate the notion of using simple spectrum sensing techniques to distinguish multiple transmissions. We then describe how the mechanism can be extended to solve group communication problems, and the challenges inherent in building innovative protocols that are both fast and reliable.

  • Paramvir Bahl, Ranveer Chandra, Thomas Moscibroda, Rohan Murty, and Matt Welsh

    Networking over UHF white spaces is fundamentally different from conventional Wi-Fi along three axes: spatial variation, temporal variation, and fragmentation of the UHF spectrum. Each of these differences gives rise to new challenges for implementing a wireless network in this band. We present the design and implementation of WhiteFi, the first Wi-Fi-like system constructed on top of UHF white spaces. WhiteFi incorporates a new adaptive spectrum assignment algorithm to handle spectrum variation and fragmentation, and proposes a low overhead protocol to handle temporal variation. WhiteFi builds on a simple technique, called SIFT, that reduces the time to detect transmissions in variable channel width systems by analyzing raw signals in the time domain. We provide an extensive system evaluation in terms of a prototype implementation and detailed experimental and simulation results.
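    SIFT detects transmissions of unknown channel width by analyzing raw samples in the time domain. The sketch below is a much simpler stand-in for that idea, a sliding-window energy detector that needs no FFT or prior knowledge of channel width; it is not SIFT itself, and the window size and threshold are assumptions.

```python
import numpy as np

def detect_bursts(samples, win=64, ratio_db=6.0):
    """Flag samples whose short-term energy rises well above the estimated noise floor."""
    power = np.abs(samples) ** 2
    smoothed = np.convolve(power, np.ones(win) / win, mode="same")
    noise_floor = np.median(smoothed)                      # robust noise estimate
    threshold = noise_floor * 10 ** (ratio_db / 10)
    return smoothed > threshold                            # boolean mask: "transmission present"

# Synthetic example: Gaussian noise with a burst in the middle of the trace.
rng = np.random.default_rng(0)
sig = rng.normal(0, 0.1, 4096) + 0j
sig[1500:2500] += rng.normal(0, 1.0, 1000)
mask = detect_bursts(sig)
print("burst occupies %.1f%% of the trace" % (mask.mean() * 100))
```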

  • Radhika Niranjan Mysore, Andreas Pamboris, Nathan Farrington, Nelson Huang, Pardis Miri, Sivasankar Radhakrishnan, Vikram Subramanya, and Amin Vahdat

    This paper considers the requirements for a scalable, easily manageable, fault-tolerant, and efficient data center network fabric. Trends in multi-core processors, end-host virtualization, and commodities of scale are pointing to future single-site data centers with millions of virtual end points. Existing layer 2 and layer 3 network protocols face some combination of limitations in such a setting: lack of scalability, difficult management, inflexible communication, or limited support for virtual machine migration. To some extent, these limitations may be inherent for Ethernet/IP style protocols when trying to support arbitrary topologies. We observe that data center networks are often managed as a single logical network fabric with a known baseline topology and growth model. We leverage this observation in the design and implementation of PortLand, a scalable, fault tolerant layer 2 routing and forwarding protocol for data center environments. Through our implementation and evaluation, we show that PortLand holds promise for supporting a “plug-and-play” large-scale, data center network.

  • Albert Greenberg, James R. Hamilton, Navendu Jain, Srikanth Kandula, Changhoon Kim, Parantap Lahiri, David A. Maltz, Parveen Patel, and Sudipta Sengupta

    To be agile and cost effective, data centers should allow dynamic resource allocation across large server pools. In particular, the data center network should enable any server to be assigned to any service. To meet these goals, we present VL2, a practical network architecture that scales to support huge data centers with uniform high capacity between servers, performance isolation between services, and Ethernet layer-2 semantics. VL2 uses (1) flat addressing to allow service instances to be placed anywhere in the network, (2) Valiant Load Balancing to spread traffic uniformly across network paths, and (3) end-system based address resolution to scale to large server pools, without introducing complexity to the network control plane. VL2’s design is driven by detailed measurements of traffic and fault data from a large operational cloud service provider. VL2’s implementation leverages proven network technologies, already available at low cost in high-speed hardware implementations, to build a scalable and reliable network architecture. As a result, VL2 networks can be deployed today, and we have built a working prototype. We evaluate the merits of the VL2 design using measurement, analysis, and experiments. Our VL2 prototype shuffles 2.7 TB of data among 75 servers in 395 seconds – sustaining a rate that is 94% of the maximum possible.
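    Valiant Load Balancing, item (2) above, spreads traffic by bouncing each flow off a randomly chosen intermediate switch. A minimal sketch of per-flow intermediate selection follows; the switch names and the use of a 5-tuple hash are illustrative assumptions, not VL2's exact mechanism.

```python
import hashlib

INTERMEDIATE_SWITCHES = ["int-1", "int-2", "int-3", "int-4"]   # illustrative switch names

def vlb_intermediate(src_ip, dst_ip, src_port, dst_port, proto):
    """Pick a random but flow-consistent intermediate switch (Valiant Load Balancing).

    Hashing the 5-tuple keeps every packet of a flow on one path (avoiding reordering)
    while spreading different flows uniformly across the intermediate switches.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int.from_bytes(hashlib.sha1(key).digest()[:8], "big")
    return INTERMEDIATE_SWITCHES[digest % len(INTERMEDIATE_SWITCHES)]

print(vlb_intermediate("10.0.1.5", "10.0.9.7", 49152, 80, "tcp"))
```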

  • Chuanxiong Guo, Guohan Lu, Dan Li, Haitao Wu, Xuan Zhang, Yunfeng Shi, Chen Tian, Yongguang Zhang, and Songwu Lu

    This paper presents BCube, a new network architecture specifically designed for shipping-container based, modular data centers. At the core of the BCube architecture is its server-centric network structure, where servers with multiple network ports connect to multiple layers of COTS (commodity off-the-shelf) mini-switches. Servers act as not only end hosts, but also relay nodes for each other. BCube supports various bandwidth-intensive applications by speeding up one-to-one, one-to-several, and one-to-all traffic patterns, and by providing high network capacity for all-to-all traffic.

    BCube exhibits graceful performance degradation as the server and/or switch failure rate increases. This property is of special importance for shipping-container data centers, since once the container is sealed and operational, it becomes very difficult to repair or replace its components.

    Our implementation experiences show that BCube can be seamlessly integrated with the TCP/IP protocol stack and BCube packet forwarding can be efficiently implemented in both hardware and software. Experiments in our testbed demonstrate that BCube is fault tolerant and load balancing, and it significantly accelerates representative bandwidth-intensive applications.
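    As a rough illustration of such a server-centric structure, the sketch below follows the published BCube(n, k) construction: servers are addressed by k+1 base-n digits, and two servers whose addresses differ in exactly one digit reach each other through a switch at that digit's level. The digit-ordering convention and parameters here are assumptions for illustration only.

```python
from itertools import product

def bcube_switch(addr, level):
    """Switch a server uses at a given level: its address with the digit at `level` removed."""
    return (level, tuple(d for i, d in enumerate(addr) if i != level))

def one_hop_neighbors(addr, n):
    """Servers reachable through a single switch: all addresses differing in exactly one digit."""
    neighbors = []
    for level in range(len(addr)):
        for digit in range(n):
            if digit != addr[level]:
                other = list(addr)
                other[level] = digit
                neighbors.append((level, tuple(other)))
    return neighbors

# BCube with n=4 ports per switch and k=1 (two levels): addresses are 2 digits, base 4.
n, k = 4, 1
servers = list(product(range(n), repeat=k + 1))
print(len(servers), "servers")                     # 16 servers
print(bcube_switch((1, 2), level=1))               # the level-1 switch server (1,2) attaches to
print(one_hop_neighbors((1, 2), n))                # 6 neighbors, 3 per level
```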

  • Yinglian Xie, Fang Yu, and Martin Abadi

    Today’s Internet is open and anonymous. While it permits free traffic from any host, attackers that generate malicious traffic cannot typically be held accountable. In this paper, we present a system called HostTracker that tracks dynamic bindings between hosts and IP addresses by leveraging application-level data with unreliable IDs. Using a month-long user login trace from a large email provider, we show that HostTracker can attribute most of the activities reliably to the responsible hosts, despite the existence of dynamic IP addresses, proxies, and NATs. With this information, we are able to analyze the host population, to conduct forensic analysis, and also to blacklist malicious hosts dynamically.
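    The core bookkeeping task (turning application-level sightings of a host on an IP into time-bounded host-to-IP bindings) can be sketched simply. This is an illustrative reduction, not the authors' algorithm; the event format and the one-hour gap threshold are assumptions.

```python
from collections import defaultdict

def binding_windows(events, gap=3600):
    """Group login events into per-host IP binding windows.

    `events` is an iterable of (host_id, ip, timestamp) tuples, e.g. from webmail logins.
    Consecutive sightings of the same host on the same IP within `gap` seconds are merged
    into one (ip, start, end) window; a long gap or an IP change starts a new window.
    """
    windows = defaultdict(list)                    # host_id -> [(ip, start, end), ...]
    for host, ip, ts in sorted(events, key=lambda e: e[2]):
        spans = windows[host]
        if spans and spans[-1][0] == ip and ts - spans[-1][2] <= gap:
            spans[-1] = (ip, spans[-1][1], ts)     # extend the current binding
        else:
            spans.append((ip, ts, ts))             # new binding window
    return windows

events = [("hostA", "1.2.3.4", 100), ("hostA", "1.2.3.4", 900),
          ("hostA", "5.6.7.8", 9000), ("hostB", "1.2.3.4", 9100)]
print(dict(binding_windows(events)))
```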

  • Ashok Anand, Vyas Sekar, and Aditya Akella

    Application-independent Redundancy Elimination (RE), or identifying and removing repeated content from network transfers, has been used with great success for improving network performance on enterprise access links. Recently, there has been growing interest in supporting RE as a network-wide service. Such a network-wide RE service benefits ISPs by reducing link loads and increasing the effective network capacity to better accommodate the increasing number of bandwidth-intensive applications. Further, a network-wide RE service democratizes the benefits of RE to all end-to-end traffic and improves application performance by increasing throughput and reducing latencies.

    While the vision of a network-wide RE service is appealing, realizing it in practice is challenging. In particular, extending single-vantage-point RE solutions designed for enterprise access links to the network-wide case is inefficient and/or requires modifying routing policies. We present SmartRE, a practical and efficient architecture for network-wide RE. We show that SmartRE can enable more effective utilization of the available resources at network devices, and thus can magnify the overall benefits of network-wide RE. We prototype our algorithms using Click and test our framework extensively using several real and synthetic traces.
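    For readers unfamiliar with packet-level RE itself, a toy version of the underlying step (fingerprint regions of a payload and replace regions already present in a shared cache with short references) is sketched below. This is a generic illustration, not SmartRE's caching or encoding scheme; production RE systems typically use Rabin fingerprints with value sampling rather than the naive SHA-1 windows used here.

```python
import hashlib

WINDOW = 32   # bytes fingerprinted at a time (illustrative)

def encode(payload, cache):
    """Greedy pass: regions whose fingerprint is already in `cache` become references,
    everything else is emitted literally (and cached for later packets)."""
    out, pos = [], 0
    while pos + WINDOW <= len(payload):
        fp = hashlib.sha1(payload[pos:pos + WINDOW]).digest()[:8]
        if fp in cache:
            out.append(("ref", fp))                # repeated content: send only a reference
            pos += WINDOW
        else:
            cache[fp] = payload[pos:pos + WINDOW]
            out.append(("lit", payload[pos:pos + 1]))
            pos += 1
    out.append(("lit", payload[pos:]))             # trailing bytes shorter than one window
    return out

cache = {}
encode(b"A" * 64 + b"unique-tail", cache)          # first packet mostly populates the cache
print(encode(b"A" * 64 + b"other-tail", cache))    # repeated prefix collapses to two references
```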

  • Aditya Dhananjay, Hui Zhang, Jinyang Li, and Lakshminarayanan Subramanian

    Realizing the full potential of a multi-radio mesh network involves two main challenges: how to assign channels to radios at each node to minimize interference and how to choose high throughput routing paths in the face of lossy links, variable channel conditions and external load. This paper presents ROMA, a practical, distributed channel assignment and routing protocol that achieves good multi-hop path performance between every node and one or more designated gateway nodes in a dual-radio network. ROMA assigns nonoverlapping channels to links along each gateway path to eliminate intra-path interference. ROMA reduces inter-path interference by assigning different channels to paths destined for different gateways whenever possible. Evaluations on a 24-node dual-radio testbed show that ROMA achieves high throughput in a variety of scenarios.

  • P. Brighten Godfrey, Igor Ganichev, Scott Shenker, and Ion Stoica

    We present a new routing protocol, pathlet routing, in which networks advertise fragments of paths, called pathlets, that sources concatenate into end-to-end source routes. Intuitively, the pathlet is a highly flexible building block, capturing policy constraints as well as enabling an exponentially large number of path choices. In particular, we show that pathlet routing can emulate the policies of BGP, source routing, and several recent multipath proposals.

    This flexibility lets us address two major challenges for Internet routing: scalability and source-controlled routing. When a router's routing policy has only “local” constraints, it can be represented using a small number of pathlets, leading to very small forwarding tables and many choices of routes for senders. Crucially, pathlet routing does not impose a global requirement on what style of policy is used, but rather allows multiple styles to coexist. The protocol thus supports complex routing policies while enabling and incentivizing the adoption of policies that yield small forwarding plane state and a high degree of path choice.
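    The basic operation the abstract describes, a source stitching advertised pathlets into an end-to-end source route, can be sketched as a small graph search. The toy vnode graph and forwarding identifiers below are invented for illustration.

```python
from collections import defaultdict

# A pathlet: (first vnode, last vnode, list of forwarding identifiers).  Illustrative data.
PATHLETS = [
    ("A", "B", [11]), ("B", "C", [7]), ("B", "D", [9]),
    ("C", "E", [3]),  ("D", "E", [5]),
]

def build_route(src, dst, pathlets):
    """Concatenate advertised pathlets into one end-to-end source route (BFS over vnodes)."""
    by_start = defaultdict(list)
    for s, e, fids in pathlets:
        by_start[s].append((e, fids))
    frontier, seen = [(src, [])], {src}
    while frontier:
        vnode, route = frontier.pop(0)
        if vnode == dst:
            return route                       # forwarding ids the source places in the header
        for nxt, fids in by_start[vnode]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, route + fids))
    return None

print(build_route("A", "E", PATHLETS))         # e.g. [11, 7, 3]
```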

  • Asfandyar Qureshi, Rick Weber, Hari Balakrishnan, John Guttag, and Bruce Maggs

    Energy expenses are becoming an increasingly important fraction of data center operating costs. At the same time, the energy expense per unit of computation can vary significantly between two different locations. In this paper, we characterize the variation due to fluctuating electricity prices and argue that existing distributed systems should be able to exploit this variation for significant economic gains. Electricity prices exhibit both temporal and geographic variation, due to regional demand differences, transmission inefficiencies, and generation diversity. Starting with historical electricity prices for twenty-nine locations in the US and network traffic data collected on Akamai’s CDN, we use simulation to quantify the possible economic gains for a realistic workload. Our results imply that existing systems may be able to save millions of dollars a year in electricity costs, by being cognizant of locational computation cost differences.
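    The kind of policy the paper argues for can be illustrated with a toy request-routing rule: among replicas that satisfy a latency bound, prefer the location with the cheapest electricity at the moment. The site names, prices, and latency budget below are made up for illustration.

```python
# Illustrative price-aware replica selection (not the authors' simulation framework).
SITES = {                      # site -> ($/MWh right now, RTT to this client in ms)
    "virginia":   (55.0, 20),
    "california": (80.0, 75),
    "texas":      (30.0, 45),
}

def pick_site(sites, latency_budget_ms=60):
    candidates = {s: v for s, v in sites.items() if v[1] <= latency_budget_ms}
    if not candidates:
        return min(sites, key=lambda s: sites[s][1])        # nothing fits: minimize latency
    return min(candidates, key=lambda s: candidates[s][0])  # cheapest power that fits

print(pick_site(SITES))        # -> "texas"
```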

  • Randy Baden, Adam Bender, Neil Spring, Bobby Bhattacharjee, and Daniel Starin

    Online social networks (OSNs) are immensely popular, with some claiming over 200 million users [10]. Users share private content, such as personal information or photographs, using OSN applications. Users must trust the OSN service to protect personal information even as the OSN provider benefits from examining and sharing that information.

    We present Persona, an OSN where users dictate who may access their information. Persona hides user data with attribute-based encryption (ABE), allowing users to apply fine-grained policies over who may view their data. Persona provides an effective means of creating applications in which users, not the OSN, define policy over access to private data.

    We demonstrate new cryptographic mechanisms that enhance the general applicability of ABE. We show how Persona provides the functionality of existing online social networks with additional privacy benefits. We describe an implementation of Persona that replicates Facebook applications and show that Persona provides acceptable performance when browsing privacy-enhanced web pages, even on mobile devices.

  • Micah Z. Brodsky and Robert T. Morris

    Carrier sense is often used to regulate concurrency in wireless medium access control (MAC) protocols, balancing interference protection and spatial reuse. Carrier sense is known to be imperfect, and many improved techniques have been proposed. Is the search for a replacement justified? This paper presents a theoretical model for average case two-sender carrier sense based on radio propagation theory and Shannon capacity. Analysis using the model shows that carrier sense performance is surprisingly close to optimal for radios with adaptive bitrate. The model suggests that hidden and exposed terminals usually cause modest reductions in throughput rather than dramatic decreases. Finally, it is possible to choose a fixed sense threshold which performs well across a wide range of scenarios, in large part due to the role of the noise floor. Experimental results from an indoor 802.11 testbed support these claims.
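    The comparison at the heart of the model (do two senders gain more by transmitting concurrently at reduced SINR, or by deferring to each other and time-sharing the channel at full SNR?) can be sketched with the Shannon capacity formula the abstract mentions. The specific SNR and interference values below are made-up examples, and this two-line model is far simpler than the paper's analysis.

```python
import math

def shannon(bandwidth_hz, sinr):
    """Shannon capacity C = B * log2(1 + SINR), in bits per second."""
    return bandwidth_hz * math.log2(1 + sinr)

def compare(bandwidth_hz, snr1, snr2, inter12, inter21):
    """Aggregate rate if both senders transmit concurrently vs. alternate in time.

    snrX: linear SNR at receiver X from its own sender; interXY: interference power
    at receiver X from the other sender, expressed relative to the noise floor.
    """
    concurrent = shannon(bandwidth_hz, snr1 / (1 + inter12)) + \
                 shannon(bandwidth_hz, snr2 / (1 + inter21))
    time_shared = 0.5 * shannon(bandwidth_hz, snr1) + 0.5 * shannon(bandwidth_hz, snr2)
    return concurrent, time_shared

# Weak cross-interference: concurrency wins.  Strong interference: deferring wins.
print(compare(20e6, snr1=100, snr2=100, inter12=1, inter21=1))
print(compare(20e6, snr1=100, snr2=100, inter12=80, inter21=80))
```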

  • Shyamnath Gollakota, Samuel David Perli, and Dina Katabi

    The throughput of existing MIMO LANs is limited by the number of antennas on the AP. This paper shows how to overcome this limitation. It presents interference alignment and cancellation (IAC), a new approach for decoding concurrent sender-receiver pairs in MIMO networks. IAC synthesizes two signal processing techniques, interference alignment and interference cancellation, showing that the combination applies to scenarios where neither interference alignment nor cancellation applies alone. We show analytically that IAC almost doubles the throughput of MIMO LANs. We also implement IAC in GNU-Radio, and experimentally demonstrate that for 2x2 MIMO LANs, IAC increases the average throughput by 1.5x on the downlink and 2x on the uplink.

  • Xi Liu, Anmol Sheth, Michael Kaminsky, Konstantina Papagiannaki, Srinivasan Seshan, and Peter Steenkiste

    The demand for wireless bandwidth in indoor environments such as homes and offices continues to increase rapidly. Although wireless technologies such as MIMO can reach link throughputs of 100s of Mbps (802.11n) for a single link, the question of how we can deliver high throughput to a large number of densely-packed devices remains an open problem. Directional antennas have been shown to be an effective way to increase spatial reuse, but past work has focused largely on outdoor environments where the interactions between wireless links can usually be ignored. This assumption is not acceptable in dense indoor wireless networks since indoor deployments need to deal with rich scattering and multipath effects. In this paper we introduce DIRC, a wireless network design whose access points use phased array antennas to achieve high throughput in dense, indoor environments. The core of DIRC is an algorithm that increases spatial reuse and maximizes overall network capacity by optimizing the orientations of a network of directional antennas. We implemented DIRC and evaluated it on a nine node network in an enterprise setting. Our results show that DIRC improves overall network capacity in indoor environments, while being flexible enough to adapt to node mobility and changing traffic workloads.

  • Ashley Flavel and Matthew Roughan

    Routing oscillation is highly detrimental. It can decrease performance and lead to a high level of update churn, placing unnecessary workload on routers. In inter-domain routing, such oscillation is perhaps unsurprising, since the problem is distributed between many providers. However, iBGP — the routing protocol used to distribute routes inside a single Autonomous System — has also been shown to oscillate. Despite the fact that iBGP is configured by a single provider according to apparently straightforward rules, more than eight years of research has not solved the problem of iBGP oscillation. Various solutions have been proposed, but they all lack critical features: either they are complicated to implement, restrict routing flexibility, or lack guarantees of stability. In this paper we propose a very simple adaptation to the BGP decision process. Despite its simplicity and negligible cost, we prove algebraically that it prevents iBGP oscillation. We extend the idea to provide routing flexibility, such as respecting the MED attribute, without sacrificing network stability.

  • Petri Jokela, András Zahemszky, Christian Esteve Rothenberg, Somaya Arianfar, and Pekka Nikander

    A large fraction of today’s Internet applications are internally publish/subscribe in nature; the current architecture makes it cumbersome and inept to support them. In essence, supporting efficient publish/subscribe requires data-oriented naming, efficient multicast, and in-network caching. Deployment of native IP-based multicast has failed, and overlay-based multicast systems are inherently inefficient. We surmise that scalable and efficient publish/subscribe will require substantial architectural changes, such as moving from endpoint-oriented systems to information-centric architectures.

    In this paper, we propose a novel multicast forwarding fabric, suitable for large-scale topic-based publish/subscribe. Due to very simple forwarding decisions and small forwarding tables, the fabric may be more energy efficient than the currently used ones. To understand the limitations and potential, we provide efficiency and scalability analysis via simulations and early measurements from our two implementations. We show that the system scales up to metropolitan WAN sizes, and we discuss how to interconnect separate networks.

  • Lorenzo De Carli, Yi Pan, Amit Kumar, Cristian Estan, and Karthikeyan Sankaralingam

    New protocols for the data link and network layer are being proposed to address limitations of current protocols in terms of scalability, security, and manageability. High-speed routers and switches that implement these protocols traditionally perform packet processing using ASICs which offer high speed, low chip area, and low power. But with inflexible custom hardware, the deployment of new protocols could happen only through equipment upgrades. While newer routers use more flexible network processors for data plane processing, due to power and area constraints lookups in forwarding tables are done with custom lookup modules. Thus most of the proposed protocols can only be deployed with equipment upgrades.

    To speed up the deployment of new protocols, we propose a flexible lookup module, PLUG (Pipelined Lookup Grid). We can achieve generality without losing efficiency because various custom lookup modules have the same fundamental features we retain: area dominated by memories, simple processing, and strict access patterns defined by the data structure. We implemented IPv4, Ethernet, Ethane, and SEATTLE in our dataflow-based programming model for the PLUG and mapped them to the PLUG hardware which consists of a grid of tiles. Throughput, area, power, and latency of PLUGs are close to those of specialized lookup modules.

  • Yu-Wei Eric Sung, Carsten Lund, Mark Lyn, Sanjay G. Rao, and Subhabrata Sen

    Business and economic considerations are driving the extensive use of service differentiation in Virtual Private Networks (VPNs) operated for business enterprises today. The resulting Class of Service (CoS) designs embed complex policy decisions based on the described priorities of various applications, extent of bandwidth availability, and cost considerations. These inherently complex high-level policies are realized through low-level router configurations. The configuration process is tedious and error-prone given the highly intertwined nature of CoS configuration, the multiple router configurations over which the policies are instantiated, and the complex access control lists (ACLs) involved. Our contributions include (i) a formal approach to modeling CoS policies from router configuration files in a precise manner; (ii) a practical and computationally efficient tool that can determine the CoS treatments received by an arbitrary set of flows across multiple routers; and (iii) a validation of our approach in enabling applications such as troubleshooting, auditing, and visualization of network-wide CoS design, using router configuration data from a cross-section of 150 diverse enterprise VPNs. To our knowledge, this is the first effort aimed at modeling and analyzing CoS configurations.

  • Ajay Anil Mahimkar, Zihui Ge, Aman Shaikh, Jia Wang, Jennifer Yates, Yin Zhang, and Qi Zhao

    IPTV is increasingly being deployed and offered as a commercial service to residential broadband customers. Compared with traditional ISP networks, an IPTV distribution network (i) typically adopts a hierarchical instead of mesh-like structure, (ii) imposes more stringent requirements on both reliability and performance, (iii) has different distribution protocols (which make heavy use of IP multicast) and traffic patterns, and (iv) faces more serious scalability challenges in managing millions of network elements. These unique characteristics impose tremendous challenges in the effective management of the IPTV network and service.

    In this paper, we focus on characterizing and troubleshooting performance issues in one of the largest IPTV networks in North America. We collect a large amount of measurement data from a wide range of sources, including device usage and error logs, user activity logs, video quality alarms, and customer trouble tickets. We develop a novel diagnosis tool called Giza that is specifically tailored to the enormous scale and hierarchical structure of the IPTV network. Giza applies multi-resolution data analysis to quickly detect and localize regions in the IPTV distribution hierarchy that are experiencing serious performance problems. Giza then uses several statistical data mining techniques to troubleshoot the identified problems and diagnose their root causes. Validation against operational experiences demonstrates the effectiveness of Giza in detecting important performance issues and identifying interesting dependencies. The methodology and algorithms in Giza promise to be of great use in IPTV network operations.
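    The "multi-resolution" localization idea (aggregate events up the distribution hierarchy and drill down only into regions whose aggregate looks anomalous, so most subtrees are pruned) can be sketched very simply. This is a deliberately simplified illustration, not Giza's actual statistical machinery; the hierarchy, counts, and threshold are invented.

```python
# Simplified illustration of multi-resolution localization in a distribution hierarchy.
HIERARCHY = {                          # parent -> children (illustrative IPTV regions)
    "VHO": ["IO-1", "IO-2"],
    "IO-1": ["CO-1a", "CO-1b"],
    "IO-2": ["CO-2a", "CO-2b"],
}
EVENTS = {"CO-1a": 3, "CO-1b": 2, "CO-2a": 120, "CO-2b": 4}   # per-region event counts

def subtree_count(node):
    return EVENTS.get(node, 0) + sum(subtree_count(c) for c in HIERARCHY.get(node, []))

def localize(node, threshold=50):
    """Return the lowest regions whose subtree event count exceeds the threshold."""
    hot_children = [c for c in HIERARCHY.get(node, []) if subtree_count(c) > threshold]
    if not hot_children:
        return [node] if subtree_count(node) > threshold else []
    return [r for c in hot_children for r in localize(c, threshold)]

print(localize("VHO"))                 # -> ['CO-2a']
```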

  • Srikanth Kandula, Ratul Mahajan, Patrick Verkaik, Sharad Agarwal, Jitendra Padhye, and Paramvir Bahl

    By studying trouble tickets from small enterprise networks, we conclude that their operators need detailed fault diagnosis. That is, the diagnostic system should be able to diagnose not only generic faults (e.g., performance-related) but also application-specific faults (e.g., error codes). It should also identify culprits at a fine granularity such as a process or firewall configuration. We build a system, called NetMedic, that enables detailed diagnosis by harnessing the rich information exposed by modern operating systems and applications. It formulates detailed diagnosis as an inference problem that more faithfully captures the behaviors and interactions of fine-grained network components such as processes. The primary challenge in solving this problem is inferring when a component might be impacting another. Our solution is based on an intuitive technique that uses the joint behavior of two components in the past to estimate the likelihood of them impacting one another in the present. We find that our deployed prototype is effective at diagnosing faults that we inject in a live environment. The faulty component is correctly identified as the most likely culprit in 80% of the cases and is almost always in the list of top five culprits.
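    The "joint behavior in the past" idea can be illustrated with a crude score: among historical time bins in which component A looked like it does now, check how closely component B resembled its current (possibly abnormal) state. The sketch below is an illustration of that intuition only, not NetMedic's actual inference procedure; the feature vectors and similarity measure are assumptions.

```python
import numpy as np

def impact_likelihood(hist_a, hist_b, now_a, now_b):
    """Score how plausibly A's current state explains B's current state.

    hist_a, hist_b: arrays of shape (time_bins, features) recording past behavior;
    now_a, now_b: 1-D feature vectors of the current states.  Higher score means B
    has historically behaved this way whenever A was in a state similar to now.
    """
    dist_a = np.linalg.norm(hist_a - now_a, axis=1)
    similar = np.argsort(dist_a)[: max(3, len(dist_a) // 10)]   # bins where A was similar
    dist_b = np.linalg.norm(hist_b[similar] - now_b, axis=1)
    return float(1.0 / (1.0 + dist_b.mean()))

rng = np.random.default_rng(1)
hist_a, hist_b = rng.normal(size=(100, 4)), rng.normal(size=(100, 3))
print(impact_likelihood(hist_a, hist_b, hist_a[7], hist_b[7]))   # high: states co-occurred before
```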

  • Ramana Rao Kompella, Kirill Levchenko, Alex C. Snoeren, and George Varghese

    Many network applications have stringent end-to-end latency requirements, including VoIP and interactive video conferencing, automated trading, and high-performance computing—where even microsecond variations may be intolerable. The resulting fine-grain measurement demands cannot be met effectively by existing technologies, such as SNMP, NetFlow, or active probing. We propose instrumenting routers with a hash-based primitive that we call a Lossy Difference Aggregator (LDA) to measure latencies down to tens of microseconds and losses as infrequent as one in a million.

    Such measurement can be viewed abstractly as what we refer to as a coordinated streaming problem, which is fundamentally harder than standard streaming problems due to the need to coordinate values between nodes. We describe a compact data structure that efficiently computes the average and standard deviation of latency and loss rate in a coordinated streaming environment. Our theoretical results translate to an efficient hardware implementation at 40 Gbps using less than 1% of a typical 65-nm 400-MHz networking ASIC. When compared to Poisson-spaced active probing with similar overheads, our LDA mechanism delivers orders of magnitude smaller relative error; active probing requires 50–60 times as much bandwidth to deliver similar levels of accuracy.
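    A deliberately simplified, single-bank sketch of the hashing idea behind an LDA follows: sender and receiver hash each packet into the same bucket and accumulate a timestamp sum and a packet count; after the interval, only buckets whose counts match on both sides (no loss landed there) contribute to the latency estimate. Details of the real LDA differ (notably, it uses multiple banks to tolerate a range of loss rates), so treat this as an illustration only.

```python
import hashlib

class LDA:
    """Single-bank sketch of a Lossy Difference Aggregator (simplified from the paper)."""
    def __init__(self, buckets=64):
        self.sum = [0.0] * buckets
        self.cnt = [0] * buckets

    def add(self, pkt_id, timestamp):
        b = int.from_bytes(hashlib.sha1(pkt_id).digest()[:4], "big") % len(self.cnt)
        self.sum[b] += timestamp
        self.cnt[b] += 1

def average_latency(sender, receiver):
    """Use only buckets whose packet counts match on both sides (unaffected by loss)."""
    tot_sum, tot_cnt = 0.0, 0
    for s_sum, s_cnt, r_sum, r_cnt in zip(sender.sum, sender.cnt, receiver.sum, receiver.cnt):
        if s_cnt == r_cnt and s_cnt > 0:
            tot_sum += r_sum - s_sum
            tot_cnt += s_cnt
    return tot_sum / tot_cnt if tot_cnt else None

snd, rcv = LDA(), LDA()
for i in range(1000):
    pid = f"pkt{i}".encode()
    snd.add(pid, i * 1.0)
    if i % 100 != 0:                       # drop 1% of the packets
        rcv.add(pid, i * 1.0 + 0.000045)   # 45 microsecond one-way delay
print(average_latency(snd, rcv))           # close to 4.5e-05
```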

  • Yin Zhang, Matthew Roughan, Walter Willinger, and Lili Qiu

    Many basic network engineering tasks (e.g., traffic engineering, capacity planning, anomaly detection) rely heavily on the availability and accuracy of traffic matrices. However, in practice it is challenging to reliably measure traffic matrices. Missing values are common. This observation brings us into the realm of compressive sensing, a generic technique for dealing with missing values that exploits the presence of structure and redundancy in many real-world systems. Despite much recent progress made in compressive sensing, existing compressive-sensing solutions often perform poorly for traffic matrix interpolation, because real traffic matrices rarely satisfy the technical conditions required for these solutions.

    To address this problem, we develop a novel spatio-temporal compressive sensing framework with two key components: (i) a new technique called Sparsity Regularized Matrix Factorization (SRMF) that leverages the sparse or low-rank nature of real-world traffic matrices and their spatio-temporal properties, and (ii) a mechanism for combining low-rank approximations with local interpolation procedures. We illustrate our new framework and demonstrate its superior performance in problems involving interpolation with real traffic matrices where we can successfully replace up to 98% of the values. Evaluation in applications such as network tomography, traffic prediction, and anomaly detection confirms the flexibility and effectiveness of our approach.
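    The low-rank intuition behind this family of techniques can be demonstrated with plain alternating least-squares matrix completion: approximate the traffic matrix as L R^T fitted only to the observed entries, then read the missing entries off the factorization. This sketch omits SRMF's spatial and temporal constraint matrices and is only a stripped-down relative of it.

```python
import numpy as np

def low_rank_complete(X, mask, rank=2, lam=0.1, iters=50):
    """Fill missing entries of X (observed where mask is True) with a regularized
    low-rank factorization X ~ L @ R.T, via alternating ridge regressions."""
    m, n = X.shape
    rng = np.random.default_rng(0)
    L, R = rng.normal(size=(m, rank)), rng.normal(size=(n, rank))
    for _ in range(iters):
        for i in range(m):                                  # update each row of L
            cols = mask[i]
            A = R[cols].T @ R[cols] + lam * np.eye(rank)
            L[i] = np.linalg.solve(A, R[cols].T @ X[i, cols])
        for j in range(n):                                  # update each row of R
            rows = mask[:, j]
            A = L[rows].T @ L[rows] + lam * np.eye(rank)
            R[j] = np.linalg.solve(A, L[rows].T @ X[rows, j])
    return L @ R.T

true = np.outer(np.arange(1, 11), np.arange(1, 8)).astype(float)   # a rank-1 "traffic matrix"
mask = np.random.default_rng(1).random(true.shape) > 0.5           # ~50% of entries observed
est = low_rank_complete(true * mask, mask)
print(np.abs(est - true)[~mask].mean())        # small reconstruction error on the missing entries
```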

  • Pavlos Papageorge, Justin McCann, and Michael Hicks

    We present the Measurement Manager Protocol (MGRP), an in-kernel service that schedules and transmits probes on behalf of active measurement tools. Unlike prior measurement services, MGRP transparently piggybacks application packets inside the often significant amounts of empty padding contained in typical probes. Using MGRP thus combines the modularity, flexibility, and accuracy of standalone active measurement tools with the lower overhead of passive measurement techniques. Microbenchmark experiments show that the resulting bandwidth savings make it possible to measure the network accurately, but faster and more aggressively than without piggybacking, and with few ill effects on piggybacked application or competing traffic. When using MGRP to schedule measurements on behalf of MediaNet, an overlay service that adaptively schedules media streams, we show MediaNet can achieve significantly higher streaming rates under the same network conditions.

  • Costin Raiciu, Felipe Huici, Mark Handley, and David S. Rosenblum

    To search the web quickly, search engines partition the web index over many machines, and consult every partition when answering a query. To increase throughput, replicas are added for each of these machines. The key parameter of these algorithms is the trade-off between replication and partitioning: increasing the partitioning level improves query completion time since more servers handle the query, but may incur non-negligible startup costs for each subquery. Finding the right operating point and adapting to it can significantly improve performance and reduce costs.

    We introduce Rendezvous On a Ring (ROAR), a novel distributed algorithm that enables on-the-fly re-configuration of the partitioning level. ROAR can add and remove servers without stopping the system, cope with server failures, and provide good load-balancing even with a heterogeneous server pool. We demonstrate these claims using a privacy-preserving search application built upon ROAR.

  • Vijay Vasudevan, Amar Phanishayee, Hiral Shah, Elie Krevat, David G. Andersen, Gregory R. Ganger, Garth A. Gibson, and Brian Mueller

    This paper presents a practical solution to a problem facing high-fan-in, high-bandwidth synchronized TCP workloads in datacenter Ethernets: the TCP incast problem. In these networks, receivers can experience a drastic reduction in application throughput when simultaneously requesting data from many servers using TCP. Inbound data overfills small switch buffers, leading to TCP timeouts lasting hundreds of milliseconds. For many datacenter workloads that have a barrier synchronization requirement (e.g., filesystem reads and parallel data-intensive queries), throughput is reduced by up to 90%. For latency-sensitive applications, TCP timeouts in the datacenter impose delays of hundreds of milliseconds in networks with round-trip times in microseconds.

    Our practical solution uses high-resolution timers to enable microsecond-granularity TCP timeouts. We demonstrate that this technique is effective in avoiding TCP incast collapse in simulation and in real-world experiments. We show that eliminating the minimum retransmission timeout bound is safe for all environments, including the wide-area.
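    Why timer granularity matters can be seen from the standard retransmission-timeout estimator: with datacenter round-trip times near 100 microseconds, a conventional 200 ms minimum bound dominates the computed timeout entirely. The sketch below is the textbook RFC 6298-style estimator run with and without a coarse floor; it is an illustration of the effect, not the authors' kernel modification.

```python
# RFC 6298-style RTO estimator, shown with and without a coarse minimum bound.
ALPHA, BETA = 1 / 8, 1 / 4

def rto_estimator(rtt_samples_s, rto_min_s):
    srtt = rttvar = None
    for r in rtt_samples_s:
        if srtt is None:
            srtt, rttvar = r, r / 2
        else:
            rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - r)
            srtt = (1 - ALPHA) * srtt + ALPHA * r
    return max(srtt + 4 * rttvar, rto_min_s)

rtts = [100e-6, 120e-6, 90e-6, 110e-6]               # datacenter RTTs around 100 microseconds
print(rto_estimator(rtts, rto_min_s=200e-3))          # classic 200 ms floor dominates (0.2 s)
print(rto_estimator(rtts, rto_min_s=0.0))             # fine-grained timers: sub-millisecond RTO
```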

  • Sharad Agarwal and Jacob R. Lorch

    The latency between machines on the Internet can dramatically affect users’ experience for many distributed applications. Particularly, in multiplayer online games, players seek to cluster themselves so that those in the same session have low latency to each other. A system that predicts latencies between machine pairs allows such matchmaking to consider many more machine pairs than can be probed in a scalable fashion while users are waiting. Using a far-reaching trace of latencies between players on over 3.5 million game consoles, we designed Htrae, a latency prediction system for game matchmaking scenarios. One novel feature of Htrae is its synthesis of geolocation with a network coordinate system. It uses geolocation to select reasonable initial network coordinates for new machines joining the system, allowing it to converge more quickly than standard network coordinate systems and produce substantially lower prediction error than state-of-the-art latency prediction systems. For instance, it produces 90th percentile errors less than half those of iPlane and Pyxida. Our design is general enough to make it a good fit for other latency-sensitive peer-to-peer applications besides game matchmaking.
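    The seeding idea can be illustrated with a toy: convert a node's geolocation into an initial 2-D coordinate whose distances are already in rough RTT units, then refine with Vivaldi-style spring updates as real measurements arrive. The projection, km-to-ms constant, and step size below are rough assumptions for illustration, not Htrae's actual model.

```python
import math

GEO_TO_MS = 0.02     # very rough assumption: ~0.02 ms of RTT per km of geographic distance

def initial_coord(lat, lon):
    """Seed a 2-D network coordinate from geolocation (equirectangular projection, in 'ms')."""
    km_per_deg = 111.0
    return [lon * km_per_deg * math.cos(math.radians(lat)) * GEO_TO_MS,
            lat * km_per_deg * GEO_TO_MS]

def vivaldi_update(coord, peer_coord, measured_rtt_ms, step=0.25):
    """One Vivaldi-style spring update pulling the coordinate toward the measured RTT."""
    dx = [a - b for a, b in zip(coord, peer_coord)]
    dist = math.hypot(*dx) or 1e-6
    error = measured_rtt_ms - dist
    return [c + step * error * d / dist for c, d in zip(coord, dx)]

seattle, nyc = initial_coord(47.6, -122.3), initial_coord(40.7, -74.0)
print(math.hypot(*[a - b for a, b in zip(seattle, nyc)]))   # predicted RTT before any probing
seattle = vivaldi_update(seattle, nyc, measured_rtt_ms=80)
print(math.hypot(*[a - b for a, b in zip(seattle, nyc)]))   # prediction moves toward 80 ms
```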

  • S. Keshav

    I first attended SIGCOMM in 1989 and recently spent the third week of August 2009 at SIGCOMM, spanning a period of 20 years. Two things strike me about that sentence. The first is that by the inevitable passage of time I seem to have become an ‘old-timer.’ The second is that like everyone else I have conflated the conference and the SIG. This editorial digs deeper into these two facts (and yes, they are related).

    Looking back, what strikes me is that many researchers who I first met in 1989 still take the time to attend, are still active in research and are still making the annual pilgrimage to the conference. It is always a pleasure to run into someone I have met on and off for the better part of two decades. Some colleagues were graduate students like me in 1989 and we reminisce about the old times together. Others were graduate students when I first met them and are now full-fledged researchers. And yet others were students that I had the pleasure of supervising with whom I renew a long-ago relationship that has now turned into a friendship among peers. To see a fresh-faced grad student transition into a leading researcher and a pillar of the community is an unalloyed joy!

    Now to the other thread, of conflating the conference and the SIG. For a long time, I did not think that the SIG had any role to play other than to organize the conference. So it seemed natural to refer to both as SIGCOMM, as most people still do. Actually, I think there is something deeper here. I think what makes SIGCOMM a community is our coming together every year to listen to the same papers, meet fellow researchers, and to participate in shared activities, such as the banquet, the demo and poster sessions, and to cheer and boo at the outrageous opinion session. That is, attending the conference is the act that binds us and makes us a community. It turns out that this technique of creating community is age old: in 600 BCE, the Buddha asked all his followers to gather once a year and re-affirm the tenets of their faith. It worked then and it works now. And that is why the conference and the community are inextricable. Conflating the conference and SIG, however, leads to some problems because there is more to the SIG than SIGCOMM and more to the community than attending the conference. Let me talk first about the other activities of the SIG (do also read Bruce Davie’s editorial in this issue).

    The SIG sponsors several other conferences besides SIGCOMM, such as IMC, HotNets, SenSys, and CoNEXT. It also puts out this newsletter, makes awards, and provides a forum for community discussion by means of a blog and CCR Online. It is now gearing up to provide a forum for industry-academic collaboration through several initiatives. These activities of the SIG allow members to present papers, learn of cutting-edge research in the field, and to get feedback on ideas from their peers. There is more to participating in the community than attending conferences. The SIG is entirely run by volunteers. Volunteer duties include reviewing papers, helping with our website, participating in program committees, serving as CCR and ToN area editors, helping in the running of the various conferences, and liaison with the ACM and related communities. If you are interested in any of these, write to the Chair, Bruce Davie, or anyone on the executive committee. We are always happy to hear from you. Of course, some jobs – such as serving on the SIGCOMM TPC – are more sought after than others, but I am sure that anyone who wishes to contribute can.

    Why bother participating? It is not just the joy of doing something to benefit others that gives a warm fuzzy feeling. It is also that volunteers in the community form relationships that are professionally and personally rewarding. If you want to learn about the latest advances in some area whose research leader is a member of SIGCOMM (which is nearly always the case) and you have met that member in the course of your volunteer duties, then you have a way to contact that person and get the inside track. Email from a person you have met and interacted with is much more likely to get responded to than email from a total stranger. Over time, meeting the same people in different roles opens doors to research collaboration, funding, and an added perspective on your research career. Volunteering has its privileges!

    So there you have it. Attending SIGCOMM – the conference – for the last twenty years has made me keenly aware of the SIGCOMM community. I have gained much from being a part of the community. I encourage all of you to participate in the community and make SIGCOMM your own.

  • Xiaolong Li and Homayoun Yousefi'zadeh

    In recent years, end-to-end feedback-based variants of TCP as well as VCP have emerged as practical alternatives for congestion control, requiring the use of only one or two ECN bits in the IP header. However, all such schemes suffer from a relatively low speed of convergence and exhibit a biased fairness behavior in moderate-bandwidth, high-delay networks due to utilizing an insufficient amount of congestion feedback. In this paper, we propose a novel distributed ECN-based congestion control protocol to which we refer as Multi Packet Congestion Control Protocol (MPCP). In contrast to other alternatives, MPCP is able to relay a more precise congestion feedback yet preserve the utilization of the two ECN bits. MPCP distributes (extracts) congestion related information into (from) a series of n packets, thus allowing for a 2n-bit quantization of congestion measures, with each packet carrying two of the 2n bits in its ECN bits. We describe the design, implementation, and performance evaluation of MPCP through both simulations and experimental studies.

    Public review by Dmitri Krioukov
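    The encoding the abstract describes (quantize a congestion measure to 2n bits and spread it over n packets, two bits per packet in the ECN field) can be sketched directly. Packet ordering and loss-recovery details are ignored here, and the example parameters are illustrative only.

```python
def encode_feedback(level, n):
    """Quantize a congestion level in [0, 1) to 2n bits and spread it over n packets,
    two bits per packet (the two ECN bits), most significant pair first."""
    q = int(level * (1 << (2 * n)))                       # 2n-bit quantization
    return [(q >> (2 * (n - 1 - i))) & 0b11 for i in range(n)]

def decode_feedback(ecn_bits, n):
    """Reassemble the 2n-bit value from the 2-bit fields of n consecutive packets."""
    q = 0
    for two_bits in ecn_bits:
        q = (q << 2) | two_bits
    return q / (1 << (2 * n))

series = encode_feedback(0.63, n=4)            # 8-bit feedback carried by 4 packets
print(series, decode_feedback(series, n=4))    # decoded value within 1/256 of 0.63
```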
  • F. Gringoli, Luca Salgarelli, M. Dusi, N. Cascarano, F. Risso, and k. c. claffy

    Much of Internet traffic modeling, firewall, and intrusion detection research requires traces where some ground truth regarding application and protocol is associated with each packet or flow. This paper presents the design, development and experimental evaluation of gt, an open source software toolset for associating ground truth information with Internet traffic traces. By probing the monitored host’s kernel to obtain information on active Internet sessions, gt gathers ground truth at the application level. Preliminary experimental results show that gt’s effectiveness comes at little cost in terms of overhead on the hosting machines. Furthermore, when coupled with other packet inspection mechanisms, gt can derive ground truth not only in terms of applications (e.g., e-mail), but also in terms of protocols (e.g., SMTP vs. POP3).

    Public review by Pablo Rodriguez
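    The general idea of querying the host for its active Internet sessions and using the owning process as a per-flow label can be shown with the third-party psutil library; note this is only an illustration of the approach, not the gt toolset itself, which uses its own kernel probes. Listing other users' connections may require elevated privileges on some operating systems.

```python
import psutil   # third-party library; used here only to illustrate the kernel-probing idea

def socket_ground_truth():
    """Map each active inet connection to the name of the process that owns it,
    the kind of per-flow label that gt associates with captured traffic."""
    labels = {}
    for conn in psutil.net_connections(kind="inet"):
        if conn.raddr and conn.pid:
            flow = (conn.laddr.ip, conn.laddr.port, conn.raddr.ip, conn.raddr.port)
            try:
                labels[flow] = psutil.Process(conn.pid).name()
            except psutil.NoSuchProcess:
                pass                         # process exited between the two queries
    return labels

for flow, app in list(socket_ground_truth().items())[:5]:
    print(flow, "->", app)
```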
  • Bruce Davie

    In my first month as SIGCOMM chair, I have been asked “what is your vision for SIGCOMM?”, “can we make SIGCOMM more transparent?”, and “will you write an article for CCR?”. This article is an attempt to address all three of those questions.

  • Barry M. Leiner, Vinton G. Cerf, David D. Clark, Robert E. Kahn, Leonard Kleinrock, Daniel C. Lynch, Jon Postel, Larry G. Roberts, and Stephen Wolff

    This paper was first published online by the Internet Society in December 2003 and is being re-published in ACM SIGCOMM Computer Communication Review because of its historic import. It was written at the urging of its primary editor, the late Barry Leiner. He felt that a factual rendering of the events and activities associated with the development of the early Internet would be a valuable contribution. The contributing authors did their best to incorporate only factual material into this document. There are sure to be many details that have not been captured in the body of the document but it remains one of the most accurate renderings of the early period of development available.

  • k. c. claffy, Marina Fomenkov, Ethan Katz-Bassett, Robert Beverly, Beverly A. Cox, and Matthew Luckie

    Measuring the global Internet is a perpetually challenging task for technical, economic and policy reasons, which leaves scientists as well as policymakers navigating critical questions in their field with little if any empirical grounding. On February 12-13, 2009, CAIDA hosted the Workshop on Active Internet Measurements (AIMS) as part of our series of Internet Statistics and Metrics Analysis (ISMA) workshops which provide a venue for researchers, operators, and policymakers to exchange ideas and perspectives. The two-day workshop included presentations, discussion after each presentation, and breakout sessions focused on how to increase potential and mitigate limitations of active measurements in the wide area Internet. We identified relevant stakeholders who may support and/or oppose measurement, and explored how collaborative solutions might maximize the benefit of research at minimal cost. This report describes the findings of the workshop, outlines open research problems identified by participants, and concludes with recommendations that can benefit both Internet science and communications policy. Slides from workshop presentations are available at http://www.caida.org/workshops/isma/0902/.

  • Ehab Al-Shaer, Albert Greenberg, Charles Kalmanek, David A. Maltz, T. S. Eugene Ng, and Geoffrey G. Xie

    Network management represents an architectural gap in today’s Internet [1]. Many problems with computer networks today, such as faults, misconfiguration, performance degradation, etc., are due to insufficient support for network management, and the problem takes on additional dimensions with the emerging programmable router paradigm. The Internet Network Management Workshop is working to build a community of researchers interested in solving the challenges of network management via a combination of bottom-up analysis of data from existing networks and a top-down design of new architectures and approaches driven by that data. This editorial sets out some of the research challenges we see facing network management, and calls for participation in working to solve them.

  • Kentaro Toyama and Muneeb Ali

    Poverty and the associated sufferings remain a global challenge, with over a billion people surviving on less than a dollar a day. Technology, applied appropriately, can help improve their lives. Despite some clear examples of technical research playing a key role in global development, there is a question that repeatedly arises in this area: can technologies for developing regions be considered a core area of computer science research? In this note, we examine some of the arguments on both sides of this question, deliberately avoid answering the question itself (for the lack of community consensus), and provide some suggestions for the case where the answer is in the affirmative.

  • Dirk Trossen

    While many initiatives have been discussing the future of the Internet, the EIFFEL think tank set out to push these discussions forward by assembling a community of researchers debating the potential thrusts towards the Future Internet. This article provides an account of these discussions, being addressed both to the EIFFEL membership and more widely to members of the community interested in medium to long-term developments. We outline immediate problems as well as potentially missed opportunities in the current Internet, while focussing the debate about the need for a Future Internet on the style in which we conduct research as well as how we design systems at large. Most importantly, we recognize that an inter-disciplinary dialogue is required to formulate actions to be taken and to remove barriers to their realization.

  • Richard Gass and Christophe Diot

    The popularity of personal access points and travel routers has begun to rise with the advent of smaller and more battery-conscious devices. This article introduces the YAAP (Yet Another Access Point), open-source software that enables ad-hoc infrastructure services in the absence of network connectivity. The YAAP provides a familiar way to connect users to each other and provides a set of useful services. We list and describe its applications, explain how it can be used, provide details about the code, and point readers to where it can be downloaded.
