CCR Papers from April 2012

  • S. Keshav

    As a networking researcher, working with computer networks day in and day out, you have probably rarely paused to reflect on the surprisingly difficult question of "What is a network?" For example, would you consider a bio-chemical system to be a network? How about a social network? Or a water supply network? Or the electrical grid? After all, all of these have something in common with a computer network: they can be represented as a graph and they carry a flow (of chemical signals, messages, water, and electrons, respectively) from one or more sources to one or more destinations. So, shouldn't we make them equally objects of study by SIGCOMM members?

    You could argue that some of these networks differ dramatically from the Internet. The water network, for example, does not carry any messages and is unidirectional. So it is not a communication network, unlike the Internet or, perhaps, a social network. This argument implicitly takes the position that the only networks we (as computer networking researchers) ought to study are bidirectional communication networks. This is a conservative position that is relatively easy to justify, but it excludes from consideration some interesting and open research questions that arise in the context of these other networks. Choosing the capacity of a water tank or an electrical transformer turns out to be similar in many respects to choosing the capacity of a router buffer or a transmission link. Similarly, one could imagine that the round-trip time on a social network (the time it takes for a rumour you started to get back to you by word of mouth) would inform you about the structure of the social network in much the same way as an ICMP ping. For these reasons, a more open-minded view about the nature of a network may be both pragmatic and conducive to innovation.
     
    My own view is that a network is any system that can be naturally represented by a graph. Additionally, a communication network is any system where a flow that originates at some set of source nodes is delivered to some set of destination nodes, typically due to the forwarding action of intermediate nodes (although this may not be strictly necessary). This broad definition encompasses water networks, biological networks, and electrical networks as well as telecommunication networks and the Internet. It seeks to present a unifying abstraction so that techniques developed in one form of network can be adopted by researchers in the others.
     
    Besides a broad definition of networks, like the one above, the integrative study of networks--or 'Network Science', as its proponents call it--requires the underlying communities (and there is more than one) to be open to ideas from each other, and the publication fora in these communities to be likewise "liberal in what you accept," in Jon Postel's famous words. This is essential to allow researchers in Network Science to carry ideas from one community to another, despite being less than expert in certain aspects of their work. CCR, through its publication of non-peer-reviewed Editorials, is perfectly positioned to follow this principle.
     
    I will end with a couple of important announcements. First, this issue will mark the end of Stefan Saroiu's tenure as an Area Editor. His steady editorial hand will be much missed. Thanks, Stefan!
     
    Second, starting September 1, 2012, Dina Papagiannaki will take over as the new Editor of CCR. Dina has demonstrated a breadth of understanding and depth of vision that assures me that CCR will be in very good hands. I am confident that under her stewardship CCR will rise to ever greater heights. I wish her the very best.
     
  • Supasate Choochaisri, Kittipat Apicharttrisorn, Kittiporn Korprasertthaworn, Pongpakdi Taechalertpaisarn, Chalermek Intanagonwiwat

    Desynchronization is useful for scheduling nodes to perform tasks at different times. This property is desirable for resource sharing, TDMA scheduling, and collision avoidance. Inspired by robotic circular formations, we propose DWARF (Desynchronization With an ARtificial Force field), a novel technique for desynchronization in wireless networks. Each node exerts artificial forces that repel its neighbors, pushing them to perform their tasks at different time phases. Nodes with closer time phases exert stronger repelling forces on each other in the time domain. Each node adjusts its time phase proportionally to the forces it receives. Once the received forces are balanced, the nodes are desynchronized. We evaluate our implementation of DWARF on TOSSIM, a simulator for wireless sensor networks. The simulation results indicate that DWARF incurs significantly lower desynchronization error and scales much better than existing approaches.
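
    To make the mechanism concrete, here is a rough single-process simulation (in C) of the force-field idea the abstract describes: each node sums inverse-distance repulsive forces from its neighbours' phases on a shared period and shifts its own phase in proportion to the net force. The force function, the gain K, and all constants are illustrative assumptions, not DWARF's actual parameters.

      /* Force-field desynchronization sketch: 4 nodes repel each other
       * on a 1000 ms period until their task phases spread apart. */
      #include <stdio.h>

      #define N      4
      #define PERIOD 1000.0
      #define K      100.0   /* assumed gain: phase shift per unit of net force */

      /* Signed phase distance from a to b, wrapped into (-PERIOD/2, PERIOD/2]. */
      static double phase_diff(double a, double b)
      {
          double d = b - a;
          while (d >  PERIOD / 2) d -= PERIOD;
          while (d <= -PERIOD / 2) d += PERIOD;
          return d;
      }

      int main(void)
      {
          double phase[N] = { 0.0, 40.0, 80.0, 120.0 };   /* bunched start */

          for (int iter = 0; iter < 2000; iter++) {
              double next[N];
              for (int i = 0; i < N; i++) {
                  double force = 0.0;
                  for (int j = 0; j < N; j++) {
                      if (j == i) continue;
                      double d = phase_diff(phase[i], phase[j]);
                      if (d > -1.0 && d < 1.0)      /* cap force for near-equal phases */
                          d = (d < 0.0) ? -1.0 : 1.0;
                      force += -1.0 / d;            /* repel: closer phase, stronger push */
                  }
                  next[i] = phase[i] + K * force;   /* move in proportion to net force */
                  while (next[i] <  0.0)    next[i] += PERIOD;
                  while (next[i] >= PERIOD) next[i] -= PERIOD;
              }
              for (int i = 0; i < N; i++) phase[i] = next[i];
          }

          for (int i = 0; i < N; i++)               /* print the resulting phases */
              printf("node %d fires at phase %.0f ms\n", i, phase[i]);
          return 0;
      }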

    Bhaskaran Raman
  • André Zúquete, Carlos Frade

    The IPv4 address space is quickly getting exhausted, putting tremendous pressure on the adoption of even more NAT levels or of IPv6. At the same time, many authors propose the adoption of new Internet addressing capabilities, namely content-based addressing, to complement the existing IP host-based addressing. In this paper we propose the introduction of a location layer, between the transport and network layers, to address both problems. We keep the existing IPv4 (or IPv6) host-based core routing functionality, while enabling hosts to become routers between separate address spaces by exploiting the new location header. As a proof of concept, we modified the TCP/IP stack of a Linux host to handle our new protocol layer, and we designed and built a novel NAT box to enable current hosts to interact with the modified stack.

    David Wetherall
  • Kate Lin, Yung-Jen Chuang, Dina Katabi

    In many wireless systems, it is desirable to precede a data transmission with a handshake between the sender and the receiver. For example, RTS-CTS is a handshake that prevents collisions due to hidden terminals. Past work, however, has shown that the overhead of such a handshake is too high for practical deployments. We present a new approach to wireless handshaking that is almost overhead-free. The key idea underlying the design is to separate a packet's PLCP header and MAC header from its body, and to have the sender and receiver first exchange the data and ACK headers, then exchange the bodies of the data and ACK packets without additional headers. The header exchange provides a natural handshake at almost no extra cost. We empirically evaluate the feasibility of such a lightweight handshake and some of its applications. Our testbed evaluation shows that header-payload separation does not hamper packet decodability. It also shows that a light handshake enables hidden terminals, i.e., nodes that interfere with each other in the absence of RTS/CTS, to experience a collision rate of less than 4%. Furthermore, it improves the accuracy of bit rate selection in bursty and mobile environments, producing a throughput gain of about 2x.

    Bhaskaran Raman
  • Cheng Huang, Ivan Batanov, Jin Li

    Internet services are often deployed in many (tens to hundreds of) geographically distributed data centers. They rely on Global Traffic Management (GTM) solutions to direct clients to the optimal data center based on a number of criteria such as network performance, geographic location, and availability. GTM solutions, however, have a fundamental design limitation in their ability to map clients to data centers accurately: they use the IP address of the local DNS resolver (LDNS) used by a client as a proxy for the true client identity, which in some cases causes suboptimal performance. This issue is known as the client-LDNS mismatch problem. We argue that recent proposals to address the problem suffer from serious limitations. We then propose a simple new solution, named "FQDN extension", which can solve the client-LDNS mismatch problem completely. We build a prototype system and demonstrate the effectiveness of the proposed solution. Using JavaScript, the solution can be deployed immediately for some online services, such as Web search, without modifying either the client or the local resolver.
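
    As a hypothetical illustration of what an "FQDN extension" could look like, the sketch below embeds the client's address in the queried name so the authoritative server sees the actual client rather than only its LDNS resolver. The dashed-IPv4 label and the helper extend_fqdn() are assumptions for illustration; the paper's actual encoding and its JavaScript deployment may differ.

      /* Build an extended FQDN that carries the client's identity. */
      #include <stdio.h>

      static void extend_fqdn(const char *host, const char *client_ip,
                              char *out, size_t len)
      {
          char label[64];
          size_t i = 0;

          /* Turn "203.0.113.7" into the DNS-safe label "203-0-113-7". */
          for (; client_ip[i] != '\0' && i < sizeof(label) - 1; i++)
              label[i] = (client_ip[i] == '.') ? '-' : client_ip[i];
          label[i] = '\0';

          /* Prepend it: a wildcard zone under the service name can then
           * resolve the name to the data center closest to the client. */
          snprintf(out, len, "%s.%s", label, host);
      }

      int main(void)
      {
          char fqdn[256];
          extend_fqdn("www.example.com", "203.0.113.7", fqdn, sizeof fqdn);
          printf("query: %s\n", fqdn);   /* 203-0-113-7.www.example.com */
          return 0;
      }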

    Renata Teixeira
  • Shane Alcock, Perry Lorier, Richard Nelson

    This paper introduces libtrace, an open-source software library for reading and writing network packet traces. Libtrace offers performance and usability enhancements compared to other libraries that are currently used. We describe the main features of libtrace and demonstrate how the libtrace programming API enables users to easily develop portable trace analysis tools without needing to consider the details of the capture format, file compression or intermediate protocol headers. We compare the performance of libtrace against other trace processing libraries to show that libtrace offers the best compromise between development effort and program run time. As a result, we conclude that libtrace is a valuable contribution to the passive measurement community that will aid the development of better and more reliable trace analysis and network monitoring tools.
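
    To give a flavour of the programming API described above, here is a minimal libtrace program that counts TCP packets in a trace. It sticks to documented libtrace calls; the same loop handles pcap or ERF input, compressed or not, because the library hides the capture format and intermediate headers.

      #include <libtrace.h>
      #include <inttypes.h>
      #include <stdio.h>

      int main(int argc, char *argv[])
      {
          if (argc != 2) {
              fprintf(stderr, "usage: %s <uri>  e.g. pcapfile:trace.pcap.gz\n", argv[0]);
              return 1;
          }

          /* The URI names the input; libtrace picks the right decoder. */
          libtrace_t *trace = trace_create(argv[1]);
          if (trace_is_err(trace)) {
              trace_perror(trace, "trace_create");
              return 1;
          }
          if (trace_start(trace) == -1) {
              trace_perror(trace, "trace_start");
              return 1;
          }

          libtrace_packet_t *packet = trace_create_packet();
          uint64_t total = 0, tcp = 0;

          /* trace_get_tcp() returns the TCP header (or NULL) regardless
           * of the link layer or capture format of the input trace. */
          while (trace_read_packet(trace, packet) > 0) {
              total++;
              if (trace_get_tcp(packet) != NULL)
                  tcp++;
          }

          printf("%" PRIu64 " packets, %" PRIu64 " tcp\n", total, tcp);

          trace_destroy_packet(packet);
          trace_destroy(trace);
          return 0;
      }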

  • Pamela Zave

    Correctness of the Chord ring-maintenance protocol would mean that the protocol can eventually repair all disruptions in the ring structure, given ample time and no further disruptions while it is working. In other words, it is "eventual reachability." Under the same assumptions about failure behavior as made in the Chord papers, no published version of Chord is correct. This result is based on modeling the protocol in Alloy and analyzing it with the Alloy Analyzer. By combining the right selection of pseudocode and textual hints from several papers, and fixing flaws revealed by analysis, it is possible to get a version that may be correct. The paper also discusses the significance of these results, describes briefly how Alloy is used to model and reason about Chord, and compares Alloy analysis to model-checking.

    David Wetherall
  • Juan Camilo Cardona Restrepo, Rade Stanojevic

    In spite of the tremendous measurement effort devoted to understanding the Internet as a global system, little is known about the 'local' Internet (among ISPs inside a region or a country), owing to the limitations of existing measurement tools and scarce data. In this paper, which is empirical in nature, we characterize the evolution of one such ecosystem of local ISPs by studying the interactions between ISPs at the Slovak Internet eXchange (SIX). By crawling the web archive waybackmachine.org we collect 158 snapshots (spanning 14 years) of the SIX website, with data that allows us to study the dynamics of the Slovak ISPs in terms of local ISP peering, traffic distribution, port capacity/utilization, and the local AS-level traffic matrix. Examining our data revealed a number of invariant and dynamic properties of the studied ecosystem, which we report in detail.

    Yin Zhang
  • Eric Keller, Michael Schapira, Jennifer Rexford

    Traditional traffic engineering adapts the routing of traffic within the network to maximize performance. We propose a new approach that also adaptively changes where traffic enters and leaves the network -- changing the "traffic matrix", and not just the intradomain routing configuration. Our approach does not affect traffic patterns and BGP routes seen in neighboring networks, unlike conventional inter-domain traffic engineering where changes in BGP policies shift traffic and routes from one edge link to another. Instead, we capitalize on recent innovations in edge-link migration that enable seamless rehoming of an edge link to a different internal router in an ISP backbone network -- completely transparent to the router in the neighboring domain. We present an optimization framework for traffic engineering with migration and develop algorithms that determine which edge links should migrate, where they should go, and how often they should move. Our experiments with Internet2 traffic and topology data show that edge-link migration allows the network to carry 18.8% more traffic (at the same level of performance) over optimizing routing alone.

  • Craig A. Shue, Andrew J. Kalafut, Mark Allman, Curtis R. Taylor

    There are many deployed approaches for blocking unwanted traffic, either once it reaches the recipient's network or closer to its point of origin. One of these schemes is based on the notion of traffic carrying capabilities that grant access to a network and/or end host. However, leveraging capabilities results in added complexity and additional steps in the communication process: before communication starts, a remote host must be vetted and given a capability to use in the subsequent communication. In this paper, we propose a lightweight mechanism that turns the answers provided by DNS name resolution -- on which Internet communication broadly depends anyway -- into capabilities. While not achieving an ideal capability system, we show that the mechanism can be built from commodity technology and is therefore a pragmatic way to gain some of the key benefits of capabilities without requiring new infrastructure.
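
    One way to picture the proposed mechanism: a DNS answer doubles as a short-lived grant, and a filter near the server admits only flows whose source recently resolved the server's name. The sketch below captures that idea; the data structures, timeout, and function names are illustrative assumptions rather than the paper's actual design.

      /* DNS answers as implicit capabilities: admit a flow only if the
       * source recently resolved the destination's name. */
      #include <stdio.h>
      #include <string.h>
      #include <time.h>

      #define MAX_GRANTS 128
      #define GRANT_TTL  30      /* seconds a resolution stays valid (assumed) */

      struct grant {
          char   client[16], server[16];   /* dotted-quad addresses */
          time_t expires;
      };

      static struct grant grants[MAX_GRANTS];
      static int ngrants;

      /* Called when the DNS server answers a query from client for server. */
      static void on_dns_answer(const char *client, const char *server)
      {
          if (ngrants < MAX_GRANTS) {
              snprintf(grants[ngrants].client, 16, "%s", client);
              snprintf(grants[ngrants].server, 16, "%s", server);
              grants[ngrants].expires = time(NULL) + GRANT_TTL;
              ngrants++;
          }
      }

      /* Called for each new flow: admit only if a fresh grant exists. */
      static int admit(const char *client, const char *server)
      {
          time_t now = time(NULL);
          for (int i = 0; i < ngrants; i++)
              if (now < grants[i].expires &&
                  strcmp(grants[i].client, client) == 0 &&
                  strcmp(grants[i].server, server) == 0)
                  return 1;
          return 0;
      }

      int main(void)
      {
          on_dns_answer("203.0.113.7", "198.51.100.2");
          printf("client that resolved: %s\n",
                 admit("203.0.113.7", "198.51.100.2") ? "admitted" : "dropped");
          printf("scanner that did not: %s\n",
                 admit("192.0.2.99", "198.51.100.2") ? "admitted" : "dropped");
          return 0;
      }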

    Stefan Saroiu
  • Yingdi Yu, Duane Wessels, Matt Larson, Lixia Zhang

    Operators of high-profile DNS zones utilize multiple authority servers for performance and robustness. We conducted a series of trace-driven measurements to understand how current caching resolver implementations distribute queries among a set of authority servers. Our results reveal areas for improvement in the "apparently sound" server selection schemes used by some popular implementations. In some cases, the selection schemes lead to sub-optimal behavior of caching resolvers, e.g., sending a significant number of queries to unresponsive servers. We believe that most of these issues are caused by careless implementations, such as continuing to decrease a server's SRTT after the server has been selected, treating unresponsive servers as responsive ones, and using a constant SRTT decay factor. For the problems identified in this work, we recommend corresponding solutions.
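
    For context, the sketch below shows the selection loop the abstract takes as the baseline: query the authority server with the lowest smoothed RTT (SRTT), update that server's SRTT from the measured sample, and decay only the servers that were not queried so they are eventually re-probed. The constants are illustrative, not any particular resolver's; the implementation flaws cited above amount to deviations from this loop (e.g., also decaying the server that was just selected).

      /* SRTT-based authority server selection, decaying unselected servers. */
      #include <stdio.h>

      #define NSERVERS 3
      #define ALPHA    0.3    /* EWMA weight of the new RTT sample (assumed) */
      #define DECAY    0.98   /* per-query decay of unselected SRTTs (assumed) */

      static double srtt[NSERVERS] = { 50.0, 80.0, 120.0 };   /* ms */

      static int pick_server(void)
      {
          int best = 0;
          for (int i = 1; i < NSERVERS; i++)
              if (srtt[i] < srtt[best])
                  best = i;
          return best;
      }

      static void update(int queried, double measured_rtt)
      {
          for (int i = 0; i < NSERVERS; i++) {
              if (i == queried)    /* only the queried server gets a real sample */
                  srtt[i] = (1 - ALPHA) * srtt[i] + ALPHA * measured_rtt;
              else                 /* the others decay so they get probed again */
                  srtt[i] *= DECAY;
          }
      }

      int main(void)
      {
          for (int q = 0; q < 5; q++) {
              int s = pick_server();
              update(s, 60.0);     /* pretend every query measures 60 ms */
              printf("query %d -> server %d (srtt now %.1f ms)\n", q, s, srtt[s]);
          }
          return 0;
      }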

    Renata Teixeira
  • Benoit Donnet, Matthew Luckie, Pascal Mérindol, Jean-Jacques Pansiot

    Operators have deployed Multiprotocol Label Switching (MPLS) in the Internet for over a decade. However, its impact on Internet topology measurements is not well known, and some MPLS configurations can lead to false router-level links in maps derived from traceroute data. In this paper, we introduce a measurement-based classification of MPLS tunnels, identifying tunnels where IP hops are revealed but not explicitly tagged as label switching routers, as well as tunnels that obscure the underlying path. Using a large-scale dataset we collected, we show that paths frequently cross MPLS tunnels in today's Internet: in our data, at least 30% of the paths we tested traverse an MPLS tunnel. We also propose and evaluate several methods to reveal MPLS tunnels that are not explicitly flagged as such: we discover that their number is significant (up to half the number of explicit tunnels) but that most of them do not obscure IP-level topology discovery.
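
    As a toy rendering of that classification, the sketch below labels a traceroute hop as an explicit tunnel when its ICMP reply carries an MPLS label stack (RFC 4950), and as an implicit one on a simplified quoted-TTL signature. The actual signatures in the paper are more involved; this is only a sketch under those simplifying assumptions.

      /* Simplified MPLS tunnel classification for a single traceroute hop. */
      #include <stdio.h>

      struct hop {
          int has_label_stack;   /* ICMP time-exceeded carried MPLS labels */
          int qttl_gt_one;       /* quoted IP TTL > 1, hinting at MPLS TTL handling */
      };

      static const char *classify(const struct hop *h)
      {
          if (h->has_label_stack) return "explicit tunnel (labels revealed)";
          if (h->qttl_gt_one)     return "implicit tunnel (inferred from TTL)";
          return "no evidence of MPLS";
      }

      int main(void)
      {
          struct hop hops[] = { {1, 1}, {0, 1}, {0, 0} };
          for (int i = 0; i < 3; i++)
              printf("hop %d: %s\n", i + 1, classify(&hops[i]));
          return 0;
      }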

    Yin Zhang
  • Hamed Haddadi, Richard Mortier, Steven Hand

    People everywhere are generating ever-increasing amounts of data, often without being fully aware of who is recording what about them. For example, initiatives such as mandated smart metering, expected to be widely deployed in the UK in the next few years and already attempted in countries such as the Netherlands, will generate vast quantities of detailed, personal data about huge segments of the population. Neither the impact nor the potential of this society-wide data gathering is well understood. Once data is gathered, it will be processed -- and society is only now beginning to grapple with the consequences for privacy, both legal and ethical, of these actions; see, e.g., Brown et al. There is the potential for great harm through, e.g., invasion of privacy; but also the potential for great benefits by using this data to make more efficient use of resources, as well as releasing its vast economic potential. In this editorial we briefly discuss work in this area, the challenges still faced, and some potential avenues for addressing them.

  • Martin Arlitt

    Time tends to pass more quickly than we would like. Sometimes it is helpful to reflect on what you have accomplished, and to derive what you have learned from the experiences. These "lessons learned" may then be leveraged by yourself or others in the future. Occasionally, an external event will motivate this self reflection. For me, it was the 50th anniversary reunion of the St. Walburg Eagles, held in July 2011. The Eagles are a full-contact (ice) hockey team I played with between 1988 and 1996 (the Eagles ceased operations twice during this period, which limited me to four seasons playing with them), while attending university. What would I tell my friends and former teammates that I had been doing for the past 15 years? After some thought, I realized that my time as an Eagle had prepared me for a research career, in ways I would never have imagined. This article (an extended version with color photos is available in [1]) shares some of these similarities, to motivate others to reflect on their own careers and achievements, and perhaps make proactive changes as a result.

  • Jon Crowcroft

    The Internet is not a Universal service, but then neither is democracy. So should the Internet be viewed as a right? It's certainly sometimes wrong. In this brief article, we depend on the Internet to reach our readers, and we hope that they don't object to our doing that.

  • Charles Kalmanek

    It has become a truism that innovation in the information and communications technology (ICT) fields is occurring faster than ever before. This paper posits that successful innovation requires three essential elements: a need, know-how or knowledge, and favorable economics. The paper examines this proposition by considering three technical areas in which there has been significant innovation in recent years: server virtualization and the cloud, mobile application optimization, and mobile speech services. An understanding of the elements that contribute to successful innovation is valuable to anyone who does either fundamental or applied research in ICT fields.

  • kc claffy

    The second Workshop on Internet Economics [2], hosted by CAIDA and Georgia Institute of Technology on December 1-2, 2011, brought together network technology and policy researchers with providers of commercial Internet facilities and services (network operators) to further explore the common objective of framing an agenda for the emerging but empirically stunted field of Internet infrastructure economics. This report describes the workshop discussions and presents relevant open research questions identified by its participants.
