Computer Communication Review: Papers

  • Yeonhee Lee, Youngseok Lee

    Internet traffic measurement and analysis has long been used to characterize network usage and user behaviors, but faces the problem of scalability under the explosive growth of Internet traffic and high-speed access. Scalable Internet traffic measurement and analysis is difficult because a large data set requires matching computing and storage resources. Hadoop, an open-source computing platform of MapReduce and a distributed file system, has become a popular infrastructure for massive data analytics because it facilitates scalable data processing and storage services on a distributed computing system consisting of commodity hardware. In this paper, we present a Hadoop-based traffic monitoring system that performs IP, TCP, HTTP, and NetFlow analysis of multi-terabytes of Internet traffic in a scalable manner. From experiments with a 200-node testbed, we achieved 14 Gbps throughput for 5 TB files with IP and HTTP-layer analysis MapReduce jobs. We also explain the performance issues related to traffic analysis MapReduce jobs.

    Sharad Agarwal
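
    The abstract above centers on running traffic analysis as MapReduce jobs. The following minimal Python sketch illustrates only the map/reduce pattern (bytes per source address) with an in-process shuffle; it is not the authors' Hadoop implementation, and the record format is made up.

      from itertools import groupby
      from operator import itemgetter

      def mapper(record):
          """Emit (source_ip, byte_count) for one packet/flow record."""
          src_ip, dst_ip, nbytes = record
          yield src_ip, nbytes

      def reducer(key, values):
          """Sum the byte counts emitted for one source address."""
          yield key, sum(values)

      def run_job(records):
          """Map, then simulate the shuffle with a sort + groupby, then reduce."""
          mapped = [kv for rec in records for kv in mapper(rec)]
          mapped.sort(key=itemgetter(0))
          out = {}
          for key, group in groupby(mapped, key=itemgetter(0)):
              for k, total in reducer(key, (v for _, v in group)):
                  out[k] = total
          return out

      trace = [("10.0.0.1", "10.0.0.2", 1500),
               ("10.0.0.1", "10.0.0.3", 400),
               ("10.0.0.9", "10.0.0.2", 60)]
      print(run_job(trace))    # {'10.0.0.1': 1900, '10.0.0.9': 60}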
  • Yaoqing Liu, Syed Obaid Amin, Lan Wang

    The size of the global Routing Information Base (RIB) has been increasing at an alarming rate. This directly leads to the rapid growth of the global Forwarding Information Base (FIB) size, which raises serious concerns for ISPs as the FIB memory in line cards is much more expensive than regular memory modules and it is very costly to increase this memory capacity frequently for all the routers in an ISP. One potential solution is to install only the most popular FIB entries into the fast memory (i.e., a FIB cache), while storing the complete FIB in slow memory. In this paper, we propose an effective FIB caching scheme that achieves a considerably higher hit ratio than previous approaches while preventing the cache-hiding problem. Our experimental results show that with only 20K prefixes in the cache (5.36% of the actual FIB size), the hit ratio of our scheme is higher than 99.95%. Our scheme can also handle cache misses, cache replacement and routing updates efficiently.

    Fabián E. Bustamante
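
    The scheme in the abstract above hinges on avoiding the cache-hiding problem, where a cached covering prefix masks a more-specific route that exists only in the full FIB. The Python sketch below illustrates that idea with a toy LRU cache; when caching the matched prefix could hide a more-specific one, it falls back to a host route, whereas the paper generates coarser non-overlapping leaf prefixes. The table and cache size are made up, and this is not the authors' design.

      import ipaddress
      from collections import OrderedDict

      FULL_FIB = {                                   # complete FIB (slow memory)
          ipaddress.ip_network("0.0.0.0/0"): "D",
          ipaddress.ip_network("10.0.0.0/8"): "A",
          ipaddress.ip_network("10.1.0.0/16"): "B",
          ipaddress.ip_network("192.168.0.0/16"): "C",
      }
      CACHE_SIZE = 16
      cache = OrderedDict()                          # FIB cache (fast memory, LRU)

      def lpm(table, addr):
          """Longest-prefix match of addr against a prefix -> next-hop table."""
          best = None
          for prefix, nh in table.items():
              if addr in prefix and (best is None or prefix.prefixlen > best[0].prefixlen):
                  best = (prefix, nh)
          return best

      def lookup(dst):
          addr = ipaddress.ip_address(dst)
          hit = lpm(cache, addr)
          if hit:                                    # cache hit
              cache.move_to_end(hit[0])
              return hit[1]
          prefix, nh = lpm(FULL_FIB, addr)           # cache miss: consult the full FIB
          # Caching `prefix` would be unsafe if it covers a more-specific prefix
          # that only exists in the full FIB (the cache-hiding problem).
          hidden = any(p != prefix and p.subnet_of(prefix) for p in FULL_FIB)
          entry = prefix if not hidden else ipaddress.ip_network(f"{addr}/32")
          cache[entry] = nh
          if len(cache) > CACHE_SIZE:
              cache.popitem(last=False)              # evict the least recently used entry
          return nh

      print(lookup("10.1.2.3"))   # "B"; 10.1.0.0/16 hides nothing, so it is cached as-is
      print(lookup("10.2.3.4"))   # "A"; a /32 is cached instead of 10.0.0.0/8, which would hide 10.1.0.0/16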
  • Robert Beverly, Mark Allman

    The computer science research paper review process is largely human and time-intensive. More worrisome, review processes are frequently questioned, and often non-transparent. This work advocates applying computer science methods and tools to the computer science review process. As an initial exploration, we data mine the submissions, bids, reviews, and decisions from a recent top-tier computer networking conference. We empirically test several common hypotheses, including the existence of readability, citation, call-for-paper adherence, and topical bias. From our findings, we hypothesize review process methods to improve fairness, efficiency, and transparency.

    Sharad Agarwal
  • Mark Allman

    While there has been much buzz in the community about the large depth of queues throughout the Internet—the so-called “bufferbloat” problem—there has been little empirical understanding of the scope of the phenomenon. Yet, the supposed problem is being used as input to engineering decisions about the evolution of protocols. While we know from wide-scale measurements that bufferbloat can happen, we have no empirically-based understanding of how often bufferbloat does happen. In this paper we use passive measurements to assess the bufferbloat phenomenon.

    Nikolaos Laoutaris
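
    One common way to assess buffering delay from passive traces (not necessarily the exact method used in the paper) is to compare each RTT sample of a connection against that connection's minimum RTT, treating the excess as time spent in queues. A small sketch with made-up samples:

      from collections import defaultdict

      def queueing_delays(rtt_samples):
          """rtt_samples: iterable of (connection_id, rtt_ms) pairs from a trace."""
          per_conn = defaultdict(list)
          for conn, rtt in rtt_samples:
              per_conn[conn].append(rtt)
          delays = {}
          for conn, rtts in per_conn.items():
              base = min(rtts)                              # propagation/transmission floor
              delays[conn] = [r - base for r in rtts]       # estimated time sitting in queues
          return delays

      samples = [("c1", 40), ("c1", 220), ("c1", 45), ("c2", 10), ("c2", 12)]
      print(queueing_delays(samples))
      # {'c1': [0, 180, 5], 'c2': [0, 2]} -> c1 shows roughly 180 ms of inflated delay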
  • P. Brighten Godfrey

    This article captures some of the discussion and insights from this year's ACM Workshop on Hot Topics in Networks (HotNets-XI).

  • Yan Grunenberger, Jonathan M. Smith

    We attended the 2012 Mobile World Congress in Barcelona, Spain. This note reports on some of our observations that we believe might be relevant to the SIGCOMM community.

  • Engin Arslan, Murat Yuksel, Mehmet Hadi Gunes

    Management and automated configuration of large-scale networks is one of the crucial issues for Internet Service Providers (ISPs). Since wrong configurations may lead to loss of an enormous amount of customer traffic, highly experienced network administrators are typically the ones who are trusted for the management and configuration of a running ISP network. We frame the management and experimentation of a network as a "game" for training network administrators without having to risk the network operation. The interactive environment treats the trainee network administrators as players of a game and tests them with various network failures or dynamics.

  • Arjuna Sathiaseelan, Jon Crowcroft

    The Computer Laboratory, University of Cambridge hosted a workshop on "Internet on the Move" on September 22, 2012. The objective of the workshop was to bring together academia, industry and regulators to discuss the challenges in realizing the notion of ubiquitous mobile Internet. This editorial provides a general overview of the issues discussed around enabling universal mobile coverage and of some of the solutions that have been proposed to achieve ubiquitous mobile connectivity.

  • Jennifer Rexford, Pamela Zave

    A workshop on Abstractions for Network Services, Architecture, and Implementation brought together researchers interested in creating better abstractions for creating and analyzing networked services and network architectures. The workshop took place at DIMACS on May 21-23, 2012. This report summarizes the presentations and discussions that took place at the workshop, organized by areas of abstractions such as layers, domains, and graph properties.

  • S. Keshav

    I considered many ideas for my last CCR editorial but, in the end, decided to write about something that I think I share with every reader of CCR, yet is something we rarely acknowledge even in conversation, let alone in print: the joy of research.

    For me, research is the process of exploring new ideas, formulating problems in areas yet undefined, and then using our ever-expanding toolkit of algorithms, technologies, and theories to solve them. I find this process to be deep, satisfying, and fun. It is fun to explore new ideas, fun to learn new tools, techniques, and theories, and fun to solve puzzles. I'm especially delighted during that brief, sharp, shining moment when it is as if a puzzle piece has clicked into place and confusion is transformed into simplicity. It is this that keeps me fueled as a researcher; it is the direct experience of the fun of research that converts the best of our students to our ranks.
     
    To be sure, there are many other ways to have fun. One can climb mountains or hike forbidding landscapes; swim the waves or fly from continent to continent in search of exotic cuisines. I have done some of these, but find them all, to some degree, unsatisfying. These experiences are intense but ephemeral. Besides, it is hard to justify that they have any socially redeeming value. In contrast, research, especially the kind of work that is both theoretically challenging yet practically applicable, is not only fun but also worthwhile.
     
    Of course, not all aspects of research are fun. Behind each sweet moment of success there can be many dreary hours of work, with little guarantee that a hunch may pan out. Each idea carried into practice, each paper accepted for publication, and each research project that benefits society builds on many discarded ideas, rejected papers, and failed projects. Yet, even in the face of these failures, I feel that the process itself is fun. I sympathize with Oscar Wilde, who wrote “We are all in the gutter, but some of us are looking at the stars.”
     
    I think every researcher, at some level, has a direct understanding of what I mean by the joy of research. I know this because our shared experience binds us despite barriers of geography, culture, and language. I find an instant rapport with other researchers when discussing each other’s work: the barriers to communication drop as we share our experiences, hunches, and ideas. The excitement simply shines through.
     
    Unfortunately, we do not often share our sense of joy with outsiders. Our ideas are usually hidden behind walls of jargon, inscrutable mathematical notation, and the arcane conventions of academic publishing. This does not serve us well: baffled funders and soporific students do not aid our cause. Instead, we should let our exuberance and joy--tempered with gratitude to our employers--motivate us to share our ideas. By interpreting our work to non-experts we open channels of communication with those who can directly benefit from our ideas and innovations. For many of us, this is one of the deepest motivations for our work.
     
    So, let the joy of research be your touchstone. Share this joy with your fellow researchers, but share it too with others, that the fire in your work may ignite a light elsewhere, and that your work benefit society at large.
  • Xuetao Wei, Nicholas Valler, B. Aditya Prakash, Iulian Neamtiu, Michalis Faloutsos, Christos Faloutsos

    If a false rumor propagates via Twitter, while the truth propagates between friends in Facebook, which one will prevail? This question captures the essence of the problem we address here. We study the intertwined propagation of two competing "memes" (or viruses, rumors, products etc.) in a composite network. A key novelty is the use of a composite network, which in its simplest model is defined as a single set of nodes with two distinct types of edges interconnecting them. Each meme spreads across the composite network in accordance with an SIS-like propagation model (a flu-like infection-recovery). To study the epidemic behavior of our system, we formulate it as a non-linear dynamic system (NLDS). We develop a metric for each meme that is based on the eigenvalue of an appropriately constructed matrix and argue that this metric plays a key role in determining the "winning" meme. First, we prove that our metric determines the tipping point at which both memes eventually become extinct. Second, we conjecture that the meme with the strongest metric will most likely prevail over the other, and we show evidence of that via simulations in both real and synthetic composite networks. Our work is among the first to study the interplay between two competing memes in composite networks.

    Augustin Chaintreau
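
    The following toy computation illustrates the kind of eigenvalue-based strength metric the abstract refers to: for an SIS-style meme with infection rate beta and recovery rate delta, a standard score is (beta/delta) times the largest eigenvalue of the adjacency matrix of the edge type it spreads on. The matrix the paper actually constructs for the composite system differs, and the graphs and rates below are made up.

      import numpy as np

      def strength(adjacency, beta, delta):
          """(beta/delta) times the largest eigenvalue of the layer's adjacency matrix."""
          lam1 = max(np.linalg.eigvals(adjacency).real)
          return (beta / delta) * lam1

      # Two edge types over the same four nodes (say, "Twitter" links vs "Facebook" links).
      A_twitter  = np.array([[0, 1, 1, 1], [1, 0, 1, 0], [1, 1, 0, 1], [1, 0, 1, 0]], float)
      A_facebook = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)

      s_rumor = strength(A_twitter, beta=0.20, delta=0.50)
      s_truth = strength(A_facebook, beta=0.30, delta=0.40)
      print(f"rumor score {s_rumor:.2f} vs truth score {s_truth:.2f}")
      # The paper's conjecture: the meme with the larger score tends to prevail.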
  • Sebastian Zander, Lachlan L.H. Andrew, Grenville Armitage, Geoff Huston, George Michaelson

    The Teredo auto-tunnelling protocol allows IPv6 hosts behind IPv4 NATs to communicate with other IPv6 hosts. It is enabled by default on Windows Vista and Windows 7. But Windows clients are self-constrained: if their only IPv6 access is Teredo, they are unable to resolve host names to IPv6 addresses. We use web-based measurements to investigate the (latent) Teredo capability of Internet clients, and the delay introduced by Teredo. We compare this with native IPv6 and 6to4 tunnelling capability and delay. We find that only 6--7% of connections are from fully IPv6-capable clients, but an additional 15--16% of connections are from clients that would be IPv6-capable if Windows Teredo was not constrained. However, Teredo increases the median latency to fetch objects by 1--1.5 seconds compared to IPv4 or native IPv6, even with an optimally located Teredo relay. Furthermore, in many cases Teredo fails to establish a tunnel.

    Jia Wang
  • Ingmar Poese, Benjamin Poese, Georgios Smaragdakis, Steve Uhlig, Anja Feldmann, Bruce Maggs

    Today, a large fraction of Internet traffic is originated by Content Delivery Networks (CDNs). To cope with increasing demand for content, CDNs have deployed massively distributed infrastructures. These deployments pose challenges for CDNs as they have to dynamically map end-users to appropriate servers without being fully aware of the network conditions within an Internet Service Provider (ISP) or the end-user location. On the other hand, ISPs struggle to cope with rapid traffic shifts caused by the dynamic server selection policies of the CDNs. The challenges that CDNs and ISPs face separately can be turned into an opportunity for collaboration. We argue that it is sufficient for CDNs and ISPs to coordinate only in server selection, not routing, in order to perform traffic engineering. To this end, we propose Content-aware Traffic Engineering (CaTE), which dynamically adapts server selection for content hosted by CDNs using ISP recommendations on small time scales. CaTE relies on the observation that by selecting an appropriate server among those available to deliver the content, the path of the traffic in the network can be influenced in a desired way. We present the design and implementation of a prototype to realize CaTE, and show how CDNs and ISPs can jointly take advantage of the already deployed distributed hosting infrastructures and path diversity, as well as the ISP's detailed view of the network status, without revealing sensitive operational information. By relying on tier-1 ISP traces, we show that CaTE allows CDNs to enhance the end-user experience while enabling an ISP to achieve several traffic engineering goals.

    Renata Teixeira
  • Marko Zec, Luigi Rizzo, Miljenko Mikuc

    Can a software routing implementation compete in a field generally reserved for specialized lookup hardware? This paper presents DXR, an IPv4 lookup scheme based on transforming large routing tables into compact lookup structures which easily fit into cache hierarchies of modern CPUs. DXR supports various memory/speed tradeoffs and scales almost linearly with the number of CPU cores. The smallest configuration, D16R, distills a real-world BGP snapshot with 417,000 IPv4 prefixes and 213 distinct next hops into a structure consuming only 782 Kbytes, less than 2 bytes per prefix, and achieves 490 million lookups per second (MLps) in synthetic tests using uniformly random IPv4 keys on a commodity 8-core CPU. Some other DXR configurations exceed 700 MLps at the cost of increased memory footprint. DXR significantly outperforms a software implementation of DIR-24-8-BASIC, has better scalability, and requires less DRAM bandwidth. Our prototype works inside the FreeBSD kernel, which permits DXR to be used with standard APIs and routing daemons such as Quagga and XORP, and to be validated by comparing lookup results against the BSD radix tree.

    Nikolaos Laoutaris
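
    A minimal Python sketch of the range-based lookup idea behind DXR: expand the prefix table into sorted, non-overlapping address ranges and binary-search the destination. DXR's contribution is compressing such ranges into a two-stage structure that fits CPU caches and scales across cores; none of that engineering appears here, and the toy table is made up.

      import bisect
      import ipaddress

      PREFIXES = [("0.0.0.0/0", "default"), ("10.0.0.0/8", "A"),
                  ("10.1.0.0/16", "B"), ("192.168.0.0/16", "C")]

      def lpm(nets, addr_int):
          """Plain longest-prefix match, used only while building the ranges."""
          addr, best, hop = ipaddress.ip_address(addr_int), -1, None
          for net, nh in nets:
              if addr in net and net.prefixlen > best:
                  best, hop = net.prefixlen, nh
          return hop

      def build_ranges(prefixes):
          """Flatten the table into sorted range starts and their next hops."""
          nets = [(ipaddress.ip_network(p), nh) for p, nh in prefixes]
          points = {0}
          for net, _ in nets:
              points.add(int(net.network_address))           # a prefix begins here
              after = int(net.broadcast_address) + 1          # ...and stops covering here
              if after <= 0xFFFFFFFF:
                  points.add(after)
          starts = sorted(points)
          hops = [lpm(nets, s) for s in starts]               # constant between boundaries
          return starts, hops

      def lookup(starts, hops, dst):
          """Binary search: the range whose start lies at or before dst wins."""
          i = bisect.bisect_right(starts, int(ipaddress.ip_address(dst))) - 1
          return hops[i]

      starts, hops = build_ranges(PREFIXES)
      print(lookup(starts, hops, "10.1.2.3"))    # B
      print(lookup(starts, hops, "10.9.9.9"))    # A
      print(lookup(starts, hops, "8.8.8.8"))     # default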
  • Jon Whiteaker, Fabian Schneider, Renata Teixeira, Christophe Diot, Augustin Soule, Fabio Picconi, Martin May

    The success of over-the-top (OTT) services reflects users' demand for personalization of digital services at home. ISPs propose fulfilling this demand with a cloud delivery model, which would simplify the management of the service portfolio and bring them additional revenue streams. We argue that this approach has many limitations that can be fixed by turning the home gateway into a flexible execution platform. We define requirements for such a "service-hosting gateway" and build a proof of concept prototype using a virtualized Intel Groveland system-on-a-chip platform. We discuss remaining challenges such as service distribution, security and privacy, management, and home integration.

    David Wetherall
  • Jeffrey C. Mogul, Lucian Popa

    Infrastructure-as-a-Service ("Cloud") data-centers intrinsically depend on high-performance networks to connect servers within the data-center and to the rest of the world. Cloud providers typically offer different service levels, and associated prices, for different sizes of virtual machine, memory, and disk storage. However, while all cloud providers provide network connectivity to tenant VMs, they seldom make any promises about network performance, and so cloud tenants suffer from highly-variable, unpredictable network performance. Many cloud customers do want to be able to rely on network performance guarantees, and many cloud providers would like to offer (and charge for) these guarantees. But nobody really agrees on how to define these guarantees, and it turns out to be challenging to define "network performance" in a way that is useful to both customers and providers. We attempt to bring some clarity to this question.

  • Tanja Zseby, kc claffy

    On May 14-15, 2012, CAIDA hosted the first international Workshop on Darkspace and Unsolicited Traffic Analysis (DUST 2012) to provide a forum for discussion of the science, engineering, and policy challenges associated with darkspace and unsolicited traffic analysis. This report captures threads discussed at the workshop and lists resulting collaborations.

  • Tobias Lauinger, Nikolaos Laoutaris, Pablo Rodriguez, Thorsten Strufe, Ernst Biersack, Engin Kirda

    Named Data Networking architectures have been proposed to improve various shortcomings of the current Internet architecture. A key part of these proposals is the capability of caching arbitrary content in arbitrary network locations. While caching has the potential to improve network performance, the data stored in caches can be seen as transient traces of past communication that attackers can exploit to compromise the users' privacy. With this editorial note, we aim to raise awareness of privacy attacks as an intrinsic and relevant issue in Named Data Networking architectures. Countermeasures against privacy attacks are subject to a trade-off between performance and privacy. We discuss several approaches to countermeasures representing different incarnations of this tradeoff, along with open issues to be looked at by the research community.

  • Sameer S. Tilak, Philip Papadopoulos

    Software Operations and Management (O&M), i.e., installing, configuring, and updating thousands of software components within a conventional Data Center, is a well-understood issue. Existing frameworks such as the Rocks toolkit have revolutionized the way system administrators deploy and manage large-scale compute clusters, storage servers, and visualization facilities. However, existing tools like Rocks are designed for a "friendly" Data Center environment where stable power along with high-performance compute, storage, and networking is the norm. In contrast, sensor networks are embedded deeply within the harsh physical environment where node failures, node mobility and idiosyncrasies of wireless networks are the norm. In addition, device heterogeneity and the resource-constrained nature (e.g., power, memory, CPU capability) of the sensor cyberinfrastructure (CI) are realities that must be addressed and reconciled. Although sensor CI must be more adaptable and more rapidly reconfigurable than its data center equivalents, few if any of the existing software O&M tools and techniques have been adapted to the significantly more challenging environment of sensor networks. A more automated approach to software O&M would provide significant benefits to system builders, operators, and sensor network researchers. We argue that by starting with software O&M techniques developed for data centers, and then adapting and extending them to the world of resource-constrained sensor networks, we will be able to provide robust and scientifically reproducible mechanisms for defining the software footprint of individual sensors and networks of sensors. This paper describes the current golden-image based software O&M practice in the Android world. We then propose an approach that adapts the Rocks toolkit to allow one to rapidly and reliably build complete Android environments (firmware flashes) at the individual sensor level and extend this to large networks of diverse sensors.

  • Dimitri Papadimitriou, Lluís Fàbrega, Pere Vilà, Davide Careglio, Piet Demeester

    In this paper, we report the results of the workshop organized by the FP7 EULER project on measurement-based research and associated methodology, experiments and tools. This workshop aimed at gathering all Future Internet Research and Experimentation (FIRE) experimental research projects under this thematic. Participants were invited to present the usage of measurement techniques in their experiments, their developments on measurement tools, and their foreseeable needs with respect to new domains of research not currently addressed by existing measurement techniques and tools.

  • S. Keshav

    Networking researchers seem to fall into two nearly non-overlapping categories: those whose blood runs with the practical clarity of “rough consensus and running code” (in the words of Dave Clark) and those who worship, instead, at the altar of mathematical analysis. The former build systems that work, even work well, but don’t necessarily know at what level of scaling or load their systems will catastrophically fail. Congestion collapse in the Internet in the mid-1980’s, for example, was a direct result of this approach, and similar scaling failures recur periodically (HTTP 1.0, “push” content distribution, and Shoutcast, to name a few), although many pragmatically-engineered systems, such as DNS, email, and Twitter, have proved to be incredibly scalable and robust.

    Proponents of mathematical modeling are better able to quantify the performance of their systems, using the powerful tools arising from theories such as model checking, queueing theory, and control theory. However, purely analytical approaches (remember PetriNets?) have had little practical success due to three inherent limitations. To begin with, it is not clear what mathematical approach is the best fit to a given problem. There are a plethora of approaches -- each of which can take years to master -- and it is nearly impossible to decide, a priori, which one best matches the problem at hand. For example, to optimize a system one can use linear or quadratic optimization, or any number of heuristic approaches, such as hill-climbing, genetic algorithms, and taboo search. Which one to pick? It all depends on the nuances of the problem, the quality of the available tools, and prior experience in using these approaches. That’s pretty daunting for a seasoned researcher, let alone a graduate student. Second, every mathematically sound approach necessarily makes simplifying assumptions. Fitting the square peg of reality into the round hole of mathematical assumptions can lead to impractical, even absurd, designs. As a case in point, assumptions of individual rationality needed by decision and game theory rarely hold in practice. Third, having spent the time to learn about a particular modeling approach, a researcher may be seduced into viewing the approach as being more powerful than it really is, ignoring its faults and modeling assumptions. For these reasons, I believe that one should couple a healthy respect for mathematical modeling with a hearty skepticism of its outcomes.
     
    When mathematical modeling and pragmatic system design come together, it can lead to beautiful systems. The original Ethernet, for example, brought together the elegant mathematics of researchers like Kleinrock, Tobagi, Lam, and Abramson with hands-on implementation by Metcalfe. Similarly, Jacobson and Karels brought a deep understanding of control theory to their inspired design of TCP congestion control. More recently, the Google Page Rank algorithm by Page, Brin, Motwani, and Winograd is based on eigenvalue computation in sparse Markov matrices.
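
    As a small worked version of the PageRank remark above: the ranking is the dominant eigenvector of a damped, column-stochastic link matrix, which power iteration approximates. The four-page link graph is made up.

      import numpy as np

      links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}     # page -> pages it links to (made up)
      n, d = 4, 0.85                                   # damping factor d

      # Column-stochastic link matrix: M[j, i] = 1/outdegree(i) if page i links to page j.
      M = np.zeros((n, n))
      for i, outs in links.items():
          for j in outs:
              M[j, i] = 1.0 / len(outs)

      rank = np.full(n, 1.0 / n)
      for _ in range(100):                             # power iteration
          rank = (1 - d) / n + d * (M @ rank)
      print(rank)                                      # page 2, with the most in-links, ranks highest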
     
    Given these enormous successes, it is no wonder that researchers in our community try hard to combine mathematical modeling with system building. Most papers in SIGCOMM these days build and study real systems applying analytical techniques arising from areas such as optimization, protocol verification, information theory, and communication theory. Although I must confess that the mathematical details of many papers are beyond my understanding (despite my recent attempt to remedy the situation), I think this is a positive development.
     
    Yet, much needs to be done. As a field, we lack widely accepted abstractions for even relatively simple concepts such as names and addresses, let alone routing and middleboxes. These have stymied our ability to build standard models for networking problems or a standard list of Grand Challenges. The recent emphasis on clean-slate design has renewed focus on these problems, and I look forward to the outcomes of these efforts in the years to come.
     
  • Vinh The Lam, Sivasankar Radhakrishnan, Rong Pan, Amin Vahdat, George Varghese

    Application performance in cloud data centers often depends crucially on network bandwidth, not just the aggregate data transmitted as in typical SLAs. We describe a mechanism for data center networks called NetShare that requires no hardware changes to routers but allows bandwidth to be allocated predictably across services based on weights. The weights are either specified by a manager, or automatically assigned at each switch port based on a virtual machine heuristic for isolation. Bandwidth unused by a service is shared proportionately by other services, providing weighted hierarchical max-min fair sharing. On a testbed of Fulcrum switches, we demonstrate that NetShare provides bandwidth isolation in various settings, including multipath networks.

    Sharad Agarwal
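
    The allocation policy named in the abstract, weighted max-min fair sharing with proportional redistribution of unused bandwidth, can be illustrated for a single link with a short water-filling routine. NetShare enforces this per switch port without hardware changes; the services, weights, and demands below are made up.

      def weighted_max_min(capacity, demands, weights):
          """Allocate one link's capacity across services, weighted max-min fair."""
          alloc = {s: 0.0 for s in demands}
          active = set(demands)                        # services that still want more
          remaining = capacity
          while active and remaining > 1e-9:
              total_w = sum(weights[s] for s in active)
              share = {s: remaining * weights[s] / total_w for s in active}
              satisfied = {s for s in active if demands[s] - alloc[s] <= share[s]}
              if not satisfied:                        # everyone can absorb its full share
                  for s in active:
                      alloc[s] += share[s]
                  break
              for s in satisfied:                      # cap satisfied services at their demand
                  remaining -= demands[s] - alloc[s]
                  alloc[s] = demands[s]
              active -= satisfied
          return alloc

      demands = {"web": 1.0, "backup": 10.0, "analytics": 10.0}   # Gbps (made up)
      weights = {"web": 1, "backup": 1, "analytics": 2}
      print(weighted_max_min(10.0, demands, weights))
      # web is capped at its 1 Gbps demand; the rest is split 1:2 (backup 3, analytics 6)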
  • Yosuke Himura, Yoshiko Yasuda

    Multi-tenant datacenter networking, with which multiple customer (tenant) networks are virtualized over a single shared physical infrastructure, is cost-effective but imposes significant manual configuration costs. Such tasks would be alleviated with configuration templates, but a crucial difficulty lies in creating appropriate (i.e., reusable) ones. In this work, we propose a graph-based method of mining configurations of existing tenants to extract their recurrent patterns, which can be used as reusable templates for upcoming tenants. The effectiveness of the proposed method is demonstrated with actual configuration files obtained from a business datacenter network.

    Sharad Agarwal
  • Anonymous

    Some ISPs and governments (most notably the Great Firewall of China) use DNS injection to block access to "unwanted" websites. The censorship tools inspect DNS queries near the ISP's boundary routers for sensitive domain keywords and inject forged DNS responses, blocking the users from accessing censored sites such as Twitter and Facebook. Unfortunately this causes collateral damage, affecting communication beyond the censored networks when outside DNS traffic traverses censored links. In this paper, we analyze the causes of the collateral damage and measure the Internet to identify the injecting activities and their effect. We find 39 ASes in China injecting forged DNS replies. Furthermore, 26 of 43,000 measured open resolvers outside China, distributed in 109 countries, may suffer some collateral damage from these forged replies. Unlike previous work, which considers the collateral damage to be limited to queries to root servers (F, I, J) located in China, we find that most collateral damage arises when the paths between resolvers and some TLD name servers transit through ISPs in China.

    Philip Levis
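
    The measurement idea can be sketched as follows: send a DNS query for a potentially censored name toward an address that runs no DNS server but whose forward path transits the censoring network, so any answer that arrives must have been injected on-path. The sketch below (using the dnspython package) is illustrative only; the target address and names are placeholders, and meaningful results require an appropriate vantage point.

      import dns.exception
      import dns.message
      import dns.query

      NON_RESPONSIVE_TARGET = "192.0.2.1"      # placeholder: an address with no DNS server
      TEST_NAMES = ["sensitive-name.example", "neutral-name.example"]   # placeholders

      def probe(name, target, timeout=3.0):
          """Return injected answers (if any) for a query sent to a silent target."""
          query = dns.message.make_query(name, "A")
          try:
              response = dns.query.udp(query, target, timeout=timeout)
          except dns.exception.Timeout:
              return None                      # expected when no injector sits on the path
          return [rr.to_text() for rrset in response.answer for rr in rrset]

      for name in TEST_NAMES:
          answers = probe(name, NON_RESPONSIVE_TARGET)
          verdict = "no response (path looks clean)" if answers is None else f"injected: {answers}"
          print(f"{name}: {verdict}")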
  • kc claffy

    On Monday, 22 August 2011, CAIDA hosted a one-day workshop to discuss scalable measurement and analysis of BGP and traceroute topology data, and practical applications of such data analysis, including tracking of macroscopic censorship and filtering activities on the Internet. Discussion topics included: the surprising stability in the number of BGP updates over time; techniques for improving measurement and analysis of inter-domain routing policies; an update on Colorado State's BGPMon instrumentation; using BGP data to improve the interpretation of traceroute data, both for real-time diagnostics (e.g., AS traceroute) and for large-scale topology mapping; using both BGP and traceroute data to support detection and mapping of infrastructure integrity, including different types of filtering and censorship; and use of BGP data to analyze existing and proposed approaches to securing the interdomain routing system. This report briefly summarizes the presentations and discussions that followed.

  • Jon Crowcroft

    In all seriousness, Differential Privacy is a new technique and set of tools for managing responses to statistical queries over secured data, in such a way that the user cannot reconstruct more precise identification of principals in the dataset beyond a formally well-specified bound. This means that personally sensitive data such as Internet packet traces or social network measurements can be shared between researchers without invading personal privacy, and that assurances can be made with accuracy. With less seriousness, I would like to talk about Differential Piracy, but not without purpose. For sure, while there are legitimate reasons for upstanding citizens to live without fear of eternal surveillance, there is also a segment of society that gets away with things they shouldn't, under a cloak. Perhaps that is the (modest) price we have to pay for a modicum less paranoia in this brave new world. So, there has been a lot of work recently on Piracy Preserving Queries and Differential Piracy. These two related technologies exploit new ideas in statistical security. Rather than security through obscurity, the idea is to offer privacy through lack of differentiation (no, not inability to perform basic calculus, more the inability to distinguish between large numbers of very similar things).
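
    For the serious half of the note, a minimal example of the Laplace mechanism that underlies differential privacy: a counting query has sensitivity 1, so adding Laplace noise with scale 1/epsilon bounds what any single answer reveals about one principal. The dataset and epsilon below are made up.

      import numpy as np

      rng = np.random.default_rng(0)

      def dp_count(records, predicate, epsilon):
          """Differentially private count: true count plus Laplace(1/epsilon) noise."""
          true_count = sum(1 for r in records if predicate(r))
          return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)   # sensitivity 1

      # Made-up "trace": 100 flows, roughly a third of them to port 22.
      flows = [{"src": f"10.0.0.{i}", "dst_port": 22 if i % 3 == 0 else 443} for i in range(100)]
      print(dp_count(flows, lambda f: f["dst_port"] == 22, epsilon=0.5))
      # Close to the true count of 34, while no single flow's presence is pinned down.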

  • kc claffy

    On February 8-10, 2012, CAIDA hosted the fourth Workshop on Active Internet Measurements (AIMS-4) as part of our series of Internet Statistics and Metrics Analysis (ISMA) workshops. As with the previous three AIMS workshops, the goals were to further our understanding of the potential and limitations of active measurement research and infrastructure in the wide-area Internet, and to promote cooperative solutions and coordinated strategies to address future data needs of the network and security operations and research communities. This year we continued to focus on how measurement can illuminate two specific public policy concerns: IPv6 deployment and broadband performance. This report briefly describes topics discussed at this year's workshop. Slides and other materials related to the workshop are available at http://www.caida.org/.

  • Rute Sofia, Paulo Mendes, Manuel José Damásio, Sara Henriques, Fabio Giglietto, Erica Giambitto, Alessandro Bogliolo

    This paper provides an interdisciplinary perspective concerning the role of prosumers in future Internet design, based on the current trend of Internet user empowerment. The paper debates the prosumer role, and addresses models to develop a symmetric Internet architecture and supply-chain based on the integration of social capital aspects. Its goal is to ignite the discussion concerning a socially-driven Internet architectural design.

  • Dirk Trossen

    The late noughties have seen an influx of work in different scientific disciplines, all addressing the question of 'design' and 'architecture'. It is a battle between those advocating the theory of 'emergent properties' and others who strive for a 'theory for architecture'. We provide a particular insight into this battle, represented in the form of a story that focuses on the role of a possibly unusual protagonist and his influence on computer science, the Internet, architecture and beyond. We show his relation to one of the great achievements of system engineering, the Internet, and the possible future as it might unfold. Note from the writer: The tale is placed in a mixture of reality and fiction, while postulating a certain likelihood for this fiction. There is no proof for the assertions made in this tale, leaving the space for a sequel to be told.

  • Marshini Chetty, Nick Feamster

    Managing a home network is challenging because the underlying infrastructure is so complex. Existing interfaces either hide or expose the network's underlying complexity, but in both cases, the information that is shown does not necessarily allow a user to complete desired tasks. Recent advances in software defined networking, however, permit a redesign of the underlying network and protocols, potentially allowing designers to move complexity further from the user and, in some cases, eliminating it entirely. In this paper, we explore whether the choices of what to make visible to the user in the design of today's home network infrastructure, performance, and policies make sense. We also examine whether new capabilities for refactoring the network infrastructure - changing the underlying system without compromising existing functionality - should cause us to revisit some of these choices. Our work represents a case study of how co-designing an interface and its underlying infrastructure could ultimately improve interfaces for that infrastructure.

  • Cheng Yi, Alexander Afanasyev, Lan Wang, Beichuan Zhang, Lixia Zhang

    In Named Data Networking (NDN) architecture, packets carry data names rather than source or destination addresses. This change of paradigm leads to a new data plane: data consumers send out Interest packets, routers forward them and maintain the state of pending Interests, which is used to guide Data packets back to the consumers. NDN routers' forwarding process is able to detect network problems by observing the two-way traffic of Interest and Data packets, and explore multiple alternative paths without loops. This is in sharp contrast to today's IP forwarding process which follows a single path chosen by the routing process, with no adaptability of its own. In this paper we outline the design of NDN's adaptive forwarding, articulate its potential benefits, and identify open research issues.
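
    A highly simplified sketch of the stateful forwarding plane described above: Interests deposit state in a Pending Interest Table (PIT), and Data consumes that state on its way back to the consumers. Real NDN forwarding also involves a Content Store, per-prefix next-hop ranking, Interest aggregation, NACKs, and retries, none of which is modeled here.

      class NdnForwarder:
          def __init__(self, fib):
              self.fib = fib        # name prefix -> ranked list of outgoing faces
              self.pit = {}         # name -> set of faces the Interest arrived on

          def on_interest(self, name, in_face):
              """Record the requesting face, then forward to a ranked next hop."""
              self.pit.setdefault(name, set()).add(in_face)
              for prefix, faces in self.fib.items():
                  if name.startswith(prefix):   # toy match; real NDN matches name components
                      return faces[0]           # probe the highest-ranked face first
              return None                       # no route; a real forwarder would NACK

          def on_data(self, name):
              """Data consumes the pending state and flows back to all requesters."""
              return self.pit.pop(name, set())

      fwd = NdnForwarder({"/video/": ["faceA", "faceB"]})
      print(fwd.on_interest("/video/clip1/seg0", in_face="consumer1"))   # faceA
      print(fwd.on_data("/video/clip1/seg0"))                            # {'consumer1'}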

  • S. Keshav

    As a networking researcher, working with computer networks day in and day out, you probably have rarely paused to reflect on the surprisingly difficult question of "What is a network?" For example, would you consider a bio-chemical system to be a network? How about a social network? Or a water supply network? Or the electrical grid? After all, all of these share some aspects in common with a computer network: they can be represented as a graph and they carry a flow (of chemical signals, messages, water, and electrons, respectively) from one or more sources to one or more destinations. So, shouldn't we make them equally objects of study by SIGCOMM members?

    You could argue that some of these networks differ dramatically from the Internet. The water network, for example, does not carry any messages and is unidirectional. So, it is not a communication network, unlike the Internet or, perhaps, a social network. This implicitly takes the position that the only networks we (as computer networking researchers) ought to study are bidirectional communication networks. This is a conservative position that is relatively easy to justify, but it excludes from consideration some interesting and open research questions that arise in the context of these other networks. Choosing the capacity of a water tank or an electrical transformer turns out to be similar in many respects to choosing the capacity of a router buffer or a transmission link. Similarly, one could imagine that the round-trip time on a social network (the time it takes for a rumour you started to get back to you by word of mouth) would inform you about the structure of the social network in much the same way as an ICMP ping. For these reasons, a more open-minded view about the nature of a network may be both pragmatic and conducive to innovation.
     
    My own view is that a network is any system that can be naturally represented by a graph. Additionally, a communication network is any system where a flow that originates at some set of source nodes is delivered to some set of destination nodes typically due to the forwarding action of intermediate nodes (although this may not be strictly necessary). This broad definition encompasses water networks, biological networks, and electrical networks as well as telecommunication networks and the Internet. It seeks to present a unifying abstraction so that techniques developed in one form of network can be adopted by researchers in the others.
     
    Besides a broad definition of networks, like the one above, the integrative study of networks--or ‘Network Science’ as its proponents call it--requires the underlying communities (and there is more than one) to be open to ideas from each other, and for the publication fora in these communities to be likewise “liberal in what you accept,” in Jon Postel's famous words. This is essential to allow researchers in Network Science to carry ideas from one community to another, despite their being less than expert in certain aspects of their work. CCR, through its publication of non-peer-reviewed Editorials, is perfectly positioned to follow this principle.
     
    I will end with a couple of important announcements. First, this issue will mark the end of Stefan Saroiu's tenure as an Area Editor. His steady editorial hand will be much missed. Thanks, Stefan!
     
    Second, starting September 1, 2012, Dina Papagiannaki will take over as the new Editor of CCR. Dina has demonstrated a breadth of understanding and depth of vision that assures me that CCR will be in very good hands. I am confident that under her stewardship CCR will rise to ever greater heights. I wish her the very best.
     
  • Supasate Choochaisri, Kittipat Apicharttrisorn, Kittiporn Korprasertthaworn, Pongpakdi Taechalertpaisarn, Chalermek Intanagonwiwat

    Desynchronization is useful for scheduling nodes to perform tasks at different times. This property is desirable for resource sharing, TDMA scheduling, and collision avoidance. Inspired by robotic circular formation, we propose DWARF (Desynchronization With an ARtificial Force field), a novel technique for desynchronization in wireless networks. Each node exerts an artificial force that repels neighboring nodes so that they perform tasks at different time phases. Nodes with closer time phases repel each other more strongly in the time domain. Each node adjusts its time phase proportionally to its received forces. Once the received forces are balanced, nodes are desynchronized. We evaluate our implementation of DWARF on TOSSIM, a simulator for wireless sensor networks. The simulation results indicate that DWARF incurs significantly lower desynchronization error and scales much better than existing approaches.

    Bhaskaran Raman
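
    A toy, centralized simulation of the force-field idea in the abstract: each node is a phase on a circular period, closer phases repel more strongly, and every node moves in proportion to the net force it receives. The force function, gain, and node count are illustrative guesses rather than DWARF's actual parameters.

      import math

      T = 1000.0            # period (ms)
      K = 20.0              # phase-adjustment gain (made up)

      def signed_gap(a, b):
          """Phase of b relative to a, mapped into (-T/2, T/2]."""
          d = (b - a) % T
          return d - T if d > T / 2 else d

      def step(phases):
          new = []
          for i, p in enumerate(phases):
              force = 0.0
              for j, q in enumerate(phases):
                  if i != j:
                      gap = signed_gap(p, q)
                      # repulsion: closer neighbors push harder, away from themselves
                      force -= math.copysign(1.0 / max(abs(gap), 1e-6), gap)
              new.append((p + K * force) % T)
          return new

      phases = [10.0, 20.0, 30.0, 700.0]        # start badly clustered
      for _ in range(3000):
          phases = step(phases)
      print(sorted(round(p, 1) for p in phases))
      # The four phases end up roughly evenly spaced, about T/4 = 250 ms apart.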
  • André Zúquete, Carlos Frade

    The IPv4 address space is quickly getting exhausted, putting a tremendous pressure on the adoption of even more NAT levels or IPv6. On the other hand, many authors propose the adoption of new Internet addressing capabilities, namely content-based addressing, to complement the existing IP host-based addressing. In this paper we propose the introduction of a location layer, between transport and network layers, to address both problems. We keep the existing IPv4 (or IPv6) host-based core routing functionalities, while we enable hosts to become routers between separate address spaces by exploring the new location header. For a proof of concept, we modified the TCP/IP stack of a Linux host to handle our new protocol layer and we designed and conceived a novel NAT box to enable current hosts to interact with the modified stack.

    David Wetherall
  • Kate Lin, Yung-Jen Chuang, Dina Katabi

    In many wireless systems, it is desirable to precede a data transmission with a handshake between the sender and the receiver. For example, RTS-CTS is a handshake that prevents collisions due to hidden terminals. Past work, however, has shown that the overhead of such a handshake is too high for practical deployments. We present a new approach to wireless handshake that is almost overhead free. The key idea underlying the design is to separate a packet's PLCP header and MAC header from its body and have the sender and receiver first exchange the data and ACK headers, then exchange the bodies of the data and ACK packets without additional headers. The header exchange provides a natural handshake at almost no extra cost. We empirically evaluate the feasibility of such a lightweight handshake and some of its applications. Our testbed evaluation shows that header-payload separation does not hamper packet decodability. It also shows that a light handshake enables hidden terminals, i.e., nodes that interfere with each other without RTS/CTS, to experience less than 4% of collisions. Furthermore, it improves the accuracy of bit rate selection in bursty and mobile environments, producing a throughput gain of about 2x.

    Bhaskaran Raman
  • Cheng Huang, Ivan Batanov, Jin Li

    Internet services are often deployed in multiple (tens to hundreds) of geographically distributed data centers. They rely on Global Traffic Management (GTM) solutions to direct clients to the optimal data center based on a number of criteria like network performance, geographic location, availability, etc. The GTM solutions, however, have a fundamental design limitation in their ability to accurately map clients to data centers - they use the IP address of the local DNS resolver (LDNS) used by a client as a proxy for the true client identity, which in some cases causes suboptimal performance. This issue is known as the client-LDNS mismatch problem. We argue that recent proposals to address the problem suffer from serious limitations. We then propose a simple new solution, named "FQDN extension", which can solve the client-LDNS mismatch problem completely. We build a prototype system and demonstrate the effectiveness of the proposed solution. Using JavaScript, the solution can be deployed immediately for some online services, such as Web search, without modifying either client or local resolver.

    Renata Teixeira
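
    A toy illustration of the FQDN-extension idea: the client-side script embeds an encoding of the client's own address into the hostname it resolves, so the service's authoritative DNS learns the true client rather than only its LDNS resolver. The label format, zone name, and encoding below are placeholders, not the paper's exact scheme.

      import ipaddress

      SERVICE_ZONE = "geo.example-service.invalid"        # placeholder zone

      def extend_fqdn(client_ip):
          """Client side: fold the client address into the hostname to resolve."""
          label = format(int(ipaddress.ip_address(client_ip)), "08x")
          return f"c-{label}.{SERVICE_ZONE}"

      def extract_client(fqdn):
          """Authoritative side: recover the client address for server selection."""
          label = fqdn.split(".")[0]
          assert label.startswith("c-")
          return str(ipaddress.ip_address(int(label[2:], 16)))

      name = extend_fqdn("203.0.113.7")
      print(name)                    # c-cb007107.geo.example-service.invalid
      print(extract_client(name))    # 203.0.113.7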
  • Shane Alcock, Perry Lorier, Richard Nelson

    This paper introduces libtrace, an open-source software library for reading and writing network packet traces. Libtrace offers performance and usability enhancements compared to other libraries that are currently used. We describe the main features of libtrace and demonstrate how the libtrace programming API enables users to easily develop portable trace analysis tools without needing to consider the details of the capture format, file compression or intermediate protocol headers. We compare the performance of libtrace against other trace processing libraries to show that libtrace offers the best compromise between development effort and program run time. As a result, we conclude that libtrace is a valuable contribution to the passive measurement community that will aid the development of better and more reliable trace analysis and network monitoring tools.

    AT&T Labs
  • Pamela Zave

    Correctness of the Chord ring-maintenance protocol would mean that the protocol can eventually repair all disruptions in the ring structure, given ample time and no further disruptions while it is working. In other words, it is "eventual reachability." Under the same assumptions about failure behavior as made in the Chord papers, no published version of Chord is correct. This result is based on modeling the protocol in Alloy and analyzing it with the Alloy Analyzer. By combining the right selection of pseudocode and textual hints from several papers, and fixing flaws revealed by analysis, it is possible to get a version that may be correct. The paper also discusses the significance of these results, describes briefly how Alloy is used to model and reason about Chord, and compares Alloy analysis to model-checking.

    David Wetherall
  • Juan Camilo Cardona Restrepo, Rade Stanojevic

    In spite of the tremendous amount of measurement efforts on understanding the Internet as a global system, little is known about the 'local' Internet (among ISPs inside a region or a country) due to limitations of the existing measurement tools and scarce data. In this paper, empirical in nature, we characterize the evolution of one such ecosystem of local ISPs by studying the interactions between ISPs happening at the Slovak Internet eXchange (SIX). By crawling the web archive waybackmachine.org we collect 158 snapshots (spanning 14 years) of the SIX website, with the relevant data that allows us to study the dynamics of the Slovak ISPs in terms of: the local ISP peering, the traffic distribution, the port capacity/utilization and the local AS-level traffic matrix. Examining our data revealed a number of invariant and dynamic properties of the studied ecosystem that we report in detail.

    Yin Zhang
  • Eric Keller, Michael Schapira, Jennifer Rexford

    Traditional traffic engineering adapts the routing of traffic within the network to maximize performance. We propose a new approach that also adaptively changes where traffic enters and leaves the network—changing the “traffic matrix”, and not just the intradomain routing configuration. Our approach does not affect traffic patterns and BGP routes seen in neighboring networks, unlike conventional inter-domain traffic engineering where changes in BGP policies shift traffic and routes from one edge link to another. Instead, we capitalize on recent innovations in edge-link migration that enable seamless rehoming of an edge link to a different internal router in an ISP backbone network—completely transparent to the router in the neighboring domain. We present an optimization framework for traffic engineering with migration and develop algorithms that determine which edge links should migrate, where they should go, and how often they should move. Our experiments with Internet2 traffic and topology data show that edge-link migration allows the network to carry 18.8% more traffic (at the same level of performance) over optimizing routing alone.

    Telefonica Research