CCR Papers from April 2008

  • Harsha V. Madhyastha and Balachander Krishnamurthy

    Flow records gathered by routers provide valuable coarse-granularity traffic information for several measurement-related network applications. However, due to high volumes of traffic, flow records need to be sampled before they are gathered. Current techniques for producing sampled flow records are either focused on selecting flows from which statistical estimates of traffic volume can be inferred, or have simplistic models for applications. Such sampled flow records are not suitable for many applications with more specific needs, such as ones that make decisions across flows.

    As a first step towards tailoring the sampling algorithm to an application’s needs, we design a generic language in which any particular application can express the classes of traffic of its interest. Our evaluation investigates the expressive power of our language, and whether flow records have sufficient information to enable sampling of records of relevance to applications. We use templates written in our custom language to instrument sampling tailored to three different applications—BLINC, Snort, and Bro. Our study, based on month-long datasets gathered at two different network locations, shows that by learning local traffic characteristics we can sample relevant flow records near-optimally with low false negatives in diverse applications.

    Chadi Barakat
  • Minlan Yu, Yung Yi, Jennifer Rexford, and Mung Chiang

    Network virtualization is a powerful way to run multiple architectures or experiments simultaneously on a shared infrastructure. However, making efficient use of the underlying resources requires effective techniques for virtual network embedding—mapping each virtual network to specific nodes and links in the substrate network. Since the general embedding problem is computationally intractable, past research restricted the problem space to allow efficient solutions, or focused on designing heuristic algorithms. In this paper, we advocate a different approach: rethinking the design of the substrate network to enable simpler embedding algorithms and more efficient use of resources, without restricting the problem space. In particular, we simplify virtual link embedding by: i) allowing the substrate network to split a virtual link over multiple substrate paths and ii) employing path migration to periodically re-optimize the utilization of the substrate network. We also explore node-mapping algorithms that are customized to common classes of virtual-network topologies. Our simulation experiments show that path splitting, path migration, and customized embedding algorithms enable a substrate network to satisfy a much larger mix of virtual networks.

    Darryl Veitch
  • Juan J. Ramos-Munoz, Lidia Yamamoto, and Christian Tschudin

    Current network protocols must comply with rigid interfaces and rules of behavior to fit into well-defined, vertical protocol stacks. It is difficult for network designers to offer a wide spectrum of alternative protocols suitable for diverse situations, and to make the stack evolve to match new needs. The tendency is to design protocols that can adapt to the widest possible spread of use. However, even the best adaptive protocols cannot possibly cope with all situations. When their adaptivity limits are reached, the ability to switch to other protocols becomes a clear advantage.

    Our aim in this paper is to present Lightweight Autonomous resIlient Networks (LAIN), a framework that exploits the multiplicity of alternative protocols, and exposes the spectrum of choice to the advantage of the applications. The system runs continuous experiments with alternative protocols online, in parallel as well as serially, in order to select automatically those that best match the application’s needs under the current network conditions. We report first results on the feasibility of the approach and point out the need for such a system in network and protocol evolution.

    Jon Crowcroft
  • Jennifer Rexford

    As the saying goes, “In theory there is no difference between theory and practice. But, in practice, there is.” Networking research has a wealth of good papers on both sides of the theory-practice divide. However, many practical papers stop short of having a sharp problem formulation or a rigorously considered solution, and many theory papers overlook or assume away some key aspect of the system they intend to model. Still, every so often, a paper comes along that nails a practical question with just the right bit of theory. When that happens, it’s a thing of beauty. These are my ten favorite examples. In some cases, I mention survey papers that cover an entire body of work, or a journal paper that presents a more mature overview of one or more conference papers, rather than single out an individual research result. (As an aside, I think good survey papers are a wonderful contribution to the community, and wish more people invested the considerable time and energy required to write them.)

  • Mark Allman

    The July 2007 issue of CCR elicited review process horror stories. I expect that everyone has their own vast collection. I certainly do. However, I found picking my favorite story to be like choosing my favorite offspring. Therefore, rather than focusing on a single tale of woe, I have tried to extrapolate some key points from across the suboptimal reviewing I have observed. I write this essay from the perspective of an author who has years of accepts and rejects. However, this note is also greatly informed by my refereeing activities over the years (on PCs, reviewing for journals, editorial boards, etc.). My intent is to make general observations in the hopes of contributing to a conversation that improves our overall review processes and ultimately helps us establish a stronger set of community values with regard to what we expect and appreciate in papers. While I strive for generality I do not claim the observations are unbiased or that I have closed all my open wounds in this area.

  • Jon Crowcroft

    In this article, we discuss the lessons in innovation from the last twenty years of the Internet that might be applied in the cellular telephone industry.

  • Benoit Donnet and Olivier Bonaventure

    This paper focuses on BGP communities, a particular BGP attribute that has not yet been extensively studied by the research community. It allows an operator to group destinations in a single entity to which the same routing decisions might be applied. In this paper, we show that the usage of this attribute has increased and that it also contributes to routing table growth. In addition, we propose a taxonomy of BGP community attributes to allow operators to better document their communities. We further manually collect information on BGP communities and tag it according to our taxonomy. We show that a large proportion of the BGP communities are used for traffic engineering purposes.

  • Patrick Crowley

    There is a growing sentiment among academics in computing that a shift to multicore processors in commodity computers will demand that all programmers become parallel programmers. This is because future general-purpose processors are not likely to improve the performance of a single thread of execution; instead, the presence of multiple processor cores on a CPU will improve the performance of groups of threads. In this article, I argue that there is another trend underway, namely integration, which will have a greater near-term impact on developers of system software and applications. This integration, and its likely impact on general-purpose computers, is clearly illustrated in the architecture of modern mobile phones.

  • Hamed Haddadi, Steve Uhlig, Andrew Moore, Richard Mortier, and Miguel Rio

    Despite the large number of papers on network topology modeling and inference, there still exists ambiguity about the real nature of the Internet's AS- and router-level topologies. While recent findings have illustrated the inaccuracies in maps inferred from BGP peering and traceroute measurements, existing topology models still produce static topologies, using simplistic assumptions about power-law observations and preferential attachment.

    Today, topology generators are tightly bound to the observed data used to validate them. Given that the actual properties of the Internet topology are not known, topology generators should strive to reproduce the variability that characterizes the evolution of the Internet topology over time. Future topology generators should be able to express the variations in local connectivity that make today’s Internet what it is: peering relationships, internal AS topologies and routing policies, each changing over time due to failures, maintenance, upgrades and the business strategies of networks. Topology generators should capture these dimensions by allowing a certain level of randomness in the outcome, rather than enforcing structural assumptions as truths about the Internet’s evolving structure, which may never be discovered.

  • Nick McKeown, Tom Anderson, Hari Balakrishnan, Guru Parulkar, Larry Peterson, Jennifer Rexford, Scott Shenker, and Jonathan Turner

    This whitepaper proposes OpenFlow: a way for researchers to run experimental protocols in the networks they use every day. OpenFlow is based on an Ethernet switch, with an internal flow-table, and a standardized interface to add and remove flow entries. Our goal is to encourage networking vendors to add OpenFlow to their switch products for deployment in college campus backbones and wiring closets. We believe that OpenFlow is a pragmatic compromise: on one hand, it allows researchers to run experiments on heterogeneous switches in a uniform way at line-rate and with high port-density; while on the other hand, vendors do not need to expose the internal workings of their switches. In addition to allowing researchers to evaluate their ideas in real-world traffic settings, OpenFlow could serve as a useful campus component in proposed large-scale testbeds like GENI. Two buildings at Stanford University will soon run OpenFlow networks, using commercial Ethernet switches and routers. We will work to encourage deployment at other schools, and we encourage you to consider deploying OpenFlow in your university network too.
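
    The core abstraction the abstract describes—a flow table with a standardized interface to add and remove entries, matching packet headers to actions—can be sketched roughly as below. This is an illustrative toy, not the actual OpenFlow protocol or API; the field names and action strings are assumptions for the example.

```python
# Illustrative sketch of a flow table: an ordered list of (match, action)
# entries, where None in a match field acts as a wildcard. Not the real
# OpenFlow wire protocol—just the shape of the abstraction.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FlowEntry:
    src_ip: Optional[str] = None      # None = wildcard
    dst_ip: Optional[str] = None
    dst_port: Optional[int] = None
    action: str = "drop"              # e.g. "forward:2", "controller", "drop"

class FlowTable:
    def __init__(self):
        self.entries: list[FlowEntry] = []

    def add(self, entry: FlowEntry) -> None:     # standardized interface to
        self.entries.append(entry)               # add flow entries...

    def remove(self, entry: FlowEntry) -> None:  # ...and remove them
        self.entries.remove(entry)

    def lookup(self, src_ip: str, dst_ip: str, dst_port: int) -> str:
        """Return the action of the first matching entry."""
        for e in self.entries:
            if (e.src_ip in (None, src_ip) and
                    e.dst_ip in (None, dst_ip) and
                    e.dst_port in (None, dst_port)):
                return e.action
        return "controller"  # unmatched packets go to a controller

table = FlowTable()
table.add(FlowEntry(dst_port=80, action="forward:2"))
print(table.lookup("10.0.0.1", "10.0.0.2", 80))  # → forward:2
```

    The point of the compromise described above is that a vendor only has to expose this add/remove/match interface, not the proprietary switching internals behind it.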

  • Michalis Faloutsos

    For those of us in academia, tenure is great. Unless you don't have it, in which case it pretty much sucks. In fact, it goes beyond sucking: it kills. Mainly your personal life, sometimes your spirit. I claim that assistant professors are an endangered species. Are we heading towards an environmental disaster that is going to destabilize the academic ecosystem or did I just have too much to drink last night? Either way, I think I deserve the Nobel Prize for Peace.

  • Mark Crovella and Christophe Diot

    ACM SIGCOMM Computer Communication Review (CCR – www.sigcomm.org/ccr/) fills a unique niche in the spectrum of computer communications literature. It seeks to quickly publish articles containing high-quality research, especially new ideas and visions, in order to allow the community to react and comment. CCR is unique in that its review turnaround is less than 3 months, which guarantees timely publication of high-quality scientific articles.

  • Olivier Bonaventure, Augustin Chaintreau, Laurent Mathy, and Philippe Owezarski

    This paper claims that a Shadow Technical Program Committee (TPC) should be organized on a regular basis for attractive conferences in the networking domain. Doing so helps ensure that young generations of researchers gain experience with the process of reviewing and selecting papers before they actually become part of regular TPCs. We highlight several reasons why a shadow TPC offers a unique educational experience, as compared with the two most traditional learning processes: “delegated review” and “learning on the job”. We report examples taken from the CoNEXT 2007 shadow TPC and announce the CoNEXT 2008 shadow TPC.
