Computer Communication Review: Papers

  • Darleen Fisher

    In spite of the current Internet’s overwhelming success, there are growing concerns about its future: its robustness, manageability, security, openness to innovation, and scalability. As the Internet has become the largest human-made system ever deployed, will we retain the ability to understand or manage it? Will we find ways to secure the current Internet, or will we lose the security arms race to hackers and even state-supported attackers as they become more pervasive and sophisticated? Will the Internet continue to incorporate the thousands of new wireless networks currently added daily, or encompass the millions of embedded sensor systems expected to connect to the Internet in the future? There are also increasing societal concerns, such as ensuring that the Internet maintains support for an open society, a balance of accountability and privacy, and continued economic viability. Will Internet companies continue to create new services and capabilities for the current Internet, or will economic factors result in the “network ossification” that some researchers fear?

    These are questions that concern networking and social science researchers around the world. In the United States, the National Science Foundation (NSF) has challenged the US research community to take a fresh look at the Internet by participating in the Future Internet Design (FIND) part of the Networking Technology and Systems (NeTS) program in the Division of Computer and Network Systems.

  • Anastasius Gavras, Arto Karila, Serge Fdida, Martin May, and Martin Potts

    The research community worldwide has increasingly drawn its attention to the weaknesses of the current Internet. Many proposals address the perceived problems, ranging from new, enhanced protocols that fix specific problems to the most radical proposal of redesigning and deploying an entirely new Internet. Most of the problems in the current Internet are rooted in the tremendous pace at which its use has grown. As a consequence, there was little time to address the deficiencies of the Internet from an architectural point of view. Within FP7, the European Commission has facilitated the creation of European expert groups around the theme FIRE, "Future Internet Research and Experimentation". FIRE has two related dimensions: on the one hand, promoting experimentally driven, long-term, visionary research on new paradigms, networking concepts and architectures for the future Internet; on the other hand, building a large-scale experimentation facility supporting both medium- and long-term research on networks and services by gradually federating existing and new testbeds for emerging or future Internet technologies. By addressing future challenges for the Internet such as mobility, scalability, security and privacy, this new experimentally driven approach challenges the mainstream perceptions of future Internet development. The initiative is intended to complement the more industrially driven approaches addressed under the FP7 Objective "The Network of the Future" within the FP7-ICT Workprogramme 2007-08. FIRE is focused on exploring new and radically better technological solutions for the future Internet, while preserving the "good" aspects of the current Internet in terms of openness, freedom of expression and ubiquitous access. The FIRE activities are being launched in the 2nd ICT call, which closes in October 2007, under the FP7-ICT Objective 1.6 "New Paradigms and Experimental Facilities" (budget €40m). Projects are envisaged to start in early 2008.

  • Jun Li, Michael Guidero, Zhen Wu, Eric Purpus, and Toby Ehrenkranz

    Understanding BGP routing dynamics is critical to the solid growth and maintenance of the Internet routing infrastructure. However, while the most extensive study of BGP dynamics is nearly a decade old, many factors that could affect BGP dynamics have changed considerably. In this paper we revisit this important topic, focusing not only on comparison with the previous results but also on issues not well explored before. We find that, compared to almost a decade ago, although certain characteristics remain unchanged (such as some temporal properties), BGP dynamics are now “busier” and, more importantly, exhibit much less pathological behavior and are “healthier”; for example, forwarding dynamics are now not only dominant but also more consistent across different days. Contributions to BGP dynamics by different BGP peers (which are not proportional to the size of a peer’s AS) are also more stable, and dynamics due to policy changes or duplicate announcements usually come from specific peers.

    Serge Fdida
  • Sridhar Machiraju, Darryl Veitch, François Baccelli, and Jean C. Bolot

    Active probing techniques have overwhelmingly been based on a few key heuristics. To progress to the next level a more powerful approach is needed, which is capable of filtering noise effectively, designing (and defining) optimal probing strategies, and understanding fundamental limitations. We provide a probabilistic, queueing-theoretic treatment that contributes to this program in the single hop case. We provide an exact inversion method for cross traffic distributions, rigorous system identifiability results to help determine what active probing can and can’t achieve, a new approach for treating queueing theoretic ‘noise’ based on conditioning, and cross traffic estimators with enhanced properties.
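
    As an illustration of the single-hop setting the analysis targets (and not the paper's estimator), the Python sketch below simulates periodic probes sharing a FIFO queue with Poisson cross traffic and records the probe delays that an inversion method would take as its input. The link rate, packet sizes, probe period and cross-traffic rate are hypothetical.

        # Single-hop probing sketch: periodic probes through a FIFO queue fed
        # by Poisson cross traffic. All parameter values are hypothetical.
        import random

        LINK_RATE = 10e6 / 8      # link capacity in bytes/s (assumed 10 Mb/s)
        PROBE_SIZE = 100          # probe packet size, bytes
        CROSS_SIZE = 1000         # cross-traffic packet size, bytes
        CROSS_RATE = 800.0        # cross-traffic packets per second
        PROBE_PERIOD = 0.01       # seconds between probes
        DURATION = 60.0           # simulated time, seconds

        def simulate():
            """Return the one-hop queueing plus transmission delay of each probe."""
            arrivals = []
            t = 0.0
            while t < DURATION:                       # Poisson cross traffic
                t += random.expovariate(CROSS_RATE)
                arrivals.append((t, CROSS_SIZE, False))
            arrivals += [(k * PROBE_PERIOD, PROBE_SIZE, True)   # periodic probes
                         for k in range(1, int(DURATION / PROBE_PERIOD))]
            arrivals.sort()

            free_at = 0.0                             # when the link next goes idle
            probe_delays = []
            for when, size, is_probe in arrivals:
                start = max(when, free_at)            # FIFO: wait for the link
                free_at = start + size / LINK_RATE    # transmission time
                if is_probe:
                    probe_delays.append(free_at - when)
            return probe_delays

        delays = simulate()
        print(f"{len(delays)} probes, mean delay {sum(delays) / len(delays) * 1e3:.2f} ms")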

    Constantine Dovrolis
  • Barath Raghavan, Saurabh Panjwani, and Anton Mityagin

    We analyze a secure routing protocol, Secure Path Vector (SPV), proposed at SIGCOMM 2004. SPV aims to provide authenticity for route announcements in the Border Gateway Protocol (BGP) using an efficient alternative to ordinary digital signatures, called constant-time signatures. Today, SPV is often considered the best cryptographic defense for BGP.

    We find subtle flaws in the design of SPV which lead to attacks that can be mounted by 60% of Autonomous Systems in the Internet. In addition, we study several of SPV’s design decisions and assumptions and highlight the requirements for security of routing protocols. In light of our analysis, we reexamine the need for constant-time signatures and find that certain standard digital signature schemes can provide the same level of efficiency for route authenticity.

    Venkat Padmanabhan
  • Felipe Huici and Mark Handley

    Defending against large, distributed Denial-of-Service attacks is challenging, with large changes to the network core or to end-hosts often suggested. To make matters worse, spoofing adds to the difficulty, since defenses must resist attempts to trigger filtering of other people’s traffic. Further, any solution has to provide incentives for deployment, or it will never see the light of day. We present a simple and effective architectural defense against distributed DoS attacks that requires no changes to the end-hosts, minimal changes to the network core, is robust to spoofing, provides incentives for initial deployment, and can be built with off-the-shelf hardware.

    Ernst Biersack
  • Srikanth Kandula, Dina Katabi, Shantanu Sinha, and Arthur Berger

    Dynamic load balancing is a popular recent technique that protects ISP networks from sudden congestion caused by load spikes or link failures. Dynamic load balancing protocols, however, require schemes for splitting traffic across multiple paths at a fine granularity. Current splitting schemes present a tussle between slicing granularity and packet reordering. Splitting traffic at the granularity of packets quickly and accurately assigns the desired traffic share to each path, but can reorder packets within a TCP flow, confusing TCP congestion control. Splitting traffic at the granularity of a flow avoids packet reordering but may overshoot the desired shares by up to 60% in dynamic environments, resulting in low end-to-end network goodput. Contrary to popular belief, we show that one can systematically split a single flow across multiple paths without causing packet reordering. We propose FLARE, a new traffic splitting algorithm that operates on bursts of packets, carefully chosen to avoid reordering. Using a combination of analysis and trace-driven simulations, we show that FLARE attains accuracy and responsiveness comparable to packet switching without reordering packets. FLARE is simple and can be implemented with a few KB of router state.
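
    The Python sketch below illustrates the general flavour of burst-based (flowlet-style) splitting; it is not the authors' implementation, and the timeout value, path shares and deficit-style path selection are assumptions made for the example.

        # Flowlet-style splitting sketch in the spirit of FLARE (not the
        # authors' implementation): a flow is cut into bursts ("flowlets")
        # whenever the gap between consecutive packets exceeds delta, and a
        # burst stays pinned to one path, so packets inside it are not
        # reordered as long as delta exceeds the delay difference between paths.
        import time

        class FlowletSplitter:
            def __init__(self, paths, shares, delta=0.05):
                self.paths = paths                  # e.g. ["path0", "path1"]
                self.shares = shares                # desired traffic fractions
                self.delta = delta                  # flowlet timeout, seconds (assumed)
                self.table = {}                     # flow_id -> (last_seen, path)
                self.sent = {p: 0 for p in paths}   # bytes sent per path

            def _pick_path(self):
                # Pick the path currently furthest below its target share.
                total = sum(self.sent.values()) or 1
                return min(self.paths,
                           key=lambda p: self.sent[p] / total - self.shares[p])

            def route(self, flow_id, pkt_len, now=None):
                now = time.monotonic() if now is None else now
                last = self.table.get(flow_id)
                if last is None or now - last[0] > self.delta:
                    path = self._pick_path()        # new flowlet: free to move
                else:
                    path = last[1]                  # ongoing flowlet: keep path
                self.table[flow_id] = (now, path)
                self.sent[path] += pkt_len
                return path

        # Hypothetical usage: a 70/30 split across two paths.
        s = FlowletSplitter(["path0", "path1"], {"path0": 0.7, "path1": 0.3})
        print(s.route(("10.0.0.1", "10.0.0.2", 5001, 80), 1500))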

    Matthew Roughan
  • Bob Briscoe

    Resource allocation and accountability keep reappearing on every list of requirements for the Internet architecture. The reason we never resolve these issues is a broken idea of what the problem is. The applied research and standards communities are using completely unrealistic and impractical fairness criteria. The resulting mechanisms don’t even allocate the right thing and they don’t allocate it between the right entities. We explain as bluntly as we can that thinking about fairness mechanisms like TCP in terms of sharing out flow rates has no intellectual heritage from any concept of fairness in philosophy or social science, or indeed real life. Comparing flow rates should never again be used for claims of fairness in production networks. Instead, we should judge fairness mechanisms on how they share out the ‘cost’ of each user’s actions on others.

    Michalis Faloutsos
  • Nate Kushman, Srikanth Kandula, and Dina Katabi

    Industry observers expect VoIP to eventually replace most existing land-line telephone connections. Currently, however, quality and reliability concerns largely limit VoIP usage either to personal calls on cross-domain services such as Skype and Vonage, or to single-domain services such as trunking, where a core ISP carries long-distance voice as VoIP only within its backbone to save cost with a unified voice/data infrastructure. This paper investigates the factors that prevent cross-domain VoIP deployments from achieving the quality and reliability of existing land-line telephony (PSTN). We ran over 50,000 VoIP phone calls between 24 locations in the US and Europe over a three-week period. Our results indicate that VoIP usability is hindered as much by BGP's slow convergence as by network congestion. In fact, about half of the unintelligible VoIP samples in our data occur within 10 minutes of a BGP update.

    Jon Crowcroft
  • J. Schwarz da Silva

    The idea that today’s Internet uses are pushing its original architecture and design philosophy into realms that were neither anticipated nor easily accommodated has been gaining momentum, the overriding concern being that the functioning of the global networked society and economies is likely to be severely impaired.

    There is no doubt a critical role to be played by countries and research funding agencies in this debate. However, the definition and correct positioning of these entities in the debate is closely related to the basic underlying principles on which governmental institutions can agree in order to frame the development of future network technologies and architectures. It is indeed clear that the Internet architecture today faces several challenges, many of them related to scalability issues in view of supporting an ever-growing number of users, devices, service attributes, applications, contexts, environments, security and vulnerability concerns, and networking technologies, to name a few. Still, existing architectures are based on a number of features and characteristics that have proved to be very valuable from an economic and policy perspective:

  • Steven Pope and David Riddoch

    This paper presents both a retrospective of the development of network interface architecture, and performance and conformance data from a range of contemporary devices sporting various performance-enhancing technologies. The data shows that 10Gb/s networking is now possible without stateful offload, while consuming less than one CPU core on a contemporary commodity server.

  • Michalis Faloutsos

    I would like to apologise to both of my fans (which I will call Tom and Jerry respecting their request for anonymity for obvious reasons) for missing my column in the last issue. Their response was extremely flattering although the points of “silence is gold” and “measure twice, speak once, fool you thrice” kept appearing in their emails.

  • S. Naicken, B. Livingston, A. Basu, S. Rodhetbhai, I. Wakeman, and D. Chalmers

    In this paper, we discuss the current situation with respect to simulation usage in P2P research, testing the available P2P simulators against a proposed set of requirements, and surveying over 280 papers to discover what simulators are already being used. We found that no simulator currently meets all our requirements, and that simulation results are generally reported in the literature in a fashion that precludes any reproduction of results. We hope that this paper will give rise to further discussion and knowledge sharing among those of the P2P and network simulation research communities, so that a simulator that meets the needs of rigorous P2P research can be developed.

  • Lyman Chapin

    The International Federation for Information Processing (IFIP) is an international umbrella organization for national academic computer societies such as ACM. It was established in 1960 under the auspices of UNESCO as a result of the first World Computer Congress held in Paris in 1959. Today, IFIP has 56 society members that contribute to 13 technical committees (TCs), and its principal activity is the sponsorship of roughly 100 conferences throughout the world.

    Notwithstanding their multinational membership and scope, ACM and the IEEE Computer Society are joint “USA” members of IFIP and its technical committees. Since 1995 SIGCOMM has sponsored ACM’s participation in IFIP TC6, Communication Systems. The author serves as the ACM representative; Arun Iyengar, at IBM’s T. J. Watson Research Center, represents the IEEE Computer Society. Joe Turner, at Clemson University, is the current ACM representative to the IFIP General Assembly.

  • Thrasyvoulos Spyropoulos, Serge Fdida, and Scott Kirkpatrick

    While the Internet is hardly “broken”, it has proved unable to integrate new ideas and new architectures, or to provide paths for the future integration of data, voice, rich media and higher reliability. The reason is that the basic concept of the Internet as an end-to-end packet delivery service has made its middle layer, networking services through TCP/IP, untouchable. If we wish to see any disruptive enhancements to security, routing flexibility and reliability, and robust quality-of-service guarantees in the coming years, we will need to move towards an Internet in which networking environments offering differing strengths can coexist on a permanent basis. This view is gaining currency in the US, advocated by the FIND/GENI initiative [7, 6], and in Europe, where it forms the heart of the activities reviewed by ARCADIA. The ARCADIA activity, sponsored by COST [1], has been chartered to look at critical areas in which research on fundamentals of the Internet’s architecture and protocols, supported by accurate experiment, can unlock some of the Internet’s impasses. This paper attempts to describe the insight gained and conclusions drawn from the first ARCADIA workshop on the Future of the Internet, organized around the main themes of Virtualization, Federation and Monitoring/Measurement.

  • Craig Partridge

    One of the challenges I remember from my days as a Ph.D. student was the tremendous struggle to figure out what papers I should read: what key ideas did I need to understand to feel I was even vaguely qualified to do work in networking?

    The idea in this essay is to try to answer those questions for the Ph.D. student of today while fitting within the limit of ten papers. Two of these works, alas, are doctoral dissertations, not papers, but I am hoping the CCR editor will let that slide…

  • Manuel Crotti, Maurizio Dusi, Francesco Gringoli, and Luca Salgarelli

    The classification of IP flows according to the application that generated them is at the basis of any modern network management platform. However, classical techniques such as those based on the analysis of transport-layer or application-layer information are rapidly becoming ineffective. In this paper we present a flow classification mechanism based on three simple properties of the captured IP packets: their size, inter-arrival time and arrival order. Even though these quantities have already been used in the past to define classification techniques, our contribution is based on new structures called protocol fingerprints, which express such quantities in a compact and efficient way, and on a simple classification algorithm based on normalized thresholds. Although at a very early stage of development, the proposed technique shows promising preliminary results in the classification of a small set of protocols.
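
    As a rough illustration (a simplification of the approach described above, not the authors' algorithm), the sketch below treats a protocol fingerprint as the per-position mean and standard deviation of packet size and inter-arrival time over the first few packets of known flows, and accepts an unknown flow only if its normalized distance to the closest fingerprint falls below a threshold. The training flows, threshold and distance measure are hypothetical.

        # Simplified fingerprint-style classification (not the authors'
        # algorithm): a "fingerprint" here is the per-position mean and standard
        # deviation of packet size and inter-arrival time over the first N
        # packets of known flows. Training flows and threshold are hypothetical.
        import math

        N = 4            # packets considered per flow
        THRESHOLD = 3.0  # normalized-distance acceptance threshold (assumed)

        def mean_std(xs):
            m = sum(xs) / len(xs)
            var = sum((x - m) ** 2 for x in xs) / len(xs)
            return m, math.sqrt(var) or 1e-6    # avoid a zero std deviation

        def fingerprint(flows):
            """flows: list of flows, each a list of (size, inter_arrival) pairs."""
            return [(mean_std([f[i][0] for f in flows]),
                     mean_std([f[i][1] for f in flows])) for i in range(N)]

        def distance(flow, fp):
            d = 0.0
            for i in range(N):
                (ms, ss), (mg, sg) = fp[i]
                d += ((flow[i][0] - ms) / ss) ** 2 + ((flow[i][1] - mg) / sg) ** 2
            return math.sqrt(d / (2 * N))

        def classify(flow, fps):
            best = min(fps, key=lambda name: distance(flow, fps[name]))
            return best if distance(flow, fps[best]) < THRESHOLD else "unknown"

        # Hypothetical training data: two protocols, two labelled flows each.
        training = {
            "protoA": [[(60, 0.01), (1500, 0.02), (1500, 0.02), (60, 0.05)],
                       [(64, 0.01), (1400, 0.03), (1480, 0.02), (60, 0.04)]],
            "protoB": [[(200, 0.10), (200, 0.11), (220, 0.09), (200, 0.12)],
                       [(210, 0.09), (190, 0.12), (200, 0.10), (205, 0.11)]],
        }
        fps = {name: fingerprint(flows) for name, flows in training.items()}
        print(classify([(62, 0.01), (1450, 0.02), (1500, 0.02), (60, 0.05)], fps))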

    Chadi Barakat
  • Shengming Jiang

    All-optical packet switching (AOPS) technology is essential to fully utilize the tremendous bandwidth provided by advanced optical communication techniques, by forwarding packets in the optical domain for the next-generation network. However, long packet headers and other complex operations such as table lookup and packet header rewriting still have to be processed electronically for lack of cost-effective optical processing techniques. This not only increases system complexity but also limits packet forwarding speed due to optical-electronic-optical conversion. Much work on improving optical processing techniques to realize AOPS is reported in the literature. Differently, this paper proposes a new networking structure to facilitate AOPS realization and support various existing networks by simplifying networking operations. This structure only requires an AOPS node to process a short packet header to forward packets across it, with neither table lookup nor header rewriting. Furthermore, it moves higher-layer addressing issues out of the packet forwarding mechanisms of routers. Consequently, any changes in addressing schemes, such as address space extension, do not require changes in the AOPS nodes. It can also support both connection-oriented and connectionless services to carry various types of traffic such as ATM and IP traffic. This structure is mainly based on the hierarchical source routing approach. The analytical results show that average packet header sizes are still acceptable even for long paths consisting of many nodes, each of which has a large number of output ports.
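
    The toy sketch below illustrates the style of forwarding such a structure relies on (source routing with per-hop port labels, flattened to one level for brevity); it is not the paper's design, and the topology and port numbers are hypothetical.

        # Source-routing sketch: the edge encodes the whole path as a stack of
        # output-port labels, and each node pops the top label and forwards on
        # that port, with no table lookup and no rewriting beyond removing its
        # own label. Topology and port numbers are hypothetical.

        def build_header(port_list):
            """Edge side: list the output port to use at each hop, in order."""
            return list(port_list)

        def forward(header, node_id):
            """Core node: pop the next label; no lookup, no header rewriting."""
            out_port = header.pop(0)
            print(f"node {node_id}: out port {out_port}, {len(header)} label(s) left")
            return out_port

        # With P output ports per node, each label needs only ceil(log2(P)) bits,
        # so the header grows linearly with the path length.
        hdr = build_header([3, 1, 4])          # hypothetical 3-hop path
        for node in ["A", "B", "C"]:
            forward(hdr, node)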

    Jon Crowcroft
  • Xenofontas Dimitropoulos, Dmitri Krioukov, Marina Fomenkov, Bradley Huffaker, Young Hyun, kc claffy, and George Riley

    Research on performance, robustness, and evolution of the global Internet is fundamentally handicapped without accurate and thorough knowledge of the nature and structure of the contractual relationships between Autonomous Systems (ASs). In this work we introduce novel heuristics for inferring AS relationships. Our heuristics improve upon previous works in several technical aspects, which we outline in detail and demonstrate with several examples. Seeking to increase the value and reliability of our inference results, we then focus on validation of inferred AS relationships. We perform a survey with ASs' network administrators to collect information on the actual connectivity and policies of the surveyed ASs. Based on the survey results, we find that our new AS relationship inference techniques achieve high levels of accuracy: we correctly infer 96.5% customer to provider (c2p), 82.8% peer to peer (p2p), and 90.3% sibling to sibling (s2s) relationships. We then cross-compare the reported AS connectivity with the AS connectivity data contained in BGP tables. We find that BGP tables miss up to 86.2% of the true adjacencies of the surveyed ASs. The majority of the missing links are of the p2p type, which highlights the limitations of present measuring techniques to capture links of this type. Finally, to make our results easily accessible and practically useful for the community, we open an AS relationship repository where we archive, on a weekly basis, and make publicly available the complete Internet AS-level topology annotated with AS relationship information for every pair of AS neighbors.
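
    For readers unfamiliar with relationship inference, the sketch below shows a deliberately simplified, degree-based heuristic in the spirit of earlier work by Gao; it is not one of the heuristics proposed in this paper, and the AS paths are hypothetical.

        # Simplified, degree-based illustration of inferring AS relationships
        # from AS paths (in the spirit of Gao's earlier heuristic); NOT the
        # heuristics proposed in the paper. Paths below are hypothetical.
        from collections import defaultdict

        paths = [
            [7018, 3356, 2914, 64512],      # hypothetical AS paths from BGP tables
            [64513, 3356, 174],
            [64512, 2914, 3356, 174],
        ]

        # Degree of each AS in the graph induced by the paths.
        degree = defaultdict(int)
        for p in paths:
            for a, b in zip(p, p[1:]):
                degree[a] += 1
                degree[b] += 1

        # The highest-degree AS on a path is treated as its top provider: links
        # before it are customer-to-provider (c2p), links after it provider-to-
        # customer (p2c).
        relationships = {}
        for p in paths:
            top = max(range(len(p)), key=lambda i: degree[p[i]])
            for i, (a, b) in enumerate(zip(p, p[1:])):
                relationships[(a, b)] = "c2p" if i < top else "p2c"

        for link, rel in sorted(relationships.items()):
            print(link, rel)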

    Ernst Biersack
  • David Malone, Ken Duffy, and Christopher King

    Ricciato poses several questions, including why a particular LD (log-diagram) plot does not give the Hurst parameter predicted by theory. We offer an explanation of his observation and highlight other unusual aspects of LD plots.
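
    For context, the sketch below estimates the Hurst parameter with the simple aggregated-variance method rather than the wavelet-based log-diagram discussed in the paper; the underlying scaling relation is the same (the variance of the m-aggregated series scales as m^(2H-2)), and the synthetic i.i.d. trace is hypothetical.

        # Hurst estimation sketch using the aggregated-variance method (not the
        # wavelet-based log-diagram): for a self-similar process the variance of
        # the m-aggregated series scales as m^(2H - 2), so the slope of
        # log(variance) versus log(m) yields an estimate of H. The synthetic
        # i.i.d. trace below should give H close to 0.5.
        import math
        import random

        series = [random.gauss(0, 1) for _ in range(2 ** 14)]   # hypothetical trace

        def variance(xs):
            m = sum(xs) / len(xs)
            return sum((x - m) ** 2 for x in xs) / len(xs)

        points = []
        for agg in [1, 2, 4, 8, 16, 32, 64, 128]:
            blocks = [sum(series[i:i + agg]) / agg
                      for i in range(0, len(series) - agg + 1, agg)]
            points.append((math.log(agg), math.log(variance(blocks))))

        # Least-squares slope of log-variance versus log-m; H = 1 + slope / 2.
        n = len(points)
        sx = sum(x for x, _ in points)
        sy = sum(y for _, y in points)
        sxx = sum(x * x for x, _ in points)
        sxy = sum(x * y for x, y in points)
        slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        print(f"estimated Hurst parameter H ~ {1 + slope / 2:.2f}")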

  • G. Vu-Brugier, R. S. Stanojevic, D. J. Leith, and R. N. Shorten

    Internet router buffers are used to accommodate packets that arrive in bursts and to maintain high utilization of the egress link. Such buffers can, however, lead to large queueing delays. Recently, several papers have suggested that it may, under general circumstances, be possible to achieve high utilisation with small network buffers. In this paper we review these recommendations and report a number of issues that call their utility into question.
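
    The two sizing rules under review can be compared with a back-of-the-envelope calculation; the sketch below contrasts the classical bandwidth-delay-product rule B = C x RTT with the small-buffer proposal B = C x RTT / sqrt(N). The link speed, RTT and flow count are assumptions chosen for illustration.

        # Back-of-the-envelope comparison of the two buffer-sizing rules:
        # the classical bandwidth-delay-product rule B = C * RTT and the
        # small-buffer rule B = C * RTT / sqrt(N) for N long-lived TCP flows.
        # Link speed, RTT and flow count are hypothetical.
        import math

        C = 10e9      # link capacity, bits per second (assumed 10 Gb/s)
        RTT = 0.25    # round-trip time, seconds
        N = 10000     # number of long-lived TCP flows sharing the link

        classical = C * RTT                   # bits of buffering
        small = C * RTT / math.sqrt(N)        # bits of buffering

        print(f"classical rule:    {classical / 8 / 1e6:.1f} MB")
        print(f"small-buffer rule: {small / 8 / 1e6:.2f} MB")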

  • Jon Crowcroft

    Network Neutrality is the subject of much current debate. In this white paper I try to find the signal in the noise by taking a largely technical look at various definitions of network neutrality and the feasibility and complexity of implementing systems that support those ideas.

    First off, there are a lot of emotional terms used to describe various aspects of what makes up the melting pot of the neutrality debate. For example, censorship or black-holing (where route filtering, fire-walling and port blocking might say what is happening in a less insightful way); free-riding is often bandied about to describe the business of making money on the net (rather than overlay service provision); and monopolistic tendencies, instead of the natural inclination of an organisation that owns a lot of kit that they've sunk capital into to want to make revenue from it!

    The paper describes the basic realities of the net, which has never been a level playing field for many accidental and some deliberate reasons, and then looks at the future evolution of IP (and lower-level) services, the evolution of overlay services, and the evolution of the structure of the ISP business space (access, core and other); finally, I appeal to simple-minded economic and regulatory arguments to ask whether there is any case at all for special pleading for the Internet as a special case, different from other services or utilities.

  • Mostafa H. Ammar

    So it is my turn to recommend a reading list of 10 networking papers. I decided to take this opportunity to recommend papers from the networking area's past. I have several reasons for focusing on such a list. First, as a relatively young discipline that has seen tremendous growth in recent activities and contributions, our present tends to overwhelm our past in sheer volume.

  • Nick Feamster, Lixin Gao, and Jennifer Rexford

    Today’s Internet Service Providers (ISPs) serve two roles: managing their network infrastructure and providing (arguably limited) services to end users. We argue that coupling these roles impedes the deployment of new protocols and architectures, and that the future Internet should support two separate entities: infrastructure providers (who manage the physical infrastructure) and service providers (who deploy network protocols and offer end-to-end services). We present a high-level design for Cabo, an architecture that enables this separation; we also describe challenges associated with realizing this architecture.

  • Helmut Bürklin, Ralf Schäfer, and Dietrich Westerkamp

    This paper gives a brief overview of the Digital Video Broadcasting (DVB) project. Starting in 1993, this project produced standards for digital broadcasting in all media (satellite, cable, terrestrial) using return channels of various kinds. In the current phase, DVB has also embraced Internet Protocol (IP) based delivery on telco and cable.

  • Dmitri Krioukov, kc claffy, Marina Fomenkov, Fan Chung, Alessandro Vespignani, and Walter Willinger

    Internet topology analysis has recently experienced a surge of interest in computer science, physics, and the mathematical sciences. However, researchers from these different disciplines tend to approach the same problem from different angles. As a result, the field of Internet topology analysis and modeling must untangle sets of inconsistent findings, conflicting claims, and contradicting statements.

    On May 10-12, 2006, CAIDA hosted the Workshop on Internet topology (WIT). By bringing together a group of researchers spanning the areas of computer science, physics, and the mathematical sciences, the workshop aimed to improve communication across these scientific disciplines, enable interdisciplinary cross-fertilization, identify commonalities in the different approaches, promote synergy where it exists, and utilize the richness that results from exploring similar problems from multiple perspectives.

    This report describes the findings of the workshop, outlines a set of relevant open research problems identified by participants, and concludes with recommendations that can benefit all scientific communities interested in Internet topology research.

  • Jon Crowcroft and Peter Key

    Europe has often followed in the footsteps of US research, but here we are trying to lead in Clean Slate networking research, rather than Cleans late networking. This is a report from a recent workshop on this topic.

  • Christoph Neumann, Nicolas Prigent, Matteo Varvello, and Kyoungwon Suh

    While multi-player online games are very successful, their fast deployment suffers from their server-based architecture. Indeed, servers both limit the scalability of the games and increase deployment costs. However, they make it easier to control the game (e.g. by preventing cheating and providing support for billing). Peer-to-peer, i.e. transferring the game functions to each player’s machine, is an attractive communication model for online gaming. We investigate here the challenges of peer-to-peer gaming, hoping that this discussion will generate a broader interest in the research community.
