CCR Papers from 2009

  • Don Bowman

    The Internet is a primary engine of innovation, communication, and entertainment in our society. It is used by diverse and potentially conflicting interests. When these interests collide in the form of technology, should the Internet stay true to its roots and let things take their course naturally, or should regulation and legislation be enacted to control what may or may not be done? This author believes public opinion is best served by light regulation and a strong focus on allowing technological innovation to take its natural course.

  • Romain Kuntz, Antoine Gallais, and Thomas Noel

    Although research on algorithms and communication protocols in Wireless Sensor Networks (WSN) has attracted tremendous effort so far, most of these protocols are hardly used in real deployments nowadays. Several reasons have been put forward in recent publications. In this paper, we further investigate this trend from a Medium Access Control (MAC) perspective by analyzing both the reasons behind successful deployments and the characteristics of the MAC layers proposed in the literature. The effort spent developing suitable protocols from scratch for every new deployment could however be minimized by using existing contributions that provide code reuse and adaptive protocols. Though we advocate their use for today's deployments, we have identified several shortcomings in foreseen scenarios, for which we provide guidelines for future research.

  • Chip Elliott and Aaron Falk

    In this paper we discuss the current status of the Global Environment for Network Innovations (GENI). Early prototypes of GENI are starting to come online as an end-to-end system, and network researchers are invited to participate by engaging in the design process or by using GENI to conduct experiments.

  • Deep Medhi and Peter A. Freeman

    A US-Japan Workshop on Future Networks was held in Palo Alto, CA on October 31 - November 1, 2008. This workshop brought together leading US and Japanese network researchers and network research infrastructure developers who are interested in future networks. The focus was on research issues and experimental infrastructure to support research on future generation networks. The goal was to foster cooperation and communication between peers in the two countries. Through this workshop, a number of research challenges were identified. The workshop also made recommendations to: create a new funding mechanism to foster special collaborations in networking and experimentation; extend current national testbeds with international connectivity; and urge the respective governments to exert strong leadership to ensure that collaborative research for creating future networks is carried out.

  • Henning Schulzrinne

    As the community develops an ever more diverse set of venues for disseminating and discussing technical work and as the traditional resource constraints change from pages and shelf space to reviewer time, the traditional prohibition against double submission deserves closer consideration. We discuss reasons why double submissions have been frowned upon, and try to establish a set of guidelines that ensure appropriate review and dissemination, without preventing informal early discussion of research projects.

  • Konstantina Papagiannaki and Luigi Rizzo

    Selecting a technical program for a conference, and running the process so that decisions are well received by authors and participants, is a challenging task. We report our experience in running the SIGCOMM 2009 Technical Program Committee (TPC). The purpose of this article is to document the process that we followed, and discuss it critically. This should let authors get a better understanding of what led to the final acceptance or rejection of their work, and hopefully let other conference organizers benefit from our experience.

  • Michalis Faloutsos

    “Don’t even get me started” is a powerful and direct, yet vague and meaningless, expression. What if you reply: “By all means, please, get started”? I believe that this will stun your opponent (it may be a friendly discussion, but if you don’t believe the other person is an opponent, you will never win). Then, you can quickly grab his/her nose and yell: “Twos before eights, and one bird in the bush”. It can be very entertaining unless the other person is your boss or your psychiatrist. In any case, this column dares to go where no column at CCR has gone before. It is all about getting started. And by that, we mean starting start-ups.

    Warning: the content below is not suitable for small children or adults with common sense.

  • S. Keshav

    We are living in the worst economic times since the 1930s. The US economy contracted at an annualized rate of 3.8% in the fourth quarter of 2008, the corresponding figure for Japan is 12.7%, and Iceland may become the first post-depression Western economy to suffer from an outright fiscal collapse. Economists tell us that one of the reasons for this worldwide recession is a ‘housing bubble’ where banks overestimated a borrower's ability to pay back a loan and where house buyers – armed with cheap loans – overestimated the true worth of a house.

    The recent Internet bubble is still fresh in some of our minds, where there was a similar overestimation of the true worth of Internet-enabled businesses. That bubble crashed too, with consequences suffered by the entire economy.

    Unfortunately, bubbles are not uncommon in networking research. Certain topics appear seemingly from nowhere, become ‘hot,’ propelled by interest from both leading researchers and funding agencies, and just as mysteriously die off, leaving behind a flood of papers, mostly in second- and third-tier conferences, written by authors only too keen to jump on a trend. Bubbles lead to an overexpenditure of research effort on marginal topics, wasting resources and breeding a certain degree of cynicism amongst our brightest young minds. Moreover, they drain resources from more deserving but less hyped ventures. Can our experience with economic bubbles shed light on research bubbles and teach us how to avoid them?

    Both economic and research bubbles share some similarities, such as having unrealistic expectations about what can be achieved by a company, the real-estate market, or a new technology. Bubble participants either naively or cynically invest time and money in solutions and technologies whose success is far from assured and whose widespread adoption would require the complete overthrow of legacy infrastructure. To avoid being caught in a bubble, or to merely avoid being caught in the tail end of one (being at the leading edge of a bubble is both fun and profitable!), ask tough questions about the underlying assumptions. In the midst of the housing bubble, could one have pointed out that housing prices could go down as easily as they could go up? Could anyone have believed in the ’90s that videoconferencing, ATM, RSVP and other ‘hot’ topics would soon be consigned to the midden heap of history? I think so. It only requires the willingness to question every assumption and draw the inevitable conclusions.

    I think that in the end, what really inflates a bubble is money. Cheap money from venture capitalists, banks, and funding agencies makes it profitable to enter a bubble and make it grow. So it is important that the gatekeepers of funding be vigilant. They should be prepared to turn down applications for funding that smack of riding the bubble. Experienced researchers should willingly serve on grant panels, and should be prepared to be critical of even their favourite areas of research if necessary.

    Finally, bubbles can be identified and quashed by an active media. The press should have more deeply questioned the Internet and housing bubbles. Research conferences in our field should do the same for research bubbles. Paper reviewers and program committees thus play the same role as investigative journalists.

    This is not to say that all speculative ideas should be systematically de-funded and rejected. There should always be room for open-minded, blue-sky research. However, this activity should be limited and clearly identified. Perhaps every conference should have blue-sky sessions where all assumptions are left unchallenged (our community has done this with recent papers on ‘clean-slate’ designs). The best of these ideas, when proven to be sound, could then be funded and widely adopted.

    Of course, I am assuming that we can get out of bubbles by rational means. Humans are all too fallible, however, and bubble thinking plays on human foibles. Worse, there is an incentive structure that encourages bubble formation: people at the leading edge of a bubble are disproportionately rewarded, and people at the tail end can point to a large body of literature (emerging from top-ranked places!) to justify their work, which reduces their cognitive effort. So, bubbles may be here to stay.

    Nevertheless, given the destructive effects of bubbles over the long term, I suggest that we look out for them, deflating them before they deflate us!

  • Brian E. Carpenter

    This paper reports some observations on the relationships between three measures of the size of the Internet over more than ten years. The size of the BGP4 routing table, the number of active BGP4 Autonomous Systems, and a lower bound on the total size of the Internet, appear to have fairly simple relationships despite the Internet’s growth by two orders of magnitude. In particular, it is observed that the size of the BGP4 system appears to have grown approximately in proportion to the square root of the lower-bound size of the globally addressable Internet. A simple model that partially explains this square-root law is described. It is not suggested that this observation and model have predictive value, since they cannot predict qualitative changes in the Internet topology. However, they do offer a new way to understand and monitor the scaling of the BGP4 system.
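
    The square-root relation above is easy to illustrate with a few lines of code. The sketch below is my own illustration, not code or data from the paper: it fits a power-law exponent to synthetic (Internet size, BGP table size) pairs on log-log axes, where an exponent near 0.5 corresponds to the square-root scaling that Carpenter reports.

    ```python
    # Illustrative only: estimate a power-law exponent from synthetic size data.
    # Real measurements would come from BGP table dumps and address-census data.
    import math

    # (lower-bound Internet size, BGP4 table size) -- invented numbers for illustration
    samples = [(1e6, 30_000), (1e7, 95_000), (1e8, 300_000)]

    logs = [(math.log(n), math.log(b)) for n, b in samples]
    n_mean = sum(x for x, _ in logs) / len(logs)
    b_mean = sum(y for _, y in logs) / len(logs)

    # Least-squares slope on log-log axes = estimated scaling exponent.
    slope = sum((x - n_mean) * (y - b_mean) for x, y in logs) / \
            sum((x - n_mean) ** 2 for x, _ in logs)
    print(f"estimated exponent: {slope:.2f}")  # ~0.5 matches a square-root law
    ```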

    Michalis Faloutsos
  • Mark Allman

    Careless selection of the ephemeral port number portion of a transport protocol’s connection identifier has been shown to potentially degrade security by opening the connection up to injection attacks from “blind” or “off path” attackers—that is, attackers that cannot directly observe the connection. This short paper empirically explores a number of algorithms for choosing the ephemeral port number that attempt to obscure the choice from such attackers and hence make mounting these blind attacks more difficult.
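
    To make the threat model concrete, here is a minimal sketch of one obvious obfuscation strategy: picking the ephemeral port uniformly at random from the IANA dynamic range while avoiding ports already in use. This is my own illustration of the general idea, not one of the specific algorithms the paper evaluates (RFC 6056 later catalogued several such algorithms).

    ```python
    # Minimal sketch: random ephemeral port selection from the IANA dynamic range.
    # A naive illustration of port obfuscation, not the paper's algorithms.
    import random

    EPHEMERAL_RANGE = range(49152, 65536)

    def choose_ephemeral_port(in_use: set[int]) -> int:
        candidates = [p for p in EPHEMERAL_RANGE if p not in in_use]
        if not candidates:
            raise RuntimeError("ephemeral port range exhausted")
        return random.choice(candidates)

    # Unlike a sequential counter, this forces a blind attacker to guess among
    # roughly 16k equally likely ports for each new connection.
    print(choose_ephemeral_port({49152, 49153}))
    ```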

    Kevin Almeroth
  • Adam Greenhalgh, Felipe Huici, Mickael Hoerdt, Panagiotis Papadimitriou, Mark Handley, and Laurent Mathy

    The Internet has seen a proliferation of specialized middlebox devices that carry out crucial network functionality such as load balancing, packet inspection and intrusion detection. Recent advances in CPU power, memory, buses and network connectivity have turned commodity PC hardware into a powerful network platform. Furthermore, commodity switch technologies have recently emerged offering the possibility to control the switching of flows in a fine-grained manner. Exploiting these new technologies, we present a new class of network architectures which enables flow processing and forwarding at unprecedented flexibility and low cost.

    Chadi Barakat
  • Chuan Han, Siyu Zhan, and Yaling Yang

    This paper addresses the open problem of locating an attacker that intentionally hides or falsifies its position using advanced radio technologies. A novel attacker localization mechanism, called Access Point Coordinated Localization (APCL), is proposed for IEEE 802.11 networks. APCL actively forces the attacker to reveal its position information by combining access point (AP) coordination with the traditional range-free localization. The optimal AP coordination process is calculated by modeling it as a finite horizon discrete Markov decision process, which is efficiently solved by an approximation algorithm. The performance advantages are verified through extensive simulations.

    Suman Banerjee
  • Arun Vishwanath, Vijay Sivaraman, and Marina Thottan

    The past few years have witnessed a lot of debate on how large Internet router buffers should be. The widely believed rule-of-thumb used by router manufacturers today mandates a buffer size equal to the delay-bandwidth product. This rule was first challenged by researchers in 2004, who argued that if there are a large number of long-lived TCP connections flowing through a router, then the buffer size needed is equal to the delay-bandwidth product divided by the square root of the number of long-lived TCP flows. The publication of this result has since reinvigorated interest in the buffer sizing problem, with numerous other papers exploring this topic in further detail, ranging from papers questioning the applicability of this result, to papers proposing alternate schemes, to papers developing new congestion control algorithms.

    This paper provides a synopsis of the recently proposed buffer sizing strategies and broadly classifies them according to their desired objective: link utilisation and per-flow performance. We discuss the pros and cons of these different approaches. These prior works study buffer sizing purely in the context of TCP. Subsequently, we present arguments that take into account both real-time and TCP traffic. We also report on the performance studies of various high-speed TCP variants and experimental results for networks with limited buffers. We conclude this paper by outlining some interesting avenues for further research.
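
    To make the two sizing rules discussed above concrete, here is a small worked example with illustrative numbers of my own choosing (they are not taken from the paper), comparing the classical delay-bandwidth rule with the 2004 divide-by-sqrt(N) result.

    ```python
    # Worked example of the two buffer-sizing rules; link parameters are invented.
    import math

    link_rate_bps = 10e9   # 10 Gb/s link
    rtt_s = 0.25           # 250 ms round-trip time
    n_flows = 10_000       # long-lived TCP flows sharing the link

    bdp_bits = link_rate_bps * rtt_s                    # rule of thumb: full BDP
    small_buffer_bits = bdp_bits / math.sqrt(n_flows)   # the 2004 result

    print(f"delay-bandwidth product: {bdp_bits / 8 / 1e6:.0f} MB")
    print(f"BDP / sqrt(N):           {small_buffer_bits / 8 / 1e6:.1f} MB")
    ```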

  • Jari Arkko, Bob Briscoe, Lars Eggert, Anja Feldmann, and Mark Handley

    This article summarises the presentations and discussions during a workshop on end-to-end protocols for the future Internet in June 2008. The aim of the workshop was to establish a dialogue at the interface between two otherwise fairly distinct communities working on future Internet protocols: those developing internetworking functions and those developing end-to-end transport protocols. The discussion established near-consensus on some of the open issues, such as the preferred placement of traffic engineering functionality, whereas other questions remained controversial. New research agenda items were also identified.

  • Jon Crowcroft

    It has been proposed that research in certain areas is to be avoided when those areas have gone cold. While previous work concentrated on detecting the temperature of a research topic, this work addresses the question of changing the temperature of said topics. We make suggestions for a set of techniques to re-heat a topic that has gone cold. In contrast to other researchers who propose uncertain approaches involving creativity, lateral thinking and imagination, we concern ourselves with deterministic approaches that are guaranteed to yield results.

  • Jens-Matthias Bohli, Christoph Sorge, and Dirk Westhoff

    One expectation about the future Internet is the participation of billions of sensor nodes, integrating the physical with the digital world. This Internet of Things can offer new and enhanced services and applications based on knowledge about the environment and the entities within. Millions of micro-providers could come into existence, forming a highly fragmented market place with new business opportunities to offer commercial services. In the related field of Internet and Telecommunication services, the design of markets and pricing schemes has been a vital research area in itself. We discuss how these findings can be transferred to the Internet of Things. Both the appropriate market structure and corresponding pricing schemes need to be well understood to enable a commercial success of sensor-based services. We show some steps that an evolutionary establishment of this market might have to take.

  • Henning Schulzrinne

    In double-blind reviewing (DBR), both reviewers and authors are unaware of each others' identities and affiliations. DBR is said to increase review fairness. However, DBR may only be marginally effective in combating the randomness of the typical conference review process for highly-selective conferences. DBR may also make it more difficult to adequately review conference submissions that build on earlier work of the authors and have been partially published in workshops. I believe that DBR mainly increases the perceived fairness of the reviewing process, but that may be an important benefit. Rather than waiting until the final stages, the reviewing process needs to explicitly address the issue of workshop publications early on.

  • Michalis Faloutsos

    They say that music and mathematics are intertwined. I am not sure this is true, but I always wanted to use the word intertwined. The point is that my call for poetry received an overwhelmingly enthusiastic response from at least five people. My mailbox was literally flooded (I have a small mailbox). This article is a tribute to the poetry of science, or, as I like to call it, the Poetry of Science. You will be amazed.

  • S. Keshav

    It is an honour for me to take over the Editorship of CCR from Christophe Diot. In his four years at the helm, Christophe brought in great energy and his inimitable style. More importantly, he put into place policies and processes that streamlined operations, made the magazine a must-read for SIGCOMM members, and improved the visibility of the articles and authors published here. For all this he has our sincere thanks.

    In my term as Editor, I would like to build on Christophe's legacy. I think that the magazine is well-enough established that the Editorial Board can afford to experiment with a few new ideas. Here are some ideas we have in the works.

    First, we are going to limit all future CCR papers, both editorial and peer-reviewed content, to six pages. This will actively discourage making CCR a burial ground for papers that were rejected elsewhere.

    Second, we will be proactive in seeking timely editorial content. We want SIGCOMM members to view CCR as a forum to publish new ideas, working hypotheses, and opinion pieces. Even surveys and tutorials, as long as they are timely, are welcome.

    Third, we want to encourage participation by industry, researchers and practitioners alike. We request technologists and network engineers in the field to submit articles to CCR outlining issues they face in their work, issues that can be taken up by academic researchers, who are always seeking new problems to tackle.

    Fourth, we would like to make use of online technologies to make CCR useful to its readership. In addition to CCR Online, we are contemplating an arXiv-like repository where papers can be submitted and reviewed. Importantly, readers could ask to be notified when papers matching certain keywords are submitted. This idea is still in its early stages: details will be worked out over the next few months.

    Setting these practicalities aside, let me now turn my attention to an issue that sorely needs our thoughts and innovations: the use of computer networking as a tool to solve real-world problems.

    The world today is in the midst of several crises: climate change, the rapidly growing gap between the haves and the have-nots, and the potential for epidemic outbreaks of infectious diseases, to name but a few. As thinking, educated citizens of the world, we cannot but be challenged to do our part in averting the worst effects of these global problems.

    Luckily, computer networking researchers and professionals have an important role to play. For instance: We can use networks to massively monitor weather and to allow high quality videoconferences that avoid air travel. We can use wired and wireless sensors to greatly reduce the inefficiencies of our heating and cooling systems. We can provide training videos to people at the ‘bottom-of-the-pyramid’ that can open new horizons to them and allow them to earn a better living. We can spread information that can lead to the overthrow of endemic power hierarchies through the miracles of cheap cell phones and peer-to-peer communication. We can help monitor infectious diseases in the remote corners of the world and help coordinate rapid responses to them.

    We have in our power the ideas and the technologies that can make a difference. And we must put these to good use.

    CCR can and should become the forum where the brilliant minds of today are exposed to the problems, the real problems, that face us, and where solutions are presented, critiqued, improved, and shared. This must be done. Let’s get started!

  • Ashvin Lakshmikantha, R. Srikant, Nandita Dukkipati, Nick McKeown, and Carolyn Beck

    Buffer sizing has received a lot of attention recently since it is becoming increasingly difficult to use large buffers in high-speed routers. Much of the prior work has concentrated on analyzing the amount of buffering required in core routers assuming that TCP carries all the data traffic. In this paper, we evaluate the amount of buffering required for RCP on a single congested link, while explicitly modeling flow arrivals and departures. Our theoretical analysis and simulations indicate that buffer sizes of about 10% of the bandwidth-delay product are sufficient for RCP to deliver good performance to end-users.

    Darryl Veitch
  • Stefan Frei, Thomas Duebendorfer, and Bernhard Plattner

    Although there is an increasing trend for attacks against popular Web browsers, little is known about the actual patch level of daily used Web browsers on a global scale. We conjecture that users in large part do not actually patch their Web browsers based on recommendations, perceived threats, or any security warnings. Based on HTTP User-Agent header information stored in anonymized logs from Google’s web servers, we measured the patch dynamics of about 75% of the world’s Internet users for over a year. Our focus was on the Web browsers Firefox and Opera. We found that the patch level achieved is mainly determined by the ergonomics and default settings of built-in auto-update mechanisms. Firefox’s auto-update is very effective: most users installed a new version within three days. However, the maximum share of the latest, most secure version never exceeded 80% for Firefox users and 46% for Opera users on any day in 2007. This leaves about 50 million Firefox users with outdated browsers as an easy target for attacks. Our study is the result of the first global-scale measurement of the patch dynamics of a popular browser.
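
    The measurement methodology lends itself to a simple sketch. The code below is hypothetical and not from the study: it extracts the Firefox version from an HTTP User-Agent string and tallies the share of requests coming from an assumed "latest" release.

    ```python
    # Illustrative sketch of User-Agent-based patch-level measurement.
    # The regular expression, sample log lines and "latest version" value are
    # assumptions for this example, not details taken from the paper.
    import re
    from collections import Counter

    FIREFOX_RE = re.compile(r"Firefox/(\d+(?:\.\d+)*)")

    def firefox_version(user_agent):
        m = FIREFOX_RE.search(user_agent)
        return m.group(1) if m else None

    def latest_share(user_agents, latest):
        versions = Counter(v for ua in user_agents
                           if (v := firefox_version(ua)) is not None)
        total = sum(versions.values())
        return versions[latest] / total if total else 0.0

    logs = [
        "Mozilla/5.0 (Windows; U; ...) Gecko/2007 Firefox/2.0.0.11",
        "Mozilla/5.0 (X11; U; Linux ...) Gecko/2007 Firefox/2.0.0.14",
        "Mozilla/5.0 (Macintosh; ...) Gecko/2007 Firefox/2.0.0.14",
    ]
    print(f"{latest_share(logs, '2.0.0.14'):.0%} of requests run the assumed latest build")
    ```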

    Dmitri Krioukov
  • David A. Hayes, Jason But, and Grenville Armitage

    A Stream Control Transmission Protocol (SCTP) capable Network Address Translation (NAT) device is necessary to support the wider deployment of the SCTP protocol. The key issues for an SCTP NAT are SCTP’s control chunk multiplexing and multi-homing features. Control chunk multiplexing can expose an SCTP NAT to possible Denial of Service attacks. These can be mitigated through the use of chunk and parameter processing limits.

    Multiple and changing IP addresses during an SCTP association mean that SCTP NATs cannot operate in the way conventional UDP/TCP NATs operate. Tracking these multiple global IP addresses can help in avoiding lookup table conflicts; however, it can also result in circumstances that can lead to NAT state inconsistencies. Our analysis shows that tracking global IP addresses is not necessary in most expected practical installations.

    We use our FreeBSD SCTP NAT implementation, alias_sctp, to examine the performance implications of tracking global IP addresses. We find that typical memory usage doubles and that the processing requirements are significant for installations that experience high association arrival rates.

    In conclusion, we provide practical recommendations for a secure, stable SCTP NAT installation.
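
    As a rough mental model of the memory trade-off reported above (a toy sketch under my own assumptions, not the alias_sctp data structures), consider a NAT table indexed by SCTP verification tag, with an optional second index over an association's global addresses: the extra index helps avoid lookup conflicts, but adds roughly one more table entry per tracked address.

    ```python
    # Toy model of SCTP NAT state; field and class names are invented for the
    # example and do not reflect the paper's alias_sctp implementation.
    from dataclasses import dataclass, field

    @dataclass
    class Association:
        local_addr: str
        local_vtag: int               # verification tag of the internal host
        global_addrs: set[str] = field(default_factory=set)

    class SctpNatTable:
        def __init__(self, track_global_addrs: bool = False):
            self.track_global_addrs = track_global_addrs
            self.by_vtag: dict[int, Association] = {}    # primary index
            self.by_global: dict[str, Association] = {}  # optional extra index

        def add(self, assoc: Association) -> None:
            self.by_vtag[assoc.local_vtag] = assoc
            if self.track_global_addrs:
                for addr in assoc.global_addrs:          # one more entry per
                    self.by_global[addr] = assoc         # tracked global address

    nat = SctpNatTable(track_global_addrs=True)
    nat.add(Association("10.0.0.5", 0x1234, {"192.0.2.1", "198.51.100.7"}))
    print(len(nat.by_vtag) + len(nat.by_global), "index entries for one association")
    ```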

    Chadi Barakat
  • Tao Ye, Darryl Veitch, and Jean Bolot

    Data confidentiality over mobile devices can be difficult to secure due to a lack of computing power and weak supporting encryption components. However, modern devices often have multiple wireless interfaces with diverse channel capacities and security capabilities. We show that the availability of diverse, heterogeneous links (physical or logical) between nodes in a network can be used to increase data confidentiality, on top of the availability or strength of underlying encryption techniques. We introduce a new security approach that uses multiple channels to transmit data securely, based on the ideas of deliberate corruption and information reduction, and analyze its security using the information-theoretic concepts of secrecy capacity and the wiretap channel. Our work introduces the idea of channel design with security in mind.
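
    A minimal way to see how channel diversity alone can add confidentiality is a two-channel XOR split, sketched below. This is a textbook-style illustration of the multichannel idea, not the corruption-and-reduction scheme proposed in the paper: an eavesdropper who observes only one of the two channels sees uniformly random bytes and learns nothing about the message.

    ```python
    # Two-channel secret split: send `pad` on one link and `message XOR pad` on
    # the other. Illustrates multichannel confidentiality only; the paper's
    # scheme is based on deliberate corruption and information reduction.
    import os

    def split(message: bytes) -> tuple[bytes, bytes]:
        pad = os.urandom(len(message))                       # share for channel A
        masked = bytes(m ^ p for m, p in zip(message, pad))  # share for channel B
        return pad, masked

    def combine(pad: bytes, masked: bytes) -> bytes:
        return bytes(m ^ p for m, p in zip(masked, pad))

    channel_a, channel_b = split(b"meet at noon")
    assert combine(channel_a, channel_b) == b"meet at noon"
    ```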

    Suman Banerjee
  • David Andersen

    Jay Lepreau was uncharacteristically early in his passing, but left behind him a trail of great research, rigorously and repeatably evaluated systems, and lives changed for the better.

  • Matthew Mathis

    The current Internet fairness paradigm mandates that all protocols have equivalent response to packet loss and other congestion signals, allowing relatively simple network devices to attain a weak form of fairness by sending uniform signals to all flows. Our paper [1], which recently received the ACM SIGCOMM Test of Time Award, modeled the reference Additive-Increase-Multiplicative-Decrease (AIMD) algorithm used by TCP. However, in many parts of the Internet, ISPs are choosing to explicitly control customer traffic, because the traditional paradigm does not sufficiently enforce fairness in a number of increasingly common situations. This editorial note takes the position that we should embrace this paradigm shift, which will eventually move the responsibility for capacity allocation from the end-systems to the network itself. This paradigm shift might eventually eliminate the requirement that all protocols be “TCP-Friendly”.
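
    For readers who do not recall the Test of Time paper, it derived a macroscopic model relating the steady-state rate of an AIMD/TCP flow to its loss rate. Quoted from memory (so treat the constant as approximate), for segment size MSS, round-trip time RTT, and loss probability p:

    ```latex
    % Macroscopic TCP throughput model (Mathis et al.), quoted from memory.
    \[
      BW \;\approx\; \frac{MSS}{RTT} \cdot \frac{C}{\sqrt{p}},
      \qquad C \approx \sqrt{3/2}
    \]
    ```

    It is this common response to loss that makes AIMD-style protocols mutually “TCP-Friendly”, and it is the sufficiency of exactly this mechanism that the editorial questions.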

  • Luis M. Vaquero, Luis Rodero-Merino, Juan Caceres, and Maik Lindner

    This paper discusses the concept of Cloud Computing to achieve a complete definition of what a Cloud is, using the main characteristics typically associated with this paradigm in the literature. More than 20 definitions have been studied allowing for the extraction of a consensus definition as well as a minimum definition containing the essential characteristics. This paper pays much attention to the Grid paradigm, as it is often confused with Cloud technologies. We also describe the relationships and distinctions between the Grid and Cloud approaches.

  • L. Lily Yang

    60 GHz is considered the most promising technology to deliver gigabit wireless for indoor communications. We propose to integrate 60 GHz radio with the existing Wi-Fi radio in 2.4/5 GHz band to take advantage of the complementary nature of these different bands. This integration presents an opportunity to provide a unified technology for both gigabit Wireless Personal Area Networks (WPAN) and Wireless Local Area Networks (WLAN), thus further reinforcing the technology convergence that is already underway with the widespread adoption of Wi-Fi technology. Many open research questions remain to make this unified solution work seamlessly for WPAN and WLAN.

  • Dola Saha, Dirk Grunwald, and Douglas Sicker

    Advances in networking have been accelerated by the use of abstractions, such as “layering”, and the ability to apply those abstractions across multiple communication media. Wireless communication provides the greatest challenge to these clean abstractions because of the lossy communication media. For many networking researchers, wireless communications hardware starts and ends with WiFi, or 802.11 compliant hardware.

    However, there has been a recent growth in software defined radio, which allows the basic radio medium to be manipulated by programs. This mutable radio layer has allowed researchers to exploit the physical properties of radio communication to overcome some of the challenges of the radio media; in certain cases, researchers have been able to develop mechanisms that are difficult to implement in electrical or optical media. In this paper, we describe the different design variants for software radios, their programming methods and survey some of the more cutting edge uses of those radios.

  • Albert Greenberg, James Hamilton, David A. Maltz, and Parveen Patel

    The data centers used to create cloud services represent a significant investment in capital outlay and ongoing costs. Accordingly, we first examine the costs of cloud service data centers today. The cost breakdown reveals the importance of optimizing work completed per dollar invested. Unfortunately, the resources inside the data centers often operate at low utilization due to resource stranding and fragmentation. To attack this first problem, we propose (1) increasing network agility, and (2) providing appropriate incentives to shape resource consumption. Second, we note that cloud service providers are building out geo-distributed networks of data centers. Geo-diversity lowers latency to users and increases reliability in the presence of an outage taking out an entire site. However, without appropriate design and management, these geo-diverse data center networks can raise the cost of providing service. Moreover, leveraging geo-diversity requires services be designed to benefit from it. To attack this problem, we propose (1) joint optimization of network and data center resources, and (2) new systems and mechanisms for geo-distributing state.

  • Henning Schulzrinne

    While most of us are involved in organizing conferences in some way, we probably do not pay too much attention to the organizational model of these events. This is somewhat surprising, given that conferences are probably the most visible activity of most professional societies, and also entail significant expenditures of money and volunteer labor. While the local square dance club with a $500 annual budget probably has bylaws and statutes, most conferences with hundred thousand dollar budgets operate more by oral tradition than by formal descriptions of responsibilities. In almost all cases, this works just fine, but this informality can lead to misunderstandings or problems when expectations differ among the volunteers or when there is a crisis. Thus, I believe that it is helpful to have clearer models, so that conferences and volunteers can reach a common understanding of what is expected of everybody that contributes their time to the conference, and also who is responsible when things go wrong. For long-running conferences, the typical conference organization involves four major actors: the sponsoring professional organization, a steering committee, the general chairs and the technical program chairs. However, the roles and reporting relationships seem to differ rather dramatically between different conferences.

  • Michalis Faloutsos

    It is clearly a time of change. Naturally, I am talking about the change of person in the top post: “Le Boss Grand” Christophe Diot is replaced by “Canadian Chief, Eh?” Srinivasan Keshav. Coincidence? No, my friends. This change of the guards in the most powerful position in CCR, and, by some stretch of the imagination, in SIGCOMM, and, by arbitrary extension, in the scientific world at large, is just the start of a period of change that will be later known as the Great Changes. Taking the cue from that, I say, let’s change.
