Computer Communication Review: Papers

  • Xiaolong Li and Homayoun Yousefi'zadeh

    In recent years, end-to-end feedback-based variants of TCP, as well as VCP, have emerged as practical alternatives for congestion control, requiring only one or two ECN bits in the IP header. However, all such schemes suffer from a relatively low speed of convergence and exhibit biased fairness behavior in moderate-bandwidth, high-delay networks because they rely on an insufficient amount of congestion feedback. In this paper, we propose a novel distributed ECN-based congestion control protocol to which we refer as the Multi Packet Congestion Control Protocol (MPCP). In contrast to other alternatives, MPCP relays more precise congestion feedback while preserving the use of only the two ECN bits. MPCP distributes (extracts) congestion-related information into (from) a series of n packets, thus allowing for a 2n-bit quantization of congestion measures, with each packet carrying two of the 2n bits in its ECN bits. We describe the design, implementation, and performance evaluation of MPCP through both simulations and experimental studies.
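
    To make the bit-distribution idea concrete, here is a minimal sketch (my own illustration, not the authors' implementation; the chunk ordering and framing are assumptions) of splitting a 2n-bit congestion value into n two-bit chunks carried in successive packets' ECN fields and reassembling it at the other end.

      # Illustrative sketch only: split a 2n-bit congestion value into n two-bit
      # chunks (one per packet's ECN field) and rebuild it at the receiver.
      # The most-significant-chunk-first ordering is an assumption, not MPCP's wire format.

      def encode_congestion(value: int, n: int) -> list[int]:
          """Return n two-bit chunks, most significant chunk first."""
          assert 0 <= value < (1 << (2 * n))
          return [(value >> (2 * (n - 1 - i))) & 0b11 for i in range(n)]

      def decode_congestion(chunks: list[int]) -> int:
          """Rebuild the 2n-bit value from its two-bit chunks."""
          value = 0
          for chunk in chunks:
              value = (value << 2) | (chunk & 0b11)
          return value

      if __name__ == "__main__":
          n = 4                          # 4 packets give an 8-bit quantization
          level = 0b10110111             # example congestion measure
          marks = encode_congestion(level, n)
          assert decode_congestion(marks) == level
          print(marks)                   # [2, 3, 1, 3]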

    Dmitri Krioukov
  • F. Gringoli, Luca Salgarelli, M. Dusi, N. Cascarano, F. Risso, and k. c. claffy

    Much of Internet traffic modeling, firewall, and intrusion detection research requires traces where some ground truth regarding application and protocol is associated with each packet or flow. This paper presents the design, development and experimental evaluation of gt, an open source software toolset for associating ground truth information with Internet traffic traces. By probing the monitored host’s kernel to obtain information on active Internet sessions, gt gathers ground truth at the application level. Preliminary experimental results show that gt’s effectiveness comes at little cost in terms of overhead on the hosting machines. Furthermore, when coupled with other packet inspection mechanisms, gt can derive ground truth not only in terms of applications (e.g., e-mail), but also in terms of protocols (e.g., SMTP vs. POP3).

    Pablo Rodriguez
  • Bruce Davie

    In my first month as SIGCOMM chair, I have been asked “what is your vision for SIGCOMM?”, “can we make SIGCOMM more transparent?”, and “will you write an article for CCR?”. This article is an attempt to address all three of those questions.

  • Barry M. Leiner, Vinton G. Cerf, David D. Clark, Robert E. Kahn, Leonard Kleinrock, Daniel C. Lynch, Jon Postel, Larry G. Roberts, and Stephen Wolff

    This paper was first published online by the Internet Society in December 2003 and is being re-published in ACM SIGCOMM Computer Communication Review because of its historic import. It was written at the urging of its primary editor, the late Barry Leiner. He felt that a factual rendering of the events and activities associated with the development of the early Internet would be a valuable contribution. The contributing authors did their best to incorporate only factual material into this document. There are sure to be many details that have not been captured in the body of the document but it remains one of the most accurate renderings of the early period of development available.

  • k. c. claffy, Marina Fomenkov, Ethan Katz-Bassett, Robert Beverly, Beverly A. Cox, and Matthew Luckie

    Measuring the global Internet is a perpetually challenging task for technical, economic and policy reasons, which leaves scientists as well as policymakers navigating critical questions in their field with little if any empirical grounding. On February 12-13, 2009, CAIDA hosted the Workshop on Active Internet Measurements (AIMS) as part of our series of Internet Statistics and Metrics Analysis (ISMA) workshops which provide a venue for researchers, operators, and policymakers to exchange ideas and perspectives. The two-day workshop included presentations, discussion after each presentation, and breakout sessions focused on how to increase potential and mitigate limitations of active measurements in the wide area Internet. We identified relevant stakeholders who may support and/or oppose measurement, and explored how collaborative solutions might maximize the benefit of research at minimal cost. This report describes the findings of the workshop, outlines open research problems identified by participants, and concludes with recommendations that can benefit both Internet science and communications policy. Slides from workshop presentations are available at http://www.caida.org/workshops/isma/0902/.

  • Ehab Al-Shaer, Albert Greenberg, Charles Kalmanek, David A. Maltz, T. S. Eugene Ng, and Geoffrey G. Xie

    Network management represents an architectural gap in today’s Internet [1]. Many problems with computer networks today, such as faults, misconfiguration, and performance degradation, are due to insufficient support for network management, and the problem takes on additional dimensions with the emerging programmable router paradigm. The Internet Network Management Workshop is working to build a community of researchers interested in solving the challenges of network management via a combination of bottom-up analysis of data from existing networks and a top-down design of new architectures and approaches driven by that data. This editorial sets out some of the research challenges we see facing network management, and calls for participation in working to solve them.

  • Kentaro Toyama and Muneeb Ali

    Poverty and its associated suffering remain a global challenge, with over a billion people surviving on less than a dollar a day. Technology, applied appropriately, can help improve their lives. Despite some clear examples of technical research playing a key role in global development, there is a question that repeatedly arises in this area: can technologies for developing regions be considered a core area of computer science research? In this note, we examine some of the arguments on both sides of this question, deliberately avoid answering the question itself (for lack of community consensus), and provide some suggestions for the case where the answer is in the affirmative.

  • Dirk Trossen

    While many initiatives have been discussing the future of the Internet, the EIFFEL think tank set out to push these discussions forward by assembling a community of researchers debating the potential thrusts towards the Future Internet. This article provides an account of these discussions, addressed both to the EIFFEL membership and more widely to members of the community interested in medium to long-term developments. We outline immediate problems as well as potentially missed opportunities in the current Internet, while focusing the debate about the need for a Future Internet on the style in which we conduct research as well as on how we design systems at large. Most importantly, we recognize that an inter-disciplinary dialogue is required to formulate actions to be taken and to remove barriers to their realization.

  • Richard Gass and Christophe Diot

    Personal access points and travel routers have grown in popularity with the advent of smaller and more battery-conscious devices. This article introduces the YAAP (Yet Another Access Point), open source software that enables ad-hoc infrastructure services in the absence of network connectivity. The YAAP provides a familiar way to connect users to each other and provides a set of useful services. We list and describe its applications, explain how it can be used, provide details about the code, and point readers to where it can be downloaded.

  • S. Keshav

    As a member of the SIGCOMM TPC, I recently had a chance to read over thirty submissions by the best and brightest in our field. Imagine my surprise to find that all but one of them had significant flaws in statistical analysis. These flaws were severe enough that in many other disciplines the papers would have been summarily rejected. Yet, a few of them not only were accepted but also are likely to become role models for the next generation of students. For, though statistically flawed, they were not far from common practice, so it was felt that they should not be unfairly punished.

    In this editorial, I will focus on why statistical analysis matters, three common statistical errors I saw, why I think we have a rather relaxed approach to statistical analysis, and what we can do about it.

    In conducting experiments, only the most trivial situations allow a comprehensive exploration of the underlying parameter space: simulations and measurements alike allow us to explore only a small portion of the space. One role of statistical analysis is to guide the selection of the parameter space using techniques from experimental design.

    A second role of statistical analysis is to allow a researcher to draw cautious and justifiable conclusions from a mass of numbers. Over a hundred years of work has created tried-and-tested techniques that allow researchers to compensate for unavoidable measurement errors, and to infer with high probability that the improvement seen due to a particular algorithm or system is significant, rather than due to mere luck. Without statistical analysis, one is on thin ice.

    These two roles of statistical analysis make it an essential underpinning for networking research, especially for experimental design, measurement, and performance analysis. Unfortunately, despite its importance, papers in our field--both submissions to SIGCOMM and published papers in similar top-tier conferences--suffer from severe statistical errors. The three most common errors I found were: (1) confusing a sample with a population and, as a corollary, not specifying the underlying population; (2) not presenting confidence intervals for sample statistics; and (3) incorrect hypothesis testing.

    Most authors did not seem to realize that their measurements represented a sample from a much larger underlying population. Consider the measured throughput during ten runs between two nodes using a particular wireless protocol. These values are a sample of the population of node-to-node throughputs obtained using all possible uses of the protocol under all conceivable circumstances. Wireless performance may, however, vary widely depending on the RF environment. Therefore, the sample can be considered to be representative only if every likely circumstance has a proportionate chance of being represented. Authors need to strongly argue that the sample measurements are chosen in a way that sufficiently covers the underlying parameter space. Otherwise, the sample represents nothing more than itself! Yet, this test for scientific validity was rarely discussed in most papers.

    Given that a sample is not the population, it is imperative that the statistics of a sample be presented along with a confidence interval in which, with high confidence, the population parameters lie. These are the familiar error bars in a typical graph or histogram. Lacking error bars, we cannot interpret the characteristics of the population with any precision; we can only draw conclusions about the sample, which is necessarily limited. To my surprise, only one paper I read had error bars. This is a serious flaw.
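
    To make the point about error bars concrete, here is a minimal sketch (my own, using invented throughput numbers) of computing a 95% confidence interval for the mean of a small sample using the t distribution.

      # Minimal sketch: a 95% confidence interval for the mean of a small sample
      # of throughput measurements (the numbers below are invented for illustration).
      import math
      from statistics import mean, stdev

      from scipy import stats

      throughput_mbps = [22.1, 19.8, 24.3, 21.0, 20.6, 23.7, 18.9, 22.8, 21.5, 20.2]

      n = len(throughput_mbps)
      m = mean(throughput_mbps)
      se = stdev(throughput_mbps) / math.sqrt(n)     # standard error of the mean
      t_crit = stats.t.ppf(0.975, df=n - 1)          # two-sided 95% critical value

      low, high = m - t_crit * se, m + t_crit * se
      print(f"mean = {m:.2f} Mb/s, 95% CI = ({low:.2f}, {high:.2f})")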

    Finally, it is axiomatic in statistical analysis that a hypothesis cannot be proved; it can only be rejected or not rejected. Hypothesis testing requires carefully framing a null hypothesis and using standard statistical analysis to either reject or not reject it. I realize that in many cases the null hypothesis is obvious and need not be formally stated. Nevertheless, it appeared that none of the papers I read had tested hypotheses with adequate care.
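
    As an illustration, here is a minimal sketch (my own, with invented measurements) of framing and testing a null hypothesis: H0 states that a modified system has the same mean throughput as the baseline, and we either reject H0 or fail to reject it at the 5% significance level.

      # Minimal hypothesis-testing sketch with invented data: H0 says the two
      # systems have equal mean throughput; reject H0 only if p < 0.05.
      from scipy import stats

      baseline = [20.1, 19.4, 21.0, 20.7, 19.9, 20.3, 21.2, 19.6]
      modified = [22.0, 21.3, 22.8, 21.9, 22.4, 21.1, 23.0, 22.2]

      t_stat, p_value = stats.ttest_ind(baseline, modified, equal_var=False)  # Welch's t-test

      if p_value < 0.05:
          print(f"Reject H0 (p = {p_value:.4f}): the difference is statistically significant.")
      else:
          print(f"Fail to reject H0 (p = {p_value:.4f}): no significant difference was shown.")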

    Why do papers in our field lack statistical rigor? I think that one reason could be that we teach statistics too early in the academic curriculum. Students who learn statistical inference and hypothesis testing as a chore in a freshman class have all but forgotten it by the time they are writing papers. In my own case, I am embarrassed to admit that I did not thoroughly understand these techniques until I recently wrote a chapter on statistical techniques for networking researchers. I suspect that many of my colleagues are in the same boat. Unfortunately, this makes our weakness self-perpetuating. Having forgotten statistical analysis, we are neither in a position to carry it out properly, nor do we insist upon it during peer review. Thus, we stumble from one flawed paper to the next, continuing the cycle.

    What can we do about this? I suggest that all graduate students be required to take a course on statistical analysis. This need not be a formal course, but could be taken online or using distance education. The concepts are well known and the techniques are explained in numerous textbooks. We just need to buy into the agenda. Second, I think that we need to raise the bar during paper evaluation. Inadequate statistical analysis should be pointed out and should form one criterion for paper rejection. For papers with good experimental results but with poor statistical analysis, we should insist that these issues be rectified during shepherding. Finally, we need to educate the educators. Perhaps SIGCOMM can sponsor online or offline tutorials where researchers can quickly come up to speed in statistical analysis.

    These steps will raise the scientific merit of our discipline, and, more importantly, prevent us from accepting incorrect results due to flaws in statistical analysis.

  • Dragana Damjanovic and Michael Welzl

    When data transfers to or from a host happen in parallel, users do not always consider them to have the same importance. Ideally, a transport protocol should therefore allow its users to manipulate the fairness among flows in an almost arbitrary fashion. Since data transfers can also include real-time media streams which need to keep delay - and hence buffers - small, the protocol should also have a smooth sending rate. In an effort to satisfy the above requirements, we present MulTFRC, a congestion control mechanism which is based on the TCP-friendly Rate Control (TFRC) protocol. It emulates the behavior of a number of TFRC flows while maintaining a smooth sending rate. Our simulations and a real-life test demonstrate that MulTFRC performs significantly better than its competitors, potentially making it applicable in a broader range of settings than what TFRC is normally associated with.

    Darryl Veitch
  • Alice Este, Francesco Gringoli, and Luca Salgarelli

    This paper presents a statistical analysis of the amount of information that the features of traffic flows observed at the packet level carry with respect to the protocol that generated them. We show that the amount of information of the majority of such features remains constant irrespective of the point of observation (Internet core vs. Internet edge) and of the capture time (year 2000/01 vs. year 2008). We also describe a comparative analysis of how four statistical classifiers fare using the features we studied.

    Konstantina Papagiannaki
  • Don Bowman

    The Internet is a primary engine of innovation, communication, and entertainment in our society. It is used by diverse and potentially conflicting interests. When these interests collide in the form of technology, should the Internet stay true to its roots and let things take their course naturally, or should regulation and legislation be enacted to control what may or may not be done? This author believes public opinion is best served by light regulation and a strong focus on allowing technological innovation to take its natural course.

  • Romain Kuntz, Antoine Gallais, and Thomas Noel

    Although research on algorithms and communication protocols in Wireless Sensor Networks (WSN) has yielded a tremendous effort so far, most of these protocols are hardly used in real deployments nowadays. Several reasons have been put forward in recent publications. In this paper, we further investigate this trend from a Medium Access Control (MAC) perspective by analyzing both the reasons behind successful deployments and the characteristics of the MAC layers proposed in the literature. The effort of developing suitable protocols from scratch for every new deployment could be minimized by building on existing contributions that provide code reuse and adaptive protocols. Although we advocate their use for today's deployments, we have identified several shortcomings in foreseen scenarios, for which we provide guidelines for future research.

  • Chip Elliott and Aaron Falk

    In this paper we discuss the current status of the Global Environment for Network Innovations. Early prototypes of GENI are starting to come online as an end-to-end system, and network researchers are invited to participate by engaging in the design process or by using GENI to conduct experiments.

  • Deep Medhi and Peter A. Freeman

    A US-Japan Workshop on Future Networks was held in Palo Alto, CA on October 31 - November 1, 2008. This workshop brought together leading US and Japanese network researchers and network research infrastructure developers who are interested in future networks. The focus was on research issues and experimental infrastructure to support research on future generation networks. The goal was to foster cooperation and communication between peers in the two countries. Through this workshop, a number of research challenges were identified. The workshop also made recommendations to: create a new funding mechanism to foster special collaborations in networking and experimentation; extend current national testbeds with international connectivity; and urge the respective governments to exert strong leadership to ensure that collaborative research for creating future networks is carried out.

  • Henning Schulzrinne

    As the community develops an ever more diverse set of venues for disseminating and discussing technical work and as the traditional resource constraints change from pages and shelf space to reviewer time, the traditional prohibition against double submission deserves closer consideration. We discuss reasons why double submissions have been frowned upon, and try to establish a set of guidelines that ensure appropriate review and dissemination, without preventing informal early discussion of research projects.

  • Konstantina Papagiannaki and Luigi Rizzo

    Selecting a technical program for a conference, and running the process so that decisions are well received by authors and participants, is a challenging task. We report our experience in running the SIGCOMM 2009 Technical Program Committee (TPC). The purpose of this article is to document the process that we followed, and discuss it critically. This should let authors get a better understanding of what led to the final acceptance or rejection of their work, and hopefully let others who run similar processes in the future benefit from our experience.

  • Michalis Faloutsos

    “Don’t even get me started” is a powerful and direct, yet vague and meaningless, expression. What if you reply: “By all means, please, get started”? I believe that this will stun your opponent (it may be a friendly discussion, but if you don’t believe the other person is an opponent, you will never win). Then, you can quickly grab his/her nose and yell: “Twos before eights, and one bird in the bush”. It can be very entertaining unless the other person is your boss or your psychiatrist. In any case, this column dares to go where no column at CCR has gone before. It is all about getting started. And by that, we mean starting start-ups.

    Warning: the content below is not suitable for small children or adults with common sense.

  • S. Keshav

    We are living in the worst economic times since the 1930s. The US economy contracted at an annualized rate of 3.8% in the fourth quarter of 2008, the corresponding figure for Japan is 12.7%, and Iceland may become the first post-depression Western economy to suffer from an outright fiscal collapse. Economists tell us that one of the reasons for this worldwide recession is a ‘housing bubble’ where banks overestimated a borrower's ability to pay back a loan and where house buyers – armed with cheap loans – overestimated the true worth of a house.

    The recent Internet bubble is still fresh in some of our minds, where there was a similar overestimation of the true worth of Internet-enabled businesses. That bubble crashed too, with consequences suffered by the entire economy.

    Unfortunately, bubbles are not uncommon in networking research. Certain topics appear seemingly from nowhere, become ‘hot,’ propelled by interest from both leading researchers and funding agencies, and just as mysteriously die off, leaving behind a flood of papers, mostly in second- and third-tier conferences, written by authors only too keen to jump on a trend. Bubbles lead to an overexpenditure of research effort on marginal topics, wasting resources and breeding a certain degree of cynicism amongst our brightest young minds. Moreover, they drain resources from more deserving but less hyped ventures. Can our experience with economic bubbles shed light on research bubbles and teach us how to avoid them?

    Both economic and research bubbles share some similarities, such as having unrealistic expectations about what can be achieved by a company, the real-estate market, or a new technology. Bubble participants either naively or cynically invest time and money in solutions and technologies whose success is far from assured and whose widespread adoption would require the complete overthrow of legacy infrastructure. To avoid being caught in a bubble, or to merely avoid being caught in the tail end of one (being at the leading edge of a bubble is both fun and profitable!), ask tough questions about the underlying assumptions. In the midst of the housing bubble, could one have pointed out that housing prices could go down as easily as they could go up? Could anyone have believed in the ’90s that videoconferencing, ATM, RSVP and other 'hot' topics would soon be consigned to the midden heap of history? I think so. It only requires the willingness to question every assumption and draw the inevitable conclusions.

    I think that in the end, what really inflates a bubble is money. Cheap money from venture capitalists, banks, and funding agencies makes it profitable to enter a bubble and make it grow. So it is important that the gatekeepers of funding be vigilant. They should be prepared to turn down applications for funding that smack of riding the bubble. Experienced researchers should willingly serve on grant panels, and should be prepared to be critical of even their favourite areas of research if necessary.

    Finally, bubbles can be identified and quashed by an active media. The press should have more deeply questioned the Internet and housing bubbles. Research conferences in our field should do the same for research bubbles. Paper reviewers and program committees thus play the same role as investigative journalists.

    This is not to say that all speculative ideas should be systematically de-funded and rejected. There should always be room for open-minded, blue-sky research. However, this activity should be limited and clearly identified. Perhaps every conference should have blue-sky sessions where all assumptions are left unchallenged (our community has done this with recent papers on ‘clean-slate’ designs). The best of these ideas, when proven to be sound, could then be funded and widely adopted.

    Of course, I am assuming that we can get out of bubbles by rational means. Humans are all too fallible, however, and bubble thinking plays on human foibles. Worse, there is an incentive structure that encourages bubble formation: people at the leading edge of a bubble are disproportionately rewarded, and people at the tail end can point to a large body of literature (emerging from top-ranked places!) to justify their work, which reduces their cognitive effort. So, bubbles may be here to stay.

    Nevertheless, given the destructive effects of bubbles over the long term, I suggest that we look out for them, deflating them before they deflate us!

  • Brian E. Carpenter

    This paper reports some observations on the relationships between three measures of the size of the Internet over more than ten years. The size of the BGP4 routing table, the number of active BGP4 Autonomous Systems, and a lower bound on the total size of the Internet, appear to have fairly simple relationships despite the Internet’s growth by two orders of magnitude. In particular, it is observed that the size of the BGP4 system appears to have grown approximately in proportion to the square root of the lower-bound size of the globally addressable Internet. A simple model that partially explains this square law is described. It is not suggested that this observation and model have predictive value, since they cannot predict qualitative changes in the Internet topology. However, they do offer a new way to understand and monitor the scaling of the BGP4 system.
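
    In symbols (this notation is mine, not the paper's), the reported scaling and its equivalent "square law" reading can be written as:

      % R : number of BGP4 routing table entries (size of the BGP4 system)
      % H : lower bound on the size of the globally addressable Internet
      % c : an empirically fitted constant
      R \approx c\,\sqrt{H} \qquad \Longleftrightarrow \qquad H \approx (R/c)^{2}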

    Michalis Faloutsos
  • Mark Allman

    Careless selection of the ephemeral port number portion of a transport protocol’s connection identifier has been shown to potentially degrade security by opening the connection up to injection attacks from “blind” or “off path” attackers, that is, attackers that cannot directly observe the connection. This short paper empirically explores a number of algorithms for choosing the ephemeral port number that attempt to obscure the choice from such attackers and hence make mounting these blind attacks more difficult.
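
    For context, here is a minimal sketch of the simplest idea in this space, drawing the port uniformly at random from the dynamic range (this is my own illustration of the general approach, not one of the specific algorithms the paper evaluates):

      # Illustrative sketch: pick an ephemeral port uniformly at random from the
      # IANA dynamic range, retrying if the port is already in use. This shows the
      # general idea of hiding the choice from off-path attackers; it is not one of
      # the specific algorithms studied in the paper.
      import secrets

      EPHEMERAL_MIN, EPHEMERAL_MAX = 49152, 65535

      def choose_ephemeral_port(in_use: set[int]) -> int:
          while True:
              port = EPHEMERAL_MIN + secrets.randbelow(EPHEMERAL_MAX - EPHEMERAL_MIN + 1)
              if port not in in_use:
                  return port

      print(choose_ephemeral_port(in_use={49152, 50000}))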

    Kevin Almeroth
  • Adam Greenhalgh, Felipe Huici, Mickael Hoerdt, Panagiotis Papadimitriou, Mark Handley, and Laurent Mathy

    The Internet has seen a proliferation of specialized middlebox devices that carry out crucial network functionality such as load balancing, packet inspection and intrusion detection. Recent advances in CPU power, memory, buses and network connectivity have turned commodity PC hardware into a powerful network platform. Furthermore, commodity switch technologies have recently emerged offering the possibility to control the switching of flows in a fine-grained manner. Exploiting these new technologies, we present a new class of network architectures which enables flow processing and forwarding at unprecedented flexibility and low cost.

    Chadi Barakat
  • Chuan Han, Siyu Zhan, and Yaling Yang

    This paper addresses the open problem of locating an attacker that intentionally hides or falsifies its position using advanced radio technologies. A novel attacker localization mechanism, called Access Point Coordinated Localization (APCL), is proposed for IEEE 802.11 networks. APCL actively forces the attacker to reveal its position information by combining access point (AP) coordination with the traditional range-free localization. The optimal AP coordination process is calculated by modeling it as a finite horizon discrete Markov decision process, which is efficiently solved by an approximation algorithm. The performance advantages are verified through extensive simulations.

    Suman Banerjee
  • Arun Vishwanath, Vijay Sivaraman, and Marina Thottan

    The past few years have witnessed a lot of debate on how large Internet router buffers should be. The widely believed rule-of-thumb used by router manufacturers today mandates a buffer size equal to the delay-bandwidth product. This rule was first challenged by researchers in 2004, who argued that if there are a large number of long-lived TCP connections flowing through a router, then the buffer size needed is equal to the delay-bandwidth product divided by the square root of the number of long-lived TCP flows. The publication of this result has since reinvigorated interest in the buffer sizing problem, with numerous other papers exploring the topic in further detail, ranging from papers questioning the applicability of this result, to papers proposing alternative schemes, to papers developing new congestion control algorithms.

    This paper provides a synopsis of the recently proposed buffer sizing strategies and broadly classifies them according to their desired objective: link utilisation, and per-flow performance. We discuss the pros and cons of these different approaches. These prior works study buffer sizing purely in the context of TCP. Subsequently, we present arguments that take into account both real-time and TCP traffic. We also report on the performance studies of various high-speed TCP variants and experimental results for networks with limited buffers. We conclude this paper by outlining some interesting avenues for further research.
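
    As a back-of-the-envelope illustration of the two sizing rules mentioned above (the link parameters below are invented for the example):

      # Back-of-the-envelope comparison of the two buffer sizing rules discussed
      # above, using invented parameters for a hypothetical core router link.
      import math

      capacity_bps = 10e9     # 10 Gb/s link (example value)
      rtt_s = 0.25            # 250 ms round-trip time (example value)
      num_flows = 10_000      # long-lived TCP flows sharing the link (example value)

      bdp_bits = capacity_bps * rtt_s                       # rule of thumb: one delay-bandwidth product
      small_buffer_bits = bdp_bits / math.sqrt(num_flows)   # 2004 result: BDP / sqrt(N)

      print(f"rule of thumb : {bdp_bits / 8 / 1e6:.1f} MB")            # ~312.5 MB
      print(f"BDP / sqrt(N) : {small_buffer_bits / 8 / 1e6:.1f} MB")   # ~3.1 MB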

  • Jari Arkko, Bob Briscoe, Lars Eggert, Anja Feldmann, and Mark Handley

    This article summarises the presentations and discussions during a workshop on end-to-end protocols for the future Internet in June 2008. The aim of the workshop was to establish a dialogue at the interface between two otherwise fairly distinct communities working on future Internet protocols: those developing internetworking functions and those developing end-to-end transport protocols. The discussion established near-consensus on some of the open issues, such as the preferred placement of traffic engineering functionality, whereas other questions remained controversial. New research agenda items were also identified.

  • Jon Crowcroft

    It has been proposed that research in certain areas is to be avoided when those areas have gone cold. While previous work concentrated on detecting the temperature of a research topic, this work addresses the question of changing the temperature of said topics. We make suggestions for a set of techniques to re-heat a topic that has gone cold. In contrast to other researchers who propose uncertain approaches involving creativity, lateral thinking and imagination, we concern ourselves with deterministic approaches that are guaranteed to yield results.

  • Jens-Matthias Bohli, Christoph Sorge, and Dirk Westhoff

    One expectation about the future Internet is the participation of billions of sensor nodes, integrating the physical with the digital world. This Internet of Things can offer new and enhanced services and applications based on knowledge about the environment and the entities within. Millions of micro-providers could come into existence, forming a highly fragmented market place with new business opportunities to offer commercial services. In the related field of Internet and Telecommunication services, the design of markets and pricing schemes has been a vital research area in itself. We discuss how these findings can be transferred to the Internet of Things. Both the appropriate market structure and corresponding pricing schemes need to be well understood to enable a commercial success of sensor-based services. We show some steps that an evolutionary establishment of this market might have to take.

  • Henning Schulzrinne

    In double-blind reviewing (DBR), both reviewers and authors are unaware of each other's identities and affiliations. DBR is said to increase review fairness. However, DBR may only be marginally effective in combating the randomness of the typical conference review process for highly-selective conferences. DBR may also make it more difficult to adequately review conference submissions that build on earlier work of the authors and have been partially published in workshops. I believe that DBR mainly increases the perceived fairness of the reviewing process, but that may be an important benefit. Rather than waiting until the final stages, the reviewing process needs to explicitly address the issue of workshop publications early on.

  • Michalis Faloutsos

    They say that music and mathematics are intertwined. I am not sure this is true, but I always wanted to use the word intertwined. The point is that my call for poetry received an overwhelmingly enthusiastic response from at least five people. My mailbox was literally flooded (I have a small mailbox). This article is a tribute to the poetry of science, or, as I like to call it, the Poetry of Science. You will be amazed.

  • S. Keshav

    It is an honour for me to take over the Editorship of CCR from Christophe Diot. In his four years at the helm, Christophe brought in great energy and his inimitable style. More importantly, he put into place policies and processes that streamlined operations, made the magazine a must-read for SIGCOMM members, and improved the visibility of the articles and authors published here. For all this he has our sincere thanks.

    In my term as Editor, I would like to build on Christophe's legacy. I think that the magazine is well-enough established that the Editorial Board can afford to experiment with a few new ideas. Here are some ideas we have in the works.

    First, we are going to limit all future CCR papers, both editorial and peer-reviewed content, to six pages. This will actively discourage making CCR a burial ground for papers that were rejected elsewhere.

    Second, we will be proactive in seeking timely editorial content. We want SIGCOMM members to view CCR as a forum to publish new ideas, working hypotheses, and opinion pieces. Even surveys and tutorials, as long as they are timely, are welcome.

    Third, we want to encourage participation by industry, researchers and practitioners alike. We request technologists and network engineers in the field to submit articles to CCR outlining issues they face in their work, issues that can be taken up by academic researchers, who are always seeking new problems to tackle.

    Fourth, we would like to make use of online technologies to make CCR useful to its readership. In addition to CCR Online, we are contemplating an arXiv-like repository where papers can be submitted and reviewed. Importantly, readers could ask to be notified when papers matching certain keywords are submitted. This idea is still in its early stages: details will be worked out over the next few months.

    Setting these practicalities aside, let me now turn my attention to an issue that sorely needs our thoughts and innovations: the use of computer networking as a tool to solve real-world problems.

    The world today is in the midst of several crises: climate change, the rapidly growing gap between the haves and the have-nots, and the potential for epidemic outbreaks of infectious diseases, to name but a few. As thinking, educated citizens of the world, we cannot but be challenged to do our part in averting the worst effects of these global problems.

    Luckily, computer networking researchers and professionals have an important role to play. For instance: We can use networks to massively monitor weather and to allow high quality videoconferences that avoid air travel. We can use wired and wireless sensors to greatly reduce the inefficiencies of our heating and cooling systems. We can provide training videos to people at the ‘bottom-of-the-pyramid’ that can open new horizons to them and allow them to earn a better living. We can spread information that can lead to the overthrow of endemic power hierarchies through the miracles of cheap cell phones and peer-to-peer communication. We can help monitor infectious diseases in the remote corners of the world and help coordinate rapid responses to them.

    We have in our power the ideas and the technologies that can make a difference. And we must put these to good use.

    CCR can and should become the forum where the brilliant minds of today are exposed to the problems, the real problems, that face us, and where solutions are presented, critiqued, improved, and shared. This must be done. Let’s get started!

  • Ashvin Lakshmikantha, R. Srikant, Nandita Dukkipati, Nick McKeown, and Carolyn Beck

    Buffer sizing has received a lot of attention recently since it is becoming increasingly difficult to use large buffers in high-speed routers. Much of the prior work has concentrated on analyzing the amount of buffering required in core routers assuming that TCP carries all the data traffic. In this paper, we evaluate the amount of buffering required for RCP on a single congested link, while explicitly modeling flow arrivals and departures. Our theoretical analysis and simulations indicate that buffer sizes of about 10% of the bandwidth-delay product are sufficient for RCP to deliver good performance to end-users.

    Darryl Veitch
  • Stefan Frei, Thomas Duebendorfer, and Bernhard Plattner

    Although there is an increasing trend for attacks against popular Web browsers, little is known about the actual patch level of daily used Web browsers on a global scale. We conjecture that users in large part do not actually patch their Web browsers based on recommendations, perceived threats, or any security warnings. Based on HTTP User-Agent header information stored in anonymized logs from Google’s web servers, we measured the patch dynamics of about 75% of the world’s Internet users for over a year. Our focus was on the Web browsers Firefox and Opera. We found that the patch level achieved is mainly determined by the ergonomics and default settings of built-in auto-update mechanisms. Firefox’s auto-update is very effective: most users installed a new version within three days. However, the maximum share of the latest, most secure version never exceeded 80% for Firefox users and 46% for Opera users on any day in 2007. This makes about 50 million Firefox users with outdated browsers an easy target for attacks. Our study is the result of the first global-scale measurement of the patch dynamics of a popular browser.
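
    To illustrate the kind of field the measurement relies on, here is a minimal sketch (my own, with a made-up log entry) of extracting a browser version from an HTTP User-Agent header:

      # Minimal sketch: pull the Firefox version out of an HTTP User-Agent string,
      # the kind of header field the study is based on (the example string is made up).
      import re

      ua = "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.11) Gecko/20071127 Firefox/2.0.0.11"

      match = re.search(r"Firefox/([\d.]+)", ua)
      if match:
          print("Firefox version:", match.group(1))   # -> 2.0.0.11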

    Dmitri Krioukov
  • David A. Hayes, Jason But, and Grenville Armitage

    A Stream Control Transmission Protocol (SCTP) capable Network Address Translation (NAT) device is necessary to support the wider deployment of the SCTP protocol. The key issues for an SCTP NAT are SCTP’s control chunk multiplexing and multi-homing features. Control chunk multiplexing can expose an SCTP NAT to possible Denial of Service attacks. These can be mitigated through the use of chunk and parameter processing limits.

    Multiple and changing IP addresses during an SCTP association mean that SCTP NATs cannot operate in the way conventional UDP/TCP NATs operate. Tracking these multiple global IP addresses can help in avoiding lookup table conflicts; however, it can also result in circumstances that can lead to NAT state inconsistencies. Our analysis shows that tracking global IP addresses is not necessary in most expected practical installations.

    We use our FreeBSD SCTP NAT implementation, alias_sctp to examine the performance implications of tracking global IP addresses. We find that typical memory usage doubles and that the processing requirements are significant for installations that experience high association arrival rates.

    In conclusion, we provide practical recommendations for a secure, stable SCTP NAT installation.

    Chadi Barakat
  • Tao Ye, Darryl Veitch, and Jean Bolot

    Data confidentiality over mobile devices can be difficult to secure due to a lack of computing power and weak supporting encryption components. However, modern devices often have multiple wireless interfaces with diverse channel capacities and security capabilities. We show that the availability of diverse, heterogeneous links (physical or logical) between nodes in a network can be used to increase data confidentiality, on top of the availability or strength of underlying encryption techniques. We introduce a new security approach that uses multiple channels to transmit data securely, based on the ideas of deliberate corruption and information reduction, and analyze its security using the information-theoretic concepts of secrecy capacity and the wiretap channel. Our work introduces the idea of channel design with security in mind.

    Suman Banerjee
  • David Andersen

    Jay Lepreau was uncharacteristically early in his passing, but left behind him a trail of great research, rigorously and repeatably evaluated systems, and lives changed for the better.

  • Matthew Mathis

    The current Internet fairness paradigm mandates that all protocols have equivalent response to packet loss and other congestion signals, allowing relatively simple network devices to attain a weak form of fairness by sending uniform signals to all flows. Our paper [1], which recently received the ACM SIGCOMM Test of Time Award, modeled the reference Additive-Increase-Multiplicative-Decrease algorithm used by TCP. However, in many parts of the Internet ISPs are choosing to explicitly control customer traffic, because the traditional paradigm does not sufficiently enforce fairness in a number of increasingly common situations. This editorial note takes the position that we should embrace this paradigm shift, which will eventually move the responsibility for capacity allocation from the end-systems to the network itself. This paradigm shift might eventually eliminate the requirement that all protocols be “TCP-friendly”.

  • Luis M. Vaquero, Luis Rodero-Merino, Juan Caceres, and Maik Lindner

    This paper discusses the concept of Cloud Computing to achieve a complete definition of what a Cloud is, using the main characteristics typically associated with this paradigm in the literature. More than 20 definitions have been studied, allowing for the extraction of a consensus definition as well as a minimum definition containing the essential characteristics. This paper pays much attention to the Grid paradigm, as it is often confused with Cloud technologies. We also describe the relationships and distinctions between the Grid and Cloud approaches.

  • L. Lily Yang

    60 GHz is considered the most promising technology to deliver gigabit wireless for indoor communications. We propose to integrate 60 GHz radio with the existing Wi-Fi radio in 2.4/5 GHz band to take advantage of the complementary nature of these different bands. This integration presents an opportunity to provide a unified technology for both gigabit Wireless Personal Area Networks (WPAN) and Wireless Local Area Networks (WLAN), thus further reinforcing the technology convergence that is already underway with the widespread adoption of Wi-Fi technology. Many open research questions remain to make this unified solution work seamlessly for WPAN and WLAN.

  • Dola Saha, Dirk Grunwald, and Douglas Sicker

    Advances in networking have been accelerated by the use of abstractions, such as “layering”, and the ability to apply those abstractions across multiple communication media. Wireless communication provides the greatest challenge to these clean abstractions because of the lossy communication media. For many networking researchers, wireless communications hardware starts and ends with WiFi, or 802.11 compliant hardware.

    However, there has been a recent growth in software defined radio, which allows the basic radio medium to be manipulated by programs. This mutable radio layer has allowed researchers to exploit the physical properties of radio communication to overcome some of the challenges of the radio media; in certain cases, researchers have been able to develop mechanisms that are difficult to implement in electrical or optical media. In this paper, we describe the different design variants for software radios, their programming methods and survey some of the more cutting edge uses of those radios.
