CCR Papers from January 2009

  • S. Keshav

    It is an honour for me to take over the Editorship of CCR from Christophe Diot. In his four years at the helm, Christophe brought in great energy and his inimitable style. More importantly, he put into place policies and processes that streamlined operations, made the magazine a must-read for SIGCOMM members, and improved the visibility of the articles and authors published here. For all this he has our sincere thanks.

    In my term as Editor, I would like to build on Christophe's legacy. I think that the magazine is well-enough established that the Editorial Board can afford to experiment with a few new ideas. Here are some ideas we have in the works.

    First, we are going to limit all future papers in CCR, both editorial and peer-reviewed content, to six pages. This will actively discourage making CCR a burial ground for papers that were rejected elsewhere.

    Second, we will be proactive in seeking timely editorial content. We want SIGCOMM members to view CCR as a forum to publish new ideas, working hypotheses, and opinion pieces. Even surveys and tutorials, as long as they are timely, are welcome.

    Third, we want to encourage participation by industry, researchers and practitioners alike. We request technologists and network engineers in the field to submit articles to CCR outlining issues they face in their work, issues that can be taken up by academic researchers, who are always seeking new problems to tackle.

    Fourth, we would like to make use of online technologies to make CCR useful to its readership. In addition to CCR Online, we are contemplating an arXiv-like repository where papers can be submitted and reviewed. Importantly, readers could ask to be notified when papers matching certain keywords are submitted. This idea is still in its early stages: details will be worked out over the next few months.

    Setting these practicalities aside, let me now turn my attention to an issue that sorely needs our thoughts and innovations: the use of computer networking as a tool to solve real-world problems.

    The world today is in the midst of several crises: climate change, the rapidly growing gap between the haves and the have-nots, and the potential for epidemic outbreaks of infectious diseases, to name but a few. As thinking, educated citizens of the world, we cannot but be challenged to do our part in averting the worst effects of these global problems.

    Luckily, computer networking researchers and professionals have an important role to play. For instance: We can use networks to massively monitor weather and to allow high quality videoconferences that avoid air travel. We can use wired and wireless sensors to greatly reduce the inefficiencies of our heating and cooling systems. We can provide training videos to people at the ‘bottom-of-the-pyramid’ that can open new horizons to them and allow them to earn a better living. We can spread information that can lead to the overthrow of endemic power hierarchies through the miracles of cheap cell phones and peer-to-peer communication. We can help monitor infectious diseases in the remote corners of the world and help coordinate rapid responses to them.

    We have in our power the ideas and the technologies that can make a difference. And we must put these to good use.

    CCR can and should become the forum where the brilliant minds of today are exposed to the problems, the real problems, that face us, and where solutions are presented, critiqued, improved, and shared. This must be done. Let’s get started!

  • Ashvin Lakshmikantha, R. Srikant, Nandita Dukkipati, Nick McKeown, and Carolyn Beck

    Buffer sizing has received a lot of attention recently since it is becoming increasingly difficult to use large buffers in high-speed routers. Much of the prior work has concentrated on analyzing the amount of buffering required in core routers assuming that TCP carries all the data traffic. In this paper, we evaluate the amount of buffering required for RCP on a single congested link, while explicitly modeling flow arrivals and departures. Our theoretical analysis and simulations indicate that buffer sizes of about 10% of the bandwidth-delay product are sufficient for RCP to deliver good performance to end-users.
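    The 10% rule of thumb above is easy to put in concrete terms. A minimal sketch (the link speed and RTT numbers are illustrative, not taken from the paper):

    ```python
    # Illustrative arithmetic for buffer sizing at a fraction of the
    # bandwidth-delay product (BDP), per the abstract's ~10% figure for RCP.
    # The example link parameters below are assumptions for illustration.

    def bdp_bytes(link_bps: float, rtt_s: float) -> float:
        """Bandwidth-delay product of a link, in bytes."""
        return link_bps * rtt_s / 8

    def rcp_buffer_bytes(link_bps: float, rtt_s: float,
                         fraction: float = 0.10) -> float:
        """Buffer size as a fraction of the BDP."""
        return fraction * bdp_bytes(link_bps, rtt_s)

    # A 10 Gb/s link with a 100 ms round-trip time:
    bdp = bdp_bytes(10e9, 0.100)          # 125 MB
    buf = rcp_buffer_bytes(10e9, 0.100)   # 12.5 MB
    print(f"BDP = {bdp/1e6:.1f} MB, 10% buffer = {buf/1e6:.1f} MB")
    ```

    For comparison, the classical rule of one full BDP would demand ten times this amount of fast buffer memory, which is the difficulty in high-speed routers that motivates the work.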

    Darryl Veitch
  • Stefan Frei, Thomas Duebendorfer, and Bernhard Plattner

    Although there is an increasing trend for attacks against popular Web browsers, little is known about the actual patch level of Web browsers in daily use on a global scale. We conjecture that users in large part do not actually patch their Web browsers based on recommendations, perceived threats, or any security warnings. Based on HTTP User-Agent header information stored in anonymized logs from Google’s web servers, we measured the patch dynamics of about 75% of the world’s Internet users for over a year. Our focus was on the Web browsers Firefox and Opera. We found that the patch level achieved is mainly determined by the ergonomics and default settings of built-in auto-update mechanisms. Firefox’s auto-update is very effective: most users installed a new version within three days. However, the maximum share of the latest, most secure version never exceeded 80% for Firefox users and 46% for Opera users on any day in 2007. This makes about 50 million Firefox users with outdated browsers an easy target for attacks. Our study is the result of the first global-scale measurement of the patch dynamics of a popular browser.
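    The measurement technique rests on the browser version advertised in the User-Agent header. A minimal sketch of that extraction step (the regular expression and sample header are illustrative assumptions, not the authors' code):

    ```python
    import re

    # Hypothetical sketch: pull a Firefox version out of an HTTP User-Agent
    # string, the kind of anonymized log field the study's measurements use.

    FIREFOX_RE = re.compile(r"Firefox/(\d+(?:\.\d+)*)")

    def firefox_version(user_agent: str):
        """Return the advertised Firefox version, or None for other browsers."""
        m = FIREFOX_RE.search(user_agent)
        return m.group(1) if m else None

    ua = ("Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.11) "
          "Gecko/20071127 Firefox/2.0.0.11")
    print(firefox_version(ua))  # → 2.0.0.11
    ```

    Comparing the extracted version against the latest release on each day is then enough to chart the patch-level share over time.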

    Dmitri Krioukov
  • David A. Hayes, Jason But, and Grenville Armitage

    A Stream Control Transmission Protocol (SCTP) capable Network Address Translation (NAT) device is necessary to support the wider deployment of the SCTP protocol. The key issues for an SCTP NAT are SCTP’s control chunk multiplexing and multi-homing features. Control chunk multiplexing can expose an SCTP NAT to possible Denial of Service attacks. These can be mitigated through the use of chunk and parameter processing limits.

    Multiple and changing IP addresses during an SCTP association mean that SCTP NATs cannot operate in the way conventional UDP/TCP NATs operate. Tracking these multiple global IP addresses can help in avoiding lookup table conflicts; however, it can also result in circumstances that lead to NAT state inconsistencies. Our analysis shows that tracking global IP addresses is not necessary in most expected practical installations.

    We use our FreeBSD SCTP NAT implementation, alias_sctp, to examine the performance implications of tracking global IP addresses. We find that typical memory usage doubles and that the processing requirements are significant for installations that experience high association arrival rates.

    In conclusion, we provide practical recommendations for a secure, stable SCTP NAT installation.

    Chadi Barakat
  • Tao Ye, Darryl Veitch, and Jean Bolot

    Data confidentiality over mobile devices can be difficult to secure due to a lack of computing power and weak supporting encryption components. However, modern devices often have multiple wireless interfaces with diverse channel capacities and security capabilities. We show that the availability of diverse, heterogeneous links (physical or logical) between nodes in a network can be used to increase data confidentiality, on top of the availability or strength of underlying encryption techniques. We introduce a new security approach using multiple channels to transmit data securely, based on the ideas of deliberate corruption and information reduction, and we analyze its security using the information-theoretic concepts of secrecy capacity and the wiretap channel. Our work introduces the idea of channel design with security in mind.
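    The core multi-channel intuition can be illustrated with a deliberately simple sketch: split a message into two shares sent over different links, so an eavesdropper who observes only one link learns nothing. This is a one-time-pad-style split chosen for illustration; the paper's actual scheme, based on deliberate corruption and information reduction, is different.

    ```python
    import os

    # Illustrative two-channel split (not the authors' scheme): each share,
    # taken alone, is statistically independent of the secret, so a single
    # compromised channel reveals nothing.

    def split(secret: bytes):
        """Split a secret into two shares for two separate channels."""
        share_a = os.urandom(len(secret))                        # channel A
        share_b = bytes(s ^ a for s, a in zip(secret, share_a))  # channel B
        return share_a, share_b

    def combine(share_a: bytes, share_b: bytes) -> bytes:
        """XOR the shares back together at the receiver."""
        return bytes(a ^ b for a, b in zip(share_a, share_b))

    msg = b"confidential"
    a, b = split(msg)
    assert combine(a, b) == msg
    ```

    The security of such a split rests entirely on the adversary not seeing both channels, which is exactly the heterogeneous-link assumption the abstract highlights.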

    Suman Banerjee
  • David Andersen

    Jay Lepreau was uncharacteristically early in his passing, but left behind him a trail of great research, rigorously and repeatably evaluated systems, and lives changed for the better.

  • Matthew Mathis

    The current Internet fairness paradigm mandates that all protocols have equivalent response to packet loss and other congestion signals, allowing relatively simple network devices to attain a weak form of fairness by sending uniform signals to all flows. Our paper [1], which recently received the ACM SIGCOMM Test of Time Award, modeled the reference Additive-Increase-Multiplicative-Decrease algorithm used by TCP. However, in many parts of the Internet ISPs are choosing to explicitly control customer traffic, because the traditional paradigm does not sufficiently enforce fairness in a number of increasingly common situations. This editorial note takes the position that we should embrace this paradigm shift, which will eventually move the responsibility for capacity allocation from the end-systems to the network itself. This paradigm shift might eventually eliminate the requirement that all protocols be “TCP-friendly”.

  • Luis M. Vaquero, Luis Rodero-Merino, Juan Caceres, and Maik Lindner

    This paper discusses the concept of Cloud Computing to achieve a complete definition of what a Cloud is, using the main characteristics typically associated with this paradigm in the literature. More than 20 definitions have been studied allowing for the extraction of a consensus definition as well as a minimum definition containing the essential characteristics. This paper pays much attention to the Grid paradigm, as it is often confused with Cloud technologies. We also describe the relationships and distinctions between the Grid and Cloud approaches.

  • L. Lily Yang

    60 GHz is considered the most promising technology to deliver gigabit wireless for indoor communications. We propose to integrate 60 GHz radio with the existing Wi-Fi radio in 2.4/5 GHz band to take advantage of the complementary nature of these different bands. This integration presents an opportunity to provide a unified technology for both gigabit Wireless Personal Area Networks (WPAN) and Wireless Local Area Networks (WLAN), thus further reinforcing the technology convergence that is already underway with the widespread adoption of Wi-Fi technology. Many open research questions remain to make this unified solution work seamlessly for WPAN and WLAN.

  • Dola Saha, Dirk Grunwald, and Douglas Sicker

    Advances in networking have been accelerated by the use of abstractions, such as “layering”, and the ability to apply those abstractions across multiple communication media. Wireless communication provides the greatest challenge to these clean abstractions because of the lossy communication media. For many networking researchers, wireless communications hardware starts and ends with WiFi, or 802.11 compliant hardware.

    However, there has been a recent growth in software defined radio, which allows the basic radio medium to be manipulated by programs. This mutable radio layer has allowed researchers to exploit the physical properties of radio communication to overcome some of the challenges of the radio media; in certain cases, researchers have been able to develop mechanisms that are difficult to implement in electrical or optical media. In this paper, we describe the different design variants for software radios, their programming methods and survey some of the more cutting edge uses of those radios.

  • Albert Greenberg, James Hamilton, David A. Maltz, and Parveen Patel

    The data centers used to create cloud services represent a significant investment in capital outlay and ongoing costs. Accordingly, we first examine the costs of cloud service data centers today. The cost breakdown reveals the importance of optimizing work completed per dollar invested. Unfortunately, the resources inside the data centers often operate at low utilization due to resource stranding and fragmentation. To attack this first problem, we propose (1) increasing network agility, and (2) providing appropriate incentives to shape resource consumption. Second, we note that cloud service providers are building out geo-distributed networks of data centers. Geo-diversity lowers latency to users and increases reliability in the presence of an outage taking out an entire site. However, without appropriate design and management, these geo-diverse data center networks can raise the cost of providing service. Moreover, leveraging geo-diversity requires services be designed to benefit from it. To attack this problem, we propose (1) joint optimization of network and data center resources, and (2) new systems and mechanisms for geo-distributing state.

  • Henning Schulzrinne

    While most of us are involved in organizing conferences in some way, we probably do not pay too much attention to the organizational model of these events. This is somewhat surprising, given that conferences are probably the most visible activity of most professional societies, and also entail significant expenditures of money and volunteer labor. While the local square dance club with a $500 annual budget probably has bylaws and statutes, most conferences with hundred-thousand-dollar budgets operate more by oral tradition than by formal descriptions of responsibilities. In almost all cases, this works just fine, but this informality can lead to misunderstandings or problems when expectations differ among the volunteers or when there is a crisis. Thus, I believe that it is helpful to have clearer models, so that conferences and volunteers can reach a common understanding of what is expected of everybody who contributes their time to the conference, and also who is responsible when things go wrong. For long-running conferences, the typical conference organization involves four major actors: the sponsoring professional organization, a steering committee, the general chairs, and the technical program chairs. However, the roles and reporting relationships seem to differ rather dramatically between different conferences.

  • Michalis Faloutsos

    It is clearly a time of change. Naturally, I am talking about the change of person in the top post: “Le Boss Grand” Christophe Diot is replaced by “Canadian Chief, Eh?” Srinivasan Keshav. Coincidence? No, my friends. This change of the guards in the most powerful position in CCR, and, by some stretch of the imagination, in SIGCOMM, and, by arbitrary extension, in the scientific world at large, is just the start of a period of change that will be later known as the Great Changes. Taking the cue from that, I say, let’s change.
