While many initiatives have been discussing the future of the Internet, the EIFFEL think tank set out to push these discussions forward by assembling a community of researchers debating the potential thrusts towards the Future Internet. This article provides an account of these discussions, addressed both to the EIFFEL membership and, more widely, to members of the community interested in medium- to long-term developments. We outline immediate problems as well as potentially missed opportunities in the current Internet, while focusing the debate about the need for a Future Internet on the style in which we conduct research as well as on how we design systems at large. Most importantly, we recognize that an inter-disciplinary dialogue is required to formulate actions to be taken and to remove barriers to their realization.
Personal access points and travel routers have grown increasingly popular with the advent of smaller and more battery-conscious devices. This article introduces YAAP (Yet Another Access Point), an open-source software package that enables ad-hoc infrastructure services in the absence of network connectivity. The YAAP provides a familiar way to connect users to each other and offers a set of useful services. We list and describe its applications, explain how it can be used, provide details about the code, and point readers to where it can be downloaded.
As a member of the SIGCOMM TPC, I recently had a chance to read over thirty submissions by the best and brightest in our field. Imagine my surprise to find that all but one of them had significant flaws in statistical analysis. These flaws were severe enough that in many other disciplines the papers would have been summarily rejected. Yet, a few of them not only were accepted but also are likely to become role models for the next generation of students. For, though statistically flawed, they were not far from common practice, so it was felt that they should not be unfairly punished.
In this editorial, I will focus on why statistical analysis matters, three common statistical errors I saw, why I think we have a rather relaxed approach to statistical analysis, and what we can do about it.
In conducting experiments, only the most trivial situations allow a comprehensive exploration of the underlying parameter space: simulations and measurements alike allow us to explore only a small portion of the space. One role of statistical analysis is to guide the selection of the parameter space using techniques from experimental design.
A second role of statistical analysis is to allow a researcher to draw cautious and justifiable conclusions from a mass of numbers. Over a hundred years of work has created tried-and-tested techniques that allow researchers to compensate for unavoidable measurement errors, and to infer with high probability that the improvement seen due to a particular algorithm or system is significant, rather than due to mere luck. Without statistical analysis, one is on thin ice.
These two roles of statistical analysis make it an essential underpinning for networking research, especially for experimental design, measurement, and performance analysis. Unfortunately, despite its importance, papers in our field--both submissions to SIGCOMM and published papers in similar top-tier conferences--suffer from severe statistical errors. The three most common errors I found were: (1) confusing a sample with a population and, as a corollary, not specifying the underlying population; (2) not presenting confidence intervals for sample statistics; and (3) incorrect hypothesis testing.
Most authors did not seem to realize that their measurements represented a sample from a much larger underlying population. Consider the measured throughput during ten runs between two nodes using a particular wireless protocol. These values are a sample of the population of node-to-node throughputs obtained using all possible uses of the protocol under all conceivable circumstances. Wireless performance may, however, vary widely depending on the RF environment. Therefore, the sample can be considered to be representative only if every likely circumstance has a proportionate chance of being represented. Authors need to strongly argue that the sample measurements are chosen in a way that sufficiently covers the underlying parameter space. Otherwise, the sample represents nothing more than itself! Yet, this test for scientific validity was rarely discussed in most papers.
Given that a sample is not the population, it is imperative that the statistics of a sample be presented along with a confidence interval in which, with high confidence, the population parameters lie. These are the familiar error bars in a typical graph or histogram. Lacking error bars, we cannot interpret the characteristics of the population with any precision; we can only draw conclusions about the sample, which is necessarily limited. To my surprise, only one paper I read had error bars. This is a serious flaw.
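A hedged sketch of how such an interval can be computed, using only the Python standard library (the throughput numbers below are hypothetical, and a normal approximation is used where a t-distribution would be more precise for only ten samples):

```python
import math
from statistics import NormalDist, mean, stdev

def confidence_interval(sample, confidence=0.95):
    """Normal-approximation confidence interval for the population mean."""
    m = mean(sample)
    sem = stdev(sample) / math.sqrt(len(sample))    # standard error of the mean
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # ~1.96 for a 95% interval
    return m - z * sem, m + z * sem

# Ten hypothetical throughput measurements (Mbit/s) between two nodes
throughputs = [5.1, 4.8, 5.4, 5.0, 4.9, 5.2, 4.7, 5.3, 5.0, 4.9]
lo, hi = confidence_interval(throughputs)
print(f"mean = {mean(throughputs):.2f} Mbit/s, 95% CI = ({lo:.2f}, {hi:.2f})")
```

Reporting the pair (lo, hi) as error bars tells the reader how far the sample mean may plausibly lie from the population mean.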
Finally, it is axiomatic in statistical analysis that a hypothesis cannot be proved; it can only be rejected or not rejected. Hypothesis testing requires carefully framing a null hypothesis and using standard statistical analysis to either reject or not reject it. I realize that in many cases the null hypothesis is obvious and need not be formally stated. Nevertheless, it appeared that none of the papers I read had tested hypotheses with adequate care.
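As an illustration of that framing, the sketch below (hypothetical data, standard library only) uses a permutation test: the null hypothesis is that a baseline and a modified protocol draw their throughputs from the same distribution, and the test either rejects it or fails to reject it:

```python
import random
from statistics import mean

def permutation_test(a, b, trials=10_000, seed=1):
    """Two-sided permutation test of the null hypothesis that samples
    a and b come from the same distribution; returns a p-value."""
    rng = random.Random(seed)
    observed = abs(mean(a) - mean(b))
    pooled = list(a) + list(b)
    extreme = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        diff = abs(mean(pooled[:len(a)]) - mean(pooled[len(a):]))
        if diff >= observed:
            extreme += 1
    return extreme / trials

# Hypothetical throughputs (Mbit/s) for a baseline and a modified protocol
baseline = [5.0, 4.9, 5.1, 5.0, 4.8, 5.2, 4.9, 5.1]
modified = [5.6, 5.4, 5.7, 5.5, 5.3, 5.6, 5.8, 5.4]
p = permutation_test(baseline, modified)
# Reject the null hypothesis at the 5% level only if p < 0.05; otherwise
# we "fail to reject" it -- we never prove the alternative hypothesis.
```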
Why do papers in our field lack statistical rigor? I think that one reason could be that we teach statistics too early in the academic curriculum. Students who learn statistical inference and hypothesis testing as a chore in a freshman class have all but forgotten it by the time they are writing papers. In my own case, I am embarrassed to admit that I did not thoroughly understand these techniques until I recently wrote a chapter on statistical techniques for networking researchers. I suspect that many of my colleagues are in the same boat. Unfortunately, this makes our weakness self-perpetuating. Having forgotten statistical analysis, we are neither in a position to carry it out properly, nor do we insist upon it during peer review. Thus, we stumble from one flawed paper to the next, continuing the cycle.
What can we do about this? I suggest that all graduate students be required to take a course on statistical analysis. This need not be a formal course, but could be taken online or using distance education. The concepts are well known and the techniques are explained in numerous textbooks. We just need to buy into the agenda. Second, I think that we need to raise the bar during paper evaluation. Inadequate statistical analysis should be pointed out and should form one criterion for paper rejection. For papers with good experimental results but with poor statistical analysis, we should insist that these issues be rectified during shepherding. Finally, we need to educate the educators. Perhaps SIGCOMM can sponsor online or offline tutorials where researchers can quickly come up to speed in statistical analysis.
These steps will raise the scientific merit of our discipline, and, more importantly, prevent us from accepting incorrect results due to flaws in statistical analysis.
When data transfers to or from a host happen in parallel, users do not always consider them to have the same importance. Ideally, a transport protocol should therefore allow its users to manipulate the fairness among flows in an almost arbitrary fashion. Since data transfers can also include real-time media streams which need to keep delay, and hence buffers, small, the protocol should also have a smooth sending rate. In an effort to satisfy the above requirements, we present MulTFRC, a congestion control mechanism which is based on the TCP-friendly Rate Control (TFRC) protocol. It emulates the behavior of a number of TFRC flows while maintaining a smooth sending rate. Our simulations and a real-life test demonstrate that MulTFRC performs significantly better than its competitors, potentially making it applicable in a broader range of settings than what TFRC is normally associated with.
This paper presents a statistical analysis of the amount of information that the features of traffic flows observed at the packet level carry with respect to the protocol that generated them. We show that the amount of information carried by the majority of such features remains constant irrespective of the point of observation (Internet core vs. Internet edge) and of the capture time (year 2000/01 vs. year 2008). We also describe a comparative analysis of how four statistical classifiers fare using the features we studied.
The Internet is a primary engine of innovation, communication, and entertainment in our society. It is used by diverse and potentially conflicting interests. When these interests collide in the form of technology, should the Internet stay true to its roots and let things take their course naturally, or should regulation and legislation be enacted to control what may or may not be done? This author believes the public interest is best served by light regulation and a strong focus on allowing technological innovation to take its natural course.
Although research on algorithms and communication protocols in Wireless Sensor Networks (WSN) has attracted a tremendous effort so far, most of these protocols are hardly used in real deployments nowadays. Several reasons have been put forward in recent publications. In this paper, we further investigate this trend from a Medium Access Control (MAC) perspective by analyzing both the reasons behind successful deployments and the characteristics of the MAC layers proposed in the literature. The effort allocated to developing suitable protocols from scratch for every new deployment could, however, be minimized by using already existing contributions which provide code reuse and adaptive protocols. Though we advocate their use for today's deployments, we have identified several shortcomings in foreseen scenarios, for which we provide guidelines for future research.
In this paper we discuss the current status of the Global Environment for Network Innovations (GENI). Early prototypes of GENI are starting to come online as an end-to-end system, and network researchers are invited to participate by engaging in the design process or using GENI to conduct experiments.
A US-Japan Workshop on Future Networks was held in Palo Alto, CA on October 31 - November 1, 2008. This workshop brought together leading US and Japanese network researchers and network research infrastructure developers who are interested in future networks. The focus was on research issues and experimental infrastructure to support research on future generation networks. The goal was to foster cooperation and communication between peers in the two countries. Through this workshop, a number of research challenges were identified. The workshop also made recommendations to: create a new funding mechanism to foster special collaborations in networking and experimentation; extend current national testbeds with international connectivity; and urge the respective governments to exert strong leadership to ensure that collaborative research for creating future networks is carried out.
As the community develops an ever more diverse set of venues for disseminating and discussing technical work and as the traditional resource constraints change from pages and shelf space to reviewer time, the traditional prohibition against double submission deserves closer consideration. We discuss reasons why double submissions have been frowned upon, and try to establish a set of guidelines that ensure appropriate review and dissemination, without preventing informal early discussion of research projects.
Selecting a technical program for a conference, and running the process so that decisions are well received by authors and participants, is a challenging task. We report our experience in running the SIGCOMM 2009 Technical Program Committee (TPC). The purpose of this article is to document the process that we followed, and discuss it critically. This should let authors get a better understanding of what led to the final acceptance or rejection of their work, and hopefully let organizers of future conferences benefit from our experience.
“Don’t even get me started” is a powerful and direct, yet vague and meaningless, expression. What if you reply: “By all means, please, get started”? I believe that this will stun your opponent (it may be a friendly discussion, but if you don’t believe the other person is an opponent, you will never win). Then, you can quickly grab his/her nose and yell: “Twos before eights, and one bird in the bush”. It can be very entertaining unless the other person is your boss or your psychiatrist. In any case, this column dares to go where no column at CCR has gone before. It is all about getting started. And by that, we mean starting start-ups.
Warning: the content below is not suitable for small children or adults with common sense.
We are living in the worst economic times since the 1930s. The US economy contracted at an annualized rate of 3.8% in the fourth quarter of 2008, the corresponding figure for Japan is 12.7%, and Iceland may become the first post-Depression Western economy to suffer an outright fiscal collapse. Economists tell us that one of the reasons for this worldwide recession is a ‘housing bubble’ where banks overestimated a borrower's ability to pay back a loan and where house buyers – armed with cheap loans – overestimated the true worth of a house.
The recent Internet bubble is still fresh in some of our minds, where there was a similar overestimation of the true worth of Internet-enabled businesses. That bubble crashed too, with consequences suffered by the entire economy.
Unfortunately, bubbles are not uncommon in networking research. Certain topics appear seemingly from nowhere, become ‘hot,’ propelled by interest from both leading researchers and funding agencies, and just as mysteriously die off, leaving behind a flood of papers, mostly in second- and third-tier conferences, written by authors only too keen to jump on a trend. Bubbles lead to an overexpenditure of research effort on marginal topics, wasting resources and breeding a certain degree of cynicism amongst our brightest young minds. Moreover, they drain resources from more deserving but less hyped ventures. Can our experience with economic bubbles shed light on research bubbles and teach us how to avoid them?
Both economic and research bubbles share some similarities, such as having unrealistic expectations about what can be achieved by a company, the real-estate market, or a new technology. Bubble participants either naively or cynically invest time and money in solutions and technologies whose success is far from assured and whose widespread adoption would require the complete overthrow of legacy infrastructure. To avoid being caught in a bubble, or to merely avoid being caught in the tail end of one (being at the leading edge of a bubble is both fun and profitable!), ask tough questions about the underlying assumptions. In the midst of the housing bubble, could one have pointed out that housing prices could go down as easily as they could go up? Could anyone have believed in the ’90s that videoconferencing, ATM, RSVP and other 'hot' topics would soon be consigned to the midden heap of history? I think so. It only requires the willingness to question every assumption and draw the inevitable conclusions.
I think that in the end, what really inflates a bubble is money. Cheap money from venture capitalists, banks, and funding agencies makes it profitable to enter a bubble and make it grow. So it is important that the gatekeepers of funding be vigilant. They should be prepared to turn down applications for funding that smack of riding the bubble. Experienced researchers should willingly serve on grant panels, and should be prepared to be critical of even their favourite areas of research if necessary.
Finally, bubbles can be identified and quashed by an active media. The press should have more deeply questioned the Internet and housing bubbles. Research conferences in our field should do the same for research bubbles. Paper reviewers and program committees thus play the same role as investigative journalists.
This is not to say that all speculative ideas should be systematically de-funded and rejected. There should always be room for open-minded, blue-sky research. However, this activity should be limited and clearly identified. Perhaps every conference should have blue-sky sessions where all assumptions are left unchallenged (our community has done this with recent papers on ‘clean-slate’ designs). The best of these ideas, when proven to be sound, could then be funded and widely adopted.
Of course, I am assuming that we can get out of bubbles by rational means. Humans are all too fallible, however, and bubble thinking plays on human foibles. Worse, there is an incentive structure that encourages bubble formation: people at the leading edge of a bubble are disproportionately rewarded, and people at the tail end can point to a large body of literature (emerging from top-ranked places!) to justify their work, which reduces their cognitive effort. So, bubbles may be here to stay.
Nevertheless, given the destructive effects of bubbles over the long term, I suggest that we look out for them, deflating them before they deflate us!
This paper reports some observations on the relationships between three measures of the size of the Internet over more than ten years. The size of the BGP4 routing table, the number of active BGP4 Autonomous Systems, and a lower bound on the total size of the Internet, appear to have fairly simple relationships despite the Internet’s growth by two orders of magnitude. In particular, it is observed that the size of the BGP4 system appears to have grown approximately in proportion to the square root of the lower-bound size of the globally addressable Internet. A simple model that partially explains this square law is described. It is not suggested that this observation and model have predictive value, since they cannot predict qualitative changes in the Internet topology. However, they do offer a new way to understand and monitor the scaling of the BGP4 system.
Careless selection of the ephemeral port number portion of a transport protocol’s connection identifier has been shown to potentially degrade security by opening the connection up to injection attacks from “blind” or “off-path” attackers, that is, attackers that cannot directly observe the connection. This short paper empirically explores a number of algorithms for choosing the ephemeral port number that attempt to obscure the choice from such attackers and hence make mounting these blind attacks more difficult.
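The simplest such strategy is plain randomization over the IANA dynamic port range, in the spirit of RFC 6056; a hedged sketch (the function and its collision handling are illustrative, not taken from the paper):

```python
import random

# IANA dynamic/ephemeral port range
EPHEMERAL_LO, EPHEMERAL_HI = 49152, 65535

def random_ephemeral_port(in_use, rng=random):
    """Draw an ephemeral port uniformly at random, retrying on collision,
    so a blind attacker cannot predict the next port from earlier ones."""
    while True:
        port = rng.randint(EPHEMERAL_LO, EPHEMERAL_HI)
        if port not in in_use:
            return port

port = random_ephemeral_port(in_use=set())
```

Sequential allocation, by contrast, lets an off-path attacker guess the next port after observing (or inducing) one earlier connection, which is exactly what obfuscating algorithms try to prevent.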
The Internet has seen a proliferation of specialized middlebox devices that carry out crucial network functionality such as load balancing, packet inspection and intrusion detection. Recent advances in CPU power, memory, buses and network connectivity have turned commodity PC hardware into a powerful network platform. Furthermore, commodity switch technologies have recently emerged offering the possibility to control the switching of flows in a fine-grained manner. Exploiting these new technologies, we present a new class of network architectures which enables flow processing and forwarding at unprecedented flexibility and low cost.
This paper addresses the open problem of locating an attacker that intentionally hides or falsifies its position using advanced radio technologies. A novel attacker localization mechanism, called Access Point Coordinated Localization (APCL), is proposed for IEEE 802.11 networks. APCL actively forces the attacker to reveal its position information by combining access point (AP) coordination with the traditional range-free localization. The optimal AP coordination process is calculated by modeling it as a finite horizon discrete Markov decision process, which is efficiently solved by an approximation algorithm. The performance advantages are verified through extensive simulations.
The past few years have witnessed a lot of debate on how large Internet router buffers should be. The widely believed rule of thumb used by router manufacturers today mandates a buffer size equal to the delay-bandwidth product. This rule was first challenged by researchers in 2004, who argued that if there are a large number of long-lived TCP connections flowing through a router, then the buffer size needed is equal to the delay-bandwidth product divided by the square root of the number of long-lived TCP flows. The publication of this result has since reinvigorated interest in the buffer sizing problem, with numerous other papers exploring this topic in further detail, ranging from papers questioning the applicability of this result, to papers proposing alternate schemes, to papers developing new congestion control algorithms.
This paper provides a synopsis of the recently proposed buffer sizing strategies and broadly classifies them according to their desired objective: link utilisation and per-flow performance. We discuss the pros and cons of these different approaches. These prior works study buffer sizing purely in the context of TCP. Subsequently, we present arguments that take into account both real-time and TCP traffic. We also report on the performance studies of various high-speed TCP variants and experimental results for networks with limited buffers. We conclude this paper by outlining some interesting avenues for further research.
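Both sizing rules reduce to simple arithmetic; the sketch below uses illustrative link parameters (the specific numbers are examples, not drawn from the paper):

```python
import math

def buffer_size_bytes(rtt_s, link_bps, n_flows=1):
    """Buffer size per the 2004 'small buffers' result: the delay-bandwidth
    product divided by sqrt(number of long-lived flows). With n_flows=1
    this reduces to the classical rule of thumb."""
    bdp_bytes = rtt_s * link_bps / 8  # delay-bandwidth product in bytes
    return bdp_bytes / math.sqrt(n_flows)

# Illustrative: a 10 Gbit/s link with a 250 ms round-trip time
classic = buffer_size_bytes(0.25, 10e9)        # full BDP: 312.5 MB
small = buffer_size_bytes(0.25, 10e9, 10_000)  # with 10,000 flows: ~3.1 MB
```

The hundredfold reduction in the second case is why the 2004 result reinvigorated the debate: it suggests core routers with many flows can make do with far smaller, cheaper buffers.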
This article summarises the presentations and discussions during a workshop on end-to-end protocols for the future Internet in June 2008. The aim of the workshop was to establish a dialogue at the interface between two otherwise fairly distinct communities working on future Internet protocols: those developing internetworking functions and those developing end-to-end transport protocols. The discussion established near-consensus on some of the open issues, such as the preferred placement of traffic engineering functionality, whereas other questions remained controversial. New research agenda items were also identified.
It has been proposed that research in certain areas is to be avoided when those areas have gone cold. While previous work concentrated on detecting the temperature of a research topic, this work addresses the question of changing the temperature of said topics. We make suggestions for a set of techniques to re-heat a topic that has gone cold. In contrast to other researchers who propose uncertain approaches involving creativity, lateral thinking and imagination, we concern ourselves with deterministic approaches that are guaranteed to yield results.
One expectation about the future Internet is the participation of billions of sensor nodes, integrating the physical with the digital world. This Internet of Things can offer new and enhanced services and applications based on knowledge about the environment and the entities within. Millions of micro-providers could come into existence, forming a highly fragmented marketplace with new business opportunities to offer commercial services. In the related field of Internet and telecommunication services, the design of markets and pricing schemes has been a vital research area in itself. We discuss how these findings can be transferred to the Internet of Things. Both the appropriate market structure and the corresponding pricing schemes need to be well understood to enable commercial success for sensor-based services. We show some steps that an evolutionary establishment of this market might have to take.
In double-blind reviewing (DBR), both reviewers and authors are unaware of each other's identities and affiliations. DBR is said to increase review fairness. However, DBR may only be marginally effective in combating the randomness of the typical conference review process for highly-selective conferences. DBR may also make it more difficult to adequately review conference submissions that build on earlier work of the authors and have been partially published in workshops. I believe that DBR mainly increases the perceived fairness of the reviewing process, but that may be an important benefit. Rather than waiting until the final stages, the reviewing process needs to explicitly address the issue of workshop publications early on.
They say that music and mathematics are intertwined. I am not sure this is true, but I always wanted to use the word intertwined. The point is that my call for poetry received an overwhelmingly enthusiastic response from at least five people. My mailbox was literally flooded (I have a small mailbox). This article is a tribute to the poetry of science, or, as I like to call it, the Poetry of Science. You will be amazed.
It is an honour for me to take over the Editorship of CCR from Christophe Diot. In his four years at the helm, Christophe brought in great energy and his inimitable style. More importantly, he put into place policies and processes that streamlined operations, made the magazine a must-read for SIGCOMM members, and improved the visibility of the articles and authors published here. For all this he has our sincere thanks.
In my term as Editor, I would like to build on Christophe's legacy. I think that the magazine is well-enough established that the Editorial Board can afford to experiment with a few new ideas. Here are some ideas we have in the works.
First, we are going to limit all future papers in CCR, both editorial and peer-reviewed content, to six pages. This will actively discourage making CCR a burial ground for papers that were rejected elsewhere.
Second, we will be proactive in seeking timely editorial content. We want SIGCOMM members to view CCR as a forum to publish new ideas, working hypotheses, and opinion pieces. Even surveys and tutorials, as long as they are timely, are welcome.
Third, we want to encourage participation by industry, researchers and practitioners alike. We request technologists and network engineers in the field to submit articles to CCR outlining issues they face in their work, issues that can be taken up by academic researchers, who are always seeking new problems to tackle.
Fourth, we would like to make use of online technologies to make CCR useful to its readership. In addition to CCR Online, we are contemplating an arXiv-like repository where papers can be submitted and reviewed. Importantly, readers could ask to be notified when papers matching certain keywords are submitted. This idea is still in its early stages: details will be worked out over the next few months.
Setting these practicalities aside, let me now turn my attention to an issue that sorely needs our thoughts and innovations: the use of computer networking as a tool to solve real-world problems.
The world today is in the midst of several crises: climate change, the rapidly growing gap between the haves and the have-nots, and the potential for epidemic outbreaks of infectious diseases, to name but a few. As thinking, educated citizens of the world, we cannot but be challenged to do our part in averting the worst effects of these global problems.
Luckily, computer networking researchers and professionals have an important role to play. For instance: We can use networks to massively monitor weather and to allow high quality videoconferences that avoid air travel. We can use wired and wireless sensors to greatly reduce the inefficiencies of our heating and cooling systems. We can provide training videos to people at the ‘bottom-of-the-pyramid’ that can open new horizons to them and allow them to earn a better living. We can spread information that can lead to the overthrow of endemic power hierarchies through the miracles of cheap cell phones and peer-to-peer communication. We can help monitor infectious diseases in the remote corners of the world and help coordinate rapid responses to them.
We have in our power the ideas and the technologies that can make a difference. And we must put these to good use.
CCR can and should become the forum where the brilliant minds of today are exposed to the problems, the real problems, that face us, and where solutions are presented, critiqued, improved, and shared. This must be done. Let’s get started!
Buffer sizing has received a lot of attention recently since it is becoming increasingly difficult to use large buffers in highspeed routers. Much of the prior work has concentrated on analyzing the amount of buffering required in core routers assuming that TCP carries all the data traffic. In this paper, we evaluate the amount of buffering required for RCP on a single congested link, while explicitly modeling flow arrivals and departures. Our theoretical analysis and simulations indicate that buffer sizes of about 10% of the bandwidth-delay product are sufficient for RCP to deliver good performance to end-users.
Although there is an increasing trend of attacks against popular Web browsers, little is known about the actual patch level of daily used Web browsers on a global scale. We conjecture that users in large part do not actually patch their Web browsers based on recommendations, perceived threats, or any security warnings. Based on HTTP user-agent header information stored in anonymized logs from Google’s web servers, we measured the patch dynamics of about 75% of the world’s Internet users for over a year. Our focus was on the Web browsers Firefox and Opera. We found that the patch level achieved is mainly determined by the ergonomics and default settings of built-in auto-update mechanisms. Firefox’s auto-update is very effective: most users installed a new version within three days. However, the maximum share of the latest, most secure version never exceeded 80% for Firefox users and 46% for Opera users on any day in 2007. This makes about 50 million Firefox users with outdated browsers an easy target for attacks. Our study is the result of the first global-scale measurement of the patch dynamics of a popular browser.
A Stream Control Transmission Protocol (SCTP) capable Network Address Translation (NAT) device is necessary to support the wider deployment of the SCTP protocol. The key issues for an SCTP NAT are SCTP’s control chunk multiplexing and multi-homing features. Control chunk multiplexing can expose an SCTP NAT to possible Denial of Service attacks. These can be mitigated through the use of chunk and parameter processing limits.
Multiple and changing IP addresses during an SCTP association mean that SCTP NATs cannot operate in the way conventional UDP/TCP NATs operate. Tracking these multiple global IP addresses can help in avoiding lookup table conflicts; however, it can also result in circumstances that can lead to NAT state inconsistencies. Our analysis shows that tracking global IP addresses is not necessary in most expected practical installations.
We use our FreeBSD SCTP NAT implementation, alias_sctp, to examine the performance implications of tracking global IP addresses. We find that typical memory usage doubles and that the processing requirements are significant for installations that experience high association arrival rates.
In conclusion, we provide practical recommendations for a secure, stable SCTP NAT installation.
Data confidentiality over mobile devices can be difficult to secure due to a lack of computing power and weak supporting encryption components. However, modern devices often have multiple wireless interfaces with diverse channel capacities and security capabilities. We show that the availability of diverse, heterogeneous links (physical or logical) between nodes in a network can be used to increase data confidentiality, on top of the availability or strength of underlying encryption techniques. We introduce a new security approach that uses multiple channels to transmit data securely, based on the ideas of deliberate corruption and information reduction, and analyze its security using the information-theoretic concepts of secrecy capacity and the wiretap channel. Our work introduces the idea of channel design with security in mind.
Jay Lepreau was uncharacteristically early in his passing, but left behind him a trail of great research, rigorously and repeatably evaluated systems, and lives changed for the better.
The current Internet fairness paradigm mandates that all protocols have an equivalent response to packet loss and other congestion signals, allowing relatively simple network devices to attain a weak form of fairness by sending uniform signals to all flows. Our paper, which recently received the ACM SIGCOMM Test of Time Award, modeled the reference Additive-Increase Multiplicative-Decrease algorithm used by TCP. However, in many parts of the Internet, ISPs are choosing to explicitly control customer traffic, because the traditional paradigm does not sufficiently enforce fairness in a number of increasingly common situations. This editorial note takes the position that we should embrace this paradigm shift, which will eventually move the responsibility for capacity allocation from the end-systems to the network itself. This shift might eventually eliminate the requirement that all protocols be "TCP-friendly".
This paper studies the concept of Cloud Computing in order to arrive at a complete definition of what a Cloud is, using the main characteristics typically associated with this paradigm in the literature. More than 20 definitions have been studied, allowing the extraction of a consensus definition as well as a minimum definition containing the essential characteristics. We pay particular attention to the Grid paradigm, as it is often confused with Cloud technologies, and we describe the relationships and distinctions between the Grid and Cloud approaches.
The 60 GHz band is considered the most promising technology for delivering gigabit wireless indoor communications. We propose integrating 60 GHz radio with the existing Wi-Fi radio in the 2.4/5 GHz bands to take advantage of the complementary nature of these different bands. This integration presents an opportunity to provide a unified technology for both gigabit Wireless Personal Area Networks (WPAN) and Wireless Local Area Networks (WLAN), thus further reinforcing the technology convergence that is already underway with the widespread adoption of Wi-Fi technology. Many open research questions remain before this unified solution works seamlessly for WPAN and WLAN.
Advances in networking have been accelerated by the use of abstractions, such as "layering", and by the ability to apply those abstractions across multiple communication media. Wireless communication poses the greatest challenge to these clean abstractions because of its lossy medium. For many networking researchers, wireless communications hardware starts and ends with WiFi, i.e., 802.11-compliant hardware.
However, there has been a recent growth in software defined radio, which allows the basic radio medium to be manipulated by programs. This mutable radio layer has allowed researchers to exploit the physical properties of radio communication to overcome some of the challenges of the radio medium; in certain cases, researchers have been able to develop mechanisms that are difficult to implement in electrical or optical media. In this paper, we describe the different design variants for software radios and their programming methods, and survey some of the more cutting-edge uses of those radios.
The data centers used to create cloud services represent a significant investment in capital outlay and ongoing costs. Accordingly, we first examine the costs of cloud service data centers today. The cost breakdown reveals the importance of optimizing work completed per dollar invested. Unfortunately, the resources inside the data centers often operate at low utilization due to resource stranding and fragmentation. To attack this first problem, we propose (1) increasing network agility, and (2) providing appropriate incentives to shape resource consumption. Second, we note that cloud service providers are building out geo-distributed networks of data centers. Geo-diversity lowers latency to users and increases reliability in the presence of an outage taking out an entire site. However, without appropriate design and management, these geo-diverse data center networks can raise the cost of providing service. Moreover, leveraging geo-diversity requires services be designed to benefit from it. To attack this problem, we propose (1) joint optimization of network and data center resources, and (2) new systems and mechanisms for geo-distributing state.
While most of us are involved in organizing conferences in some way, we probably do not pay too much attention to the organizational model of these events. This is somewhat surprising, given that conferences are probably the most visible activity of most professional societies, and also entail significant expenditures of money and volunteer labor. While the local square dance club with a $500 annual budget probably has bylaws and statutes, most conferences with hundred-thousand-dollar budgets operate more by oral tradition than by formal descriptions of responsibilities. In almost all cases, this works just fine, but this informality can lead to misunderstandings or problems when expectations differ among the volunteers or when there is a crisis. Thus, I believe that it is helpful to have clearer models, so that conferences and volunteers can reach a common understanding of what is expected of everybody who contributes their time to the conference, and also of who is responsible when things go wrong. For long-running conferences, the typical conference organization involves four major actors: the sponsoring professional organization, a steering committee, the general chairs and the technical program chairs. However, the roles and reporting relationships seem to differ rather dramatically between different conferences.
It is clearly a time of change. Naturally, I am talking about the change of person in the top post: “Le Boss Grand” Christophe Diot is replaced by “Canadian Chief, Eh?” Srinivasan Keshav. Coincidence? No, my friends. This change of the guards in the most powerful position in CCR, and, by some stretch of the imagination, in SIGCOMM, and, by arbitrary extension, in the scientific world at large, is just the start of a period of change that will be later known as the Great Changes. Taking the cue from that, I say, let’s change.
This talk surveys some of the highlights and success stories in the mathematical modeling and analysis of the Internet (and other networks). We begin with Kleinrock's seminal work on modeling store-and-forward networks and its extensions, and end with the successful development of fluid models for the current Internet. The rest of the talk focuses on lessons learned from these endeavors and on how modeling and analysis can and will play a role in the development of new networks (e.g., wireless networks, application-level networks). Finally, we conclude that one can have fun modeling networks while at the same time making a living.
IP networks today require massive effort to configure and manage. Ethernet is vastly simpler to manage, but does not scale beyond small local area networks. This paper describes an alternative network architecture called SEATTLE that achieves the best of both worlds: the scalability of IP combined with the simplicity of Ethernet. SEATTLE provides plug-and-play functionality via flat addressing, while ensuring scalability and efficiency through shortest-path routing and hash-based resolution of host information. In contrast to previous work on identity-based routing, SEATTLE ensures path predictability and stability, and simplifies network management. We performed a simulation study driven by real-world traffic traces and network topologies, and used Emulab to evaluate a prototype of our design based on the Click and XORP open-source routing platforms. Our experiments show that SEATTLE efficiently handles network failures and host mobility, while reducing control overhead and state requirements by roughly two orders of magnitude compared with Ethernet bridging.
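The hash-based resolution that lets SEATTLE avoid flooding can be sketched as a consistent-hashing ring: a host's information is stored at the switch whose hash is the closest successor of the host's hashed address. This is a simplified illustration of the idea, with hypothetical switch names, not the SEATTLE implementation:

```python
import hashlib

# Consistent-hashing sketch of SEATTLE-style resolution: each host's
# MAC-to-location mapping is stored at one deterministic "resolver"
# switch, so any switch can locate a host with a single lookup and
# no switch needs to store or flood the full host table.

def _h(value: str) -> int:
    return int(hashlib.sha256(value.encode()).hexdigest(), 16)

def resolver_switch(host_mac: str, switches: list[str]) -> str:
    ring = sorted(switches, key=_h)      # switches placed on a hash ring
    target = _h(host_mac)
    for sw in ring:
        if _h(sw) >= target:             # first switch at or after the host
            return sw
    return ring[0]                       # wrap around the ring
```

Because the mapping is deterministic, every switch agrees on where a host's state lives, and adding or removing a switch moves only the keys adjacent to it on the ring.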
In this paper, we present a new link-state routing algorithm called Approximate Link state (XL) aimed at increasing routing efficiency by suppressing updates from parts of the network. We prove that three simple criteria for update propagation are sufficient to guarantee soundness, completeness and bounded optimality for any such algorithm. We show, via simulation, that XL significantly outperforms standard link-state and distance vector algorithms—in some cases reducing overhead by more than an order of magnitude— while having negligible impact on path length. Finally, we argue that existing link-state protocols, such as OSPF, can incorporate XL routing in a backwards compatible and incrementally deployable fashion.
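The flavor of XL's update suppression can be conveyed with one simplified rule; this is an illustrative stand-in for the paper's three propagation criteria, not the actual XL algorithm:

```python
# Illustrative suppression rule in the spirit of XL: a node re-floods
# a received link-state update only when it raises a cost (soundness
# demands bad news travel) or improves one of the node's own distance
# estimates by more than a (1 + eps) factor. Small improvements are
# suppressed, which is what bounds the resulting path stretch.
# Simplified sketch, not the paper's exact criteria.

def should_propagate(old_dist: float, new_dist: float, eps: float = 0.1) -> bool:
    if new_dist > old_dist:                   # cost increase: always propagate
        return True
    return old_dist > (1 + eps) * new_dist    # only large improvements propagate
```

Under such a rule, routing remains loop-free while most minor cost fluctuations never leave their neighborhood, which is the source of the order-of-magnitude overhead reduction the abstract reports.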
We present path splicing, a new routing primitive that allows network paths to be constructed by combining multiple routing trees (“slices”) to each destination over a single network topology. Path splicing allows traffic to switch trees at any hop en route to the destination. End systems can change the path on which traffic is forwarded by changing a small number of additional bits in the packet header. We evaluate path splicing for intradomain routing using slices generated from perturbed link weights and find that splicing achieves reliability that approaches the best possible using a small number of slices, for only a small increase in latency and no adverse effects on traffic in the network. In the case of interdomain routing, where splicing derives multiple trees from edges in alternate backup routes, path splicing achieves near-optimal reliability and can provide significant benefits even when only a fraction of ASes deploy it. We also describe several other applications of path splicing, as well as various possible deployment paths.
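The forwarding step described above, header bits selecting which slice's next hop to use at each router, can be sketched as follows; the table layout is hypothetical and only illustrates the primitive:

```python
# Path-splicing forwarding sketch: each router holds several routing
# trees ("slices") toward every destination. A few splicing bits in
# the packet header pick the slice consulted at each hop, so end
# systems can steer traffic onto an alternate tree by flipping bits.

def next_hop(forwarding_tables, dst, splicing_bits, hop_index):
    # forwarding_tables: one {destination: next_hop} table per slice
    k = len(forwarding_tables)
    slice_id = splicing_bits[hop_index % len(splicing_bits)] % k
    return forwarding_tables[slice_id][dst]
```

Since a packet may switch trees at any hop, a failure on one slice can be routed around using another slice's next hop, without waiting for the routing protocol to reconverge.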