We are living in the worst economic times since the 1930s. The US economy contracted at an annualized rate of 3.8% in the fourth quarter of 2008, the corresponding figure for Japan is 12.7%, and Iceland may become the first post-depression Western economy to suffer from an outright fiscal collapse. Economists tell us that one of the reasons for this worldwide recession is a ‘housing bubble’ where banks overestimated a borrower's ability to pay back a loan and where house buyers – armed with cheap loans – overestimated the true worth of a house.
The recent Internet bubble is still fresh in some of our minds, where there was a similar overestimation of the true worth of Internet-enabled businesses. That bubble crashed too, with consequences suffered by the entire economy.
Unfortunately, bubbles are not uncommon in networking research. Certain topics appear seemingly from nowhere, become ‘hot,’ propelled by interest from both leading researchers and funding agencies, and just as mysteriously die off, leaving behind a flood of papers, mostly in second- and third-tier conferences, written by authors only too keen to jump on a trend. Bubbles lead to an overexpenditure of research effort on marginal topics, wasting resources and breeding a certain degree of cynicism amongst our brightest young minds. Moreover, they drain resources from more deserving but less hyped ventures. Can our experience with economic bubbles shed light on research bubbles and teach us how to avoid them?
Both economic and research bubbles share some similarities, such as unrealistic expectations about what can be achieved by a company, the real-estate market, or a new technology. Bubble participants either naively or cynically invest time and money in solutions and technologies whose success is far from assured and whose widespread adoption would require the complete overthrow of legacy infrastructure. To avoid being caught in a bubble, or merely to avoid being caught in the tail end of one (being at the leading edge of a bubble is both fun and profitable!), ask tough questions about the underlying assumptions. In the midst of the housing bubble, could one have pointed out that housing prices could go down as easily as they could go up? Could anyone have believed in the ’90s that videoconferencing, ATM, RSVP and other 'hot' topics would soon be consigned to the midden heap of history? I think so. It only requires the willingness to question every assumption and draw the inevitable conclusions.
I think that, in the end, what really inflates a bubble is money. Cheap money from venture capitalists, banks, and funding agencies makes it profitable to enter a bubble and help it grow. So it is important that the gatekeepers of funding be vigilant. They should be prepared to turn down applications that smack of riding a bubble. Experienced researchers should willingly serve on grant panels, and should be prepared to be critical of even their favourite areas of research when necessary.
Finally, bubbles can be identified and quashed by an active media. The press should have questioned the Internet and housing bubbles more deeply; research conferences in our field should do the same for research bubbles. Paper reviewers and program committees thus play the same role as investigative journalists.
This is not to say that all speculative ideas should be systematically de-funded and rejected. There should always be room for open-minded, blue-sky research. However, this activity should be limited and clearly identified. Perhaps every conference should have blue-sky sessions where all assumptions are left unchallenged (our community has done this with recent papers on ‘clean-slate’ designs). The best of these ideas, when proven to be sound, could then be funded and widely adopted.
Of course, I am assuming that we can get out of bubbles by rational means. Humans are all too fallible, however, and bubble thinking plays on human foibles. Worse, there is an incentive structure that encourages bubble formation: people at the leading edge of a bubble are disproportionately rewarded, and people at the tail end can point to a large body of literature (emerging from top-ranked places!) to justify their work, which reduces their cognitive effort. So bubbles may be here to stay.
Nevertheless, given the destructive effects of bubbles over the long term, I suggest that we look out for them, deflating them before they deflate us!
This paper reports some observations on the relationships between three measures of the size of the Internet over more than ten years. The size of the BGP4 routing table, the number of active BGP4 Autonomous Systems, and a lower bound on the total size of the Internet, appear to have fairly simple relationships despite the Internet’s growth by two orders of magnitude. In particular, it is observed that the size of the BGP4 system appears to have grown approximately in proportion to the square root of the lower-bound size of the globally addressable Internet. A simple model that partially explains this square law is described. It is not suggested that this observation and model have predictive value, since they cannot predict qualitative changes in the Internet topology. However, they do offer a new way to understand and monitor the scaling of the BGP4 system.
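The square-root relationship described above can be made concrete with a small illustration. The numbers below are invented for the sake of the example, not measured values from the paper; the only claim carried over from the abstract is that table size scales roughly as the square root of the addressable Internet's size.

```python
import math

def projected_table_size(base_table, base_hosts, new_hosts):
    """Project BGP4 table size under the model table ~ sqrt(hosts).

    base_table / base_hosts are a (hypothetical) baseline measurement;
    new_hosts is the projected size of the addressable Internet.
    """
    return base_table * math.sqrt(new_hosts / base_hosts)

# A two-orders-of-magnitude (100x) growth in addressable hosts implies
# only a ~10x growth in the routing table under this square law.
projected = projected_table_size(10_000, 1_000_000, 100_000_000)
print(projected)  # 100000.0, i.e. 10x the baseline table of 10,000
```

This also illustrates why the observation is descriptive rather than predictive: the model says nothing about qualitative changes (e.g. in topology or address allocation) that would break the scaling.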
Careless selection of the ephemeral port number portion of a transport protocol’s connection identifier has been shown to potentially degrade security by opening the connection up to injection attacks from “blind” or “off path” attackers—or, attackers that cannot directly observe the connection. This short paper empirically explores a number of algorithms for choosing the ephemeral port number that attempt to obscure the choice from such attackers and hence make mounting these blind attacks more difficult.
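The simplest member of the family of algorithms the paper explores is uniform random selection from the ephemeral range, which already denies a blind attacker any ability to predict the next port from earlier choices. The sketch below assumes that strategy and a commonly used ephemeral range; the paper itself evaluates several more sophisticated variants.

```python
import random

# Ephemeral range suggested by common practice (an assumption here;
# operating systems differ in the exact range they use).
EPHEMERAL_MIN, EPHEMERAL_MAX = 49152, 65535

def random_ephemeral_port(in_use):
    """Pick an unused ephemeral port uniformly at random.

    `in_use` is the set of ports already bound locally. Returns None
    if the entire range is exhausted.
    """
    candidates = [p for p in range(EPHEMERAL_MIN, EPHEMERAL_MAX + 1)
                  if p not in in_use]
    return random.choice(candidates) if candidates else None

port = random_ephemeral_port(in_use={49152, 49153})
assert port is not None and EPHEMERAL_MIN <= port <= EPHEMERAL_MAX
```

The trade-off motivating the fancier algorithms is that pure randomization can collide with recently closed connections; schemes that mix a random offset with a per-destination increment aim to keep unpredictability while reducing such collisions.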
The Internet has seen a proliferation of specialized middlebox devices that carry out crucial network functionality such as load balancing, packet inspection and intrusion detection. Recent advances in CPU power, memory, buses and network connectivity have turned commodity PC hardware into a powerful network platform. Furthermore, commodity switch technologies have recently emerged that offer the possibility of controlling the switching of flows in a fine-grained manner. Exploiting these new technologies, we present a new class of network architectures that enables flow processing and forwarding with unprecedented flexibility and at low cost.
This paper addresses the open problem of locating an attacker that intentionally hides or falsifies its position using advanced radio technologies. A novel attacker localization mechanism, called Access Point Coordinated Localization (APCL), is proposed for IEEE 802.11 networks. APCL actively forces the attacker to reveal its position information by combining access point (AP) coordination with traditional range-free localization. The optimal AP coordination process is calculated by modeling it as a finite-horizon discrete Markov decision process, which is efficiently solved by an approximation algorithm. Its performance advantages are verified through extensive simulations.
The past few years have witnessed much debate on how large Internet router buffers should be. The widely believed rule of thumb used by router manufacturers today mandates a buffer size equal to the delay-bandwidth product. This rule was first challenged in 2004 by researchers who argued that if a large number of long-lived TCP connections flow through a router, then the buffer size needed is the delay-bandwidth product divided by the square root of the number of long-lived TCP flows. The publication of this result has since reinvigorated interest in the buffer sizing problem, with numerous other papers exploring the topic in further detail: questioning the applicability of the result, proposing alternative schemes, and developing new congestion control algorithms.
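The gap between the two sizing rules mentioned above is easy to quantify with a worked example. The link parameters below are illustrative assumptions (a 10 Gb/s link, 250 ms round-trip time, 10,000 long-lived flows), not figures from any of the papers surveyed.

```python
import math

def rule_of_thumb(rtt_s, capacity_bps):
    """Classic rule: buffer = delay-bandwidth product, in bits."""
    return rtt_s * capacity_bps

def small_buffer(rtt_s, capacity_bps, n_flows):
    """2004 result: delay-bandwidth product divided by sqrt(N)."""
    return rtt_s * capacity_bps / math.sqrt(n_flows)

bdp = rule_of_thumb(0.250, 10e9)           # 2.5e9 bits (~312 MB)
small = small_buffer(0.250, 10e9, 10_000)  # 2.5e7 bits (~3 MB)
print(bdp / small)  # 100.0: a hundredfold reduction in buffering
```

With 10,000 flows the square-root rule cuts the required buffer by a factor of 100, which is precisely why the result attracted so much attention from router designers, and why its assumptions (many long-lived, desynchronized TCP flows) have been scrutinized so closely.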
This paper provides a synopsis of the recently proposed buffer sizing strategies and broadly classifies them according to their desired objective: link utilisation and per-flow performance. We discuss the pros and cons of these different approaches. These prior works study buffer sizing purely in the context of TCP; subsequently, we present arguments that take into account both real-time and TCP traffic. We also report on performance studies of various high-speed TCP variants and experimental results for networks with limited buffers. We conclude by outlining some interesting avenues for further research.
This article summarises the presentations and discussions during a workshop on end-to-end protocols for the future Internet in June 2008. The aim of the workshop was to establish a dialogue at the interface between two otherwise fairly distinct communities working on future Internet protocols: those developing internetworking functions and those developing end-to-end transport protocols. The discussion established near-consensus on some of the open issues, such as the preferred placement of traffic engineering functionality, whereas other questions remained controversial. New research agenda items were also identified.
It has been proposed that research in certain areas is to be avoided when those areas have gone cold. While previous work concentrated on detecting the temperature of a research topic, this work addresses the question of changing the temperature of said topics. We make suggestions for a set of techniques to re-heat a topic that has gone cold. In contrast to other researchers who propose uncertain approaches involving creativity, lateral thinking and imagination, we concern ourselves with deterministic approaches that are guaranteed to yield results.
One expectation about the future Internet is the participation of billions of sensor nodes, integrating the physical with the digital world. This Internet of Things can offer new and enhanced services and applications based on knowledge about the environment and the entities within it. Millions of micro-providers could come into existence, forming a highly fragmented marketplace with new business opportunities to offer commercial services. In the related field of Internet and telecommunication services, the design of markets and pricing schemes has been a vital research area in itself. We discuss how these findings can be transferred to the Internet of Things. Both the appropriate market structure and the corresponding pricing schemes need to be well understood to enable the commercial success of sensor-based services. We show some steps that an evolutionary establishment of this market might have to take.
In double-blind reviewing (DBR), both reviewers and authors are unaware of each other's identities and affiliations. DBR is said to increase review fairness. However, DBR may only be marginally effective in combating the randomness of the typical conference review process for highly selective conferences. DBR may also make it more difficult to adequately review conference submissions that build on earlier work of the authors and have been partially published in workshops. I believe that DBR mainly increases the perceived fairness of the reviewing process, but that may be an important benefit. Rather than waiting until the final stages, the reviewing process needs to explicitly address the issue of workshop publications early on.
They say that music and mathematics are intertwined. I am not sure this is true, but I always wanted to use the word intertwined. The point is that my call for poetry received an overwhelmingly enthusiastic response from at least five people. My mailbox was literally flooded (I have a small mailbox). This article is a tribute to the poetry of science, or, as I like to call it, the Poetry of Science. You will be amazed.