Ranveer Chandra is a Senior Researcher in the Mobility & Networking Research Group at Microsoft Research. His research is focused on mobile devices, with particular emphasis on wireless communications and energy efficiency. Ranveer is leading the white space networking project at Microsoft Research. He was invited to the FCC to present his work, and spectrum regulators from India, China, Brazil, Singapore, and the US (including the FCC chairman) have visited the Microsoft campus to see his deployment of the world's first urban white space network. The following interview captures the essence of his work on white spaces by focusing on his work published in ACM SIGCOMM 2009, which received the Best Paper Award.
The Internet crucially depends on the Domain Name System (DNS) to both allow users to interact with the system in human-friendly terms and, increasingly, as a way to direct traffic to the best content replicas at the instant the content is requested. This paper is an initial study into the behavior and properties of the modern DNS system. We passively monitor DNS and related traffic within a residential network in an effort to understand server behavior (as viewed through DNS responses) and client behavior (as viewed through both DNS requests and traffic that follows DNS responses). We present an initial set of wide-ranging findings.
The IRR is a set of globally distributed databases with which ASes can register their routing and address-related information. The quality of the IRR data is often believed to be unreliable, since there are few economic incentives for ASes to register and update their routing information in a timely manner. To validate these negative beliefs, we carry out a comprehensive analysis of (IP prefix, origin AS) pairs in BGP against the corresponding information registered with the IRR, and vice versa. Considering BGP and IRR practices, we propose a methodology to match the (IP prefix, origin AS) pairs between the IRR and BGP. We observe that the practice of registering IP prefixes and origin ASes with the IRR is prevalent. However, the quality of the IRR data can vary substantially depending on the routing registry, the regional Internet registry to which an AS belongs, and the AS type. We argue that, in light of these observations, the IRR can help improve the security of BGP routing by making BGP routers selectively rely on the corresponding IRR data.
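The core of the matching exercise can be illustrated with a toy comparison of (IP prefix, origin AS) pair sets. The function and data below are hypothetical; the paper's actual methodology additionally accounts for aggregation and registry-specific practices.

```python
def match_pairs(bgp_pairs, irr_pairs):
    """Classify each BGP (prefix, origin AS) pair against IRR registrations."""
    irr_by_prefix = {}
    for prefix, asn in irr_pairs:
        irr_by_prefix.setdefault(prefix, set()).add(asn)

    results = {"consistent": 0, "mismatched_origin": 0, "unregistered": 0}
    for prefix, asn in bgp_pairs:
        registered = irr_by_prefix.get(prefix)
        if registered is None:
            results["unregistered"] += 1       # prefix never registered
        elif asn in registered:
            results["consistent"] += 1         # prefix and origin AS both match
        else:
            results["mismatched_origin"] += 1  # registered, but stale origin AS
    return results

# Toy data using documentation prefixes and private AS numbers.
bgp = [("192.0.2.0/24", 64500), ("198.51.100.0/24", 64501), ("203.0.113.0/24", 64502)]
irr = [("192.0.2.0/24", 64500), ("198.51.100.0/24", 64999)]
print(match_pairs(bgp, irr))
```

A real comparison would run this classification in both directions (BGP against IRR, and IRR against BGP) and break the counts down per registry and AS type, as the paper does.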
As the Internet struggles to cope with scalability, mobility, and security issues, new network architectures are being proposed to better accommodate the needs of modern systems and applications. In particular, Content-Oriented Networking (CON) has emerged as a promising next-generation Internet architecture: it sets out to decouple content from hosts, at the network layer, by naming data rather than hosts. CON comes with a potential for a wide range of benefits, including reduced congestion and improved delivery speed by means of content caching, simpler configuration of network devices, and security at the data level. However, it remains an interesting open question whether or not, and to what extent, this emerging networking paradigm bears new privacy challenges. In this paper, we provide a systematic privacy analysis of CON and the common building blocks among its various architectural instances in order to highlight emerging privacy threats, and analyze a few potential countermeasures. Finally, we present a comparison between CON and today's Internet in the context of a few privacy concepts, such as anonymity, censoring, traceability, and confidentiality.
Today a spectrum of solutions is available for distributing content over the Internet, ranging from commercial CDNs to ISP-operated CDNs to content-provider-operated CDNs to peer-to-peer CDNs. Some deploy servers in just a few large data centers while others deploy in thousands of locations or even on millions of desktops. Recently, major CDNs have formed strategic alliances with large ISPs to provide content delivery network solutions. Such alliances show the natural evolution of content delivery today, driven by the need to address scalability issues and to take advantage of new technology and business opportunities. In this paper we revisit the design and operating space of CDN-ISP collaboration in light of recent ISP and CDN alliances. We identify two key enablers for supporting collaboration and improving content delivery performance: informed end-user to server assignment and in-network server allocation. We report on the design and evaluation of a prototype system, NetPaaS, that materializes them. Relying on traces from the largest commercial CDN and a large tier-1 ISP, we show that NetPaaS is able to increase CDN capacity on-demand, enable coordination, reduce download time, and achieve multiple traffic engineering goals leading to a win-win situation for both ISP and CDN.
Network users know much less than ISPs, Internet exchanges, and content providers about what happens inside the network. Consequently, users can neither easily detect network neutrality violations nor readily exercise their market power by knowledgeably switching ISPs. This paper contributes to the ongoing efforts to empower users by proposing two models to estimate, via application-level measurements, a key network indicator: the packet loss rate (PLR) experienced by FTP-like TCP downloads. Controlled, testbed, and large-scale experiments show that the Inverse Mathis model is simpler and more consistent across the whole PLR range, but less accurate than the more advanced Likely Rexmit model for landline connections and moderate PLR.
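The Inverse Mathis model presumably inverts the well-known Mathis et al. TCP throughput formula to recover a loss rate from an observed download rate. A minimal sketch of that inversion (the function name and example numbers are illustrative, not the paper's implementation) might look like:

```python
import math

# Mathis et al. model: throughput ≈ (MSS / RTT) * C / sqrt(p), with C = sqrt(3/2).
MATHIS_C = math.sqrt(3.0 / 2.0)

def inverse_mathis_plr(throughput_bps, mss_bytes, rtt_s):
    """Estimate the packet loss rate p of an FTP-like TCP download by
    inverting the Mathis formula: p = (C * MSS / (RTT * throughput))^2."""
    mss_bits = mss_bytes * 8
    return (MATHIS_C * mss_bits / (rtt_s * throughput_bps)) ** 2

# Example: 1460-byte MSS, 50 ms RTT, 5 Mbit/s measured download rate.
plr = inverse_mathis_plr(5e6, 1460, 0.050)
print(f"estimated PLR: {plr:.4%}")
```

Note that the estimate is only meaningful when the transfer is loss-limited rather than limited by the receive window or the sender's application.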
Not only do big data applications impose heavy bandwidth demands, they also have diverse communication patterns (denoted as *-cast) that mix together unicast, multicast, incast, and all-to-all-cast. Effectively supporting such traffic demands remains an open problem in data center networking. We propose an unconventional approach that leverages physical layer photonic technologies to build custom communication devices for accelerating each *-cast pattern, and integrates such devices into an application-driven, dynamically configurable photonics accelerated data center network. We present preliminary results from a multicast case study to highlight the potential benefits of this approach.
In some network and application scenarios, it is useful to cache content in network nodes on the fly, at line rate. Resilience of in-network caches can be improved by guaranteeing that all content stored therein is valid. Digital signatures could indeed be used to verify content integrity and provenance. However, their verification may be much slower than the line rate, thus limiting caching of cryptographically verified objects to a small subset of the forwarded ones. How does this affect caching performance? To answer this question, we devise a simple analytical approach to assess the performance of an LRU caching strategy that stores a randomly sampled subset of requests. A key feature of our model is its ability to handle traffic beyond the traditional Independent Reference Model, thus permitting us to understand how performance varies under different temporal locality conditions. Results, also verified on real-world traces, show that content integrity verification does not necessarily bring about a performance penalty; rather, in some specific (but practical) conditions, performance may even improve.
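The policy under study, an LRU cache that admits only a randomly sampled subset of requests, is easy to simulate. The class below is a toy simulator of that policy (names and parameters are illustrative; it is not the paper's analytical model):

```python
import random
from collections import OrderedDict

class SampledLRU:
    """LRU cache that admits only a random fraction q of missed requests,
    modeling a line-rate cache that can cryptographically verify (and thus
    store) only a subset of forwarded objects."""

    def __init__(self, capacity, q, seed=0):
        self.cache = OrderedDict()
        self.capacity = capacity
        self.q = q
        self.rng = random.Random(seed)
        self.hits = 0
        self.requests = 0

    def request(self, obj):
        """Return True on a cache hit, False on a miss."""
        self.requests += 1
        if obj in self.cache:
            self.cache.move_to_end(obj)  # refresh recency
            self.hits += 1
            return True
        if self.rng.random() < self.q:   # admit only sampled (verified) objects
            self.cache[obj] = True
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)  # evict least recently used
        return False

    def hit_ratio(self):
        return self.hits / self.requests if self.requests else 0.0

# Small demo: a repetitive trace with half of the missed objects admitted.
cache = SampledLRU(capacity=100, q=0.5)
for obj in [1, 2, 1, 3, 1, 2] * 50:
    cache.request(obj)
print(f"hit ratio: {cache.hit_ratio():.2f}")
```

Sweeping q on a trace with temporal locality gives a feel for the paper's finding that sampling does not necessarily hurt, and can even help, the hit ratio.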
Community Networks are large scale, self-organized and decentralized networks, built and operated by citizens for citizens. In this paper, we make a case for research on and with community networks, while explaining the relation to Community-Lab. The latter is an open, distributed infrastructure for researchers to experiment with community networks. The goal of Community-Lab is to advance research and empower society by understanding and removing obstacles for these networks and services.
During the last decade, we have witnessed a substantial change in content delivery networks (CDNs) and user access paradigms. Whereas previously users consumed content from a central server through their personal computers, nowadays they can reach a wide variety of repositories from virtually everywhere using mobile devices. This results in considerable time-, location-, and event-based volatility of content popularity. In such a context, it is imperative for CDNs to put in place adaptive content management strategies, thereby improving the quality of service provided to users and decreasing costs. In this paper, we introduce predictive content distribution strategies inspired by methods developed in the Recommender Systems area. Specifically, we outline different content placement strategies based on observed user consumption patterns, and advocate their applicability in state-of-the-art CDNs.
Free and open access to information on the Internet is at risk: more than 60 countries around the world practice some form of Internet censorship, and both the number of countries practicing censorship and the proportion of Internet users who are subject to it are likely to increase. We posit that, although it may not always be feasible to guarantee free and open access to information, citizens have the right to know when their access has been obstructed, restricted, or tampered with, so that they can make informed decisions on information access. We motivate the need for a system that provides accurate, verifiable reports of censorship and discuss the challenges involved in designing such a system. We place these challenges in context by studying their applicability to OONI, a new censorship measurement platform.
Many people in CS in general, and SIGCOMM in particular, have expressed concerns about an increasingly "hypercritical" approach to reviewing, which can block or discourage the publication of innovative research. The SIGCOMM Technical Steering Committee (TSC) has been addressing this issue, with the goal of encouraging cultural change without undermining the integrity of peer review. Based on my experience as an author, PC member, TSC member, and occasional PC chair, I examine possible causes for hypercritical reviewing, and offer some advice for PC chairs, reviewers, and authors. My focus is on improving existing publication cultures and peer review processes, rather than on proposing radical changes.
On December 12-13, 2012, CAIDA and the Massachusetts Institute of Technology (MIT) hosted the (invitation-only) 3rd interdisciplinary Workshop on Internet Economics (WIE) at the University of California's San Diego Supercomputer Center. The goal of this workshop series is to provide a forum for researchers, commercial Internet facilities and service providers, technologists, economists, theorists, policy makers, and other stakeholders to empirically inform current and emerging regulatory and policy debates. The theme for this year's workshop was "Definitions and Data". This report describes the discussions and presents relevant open research questions identified by participants. Slides presented at the workshop and a copy of this final report are available online.
On February 6-8, 2013, CAIDA hosted the fifth Workshop on Active Internet Measurements (AIMS-5) as part of our series of Internet Statistics and Metrics Analysis (ISMA) workshops. As with previous AIMS workshops, the goals were to further our understanding of the potential and limitations of active measurement research and infrastructure in the wide-area Internet, and to promote cooperative solutions and coordinated strategies to address future data needs of the network and security operations and research communities. The workshop focus this year was on creating, managing, and analyzing annotations of large longitudinal active Internet measurement data sets. Due to popular demand, we also dedicated half a day to large-scale active measurement (performance/topology) from mobile/cellular devices. This report describes topics discussed at this year's workshop. Materials related to the workshop are available at http://www.caida.org/workshops/.
The ACM 8th International Conference on emerging Networking EXperiments and Technologies (CoNEXT) was organized in a lovely hotel in the south of France. Although it was in an excellent location in the city center of Nice, with views of the sea, it suffered from poor Internet connectivity. In this paper we describe what happened to the network at CoNEXT and explain why Internet connectivity is usually a problem at small hotel venues. Next we highlight the usual issues with network equipment that lead to the general dissatisfaction of conference attendees with the network. Finally we describe how we alleviated the problem by offloading network services and all network traffic into the cloud while supporting over 100 simultaneously connected devices on a single ADSL link with a device rated to support only around 15-20. Our experience shows that with simple offloading of certain network services, small conference venues with limited budgets no longer have to be plagued by the usual factors that lead to an unsatisfactory Internet connectivity experience.
Here is my second issue of CCR, and I am really happy to see that a lot of the things I wrote in my previous editorial are happening or are on their way! Thanks to the wonderful editorial team this issue has five technical papers, while some of the area editors have started contacting prominent members of our community to obtain their retrospective on their past work. In parallel, I have been really fortunate to receive a number of interesting editorials, some of which I solicited and some of which I received through the normal submission process.
Craig Partridge has provided us with an editorial on the history of CCR. A very interesting read not only for the new members of our community, but for everyone. This issue features an editorial on the challenges that cognitive radio deployments are going to face, and a new network paradigm that could be very relevant in developing regions, named "lowest cost denominator networking." I am positive that each one of those editorials is bound to make you think.
Following my promise in January's editorial note, this issue also brings some industrial perspective to CCR. We have two editorial notes on standardization activities at the IETF, 3GPP, ITU, etc. I would like to sincerely thank the authors, since putting structure around such activities to report them in a concise form is, to say the least, not an easy task.
Research in the area of networking has seen a tremendous increase in breadth in recent years. Our community now studies core networking technologies, cellular networks, mobile systems, and networked applications. In addition, a large number of consumer electronics products are increasingly becoming connected, using wired or wireless technologies. Understanding the trends in the consumer electronics industry is bound to inform interesting related research in our field. With that in mind, I invited my colleagues in the Telefonica Video Unit and Telefonica Digital to submit their report on what they considered the highlights of the Consumer Electronics Show (CES) that took place in Las Vegas in January 2013. I hope that article inspires you towards novel directions.
I am really pleased to see CCR growing! Please do not hesitate to contact me with comments, and suggestions!
The Border Gateway Protocol (BGP) was designed without security in mind. To this day, this fact leaves the Internet vulnerable to hijacking attacks that intercept or blackhole Internet traffic. So far, significant effort has been put into the detection of IP prefix hijacking, while AS hijacking has received little attention. AS hijacking is more sophisticated than IP prefix hijacking, and is aimed at a long-term benefit, such as over a duration of months. In this paper, we study a malicious case of AS hijacking, carried out in order to send spam from the victim's network. We thoroughly investigate this AS hijacking incident using live data from both the control and the data plane. Our analysis yields insights into how the attacker proceeded in order to covertly hijack a whole autonomous system, how he misled an upstream provider, and how he used unallocated address space. We further show that state-of-the-art techniques to prevent hijacking are not fully capable of dealing with this kind of attack. We also derive guidelines on how to conduct future forensic studies of AS hijacking. Our findings show that there is a need for preventive measures that would allow operators to anticipate AS hijacking, and we outline the design of an early warning system.
To minimize user-perceived latencies, webservices are often deployed across multiple geographically distributed data centers. The premise of our work is that webservices deployed across multiple cloud infrastructure services can serve users from more data centers than is possible when using a single cloud service, and hence offer lower latencies to users. In this paper, we conduct a comprehensive measurement study to understand the potential latency benefits of deploying webservices across three popular cloud infrastructure services: Amazon EC2, Google Compute Engine (GCE), and Microsoft Azure. We estimate that, as compared to deployments on one of these cloud services, users in up to half the IP address prefixes can have their RTTs reduced by over 20% when a webservice is deployed across the three cloud services. When we dig deeper to understand these latency benefits, we make three significant observations. First, when webservices shift from single-cloud to multi-cloud deployments, a significant fraction of prefixes will see latency benefits simply by being served from a different data center in the same location. This is because routing inefficiencies that exist between a prefix and a nearby data center in one cloud service are absent on the path from the prefix to a nearby data center in a different cloud service. Second, despite the latency improvements that a large fraction of prefixes will perceive, users in several locations (e.g., Argentina and Israel) will continue to incur RTTs greater than 100ms even when webservices span three large-scale cloud services (EC2, GCE, and Azure). Finally, we see that harnessing the latency benefits offered by multi-cloud deployments is likely to be challenging in practice; our measurements show that the data center that offers the lowest latency to a prefix often fluctuates between different cloud services, thus necessitating replication of data.
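As a rough illustration of the headline metric, the sketch below counts prefixes whose best multi-cloud RTT improves on a single-cloud baseline by more than 20%. The function and per-prefix RTT data are invented for illustration, not the paper's measurements.

```python
def multicloud_benefit(rtts, baseline, threshold=0.20):
    """Fraction of prefixes whose best RTT across all clouds improves on the
    given single-cloud baseline by more than `threshold` (relative)."""
    improved = sum(
        1 for per_cloud in rtts.values()
        if min(per_cloud.values()) < (1 - threshold) * per_cloud[baseline]
    )
    return improved / len(rtts)

# Hypothetical per-prefix RTTs (ms) to the nearest data center of each cloud.
rtts = {
    "prefix_a": {"EC2": 120, "GCE": 80, "Azure": 110},  # GCE path avoids a detour
    "prefix_b": {"EC2": 40, "GCE": 45, "Azure": 42},    # EC2 is already best
    "prefix_c": {"EC2": 200, "GCE": 150, "Azure": 90},  # Azure cuts RTT sharply
}
print(multicloud_benefit(rtts, baseline="EC2"))
```

The paper's final observation is the caveat here: if the cloud achieving the per-prefix minimum changes over time, realizing this benefit requires the webservice's data to be replicated across clouds.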
We consider the important problem of wireless sensor network (WSN) routing topology inference/tomography from indirect measurements observed at the data sink. Previous studies on WSN topology tomography are restricted to static routing tree estimation, which is unrealistic given the time-varying routing of real-world WSNs caused by wireless channel dynamics. We study general WSN routing topology inference where the routing structure is dynamic. We formulate the problem as a novel compressed sensing problem and then devise a suite of decoding algorithms to recover the routing path of each aggregated measurement. Our approach is tested and evaluated through simulations with favorable results. WSN routing topology inference capability is essential for routing improvement, topology control, anomaly detection, and load balancing, enabling effective network management and optimized operation of deployed WSNs.
Recent work in network measurements focuses on scaling the performance of monitoring platforms to 10Gb/s and beyond. Concurrently, the IT community focuses on scaling the analysis of big data over a cluster of nodes. So far, combinations of these approaches have favored flexibility and usability over real-timeliness of results and efficient allocation of resources. In this paper we show how to meet both objectives with BlockMon, a network monitoring platform originally designed to work on a single node, which we extended to run distributed stream-data analytics tasks. We compare its performance against Storm and Apache S4, the state-of-the-art open-source stream-processing platforms, by implementing a phone call anomaly detection system and a Twitter trending algorithm: our enhanced BlockMon achieves a performance gain of over 2.5x and 23x, respectively. Given the different nature of these applications and the performance of BlockMon as a single-node network monitor, we expect our results to hold for a broad range of applications, making distributed BlockMon a good candidate for the convergence of network-measurement and IT-analysis platforms.
During the last decade, we have seen the rise of discussions regarding the emergence of a Future Internet. One of the proposed approaches leverages the separation of the identifier and locator roles of IP addresses, leading to the Locator/Identifier Separation Protocol (LISP), currently under development at the IETF (Internet Engineering Task Force). Up to now, research on LISP has been rather theoretical, i.e., based on simulations/emulations, often using Internet traffic traces. No work in the literature has attempted to assess the state of its deployment and how this has evolved in recent years. This paper aims to bridge this gap by presenting a first measurement study of the existing worldwide LISP network (lisp4.net). Early results indicate that there is steady growth of the LISP network, but also that network manageability might receive higher priority than performance in a large-scale deployment.
A large volume of research has been conducted in the cognitive radio (CR) area over the last decade. However, the deployment of a commercial CR network is yet to emerge. A large portion of the existing literature does not build on real-world scenarios, hence neglecting various important aspects of commercial telecommunication networks. For instance, a lot of attention has been paid to spectrum sensing as the front-line functionality that needs to be completed in an efficient and accurate manner to enable an opportunistic CR network architecture. While on the one hand it is necessary to detect the existence of spectrum holes, on the other hand, simply sensing (cooperatively or not) the energy emitted from a primary transmitter cannot enable correct dynamic spectrum access (DSA). For example, the presence of a primary transmitter's signal does not mean that CR network users cannot access the spectrum, since there might not be any primary receiver in the vicinity. Despite the existing solutions to the DSA problem, no robust, implementable scheme has emerged. The set of assumptions that these schemes are built upon does not always hold in realistic wireless environments. Specific settings are assumed, which differ significantly from how existing telecommunication networks work. In this paper, we challenge the basic premises of the proposed schemes. We further argue that addressing the technical challenges we face in deploying robust CR networks can only be achieved if we radically change the way we design their basic functionalities. In support of our argument, we present a set of real-world scenarios, inspired by realistic settings in commercial telecommunications networks, namely TV and cellular, focusing on spectrum sensing as a basic and critical functionality in the deployment of CRs. We use these scenarios to show why existing DSA paradigms are not amenable to realistic deployment in complex wireless environments.
The proposed study extends beyond cognitive radio networks, and further highlights the often existing gap between research and commercialization, paving the way to new thinking about how to accelerate commercialization and adoption of new networking technologies and services.
"The Internet is for everyone," claims Vint Cerf, the father of the Internet, in RFC 3271. The Internet Society's recent global Internet survey reveals that the Internet should be considered a basic human birthright. We strongly agree with these views and believe that basic access to the Internet should be made free, at least for access to essential services. However, the current Internet access model, which is governed by market economics, makes universal access practically infeasible, especially for those facing socio-economic barriers. We see enabling benevolence in the Internet (the act of sharing resources) as a potential solution to the problem of digital exclusion caused by socio-economic barriers. In this paper, we propose LCD-Net: Lowest Cost Denominator Networking, a new Internet paradigm that architects multi-layer resource-pooling Internet technologies to support benevolence in the Internet. LCD-Net proposes to bring together several existing resource-pooling Internet technologies to ensure that users and network operators who share their resources are not adversely affected and are at the same time incentivised for sharing. The paper also emphasizes the need to identify and extend the stakeholder value chain to ensure that such benevolent access to the Internet is sustainable.
Over the last few years, we have witnessed the deployment of large measurement platforms that enable measurements from many vantage points. Examples of these platforms include SamKnows and RIPE ATLAS. All told, there are tens of thousands of measurement agents. Most of these measurement agents are located in end-user premises; these can run measurements against other agents located in strategic locations, according to the measurements to be performed. Thanks to the large number of measurement agents, these platforms can provide data about key network performance indicators from the end-user perspective. This data is useful to network operators to improve their operations, as well as to regulators and to end users themselves. Currently deployed platforms use proprietary protocols to exchange information between the different parts. As these platforms grow to become an important tool for understanding network performance, it is important to standardize the protocols between the different elements of the platform. In this paper, we present ongoing standardization efforts in this area as well as the main challenges that these efforts are facing.
Standardization organizations play a major role in the telecommunications industry to guarantee interoperability between vendors and allow for a common ground where all players can voice their opinion regarding the direction the industry should follow. In this paper we review the current activities in some of the most relevant standardization bodies in the area of communication networks: 3GPP, IEEE 802.11, BBF, IETF, ONF, ETSI ISG NFV, oneM2M and ETSI TC ITS. Major innovations being developed in these bodies are summarized, describing the most disruptive directions taken and expected to have a remarkable impact on future networks. Finally, some trends common among different bodies are identified, covering different dimensions: i) core technology enhancements, ii) inter-organization cooperation for convergence, iii) consideration of emerging disruptive technical concepts, and iv) expansion into emerging use cases aimed at increasing future market size.
A brief history of the evolution of ACM SIGCOMM Computer Communication Review as a newsletter and journal is presented.
The Consumer Electronics Show, which is held every year in Las Vegas in early January, continues to be an important fair in the consumer sector, though the major manufacturers increasingly prefer to announce their new products at their own events in order to gain greater impact. Only the leading TV brands still unveil their artillery of new models for the coming year. Despite this, the show continues to break records: there were over 150,000 visitors (from more than 150 countries), more than 20,000 new products were announced, and the fair occupied over 2 million square meters.
There have been many recent discussions within the computer science community on the relative roles of conferences and journals [1, 2, 3]. They clearly offer different forums for the dissemination of scientific and technical ideas, and much of the debate has been on whether and how to leverage both. These are important questions that every conference and journal ought to carefully consider, and the CoNEXT Steering Committee recently initiated a discussion on this topic. The main focus of the discussion was on how, on the one hand, to maintain the high quality of papers accepted for presentation at CoNEXT, and on the other hand, to improve the conference's ability to serve as a timely forum where new and exciting but not necessarily polished or fully developed ideas can be presented. Unfortunately, the stringent "quality control" that prevails during the paper selection process of selective conferences, including CoNEXT, often makes it difficult for interesting new ideas to break through. To make it, papers need to excel along three major dimensions, namely, technical correctness and novelty, polish of exposition and motivation, and completeness of the results. Most if not all hot-off-the-press papers will fail in at least one of those dimensions. On the other hand, there are conferences and workshops that target short papers. Hotnets is one such venue that has attracted short papers presenting new ideas. However, from a community viewpoint, Hotnets has several limitations. First, Hotnets is an invitation-only workshop. Coupled with a low acceptance rate, this limits the exposure of Hotnets papers to the community. Second, Hotnets has never been held outside North America. The SIGCOMM and CoNEXT workshops are also venues where short papers can be presented and discussed. However, these workshops are focused on a specific subdomain and usually do not attract a broad audience.
The IMC short papers are a more interesting model because short and regular papers are mixed in a single-track conference. This ensures broad exposure for the short papers, but the scope of IMC is much narrower than that of CoNEXT. In order to address this intrinsic tension that plagues all selective conferences, CoNEXT 2013 is introducing a short paper category with submissions requested through a logically separate call-for-papers. The separate call-for-papers is meant to clarify to both authors and TPC members that short papers are to be judged using different criteria. Short papers will be limited to six (6) two-column pages in the standard ACM conference format. Most importantly, short papers are not meant to be condensed versions of standard-length papers, nor are they targeted at traditional "position papers." In particular, papers submitted as regular (long) papers will not be eligible for consideration as short papers. Instead, short paper submissions are intended for high-quality technical works that either target a topical issue that can be covered in 6 pages, or introduce a novel but not fully fleshed-out idea that can benefit from the feedback that early exposure can provide. Short papers will be reviewed and selected through a process distinct from that of long papers, based on how good a match they are for the above criteria. As alluded to, this separation is meant to address the inherent dilemma faced by highly selective conferences, where reviewers typically approach the review process looking for reasons to reject a paper (how high are the odds that a paper is in the top 10-15%?). For that purpose, Program Committee members will be reminded that completeness of the results should NOT be a criterion used when assessing short papers. Similarly, while an unreadable paper is obviously not one that should be accepted, polish should not be a major consideration either.
As long as the paper manages to convey its idea, a choppy presentation should not by itself be grounds for rejecting a paper. Finally, while technical correctness is important, papers that perhaps claim more than they should are not to be disqualified simply on those grounds. As a rule, the selection process should focus on the "idea" presented in the paper. If the idea is new, or interesting, or unusual, etc., and is not fundamentally broken, the paper should be considered. Eventual acceptance will ultimately depend on logistics constraints (how many such papers can be presented), but the goal is to offer a venue at CoNEXT where new, emerging ideas can be presented and receive constructive feedback. The CoNEXT web site provides additional information on the submission process for short (and regular) papers.
A new year begins and a new challenge needs to be undertaken. Life is full of challenges, but those that we invite ourselves have something special of their own. In that spirit, I am really happy to be taking over as the editor of ACM Computer Communication Review. Keshav has done a tremendous job making CCR a high-quality publication that unites our community. The combination of peer-reviewed papers and editorial submissions provides a venue not only to publish the latest scientific achievements in our field, but also to position them within the context of our ever-changing technological landscape.
Internet traffic measurement and analysis has long been used to characterize network usage and user behaviors, but it faces a scalability problem given the explosive growth of Internet traffic and high-speed access. Scalable Internet traffic measurement and analysis is difficult because a large data set requires matching computing and storage resources. Hadoop, an open-source computing platform combining MapReduce and a distributed file system, has become a popular infrastructure for massive data analytics because it facilitates scalable data processing and storage services on a distributed computing system consisting of commodity hardware. In this paper, we present a Hadoop-based traffic monitoring system that performs IP, TCP, HTTP, and NetFlow analysis of multiple terabytes of Internet traffic in a scalable manner. In experiments on a 200-node testbed, we achieved 14 Gbps throughput for 5 TB files with IP and HTTP-layer analysis MapReduce jobs. We also discuss the performance issues related to traffic analysis MapReduce jobs.
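The per-flow aggregations such a system runs as Hadoop MapReduce jobs follow the usual map/reduce pattern. A single-process Python sketch of a per-source-IP byte count (the record format is invented; the real system parses packet traces and NetFlow records) is:

```python
from collections import Counter
from itertools import chain

def map_phase(line):
    """Map: emit a (source IP, byte count) pair from one flow record."""
    src_ip, _dst_ip, nbytes = line.split()
    yield src_ip, int(nbytes)

def reduce_phase(pairs):
    """Reduce: sum byte counts per source IP."""
    totals = Counter()
    for ip, nbytes in pairs:
        totals[ip] += nbytes
    return dict(totals)

# Toy flow records in "src dst bytes" form.
records = [
    "10.0.0.1 10.0.0.9 1500",
    "10.0.0.2 10.0.0.9 700",
    "10.0.0.1 10.0.0.8 300",
]
totals = reduce_phase(chain.from_iterable(map_phase(r) for r in records))
print(totals)
```

On Hadoop, the same mapper and reducer logic would run in parallel over HDFS blocks of the trace, which is what makes the analysis scale with the cluster size.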
The size of the global Routing Information Base (RIB) has been increasing at an alarming rate. This directly leads to the rapid growth of the global Forwarding Information Base (FIB) size, which raises serious concerns for ISPs as the FIB memory in line cards is much more expensive than regular memory modules and it is very costly to increase this memory capacity frequently for all the routers in an ISP. One potential solution is to install only the most popular FIB entries into the fast memory (i.e., a FIB cache), while storing the complete FIB in slow memory. In this paper, we propose an effective FIB caching scheme that achieves a considerably higher hit ratio than previous approaches while preventing the cache-hiding problem. Our experimental results show that with only 20K prefixes in the cache (5.36% of the actual FIB size), the hit ratio of our scheme is higher than 99.95%. Our scheme can also handle cache misses, cache replacement and routing updates efficiently.
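A minimal sketch of the basic FIB-caching idea follows, assuming exact-prefix lookups and ignoring the cache-hiding protections and route-update handling that the paper's scheme adds (class and variable names are illustrative):

```python
from collections import OrderedDict

class FIBCache:
    """LRU cache of forwarding entries kept in fast memory, backed by the
    complete FIB in slow memory. This sketch matches exact prefixes only;
    a real scheme must also prevent cache hiding, where a cached short
    prefix masks a longer, more specific one in the full FIB."""

    def __init__(self, full_fib, capacity):
        self.full_fib = full_fib    # prefix -> next hop (slow memory)
        self.capacity = capacity
        self.cache = OrderedDict()  # fast memory (the FIB cache)
        self.hits = 0
        self.lookups = 0

    def lookup(self, prefix):
        self.lookups += 1
        if prefix in self.cache:
            self.hits += 1
            self.cache.move_to_end(prefix)   # refresh recency
            return self.cache[prefix]
        next_hop = self.full_fib[prefix]     # slow-path lookup on a miss
        self.cache[prefix] = next_hop        # install the entry
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least recently used
        return next_hop

# Demo: a 1-entry cache forced to alternate between two prefixes.
fib = {"192.0.2.0/24": "A", "198.51.100.0/24": "B"}
c = FIBCache(fib, capacity=1)
for p in ["192.0.2.0/24", "192.0.2.0/24", "198.51.100.0/24", "192.0.2.0/24"]:
    c.lookup(p)
print(f"hit ratio: {c.hits / c.lookups:.2f}")
```

The paper's result, a >99.95% hit ratio with only ~5% of FIB entries cached, reflects the heavy skew of real traffic toward a small set of popular prefixes.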
The computer science research paper review process is largely manual and time-intensive. More worrisome, review processes are frequently questioned and are often non-transparent. This work advocates applying computer science methods and tools to the computer science review process. As an initial exploration, we data mine the submissions, bids, reviews, and decisions from a recent top-tier computer networking conference. We empirically test several common hypotheses, including the existence of readability, citation, call-for-paper adherence, and topical bias. From our findings, we hypothesize review process methods to improve fairness, efficiency, and transparency.