Efficient FIB caching using minimal non-overlapping prefixes

Yaoqing Liu, Syed Obaid Amin, Lan Wang
Appears in: CCR January 2013

The size of the global Routing Information Base (RIB) has been increasing at an alarming rate. This directly leads to the rapid growth of the global Forwarding Information Base (FIB) size, which raises serious concerns for ISPs as the FIB memory in line cards is much more expensive than regular memory modules and it is very costly to increase this memory capacity frequently for all the routers in an ISP. One potential solution is to install only the most popular FIB entries into the fast memory (i.e., a FIB cache), while storing the complete FIB in slow memory. In this paper, we propose an effective FIB caching scheme that achieves a considerably higher hit ratio than previous approaches while preventing the cache-hiding problem. Our experimental results show that with only 20K prefixes in the cache (5.36% of the actual FIB size), the hit ratio of our scheme is higher than 99.95%. Our scheme can also handle cache misses, cache replacement and routing updates efficiently.

Public Review By: 
Fabián E. Bustamante

The sizes of the Routing Information Base (RIB) and the associated forwarding tables of Internet routers (the Forwarding Information Base, or FIB) are growing quickly due to a number of trends, including the ever-increasing user population, finer-grained traffic engineering practices, non-aggregatable address allocations, and multi-homing. Growing FIBs are a serious concern for ISPs, since the need for high-speed packet forwarding means the FIB must be kept in special-purpose memory that is more expensive and power-hungry than conventional DRAM. Sounds familiar, right? High performance demands on expensive, small memory. You have seen this before, and the response has become almost instinctive: caching. This paper presents the design and trace-based evaluation of a scheme for FIB caching. With FIB caching, the router stores the most popular route entries in fast memory and the rest in a larger, slower, and cheaper memory. As in other contexts, caching works here because Internet traffic exhibits high degrees of reference locality: temporal locality, as packets are grouped into flows and often transmitted in bursts, and spatial locality, as many hosts access a small number of popular destinations. FIB caching is not without issues. As with any caching mechanism, a high hit ratio is critical, since even a 1% miss ratio could lead to millions of lookups per second in slow memory. In addition, because the keys used for looking up routes in the FIB are prefixes, the longest matching prefix for an IP address in the cache may differ from that in the full FIB, a challenge unique to this setting known as the "cache-hiding" problem. The authors propose a caching scheme that addresses both issues, achieving considerably higher hit ratios than previous approaches while avoiding the cache-hiding problem.
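The cache-hiding problem can be illustrated with a small, hypothetical example (the prefixes and next hops below are invented for illustration; the paper itself does not use these values). If the cache holds only a covering prefix while the full FIB also contains a more-specific prefix inside it, longest-prefix match against the cache silently returns the wrong next hop:

```python
import ipaddress

# Hypothetical FIB: a covering /8 and a more-specific /16 inside it.
full_fib = {
    ipaddress.ip_network("10.0.0.0/8"): "next-hop A",
    ipaddress.ip_network("10.1.0.0/16"): "next-hop B",  # more specific
}
# Cache holds only the covering /8, "hiding" the /16.
cache = {
    ipaddress.ip_network("10.0.0.0/8"): "next-hop A",
}

def lpm(table, addr):
    """Return the next hop for the longest-prefix match of addr, or None."""
    matches = [p for p in table if addr in p]
    return table[max(matches, key=lambda p: p.prefixlen)] if matches else None

dst = ipaddress.ip_address("10.1.2.3")
print(lpm(full_fib, dst))  # next-hop B (correct)
print(lpm(cache, dst))     # next-hop A (wrong: the cache hides 10.1.0.0/16)
```

The lookup against the cache is a hit, so the router never consults the full FIB and the packet is misforwarded, which is why cache hiding cannot be fixed by miss handling alone.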
The proposed scheme is based on the observation that cache hiding occurs when a less specific prefix that covers a more specific prefix in the full FIB is present in the cache while the more specific prefix is not. The authors therefore propose storing only non-overlapping prefixes in the cache, and show how to generate a minimal number of such non-overlapping prefixes while handling cache misses, cache replacements, and route changes. Promising trace-based evaluation results show that the scheme can achieve a hit ratio higher than 99.95% while caching only about 5.3% of the full FIB. Overall, the reviewers felt that the contribution of the work is clear, if incremental, and while the proposed algorithm appears obvious in hindsight, it seems to have been overlooked by the community so far.
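The non-overlapping idea can be sketched with Python's stdlib `ipaddress` module (this is an illustration under the invented /8-over-/16 example above, not the paper's on-demand algorithm): carving the more-specific prefix out of the covering prefix leaves a minimal set of disjoint sub-prefixes, each of which can safely be cached with the covering prefix's next hop without hiding anything.

```python
import ipaddress

# Hypothetical prefixes: a covering /8 whose FIB entry is partly overridden
# by a more-specific /16. Excluding the /16 yields disjoint sub-prefixes of
# lengths /9 through /16 that together cover the rest of the /8.
covering = ipaddress.ip_network("10.0.0.0/8")
specific = ipaddress.ip_network("10.1.0.0/16")

fragments = sorted(covering.address_exclude(specific))
print([str(f) for f in fragments])  # 8 disjoint prefixes, /9 through /16

# No fragment overlaps the more-specific prefix, so caching any fragment
# with the /8's next hop can never hide the /16's route.
assert all(not f.overlaps(specific) for f in fragments)
```

In the paper's design such fragments are generated lazily, one per cache miss, rather than materialized all at once, which is what keeps the number of cached non-overlapping prefixes minimal.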