Steve Uhlig

Pushing CDN-ISP collaboration to the limit

By: 
Benjamin Frank, Ingmar Poese, Yin Lin, Georgios Smaragdakis, Anja Feldmann, Bruce Maggs, Jannis Rake, Steve Uhlig, Rick Weber
Appears in: 
CCR July 2013

Today a spectrum of solutions is available for distributing content over the Internet, ranging from commercial CDNs to ISP-operated CDNs to content-provider-operated CDNs to peer-to-peer CDNs. Some deploy servers in just a few large data centers while others deploy in thousands of locations or even on millions of desktops. Recently, major CDNs have formed strategic alliances with large ISPs to provide content delivery network solutions.

Public Review By: 
Fabián E. Bustamante

Content delivery has become the Internet's primary purpose and its main source of traffic. Current statistics are staggering, with Netflix alone being responsible for 30% of the peak traffic in North America. Not surprisingly, the variety of architectural models for distributing this content is rapidly expanding, with approaches that involve, to different degrees, all parties in the content delivery ecosystem – CDNs, content providers, ISPs and end users. This paper presents the design and evaluation of a system – NetPaaS – that enables the collaboration of two key stakeholders – ISPs and CDNs. NetPaaS builds on the authors' previous work on CaTE, expanding the forms of collaboration to the placement of content servers in a network. Reviewers had a number of comments on the paper's early draft, including the apparent simplicity of the system, the potential for information leakage (from CDN to ISP and vice versa), and a lack of novelty when compared with the authors' own and others' related efforts. In the final version, the authors addressed most of the comments – pointing out, for instance, the challenge of incorporating all network updates in a scalable manner, and clarifying that NetPaaS assumes that the information exchange is between trusted parties that have already formed strategic alliances. Reviewers uniformly agreed that even if the ideas put forward are not particularly new, the impressive empirical evaluation of the proposed ideas, leveraging traces from the largest commercial CDN and a large Tier-1 ISP, is a unique and interesting contribution in itself.
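
The server-placement side of that collaboration can be made concrete with a small example. The following Python sketch greedily picks k candidate in-network locations so as to minimize demand-weighted distance from user regions to their closest chosen server; the region names, distances, demands, and the greedy heuristic itself are assumptions made for illustration and are not NetPaaS's actual placement mechanism.

# Illustrative sketch only: greedy selection of candidate in-network server
# locations to minimize demand-weighted distance to user regions.
# Region names, distances, and demands are made up for the example;
# this is not the actual NetPaaS placement algorithm.

def greedy_placement(demand, dist, candidates, k):
    """Pick k candidate locations that minimize total demand-weighted
    distance from each user region to its closest chosen location."""
    chosen = []
    for _ in range(k):
        best, best_cost = None, float("inf")
        for c in candidates:
            if c in chosen:
                continue
            trial = chosen + [c]
            cost = sum(d * min(dist[(r, s)] for s in trial)
                       for r, d in demand.items())
            if cost < best_cost:
                best, best_cost = c, cost
        chosen.append(best)
    return chosen, best_cost

# Toy example: two user regions, three candidate PoPs.
demand = {"region-A": 100, "region-B": 40}
dist = {("region-A", "pop-1"): 1, ("region-A", "pop-2"): 5, ("region-A", "pop-3"): 3,
        ("region-B", "pop-1"): 6, ("region-B", "pop-2"): 1, ("region-B", "pop-3"): 2}

print(greedy_placement(demand, dist, ["pop-1", "pop-2", "pop-3"], k=2))
# -> (['pop-1', 'pop-2'], 140)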

Enabling content-aware traffic engineering

By: 
Ingmar Poese, Benjamin Frank, Georgios Smaragdakis, Steve Uhlig, Anja Feldmann, Bruce Maggs
Appears in: 
CCR October 2012

Today, a large fraction of Internet traffic is originated by Content Delivery Networks (CDNs). To cope with increasing demand for content, CDNs have deployed massively distributed infrastructures. These deployments pose challenges for CDNs as they have to dynamically map end-users to appropriate servers without being fully aware of the network conditions within an Internet Service Provider (ISP) or the end-user location. On the other hand, ISPs struggle to cope with rapid traffic shifts caused by the dynamic server selection policies of the CDNs.

Public Review By: 
Renata Teixeira

Content Distribution Networks (CDNs) dynamically map end-users to servers, triggering rapid shifts in the traffic demands of Internet Service Providers (ISPs). These highly variable traffic demands represent an important challenge to ISPs, which must adapt routing to avoid congestion. At the same time, CDNs struggle to identify the best server for a user without better knowledge of the ISP network. This paper designs and evaluates a system, called Content-aware Traffic Engineering (CaTE), that allows CDNs and ISPs to cooperate to perform better server selection and traffic engineering. The idea is that the CDN gives a list of potential servers for a user's request and the ISP returns a ranking of these servers that optimizes both content delivery performance and link utilization. Hence, instead of tuning the routing matrix to perform traffic engineering, which can lead to transient traffic disruptions, CaTE directly changes traffic demands through server selection. A key advantage of this approach is that no sensitive information flows between CDNs and ISPs. Reviewers had numerous comments on the submitted version of this paper. The main concern was the novelty of the solution compared to other proposals for CDN/ISP cooperation (for instance, the P4P proposal). In this revised version, the authors explain that it is not the proposal of CDN/ISP cooperation per se that is novel, but the fact that this cooperation is done without the exchange of sensitive information, with low overhead, and on small time scales. Another concern from reviewers was that the authors did not compare CaTE with other existing schemes. Although the authors accurately argue that the other schemes require changing routing weights, the end goal is similar. A quantitative comparison would allow operators of CDNs and ISPs to better weigh the tradeoffs among content delivery performance, link utilization, routing disruptions, and privacy. In the end, all reviewers recognized that the paper addresses an interesting and current problem. They also appreciated the impressive system design and implementation, and the evaluation using traces from tens of thousands of DSL lines.
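
The ranking step the review describes can be illustrated with a short Python sketch: given the candidate servers proposed by the CDN for one request, the ISP orders them by a score combining path delay and the utilization of the most loaded link on the path. The metrics, the weights, and the server names are assumptions made for the example, not CaTE's actual optimization.

# Illustrative sketch only: the ISP-side ranking step described above.
# Given candidate servers proposed by the CDN for one request, order them
# by a score combining path delay and the utilization of the most loaded
# link on the path. The metrics and the weighting are assumptions for the
# example, not CaTE's actual optimization.

def rank_candidates(candidates, path_delay_ms, max_link_util, w_delay=0.5, w_util=0.5):
    """Return candidate servers sorted from most to least preferred."""
    def score(server):
        # Lower delay and lower bottleneck utilization are both better.
        return w_delay * path_delay_ms[server] + w_util * 100 * max_link_util[server]
    return sorted(candidates, key=score)

# Toy example: three candidate servers inside (or close to) the ISP.
candidates = ["srv-1", "srv-2", "srv-3"]
path_delay_ms = {"srv-1": 12.0, "srv-2": 8.0, "srv-3": 9.5}
max_link_util = {"srv-1": 0.40, "srv-2": 0.85, "srv-3": 0.30}

print(rank_candidates(candidates, path_delay_ms, max_link_util))
# -> ['srv-3', 'srv-1', 'srv-2']: srv-2 has the lowest delay but sits behind
#    a heavily loaded link, so it is ranked last.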

Leveraging Zipf's law for traffic offloading

By: 
Nadi Sarrar, Steve Uhlig, Anja Feldmann, Rob Sherwood, Xin Huang
Appears in: 
CCR January 2012

Internet traffic has Zipf-like properties at multiple aggregation levels. These properties suggest the possibility of offloading most of the traffic from a complex controller (e.g., a software router) to a simple forwarder (e.g., a commodity switch), by letting the forwarder handle a very limited set of flows: the heavy hitters. As the volume of traffic from a set of flows is highly dynamic, maintaining a reliable set of heavy hitters over time is challenging.

Public Review By: 
Jia Wang

This paper presents a router architecture that consists of a complex software-based controller and a simple fast packet forwarder. By leveraging the Zipf-like properties of Internet traffic, the authors propose to improve router performance by passing a small number of heavy-hitter traffic flows to the fast packet forwarder, thereby relieving the software controller of most packets. The idea itself is very simple. The authors show that the top 1000 heavy-hitter prefixes capture over 50% of the traffic. The controller can easily be configured to offload these top heavy-hitter flows to the forwarder. However, due to the churn in heavy-hitter flows, the real challenge in employing this approach is how to select the set of heavy hitters so as to minimize that churn. To overcome this problem, the paper proposes a heavy-hitter selection strategy, Traffic-aware Flow Offloading (TFO). TFO keeps track of traffic statistics at multiple time scales and uses them in the heavy-hitter selection process in order to maintain a high offloading ratio while limiting the changes to the set of heavy hitters. The paper uses simulation on real traffic traces to evaluate TFO and compares it with traditional caching strategies and a bin-optimal strategy. The results suggest that TFO can achieve offloading effectiveness similar to the bin-optimal strategy, but with an order-of-magnitude smaller churn ratio. The paper also shows that TFO outperforms LRU and LFU in terms of churn ratio when a small number of heavy hitters is selected. Though the core idea presented in this paper is simple, I find the paper interesting because it takes one step beyond simply exploring the Zipf-like properties of Internet traffic and focuses on solving the real challenge of selecting heavy hitters with a limited churn ratio. The preliminary results show that TFO can be a promising solution. Clearly there are many interesting tradeoffs involved in TFO that need careful evaluation. For example, the parameters used in TFO are not thoroughly evaluated. In addition, it would be useful to show how TFO performs under a more diverse set of traffic loads and behavior patterns. Overall, the paper presents an interesting idea and is well written. A more thorough evaluation of TFO would further strengthen the paper.
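
The selection problem the review describes, keeping the offloading ratio high while limiting churn in the offloaded set, can be sketched in Python along the following lines. The smoothing, hysteresis margin, and table size below are arbitrary assumptions made for illustration and do not reflect TFO's actual parameters or algorithm.

# Illustrative sketch only: selecting heavy-hitter prefixes to offload while
# limiting churn. Per-prefix volume is smoothed with an exponentially weighted
# moving average, and an installed prefix is only evicted when a challenger
# exceeds it by a hysteresis margin. The smoothing, margin, and table size are
# assumptions for the example, not the actual TFO parameters.

class HeavyHitterSelector:
    def __init__(self, table_size=1000, alpha=0.3, margin=1.2):
        self.table_size = table_size   # number of prefixes the forwarder can hold
        self.alpha = alpha             # EWMA weight of the newest measurement bin
        self.margin = margin           # challenger must beat the incumbent by 20%
        self.ewma = {}                 # prefix -> smoothed bytes per bin
        self.installed = set()         # prefixes currently offloaded

    def update(self, bin_bytes):
        """bin_bytes: {prefix: bytes seen in the latest measurement bin}."""
        for p in set(self.ewma) | set(bin_bytes):
            self.ewma[p] = (1 - self.alpha) * self.ewma.get(p, 0.0) \
                           + self.alpha * bin_bytes.get(p, 0.0)

        ranked = sorted(self.ewma, key=self.ewma.get, reverse=True)
        target = set(ranked[:self.table_size])

        # Fill free forwarder slots with the top-ranked prefixes.
        for p in ranked:
            if len(self.installed) >= self.table_size:
                break
            if p in target:
                self.installed.add(p)

        # Evict an incumbent only if an uninstalled prefix clearly beats it.
        incumbents = sorted(self.installed, key=self.ewma.get)
        challengers = [p for p in ranked if p not in self.installed]
        for weak, strong in zip(incumbents, challengers):
            if self.ewma[strong] > self.margin * self.ewma.get(weak, 0.0):
                self.installed.discard(weak)
                self.installed.add(strong)
        return self.installed

# Toy usage: feed two measurement bins and inspect the offloaded set.
sel = HeavyHitterSelector(table_size=2)
print(sel.update({"10.0.0.0/8": 900, "10.1.0.0/16": 500, "10.2.0.0/16": 100}))
print(sel.update({"10.2.0.0/16": 2000, "10.1.0.0/16": 450}))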

IP Geolocation Databases: Unreliable?

By: 
Ingmar Poese, Steve Uhlig, Mohamed Ali Kaafar, Benoit Donnet, and Bamba Gueye
Appears in: 
CCR April 2011

The most widely used technique for IP geolocation consists of building a database that maps IP blocks to geographic locations. Several such databases are available and are frequently used by many services and web sites on the Internet. Contrary to widespread belief, geolocation databases are far from being as reliable as they claim. In this paper, we conduct a comparison of several current geolocation databases, both commercial and free, to gain insight into the limitations of their usability.
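
The block-based mapping these databases rely on can be illustrated with a few lines of Python: blocks are stored as address ranges and queried with a binary search. The entries below are fabricated (documentation-reserved prefixes) and are only meant to show the lookup structure, not to reflect any real database or its accuracy.

# Illustrative sketch only: how a block-based geolocation database is typically
# queried. Blocks are stored as (first address, last address, location) ranges
# and looked up with a binary search. The entries are fabricated for the example.

import bisect
import ipaddress

# Sorted, non-overlapping IPv4 blocks with an associated location.
blocks = [
    (int(ipaddress.ip_address("192.0.2.0")),    int(ipaddress.ip_address("192.0.2.255")),    "City-A"),
    (int(ipaddress.ip_address("198.51.100.0")), int(ipaddress.ip_address("198.51.100.255")), "City-B"),
    (int(ipaddress.ip_address("203.0.113.0")),  int(ipaddress.ip_address("203.0.113.255")),  "City-C"),
]
starts = [b[0] for b in blocks]

def geolocate(ip):
    """Return the location of the block containing ip, or None."""
    addr = int(ipaddress.ip_address(ip))
    i = bisect.bisect_right(starts, addr) - 1
    if i >= 0 and blocks[i][0] <= addr <= blocks[i][1]:
        return blocks[i][2]
    return None

print(geolocate("198.51.100.42"))  # -> City-B
print(geolocate("192.0.3.1"))      # -> None (address not covered by any block)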

Modeling Internet Topology Dynamics

By: 
Hamed Haddadi, Steve Uhlig, Andrew Moore, Richard Mortier, and Miguel Rio
Appears in: 
CCR April 2008

Despite the large number of papers on network topology modeling and inference, there still exists ambiguity about the real nature of the Internet's AS- and router-level topologies. While recent findings have illustrated the inaccuracies in maps inferred from BGP peering and traceroute measurements, existing topology models still produce static topologies, relying on simplistic assumptions about power-law observations and preferential attachment.
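
For readers unfamiliar with the baseline being criticized, the following Python sketch is a minimal static preferential-attachment generator: each new node attaches to m existing nodes chosen with probability proportional to their current degree. The parameters and seed are arbitrary; this illustrates the class of simplistic models the abstract refers to, not the authors' proposal.

# Illustrative sketch only: a minimal preferential-attachment generator of the
# static kind the abstract argues is too simplistic for AS- or router-level
# topologies. Each new node attaches to m existing nodes chosen with
# probability proportional to their current degree.

import random

def preferential_attachment(n, m, seed=0):
    random.seed(seed)
    edges = [(0, 1)]                 # start from a single edge
    degree_list = [0, 1]             # node id repeated once per incident edge
    for new in range(2, n):
        targets = set()
        while len(targets) < min(m, new):
            targets.add(random.choice(degree_list))  # degree-biased choice
        for t in targets:
            edges.append((new, t))
            degree_list.extend([new, t])
    return edges

# Toy usage: a 10-node graph where every new node brings 2 edges.
print(preferential_attachment(10, 2))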

In search for an appropriate granularity to model routing policies

By: 
Wolfgang Mühlbauer, Steve Uhlig, Bingjie Fu, Mickael Meulle, and Olaf Maennel
Appears in: 
CCR October 2007

Routing policies are typically partitioned into a few classes that capture the most common practices in use today [1]. Unfortunately, it is known that the reality of routing policies [2] and peering relationships is far more complex than those few classes [1,3]. We take the next step of searching for the appropriate granularity at which policies should be modeled.
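
The "few classes" mentioned above usually correspond to customer, peer, and provider relationships. As an illustration of the coarse granularity such classes imply, the following Python sketch encodes the standard export rule associated with them: routes learned from customers are announced to everyone, while routes learned from peers or providers are announced only to customers. The helper and labels are assumptions for the example, not the model studied in the paper.

# Illustrative sketch only: the coarse-grained policy classes referred to above,
# expressed as the standard export rule for customer / peer / provider
# relationships. Routes learned from a customer are announced to everyone;
# routes learned from a peer or a provider are announced only to customers.

def export_route(learned_from, announce_to):
    """Return True if a route learned over one relationship may be
    announced over another, under the common three-class policy."""
    if learned_from == "customer":
        return True
    # Peer- and provider-learned routes are only passed down to customers.
    return announce_to == "customer"

for learned in ("customer", "peer", "provider"):
    for to in ("customer", "peer", "provider"):
        print(f"learned from {learned:8s} -> announce to {to:8s}: "
              f"{export_route(learned, to)}")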
