Anja Feldmann

Programming the Home and Enterprise WiFi with OpenSDWN

By: 
Julius Schulz-Zander, Carlos Mayer, Bogdan Ciobotaru, Stefan Schmid, Anja Feldmann, Roberto Riggio
Appears in: 
CCR August 2015

The quickly growing demand for wireless networks and the numerous application-specific requirements stand in stark contrast to today's inflexible management and operation of WiFi networks. In this paper, we present and evaluate OpenSDWN, a novel WiFi architecture based on an SDN/NFV approach. OpenSDWN exploits datapath programmability to enable service differentiation and fine-grained transmission control, facilitating the prioritization of critical applications.
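The abstract's notion of per-flow service differentiation can be pictured with a minimal, self-contained sketch. The following Python snippet is not the OpenSDWN API; the FlowRule structure, the port-based classifier, and the priority and rate-limit values are illustrative assumptions, meant only to show what a controller pushing prioritized per-flow rules to a programmable WiFi datapath might look like.

```python
# Minimal sketch (not the OpenSDWN API): a controller maps application
# classes to transmission priorities and builds per-flow rules for a
# programmable WiFi datapath. All names and values are hypothetical.

from dataclasses import dataclass

# Hypothetical priority levels for service differentiation (0 = highest).
PRIORITIES = {"voip": 0, "video": 1, "web": 2, "bulk": 3}


@dataclass(frozen=True)
class FlowRule:
    """A per-flow transmission rule destined for a WiFi datapath element."""
    src_ip: str
    dst_ip: str
    dst_port: int
    priority: int          # queue/priority class on the access point
    rate_limit_kbps: int   # 0 means unlimited


def classify(dst_port: int) -> str:
    """Toy application classifier based on destination port (illustrative only)."""
    if dst_port == 5060:
        return "voip"
    if dst_port in (1935, 8554):
        return "video"
    if dst_port in (80, 443):
        return "web"
    return "bulk"


def build_rule(src_ip: str, dst_ip: str, dst_port: int) -> FlowRule:
    """Derive a prioritized per-flow rule for the datapath."""
    app_class = classify(dst_port)
    return FlowRule(
        src_ip=src_ip,
        dst_ip=dst_ip,
        dst_port=dst_port,
        priority=PRIORITIES[app_class],
        # Critical applications run unconstrained; bulk traffic gets capped.
        rate_limit_kbps=0 if app_class in ("voip", "video") else 10_000,
    )


if __name__ == "__main__":
    # A VoIP flow gets the highest priority; a BitTorrent-like flow is rate limited.
    print(build_rule("192.0.2.10", "198.51.100.5", 5060))
    print(build_rule("192.0.2.10", "203.0.113.7", 6881))
```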

Pushing CDN-ISP collaboration to the limit

By: 
Benjamin Frank, Ingmar Poese, Yin Lin, Georgios Smaragdakis, Anja Feldmann, Bruce Maggs, Jannis Rake, Steve Uhlig, Rick Weber
Appears in: 
CCR July 2013

Today a spectrum of solutions is available for distributing content over the Internet, ranging from commercial CDNs to ISP-operated CDNs to content-provider-operated CDNs to peer-to-peer CDNs. Some deploy servers in just a few large data centers while others deploy in thousands of locations or even on millions of desktops. Recently, major CDNs have formed strategic alliances with large ISPs to provide content delivery network solutions.

Public Review By: 
Fabián E. Bustamante

Content delivery has become the Internet's primary purpose and its main source of traffic. Current statistics are staggering, with Netflix alone being responsible for 30% of the peak traffic in North America. Not surprisingly, the variety of architectural models for distributing this content is rapidly expanding, with approaches that involve, to different degrees, all parties in the content delivery ecosystem: CDNs, content providers, ISPs and end users. This paper presents the design and evaluation of a system – NetPaaS – to enable the collaboration of two key stakeholders – ISPs and CDNs. NetPaaS builds on the authors' previous work on CaTE, expanding the forms of collaboration to the placement of content servers in a network. Reviewers had a number of comments on the paper's early draft, including the apparent simplicity of the system, the potential for information leakage (from CDN to ISP and vice versa) and a lack of novelty when compared with the authors' own and others' related efforts. In the final version, the authors addressed most of the comments – pointing out, for instance, the challenge of incorporating all network updates in a scalable manner, and clarifying that NetPaaS assumes that the information exchange is between trusted parties that have already formed strategic alliances. Reviewers uniformly agreed that even if the ideas put forward are not particularly new, the impressive empirical evaluation of the proposed ideas, leveraging traces from the largest commercial CDN and a large tier-1 ISP, is a unique and interesting contribution in itself.
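To make the placement-oriented collaboration more concrete, here is a small Python sketch of a greedy placement decision over ISP-suggested candidate locations. This is not the NetPaaS algorithm; the candidate capacities, per-region distances, and the greedy score are invented purely for illustration.

```python
# Minimal sketch of the collaboration idea (not NetPaaS itself): the ISP
# exposes candidate hosting locations with spare capacity and a network
# "distance" to each demand region; the CDN greedily picks locations that
# cover the most demand. All data and the scoring rule are assumptions.

# candidate location -> (spare server capacity in requests/s, {region: distance})
CANDIDATES = {
    "pop-a": (500, {"region-1": 1, "region-2": 4}),
    "pop-b": (300, {"region-1": 3, "region-2": 1}),
    "pop-c": (200, {"region-1": 2, "region-2": 2}),
}

DEMAND = {"region-1": 400, "region-2": 250}  # requests/s per region


def greedy_placement(candidates, demand, budget=2):
    """Pick up to `budget` locations, preferring those that can absorb much
    demand at low network distance (a toy stand-in for the real optimization)."""
    remaining = dict(demand)
    chosen = []
    for _ in range(budget):
        best, best_score = None, 0.0
        for loc, (capacity, dist) in candidates.items():
            if loc in chosen:
                continue
            # Demand this location could absorb, discounted by distance.
            score = sum(min(capacity, remaining[r]) / dist[r] for r in remaining)
            if score > best_score:
                best, best_score = loc, score
        if best is None:
            break
        chosen.append(best)
        # Assign demand to the new location, nearest regions first.
        capacity = candidates[best][0]
        for r in sorted(remaining, key=lambda r: candidates[best][1][r]):
            served = min(capacity, remaining[r])
            remaining[r] -= served
            capacity -= served
    return chosen


if __name__ == "__main__":
    print(greedy_placement(CANDIDATES, DEMAND))
```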

Enabling content-aware traffic engineering

By: 
Ingmar Poese, Benjamin Frank, Georgios Smaragdakis, Steve Uhlig, Anja Feldmann, Bruce Maggs
Appears in: 
CCR October 2012

Today, a large fraction of Internet traffic is originated by Content Delivery Networks (CDNs). To cope with increasing demand for content, CDNs have deployed massively distributed infrastructures. These deployments pose challenges for CDNs as they have to dynamically map end-users to appropriate servers without being fully aware of the network conditions within an Internet Service Provider (ISP) or the end-user location. On the other hand, ISPs struggle to cope with rapid traffic shifts caused by the dynamic server selection policies of the CDNs.

Public Review By: 
Renata Teixeira

Content Distribution Networks (CDNs) dynamically map end-users to servers, triggering rapid shifts in the traffic demands of Internet Service Providers (ISPs). These highly variable traffic demands represent an important challenge to ISPs, which must adapt routing to avoid congestion. At the same time, CDNs face challenges in identifying the best server for a user without better knowledge of the ISP network. This paper designs and evaluates a system, called Content-aware Traffic Engineering (CaTE), that allows CDNs and ISPs to cooperate to perform better server selection and traffic engineering. The idea is that the CDN gives a list of potential servers for a user's request and the ISP returns a ranking of these servers to optimize both content delivery performance and link utilization. Hence, instead of tuning the routing matrix to perform traffic engineering, which can lead to transient traffic disruptions, CaTE directly changes traffic demands through server selection. A key advantage of this approach is that no sensitive information flows between CDNs and ISPs. Reviewers had numerous comments on the submitted version of this paper. The main concern was the novelty of the solution compared to other proposals for CDN/ISP cooperation (for instance, the P4P proposal). In this revised version, the authors explain that it is not the proposal of CDN/ISP cooperation per se that is novel, but the fact that this cooperation is done without the exchange of sensitive information, with low overhead, and on small time scales. Another concern from reviewers was that the authors didn't compare CaTE with other existing schemes. Although the authors accurately argue that the other schemes require changing routing weights, the end goal is similar. A quantitative comparison would allow operators of CDNs and ISPs to better weigh the tradeoffs among content delivery performance, link utilization, routing disruptions, and privacy. In the end, all reviewers recognized that the paper addresses an interesting and current problem. They also appreciated the impressive system design/implementation, and the evaluation using traces from tens of thousands of DSL lines.
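The server-ranking exchange described above lends itself to a short sketch. The Python snippet below is not the paper's implementation; the path statistics, the scoring function, and the alpha trade-off are assumptions, chosen only to show how an ISP could return an ordering of CDN-supplied candidates without exposing its internal data.

```python
# Minimal sketch of the CaTE-style exchange (illustrative, not the paper's
# system): the CDN sends candidate server IPs for a request, and the ISP
# returns them ranked by a score combining path delay and the utilization of
# the most loaded link on the path. All values below are made up.

# ISP-internal view: candidate server -> (path delay in ms, max link utilization 0..1)
PATH_INFO = {
    "10.0.1.5": (12.0, 0.85),
    "10.0.7.9": (18.0, 0.30),
    "10.0.3.2": (25.0, 0.55),
}


def rank_candidates(candidates, path_info, alpha=0.5):
    """Return candidate servers ordered from best to worst.

    alpha trades off delay against link utilization; only the ordering is
    returned, so no internal topology details leak to the CDN."""
    def score(server):
        delay, util = path_info[server]
        return alpha * (delay / 100.0) + (1 - alpha) * util

    return sorted(candidates, key=score)


if __name__ == "__main__":
    request_candidates = ["10.0.1.5", "10.0.7.9", "10.0.3.2"]
    print(rank_candidates(request_candidates, PATH_INFO))
    # The CDN then picks the highest-ranked server that suits its own constraints.
```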

Leveraging Zipf's law for traffic offloading

By: 
Nadi Sarrar, Steve Uhlig, Anja Feldmann, Rob Sherwood, Xin Huang
Appears in: 
CCR January 2012

Internet traffic has Zipf-like properties at multiple aggregation levels. These properties suggest the possibility of offloading most of the traffic from a complex controller (e.g., a software router) to a simple forwarder (e.g., a commodity switch), by letting the forwarder handle a very limited set of flows: the heavy hitters. As the volume of traffic from a set of flows is highly dynamic, maintaining a reliable set of heavy hitters over time is challenging.

Public Review By: 
Jia Wang

This paper presents a router architecture design that consists of a complex software-based controller and a simple fast packet forwarder. By leveraging the Zipf-like properties of Internet traffic, the authors propose to improve router performance by passing a small number of heavy-hitter flows to the fast packet forwarder and hence offloading most packets from the software controller. The idea itself seems very simple. The authors show that the top 1000 heavy-hitter prefixes capture over 50% of the traffic, and the controller can easily be configured to offload these flows to the forwarder. However, due to the churn in heavy-hitter flows, the real challenge in employing this approach is how to select the set of heavy hitters so as to minimize that churn. To overcome this problem, the paper proposes a heavy-hitter selection strategy – Traffic-aware Flow Offloading (TFO). TFO tracks traffic statistics at multiple time scales and uses them in the heavy-hitter selection process in order to maintain a high offloading ratio while limiting changes to the set of heavy hitters. The paper uses simulation on real traffic traces to evaluate TFO and compares it with traditional caching strategies and a bin-optimal scheme. The results suggest that TFO achieves offloading effectiveness similar to the bin-optimal scheme, but with an order of magnitude smaller churn ratio. The paper also shows that TFO outperforms LRU and LFU in terms of churn ratio when a small number of heavy hitters is selected. Though the core idea presented in this paper is simple, I find the paper interesting because it goes one step beyond simply exploiting the Zipf-like properties of Internet traffic and focuses on the real challenge of selecting heavy hitters with a limited churn ratio. The preliminary results show that TFO can be a promising solution. Clearly there are many interesting tradeoffs involved in TFO that need careful evaluation. For example, the parameters used in TFO are not thoroughly evaluated. In addition, it would be useful to show how TFO performs under a more diverse set of traffic loads and behavior patterns. Overall, the paper presents an interesting idea and is well written. A more thorough evaluation of TFO would further strengthen it.
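For a concrete picture of the selection problem, here is a minimal Python sketch in the spirit of TFO, not the paper's actual algorithm: the two-time-scale bookkeeping, the slack parameter, and the rule for keeping incumbents are simplified assumptions that only illustrate how offloading ratio can be traded against churn.

```python
# Toy two-time-scale heavy-hitter selection with churn limiting (illustrative;
# not the TFO algorithm from the paper). Parameter values are assumptions.

from collections import defaultdict


class HeavyHitterSelector:
    def __init__(self, top_k=1000, short_weight=0.7, slack=1.5):
        self.top_k = top_k
        self.short_weight = short_weight  # weight of the most recent window
        self.slack = slack                # incumbents survive while in the top k*slack
        self.short = defaultdict(int)     # bytes seen in the current (short) window
        self.long = defaultdict(float)    # exponentially aged long-term byte count
        self.offloaded = set()            # prefixes currently handled by the forwarder

    def record(self, prefix, nbytes):
        """Account traffic for a prefix within the current window."""
        self.short[prefix] += nbytes

    def end_of_window(self):
        """Close the window, age long-term stats, revise the offloaded set,
        and return the churn (number of prefixes that entered or left)."""
        for prefix, b in self.short.items():
            self.long[prefix] = 0.5 * self.long[prefix] + b

        def score(p):
            return (self.short_weight * self.short.get(p, 0)
                    + (1 - self.short_weight) * self.long[p])

        ranked = sorted(self.long, key=score, reverse=True)
        # Incumbents are kept as long as they stay reasonably close to the top ...
        keep_zone = set(ranked[: int(self.top_k * self.slack)])
        new_set = {p for p in self.offloaded if p in keep_zone}
        # ... and free slots are filled from the top of the ranking.
        for p in ranked:
            if len(new_set) >= self.top_k:
                break
            new_set.add(p)

        churn = len(new_set ^ self.offloaded)
        self.offloaded = new_set
        self.short.clear()
        return churn


if __name__ == "__main__":
    sel = HeavyHitterSelector(top_k=2)
    for prefix, nbytes in [("10.0.0.0/8", 900), ("192.0.2.0/24", 700), ("198.51.100.0/24", 100)]:
        sel.record(prefix, nbytes)
    print(sel.end_of_window(), sorted(sel.offloaded))
```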

Dagstuhl Perspectives Workshop on End-to-End Protocols for the Future Internet

By: 
Jari Arkko, Bob Briscoe, Lars Eggert, Anja Feldmann, and Mark Handley
Appears in: 
CCR April 2009

This article summarises the presentations and discussions during a workshop on end-to-end protocols for the future Internet in June 2008. The aim of the workshop was to establish a dialogue at the interface between two otherwise fairly distinct communities working on future Internet protocols: those developing internetworking functions and those developing end-to-end transport protocols. The discussion established near-consensus on some of the open issues, such as the preferred placement of traffic engineering functionality, whereas other questions remained controversial.

Enriching Network Security Analysis with Time Travel

By: 
Gregor Maier, Robin Sommer, Holger Dreger, Anja Feldmann, Vern Paxson, and Fabian Schneider
Appears in: 
CCR October 2008

In many situations it can be enormously helpful to archive the raw contents of a network traffic stream to disk, to enable later inspection of activity that becomes interesting only in retrospect. We present a Time Machine (TM) for network traffic that provides such a capability. The TM leverages the heavy-tailed nature of network flows to capture nearly all of the likely-interesting traffic while storing only a small fraction of the total volume.
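One simple way to exploit heavy-tailed flow sizes is a per-connection cutoff: record each connection in full up to a byte budget and discard the rest, which removes most of the volume while keeping the beginning of nearly every connection. The sketch below illustrates that idea; the cutoff value, packet representation, and helper function are assumptions for illustration, not the Time Machine's code.

```python
# Minimal sketch of the per-connection cutoff idea (illustrative, not the
# Time Machine implementation): since flow sizes are heavy-tailed, keeping
# only the first N bytes of each connection preserves most connections in
# full while discarding the bulk of the volume, which sits in a few huge flows.

CUTOFF_BYTES = 15_000  # per-connection cutoff; the real value is a policy choice


def filter_packets(packets, cutoff=CUTOFF_BYTES):
    """Yield only the packets that fall within the first `cutoff` bytes of
    their connection. `packets` is an iterable of (conn_id, payload_len)."""
    seen = {}  # conn_id -> bytes already recorded
    for conn_id, length in packets:
        recorded = seen.get(conn_id, 0)
        if recorded < cutoff:
            seen[conn_id] = recorded + length
            yield conn_id, length


if __name__ == "__main__":
    # One short connection (kept entirely) and one bulk transfer (truncated).
    trace = [("a", 1_000)] * 5 + [("b", 1_500)] * 100
    kept = list(filter_packets(trace))
    kept_bytes = sum(l for _, l in kept)
    total_bytes = sum(l for _, l in trace)
    print(f"kept {len(kept)} of {len(trace)} packets, "
          f"{kept_bytes}/{total_bytes} bytes")
```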

Internet Clean-Slate Design: What and Why?

By: 
Anja Feldmann
Appears in: 
CCR July 2007

Many believe that it is impossible to resolve the challenges facing today’s Internet without rethinking the fundamental assumptions and design decisions underlying its current architecture. Therefore, a major research effort has been initiated on the topic of Clean Slate Design of the Internet’s architecture. In this paper we first give an overview of the challenges that a future Internet has to address and then discuss approaches for finding possible solutions, including Clean Slate Design.

Can ISPs and P2P Users Cooperate for Improved Performance?

By: 
Vinay Aggarwal, Anja Feldmann, and Christian Scheideler
Appears in: 
CCR July 2007

Peer-to-peer (P2P) systems, which are realized as overlays on top of the underlying Internet routing architecture, contribute a significant portion of today’s Internet traffic. While the P2P users are a good source of revenue for the Internet Service Providers (ISPs), the immense P2P traffic also poses a significant traffic engineering challenge to the ISPs.

Public Review By: 
Michalis Faloutsos

This paper addresses the antagonistic relationship between overlay/p2p networks and ISPs: both try to manage and control traffic, at different levels and with different goals, but in a way that inevitably leads to overlapping, duplicated, and conflicting behavior. The creation of a p2p network and the routing at the p2p layer ultimately tread on the routing functions of ISPs. The paper proposes a solution to develop a synergistic relationship between p2p networks and ISPs: ISPs maintain an “oracle” that helps p2p networks make better choices when picking neighboring nodes. The solution provides benefits to both parties. ISPs become able to influence the p2p decisions, and ultimately the amount of traffic that flows in and out of their network, while p2p networks get performance information for “free.” The reviewers find that the problem is important and the solution is interesting and shows promise. An advantage of the method is that ISPs do not run into legal issues, since they do not engage in caching of potentially illegal content; they just provide performance information.
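The oracle interaction can be sketched in a few lines: the p2p client submits candidate peer addresses, and the ISP returns them in its preferred order. The Python snippet below is purely illustrative; the prefix list, distance table, and ranking key are invented, and the actual oracle interface in the paper may differ.

```python
# Minimal sketch of an ISP "oracle" for p2p neighbor selection (illustrative,
# not the paper's protocol): the client sends candidate peers, and the ISP
# returns them ordered by its preference, e.g. peers inside the ISP first,
# then by increasing topological distance. The data below is made up.

import ipaddress

# ISP-internal knowledge: prefixes inside the ISP and rough distances otherwise.
INTERNAL_PREFIXES = [ipaddress.ip_network("203.0.113.0/24")]
EXTERNAL_DISTANCE = {"198.51.100.7": 3, "192.0.2.44": 5}  # e.g. AS-path hops


def oracle_rank(candidate_peers):
    """Order candidate peers by ISP preference without revealing topology details."""
    def key(peer):
        addr = ipaddress.ip_address(peer)
        internal = any(addr in net for net in INTERNAL_PREFIXES)
        # Internal peers first (0), then external peers by distance.
        return (0, 0) if internal else (1, EXTERNAL_DISTANCE.get(peer, 10))

    return sorted(candidate_peers, key=key)


if __name__ == "__main__":
    # The p2p client then connects to peers near the top of the returned list.
    print(oracle_rank(["192.0.2.44", "203.0.113.9", "198.51.100.7"]))
```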
