Mark Handley

A Platform for High Performance and Flexible Virtual Routers on Commodity Hardware

By: 
Norbert Egi, Adam Greenhalgh, Mark Handley, Mickael Hoerdt, Felipe Huici, Laurent Mathy, and Panagiotis Papadimitriou
Appears in: 
CCR January 2010

Multi-core CPUs, along with recent advances in memory and buses, render commodity hardware a strong candidate for software router virtualization. In this context, we present the design of a new platform for virtual routers on modern PC hardware. We further discuss our design choices in order to achieve both high performance and flexibility for packet processing.

ROAR: Increasing the Flexibility and Performance of Distributed Search

By: 
Costin Raiciu, Felipe Huici, Mark Handley, and David S. Rosenblum
Appears in: 
CCR October 2009

To search the web quickly, search engines partition the web index over many machines, and consult every partition when answering a query. To increase throughput, replicas are added for each of these machines. The key parameter of these algorithms is the trade-off between replication and partitioning: increasing the partitioning level improves query completion time since more servers handle the query, but may incur non-negligible startup costs for each subquery. Finding the right operating point and adapting to it can significantly improve performance and reduce costs.
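To make the trade-off concrete, here is a toy latency model in Python. The additive cost form and the numbers are illustrative assumptions, not figures or formulas from the ROAR paper: each subquery pays a fixed startup cost but processes only a 1/p share of the index, so latency falls with the partition count p until startup overhead dominates.

```python
# Toy model of the partitioning/replication trade-off described above.
# The cost form and parameter values are illustrative assumptions,
# not figures from the ROAR paper.

def query_latency(p, total_work=1.0, startup=0.005):
    """Approximate completion time of a query split into p subqueries.

    Each of the p subqueries pays a fixed startup cost but processes
    only 1/p of the index, so more partitions mean less work per
    server at the price of more aggregate startup overhead.
    """
    return startup + total_work / p

for p in (1, 4, 16, 64, 256):
    print(f"partitions={p:4d}  latency={query_latency(p) * 1000:8.2f} ms")
```

Running this shows latency improvements flattening out as p grows, which is exactly why finding and adapting to the right operating point matters.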

Dagstuhl Perspectives Workshop on End-to-End Protocols for the Future Internet

By: 
Jari Arkko, Bob Briscoe, Lars Eggert, Anja Feldmann, and Mark Handley
Appears in: 
CCR April 2009

This article summarises the presentations and discussions during a workshop on end-to-end protocols for the future Internet in June 2008. The aim of the workshop was to establish a dialogue at the interface between two otherwise fairly distinct communities working on future Internet protocols: those developing internetworking functions and those developing end-to-end transport protocols. The discussion established near-consensus on some of the open issues, such as the preferred placement of traffic engineering functionality, whereas other questions remained controversial.

Flow Processing and the Rise of Commodity Network Hardware

By: 
Adam Greenhalgh, Felipe Huici, Mickael Hoerdt, Panagiotis Papadimitriou, Mark Handley, and Laurent Mathy
Appears in: 
CCR April 2009

The Internet has seen a proliferation of specialized middlebox devices that carry out crucial network functionality such as load balancing, packet inspection and intrusion detection. Recent advances in CPU power, memory, buses and network connectivity have turned commodity PC hardware into a powerful network platform. Furthermore, commodity switch technologies have recently emerged offering the possibility to control the switching of flows in a fine-grained manner.

Public Review By: 
Chadi Barakat

Network functionalities such as intrusion detection and load balancing are often implemented in specialized, expensive middleboxes deployed inside the network. But with the advent of commodity hardware and programmable network switches, it is time to think about leveraging these new, cheap resources to support the same functionalities at lower cost without compromising efficiency. This is the same spirit in which software radio, virtual machines, and virtual routers were introduced. Implementing network functionalities in a software environment has the further advantage of making them easily manageable and extensible to other applications (on software timescales).
The architecture introduced in this paper is called Flowstream. It proposes implementing network functionalities in virtualized machines/servers/routers running on top of commodity PCs, with the flow of traffic among these virtual network entities controlled by a programmable network switch implementing OpenFlow. The paper motivates the problem, discusses the architecture and its main components, and describes some potential applications. Even though there are no validation results, all reviewers appreciated the idea and agreed that it will trigger discussion among CCR readers and the wider networking community. This is a new research area involving several trade-offs (technical vs. economical, reliability vs. programmability) that need to be clearly understood and evaluated.
Programmable flow forwarding using OpenFlow has already been proposed in an operating-system context, for example in the NOX architecture, which appeared as an editorial note in the CCR July 2008 issue. The novelty of this paper is in combining flow forwarding and virtualization to replace network middlebox functionality.
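As a rough illustration of the flow steering the review describes, the sketch below models an OpenFlow-style flow table as plain Python data: match fields map to the switch port behind the virtual machine implementing the desired function. The field names, port numbers, and lookup logic are assumptions made for exposition; this is not the OpenFlow wire protocol or any actual Flowstream API.

```python
# Conceptual sketch of how a Flowstream-style controller might steer
# flows through processing modules via a programmable switch. All
# field names and port numbers are illustrative assumptions.

FLOW_TABLE = [
    # (match fields, action): matching flows are sent to the switch
    # port behind the VM that implements the needed function.
    ({"ip_proto": 6, "tcp_dst": 80}, {"output_port": 3}),  # HTTP -> load balancer VM
    ({"ip_proto": 6, "tcp_dst": 25}, {"output_port": 4}),  # SMTP -> spam filter VM
]

def lookup(packet_fields):
    """Return the action for the first matching flow entry, if any."""
    for match, action in FLOW_TABLE:
        if all(packet_fields.get(k) == v for k, v in match.items()):
            return action
    return {"output_port": "default"}  # unmatched traffic takes the normal path

print(lookup({"ip_proto": 6, "tcp_dst": 80}))  # {'output_port': 3}
```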

The Resource Pooling Principle

By: 
Damon Wischik, Mark Handley, and Marcelo Bagnulo Braun
Appears in: 
CCR October 2008

Since the ARPAnet, network designers have built localized mechanisms for statistical multiplexing, load balancing, and failure resilience, often without understanding the broader implications. These mechanisms are all types of resource pooling, which means making a collection of resources behave like a single pooled resource. We believe that the natural evolution of the Internet is that it should achieve resource pooling by harnessing the responsiveness of multipath-capable end systems.
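As a toy illustration of the principle (not code from the paper), the sketch below shows how a multipath-capable end system might repeatedly shift a little traffic from its most-loaded to its least-loaded path; the loads converge to a common level, making the separate links behave like one pooled resource. The initial loads, step size, and round count are made-up values.

```python
# Toy illustration of resource pooling by a multipath end system:
# load is repeatedly shifted from the busiest to the idlest path, so
# the separate links come to behave like one pooled resource. The
# initial loads, step size, and round count are made-up values.

def rebalance(loads, step=0.1, rounds=50):
    """Greedily move load from the most- to the least-loaded path."""
    loads = list(loads)
    for _ in range(rounds):
        hi = loads.index(max(loads))
        lo = loads.index(min(loads))
        shift = min(step, loads[hi] - loads[lo]) / 2
        loads[hi] -= shift
        loads[lo] += shift
    return loads

print(rebalance([0.9, 0.2, 0.4]))  # converges toward [0.5, 0.5, 0.5]
```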

An Edge-to-Edge Filtering Architecture Against DoS

By: 
Felipe Huici and Mark Handley
Appears in: 
CCR April 2007

Defending against large, distributed Denial-of-Service attacks is challenging, with proposed defenses often requiring large changes to the network core or to end-hosts. To make matters worse, spoofing adds to the difficulty, since defenses must resist attempts to trigger filtering of other people’s traffic. Further, any solution has to provide incentives for deployment, or it will never see the light of day.

Public Review By: 
Ernst Biersack

Defense against DoS attacks is definitely an important practical problem, given that potential attackers may control botnets with hundreds of thousands of machines. This paper adopts the approach of marking IP traffic close to the source, where it is encapsulated and tunneled to a decapsulator near the destination. A server under attack can then ask the decapsulator to suppress certain traffic destined to it:
In this case, the decapsulator determines the “entry point” encapsulator from which the unwanted traffic is coming and asks that encapsulator to filter it. This approach has the advantage that the encapsulation and decapsulation boxes are deployed at the edge of the network, requiring no changes to the core. The paper presents performance results showing that off-the-shelf hardware is sufficient to perform encapsulation and decapsulation at speeds of hundreds of Mbit/s, which means that requiring packet rewrites (at least at the edge) is no longer a show-stopper.
However, efficient encapsulation and decapsulation are only one piece of a successful DoS defense system. Such a system also critically relies on securely establishing the decapsulator-to-destination mapping and on the filtering mechanism working reliably. Getting these functions deployed in a robust, large-scale fashion seems to be a major hurdle that limits the chances of this approach being widely deployed. A number of proposals for DoS defense systems already exist, as well as some commercial systems that have been successfully deployed. As the reviewers observed, this paper does not clearly state why the proposed approach is superior to other DoS defense systems. Dismissing commercial systems on the basis that they require “special boxes” is not convincing, given that the solution proposed here also requires the deployment of encapsulator/decapsulator boxes.
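The filter-request control flow the review describes can be sketched as follows. The class, method names, and flow keys are hypothetical placeholders introduced for illustration; the paper's actual protocol details may differ.

```python
# Minimal sketch of the edge-to-edge filter-request flow described
# above. Decapsulator, record, handle_filter_request, and the flow
# key are assumed names, not the paper's actual protocol.

class Decapsulator:
    def __init__(self):
        # Learned from encapsulation headers of arriving packets:
        # maps each flow to the edge encapsulator it entered through.
        self.entry_point = {}  # (src, dst) -> encapsulator address

    def record(self, flow, encapsulator):
        """Note which encapsulator tunneled this flow to us."""
        self.entry_point[flow] = encapsulator

    def handle_filter_request(self, flow):
        """A victim asks us to suppress a flow; push the filter upstream."""
        encap = self.entry_point.get(flow)
        if encap is None:
            return f"no known entry point for {flow}"
        return f"filter request for {flow} forwarded to {encap}"

d = Decapsulator()
d.record(("203.0.113.7", "198.51.100.1"), "encap-A")
print(d.handle_filter_request(("203.0.113.7", "198.51.100.1")))
```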
