More recently, work towards VMs based on minimalistic or specialized OSes (e.g., OSv [10], ClickOS [8], Mirage [7], Erlang on Xen [3], HalVM [6], etc.) has started pushing the envelope of how reactive or fluid the cloud can be. These VMs’ small CPU and memory footprints (as little as a few megabytes) enable a number of scenarios that are not possible with traditional VMs. First, such VMs have the potential to be instantiated and suspended in tens of milliseconds.
Multi-core CPUs, along with recent advances in memory and buses, render commodity hardware a strong candidate for software router virtualization. In this context, we present the design of a new platform for virtual routers on modern PC hardware. We further discuss the design choices we made to achieve both high performance and flexibility in packet processing.
To search the web quickly, search engines partition the web index over many machines and consult every partition when answering a query. To increase throughput, replicas are added for each of these machines. The key parameter of such systems is the trade-off between replication and partitioning: a higher partitioning level reduces query completion time, since more servers work on each query in parallel, but may incur non-negligible startup costs for each subquery. Finding the right operating point and adapting to it can significantly improve performance and reduce costs.
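As a rough illustration, the sketch below captures this trade-off under a toy cost model (assumed here, not taken from the paper): each subquery pays a fixed startup cost, so a higher partitioning level p cuts latency but consumes more aggregate server time per query, lowering throughput. All constants are hypothetical.

```python
# Toy model of the replication-vs-partitioning trade-off. Each subquery
# pays a fixed startup cost, so increasing the partitioning level p
# reduces latency but burns more total server time per query.

MACHINES = 64      # total servers in the cluster (hypothetical)
WORK = 100.0       # ms of index-scanning work per query (hypothetical)
STARTUP = 2.0      # ms of fixed startup cost per subquery (hypothetical)

def latency_ms(p: int) -> float:
    # With even sharding, the query finishes when the slowest of the
    # p subqueries finishes: startup cost plus 1/p of the total work.
    return STARTUP + WORK / p

def throughput_qps(p: int) -> float:
    # Each query consumes WORK + p * STARTUP ms of total server time,
    # spread over all machines (replication level r = MACHINES / p).
    return MACHINES / (WORK + p * STARTUP) * 1000.0

for p in (1, 4, 16, 64):
    print(f"p={p:2d}  r={MACHINES // p:2d}  "
          f"latency={latency_ms(p):6.2f} ms  "
          f"throughput={throughput_qps(p):6.1f} q/s")
```

Even in this simple model the tension is visible: p = 64 drops latency from 102 ms to under 4 ms, but throughput falls from roughly 628 to 281 queries per second because of the per-subquery startup overhead.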
The Internet has seen a proliferation of specialized middlebox devices that carry out crucial network functionality such as load balancing, packet inspection and intrusion detection. Recent advances in CPU power, memory, buses and network connectivity have turned commodity PC hardware into a powerful network platform. Furthermore, commodity switch technologies have recently emerged offering the possibility to control the switching of flows in a fine-grained manner.
Network functionalities such as intrusion detection and load balancing are often implemented in specialized, expensive middleboxes plugged inside the network. With the advent of commodity hardware and programmable network switches, however, it is time to consider leveraging these new, cheap resources to support the same functionalities at lower cost without compromising efficiency. This is the same spirit in which software radio, virtual machines, and virtual routers were introduced. Implementing network functionalities in software has the further advantage of making them easily manageable and extensible to other applications (on software timescales).
The architecture introduced in this paper is called Flowstream. It proposes implementing network functionalities in virtual machines/servers/routers running on top of commodity PCs. The flow of traffic among these virtual network entities is controlled by a programmable network switch implementing OpenFlow. The paper motivates the problem, discusses the architecture and its main components, and describes some potential applications. Even though there are no validation results, all reviewers appreciated the idea and agree that it will trigger discussions among CCR readers and the members of the networking community. This is a new research area involving several trade-offs (technical vs. economical, reliability vs. programmability) that remain to be clearly understood and evaluated.
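To make the division of labor concrete, here is a minimal sketch of the core idea (a toy abstraction, not Flowstream's actual interface; all names are illustrative): an OpenFlow-style flow table on the commodity switch steers selected flows through a processing-module VM, while other traffic bypasses it.

```python
# Toy sketch of OpenFlow-style flow steering: rules match on packet
# fields and name an output port, which may lead to a module VM
# (e.g., an IDS) or straight to the uplink. First matching rule wins.

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Match:
    dst_port: Optional[int] = None   # None acts as a wildcard

@dataclass
class Rule:
    match: Match
    out_port: str                    # switch port of a module VM or the uplink

class FlowTable:
    def __init__(self) -> None:
        self.rules: list[Rule] = []

    def add(self, rule: Rule) -> None:
        self.rules.append(rule)

    def lookup(self, dst_port: int) -> str:
        for rule in self.rules:
            if rule.match.dst_port in (None, dst_port):
                return rule.out_port
        return "drop"                # no rule installed: drop the flow

table = FlowTable()
table.add(Rule(Match(dst_port=80), out_port="vm-ids"))  # web traffic -> IDS VM
table.add(Rule(Match(), out_port="uplink"))             # everything else

assert table.lookup(80) == "vm-ids"
assert table.lookup(53) == "uplink"
```

The point of the design is that the switch only forwards; all stateful processing lives in the module VMs, which can be added, migrated, or upgraded on software timescales.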
Programmable flow forwarding using OpenFlow has already been proposed in an operating-system context, for example in the NOX architecture, which appeared as an editorial note in the July 2008 issue of CCR. The novelty of this paper lies in combining flow forwarding and virtualization to replace network middlebox functionalities.
Defending against large, distributed Denial-of-Service attacks is challenging, and proposed defenses often require large changes to the network core or to end-hosts. To make matters worse, spoofing adds to the difficulty, since defenses must resist attempts to trigger the filtering of other people’s traffic. Further, any solution has to provide incentives for deployment, or it will never see the light of day.
Defense against DoS attacks is definitely an important practical problem, given that potential attackers may control botnets with hundreds of thousands of machines. This paper adopts the approach of marking IP traffic close to the source, where it is encapsulated and tunneled to a decapsulator near the destination. A server under attack can ask the decapsulator to suppress certain traffic destined to that server:
In this case, the decapsulator determines the “entry point” encapsulator from which the unwanted traffic is coming and asks that encapsulator to filter it. This approach has the advantage that the encapsulation and decapsulation boxes are deployed at the edge of the network, requiring no changes to the core. The paper presents performance results showing that off-the-shelf hardware is sufficient to perform en-/decapsulation at speeds of hundreds of Mbit/s, which means that requiring packet rewrites (at least at the edge) is no longer a show-stopper.
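The following sketch illustrates the handshake schematically (the class and method names are hypothetical; the paper's wire format and control protocol are not shown): the decapsulator remembers which encapsulator each source's traffic entered through, so a filter request from the victim can be pushed back to the network edge.

```python
# Schematic sketch of the edge-filtering handshake. The decapsulator
# learns each source's entry-point encapsulator from the tunnel header;
# a server under attack asks it to suppress a source, and the filter is
# installed at the edge, with no changes to the core network.

class Encapsulator:
    def __init__(self, name: str) -> None:
        self.name = name
        self.blocked: set[str] = set()

    def install_filter(self, src: str) -> None:
        self.blocked.add(src)

    def forward(self, src: str, decap: "Decapsulator") -> bool:
        """Tunnel a packet from `src`; returns False if filtered."""
        if src in self.blocked:
            return False                     # dropped near the source
        decap.record_entry_point(src, self)  # carried in the tunnel header
        return True

class Decapsulator:
    def __init__(self) -> None:
        self.entry_point: dict[str, Encapsulator] = {}

    def record_entry_point(self, src: str, encap: Encapsulator) -> None:
        self.entry_point[src] = encap

    def suppress(self, src: str) -> None:
        """Invoked on behalf of a server under attack."""
        if src in self.entry_point:
            self.entry_point[src].install_filter(src)

encap, decap = Encapsulator("edge-A"), Decapsulator()
assert encap.forward("10.0.0.1", decap)      # attack traffic arrives once
decap.suppress("10.0.0.1")                   # victim requests suppression
assert not encap.forward("10.0.0.1", decap)  # now dropped at the entry point
```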
However, efficient en-/decapsulation is only one piece of a successful DoS defense system. Such a system also critically relies on securely establishing the decapsulator-to-destination mapping and on the filtering mechanism working reliably. Deploying these functions in a robust, large-scale fashion seems to be a major hurdle that limits the chances of this approach being widely deployed. A number of proposals for DoS defense systems already exist, as do some commercial systems that have been deployed successfully. As the reviewers observed, this paper does not clearly state why the proposed approach is superior to other DoS defense systems. Dismissing commercial systems on the grounds that they require "special boxes" is unconvincing, given that the solution proposed here itself requires the deployment of en-/decapsulator boxes.