Dummynet Revisited

By: Marta Carbone and Luigi Rizzo
Appears in: CCR April 2010

Dummynet is a widely used link emulator, developed long ago to run experiments in user-configurable network environments. Since its original design, our system has been extended in various ways, and has become very popular in the research community due to its features and its ability to emulate even moderately complex network setups on unmodified operating systems.
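As a concrete illustration of what "emulating a network setup" involves, the sketch below drives the ipfw/dummynet command-line interface from Python to impose bandwidth, delay, and loss on outgoing traffic. This is a minimal sketch, assuming a FreeBSD-like host where ipfw is available and the script runs with root privileges; the pipe number, rule number, and link parameters are illustrative choices, not values taken from the paper.

    import subprocess

    def ipfw(args: str) -> None:
        """Invoke an ipfw command, raising an error if it fails."""
        subprocess.run(["ipfw"] + args.split(), check=True)

    # Create a dummynet pipe emulating a 1 Mbit/s link with
    # 50 ms of one-way delay and 0.1% random packet loss.
    ipfw("pipe 1 config bw 1Mbit/s delay 50ms plr 0.001")

    # Direct all outgoing IP traffic through the emulated link.
    ipfw("add 100 pipe 1 ip from any to any out")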

We have recently made a number of extensions to the emulator, including loadable packet schedulers, support for better MAC layer modeling, its inclusion in PlanetLab, and the development of Linux and Windows versions in addition to the native FreeBSD and Mac OS X ones.

The goal of this paper is to present in detail the current features of Dummynet, compare it with other emulation solutions, and discuss what operating conditions should be considered and what kind of accuracy to expect when using an emulation system.

Public Review By: Kevin Almeroth

This paper comes from the authors of Dummynet, one of the more widely used link emulation tools. In this CCR paper, the authors provide an update on some of the revisions that have been made to the tool. This paper also represents a re-submission of an earlier rejected CCR submission.
In the feedback provided by the first set of reviewers, a couple of weaknesses were noted. The authors have provided an excellent example of how to respond to reviewer comments and make changes; the result is a paper that provides quite a bit of useful information and is worth including in this issue of CCR. The first round of reviews, together with how the authors fixed the paper in response, is particularly illuminating as to what have become the strengths of the paper.
In the first version of the paper, the authors missed a couple of key pieces of related work (isn't that always the case!). Given that the state of the art in network evaluation has evolved, and that there is now a much broader array of simulation, emulation, and testbed techniques for evaluating proposed algorithms, the authors really needed to provide more background. This background is important because the paper serves as a nice introduction to the field for new researchers, offering a brief but concise survey of the breadth of available techniques. The authors addressed the original weakness nicely, even including some of the specific tools mentioned by the reviewers in the first round.
Given the variety of available techniques, which are nicely surveyed in this paper, one of the most important questions to answer is which of the techniques are most accurate and how Dummynet stacks up in comparison. This question is clearly a difficult one to answer, and a full answer is beyond the scope of this paper, but the reviewers felt it needed to be addressed in some way for the paper to be complete. The authors have added additional details to Section 4 (“Accuracy and Performance”), and have done a commendable job in addressing this topic as much as possible within the limits of the current paper.
Of course, given that evaluation techniques will continue to evolve, and given that evaluation is fundamental to assessing the quality of a contribution to the field, there is certainly room for additional work to design new evaluation techniques and platforms while also determining which are the most effective.