The goal of any program committee is to select the "best" papers, but by what criterion should we judge which papers are best for the SIGCOMM conference? It is the opinion of the SIGCOMM Technical Steering Committee that we should evaluate papers based on their potential to have lasting intellectual and/or practical impact, in the hope that SIGCOMM-sponsored conferences will publish those papers that are most likely to change how the research community thinks about networks and network-based systems.
This may seem an obvious statement, and indeed we hope it meets with wide agreement within the SIGCOMM community. However, it has at least two important implications for how papers should be reviewed.
First, reviewers should pay attention to both intellectual and practical impact; the fact that a paper's contribution cannot be deployed in the foreseeable future should not, by itself, eliminate it from consideration. Reviewers should give some deference to any paper that has the potential to change how we think about networks and network-based systems, even if its contribution does not take the form of a novel and/or practically deployable mechanism. Similarly, reviewers should respect a paper that combines, extends, or evaluates ideas in a novel way; the publication of an early work should not preclude someone else from publishing a stronger, more mature paper about the same idea.
Second, we believe a paper's long-term impact depends more on its core result than on its minor flaws. Thus, when evaluating papers, the initial focus should be on the degree to which a paper's core result, assuming it is correct, is likely to have impact. Only then should the committee assess whether the potential impact is undermined by methodological problems, inappropriate assumptions, or other flaws in execution.
Many within the SIGCOMM community feel that reviewing has become too negative. In fact, one issue of CCR contained no technical papers because the reviewers found all of the submissions wanting. This is unfortunate, because our goal should be to make progress, not to achieve perfection. Moreover, reviews that take an excessively negative tone towards an imperfect paper can discourage authors (especially students) and can reduce confidence in the fairness of the review process.
We hope that valuing both intellectual and practical impact (i.e., not excluding contributions that are more conceptual in nature, and not focusing too heavily on near-term deployability) and strictly ordering reviewing priorities (potential impact considered first, and execution flaws only to the extent that they undermine that impact) will lead to a more positive reviewing process, one that maximizes the impact of our conferences and the scientific progress made in our field.