Computer Communication Review: Papers

  • Dina Papagiannaki

    Welcome to the April 2014 issue of Computer Communications Review. I am really happy to see CCR increasing its presence in our community and serving as a venue where we express our opinions on the way our community is evolving, discuss its future, and publish papers that advance the state of the art in data communications. In the past 3 months, I have received a number of comments from members of the community on previously published articles, expressing their willingness to contribute to CCR's continued success. Thank you very much!

    This issue of CCR features 13 papers, 6 of which are editorial notes. The technical papers cover wireless and wired networking solutions, as well as SDN. Our editorials cover workshop reports as well as opinion pieces. Lastly, I am very happy to also include an editorial on MCKit, the smartphone app that was launched for SIGCOMM 2013, with the organizers' thoughts on how well it worked, how it was built, and how it was used. I hope it proves useful as we get closer to this year's SIGCOMM in Chicago.

    One of the discussions we have started in the community has to do with our actual impact on commercial products. March was the month of the Mobile World Congress (MWC) in Barcelona, the premier industry venue in mobile communications and products. It was really exciting to see one of our community's outcomes presented during the venue and receiving tremendous coverage by the media. I am referring to Kumu Networks, a startup company founded by Sachin Katti, Steven Hong, Jeffrey Mehlman, and Mayank Jain, whose seeds were sown at Stanford University, and which aims to commercialize full duplex radio technology. The technology behind Kumu Networks was published at SIGCOMM 2012 and SIGCOMM 2013, as well as at NSDI, MobiCom and HotNets over the past 4 years. Kumu Networks is a clear testament to the quality of work done in our community, and its relevance in the market. A tremendous achievement by all standards.

    This issue also marks the end of term for Sharad Agarwal, from Microsoft Research in Redmond. I really want to thank Sharad for his contributions throughout his tenure at CCR. We will miss your perspective, as well as some of the greatest public reviews CCR has ever seen!

    We also say goodbye to Matteo Varvello, from Bell Labs. Matteo has been the heart behind the online version of CCR. I would really like to thank him for all his help throughout the past year, and welcome Prof. Mike Wittie, from Montana State University, who joins full of energy as the new CCR publications chair.

    With all that, I hope you enjoy this issue and I am always at your disposal in case of questions or comments.

  • X. Yao, W. Wang, S. Yang, Y. Cen, X. Yao, T. Pan

    This paper proposes an IPB-frame Adaptive Mapping Mechanism (AMM) to improve video transmission quality over IEEE 802.11e Wireless Local Area Networks (WLANs). Based on the frame structure of hierarchical coding technology, the probability of each frame being allocated to the most appropriate Access Category (AC) is dynamically updated according to the frame's importance and the traffic load of each AC. Simulation results show that the proposed AMM outperforms three other existing mechanisms in terms of three objective metrics.
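
    One way to picture the mapping step is the sketch below. The scoring rule (importance-weighted AC priority, discounted by queue load, then normalized into probabilities) and all constants are illustrative assumptions made here, not the paper's update equations.

        import random

        # Hypothetical importance weights for I, P and B frames (assumed values).
        IMPORTANCE = {"I": 3.0, "P": 2.0, "B": 1.0}

        def mapping_probabilities(frame_type, ac_loads):
            """Probability of queueing a frame to each 802.11e AC.

            ac_loads: dict AC name -> queue utilization in [0, 1].
            Important frames favor high-priority ACs, but a loaded AC is
            penalized so traffic spills to lower-priority queues.
            """
            w = IMPORTANCE[frame_type]
            # Assumed per-AC priority weights (AC_VO > AC_VI > AC_BE > AC_BK).
            prio = {"AC_VO": 4.0, "AC_VI": 3.0, "AC_BE": 2.0, "AC_BK": 1.0}
            scores = {ac: (prio[ac] ** w) * (1.0 - ac_loads[ac]) for ac in prio}
            total = sum(scores.values())
            return {ac: s / total for ac, s in scores.items()}

        def pick_ac(frame_type, ac_loads):
            probs = mapping_probabilities(frame_type, ac_loads)
            acs, weights = zip(*probs.items())
            return random.choices(acs, weights=weights)[0]

        # Example: a heavily loaded AC_VI pushes some B frames down to AC_BE.
        loads = {"AC_VO": 0.2, "AC_VI": 0.9, "AC_BE": 0.3, "AC_BK": 0.1}
        print(pick_ac("B", loads))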

    Joseph Camp
  • F. Ge, L. Tan

    A communication network usually carries data packets and acknowledgement (ACK) packets in opposite directions. ACK packet flows may affect the performance of data packet flows, which is unfortunately not considered in the usual network utility maximization (NUM) model. This paper presents a NUM model for networks with two-way flows (NUMtw) by adding a routing matrix that covers ACK packet flows. The source rates are obtained by solving the dual model, and their relation to the routing matrix of ACK packet flows is disclosed. Furthermore, the source rates obtained in networks with one-way flows by the usual NUM model are compared to those obtained in networks with two-way flows by the NUMtw model.
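
    For readers who want the shape of the formulation, here is a minimal sketch in generic NUM notation; the symbols R_d, R_a and Lambda are assumptions made here for illustration and are not necessarily the paper's notation.

        \[
          \max_{x \ge 0} \; \sum_{s} U_s(x_s)
          \quad \text{s.t.} \quad R_d\,x + R_a\,\Lambda\,x \le c ,
        \]

    where x is the vector of source rates, U_s the source utilities, c the link capacities, R_d and R_a the routing matrices of the forward data paths and reverse ACK paths, and \Lambda = \mathrm{diag}(\lambda_s) converts each data rate into its induced ACK rate. Setting R_a = 0 recovers the classical one-way NUM problem, which is why accounting for ACK load changes the dual (link price) solution.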

    Nikolaos Laoutaris
  • A. Lodhi, N. Larson, A. Dhamdhere, C. Dovrolis, K. Claffy

    In this study we mine one of the few sources of public data available about the interdomain peering ecosystem: PeeringDB [1], an online database where participating networks contribute information about their peering policies, traffic volumes and presence at various geographic locations. Although established to support the practical needs of operators, this data also provides a valuable source of information to researchers. Using BGP data to cross-validate three years of PeeringDB snapshots, we find that PeeringDB membership is reasonably representative of the Internet's transit, content, and access providers in terms of business types and geography of participants, and that PeeringDB data is generally up-to-date. We find strong correlations among different measures of network size (BGP-advertised address space, PeeringDB-reported traffic volume, and presence at peering facilities), and between these size measures and advertised peering policies.

    Renata Teixeira
  • M. Raju, A. Wundsam, M. Yu

    In spite of the standardization of the OpenFlow API, it is very difficult to write an SDN controller application that is portable (i.e., guarantees correct packet processing over a wide range of switches) and achieves good performance (i.e., fully leverages switch capabilities). This is because the switch landscape is fundamentally diverse in performance, feature set and supported APIs. We propose to address this challenge via a lightweight portability layer that acts as a rendezvous point between the requirements of controller applications and the vendor knowledge of switch implementations. Above, applications specify rules in virtual flow tables annotated with semantic intents and expectations. Below, vendor-specific drivers map them to optimized switch-specific rule sets. NOSIX represents a first step towards achieving both portability and good performance across a diverse set of switches.
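
    A rough sketch of the idea follows; all class and field names are invented here for illustration (NOSIX's real API is not reproduced). Applications write annotated virtual rules, and a per-vendor driver decides how to realize them on the switch.

        from dataclasses import dataclass, field

        @dataclass
        class VirtualRule:
            match: dict                  # e.g. {"ip_dst": "10.0.0.0/24"}
            actions: list                # e.g. ["output:2"]
            intents: dict = field(default_factory=dict)  # semantic hints

        class SwitchDriver:
            """Base driver: maps virtual rules to switch-specific rule sets."""
            def compile(self, rules):
                raise NotImplementedError

        class TcamSwitchDriver(SwitchDriver):
            def compile(self, rules):
                # Hypothetical policy: latency-critical rules go to the fast
                # TCAM, the rest to a larger but slower software table.
                tcam, software = [], []
                for r in rules:
                    (tcam if r.intents.get("latency_critical") else software).append(r)
                return {"tcam": tcam, "software": software}

        rules = [VirtualRule({"ip_dst": "10.0.0.0/24"}, ["output:2"],
                             {"latency_critical": True}),
                 VirtualRule({"ip_dst": "10.0.1.0/24"}, ["output:3"])]
        print(TcamSwitchDriver().compile(rules))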

    Hitesh Ballani
  • R. Singh, T. Brecht, S. Keshav

    The number of publicly accessible virtual execution environments (VEEs) has been growing steadily in the past few years. To be accessible by clients, such VEEs need either a public IPv4 or a public IPv6 address. However, the pool of available public IPv4 addresses is nearly depleted and the low rate of adoption of IPv6 precludes its use. Therefore, what is needed is a way to share precious IPv4 public addresses among a large pool of VEEs. Our insight is that if an IP address is assigned at the time of a client DNS request for the VEE’s name, it is possible to share a single public IP address amongst a set of VEEs whose workloads are not network intensive, such as those hosting personal servers or performing data analytics. We investigate several approaches to multiplexing a pool of global IP addresses among a large number of VEEs, and design a system that overcomes the limitations of current approaches. We perform a qualitative and quantitative comparison of these solutions. We find that upon receiving a DNS request from a client, our solution has a latency as low as 1 ms to allocate a public IP address to a VEE, while keeping the size of the required IP address pool close to the minimum possible.
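
    The core mechanism can be sketched in a few lines; the names, TTL value and idle-reclaim policy below are assumptions for illustration, not the paper's design.

        import time

        class DnsTriggeredAllocator:
            """Lease a public IPv4 address to a VEE when its name is resolved."""

            def __init__(self, pool, ttl=10, idle_timeout=60):
                self.free = list(pool)   # unused public addresses
                self.lease = {}          # VEE name -> (ip, last activity)
                self.ttl = ttl           # short DNS TTL keeps bindings fresh
                self.idle_timeout = idle_timeout

            def resolve(self, vee_name):
                """Called on a client DNS request; returns (ip, ttl)."""
                self._reclaim_idle()
                if vee_name in self.lease:
                    ip, _ = self.lease[vee_name]
                else:
                    ip = self.free.pop()  # raises if the pool is exhausted
                self.lease[vee_name] = (ip, time.monotonic())
                return ip, self.ttl

            def _reclaim_idle(self):
                # Return addresses of VEEs that have gone quiet to the pool.
                now = time.monotonic()
                for name, (ip, last) in list(self.lease.items()):
                    if now - last > self.idle_timeout:
                        del self.lease[name]
                        self.free.append(ip)

        alloc = DnsTriggeredAllocator(["203.0.113.10", "203.0.113.11"])
        print(alloc.resolve("vee-42.example.net"))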

    Phillipa Gill
  • G. Bianchi, M. Bonola, A. Capone, C. Cascone

    Software Defined Networking envisions smart centralized controllers governing the forwarding behavior of dumb low-cost switches. But are "dumb" switches an actual strategic choice, or (at least to some extent) are they a consequence of the lack of viable alternatives to OpenFlow as a programmatic data plane forwarding interface? Indeed, some level of (programmable) control logic in the switches might be beneficial to offload logically centralized controllers (de facto complex distributed systems) from decisions based purely on local states (as opposed to network-wide knowledge), which could be handled at wire speed inside the device itself. It would also reduce the amount of flow processing tasks currently delegated to specialized middleboxes. The underlying challenge is: can we devise a stateful data plane programming abstraction (as opposed to the stateless OpenFlow match/action table) that still entails high performance and remains consistent with the vendors' preference for closed platforms? We posit that a promising answer revolves around the use of extended finite state machines, as an extension (super-set) of the OpenFlow match/action abstraction. We concretely turn our proposed abstraction into an actual table-based API and, perhaps surprisingly, show how it can be supported by (mostly) reusing core primitives already implemented in OpenFlow devices.
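
    To make the abstraction concrete, here is a toy extended-finite-state-machine table in the spirit of the proposal, using a port-knocking flow as the example; the table layout and names are illustrative assumptions, not the paper's API.

        # XFSM table: (state, event) -> (action, next state).
        # A flow must "knock" on ports 1111 then 2222 before port 22 opens.
        XFSM = {
            ("DEFAULT", "knock:1111"): ("drop",    "STAGE1"),
            ("STAGE1",  "knock:2222"): ("drop",    "OPEN"),
            ("OPEN",    "pkt:22"):     ("forward", "OPEN"),
        }

        state_table = {}  # per-flow state, keyed e.g. by source IP

        def process(src_ip, event):
            state = state_table.get(src_ip, "DEFAULT")
            # Unknown (state, event) pairs drop and reset the flow's state.
            action, next_state = XFSM.get((state, event), ("drop", "DEFAULT"))
            state_table[src_ip] = next_state  # state update inside the switch
            return action

        assert process("10.0.0.1", "knock:1111") == "drop"
        assert process("10.0.0.1", "knock:2222") == "drop"
        assert process("10.0.0.1", "pkt:22") == "forward"

    The point of the exercise: both tables are plain match/action lookups plus a write-back, which is why the authors argue the abstraction can largely reuse primitives that OpenFlow hardware already has.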

    Hitesh Ballani
  • M. Honda, F. Huici, C. Raiciu, J. Araujo, L. Rizzo

    Recent studies show that more than 86% of Internet paths allow well-designed TCP extensions, meaning that it is still possible to deploy transport layer improvements despite the existence of middleboxes in the network. Hence, the blame for the slow evolution of protocols (with extensions taking many years to become widely used) should be placed on end systems. In this paper, we revisit the case for moving protocol stacks up into user space in order to ease the deployment of new protocols, extensions, or performance optimizations. We present MultiStack, operating system support for user-level protocol stacks. MultiStack runs within commodity operating systems, can concurrently host a large number of isolated stacks, has a fall-back path to the legacy host stack, and is able to process packets at rates of 10 Gb/s. We validate our design by showing that our mux/demux layer can validate and switch packets at line rate (up to 14.88 Mpps) on a 10 Gbit port using 1-2 cores, and that a proof-of-concept HTTP server running over a basic userspace TCP outperforms by 18-90% both the same server and nginx running over the kernel's stack.
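
    The mux/demux layer's contract can be sketched as a simple registry keyed by a 3-tuple, with anything unclaimed falling through to the in-kernel stack; the tuple choice and names here are illustrative assumptions, not MultiStack's actual data structures.

        # Demux packets to registered user-level stacks by (proto, addr, port).
        registry = {}

        def register(proto, addr, port, stack_id):
            registry[(proto, addr, port)] = stack_id

        def demux(pkt):
            key = (pkt["proto"], pkt["dst_addr"], pkt["dst_port"])
            # Unregistered traffic takes the fall-back path to the host stack.
            return registry.get(key, "host-stack")

        register("tcp", "198.51.100.7", 80, "userspace-http")
        print(demux({"proto": "tcp", "dst_addr": "198.51.100.7", "dst_port": 80}))
        print(demux({"proto": "udp", "dst_addr": "198.51.100.7", "dst_port": 53}))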

    Sharad Agarwal
  • L. Zhan, D. Chiu

    Smartphones have become very popular, and most people attending a conference have one with them; so it is natural to think about how to build a mobile application to support a conference. In the process of organizing ACM Sigcomm 2013, we initiated a student project to build such a conference app. As conference organizers, we had good motivation and inspiration to design the functions we would like to support. In this paper, we share our experiences in both functional design and implementation, as well as our experience in trying the app out during Sigcomm 2013.

  • B. Carpenter

    This note describes how the Internet has got itself into deep trouble by over-reliance on IP addresses and discusses some possible ways forward.

  • S. Vissicchio, L. Vanbever, O. Bonaventure

    Software Defined Networking (SDN) promises to ease design, operation and management of communication networks. However, SDN comes with its own set of challenges, including incremental deployability, robustness, and scalability. Those challenges make a full SDN deployment difficult in the short-term and possibly inconvenient in the longer-term. In this paper, we explore hybrid SDN models that combine SDN with a more traditional networking approach based on distributed protocols. We show a number of use cases in which hybrid models can mitigate the respective limitations of traditional and SDN approaches, providing incentives to (partially) transition to SDN. Further, we expose the qualitatively diverse tradeoffs that are naturally achieved in hybrid models, making them convenient for different transition strategies and long-term network designs. For those reasons, we argue that hybrid SDN architectures deserve more attention from the scientific community.

  • E. Kenneally, M. Bailey

    The inaugural Cyber-security Research Ethics Dialogue & Strategy Workshop (CREDS) was held on May 23, 2013, in conjunction with the IEEE Security and Privacy Symposium in San Francisco, California. CREDS embraced the theme of ethics-by-design in the context of cyber security research, and aimed to:
    - Educate participants about underlying ethics principles and applications;
    - Discuss ethical frameworks and how they are applied across the various stakeholders and respective communities who are involved;
    - Impart recommendations about how ethical frameworks can be used to inform policymakers in evaluating the ethical underpinning of critical policy decisions;
    - Explore cyber security research ethics techniques, tools, standards and practices so researchers can apply ethical principles within their research methodologies; and
    - Discuss specific case vignettes and explore the ethical implications of common research acts and omissions.

  • Mat Ford

    This paper reports on a workshop convened to develop an action plan to reduce Internet latency. Internet latency has become a focus of attention at the leading edge of the industry as the desire to make Internet applications more responsive outgrows the ability of increased bandwidth to address this problem. There are fundamental limits to the extent to which latency can be reduced, but there is considerable capacity for improvement throughout the system, making Internet latency a multifaceted challenge. Perhaps the greatest challenge of all is to re-educate the mainstream of the industry to understand that bandwidth is not the panacea, and that other optimizations, such as reducing packet loss, are at odds with latency reduction.

    For Internet applications, reducing the latency impact of sharing the communications medium with other users and applications is key. Current Internet network devices were often designed with a belief that additional buffering would reduce packet loss. In practice, this additional buffering leads to intermittently excessive latency and even greater packet loss under saturating load. For this reason, getting smarter queue management techniques more widely deployed is a high priority. We can reduce these intermittent increases in delay, sometimes by up to two orders of magnitude, by shifting the focus from packet loss avoidance to delay avoidance using technology that we have already developed, tested, implemented and deployed today.

    There is also plenty of scope for removing other major sources of delay. For instance, connecting to a website could be completed in one round trip (the time it takes for packets to travel from source to destination and back again) rather than three or four, by folding two or three rounds of flow and security set-up into the first data exchange, without compromising security or efficiency. Motivating the industry to deploy these advances needs to be aided by the availability of mass-market latency testing tools that could give consumers the information they need to gravitate towards low-latency services, providers and products. There is no single network latency metric, but several alternatives have been identified that compactly express aggregate delay (e.g. as relationships or a constellation), and tools that make use of these will give greater insight into the impact of changes and the diversity of Internet connections around the world.

    In many developing countries (and in rural regions of developed countries), aside from Internet access itself, there are significant structural issues, such as trombone routes through the developed world and a lack of content distribution networks (CDNs), that need to be addressed with more urgency than Active Queue Management (AQM) deployment, but the 'blank slate' of new deployments provides an opportunity to consider latency now. More widespread use of Internet exchange points for hosting local content and fostering local interconnections is key to addressing some of these structural challenges.
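
    As a toy illustration of the connection set-up arithmetic above (the RTT and round-trip counts are assumed numbers, not figures from the report):

        # Time to first byte over an assumed 50 ms round-trip path.
        rtt_ms = 50
        classic_rtts = 4   # e.g. TCP handshake plus two rounds of security set-up
        folded_rtts = 1    # set-up folded into the first data exchange
        print(f"{classic_rtts * rtt_ms} ms -> {folded_rtts * rtt_ms} ms to first byte")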

  • N. Feamster, J. Rexford, E. Zegura

    Software Defined Networking (SDN) is an exciting technology that enables innovation in how we design and manage networks. Although this technology seems to have appeared suddenly, SDN is part of a long history of efforts to make computer networks more programmable. In this paper, we trace the intellectual history of programmable networks, including active networks, early efforts to separate the control and data plane, and more recent work on OpenFlow and network operating systems. We highlight key concepts, as well as the technology pushes and application pulls that spurred each innovation. Along the way, we debunk common myths and misconceptions about the technologies and clarify the relationship between SDN and related technologies such as network virtualization.

  • A. Dainotti, K. Benson, A. King, kc claffy, M. Kallitsis, E. Glatz, X. Dimitropoulos

    This erratum helps readers properly understand our contribution to the SIGCOMM CCR Newsletter, Volume 44, Issue 1 (January 2014), pages 42-49.

  • Dina Papagiannaki
    Happy new year! Welcome to the January 2014 issue of ACM Computer Communications Review. We are starting the new year with one of the largest CCR issues I have had the pleasure to edit. This issue contains 10 papers: 6 technical peer-reviewed contributions and 4 editorial notes.
     
    The technical papers cover a range of areas, such as routing, Internet measurements, WiFi networking, named data networking and online social networks. They should make a very diverse and interesting read for the CCR audience. In the editorial zone, we have had the pleasure to receive 4 contributions, 3 of which address fundamental issues around how our community works.
     
    In his editorial note, Prof. Nick McKeown, from Stanford University, provides his perspective on what goes right and what could be improved in the way our premier conference, ACM SIGCOMM, is organized. Prof. McKeown makes a case for a more inclusive conference, drawing examples from other communities. He further attempts to identify possible directions we could pursue in order to transfer our fundamental contributions to industry and society as a whole.
     
    One more editorial touches upon some of the issues that Prof. McKeown outlines in his note. Its focus is to identify ways to bridge the gap between the networking community and the Internet standardization bodies. The authors, from Broadcom, Nokia, the University of Cambridge, Aalto University and the University of Helsinki, describe the differences and similarities between how the two communities operate. They further provide interesting data on the participation of academic and industrial researchers in standardization bodies, and discuss ways to minimize the friction that may arise as a particular technology makes the leap from the scientific community into industry.
     
    Similar themes can also be found in Dr. Partridge's editorial. Dr. Partridge identifies the difficulties faced in publishing work that challenges the existing Internet architecture. One of the interesting recommendations made in the editorial is that a new Internet architecture should not start off trying to be backwards compatible. He encourages our community to be more receptive to such contributions.
     
    Lastly, we have the pleasure to host our second interview in this issue of CCR. Prof. Mellia interviewed Dr. Antonio Nucci, the current CTO of Narus, based in the Bay Area. In this interview you will find a description of Dr. Nucci's journey from academic researcher to Best CTO awardee, along with his recommendations on interesting research directions for current and future PhD candidates.
     
    All in all, this issue of CCR features a number of interesting, thought-provoking articles that we hope you enjoy. The intention behind some of them is that they become the catalyst for a discussion on how we can make our work more impactful in today's society, a discussion that I find of critical importance given our society's increasing reliance on the Internet.
     
    This issue is also accompanied by a number of departures from the editorial board. I would like to thank Dr. Nikolaos Laoutaris and Dr. Jia Wang for their continuous help over the past 2 and 3 years, respectively. We also welcome Prof. Phillipa Gill, from Stony Brook University, and Prof. Joel Sommers, from Colgate University. They both join the editorial board with a lot of passion to contribute to CCR's continued success.
    I hope this issue stimulates some discussion and I am at your disposal for any questions or suggestions.
  • Ahmed Elmokashfi, Amogh Dhamdhere
    In the mid-2000s there was some concern in the research and operational communities over the scalability of BGP, the Internet's interdomain routing protocol. The focus was on update churn (the number of routing protocol messages that are exchanged when the network undergoes routing changes) and whether churn was growing too fast for routers to handle. Recent work somewhat allayed those fears, showing that update churn grows slowly in IPv4, but the question of routing scalability has re-emerged with IPv6. In this work, we develop a model that expresses BGP churn in terms of four measurable properties of the routing system. We show why the number of updates normalized by the size of the topology is constant, and why routing dynamics are qualitatively similar in IPv4 and IPv6. We also show that the exponential growth of IPv6 churn is entirely expected, as the underlying IPv6 topology is also growing exponentially.
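
    The abstract's growth argument can be compressed into two lines; kappa and alpha below are assumed constants for illustration only, not the paper's fitted parameters.

        \[
          U(t) \approx \kappa\, N(t) \;\Rightarrow\; \frac{U(t)}{N(t)} \approx \kappa,
          \qquad
          N_{\mathrm{v6}}(t) \approx N_0\, e^{\alpha t} \;\Rightarrow\; U_{\mathrm{v6}}(t) \approx \kappa\, N_0\, e^{\alpha t} .
        \]

    In words: if churn U scales linearly with topology size N, then updates normalized by topology size stay constant, and an exponentially growing IPv6 topology necessarily produces exponentially growing IPv6 churn.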
    Jia Wang
  • Mishari Almishari, Paolo Gasti, Naveen Nathan, Gene Tsudik
    Content-Centric Networking (CCN) is an alternative to today's Internet IP-style packet-switched host-centric networking. One key feature of CCN is its focus on content distribution, which dominates current Internet traffic and which is not well-served by IP. Named Data Networking (NDN) is an instance of CCN; it is an on-going research effort aiming to design and develop a full-blown candidate future Internet architecture. Although NDN emphasizes content distribution, it must also support other types of traffic, such as conferencing (audio, video) as well as more historical applications, such as remote login and file transfer. However, the suitability of NDN for applications that are not obviously or primarily content-centric remains unclear. We believe that such applications are not going away any time soon. In this paper, we explore NDN in the context of a class of applications that involve low-latency bi-directional (point-to-point) communication. Specifically, we propose a few architectural amendments to NDN that provide significantly better throughput and lower latency for this class of applications by reducing routing and forwarding costs. The proposed approach is validated via experiments.
    Katerina Argyraki
  • Mohammad Rezaur Rahman, Pierre-André Noël, Chen-Nee Chuah, Balachander Krishnamurthy, Raissa M. D'Souza, S. Felix Wu
    Online social network (OSN) based applications often rely on user interactions to propagate information or to recruit more users, producing a sequence of user actions called an adoption process, or cascade. This paper presents the first attempt to quantitatively study the adoption process of such OSN-based applications by analyzing detailed user activity data from a popular Facebook gifting application. In particular, due to the challenge of monitoring user interactions over all possible channels on OSN platforms, we focus on characterizing the adoption process that relies only on user-based invitation (which is applicable to most gifting applications). We characterize the adoptions by tracking the invitations sent by existing users to their friends through the Facebook gifting application and the events when their friends install the application for the first time. We found that a small number of big cascades carry the adoption of most of the application users. Contrary to common belief, we did not observe special influential nodes that are responsible for the viral adoption of the application.
    Fabian E. Bustamante
  • Phillipa Gill, Michael Schapira, Sharon Goldberg
    Researchers studying the inter-domain routing system typically rely on models to fill in the gaps created by the lack of information about the business relationships and routing policies used by individual autonomous systems. To shed light on this unknown information, we asked ~100 network operators about their routing policies, billing models, and thoughts on routing security. This short paper reports the survey's results and discusses their implications.
    Jia Wang
  • Pablo Salvador, Luca Cominardi, Francesco Gringoli, Pablo Serrano
    The IEEE 802.11aa Task Group has recently standardized a set of mechanisms to efficiently support video multicasting, namely the Group Addressed Transmission Service (GATS). In this article, we report the implementation of these mechanisms over commodity hardware, which we make publicly available, and conduct a study to assess their performance under a variety of real-life scenarios. To the best of our knowledge, this is the first experimental assessment of GATS, which is performed along three axes: we report their complexity in terms of lines of code, their effectiveness when delivering video traffic, and their efficiency when utilizing wireless resources. Our results provide key insights on the resulting trade-offs when using each mechanism, and pave the way for new enhancements to deliver video over 802.11 Wireless LANs.
    Sharad Agarwal
  • Alberto Dainotti, Karyn Benson, Alistair King, kc claffy, Michael Kallitsis, Eduard Glatz, Xenofontas Dimitropoulos
    One challenge in understanding the evolution of Internet infrastructure is the lack of systematic mechanisms for monitoring the extent to which allocated IP addresses are actually used. Address utilization has been monitored via actively scanning the entire IPv4 address space. We evaluate the potential to leverage passive network traffic measurements in addition to or instead of active probing. Passive traffic measurements introduce no network traffic overhead, do not rely on unfiltered responses to probing, and could potentially apply to IPv6 as well. We investigate two challenges in using passive traffic for address utilization inference: the limited visibility of a single observation point, and the presence of spoofed IP addresses in packets that can distort results by implying faked addresses are active. We propose a methodology for removing such spoofed traffic on both darknets and live networks, which yields results comparable to inferences made from active probing. Our preliminary analysis reveals a number of promising findings, including novel insight into the usage of the IPv4 address space that would expand with additional vantage points.
    Renata Teixeira
  • Craig Partridge
    Some of the challenges of developing and maturing a future internet architecture (FIA) are described. Based on a talk given at the Conference on Future Internet Technologies 2013.
  • Marco Mellia
    Dr. Antonio Nucci is the chief technology officer of Narus and is responsible for setting the company's direction with respect to technology and innovation. He oversees the entire technology innovation lifecycle, including incubation, research, and prototyping. He also is responsible for ensuring a smooth transition to engineering for final commercialization. Antonio has published more than 100 technical papers and has been awarded 38 U.S. patents. He authored a book, "Design, Measurement and Management of Large-Scale IP Networks: Bridging the Gap Between Theory and Practice" (2009), on advanced network analytics. In 2007 he was recognized for his vision and contributions with the prestigious InfoWorld CTO Top 25 Award. In 2013, Antonio was honored by InfoSecurity Products Guide's 2013 Global Excellence Awards as "CTO of the Year" [1] and as Gold winner in the "People Shaping Info Security" category. He served as a technical lead member of the Enduring Security Framework (ESF) initiative sponsored by various U.S. agencies to produce a set of recommendations, policies, and technology pilots to better secure the Internet (Integrated Network Defense). He is also a technical advisor for several venture capital firms. Antonio holds a Ph.D. in computer science, and master's and bachelor's degrees.
  • Aaron Yi Ding, Jouni Korhonen, Teemu Savolainen, Markku Kojo, Joerg Ott, Sasu Tarkoma, Jon Crowcroft
    The participation of the network research community in the Internet Standards Development Organizations (SDOs) has been relatively low over the recent years, and this has drawn attention from both academics and industry due to its possible negative impact. The reasons for this gap are complex and extend beyond the purely technical. In this editorial we share our views on this challenge, based on the experience we have obtained from joint projects with universities and companies. We highlight the lessons learned, covering both successful and under-performing cases, and suggest viable approaches to bridge the gap between networking research and Internet standardization, aiming to promote and maximize the outcome of such collaborative endeavours.
  • Nick McKeown
    At every Sigcomm conference the corridors buzz with ideas about how to improve Sigcomm. It is a healthy sign that the premier conference in networking keeps debating how to reinvent and improve itself. In 2012 I got the chance to throw my hat into the ring; at the end of a talk I spent a few minutes describing why I think the Sigcomm conference should be greatly expanded. A few people encouraged me to write the ideas down.

    My high-level goal is to enlarge the Sigcomm tent, welcoming in more researchers and more of our colleagues from industry. More researchers because our field has grown enormously in the last two decades, and Sigcomm has not adapted. I believe our small program limits the opportunities for our young researchers and graduate students to publish new ideas, and therefore we are holding back their careers. More colleagues from industry because too few industry thought-leaders are involved in Sigcomm. The academic field of networking has weak ties to the industry it serves, particularly when compared to other fields of systems research. Both sides lose out: there is very little transfer of ideas in either direction, and not enough vigorous debate about the directions networking should be heading.
  • Dina Papagiannaki
    Welcome to the October 2013 issue of ACM Computer Communications Review. This issue includes 1 technical peer-reviewed paper, and 3 editorial notes. The topics include content distribution, SDN, and Internet Exchange Points (IXPs).
     
    One of my goals upon taking over as editor of CCR was to try to make it the place where we would publish fresh, novel ideas, but also where we could exchange perspectives and share lessons. This is the reason why for the past 9 months the editorial board and I have been working on what we call the "interview section" of CCR. This October issue carries our first interview note, captured by Prof. Joseph Camp, from SMU.
     
    Prof. Camp recently interviewed Dr. Ranveer Chandra, from MSR Redmond. The idea was to get Dr. Chandra's view on what has happened in white space networking since his best paper award at ACM SIGCOMM 2009. I find the resulting article very interesting. The amount of progress made in white space networking solutions, which has actually led to an operational deployment in Africa, is truly inspiring, and a clear testament to the amount of impact our community can have. I do sincerely hope that you will be as inspired as I was while reading it.

    This issue of CCR is also being published after ACM SIGCOMM in Hong Kong. SIGCOMM 2013 was marked by a number of records: 1) it has been the only SIGCOMM, at least that I remember, hit by a natural disaster (typhoon Utor), 2) which resulted in 2 entire sessions being postponed to the afternoon (making it essentially dual track :), and 3) it has had the highest acceptance rate since 1987, with 38 accepted papers.
     
    During their opening remarks the TPC chairs, Prof. Paul Barford, University of Wisconsin at Madison, and Prof. Srini Seshan, Carnegie Mellon University, presented the following two word clouds, which I found highly interesting. The first word cloud represents the most common words found in the titles of the submitted papers, and the second one the most common words in the titles of the accepted papers. Maybe they could form the input to a future editorial by someone in the community.
     
    As one can tell, Software Defined Networking (SDN) was one major topic in this year's conference. Interestingly, behavior, experience and privacy also appear prominently, confirming the belief of some in the community that SIGCOMM is indeed broadening its reach, covering a diverse set of topics that the Internet touches in today's society.
     
    This year's SIGCOMM also featured an experiment. All sessions were scribed in real time and notes were added in the blog at layer9.org. You can find a lot of additional information on the papers, and the questions asked on that site.
     
    Reaching the end of this note, I would like to welcome Prof. Sanjay Jha, from the University of New South Wales, in Sydney, Australia, to the editorial board. Prof. Jha brings expertise in a wide range of topics in networking, including wireless sensor networks, ad-hoc/community wireless networks, resilience and multicasting in IP networks and security protocols for wired/wireless networks. I hope you enjoy this issue, and its accompanying special issue on ACM SIGCOMM and the best papers of its associated workshops. I am always at your disposal in case of questions, suggestions, and comments.
  • Stefano Traverso, Mohamed Ahmed, Michele Garetto, Paolo Giaccone, Emilio Leonardi, Saverio Niccolini
    The dimensioning of caching systems represents a difficult task in the design of infrastructures for content distribution in the current Internet. This paper addresses the problem of defining a realistic arrival process for the content requests generated by users, due to its critical importance for both analytical and simulative evaluations of the performance of caching systems. First, with the aid of YouTube traces collected inside operational residential networks, we identify the characteristics of real traffic that need to be considered or can be safely neglected in order to accurately predict the performance of a cache. Second, we propose a new parsimonious traffic model, named the Shot Noise Model (SNM), that natively captures the dynamics of content popularity, whilst still being sufficiently simple to be employed effectively for both analytical and scalable simulative studies of caching systems. Finally, our results show that the SNM presents a much better solution to account for the temporal locality observed in real traffic compared to existing approaches.
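
    A minimal simulation sketch of a shot-noise request process follows; the exponential popularity profile and all parameters are assumed shapes for illustration, and the paper's fitted model is richer.

        import math
        import random

        def snm_requests(horizon, new_content_rate, mean_volume, lifespan):
            """Generate (time, content_id) requests under a simple shot-noise model.

            Each content n appears at a Poisson time t_n and then attracts
            requests at rate lambda_n(t) = V_n * h(t - t_n), with h taken here
            to be a decaying exponential (an assumed profile).
            """
            requests, t, cid = [], 0.0, 0
            while t < horizon:
                t += random.expovariate(new_content_rate)  # next content arrival
                v = random.expovariate(1.0 / mean_volume)  # expected total requests
                # Thin a homogeneous Poisson process to get the decaying rate.
                s = 0.0
                while True:
                    s += random.expovariate(v / lifespan)  # candidate inter-arrival
                    if s > 10 * lifespan or t + s > horizon:
                        break
                    if random.random() < math.exp(-s / lifespan):
                        requests.append((t + s, cid))
                cid += 1
            return sorted(requests)

        print(len(snm_requests(horizon=1000.0, new_content_rate=0.5,
                               mean_volume=20, lifespan=50.0)))

    The key property this captures, unlike an Independent Reference Model, is temporal locality: requests for a content cluster around its appearance time and fade with its popularity profile.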
    Augustin Chaintreau
  • Jon Crowcroft, Markus Fidler, Klara Nahrstedt, Ralf Steinmetz
    Dagstuhl hosted a three-day seminar on the Future Internet on March 25-27, 2013. At the seminar, about 40 invited researchers from academia and industry discussed the promises, approaches, and open challenges of the Future Internet. This report gives a general overview of the presentations and outcomes of discussions of the seminar.
  • Nikolaos Chatzis, Georgios Smaragdakis, Anja Feldmann, Walter Willinger
    Internet eXchange Points (IXPs) are generally considered to be the successors of the four Network Access Points (NAPs) that were mandated as part of the decommissioning of the National Science Foundation Network (NSFNET) in 1994/95 to facilitate the transition from the NSFNET to the “public Internet” as we know it today. While this popular view does not tell the whole story behind the early beginnings of IXPs, what is true is that since around 1994, the number of operational IXPs worldwide has grown to more than 300 (as of May 2013), with the largest IXPs handling daily traffic volumes comparable to those carried by the largest Tier-1 ISPs. However, IXPs have never really attracted much attention from the networking research community. At first glance, this lack of interest seems understandable as IXPs have apparently little to do with current “hot” topic areas such as data centers and cloud services or Software Defined Networking (SDN) and mobile communication. However, we argue in this article that, in fact, IXPs are all about data centers and cloud services and even SDN and mobile communication and should be of great interest to networking researchers interested in understanding the current and future Internet ecosystem. To this end, we survey the existing but largely fragmented sources of publicly available information about IXPs to describe their basic technical and operational aspects and highlight the critical differences among the various IXPs in the different regions of the world, especially in Europe and North America. More importantly, we illustrate the important role that IXPs play in today’s Internet ecosystem and discuss how IXP-driven innovation in Europe is shaping and redefining the Internet marketplace, not only in Europe but increasingly so around the world.
  • Joseph D. Camp

    Ranveer Chandra is a Senior Researcher in the Mobility & Networking Research Group at Microsoft Research. His research is focused on mobile devices, with particular emphasis on wireless communications and energy efficiency. Ranveer is leading the white space networking project at Microsoft Research. He was invited to the FCC to present his work, and spectrum regulators from India, China, Brazil, Singapore and the US (including the FCC chairman) have visited the Microsoft campus to see his deployment of the world's first urban white space network. The following interview captures the essence of his work on white spaces by focusing on his work published in ACM SIGCOMM 2009, which received the Best Paper Award.

  • Dina Papagiannaki
    It is hard to believe it is already July. July marks a few milestones: i) schools are over, ii) most of the paper submission deadlines for the year are behind us, and iii) a lot, but not all, of the reviewing duty has been accomplished. July also marks another milestone for CCR and myself: the longest CCR issue I have had the pleasure to edit this year! This issue of CCR features 15 papers in total: 7 technical papers and 8 editorials. The technical papers cover the areas of Internet measurement, routing, privacy, content delivery, as well as data center networks.
     
    The editorial zone features the report on the workshop on Internet economics and the workshop on active Internet measurements, both of which took place in early 2013. It also features position papers on empirical Internet measurement, community networks, the use of recommendation engines in content delivery, and censorship on the Internet. I found every single one of them thought provoking, with the potential to initiate discussion in our community. Finally, we have two slightly more unusual editorial notes. The first one describes the experience of the CoNEXT 2012 Internet chairs, and the way they found to enable flawless connectivity despite only having access to residential-grade equipment. The second one focuses on the criticism that we often display as a community in our major conferences, in particular SIGCOMM, and suggests a number of directions conference organizers could take.
     
    This last editorial has made me think a little more about my experience as an author, reviewer, and TPC member in the past 15 years of my career. It quotes Jeffrey Naughton’s keynote at ICDE 2010 and his statement about the Computer Science community: “Funding agencies believe us when we say we suck.”
     
    Being on a TPC, one actually realizes that criticism is something we naturally do as a community; it is not personal. Being an author who has never been on a TPC, however, makes this process far more personal. I still remember the days when each one of my papers was prepared to what I considered perfection and sent into the “abyss”, sometimes with a positive, and other times with a negative response. I also remember the disappointment of my first rejection. Some perspective on the process could possibly be of interest.
  • Thomas Callahan, Mark Allman, Michael Rabinovich

    The Internet crucially depends on the Domain Name System (DNS) both to allow users to interact with the system in human-friendly terms and, increasingly, as a way to direct traffic to the best content replicas at the instant the content is requested. This paper is an initial study into the behavior and properties of the modern DNS system. We passively monitor DNS and related traffic within a residential network in an effort to understand server behavior (as viewed through DNS responses) and client behavior (as viewed through both DNS requests and the traffic that follows DNS responses). We present an initial set of wide-ranging findings.

    Sharad Agarwal
  • Akmal Khan, Hyun-chul Kim, Taekyoung Kwon, Yanghee Choi

    The IRR is a set of globally distributed databases with which ASes can register their routing and address-related information. It is often believed that the quality of the IRR data is not reliable, since there are few economic incentives for ASes to register and update their routing information in a timely manner. To validate these negative beliefs, we carry out a comprehensive analysis of (IP prefix, origin AS) pairs in BGP against the corresponding information registered with the IRR, and vice versa. Considering BGP and IRR practices, we propose a methodology to match the (IP prefix, origin AS) pairs between the IRR and BGP. We observe that the practice of registering IP prefixes and origin ASes with the IRR is prevalent. However, the quality of the IRR data can vary substantially depending on routing registries, regional Internet registries (to which ASes belong), and AS types. We argue that the IRR can help improve the security level of BGP routing by making BGP routers selectively rely on the corresponding IRR data in light of these observations.
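
    A simplified version of the matching step might look as follows; the classification names are assumptions made here, and the paper's methodology handles many more cases.

        import ipaddress

        def classify(bgp_pair, irr_entries):
            """Classify a BGP (prefix, origin AS) pair against IRR route objects.

            irr_entries: list of (prefix, origin_asn) tuples registered in the IRR.
            """
            prefix, origin = ipaddress.ip_network(bgp_pair[0]), bgp_pair[1]
            verdict = "unregistered"
            for irr_prefix, irr_origin in irr_entries:
                irr_net = ipaddress.ip_network(irr_prefix)
                if prefix == irr_net:
                    return "exact-match" if origin == irr_origin else "origin-mismatch"
                if prefix.subnet_of(irr_net):
                    # Covered by a less-specific registered route object.
                    verdict = "covered" if origin == irr_origin else "covered-mismatch"
            return verdict

        irr = [("192.0.2.0/24", 64500), ("198.51.100.0/22", 64501)]
        print(classify(("192.0.2.0/24", 64500), irr))     # exact-match
        print(classify(("198.51.100.0/24", 64501), irr))  # covered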

    Bhaskaran Raman
  • Abdelberi Chaabane, Emiliano De Cristofaro, Mohamed Ali Kaafar, Ersin Uzun

    As the Internet struggles to cope with scalability, mobility, and security issues, new network architectures are being proposed to better accommodate the needs of modern systems and applications. In particular, Content-Oriented Networking (CON) has emerged as a promising next-generation Internet architecture: it sets out to decouple content from hosts, at the network layer, by naming data rather than hosts. CON comes with a potential for a wide range of benefits, including reduced congestion and improved delivery speed by means of content caching, simpler configuration of network devices, and security at the data level. However, it remains an interesting open question whether or not, and to what extent, this emerging networking paradigm bears new privacy challenges. In this paper, we provide a systematic privacy analysis of CON and the common building blocks among its various architectural instances in order to highlight emerging privacy threats, and analyze a few potential countermeasures. Finally, we present a comparison between CON and today's Internet in the context of a few privacy concepts, such as anonymity, censoring, traceability, and confidentiality.

    Augustin Chaintreau
  • Benjamin Frank, Ingmar Poese, Yin Lin, Georgios Smaragdakis, Anja Feldmann, Bruce Maggs, Jannis Rake, Steve Uhlig, Rick Weber

    Today a spectrum of solutions is available for distributing content over the Internet, ranging from commercial CDNs to ISP-operated CDNs to content-provider-operated CDNs to peer-to-peer CDNs. Some deploy servers in just a few large data centers while others deploy in thousands of locations or even on millions of desktops. Recently, major CDNs have formed strategic alliances with large ISPs to provide content delivery network solutions. Such alliances show the natural evolution of content delivery today, driven by the need to address scalability issues and to take advantage of new technology and business opportunities. In this paper we revisit the design and operating space of CDN-ISP collaboration in light of recent ISP and CDN alliances. We identify two key enablers for supporting collaboration and improving content delivery performance: informed end-user to server assignment and in-network server allocation. We report on the design and evaluation of a prototype system, NetPaaS, that materializes them. Relying on traces from the largest commercial CDN and a large tier-1 ISP, we show that NetPaaS is able to increase CDN capacity on demand, enable coordination, reduce download time, and achieve multiple traffic engineering goals, leading to a win-win situation for both ISP and CDN.

    Fabián E. Bustamante
  • Simone Basso, Michela Meo, Juan Carlos De Martin

    Network users know much less than ISPs, Internet exchanges and content providers about what happens inside the network. Consequently, users can neither easily detect network neutrality violations nor readily exercise their market power by knowledgeably switching ISPs. This paper contributes to the ongoing efforts to empower users by proposing two models to estimate, via application-level measurements, a key network indicator: the packet loss rate (PLR) experienced by FTP-like TCP downloads. Controlled, testbed, and large-scale experiments show that the Inverse Mathis model is simpler and more consistent across the whole PLR range, but less accurate than the more advanced Likely Rexmit model for landline connections and moderate PLR.
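
    For context, the widely used Mathis et al. approximation relates steady-state TCP throughput B to loss rate p as B ≈ (MSS/RTT) · C/√p with C = √(3/2), so inverting it yields a PLR estimate from quantities an application can measure. The sketch below assumes this textbook form, not the paper's exact calibration.

        import math

        def inverse_mathis(mss_bytes, rtt_s, rate_bytes_per_s, c=math.sqrt(1.5)):
            """Estimate the packet loss rate from a measured TCP download.

            Solves B = c * MSS / (RTT * sqrt(p)) for p.
            """
            return (c * mss_bytes / (rtt_s * rate_bytes_per_s)) ** 2

        # Example: 1460-byte MSS, 50 ms RTT, 2 MB/s measured download rate.
        print(f"estimated PLR: {inverse_mathis(1460, 0.05, 2e6):.2e}")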

    Nikolaos Laoutaris
  • Howard Wang, Yiting Xia, Keren Bergman, T.S. Eugene Ng, Sambit Sahu, Kunwadee Sripanidkulchai

    Not only do big data applications impose heavy bandwidth demands, they also have diverse communication patterns (denoted as *-cast) that mix together unicast, multicast, incast, and all-to-all-cast. Effectively supporting such traffic demands remains an open problem in data center networking. We propose an unconventional approach that leverages physical layer photonic technologies to build custom communication devices for accelerating each *-cast pattern, and integrates such devices into an application-driven, dynamically configurable photonics-accelerated data center network. We present preliminary results from a multicast case study to highlight the potential benefits of this approach.

    Hitesh Ballani
  • Giuseppe Bianchi, Andrea Detti, Alberto Caponi, Nicola Blefari Melazzi

    In some network and application scenarios, it is useful to cache content in network nodes on the fly, at line rate. The resilience of in-network caches can be improved by guaranteeing that all content stored therein is valid. Digital signatures could indeed be used to verify content integrity and provenance. However, their verification may be much slower than the line rate, thus limiting caching of cryptographically verified objects to a small subset of the forwarded ones. How does this affect caching performance? To answer this question, we devise a simple analytical approach that permits us to assess the performance of an LRU caching strategy storing a randomly sampled subset of requests. A key feature of our model is the ability to handle traffic beyond the traditional Independent Reference Model, permitting us to understand how performance varies under different temporal locality conditions. Results, also verified on real-world traces, show that content integrity verification does not necessarily bring about a performance penalty; rather, in some specific (but practical) conditions, performance may even improve.
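
    The caching policy under study can be sketched as an LRU cache that admits only a verified random sample of misses; the sampling probability, class names and the toy trace below are assumptions for illustration.

        import random
        from collections import OrderedDict

        class SampledLRU:
            """LRU cache that inserts only a random subset of missed objects,
            modeling a limited budget for cryptographic verification."""

            def __init__(self, capacity, sample_prob):
                self.cache = OrderedDict()
                self.capacity = capacity
                self.sample_prob = sample_prob

            def request(self, obj):
                if obj in self.cache:
                    self.cache.move_to_end(obj)   # refresh LRU position
                    return True                   # hit
                if random.random() < self.sample_prob:
                    # Verification budget allows caching this object.
                    if len(self.cache) >= self.capacity:
                        self.cache.popitem(last=False)
                    self.cache[obj] = True
                return False                      # miss

        cache = SampledLRU(capacity=100, sample_prob=0.1)
        trace = [random.randint(0, 999) for _ in range(10000)]  # toy trace
        hits = sum(cache.request(o) for o in trace)
        print(f"hit ratio: {hits / len(trace):.3f}")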

    Sharad Agarwal
  • Bart Braem, Chris Blondia, Christoph Barz, Henning Rogge, Felix Freitag, Leandro Navarro, Joseph Bonicioli, Stavros Papathanasiou, Pau Escrich, Roger Baig Viñas, Aaron L. Kaplan, Axel Neumann, Ivan Vilata i Balaguer, Blaine Tatum, Malcolm Matson

    Community Networks are large scale, self-organized and decentralized networks, built and operated by citizens for citizens. In this paper, we make a case for research on and with community networks, while explaining the relation to Community-Lab. The latter is an open, distributed infrastructure for researchers to experiment with community networks. The goal of Community-Lab is to advance research and empower society by understanding and removing obstacles for these networks and services.
