Computer Communication Review: Papers

  • Arjuna Sathiaseelan, M. Said Seddiki, Stoyan Stoyanov, Dirk Trossen
  • Baobao Zhang, Jun Bi, Jianping Wu, Fred Baker
  • Masoud Moshref, Apoorv Bhargava, Adhip Gupta, Minlan Yu, Ramesh Govindan
  • Srikanth Sundaresan, Nick Feamster, Renata Teixeira

    We present a demonstration of WTF (Where’s The Fault?), a system that localizes performance problems in home and access networks. We implement WTF as custom firmware that runs on an off-the-shelf home router. WTF uses timing and buffering information from passively monitored traffic at home routers to detect both access link and wireless network bottlenecks.

  • Sajad Shirali-Shahreza, Yashar Ganjali

    One of the limitations of wildcard rules in Software-Defined Networks, such as OpenFlow, is the loss of visibility. FleXam is a flexible sampling extension for OpenFlow that allows the controller to define which packets should be sampled, what parts of each packet should be selected, and where they should be sent. Here, we present an interactive demo showing how FleXam enables the controller to dynamically adjust sampling rates and change the sampling scheme to optimally keep up with a sampling budget in the context of a traffic statistics collection application.
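
    As a rough illustration of the rule model, here is a minimal Python sketch of what a FleXam-style sampling rule and budget-driven rate adjustment might look like; the field names, rule format, and threshold logic are illustrative assumptions, not the actual FleXam extension.

      from dataclasses import dataclass

      @dataclass
      class SamplingRule:
          match: dict          # OpenFlow-style match, e.g. {"tcp_dst": 80}
          probability: float   # fraction of matching packets to sample
          parts: str           # bytes to export: "header", "first128", "full"
          collector: str       # where sampled packets are sent

      def adjust_rate(rule, used, budget):
          # Throttle the sampling probability so samples taken in the
          # current interval stay within the remaining budget.
          remaining = max(budget - used, 0)
          rule.probability = min(rule.probability, remaining / max(budget, 1))

      rule = SamplingRule(match={"tcp_dst": 80}, probability=0.05,
                          parts="header", collector="10.0.0.1:9000")
      adjust_rate(rule, used=4900, budget=5000)  # 98% of the budget is spent
      print(rule.probability)                    # -> 0.02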

  • Liang Zhu, Zi Hu, John Heidemann, Duane Wessels, Allison Mankin, Nikita Somaiya
  • Oliver Michel, Michael Coughlin, Eric Keller

    Given that Software-Defined Networking is highly successful in solving many of today’s manageability, flexibility, and scalability issues in large-scale networks, in this paper we argue that the concept of SDN can be extended even further. Many applications (especially stream processing and big-data applications) rely on graph-based inter-process communication patterns that are very similar to those in computer networks. In our view, this network abstraction spanning different types of entities is highly suitable for, and would benefit from, central (SDN-inspired) control for the same reasons classical networks do. In this work, we investigate the commonalities between such intra-host networks and classical computer networking. Based on this, we study the feasibility of a central network controller that manages both network traffic and intra-host communication over a custom bus system.

  • Matthew K. Mukerjee, JungAh Hong, Junchen Jiang, David Naylor, Dongsu Han, Srinivasan Seshan, Hui Zhang
  • Arash Molavi Kakhki, Abbas Razaghpanah, Rajesh Golani, David Choffnes, Phillipa Gill, Alan Mislove
  • Rui Miao, Minlan Yu, Navendu Jain
  • Ricky K.P. Mok, Weichao Li, Rocky K.C. Chang

    Crowdtesting is increasingly popular among researchers for carrying out subjective assessments of different services, as experimenters can easily access a huge pool of human subjects through crowdsourcing platforms. The workers are usually anonymous and participate in the experiments independently. A fundamental problem threatening the integrity of these platforms is therefore detecting various types of cheating by the workers. In this poster, we propose a cheat-detection mechanism based on an analysis of the workers’ mouse cursor trajectories. The mechanism provides a jQuery-based library to record browser events, and we compute a set of metrics from the cursor traces to identify cheaters. We deployed our mechanism on the survey pages for our video quality assessment tasks published on Amazon Mechanical Turk. Our results show that cheaters’ cursor movement is usually more direct and contains fewer pauses.
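
    To make the trajectory analysis concrete, here is a minimal Python sketch of two metrics of the kind the poster describes: path directness (straight-line distance over travelled distance) and a pause count. The sample layout and thresholds are illustrative assumptions, not the authors' actual implementation.

      import math

      def directness(points):
          # points: list of (t_ms, x, y) cursor samples.
          if len(points) < 2:
              return 1.0
          travelled = sum(math.dist(points[i][1:], points[i + 1][1:])
                          for i in range(len(points) - 1))
          straight = math.dist(points[0][1:], points[-1][1:])
          return straight / travelled if travelled else 1.0

      def pauses(points, min_gap_ms=500):
          # Count inter-sample gaps long enough to be treated as a pause.
          return sum(1 for a, b in zip(points, points[1:])
                     if b[0] - a[0] >= min_gap_ms)

      trace = [(0, 0, 0), (100, 40, 5), (800, 90, 10), (900, 120, 12)]
      # Near-1.0 directness combined with few pauses would look cheat-like.
      print(round(directness(trace), 3), pauses(trace))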

  • Attila Csoma, Balázs Sonkoly, Levente Csikor, Felicián Németh, Andràs Gulyas, Wouter Tavernier, Sahel Sahhaf

    Mininet is a great prototyping tool which combines existing SDN-related software components (e.g., Open vSwitch, OpenFlow controllers, network namespaces, cgroups) into a framework, which can automatically set up and configure customized OpenFlow testbeds scaling up to hundreds of nodes. Standing on the shoulders of Mininet, we implement a similar prototyping system called ESCAPE, which can be used to develop and test various components of the service chaining architecture. Our framework incorporates Click for implementing Virtual Network Functions (VNF), NETCONF for managing Click-based VNFs and POX for taking care of traffic steering. We also add our extensible Orchestrator module, which can accommodate mapping algorithms from abstract service descriptions to deployed and running service chains.
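
    For readers unfamiliar with the underlying tool, here is a minimal Mininet sketch of the kind of OpenFlow testbed that ESCAPE builds upon and automates (it assumes a local Mininet installation, root privileges, and a POX-style controller already listening on port 6633; the ESCAPE-specific VNF and orchestration layers are not shown).

      from mininet.net import Mininet
      from mininet.node import RemoteController
      from mininet.topo import SingleSwitchTopo

      # Three hosts behind one OpenFlow switch, steered by an external controller.
      net = Mininet(topo=SingleSwitchTopo(k=3),
                    controller=lambda name: RemoteController(name, ip='127.0.0.1',
                                                             port=6633))
      net.start()
      h1, h2 = net.get('h1', 'h2')
      print(h1.cmd('ping -c 1', h2.IP()))   # verify reachability through the switch
      net.stop()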

  • Filipe Manco, Joao Martins, Felipe Huici

    More recently, work towards VMs based on minimalistic or specialized OSes (e.g., OSv [10], ClickOS [8], Mirage [7], Erlang on Xen [3], HalVM [6], etc.) has started pushing the envelope of how reactive or fluid the cloud can be. These VMs’ small CPU and memory footprints (as little as a few megabytes) enable a number of scenarios that are not possible with traditional VMs. First, such VMs have the potential to be instantiated and suspended in tens of milliseconds. This means that they can be deployed on-the-fly, even as new flows arrive in a network, and can be used to effectively cope with flash crowds. Second, the ability to quickly migrate a VM and its state would allow operators to run their servers at "hotter" load levels without fear of overload, since processing could be near-instantaneously moved to a less loaded server. Finally, these VMs’ small memory footprints could potentially allow thousands or more such VMs to run on a single, inexpensive server; this would lead to important investment and operating savings, and would allow for fine-grained, virtualized network processing (e.g., per-customer firewalls or CPEs, to name a couple). Realizing such a super fluid cloud, however, poses a number of important challenges, since the virtualization technologies that these VMs run on (e.g., Xen or KVM) were never designed to run such a large number of concurrent VMs. In the case of Xen [2], the system that this demo is based on, attempts to tackle some of these issues, such as the limited number of event channels or memory grants, are under way, but these are still in their infancy and do not necessarily aim to support the huge number of VMs we envision. In this demo we will demonstrate how to concurrently execute thousands of MiniOS-based guests on a single inexpensive server. We will also show instantiation and migration of such VMs in tens of milliseconds, and transparent, wide-area migration of virtualized middleboxes by combining such VMs with the Multipath TCP (MPTCP) protocol.

  • Florian Wamser, Thomas Zinner, Lukas Iffländer, Phuoc Tran-Gia
  • Sean Donovan, Nick Feamster

    Home and business network operators have limited network statistics available over which management decisions can be made. Similarly, few triggered behaviors, such as usage or bandwidth caps for individual users, are available. By looking at the sources of traffic, based on Domain Name System (DNS) cues for the content behind particular web addresses or the source Autonomous System (AS) of the traffic, network operators could create new and interesting rules for their networks. NetAssay is a Software-Defined Networking (SDN)-based, network-wide monitoring and reaction framework. By integrating information from the Border Gateway Protocol (BGP) and the Domain Name System, NetAssay is able to combine formerly disparate sources of control information and use them to provide better monitoring, more useful triggered events, and security benefits for network operators.
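
    As a rough sketch of the kind of high-level rule NetAssay aims to support (this is not NetAssay's actual API), the idea is that a DNS-derived domain match and a BGP-derived AS match resolve into concrete addresses or prefixes on which an SDN controller can install flow rules:

      def match_domain(domain, resolved):
          # resolved: mapping domain -> set of IPs learned from observed DNS replies.
          return {ip for d, ips in resolved.items()
                  if d == domain or d.endswith("." + domain) for ip in ips}

      def match_as(asn, rib):
          # rib: mapping prefix -> origin AS learned from BGP.
          return {prefix for prefix, origin in rib.items() if origin == asn}

      resolved = {"cdn.example.com": {"192.0.2.10"}, "example.org": {"198.51.100.7"}}
      rib = {"203.0.113.0/24": 64500, "198.51.100.0/24": 64501}
      # e.g. rate-limit all traffic for *.example.com, or alert on AS 64500 traffic.
      print(match_domain("example.com", resolved), match_as(64500, rib))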

  • Maksym Gabielkov, Ashwin Rao, Arnaud Legout

    Online social networks (OSNs) are an important source of information for scientists in fields such as computer science, sociology, and economics. However, OSNs are hard to study because they are very large. For instance, Facebook had 1.28 billion active users in March 2014, and Twitter claimed 255 million active users in April 2014. Moreover, companies take measures to prevent crawls of their OSNs and refrain from sharing their data with the research community. For these reasons, we argue that sampling will be the best technique for studying OSNs in the future. In this work, we take an experimental approach to studying the characteristics of well-known sampling techniques on a full social graph of Twitter crawled in 2012 [2]. Our contribution is to evaluate the behavior of these techniques on a real directed graph by considering two sampling scenarios: (a) obtaining the most popular users, and (b) obtaining an unbiased sample of users, and to find the most suitable sampling techniques for each scenario.
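
    As a minimal illustration of one technique in this space, the sketch below runs a random walk over a toy directed adjacency dict (not the actual Twitter dataset). Random walks tend to be biased toward well-connected users, which is one reason the two scenarios above call for different techniques.

      import random

      def random_walk_sample(graph, start, steps, seed=0):
          rng = random.Random(seed)
          node, sample = start, [start]
          for _ in range(steps):
              out = graph.get(node, [])
              node = rng.choice(out) if out else start   # restart at dead ends
              sample.append(node)
          return sample

      graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["a"]}
      print(random_walk_sample(graph, "a", steps=10))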

  • Benjamin Hesmans, Olivier Bonaventure
  • Jinzhen Bao, Baokang Zhao, Wanrong Yu, Zhenqian Feng, Chunqing Wu, Zhenghu Gong

    In recent years, with the rapid development of satellite technologies such as On-Board Processing (OBP) and Inter-Satellite Links (ISL), satellite network devices such as space IP routers have been experimentally carried into space. However, building a future satellite network with current terrestrial Internet technologies is difficult because of the distinctive features of the space environment, such as severely limited resources and the need for remote hardware/software upgrades in space. In this paper, we propose OpenSAN, a novel architecture for software-defined satellite networks. By decoupling the data plane and control plane, OpenSAN provides satellite networks with high efficiency, fine-grained control, and the flexibility to support future advanced network technology. Furthermore, we also discuss some practical challenges in the deployment of OpenSAN.

  • Ravi Netravali, Anirudh Sivaraman, Keith Winstein, Somak Das, Ameesh Goyal, Hari Balakrishnan

    This demo presents a measurement toolkit, Mahimahi, that records websites and replays them under emulated network conditions. Mahimahi is structured as a set of arbitrarily composable UNIX shells. It includes two shells to record and replay Web pages, RecordShell and ReplayShell, as well as two shells for network emulation, DelayShell and LinkShell. In addition, Mahimahi includes a corpus of recorded websites along with benchmark results and link traces (https://github.com/ravinet/sites). Mahimahi improves on prior record-and-replay frameworks in three ways. First, it preserves the multi-origin nature of Web pages, present in approximately 98% of the Alexa U.S. Top 500, when replaying. Second, Mahimahi isolates its own network traffic, allowing multiple instances to run concurrently with no impact on the host machine and collected measurements. Finally, Mahimahi is not inherently tied to browsers and can be used to evaluate many different applications. A demo of Mahimahi recording and replaying a Web page over an emulated link can be found at http://youtu.be/vytwDKBA-8s. The source code and instructions to use Mahimahi are available at http://mahimahi.mit.edu/.

  • Mo Dong, Qingxi Li, Doron Zarchy, Brighten Godfrey, Michael Schapira

    After more than two decades of evolution, TCP and its end-host-based modifications can still suffer from severely degraded performance under challenging real-world network conditions. The reason, as we observe, is the TCP family’s fundamental architectural deficiency: it hardwires packet-level events to control responses and ignores empirical performance. Breaking from this architectural lineage, we propose Performance-oriented Congestion Control (PCC), a new congestion control architecture in which each sender controls its sending strategy based on empirically observed performance metrics. We show through preliminary experimental results that PCC achieves consistently high performance under various challenging network conditions.
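
    The core control loop lends itself to a short sketch: probe two candidate rates, compute an empirical utility from the observed performance, and move the sending rate toward the better one. The utility function, step size, and toy link model below are illustrative assumptions, not PCC's exact design.

      def utility(throughput, loss_rate):
          # Reward throughput, penalize loss (illustrative form).
          return throughput * (1 - loss_rate) - 10 * throughput * loss_rate

      def pcc_step(rate, measure, eps=0.05):
          # measure(rate) -> (throughput, loss_rate) observed at that rate.
          u_hi = utility(*measure(rate * (1 + eps)))
          u_lo = utility(*measure(rate * (1 - eps)))
          return rate * (1 + eps) if u_hi > u_lo else rate * (1 - eps)

      def toy_measure(rate, capacity=100.0):
          # Toy link: sending above capacity causes proportional loss.
          loss = max(0.0, (rate - capacity) / rate) if rate > 0 else 0.0
          return min(rate, capacity), loss

      rate = 10.0
      for _ in range(60):
          rate = pcc_step(rate, toy_measure)
      print(round(rate, 1))   # settles near the toy link capacity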

  • Arup Raton Roy, Md. Faizul Bari, Mohamed Faten Zhani, Reaz Ahmed, Raouf Boutaba
  • Adrian Gämperli, Vasileios Kotronis, Xenofontas Dimitropoulos
  • Abdulla Alwabel, Minlan Yu, Ying Zhang, Jelena Mirkovic

    We propose a new software-defined security service – SENSS – that enables a victim network to request services from remote ISPs for traffic that carries source or destination IPs from this network’s address space. These services range from statistics gathering, to filtering or quality-of-service guarantees, to route reports or modifications. The SENSS service has very simple, yet powerful, interfaces, which enables it to handle a variety of data-plane and control-plane attacks while being easily implementable in today’s ISPs. Through extensive evaluations on realistic traffic traces and Internet topologies, we show how SENSS can be used to quickly, safely, and effectively mitigate a variety of large-scale attacks that are largely unhandled today.
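
    A hypothetical sketch of a SENSS-style victim request, expressed as a simple message an ISP-side service could act on (the field names and transport are assumptions, not the actual SENSS interface):

      import json

      def make_request(prefix, action, direction="dst"):
          # action: "stats" | "filter" | "allow" | "route-report", mirroring
          # the service classes described in the abstract.
          return json.dumps({
              "prefix": prefix,        # must fall within the requester's space
              "direction": direction,  # match traffic by src or dst address
              "action": action,
          })

      # e.g. ask an upstream ISP to filter traffic destined to the victim prefix
      print(make_request("203.0.113.0/24", action="filter"))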

  • Zachary S. Bischof, Fabián E. Bustamante
  • Pierdomenico Fiadino, Mirko Schiavone, Pedro Casas

    WhatsApp, the new giant in instant multimedia messaging in mobile networks, is rapidly growing in popularity, taking over traditional SMS/MMS messaging. In this paper we present the first large-scale characterization of WhatsApp, useful among others to ISPs wishing to understand the impact of this and similar applications on their networks. Through the combined analysis of passive measurements at the core of a national mobile network, worldwide geo-distributed active measurements, and traffic analysis at end devices, we show that: (i) the WhatsApp hosting architecture is highly centralized and exclusively located in the US; (ii) video sharing accounts for almost 40% of the total WhatsApp traffic volume; (iii) flow characteristics depend on the OS of the end device; (iv) despite the large latencies to US servers, download throughputs are as high as 1.5 Mbps; (v) users react immediately and negatively to service outages through feedback on social networks.

  • Aisha Mushtaq, Asad Khalid Ismail, Abdul Wasay, Bilal Mahmood, Ihsan Ayyub Qazi, Zartash Afzal Uzmi

    Data center operators face extreme challenges in simultaneously providing low latency for short flows, high throughput for long flows, and high burst tolerance. We propose a buffer management strategy that addresses these challenges by isolating short and long flows into separate buffers, sizing these buffers based on flow requirements, and scheduling packets to meet different flow-level objectives. Our design provides new opportunities for performance improvements that complement transport layer optimizations.
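
    A minimal sketch of the isolation idea, assuming a byte-count threshold for classifying flows and strict priority for the short-flow buffer (both are illustrative choices, not the authors' exact design):

      from collections import deque

      SHORT_FLOW_BYTES = 100 * 1024        # classification threshold (assumed)
      short_q = deque(maxlen=64)           # small buffer: low queueing delay
      long_q = deque(maxlen=512)           # large buffer: absorbs bursts

      def enqueue(pkt, flow_bytes_so_far):
          q = short_q if flow_bytes_so_far <= SHORT_FLOW_BYTES else long_q
          if len(q) == q.maxlen:
              return False                 # tail drop within this class's buffer
          q.append(pkt)
          return True

      def dequeue():
          # Strict priority for short flows targets low latency; the long
          # queue keeps the link busy for throughput.
          if short_q:
              return short_q.popleft()
          return long_q.popleft() if long_q else None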

  • Joel Obstfeld, Simon Knight, Ed Kern, Qiang Sheng Wang, Tom Bryan, Dan Bourque

    The increasing demand to provide new network services in a timely and efficient manner is driving the need to design, test and deploy networks quickly and consistently. Testing and verifying at scale is a challenge: network equipment is expensive, requires space, power and cooling, and there is never enough test equipment for everyone who wants to use it! Network virtualization technologies enable a flexible environment for educators, researchers, and operators to create functional models of current, planned, or theoretical networks. This demonstration will show VIRL — the Virtual Internet Routing Lab — a platform that can be used for network change validation, training, education, research, or network-aware applications development. The platform combines network virtualization technologies with virtual machines (VMs) running open-source and commercial operating systems; VM orchestration capabilities; a context-aware configuration engine; and an extensible data-collection framework. The system simplifies the process of creating both simple and complex environments, running simulations, and collecting measurement data.

  • Wentao Chang, An Wang, Aziz Mohaisen, Songqing Chen
  • John P. Rula, Fabian E. Bustamante
  • Shaofeng Chen, Dingyi Fang, Xiaojiang Chen, Tingting Xia, Meng Jin

    This poster presents GuideLoc, a highly efficient aerial wireless localization system that uses directional antennas mounted on a mini multirotor Unmanned Aerial Vehicle (UAV) to detect and position targets. Taking advantage of the angle and signal strength of frames transmitted from targets, GuideLoc can fly directly toward targets with minimal flight path and time. We implement a prototype of GuideLoc and evaluate its performance through simulations and experiments. Experimental results show that GuideLoc achieves an average location accuracy of 2.7 meters and reduces flight distance by more than 50% compared with existing UAV-based localization approaches.

  • Keunhong Lee, Joongi Kim, Sue Moon
  • Yuliang Li, Guang Yao, Jun Bi
  • Jun Li, Skyler Berg, Mingwei Zhang, Peter Reiher, Tao Wei

    End hosts in today’s Internet have the best knowledge of the type of traffic they should receive, but they play no active role in traffic engineering. Traffic engineering is conducted by ISPs, which unfortunately are blind to specific user needs. End hosts are therefore subject to unwanted traffic, particularly from Distributed Denial of Service (DDoS) attacks. This research proposes a new system called DrawBridge to address this traffic engineering dilemma. Realizing the potential of software-defined networking (SDN), we investigate a solution that enables end hosts to use their knowledge of desired traffic to improve traffic engineering during DDoS attacks.
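
    A hypothetical sketch of the DrawBridge idea: the end host summarizes the traffic it wants dropped, and the result is handed upstream for installation as SDN switch rules (the names and rule format are assumptions, not DrawBridge's actual interface):

      def build_filters(victim_ip, unwanted):
          # unwanted: list of (src_prefix, proto, dst_port) the host rejects.
          return [{"dst_ip": victim_ip, "src_prefix": prefix, "proto": proto,
                   "dst_port": port, "action": "drop"}
                  for (prefix, proto, port) in unwanted]

      rules = build_filters("198.51.100.10",
                            [("203.0.113.0/24", "udp", 53)])   # e.g. a DNS flood
      for rule in rules:
          print(rule)   # a real deployment would push these to upstream switches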

  • Angela H. Jiang, Zachary S. Bischof, Fabian E. Bustamante

    A social news site presents user-curated content, ranked by popularity. Popular curators like Reddit and Facebook have become effective ways of crowdsourcing news and sharing personal opinions. Traditionally, these services require a centralized authority to aggregate data and determine what to display. However, the trust issues that arise from a centralized system are particularly damaging to the “Web democracy” that social news sites are meant to provide. In this poster, we present cliq, a decentralized, P2P-based social news curator that provides private and unbiased reporting. All users in cliq share responsibility for tracking and providing popular content, and any user data that cliq needs to store is also managed across the network. We first inform our design of cliq through an analysis of Reddit. We then design a way to provide content curation without a persistent moderator or usernames.

  • Matthias Vallentin, Dominik Charousset, Thomas C. Schmidt, Vern Paxson, Matthias Wählisch

    When an organization detects a security breach, it undertakes a forensic analysis to figure out what happened. This investigation involves inspecting a wide range of heterogeneous data sources spanning a long period of time. The iterative nature of the analysis procedure requires an interactive experience with the data. However, the distributed processing paradigms we find in practice today fail to meet this requirement: the batch-oriented nature of MapReduce cannot deliver sub-second round-trip times, and distributed in-memory processing cannot store the terabytes of activity logs that must be inspected during an incident. We present the design and implementation of Visibility Across Space and Time (VAST), a distributed database to support interactive network forensics, and libcppa, its exceptionally scalable messaging core. The extended actor framework libcppa enables VAST to distribute lightweight tasks at negligible overhead. In our live demo, we showcase how VAST enables security analysts to grapple with the huge amounts of data often associated with incident investigations.

  • Chengchen Hu, Ji Yang, Zhimin Gong, Shuoling Deng, Hongbo Zhao
  • Arpit Gupta, Laurent Vanbever, Muhammad Shahbaz, Sean Patrick Donovan, Brandon Schlinker, Nick Feamster, Jennifer Rexford, Scott Shenker, Russ Clark, Ethan Katz-Bassett

    BGP severely constrains how networks can deliver traffic over the Internet. Today’s networks can only forward traffic based on the destination IP prefix, by selecting among routes offered by their immediate neighbors. We believe Software-Defined Networking (SDN) could revolutionize wide-area traffic delivery by offering direct control over packet-processing rules that match on multiple header fields and perform a variety of actions. Internet exchange points (IXPs) are a compelling place to start, given their central role in interconnecting many networks and their growing importance in bringing popular content closer to end users. To realize a Software-Defined IXP (an “SDX”), we need new programming abstractions that allow participating networks to create and run these applications, and a runtime that both behaves correctly when interacting with BGP and ensures that applications do not interfere with each other. We must also ensure that the system scales, both in rule-table size and computational overhead. In this demo, we show how we tackle these challenges, demonstrating the flexibility and scalability of our SDX platform. The paper also appears in the main program [1].
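
    To give a flavor of the abstraction, the sketch below mimics a participant policy that matches on multiple header fields and picks a next hop, falling back to BGP when nothing matches; the operators are simplified stand-ins for the platform's actual policy language.

      def match(**fields):
          # Predicate over multiple header fields, not just destination prefix.
          return lambda pkt: all(pkt.get(k) == v for k, v in fields.items())

      def forward(rules, pkt):
          # rules: ordered (predicate, next_hop) pairs; first match wins.
          for pred, next_hop in rules:
              if pred(pkt):
                  return next_hop
          return "bgp-default"             # fall back to the BGP-selected route

      app_specific_peering = [
          (match(dst_port=80), "peer-B"),  # deliver web traffic via peer B
          (match(dst_port=443), "peer-C"), # and TLS traffic via peer C
      ]
      print(forward(app_specific_peering,
                    {"dst_port": 80, "dst_ip": "198.51.100.1"}))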

  • Dina Papagiannaki

    Welcome to the July issue of CCR, an issue that should hopefully inspire a number of discussions that we can continue in person during Sigcomm, in Chicago. This issue features 17 papers: 5 editorial notes and 12 technical contributions from our community. The technical part features novel contributions in the areas of router location inference, performance of fiber-to-the-home networks, BGP, programmable middleboxes, and a programming language for protocol-independent packet processors. Each one of them advances the state of the art and should be a useful building block for future research.

    The research community is increasingly becoming multidisciplinary. One cannot help but be inspired by the elegance of solutions that address real problems in one discipline while exploiting knowledge produced in another. This is the mission of the fifth technical submission in this issue. The core of the contribution is to adopt the concept of design contests and apply it to the area of congestion control protocols in wireless networks. The authors point out that one of the key requirements in any design contest is to “have an unambiguous, measurable objective that will allow one to compare protocols” — and this is exactly what the authors do in their work. The article concludes that design contests can benefit networking research, if designed properly, and the authors encourage others to explore their strengths and weaknesses.

    The remaining papers of the technical part are devoted to one of the largest efforts undertaken in recent years to rethink the architecture of the Internet: the Future Internet Architecture (FIA) program of the U.S. National Science Foundation. FIA targets the design of a trustworthy Internet that incorporates societal, economic, and legal constraints, while following a clean-slate approach. It was the initiative of Prof. David Wetherall, from the University of Washington, to bring the four FIA proposals, and the affiliated project ChoiceNet, to CCR, and to provide a very comprehensive exposition of the different avenues taken by the different consortia. I have to thank David for all the hard work he did to bring all the pieces into the same place, something that will undoubtedly help our community understand the FIA efforts to a greater extent. The FIA session is preceded by a technical note by Dr. Darleen Fisher, FIA program director at the U.S. National Science Foundation. It is inspiring to see how a long-term (11-year) funding effort has led to a number of functioning components that may define the Internet of the future. Thank you, Darleen, for a wonderful introductory note!

    Our editorial session comprises 5 papers. Two of them are workshop reports: i) the Workshop on Internet Economics 2013, and ii) the roundtable on real-time communications research held alongside IPTComm in October 2013. We have an article introducing ProtoRINA, a user-space prototype of the Recursive InterNetwork Architecture (RINA), and a qualitative study of the Internet census data that was collected in March 2013 and has attracted significant attention in our community. The last editorial appears in CCR at my own invitation to its author, Daniel Stenberg. By the end of this year, the Internet Engineering Task Force (IETF) is aiming to standardize the second version of HTTP, i.e., HTTP 2.0. This new version is going to be a very significant change compared to HTTP v1, aiming to provide better support for mobile browsing. Daniel is a Mozilla engineer participating in the standardization of HTTP 2.0 and has kindly accepted to publish his thoughts on HTTP 2.0 in CCR.

    This issue also marks the start of the tenure of Dr. Aline Carneiro Viana, from INRIA. Aline brings a lot of energy to the editorial board, along with her expertise in ad hoc networks, sensor networks, delay-tolerant networks, and cognitive radio networks. With all that, I hope to see most of you in Chicago in August, and please feel free to send me any suggestions on things you would like to see published in CCR in the future.

  • B. Huffaker, M. Fomenkov, K. Claffy

    In this paper we focus on geolocating Internet routers, using a methodology for extracting and decoding geography-related strings from fully qualified domain names (hostnames). We first compiled an extensive dictionary associating geographic strings (e.g., airport codes) with geophysical locations. We then searched a large set of router hostnames for these strings, assuming each autonomous naming domain uses geographic hints consistently within that domain. We used topology and performance data continually collected by our global measurement infrastructure to discern whether a given hint appears to co-locate different hostnames in which it is found. Finally, we generalized geolocation hints into domain-specific rule sets. We generated a total of 1,711 rules covering 1,398 different domains and validated them using domain-specific ground truth we gathered for six domains. Unlike previous efforts which relied on labor-intensive domain-specific manual analysis, we automate our process for inferring the domain specific heuristics, substantially advancing the state-of-the-art of methods for geolocating Internet resources.
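
    The hostname-hint idea reduces to dictionary lookups scoped by per-domain rules, as in the following sketch (the tiny dictionary and the single rule are illustrative, not the authors' generated rule set):

      import re

      AIRPORTS = {"lax": "Los Angeles", "fra": "Frankfurt", "syd": "Sydney"}

      # Hypothetical per-domain rule: a hint is three letters followed by
      # digits inside one of the hostname's dot-separated labels.
      RULES = {"cogentco.com": re.compile(r"\.([a-z]{3})\d+(?=\.)")}

      def geolocate(hostname):
          for domain, pattern in RULES.items():
              if hostname.endswith(domain):
                  for hint in pattern.findall(hostname):
                      if hint in AIRPORTS:   # keep only dictionary-backed hints
                          return AIRPORTS[hint]
          return None

      print(geolocate("te0-7-0-2.ccr21.lax05.cogentco.com"))   # -> Los Angeles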

    Public review by Joel Sommers
  • M. Luckie

    Researchers depend on public BGP data to understand the structure and evolution of the AS topology, as well as the operational security and resiliency of BGP. BGP data is provided voluntarily by network operators who establish BGP sessions with route collectors that record this data. In this paper, we show how trivial it is for a single vantage point (VP) to introduce thousands of spurious routes into the collection by providing examples of five VPs that did so. We explore the impact these misbehaving VPs had on AS relationship inference, showing that they introduced thousands of AS links that did not exist and corrupted the relationship inferences for links that did exist. We evaluate methods to automatically identify misbehaving VPs, although we find the result unsatisfying because the limitations of real-world BGP practices and AS relationship inference algorithms produce signatures similar to those created by misbehaving VPs. The most recent misbehaving VP we discovered added thousands of spurious routes for nine consecutive months until 8 November 2012. This misbehaving VP barely impacts our validation of AS relationship inferences (0.1%), but this number may be misleading, since most of our validation data relies on BGP and RPSL, which validate only existing links rather than asserting the non-existence of links. We have only a few assertions of non-existent routes, all received via our public-facing website that allows operators to provide validation data through our interactive feedback mechanism. We only discovered this misbehavior because two independent operators corrected some inferences, and we noticed that the spurious routes all came from the same VP. This event highlights the limitations of even the best available topology data, and provides additional evidence that comprehensive ground truth validation from operators is essential to scientific research on Internet topology.
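
    One simple detection heuristic suggested by the problem is to flag a vantage point that contributes an outsized number of AS links observed by no other VP; the threshold and input format below are illustrative assumptions, not the paper's exact method.

      from collections import defaultdict

      def unique_links_per_vp(links_by_vp):
          # links_by_vp: mapping vp -> set of (as1, as2) links it announced.
          seen_by = defaultdict(set)
          for vp, links in links_by_vp.items():
              for link in links:
                  seen_by[link].add(vp)
          return {vp: sum(1 for link in links if len(seen_by[link]) == 1)
                  for vp, links in links_by_vp.items()}

      def flag_misbehaving(links_by_vp, max_unique=2):
          return [vp for vp, n in unique_links_per_vp(links_by_vp).items()
                  if n > max_unique]

      data = {"vp1": {(1, 2), (2, 3)},
              "vp2": {(1, 2), (2, 3), (9, 1), (9, 2), (9, 3)}}  # spurious links
      print(flag_misbehaving(data))   # -> ['vp2']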

    Public review by Renata Teixeira