With increasing frequency, users raise concerns about data privacy and protection in centralized Online Social Networks (OSNs), in which providers have the unprecedented privilege to access and exploit every user's private data at will. To mitigate these concerns, researchers have suggested decentralizing OSNs, thereby enabling users to control and manage access to their data themselves. However, previously proposed decentralization approaches suffer from several drawbacks. To tackle their deficiencies, we introduce the Self-Organized Universe of People (SOUP). In this demonstration, we present a prototype of SOUP and share our experiences from a real-world deployment.
Adaptive bitrate (ABR) technologies are widely used in today's popular HTTP-based video streaming services such as YouTube and Netflix. The rate-switching algorithm embedded in a video player is designed to improve video quality of experience (QoE) by selecting an appropriate resolution based on an analysis of network conditions while the video is playing. However, a bad viewing experience often results from the video player having difficulty estimating transit or client-side network conditions accurately. To analyze ABR streaming performance, we developed YouSlow, a web browser plug-in that detects live buffer stalling events and reports them to our analysis tool. To date, YouSlow has collected more than 20,000 YouTube video stalling events from over 40 countries.
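To illustrate the kind of event YouSlow reports, here is a minimal Python sketch (not the plug-in's actual code, which runs inside the browser) that flags stall events in a trace of periodic player-buffer samples, assuming a stall whenever the buffer drains to zero:

```python
# Illustrative sketch: detect rebuffering (stall) events from periodic
# samples of the player's buffered playback time.

def detect_stalls(samples, interval_s=1.0):
    """samples: buffered seconds, sampled every interval_s seconds.
    Returns a list of (start_time_s, duration_s) stall events."""
    stalls = []
    stall_start = None
    for i, buffered in enumerate(samples):
        t = i * interval_s
        if buffered <= 0.0 and stall_start is None:
            stall_start = t                # buffer empty: stall begins
        elif buffered > 0.0 and stall_start is not None:
            stalls.append((stall_start, t - stall_start))
            stall_start = None             # playback resumes
    if stall_start is not None:            # stall persisted to end of trace
        stalls.append((stall_start, len(samples) * interval_s - stall_start))
    return stalls

print(detect_stalls([5, 3, 1, 0, 0, 4, 6]))  # [(3.0, 2.0)]
```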
As smartphones and mobile devices rapidly become indispensable for many network users, mobile malware has become a serious threat to network security and privacy. Especially on the popular Android platform, many malicious apps hide among a large number of benign apps, which makes malware detection more challenging. In this paper, we propose a machine-learning-based method that uses more than 200 features, extracted from both static and dynamic analysis of Android apps, for malware detection. A comparison of modeling results demonstrates that deep learning is especially suitable for Android malware detection, achieving 96% accuracy on real-world Android application sets.
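As a rough illustration of the modeling step only, the following sketch trains a small neural network on placeholder feature vectors with scikit-learn; the actual feature set, data, and deep learning architecture used in the paper are not reproduced here, so X, y, and the layer sizes are assumptions:

```python
# Minimal sketch of the modeling step, assuming feature vectors have
# already been extracted. X and y below are random placeholders standing
# in for per-app feature vectors (200+ static/dynamic features) and
# malware labels.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.random((1000, 200))          # placeholder: 1000 apps x 200 features
y = rng.integers(0, 2, 1000)         # placeholder: 1 = malware, 0 = benign

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=300, random_state=0)
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```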
BGP, the Internet's interdomain routing protocol, is highly vulnerable to routing failures that result from unintentional misconfigurations or deliberate attacks. To defend against these failures, recent years have seen the adoption of the Resource Public Key Infrastructure (RPKI), which currently authorizes 4% of the Internet's routes. The RPKI is a completely new security infrastructure (requiring new servers, caches, and the design of new protocols), a fact that has given rise to some controversy. An alternative proposal has therefore emerged: Route Origin Verification (ROVER) [4, 7], which leverages the existing reverse DNS (rDNS) and DNSSEC to secure the interdomain routing system. Both RPKI and ROVER rely on a hierarchy of authorities to provide trusted information about the routing system. Recently, however, it has been argued that misconfigured, faulty, or compromised RPKI authorities introduce new vulnerabilities into the routing system, which can take IP prefixes offline. Meanwhile, the designers of ROVER claim that it operates in a “fail-safe” mode, where “[o]ne could completely unplug a router verification application at any time and Internet routing would continue to work just as it does today”. There has been debate on Internet community mailing lists about the pros and cons of both approaches. This poster therefore compares the impact of ROVER failures to those of the RPKI, under a threat model that covers misconfigurations, faults, and compromises of their trusted authorities.
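Both systems ultimately feed the same route-origin validation decision at routers. A minimal sketch of that decision logic, simplified from the RFC 6811 procedure that real validators follow:

```python
# Simplified route-origin validation (per the RFC 6811 model): a route is
# "valid" if some covering authorization matches its origin AS and length,
# "invalid" if covered but unmatched, and "unknown" if not covered at all.
import ipaddress

def validate(route_prefix, origin_as, roas):
    """roas: iterable of (prefix, max_length, asn) authorizations."""
    route = ipaddress.ip_network(route_prefix)
    covered = False
    for roa_prefix, max_len, asn in roas:
        roa = ipaddress.ip_network(roa_prefix)
        if route.subnet_of(roa):               # authorization covers route
            covered = True
            if asn == origin_as and route.prefixlen <= max_len:
                return "valid"
    return "invalid" if covered else "unknown"

roas = [("10.0.0.0/16", 24, 65001)]
print(validate("10.0.1.0/24", 65001, roas))    # valid
print(validate("10.0.1.0/24", 65002, roas))    # invalid (wrong origin AS)
print(validate("192.0.2.0/24", 65001, roas))   # unknown (no covering entry)
```

The “fail-safe” question in the debate is precisely what a router should do with “invalid” and “unknown” answers when the authority hierarchy itself misbehaves.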
We present a control plane architecture that accelerates multicast and incast traffic delivery for data-intensive applications in cluster-computing interconnection networks. We experimentally examine the architecture by enabling physical-layer optical multicasting on demand for the application layer, achieving non-blocking performance.
We present a demonstration of WTF (Where's The Fault?), a system that localizes performance problems in home and access networks. We implement WTF as custom firmware that runs on an off-the-shelf home router. WTF uses timing and buffering information from passively monitored traffic at home routers to detect both access-link and wireless network bottlenecks.
One limitation of wildcard rules in Software-Defined Networks such as OpenFlow is the loss of visibility into individual packets. FleXam is a flexible sampling extension for OpenFlow that allows the controller to define which packets should be sampled, what parts of each packet should be selected, and where they should be sent. Here, we present an interactive demo showing how FleXam enables the controller to dynamically adjust sampling rates and change the sampling scheme to stay within a sampling budget, in the context of a traffic statistics collection application.
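As a sketch of how a controller might keep within such a budget (illustrative logic, not FleXam's actual API), consider adapting a flow's sampling rate to its measured packet rate:

```python
# Illustrative controller-side logic: adapt a flow's sampling rate so the
# expected number of sampled packets per second stays within a budget.

def adjust_rate(observed_pps, budget_sps, min_rate=0.001, max_rate=1.0):
    """observed_pps: measured packets/s for the flow;
    budget_sps: allowed sampled packets/s.
    Returns the fraction of packets to sample."""
    if observed_pps <= 0:
        return max_rate                     # idle flow: sample everything
    target = budget_sps / observed_pps      # rate that exactly meets budget
    return max(min_rate, min(max_rate, target))

# e.g., a 50,000 pps flow with a 500 samples/s budget -> 1% sampling
print(adjust_rate(50_000, 500))             # 0.01
```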
Given that Software-Defined Networking is highly successful in solving many of today's manageability, flexibility, and scalability issues in large-scale networks, in this paper we argue that the concept of SDN can be extended even further. Many applications (especially stream processing and big-data applications) rely on graph-based inter-process communication patterns that are very similar to those in computer networks. To our mind, this network abstraction spanning different types of entities is highly suitable for and would benefit from central (SDN-inspired) control, for the same reasons classical networks do. In this work, we investigate the commonalities between such intra-host networks and classical computer networking. Based on this, we study the feasibility of a central network controller that manages both network traffic and intra-host communication over a custom bus system.
Crowdtesting is increasingly popular among researchers for carrying out subjective assessments of different services, since experimenters can easily access a huge pool of human subjects through crowdsourcing platforms. The workers are usually anonymous, and they participate in the experiments independently. A fundamental problem threatening the integrity of these platforms is therefore detecting various types of cheating by the workers. In this poster, we propose a cheat-detection mechanism based on an analysis of workers' mouse cursor trajectories. It provides a jQuery-based library to record browser events, from which we compute a set of metrics over the cursor traces to identify cheaters. We deployed our mechanism on the survey pages of our video quality assessment tasks published on Amazon Mechanical Turk. Our results show that cheaters' cursor movement is usually more direct and contains fewer pauses.
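As an illustration of this kind of trace analysis, the following sketch computes two metrics of the sort just described, path directness and pause count; the exact definitions and thresholds here are illustrative, not necessarily those of our mechanism:

```python
# Sketch: "directness" = straight-line distance over travelled distance
# (close to 1.0 for direct movement), plus a count of long pauses.
import math

def cursor_metrics(trace, pause_ms=500):
    """trace: list of (x, y, t_ms) cursor samples in time order."""
    travelled = sum(math.dist(trace[i][:2], trace[i + 1][:2])
                    for i in range(len(trace) - 1))
    straight = math.dist(trace[0][:2], trace[-1][:2])
    directness = straight / travelled if travelled > 0 else 1.0
    pauses = sum(1 for i in range(len(trace) - 1)
                 if trace[i + 1][2] - trace[i][2] >= pause_ms)
    return directness, pauses

trace = [(0, 0, 0), (40, 5, 120), (80, 10, 900), (100, 10, 1000)]
print(cursor_metrics(trace))   # near-1.0 directness, one long pause
```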
Mininet is a great prototyping tool that combines existing SDN-related software components (e.g., Open vSwitch, OpenFlow controllers, network namespaces, cgroups) into a framework that can automatically set up and configure customized OpenFlow testbeds scaling up to hundreds of nodes. Standing on the shoulders of Mininet, we implement a similar prototyping system called ESCAPE, which can be used to develop and test various components of the service chaining architecture. Our framework incorporates Click for implementing Virtual Network Functions (VNFs), NETCONF for managing Click-based VNFs, and POX for taking care of traffic steering. We also add our extensible Orchestrator module, which can accommodate algorithms for mapping abstract service descriptions to deployed and running service chains.
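For context, the kind of Mininet scripting that ESCAPE builds on looks like the following; this is the plain Mininet Python API, not ESCAPE's own interface, and the POX controller is assumed to listen on localhost:6633:

```python
# Plain Mininet sketch: a small OpenFlow topology attached to an external
# (e.g., POX) controller, assumed to be running on localhost:6633.
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.topo import SingleSwitchTopo

net = Mininet(topo=SingleSwitchTopo(k=3), controller=None)
net.addController('c0', controller=RemoteController,
                  ip='127.0.0.1', port=6633)
net.start()
net.pingAll()      # verify connectivity through the controller
net.stop()
```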
More recently, work on VMs based on minimalistic or specialized OSes (e.g., OSv, ClickOS, Mirage, Erlang on Xen, HalVM) has started pushing the envelope of how reactive or fluid the cloud can be. These VMs' small CPU and memory footprints (as little as a few megabytes) enable a number of scenarios that are not possible with traditional VMs. First, such VMs have the potential to be instantiated and suspended in tens of milliseconds. This means that they can be deployed on the fly, even as new flows arrive in a network, and can be used to effectively cope with flash crowds. Second, the ability to quickly migrate a VM and its state would allow operators to run their servers at "hotter" load levels without fear of overload, since processing could be near-instantaneously moved to a less loaded server. Finally, these VMs' small memory footprints could potentially allow thousands or even more such VMs to run on a single, inexpensive server; this would lead to important investment and operating savings, and would allow for fine-granularity, virtualized network processing (e.g., per-customer firewalls or CPEs, to name a couple). Realizing such a super-fluid cloud, however, poses a number of important challenges, since the virtualization technologies that these VMs run on (e.g., Xen or KVM) were never designed to run such a large number of concurrent VMs. In the case of Xen, the system that this demo is based on, attempts to tackle some of the issues, such as the limited number of event channels or memory grants, are under way, but these are still in their infancy and are not necessarily aiming to run the huge number of VMs we are envisioning. In this demo we will show how to concurrently execute thousands of MiniOS-based guests on a single inexpensive server. We will also show instantiation and migration of such VMs in tens of milliseconds, and transparent, wide-area migration of virtualized middleboxes by combining such VMs with the Multipath TCP (MPTCP) protocol.
Home and business network operators have limited network statistics available over which management decisions can be made. Similarly, few triggered behaviors, such as usage or bandwidth caps for individual users, are available. By looking at the sources of traffic, based on Domain Name System (DNS) cues for the content behind particular web addresses or the source Autonomous System (AS) of the traffic, network operators could create new and interesting rules for their networks. NetAssay is a Software-Defined Networking (SDN)-based, network-wide monitoring and reaction framework. By integrating information from the Border Gateway Protocol (BGP) and the Domain Name System, NetAssay unifies formerly disparate sources of control information and uses them to provide better monitoring, more useful triggered events, and security benefits for network operators.
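A minimal sketch of the underlying idea (illustrative only, not NetAssay's implementation): learn a monitored domain's IP addresses from observed DNS answers, then attribute flow-level byte counts to that domain. The watch-list domain below is hypothetical:

```python
# Illustrative DNS-informed accounting: DNS answers tell us which IPs
# belong to a monitored domain; flow records are then charged to it.
from collections import defaultdict

domain_ips = defaultdict(set)     # domain -> resolved IPs
usage = defaultdict(int)          # domain -> bytes observed

def on_dns_answer(name, ip):
    for monitored in ("example-cdn.com",):        # hypothetical watch list
        if name == monitored or name.endswith("." + monitored):
            domain_ips[monitored].add(ip)

def on_flow_record(src_ip, dst_ip, byte_count):
    for domain, ips in domain_ips.items():
        if dst_ip in ips or src_ip in ips:
            usage[domain] += byte_count

on_dns_answer("video.example-cdn.com", "203.0.113.7")
on_flow_record("192.168.1.10", "203.0.113.7", 48_000)
print(dict(usage))                # {'example-cdn.com': 48000}
```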
Online social networks (OSNs) are an important source of information for scientists in different fields such as computer science, sociology, and economics. However, OSNs are hard to study because they are very large: Facebook had 1.28 billion active users as of March 2014, and Twitter claimed 255 million active users in April 2014. Moreover, companies take measures to prevent crawls of their OSNs and refrain from sharing their data with the research community. For these reasons, we argue that sampling techniques will be the best way to study OSNs in the future. In this work, we take an experimental approach to studying the characteristics of well-known sampling techniques on a full social graph of Twitter crawled in 2012. Our contribution is to evaluate the behavior of these techniques on a real directed graph under two sampling scenarios, (a) obtaining the most popular users and (b) obtaining an unbiased sample of users, and to find the most suitable sampling techniques for each scenario.
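As an example of the family of techniques under study, here is a simple random-walk sampler over a directed graph (the graph and walk length are toy values):

```python
# Toy random walk over a directed follower graph: node -> out-neighbors.
import random

graph = {
    "a": ["b", "c"], "b": ["c"], "c": ["a", "d"], "d": ["a"],
}

def random_walk_sample(graph, start, steps, seed=0):
    rng = random.Random(seed)
    node, visited = start, [start]
    for _ in range(steps):
        out = graph.get(node, [])
        if not out:                 # dead end: restart at a random node
            node = rng.choice(list(graph))
        else:
            node = rng.choice(out)
        visited.append(node)
    return visited

print(random_walk_sample(graph, "a", 10))
```

Note that a plain random walk is biased toward well-connected, popular nodes, which suits scenario (a); obtaining the unbiased sample of scenario (b) typically requires a correction such as a Metropolis-Hastings random walk.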
In recent years, with the rapid development of satellite technology such as On-Board Processing (OBP) and Inter-Satellite Links (ISLs), satellite network devices such as space IP routers have been experimentally carried into space. However, it is difficult to build a future satellite network with current terrestrial Internet technologies because of distinctive features of the space environment, such as severely limited resources and the need for remote hardware/software upgrades in orbit. In this paper, we propose OpenSAN, a novel architecture for software-defined satellite networks. By decoupling the data plane from the control plane, OpenSAN provides satellite networks with high efficiency, fine-grained control, and the flexibility to support future advanced network technologies. Furthermore, we also discuss some practical challenges in the deployment of OpenSAN.
This demo presents a measurement toolkit, Mahimahi, that records websites and replays them under emulated network conditions. Mahimahi is structured as a set of arbitrarily composable UNIX shells. It includes two shells to record and replay Web pages, RecordShell and ReplayShell, as well as two shells for network emulation, DelayShell and LinkShell. In addition, Mahimahi includes a corpus of recorded websites along with benchmark results and link traces (https://github.com/ravinet/sites). Mahimahi improves on prior record-and-replay frameworks in three ways. First, it preserves the multi-origin nature of Web pages, present in approximately 98% of the Alexa U.S. Top 500, when replaying. Second, Mahimahi isolates its own network traffic, allowing multiple instances to run concurrently with no impact on the host machine or collected measurements. Finally, Mahimahi is not inherently tied to browsers and can be used to evaluate many different applications. A demo of Mahimahi recording and replaying a Web page over an emulated link can be found at http://youtu.be/vytwDKBA-8s. The source code and instructions to use Mahimahi are available at http://mahimahi.mit.edu/.
After more than two decades of evolution, TCP and its end-host-based modifications can still suffer severely degraded performance under challenging real-world network conditions. The reason, as we observe, is the TCP family's fundamental architectural deficiency: it hardwires packet-level events to control responses and ignores empirical performance. Breaking out of the TCP lineage's architectural deficiency, we propose Performance-oriented Congestion Control (PCC), a new congestion control architecture in which each sender controls its sending strategy based on empirically observed performance metrics. We show through preliminary experimental results that PCC achieves consistently high performance under various challenging network conditions.
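A simplified sketch of a performance-oriented control loop in this spirit (the utility function, step size, and toy link model are illustrative choices, not the paper's exact definitions): probe slightly higher and lower rates, and keep whichever yields better empirically observed utility:

```python
# Toy performance-oriented rate control: move the sending rate in the
# direction that improves an observed utility (throughput minus a loss
# penalty), rather than reacting to individual packet-level events.

def utility(throughput, loss_rate):
    return throughput * (1 - loss_rate) - 10.0 * throughput * loss_rate

def next_rate(rate, measure, eps=0.05):
    """measure(rate) -> (throughput, loss_rate) observed at that rate."""
    u_up = utility(*measure(rate * (1 + eps)))
    u_down = utility(*measure(rate * (1 - eps)))
    return rate * (1 + eps) if u_up >= u_down else rate * (1 - eps)

def measure(rate, capacity=10.0):
    """Toy link: 10 Mbps capacity; sending faster only produces loss."""
    tput = min(rate, capacity)
    loss = max(0.0, (rate - capacity) / rate) if rate > 0 else 0.0
    return tput, loss

r = 2.0
for _ in range(40):
    r = next_rate(r, measure)
print(round(r, 2))        # settles just below the 10 Mbps capacity
```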
We propose a new software-defined security service, SENSS, that enables a victim network to request services from remote ISPs for traffic that carries source or destination IPs from the victim network's address space. These services range from statistics gathering, to filtering or quality-of-service guarantees, to route reports or modifications. The SENSS service has very simple, yet powerful, interfaces, enabling it to handle a variety of data plane and control plane attacks while being easily implementable in today's ISPs. Through extensive evaluations on realistic traffic traces and Internet topologies, we show how SENSS can be used to quickly, safely, and effectively mitigate a variety of large-scale attacks that are largely unhandled today.
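For illustration, here is a hypothetical victim-side request in the spirit of such an interface; the message format, field names, and service types shown are assumptions for exposition, not the actual SENSS protocol:

```python
# Hypothetical victim-side message: ask a remote ISP to filter traffic
# toward one of the victim's own prefixes. All field names are
# illustrative assumptions.
import json

def make_filter_request(victim_prefix, proto, dst_port):
    return json.dumps({
        "type": "filter",           # other types might be traffic/route queries
        "prefix": victim_prefix,    # must lie in the requester's address space
        "match": {"proto": proto, "dst_port": dst_port},
        "action": "drop",
    })

print(make_filter_request("198.51.100.0/24", "udp", 123))
```

The key property this illustrates is that a request is only authorized for the requester's own address space, which is what keeps the interface simple yet safe.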
WhatsApp, the new giant in instant multimedia messaging on mobile networks, is rapidly increasing in popularity, taking over traditional SMS/MMS messaging. In this paper we present the first large-scale characterization of WhatsApp, useful among others to ISPs willing to understand the impact of this and similar applications on their networks. Through the combined analysis of passive measurements at the core of a national mobile network, worldwide geo-distributed active measurements, and traffic analysis at end devices, we show that: (i) the WhatsApp hosting architecture is highly centralized and located exclusively in the US; (ii) video sharing accounts for almost 40% of the total WhatsApp traffic volume; (iii) flow characteristics depend on the OS of the end device; (iv) despite the large latencies to US servers, download throughputs are as high as 1.5 Mbps; and (v) users react immediately and negatively to service outages through social network feedback.
Data center operators face extreme challenges in simultaneously providing low latency for short flows, high throughput for long flows, and high burst tolerance. We propose a buffer management strategy that addresses these challenges by isolating short and long flows into separate buffers, sizing these buffers based on flow requirements, and scheduling packets to meet different flow-level objectives. Our design provides new opportunities for performance improvements that complement transport layer optimizations.
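A minimal sketch of the isolation idea (the promotion threshold and strict-priority schedule below are illustrative choices, not necessarily the design's exact parameters):

```python
# Illustrative short/long flow isolation: a flow counts as "short" until
# it has sent more than a byte threshold; short and long flows sit in
# separate queues, and the short-flow queue is drained first.
from collections import deque

SHORT_FLOW_BYTES = 100 * 1024          # promotion threshold: 100 KB

class IsolatingBuffer:
    def __init__(self):
        self.sent = {}                  # flow_id -> bytes seen so far
        self.short_q, self.long_q = deque(), deque()

    def enqueue(self, flow_id, pkt_bytes):
        total = self.sent.get(flow_id, 0) + pkt_bytes
        self.sent[flow_id] = total
        q = self.short_q if total <= SHORT_FLOW_BYTES else self.long_q
        q.append((flow_id, pkt_bytes))

    def dequeue(self):
        """Strict priority to short flows for low latency."""
        if self.short_q:
            return self.short_q.popleft()
        return self.long_q.popleft() if self.long_q else None

buf = IsolatingBuffer()
buf.enqueue("f1", 1500)                 # small flow -> short queue
buf.enqueue("f2", 200 * 1024)           # bulk flow -> long queue
print(buf.dequeue())                    # ('f1', 1500) served first
```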
The increasing demand to provide new network services in a timely and efficient manner is driving the need to design, test, and deploy networks quickly and consistently. Testing and verifying at scale is a challenge: network equipment is expensive, requires space, power, and cooling, and there is never enough test equipment for everyone who wants to use it! Network virtualization technologies enable a flexible environment for educators, researchers, and operators to create functional models of current, planned, or theoretical networks. This demonstration will show VIRL, the Virtual Internet Routing Lab, a platform that can be used for network change validation, training, education, research, or network-aware application development. The platform combines network virtualization technologies with virtual machines (VMs) running open-source and commercial operating systems; VM orchestration capabilities; a context-aware configuration engine; and an extensible data-collection framework. The system simplifies the process of creating both simple and complex environments, running simulations, and collecting measurement data.
This poster presents GuideLoc, a highly efficient aerial wireless localization system that uses directional antennas mounted on a mini multirotor Unmanned Aerial Vehicle (UAV) to detect and position targets. Taking advantage of the angle and signal strength of frames transmitted from targets, GuideLoc can fly directly towards targets, minimizing flight route and time. We implement a prototype of GuideLoc and evaluate its performance through simulations and experiments. Experimental results show that GuideLoc achieves an average location accuracy of 2.7 meters and reduces flight distance by more than 50% compared with existing UAV-based localization approaches.
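A simplified sketch of the guidance step (the antenna sectorization, coordinate convention, and step size are illustrative assumptions): estimate the target's bearing as the strongest directional-antenna sector and move toward it:

```python
# Toy guidance step: pick the bearing of the antenna sector with the
# strongest received signal and advance one step in that direction.
# Convention: bearing 0 deg = +y (north), 90 deg = +x (east).
import math

def next_waypoint(pos, sector_rssi, step_m=5.0):
    """pos: (x, y) in meters; sector_rssi: {bearing_deg: RSSI in dBm}."""
    bearing = max(sector_rssi, key=sector_rssi.get)   # strongest sector
    rad = math.radians(bearing)
    return (pos[0] + step_m * math.sin(rad),
            pos[1] + step_m * math.cos(rad))

rssi = {0: -70.0, 90: -52.0, 180: -75.0, 270: -66.0}  # east is strongest
print(next_waypoint((0.0, 0.0), rssi))                # -> (5.0, ~0.0)
```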
End hosts in today's Internet have the best knowledge of the type of traffic they should receive, but they play no active role in traffic engineering, which is conducted by ISPs that are unfortunately blind to specific user needs. End hosts are therefore subject to unwanted traffic, particularly from Distributed Denial of Service (DDoS) attacks. This research proposes a new system called DrawBridge to address this traffic engineering dilemma. Realizing the potential of software-defined networking (SDN), we investigate a solution that enables end hosts to use their knowledge of desired traffic to improve traffic engineering during DDoS attacks.
A social news site presents user-curated content, ranked by popularity. Popular services such as Reddit and Facebook have become an effective way of crowdsourcing news and sharing personal opinions. Traditionally, these services require a centralized authority to aggregate data and determine what to display. However, the trust issues that arise from a centralized system are particularly damaging to the "Web democracy" that social news sites are meant to provide. In this poster, we present cliq, a decentralized social news curator. cliq is a P2P-based social news curator that provides private and unbiased reporting. All users in cliq share responsibility for tracking and providing popular content, and any user data that cliq needs to store is likewise managed across the network. We first inform our design of cliq through an analysis of Reddit, and we design a way to provide content curation without persistent moderators or usernames.