Computer Communication Review: Papers

  • Ilias Marinos, Robert N.M. Watson, Mark Handley
  • Aaron N. Parks, Angli Liu, Shyamnath Gollakota, Joshua R. Smith

    Communication primitives such as coding and multiple-antenna processing have provided significant benefits for traditional wireless systems. Existing designs, however, consume significant power and computational resources, and hence cannot run on low-complexity, power-constrained backscatter devices. This paper makes two main contributions: (1) we introduce the first multi-antenna cancellation design that operates on backscatter devices while retaining a small form factor and power footprint; (2) we introduce a novel coding mechanism that enables long-range communication as well as concurrent transmissions and can be decoded on backscatter devices. We build hardware prototypes of the above designs that can be powered solely using harvested energy from TV and solar sources. The results show that our designs provide benefits for both RFID and ambient backscatter systems: they enable RFID tags to communicate directly with each other at distances of tens of meters and through multiple walls, and they increase the communication rate and range achieved by ambient backscatter systems by 100X and 40X, respectively. We believe that this paper represents a substantial leap in the capabilities of backscatter communication.

  • Aaron Gember-Jacobson, Raajay Viswanathan, Chaithan Prakash, Robert Grandl, Junaid Khalid, Sourav Das, Aditya Akella

    Network functions virtualization (NFV) together with software-defined networking (SDN) has the potential to help operators satisfy tight service level agreements, accurately monitor and manipulate network traffic, and minimize operating expenses. However, in scenarios that require packet processing to be redistributed across a collection of network function (NF) instances, simultaneously achieving all three goals requires a framework that provides efficient, coordinated control of both internal NF state and network forwarding state. To this end, we design a control plane called OpenNF. We use carefully designed APIs and a clever combination of events and forwarding updates to address race conditions, bound overhead, and accommodate a variety of NFs. Our evaluation shows that OpenNF offers efficient state control without compromising flexibility, and requires modest additions to NFs.
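
    A minimal, self-contained sketch of the loss-free, order-preserving state move such a control plane must coordinate; the NF class, its counter state, and the move helper below are illustrative stand-ins, not OpenNF's actual API:

        # Sketch (illustrative names, not OpenNF's real API) of a loss-free,
        # order-preserving state move between two NF instances.

        class NF:
            """A toy NF whose per-flow state is just a packet counter."""
            def __init__(self, name):
                self.name = name
                self.state = {}  # flow id -> packets seen

            def process(self, flow):
                self.state[flow] = self.state.get(flow, 0) + 1

        def move(flow, src, dst, in_flight):
            # 1. Packets arriving mid-move are buffered, not dropped (loss-free).
            buffered = list(in_flight)
            # 2. Export the flow's state at the source; import at the destination.
            dst.state[flow] = src.state.pop(flow, 0)
            # 3. Replay buffered packets in arrival order (order-preserving).
            for _ in buffered:
                dst.process(flow)

        a, b = NF("a"), NF("b")
        for _ in range(3):
            a.process("flow1")
        move("flow1", a, b, in_flight=["p4", "p5"])
        print(b.state["flow1"])  # 5: nothing lost or reordered during the move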

  • Hongqiang Harry Liu, Srikanth Kandula, Ratul Mahajan, Ming Zhang, David Gelernter

    Network faults such as link failures and high switch configuration delays can cause heavy congestion and packet loss. Because it takes time for the traffic engineering systems to detect and react to such faults, these conditions can last long—even tens of seconds. We propose forward fault correction (FFC), a proactive approach for handling faults. FFC spreads network traffic such that freedom from congestion is guaranteed under arbitrary combinations of up to k faults. We show how FFC can be practically realized by compactly encoding the constraints that arise from this large number of possible faults and solving them efficiently using sorting networks. Experiments with data from real networks show that, with negligible loss in overall network throughput, FFC can reduce data loss by a factor of 7–130 in well-provisioned networks, and reduce the loss of high-priority traffic to almost zero in well-utilized networks.
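
    For intuition, a toy checker of the FFC property: given a traffic spread over tunnels, it exhaustively verifies congestion freedom under every combination of up to k link failures. The topology, rates, and proportional-rescaling model are assumptions for illustration; FFC itself encodes these constraints compactly rather than enumerating them:

        # Toy check: is this traffic spread congestion-free under every
        # combination of up to k link failures? Brute force is only for
        # intuition; FFC compacts these exponentially many constraints using
        # sorting networks. Rescaling model: when a tunnel fails, its flow's
        # traffic moves proportionally onto the surviving tunnels.
        from itertools import combinations

        flows = {  # flow -> {tunnel (tuple of links): allocated rate}
            "f1": {("l1", "l2"): 2.5, ("l3",): 2.5},
            "f2": {("l3",): 2.5, ("l4", "l5"): 2.5},
        }
        capacity = {"l1": 10, "l2": 10, "l3": 10, "l4": 10, "l5": 10}

        def congestion_free(flows, capacity, k):
            links = list(capacity)
            for f in range(k + 1):
                for failed in combinations(links, f):
                    load = dict.fromkeys(links, 0.0)
                    for tunnels in flows.values():
                        live = {t: r for t, r in tunnels.items()
                                if not set(t) & set(failed)}
                        if not live:
                            return False  # flow disconnected: spread invalid
                        scale = sum(tunnels.values()) / sum(live.values())
                        for t, r in live.items():
                            for link in t:  # survivors absorb the failed share
                                load[link] += r * scale
                    if any(load[l] > capacity[l] for l in links):
                        return False
            return True

        print(congestion_free(flows, capacity, k=1))  # True: safe for any 1 fault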

  • Mosharaf Chowdhury, Yuan Zhong, Ion Stoica

    Communication in data-parallel applications often involves a collection of parallel flows. Traditional techniques to optimize flow-level metrics do not perform well in optimizing such collections, because the network is largely agnostic to application-level requirements. The recently proposed coflow abstraction bridges this gap and creates new opportunities for network scheduling. In this paper, we address inter-coflow scheduling for two different objectives: decreasing communication time of data-intensive jobs and guaranteeing predictable communication time. We introduce the concurrent open shop scheduling with coupled resources problem, analyze its complexity, and propose effective heuristics to optimize either objective. We present Varys, a system that enables data-intensive frameworks to use coflows and the proposed algorithms while maintaining high network utilization and guaranteeing starvation freedom. EC2 deployments and trace-driven simulations show that communication stages complete up to 3.16× faster on average and up to 2× more coflows meet their deadlines using Varys in comparison to per-flow mechanisms. Moreover, Varys outperforms non-preemptive coflow schedulers by more than 5×.
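
    One of Varys' key heuristics orders coflows by their most bottlenecked port; a sketch of that Smallest-Effective-Bottleneck-First ordering, with made-up port names and a uniform port rate:

        # Sketch of Varys' Smallest-Effective-Bottleneck-First (SEBF) ordering:
        # a coflow finishes no sooner than its most loaded port allows, so
        # coflows are served in increasing order of that bottleneck time.

        def bottleneck_time(coflow, port_rate):
            """coflow: {(src_port, dst_port): bytes}. Returns bottleneck seconds."""
            load = {}
            for (src, dst), nbytes in coflow.items():
                load[src] = load.get(src, 0) + nbytes  # bytes each sender pushes
                load[dst] = load.get(dst, 0) + nbytes  # bytes each receiver pulls
            return max(b / port_rate for b in load.values())

        coflows = {  # port names and sizes are invented
            "job-A": {("s1", "r1"): 4e9, ("s2", "r1"): 4e9},  # r1 bottlenecked
            "job-B": {("s1", "r2"): 1e9},
        }
        order = sorted(coflows,
                       key=lambda c: bottleneck_time(coflows[c], port_rate=1e9))
        print(order)  # ['job-B', 'job-A']: the small coflow is not stuck behind A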

  • Mohammad Alizadeh, Tom Edsall, Sarang Dharmapurikar, Ramanan Vaidyanathan, Kevin Chu, Andy Fingerhut, Vinh The Lam, Francis Matus, Rong Pan, Navindra Yadav, George Varghese

    We present the design, implementation, and evaluation of CONGA, a network-based distributed congestion-aware load balancing mechanism for datacenters. CONGA exploits recent trends including the use of regular Clos topologies and overlays for network virtualization. It splits TCP flows into flowlets, estimates real-time congestion on fabric paths, and allocates flowlets to paths based on feedback from remote switches. This enables CONGA to efficiently balance load and seamlessly handle asymmetry, without requiring any TCP modifications. CONGA has been implemented in custom ASICs as part of a new datacenter fabric. In testbed experiments, CONGA has 5× better flow completion times than ECMP even with a single link failure and achieves 2–8× better throughput than MPTCP in Incast scenarios. Further, the Price of Anarchy for CONGA is provably small in Leaf-Spine topologies; hence CONGA is nearly as effective as a centralized scheduler while being able to react to congestion in microseconds. Our main thesis is that datacenter fabric load balancing is best done in the network, and requires global schemes such as CONGA to handle asymmetry.
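
    A sketch of the flowlet mechanism, assuming an illustrative gap threshold and congestion metrics: packets within a flowlet stick to one path (avoiding reordering), while each new flowlet is steered to the least congested path reported by remote switches:

        # Sketch of flowlet switching: a new flowlet starts when the
        # inter-packet gap exceeds the worst path-delay skew, so packets inside
        # a flowlet keep one path (no reordering) while each new flowlet can be
        # steered using congestion feedback from remote switches.
        FLOWLET_GAP = 500e-6   # seconds; illustrative threshold
        flowlet_table = {}     # flow id -> (last packet time, chosen path)

        def route(flow, now, congestion):
            """congestion: {path: metric fed back from remote switches}."""
            entry = flowlet_table.get(flow)
            if entry and now - entry[0] < FLOWLET_GAP:
                path = entry[1]                             # same flowlet
            else:
                path = min(congestion, key=congestion.get)  # new flowlet
            flowlet_table[flow] = (now, path)
            return path

        print(route("f1", 0.0000, {"p1": 0.7, "p2": 0.2}))  # p2: least congested
        print(route("f1", 0.0001, {"p1": 0.1, "p2": 0.9}))  # p2: same flowlet
        print(route("f1", 0.0100, {"p1": 0.1, "p2": 0.9}))  # p1: gap -> new flowlet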

  • Rohan Gandhi, Hongqiang Harry Liu, Y. Charlie Hu, Guohan Lu, Jitendra Padhye, Lihua Yuan, Ming Zhang

    Load balancing is a foundational function of datacenter infrastructures and is critical to the performance of online services hosted in datacenters. As the demand for cloud services grows, expensive and hard-to-scale dedicated hardware load balancers are being replaced with software load balancers that scale using a distributed data plane running on commodity servers. Software load balancers offer low cost, high availability, and high flexibility, but suffer high latency and low capacity per load balancer, making them less than ideal for applications that demand high throughput, low latency, or both. In this paper, we present Duet, which offers all the benefits of software load balancers, along with low latency and high availability, at next to no cost. We do this by exploiting a hitherto overlooked resource in datacenter networks: the switches themselves. We show how to embed the load balancing functionality into existing hardware switches, thereby achieving organic scalability at no extra cost. For flexibility and high availability, Duet seamlessly integrates the switch-based load balancer with a small deployment of software load balancers. We enumerate and solve several architectural and algorithmic challenges involved in building such a hybrid load balancer. We evaluate Duet using a prototype implementation, as well as extensive simulations driven by traces from our production data centers. Our evaluation shows that Duet provides 10x more capacity than a software load balancer at a fraction of the cost, reduces latency by a factor of 10 or more, and quickly adapts to network dynamics, including failures.
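
    A sketch of the switch feature Duet repurposes: an ECMP-style hash over the connection 5-tuple maps a virtual IP (VIP) to one of its direct IPs (DIPs), giving per-connection affinity at no extra hardware cost. The addresses are invented and hashlib stands in for the ASIC's hash function:

        # ECMP-style VIP -> DIP selection, the mechanism Duet embeds in
        # existing switch tables. Same 5-tuple, same DIP: connection affinity.
        import hashlib

        vip_to_dips = {"10.0.0.1": ["192.168.0.1", "192.168.0.2", "192.168.0.3"]}

        def pick_dip(vip, five_tuple):
            dips = vip_to_dips[vip]
            digest = hashlib.sha1(repr(five_tuple).encode()).digest()
            return dips[int.from_bytes(digest[:4], "big") % len(dips)]

        conn = ("10.9.8.7", 5555, "10.0.0.1", 80, "tcp")
        assert pick_dip("10.0.0.1", conn) == pick_dip("10.0.0.1", conn)  # affinity
        print(pick_dip("10.0.0.1", conn))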

  • Simon Peter, Umar Javed, Qiao Zhang, Doug Woos, Thomas Anderson, Arvind Krishnamurthy

    A longstanding problem with the Internet is that it is vulnerable to outages, black holes, hijacking and denial of service. Although architectural solutions have been proposed to address many of these issues, they have had difficulty being adopted due to the need for widespread adoption before most users would see any benefit. This is especially relevant as the Internet is increasingly used for applications where correct and continuous operation is essential. In this paper, we study whether a simple, easy to implement model is sufficient for addressing the aforementioned Internet vulnerabilities. Our model, called ARROW (Advertised Reliable Routing Over Waypoints), is designed to allow users to configure reliable and secure end to end paths through participating providers. With ARROW, a highly reliable ISP offers tunneled transit through its network, along with packet transformation at the ingress, as a service to remote paying customers. Those customers can stitch together reliable end to end paths through a combination of participating and non-participating ISPs in order to improve the fault-tolerance, robustness, and security of mission critical transmissions. Unlike efforts to redesign the Internet from scratch, we show that ARROW can address a set of well-known Internet vulnerabilities, for most users, with the adoption of only a single transit ISP. To demonstrate ARROW, we have added it to a small-scale wide-area ISP we control. We evaluate its performance and failure recovery properties in both simulation and live settings.

  • Bryce Kellogg, Aaron Parks, Shyamnath Gollakota, Joshua R. Smith, David Wetherall

    RF-powered computers are small devices that compute and communicate using only the power that they harvest from RF signals. While existing technologies have harvested power from ambient RF sources (e.g., TV broadcasts), they require a dedicated gateway (like an RFID reader) for Internet connectivity. We present Wi-Fi Backscatter, a novel communication system that bridges RF-powered devices with the Internet. Specifically, we show that it is possible to reuse existing Wi-Fi infrastructure to provide Internet connectivity to RF-powered devices. To show Wi-Fi Backscatter’s feasibility, we build a hardware prototype and demonstrate the first communication link between an RF-powered device and commodity Wi-Fi devices. We use off-the-shelf Wi-Fi devices including Intel Wi-Fi cards, Linksys Routers, and our organization’s Wi-Fi infrastructure, and achieve communication rates of up to 1 kbps and ranges of up to 2.1 meters. We believe that this new capability can pave the way for the rapid deployment and adoption of RF-powered devices and achieve ubiquitous connectivity via nearby mobile devices that are Wi-Fi enabled.
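
    A sketch of the decoding idea with synthetic numbers: the tag conveys a bit by toggling its reflection for several consecutive Wi-Fi packets, which shifts the signal strength the receiver observes; thresholding per-packet RSSI against the trace midpoint then recovers the bits (whether reflection raises or lowers RSSI, and by how much, is assumed here):

        # Decode tag bits from a per-packet RSSI trace by thresholding.
        PKTS_PER_BIT = 4  # the tag holds each bit across this many packets

        def decode(rssi):
            mid = (max(rssi) + min(rssi)) / 2
            bits = []
            for i in range(0, len(rssi) - PKTS_PER_BIT + 1, PKTS_PER_BIT):
                window = rssi[i:i + PKTS_PER_BIT]
                bits.append(1 if sum(window) / len(window) > mid else 0)
            return bits

        # Synthetic RSSI (dBm): higher while the tag reflects, lower while quiet.
        trace = [-40, -40, -41, -40, -47, -46, -47, -47, -40, -41, -40, -40]
        print(decode(trace))  # [1, 0, 1]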

  • Swarun Kumar, Ezzeldin Hamed, Dina Katabi, Li Erran Li

    Despite the rapid growth of next-generation cellular networks, researchers and end-users today have limited visibility into the performance and problems of these networks. As LTE deployments move towards femto and pico cells, even operators struggle to fully understand the propagation and interference patterns affecting their service, particularly indoors. This paper introduces LTEye, the first open platform to monitor and analyze LTE radio performance at a fine temporal and spatial granularity. LTEye accesses the LTE PHY layer without requiring private user information or provider support. It provides deep insights into the PHY-layer protocols deployed in these networks. LTEye’s analytics enable researchers and policy makers to uncover serious deficiencies in these networks due to inefficient spectrum utilization and inter-cell interference. In addition, LTEye extends synthetic aperture radar (SAR), widely used for radar and backscatter signals, to operate over cellular signals. This enables businesses and end-users to localize mobile users and capture the distribution of LTE performance across spatial locations in their facility. As a result, they can diagnose problems and better plan deployment of repeaters or femto cells. We implement LTEye on USRP software radios, and present empirical insights and analytics from multiple AT&T and Verizon base stations in our locality.

  • Ryan Craven, Robert Beverly, Mark Allman

    Understanding, measuring, and debugging IP networks, particularly across administrative domains, is challenging. One particularly daunting aspect of the challenge is the presence of transparent middleboxes, which are now common in today's Internet. In-path middleboxes that modify packet headers are typically transparent to a TCP, yet can impact end-to-end performance or cause blackholes. We develop TCP HICCUPS to reveal packet header manipulation to both endpoints of a TCP connection. HICCUPS permits endpoints to cooperate with currently opaque middleboxes without prior knowledge of their behavior. For example, with visibility into end-to-end behavior, a TCP can selectively enable or disable performance enhancing options. This cooperation enables protocol innovation by allowing new IP or TCP functionality (e.g., ECN, SACK, Multipath TCP, Tcpcrypt) to be deployed without fear of such functionality being misconstrued, modified, or blocked along a path. HICCUPS is incrementally deployable and introduces no new options. We implement and deploy TCP HICCUPS across thousands of disparate Internet paths, highlighting the breadth and scope of subtle and hard-to-detect middlebox behaviors encountered. We then show how path diagnostic capabilities provided by HICCUPS can benefit applications and the network.
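
    A sketch of the integrity check behind this, with the covered fields and the hash placement simplified (HICCUPS packs its hash into existing TCP fields rather than adding options; the field list below is illustrative):

        # Each end summarizes the headers it sent with a short hash carried
        # in-band; the peer recomputes the hash over what actually arrived, so
        # any in-path modification of a covered field is revealed.
        import hashlib

        COVERED = ("src_port", "dst_port", "seq", "wscale", "mss")

        def header_hash(hdr):
            blob = "|".join(f"{k}={hdr[k]}" for k in COVERED).encode()
            return hashlib.sha256(blob).hexdigest()[:8]  # short, fits in-band

        sent = {"src_port": 4242, "dst_port": 80, "seq": 1000,
                "wscale": 7, "mss": 1460}
        tag = header_hash(sent)               # sender computes and embeds this

        arrived = dict(sent, mss=1380)        # a middlebox rewrote the MSS
        print(header_hash(arrived) == tag)    # False: manipulation is revealed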

  • Fahad R. Dogar, Thomas Karagiannis, Hitesh Ballani, Antony Rowstron

    Many data center applications perform rich and complex tasks (e.g., executing a search query or generating a user's news feed). From a network perspective, these tasks typically comprise multiple flows, which traverse different parts of the network at potentially different times. Most network resource allocation schemes, however, treat all these flows in isolation, rather than as part of a task, and therefore only optimize flow-level metrics. In this paper, we show that task-aware network scheduling, which groups flows of a task and schedules them together, can reduce both the average and tail completion times for typical data center applications. To achieve these benefits in practice, we design and implement Baraat, a decentralized task-aware scheduling system. Baraat schedules tasks in FIFO order but avoids head-of-line blocking by dynamically changing the level of multiplexing in the network. Through experiments with Memcached on a small testbed and large-scale simulations, we show that Baraat outperforms state-of-the-art decentralized schemes (e.g., pFabric) as well as centralized schedulers (e.g., Orchestra) for a wide range of workloads (e.g., search, analytics, etc.).
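
    A sketch of that FIFO-with-limited-multiplexing policy under an assumed heaviness threshold: tasks are admitted strictly in arrival order, but a heavy task at the head raises the level of multiplexing so tasks behind it still run:

        # Baraat-style FIFO-LM: serve tasks in arrival order, raising the level
        # of multiplexing past heavy tasks to avoid head-of-line blocking.
        HEAVY_BYTES = 1e9  # illustrative cutoff for calling a task heavy

        def active_tasks(queue, base_level=1):
            """queue: tasks in arrival order; returns ids allowed to send now."""
            active, level = [], base_level
            for task in queue:
                if len(active) >= level:
                    break
                active.append(task["id"])
                if task["bytes"] >= HEAVY_BYTES:
                    level += 1  # heavy task: admit one more behind it
            return active

        q = [{"id": "t1", "bytes": 5e9},   # heavy task arrives first
             {"id": "t2", "bytes": 1e6},
             {"id": "t3", "bytes": 1e6}]
        print(active_tasks(q))  # ['t1', 't2']: t2 escapes head-of-line blocking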

  • Tiffany Hyun-Jin Kim, Cristina Basescu, Limin Jia, Soo Bum Lee, Yih-Chun Hu, Adrian Perrig

    In-network source authentication and path validation are fundamental primitives to construct higher-level security mechanisms such as DDoS mitigation, path compliance, packet attribution, or protection against flow redirection. Unfortunately, currently proposed solutions either fall short of addressing important security concerns or require a substantial amount of router overhead. In this paper, we propose lightweight, scalable, and secure protocols for shared key setup, source authentication, and path validation. Our prototype implementation demonstrates the efficiency and scalability of the protocols, especially for software-based implementations.

  • Anirudh Sivaraman, Keith Winstein, Pratiksha Thaker, Hari Balakrishnan

    When designing a distributed network protocol, typically it is infeasible to fully define the target network where the protocol is intended to be used. It is therefore natural to ask: How faithfully do protocol designers really need to understand the networks they design for? What are the important signals that endpoints should listen to? How can researchers gain confidence that systems that work well on well-characterized test networks during development will also perform adequately on real networks that are inevitably more complex, or future networks yet to be developed? Is there a tradeoff between the performance of a protocol and the breadth of its intended operating range of networks? What is the cost of playing fairly with cross-traffic that is governed by another protocol? We examine these questions quantitatively in the context of congestion control, by using an automated protocol-design tool to approximate the best possible congestion-control scheme given imperfect prior knowledge about the network. We found only weak evidence of a tradeoff between operating range in link speeds and performance, even when the operating range was extended to cover a thousand-fold range of link speeds. We found that it may be acceptable to simplify some characteristics of the network—such as its topology—when modeling for design purposes. Some other features, such as the degree of multiplexing and the aggressiveness of contending endpoints, are important to capture in a model.

  • K.V. Rashmi, Nihar B. Shah, Dikang Gu, Hairong Kuang, Dhruba Borthakur, Kannan Ramchandran

    Erasure codes such as Reed-Solomon (RS) codes are being extensively deployed in data centers since they offer significantly higher reliability than data replication methods at much lower storage overheads. These codes, however, mandate much higher network bandwidth and disk IO during reconstruction of data that is missing or otherwise unavailable. Existing solutions to this problem either demand additional storage space or severely limit the choice of system parameters. In this paper, we present Hitchhiker, a new erasure-coded storage system that reduces both network traffic and disk IO by around 25% to 45% during reconstruction of missing or otherwise unavailable data, with no additional storage, the same fault tolerance, and arbitrary flexibility in the choice of parameters, as compared to RS-based systems. Hitchhiker "rides" on top of RS codes and is based on novel encoding and decoding techniques presented in this paper. We have implemented Hitchhiker in the Hadoop Distributed File System (HDFS). When evaluating various metrics on the data-warehouse cluster in production at Facebook with real-time traffic and workloads, we observe during reconstruction a 36% reduction in computation time and a 32% reduction in data read time, in addition to the 35% reduction in network traffic and disk IO. Hitchhiker can thus reduce the latency of degraded reads and perform faster recovery from failed or decommissioned machines.

  • Jeongkeun Lee, Yoshio Turner, Myungjin Lee, Lucian Popa, Sujata Banerjee, Joon-Myung Kang, Puneet Sharma

    Providing bandwidth guarantees to specific applications is becoming increasingly important as applications compete for shared cloud network resources. We present CloudMirror, a solution that provides bandwidth guarantees to cloud applications based on a new network abstraction and workload placement algorithm. An effective network abstraction would enable applications to easily and accurately specify their requirements, while simultaneously enabling the infrastructure to provision resources efficiently for deployed applications. Prior research has approached the bandwidth guarantee specification by using abstractions that resemble physical network topologies. We present a contrasting approach of deriving a network abstraction based on application communication structure, called Tenant Application Graph or TAG. CloudMirror also incorporates a new workload placement algorithm that efficiently meets bandwidth requirements specified by TAGs while factoring in high availability considerations. Extensive simulations using real application traces and datacenter topologies show that CloudMirror can handle 40% more bandwidth demand than the state of the art (e.g., the Oktopus system), while improving high availability from 20% to 70%.
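
    A minimal sketch of what a TAG might look like as a data structure, with invented component sizes and per-VM bandwidths (the paper's TAG semantics are richer than this, e.g. per-component aggregate guarantees):

        # Tenant Application Graph sketch: components sized in VMs, directed
        # edges carrying per-VM bandwidth requirements between components.
        from dataclasses import dataclass, field

        @dataclass
        class TAG:
            sizes: dict = field(default_factory=dict)  # component -> VM count
            edges: dict = field(default_factory=dict)  # (src, dst) -> Mbps/VM

            def add_component(self, name, vms):
                self.sizes[name] = vms

            def add_edge(self, src, dst, mbps_per_vm):
                self.edges[(src, dst)] = mbps_per_vm

            def demand(self, src, dst):
                """Aggregate bandwidth to guarantee from src toward dst."""
                return self.sizes[src] * self.edges[(src, dst)]

        tag = TAG()
        tag.add_component("web", vms=20)
        tag.add_component("db", vms=5)
        tag.add_edge("web", "db", mbps_per_vm=100)
        print(tag.demand("web", "db"))  # 2000 Mbps on the web -> db edge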

  • Dinesh Bharadia, Sachin Katti

    This paper presents FastForward (FF), a novel full-duplex relay that constructively forwards signals so that wireless network throughput and coverage are significantly enhanced. FF is a Layer 1 in-band full-duplex device: it receives and transmits signals directly and simultaneously on the same frequency. It integrates cleanly into existing networks (both WiFi and LTE) as a separate device and does not require changes to clients. FF's key invention is a constructive filtering algorithm that transforms the signal at the relay such that, when it reaches the destination, it constructively combines with the direct signals from the source and provides a significant throughput gain. We prototype FF using off-the-shelf software radios running a stock WiFi PHY and show experimentally that it provides a 3× median throughput increase and nearly a 4× gain at the edge of the coverage area.
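
    The constructive-combining idea in a small sketch: pick the relay filter's phase so the relayed copy arrives in phase with the direct path. A single complex gain stands in for FF's full filter, and the channels are random narrowband taps, both assumptions for illustration:

        # Phase-align the relayed copy with the direct copy at the destination.
        import numpy as np

        rng = np.random.default_rng(1)
        def tap():
            return rng.normal() + 1j * rng.normal()

        h_direct = tap()            # source -> destination
        h_in, h_out = tap(), tap()  # source -> relay, relay -> destination

        # Choose the relay gain's phase so both copies land in phase (|g| = 1).
        g = np.exp(1j * (np.angle(h_direct) - np.angle(h_in * h_out)))

        no_filter = abs(h_direct + h_in * h_out) ** 2      # relay, no alignment
        aligned = abs(h_direct + h_in * h_out * g) ** 2    # constructive filter
        print(aligned >= no_filter)  # True: in-phase copies add constructively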

  • Navid Hamedazimi, Zafar Qazi, Himanshu Gupta, Vyas Sekar, Samir R. Das, Jon P. Longtin, Himanshu Shah, Ashish Tanwer

    This paper describes FireFly, a first but significant step toward realizing the vision of a fully flexible, all-wireless inter-rack datacenter fabric. Each top-of-rack (ToR) switch is equipped with reconfigurable wireless links that can connect to other ToR switches. However, we need to look beyond traditional radio-frequency (RF) wireless solutions (e.g., 60 GHz), as their interference characteristics limit range and capacity. We therefore envision a new use case for free-space optical (FSO) communications, which can offer high data rates (tens of Gbps) over long ranges using low transmission power and with zero interference [31]. The centralized FireFly controller reconfigures the topology and forwarding rules to adapt to changing traffic patterns. While prior work made the case for using FSO links in datacenters [19, 28], it did not establish a viable hardware design, nor did it address the practical network design and management challenges that arise.

  • Vivek Yenamandra, Kannan Srinivasan

    Global synchronization across time and frequency domains significantly benefits wireless communications. Multi-cell (network) MIMO, interference alignment solutions, opportunistic routing techniques in ad-hoc networks, OFDMA, and the like all necessitate synchronization in the time domain, the frequency domain, or both. This paper presents Vidyut, a system that exploits the easily accessible and ubiquitous power line infrastructure to achieve synchronization in time and frequency domains across nodes distributed beyond a single collision domain. Vidyut uses the power lines to transmit a reference frequency tone to which each node locks its frequency, and it exploits the steady periodicity of the delivered power signal itself to synchronize distributed nodes in time. We validate the extent of Vidyut's synchronization and evaluate its effectiveness. We verify Vidyut's suitability for wireless applications such as OFDMA and multi-cell MIMO by validating the benefits of global synchronization in an enterprise wireless network. Our experiments show a throughput gain of 8.2x over MegaMIMO, 7x over NEMOx, and 2.5x over OFDMA systems.

  • Jue Wang, Deepak Vasisht, Dina Katabi

    Prior work in RF-based positioning has mainly focused on discovering the absolute location of an RF source, where state-of-the-art systems can achieve an accuracy on the order of tens of centimeters using a large number of antennas. However, many applications in gaming and gesture-based interfaces benefit more from knowing the detailed shape of a motion. Such trajectory tracing requires a resolution several fold higher than what existing RF-based positioning systems can offer. This paper shows that one can provide a dramatic increase in trajectory tracing accuracy, even with a small number of antennas. The key enabler for our design is a multi-resolution positioning technique that exploits an intrinsic tradeoff between improving the resolution and resolving ambiguity in the location of the RF source. The unique property of this design is its ability to precisely reconstruct the minute details of the trajectory shape, even when the absolute position might have an offset. We built a prototype of our design with commercial off-the-shelf RFID readers and tags and used it to enable a virtual touch screen, which allows a user to interact with a desired computing device by gesturing or writing her commands in the air, where each letter is only a few centimeters wide.
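
    A sketch of that resolution/ambiguity tradeoff using two antenna baselines (the wavelength, spacings, and noiseless phases are all assumptions): a half-wavelength pair yields one coarse, unambiguous angle, while a wide pair yields several precise candidates that the coarse estimate disambiguates:

        # Multi-resolution angle estimation from wrapped phase differences.
        import numpy as np

        WAVELENGTH = 0.326  # meters, roughly the 915 MHz RFID band

        def candidates(phase_diff, spacing):
            """All sin(angle) values consistent with a wrapped phase difference."""
            n_max = int(np.ceil(2 * spacing / WAVELENGTH))
            sines = [(phase_diff / (2 * np.pi) + n) * WAVELENGTH / spacing
                     for n in range(-n_max, n_max + 1)]
            return [s for s in sines if -1 <= s <= 1]

        true_sin = 0.30
        coarse_d, fine_d = WAVELENGTH / 2, 4 * WAVELENGTH

        def wrapped_phase(d):
            raw = 2 * np.pi * d * true_sin / WAVELENGTH
            return (raw + np.pi) % (2 * np.pi) - np.pi

        coarse = candidates(wrapped_phase(coarse_d), coarse_d)[0]  # unambiguous
        fine = min(candidates(wrapped_phase(fine_d), fine_d),
                   key=lambda s: abs(s - coarse))  # precise, disambiguated
        print(round(fine, 3))  # 0.3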

  • Abhigyan Sharma, Xiaozheng Tie, Hardeep Uppal, Arun Venkataramani, David Westbrook, Aditya Yadav

    Mobile devices dominate the Internet today; however, the Internet, rooted in its tethered origins, continues to provide poor infrastructure support for mobility. Our position is that addressing this problem requires solving a key challenge: the design of a massively scalable global name service that rapidly resolves identities to network locations under high mobility. Our primary contribution is the design, implementation, and evaluation of Auspice, a next-generation global name service that addresses this challenge. A key insight underlying Auspice is a demand-aware replica placement engine that intelligently replicates name records to provide low lookup latency, low update cost, and high availability. We have implemented a prototype of Auspice and compared it against several commercial managed DNS providers as well as state-of-the-art research alternatives, and shown that Auspice significantly outperforms both. We demonstrate proof-of-concept that Auspice can serve as a complete end-to-end mobility solution as well as enable novel context-based communication primitives that generalize name- or address-based communication in today's Internet.

  • Yunpeng James Liu, Peter Xiang Gao, Bernard Wong, Srinivasan Keshav

    Most datacenter network (DCN) designs focus on maximizing bisection bandwidth rather than minimizing server-to-server latency. We explore architectural approaches to building low-latency DCNs and introduce Quartz, a design element consisting of a full mesh of switches. Quartz can be used to replace portions of either a hierarchical network or a random network. Our analysis shows that replacing high port-count core switches with Quartz can significantly reduce switching delays, and replacing groups of top-of-rack and aggregation switches with Quartz can significantly reduce congestion-related delays from cross-traffic. We overcome the complexity of wiring a complete mesh using low-cost optical multiplexers that enable us to efficiently implement a logical mesh as a physical ring. We evaluate our performance using both simulations and a small working prototype. Our evaluation results confirm our analysis, and demonstrate that it is possible to build low-latency DCNs using inexpensive commodity elements without significant concessions to cost, scalability, or wiring complexity.

  • Zhaoyu Gao, Arun Venkataramani, James F. Kurose, Simon Heimlicher

    This paper presents a quantitative methodology and results comparing different approaches for location-independent communication. Our approach is empirical and is based on real Internet topologies, routing tables from real routers, and a measured workload of the mobility of devices and content across network addresses today. We measure the extent of network mobility exhibited by mobile devices with a homebrewed Android app deployed on hundreds of smartphones, and measure the network mobility of Internet content from distributed vantage points. We combine this measured data with our quantitative methodology to analyze the different cost-benefit tradeoffs struck by location-independent network architectures with respect to routing update cost, path stretch, and forwarding table size. We find that more than 20% of users change over 10 IP addresses a day, suggesting that mobility is the norm rather than the exception, so intrinsic and efficient network support for mobility is critical. We also find that with purely name-based routing approaches, each event involving the mobility of a device or popular content may result in an update at up to 14% of Internet routers; but, the fraction of impacted routers is much smaller for the long tail of unpopular content. These results suggest that recent proposals for pure name-based networking may be suitable for highly aggregateable content that moves infrequently but may need to be augmented with addressing-assisted approaches to handle device mobility.

  • Robert Grandl, Ganesh Ananthanarayanan, Srikanth Kandula, Sriram Rao, Aditya Akella

    Tasks in modern data-parallel clusters have highly diverse resource requirements along CPU, memory, disk, and network. We present Tetris, a multi-resource cluster scheduler that packs tasks onto machines based on their requirements across all resource types. Doing so avoids resource fragmentation as well as over-allocation of resources that are not explicitly allocated, both of which are drawbacks of current schedulers. Tetris adapts heuristics for the multidimensional bin packing problem to the context of cluster schedulers, wherein task arrivals and machine availability change in an online manner and a task's resource needs change with time and with the machine the task is placed on. In addition, Tetris improves average job completion time by preferentially serving jobs that have less remaining work. We observe that fair allocations do not offer the best performance, and the above heuristics are compatible with a large class of fairness policies; hence, we show how to simultaneously achieve good performance and fairness. Trace-driven simulations and deployment of our Apache YARN prototype on a 250-node cluster show gains of over 30% in makespan and job completion time while achieving nearly perfect fairness.
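
    A sketch of the packing step with invented demand and capacity vectors: Tetris scores each feasible (task, machine) pair by the dot product of the task's demands and the machine's free resources, and places the best pair; aligning big demands with big leftovers is what fights fragmentation:

        # Dot-product alignment scoring for multi-resource packing.

        def best_placement(tasks, machines):
            """tasks/machines: name -> (cpu, mem, disk, net) vector."""
            best = None
            for t, demand in tasks.items():
                for m, free in machines.items():
                    if all(d <= f for d, f in zip(demand, free)):      # fits?
                        score = sum(d * f for d, f in zip(demand, free))
                        if best is None or score > best[0]:
                            best = (score, t, m)
            return best

        tasks = {"t1": (4, 8, 1, 1), "t2": (1, 1, 1, 1)}
        machines = {"m1": (8, 16, 4, 4), "m2": (2, 2, 2, 2)}
        print(best_placement(tasks, machines))  # (168, 't1', 'm1')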

  • Yang Wu, Mingchen Zhao, Andreas Haeberlen, Wenchao Zhou, Boon Thau Loo

    When debugging a distributed system, it is sometimes necessary to explain the absence of an event, for instance, why a certain route is not available, or why a certain packet did not arrive. Existing debuggers offer some support for explaining the presence of events, usually by providing the equivalent of a backtrace in conventional debuggers, but they are not very good at answering “Why not?” questions: there is simply no starting point for a possible backtrace. In this paper, we show that the concept of negative provenance can be used to explain the absence of events in distributed systems. Negative provenance relies on counterfactual reasoning to identify the conditions under which the missing event could have occurred. We define a formal model of negative provenance for distributed systems, and we present the design of a system called Y! that tracks both positive and negative provenance and can use them to answer diagnostic queries. We describe how we have used Y! to debug several realistic problems in two application domains: software-defined networks and BGP interdomain routing. Results from our experimental evaluation show that the overhead of Y! is moderate.

  • Srikanth Kandula, Ishai Menache, Roy Schwartz, Spandana Raj Babbula

    Datacenter WAN traffic consists of high-priority transfers that have to be carried as soon as they arrive, alongside large transfers with pre-assigned deadlines on their completion. The ability to offer guarantees to large transfers is crucial for business needs and impacts overall cost-of-business. State-of-the-art traffic engineering solutions only consider the current time epoch or minimize maximum utilization, and hence cannot provide pre-facto promises to long-lived transfers. We present Tempus, an online temporal planning scheme that appropriately packs long-running transfers across network paths and future timesteps, while leaving capacity slack for future changes. Tempus builds on a tailored approximate solution to a mixed packing-covering linear program, which is parallelizable and scales well in both running time and memory usage. Consequently, Tempus can quickly and effectively update the promised future flow allocation when new transfers arrive or unexpected changes happen. Our experiments on traces from a large production WAN show that Tempus can offer and keep promises to long-lived transfers well in advance of their actual deadlines; the promised minimal transfer size is comparable with an offline optimal solution and outperforms state-of-the-art solutions by 2-3X.

  • George Varghese

    The most compelling ideas in systems are abstractions such as virtual memory, sockets, or packet scheduling. Algorithmics is the servant of abstraction, allowing system performance to approach that of the underlying hardware, sometimes by using efficient algorithms but often by simply leveraging other aspects of the system. I will survey the trajectory of network algorithmics starting with a focus on speed and scale in the 1990s to measurement and security in the 2000s. While doing so, I will reflect on my experiences in choosing problems and conducting research. I will conclude by describing my passion for the emerging field of network verification and its confluence with programming language research.

  • Bo Zhang, Jinfan Wang, Xinyu Wang, Yingying Cheng, Xiaohua Jia, Jianfei He

    In the current Internet architecture, application service providers (ASPs) own users' data and social-group information, which has let a handful of ASP companies grow ever larger while keeping small and medium companies out of this business. We propose a new architecture, called Application Independent Information Infrastructure (AI3). The design goals of AI3 are: 1) decoupling users' data and social relations from ASPs, such that ASPs become independent of both; 2) an open architecture, such that different ASPs can interoperate with each other. This demo shows a prototype of AI3 in four parts: 1) ASP-independent data management in AI3; 2) ASP-independent management of users' social relations in AI3; 3) inter-domain data transport and user roaming; 4) real-time communications using AI3. The demo video can be watched at: http://www.cs.cityu.edu.hk/~jia/AI3_DemoVideo.mp4

  • Dinesh Bharadia, Kiran Joshi, Sachin Katti

    This paper presents a demonstration of a real-time full-duplex point-to-point link, where transmission and reception occur simultaneously in the same spectrum band between a pair of full-duplex radios. The demo first builds a full-duplex radio by implementing a self-interference cancellation technique on top of a traditional half-duplex radio architecture. We then establish a point-to-point link using a pair of these radios that can transmit and receive OFDM packets. By changing the environmental conditions around the full-duplex radios, we demonstrate the robustness of the self-interference cancellation in adapting to the changing environment.
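
    A sketch of the digital side of self-interference cancellation, assuming a single-tap channel and synthetic signals (real designs combine analog cancellation with many-tap digital filters): the radio knows its own transmit samples, adaptively estimates how they appear in its receive chain, and subtracts the estimate:

        # One-tap LMS self-interference canceller on synthetic samples.
        import numpy as np

        rng = np.random.default_rng(0)
        tx = rng.normal(size=2000)             # our own transmitted samples
        remote = 0.01 * rng.normal(size=2000)  # weak signal we want to hear
        rx = 0.9 * tx + remote                 # self-interference swamps it

        h, mu = 0.0, 0.01                      # channel estimate, LMS step size
        clean = np.empty_like(rx)
        for i, x in enumerate(tx):
            err = rx[i] - h * x                # residual after cancellation
            clean[i] = err
            h += mu * err * x                  # LMS update of the estimate

        print(round(h, 2))  # ~0.9: the self-interference channel was learned
        print(bool(np.mean(clean[1000:]**2) < 0.01 * np.mean(rx**2)))  # True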

  • Xiongzi Ge, Yi Liu, David H.C. Du, Liang Zhang, Hongguang Guan, Jian Chen, Yuping Zhao, Xinyu Hu
  • Gordon Stewart, Mahanth Gowda, Geoffrey Mainland, Bozidar Radunovic, Dimitrios Vytiniotis, Doug Patterson

    Software-defined radios (SDR) have the potential to bring major innovation in wireless networking design. However, their impact so far has been limited due to complex programming tools. Most of the existing tools are either too slow to achieve the full line speeds of contemporary wireless PHYs or too complex to master. In this demo we present our novel SDR programming environment called Ziria. Ziria consists of a novel programming language and an optimizing compiler. The compiler is able to synthesize very efficient SDR code from high-level PHY descriptions written in the Ziria language. To illustrate its potential, we present the design of an LTE-like PHY layer in Ziria. We run it on the Sora SDR platform and demonstrate on a test-bed that it is able to operate in real-time.

  • Steffen Gebert, David Hock, Thomas Zinner, Phuoc Tran-Gia, Marco Hoffmann, Michael Jarschel, Ernst-Dieter Schmidt, Ralf-Peter Braun, Christian Banse, Andreas Köpsel
  • Mark Schmidt, Florian Heimgaertner, Michael Menth

    This demo presents a testbed for computer networking education. It leverages hardware virtualization to accommodate 6 PCs and 2 routers on a single testbed host, reducing cost, energy consumption, space requirements, and heat emission. The testbed stands out by providing dedicated physical Ethernet and USB interfaces for virtual machines, so that students can interconnect them with cables and switches just as in a non-virtualized testbed.

  • Han Hu, Yichao Jin, Yonggang Wen, Tat-Seng Chua, Xuelong Li

    The emergence of portable devices and online social networks (OSNs) has changed the traditional video consumption paradigm by simultaneously providing multi-screen video watching, social networking engagement, and more. One challenge is to design a unified solution that supports ever-growing features while guaranteeing system performance. In this demo, we design and implement a multi-screen technology that provides multi-screen interactions over a wide area network (WAN). Furthermore, we incorporate face-detection technology into our system to identify users' bio-features, and we employ a machine-learning-based traffic scheduling mechanism to improve system performance.

  • Jiaqiang Liu, Yong Li, Depeng Jin
  • David Koll, Jun Li, Xiaoming Fu

    With increasing frequency, users raise concerns about data privacy and protection in centralized Online Social Networks (OSNs), in which providers have the unprecedented privilege to access and exploit every user's private data at will. To mitigate these concerns, researchers have suggested decentralizing OSNs and thereby enabling users to control and manage access to their data themselves. However, previously proposed decentralization approaches suffer from several drawbacks. To tackle their deficiencies, we introduce the Self-Organized Universe of People (SOUP). In this demonstration, we present a prototype of SOUP and share our experiences from a real-world deployment.

  • Hyunwoo Nam, Kyung-Hwa Kim, Doru Calin, Henning Schulzrinne

    Adaptive bitrate (ABR) technologies are widely used in today's popular HTTP-based video streaming services such as YouTube and Netflix. Such a rate-switching algorithm, embedded in a video player, is designed to improve video quality-of-experience (QoE) by selecting an appropriate resolution based on an analysis of network conditions while the video is playing. However, a bad viewing experience is often caused by the video player having difficulty estimating transit or client-side network conditions accurately. In order to analyze ABR streaming performance, we developed YouSlow, a web browser plug-in that detects and reports live buffer stalling events to our analysis tool. To date, YouSlow has collected more than 20,000 YouTube video stalling events from over 40 countries.

  • Zhenlong Yuan, Yongqiang Lu, Zhaoguo Wang, Yibo Xue

    As smartphones and mobile devices rapidly become indispensable for many network users, mobile malware has become a serious threat to network security and privacy. Especially on the popular Android platform, many malicious apps hide among a large number of normal apps, which makes malware detection more challenging. In this paper, we propose an ML-based method that utilizes more than 200 features extracted from both static and dynamic analysis of Android apps for malware detection. The comparison of modeling results demonstrates that the deep learning technique is especially suitable for Android malware detection and can achieve a high level of 96% accuracy on real-world Android application sets.
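
    A sketch of the pipeline shape only, not the paper's actual model: apps become fixed-length feature vectors (three invented binary features here, versus the paper's 200+), and a small neural classifier is trained on fabricated labels; scikit-learn's MLPClassifier stands in for the deep learning model used in the paper:

        # Feature-vector classification sketch for app malware detection.
        from sklearn.neural_network import MLPClassifier

        # features: [requests SMS permission, loads native code, contacts known C&C]
        X = [[1, 1, 1], [1, 0, 1], [0, 1, 1], [0, 0, 0], [1, 0, 0], [0, 0, 1]]
        y = [1, 1, 1, 0, 0, 1]  # 1 = malware, 0 = benign (made-up labels)

        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        clf.fit(X, y)
        print(clf.predict([[1, 1, 0], [0, 0, 0]]))  # e.g. [1 0]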

  • Aanchal Malhotra, Sharon Goldberg

    BGP, the Internet's interdomain routing protocol, is highly vulnerable to routing failures that result from unintentional misconfigurations or deliberate attacks. To defend against these failures, recent years have seen the adoption of the Resource Public Key Infrastructure (RPKI), which currently authorizes 4% of the Internet's routes. The RPKI is a completely new security infrastructure (requiring new servers, caches, and the design of new protocols), a fact that has given rise to some controversy [1]. Thus, an alternative proposal has emerged: Route Origin Verification (ROVER) [4, 7], which leverages the existing reverse DNS (rDNS) and DNSSEC to secure the interdomain routing system. Both RPKI and ROVER rely on a hierarchy of authorities to provide trusted information about the routing system. Recently, however, [2] argued that misconfigured, faulty, or compromised RPKI authorities introduce new vulnerabilities into the routing system, which can take IP prefixes offline. Meanwhile, the designers of ROVER claim that it operates in a "fail-safe" mode, where "[o]ne could completely unplug a router verification application at any time and Internet routing would continue to work just as it does today". There has been debate on Internet community mailing lists [1] about the pros and cons of both approaches. This poster therefore compares the impact of ROVER failures with that of RPKI failures, in a threat model that covers misconfigurations, faults, and compromises of their trusted authorities.

  • Payman Samadi, Varun Gupta, Berk Birand, Howard Wang, Gil Zussman, Keren Bergman

    We present a control plane architecture to accelerate multicast and incast traffic delivery for data-intensive applications in cluster-computing interconnection networks. The architecture is experimentally examined by enabling physical layer optical multicasting on-demand for the application layer to achieve non-blocking performance.
