Computer Communication Review: Papers

  • Tiffany Hyun-Jin Kim, Cristina Basescu, Limin Jia, Soo Bum Lee, Yih-Chun Hu, Adrian Perrig

    In-network source authentication and path validation are fundamental primitives to construct higher-level security mechanisms such as DDoS mitigation, path compliance, packet attribution, or protection against flow redirection. Unfortunately, currently proposed solutions either fall short of addressing important security concerns or require a substantial amount of router overhead. In this paper, we propose lightweight, scalable, and secure protocols for shared key setup, source authentication, and path validation. Our prototype implementation demonstrates the efficiency and scalability of the protocols, especially for software-based implementations.
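
    To make the shared-key source-authentication primitive concrete, here is a toy sketch (our own illustration with assumed key names, not the paper's actual protocol): the source computes one MAC per on-path router using a key shared with that router, and each router verifies only its own MAC.

```python
# Toy per-hop source authentication with pairwise shared keys.
# Illustrative sketch only, NOT the protocol from the paper.
import hashlib
import hmac

def source_macs(packet: bytes, path_keys: list) -> list:
    """Source side: one MAC per on-path router, keyed per router."""
    return [hmac.new(key, packet, hashlib.sha256).digest() for key in path_keys]

def router_verify(packet: bytes, key: bytes, mac: bytes) -> bool:
    """Router side: recompute the MAC and compare in constant time."""
    expected = hmac.new(key, packet, hashlib.sha256).digest()
    return hmac.compare_digest(expected, mac)

keys = [b"key-router1", b"key-router2", b"key-router3"]  # hypothetical keys
macs = source_macs(b"payload", keys)
print(all(router_verify(b"payload", k, m) for k, m in zip(keys, macs)))  # True
print(router_verify(b"tampered", keys[0], macs[0]))                      # False
```

    The paper's protocols additionally chain per-hop state so that routers can validate the path itself, not just the source; this sketch covers only the source-authentication half.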

  • Anirudh Sivaraman, Keith Winstein, Pratiksha Thaker, Hari Balakrishnan

    When designing a distributed network protocol, typically it is infeasible to fully define the target network where the protocol is intended to be used. It is therefore natural to ask: How faithfully do protocol designers really need to understand the networks they design for? What are the important signals that endpoints should listen to? How can researchers gain confidence that systems that work well on well-characterized test networks during development will also perform adequately on real networks that are inevitably more complex, or future networks yet to be developed? Is there a tradeoff between the performance of a protocol and the breadth of its intended operating range of networks? What is the cost of playing fairly with cross-traffic that is governed by another protocol? We examine these questions quantitatively in the context of congestion control, by using an automated protocol-design tool to approximate the best possible congestion-control scheme given imperfect prior knowledge about the network. We found only weak evidence of a tradeoff between operating range in link speeds and performance, even when the operating range was extended to cover a thousand-fold range of link speeds. We found that it may be acceptable to simplify some characteristics of the network—such as its topology—when modeling for design purposes. Some other features, such as the degree of multiplexing and the aggressiveness of contending endpoints, are important to capture in a model.

  • K.V. Rashmi, Nihar B. Shah, Dikang Gu, Hairong Kuang, Dhruba Borthakur, Kannan Ramchandran

    Erasure codes such as Reed-Solomon (RS) codes are being extensively deployed in data centers since they offer significantly higher reliability than data replication methods at much lower storage overheads. These codes, however, consume significantly more network bandwidth and disk IO during reconstruction of data that is missing or otherwise unavailable. Existing solutions to this problem either demand additional storage space or severely limit the choice of the system parameters. In this paper, we present Hitchhiker, a new erasure-coded storage system that reduces both network traffic and disk IO by around 25% to 45% during reconstruction of missing or otherwise unavailable data, with no additional storage, the same fault tolerance, and arbitrary flexibility in the choice of parameters, as compared to RS-based systems. Hitchhiker "rides" on top of RS codes, and is based on novel encoding and decoding techniques presented in this paper. We have implemented Hitchhiker in the Hadoop Distributed File System (HDFS). When evaluating various metrics on the data-warehouse cluster in production at Facebook with real-time traffic and workloads, we observe during reconstruction a 36% reduction in computation time and a 32% reduction in data read time, in addition to the 35% reduction in network traffic and disk IO. Hitchhiker can thus reduce the latency of degraded reads and perform faster recovery from failed or decommissioned machines.
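
    As a back-of-the-envelope illustration of the reconstruction cost this abstract targets (the RS(10, 4) code and 256 MB block size below are assumptions consistent with the Facebook deployment described, not measurements): rebuilding one lost block under plain RS reads k surviving blocks, so a ~35% saving translates directly into bytes read.

```python
# Reconstruction-cost arithmetic (illustrative; not Hitchhiker's code).
def rs_reconstruction_reads(k: int, block_size: int) -> int:
    """Bytes read to rebuild one lost block under plain Reed-Solomon:
    any k surviving blocks of the stripe must be fetched."""
    return k * block_size

def hitchhiker_reconstruction_reads(k: int, block_size: int,
                                    savings: float = 0.35) -> int:
    """Approximate bytes read assuming the ~35% reduction reported above."""
    return int(rs_reconstruction_reads(k, block_size) * (1 - savings))

k, block = 10, 256 * 2**20  # assumed RS(10, 4) with 256 MB blocks
print(rs_reconstruction_reads(k, block) // 2**20)          # 2560 (MB)
print(hitchhiker_reconstruction_reads(k, block) // 2**20)  # 1664 (MB)
```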

  • Jeongkeun Lee, Yoshio Turner, Myungjin Lee, Lucian Popa, Sujata Banerjee, Joon-Myung Kang, Puneet Sharma

    Providing bandwidth guarantees to specific applications is becoming increasingly important as applications compete for shared cloud network resources. We present CloudMirror, a solution that provides bandwidth guarantees to cloud applications based on a new network abstraction and workload placement algorithm. An effective network abstraction would enable applications to easily and accurately specify their requirements, while simultaneously enabling the infrastructure to provision resources efficiently for deployed applications. Prior research has approached the bandwidth guarantee specification by using abstractions that resemble physical network topologies. We present a contrasting approach of deriving a network abstraction based on application communication structure, called Tenant Application Graph or TAG. CloudMirror also incorporates a new workload placement algorithm that efficiently meets bandwidth requirements specified by TAGs while factoring in high availability considerations. Extensive simulations using real application traces and datacenter topologies show that CloudMirror can handle 40% more bandwidth demand than the state of the art (e.g., the Oktopus system), while improving high availability from 20% to 70%.

  • Dinesh Bharadia, Sachin Katti

    This paper presents FastForward (FF), a novel full-duplex relay that constructively forwards signals so that wireless network throughput and coverage are significantly enhanced. FF is a Layer 1 in-band full-duplex device: it receives and transmits signals directly and simultaneously on the same frequency. It cleanly integrates into existing networks (both WiFi and LTE) as a separate device and does not require changes to the clients. FF’s key invention is a constructive filtering algorithm that transforms the signal at the relay such that, when it reaches the destination, it constructively combines with the direct signal from the source and provides a significant throughput gain. We prototype FF using off-the-shelf software radios running a stock WiFi PHY and show experimentally that it provides a 3× median throughput increase and nearly a 4× gain at the edge of the coverage area.

  • Navid Hamedazimi, Zafar Qazi, Himanshu Gupta, Vyas Sekar, Samir R. Das, Jon P. Longtin, Himanshu Shah, Ashish Tanwer

    This paper describes FireFly, a first but significant step toward realizing the vision of a fully flexible, all-wireless inter-rack fabric for datacenters. Figure 1 shows a high-level overview of FireFly. Each ToR switch is equipped with reconfigurable wireless links which can connect to other ToR switches. However, we need to look beyond traditional radio-frequency (RF) wireless solutions (e.g., 60GHz), as their interference characteristics limit range and capacity. Thus, we envision a new use case for Free-Space Optical communication (FSO), which can offer high data rates (tens of Gbps) over long ranges using low transmission power and with zero interference [31]. The centralized FireFly controller reconfigures the topology and forwarding rules to adapt to changing traffic patterns. While prior work made the case for using FSO links in DCs [19, 28], it did not establish a viable hardware design, nor did it address the practical network design and management challenges that such an architecture entails.

  • Vivek Yenamandra, Kannan Srinivasan

    Global synchronization across time and frequency domains significantly benefits wireless communications. Multi-Cell (Network) MIMO, interference alignment solutions, opportunistic routing techniques in ad-hoc networks, and OFDMA all necessitate synchronization in the time domain, the frequency domain, or both. This paper presents Vidyut, a system that exploits the easily accessible and ubiquitous power line infrastructure to achieve synchronization in time and frequency domains across nodes distributed beyond a single collision domain. Vidyut uses the power lines to transmit a reference frequency tone to which each node locks its frequency. Vidyut exploits the steady periodicity of the delivered power signal itself to synchronize distributed nodes in time. We validate the extent of Vidyut’s synchronization and evaluate its effectiveness. We verify Vidyut’s suitability for wireless applications such as OFDMA and multi-cell MIMO by validating the benefits of global synchronization in an enterprise wireless network. Our experiments show a throughput gain of 8.2x over MegaMIMO, 7x over NEMOx, and 2.5x over OFDMA systems.

  • Jue Wang, Deepak Vasisht, Dina Katabi

    Prior work in RF-based positioning has mainly focused on discovering the absolute location of an RF source, where state-of-the-art systems can achieve an accuracy on the order of tens of centimeters using a large number of antennas. However, many applications in gaming and gesture-based interfaces benefit more from knowing the detailed shape of a motion. Such trajectory tracing requires a resolution several fold higher than what existing RF-based positioning systems can offer. This paper shows that one can provide a dramatic increase in trajectory tracing accuracy, even with a small number of antennas. The key enabler for our design is a multi-resolution positioning technique that exploits an intrinsic tradeoff between improving the resolution and resolving ambiguity in the location of the RF source. The unique property of this design is its ability to precisely reconstruct the minute details in the trajectory shape, even when the absolute position might have an offset. We built a prototype of our design with commercial off-the-shelf RFID readers and tags and used it to enable a virtual touch screen, which allows a user to interact with a desired computing device by gesturing or writing her commands in the air, where each letter is only a few centimeters wide.

  • Abhigyan Sharma, Xiaozheng Tie, Hardeep Uppal, Arun Venkataramani, David Westbrook, Aditya Yadav

    Mobile devices dominate the Internet today; however, the Internet, rooted in its tethered origins, continues to provide poor infrastructure support for mobility. Our position is that in order to address this problem, a key challenge that must be addressed is the design of a massively scalable global name service that rapidly resolves identities to network locations under high mobility. Our primary contribution is the design, implementation, and evaluation of Auspice, a next-generation global name service that addresses this challenge. A key insight underlying Auspice is a demand-aware replica placement engine that intelligently replicates name records to provide low lookup latency, low update cost, and high availability. We have implemented a prototype of Auspice, compared it against several commercial managed DNS providers as well as state-of-the-art research alternatives, and shown that Auspice significantly outperforms both. We demonstrate proof-of-concept that Auspice can serve as a complete end-to-end mobility solution as well as enable novel context-based communication primitives that generalize name- or address-based communication in today’s Internet.

  • Yunpeng James Liu, Peter Xiang Gao, Bernard Wong, Srinivasan Keshav

    Most datacenter network (DCN) designs focus on maximizing bisection bandwidth rather than minimizing server-to-server latency. We explore architectural approaches to building low-latency DCNs and introduce Quartz, a design element consisting of a full mesh of switches. Quartz can be used to replace portions of either a hierarchical network or a random network. Our analysis shows that replacing high port-count core switches with Quartz can significantly reduce switching delays, and replacing groups of top-of-rack and aggregation switches with Quartz can significantly reduce congestion-related delays from cross-traffic. We overcome the complexity of wiring a complete mesh using low-cost optical multiplexers that enable us to efficiently implement a logical mesh as a physical ring. We evaluate our performance using both simulations and a small working prototype. Our evaluation results confirm our analysis, and demonstrate that it is possible to build low-latency DCNs using inexpensive commodity elements without significant concessions to cost, scalability, or wiring complexity.
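
    The latency argument reduces largely to hop counts. As a rough model (the hop counts and per-switch delay below are our own assumptions, not the paper's measurements), collapsing a ToR/aggregation/core path into a single mesh hop removes most of the switching delay on a rack-to-rack path:

```python
# Hop-count model of rack-to-rack switching delay (illustrative only).
def rack_to_rack_us(switch_hops: int, per_switch_us: float) -> float:
    """Total switching delay for a path traversing switch_hops switches."""
    return switch_hops * per_switch_us

# Assumed: 1 microsecond of delay per store-and-forward switch.
tree = rack_to_rack_us(5, 1.0)  # ToR -> agg -> core -> agg -> ToR
mesh = rack_to_rack_us(3, 1.0)  # ToR -> one mesh switch -> ToR
print(tree, mesh)               # 5.0 3.0
```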

  • Zhaoyu Gao, Arun Venkataramani, James F. Kurose, Simon Heimlicher

    This paper presents a quantitative methodology and results comparing different approaches for location-independent communication. Our approach is empirical and is based on real Internet topologies, routing tables from real routers, and a measured workload of the mobility of devices and content across network addresses today. We measure the extent of network mobility exhibited by mobile devices with a homebrewed Android app deployed on hundreds of smartphones, and measure the network mobility of Internet content from distributed vantage points. We combine this measured data with our quantitative methodology to analyze the different cost-benefit tradeoffs struck by location-independent network architectures with respect to routing update cost, path stretch, and forwarding table size. We find that more than 20% of users change over 10 IP addresses a day, suggesting that mobility is the norm rather than the exception, so intrinsic and efficient network support for mobility is critical. We also find that with purely name-based routing approaches, each event involving the mobility of a device or popular content may result in an update at up to 14% of Internet routers, but the fraction of impacted routers is much smaller for the long tail of unpopular content. These results suggest that recent proposals for pure name-based networking may be suitable for highly aggregatable content that moves infrequently, but may need to be augmented with addressing-assisted approaches to handle device mobility.

  • Robert Grandl, Ganesh Ananthanarayanan, Srikanth Kandula, Sriram Rao, Aditya Akella

    Tasks in modern data-parallel clusters have highly diverse resource requirements along CPU, memory, disk and network. We present Tetris, a multi-resource cluster scheduler that packs tasks to machines based on their requirements of all resource types. Doing so avoids resource fragmentation as well as over-allocation of the resources that are not explicitly allocated, both of which are drawbacks of current schedulers. Tetris adapts heuristics for the multidimensional bin packing problem to the context of cluster schedulers, wherein task arrivals and machine availability change in an online manner and a task’s resource needs change with time and with the machine that the task is placed at. In addition, Tetris improves average job completion time by preferentially serving jobs that have less remaining work. We observe that fair allocations do not offer the best performance, and that the above heuristics are compatible with a large class of fairness policies; hence, we show how to simultaneously achieve good performance and fairness. Trace-driven simulations and deployment of our Apache YARN prototype on a 250-node cluster show gains of over 30% in makespan and job completion time while achieving nearly perfect fairness.
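
    The packing idea can be sketched with an alignment heuristic in the spirit of Tetris (a simplified reading of the approach; the resource vectors and numbers are our own): score each runnable task by the dot product of its demand vector with the machine's free-resource vector, and place the best-scoring task that actually fits.

```python
# Simplified multi-resource packing heuristic (illustrative sketch).
from typing import Optional, Sequence

def fits(demand: Sequence[float], free: Sequence[float]) -> bool:
    """A task fits only if every resource demand is within what's free."""
    return all(d <= f for d, f in zip(demand, free))

def alignment(demand: Sequence[float], free: Sequence[float]) -> float:
    """Dot-product score: prefers tasks whose demands line up with the
    machine's abundant resources, reducing fragmentation."""
    return sum(d * f for d, f in zip(demand, free))

def pick_task(tasks: Sequence[Sequence[float]],
              free: Sequence[float]) -> Optional[int]:
    """Index of the feasible task with the highest alignment score."""
    feasible = [(alignment(t, free), i) for i, t in enumerate(tasks)
                if fits(t, free)]
    return max(feasible)[1] if feasible else None

free = [0.5, 0.8]              # (CPU, memory) fractions available
tasks = [[0.4, 0.1],           # CPU-heavy
         [0.1, 0.7],           # memory-heavy: best aligned here
         [0.6, 0.6]]           # does not fit
print(pick_task(tasks, free))  # 1
```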

  • Yang Wu, Mingchen Zhao, Andreas Haeberlen, Wenchao Zhou, Boon Thau Loo

    When debugging a distributed system, it is sometimes necessary to explain the absence of an event – for instance, why a certain route is not available, or why a certain packet did not arrive. Existing debuggers offer some support for explaining the presence of events, usually by providing the equivalent of a backtrace in conventional debuggers, but they are not very good at answering “Why not?” questions: there is simply no starting point for a possible backtrace. In this paper, we show that the concept of negative provenance can be used to explain the absence of events in distributed systems. Negative provenance relies on counterfactual reasoning to identify the conditions under which the missing event could have occurred. We define a formal model of negative provenance for distributed systems, and we present the design of a system called Y! that tracks both positive and negative provenance and can use them to answer diagnostic queries. We describe how we have used Y! to debug several realistic problems in two application domains: software-defined networks and BGP interdomain routing. Results from our experimental evaluation show that the overhead of Y! is moderate.

  • Srikanth Kandula, Ishai Menache, Roy Schwartz, Spandana Raj Babbula

    Datacenter WAN traffic consists of high priority transfers that have to be carried as soon as they arrive, alongside large transfers with preassigned deadlines on their completion. The ability to offer guarantees to large transfers is crucial for business needs and impacts overall cost-of-business. State-of-the-art traffic engineering solutions only consider the current time epoch or minimize maximum utilization, and hence cannot provide pre-facto promises to long-lived transfers. We present Tempus, an online temporal planning scheme that appropriately packs long-running transfers across network paths and future timesteps, while leaving capacity slack for future changes. Tempus builds on a tailored approximate solution to a mixed packing-covering linear program, which is parallelizable and scales well in both running time and memory usage. Consequently, Tempus can quickly and effectively update the promised future flow allocation when new transfers arrive or unexpected changes happen. Our experiments on traces from a large production WAN show that Tempus can offer and keep promises to long-lived transfers well in advance of their actual deadlines; the promise on minimal transfer size is comparable with an offline optimal solution and outperforms state-of-the-art solutions by 2-3X.
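
    The temporal-planning idea can be illustrated on a single link (a deliberately simplified sketch of our own; Tempus itself solves a mixed packing-covering LP across many paths and timesteps): spread a deadline-bound transfer over future timesteps while reserving a slack fraction of each step's spare capacity for transfers yet to arrive.

```python
# Single-link temporal planning sketch (illustrative; not Tempus).
def plan_transfer(demand, deadline, capacity, used, slack=0.1):
    """Greedily allocate `demand` units over timesteps before `deadline`,
    keeping a `slack` fraction of spare capacity free at every step.
    Returns a per-timestep allocation, or None if the deadline is missed."""
    alloc = [0.0] * len(capacity)
    remaining = demand
    for t in range(min(deadline, len(capacity))):
        available = max(0.0, (capacity[t] - used[t]) * (1 - slack))
        take = min(available, remaining)
        alloc[t] = take
        remaining -= take
        if remaining <= 1e-9:
            return alloc
    return None  # cannot keep the promise: reject or renegotiate

capacity = [10, 10, 10, 10]
used = [8, 2, 0, 5]  # bandwidth already promised to other transfers
plan = plan_transfer(demand=12, deadline=3, capacity=capacity, used=used)
print(plan)          # spreads the 12 units over timesteps 0-2
```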

  • George Varghese

    The most compelling ideas in systems are abstractions such as virtual memory, sockets, or packet scheduling. Algorithmics is the servant of abstraction, allowing system performance to approach that of the underlying hardware, sometimes by using efficient algorithms but often by simply leveraging other aspects of the system. I will survey the trajectory of network algorithmics starting with a focus on speed and scale in the 1990s to measurement and security in the 2000s. While doing so, I will reflect on my experiences in choosing problems and conducting research. I will conclude by describing my passion for the emerging field of network verification and its confluence with programming language research.

  • Bo Zhang, Jinfan Wang, Xinyu Wang, Yingying Cheng, Xiaohua Jia, Jianfei He

    In the current Internet architecture, application service providers (ASPs) own users’ data and social-group information, which has allowed a handful of ASP companies to grow ever larger and has kept small and medium companies from entering this business. We propose a new architecture, called Application Independent Information Infrastructure (AI3). The design goals of AI3 are: 1) decoupling users’ data and social relations from ASPs, such that ASPs become independent of users’ data and social relations; 2) an open architecture, such that different ASPs can interoperate with each other. This demo shows a prototype of AI3. The demo has four parts: 1) ASP-independent data management in AI3; 2) ASP-independent management of users’ social relations in AI3; 3) inter-domain data transport and user roaming; 4) real-time communications using AI3. The demo video can be watched at:

  • Dinesh Bharadia, Kiran Joshi, Sachin Katti

    This paper presents a demonstration of a real-time full-duplex point-to-point link, where transmission and reception occur in the same spectrum band simultaneously between a pair of full-duplex radios. This demo first builds a full-duplex radio by implementing a self-interference cancellation technique on top of a traditional half-duplex radio architecture. We then establish a point-to-point link using a pair of these radios that can transmit and receive OFDM packets. By changing the environmental conditions around the full-duplex radios, we then demonstrate the robustness of the self-interference cancellation in adapting to the changing environment.

  • Xiongzi Ge, Yi Liu, David H.C. Du, Liang Zhang, Hongguang Guan, Jian Chen, Yuping Zhao, Xinyu Hu
  • Gordon Stewart, Mahanth Gowda, Geoffrey Mainland, Bozidar Radunovic, Dimitrios Vytiniotis, Doug Patterson

    Software-defined radios (SDR) have the potential to bring major innovation in wireless networking design. However, their impact so far has been limited due to complex programming tools. Most of the existing tools are either too slow to achieve the full line speeds of contemporary wireless PHYs or are too complex to master. In this demo we present our novel SDR programming environment called Ziria. Ziria consists of a novel programming language and an optimizing compiler. The compiler is able to synthesize very efficient SDR code from high-level PHY descriptions written in Ziria language. To illustrate its potential, we present the design of an LTE-like PHY layer in Ziria. We run it on the Sora SDR platform and demonstrate on a test-bed that it is able to operate in real-time.

  • Steffen Gebert, David Hock, Thomas Zinner, Phuoc Tran-Gia, Marco Hoffmann, Michael Jarschel, Ernst-Dieter Schmidt, Ralf-Peter Braun, Christian Banse, Andreas Köpsel
  • Mark Schmidt, Florian Heimgaertner, Michael Menth

    This demo presents a testbed for computer networking education. It leverages hardware virtualization to accommodate 6 PCs and 2 routers on a single testbed host, reducing costs, energy consumption, space requirements, and heat emission. The testbed stands out by providing dedicated physical Ethernet and USB interfaces for virtual machines, so that students can interconnect them with cables and switches just as in a non-virtualized testbed.

  • Han Hu, Yichao Jin, Yonggang Wen, Tat-Seng Chua, Xuelong Li

    The emergence of portable devices and online social networks (OSNs) has changed the traditional video consumption paradigm by simultaneously providing multi-screen video watching, social networking engagement, etc. One challenge is to design a unified solution that supports ever-growing features while guaranteeing system performance. In this demo, we design and implement a multi-screen technology that provides multi-screen interactions over a wide area network (WAN). Furthermore, we incorporate face-detection technology into our system to identify users’ bio-features, and employ a machine-learning-based traffic scheduling mechanism to improve system performance.

  • Jiaqiang Liu, Yong Li, Depeng Jin
  • David Koll, Jun Li, Xiaoming Fu

    With increasing frequency, users raise concerns about data privacy and protection in centralized Online Social Networks (OSNs), in which providers have the unprecedented privilege to access and exploit every user’s private data at will. To mitigate these concerns, researchers have suggested decentralizing OSNs and thereby enabling users to control and manage access to their data themselves. However, previously proposed decentralization approaches suffer from several drawbacks. To tackle their deficiencies, we introduce the Self-Organized Universe of People (SOUP). In this demonstration, we present a prototype of SOUP and share our experiences from a real-world deployment.

  • Hyunwoo Nam, Kyung-Hwa Kim, Doru Calin, Henning Schulzrinne

    Adaptive bitrate (ABR) technologies are widely used in today’s popular HTTP-based video streaming services such as YouTube and Netflix. The rate-switching algorithm embedded in a video player is designed to improve video quality of experience (QoE) by selecting an appropriate resolution based on an analysis of network conditions while the video is playing. However, a bad viewing experience is often caused by the video player having difficulty estimating transit or client-side network conditions accurately. In order to analyze ABR streaming performance, we developed YouSlow, a web browser plug-in that can detect and report live buffer stalling events to our analysis tool. Currently, YouSlow has collected more than 20,000 YouTube video stalling events from over 40 countries.
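
    The stall events YouSlow reports come from the playout buffer running dry. A minimal sketch of the detection logic (our own model of a player's buffer and thresholds, not YouSlow's plug-in code):

```python
# Playout-buffer stall counter (illustrative sketch).
def count_stalls(fetched_per_sec, startup_buffer=2.0):
    """fetched_per_sec: seconds of media downloaded in each wall-clock
    second (throughput / selected bitrate). Counts rebuffering events."""
    buffer_s, playing, stalls = 0.0, False, 0
    for fetched in fetched_per_sec:
        buffer_s += fetched
        if playing:
            if buffer_s < 1.0:       # buffer ran dry: a stall event
                playing, stalls = False, stalls + 1
            else:
                buffer_s -= 1.0      # consume one second of media
        elif buffer_s >= startup_buffer:
            playing = True           # (re)start once the buffer refills
    return stalls

# Throughput drops below the bitrate mid-stream: one stall.
print(count_stalls([1.5, 1.5, 0.2, 0.2, 0.2, 0.2, 1.5]))  # 1
```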

  • Zhenlong Yuan, Yongqiang Lu, Zhaoguo Wang, Yibo Xue

    As smartphones and mobile devices rapidly become indispensable for many network users, mobile malware has become a serious threat to network security and privacy. Especially on the popular Android platform, many malicious apps hide among a large number of normal apps, which makes malware detection more challenging. In this paper, we propose an ML-based method that utilizes more than 200 features extracted from both static and dynamic analysis of Android apps for malware detection. The comparison of modeling results demonstrates that the deep learning technique is especially suitable for Android malware detection and can achieve 96% accuracy on real-world Android application sets.

  • Aanchal Malhotra, Sharon Goldberg

    BGP, the Internet’s interdomain routing protocol, is highly vulnerable to routing failures that result from unintentional misconfigurations or deliberate attacks. To defend against these failures, recent years have seen the adoption of the Resource Public Key Infrastructure (RPKI), which currently authorizes 4% of the Internet’s routes. The RPKI is a completely new security infrastructure (requiring new servers, caches, and the design of new protocols), a fact that has given rise to some controversy [1]. Thus, an alternative proposal has emerged: Route Origin Verification (ROVER) [4, 7], which leverages the existing reverse DNS (rDNS) and DNSSEC to secure the interdomain routing system. Both RPKI and ROVER rely on a hierarchy of authorities to provide trusted information about the routing system. Recently, however, [2] argued that misconfigured, faulty, or compromised RPKI authorities introduce new vulnerabilities into the routing system, which can take IP prefixes offline. Meanwhile, the designers of ROVER claim that it operates in a “fail-safe” mode, where “[o]ne could completely unplug a router verification application at any time and Internet routing would continue to work just as it does today”. There has been debate in Internet community mailing lists [1] about the pros and cons of both approaches. This poster therefore compares the impact of ROVER failures with that of RPKI failures, under a threat model that covers misconfigurations, faults, and compromises of their trusted authorities.

  • Payman Samadi, Varun Gupta, Berk Birand, Howard Wang, Gil Zussman, Keren Bergman

    We present a control plane architecture to accelerate multicast and incast traffic delivery for data-intensive applications in cluster-computing interconnection networks. The architecture is experimentally examined by enabling physical layer optical multicasting on-demand for the application layer to achieve non-blocking performance.

  • Arjuna Sathiaseelan, M. Said Seddiki, Stoyan Stoyanov, Dirk Trossen
  • Baobao Zhang, Jun Bi, Jianping Wu, Fred Baker
  • Masoud Moshref, Apoorv Bhargava, Adhip Gupta, Minlan Yu, Ramesh Govindan
  • Srikanth Sundaresan, Nick Feamster, Renata Teixeira

    We present a demonstration of WTF (Where’s The Fault?), a system that localizes performance problems in home and access networks. We implement WTF as custom firmware that runs in an off-the-shelf home router. WTF uses timing and buffering information from passively monitored traffic at home routers to detect both access link and wireless network bottlenecks.

  • Sajad Shirali-Shahreza, Yashar Ganjali

    One of the limitations of wildcard rules in Software Defined Networks, such as OpenFlow, is loss of visibility. FleXam is a flexible sampling extension for OpenFlow that allows the controller to define which packets should be sampled, what parts of each packet should be selected, and where they should be sent. Here, we present an interactive demo showing how FleXam enables the controller to dynamically adjust sampling rates and change the sampling scheme to optimally keep within a sampling budget in the context of a traffic statistics collection application.

  • Liang Zhu, Zi Hu, John Heidemann, Duane Wessels, Allison Mankin, Nikita Somaiya
  • Oliver Michel, Michael Coughlin, Eric Keller

    Given that Software-Defined Networking is highly successful in solving many of today’s manageability, flexibility, and scalability issues in large-scale networks, in this paper we argue that the concept of SDN can be extended even further. Many applications (especially stream-processing and big-data applications) rely on graph-based inter-process communication patterns that are very similar to those in computer networks. In our view, this network abstraction, spanning different types of entities, is highly suitable for and would benefit from central (SDN-inspired) control for the same reasons classical networks do. In this work, we investigate the commonalities between such intra-host networks and classical computer networking. Based on this, we study the feasibility of a central network controller that manages both network traffic and intra-host communication over a custom bus system.

  • Matthew K. Mukerjee, JungAh Hong, Junchen Jiang, David Naylor, Dongsu Han, Srinivasan Seshan, Hui Zhang
  • Arash Molavi Kakhki, Abbas Razaghpanah, Rajesh Golani, David Choffnes, Phillipa Gill, Alan Mislove
  • Rui Miao, Minlan Yu, Navendu Jain
  • Ricky K.P. Mok, Weichao Li, Rocky K.C. Chang

    Crowdtesting is increasingly popular among researchers for carrying out subjective assessments of different services, since experimenters can easily access a huge pool of human subjects through crowdsourcing platforms. The workers are usually anonymous, and they participate in the experiments independently. A fundamental problem threatening the integrity of these platforms is therefore detecting the various types of cheating by workers. In this poster, we propose a cheat-detection mechanism based on an analysis of the workers’ mouse cursor trajectories. Our implementation provides a jQuery-based library to record browser events, and we compute a set of metrics from the cursor traces to identify cheaters. We deployed our mechanism on the survey pages for our video quality assessment tasks published on Amazon Mechanical Turk. Our results show that cheaters’ cursor movement is usually more direct and contains fewer pauses.
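
    The trajectory analysis can be sketched with two metrics of the kind the poster describes (the metric definitions and the pause threshold here are our own illustrations, not the poster's): a directness ratio (straight-line distance over path length) and a count of long pauses between cursor samples.

```python
# Cursor-trajectory metrics (illustrative sketch, not the poster's code).
import math

def trajectory_metrics(trace, pause_gap=0.5):
    """trace: list of (x, y, t) cursor samples, t in seconds.
    Returns (directness, pauses): directness near 1.0 means a nearly
    straight path; pauses counts gaps longer than pause_gap seconds."""
    path = sum(math.dist(trace[i][:2], trace[i + 1][:2])
               for i in range(len(trace) - 1))
    straight = math.dist(trace[0][:2], trace[-1][:2])
    directness = straight / path if path else 1.0
    pauses = sum(1 for i in range(len(trace) - 1)
                 if trace[i + 1][2] - trace[i][2] > pause_gap)
    return directness, pauses

direct = [(0, 0, 0.0), (5, 0, 0.1), (10, 0, 0.2)]     # suspiciously direct
wandering = [(0, 0, 0.0), (5, 5, 0.3), (10, 0, 1.2)]  # detour plus a pause
print(trajectory_metrics(direct))                     # (1.0, 0)
print(trajectory_metrics(wandering))
```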

  • Attila Csoma, Balázs Sonkoly, Levente Csikor, Felicián Németh, Andràs Gulyas, Wouter Tavernier, Sahel Sahhaf

    Mininet is a great prototyping tool which combines existing SDN-related software components (e.g., Open vSwitch, OpenFlow controllers, network namespaces, cgroups) into a framework, which can automatically set up and configure customized OpenFlow testbeds scaling up to hundreds of nodes. Standing on the shoulders of Mininet, we implement a similar prototyping system called ESCAPE, which can be used to develop and test various components of the service chaining architecture. Our framework incorporates Click for implementing Virtual Network Functions (VNF), NETCONF for managing Click-based VNFs and POX for taking care of traffic steering. We also add our extensible Orchestrator module, which can accommodate mapping algorithms from abstract service descriptions to deployed and running service chains.
