CCR Papers from April 2011

  • S. Keshav

    Twenty years ago, when I was still a graduate student, going online meant firing up a high-speed 1200 baud modem and typing text on a Z19 glass terminal to interact with my university’s VAX 11/780 server. Today, this seems quaint, if not downright archaic. Fast-forward twenty years, and it seems very likely that reading newspapers and magazines on paper will seem equally quaint, if not downright wasteful. It is clear that the question is when, not if, CCR goes completely online.

    CCR today provides two types of content: editorials and technical articles. Both are selected to be relevant, novel, and timely. By going online only, we would certainly not give up these qualities. Instead, by not being tied to the print medium, we could publish articles as they were accepted, instead of waiting for a publication deadline. This would reduce the time-to-publication from the current 16 weeks to less than 10 weeks, making the content even more timely.

    Freeing CCR from print has many other benefits. We could publish content that goes well beyond black-and-white print and graphics. For example, graphs and photographs in papers would no longer have to be black-and-white. But that is not all: it would be possible, for example, to publish professional-quality videos of paper presentations at the major SIG conferences. We could also publish and archive the software and data sets for accepted papers. Finally, it would allow registered users to receive alerts when relevant content was published. Imagine the benefits from getting a weekly update from CCR with pointers to freshly-published content that is directly relevant to your research!

    These potential benefits can be achieved at little additional cost using off-the-shelf technologies. They would, however, significantly change the CCR experience for SIG members. Therefore, before we plunge ahead, we’d like to know what you think. Do send your comments to me at: ccr-edit@uwaterloo.ca

  • Martin Heusse, Sears A. Merritt, Timothy X. Brown, and Andrzej Duda

    Many papers attribute the drop in download performance when two TCP connections in opposite directions share a common bottleneck link to ACK compression, the phenomenon in which download ACKs arrive in bursts so that TCP self-clocking breaks. Efficient mechanisms for coping with the performance problem already exist, and we do not propose yet another solution. Instead, we thoroughly analyze the interactions between the connections and show that ACK compression actually arises only in a perfectly symmetrical setup and has little impact on performance. We provide a different explanation of the interactions: data pendulum, a core phenomenon that we analyze in this paper. In the data pendulum effect, data and ACK segments alternately fill only one of the link buffers (on the upload or download side) at a time, but almost never both of them. We analyze the effect in the case in which buffers are structured as arrays of bytes and derive an expression for the ratio between the download and upload throughput. Simulation results and measurements confirm our analysis and show how appropriate buffer sizing alleviates performance degradation. We also consider the case of buffers structured as arrays of packets and show that it amplifies the effects of the data pendulum.
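
    A minimal sketch of the last point, with assumed segment and buffer parameters that do not come from the paper: because a pure ACK is roughly 40 bytes while a full data segment is roughly 1500 bytes, a buffer limited in packets lets ACKs claim a far larger share of the queue than a buffer limited in bytes, which is why packet-indexed buffers amplify the pendulum.

      # Toy comparison (assumed sizes, not taken from the paper): how much of a
      # byte-limited versus packet-limited buffer is consumed by pure ACKs.
      ACK_BYTES, DATA_BYTES = 40, 1500

      def ack_share(n_acks, n_data):
          """Fraction of the queue occupied by ACKs under the two buffer models."""
          byte_share = n_acks * ACK_BYTES / (n_acks * ACK_BYTES + n_data * DATA_BYTES)
          packet_share = n_acks / (n_acks + n_data)
          return byte_share, packet_share

      # 30 ACKs queued behind 14 data segments in the same buffer:
      byte_share, packet_share = ack_share(30, 14)
      print(f"byte-indexed buffer:   ACKs occupy {byte_share:.0%} of the space")
      print(f"packet-indexed buffer: ACKs occupy {packet_share:.0%} of the slots")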

    D. Papagiannaki
  • Nasif Ekiz, Abuthahir Habeeb Rahman, and Paul D. Amer

    While analyzing CAIDA Internet traces of TCP traffic to detect instances of data reneging, we frequently observed seven misbehaviors in the generation of SACKs. These misbehaviors could result in a data sender mistakenly concluding that data reneging occurred. In the worst case, one misbehavior could result in a data sender receiving a SACK for data that was transmitted but never received. This paper presents a methodology, and its application using TBIT, to test a wide range of operating systems and fingerprint which ones misbehave in each of the seven ways. Measuring the performance loss due to these misbehaviors is outside the scope of this study; the goal is to document the misbehaviors so they may be corrected. One can conclude that the handling of SACKs, while simple in concept, is complex to implement.
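
    The abstract does not enumerate the seven misbehaviors, so the following is only a hypothetical example of the kind of trace check involved: it scans a pcap file and flags SACK blocks whose right edge does not exceed the cumulative ACK, a redundant block that a sender could misread as reneging. The file name and the dpkt-based parsing are assumptions for illustration.

      # Hypothetical SACK sanity check over a packet trace (illustrative only;
      # not necessarily one of the misbehaviors documented in the paper).
      import struct
      import dpkt

      def suspicious_sacks(pcap_path):
          with open(pcap_path, "rb") as f:
              for ts, buf in dpkt.pcap.Reader(f):
                  eth = dpkt.ethernet.Ethernet(buf)
                  ip = eth.data
                  if not isinstance(ip, dpkt.ip.IP) or not isinstance(ip.data, dpkt.tcp.TCP):
                      continue
                  tcp = ip.data
                  for opt, data in dpkt.tcp.parse_opts(tcp.opts):
                      if opt != dpkt.tcp.TCP_OPT_SACK:
                          continue
                      edges = struct.unpack("!%dI" % (len(data) // 4), data)
                      for left, right in zip(edges[0::2], edges[1::2]):
                          if right <= tcp.ack:   # block at or below the cumulative ACK
                              yield ts, tcp.ack, left, right   # sequence wraparound ignored

      for ts, ack, left, right in suspicious_sacks("trace.pcap"):
          print("%.6f: SACK [%d,%d) at or below cumack %d" % (ts, left, right, ack))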

    S. Saroiu
  • Shane Alcock and Richard Nelson

    This paper presents the results of an investigation into the application flow control technique utilised by YouTube. We reveal and describe the basic properties of YouTube application flow control, which we term block sending, and show that it is widely used by YouTube servers. We also examine how the block sending algorithm interacts with the flow control provided by TCP and reveal that the block sending approach was responsible for over 40% of packet loss events in YouTube flows in a residential DSL dataset and for the retransmission of over 1% of all YouTube data sent after the application flow control began. We conclude by suggesting that changing YouTube block sending to be less bursty would improve the performance and reduce the bandwidth usage of YouTube video streams.
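
    The abstract does not give the server-side algorithm in detail, so the sketch below is only a generic illustration of application-level block sending, with assumed parameters: after an unpaced startup burst, the application hands the video to TCP in large fixed-size blocks at fixed intervals, so each block can hit the bottleneck queue as a burst of back-to-back packets.

      # Generic block-sending sketch (block size, interval, and burst length are
      # assumptions, not measurements from the paper).
      import socket
      import time

      BLOCK_SIZE = 64 * 1024      # bytes handed to TCP in one write
      BLOCK_INTERVAL = 0.5        # pause between blocks, in seconds
      INITIAL_BURST = 10          # blocks sent without pacing at startup

      def stream_video(conn: socket.socket, video: bytes) -> None:
          offset, blocks_sent = 0, 0
          while offset < len(video):
              conn.sendall(video[offset:offset + BLOCK_SIZE])
              offset += BLOCK_SIZE
              blocks_sent += 1
              if blocks_sent >= INITIAL_BURST:
                  time.sleep(BLOCK_INTERVAL)   # pacing starts after the startup burst

    Making the sending less bursty, as the authors suggest, would amount to smaller blocks at shorter intervals for the same average rate, so fewer packets arrive back to back at the bottleneck.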

    S. Moon
  • Marcus Lundén and Adam Dunkels

    In low-power wireless networks, nodes need to duty cycle their radio transceivers to achieve a long system lifetime. Counter-intuitively, in such networks broadcast becomes expensive in terms of energy and bandwidth, since all neighbors must be woken up to receive broadcast messages. We argue that there is a class of traffic for which broadcast is overkill: periodic, redundant transmissions of semi-static information that is already known to all neighbors, such as neighbor and router advertisements. Our experiments show that such traffic can account for as much as 20% of the network power consumption. We argue that this calls for a new communication primitive and present politecast, a communication primitive that allows messages to be sent without explicitly waking neighbors up. We have built two systems based on politecast: a low-power wireless mobile toy and a full-scale low-power wireless network deployment in an art gallery. Our experimental results show that politecast can provide up to a four-fold lifetime improvement over broadcast.
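
    A toy cost model of why waking every neighbor is expensive, with invented constants and a deliberately simplified radio model (none of it from the paper): a duty-cycled broadcast must strobe for a full wake-up interval and forces every neighbor to receive, while a politecast-style transmission is sent once and is heard only by neighbors that happen to be listening.

      # Toy energy comparison (all constants are assumptions for illustration).
      import random

      WAKEUP_INTERVAL = 0.125      # seconds between radio wake-ups
      RADIO_POWER = 60.0           # mW while transmitting or receiving
      LISTEN_DUTY_CYCLE = 0.01     # fraction of time a neighbor is listening
      FRAME_AIRTIME = 0.004        # seconds to send or receive one frame

      def broadcast_cost_mj(num_neighbors):
          # Sender strobes for a full wake-up interval so every neighbor hears it,
          # and every neighbor then stays awake to receive the payload.
          sender = RADIO_POWER * WAKEUP_INTERVAL
          receivers = num_neighbors * RADIO_POWER * FRAME_AIRTIME
          return sender + receivers

      def politecast_cost_mj(num_neighbors):
          # Sender transmits once; only neighbors already listening receive it.
          sender = RADIO_POWER * FRAME_AIRTIME
          awake = sum(random.random() < LISTEN_DUTY_CYCLE for _ in range(num_neighbors))
          return sender + awake * RADIO_POWER * FRAME_AIRTIME

      print(f"broadcast : {broadcast_cost_mj(20):.2f} mJ per message")
      print(f"politecast: {politecast_cost_mj(20):.2f} mJ per message")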

    P. Levis
  • Xiang Cheng, Sen Su, Zhongbao Zhang, Hanchi Wang, Fangchun Yang, Yan Luo, and Jie Wang

    Virtualizing and sharing networked resources have become a growing trend that reshapes computing and networking architectures. Embedding multiple virtual networks (VNs) on a shared substrate is a challenging problem on cloud computing platforms and large-scale sliceable network testbeds. In this paper we apply the Markov Random Walk (RW) model to rank network nodes based on their resource and topological attributes. This novel topology-aware node ranking measure reflects the relative importance of a node. Using node ranking, we devise two VN embedding algorithms. The first algorithm maps virtual nodes to substrate nodes according to their ranks and then embeds the virtual links between the mapped nodes by finding shortest paths in the unsplittable case and by solving the multi-commodity flow problem in the splittable case. The second algorithm is a backtracking VN embedding algorithm based on breadth-first search, which embeds the virtual nodes and links in the same stage using node ranks. Extensive simulation experiments show that the topology-aware node rank is a better resource measure and that the proposed RW-based algorithms increase the long-term average revenue and acceptance ratio compared to existing embedding algorithms.
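
    The abstract does not spell out the rank definition, so the following is a sketch in that spirit under assumed weights: a PageRank-style random walk whose transitions are biased toward nodes with more CPU and adjacent bandwidth, followed by a greedy rank-ordered node mapping. Capacity feasibility checks and the link-embedding stage are omitted.

      # Topology-aware node ranking sketch (the transition weighting is an
      # assumption, not the authors' exact definition).
      def node_rank(adj_bw, cpu, damping=0.85, iters=100):
          """adj_bw[u][v] = bandwidth of link (u, v); cpu[u] = CPU capacity of u."""
          nodes = list(cpu)
          resource = {u: cpu[u] * sum(adj_bw[u].values()) for u in nodes}
          rank = {u: 1.0 / len(nodes) for u in nodes}
          for _ in range(iters):
              new = {}
              for u in nodes:
                  incoming = sum(
                      rank[v] * resource[u] / sum(resource[w] for w in adj_bw[v])
                      for v in nodes if u in adj_bw[v]
                  )
                  new[u] = (1 - damping) / len(nodes) + damping * incoming
              rank = new
          return rank

      def greedy_node_mapping(virtual_rank, substrate_rank):
          """Map virtual nodes to distinct substrate nodes in decreasing rank order."""
          vir = sorted(virtual_rank, key=virtual_rank.get, reverse=True)
          sub = sorted(substrate_rank, key=substrate_rank.get, reverse=True)
          return dict(zip(vir, sub))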

    S. Agarwal
  • Jianping Wu, Jessie Hui Wang, and Jiahai Yang

    Research on and promotion of the next-generation Internet have drawn the attention of researchers in many countries. In the USA, the FIND initiative takes a clean-slate approach. In the EU, the EIFFEL think tank has concluded that both clean-slate and evolutionary approaches are needed. In China, researchers and the country as a whole are enthusiastic about the promotion and immediate deployment of IPv6, due to the imminent exhaustion of IPv4 addresses.

    In 2003, China launched a strategic programme called the China Next Generation Internet (CNGI). China expects its industry to be better positioned in future Internet technologies and services than it was for the first generation. With the support of a CNGI grant, the China Education and Research Network (CERNET) started to build an IPv6-only network, CNGI-CERNET2. It currently provides IPv6 access service for students and staff at many Chinese universities. In this article, we introduce the CNGI programme, the architecture of CNGI-CERNET2, and several aspects of CNGI-CERNET2’s deployment and operation, such as transition, security, charging, and roaming services.

  • Ingmar Poese, Steve Uhlig, Mohamed Ali Kaafar, Benoit Donnet, and Bamba Gueye

    The most widely used technique for IP geolocation consists of building a database that maps IP blocks to geographic locations. Several such databases are available and are frequently used by many services and web sites on the Internet. Contrary to widespread belief, geolocation databases are far from being as reliable as they claim. In this paper, we compare several current geolocation databases, both commercial and free, to gain insight into the limitations of their usability.

    First, the vast majority of entries in the databases refer to only a few popular countries (e.g., the U.S.). This creates an imbalance in the representation of countries across the IP blocks of the databases. Second, these entries reflect neither the original allocation of IP blocks nor BGP announcements. In addition, we quantify the accuracy of geolocation databases on a large European ISP using ground truth information. This is the first study using ground truth to show that the overly fine granularity of database entries makes their accuracy worse, not better. Geolocation databases can claim country-level accuracy, but certainly not city-level accuracy.
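
    As an illustration of the kind of comparison described above, the sketch below assumes a simple CSV layout (start_ip,end_ip,country, with IPs as integers), which is not the format of any particular commercial database, and measures both the country skew of the entries and how often two databases disagree on randomly sampled addresses.

      # Geolocation database comparison sketch (the CSV layout is an assumption).
      import csv
      import random
      from bisect import bisect_right
      from collections import Counter

      def load_db(path):
          """Rows assumed to be: start_ip,end_ip,country (IPs as integers)."""
          blocks = []
          with open(path) as f:
              for start, end, country in csv.reader(f):
                  blocks.append((int(start), int(end), country))
          blocks.sort()
          return blocks

      def lookup(blocks, ip):
          i = bisect_right(blocks, (ip, float("inf"), "")) - 1
          if i >= 0 and blocks[i][0] <= ip <= blocks[i][1]:
              return blocks[i][2]
          return None

      def country_skew(blocks, top=10):
          """Share of database entries per country, for the most common countries."""
          counts = Counter(country for _, _, country in blocks)
          return [(c, n / len(blocks)) for c, n in counts.most_common(top)]

      def disagreement(db_a, db_b, samples=100_000):
          """Fraction of sampled addresses on which the two databases disagree."""
          ips = (random.randrange(2 ** 32) for _ in range(samples))
          pairs = [(lookup(db_a, ip), lookup(db_b, ip)) for ip in ips]
          both = [(a, b) for a, b in pairs if a and b]
          return sum(a != b for a, b in both) / max(len(both), 1)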

  • Wai-Leong Yeow, Cedric Westphal, and Ulas C. Kozat

    In a virtualized infrastructure where physical resources are shared, a single physical server failure terminates several virtual servers and cripples the virtual infrastructures that contain those virtual servers. In the worst case, further failures may cascade as the remaining servers become overloaded. To guarantee some level of reliability, each virtual infrastructure should be augmented at instantiation with backup virtual nodes and links that have sufficient capacities. This ensures that, when physical failures occur, sufficient computing resources are available and the virtual network topology is preserved. However, doing so may greatly reduce the utilization of the physical infrastructure. This can be circumvented if backup resources are pooled, shared across multiple virtual infrastructures, and intelligently embedded in the physical infrastructure. These techniques can reduce the physical footprint of virtual backups while guaranteeing reliability.
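
    A toy footprint comparison of the pooling idea, under two stated assumptions that do not come from the paper (at most one physical server fails at a time, and any pooled backup node can host any affected virtual node): dedicated reservations grow with the number of virtual infrastructures, whereas a shared pool only has to cover the worst single server failure.

      # Dedicated versus pooled backup capacity (assumptions noted above).
      def dedicated_backups(per_vi_backup_needs):
          """Each virtual infrastructure reserves its own backup nodes."""
          return sum(per_vi_backup_needs)

      def pooled_backups(virtual_nodes_per_server):
          """A shared pool sized to absorb the worst single physical failure."""
          return max(virtual_nodes_per_server.values())

      # Three virtual infrastructures that would each reserve 2 backup nodes ...
      print(dedicated_backups([2, 2, 2]))                  # -> 6 backup nodes
      # ... versus one pool sized for the busiest physical server.
      print(pooled_backups({"s1": 3, "s2": 2, "s3": 2}))   # -> 3 backup nodes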
