Amogh Dhamdhere

Revisiting BGP churn growth

By: 
Ahmed Elmokashfi, Amogh Dhamdhere
Appears in: 
CCR January 2014
Public Review By: 
Jia Wang

BGP is the interdomain routing protocol used on today's Internet. The growth of BGP churn is an old concern that has been studied extensively: when the network undergoes routing changes, a large number of routing protocol messages can be exchanged between routers before they converge to new stable routes. With the rapid growth of the Internet, this update churn could grow too fast for routers to handle. This paper revisits the concern using multi-year BGP routing traces obtained from RouteViews collectors. The authors compare IPv4 and IPv6 and present a BGP churn model. Several interesting findings are presented and explained by the model: (i) the number of routing updates normalized by the size of the topology is constant; (ii) the growth trends of routing dynamics are qualitatively similar in IPv4 and IPv6 (i.e., both depend on the growth of the network topology); (iii) the exponential growth of IPv6 churn is expected given the exponential growth of the underlying IPv6 topology; (iv) IPv6 is less stable than IPv4, with six times more routing events observed per origin AS. Though some of the findings presented in this paper are consistent with what has been reported in previous literature, I find it quite valuable to the community to re-confirm previous observations and common assumptions over multi-year data traces. This paper will serve as a good reference point on BGP churn for researchers who work on interdomain routing and related areas.

Towards a cost model for network traffic

By: 
Murtaza Motiwala, Amogh Dhamdhere, Nick Feamster, Anukool Lakhina
Appears in: 
CCR January 2012

We develop a holistic cost model that operators can use to help evaluate the costs of various routing and peering decisions. Using real traffic data from a large carrier network, we show how network operators can use this cost model to significantly reduce the cost of carrying traffic in their networks. We find that adjusting the routing for a small fraction of total flows (and total traffic volume) significantly reduces cost in many cases. We also show how operators can use the cost model both to evaluate potential peering arrangements and for other network operations problems.

Public Review By: 
Augustin Chaintreau

Signal is not information, and however hard we try, our networked life is not 100% efficient. In some extreme cases, one might even wonder whether some of these online conversations are worth having at all. One reassuring thought is that, even if the value of communication is subject to debate, its cost can probably be assessed objectively. No matter how boring your train neighbor's cellphone conversation is, you can probably infer how much he or she will eventually pay for it. This paper may (or may not) surprise you: it asks the apparently simple question "what is the real cost of carrying a given traffic flow?" and tells us that, well, it's perhaps more complicated than we think. As several reviewers pointed out, it is perhaps even more complicated than what a 6-page paper can tell. There are many good reasons (much better than the one I gave above) to compute this cost, primarily for an operator to optimize key decisions like the establishment of peering links. This is where the paper focuses, and it establishes that the cost can likely be greatly reduced by modifying routes for a small portion of the traffic. Where it differs from traditional traffic engineering is that it does not aim at previously agreed performance goals, but uses the same means to minimize the overall cost. Not surprisingly, the cost model turns out to be an essential piece: for substantially the same gain, the fraction of traffic to reroute varies from 10% (for a linear cost) to 30% (when a cooperative game-theoretic model following Shapley's fairness axioms is used). This result establishes more generally that cost models matter, and also that they can be useful. One clear merit of this work is to make all of us aware that our community should perhaps be engaging in understanding which cost model can and should be used.
Major questions remaining to be answered are (1) the impact of congestion and feedback loops, (2) the roles that content providers and CDNs play in the cost value chain, and (3) how the rerouting proposed in this article could actually be implemented without severely impacting performance or previously agreed terms of service. This paper does not answer these questions entirely, but it provides some elements, and hopefully you will think differently about them after you read it.
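The gap between linear and Shapley-based cost attribution mentioned in the review can be illustrated with a toy computation. The flow names, volumes, and the square-root (concave) cost function below are illustrative assumptions, not the paper's actual model; the point is only that the two attributions split the same total cost differently:

```python
import math
from itertools import permutations

# Flow volumes in arbitrary traffic units (made-up numbers).
volumes = {"A": 1.0, "B": 2.0, "C": 7.0}

def cost(coalition):
    """Concave (subadditive) cost of carrying a set of flows together."""
    return sum(volumes[f] for f in coalition) ** 0.5

def shapley(players, cost_fn):
    """Shapley value: each flow's marginal cost, averaged over all arrival orders."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        carried = []
        for p in order:
            phi[p] += cost_fn(carried + [p]) - cost_fn(carried)
            carried.append(p)
    n_orders = math.factorial(len(players))
    return {p: v / n_orders for p, v in phi.items()}

phi = shapley(list(volumes), cost)
total = cost(list(volumes))
# Linear attribution: split the total cost in proportion to volume.
linear = {f: total * v / sum(volumes.values()) for f, v in volumes.items()}

# Both attributions sum to the same total, but the concave cost makes the
# Shapley split charge the large flow less than its proportional share.
print({f: round(phi[f], 3) for f in phi})
print({f: round(linear[f], 3) for f in linear})
```

Under the concave cost, the largest flow benefits from economies of scale, so its Shapley share falls below its volume-proportional share; which model an operator picks therefore changes which flows look expensive to carry, and hence which ones are worth rerouting.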

Measured Impact of Crooked Traceroute

By: 
Matthew Luckie, Amogh Dhamdhere, kc claffy, and David Murrell
Appears in: 
CCR January 2011

Data collected using traceroute-based algorithms underpins research into the Internet's router-level topology, though it is possible to infer false links from this data. One source of false inference is the combination of per-flow load balancing, in which more than one path is active from a given source to a destination, and classic traceroute, which varies the UDP destination port number or ICMP checksum of successive probe packets; this variation can cause per-flow load balancers to treat successive probes as distinct flows and forward them along different paths.
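To see why varying the destination port matters, here is a minimal sketch. The addresses, ports, and two-path topology are made-up illustrations, and the cryptographic hash is only a stand-in for a real router's flow hash:

```python
import hashlib

def next_hop(src_ip, dst_ip, proto, src_port, dst_port, paths):
    """Hypothetical per-flow load balancer: hash the 5-tuple to pick a path."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return paths[digest % len(paths)]

paths = ["router-A", "router-B"]

# Classic traceroute: the UDP destination port changes with every probe,
# so successive probes can hash to different paths, mixing links from
# both paths into one inferred "route".
classic = {next_hop("10.0.0.1", "192.0.2.1", "udp", 45000, 33434 + i, paths)
           for i in range(6)}

# Paris-style traceroute: the flow identifier is held constant across
# probes, so every probe is forwarded along the same path.
paris = {next_hop("10.0.0.1", "192.0.2.1", "udp", 45000, 33434, paths)
         for i in range(6)}

print("classic probes traversed:", sorted(classic))
print("fixed-flow probes traversed:", sorted(paris))
```

Because the fixed-flow probes always present the same 5-tuple, they map to exactly one path; classic probes present a different 5-tuple each time and can interleave hops from both paths, which is how false links arise.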

Public Review By: 
R. Teixeira

The research community has applied traceroute-style probing to measure Internet topologies for more than a decade with systems such as Skitter/Ark, Dimes, or Rocketfuel. These topologies are the basis of many other research efforts. Unfortunately, recent studies showed that classic traceroute can report false links when a router in the path performs load balancing. Although new probing techniques correct measurement artifacts under per-flow load balancing, we cannot correct topologies that have already been collected using classic traceroute, and no prior work has studied how these errors affect inferred topologies. A natural question is then: how accurate are the topologies that we have all been using in our research?
This paper gives us a mixed answer. Measurement artifacts due to per-flow load balancing introduce only a few errors when traceroute is used to discover a macroscopic topology (i.e., an Internet-wide topology), but they introduce significant errors when discovering the topology of an ISP. Such a sharp difference in the fraction of false links between the macroscopic topology and the ISP topology suggests that the error really depends on the set of vantage points and the networks traversed. This paper studies only one source of errors in inferred Internet topologies. As the authors point out: "the state of the art in Internet topology measurement is essentially and necessarily a set of hacks, which introduce many sources of possible errors". Hopefully, new studies will follow to understand the caveats of measured Internet topologies and to measure more accurate topologies. In the meantime, this paper confirms that we should be cautious when using inferred Internet topologies.
