Stream-monitoring with blockmon: convergence of network measurements and data analytics platforms

By: 
Davide Simoncelli, Maurizio Dusi, Francesco Gringoli, Saverio Niccolini
Appears in: 
CCR April 2013

Recent work in network measurements focuses on scaling the performance of monitoring platforms to 10Gb/s and beyond. Concurrently, the IT community focuses on scaling the analysis of big data over a cluster of nodes. So far, combinations of these approaches have targeted flexibility and usability over timeliness of results and efficient allocation of resources. In this paper we show how to meet both objectives with BlockMon, a network monitoring platform originally designed to run on a single node, which we extended to run distributed stream-data analytics tasks. We compare its performance against Storm and Apache S4, the state-of-the-art open-source stream-processing platforms, by implementing a phone call anomaly detection system and a Twitter trending algorithm: our enhanced BlockMon achieves a performance gain of over 2.5x and 23x, respectively. Given the different nature of those applications and the performance of BlockMon as a single-node network monitor [1], we expect our results to hold for a broad range of applications, making distributed BlockMon a good candidate for the convergence of network-measurement and IT-analysis platforms.
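For context, a trending-topic task of the kind used in the comparison boils down to maintaining per-key counts over a sliding window of the incoming stream and periodically emitting the heaviest keys. The sketch below is a minimal, single-node Python illustration of that workload class; the class and parameter names (TrendingCounter, window_seconds, top_k) are assumptions for illustration only and do not reflect the BlockMon, Storm, or S4 implementations evaluated in the paper.

```python
# Hypothetical sketch of a sliding-window hashtag counter, illustrating the
# kind of streaming-analytics workload compared in the paper. Not the paper's
# implementation; names and parameters are illustrative assumptions.
from collections import Counter, deque
import time


class TrendingCounter:
    def __init__(self, window_seconds=300, top_k=10):
        self.window_seconds = window_seconds
        self.top_k = top_k
        self.events = deque()    # (timestamp, hashtag) pairs in arrival order
        self.counts = Counter()  # hashtag -> count within the current window

    def add(self, hashtag, timestamp=None):
        """Ingest one hashtag occurrence from the tweet stream."""
        ts = timestamp if timestamp is not None else time.time()
        self.events.append((ts, hashtag))
        self.counts[hashtag] += 1
        self._expire(ts)

    def _expire(self, now):
        """Drop occurrences that have fallen out of the sliding window."""
        while self.events and now - self.events[0][0] > self.window_seconds:
            _, old = self.events.popleft()
            self.counts[old] -= 1
            if self.counts[old] == 0:
                del self.counts[old]

    def trending(self):
        """Return the current top-k hashtags and their in-window counts."""
        return self.counts.most_common(self.top_k)
```

In a distributed setting, the stream would typically be partitioned by key across nodes, each maintaining counters like the above, with a downstream stage merging the per-node top-k lists.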

Public Review By: 
Konstantina Papagiannaki

The paper introduces a distributed implementation of the stream-processing system BlockMon. The authors compare their system against Storm and Apache S4, demonstrating clear benefits for two specific applications: an anomaly detection system for phone calls and a Twitter trending algorithm. The reviewers found the overall topic of the paper interesting, as it pushes real-time analytics toward a distributed processing paradigm. Given the community's interest in big data analytics, this paper could seed follow-up research enabling high-performance analysis of real-time big data sources.