In their book Building the Network of the Future, Mazin Gilbert and Mark Austin of AT&T describe the big data framework that the operator has adopted to process the 118 petabytes of data that pass through its networks each day (as of 2016).
The operator not only tracks the payload of data that traverses its networks but also captures and stores, for later analysis, myriad data from user devices, radio access infrastructure, core network elements (such as XDRs), Internet cloud infrastructure (for example, CDN logs), as well as the application data itself (such as website logs). Much of this data is stored in a Hadoop-based system running on common, off-the-shelf hardware. Feeding the Hadoop distributed file system is a data ingestion engine based on open source tools such as Kafka, Flume and Sqoop. Sitting on top of Hadoop (figuratively) are modules for analytics (Spark), batch processing (MapReduce), search (Solr) and NoSQL databases (e.g., MongoDB, Cassandra).
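To make the batch-processing layer concrete, here is a minimal sketch of the map/shuffle/reduce pattern in plain Python, counting status codes in hypothetical CDN log lines. This is purely illustrative of the programming model; in the stack described above, the same pattern would run as a MapReduce or Spark job over files in HDFS, and the log format shown is an assumption, not AT&T's actual data.

```python
from collections import defaultdict

def map_phase(log_line):
    # Emit (status_code, 1) for each log line, e.g. "GET /video.ts 404"
    # (hypothetical format for illustration only).
    parts = log_line.split()
    yield parts[-1], 1

def reduce_phase(key, values):
    # Aggregate all counts emitted for one key.
    return key, sum(values)

def run_job(log_lines):
    # Shuffle: group intermediate (key, value) pairs by key.
    groups = defaultdict(list)
    for line in log_lines:
        for key, value in map_phase(line):
            groups[key].append(value)
    # Reduce: collapse each group to a single result.
    return dict(reduce_phase(k, v) for k, v in groups.items())

logs = [
    "GET /index.html 200",
    "GET /video.ts 404",
    "GET /index.html 200",
]
print(run_job(logs))  # → {'200': 2, '404': 1}
```

The point of the model is that map and reduce are independent per key, so a framework can spread them across a cluster of commodity machines without changing the job's logic.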
While all this open source technology looks fantastically fun, operators should step back for a second and ask themselves whether filling huge data lakes with streaming telemetry about network paths, traffic flows and performance is going to provide a valuable resource for analytics or simply rack up a rather large bill for storage infrastructure (albeit commodity hardware). After all, the key point of the exercise is to unearth valuable insights from the data that enable them to improve the business, such as faster root cause analysis, reduced mean time to repair or earlier detection of security threats. Might they be better off applying a coarser filter to the data they collect, focusing on the metrics that are likely to have a material impact on performance? Judgement calls about which data is worth keeping require networking expertise that may be lacking in the IT development team tasked with building the data analytics platforms.
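The "coarser filter" idea can be sketched in a few lines: rather than storing every telemetry sample, keep only metrics on a shortlist of materially useful KPIs, and only readings that breach a threshold. All metric names and thresholds below are hypothetical, stand-ins for whatever an operator's network engineers judge to matter.

```python
# Hypothetical shortlist of material KPIs and their "worth storing" thresholds.
MATERIAL_KPIS = {"packet_loss_pct": 1.0, "latency_ms": 100.0}

def worth_keeping(sample):
    """Return True if a telemetry sample should be stored in the lake."""
    threshold = MATERIAL_KPIS.get(sample["metric"])
    if threshold is None:
        return False                     # not a KPI we consider material
    return sample["value"] >= threshold  # keep only notable readings

samples = [
    {"metric": "packet_loss_pct", "value": 0.1},   # normal; discard
    {"metric": "latency_ms", "value": 250.0},      # breach; keep
    {"metric": "cpu_temp_c", "value": 70.0},       # not on the shortlist
]
kept = [s for s in samples if worth_keeping(s)]
print(kept)  # → [{'metric': 'latency_ms', 'value': 250.0}]
```

The hard part, as the paragraph above notes, is not the filter itself but deciding what belongs in the shortlist — a networking judgement call, not an IT one.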
As this article notes: "The best strategy for data lakes is to only collect data that is useful now. Data loses its value over time and if you can’t find what you’re looking for in the mess that is the data swamp, it's pointless to keep adding to it. Projects should only go after sources that can provide useful solutions to clearly defined business problems."
To find out more about data collection best practices and what to do with the data once you have decided to store it (standard correlations, sophisticated machine learning algorithms, etc.), join us at Software-Defined Operations & the Autonomous Network event in London, November 7-8 for the panel Zero Touch Analytics – Delivering Insights In Real Time.
Operators want analytics tools to provide them with tangible insights: findings that are actionable, concrete and palpable. At the same time, they want these systems to be highly automated, employ artificial intelligence and be zero-touch. Palpable and zero-touch at the same time: quite a challenge. I'll be discussing this, and more, with speakers from Atrinet, Netcracker and Telefonica.
— James Crawshaw, Senior Analyst, Heavy Reading