
Spinning Data into Pure Gold


Automation will help companies better monetize their big data and transform their economics in the coming year.

The recent past has been rife with activity aimed at achieving digital transformation outcomes using virtualization, automation, big data, AI, and analytics. Indeed, many companies have taken the mantra of using these capabilities to create personalized customer experiences, service agility, and business disruption to heart. They’ve been laying the groundwork for these outcomes by seeking ways to mine actionable insights from the large volumes of customer data they collect.

If the past few years have been mostly about gathering and storing big data, next year will see companies start monetizing that big data in much bigger ways as technologies emerge to 1) boost the productivity of data scientists, 2) automate a key aspect of machine learning, and 3) spur communications service providers (CSPs) to start riding the digital wave and exit the low-margin, pure-pipe business. All of these trends should make the job of turning data into more accurate decision-making and actionable business outcomes possible.

Specifically, here are three key trends to watch in 2019:

#1: Companies will start deriving real value from big data, thanks to middleware that will take big data initiatives from pilots to production faster

The situation today: Big data has been a trend for several years. However, companies sitting on massive data stores, such as CSPs, have been challenged to monetize that data. One of the hurdles has been getting the collected data into the correct format so that it is easily accessible and can be queried at a moment’s notice.

Data scientists face the challenge of integrating the company’s data storage repositories with the data science platforms they use for modelling and analysis. The process has involved building interfaces from each of the various repositories to each data science platform, one at a time. This manual and latency-prone practice has limited CSPs’ and others’ ability to mine actionable insights from big data that they might turn into personalized customer experiences, slicker operational processes, and other business outcomes that could add new revenue streams to their coffers or save them money.

What’s changing in 2019: Automating the data science process requires putting the modelling and analytics functions onto each data repository platform. To that end, middleware will debut in 2019 that integrates multiple data repository platforms, such as Amazon Web Services and Microsoft Azure, with multiple data science platforms, such as KNIME and H2O.

The platform integration will collapse data silos and boost the productivity of data scientists, which will translate into more actionable insights that CSPs and other companies can use to monetize their data. To illustrate the impact: say a data scientist asks for five data attributes to run an algorithm, but those attributes don’t match the native storage formats of the various data types in the organization’s data lake. The five attributes will nonetheless be delivered transparently to the scientist, without the time- and labor-intensive work of reformatting the data. The result: accelerated results, decisions, and disruption.
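To make the idea concrete, here is a minimal, hypothetical sketch of the kind of attribute-level abstraction such middleware could provide: requested attributes are resolved against differently formatted backing stores and handed back as one analysis-ready table. The store names, attribute names, and registry design are illustrative assumptions, not any vendor’s actual API.

```python
# Hypothetical sketch: the data scientist asks for attribute names, and an
# abstraction layer resolves each one against heterogeneous backing stores
# and normalizes the format before returning a single tidy table.
import pandas as pd

# Simulated repositories with different native formats for the same subscribers.
CRM_STORE = pd.DataFrame(
    {"subscriber_id": ["A1", "B2"], "plan": ["gold", "silver"]}
)
USAGE_STORE = {
    "A1": {"monthly_gb": "12.4", "dropped_calls": "3"},   # stored as strings
    "B2": {"monthly_gb": "48.1", "dropped_calls": "0"},
}

def _from_crm(attr):
    return CRM_STORE.set_index("subscriber_id")[attr]

def _from_usage(attr):
    # Convert the string-typed usage metrics to numbers on the way out.
    return pd.Series({k: float(v[attr]) for k, v in USAGE_STORE.items()},
                     name=attr)

# Registry mapping each logical attribute to the store that holds it.
ATTRIBUTE_REGISTRY = {
    "plan": lambda: _from_crm("plan"),
    "monthly_gb": lambda: _from_usage("monthly_gb"),
    "dropped_calls": lambda: _from_usage("dropped_calls"),
}

def fetch_attributes(requested):
    """Return one DataFrame for the requested attributes, regardless of
    where or how each one is natively stored."""
    columns = [ATTRIBUTE_REGISTRY[name]() for name in requested]
    return pd.concat(columns, axis=1)

if __name__ == "__main__":
    # The caller never sees the format differences between stores.
    print(fetch_attributes(["plan", "monthly_gb", "dropped_calls"]))
```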

#2: Machine learning deployments will be reinvigorated, as feature engineering becomes automated


The situation today: Feature engineering is complex, expensive, and time consuming, yet it’s a necessary and fundamental component to machine learning. It involves selecting the correct attributes of raw data to be fed into data models for analysis to yield accurate results.

Selecting the right attributes requires extensive domain expertise, and it has largely been a manual process. The code for manual feature engineering is problem-dependent, which means it must be rewritten for each new dataset, rendering the process slow and error-prone. That expense and effort have impeded overall progress with machine learning and caused a number of pilots to fail or stall.
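To show what that manual, problem-dependent work looks like, here is a small hypothetical example: every feature is hand-picked for one specific call-record schema, so none of the code carries over to a different dataset without being rewritten. The table and column names are invented for illustration.

```python
# Manual feature engineering: each feature below encodes domain knowledge
# about one particular call-record schema and must be redone per dataset.
import pandas as pd

calls = pd.DataFrame({
    "subscriber_id": ["A1", "A1", "B2", "B2", "B2"],
    "duration_sec":  [120, 30, 600, 45, 5],
    "dropped":       [0, 1, 0, 0, 1],
})

features = calls.groupby("subscriber_id").agg(
    total_calls=("duration_sec", "count"),
    avg_duration=("duration_sec", "mean"),
    drop_rate=("dropped", "mean"),   # domain hunch: dropped calls predict churn
)
print(features)
```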

What’s changing in 2019: Feature engineering will become automated, with frameworks that extract useful, meaningful features from a set of related data tables and can be applied to any problem. Time spent on feature engineering will be slashed, which will accelerate machine learning efforts and implementation times. Models will become adaptable: when the data changes, it will trigger a change in features, which, in turn, will trigger a change in the model. By reducing error, automated feature engineering will help prevent improper data usage that would invalidate a model, and in doing so will drive better decision-making and, ultimately, help companies increase revenue.
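By contrast, a rough sketch of the automated approach applies a fixed set of aggregation primitives to every numeric column of any related table, so the same code serves call records, sessions, or trouble tickets alike. The function and primitive list below are assumptions for illustration, not a specific product’s framework.

```python
# Minimal sketch of automated feature engineering: generate aggregation
# features generically instead of hand-coding them per dataset.
import pandas as pd

PRIMITIVES = ["count", "mean", "max", "min", "sum"]

def auto_features(child: pd.DataFrame, key: str) -> pd.DataFrame:
    """Apply every primitive to every numeric column of `child`, grouped by
    `key`. The identical call works on any child table keyed to an entity."""
    numeric_cols = [c for c in child.select_dtypes("number").columns if c != key]
    agg = child.groupby(key)[numeric_cols].agg(PRIMITIVES)
    agg.columns = [f"{col}_{prim}" for col, prim in agg.columns]
    return agg

calls = pd.DataFrame({
    "subscriber_id": ["A1", "A1", "B2"],
    "duration_sec":  [120, 30, 600],
    "dropped":       [0, 1, 0],
})
print(auto_features(calls, "subscriber_id"))
```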


#3: CSPs will start moving out of the pure-pipe business with a little help from predictive analytics

The situation today: CSPs are sitting on vast volumes of big data that they need to exploit, both to lower operational costs and to monetize their networks with completely new types of services and customer experiences. If they don’t use data-centric measures to get ahead of the game, they could be stuck forever in the pipe business, transporting bits on ever-shrinking margins.

What’s changing in 2019: Smart CSPs will get creative about exploiting the data generated by their network equipment and their subscribers. Web powerhouses have demonstrated how analytics can make money through targeted advertising, personalized services, and better customer experiences that increase customer stickiness; CSPs will adopt the same tactics to find and retain customers and to enter new markets. They will likely form partnerships with companies in retail, banking, and other industries to create network-based services with a much bigger value add than simple data transport.

They will also begin to use predictive analytics and AI regularly to save money, as they make intelligent decisions about network traffic routing, device repairs, and SDN/NFV management. Predictive maintenance is a huge growth area as companies realize the cost savings of preventing problems rather than reacting to them.
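As a hedged illustration of the predictive-maintenance idea, the sketch below trains a classifier on synthetic device telemetry labeled with past failures, then ranks devices by failure risk so repairs can be scheduled before an outage. The features, data, and thresholds are all invented for the example; a real deployment would use the operator’s own telemetry and labels.

```python
# Illustrative predictive-maintenance sketch on synthetic telemetry.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Synthetic per-device telemetry: temperature, link errors, time since service.
X = np.column_stack([
    rng.normal(60, 10, n),     # average card temperature (C)
    rng.poisson(2, n),         # link errors per hour
    rng.uniform(0, 365, n),    # days since last maintenance
])
# Assume failures are more likely for hot, error-prone, long-unserviced devices.
risk = 0.02 * (X[:, 0] - 60) + 0.3 * X[:, 1] + 0.005 * X[:, 2]
y = (risk + rng.normal(0, 1, n) > 2.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Rank devices by predicted failure probability; flag the riskiest for a visit.
proba = model.predict_proba(X_test)[:, 1]
print("devices to inspect first:", np.argsort(proba)[::-1][:5])
```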

As large companies and CSPs finally start making sense of their big data with smart tools that accelerate the creation of actionable insights from that data, the floodgates will open to business disruption. For its part, 2019 is likely to be one of the early years in which the digital transformation proof finally shows up in the proverbial pudding, as companies begin to monetize their big data in creative ways that have yet to be conceived.

Author

Anupam Rastogi is the Chief Architect and SVP of Technology at Guavus, a pioneer in AI-based big data analytics for CSPs. Anupam is a 30-year veteran of the IT industry with wide-ranging expertise across multiple fields, including Data Engineering/Management/Warehousing, Business Intelligence/Analytics, and Enterprise Architecture. Most recently, he spearheaded an award-winning cloud-based self-service reporting platform at Google, where he served as Corporate Engineering - Head of Reporting Platform and Solutions. Prior to Google, Anupam gained valuable experience in different capacities at NetApp, Cisco, and Oracle.
