Welcome to part 4 of a 6-part article series on knowledge, the Buyer Journey, marketing/sales synergy and better sales results. Our last article, What data sources to integrate to gain from Big Knowledge, looked at the main data sources you need to focus on and the value of each, especially the CRM system. This article looks at the ways in which machine learning can help to produce more precise knowledge from the data sources you have used.
Digital marketing, in general, is very quickly turning what used to be a guessing game into an uninterrupted series of data-driven actions. Effective marketing has become a blend of science, strategy and creativity. There’s a lot to know about new ways of driving business with customer data. Machine learning is, for example, a key part of how ID BBN client, Visma Software, was able to find 602 new leads and predict 722 customer businesses at risk of churn. (More on that later.)
How does Machine Learning work?
Machine learning is easy to understand: it’s an application of artificial intelligence (AI) that finds patterns in data. It can, for example, predict buyer behaviour with a high degree of accuracy. Without being explicitly programmed to do so, machine learning algorithms automatically learn and improve from experience. This is different from traditional analytics, where you typically start with an assumption about the data itself, then select the appropriate method of analysis. Machine learning is a simple thing in theory: you put your data through an algorithm…and it finds patterns.
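The "data in, patterns out" idea can be sketched in a few lines. In this illustration a classifier learns to separate buyers from non-buyers from examples alone, with no hand-written rules; the feature values and labels are invented for the sake of the example.

```python
# A minimal sketch of "data in, patterns out": the algorithm learns the
# pattern itself instead of being explicitly programmed with rules.
# All numbers here are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each row: [email opens, website visits]; label: 1 = bought, 0 = did not
X = [[1, 0], [2, 1], [0, 1], [10, 8], [12, 9], [9, 7]]
y = [0, 0, 0, 1, 1, 1]

model = DecisionTreeClassifier(random_state=0)
model.fit(X, y)  # the algorithm finds the pattern on its own

# Score two new contacts: one very active, one mostly inactive
print(model.predict([[11, 8], [1, 1]]))
```

The model was never told "active contacts buy"; it inferred that linkage from the examples, which is the essential difference from starting with an assumption and picking an analysis method to match.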
Key to the effective use of machine learning is a large quantity of data. With a large enough data set and enough parameters, machine learning will find linkages in the data that a human would miss. In practice, there is no hard limit to how much data you can feed into a machine learning process.
In practice, you still need a data scientist on board to combine the data sets you have and extract the relevant information. Then you need them to help customize the machine learning algorithms themselves, as they are so complicated that you simply can’t build a model by hand. We often use RapidMiner for processing data and building machine learning programs. When building from scratch, Python and R are the main programming language contenders for data science and both have machine learning algorithm libraries.
Uniting data sets for Machine Learning
With machine learning, raw customer data is fed into an unsupervised learning algorithm that separates contacts by pre-defined actions. Many machine learning models are non-linear, meaning they can capture more complex relationships in the data than their linear counterparts and so produce more insightful results. In the case of lead scoring, customer behavioural data is fed into a supervised learning algorithm that calculates the likelihood of purchase with a high degree of accuracy.
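A lead-scoring step like the one described above might look as follows. This is a hypothetical sketch: the behavioural features and the underlying "truth" are synthetic, and logistic regression stands in for whatever supervised algorithm a real project would choose, which the article does not specify.

```python
# Hypothetical lead-scoring sketch: a supervised model estimates each
# contact's likelihood of purchase from behavioural features.
# Features, data and the choice of algorithm are all assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 500

# Invented behavioural features: email opens, page views, demo requests
X = rng.poisson(lam=[3.0, 10.0, 0.5], size=(n, 3)).astype(float)

# Invented ground truth: more activity makes a purchase more likely
y = (X @ np.array([0.3, 0.1, 1.5]) + rng.normal(0, 1, n) > 3).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
scores = model.predict_proba(X)[:, 1]  # likelihood of purchase per contact

# Rank contacts so sales can focus on the hottest leads first
top_leads = np.argsort(scores)[::-1][:10]
print(scores[top_leads].round(2))
```

The output of such a model is a probability per contact, which is what turns raw activity logs into a ranked call list for sales.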
As with most data analysis, the biggest challenge with machine learning lies in uniting and normalising data before it's fed into an algorithm. For high-quality results, you need to start with high-quality data. First, look at the variables in your data sets and discard those with lots of missing values. Then join the different data sets together into one large one. Time often plays a role here: activity data from a marketing automation platform like Oracle Eloqua, for example, is collected over the course of an entire year, while firmographic information describes a single point in time. So you need to summarize the activity data in Eloqua so that it follows the same format as the firmographics data.
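The summarize-and-join step above can be sketched with pandas. The table and column names are invented for illustration (a real Eloqua export looks quite different); the point is the shape change: many activity rows per company are aggregated down to one, so they can be joined against one-row-per-company firmographics.

```python
# Sketch of aligning year-long activity data with point-in-time
# firmographics. Table and column names are invented for illustration.
import pandas as pd

# Yearly activity log: many rows per company, one per tracked event
activity = pd.DataFrame({
    "company_id":  [1, 1, 1, 2, 2, 3],
    "email_opens": [2, 0, 5, 1, 3, 0],
    "site_visits": [1, 4, 2, 0, 1, 0],
})

# Firmographics: one row per company, captured at a single point in time
firmographics = pd.DataFrame({
    "company_id": [1, 2, 3],
    "employees":  [120, 45, 800],
    "industry":   ["retail", "finance", "manufacturing"],
})

# Summarize activity so it matches the one-row-per-company format
summary = activity.groupby("company_id", as_index=False).sum()

# Join the two sets into one large data set ready for an algorithm
combined = summary.merge(firmographics, on="company_id", how="inner")
print(combined)
```

Real projects would aggregate with more care (sums, recency, trends per variable), but the principle is the same: every source has to be reduced to the same unit of analysis before the sets can be united.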
Meet, sort and test
If data sets are significantly different, for example, richer data for one subset of customers, you need to consider if that’s going to have a negative effect on the machine learning algorithm. If it starts to rely on data that doesn’t exist for other customers or prospects, the results will be skewed.
You also have to verify that there are no critical data collection flaws. You essentially need a good understanding of an organization's business so that you can be sure it is properly reflected in the data. That's where data governance plays a part in building a solid machine learning program. The best way to learn how data is being collected and stored is to meet with customers and build an understanding of their processes.
Once you have united your data, test how well the machine learning model performs to get an indication of your data quality. For example, split your data set 70/30: train the model on the 70% as if it were historical data, then test its predictions against the remaining 30% as if it were new.
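The 70/30 check described above can be sketched like this. The data set here is synthetic and the random forest is an assumed stand-in for whichever model a real project would use.

```python
# Sketch of the 70/30 validation step: train on 70% of the data as if it
# were historical, then score the held-out 30% as if it were new data.
# The synthetic data and choice of model are assumptions for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=0  # 70% "old", 30% held out as "new"
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
accuracy = model.score(X_test, y_test)  # held-out accuracy hints at data quality
print(f"Held-out accuracy: {accuracy:.2f}")
```

A model that scores well on the held-out 30% is evidence that the united data set actually carries predictive signal; a score near chance points back at data quality or collection flaws rather than the algorithm.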
Machine learning in action: Visma Software
The Visma Group of Companies has 900,000 customers spanning 12 countries and services in five distinct business areas. One business unit, Visma Software, wanted to take their digital marketing to the next level, a step that involved machine learning. We had two objectives: to analyse and predict the probability of both churn and buying; and to understand the factors, processes and variables that most influence churn and buying probability, so that we could prevent churn or accelerate buying.
Visma Software had plenty of customer data to use. We combined sales and support data sets, enriched with marketing activity data such as email activity data and website and customer portal activity data, along with firmographics information. All in all, we gathered data from five systems, controlled 70 variables and analysed 168,000 data points across 2,722 customer businesses for churn and an additional 6,444 potential purchasers.
In the end, we achieved prediction model accuracy of 84.6% for churn and 84.1% for sales—very high when it comes to predictive marketing. A big part of this success was the availability of multiple years of activity data. The marketing team identified a total of 602 new leads and 722 customer businesses at risk of churn. The stimulation and mitigation that will follow will have a positive effect on the bottom line for years to come. The results also provided new insights into the processes and mechanisms at Visma that affect churn and buying.