One of the most commonly pitched features of Marketing Automation is lead scoring. It gives MA users the ability to score the quality of a lead’s explicit profile information (job title, industry, revenue…) and implicit engagement activity (website visits, event registrations, email click-throughs…), allowing your sales team to prioritize their efforts on the most qualified leads nearing the end of the purchase cycle.
There is certainly great value to be had. So much so that this is one of the first features MA users build into their system. However, lead scoring can be a complex task with many pitfalls – especially for a company unfamiliar with MA. Jumping in too early almost always results in failure – which can be a big blow when you are trying to quickly prove the value of a new MA system. This post outlines some of my experiences with lead scoring and lists the reasons you may not want to build a lead scoring program…yet.
Complex Build Process
MA vendors seem to offer one of two lead scoring engines: a very user-friendly scoring module that offers little in the way of flexibility, or a complex workflow engine that allows you to fully customize every aspect of scoring. The pros and cons of each are obvious. Unfortunately, most users want something in the middle (unless the MA vendors develop an easy-to-use scoring module that allows for complex scoring models – I have seen concepts of these modules but have yet to witness one in a GA product).
It is unlikely that new MA users are equipped to start building complex programs. To get around this, it’s advisable to hire an experienced MA user who can infuse your team with their knowledge and kick-start your MA system, or to engage an agency that specializes in MA so you can draw on their experience and resources. Either route will help you avoid learning through failure and can reduce the time it takes to develop your lead scoring system.
Poor Data Quality
In a recent blog post, Ryan Kelly (@theleftandright) describes lead scoring as “Skeletons in the Closet”. He’s exactly right – building a lead scoring system is like running for public office: it will expose all of your skeletons, particularly around data quality. Lead scoring works by analyzing your contact and account data, so scoring can only be as accurate as the data you feed it. There are two main areas where data quality is an issue:
The first is missing data: you can’t score what you don’t have. If a contact has unpopulated fields that are included in your scoring model, they will receive a lower score despite potentially being a great fit as a lead. When determining the fields you will score on, it is important to take field completeness into account. Use only those fields that are populated with accurate data.
If you wish to score on a field that does not contain quality data, you may want to look to outside sources. These can include agencies that collect data via telemarketing, or contact databases such as OneSource and JigSaw (Data.com and others can even be integrated into your MA system via a cloud connector).
The second is non-standardized text fields: scoring on them can be difficult. Because their values are not standardized, it is hard to predict what they might contain and assign scores accordingly. To make things worse, some MA vendors do not offer the ability to score using a ‘contains’ operator (for example, add 5 points if job title ‘contains’ V.P.). Because of this complexity, it is advisable to keep text data out of your scoring model. Numeric fields are the exception because they are predictable and you can score on ranges (for example, add 10 points if account revenue is between $250,000,000 and $999,999,999.99).
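As a sketch of how range-based scoring on a numeric field works (the field name, bands, and point values here are hypothetical, not taken from any particular MA platform):

```python
def score_revenue(annual_revenue):
    """Award points by annual revenue band; band edges and points are illustrative."""
    if annual_revenue is None:
        return 0  # unpopulated field earns nothing, per the missing-data caveat above
    if 250_000_000 <= annual_revenue <= 999_999_999.99:
        return 10
    if annual_revenue >= 1_000_000_000:
        return 15
    return 0

print(score_revenue(500_000_000))  # falls in the $250M–$999M band, so 10 points
```

Note that the blank-field case is handled explicitly: a missing value silently scores zero, which is exactly the chronic low-score problem described above.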
Your sales team will find text fields appealing because they look at each lead’s information individually and can make a human judgment on fields containing detailed, non-standard values. For this reason it’s important to keep text fields on forms and in your database.
So how are you supposed to score leads? Whenever possible, it is best to score on normalized data such as pick-list fields. These give you a defined set of options, each of which you can assign a score to (for example, add 7 points if industry = Software). Generally speaking, you only need to include pick-list options that are appealing to your business; all other values can be categorized as ‘Other’. This keeps the number of pick-list options to a minimum.
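A minimal sketch of pick-list scoring with an ‘Other’ catch-all (the industries and point values are made up for illustration):

```python
# Hypothetical point values for the industries your business cares about.
INDUSTRY_POINTS = {
    "Software": 7,
    "Financial Services": 5,
    "Manufacturing": 3,
}

def score_industry(industry):
    # Anything not on the list (including blanks) is treated as 'Other': 0 points.
    return INDUSTRY_POINTS.get(industry, 0)
```

Because the pick-list defines the full set of possible values, every option either has an explicit score or deliberately falls through to the ‘Other’ bucket.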
The challenge is getting the granularity of text fields alongside the segmentation value of pick-lists. The answer is data normalization: pair each text field with a matching pick-list field and use automation to populate the pick-list based on the text value. For example, you may have a Job Title text field available on forms to collect data. Following a form submission, the contact can be sent to a data normalization program that runs a set of rules such as ‘IF Job Title = CEO, CTO, CIO, CFO, set Job Role = Executive Management’. This gives you the benefits of both field types and keeps both marketing and sales happy. The mechanics vary between MA platforms, but a quick search for ‘Contact Washing Machine’ should turn up detailed instructions.
Insufficient Activity Data
Determining a lead’s implicit engagement score requires activity data. This data is collected through web tracking, email tracking, and form submissions, and using it to score leads can give great insight into the activity and interest level of a prospect. However, if you’ve recently launched a new MA system, chances are you have not had time to collect this data. As mentioned before, scoring on blank data gives a score of ‘0’, which results in chronically low scores. There is only one cure for this: time. It’s best to skip activity-based scoring until you have the data to produce accurate scores. As I’ll discuss below, additional scoring criteria can be added later.
Set it and Forget it
Your data will change over time. Your prospects’ buying habits will change over time. Lead scoring uses your data to predict buying habits, so your lead scoring must also change over time.
Be sure to monitor the success of the leads you score. Are they converting at an acceptable rate? How can you improve that rate? Are leads that score high on particular criteria failing to convert? Keep challenging and improving your scoring model. Start simple, experiment, and add more complex criteria over time. The model will be dynamic and require constant improvement, growing in both complexity and accuracy.
Using lead scoring I was once able to reduce the number of leads passed to sales so much that the sales team was cut by 50% with no reduction in revenue (don’t worry – the sales reps were moved to a different product line and we’re still friends!). That’s the type of efficiency gain you can get from lead scoring. The catch is you need to lay the foundation with quality data, knowledge, and resources before you jump in.