Goddamnit Dan, stop making our lives harder!

Sorry you feel that way, random internet person, but hopefully this post makes your life easier!

Anyway, this tweet by friend of the show John-Henry Scherck got me thinking:

Are these tools trustworthy in the B2B SaaS space? In the thread above, Russ Jones, who works on Keyword Explorer and other things at Moz, had this to say:

So let’s all agree that this is a very difficult problem to solve and that nobody is perfect. But since these tools are out there selling their data as meaningful, that data needs to be tested.

The Test

So here is what I did:

1) Took a year’s worth of warehoused Google Search Console data for a B2B SaaS client (calendar year 2019) and calculated the average number of monthly impressions for every term that got a click in our data set. This is our source of truth for “Monthly Search Volume.” Whether it’s true in any absolute sense is debatable, but it definitely means more to us and our clients than 3rd-party tool data.

2) Exported the keywords this site was ranking for from SEMrush and Ahrefs. These exports include the monthly search volume, as calculated by each tool, for every term the site is ranking for. SEMrush had about 1,000 more terms than Ahrefs for this site.

3) Then I took the overlapping keywords among all 3 data sets (GSC/Ahrefs/SEMrush) and ran them through Keywordtool.io to get its search volume numbers.

4) Created a histogram to represent the distribution of how far each tool varied from the “source of truth.” To do this, we calculated the difference using a modified growth calculation (a sketch of the whole pipeline follows this list). If you weren’t already reading the post I would tell you, “You won’t even guess what happened next…”
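For the nerds who want to follow along, here’s a minimal sketch of those four steps in pandas. The file names and column labels are hypothetical, and the snippet uses a plain percent-difference growth formula as a stand-in for our modified version, so treat it as a starting point rather than the exact script we ran:

```python
import pandas as pd

# Hypothetical file names and columns -- your exports will look different.
gsc = pd.read_csv("gsc_2019.csv")            # query, month, impressions, clicks
semrush = pd.read_csv("semrush.csv")         # keyword, volume
ahrefs = pd.read_csv("ahrefs.csv")           # keyword, volume
kwtool = pd.read_csv("keywordtool_io.csv")   # keyword, volume

# Step 1: average monthly impressions for every query that earned a click in 2019.
clicked = gsc.loc[gsc["clicks"] > 0, "query"].unique()
truth = (
    gsc[gsc["query"].isin(clicked)]
    .groupby("query")["impressions"]
    .mean()
    .rename("gsc_volume")
    .reset_index()
)

# Steps 2-3: keep only the keywords that show up in all four data sets.
merged = truth
for name, df in [("semrush", semrush), ("ahrefs", ahrefs), ("kwtool", kwtool)]:
    merged = merged.merge(
        df.rename(columns={"keyword": "query", "volume": name}), on="query"
    )

# Step 4: percent difference from the GSC source of truth, per tool.
# A plain growth formula stands in for our modified version here.
for tool in ["semrush", "ahrefs", "kwtool"]:
    merged[f"{tool}_diff_pct"] = (merged[tool] - merged["gsc_volume"]) / merged["gsc_volume"] * 100
```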

But Dan, why didn’t you use Moz’s Keyword Explorer?

Great question! I did not use Moz’s Keyword Explorer because it is not available as a standalone product, so we don’t have it in our toolbox. Someone at Moz, please change this!

At the end of the day, the set of ~10,000 keywords that drove traffic in 2019 was whittled down to 505 keywords that appear in all four data sets. Wow!

The Results

The histogram above does what I think is a good job of summarizing the data. That’s because I’m a giant nerd. The y-axis is the # of keywords. The x-axis represents buckets of keywords that are a certain percentage of monthly search volume away from the source of truth. If you want to nitpick my methodology, feel free to hit me up (@danleibson) on Twitter. Hat tip to Alexis Sanders for the visualization.
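If you want to recreate the chart from the merged data frame in the earlier sketch, something like the matplotlib snippet below gets you there. The bucket edges here are my assumption; pick whatever bins make sense for your data:

```python
import matplotlib.pyplot as plt

# Assumed bucket edges -- the actual chart's bins may differ.
bins = [-250, -100, -50, -25, -10, 0, 10, 25, 50, 100, 250]

fig, ax = plt.subplots()
for tool in ["semrush", "ahrefs", "kwtool"]:
    # Clip extreme outliers into the end buckets so they stay visible.
    ax.hist(merged[f"{tool}_diff_pct"].clip(-250, 250), bins=bins, alpha=0.5, label=tool)

ax.set_xlabel("% difference from GSC monthly search volume")
ax.set_ylabel("# of keywords")
ax.legend()
plt.show()
```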

I think one of the most fascinating things is the distribution between the large buckets on both ends. Those are the keywords that are off by the largest amounts, and you can see that the direction of the error (over or under) varies.

I think this makes it hard to even discuss the directional accuracy of 3rd party keyword data. Just to illustrate this, look at where all 3 tools fell in regard to overestimating or underestimating search volume.

If you don’t know whether it’s off by +/- 250%, or in which direction, you can’t really call it directionally accurate…

The Takeaways

To me the biggest takeaways are as follows:

1) 3rd-party tools are only to be used for keyword discovery, unless you are looking at the top of the Internet (specifically massive consumer queries), desperate, or both. In regard to the tweet from Russ I shared earlier, I think it’s up to the tool providers to answer that question for us in a meaningful way, as they are the ones asking us to pay to use their data.

2) You have to have ALL THE 3rd-PARTY TOOLS for keyword discovery. One of the things that stood out to me is that there were only 3 terms that were off by <10% in BOTH the SEMrush and Ahrefs data sets (see the snippet after this list). That tells me that, in addition to indexing different terms for this site, the two tools have very different methods of calculating keyword volume. Their data sets are just too different to rely on only one. As with using multiple link index tools, if you don’t, you’ll have far less decision-making information about what opportunities are out there.

3) No-volume and low-volume query strategies are money. The site I’m using as an example generated just over $2M in ARR in 2019, almost exclusively through this strategy. That means a tight analytics setup, one where you can show the business impact in revenue, pipeline, etc. from specific pages and content, is a must.
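That “only 3 terms” figure from takeaway 2 falls out of the merged frame in a single filter. Here’s roughly what the check looks like, using the assumed column names from the earlier sketch:

```python
# Keywords where both SEMrush and Ahrefs land within 10% of the GSC number.
close_in_both = merged[
    (merged["semrush_diff_pct"].abs() < 10) & (merged["ahrefs_diff_pct"].abs() < 10)
]
print(len(close_in_both))  # only 3 for the data set behind this post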

I reached out to John-Henry for comment, and this is what he had to say:

“Unfortunately, the data isn’t all that surprising. After helping numerous B2B software companies map out potential paths to generate inbound traffic, we have learned to lean into Google’s auto-suggest data for new content initiatives (where GSC data isn’t available) and ignore the no- and low-volume, inaccurate metrics provided by these tools for mid- and bottom-of-the-funnel keywords.”

I also asked Alexis what she thought of the data:

“Estimates of organic traffic start with search volume, a CTR-curve (where each position is allocated a certain percentage of the click bounty), and then some clickstream data folded in (from sources like Jumpshot… RIP…). It’s always going to be an estimate.”
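To make that concrete, here’s a toy version of the estimate Alexis describes. The CTR numbers are made-up assumptions for illustration, not any tool’s actual curve:

```python
# Toy CTR curve: assumed click-through rate for each ranking position.
# These values are illustrative only -- not any tool's real curve.
CTR_CURVE = {1: 0.28, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05}

def estimated_monthly_clicks(search_volume: float, position: int) -> float:
    """Search volume x positional CTR: the core of every traffic estimate."""
    return search_volume * CTR_CURVE.get(position, 0.02)

print(estimated_monthly_clicks(1000, 3))  # ~100 clicks -- and still just an estimate
```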

Buuuuuuuut that doesn’t mean that it’s not useful directionally. As the adage from George E. P. Box (founder of many models and tests) goes:

“All models are wrong, but some are useful.”

In other words, “SEOs need to use what they have.”

How will this impact how you think about or use search volume data from 3rd party tool providers? Will you start warehousing and using calculated Google Search Console data? Get at the discussion here or on Twitter (dot) com!
