It was a pretty big deal to the world of local search when Google removed “Search Tools” from their search engine. It may seem like NBD, but everyone in local search knew that was the best way to see what a local search looked like in a location different from your own. After it was eliminated, there was a mad dash to find the best way to spoof searches from other locations. While there were a few ways to do this, setting the UULE search parameter to the city quickly became the preferred method. The tl;dr on UULE is that it is a URL parameter you can pass into a search to spoof location. Unfortunately, I have some bad news, folks.
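For the curious, the UULE value most tools pass is, as the SEO community has reverse-engineered it (this format is not officially documented by Google), a web-safe base64 encoding of a small protobuf-style message that carries Google’s canonical name for the location. A minimal sketch of that encoding in Python:

```python
import base64

def uule_for(canonical_name: str) -> str:
    """Build a UULE value for a Google canonical location name.

    Field layout per community reverse engineering (not official):
    role = 2, producer = 32, then the canonical name as a
    length-delimited string, all web-safe base64'd with a "w " prefix
    (the space becomes "+" once the value is URL-encoded).
    """
    name_bytes = canonical_name.encode("utf-8")
    # protobuf wire bytes: field 1 = 2, field 2 = 32, field 4 = name
    payload = b"\x08\x02\x10\x20\x22" + bytes([len(name_bytes)]) + name_bytes
    return "w " + base64.urlsafe_b64encode(payload).decode("ascii").rstrip("=")
```

For example, `uule_for("Houston,Texas,United States")` yields a value starting with `w CAIQICI`, which is the familiar prefix you see in UULE strings in the wild. Note the single length byte means this sketch only handles canonical names shorter than 128 characters.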
If you are using UULE, there is a good chance you are borking your local search results. This applies to basically ALL rank tracking solutions. While the problems with rank tracking are known, to me this is an added wrinkle based on how Google is treating proximity.
The implicit geo-location search query has a much tighter map, and also a more proximate grouping of businesses. This is because Google treats implicit and explicit geo-location queries differently when it comes to how they weigh proximity. Combine this with their ability to better suss out the specific location of searchers and, in my opinion, you have a problem.
Since this has potential ramifications across rank tracking, competitive intelligence tools, etc., I looked at a few more examples, but the results are pretty clear:
No, but seriously, this is borked
Check out these results for ‘dentist houston’ in Houston:
Now check out how many UULE codes there are for Houston, based on zip code:
This practice of setting UULE to the city has been the more or less default method lots of rank tracking providers have chosen for reporting on local search results. The radius the queries report on, and the rankings they represent, won’t match what someone doing a native Google search sees. This means the best way to determine directionally accurate rankings is to track keywords at multiple search points in a city and create a composite ranking. Stay tuned, as I will have a how-to guide on setting this up in the coming weeks.
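To make “composite ranking” concrete, here is one simple way to combine the results from multiple search points. The averaging approach and the penalty value for an unranked point are my own illustrative choices, not a standard:

```python
from statistics import mean

def composite_rank(point_ranks, not_found_penalty=21):
    """Average a business's rank across several geo sample points.

    point_ranks: the rank at each search point, or None if the business
    did not appear at that point.
    not_found_penalty: hypothetical stand-in rank for "not found",
    e.g. one position past the deepest spot you track.
    """
    return mean(r if r is not None else not_found_penalty
                for r in point_ranks)

# Ranked 3rd, 5th, unranked, and 2nd at four points around the city:
composite_rank([3, 5, None, 2])  # → 7.75
```

A weighted version (weighting points by how much of your customer base searches from each area) would be a natural refinement.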
Okay, things are borked. What does that mean?
Well first, it means rank tracking at scale becomes complicated and expensive, especially for something that is at best directionally accurate. I mean, let’s do the math. Assume you have 2,000 locations and are tracking 50 keywords per location. Now on top of that, you want to track 5 points per location. That is 500,000 keywords you are tracking on a potentially daily basis, or 182,500,000 keyword checks a year. Even at half a penny per check, $0.005, that is almost $1,000,000 per year! Even for a 500-location business, you are looking at a cool $250,000 annual cost. But wait, there is more!
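As a quick sanity check on that back-of-envelope math (the dollar figures in the text are rounded from these):

```python
def annual_cost(locations, keywords_per_location, points_per_location,
                checks_per_year=365, cost_per_check=0.005):
    """Yearly spend on rank checks at a given per-check price."""
    daily_checks = locations * keywords_per_location * points_per_location
    return daily_checks * checks_per_year * cost_per_check

annual_cost(2000, 50, 5)  # → 912500.0  ("almost $1,000,000")
annual_cost(500, 50, 5)   # → 228125.0  (roughly the "$250,000")
```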
On top of that, this is really only available through most rank trackers’ APIs, so you have to build your own reporting solution and maintain its functionality, which means development resources, etc. This adds an even greater complication to an already complicated problem. So, while there is still value for those willing to brave the waters, it’s not always easy to justify the cost.
Additionally, it forces agencies and others to grapple with the ever-complicated world of local KPIs. Most of our clients don’t use ranking as their most important KPI right now, but I still regularly get asked about rank trackers and the like. How about all of you?