Moz Local Search Ranking Factors 2017
For a number of years now, David Mihm has headed up one of the most important surveys in the local search marketing industry: the Local Search Ranking Factors survey. David co-founded GetListed.org, which he sold to Moz in November 2012.
This year’s Local Search Ranking Factors marks at least one significant change: David Mihm has handed over the data collection, analysis, and publication of the survey results to me, Darren Shaw (official announcement).
Thank you, David, for trusting me with this important industry resource. It is an honor to follow in your footsteps with this, and I hope to live up to the high standards you have set for it year after year.
My apologies to the community for the delay between the last Local Search Ranking Factors survey (September 24th, 2015) and this one. While David passed the reins to me in the summer of 2016, it has taken me this long to get everything organized and put together. I now have a much deeper appreciation for the amount of work David has invested in this for the past eight years. 🙂
Changes Made to the Survey
I have kept David’s survey style mostly intact, aside from the following 5 changes:
1) Foundational factors versus the competitive difference-maker factors
Many of the local search ranking factors are “foundational,” in that they are needed to have any chance at showing in the local results, but continuing to focus on them isn’t going to move the needle (proper GMB categories, for example). On the other hand, many of the factors can be considered “competitive difference-makers” in that continuing to invest in them will push your local rankings further.
By surveying the participants on which factors are foundational and which factors are competitive difference-makers, I’m hoping to provide some guidance on what to focus on in your ongoing local search work, after you have laid down the proper foundation.
2) Changes in approach to local search since the Possum update
Has the Possum update had much of an impact on anyone’s approach to local search? Here I ask participants to rank the top 5 factors they’re focusing on more since Possum, and the top 5 factors they’re focusing on less.
3) Breaking down citation consistency into multiple factors
How far do you need to go with citation consistency? Do you need to spend hours and hours hunting down and fixing ALL incorrect citations that exist on the web? For some businesses that have been around for a long time and have gone through many name, address, and phone number changes, this could mean hundreds or thousands of incorrect listings to clean up. Do you just do the top 10 sites? The top 30?
To answer this, I removed “Consistency of Structured Citations” as a general factor and replaced it with these 4 new factors:
- Consistency of Citations on the Primary Data Sources (aggregators in the US and primary data sources in other countries)
- Consistency of Citations on Tier 1 Citation Sources (the top 5 to 10 most prominent structured citation sources in the country)
- Consistency of Citations on Tier 2 Citation Sources (the next 10 to 50 most prominent structured citation sources in the country)
- Consistency of Citations on Tier 3 Citation Sources (the hundreds of other business listing sites out there)
4) Expanded commentary by asking direct questions
Each year of the survey, I find that the real gold can be found by reading the many insightful comments that participants provide. Phil Rozek suggested the excellent idea that I could encourage more commentary by prompting with questions. I tried to leave the questions open-ended enough to get a broad range of answers, and it appears to have worked well, since I’ve ended up with 33 pages of incredible insights from the best in the business.
5) Factors dropped and factors added
There were a total of 115 ranking factors and 27 negative ranking factors in the 2015 survey. Some of these factors just aren’t relevant anymore (for example, you can no longer edit the description on your Google listing), and some of them were just so obscure that they never made anyone’s top 20 list anyway (“Number of +1s on Website”). Also, many new factors that we’re seeing these days weren’t on the list; I went through all the factors, removing 32 of them and adding 38.
For anyone interested, you can see the full list of added, removed, and updated factors here.
Local Search Terminology
Is it called a snack pack, a local pack, a pak, or something else? I’m hoping to help standardize the terminology used across the industry, particularly with the pack types. I can’t think of a better place to define these than on the Local Search Ranking Factors Survey results.
Google My Business Listing
Your primary listing at Google, editable in the GMB dashboard and publicly accessible in multiple locations.
GMB Landing Page
The page that a GMB listing links to. Usually the homepage or a location page. (example)
Local Pack
The regular local 3-pack that appears for most local search terms. (example)
Local ABC Pack
A local 3-pack with A, B, and C to the left of each result. No review stars, ratings, or counts appear for this type. This pack type is returned for branded terms such as “Starbucks” and, inexplicably, for storage and gas station terms. (example)
Local Snack Pack
This style of local 3-pack appears for dining, hospitality, and entertainment terms. Results have a photo, no phone number, and no links to the website. (example)
Local Sponsored Pack
The special pack type that is currently appearing in San Diego for plumbers and locksmiths. It appears in addition to the regular local pack. (example 1) There are also these sponsored pack types appearing for home services businesses in the San Francisco area: (example 2)
Local Finder
The complete list of local results that appears when the “More places” link at the bottom of a local pack is clicked. (example)
The 2017 survey is structured into five primary sections:
- Thematic Ranking Signals
- Specific Ranking Factors in Local Pack/Finder and Local Organic Results
- Foundational vs. Competitive Ranking Factors
- Impact of the Possum Update
- Negative Ranking Factors
I. General Ranking Factors
In this section, participants are asked, “In your opinion, to what extent do each of the following thematic clusters contribute to rankings across result types at Google?” They then enter a percentage of influence for each of these 8 thematic areas, for both local pack/finder results and local organic results:
- Google My Business signals (proximity, categories, keyword in business title, etc.)
- Citation signals (IYP/aggregator NAP consistency, citation volume, etc.)
- On-page signals (presence of NAP, keywords in titles, domain authority, etc.)
- Link signals (inbound anchor text, linking domain authority, linking domain quantity, etc.)
- Review signals (review quantity, review velocity, review diversity, etc.)
- Social signals (Google engagement, Facebook engagement, Twitter engagement, etc.)
- Behavioral/mobile signals (click-through rate, mobile clicks-to-call, check-ins, etc.)
- Personalization (search history, search location, etc.)
The results here give us a sense of which general ranking factor areas are more important than others.
II. Specific Ranking Factors
In part A of this section, I asked the experts to rank the top 20 individual ranking factors (out of a total list of 113) that have the biggest impact on pack/finder rankings.
In part B of this section, I asked them to rank the top 20 factors from the same list, only this time to rank them based on impact on localized organic rankings.
Results were then tabulated via inverse scoring, where the number one-ranked factor received the most “points” for that question, and the lowest-ranked factor received the fewest points. (The factors ranking outside the top 20 for all respondents ended up with zero points.)
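The survey itself doesn't publish its tally script, but the inverse-scoring scheme described above is straightforward to sketch. The following is a minimal illustration, not the actual methodology code, and the factor names in the sample ballots are made up for the example:

```python
from collections import defaultdict

def tally_inverse_scores(ballots, list_size=20):
    """Tally ranked survey answers with inverse scoring.

    For a top-N question, the factor a respondent ranks #1 earns N
    points, #2 earns N - 1, and so on down to 1 point for rank N.
    Factors outside every respondent's top N are never credited,
    so they end up with zero points.
    """
    scores = defaultdict(int)
    for ranked_factors in ballots:
        for position, factor in enumerate(ranked_factors, start=1):
            scores[factor] += list_size - position + 1
    return dict(scores)

# Illustrative ballots: each respondent submits an ordered top-N list.
ballots = [
    ["Physical proximity", "Proper GMB categories", "Review quantity"],
    ["Proper GMB categories", "Physical proximity", "Inbound links"],
]
print(tally_inverse_scores(ballots, list_size=20))
```

With a top-20 question, a factor ranked #1 by one respondent and #2 by another would score 20 + 19 = 39 points, which is how the aggregate rankings in the results tables are produced.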
III. Foundational vs Competitive Factors
In this section, I asked the experts to rank the 10 factors they think are the most important foundational ranking factors, and to rank the 10 factors they think are the biggest competitive difference-makers.
Results were then tabulated via inverse scoring, where the #1 ranked factor received the most “points” for that question, and the lowest-ranked factor received the fewest points. (The factors ranking outside the top 10 for all respondents ended up with zero points.)
IV. Impact of the Possum Update
Here, I asked the experts to rank the five factors they were paying more attention to since the Possum update, and the five factors they were paying less attention to since the update.
Results were then tabulated via inverse scoring, where the #1 ranked factor received the most “points” for that question, and the lowest-ranked factor received the fewest points. (The factors ranking outside the top 5 for all respondents ended up with zero points.)
V. Negative Ranking Factors
In this section, I asked the experts to rank 34 negative factors in order from most damaging to most benign.