Pop-Ups, Overlays, Modals, Interstitials, and How They Interact with SEO – Whiteboard Friday

[Whiteboard image: Pop-ups, modals, overlays, interstitials, and how they work with SEO]

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we're chatting about pop-ups, overlays, modals, interstitials, and all things like them. They have specific kinds of interactions with SEO. In addition to Google having some guidelines around them, they can also change how people interact with your website, which can adversely or positively affect you accomplishing your goals, SEO and otherwise.

Types

So let's walk through what these elements, these design and UX elements, do, how they work, and best practices for how we should be thinking about them and how they might interfere with our SEO efforts.

Pop-ups

So, first off, let's talk specifically about what each element is. A pop-up, OK, there are a few kinds. There are pop-ups that happen in new windows. New window pop-ups are, basically, new window, no good. Google hates those. They're essentially against them. Many browsers block them automatically. Chrome does. Firefox does. In fact, users dislike them too. There are still some spammy and sketchy sites out there that use them, but, in general, bad news.

Overlays

When we're talking about a pop-up that happens in the same browser window, essentially just a visual element, that's often also referred to as an overlay. So, for the purposes of this Whiteboard Friday, we'll call that an overlay. An overlay is basically like this, where you have the page's content and there's some smaller element, a piece, a box, a window, a visual of some kind that comes up and essentially says, maybe it says, "Join my email newsletter," and then there's a place to enter your email, or, "Get my book now," and you click that and get the book. Those kinds of overlays are pretty common on the web, and they don't create quite the same problems that pop-ups do, at least from Google's perspective. However, and we'll discuss this later, there are some issues around them, especially with mobile.

Modals

Modals tend to be windows of interaction, tend to be elements of use. So lightboxes for images are a very popular kind of modal. A modal is something where you are doing work inside that new box rather than in the content that's underneath it. So a sign-in form that overlays, that pops up over the rest of the content, but that doesn't allow you to engage with the content underneath it, that would be considered a modal. Generally speaking, most of the time, these aren't a problem, unless they're something like spam, or advertising, or something that's taking you out of the user experience.

Interstitials

Then finally, interstitials are essentially, and many of these can also be called interstitial experiences, but a classic interstitial is something like what Forbes.com does. When you visit a Forbes article for the first time, you get this: "Welcome. Our sponsor of the day is Brawndo. Brawndo, it's what plants need." Then you can continue after a certain number of seconds. These really piss people off, myself included. I really hate the interstitial experience. I understand it's an advertising thing. But, yeah, Google hates them too. Not quite enough to kick Forbes out of their SERPs entirely yet, but, fingers crossed, it'll happen sometime soon. They have certainly removed plenty of other folks who've gone with invasive or overly heavy interstitials over the years and made those pretty tough.

What are the factors that matter for SEO?

A) Timing

Well, it turns out timing is a big one. So when the element appears matters. Essentially, if the element shows up initially upon page load, Google will consider it differently than if it shows up after a few minutes. So, for example, if you have a "Register Now" overlay that pops up the second you visit the page, that's going to be treated differently than something that happens when you're 80% of the way through, or have just finished scrolling, an entire blog post. Those will get treated very differently. Or it may have no effect at all on how Google treats the SEO, and then it really comes down to how users react.

Then there's how long it lasts as well. So for interstitials, especially those advertising interstitials, there are some rules governing those with folks like Forbes. There are also some issues around an overlay that can't be closed and how long a window can appear, especially if it shows advertising and those kinds of things. In general, obviously, shorter is better, but you can get into trouble even with very short ones.
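
To make the timing point concrete, here's a minimal sketch (browser TypeScript) of deferring an overlay until the visitor has scrolled most of the way through the page or has spent a while on it, rather than firing it on load. The thresholds and the showNewsletterOverlay() function are hypothetical placeholders.

```typescript
// Minimal sketch: defer an overlay until the reader has scrolled ~80% of the
// page, or has been on it for 30 seconds, whichever comes first.
// showNewsletterOverlay() is a hypothetical function defined elsewhere.

declare function showNewsletterOverlay(): void;

let overlayShown = false;

function maybeShowOverlay(): void {
  if (overlayShown) return;
  overlayShown = true;
  showNewsletterOverlay();
}

// Trigger once the reader has scrolled roughly 80% of the document.
window.addEventListener("scroll", () => {
  const scrolled = window.scrollY + window.innerHeight;
  const total = document.documentElement.scrollHeight;
  if (scrolled / total >= 0.8) {
    maybeShowOverlay();
  }
});

// Fallback trigger after 30 seconds on the page.
window.setTimeout(maybeShowOverlay, 30_000);
```

The exact thresholds are a judgment call; the point is simply that the element appears after the visitor has had a chance to engage with the content, not on top of it.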

B) Interaction

Can that element be easily closed, and does it interfere with the content or readability? So Google's new mobile guidelines, I believe as of just a few months ago, now state that if an overlay or a modal or something like that interferes with a visitor's ability to read the actual content on the page, Google may penalize those, or remove their mobile-friendly tags and remove any mobile-friendly benefit. That's obviously quite concerning for SEO.

C) Content

So there's an exception, or an exclusion, to a lot of Google's rules here, which is if you have an element that is essentially asking for someone's age, or asking for some type of legal consent, or giving a warning about cookies, which is very popular in the EU, of course, and the UK because of the legal requirements around saying, "Hey, this website uses cookies," and you have to agree to it, those kinds of things, that actually gets around Google's issues. So Google will not give you a hard time if you have an overlay, interstitial, or modal that says, "Are you of legal drinking age in your country? Enter your date of birth to continue." They're not going to necessarily penalize those kinds of things.

Advertising, on the other hand, advertising could get you into more trouble, as we've discussed. If it's a call to action for the website itself, again, that could go either way. If it's part of the user experience, generally speaking you're okay there. Meaning something like a modal where you get to a website and you say, "Hey, I want to leave a comment," and so there's a modal that makes you log in, that type of modal. Or you click on an image and it shows you a larger version of that image in a modal, again, no problem. That's part of the user experience.

D) Conditions

Conditions matter as well. So whether it's triggered from SERP visits or not, meaning if you have an exclusionary protocol in your interstitial, your overlay, your modal that says, "Hey, if someone's visiting from Google, don't show this to them," or "If someone's visiting from Bing, or someone's visiting from DuckDuckGo, don't show this to them," that can change how the search engines view it as well.

It's also the case that this can change if you only show the element to cookied, logged-in, or logged-out types of users. Now, logged-out types of users means that everybody who comes from a search engine could or will get it. But for logged-in users, for example, you can imagine that if you visit a page on a social media site and there's a modal or an overlay that includes some notification about activity you've already been performing on the site, that becomes more a part of the user experience. That isn't necessarily going to harm you.

Where it can hurt is the other way around, where you get visitors from search engines, they're logged out, and you require them to log in before seeing the content. Quora had a big problem with this for a long time, and they seem to have mostly resolved it through a variety of measures, and they're fairly sophisticated about it. But Facebook still struggles with this, because for a lot of their content they require that you log in before you can ever view or access it. That does keep some of their results out of Google, or certainly ranking lower.

E) Engagement impact

I think this is what Google is ultimately trying to measure and what they're essentially trying to say with "Hey, this is why we have these issues around this," which is: if you're hurting the click-through rate or you're creating pogo-sticking, meaning more people are clicking to your website from Google and then immediately clicking the back button when one of these things appears, that's a sign to Google that you have provided a poor user experience, that people are not willing to jump through whatever hoop you've created for them to access your content, and that suggests they don't want to get there. So this is sort of the ultimate thing that you should be measuring. Some of these other factors can still hurt you even when these metrics are okay, but this is the big one.

Best practices

So here are some best practices around using each of these kinds of elements on your website. I would strongly urge you to avoid elements that are significantly harming UX. If you're willing to take a small sacrifice in user experience in exchange for a lot of value, because you capture people's emails or you get more engagement of other kinds, okay. But this is something I'd watch.

There are three or four metrics that I'd urge you to check to determine whether you're doing the right thing. Those are:

  • Bounce rate
  • Browse rate
  • Return visitor rates, meaning the percentage of people who come back to your site again and again, and
  • Time on site after the element appears

So those four will help tell you whether you are truly interfering badly with user experience.
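
One practical way to watch those metrics is to record when the element actually appears, so sessions that saw it can be segmented in your analytics. A minimal sketch, assuming Google Analytics 4 via gtag.js is already loaded on the page; the event name and parameters are made up for illustration.

```typescript
// Minimal sketch: fire a custom analytics event when the overlay appears, so
// bounce rate, time on site, etc. can be compared for sessions that saw it.
// Assumes gtag.js (GA4) is already on the page; the event name is arbitrary.

declare function gtag(
  command: "event",
  eventName: string,
  params?: Record<string, unknown>
): void;

function trackOverlayShown(variant: string, trigger: string): void {
  gtag("event", "overlay_shown", {
    overlay_variant: variant, // e.g., "newsletter_slide_in"
    overlay_trigger: trigger, // e.g., "scroll_80_percent" or "30s_timer"
  });
}

// Call from wherever the overlay is actually displayed, for example:
// trackOverlayShown("newsletter_slide_in", "scroll_80_percent");
```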

On mobile, make sure that your critical content is not covered up, that the reading experience, the browsing experience, isn't hidden by one of these elements. And please, whatever you do, make these elements easy and obvious to dismiss. That's part of Google's guidelines around this, but it's also a best practice, and it will certainly help your user experience metrics.

Only choose to keep one of these elements if you're finding that the sacrifice... and there's almost always a sacrifice cost, like you will hurt bounce rate or browse rate or return visitor rate or time on site. You will hurt it. The question is, is it a small enough hurt in exchange for enough gain, and that's the trade-off where you have to decide whether it's worth it. I think if you're hurting visitor interaction by a few seconds on average per visit, but you're getting 5% of your visitors to give you their email address, that's probably worth it. If it's more like 30 seconds and 1%, not so good.

Consider removing the elements from triggering when the visit comes from a search engine. So if you're finding that, hey, this works fine and great, but you're having issues around these search guidelines, you could consider just removing the element from any visit that comes from a search engine and instead placing it in the content itself, or letting it happen on a second page load, assuming that your browse rate is decently high. That's a fine way to go as well.
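
If you go that route, the check itself can be very simple. Here's a minimal browser TypeScript sketch that suppresses the element when document.referrer looks like a search engine; the hostname list and the showOverlay() function are illustrative placeholders.

```typescript
// Minimal sketch: skip the overlay for visits arriving from a search engine.
// The hostname fragments and showOverlay() are illustrative placeholders.

declare function showOverlay(): void;

const SEARCH_ENGINE_HOSTS = ["google.", "bing.com", "duckduckgo.com", "yahoo."];

function isSearchReferral(referrer: string): boolean {
  if (!referrer) return false;
  try {
    const host = new URL(referrer).hostname;
    return SEARCH_ENGINE_HOSTS.some((fragment) => host.includes(fragment));
  } catch {
    return false; // Malformed referrer; treat it as not a search visit.
  }
}

if (!isSearchReferral(document.referrer)) {
  showOverlay();
}
// Otherwise, consider showing the element on a later pageview instead,
// e.g. by setting a sessionStorage flag that's checked on subsequent loads.
```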

If you're trying to get the most value out of these kinds of elements, it tends to be the case that the less common and less well-worn the visual element is, the more interaction and engagement it's going to get. But the other side of that coin is that it can create a more frustrating experience. So if people are unfamiliar with the overlay or modal or interstitial visual layout you've chosen, they might engage more with it. They might not dismiss it out of hand, because they're not used to it yet, but they might also get more annoyed by it. So, again, come back to looking at those metrics.

With that in mind, hopefully you will successfully, and not too harmfully to your SEO, be able to use these pop-ups, overlays, interstitials, modals, and all the other kinds of elements that interfere with user experience.

And we'll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!


There’s No Such Thing as a Site Migration

[Embedded tweet from Mark Cook (@thetafferboy), February 26, 2017: #ecomchat #seo pic.twitter.com/flgncLVJBT]


Migrations vary wildly in scope

As an SEO consultant and practitioner, I’ve been involved in more “site migrations” than I can remember or count — for charities, startups, international e-commerce sites, and even global household brands. Every one has been uniquely challenging and stressful.

In each case, the businesses involved have underestimated (and in some cases, increased) the complexity, the risk, and the details involved in successfully executing their “migration.”

As a result, many of these projects negatively impacted performance and potential in ways that could have been easily avoided.

This isn’t a case of the scope of the “migration” being too big, but rather, a misalignment of understanding, objectives, methods, and priorities, resulting in stakeholders working on entirely different scopes.

The migrations I’ve experienced have varied from simple domain transfers to complete overhauls of server infrastructure, content management frameworks, templates, and pages — sometimes even scaling up to include the consolidation (or fragmentation) of multiple websites and brands.

In the minds of each organization, however, these have all been “migration” projects despite their significantly varying (and poorly defined) scopes. In each case, the definition and understanding of the word “migration” has varied wildly.

We suck at definitions

As an industry, we’re used to struggling with labels. We’re still not sure if we’re SEOs, inbound marketers, digital marketers, or just… marketers. The problem is that, when we speak to each other (and those outside of our industry), these words can carry different meaning and expectations.

Even amongst ourselves, a conversation between two digital marketers, analysts, or SEOs about their fields of expertise is likely to reveal that they have surprisingly different definitions of their roles, responsibilities, and remits. To them, words like “content” or “platform” might mean different things.

In the same way, “site migrations” vary wildly, in form, function, and execution — and when we discuss them, we’re not necessarily talking about the same thing. If we don’t clarify our meanings and have shared definitions, we risk misunderstandings, errors, or even offense.

Ambiguity creates risk

Poorly managed migrations can have a number of consequences beyond just drops in rankings, traffic, and performance. There are secondary impacts, too. They can also inadvertently:

  • Provide a poor user experience (e.g., old URLs now 404, or error states are confusing to users, or a user reaches a page different from what they expected).
  • Break or omit tracking and/or analytics implementations, resulting in loss of business intelligence.
  • Limit the size, shape, or scalability of a site, resulting in static, stagnant, or inflexible templates and content (e.g., omitting the ability to add or edit pages, content, and/or sections in a CMS), and a site which struggles to compete as a result.
  • Miss opportunities to benefit from what SEOs do best: blending an understanding of consumer demand and behavior, the market and competitors, and the brand in question to create more effective strategies, functionality and content.
  • Create conflict between stakeholders, when we need to “hustle” at the last minute to retrofit our requirements into an already complex project (“I know it’s about to go live, but PLEASE can we add analytics conversion tracking?”) — often at the cost of our reputation.
  • Waste future resource, where mistakes require that future resource is spent recouping equity resulting from faults or omissions in the process, rather than building on and enhancing performance.

I should point out that there’s nothing wrong with hustle in this case; that, in fact, begging, borrowing, and stealing can often be a viable solution in these kinds of scenarios. There’s been more than one occasion when, late at night before a site migration, I’ve averted disaster by literally begging developers to include template review processes, to implement redirects, or to stall deployments.

But this isn’t a sensible or sustainable or reliable way of working.

Mistakes will inevitably be made. Resources, favors, and patience are finite. Too much reliance on “hustle” from individuals (or multiple individuals) may in fact further widen the gap in understanding and scope, and positions the hustler as a single point of failure.

More importantly, hustle may only fix the symptoms, not the cause of these issues. That means that we remain stuck in a role as the disruptive outsiders who constantly squeeze in extra unscoped requirements at the eleventh hour.

Where things go wrong

If we’re to begin to address some of these challenges, we need to understand when, where, and why migration projects go wrong.

The root cause of all less-than-perfect migrations can be traced to at least one of the following scenarios:

  • The migration project occurs without consultation.
  • Consultation is sought too late in the process, and/or after the migration.
  • There is insufficient planned resource/time/budget to add requirements (or processes)/make recommended changes to the brief.
  • The scope is changed mid-project, without consultation, or in a way which de-prioritizes requirements.
  • Requirements and/or recommended changes are axed at the eleventh hour (due to resource/time/budget limitations, or educational/political conflicts).

There’s a common theme in each of these cases. We’re not involved early enough in the process, or our opinions and priorities don’t carry sufficient weight to impact timelines and resources.

Chances are, these mistakes are rarely the product of spite or of intentional omission; rather, they’re born of gaps in the education and experience of the stakeholders and decision-makers involved.

We can address this, to a degree, by elevating ourselves to senior stakeholders in these kinds of projects, and by being consulted much earlier in the timeline.

Let’s be more specific

I think that it’s our responsibility to help the organizations we work for to avoid these mistakes. One of the easiest opportunities to do that is to make sure that we’re talking about the same thing, as early in the process as possible.

Otherwise, migrations will continue to go wrong, and we will continue to spend far too much of our collective time fixing broken links, recommending changes or improvements to templates, and holding together bruised-and-broken websites — all at the expense of doing meaningful, impactful work.

Perhaps we can begin to answer to some of these challenges by creating better definitions and helping to clarify exactly what’s involved in a “site migration” process.

Unfortunately, I suspect that we’re stuck with the word “migration,” at least for now. It’s a term which is already widely used, which people think is a correct and appropriate definition. It’s unrealistic to try to change everybody else’s language when we’re already too late to the conversation.

Our next best opportunity to reduce ambiguity and risk is to codify the types of migration. This gives us a chance to prompt further exploration and better definitions.

For example, if we can say “This sounds like it’s actually a domain migration paired with a template migration,” we can steer the conversation a little and rely on a much better shared frame of reference.

If we can raise a challenge that, e.g., the “translation project” a different part of the business is working on is actually a whole bunch of interwoven migration types, then we can raise our concerns earlier and pursue more appropriate resource, budget, and authority (e.g., “This project actually consists of a series of migrations involving templates, content, and domains. Therefore, it’s imperative that we also consider X and Y as part of the project scope.”).

By persisting in labelling this way, stakeholders may gradually come to understand that, e.g., changing the design typically also involves changing the templates, and so the SEO folks should really be involved earlier in the process. By challenging the language, we can challenge the thinking.

Let’s codify migration types

I’ve identified at least seven distinct types of migration. Next time you encounter a “migration” project, you can investigate the proposed changes, map them back to these types, and flag any gaps in understanding, expectations, and resource.

You could argue that some of these aren’t strictly “migrations” in a technical sense (i.e., changing something isn’t the same as moving it), but grouping them this way is intentional.

Remember, our goal here isn't to neatly categorize all of the requirements for any possible type of migration. There are plenty of resources, guides, and lists which already try to do that.

Instead, we’re trying to provide neat, universal labels which help us (the SEO folks) and them (the business stakeholders) to have shared definitions and to remove unknown unknowns.

They’re a set of shared definitions which we can use to trigger early warning signals, and to help us better manage stakeholder expectations.

Feel free to suggest your own, to grow, shrink, combine, or bin any of these to fit your own experience and requirements!

1. Hosting migrations

A broad bundling of infrastructure, hardware, and server considerations (while these are each broad categories in their own right, it makes sense to bundle them together in this context).

If your migration project contains any of the following changes, you're talking about a hosting migration, and you'll need to explore the SEO implications (and development resource requirements) to make sure that changes to the underlying platform don't impact front-end performance or visibility. A quick before-and-after comparison of response headers, like the one sketched after this list, can help confirm nothing obvious has regressed.

  • You’re changing hosting provider.
  • You’re changing, adding, or removing server locations.
  • You’re altering the specifications of your physical (or virtual) servers (e.g., RAM, CPU, storage, hardware types, etc).
  • You’re changing your server technology stack (e.g., moving from Apache to Nginx).*
  • You’re implementing or removing load balancing, mirroring, or extra server environments.
  • You’re implementing or altering caching systems (database, static page caches, varnish, object, memcached, etc).
  • You’re altering the physical or server security protocols and features.**
  • You’re changing, adding or removing CDNs.***

*Might overlap into a software migration if the changes affect the configuration or behavior of any front-end components (e.g., the CMS).

**Might overlap into other migrations, depending on how this manifests (e.g., template, software, domain).

***Might overlap into a domain migration if the CDN is presented as/on a distinct hostname (e.g., AWS), rather than invisibly (e.g., Cloudflare).
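
As a practical aside (not part of the list above), one way to catch regressions from hosting, caching, or CDN changes is to diff the status codes and a few response headers for key URLs before and after the switch. A rough sketch in TypeScript, assuming Node 18+ for the global fetch; the URL list and header selection are placeholders.

```typescript
// Rough sketch: print the status code and a few cache/server-related headers
// for a handful of key URLs, so the output can be diffed before and after a
// hosting or CDN change. The URLs below are placeholders.

const URLS = [
  "https://www.example.com/",
  "https://www.example.com/blog/",
  "https://www.example.com/products/widget/",
];

const HEADERS_OF_INTEREST = ["server", "cache-control", "content-type", "x-cache"];

async function checkUrl(url: string): Promise<void> {
  const response = await fetch(url, { redirect: "manual" });
  const headers = HEADERS_OF_INTEREST.map(
    (name) => `${name}=${response.headers.get(name) ?? "-"}`
  ).join(" ");
  console.log(`${response.status} ${url} ${headers}`);
}

async function main(): Promise<void> {
  for (const url of URLS) {
    await checkUrl(url);
  }
}

main().catch(console.error);
```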

2. Software migrations

Unless your website is comprised of purely static HTML files, chances are that it’s running some kind of software to serve the right pages, behaviors, and content to users.

If your migration project contains any of the following changes, you’re talking about a software migration, and you’ll need to understand (and input into) how things like managing error codes, site functionality, and back-end behavior work.

  • You’re changing CMS.
  • You’re adding or removing plugins/modules/add-ons in your CMS.
  • You're upgrading or downgrading the CMS, or plugins/modules/add-ons (by a significant degree/major release).
  • You’re changing the language used to render the website (e.g., adopting Angular2 or NodeJS).
  • You’re developing new functionality on the website (forms, processes, widgets, tools).
  • You’re merging platforms; e.g., a blog which operated on a separate domain and system is being integrated into a single CMS.*

*Might overlap into a domain migration if you’re absorbing software which was previously located/accessed on a different domain.

3. Domain migrations

Domain migrations can be pleasantly straightforward if executed in isolation, but this is rarely the case. Changes to domains are often paired with (or the result of) other structural and functional changes.

If your migration project alters the URL(s) by which users are able to reach your website, or contains any of the following changes, then you're talking about a domain migration, and you need to consider how redirects, protocols (e.g., HTTP/S), hostnames (e.g., www/non-www), and branding are impacted. A lightweight redirect check, like the one sketched after this list, is a useful sanity test.

  • You’re changing the main domain of your website.
  • You’re buying/adding new domains to your ecosystem.
  • You’re adding or removing subdomains (e.g., removing domain sharding following a migration to HTTP2).
  • You’re moving a website, or part of a website, between domains (e.g., moving a blog on a subdomain into a subfolder, or vice-versa).
  • You’re intentionally allowing an active domain to expire.
  • You’re purchasing an expired/dropped domain.
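
Here's the kind of redirect check mentioned above: a rough TypeScript sketch (again assuming Node 18+ for the global fetch) that walks a list of old-to-new URL pairs and flags anything that doesn't 301 to the expected destination. The mapping is a placeholder, and only the first hop is checked; chained redirects, protocol/host variants, and query-string handling deserve their own passes.

```typescript
// Rough sketch: verify that old URLs 301-redirect to their intended new
// destinations after a domain migration. The mapping below is a placeholder,
// and only the first redirect hop is checked. Note that some servers return
// relative Location values, which would need normalizing before comparison.

const REDIRECT_MAP: Record<string, string> = {
  "https://blog.example.com/post-1/": "https://www.example.com/blog/post-1/",
  "https://blog.example.com/post-2/": "https://www.example.com/blog/post-2/",
};

async function checkRedirect(oldUrl: string, expected: string): Promise<void> {
  const response = await fetch(oldUrl, { redirect: "manual" });
  const location = response.headers.get("location") ?? "(none)";

  if (response.status === 301 && location === expected) {
    console.log(`OK   ${oldUrl} -> ${location}`);
  } else {
    console.log(`FAIL ${oldUrl} [${response.status}] -> ${location} (expected ${expected})`);
  }
}

async function main(): Promise<void> {
  for (const [oldUrl, expected] of Object.entries(REDIRECT_MAP)) {
    await checkRedirect(oldUrl, expected);
  }
}

main().catch(console.error);
```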

4. Template migrations

Chances are that your website uses a number of HTML templates, which control the structure, layout, and peripheral content of your pages. The logic which controls how your content looks, feels, and behaves (as well as the behavior of hidden/meta elements like descriptions or canonical URLs) tends to live here.

If your migration project alters elements like your internal navigation (e.g., the header or footer), elements in your <head>, or otherwise changes the page structure around your content in the ways I've outlined, then you're talking about a template migration. You'll need to consider how users and search engines perceive and engage with your pages, how context, relevance, and authority flow through internal linking structures, and how well-structured your HTML (and JS/CSS) code is. A quick spot check of key <head> elements, like the one sketched after this list, can catch regressions early.

  • You’re making changes to internal navigation.
  • You’re changing the layout and structure of important pages/templates (e.g., homepage, product pages).
  • You’re adding or removing template components (e.g., sidebars, interstitials).
  • You’re changing elements in your <head> code, like title, canonical, or hreflang tags.
  • You’re adding or removing specific templates (e.g., a template which shows all the blog posts by a specific author).
  • You’re changing the URL pattern used by one or more templates.
  • You're making changes to how device-specific rendering works.*

*Might involve domain, software, and/or hosting migrations, depending on implementation mechanics.
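
For the <head> changes in particular, a quick spot check of a few representative template URLs can catch missing or mangled tags early. A rough sketch under the same Node/TypeScript assumptions as above; the URLs are placeholders and the regex extraction is deliberately crude.

```typescript
// Rough sketch: fetch a few representative template URLs and print their
// <title> and rel="canonical" values, to eyeball before and after a template
// migration. Regex extraction is crude, but fine for a quick spot check.

const TEMPLATE_URLS = [
  "https://www.example.com/",                   // homepage template
  "https://www.example.com/products/widget/",   // product template
  "https://www.example.com/blog/example-post/", // blog post template
];

function extract(html: string, pattern: RegExp): string {
  const match = html.match(pattern);
  return match ? match[1].trim() : "(missing)";
}

async function inspect(url: string): Promise<void> {
  const html = await (await fetch(url)).text();
  const title = extract(html, /<title[^>]*>([\s\S]*?)<\/title>/i);
  const canonical = extract(
    html,
    /<link[^>]+rel=["']canonical["'][^>]+href=["']([^"']+)["']/i
  );
  console.log(`${url}\n  title:     ${title}\n  canonical: ${canonical}\n`);
}

async function main(): Promise<void> {
  for (const url of TEMPLATE_URLS) {
    await inspect(url);
  }
}

main().catch(console.error);
```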

5. Content migrations

Your content is everything which attracts, engages with, and convinces users that you’re the best brand to answer their questions and meet their needs. That includes the words you use to describe your products and services, the things you talk about on your blog, and every image and video you produce or use.

If your migration project significantly changes the tone (including language, demographic targeting, etc), format, or quantity/quality of your content in the ways I’ve outlined, then you’re talking about a content migration. You’ll need to consider the needs of your market and audience, and how the words and media on your website answer to that — and how well it does so in comparison with your competitors.

  • You significantly increase or reduce the number of pages on your website.
  • You significantly change the tone, targeting, or focus of your content.
  • You begin to produce content on/about a new topic.
  • You translate and/or internationalize your content.*
  • You change the categorization, tagging, or other classification system on your blog or product content.**
  • You use tools like canonical tags, meta robots indexation directives, or robots.txt files to control how search engines (and other bots) access and attribute value to a content piece (individually or at scale).

*Might involve domain, software and/or hosting, and template migrations, depending on implementation mechanics.

**May overlap into a template migration if the layout and/or URL structure changes as a result.

6. Design migrations

The look and feel of your website doesn’t necessarily directly impact your performance (though user signals like engagement and trust certainly do). However, simple changes to design components can often have unintended knock-on effects and consequences.

If your migration project contains any of the following changes, you’re talking about a design migration, and you’ll need to clarify whether changes are purely cosmetic or whether they go deeper and impact other areas.

  • You’re changing the look and feel of key pages (like your homepage).*
  • You’re adding or removing interaction layers, e.g. conditionally hiding content based on device or state.*
  • You’re making design/creative changes which change the HTML (as opposed to just images or CSS files) of specific elements.*
  • You’re changing key messaging, like logos and brand slogans.
  • You’re altering the look and feel to react to changing strategies or monetization models (e.g., introducing space for ads in a sidebar, or removing ads in favor of using interstitial popups/states).
  • You’re changing images and media.**

*All template migrations.

**Don’t forget to 301 redirect these, unless you’re replacing like-for-like filenames (which isn’t always best practice if you wish to invalidate local or remote caches).

7. Strategy migrations

A change in organizational or marketing strategy might not directly impact the website, but a widening gap between a brand’s audience, objectives, and platform can have a significant impact on performance.

If your market or audience (or your understanding of it) changes significantly, or if your mission, your reputation, or the way in which you describe your products/services/purpose changes, then you’re talking about a strategy migration. You’ll need to consider how you structure your website, how you target your audiences, how you write content, and how you campaign (all of which might trigger a set of new migration projects!).

  • You change the company mission statement.
  • You change the website’s key objectives, goals, or metrics.
  • You enter a new marketplace (or leave one).
  • Your channel focus (and/or your audience’s) changes significantly.
  • A competitor disrupts the market and/or takes a significant amount of your market share.
  • Responsibility for the website/its performance/SEO/digital changes.
  • You appoint a new agency or team responsible for the website’s performance.
  • Senior/C-level stakeholders leave or join.
  • Changes in legal frameworks (e.g. privacy compliance or new/changing content restrictions in prescriptive sectors) constrain your publishing/content capabilities.

Let’s get in earlier

Armed with better definitions, we can begin to force a more considered conversation around what a “migration” project actually involves. We can use a shared language and ensure that stakeholders understand the risks and opportunities of the changes they intend to make.

Unfortunately, however, we don’t always hear about proposed changes until they’ve already been decided and signed off.

People don’t know that they need to tell us that they’re changing domain, templates, hosting, etc. So it’s often too late when — or if — we finally get involved. Decisions have already been made before they trickle down into our awareness.

That’s still a problem.

By the time you’re aware of a project, it’s usually too late to impact it.

While our new-and-improved definitions are a great starting place to catch risks as you encounter them, avoiding those risks altogether requires us to develop a much better understanding of how, where, and when migrations are planned, managed, and start to go wrong.

Let’s identify trigger points

I’ve identified four common scenarios which lead to organizations deciding to undergo a migration project.

If you can keep your ears to the ground and spot these types of events unfolding, you have an opportunity to give yourself permission to insert yourself into the conversation, and to interrogate to find out exactly which types of migrations might be looming.

It’s worth finding ways to get added to deployment lists and notifications, internal project management tools, and other systems so that you can look for early warning signs (without creating unnecessary overhead and comms processes).

1. Mergers, acquisitions, and closures

When brands are bought, sold, or merged, this almost universally triggers changes to their websites. These requirements are often dictated from on-high, and there’s limited (or no) opportunity to impact the brief.

Migration strategies in these situations are rarely comfortable, and almost always defensive by nature (focusing on minimizing impact/cost rather than capitalizing upon opportunity).

Typically, these kinds of scenarios manifest in a small number of ways:

  • The “parent” brand absorbs the website of the purchased brand into their own website; either by “bolting it on” to their existing architecture, moving it to a subdomain/folder, or by distributing salvageable content throughout their existing site and killing the old one (often triggering most, if not every type of migration).
  • The purchased brand website remains where it is, but undergoes a design migration and possibly template migrations to align it with the parent brand.
  • A brand website is retired and redirected (a domain migration).

2. Rebrands

All sorts of pressures and opportunities lead to rebranding activity. Pressures to remain relevant, to reposition within marketplaces, or change how the brand represents itself can trigger migration requirements — though these activities are often led by brand and creative teams who don’t necessarily understand the implications.

Often, the outcome of branding processes and initiatives creates a new or alternate understanding of markets and consumers, and/or creates new guidelines/collateral/creative which must be reflected on the website(s). Typically, this can result in:

  • Changes to core/target audiences, and the content or language/phrasing used to communicate with them (strategy and content migrations — more if this involves, for example, opening up to international audiences).
  • New collateral, replacing or adding to existing media, content, and messaging (content and design migrations).
  • Changes to website structure and domain names (template and domain migrations) to align to new branding requirements.

3. C-level vision

It’s not uncommon for senior stakeholders to decide that the strategy to save a struggling business, to grow into new markets, or to make their mark on an organization is to launch a brand-new, shiny website.

These kinds of decisions often involve a scorched-earth approach, tearing down the work of their predecessors or of previously under-performing strategies. And the more senior the decision-maker, the less likely they’ll understand the implications of their decisions.

In these kinds of scenarios, your best opportunity to avert disaster is to watch for warning signs and to make yourself heard before it’s too late. In particular, you can watch out for:

  • Senior stakeholders with marketing, IT, or C-level responsibilities joining, leaving, or being replaced (in particular if in relation to poor performance).
  • Boards of directors, investors, or similar pressuring web/digital teams for unrealistic performance goals (based on current performance/constraints).
  • Gradual reduction in budget and resource for day-to-day management and improvements to the website (as a likely prelude to a big strategy migration).
  • New agencies being brought on board to optimize website performance, who’re hindered by the current framework/constraints.
  • The adoption of new martech and marketing automation software.*

*Integrations of solutions like SalesForce, Marketo, and similar sometimes rely on utilizing proxied subdomains, embedded forms/content, and other mechanics which will need careful consideration as part of a template migration.

4. Technical or financial necessity

The current website is in such a poor, restrictive, or cost-ineffective condition that it makes it impossible to adopt new-and-required improvements (such as compliance with new standards, an integration of new martech stacks, changes following a brand purchase/merger, etc).

Generally, like the kinds of C-level “new website” initiatives I’ve outlined above, these result in scorched earth solutions.

Particularly frustrating, these are the kinds of migration projects which you yourself may well argue and fight for, for years on end, only to then find that they’ve been scoped (and maybe even begun or completed) without your input or awareness.

Here are some danger signs to watch out for which might mean that your migration project is imminent (or, at least, definitely required):

  • Licensing costs for parts or the whole platform become cost-prohibitive (e.g., enterprise CMS platforms, user seats, developer training, etc).
  • The software or hardware skill set required to maintain the site becomes rarer or more expensive (e.g., outdated technologies).
  • Minor-but-urgent technical changes take more than six months to implement.
  • New technical implementations/integrations are agreed upon in principle, budgeted for, but not implemented.
  • The technical backlog of tasks grows faster than it shrinks as it fills with breakages and fixes rather than new features, initiatives, and improvements.
  • The website ecosystem doesn’t support the organization’s ways of working (e.g., the organization adopts agile methodologies, but the website only supports waterfall-style codebase releases).
  • Key technology which underpins the site is being deprecated, and there’s no easy upgrade path.*

*Will likely trigger hosting or software migrations.

Let’s not count on this

While this kind of labelling undoubtedly goes some way to helping us spot and better manage migrations, it’s far from a perfect or complete system.

In fact, I suspect it may be far too ambitious, and unrealistic in its aspiration. Accessing conversations early enough — and being listened to and empowered in those conversations — relies on the goodwill and openness of companies who aren’t always completely bought into or enamored with SEO.

This will only work in an organization which is open to this kind of thinking and internal challenging — and chances are, they’re not the kinds of organizations who are routinely breaking their websites. The very people who need our help and this kind of system are fundamentally unsuited to receive it.

I suspect, then, it might be impossible in many cases to make the kinds of changes required to shift behaviors and catch these problems earlier. In most organizations, at least.

Avoiding disasters resulting from ambiguous migration projects relies heavily on broad education. Everything else aside, people tend to change companies faster than you can build deep enough tribal knowledge.

That doesn’t mean that the structure isn’t still valuable, however. The types of changes and triggers I’ve outlined can still be used as alarm bells and direction for your own use.

Let’s get real

If you can’t effectively educate stakeholders on the complexities and impact of them making changes to their website, there are more “lightweight” solutions.

At the very least, you can turn these kinds of items (and expand with your own, and in more detail) into simple lists which can be printed off, laminated, and stuck to a wall. If nothing else, perhaps you'll remind somebody to pick up the phone to the SEO team when they recognize an issue.

In a more pragmatic world, stakeholders don’t necessarily have to understand the nuance or the detail if they at least understand that they’re meant to ask for help when they’re changing domain, for example, or adding new templates to their website.

Whilst this doesn’t solve the underlying problems, it does provide a mechanism through which the damage can be systematically avoided or limited. You can identify problems earlier and be part of the conversation.

If it’s still too late and things do go wrong, you’ll have something you can point to and say “I told you so,” or, more constructively perhaps, “Here’s the resource you need to avoid this happening next time.”

And in your moment of self-righteous vindication, having successfully made it through this post and now armed to save your company from a botched migration project, you can migrate over to the bar. Good work, you.


Thanks to…

This turned into a monster of a post, and its scope meant that it almost never made it to print. Thanks to a few folks in particular for helping me to shape, form, and ship it. In particular:

  • Hannah Thorpe, for help in exploring and structuring the initial concept.
  • Greg Mitchell, for a heavy dose of pragmatism in the conclusion.
  • Gerry White, for some insightful additions and the removal of dozens of typos.
  • Sam Simpson for putting up with me spending hours rambling and ranting at her about failed site migrations.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


The State of Links: Yesterday’s Ranking Factor?

Published by

The talk involved link building and the main types of strategy he saw as still being relevant and effective today. During his introduction, he said something that really got me thinking: that the whole point of links and PageRank had been to approximate traffic.

Source

Basically, back in the late ’90s, links were a much bigger part of how we experienced the web — think of hubs like Excite, AOL, and Yahoo. Google’s big innovation was realizing that, because people navigated the web by clicking on links, it could approximate the relative popularity of pages by looking at those links.

So many links, so little time.

Rand pointed out that, given all the data available to Google these days — as an ISP, a search engine, a browser, an operating system, and so forth — it could now far more accurately model whether a link actually drives traffic, which means you shouldn’t try to build links that don’t bring visitors. This is a pretty big advance on the link-building tactics of old, but it occurred to me that it probably doesn’t go far enough.

If Google has enough data to figure out which links genuinely drive visits or traffic, why use links at all? The whole point was to figure out which websites and pages were popular, and Google can now answer that question directly. (It’s worth noting that there’s a dichotomy between “popular” and “trustworthy” that I don’t want to get too stuck into, but it isn’t too big a deal here, given that both can be deduced from either link-based data sources or non-link-based data sources — for example, SERP click-through rate might correlate well with “trustworthy,” while search volume might correlate well with “popular.”)

However, there’s plenty of evidence out there suggesting that Google does still make significant use of links as a ranking factor, so I decided to set out to challenge the data on both sides of that argument. The end result of that research is this post.

The horse’s mouth

One reasonably authoritative source on matters related to Google is Google themselves. Google has been fairly unequivocal, even in recent times, that links are still a factor. For example:

  • March 2016: Google Senior Search Quality Strategist Andrey Lipattsev confirms that content and links are the first and second greatest ranking factors. (The full quote is: “Yes; I can tell you what they [the number one and two ranking factors] are. It’s content, and links pointing to your site.”)
  • April 2014: Matt Cutts confirms that Google has tested search quality without links, and found it inferior.
  • October 2016: Gary Illyes indicates that backlinks continue to be valuable, while playing down the idea of Domain Authority.

Then, of course, there’s their ongoing focus on unnatural backlinks and so forth — none of which would be necessary in a world where links aren’t a ranking factor.

However, I’d argue that this doesn’t mark the end of our discussion before it’s even begun. Firstly, Google has a long history of providing dodgy SEO advice. Consider HTTPS migrations pre-2016. Will Critchlow spoke at SearchLove San Diego about how Google’s algorithms are at a level of complexity and opaqueness where Google is no longer even trying to understand them themselves — and, of course, there are plenty of stories of unintended behaviors from machine learning algorithms in the wild.

Third-party correlation studies

It isn’t hard to put together your own data and show a correlation between link-based metrics and rankings. Take, for example:

  • Moz’s most recent study in 2015, showing strong relationships between link-based factors and rankings overall.
  • This more recent study by Stone Temple Consulting.

However, these studies run into significant difficulties with correlation vs. causation.

There are three main mechanisms that could explain the relationships they show:

  1. Getting more links causes sites to rank higher (yay!)
  2. Ranking higher causes sites to get more links
  3. Some third factor, such as brand awareness, is related to both links and rankings, causing them to be correlated with each other despite the absence of a direct causal relationship

I’ve yet to see a correlation study that addresses these serious shortcomings, or even specifically acknowledges them. Indeed, I’m not sure it would even be possible to do so given the available data, but this shows that, as an industry, we need to apply some critical thinking to the advice we’re consuming.
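To make that concrete, here is a minimal sketch of the kind of calculation these studies are based on (Python with scipy; the numbers are invented purely for illustration). The important caveat is in the final comment: the coefficient on its own cannot tell you which of the three mechanisms above produced it.

    # Illustrative only: a rank correlation between a link metric and
    # Google positions for a set of ranking URLs. The data is made up.
    from scipy.stats import spearmanr

    # One row per ranking URL: (referring domains, ranking position)
    observations = [
        (120, 1), (95, 2), (60, 3), (80, 4), (40, 5),
        (35, 6), (20, 7), (25, 8), (10, 9), (5, 10),
    ]

    referring_domains = [links for links, _ in observations]
    positions = [pos for _, pos in observations]

    rho, p_value = spearmanr(referring_domains, positions)
    print(f"Spearman rho: {rho:.2f} (p = {p_value:.3f})")
    # A strong negative rho (more links, better positions) is equally
    # consistent with links driving rankings, rankings driving links,
    # or a third factor (e.g. brand) driving both.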

That said, earlier this year I did write up some research of my own here on the Moz Blog, showing that brand awareness could actually be a more useful factor than links for predicting rankings.

Source

The catch with that study was that it showed a relationship that was concrete (i.e. highly statistically significant), but surprisingly lacking in explanatory power. Indeed, I discussed in that post how I’d ended up with a correlation that was lower than Moz’s for Domain Authority.

Fortunately, Malcolm Slade recently discussed his very similar research at BrightonSEO, in which he finds similar broad correlations to mine between brand factors and rankings, but far, far stronger correlations for certain types of query, and particularly for big, high-volume, highly competitive head terms.

What can we conclude overall from these third-party studies? Two main things:

  1. We should take with a large pinch of salt any study that doesn’t address the possibility of reverse causation, or of a jointly causing third factor.
  2. Links can add very little explanatory power to a rankings prediction model based on branded search volume, at least at a domain level (see the sketch below).
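As a rough illustration of what “explanatory power” means in that second point (Python with scikit-learn; the tiny dataset is invented), you can compare the R² of a rankings model built on a brand metric alone against one that also includes a link metric, and see how much the link feature actually adds:

    # Illustrative only: does adding a link metric improve a rankings
    # prediction model that already has a brand-awareness metric?
    from sklearn.linear_model import LinearRegression

    # One row per domain: [branded search volume, referring domains]
    X_full = [
        [9000, 150], [7200, 90], [5100, 120], [2500, 40],
        [1800, 60], [900, 20], [400, 25], [150, 5],
    ]
    y_rank = [1, 2, 3, 4, 5, 6, 7, 8]  # average ranking position

    X_brand_only = [[row[0]] for row in X_full]

    r2_brand = LinearRegression().fit(X_brand_only, y_rank).score(X_brand_only, y_rank)
    r2_both = LinearRegression().fit(X_full, y_rank).score(X_full, y_rank)

    print(f"R^2, brand only:  {r2_brand:.3f}")
    print(f"R^2, brand+links: {r2_both:.3f}")
    # If the second number is barely higher than the first, the link
    # metric adds little explanatory power at this (domain) level.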

The real world: Why do rankings change?

At the end of the day, we’re interested in whether links are a ranking factor because we’re interested in whether we should be trying to use them to improve the rankings of our sites, or our clients’ sites.

Fluctuation

The first example I want to look at here is this graph, showing UK rankings for the keyword “flowers” from May to December last year:

The reality is that our traditional understanding of ranking changes — which breaks down into links, on-site changes, and algorithm updates — cannot explain this level of rapid fluctuation. If you don’t believe me, the above data is openly available through platforms like SEMrush and Searchmetrics, so try digging into it yourself and see whether there’s any external explanation.

This level and frequency of fluctuation is increasingly common for hotly contested terms, and it shows a tendency by Google to continuously iterate and optimize — just as marketers do when they’re optimizing a paid search ad, a landing page, or an email campaign.

What’s Google optimizing for?

Source

The above slide is from Larry Kim’s presentation at SearchLove San Diego, and it shows how the highest SERP positions have been gaining click-through rate over time, despite all of the recent changes in Google Search (such as increased non-organic results) that would drive the opposite.

Larry’s suggestion is that this is a symptom of Google’s procedural optimization — not of the algorithm, but by the algorithm and of its results. This certainly fits with everything we’ve seen.

Successful link building

However, at the other end of the scale, we get examples like this:


The above graph (courtesy of STAT) shows rankings for Fleximize.com’s commercial keywords during a Distilled creative campaign. This is a particularly interesting example for two reasons:

  • Fleximize started off as a domain with relatively little equity, meaning that changes were measurable and that there were fairly easy gains to be made.
  • Nothing happened with the first two pieces (1, 2), even though they earned high-quality coverage and were apparently very similar to the third (3).

It seems that links did eventually move the needle here, and massively so, but the mechanisms at work are highly opaque.

The above two examples — “flowers” and Fleximize — are just two real-world examples of ranking changes. I’ve picked one that seems clearly link-driven but strange, and one that shows how volatile things are for more competitive terms. I’m sure there are plenty of massive folders out there full of case studies that show links moving rankings — but the point is that while it can happen, it isn’t always as simple as it seems.

How do we explain all of this?

Much of the evidence I’ve gone through above is contradictory. Links are correlated with rankings, Google says they’re important, and sometimes they clearly move the needle; but on the other hand, brand awareness seems to explain away much of their statistical usefulness, and Google is operating with more subtle methods at the data-rich top end.

My favored explanation right now for how this all fits together is this:

  • There are two tiers — probably fuzzily separated.
  • At the top end, user signals — and factors that Google’s algorithms associate with user signals — are everything. For competitive queries with lots of search volume, links don’t tell Google anything it couldn’t figure out anyway, and they don’t help with the final refinement of fine-grained ordering.
  • However, links may still be a big part of how you qualify for that competition at the top end.

This is very much a work in progress, however, and I’d love to hear other people’s ideas, and especially their fresh research. Let me know your thoughts in the comments below.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Half of Page-1 Google Results Are Now HTTPS

Published by

Previously, 30% of page-1 Google results in our 10,000-keyword tracking set were secure (HTTPS). As of this past week, that number topped 50%:

While there haven’t been any big jumps recently – suggesting this change is the result of steady adoption of HTTPS and not a major algorithm update – the end result of a year of small changes is dramatic. More and more Google results are secure.

MozCast is, of course, just one data set, so I asked the folks at Rank Ranger, who run a similar (but entirely different) tracking system, whether they thought I was crazy…

Could we both be crazy? Absolutely. However, we run completely independent systems with no shared data, so I think the consistency in these numbers suggests that we’re not wildly off.

What about the future?

Projecting the fairly stable trend line forward, the data suggests that HTTPS could hit about 65% of page-1 results by the end of 2017. The trend line is, of course, an educated guess at best, and many events could change the adoption rate of HTTPS pages.
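For what it’s worth, the projection itself is nothing fancy. A minimal sketch of the same straight-line approach (Python with NumPy; the monthly figures below are placeholders rather than the real MozCast numbers) looks something like this:

    # Illustrative only: fit a straight line to monthly HTTPS share and
    # project it forward. These data points are invented placeholders,
    # not the real MozCast series.
    import numpy as np

    months = np.arange(10)                  # month index, oldest first
    https_share = np.array([30, 32, 34, 37, 39, 41, 43, 46, 48, 50])  # percent

    slope, intercept = np.polyfit(months, https_share, 1)

    months_ahead = 8                        # how far past the last data point
    future_month = months[-1] + months_ahead
    projection = slope * future_month + intercept
    print(f"Projected page-1 HTTPS share in {months_ahead} months: {projection:.0f}%")
    # With the real MozCast series, a fit like this is where the
    # "about 65% by the end of 2017" estimate would come from.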

I’ve speculated previously that, as the adoption rate increased, Google might have more freedom to increase the algorithmic (i.e. ranking) boost for HTTPS pages. I asked Gary Illyes whether such a plan was in the works, and he said “no”:

As with any Google statement, some of you will take this as gospel truth and some will take it as devilish lies. While he isn’t promising that Google will never increase the ranking benefits of HTTPS, I believe Gary on this one. I think Google is happy with the current adoption rate and wary of the collateral damage that an aggressive HTTPS ranking boost (or penalty) might cause. It makes sense that they would bide their time.

Who hasn’t converted?

One reason Google may be proceeding carefully on another HTTPS boost (or penalty) is that not all of the big players have switched. Here are the top 20 subdomains in the MozCast data set, along with the percentage of their ranking URLs that use HTTPS:

(1) en.wikipedia.org — 100.0%
(2) www.amazon.com — 99.9%
(3) www.facebook.com — 100.0%
(4) www.yelp.com — 99.7%
(5) www.youtube.com — 99.6%
(6) www.pinterest.com — 100.0%
(7) www.walmart.com — 100.0%
(8) www.tripadvisor.com — 99.7%
(9) www.webmd.com — 0.2%
(10) allrecipes.com — 0.0%
(11) www.target.com — 0.0%
(12) www.foodnetwork.com — 0.0%
(13) www.ebay.com — 0.0%
(14) play.google.com — 100.0%
(15) www.bestbuy.com — 0.0%
(16) www.mayoclinic.org — 0.0%
(17) www.homedepot.com — 0.0%
(18) www.indeed.com — 0.0%
(19) www.zillow.com — 100.0%
(20) shop.nordstrom.com — 0.0%

Of the Top 20, exactly half have switched to HTTPS, although most of the Top 10 have converted. Unsurprisingly, switching is, with only minor exceptions, almost all-or-none. Most sites naturally opt for a site-wide switch, at least after initial testing.
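Incidentally, a table like this is easy to reproduce from any rank-tracking export. A rough sketch (Python standard library only; the sample records are made up) of computing the HTTPS share of ranking URLs per subdomain:

    # Illustrative only: percentage of ranking URLs that are HTTPS,
    # grouped by subdomain. The sample records are made up.
    from collections import defaultdict
    from urllib.parse import urlparse

    ranking_urls = [
        "https://en.wikipedia.org/wiki/Flower",
        "http://www.example-shop.com/flowers",
        "https://www.example-shop.com/roses",
        "http://allrecipes.com/recipe/123",
    ]

    totals = defaultdict(int)
    secure = defaultdict(int)

    for url in ranking_urls:
        parsed = urlparse(url)
        totals[parsed.hostname] += 1
        if parsed.scheme == "https":
            secure[parsed.hostname] += 1

    for host in sorted(totals):
        share = 100.0 * secure[host] / totals[host]
        print(f"{host}: {share:.1f}%")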

What should you do?

Even if Google doesn’t turn up the reward or penalty for HTTPS, other changes are in play, such as Chrome warning visitors about non-secure pages when those pages collect sensitive data. As the adoption rate increases, you can expect the pressure to switch to increase.

For new sites, I’d recommend jumping in as soon as possible. Security certificates are affordable these days (some are free), and the risks are low. For existing sites, it’s a lot tougher. Any site-wide change carries risks, and there have certainly been a few horror stories in the past year. At a minimum, make sure to secure pages that collect sensitive information or process transactions, and keep an eye out for further changes.
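If you’re not sure where a site stands today, a quick spot-check is easy to script. Here is a minimal sketch (Python with the requests library; the hostname and paths are placeholders) that flags pages which fail over HTTPS or whose HTTP versions don’t redirect to HTTPS:

    # Illustrative only: check that key pages work over HTTPS and that
    # their HTTP versions redirect to HTTPS.
    import requests

    HOST = "www.example.com"  # placeholder hostname
    PATHS = ["/", "/checkout/", "/account/login/"]

    for path in PATHS:
        try:
            https_resp = requests.get(f"https://{HOST}{path}", timeout=10)
            print(f"https://{HOST}{path} -> {https_resp.status_code}")
        except requests.exceptions.SSLError as error:
            print(f"https://{HOST}{path} -> SSL error: {error}")

        http_resp = requests.get(f"http://{HOST}{path}",
                                 allow_redirects=False, timeout=10)
        location = http_resp.headers.get("Location", "")
        if not (http_resp.status_code in (301, 302, 307, 308)
                and location.startswith("https://")):
            print(f"http://{HOST}{path} does not redirect to HTTPS "
                  f"({http_resp.status_code} -> {location or 'no Location'})")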

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!



Why Net Neutrality Matters for SEO and Web Marketing – Whiteboard Friday

Published by Why net neutrality matters for SEO and web marketing

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week, we’re taking a departure from our usual SEO and marketing tactics to talk for a minute about net neutrality. Net neutrality is hugely critical and massively important to web marketers, especially those of us who help small and medium businesses, local businesses, and websites that aren’t among the most popular and wealthiest sites on the web.

The reason we’re going to talk about net neutrality is that, for the first time in a while, it’s genuinely at risk, and there’s something we can actually do about it. By protecting net neutrality, especially in the United States (though this holds true around the world, wherever you might be), we can actually help preserve our jobs and our roles as marketers. I’ll talk you through it in a second.

What’s net neutrality?

So, to start, you might be wondering, “Okay, Rand, I may have heard about net neutrality, but tell me what it is.” I’m going to give you a very basic introduction, and then I’ll invite you to dig deeper into it.

But essentially, net neutrality is the idea that, as a user of the Internet, through my Internet service provider (that could be my mobile phone network, my home Internet provider, or my Internet provider at work; these ISPs, a Verizon or a Comcast or a Cox or a T-Mobile or an AT&T, those are in the United States and there are many others overseas), I get access to the entire web equally. That means these ISPs aren’t controlling bandwidth or speed based on someone paying them more or less, or based on a website being popular with them, owned by them, or invested in by them. Essentially, when you get access to the web, you get access to it equally. There’s equality and neutrality for the whole Internet.

Non-neutrality

In a non-neutrality scenario, you can see my little fellow here is very unhappy, because his ISP is essentially gatekeeping and saying, “Hey, if you want to pay us $50 per month, you can have access to Google, Facebook, Twitter, and MSN. Then if you want to pay a little more, $100 per month, you can get access to all these second-tier sites, and we’ll let you visit those and use those services. If you want to pay us $150 per month, you can get access to all websites.”

That’s one form of how a non-neutrality scenario might work. There are a few others. This is probably not the most realistic one, and it may be slightly hyperbolic, but the idea behind it is always the same: essentially, the ISP can operate however they’d like. They aren’t bound by government rules and regulations requiring them to serve the entire web equally.

Now, if you’re an ISP, you can imagine that this is a wonderful scenario. If I’m AT&T or Verizon, today I’m maxing out how much money I can make from consumers, and I’m constantly having to stay competitive against other ISPs. But if I can do this, I suddenly have a bunch more vectors (a) to get money from all these different websites and web services, and (b) to charge consumers a lot more by tiering their access.

So this is wonderful for me, and that’s why ISPs like Comcast and Verizon and AT&T and Cox and plenty of others have put a lot of money toward lobbyists to change the opinions of the US government. In the United States right now, that’s mostly the Republican administration, the folks in Congress, and the Federal Communications Commission Chair, Ajit Pai, recently picked by Trump as the new FCC Chair.

Why should marketers care?

Reasons you should care about this as a web marketer:

1. Equal footing for web access creates a more level playing field.

  • Greater competition. It allows websites to compete with one another without having to pay, and without having to serve only those consumers who happen to be paying higher rates to their ISPs.
  • It also means more players, because anyone can enter the field. Simply by registering a website and hosting it, you’re technically on a level playing field with everyone: with Google, with Facebook, with Microsoft, with Amazon. You get the same access, at least at the basic ISP level, as everyone else on the web. That means a lot more demand for competitive web marketing services, because there are a lot more companies who need to compete and who can compete.
  • There’s also far less of a built-in advantage for the big, established, wealthy players. If you’re a very wealthy website, with a lot of money, a lot of resources, and plenty of influence, it’s easy to say, “Hey, I’m not going to worry about this, because I know I can always be in the cheapest tier, or whatever they’re providing for free, because I can pay the ISP, I can influence government rules and regulations, and I can connect with the various ISPs out there and make sure I’m always accessible.” But for a small website, that’s a nightmare scenario, incredibly hard, and it creates a huge competitive advantage for being big and established already, meaning it’s much tougher to get started.

2. The costs of getting started on the web are much lower under net neutrality.

Currently, if you register your website and start doing your hosting:

  • You don’t have to pay off any ISPs. You don’t have to approach Comcast or Verizon or AT&T or Cox or anyone like that and say, “Hey, we want to get onto your high-speed, fastest-tier, best-access plan.” You don’t have to do that, because net neutrality, the law of the land, means you’re automatically guaranteed that.
  • There’s also no separate process. So it’s not just the cost, it’s also the work involved in going to all these ISPs. There are several hundred ISPs with hundreds of thousands of active customers in the United States today. That number has generally been shrinking as the industry consolidates a bit more. But still, that’s a very, very challenging thing to have to do. If you’re a big insurance company, it’s one thing to have someone go manage that task; if you’re a brand-new startup website, it’s another thing entirely.

3. The talent, the strategy, and the quality of the products, services, and marketing that a new company or new website has are what create winners and losers in their field today, versus this potential non-neutrality scenario, where it isn’t quite a rigged system, but I’m calling it a somewhat rigged system because of the built-in advantage you get for money and influence.

I think we’d all generally agree that, in 2017, in late-stage capitalist societies, there’s already a huge advantage to having lots of money and influence. I’m not sure people with money and influence need yet another advantage over entrepreneurs, startups, and people who are trying to compete on the web.

What could happen?

Now, maybe you’ll disagree, but I think these together make a very compelling case. Here’s what might actually happen.

  • “Fast lanes” for some sites – Most observers of the space believe that fast lanes would be the default. So fast lanes, meaning certain tiers: some parts of the web, certain web services and companies, would get immediate, fast access, while others would be slower or potentially even disallowed on certain ISPs. That would create some real challenges.
  • Free vs. paid access by some ISPs – There would probably be some free and some paid services. You could see T-Mobile actually try to do this recently, where they essentially said, “Hey, if you’re on a T-Mobile phone, even if you’re paying us the smallest amount, we’re going to let you access these particular things.” It was a video service, for free. Essentially, that currently is against the rules.

You might say, “Rand, that seems unfair. Why shouldn’t T-Mobile be able to offer some access to the web for free, and then you just have to pay for the rest of it?” I hear you. I think, unfortunately, that’s a red herring, because that particular implementation of a non-neutral scenario isn’t that bad. It isn’t particularly bad for consumers. It isn’t particularly bad for companies.

If T-Mobile just charged their normal rate and added, “Oh, by the way, here you get this little part of the web for free,” no one’s going to complain about that. It isn’t particularly terrible. But it does violate net neutrality, and it’s a very slippery slope to a world like the one above, a very painful world for a lot of people. That’s why we’re willing to make the sacrifice of saying, “Hey, we don’t want to allow this, because it violates the principle and the law of net neutrality.”

  • Payoffs required for access or speed – The third thing that would probably happen is that there would be payoffs, on both sides. You pay more as a consumer, to your ISP, in order to access the web the way you do today; and as a website, in order to reach consumers who maybe can’t afford or won’t pay more, you pay off the ISP to get that full access.

What is the risk?

Why am I bringing this up now?

  • More than ever before… Why is the risk so high? Well, it turns out the new American administration has essentially come out against net neutrality in a very aggressive fashion.
  • The new FCC Chair, Ajit Pai, has been fighting this pretty hard. He’s made a bunch of statements just in the last few days. He actually overturned some rulings from years past that asked smaller ISPs to be transparent about their net neutrality practices, and that’s now been undone. He’s arguing this essentially under what I’m going to call a guise of free markets and competitive marketplaces. I think that’s a total misnomer.

Net neutrality actually creates a truly equal marketplace for everyone. Even though it is somewhat restrictive, I think one of the most interesting things to observe is that this is a non-political issue, or at least not a very politicized issue, for most American voters. In fact, 81% of Democrats in a Gallup survey said they support net neutrality, and an even greater percentage of Republicans, 85%, said they support net neutrality.* So, really, you have an overwhelming swath of voters in the United States saying this should be the law of the land.

The reason this is generally being fought against by Congress and the FCC is that these big ISPs have a lot of money, and they’ve paid a lot of lobbying dollars to influence politics. For those of you outside the United States, I know that sounds like it should be illegal. It isn’t in our country. I know it’s illegal in many democracies, but it’s kind of how democracy in the United States works.

*Editor’s note: This poll was conducted by the University of Delaware.

So what can we all do?

If you want to take some action on this and fight it, and tell your Congressperson, your senators, and your representatives locally and federally that you’re against it, I’d check out SaveTheInternet.com for those of you in the United States. Whatever country you’re in, I’d urge you to search for “support net neutrality” and look for the initiatives that may be available in your country or your local area so you can take some action.

This is something we’ve fought for as Internet users in the past, and as businesses on the web before, and I think we’re going to have to renew that fight in order to maintain the status quo and equal footing with one another. That will help us preserve our careers in web marketing, but it will also help preserve an open, free, competitive Internet. I think that’s something we can all agree is important.

Okay. Thanks, everybody. Looking forward to your comments. Definitely open to your critiques. Please try to keep them as kosher and as kind as possible. I know when this gets into political territory, it can be a little frustrating. And we’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com




Large Site SEO Basics: Faceted Navigation

This post builds on an earlier post from 2011.

What I want to focus on instead is narrowing this issue down to a simple question, and then providing the possible answers to that question. The question we need to answer is, “What options do we have to determine what Google crawls and indexes, and what are their pros and cons?”

Brief overview of faceted navigation

As a quick refresher, we can define faceted navigation as any way to filter and/or sort results on a webpage by specific attributes that aren’t necessarily related, for instance the color, processor type, and screen resolution of a laptop. Here’s an example:

Because every possible combination of facets is typically (at least) one unique URL, faceted navigation can create a few problems for SEO (a sketch of how quickly those URLs multiply follows the list below):

  1. It creates a lot of duplicate content, which is bad for various reasons.
  2. It eats up valuable crawl budget and can send Google incorrect signals.
  3. It dilutes link equity and passes equity to pages we don’t even want indexed.
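To make the scale of the problem concrete, here is a minimal sketch of how just three facets already spawn a pile of crawlable, near-duplicate URLs. The URLs are hypothetical and assume one query parameter per facet; with ten or more facets and multiple values each, the combinations quickly run into the thousands.

/laptops?color=red
/laptops?processor=i7
/laptops?resolution=4k
/laptops?color=red&processor=i7
/laptops?color=red&resolution=4k
/laptops?processor=i7&resolution=4k
/laptops?color=red&processor=i7&resolution=4k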

But first… some quick examples

It’s worth taking a couple of minutes to look at a few examples of faceted navigation that are probably hurting SEO. These are simple examples that illustrate how faceted navigation can (and usually does) become a problem.

Macy’s

First up, we have Macy’s. I’ve done a simple site: search for the domain and added “black dresses” as a keyword to see what would come up. At the time of writing this post, Macy’s has 1,991 products that fit under “black dresses,” so why are over 12,000 pages indexed for this keyword? The answer probably has something to do with how their faceted navigation is set up. As SEOs, we can remedy this.

Home Depot

Let’s use Home Depot as another example. Again, doing a simple site: search, we find 8,930 pages about left-hand/inswing front exterior doors. Is there a need to have that many pages in the index targeting such similar products? Probably not. The good news is that this can be fixed with the proper combinations of tags (which we’ll explore below).

I’ll leave the examples at that. You can go to most large-scale e-commerce websites and find issues with their navigation. The point is, many large websites that use faceted navigation could be doing better for SEO purposes.

Faceted navigation solutions

When deciding on a faceted navigation solution, you’ll have to decide what you want in the index, what can go, and then how to make that happen. Let’s take a look at what the options are.

“Noindex, follow”

Probably the first solution that comes to mind is using noindex tags. A noindex tag is used for the sole purpose of letting bots know not to include a specific page in the index. So, if we just wanted to remove pages from the index, this solution would make a lot of sense.

The issue is that although you can reduce the amount of duplicate content in the index, you will still be wasting crawl budget on these pages. Also, these pages still receive link equity, which is a waste (since it doesn’t benefit any indexed page).

Example: If we wanted to include our page for “black dresses” in the index, but we didn’t want “black dresses under $100” in the index, adding a noindex tag to the latter would exclude it. However, bots would still be visiting the page (which wastes crawl budget), and the page(s) would still be receiving link equity (which is a waste).
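As a minimal sketch (the URL is hypothetical), the “black dresses under $100” page would carry a robots meta tag like the one below in its head. “Follow” is already the default, so listing it is optional, but it makes the intent explicit.

<!-- On the hypothetical /black-dresses-under-100/ page: keep it out of the index, but let bots follow its links -->
<meta name="robots" content="noindex, follow">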

Canonicalization

Many sites approach this issue by using canonical tags. With a canonical tag, you can let Google know that, within a collection of similar pages, you have a preferred version that should get the credit. Since canonical tags were designed as a solution to duplicate content, this would seem to be a reasonable option. Additionally, link equity will be consolidated to the canonical page (the one you deem most important).

However, Google will still be wasting crawl budget on the pages.

Example: /black-dresses?under-100/ would have its canonical URL set to /black-dresses/. In this case, Google would give the canonical page the authority and link equity. Additionally, Google wouldn’t see the “under $100” page as a duplicate of the canonical version.
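A minimal sketch of that tag, keeping the example URLs above and assuming a hypothetical example.com domain, placed in the head of the filtered page:

<!-- On /black-dresses?under-100/ : consolidate signals to the unfiltered category page -->
<link rel="canonical" href="https://www.example.com/black-dresses/" />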

Disallow via robots.txt

Disallowing sections of the site (such as certain parameters) can be a great solution. It’s quick, easy, and customizable. But it does come with some downsides. Namely, link equity will be trapped and unable to move anywhere on your website (even when it’s coming from an external source). Another downside is that even if you tell Google not to visit a certain page (or section) of your site, Google can still index it.

Example: We could disallow *?under-100* in our robots.txt file. This would tell Google not to visit any page with that parameter. However, if there were any “follow” links pointing to any URL with that parameter in it, Google could still index it.
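A minimal robots.txt sketch of that rule, assuming the parameter format above (the * wildcard syntax shown is supported by Google’s crawler):

User-agent: *
# Keep compliant bots from crawling any URL containing the under-100 parameter
Disallow: /*?under-100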

“Nofollow” internal links to undesirable facets

An option for solving the crawl budget issue is to “nofollow” all internal links to facets that aren’t important for bots to crawl. Unfortunately, “nofollow” tags don’t solve the issue entirely. Duplicate content can still be indexed, and link equity still gets trapped.

Example: If we didn’t want Google to visit any page that had two or more facets selected, adding a “nofollow” tag to all internal links pointing to those pages would help us get there.
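As a sketch (the URL and anchor text are hypothetical), an internal link to a two-facet page would then look like this:

<!-- Internal link to a multi-facet page we don't want crawled -->
<a href="/black-dresses?under-100&brand=express" rel="nofollow">Express dresses under $100</a>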

Avoiding the issue altogether

Obviously, if we can avoid this issue altogether, we should just do that. If you’re currently in the process of building or rebuilding your navigation or website, I’d recommend considering a faceted navigation that limits how much the URL is changed (this is commonly done with JavaScript). The reason is simple: it offers the ease of browsing and filtering products while potentially generating only a single URL. However, this can go too far in the other direction; you’ll need to manually ensure that you still have indexable landing pages for key facet combinations (e.g. black dresses). A sketch of what such facet controls might look like follows below.
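One hedged way to implement this (a sketch, not the only pattern): render facet controls without crawlable hrefs and let client-side script filter the product grid in place, while still linking normally to the handful of facet combinations you do want indexed.

<!-- Facet controls with no crawlable URLs; client-side script filters the results in place -->
<button type="button" data-facet="color" data-value="black">Black</button>
<button type="button" data-facet="price" data-value="under-100">Under $100</button>
<!-- Key combinations we DO want indexed still get real, crawlable landing pages -->
<a href="/black-dresses/">Black dresses</a>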

Here’s a table outlining what I wrote above in a more digestible way.

| Option | Solves duplicate content? | Solves crawl budget? | Recycles link equity? | Passes equity from external links? | Allows internal link equity flow? | Other notes |
|---|---|---|---|---|---|---|
| “Noindex, follow” | Yes | No | No | Yes | Yes | |
| Canonicalization | Yes | No | Yes | Yes | Yes | Can only be used on pages that are similar. |
| Robots.txt | Yes | Yes | No | No | No | Technically, pages that are blocked in robots.txt can still be indexed. |
| Nofollow internal links to undesirable facets | No | Yes | No | Yes | No | |
| JavaScript setup | Yes | Yes | Yes | Yes | Yes | Generally requires more work to set up. |

But what’s the ideal setup?

First of all, it’s important to understand that there is no “one-size-fits-all” solution. To get to your ideal setup, you will most likely need to use a combination of the options above. I’ll show an example fix below that would work for many sites, but it’s important to realize that your solution might vary depending on how your site is built, how your URLs are structured, and so on.

Fortunately, we can break down how we reach an ideal solution by asking ourselves one question: “Do we care more about our crawl budget, or our link equity?” By answering that, we can get closer to an ideal setup.

Consider this: You have a website with a faceted navigation that allows the indexation and discovery of every single facet and facet combination. You aren’t worried about link equity, but clearly Google is spending energy crawling countless pages that don’t need to be crawled. What we care about in this scenario is crawl budget.

In this specific scenario, I would suggest the following solution.

  1. Category, subcategory, and sub-subcategory pages should remain discoverable and indexable. (e.g. /clothing/, /clothing/womens/, /clothing/womens/dresses/)
  2. For each category page, only allow versions with one facet selected to be indexed.
    1. On pages that have one or more facets selected, all facet links become “nofollow” links. (e.g. /clothing/womens/dresses?color=black/)
    2. On pages that have two or more facets selected, a “noindex” tag is added as well. (e.g. /clothing/womens/dresses?color=black?brand=express?/)
  3. Determine which facets could have an SEO benefit (for example, “color” and “brand”) and whitelist them. Essentially, put those back in the index for SEO purposes.
  4. Ensure your canonical tags and rel=prev/next tags are set up appropriately.

This solution will (in time) begin to solve our problems with unnecessary pages ending up in the index as a result of the site’s navigation. Also, notice how in this scenario we used a combination of the possible solutions. We used “nofollow,” “noindex, nofollow,” and proper canonicalization to achieve a more desirable result.
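To make that combination concrete, here is a minimal sketch of a page with two facets selected under this setup. The URLs keep the parameter style from the list above and are hypothetical; the page gets a noindex robots meta tag, and every facet link it renders is nofollowed so bots don’t crawl into deeper combinations.

<!-- A page with two or more facets selected (hypothetical URL): /clothing/womens/dresses?color=black?brand=express?/ -->
<meta name="robots" content="noindex">
<!-- ...and each facet link on any faceted page carries rel="nofollow": -->
<a href="/clothing/womens/dresses?color=black?brand=express?price=under-100?/" rel="nofollow">Under $100</a>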

Other points to consider

There are many more variables to consider on this subject. I want to address the two that I feel are most important.

Breadcrumbs (and markup) help a lot

If you don’t have breadcrumbs on every category and subcategory page of your website, you’re doing yourself a disservice. Please go implement them! Furthermore, if you have breadcrumbs on your website but aren’t marking them up with microdata, you’re missing out on a big win.

The reason is simple: you have a complicated site navigation, and bots that visit your site might not be reading the hierarchy correctly. By adding accurate breadcrumbs (and marking them up), we’re effectively telling Google, “Hey, I know this navigation is confusing, but please consider crawling our site in this way.”
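A minimal sketch of breadcrumb markup using schema.org microdata (the category names and URLs are hypothetical):

<ol itemscope itemtype="https://schema.org/BreadcrumbList">
  <li itemprop="itemListElement" itemscope itemtype="https://schema.org/ListItem">
    <a itemprop="item" href="https://www.example.com/clothing/"><span itemprop="name">Clothing</span></a>
    <meta itemprop="position" content="1" />
  </li>
  <li itemprop="itemListElement" itemscope itemtype="https://schema.org/ListItem">
    <a itemprop="item" href="https://www.example.com/clothing/womens/dresses/"><span itemprop="name">Dresses</span></a>
    <meta itemprop="position" content="2" />
  </li>
</ol>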

Enforcing a URL order for facet combinations

In extreme situations, you can come across a site that has a unique URL for every facet combination, in every order. For instance, if you’re on a laptop page and select “red” and “SSD” (in that order) from the filters, the URL might be /laptops?color=red?SSD/. Now imagine you select the filters in the opposite order (first “SSD,” then “red”) and the URL that’s generated is /laptops?SSD?color=red/.

This is really bad because it tremendously multiplies the number of URLs you have. Avoid this by enforcing a specific order for URLs! A sketch of one way to clean up variants that already exist follows below.
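For URLs already out in the wild, one hedged mitigation (in addition to fixing how the links are generated) is to pick the enforced order as the canonical and point the out-of-order variant at it, keeping the example URLs from above and a hypothetical example.com domain:

<!-- On /laptops?SSD?color=red/ : canonicalize to the variant with the enforced facet order -->
<link rel="canonical" href="https://www.example.com/laptops?color=red?SSD/" />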

Conclusions

My hope is that you feel more equipped (and have some ideas) on how to tackle your faceted navigation in a way that benefits your search presence.

To summarize, here are the main takeaways:

  1. Faceted navigation can be great for users, but is usually set up in a way that negatively impacts SEO.
  2. There are many reasons why faceted navigation can negatively impact SEO, but the top three are:
    1. Duplicate content
    2. Wasted crawl budget
    3. Link equity not being used as effectively as it could be
  3. Boiled down further, the question we want to answer in order to begin approaching a solution is, “What are the ways we can control what Google crawls and indexes?”
  4. When it comes to a solution, there is no “one-size-fits-all” answer. There are several fixes (and combinations) that can be used. Most commonly:
    1. Noindex, follow
    2. Canonicalization
    3. Robots.txt
    4. Nofollow internal links to undesirable facets
    5. Avoiding the issue with an AJAX/JavaScript solution
  5. When trying to arrive at an ideal solution, the most important question you can ask yourself is, “What’s more important for this website: link equity, or crawl budget?” That helps focus your possible solutions.

I would love to hear about any example setups. What have you found that’s worked well? Anything you’ve tried that impacted your site negatively? Let’s discuss in the comments, or feel free to shoot me a tweet.




The Wonderful World of SEO Meta Tags [Refreshed for 2017]

There’s a fuller meta tag resource you can check out if you’re interested in everything that’s available.

My primary suggestion: stick to the core minimum. Don’t add meta tags you don’t need; they just take up code space. The less code you have, the better. Think of your page code as a set of step-by-step directions to get somewhere, but for a browser. Extraneous meta tags are the annoying “Go straight for 200 feet” line items in driving directions that simply tell you to stay on the same road you’re already on!


The good meta tags

These are the meta tags that should be on every page, no matter what. Note that this is a small list; these are the only ones that are required, so if you can work with just these, please do. (A combined example of a minimal head follows this list.)

  • Meta content type – This tag is necessary to declare your character set for the page and should be present on every page. Leaving it out could impact how your page renders in the browser. A couple of options are listed below, but your web developer should know what’s best for your site.
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />

<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">

  • Title – While the title tag doesn’t start with “meta,” it lives in the header and contains information that’s very important for SEO. You should always have a unique title tag on every page that describes the page. Check out this post for more information on title tags.
  • Meta description – The infamous meta description tag is used for one major purpose: to describe the page to searchers as they read through the SERPs. This tag doesn’t influence ranking, but it’s very important regardless. It’s the ad copy that will determine whether users click on your result. Keep it within about 160 characters, and write it to catch the searcher’s attention. Sell the page; get them to click on the result. Here’s a great article on meta descriptions that goes into more detail.
  • Viewport – In this mobile world, you should be specifying the viewport. If you don’t, you risk having a poor mobile experience; Google’s PageSpeed Insights tool will tell you more about it. The standard tag is:
<meta name="viewport" content="width=device-width, initial-scale=1">
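Pulling the required pieces together, here is a minimal sketch of a page head carrying just the core tags (the title and description values are hypothetical):

<head>
  <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>Black Dresses | Example Store</title>
  <meta name="description" content="Shop our selection of black dresses, from casual to formal, with free returns.">
</head>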


The indifferent meta tags

Different sites will need to use these in specific circumstances, but if you can go without them, please do. (A short example covering a couple of these follows the list.)

  • Social meta data – I am departing these out. OpenGraph and Twitter data are essential to discussing, but aren’t needed by itself.
  • Robots – One huge misconception is you need to possess a robots meta tag. Let us get this to obvious: When it comes to indexing and link following, if you do not specify a meta robots tag, they read that as index,follow. It is just if you wish to change certainly one of individuals two instructions you need to add meta robots. Therefore, if you wish to noindex but stick to the links around the page, you’d add some following tag with simply the noindex, because the follow is implied. Just have to change what you would like to become not the same as standard.
&ltmeta name="robots" content="noindex" /&gt
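To make the "follow is implied" point concrete, the two tags below should be treated the same way by search engines; the second simply spells out the default:

<meta name="robots" content="noindex" />
<meta name="robots" content="noindex, follow" />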

  • Specific bots (Googlebot) – These tags are used to give a specific bot instructions like noodp (telling it not to use your DMOZ listing information, RIP) and noydir (same, but for the Yahoo Directory listing information). Generally the search engines are really good at this kind of thing on their own, but if you think you need it, feel free. There have been some cases I've seen where it's necessary, but if you must, consider using the general robots tag listed above.
  • Language – The only reason to use this tag is if you're working internationally and need to declare the main language used on the page. Check out this meta languages resource for a full list of languages you can declare.
  • Geo – The last I heard, these meta tags are supported by Bing but not Google (you can target by country inside Search Console). There are three kinds: placename, position (latitude and longitude), and region.
<META NAME="geo.position" CONTENT="latitude;longitude">

<META NAME="geo.placename" CONTENT="Place Name">

<META NAME="geo.region" CONTENT="Country Subdivision Code">
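Filled in with hypothetical values (a business in Seattle, purely for illustration), those tags would look roughly like this; geo.region expects an ISO 3166-2 code, and geo.position is conventionally written as latitude;longitude:

<meta name="geo.position" content="47.6062;-122.3321">
<meta name="geo.placename" content="Seattle">
<meta name="geo.region" content="US-WA">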

  • Keywords – Yes, I put this on the "indifferent" list. While no good SEO is going to recommend spending any time on this tag, there's some very small possibility it could help you somewhere. Please leave it out if you're building a site, but if it's automated, there's no reason to remove it.
  • Refresh – This is the poor man's redirect and should not be used if at all possible. You should always use a server-side 301 redirect. I know that sometimes things need to happen now, but Google is not a fan.
  • Site verification – Your site is verified with Google and Bing, right? Who has the verification meta tags on their homepage? These are sometimes necessary because you can't get the other forms of site verification loaded, but if at all possible try to verify another way. Google allows you to verify by DNS, external file, or by linking your Google Analytics account. Bing still only allows verification by XML file or meta tag, so go with the file if you can.
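If you do end up stuck with the meta tag route, the verification tags generally look like the following, with the content value being the token each engine gives you (the tokens below are placeholders):

<meta name="google-site-verification" content="your-google-token-here" />
<meta name="msvalidate.01" content="your-bing-token-here" />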

The bad meta tags

Nothing bad will happen to your site if you use these — let me just make that clear. They're a waste of space, though; even Google says so (and that was 12 years ago now!). If you're willing, it might be time for some spring cleaning of your <head> area.

  • Author/web author – This tag is used to name the author of the page. It's just not necessary on the page.
  • Revisit after – This meta tag is a command to robots to return to a page after a specific period of time. It isn't followed by any major search engine.
  • Rating – This tag is used to denote the maturity rating of content. I wrote a post about how to tag a page with adult images using a rather confusing system that has since been updated (see the post's comments). It seems the best way to flag adult images is to place them in a directory separate from other images on your site and alert Google.
  • Expiration/date – "Expiration" is used to note when the page expires, and "date" is the date the page was made. Are any of your pages going to expire? Just remove them if they are (but don't keep expiring content, even contests — make it an annual contest instead!). And for "date," make an XML sitemap and keep it current. It's much more useful.
  • Copyright – That Google article debates this with me a bit, but look at the footer of your site. I'd guess it says "Copyright 20xx" in some form. Why say it twice?
  • Abstract – This tag is sometimes used to place an abstract of the content and is used largely by educational pursuits.
  • Distribution – The "distribution" value is supposedly used to control who can access the document, and is typically set to "global." It's inherently implied that if the page is open (not password-protected, as on an intranet), it's meant for the world. Go with that, and leave the tag off the page.
  • Generator – This is used to note what program created the page. Like "author," it's useless.
  • Cache control – This tag is set in hopes of controlling when and how often a page is cached in the browser. It's best to do this in the HTTP header (see the example after this list).
  • Resource type – This is used to name the type of resource the page is, like "document." Save your time, as the DTD declaration does it for you.
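For the cache control point above, the browser-caching instruction belongs in the HTTP response headers rather than in a meta tag; a typical header (the values here are illustrative, not a recommendation) looks like this:

Cache-Control: public, max-age=86400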

There are plenty of meta tags out there, and I'd love to hear about any you think need to be added or even removed! Shout out in the comments with suggestions or questions.


Announcing the 2017 Local Search Ranking Factors Survey Results

David Mihm, who has run this survey in years past, has founded Tidings, a genius service that automatically generates perfectly branded newsletters by pulling in the content from your Facebook page and leading content sources in your industry. While he will certainly still be connected to the local search industry, he's spending less time on local search research, and has passed the reins to me to run the survey.

David is one of the smartest, nicest, most honest, and most generous people you will ever meet. In so many ways, he has helped direct and shape my career into what it is today. He has mentored me and promoted me by giving me my first speaking opportunities at Local U events, collaborated with me on research projects, and recommended me as a speaker at important industry conferences. And now, he has passed on one of the most important resources in our industry into my care. I am extremely grateful.

Thank you, David, for all that you have done for me personally, and for the local search industry. I am sure I speak for all who know you personally and those that know you through your work in this space; we wish you great success with your new venture!

I’m excited to dig into the results, so without further ado, read below for my observations, or:

Click here for the full results!

Shifting priorities

Here are the results of the thematic factors in 2017, compared to 2015:

Thematic Factors (2015 weighting → 2017 weighting, change):

  • GMB Signals: 21.63% → 19.01% (-12.11%)
  • Link Signals: 14.83% → 17.31% (+16.73%)
  • On-Page Signals: 14.23% → 13.81% (-2.95%)
  • Citation Signals: 17.14% → 13.31% (-22.36%)
  • Review Signals: 10.80% → 13.13% (+21.53%)
  • Behavioral Signals: 8.60% → 10.17% (+18.22%)
  • Personalization: 8.21% → 9.76% (+18.81%)
  • Social Signals: 4.58% → 3.53% (-22.89%)

If you look at the Change column, you might get the impression that there were some major shifts in priorities this year, but the Change number doesn’t tell the whole story. Social factors may have seen the biggest drop with a -22.89% change, but a shift in emphasis on social factors from 4.58% to 3.53% isn’t particularly noteworthy.

The decreased emphasis on citations compared to the increased emphasis on link and review factors is reflective of a shifting focus, but as I'll discuss below, citations are still crucial to laying down a proper foundation in local search. We're just getting smarter about how far you need to go with them.

The importance of proximity

For the past two years, Physical Address in City of Search has been the #1 local pack/finder ranking factor. This makes sense. It’s tough to rank in the local pack of a city that you’re not physically located in.

Well, as of this year’s survey, the new #1 factor is… drumroll please…

Proximity of Address to the Point of Search

This factor has been climbing from position #8 in 2014, to position #4 in 2015, to claim the #1 spot in 2017. I’ve been seeing this factor’s increased importance for at least the past year, and clearly others have noticed as well. As I note in my recent post on proximity, this leads to poor results in most categories. I’m looking for the best lawyer in town, not the closest one. Hopefully we see the dial get turned down on this in the near future.

While Proximity of Address to the Point of Search is playing a stronger role than ever in the rankings, it’s certainly not the only factor impacting rankings. Businesses with higher relevancy and prominence will rank in a wider radius around their business and take a larger percentage of the local search pie. There’s still plenty to be gained from investing in local search strategies.

Here’s how the proximity factors changed from 2015 to 2017:

Proximity Factors (2015 rank → 2017 rank, change):

  • Proximity of Address to the Point of Search: #4 → #1 (+3)
  • Proximity of Address to Centroid of Other Businesses in Industry: #20 → #30 (-10)
  • Proximity of Address to Centroid: #16 → #50 (-34)

While we can see that Proximity to the Point of Search has seen a significant boost to become the new #1 factor, the other proximity factors which we once thought were extremely important have seen a major drop.

I’d caution people against ignoring Proximity of Address to Centroid, though. There is a situation where I think it still plays a role in local rankings. When you’re searching from outside of a city for a key phrase that contains the city name (Ex: Denver plumbers), then I believe Google geo-locates the search to the centroid and Proximity of Address to Centroid impacts rankings. This is important for business categories that are trying to attract searchers from outside of their city, such as attractions and hotels.

Local SEOs love links

Looking through the results and the comments, a clear theme emerges: Local SEOs are all about the links these days.

In this year’s survey results, we’re seeing significant increases for link-related factors across the board:

Local Pack/Finder Link Factors (2015 rank → 2017 rank, change):

  • Quality/Authority of Inbound Links to Domain: #12 → #4 (+8)
  • Domain Authority of Website: #6 → #6 (no change)
  • Diversity of Inbound Links to Domain: #27 → #16 (+11)
  • Quality/Authority of Inbound Links to GMB Landing Page URL: #15 → #11 (+4)
  • Quantity of Inbound Links to Domain: #34 → #17 (+17)
  • Quantity of Inbound Links to Domain from Locally Relevant Domains: #31 → #20 (+11)
  • Page Authority of GMB Landing Page URL: #24 → #22 (+2)
  • Quantity of Inbound Links to Domain from Industry-Relevant Domains: #41 → #28 (+13)
  • Product/Service Keywords in Anchor Text of Inbound Links to Domain: #33 in 2017 (+17)
  • Location Keywords in Anchor Text of Inbound Links to Domain: #45 → #38 (+7)
  • Diversity of Inbound Links to GMB Landing Page URL: #39 in 2017 (+11)
  • Quantity of Inbound Links to GMB Landing Page URL from Locally Relevant Domains: #48 in 2017 (+2)

Google is still leaning heavily on links as a primary measure of a business’ authority and prominence, and the local search practitioners that invest time and resources to secure quality links for their clients are reaping the ranking rewards.

Fun fact: “links” appears 76 times in the commentary.

By comparison, “citations” were mentioned 32 times, and “reviews” were mentioned 45 times.

Shifting priorities with citations

At first glance at all the declining factors in the table below, you might think that yes, citations have declined in importance, but the situation is more nuanced than that.

Local Pack/Finder Citation Factors (2015 rank → 2017 rank, change):

  • Consistency of Citations on The Primary Data Sources: n/a → #5
  • Quality/Authority of Structured Citations: #5 → #8 (-3)
  • Consistency of Citations on Tier 1 Citation Sources: n/a → #9
  • Quality/Authority of Unstructured Citations (Newspaper Articles, Blog Posts, Gov Sites, Industry Associations): #18 → #21 (-3)
  • Quantity of Citations from Locally Relevant Domains: #21 → #29 (-8)
  • Prominence on Key Industry-Relevant Domains: n/a → #37
  • Quantity of Citations from Industry-Relevant Domains: #19 → #40 (-21)
  • Enhancement/Completeness of Citations: n/a → #44
  • Proper Category Associations on Aggregators and Tier 1 Citation Sources: n/a → #45
  • Quantity of Structured Citations (IYPs, Data Aggregators): #14 → #47 (-33)
  • Consistency of Structured Citations: #2 → n/a
  • Quantity of Unstructured Citations (Newspaper Articles, Blog Posts): #39 in 2017 (-11)

You’ll notice that there are many “n/a” cells on this table. This is because I made some changes to the citation factors. I elaborate on this in the survey results, but for your quick reference here:

  1. To reflect the reality that you don’t need to clean up your citations on hundreds of sites, Consistency of Structured Citations has been broken down into 4 new factors:
    1. Consistency of Citations on The Primary Data Sources
    2. Consistency of Citations on Tier 1 Citation Sources
    3. Consistency of Citations on Tier 2 Citation Sources
    4. Consistency of Citations on Tier 3 Citation Sources
  2. I added these new citation factors:
    1. Enhancement/Completeness of Citations
    2. Presence of Business on Expert-Curated “Best of” and Similar Lists
    3. Prominence on Key Industry-Relevant Domains
    4. Proper Category Associations on Aggregators and Top Tier Citation Sources

Note that there are now more citation factors showing up, so some of the scores given to citation factors in 2015 are now being split across multiple factors in 2017:

  • In 2015, there were 7 citation factors in the top 50
  • In 2017, there are 10 citation factors in the top 50

That said, overall, I do think that the emphasis on citations has seen some decline (certainly in favor of links), and rightly so. In particular, there is an increasing focus on quality over quantity.

I was disappointed to see that Presence of Business on Expert-Curated “Best of” and Similar Lists didn’t make the top 50. I think this factor can provide a significant boost to a business’ local prominence and, in turn, their rankings. Granted, it’s a challenging factor to directly influence, but I would love to see an agency make a concerted effort to outreach to get their clients listed on these, measure the impact, and do a case study. Any takers?

GMB factors

There is no longer an editable description on your GMB listing, so any factors related to the GMB description field were removed from the survey. This is a good thing, since the field was typically poorly used, or abused, in the past. Google is on record saying that they didn’t use it for ranking, so stuffing it with keywords has always been more likely to get you penalized than to help you rank.

Here are the changes in GMB factors:

GMB Factors (2015 rank → 2017 rank, change):

  • Proper GMB Category Associations: #3 → #3 (no change)
  • Product/Service Keyword in GMB Business Title: #7 → #7 (no change)
  • Location Keyword in GMB Business Title: #17 → #12 (+5)
  • Verified GMB Listing: #13 → #13 (no change)
  • GMB Primary Category Matches a Broader Category of the Search Category (e.g. primary category=restaurant & search=pizza): #22 → #15 (+7)
  • Age of GMB Listing: #23 → #25 (-2)
  • Local Area Code on GMB Listing: #33 → #32 (+1)
  • Association of Photos with GMB Listing: #36 in 2017 (+14)
  • Matching Google Account Domain to GMB Landing Page Domain: #36 (-14)

While we did see some upward movement in the Location Keyword in GMB Business Title factor, I’m shocked to see that Product/Service Keyword in GMB Business Title did not also go up this year. It is hands-down one of the strongest factors in local pack/finder rankings. Maybe THE strongest, after Proximity of Address to the Point of Search. It seems to me that everyone and their dog is complaining about how effective this is for spammers.

Be warned: if you decide to stuff your business title with keywords, international spam hunter Joy Hawkins will probably hunt your listing down and get you penalized. 🙂

Also, remember what happened back when everyone was spamming links with private blog networks, and then got slapped by the Penguin Update? Google has a complete history of changes to your GMB listing, and they could decide at any time to roll out an update that will retroactively penalize your listing. Is it really worth the risk?

Age of GMB Listing might have dropped two spots, but it was ranked extremely high by Joy Hawkins and Colan Neilsen. They’re both top contributors at the Google My Business forum, and I’m not saying they know something we don’t know, but uh, maybe they know something we don’t know.

Association of Photos with GMB Listing is a factor that I’ve heard some chatter about lately. It didn’t make the top 50 in 2015, but now it’s coming in at #36. Apparently, some Google support people have said it can help your rankings. I suppose it makes sense as a quality consideration. Listings with photos might indicate a more engaged business owner. I wonder if it matters whether the photos are uploaded by the business owner, or if it’s a steady stream of incoming photo uploads from the general public to the listing. I can imagine that a business that’s regularly getting photo uploads from users might be a signal of a popular and important business.

While this factor came in as somewhat benign in the Negative Factors section (#26), No Hours of Operation on GMB Listing might be something to pay attention to, as well. Nick Neels noted in the comments:

Our data showed listings that were incomplete and missing hours of operation were highly likely to be filtered out of the results and lose visibility. As a result, we worked with our clients to gather hours for any listings missing them. Once the hours of operation were uploaded, the listings no longer were filtered.

Behavioral factors

Here are the numbers:

Behavioral Factors (2015 rank → 2017 rank, change):

  • Clicks to Call Business: #38 → #35 (+3)
  • Driving Directions to Business Clicks: #29 → #43 (-14)

Not very exciting, but these numbers do NOT reflect the serious impact that behavioral factors are having on local search rankings and the increased impact they will have in the future. In fact, we’re never going to get numbers that truly reflect the value of behavioral factors, because many of the factors that Google has access to are inaccessible and unmeasurable by SEOs. The best place to get a sense of the impact of these factors is in the comments. When asked about what he’s seeing driving rankings this year, Phil Rozek notes:

There seem to be more “black box” ranking scenarios, which to me suggests that behavioral factors have grown in importance. What terms do people type in before clicking on you? Where do those people search from? How many customers click on you rather than on the competitor one spot above you? If Google moves you up or down in the rankings, will many people still click? I think we’re somewhere past the beginning of the era of mushy ranking factors.

Mike Blumenthal also talks about behavioral factors in his comments:

Google is in a transition period from a web-based linking approach to a knowledge graph semantic approach. As we move towards a mobile-first index, the lack of linking as a common mobile practice, voice search, and single-response answers, Google needs to and has been developing ranking factors that are not link-dependent. Content, actual in-store visitations, on-page verifiable truth, third-party validation, and news-worthiness are all becoming increasingly important.

But Google never throws anything away. Citations and links as we have known them will continue to play a part in the ranking algo, but they will be less and less important as Google increases their understanding of entity prominence and the real world.

And David Mihm says:

It’s a very difficult concept to survey about, but the overriding ranking factor in local — across both pack and organic results — is entity authority. Ask yourself, “If I were Google, how would I define a local entity, and once I did, how would I rank it relative to others?” and you’ll have the underlying algorithmic logic for at least the next decade.

    • How widely known is the entity? Especially locally, but oh man, if it’s nationally known, searchers should REALLY know about it.
    • What are people saying about the entity? (It should probably rank for similar phrases)
    • What is the engagement with the entity? Do people recognize it when they see it in search results? How many Gmail users read its newsletter? How many call or visit it after seeing it in search results? How many visit its location?

David touches on this topic in the survey response above, and then goes full BEAST MODE on the future of local rankings in his must-read post on Tidings, The Difference-Making Local Ranking Factor of 2020. (David, thank you for letting me do the Local Search Ranking Factors, but please, don’t ever leave us.)

The thing is, Google has access to so much additional data now through Chrome, Android, Maps, Ads, and Search. They’d be crazy to not use this data to help them understand which businesses are favored by real, live humans, and then rank those businesses accordingly. You can’t game this stuff, folks. In the future, my ranking advice might just be: “Be an awesome business that people like and that people interact with.” Fortunately, David thinks we have until 2020 before this really sets in, so we have a few years left of keyword-stuffing business titles and building anchor text-optimized links. Phew.

To survey or to study? That is not the question

I’m a fan of Andrew Shotland’s and Dan Leibson’s Local SEO Ranking Factors Study. I think that the yearly Local Search Ranking Factors Survey and the yearly (hopefully) Local SEO Ranking Factors Study nicely complement each other. It’s great to see some hard data on what factors correlate with rankings. It confirms a lot of what the contributors to this survey are intuitively seeing impact rankings for their clients.

There are some factors that you just can’t get data for, though, and the number of these “black box” factors will continue to grow over the coming years. Factors such as:

  • Behavioral factors and entity authority, as described above. I don’t think Google is going to give SEOs this data anytime soon.
  • Relevancy. It’s tough to measure a general relevancy score for a business from all the different sources Google could be pulling this data from.
  • Even citation consistency is hard to measure. You can get a general sense of this from tools like Moz Local or Yext, but there is no single citation consistency metric you can use to score businesses by. The ecosystem is too large, too complicated, and too nuanced to get a value for consistency across all the location data that Google has access to.

The survey, on the other hand, aggregates opinions from the people that are practicing and studying local search day in and day out. They do work for clients, test things, and can see what had a positive impact on rankings and what didn’t. They can see that when they built out all of the service pages for a local home renovations company, their rankings across the board went up through increased relevancy for those terms. You can’t analyze these kinds of impacts with a quantitative study like the Local SEO Ranking Factors Study. It takes some amount of intuition and insight, and while the survey approach certainly has its flaws, it does a good job of surfacing those insights.

Going forward, I think there is great value in both the survey to get the general sense of what’s impacting rankings, and the study to back up any of our theories with data — or to potentially refute them, as they may have done with city names in webpage title tags. Andrew and Dan’s empirical study gives us more clues than we had before, so I’m looking forward to seeing what other data sources they can pull in for future editions.

Possum’s impact has been negligible

Other than Proper GMB Category Associations, which is definitely seeing a boost because of Possum, you can look at the results in this section more from the perspective of “this is what people are focusing on more IN GENERAL.” Possum hasn’t made much of an impact on what we do to rank businesses in local. It has simply added another point of failure in cases where a business gets filtered.

One question that’s still outstanding in my mind is: what do you do if you are filtered? Why is one business filtered and not the other? Can you do some work to make your business rank and demote the competitor to the filter? Is it more links? More relevancy? Hopefully someone puts out some case studies soon on how to defeat the dreaded Possum filter (paging Joy Hawkins).

Focusing on More Since Possum:

  1. Proximity of Address to the Point of Search
  2. Proper GMB Category Associations
  3. Quality/Authority of Inbound Links to Domain
  4. Quantity of Inbound Links to Domain from Locally Relevant Domains
  5. Click-Through Rate from Search Results

Focusing on Less Since Possum:

  1. Proximity of Address to Centroid
  2. Physical Address in City of Search
  3. Proximity of Address to Centroid of Other Businesses in Industry
  4. Quantity of Structured Citations (IYPs, Data Aggregators)
  5. Consistency of Citations on Tier 3 Citation Sources

Foundational factors vs. competitive difference-makers

There are many factors in this survey that I’d consider table stakes. To get a seat at the rankings table, you must at least have these factors in order. Then there are the factors which I’d consider competitive difference-makers. These are the factors that, once you have a seat at the table, will move your rankings beyond your competitors. It’s important to note that you need BOTH. You probably won’t rank with only the foundation unless you’re in an extremely low-competition market, and you definitely won’t rank if you’re missing that foundation, no matter how many links you have.

This year I added a section to try to get a sense of what the local search experts consider foundational factors and what they consider to be competitive difference-makers. Here are the top 5 in these two categories:

Foundational:

  1. Proper GMB Category Associations
  2. Consistency of Citations on the Primary Data Sources
  3. Physical Address in City of Search
  4. Proximity of Address to the Point of Search (Searcher-Business Distance)
  5. Consistency of Citations on Tier 1 Citation Sources

Competitive Difference Makers:

  1. Quality/Authority of Inbound Links to Domain
  2. Quantity of Inbound Links to Domain from Industry-Relevant Domains
  3. Quality/Authority of Inbound Links to GMB Landing Page URL
  4. Quantity of Inbound Links to Domain from Locally Relevant Domains
  5. Quantity of Native Google Reviews (with text)

I love how you can look at just these 10 factors and pretty much extract the basics of how to rank in local:

“You need to have a physical location in the city you’re trying to rank in, and it’s helpful for it to be close to the searcher. Then, make sure to have the proper categories associated with your listing, and get your citations built out and consistent on the most important sites. Now, to really move the needle, focus on getting links and reviews.”

This is the much over-simplified version, of course, so I suggest you dive into the full survey results for all the juicy details. The amount of commentary from participants is double what it was in 2015, and it’s jam-packed with nuggets of wisdom. Well worth your time.

Got your coffee? Ready to dive in?

Take a look at the full results

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Posted by Tidings, a genius service that automatically generates perfectly branded newsletters by pulling in the content from your Facebook page and leading content sources in your industry. While he will certainly still be connected to the local search industry, he’s spending less time on local search research, and has passed the reins to me to run the survey.

David is one of the smartest, nicest, most honest, and most generous people you will ever meet. In so many ways, he has helped direct and shape my career into what it is today. He has mentored me and promoted me by giving me my first speaking opportunities at Local U events, collaborated with me on research projects, and recommended me as a speaker at important industry conferences. And now, he has passed on one of the most important resources in our industry into my care. I am extremely grateful.

Thank you, David, for all that you have done for me personally, and for the local search industry. I am sure I speak for all who know you personally and those that know you through your work in this space; we wish you great success with your new venture!

I’m excited to dig into the results, so without further ado, read below for my observations, or:

Click here for the full results!

Shifting priorities

Here are the results of the thematic factors in 2017, compared to 2015:

Thematic Factors

2015

2017

Change

GMB Signals

21.63%

19.01%

-12.11%

Link Signals

14.83%

17.31%

+16.73%

On-Page Signals

14.23%

13.81%

-2.95%

Citation Signals

17.14%

13.31%

-22.36%

Review Signals

10.80%

13.13%

+21.53%

Behavioral Signals

8.60%

10.17%

+18.22%

Personalization

8.21%

9.76%

+18.81%

Social Signals

4.58%

3.53%

-22.89%

If you look at the Change column, you might get the impression that there were some major shifts in priorities this year, but the Change number doesn’t tell the whole story. Social factors may have seen the biggest drop with a -22.89% change, but a shift in emphasis on social factors from 4.58% to 3.53% isn’t particularly noteworthy.

The decreased emphasis on citations compared to the increased emphasis on link and review factors, is reflective of shifting focus, but as I’ll discuss below, citations are still crucial to laying down a proper foundation in local search. We’re just getting smarter about how far you need to go with them.

The importance of proximity

For the past two years, Physical Address in City of Search has been the #1 local pack/finder ranking factor. This makes sense. It’s tough to rank in the local pack of a city that you’re not physically located in.

Well, as of this year’s survey, the new #1 factor is… drumroll please…

Proximity of Address to the Point of Search

This factor has been climbing from position #8 in 2014, to position #4 in 2015, to claim the #1 spot in 2017. I’ve been seeing this factor’s increased importance for at least the past year, and clearly others have noticed as well. As I note in my recent post on proximity, this leads to poor results in most categories. I’m looking for the best lawyer in town, not the closest one. Hopefully we see the dial get turned down on this in the near future.

While Proximity of Address to the Point of Search is playing a stronger role than ever in the rankings, it’s certainly not the only factor impacting rankings. Businesses with higher relevancy and prominence will rank in a wider radius around their business and take a larger percentage of the local search pie. There’s still plenty to be gained from investing in local search strategies.

Here’s how the proximity factors changed from 2015 to 2017:

Proximity Factors

2015

2017

Change

Proximity of Address to the Point of Search

#4

#1

+3

Proximity of Address to Centroid of Other Businesses in Industry

#20

#30

-10

Proximity of Address to Centroid

#16

#50

-34

While we can see that Proximity to the Point of Search has seen a significant boost to become the new #1 factor, the other proximity factors which we once thought were extremely important have seen a major drop.

I’d caution people against ignoring Proximity of Address to Centroid, though. There is a situation where I think it still plays a role in local rankings. When you’re searching from outside of a city for a key phrase that contains the city name (Ex: Denver plumbers), then I believe Google geo-locates the search to the centroid and Proximity of Address to Centroid impacts rankings. This is important for business categories that are trying to attract searchers from outside of their city, such as attractions and hotels.

Local SEOs love links

Looking through the results and the comments, a clear theme emerges: Local SEOs are all about the links these days.

In this year’s survey results, we’re seeing significant increases for link-related factors across the board:

Local Pack/Finder Link Factors

2015

2017

Change

Quality/Authority of Inbound Links to Domain

#12

#4

+8

Domain Authority of Website

#6

#6

Diversity of Inbound Links to Domain

#27

#16

+11

Quality/Authority of Inbound Links to GMB Landing Page URL

#15

#11

+4

Quantity of Inbound Links to Domain

#34

#17

+17

Quantity of Inbound Links to Domain from Locally Relevant Domains

#31

#20

+11

Page Authority of GMB Landing Page URL

#24

#22

+2

Quantity of Inbound Links to Domain from Industry-Relevant Domains

#41

#28

+13

Product/Service Keywords in Anchor Text of Inbound Links to Domain

#33

+17

Location Keywords in Anchor Text of Inbound Links to Domain

#45

#38

+7

Diversity of Inbound Links to GMB Landing Page URL

#39

+11

Quantity of Inbound Links to GMB Landing Page URL from LocallyRelevant Domains

#48

+2

Google is still leaning heavily on links as a primary measure of a business’ authority and prominence, and the local search practitioners that invest time and resources to secure quality links for their clients are reaping the ranking rewards.

Fun fact: “links” appears 76 times in the commentary.

By comparison, “citations” were mentioned 32 times, and “reviews” were mentioned 45 times.

Shifting priorities with citations

At first glance at all the declining factors in the table below, you might think that yes, citations have declined in importance, but the situation is more nuanced than that.

Local Pack/Finder Citation Factors

2015

2017

Change

Consistency of Citations on The Primary Data Sources

n/a

#5

n/a

Quality/Authority of Structured Citations

#5

#8

-3

Consistency of Citations on Tier 1 Citation Sources

n/a

#9

n/a

Quality/Authority of Unstructured Citations (Newspaper Articles, Blog Posts, Gov Sites, Industry Associations)

#18

#21

-3

Quantity of Citations from Locally Relevant Domains

#21

#29

-8

Prominence on Key Industry-Relevant Domains

n/a

#37

n/a

Quantity of Citations from Industry-Relevant Domains

#19

#40

-21

Enhancement/Completeness of Citations

n/a

#44

n/a

Proper Category Associations on Aggregators and Tier 1 Citation Sources

n/a

#45

n/a

Quantity of Structured Citations (IYPs, Data Aggregators)

#14

#47

-33

Consistency of Structured Citations

#2

n/a

n/a

Quantity of Unstructured Citations (Newspaper Articles, Blog Posts)

#39

-11

You’ll notice that there are many “n/a” cells on this table. This is because I made some changes to the citation factors. I elaborate on this in the survey results, but for your quick reference here:

  1. To reflect the reality that you don’t need to clean up your citations on hundreds of sites, Consistency of Structured Citations has been broken down into 4 new factors:
    1. Consistency of Citations on The Primary Data Sources
    2. Consistency of Citations on Tier 1 Citation Sources
    3. Consistency of Citations on Tier 2 Citation Sources
    4. Consistency of Citations on Tier 3 Citation Sources
  2. I added these new citation factors:
    1. Enhancement/Completeness of Citations
    2. Presence of Business on Expert-Curated “Best of” and Similar Lists
    3. Prominence on Key Industry-Relevant Domains
    4. Proper Category Associations on Aggregators and Top Tier Citation Sources

Note that there are now more citation factors showing up, so some of the scores given to citation factors in 2015 are now being split across multiple factors in 2017:

  • In 2015, there were 7 citation factors in the top 50
  • In 2017, there are 10 citation factors in the top 50

That said, overall, I do think that the emphasis on citations has seen some decline (certainly in favor of links), and rightly so. In particular, there is an increasing focus on quality over quantity.

I was disappointed to see that Presence of Business on Expert-Curated “Best of” and Similar Lists didn’t make the top 50. I think this factor can provide a significant boost to a business’ local prominence and, in turn, their rankings. Granted, it’s a challenging factor to directly influence, but I would love to see an agency make a concerted effort to outreach to get their clients listed on these, measure the impact, and do a case study. Any takers?

GMB factors

There is no longer an editable description on your GMB listing, so any factors related to the GMB description field were removed from the survey. This is a good thing, since the field was typically poorly used, or abused, in the past. Google is on record saying that they didn’t use it for ranking, so stuffing it with keywords has always been more likely to get you penalized than to help you rank.

Here are the changes in GMB factors:

GMB Factors

2015

2017

Change

Proper GMB Category Associations

#3

#3

Product/Service Keyword in GMB Business Title

#7

#7

Location Keyword in GMB Business Title

#17

#12

+5

Verified GMB Listing

#13

#13

GMB Primary Category Matches a Broader Category of the Search Category (e.g. primary category=restaurant & search=pizza)

#22

#15

+7

Age of GMB Listing

#23

#25

-2

Local Area Code on GMB Listing

#33

#32

+1

Association of Photos with GMB Listing

#36

+14

Matching Google Account Domain to GMB Landing Page Domain

#36

-14

While we did see some upward movement in the Location Keyword in GMB Business Title factor, I’m shocked to see that Product/Service Keyword in GMB Business Title did not also go up this year. It is hands-down one of the strongest factors in local pack/finder rankings. Maybe THE strongest, after Proximity of Address to the Point of Search. It seems to me that everyone and their dog is complaining about how effective this is for spammers.

Be warned: if you decide to stuff your business title with keywords, international spam hunter Joy Hawkins will probably hunt your listing down and get you penalized. 🙂

Also, remember what happened back when everyone was spamming links with private blog networks, and then got slapped by the Penguin Update? Google has a complete history of changes to your GMB listing, and they could decide at any time to roll out an update that will retroactively penalize your listing. Is it really worth the risk?

Age of GMB Listing might have dropped two spots, but it was ranked extremely high by Joy Hawkins and Colan Neilsen. They’re both top contributors at the Google My Business forum, and I’m not saying they know something we don’t know, but uh, maybe they know something we don’t know.

Association of Photos with GMB Listing is a factor that I’ve heard some chatter about lately. It didn’t make the top 50 in 2015, but now it’s coming in at #36. Apparently, some Google support people have said it can help your rankings. I suppose it makes sense as a quality consideration. Listings with photos might indicate a more engaged business owner. I wonder if it matters whether the photos are uploaded by the business owner, or if it’s a steady stream of incoming photo uploads from the general public to the listing. I can imagine that a business that’s regularly getting photo uploads from users might be a signal of a popular and important business.

While this factor came in as somewhat benign in the Negative Factors section (#26), No Hours of Operation on GMB Listing might be something to pay attention to, as well. Nick Neels noted in the comments:

Our data showed listings that were incomplete and missing hours of operation were highly likely to be filtered out of the results and lose visibility. As a result, we worked with our clients to gather hours for any listings missing them. Once the hours of operation were uploaded, the listings no longer were filtered.

Behavioral factors

Here are the numbers:

GMB Factors

2015

2017

Change

Clicks to Call Business

#38

#35

+3

Driving Directions to Business Clicks

#29

#43

-14

Not very exciting, but these numbers do NOT reflect the serious impact that behavioral factors are having on local search rankings and the increased impact they will have in the future. In fact, we’re never going to get numbers that truly reflect the value of behavioral factors, because many of the factors that Google has access to are inaccessible and unmeasurable by SEOs. The best place to get a sense of the impact of these factors is in the comments. When asked about what he’s seeing driving rankings this year, Phil Rozek notes:

There seem to be more “black box” ranking scenarios, which to me suggests that behavioral factors have grown in importance. What terms do people type in before clicking on you? Where do those people search from? How many customers click on you rather than on the competitor one spot above you? If Google moves you up or down in the rankings, will many people still click? I think we’re somewhere past the beginning of the era of mushy ranking factors.

Mike Blumenthal also talks about behavioral factors in his comments:

Google is in a transition period from a web-based linking approach to a knowledge graph semantic approach. As we move towards a mobile-first index, the lack of linking as a common mobile practice, voice search, and single-response answers, Google needs to and has been developing ranking factors that are not link-dependent. Content, actual in-store visitations, on-page verifiable truth, third-party validation, and news-worthiness are all becoming increasingly important.

But Google never throws anything away. Citations and links as we have known them will continue to play a part in the ranking algo, but they will be less and less important as Google increases their understanding of entity prominence and the real world.

And David Mihm says:

It’s a very difficult concept to survey about, but the overriding ranking factor in local — across both pack and organic results — is entity authority. Ask yourself, “If I were Google, how would I define a local entity, and once I did, how would I rank it relative to others?” and you’ll have the underlying algorithmic logic for at least the next decade.

    • How widely known is the entity? Especially locally, but oh man, if it’s nationally known, searchers should REALLY know about it.
    • What are people saying about the entity? (It should probably rank for similar phrases)
    • What is the engagement with the entity? Do people recognize it when they see it in search results? How many Gmail users read its newsletter? How many call or visit it after seeing it in search results? How many visit its location?

David touches on this topic in the survey response above, and then goes full BEAST MODE on the future of local rankings in his must-read post on Tidings, The Difference-Making Local Ranking Factor of 2020. (David, thank you for letting me do the Local Search Ranking Factors, but please, don’t ever leave us.)

The thing is, Google has access to so much additional data now through Chrome, Android, Maps, Ads, and Search. They’d be crazy to not use this data to help them understand which businesses are favored by real, live humans, and then rank those businesses accordingly. You can’t game this stuff, folks. In the future, my ranking advice might just be: “Be an awesome business that people like and that people interact with.” Fortunately, David thinks we have until 2020 before this really sets in, so we have a few years left of keyword-stuffing business titles and building anchor text-optimized links. Phew.

To survey or to study? That is not the question

I’m a fan of Andrew Shotland’s and Dan Leibson’s Local SEO Ranking Factors Study. I think that the yearly Local Search Ranking Factors Survey and the yearly (hopefully) Local SEO Ranking Factors Study nicely complement each other. It’s great to see some hard data on what factors correlate with rankings. It confirms a lot of what the contributors to this survey are intuitively seeing impact rankings for their clients.

There are some factors that you just can’t get data for, though, and the number of these “black box” factors will continue to grow over the coming years. Factors such as:

  • Behavioral factors and entity authority, as described above. I don’t think Google is going to give SEOs this data anytime soon.
  • Relevancy. It’s tough to measure a general relevancy score for a business from all the different sources Google could be pulling this data from.
  • Even citation consistency is hard to measure. You can get a general sense of this from tools like Moz Local or Yext, but there is no single citation consistency metric you can use to score businesses by. The ecosystem is too large, too complicated, and too nuanced to get a value for consistency across all the location data that Google has access to.

The survey, on the other hand, aggregates opinions from the people that are practicing and studying local search day in and day out. They do work for clients, test things, and can see what had a positive impact on rankings and what didn’t. They can see that when they built out all of the service pages for a local home renovations company, their rankings across the board went up through increased relevancy for those terms. You can’t analyze these kinds of impacts with a quantitative study like the Local SEO Ranking Factors Study. It takes some amount of intuition and insight, and while the survey approach certainly has its flaws, it does a good job of surfacing those insights.

Going forward, I think there is great value in both the survey to get the general sense of what’s impacting rankings, and the study to back up any of our theories with data — or to potentially refute them, as they may have done with city names in webpage title tags. Andrew and Dan’s empirical study gives us more clues than we had before, so I’m looking forward to seeing what other data sources they can pull in for future editions.

Possum’s impact has been negligible

Other than Proper GMB Category Associations, which is definitely seeing a boost because of Possum, you can look at the results in this section more from the perspective of “this is what people are focusing on more IN GENERAL.” Possum hasn’t made much of an impact on what we do to rank businesses in local. It has simply added another point of failure in cases where a business gets filtered.

One question that’s still outstanding in my mind is: what do you do if you are filtered? Why is one business filtered and not the other? Can you do some work to make your business rank and demote the competitor to the filter? Is it more links? More relevancy? Hopefully someone puts out some case studies soon on how to defeat the dreaded Possum filter (paging Joy Hawkins).

Focusing on More Since Possum

#1

Proximity of Address to the Point of Search

#2

Proper GMB Category Associations

#3

Quality/Authority of Inbound Links to Domain

#4

Quantity of Inbound Links to Domain from Locally Relevant Domains

#5

Click-Through Rate from Search Results

Focusing on Less Since Possum

#1

Proximity of Address to Centroid

#2

Physical Address in City of Search

#3

Proximity of Address to Centroid of Other Businesses in Industry

#4

Quantity of Structured Citations (IYPs, Data Aggregators)

#5

Consistency of Citations on Tier 3 Citation Sources

Foundational factors vs. competitive difference-makers

There are many factors in this survey that I’d consider table stakes. To get a seat at the rankings table, you must at least have these factors in order. Then there are the factors which I’d consider competitive difference-makers. These are the factors that, once you have a seat at the table, will move your rankings beyond your competitors. It’s important to note that you need BOTH. You probably won’t rank with only the foundation unless you’re in an extremely low-competition market, and you definitely won’t rank if you’re missing that foundation, no matter how many links you have.

This year I added a section to try to get a sense of what the local search experts consider foundational factors and what they consider to be competitive difference-makers. Here are the top 5 in these two categories:

Foundational

Competitive Difference Makers

#1

Proper GMB Category Associations

Quality/Authority of Inbound Links to Domain

#2

Consistency of Citations on the Primary Data Sources

Quantity of Inbound Links to Domain from Industry-Relevant Domains

#3

Physical Address in City of Search

Quality/Authority of Inbound Links to GMB Landing Page URL

#4

Proximity of Address to the Point of Search (Searcher-Business Distance)

Quantity of Inbound Links to Domain from Locally Relevant Domains

#5

Consistency of Citations on Tier 1 Citation Sources

Quantity of Native Google Reviews (with text)

I love how you can look at just these 10 factors and pretty much extract the basics of how to rank in local:

“You need to have a physical location in the city you’re trying to rank in, and it’s helpful for it to be close to the searcher. Then, make sure to have the proper categories associated with your listing, and get your citations built out and consistent on the most important sites. Now, to really move the needle, focus on getting links and reviews.”

This is the much over-simplified version, of course, so I suggest you dive into the full survey results for all the juicy details. The amount of commentary from participants is double what it was in 2015, and it’s jam-packed with nuggets of wisdom. Well worth your time.

Got your coffee? Ready to dive in?

Take a look at the full results

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


XML Sitemaps: The Most Misunderstood Tool in the SEO’s Toolbox

(Image courtesy of the Minnesota Historical Society on Flickr.)


Overall site quality

It appears that Google is using some measure of overall site quality, and using that site-wide metric to affect rankings — and I’m not talking about links here.

Think about this from Google’s perspective. Suppose you have one great page full of fabulous content that ticks all the boxes, from relevance to Panda to social media engagement. If Google sees your site as 1,000 pages of content, of which only 5–6 pages are like that one great page… well, if Google sends a user to one of those great pages, what’s the user experience going to be like if they click a link on that page and visit something else on your site? Chances are, they’re going to land on a page that sucks. That’s bad UX. Why would Google want to send a user to a site like that?

Google engineers certainly understand that every site has a certain number of “utility” pages that are useful to users, but aren’t necessarily content pages that should be landing pages from search: pages for sharing content with others, replying to comments, logging in, retrieving a lost password, etc.

If your XML sitemap includes all of these pages, what are you communicating to Google? More or less that you have no clue as to what constitutes good content on your site and what doesn’t.

Here’s the picture you want to paint for Google instead: yes, we have a site here with 1,000 pages… and here are the 475 of those 1,000 that are our great content pages. You can ignore the others — they’re utility pages.

Now, let’s say Google crawls those 475 pages and, with their metrics, decides that 175 of them are “A” grade, 200 are “B+,” and 100 are “B” or “B-.” That’s a pretty good overall average, and it probably indicates a fairly solid site to send users to.

Contrast that with a site that submits all 1,000 pages via the XML sitemap. Now Google looks at the 1,000 pages you say are good content and sees that over 50% are “D” or “F” pages. On average, your site is pretty poor; Google probably doesn’t want to send users to a site like that.


The hidden fluff

Remember, Google is going to use what you submit in your XML sitemap as a clue to what’s probably important on your site. But just because a page isn’t in your XML sitemap doesn’t necessarily mean that Google will ignore it. You could still have thousands of pages with barely enough content and link equity to get them indexed, but that really shouldn’t be indexed.

It’s important to do a site: search to see all the pages that Google is indexing from your site, in order to discover pages that you forgot about and clean them out of that “average grade” Google is going to give your site, by setting meta robots “noindex,follow” (or by blocking them in robots.txt). Generally, the weakest pages that still made it into the index will be listed last in a site: search.


Noindex vs. robots.txt

There’s an important but subtle difference between using meta robots and using robots.txt to prevent indexation of a page. Using meta robots “noindex,follow” allows the link equity going to that page to flow out to the pages it links to. If you block the page with robots.txt instead, you’re just flushing that equity down the toilet.

In the example above, I’m blocking pages that aren’t real pages — they’re tracking scripts — so I’m not losing link equity, because these pages don’t have the header with the main menu links, etc.

Think about a page like a Contact Us page, or a Privacy Policy page — probably linked to by every single page on your site via either the main menu or the footer menu. So there’s a ton of link equity going to those pages; do you want to just throw that away? Or would you rather let that link equity flow out to everything in your main menu? Easy question to answer, isn’t it?
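
To make the distinction concrete, here’s a minimal sketch of the two directives, written as Python strings with comments so it stays in one language; the /password-reset/ path is a made-up example, not a recommendation:

    # Two ways to keep a page out of the index, shown as illustrative strings.

    # 1) Meta robots in the page's <head>: the page can still be crawled, so the
    #    link equity it receives can flow out through its links; it just isn't indexed.
    META_ROBOTS_TAG = '<meta name="robots" content="noindex,follow">'

    # 2) robots.txt: the page is never fetched at all, so equity pointing at it
    #    goes nowhere.
    ROBOTS_TXT_RULE = "User-agent: *\nDisallow: /password-reset/"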


Crawl bandwidth management

When might you actually want to use robots.txt instead? Perhaps if you’re having crawl bandwidth issues and Googlebot is spending lots of time fetching utility pages, only to discover meta robots “noindex,follow” in them and having to bail out. If you have so many of these that Googlebot isn’t getting to your important pages, then you may have to block via robots.txt.

I’ve seen a number of clients see ranking improvements across the board by cleaning up their XML sitemaps and noindexing their utility pages:

Do I really have 6,000 to 20,000 pages that need crawling daily? Or is Googlebot chasing reply-to-comment or share-via-email URLs?

For what it’s worth, if you have a core set of pages where content changes regularly (like a blog, new products, or product category pages), and you have a ton of pages (like single product pages) where it’d be nice if Google indexed them, but not at the expense of not re-crawling and indexing the core pages, you can submit the core pages in an XML sitemap to give Google a clue that you consider them more important than the ones that aren’t blocked, but also aren’t in the sitemap.


Indexation problem debugging

Here’s where the XML sitemap is really useful to SEOs: when you’re submitting a bunch of pages to Google for indexing, and only some of them are actually being indexed. Google Search Console won’t tell you which pages they’re indexing, only an overall number indexed for each XML sitemap.

Let’s say you’re an e-commerce site and you have 100,000 product pages, 5,000 category pages, and 20,000 subcategory pages. You submit your XML sitemap of 125,000 pages, and find out that Google is indexing 87,000 of them. But which 87,000?

First of all, your category and subcategory pages are probably all important search targets for you. I’d create a category-sitemap.xml and subcategory-sitemap.xml and submit those separately. You’re expecting to see near 100% indexation there — and if you’re not getting it, then you know you need to look at building out more content on those pages, increasing link equity to them, or both. You might discover something like product category or subcategory pages that aren’t being indexed because they have only 1 product in them (or none at all) — in which case you probably want to set meta robots “noindex,follow” on those, and pull them from the XML sitemap.

Chances are, the problem lies in some of the 100,000 product pages — but which ones?

Start with a hypothesis, and split your product pages into different XML sitemaps to test those hypotheses. You can do several at once — there’s nothing wrong with having a URL appear in multiple sitemaps.

You might start with three theories:

  1. Pages that don’t have a product image aren’t being indexed
  2. Pages that have less than 200 words of unique description aren’t being indexed
  3. Pages that don’t have comments/reviews aren’t being indexed

Create an XML sitemap with a meaningful number of pages that fall into each of those categories. It doesn’t need to be all the pages in that category — just enough that the sample size makes it reasonable to draw a conclusion based on the indexation. You might do 100 pages in each, for example.

Your goal here is to use the overall percent indexation of any given sitemap to identify attributes of pages that are causing them to get indexed or not get indexed.
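
Here’s a rough sketch of how you might generate those test sitemaps programmatically. It’s a minimal Python example in which the product list and its field names (url, image, description, reviews) are placeholders for whatever your catalog actually exposes:

    # Build one small test sitemap per hypothesis from a product catalog.
    # The "products" list below is a stand-in for your real data source.
    import random
    from xml.etree import ElementTree as ET

    products = [
        {"url": "https://example.com/p/1", "image": "", "description": "short", "reviews": 0},
        {"url": "https://example.com/p/2", "image": "p2.jpg", "description": "a " * 250, "reviews": 4},
    ]

    def write_sitemap(urls, filename):
        """Write a bare-bones <urlset> sitemap containing the given URLs."""
        urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
        for url in urls:
            ET.SubElement(ET.SubElement(urlset, "url"), "loc").text = url
        ET.ElementTree(urlset).write(filename, encoding="utf-8", xml_declaration=True)

    hypotheses = {
        "no-image-sitemap.xml":   lambda p: not p["image"],
        "thin-desc-sitemap.xml":  lambda p: len(p["description"].split()) < 200,
        "no-reviews-sitemap.xml": lambda p: p["reviews"] == 0,
    }

    for filename, matches in hypotheses.items():
        matching_urls = [p["url"] for p in products if matches(p)]
        sample = random.sample(matching_urls, min(100, len(matching_urls)))
        write_sitemap(sample, filename)

Submit each file in Search Console and compare the indexed percentage across the three.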

Once you know what the problem is, you can either modify the page content (or the links to the pages), or noindex the pages. For example, you might have 20,000 of your 100,000 product pages where the product description is under 50 words. If these aren’t big-traffic terms and you’re getting the descriptions from a manufacturer’s feed, it’s probably not worth your while to manually write an additional 200 words of description for each of those 20,000 pages. You might as well set meta robots to “noindex,follow” for all pages with less than 50 words of product description, since Google isn’t going to index them anyway and they’re just bringing down your overall site quality rating. And don’t forget to remove those from your XML sitemap.


Dynamic XML sitemaps

Now you’re thinking, “OK, great, Michael. But now I have to manually keep my XML sitemap in sync with the meta robots on all 100,000 of my pages,” and that’s not likely to happen.

But there’s no need to do this manually. XML sitemaps don’t have to be static files. In fact, they don’t even need a .XML extension to submit them in Google Search Console.

Instead, set up rules logic for whether a page gets included in the XML sitemap or not, and use that same logic in the page itself to set meta robots to index or noindex. That way, the moment that product description from the manufacturer’s feed gets updated and goes from 42 words to 215 words, that page on your site magically shows up in the XML sitemap and gets its meta robots set to “index,follow.”

On my travel website, I do this for a number of different kinds of pages. I’m using classic ASP for those pages, so I have sitemaps like this:

  • https://www.visualitineraries.com/ItinSiteMap.asp

When these sitemaps are fetched, instead of rendering an HTML page, the server-side code simply spits back the XML. This one iterates over a set of records in one of my database tables and spits out a record for each one that meets a certain criterion.
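
The author’s implementation is in classic ASP; as a rough illustration of the same idea, here’s a minimal Python sketch in which one shared predicate drives both the meta robots tag and sitemap inclusion. The rule, field names, and thresholds are assumptions, not the site’s actual logic:

    # One shared rule decides both "is this page in the sitemap?" and
    # "does this page get index,follow or noindex,follow?"
    def is_index_worthy(product):
        """Single source of truth for index-worthiness (illustrative rule only)."""
        return bool(product["image"]) and len(product["description"].split()) >= 50

    def meta_robots_tag(product):
        """Rendered into the page template's <head>."""
        content = "index,follow" if is_index_worthy(product) else "noindex,follow"
        return '<meta name="robots" content="{}">'.format(content)

    def sitemap_xml(products):
        """Returned by the dynamic sitemap endpoint: only index-worthy pages are listed."""
        entries = "\n".join(
            "  <url><loc>{}</loc></url>".format(p["url"])
            for p in products if is_index_worthy(p)
        )
        return ('<?xml version="1.0" encoding="UTF-8"?>\n'
                '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
                + entries + "\n</urlset>")

Because both functions call the same is_index_worthy() rule, the sitemap and the meta robots can never drift out of sync.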


Video sitemaps

Oh, and what about those annoying video XML sitemaps? They’re so 2015. Wistia doesn’t even bother generating them anymore; you should just be using JSON-LD and schema.org/VideoObject markup in the page itself.
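
For reference, the on-page markup looks roughly like this. The sketch below builds schema.org/VideoObject JSON-LD in Python and wraps it in a script tag; the titles, dates, and URLs are placeholders:

    import json

    # Placeholder video metadata; swap in your real values.
    video_ld = {
        "@context": "https://schema.org",
        "@type": "VideoObject",
        "name": "Example video title",
        "description": "A short description of what the video covers.",
        "thumbnailUrl": "https://example.com/thumbnail.jpg",
        "uploadDate": "2017-05-01",
        "contentUrl": "https://example.com/video.mp4",
    }

    # Emit this inside the page's HTML:
    print('<script type="application/ld+json">' + json.dumps(video_ld, indent=2) + "</script>")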


Summary

  1. Be consistent — if a page is blocked in robots.txt or by meta robots “noindex,” then it had better not be in your XML sitemap.
  2. Use your XML sitemaps as sleuthing tools to discover and eliminate indexation problems, and only let/ask Google to index the pages you know Google is going to want to index.
  3. If you’ve got a big site, use dynamic XML sitemaps — don’t try to manually keep all this in sync between robots.txt, meta robots, and the XML sitemaps.

Cornfield image courtesy of Robert Nunnally on Flickr.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


[Case Study] How We Ranked #1 for a High-Volume Keyword in Under 3 Months

Posted by Brad Zomick, the former Director of Content Marketing at Pipedrive, where this case study took place.

It’s tough out there for SEOs and content marketers. With the sheer amount of quality content being produced, it has become nearly impossible to stand out in most industries.

Recently we were running content marketing for Pipedrive, a sales CRM. We created a content strategy that used educational sales content to educate and build trust with our target audience.

This was a great idea, in theory — we’d educate readers, establish trust, and turn some of our readers into customers.

The problem is that there are already countless others producing similar sales-focused content. We weren’t just competing against other startups for readers; we also had to contend with established companies, sales trainers, strategists, bloggers and large business sites.

The good news is that ranking a strategic keyword is still very much possible. It’s certainly not easy, but with the right process, anyone can rank for their target keyword.

Below, we’re going to show you the process we used to rank on page one for a high-volume keyword.

If you’re not sure about reading ahead, here is a quick summary:

We were able to rank #1 for a high-volume keyword: “sales management” (9,900 search volume). We outranked established sites including SalesManagement.org, Apptus, InsightSquared, Docurated, and even US News, Wikipedia, and the Bureau of Labor Statistics. We managed this through good old-fashioned content creation + outreach + guest posting, aka the “Skyscraper Technique.”

Here are the eight steps we took to reach our goal (click on a step to jump straight to that section):

  1. Select the right topic
  2. Create bad-ass content for our own blog
  3. Optimize on-page SEO & engagement metrics
  4. Build internal links
  5. Find people who would link to this content
  6. Ask people to link to our content
  7. Write guest posts on leading blogs
  8. Fine-tuning content with TF * IDF

Before we start, understand that this is a labor-intensive process. Winning a top SERP spot required the focus of a 3-person team for the better part of 3 months.

If you’re willing to invest a similar amount of time and effort, read on!


Step 1: Finding a good topic

We wanted three things from our target keyword:

1. Significant keyword volume

If you’re going to spend months ranking for a single keyword, you need to pick something big enough to justify the effort.

In our case, we settled on a keyword with 9,900 searches each month as per the Keyword Planner (1k–10k range after the last update).

That same keyword registered a search volume of 1.7–2.9k in Moz Keyword Explorer, so take AdWords’ estimates with a grain of salt.

One way to settle on a target volume is to see it in terms of your conversion rate and buyer’s journey:

  • Buyer’s journey: Search volume decreases as customers move further along the buyer’s journey. Fewer searches are okay if you’re targeting Decision-stage keywords.
  • Conversion rate: The stronger your conversion rate for each stage of the buyer’s journey, the more you can get away with by targeting a low search volume keyword.

Also consider the actual traffic from the keyword, not just search volume.

For instance, we knew from Moz’s research that the first result gets about 30% of all clicks.

For a keyword with 9,900 search volume, this would translate into over 3,000 visitors/month for a top position.

If we could convert even 5% of these into leads, we’d net over 1,800 leads each year, which makes it worth our time.
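
As a quick back-of-the-envelope check of that math (the 30% click-through and 5% conversion figures are the assumptions quoted above):

    searches_per_month = 9_900
    ctr_top_result = 0.30        # ~30% of clicks go to the #1 result
    visitor_to_lead = 0.05       # assumed visitor-to-lead conversion rate

    visitors_per_month = searches_per_month * ctr_top_result          # ~2,970
    leads_per_year = visitors_per_month * visitor_to_lead * 12        # ~1,782
    print(round(visitors_per_month), round(leads_per_year))           # roughly 3,000 and 1,800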

2. Pick a winnable topic

Some SERPs are incredibly competitive. For instance, if you’re trying to rank for “content marketing,” you’ll find that the first page is dominated by CMI (DA 84):

You might be able to fight out a first-page rank, but it’s really not worth the effort in 99% of cases.

So our second requirement was to see if we could actually rank for our shortlisted keywords.

This can be done in one of two ways:

Informal method

The old-fashioned way to gauge keyword difficulty is to simply eyeball SERPs for your selected keywords.

If you see a lot of older articles, web 1.0 pages, unrecognizable brands, and generic content sites, the keyword should be solid.

On the other hand, if the first page is dominated by big niche brands with in-depth articles, you’ll have a hard time ranking well.

I also recommend using the MozBar to check metrics on the fly. If you see a ton of high DA/PA pages, move on to another keyword.

In our case, the top results mostly consisted of generic content sites or newish domains.

Moz Keyword Explorer

Moz’s Keyword Explorer gives you a more quantifiable way to gauge keyword difficulty. You’ll get actual difficulty vs. potential scores.

Aim for a competitiveness score under 50 and opportunity/potential scores above 50. If you get scores beyond this threshold, keep looking.

Of course, if you have an established domain, you can target more difficult keywords.

Following this step, we had a shortlist of four keywords:

  1. sales techniques (8100)
  2. sales process (8100)
  3. sales management (9900)
  4. sales forecast (4400)

We could have honestly picked anything from this list, but for added impact, we decided to add another filter.

3. Strategic relevance

If you’re going to turn visitors into leads, it’s important to focus on keywords that are strategically relevant to your conversion goals.

In our case, we chose “sales management” as the target keyword.

We did this because Pipedrive is a sales management tool, so the keyword describes us perfectly.

Additionally, a small business owner searching for “sales management” has likely moved from Awareness to Consideration and thus, is one step closer to buying.

In contrast, “sales techniques” and “sales forecast” are keywords a sales person would search for, not a sales leader or small business owner (decision-makers).


Step 2: Writing a bad-ass piece of content

Content might not be king anymore, but it is still the foundation of good SEO. We wanted to get this part absolutely right.

Here’s the process we followed to create our content:

1. Extremely thorough research

We had a simple goal from the start: create something substantially better than anything in the top SERPs.

To get there, we started by reviewing every article ranking for “sales management,” noting what we liked and what we didn’t.

For instance, we liked how InsightSquared started the article with a substantive quote. We didn’t like how Apptus went overboard with headers.

We also looked for anomalies. One thing that caught our attention was that two of the top 10 results were dedicated to the keyword “sales manager.”

We took note of this and made sure to talk about “sales managers” in our article.

We also looked at related searches at the bottom of the page:

We also scoured more than 50 sales-related books for chapters about sales management.

Finally, we also talked to some real salespeople. This step helped us add expert insight that outsourced article writers just don’t have.

At the end, we had a superior outline of what we were going to write.

2. Content creation

You don’t need to be a subject matter expert to create an excellent piece of content.

What you do need is good writing skills… and the discipline to actually finish an article.

Adopt a journalistic style where you report insight from experts. This gives you a better end-product since you’re curating insight and writing it far better than subject matter experts.

Unfortunately, there is no magic bullet to speed up the writing part — you’ll just have to grind it out. Set aside a few days at least to write anything substantive.

There are a few things we learned through the content creation experience:

  1. Don’t multi-task. Go all-in on writing and don’t stop until it’s done.
  2. Work alone. Writing is a solitary endeavor. Work in a place where you won’t be bothered by coworkers.
  3. Listen to ambient music. Search “homework edit” on YouTube for some ambient tracks, or use a site like Noisli.com

Take tip #1 as non-negotiable. We tried to juggle a couple of projects and finishing the article ended up taking two weeks. Learn from our mistake — focus on writing alone!

Before you hit publish, make sure to get some editorial feedback from someone on your team, or if possible, a professional editor.

We also added a note at the end of the article where we solicit feedback for future revisions.

If you can’t get access to editors, at the very least put your article through Grammarly.

3. Add lots of visuals and make content more readable

Getting visuals in B2B content can be surprisingly challenging. This is mostly due to the fact that there are a lot of abstract, hard-to-visualize concepts in B2B writing.

This is why we found a lot of blog posts like this with meaningless stock images:

To avoid this, we decided to use four custom images spread throughout the article.

We wanted to use visuals to:

  • Illustrate abstract concepts and ideas
  • Break up the content into more readable chunks.
  • Emphasize key takeaways in a readily digestible format

We could have done even more — prolific content creators like Neil Patel often use images every 200–300 words.

Aside from imagery, there are a few other ways to break up and highlight text to make your content more readable.

  • Section headers
  • Bullets and numbered lists
  • Small paragraphs
  • Highlighted text
  • Blockquotes
  • Use simple words

We used most of these tactics, especially blockquotes to create sub-sections.

Given our audience — sales leaders and managers — we didn’t have to bother with dumbing down our writing. But if you’re worried that your writing is too complex, try using an app like Hemingway to edit your draft.


Step 3: Optimize on-page SEO and engagement metrics

Here’s what we did to optimize on-page SEO:

1. Fix title

We wanted traffic from people searching for keywords related to “sales management,” such as:

  • “Sales management definition” (currently #2)
  • “Sales management process” (currently #1)
  • “Sales management strategies” (currently #4)
  • “Sales management resources” (currently #3)

To make sure we tapped all these keywords, we changed our main H1 header tag to include the words definition, process, strategies, and resources.

These are called “modifiers” in SEO terms.

Google is now smart enough to know that a single article can cover multiple related keywords. Adding such modifiers helped us increase our potential traffic.

2. Fix section headers

Next, we used the right headers for each section:

Instead of writing “sales management definition,” we used an actual question a reader might ask.

Here’s why:

  • It makes the article easier to read
  • It’s a natural question, which makes it more likely to rank for voice searches and Google’s “answers”

We also peppered related keywords in headers throughout the article. Note how we used the keyword at the beginning of the header, not at the end:

We didn’t want to go overboard with the keywords. Our goal was to give readers something they’d actually want to read.

This is why our <h2> tag headers did not have any obvious keywords:

This helps the article read naturally while still using our target keywords.

3. Improve content engagement

Notice the colon and the line break at the very start of the article:

This is a “bucket brigade”: an old copywriting trick to grab a reader’s attention.

We used it at the beginning of the article to stop readers from hitting the back button and going back to Google (i.e. increase our dwell time).

We also added outgoing and internal links to the article.

4. Fix URL

According to research, shorter URLs tend to rank better than longer ones.

We didn’t pay a lot of attention to the URL length when we first started blogging.

Here’s one of our blog post URLs from 2013:

Not very nice, right?

For this post, we used a simple, keyword-rich URL:

Ideally, we wouldn’t have the /2016/05/ bit, but by now, it’s too late to change.

5. Improve keyword density

One common piece of on-page SEO advice is to add your keywords to the first 100 words of your content.

If you search for “sales management” on our site, this is what you’ll see:

If you’re Googlebot, you’d have no confusion what this article was about: sales management.

We also wanted to use related keywords in the article without it sounding over-optimized. Gaetano DiNardi, our SEO manager at the time, came up with a great solution to fix this:

We created a “resources” or “glossary” section to hit a number of related keywords while still being useful. Here’s an example:

It’s important to make these keyword mentions as organic as possible.

As a result of this on-page keyword optimization, traffic increased sharply.

We over-optimized keyword density in the beginning, which likely hurt rankings. Once we spotted this, we changed things around and saw an immediate improvement (more on this below).


Step 4: Build internal links to article

Building internal links to your new content can be surprisingly effective when promoting content.

As Moz has already written before:

“Internal links are most useful for establishing site architecture and spreading link juice.”

Essentially, these links:

  • Help Googlebot discover your content
  • Tell Google that a particular page is “important” on your site since a lot of pages point to it

Our approach to internal linking was highly strategic. We picked two kinds of pages:

1. Pages that had high traffic and PA. You can find these in Google Analytics under Behavior –> Site Content.

2. Pages where the keyword already existed unlinked. You can use this query to find such pages:

Site:[yoursite.com] “your keyword”

In our case, searching for “sales management” showed us a number of mentions:

After making a list of these pages, we dove into our CMS and added internal links by hand.

These new links from established posts showed Google that we thought of this page as “important.”
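
If you’d rather not run those site: searches and eyeball every result by hand, a small script can do the same check. This is a sketch only; the page list, keyword, and target path are placeholders, and it assumes the requests and beautifulsoup4 packages are installed:

    # Flag pages that mention the keyword but don't yet link to the target page.
    import requests
    from bs4 import BeautifulSoup

    KEYWORD = "sales management"
    TARGET_PATH = "/blog/sales-management"          # the page we want internal links to
    pages = [
        "https://example.com/blog/post-1",
        "https://example.com/blog/post-2",
    ]

    for url in pages:
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        already_linked = any(TARGET_PATH in (a.get("href") or "") for a in soup.find_all("a"))
        mentions_keyword = KEYWORD in soup.get_text().lower()
        if mentions_keyword and not already_linked:
            print("Internal link opportunity:", url)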


Step 5: Finding link targets

This is where things become more fun. In this step, we used our detective SEO skills to find targets for our outreach campaign.

There are multiple ways to approach this process, but the easiest — and the one we followed — is to simply find sites that had linked to our top competitors.

We used Open Site Explorer to crawl the top ten results for backlinks.

By digging beyond the first page, we managed to build up a list of hundreds of prospects, which we exported to Excel.

This was still a very “raw” list. To maximize our outreach efficiency, we filtered out the following from our list:

  • Sites with DA under 30.
  • Sites on free blog hosts like Blogspot.com, WordPress.com, etc.

This gave us a highly targeted list of hundreds of prospects.
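
The filtering itself is easy to script. Here’s a minimal sketch, assuming the exported prospect list is a CSV with hypothetical url and domain_authority columns:

    import csv
    from urllib.parse import urlparse

    MIN_DA = 30
    FREE_HOSTS = ("blogspot.com", "wordpress.com")   # extend with other free hosts as needed

    def keep(row):
        host = urlparse(row["url"]).netloc.lower()
        on_free_host = any(host == h or host.endswith("." + h) for h in FREE_HOSTS)
        return float(row["domain_authority"]) >= MIN_DA and not on_free_host

    with open("raw_prospects.csv", newline="") as f:
        targets = [row for row in csv.DictReader(f) if keep(row)]

    print(len(targets), "prospects left after filtering")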

Here’s how we organized our Excel file:

Finding email addresses

Next step: find email addresses.

This has become much easier than it used to be thanks to a bunch of new tools. We used EmailHunter (Hunter.io) but you can also use VoilaNorbert, Email Finder, etc.

EmailHunter works by finding the pattern people use for emails on a domain name, like this:

To use this tool, you will need either the author’s name or the editor/webmaster’s name.

In some cases, the author of the article is clearly displayed.

In case you can’t find the author’s name (happens in case of guest posts), you’ll want to find the site’s editor or content manager.

LinkedIn is very helpful here.

Try a query like this:

site:linkedin.com “Editor/Blog Editor” at “[SiteName]”.

Once you have a name, plug the domain name into Hunter.io to get an email address guess of important contacts.


Step 6: Outreach like crazy

After all the data retrieval, prioritization, deduping, and clean up, we were left with hundreds of contacts to reach out to.

To make things easier, we segmented our list into two categories:

  • Category 1: Low-quality, generic sites with poor domain authority. You can send email templates to them without any problems.
  • Category 2: Up-and-coming bloggers/authoritative sites we wanted to build relationships with. To these sites, we sent personalized emails by hand.

With the first category of sites, our goal was volume instead of accuracy.

For the second category, our objective was to get a response. It didn’t matter whether we got a backlink or not — we wanted to start a conversation which could yield a link or, better, a relationship.

You can use a number of tools to make outreach easier. Here are a few of these tools:

  1. JustReachOut
  2. MixMax
  3. LeadIQ
  4. Toutapp
  5. Prospectify

We loved using a sales tool called MixMax. Its ability to mail merge outreach templates and track open rates works wonderfully well for SEO outreach.

If you’re looking for templates, here’s one email we sent out:

Let’s break it down:

  1. Curiosity-evoking headline: Small caps in the subject line makes the email look authentic. The “something missing” part evokes curiosity.
  2. Name drop familiar brands: Name dropping your relationship to familiar brands is another good way to show your legitimacy. It’s also a good idea to include a link to their article to jog their memory.
  3. What’s missing: The meat of the email. Make sure that you’re specific here.
  4. The “why”: Your prospects need a “because” to link to you. Give actual details as to what makes it great — in-depth research, new data, or maybe a quote or two from Rand Fishkin.
  5. Never demand a link: Asking for feedback first is a good way to show that you want a genuine conversation, not just a link.

This is just one example. We tested 3 different emails initially and used the best one for the rest of the campaign. Our response rate for the whole campaign was 42%.


Step 7: Be prepared to guest post

Does guest blogging still work?

If you’re doing it for traffic and authority, I say: go ahead. You are likely putting your best work out there on industry-leading blogs. Neither your readers nor Google will mind that.

In our case, guest blogging was already a part of our long-term content marketing strategy. The only thing we changed was adding links to our sales management post within guest posts.

Your guest post links should have contextual reference, i.e. the post topic and link content should match. Otherwise, Google might discount the link, even if it is dofollow.

Keep this in mind when you start a guest blogging campaign. Getting links isn’t enough; you need contextually relevant links.

Here are some of the guest posts we published:

  • 7 Keys to Scaling a Startup Globally [INC]
  • An Introduction to Activity-Based Selling [LinkedIn]
  • 7 Tips for MBAs Entering Sales Management Careers [TopMBA]

We weren’t exclusively promoting our sales management post in any of these guest posts. The sales management post just fit naturally into the context, so we linked to it.

If you’re guest blogging in 2017, this is the approach you need to adopt.


Step 8: Fine-tuning content with TF * IDF

After the article went live, we realized that we had heavily over-optimized it for the term “sales management.” It occurred 48 times throughout the article, too much for a 2,500 word piece.

Moreover, we hadn’t always used the term naturally in the article.

To solve this problem, we turned to TF-IDF.

Recognizing TF-IDF as a ranking factor

TF-IDF (Term Frequency-Inverse Document Frequency) is a way to figure out how important a word is in a document, based on how frequently it appears in that document weighed against how often it appears across a larger set of documents.

This is a pretty standard statistical process in information retrieval. It is also one of the oldest ranking factors in Google’s algorithms.
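
For intuition, here’s a toy TF-IDF calculation in Python. It’s a deliberately simplified sketch, not the weighting Google actually uses:

    import math

    docs = [
        "sales management is the process of developing a sales force",
        "a sales manager coaches the sales team and tracks the sales pipeline",
        "content marketing builds trust with your audience",
    ]

    def tf_idf(term, doc, corpus):
        words = doc.split()
        tf = words.count(term) / len(words)                       # how often the term appears in this doc
        docs_with_term = max(sum(term in d.split() for d in corpus), 1)
        idf = math.log(len(corpus) / docs_with_term)              # rarer across the corpus = more informative
        return tf * idf

    for d in docs:
        print(round(tf_idf("sales", d, docs), 4))

A term that appears often in one document but rarely across the corpus scores high; a term that is common everywhere scores low.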

Hypothesis: We hypothesized that dropping the number of “sales management” occurrences from 48 to 20 and replacing it with terms that have high lexical relevance would improve rankings.

Were we right?

See for yourself:

Our organic pageviews increased from nearly 0 to over 5,000 in just over 8 months.

Note that no new links or link acquisition initiatives were actively in-progress during the time of this mini-experiment.

Experiment timeline:

  • July 18th – Over-optimized keyword recognized.
  • July 25th – Content team finished updating body copy, H2s with relevant topics/synonyms.
  • July 26th – Updated internal anchor text to include relevant terms.
  • July 27th – Flushed cache & re-submitted to Search Console.
  • August 4th – Improved from #4 to #2 for “Sales Management”
  • August 17 – Improved from #2 to #1 for “Sales Management”

The results were fast. We were able to normalize our content and see results within weeks.

We’ll show you our exact process below.

Normalization process — How did we do it?

The normalization process focused on identifying over-optimized terms, replacing them with related words and submitting the new page to search engines.

Here’s how we did it:

1. Identifying over-optimized term(s)

We started off using Moz’s on-page optimization tool to scan our page.

According to Moz, we shouldn’t have used the target term — “sales management” — more than 15 times. This means we had to drop 33 occurrences.
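
A quick way to audit this on your own drafts is a small script that counts phrase occurrences and density, using the 15-occurrence cap mentioned above as the threshold (the filename is a placeholder):

    import re

    def phrase_count(text, phrase):
        """Case-insensitive count of exact phrase occurrences."""
        return len(re.findall(re.escape(phrase), text, flags=re.IGNORECASE))

    draft = open("sales-management-draft.txt", encoding="utf-8").read()
    count = phrase_count(draft, "sales management")
    total_words = len(draft.split())

    print(f"{count} occurrences in {total_words} words ({count / total_words:.2%} density)")
    if count > 15:
        print(f"Over-optimized: consider dropping about {count - 15} occurrences")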

2. Finding synonymous terms with high lexical relevance

Next, we had to replace our 28+ mentions with synonyms that wouldn’t feel out of place.

We used Moz’s Keyword Explorer to get some ideas.

3. Removed “sales management” from H2 headings

Initially, we had the keyword in both H1 and H2 headings, which was just overkill.

We removed it from H2 headings and used lexically similar variants instead for better flow.

4. Diluted “sales management” from body copy

We used our list of lexically relevant words to bring down the number of “sales management” occurrences to under 20. This was perfect for a 2,500+ word article.

5. Diversify internal anchors

While we were changing our body copy, we realized that we also needed more anchor text diversity for our internal links.

Our anchors cloud was mostly “sales management” links:

We diversified this list by adding links to related terms like “sales manager,” “sales process,” etc.

6. Social amplification

We ramped up our activity on LinkedIn and Facebook to get the ball rolling on social shares.

The end result of this experimentation was an over 100% increase in traffic between August ‘16 to January ‘17.

The lesson?

Don’t just build backlinks — optimize your on-page content as well!


Conclusion

There’s a lot to learn from this case study. Some findings were surprising for us as well, particularly the impact of keyword density normalization.

While there are a lot of tricks and tactics detailed here, you’ll find that the fundamentals are essentially the same as what Rand and team have been preaching here for years. Create good content, reach out to link prospects, and use strategic guest posts to get your page to rank.

This might sound like a lot of work, but the results are worth it. Big industry players like Salesforce and Oracle actually advertise on AdWords for this term. While they have to pay for every single click, Pipedrive gets its clicks for free.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Posted by Brad Zomick, the former Director of Content Marketing at Pipedrive, where this case study took place.

It’s tough out there for SEOs and content marketers. With the sheer amount of quality content being produced, it has become nearly impossible to stand out in most industries.

Recently we were running content marketing for Pipedrive, a sales CRM. We created a content strategy that used educational sales content to educate and build trust with our target audience.

This was a great idea, in theory — we’d educate readers, establish trust, and turn some of our readers into customers.

The problem is that there are already countless others producing similar sales-focused content. We weren’t just competing against other startups for readers; we also had to contend with established companies, sales trainers, strategists, bloggers and large business sites.

The good news is that ranking a strategic keyword is still very much possible. It’s certainly not easy, but with the right process, anyone can rank for their target keyword.

Below, we’re going to show you the process we used to rank on page one for a high-volume keyword.

If you’re not sure about reading ahead, here is a quick summary:

We were able to rank #1 for a high-volume keyword: “sales management” (9,900 search volume). We outranked established sites including SalesManagement.org, Apptus, InsightSquared, Docurated, and even US News, Wikipedia, and the Bureau of Labor Statistics. We managed this through good old-fashioned content creation + outreach + guest posting, aka the “Skyscraper Technique.”

Here are the eight steps we took to reach our goal (click on a step to jump straight to that section):

  1. Select the right topic
  2. Create bad-ass content for our own blog
  3. Optimize on-page SEO & engagement metrics
  4. Build internal links
  5. Find people who would link to this content
  6. Ask people to link to our content
  7. Write guest posts on leading blogs
  8. Fine-tuning content with TF * IDF

Before we start, understand that this is a labor-intensive process. Winning a top SERP spot required the focus of a 3-person team for the better part of 3 months.

If you’re willing to invest a similar amount of time and effort, read on!


Step 1: Finding a good topic

We wanted three things from our target keyword:

1. Significant keyword volume

If you’re going to spend months ranking for a single keyword, you need to pick something big enough to justify the effort.

In our case, we settled on a keyword with 9,900 searches each month as per the Keyword Planner (1k–10k range after the last update).

That same keyword registered a search volume of 1.7–2.9k in Moz Keyword Explorer, so take AdWords’ estimates with a grain of salt.

One way to settle on a target volume is to see it in terms of your conversion rate and buyer’s journey:

  • Buyer’s journey: Search volume decreases as customers move further along the buyer’s journey. Fewer searches are okay if you’re targeting Decision-stage keywords.
  • Conversion rate: The stronger your conversion rate for each stage of the buyer’s journey, the more you can get away with by targeting a low search volume keyword.

Also consider the actual traffic from the keyword, not just search volume.

For instance, we knew from Moz’s research that the first result gets about 30% of all clicks.

For a keyword with 9,900 search volume, this would translate into over 3,000 visitors/month for a top position.

If we could convert even 5% of these into leads, we’d net over 1,800 leads each year, which makes it worth our time.

2. Pick a winnable topic

Some SERPs are incredibly competitive. For instance, if you’re trying to rank for “content marketing,” you’ll find that the first page is dominated by CMI (DA 84):

You might be able to fight out a first-page rank, but it’s really not worth the effort in 99% of cases.

So our second requirement was to see if we could actually rank for our shortlisted keywords.

This can be done in one of two ways:

Informal method

The old-fashioned way to gauge keyword difficulty is to simply eyeball SERPs for your selected keywords.

If you see a lot of older articles, web 1.0 pages, unrecognizable brands, and generic content sites, the keyword should be solid.

On the other hand, if the first page is dominated by big niche brands with in-depth articles, you’ll have a hard time ranking well.

I also recommend using the MozBar to check metrics on the fly. If you see a ton of high DA/PA pages, move on to another keyword.

In our case, the top results mostly comprised of generic content sites or newish domains.

Moz Keyword Explorer

Moz’s Keyword Explorer gives you a more quantifiable way to gauge keyword difficulty. You’ll get actual difficulty vs. potential scores.

Aim for a competitiveness score under 50 and opportunity/potential scores above 50. If you get scores beyond this threshold, keep looking.

Of course, if you have an established domain, you can target more difficult keywords.

Following this step, we had a shortlist of four keywords:

  1. sales techniques (8100)
  2. sales process (8100)
  3. sales management (9900)
  4. sales forecast (4400)

We could have honestly picked anything from this list, but for added impact, we decided to add another filter.

3. Strategic relevance

If you’re going to turn visitors into leads, it’s important to focus on keywords that are strategically relevant to your conversion goals.

In our case, we chose “sales management” as the target keyword.

We did this because Pipedrive is a sales management tool, so the keyword describes us perfectly.

Additionally, a small business owner searching for “sales management” has likely moved from Awareness to Consideration and thus, is one step closer to buying.

In contrast, “sales techniques” and “sales forecast” are keywords a sales person would search for, not a sales leader or small business owner (decision-makers).


Step 2: Writing a bad-ass piece of content

Content might not be king anymore, but it is still the foundation of good SEO. We wanted to get this part absolutely right.

Here’s the process we followed to create our content:

1. Extremely thorough research

We had a simple goal from the start: create something substantially better than anything in the top SERPs.

To get there, we started by reviewing every article ranking for “sales management,” noting what we liked and what we didn’t.

For instance, we liked how InsightSquared started the article with a substantive quote. We didn’t like how Apptus went overboard with headers.

We also looked for anomalies. One thing that caught our attention was that two of the top 10 results were dedicated to the keyword “sales manager.”

We took note of this and made sure to talk about “sales managers” in our article.

We also looked at related searches at the bottom of the page:

We also scoured more than 50 sales-related books for chapters about sales management.

Finally, we also talked to some real salespeople. This step helped us add expert insight that outsourced article writers just don’t have.

At the end, we had a superior outline of what we were going to write.

2. Content creation

You don’t need to be a subject matter expert to create an excellent piece of content.

What you do need is good writing skills… and the discipline to actually finish an article.

Adopt a journalistic style where you report insight from experts. This gives you a better end-product since you’re curating insight and writing it far better than subject matter experts.

Unfortunately, there is no magic bullet to speed up the writing part — you’ll just have to grind it out. Set aside a few days at least to write anything substantive.

There are a few things we learned through the content creation experience:

  1. Don’t multi-task. Go all-in on writing and don’t stop until it’s done.
  2. Work alone. Writing is a solitary endeavor. Work in a place where you won’t be bothered by coworkers.
  3. Listen to ambient music. Search “homework edit” on YouTube for some ambient tracks, or use a site like Noisli.com

Take tip #1 as non-negotiable. We tried to juggle a couple of projects and finishing the article ended up taking two weeks. Learn from our mistake — focus on writing alone!

Before you hit publish, make sure to get some editorial feedback from someone on your team, or if possible, a professional editor.

We also added a note at the end of the article where we solicit feedback for future revisions.

If you can’t get access to editors, at the very least put your article through Grammarly.

3. Add lots of visuals and make content more readable

Getting visuals in B2B content can be surprisingly challenging. This is mostly due to the fact that there are a lot of abstract, hard-to-visualize concepts in B2B writing.

This is why we found a lot of blog posts like this with meaningless stock images:

To avoid this, we decided to use four custom images spread throughout the article.

We wanted to use visuals to:

  • Illustrate abstract concepts and ideas
  • Break up the content into more readable chunks.
  • Emphasize key takeaways in a readily digestible format

We could have done even more — prolific content creators like Neil Patel often use images every 200–300 words.

Aside from imagery, there are a few other ways to break up and highlight text to make your content more readable.

  • Section headers
  • Bullets and numbered lists
  • Small paragraphs
  • Highlighted text
  • Blockquotes
  • Use simple words

We used most of these tactics, especially blockquotes to create sub-sections.

Given our audience — sales leaders and managers — we didn’t have to bother with dumbing down our writing. But if you’re worried that your writing is too complex, try using an app like Hemingway to edit your draft.


Step 3: Optimize on-page SEO and engagement metrics

Here’s what we did to optimize on-page SEO:

1. Fix title

We wanted traffic from people searching for keywords related to “sales management,” such as:

  • “Sales management definition” (currently #2)
  • “Sales management process” (currently #1)
  • “Sales management strategies” (currently #4)
  • “Sales management resources” (currently #3)

To make sure we tapped all these keywords, we changed our main H1 header tag to include the words definition, process, strategies, and resources.

These are called “modifiers” in SEO terms.

Google is now smart enough to know that a single article can cover multiple related keywords. Adding such modifiers helped us increase our potential traffic.
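If you want to sanity-check a draft headline against a modifier list programmatically, a few lines of Python will do it. A minimal sketch (the draft headline below is hypothetical, not our actual title):

```python
# Sketch: check which target modifiers a draft H1 already covers.
MODIFIERS = ["definition", "process", "strategies", "resources"]

def missing_modifiers(h1: str, modifiers=MODIFIERS):
    """Return the modifiers that don't yet appear in the H1 (case-insensitive)."""
    h1_lower = h1.lower()
    return [m for m in modifiers if m not in h1_lower]

# Hypothetical draft headline:
print(missing_modifiers("Sales Management: Definition, Process and Strategies"))
# -> ['resources']
```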

2. Fix section headers

Next, we used the right headers for each section:

Instead of writing “sales management definition,” we used an actual question a reader might ask.

Here’s why:

  • It makes the article easier to read
  • It’s a natural question, which makes it more likely to rank for voice searches and Google’s “answers”

We also peppered related keywords in headers throughout the article. Note how we used the keyword at the beginning of the header, not at the end:

We didn’t want to go overboard with the keywords. Our goal was to give readers something they’d actually want to read.

This is why our <h2> tag headers did not have any obvious keywords:

This helps the article read naturally while still using our target keywords.

3. Improve content engagement

Notice the colon and the line break at the very start of the article:

This is a “bucket brigade”: an old copywriting trick to grab a reader’s attention.

We used it at the beginning of the article to stop readers from hitting the back button and going back to Google (i.e. increase our dwell time).

We also added outgoing and internal links to the article.

4. Fix URL

According to research, shorter URLs tend to rank better than longer ones.

We didn’t pay a lot of attention to the URL length when we first started blogging.

Here’s one of our blog post URLs from 2013:

Not very nice, right?

For this post, we used a simple, keyword-rich URL:

Ideally, we wouldn’t have the /2016/05/ bit, but by now, it’s too late to change.

5. Improve keyword density

One common piece of on-page SEO advice is to add your keywords to the first 100 words of your content.

If you search for “sales management” on our site, this is what you’ll see:

If you were Googlebot, you'd have no confusion about what this article is about: sales management.
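If you'd rather script that check than eyeball it, here's a rough sketch of how you might do it (it assumes the article body is available as plain text; the function and thresholds are ours, not part of any tool):

```python
import re

def keyword_check(text: str, keyword: str, window: int = 100):
    """Report how often a keyword appears, whether it shows up in the
    first `window` words, and a rough keyword density."""
    lowered = text.lower()
    words = re.findall(r"\w+", lowered)
    occurrences = len(re.findall(re.escape(keyword.lower()), lowered))
    in_intro = keyword.lower() in " ".join(words[:window])
    density = occurrences * len(keyword.split()) / max(len(words), 1)
    return {"occurrences": occurrences,
            "in_first_100_words": in_intro,
            "density_pct": round(density * 100, 2)}

# keyword_check(article_text, "sales management")
```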

We also wanted to use related keywords in the article without it sounding over-optimized. Gaetano DiNardi, our SEO manager at the time, came up with a great solution to fix this:

We created a “resources” or “glossary” section to hit a number of related keywords while still being useful. Here’s an example:

It’s important to make these keyword mentions as organic as possible.

As a result of this on-page keyword optimization, traffic increased sharply.

We over-optimized keyword density in the beginning, which likely hurt rankings. Once we spotted this, we changed things around and saw an immediate improvement (more on this below).


Step 4: Build internal links to article

Building internal links to your new article can be a surprisingly effective way to promote it.

As Moz has already written before:

“Internal links are most useful for establishing site architecture and spreading link juice.”

Essentially, these links:

  • Help Googlebot discover your content
  • Tell Google that a particular page is “important” on your site since a lot of pages point to it

Our approach to internal linking was highly strategic. We picked two kinds of pages:

1. Pages that had high traffic and Page Authority (PA). You can find traffic data in Google Analytics under Behavior –> Site Content; PA comes from Moz's Open Site Explorer.

2. Pages where the keyword already existed unlinked. You can use this query to find such pages:

site:[yoursite.com] "your keyword"

In our case, searching for “sales management” showed us a number of mentions:

After making a list of these pages, we dove into our CMS and added internal links by hand.
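We did this by hand, but on a bigger site the "mentions the keyword, doesn't link to the target yet" check is easy to script. A rough sketch, assuming you have a list of your own page URLs (requires the requests and beautifulsoup4 packages):

```python
import requests
from bs4 import BeautifulSoup

def pages_needing_links(page_urls, keyword, target_url):
    """Return pages that mention `keyword` but don't yet link to `target_url`."""
    candidates = []
    for url in page_urls:
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        text = soup.get_text(" ").lower()
        links = {a.get("href", "") for a in soup.find_all("a")}
        if keyword.lower() in text and not any(target_url in href for href in links):
            candidates.append(url)
    return candidates

# Usage (hypothetical URLs):
# pages_needing_links(["https://example.com/blog/post-1"],
#                     "sales management",
#                     "https://example.com/blog/sales-management")
```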

These new links from established posts showed Google that we thought of this page as “important.”


Step 5: Finding link targets

This is where things become more fun. In this step, we used our detective SEO skills to find targets for our outreach campaign.

There are multiple ways to approach this process, but the easiest — and the one we followed — is to simply find sites that had linked to our top competitors.

We used Open Site Explorer to crawl the top ten results for backlinks.

By digging beyond the first page, we managed to build up a list of hundreds of prospects, which we exported to Excel.

This was still a very “raw” list. To maximize our outreach efficiency, we filtered out the following from our list:

  • Sites with Domain Authority (DA) under 30.
  • Sites on free blog hosts like Blogspot.com, WordPress.com, etc.

This gave us a highly targeted list of hundreds of prospects.
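We applied these filters in Excel, but if your export is a CSV, the same clean-up takes a few lines of code. A sketch, assuming hypothetical "url" and "da" columns:

```python
import csv

FREE_HOSTS = ("blogspot.com", "wordpress.com", "tumblr.com", "weebly.com")

def filter_prospects(in_path, out_path, min_da=30):
    """Drop prospects below the DA threshold or hosted on free blog platforms."""
    with open(in_path, newline="") as f_in, open(out_path, "w", newline="") as f_out:
        reader = csv.DictReader(f_in)
        writer = csv.DictWriter(f_out, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            url = row["url"].lower()
            if float(row["da"]) >= min_da and not any(h in url for h in FREE_HOSTS):
                writer.writerow(row)

# filter_prospects("prospects_raw.csv", "prospects_filtered.csv")
```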

Here’s how we organized our Excel file:

Finding email addresses

Next step: find email addresses.

This has become much easier than it used to be thanks to a bunch of new tools. We used EmailHunter (Hunter.io) but you can also use VoilaNorbert, Email Finder, etc.

EmailHunter works by finding the pattern people use for emails on a domain name, like this:

To use this tool, you will need either the author’s name or the editor/webmaster’s name.

In some cases, the author of the article is clearly displayed.

If you can't find the author's name (as often happens with guest posts), you'll want to find the site's editor or content manager.

LinkedIn is very helpful here.

Try a query like this:

site:linkedin.com "Editor/Blog Editor" at "[SiteName]"

Once you have a name, plug the domain name into Hunter.io to get an email address guess of important contacts.
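If a finder tool has no data for a domain, you can fall back on guessing the most common patterns yourself. The toy sketch below only illustrates that idea; it is not Hunter.io's API or output:

```python
# Common corporate email patterns (not exhaustive).
PATTERNS = [
    "{first}.{last}@{domain}",
    "{first}@{domain}",
    "{f}{last}@{domain}",
    "{first}{last}@{domain}",
]

def guess_emails(first, last, domain):
    """Generate common email-pattern guesses for a contact at a given domain."""
    first, last = first.lower(), last.lower()
    return [p.format(first=first, last=last, f=first[0], domain=domain)
            for p in PATTERNS]

print(guess_emails("Jane", "Doe", "example.com"))
# ['jane.doe@example.com', 'jane@example.com', 'jdoe@example.com', 'janedoe@example.com']
```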


Step 6: Outreach like crazy

After all the data retrieval, prioritization, deduping, and cleanup, we were left with hundreds of contacts to reach out to.

To make things easier, we segmented our list into two categories:

  • Category 1: Low-quality, generic sites with poor domain authority. You can send email templates to them without any problems.
  • Category 2: Up-and-coming bloggers/authoritative sites we wanted to build relationships with. To these sites, we sent personalized emails by hand.

With the first category of sites, our goal was volume instead of accuracy.

For the second category, our objective was to get a response. It didn’t matter whether we got a backlink or not — we wanted to start a conversation which could yield a link or, better, a relationship.

You can use a number of tools to make outreach easier. Here are a few of these tools:

  1. JustReachOut
  2. MixMax
  3. LeadIQ
  4. Toutapp
  5. Prospectify

We loved using a sales tool called MixMax. Its ability to mail merge outreach templates and track open rates works wonderfully well for SEO outreach.

If you’re looking for templates, here’s one email we sent out:

Let’s break it down:

  1. Curiosity-evoking headline: A lowercase, casual subject line makes the email look authentic, and the "something missing" angle evokes curiosity.
  2. Name drop familiar brands: Name dropping your relationship to familiar brands is another good way to show your legitimacy. It’s also a good idea to include a link to their article to jog their memory.
  3. What’s missing: The meat of the email. Make sure that you’re specific here.
  4. The “why”: Your prospects need a “because” to link to you. Give actual details as to what makes it great — in-depth research, new data, or maybe a quote or two from Rand Fishkin.
  5. Never demand a link: Asking for feedback first is a good way to show that you want a genuine conversation, not just a link.

This is just one example. We tested 3 different emails initially and used the best one for the rest of the campaign. Our response rate for the whole campaign was 42%.
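Tools like MixMax handle the merge and open tracking for you, but the merge itself is just per-contact substitution into a template. A bare-bones sketch of that idea (the CSV column names here are hypothetical):

```python
import csv
from string import Template

TEMPLATE = Template(
    "Hi $first_name,\n\n"
    "I noticed your post on $post_topic over at $site. "
    "I think there's something missing from it, though...\n"
)

def render_outreach(contacts_csv):
    """Yield (address, body) pairs, one personalized email per contact row."""
    with open(contacts_csv, newline="") as f:
        for row in csv.DictReader(f):
            yield row["email"], TEMPLATE.substitute(row)

# for address, body in render_outreach("prospects_filtered.csv"):
#     print(address, body, sep="\n")
```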


Step 7: Be prepared to guest post

Does guest blogging still work?

If you’re doing it for traffic and authority, I say: go ahead. You are likely putting your best work out there on industry-leading blogs. Neither your readers nor Google will mind that.

In our case, guest blogging was already a part of our long-term content marketing strategy. The only thing we changed was adding links to our sales management post within guest posts.

Your guest post links should be contextually relevant, i.e. the post topic and the linked content should match. Otherwise, Google might discount the link, even if it is dofollow.

Keep this in mind when you start a guest blogging campaign. Getting links isn’t enough; you need contextually relevant links.

Here are some of the guest posts we published:

  • 7 Keys to Scaling a Startup Globally [INC]
  • An Introduction to Activity-Based Selling [LinkedIn]
  • 7 Tips for MBAs Entering Sales Management Careers [TopMBA]

We weren’t exclusively promoting our sales management post in any of these guest posts. The sales management post just fit naturally into the context, so we linked to it.

If you’re guest blogging in 2017, this is the approach you need to adopt.


Step 8: Fine-tuning content with TF * IDF

After the article went live, we realized that we had heavily over-optimized it for the term "sales management." It occurred 48 times throughout the article, far too many for a 2,500-word piece.

Moreover, we hadn’t always used the term naturally in the article.

To solve this problem, we turned to TF-IDF.

Recognizing TF-IDF as a ranking factor

TF-IDF (Term Frequency-Inverse Document Frequency) is a way to measure how important a word is to a document: it weighs how often the term appears in that document against how common the term is across a larger collection of documents.

This is a pretty standard statistical process in information retrieval. It is also one of the oldest ranking factors in Google’s algorithms.
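The classic formulation is tf-idf(t, d) = tf(t, d) × log(N / df(t)): a term scores high when it is frequent in one document but rare across the rest of the corpus. If you want to experiment with it yourself, scikit-learn's TfidfVectorizer is the quickest route. The sketch below illustrates the statistic, not whatever Google actually runs:

```python
# Minimal TF-IDF illustration with scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "sales management is the process of hiring and coaching a sales team",
    "sales managers set quotas and track pipeline metrics",
    "marketing teams generate leads for the sales funnel",
]

vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
scores = vectorizer.fit_transform(corpus)

# Top-weighted terms for the first document:
terms = vectorizer.get_feature_names_out()
doc0 = scores[0].toarray().ravel()
top = sorted(zip(terms, doc0), key=lambda t: t[1], reverse=True)[:5]
print(top)
```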

Hypothesis: We hypothesized that dropping the number of “sales management” occurrences from 48 to 20 and replacing it with terms that have high lexical relevance would improve rankings.

Were we right?

See for yourself:

Our organic pageviews increased from nearly 0 to over 5,000 in just over 8 months.

Note that no new links or link acquisition initiatives were actively in progress during this mini-experiment.

Experiment timeline:

  • July 18th – Over-optimized keyword recognized.
  • July 25th – Content team finished updating body copy and H2s with relevant topics/synonyms.
  • July 26th – Updated internal anchor text to include relevant terms.
  • July 27th – Flushed cache & re-submitted to Search Console.
  • August 4th – Improved from #4 to #2 for "Sales Management."
  • August 17th – Improved from #2 to #1 for "Sales Management."

The results were fast. We were able to normalize our content and see results within weeks.

We’ll show you our exact process below.

Normalization process — How did we do it?

The normalization process focused on identifying over-optimized terms, replacing them with related words and submitting the new page to search engines.

Here’s how we did it:

1. Identifying over-optimized term(s)

We started off using Moz’s on-page optimization tool to scan our page.

According to Moz, we shouldn’t have used the target term — “sales management” — more than 15 times. This means we had to drop 33 occurrences.

2. Finding synonymous terms with high lexical relevance

Next, we had to replace our 28+ mentions with synonyms that wouldn’t feel out of place.

We used Moz’s Keyword Explorer to get some ideas.

3. Removing "sales management" from H2 headings

Initially, we had the keyword in both H1 and H2 headings, which was just overkill.

We removed it from H2 headings and used lexically similar variants instead for better flow.

4. Diluting "sales management" in the body copy

We used our list of lexically relevant words to bring the number of "sales management" occurrences down to under 20, which felt about right for a 2,500+ word article.

5. Diversifying internal anchors

While we were changing our body copy, we realized that we also needed more anchor text diversity for our internal links.

Our anchor text cloud was mostly "sales management" links:

We diversified this list by adding links to related terms like “sales manager,” “sales process,” etc.
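If you don't have a crawler tool handy, you can tally the anchor text distribution for a page with a short script. Another sketch, again assuming a list of your own page URLs (requires requests and beautifulsoup4):

```python
from collections import Counter

import requests
from bs4 import BeautifulSoup

def anchor_text_distribution(page_urls, target_url):
    """Tally the anchor texts of links pointing at target_url across your pages."""
    anchors = Counter()
    for url in page_urls:
        soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
        for a in soup.find_all("a", href=True):
            if target_url in a["href"]:
                anchors[a.get_text(strip=True).lower()] += 1
    return anchors

# anchor_text_distribution(your_page_urls, "/blog/sales-management")
```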

6. Social amplification

We ramped up our activity on LinkedIn and Facebook to get the ball rolling on social shares.

The end result of this experimentation was an over 100% increase in traffic between August '16 and January '17.

The lesson?

Don’t just build backlinks — optimize your on-page content as well!


Conclusion

There’s a lot to learn from this case study. Some findings were surprising for us as well, particularly the impact of keyword density normalization.

While there are a lot of tricks and tactics detailed here, you’ll find that the fundamentals are essentially the same as what Rand and team have been preaching here for years. Create good content, reach out to link prospects, and use strategic guest posts to get your page to rank.

This might sound like a lot of work, but the results are worth it. Big industry players like Salesforce and Oracle actually advertise on AdWords for this term. While they have to pay for every single click, Pipedrive gets its clicks for free.
