Optimizing AngularJS Single-Page Applications for Googlebot Crawlers

Single-page applications, or SPAs, have taken off in a big way. While a conventional website loads each individual page as the user navigates the site, including calls to the server and cache, loading resources, and rendering the page, SPAs cut out much of that back-end activity by loading the entire site when a user first arrives on a page. Instead of loading a new page every time you click a link, the site dynamically updates a single HTML page as the user interacts with it.

[Image: image001.png, c/o Microsoft]

Why is this movement taking over the web? With SPAs, users are treated to a screaming-fast site that they can navigate almost instantaneously, while developers have a template that allows them to customize, test, and optimize pages seamlessly and efficiently. AngularJS and ReactJS use advanced JavaScript templates to render the site, which means the HTML/CSS page speed overhead is almost nothing. All site activity runs behind the scenes, out of view of the user.

Unfortunately, anyone who’s tried performing SEO on an Angular or React site knows that the site activity is hidden from more than just site visitors: it’s also hidden from web crawlers. Crawlers like Googlebot rely heavily on HTML/CSS data to render and interpret the content on a site. When that HTML content is hidden behind site scripts, crawlers have no site content to index and serve in search results.

Of course, Google claims it can crawl JavaScript (and SEOs have tested and supported this claim), but even if that’s true, Googlebot still struggles to crawl sites built on a SPA framework. One of the first issues we encountered when a client first approached us with an Angular site was that nothing beyond the homepage was appearing in the SERPs. ScreamingFrog crawls uncovered the homepage and a handful of other JavaScript resources, and that was it.

[Image: SF Javascript.png, ScreamingFrog crawl results]

Another common issue is capturing Google Analytics data. Think about it: Analytics data is tracked by recording pageviews whenever a user navigates to a page. How can you track site analytics when there’s no HTML response to trigger a pageview?

After working with several clients on their SPA websites, we’ve developed a process for performing SEO on those sites. Using this process, we’ve not only gotten SPA sites indexed by search engines, but ranking on page one for target keywords.

5-step solution to SEO for AngularJS

  1. Make a list of all pages on the site
  2. Install Prerender
  3. “Fetch as Google”
  4. Configure Analytics
  5. Recrawl the website

1) Make a list of all pages on your site

If this sounds like a long and tedious process, that’s because it certainly can be. For some sites, it will be as simple as exporting the XML sitemap for the site. For other sites, especially those with hundreds or thousands of pages, creating a comprehensive list of all the pages on the site can take hours or days. However, I cannot emphasize enough how helpful this step has been for us. Having an index of pages on the site gives you a guide to reference and consult as you work on getting your site indexed. It’s nearly impossible to predict every issue you’re going to encounter with a SPA, and if you don’t have an all-inclusive list of content to reference throughout your SEO work, it’s highly likely you’ll leave some part of the site un-indexed by search engines by accident.

One solution that can help you streamline this process is to divide content into directories instead of individual pages. For example, if you know you have a set of storeroom pages, include your /storeroom/ directory and note how many pages that includes. If you have an e-commerce site, note how many products you have in each shopping category and compile your list that way (though if you have an e-commerce site, I hope for your own sake you have a master list of products somewhere). Whatever you do to make this step less time-consuming, make sure you have a full list before continuing to step 2.
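If your site does have an XML sitemap, a short script can turn it into this kind of inventory. Below is a minimal sketch in Node.js (18+ for the built-in fetch); the sitemap URL is a placeholder, and the simple regex assumes a single sitemap file rather than a sitemap index:

```javascript
// Minimal sketch: pull every <loc> URL out of an XML sitemap and
// group the results by top-level directory.
const SITEMAP_URL = 'https://www.example.com/sitemap.xml'; // placeholder

async function listPages() {
  const xml = await (await fetch(SITEMAP_URL)).text();
  // Grab the contents of every <loc> element with a simple regex.
  const urls = [...xml.matchAll(/<loc>(.*?)<\/loc>/g)].map(m => m[1]);

  // Group URLs by their first path segment (e.g. /storeroom/, /blog/).
  const byDirectory = {};
  for (const url of urls) {
    const dir = new URL(url).pathname.split('/')[1] || '(root)';
    byDirectory[dir] = (byDirectory[dir] || 0) + 1;
  }

  console.log(`${urls.length} total URLs`);
  console.table(byDirectory);
}

listPages().catch(console.error);
```

Even a rough count per directory like this gives you something to check your indexed page counts against later.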

2) Install Prerender

Prerender is going to be your best friend when performing SEO for SPAs. Prerender is a service that will render your website in a virtual browser, then serve the static HTML content to web crawlers. From an SEO standpoint, this is about as good a solution as you can hope for: users still get the fast, dynamic SPA experience, while search engine crawlers get identifiable, indexable content for search results.

Prerender’s pricing varies based on the size of your site and the freshness of the cache served to Google. Smaller sites (up to 250 pages) can use Prerender for free, while larger sites (or sites that update constantly) may need to pay $200+/month. However, having an indexable version of your site that lets you attract customers through search is invaluable. This is where the list you compiled in step 1 comes in handy: if you can prioritize which sections of your site need to be served to search engines, and with what frequency, you may be able to save some money each month while still making SEO progress.
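The exact installation depends on how your site is served, but for a Node/Express setup the prerender-node middleware is one common route. Here’s a minimal sketch (the token and port are placeholders; sites served by nginx or Apache would use rewrite rules pointing at the Prerender service instead):

```javascript
// Minimal Express sketch using the prerender-node middleware.
// Requests from crawlers are proxied to Prerender, which returns static HTML;
// regular visitors fall through to the normal AngularJS app.
const express = require('express');
const prerender = require('prerender-node');

const app = express();

// The token comes from your Prerender.io account (placeholder here).
app.use(prerender.set('prerenderToken', 'YOUR_PRERENDER_TOKEN'));

// Serve the SPA as usual for everyone else.
app.use(express.static('public'));
app.get('*', (req, res) => res.sendFile(__dirname + '/public/index.html'));

app.listen(3000, () => console.log('SPA listening on port 3000'));
```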

3) “Fetch as Google”

Within Google Search Console is an incredibly useful feature called “Fetch as Google.” “Fetch as Google” allows you to enter a URL from your site and fetch it as Googlebot would during a crawl. “Fetch” returns the HTTP response from the page, which includes a full download of the page source code as Googlebot sees it. “Fetch and Render” returns the HTTP response and also provides a screenshot of the page as Googlebot saw it and as a site visitor would see it.

This has powerful applications for AngularJS sites. Even with Prerender installed, you may find that Google is still only partially displaying your website, or that it’s omitting key features of your site that are helpful to users. Plugging the URL into “Fetch as Google” lets you review how your site appears to search engines and what further steps you may need to take to optimize your keyword rankings. Additionally, after requesting a “Fetch” or “Fetch and Render,” you can “Request Indexing” for that page, which can be a handy catalyst for getting your site to appear in search results.

4) Configure Google Analytics (or Google Tag Manager)

As I mentioned above, SPAs can have serious trouble recording Google Analytics data since they don’t track pageviews the way a standard website does. Instead of the traditional Google Analytics tracking code, you’ll need to install Analytics through some sort of alternative method.

One method that works well is to use the Angulartics plugin. Angulartics replaces standard pageview events with virtual pageview tracking, which tracks the user’s entire navigation path across the application. Since SPAs dynamically load HTML content, these virtual pageviews are recorded based on user interactions with the site, which ultimately tracks the same user behavior you would capture through traditional Analytics. Others have found success using Google Tag Manager “History Change” triggers or other creative methods, which are perfectly acceptable implementations. As long as your Google Analytics tracking records user interactions instead of conventional pageviews, your Analytics configuration should suffice.
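As a rough illustration, wiring Angulartics into an AngularJS 1.x app looks something like the sketch below. The module names follow the Angulartics documentation as I understand it; “myApp” is a placeholder, and the standard analytics.js snippet still needs to be present on the page:

```javascript
// Sketch: register Angulartics and its Google Analytics adapter so that
// route changes fire virtual pageviews instead of relying on full page loads.
angular
  .module('myApp', [
    'ngRoute',                       // or ui.router, whichever the app uses
    'angulartics',                   // core Angulartics module
    'angulartics.google.analytics'   // Google Analytics adapter
  ])
  .config(['$analyticsProvider', function ($analyticsProvider) {
    // Virtual pageviews are the default behavior; shown here for clarity.
    $analyticsProvider.virtualPageviews(true);
  }]);
```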

5) Recrawl the website

After working through steps 1–4, you’re going to want to crawl the site yourself to find those errors that not even Googlebot was anticipating. One issue we discovered early on with a client was that even after installing Prerender, our crawlers were still running into a spider trap:

As you can probably tell, there were not actually 150,000 pages on that particular site. Our crawlers had simply found a recursive loop that kept generating longer and longer URL strings for the site’s content. This is something we would not have found in Search Console or Analytics. SPAs are notorious for causing tedious, inexplicable issues that you’ll only uncover by crawling the site yourself. Even if you follow the steps above and take as many precautions as possible, I can still almost guarantee you’ll come across a unique issue that can only be diagnosed through a crawl.
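If you want a quick way to flag that kind of recursive loop in a crawl export, here’s a small sketch that checks each URL for path segments repeating suspiciously often (the threshold of three repeats is arbitrary):

```javascript
// Sketch: flag likely spider-trap URLs in a crawl export by counting how
// often any single path segment repeats within the same URL.
function findLikelyTraps(urls, maxRepeats = 3) {
  return urls.filter(url => {
    const segments = new URL(url).pathname.split('/').filter(Boolean);
    const counts = {};
    for (const seg of segments) {
      counts[seg] = (counts[seg] || 0) + 1;
      if (counts[seg] > maxRepeats) return true; // same segment over and over: suspicious
    }
    return false;
  });
}

// Example: the second URL repeats "category" four times and gets flagged.
console.log(findLikelyTraps([
  'https://example.com/category/widgets',
  'https://example.com/category/category/category/category/widgets'
]));
```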

If you’ve run into unique issues like these, let me know in the comments! I’d love to hear what other issues people have encountered with SPAs.

Results

As I mentioned earlier in the article, the process outlined above has enabled us not only to get client sites indexed, but to get those sites ranking on page one for a variety of keywords. Here’s an example of the keyword progress we made for one client with an AngularJS site:

And here’s the organic traffic growth for that client over the course of seven months:

All this goes to show that although SEO for SPAs can be tedious, laborious, and tricky, it is not impossible. Follow the steps above, and you can have SEO success with your single-page application website.


No, Paid Search Audiences Won’t Replace Keywords

Aaron (follow him) authored a fabulously under-shared post revealing the weaknesses of audience targeting in a different way: What You Think You Know About Your Customers’ Persona is Wrong.

In that post, Aaron first fearlessly broaches the subject of audience targeting by describing how it’s far from the exact science we have all hoped it would be. He noted a few of the ways audience targeting can be erroneous, and even *gasp* used data to formulate his conclusions.

It’s OK to question audience targeting (really!)

Let me be clear: I believe audience targeting is popular because there genuinely is value in it (it’s amazing data to have… when it’s accurate!). The insights we can get about personas, which we can then use to power our ads, are pretty amazing and powerful.

So why the heck am I droning on about audience targeting’s weaknesses? Well, I’m trying to pave the way for something. I’m trying to get us to admit that audience targeting itself has some weaknesses, and isn’t the savior of digital marketing that some make it out to be, and that there’s a tried-and-true solution that pairs well with demographic targeting but isn’t replaced by it. It’s a form of targeting that we paid searchers have used happily and successfully for years now.

It’s the keyword.

Whereas audience targeting chafes under the law of averages (i.e., “at some point, someone in my demographically targeted list has to actually be interested in what I am selling”), keyword targeting shines in individually revealing user intent.

Keyword targeting does something an audience can never, ever, ever do…

Keywords: Personal intent powerhouses

The keyword has long been my favorite form of targeting in paid search because it reveals individual, personal, and temporal intent. Those aren’t just three buzzwords I pulled out of the air because I needed to stretch this already obesely long post out further. They’re intentional, and worth exploring.

Individual

A keyword is such a powerful targeting method because it is written (or spoken!) by a single person. I mean, let’s be honest, it’s rare to have several people huddled around the computer shouting at it. Keywords generally come from the mind of one individual, and because of that they have frightening potential.

Remember, audience targeting is based on assumptions. That is, you’re taking a group of people who “probably” think the same way about a certain area, but does that mean they can’t have unique tastes? For instance, what about one person who prefers to buy athletic shoes and another who prefers to buy heels?

Keyword targeting is demographic-blind.

It doesn’t care who you are, where you’re from, what you did, as long as you love me… err, I mean, it doesn’t care about your demographic, just what you, individually, are interested in.

Personal

The next facet of keywords powering their targeting masterdom is that they reveal personal intent. Whereas the “individual” aspect of keyword targeting narrows our targeting from a group to one person, the “personal” aspect of keyword targeting gets inside that individual’s head.

Don’t you wish there were a way to market to people whose hearts’ intentions you could truly discern? Wouldn’t that be a powerful method of targeting? Well, yes, and that’s keyword targeting!

Think about it: a keyword is a form of communication. It’s a person typing or telling you what’s on their mind. For a moment, in their search, you and they are as connected through communication as Alexander Graham Bell and Thomas Watson on the first telephone call. That person is revealing to you what’s on their mind, and that’s a power that can’t be underestimated.

When a person tells Google they want to know “how does someone earn a black belt,” that tells the advertiser (say, the Jumping Judo Janes of Jordan) that this person genuinely wants to learn more about their services, and they can show an ad that matches that intent (Ready for Your Black Belt? It’s Easy, Let Us Help!). Paid search keywords officiate the marriage of personal intent with advertising in a way that previous marketers could only dream of. We aren’t finding random people we think might be interested based on where they live. We’re responding to someone telling us they’re interested.

Temporal

The final aspect of keyword targeting that can’t be underestimated is the temporal one. Anyone worth their salt in marketing will tell you “timing is everything.” With keyword targeting, the timing is inseparable from the intent. When is this person wondering about your Judo classes? At the very moment they’re searching: NOW!

You aren’t blasting your ads into your users’ lives, interrupting them as they go about their business or family time, hoping to jumpstart their interest by distracting them from their activities. You’re answering their query, at the very moment they’re interested in learning more.

Timing. Is. Everything.

The problem settles into stickiness

So, to summarize: a “search” happens when a person reveals his or her personal intent through communication (keywords/queries) at a specific time. For that reason, I maintain that keyword targeting trumps audience targeting in paid search.

Paid search is an evolving industry, but it’s still “search,” which requires communication, which requires words (until the day the emoji takes over the English language, but that’s fine, because the rioting in the streets will have gotten us first).

Of course, we’d be remiss to ignore some legitimate questions that inevitably arise. As ideal as the outline I’ve laid out before you sounds, you’re probably starting to formulate something like the following four questions.

  • What about low search volume keywords?
  • What if the search engines kill keyword targeting?
  • What if IoT monsters kill the search engines?
  • What about social ads?

We’ll close by discussing each of these four questions.

Low search volume terms (LSVs)

Low search volume keywords stink like poo (excuse the rather strong language there). I’m not sure whether there’s data on this out there (if so, please share it below), but I have run into low search volume terms far more in the past year than when I first began managing PPC campaigns.

I don’t fully understand all the reasons for this (perhaps it’s worth another blog post), but the fact is it’s getting harder to be creative and target high-value long-tail keywords when so many of them are getting shut off due to low search volume.

This seems like a fairly smooth path being paved for Google and Bing to eventually “take over” (i.e., “automate for our own good”) keyword targeting, at the very least for SMBs (small to medium businesses), where LSVs can be a serious problem. In that scenario, the keyword would still be around, it just wouldn’t be managed by us PPCers directly. Boo.

Search engine decrees

I’ve already addressed the power the search engines hold here, but I’ll be the first to admit that, as much as I love keyword targeting and as much as I’ve hopefully shown how valuable it is, it would still be a simple enough thing for Google or Bing to remove completely. Major boo.

Since paid search relies on keywords and queries and language to work, I imagine this would look much more like an automated solution (think DSAs and Shopping), in which they turn keyword targeting into a dynamic system that works in tandem with audience targeting.

While this was about a year and a half ago, it’s worth noting that at Hero Conf London, Bing Ads’ ebullient Tor Crockett made the public statement that Bing at the time had no plans to sunset the keyword as a bidding option. We can only hope this sentiment remains, and carries over to Google as well.

But Internet of Things (IoT) Frankenstein devices!

Finally, maybe the search engines won’t be around forever. Perhaps they’ll be displaced by IoT devices such as Alexa that build some level of search into themselves but pull traffic away from the Google/Bing search bars. To illustrate this in real life: you don’t need to ask Google where to find (queries, keywords, communication, search) the best price on laundry detergent if you can just push the Dash button, or if your smart washer can simply order more for you with no search effort.


That said, I still believe we’re a long way off from this, in the same way the freak-out over mobile devices killing PCs has slowed down. That is, we still use our computers for education and work (even as personal usage shifts to tablets and mobile devices and IoT freaks-of-nature… smart toasters, anyone?) and our mobile devices for queries on the go. Computers are still a primary source of search for work and education, as well as for more involved personal activities (vacation planning, for instance), and so they still rely heavily on search. Mobile devices are still heavily query-centered for a variety of tasks, especially as voice search (still query-centered!) gains more ground.

The social effect

Social is its own animal in a way, which is why I think it has already impacted, and will continue to impact, search and keywords (though not in a terribly worrisome way). Social certainly pulls a share of traffic from search, particularly for product queries: “Who has used this dishwasher before? Any other recommendations?” Social ads are exploding in popularity too, mainly because they’re working. People are purchasing more than they ever have from social ads, and marketers are rushing to be there for them.

The flip side of this: a social versus paid search comparison is apples to oranges. There are different motivations and purposes for using search engines and for querying your friends.

Audience targeting works well in a social setting since the social network has phenomenally accurate and specific targeting for individuals, but it’s the rare individual curious about the best condom to buy who queries his family and friends on Facebook. There will always be aspects of social and search that are unique and valuable in their own way, and audience targeting for social and keyword targeting for search complement the unique aspects of each.

Idealism incarnate

Thus, it’s my belief that as long as we have search, we’ll have keywords, and keyword targeting will be the best way to target, as long as costs remain low enough to be realistic for budgets and the search engines don’t kill keyword bidding in favor of an automated solution.

Don’t give up; the keyword isn’t dead. Stay focused, and carry on with your match types!

I want to close by re-acknowledging the important point I opened with.

It is not my intention in any way to set up a false dichotomy. In fact, as I think about it, I’d argue that I’m writing this in response to what I’ve heard become a false dichotomy: that audience targeting is better than keyword targeting and will eventually replace it.

I believe the keyword remains the greatest form of targeting for a paid search marketer, but I also believe that audience demographics can play a valuable complementary role in bidding.

A prime example we already have is remarketing lists for search ads, in which we can layer remarketing audiences onto our searches in both Google and Bing. Wouldn’t it be amazing if we could one day do this with massive amounts of audience data? I’ve said this before, but if Bing Ads were to use its LinkedIn acquisition to let us layer LinkedIn audiences onto our current keyword framework, the B2B angels would surely rejoice over us (Bing has responded, by the way, that something is in the works!).

Either way, I hope I’ve demonstrated that, far from being on its deathbed, the keyword remains the most important tool in the paid search marketer’s toolbox.


Evidence of the Surprising State of JavaScript Indexing

JavaScript indexing has come a long way: from the days of the escaped fragment approach (which my colleague Rob wrote about back in 2010) to the actual execution of JS in the indexing pipeline that we see today, at least at Google.

In this article, I want to explore some things we’ve seen about JS indexing behavior in the wild and in controlled tests and share some tentative conclusions I’ve drawn about how it must be working.

A brief introduction to JS indexing

At its most basic, the idea behind JavaScript-enabled indexing is to get closer to the search engine seeing the page as the user sees it. Most users browse with JavaScript enabled, and many sites either fail without it or are severely limited. While traditional indexing considers just the raw HTML source received from the server, users typically see a page rendered based on the DOM (Document Object Model) which can be modified by JavaScript running in their web browser. JS-enabled indexing considers all content in the rendered DOM, not just that which appears in the raw HTML.

There are some complexities even in this basic definition (answers in brackets as I understand them):

  • What about JavaScript that requests additional content from the server? (This will generally be included, subject to timeout limits)
  • What about JavaScript that executes some time after the page loads? (This will generally only be indexed up to some time limit, possibly in the region of 5 seconds)
  • What about JavaScript that executes on some user interaction such as scrolling or clicking? (This will generally not be included)
  • What about JavaScript in external files rather than in-line? (This will generally be included, as long as those external files are not blocked from the robot — though see the caveat in experiments below)

For more on the technical details, I recommend my ex-colleague Justin’s writing on the subject.
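To make those cases concrete, here's a small, purely illustrative sketch of page code showing content that generally does get picked up versus content that generally doesn't, per the answers above (the element IDs and endpoint are placeholders):

```javascript
// 1. Content fetched from the server on page load: generally indexed,
//    subject to timeout limits. ('/api/description' is a placeholder endpoint.)
fetch('/api/description')
  .then(res => res.text())
  .then(text => { document.getElementById('description').textContent = text; });

// 2. Content injected after a delay: only indexed up to some time limit,
//    possibly in the region of 5 seconds.
setTimeout(() => {
  document.getElementById('late').textContent = 'May or may not be indexed';
}, 4000);

// 3. Content that only appears on user interaction: generally NOT indexed,
//    because Googlebot doesn't click or scroll.
document.getElementById('more-button').addEventListener('click', () => {
  document.getElementById('more').textContent = 'Effectively invisible to indexing';
});
```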

A high-level overview of my view of JavaScript best practices

Despite the incredible work-arounds of the past (which always seemed like more effort than graceful degradation to me) the “right” answer has existed since at least 2012, with the introduction of PushState. Rob wrote about this one, too. Back then, however, it was pretty clunky and manual and it required a concerted effort to ensure both that the URL was updated in the user’s browser for each view that should be considered a “page,” that the server could return full HTML for those pages in response to new requests for each URL, and that the back button was handled correctly by your JavaScript.
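For reference, the core of the pushState approach looks something like the sketch below: update the address bar for each view that should count as a page, and handle the back button via popstate. Here, loadView is a hypothetical function standing in for whatever renders a view client-side; the server-side half (returning full HTML for each of these URLs) still has to be handled separately:

```javascript
// Sketch: update the URL for each "page"-worthy view and handle the back button.
function navigateTo(path) {
  loadView(path);                        // hypothetical: render the view via JS
  history.pushState({ path }, '', path); // update the address bar without a reload
}

// Make the browser back/forward buttons re-render the correct view.
window.addEventListener('popstate', (event) => {
  if (event.state && event.state.path) {
    loadView(event.state.path);
  }
});
```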

Along the way, in my opinion, too many sites got distracted by a separate prerendering step. This is an approach that does the equivalent of running a headless browser to generate static HTML pages that include any changes made by JavaScript on page load, then serving those snapshots instead of the JS-reliant page in response to requests from bots. It typically treats bots differently, in a way that Google tolerates, as long as the snapshots do represent the user experience. In my opinion, this approach is a poor compromise that’s too susceptible to silent failures and falling out of date. We’ve seen a bunch of sites suffer traffic drops due to serving Googlebot broken experiences that were not immediately detected because no regular users saw the prerendered pages.

These days, if you need or want JS-enhanced functionality, more of the top frameworks have the ability to work the way Rob described in 2012, which is now called isomorphic (roughly meaning “the same”).

Isomorphic JavaScript serves HTML that corresponds to the rendered DOM for each URL, and updates the URL for each “view” that should exist as a separate page as the content is updated via JS. With this implementation, there is actually no need to render the page to index basic content, as it’s served in response to any fresh request.
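As a highly simplified sketch of the isomorphic idea (not any particular framework's API; renderApp and the Express setup here are illustrative), the same render function produces full HTML on the server for the initial request, and the client-side bundle then takes over for subsequent navigation:

```javascript
// Server side (Express, illustrative): every URL returns full HTML up front.
const express = require('express');
const { renderApp } = require('./app'); // hypothetical shared render function

const server = express();

server.get('*', (req, res) => {
  const html = renderApp(req.path); // the same code that later runs in the browser
  res.send(`<!doctype html>
<html>
  <body>
    <div id="root">${html}</div>
    <script src="/bundle.js"></script> <!-- client-side JS takes over from here -->
  </body>
</html>`);
});

server.listen(3000);
```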

I was fascinated by this piece of research published recently — you should go and read the whole study. In particular, you should watch this video (recommended in the post) in which the speaker — who is an Angular developer and evangelist — emphasizes the need for an isomorphic approach:

Resources for auditing JavaScript

If you work in SEO, you will increasingly find yourself called upon to figure out whether a particular implementation is correct (hopefully on a staging/development server before it’s deployed live, but who are we kidding? You’ll be doing this live, too).

To do that, here are some resources I’ve found useful:

  • Justin again, describing the difference between working with the DOM and viewing source
  • The developer tools built into Chrome are excellent, and some of the documentation is actually really good:
    • The console is where you can see errors and interact with the state of the page
    • As soon as you get past debugging the most basic JavaScript, you will want to start setting breakpoints, which allow you to step through the code from specified points
  • This post from Google’s John Mueller has a decent checklist of best practices
  • Although it’s about a broader set of technical skills, anyone who hasn’t already read it should definitely check out Mike’s post on the technical SEO renaissance.

Some surprising/interesting results

There are likely to be timeouts on JavaScript execution

I already linked above to the ScreamingFrog post that mentions experiments they have done to measure the timeout Google uses to determine when to stop executing JavaScript (they found a limit of around 5 seconds).

It may be more complicated than that, however. This segment of a thread is interesting. It’s from a Hacker News user who goes by the username KMag and who claims to have worked at Google on the JS execution part of the indexing pipeline from 2006–2010. It’s in relation to another user speculating that Google would not care about content loaded “async” (i.e. asynchronously — in other words, loaded as part of new HTTP requests that are triggered in the background while assets continue to download):

“Actually, we did care about this content. I’m not at liberty to explain the details, but we did execute setTimeouts up to some time limit.

If they’re smart, they actually make the exact timeout a function of a HMAC of the loaded source, to make it very difficult to experiment around, find the exact limits, and fool the indexing system. Back in 2010, it was still a fixed time limit.”

What that means is that although it was initially a fixed timeout, he’s speculating (or possibly sharing without directly doing so) that timeouts are programmatically determined (presumably based on page importance and JavaScript reliance) and that they may be tied to the exact source code (the reference to “HMAC” is to do with a technical mechanism for spotting if the page has changed).
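Purely to illustrate the mechanism KMag is hinting at (this is speculation, not anything Google has confirmed), deriving a per-page timeout from an HMAC of the source might look something like this:

```javascript
// Speculative illustration only: derive a JS execution timeout from an HMAC
// of the page source, so the limit is hard to probe and changes whenever
// the source changes.
const crypto = require('crypto');

function timeoutForPage(pageSource, secretKey, minMs = 3000, maxMs = 8000) {
  const hmac = crypto.createHmac('sha256', secretKey).update(pageSource).digest();
  // Map the first 4 bytes of the HMAC onto the [minMs, maxMs] range.
  const fraction = hmac.readUInt32BE(0) / 0xffffffff;
  return Math.round(minMs + fraction * (maxMs - minMs));
}

console.log(timeoutForPage('<html>...</html>', 'indexing-secret'));
```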

It matters how your JS is executed

I referenced this recent study earlier. In it, the author found:

Inline vs. External vs. Bundled JavaScript makes a huge difference for Googlebot

The charts at the end show the extent to which popular JavaScript frameworks perform differently depending on how they're called, with a range of performance from passing every test to failing almost every test. For example, here's the chart for Angular:

[Image: Slide5.PNG, Angular test results]

It’s definitely worth reading the whole thing and reviewing the performance of the different frameworks. There’s more evidence of Google saving computing resources in some areas, as well as surprising results between different frameworks.

CRO tests are getting indexed

When we first started seeing JavaScript-based split-testing platforms designed for testing changes aimed at improving conversion rate (CRO = conversion rate optimization), their inline changes to individual pages were invisible to the search engines. As Google in particular has moved up the JavaScript competency ladder through executing simple inline JS to more complex JS in external files, we are now seeing some CRO-platform-created changes being indexed. A simplified version of what’s happening is:

  • For users:
    • CRO platforms typically take a visitor to a page, check for the existence of a cookie, and if there isn’t one, randomly assign the visitor to group A or group B
    • Based on either the cookie value or the new assignment, the user is either served the page unchanged, or sees a version that is modified in their browser by JavaScript loaded from the CRO platform’s CDN (content delivery network)
    • A cookie is then set to make sure that the user sees the same version if they revisit that page later
  • For Googlebot:
    • The reliance on external JavaScript used to prevent both the bucketing and the inline changes from being indexed
    • With external JavaScript now being loaded, and with many of these inline changes being made using standard libraries (such as JQuery), Google is able to index the variant and hence we see CRO experiments sometimes being indexed

I might have expected the platforms to block their JS with robots.txt, but at least the main platforms I’ve looked at don’t do that. With Google being sympathetic towards testing, however, this shouldn’t be a major problem — just something to be aware of as you build out your user-facing CRO tests. All the more reason for your UX and SEO teams to work closely together and communicate well.
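A stripped-down version of the bucketing logic described above might look like the following (the cookie name and the variant change are placeholders):

```javascript
// Sketch of typical CRO-platform bucketing: assign a visitor to A or B once,
// persist the assignment in a cookie, and apply the variant's changes client-side.
function getBucket() {
  const match = document.cookie.match(/(?:^|; )cro_bucket=([^;]+)/);
  if (match) return match[1];                      // returning visitor: reuse bucket

  const bucket = Math.random() < 0.5 ? 'A' : 'B';  // new visitor: random assignment
  document.cookie = `cro_bucket=${bucket}; path=/; max-age=${60 * 60 * 24 * 30}`;
  return bucket;
}

if (getBucket() === 'B') {
  // Variant B: an inline change of the kind Google can now execute and index.
  document.querySelector('h1').textContent = 'Alternative headline for the test';
}
```

Because those changes run in ordinary external JavaScript, a crawler that executes JS can end up indexing whichever variant it happens to be served.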

Split tests show SEO improvements from removing a reliance on JS

Although we would like to do a lot more to test the actual real-world impact of relying on JavaScript, we do have some early results. At the end of last week I published a post outlining the uplift we saw from removing a site’s reliance on JS to display content and links on category pages.

[Image: odn_additional_sessions.png, organic sessions uplift]

A simple test that removed the need for JavaScript on 50% of pages showed a >6% uplift in organic traffic — worth thousands of extra sessions a month. While we haven’t proven that JavaScript is always bad, nor understood the exact mechanism at work here, we have opened up a new avenue for exploration, and at least shown that it’s not a settled matter. To my mind, it highlights the importance of testing. It’s obviously our belief in the importance of SEO split-testing that led to us investing so much in the development of the ODN platform over the last 18 months or so.

Conclusion: How JavaScript indexing might work from a systems perspective

Based on all of the information we can piece together from the external behavior of the search results, public comments from Googlers, tests and experiments, and first principles, here’s how I think JavaScript indexing is working at Google at the moment: I think there is a separate queue for JS-enabled rendering, because the computational cost of trying to run JavaScript over the entire web is unnecessary given the lack of a need for it on many, many pages. In detail, I think:

  • Googlebot crawls and caches HTML and core resources regularly
  • Heuristics (and probably machine learning) are used to prioritize JavaScript rendering for each page:
    • Some pages are indexed with no JS execution. There are many pages that can probably be easily identified as not needing rendering, and others which are such a low priority that it isn’t worth the computing resources.
    • Some pages get immediate rendering – or possibly immediate basic/regular indexing, along with high-priority rendering. This would enable the immediate indexation of pages in news results or other QDF results, but also allow pages that rely heavily on JS to get updated indexation when the rendering completes.
    • Many pages are rendered async in a separate process/queue from both crawling and regular indexing, thereby adding the page to the index for new words and phrases found only in the JS-rendered version when rendering completes, in addition to the words and phrases found in the unrendered version indexed initially.
  • The JS rendering also, in addition to adding pages to the index:
    • May make modifications to the link graph
    • May add new URLs to the discovery/crawling queue for Googlebot

The idea of JavaScript rendering as a distinct and separate part of the indexing pipeline is backed up by this quote from KMag, who I mentioned previously for his contributions to this HN thread (direct link) [emphasis mine]:

“I was working on the lightweight high-performance JavaScript interpretation system that sandboxed pretty much just a JS engine and a DOM implementation that we could run on every web page on the index. Most of my work was trying to improve the fidelity of the system. My code analyzed every web page in the index.

Towards the end of my time there, there was someone in Mountain View working on a heavier, higher-fidelity system that sandboxed much more of a browser, and they were trying to improve performance so they could use it on a higher percentage of the index.”

This was the situation in 2010. It seems likely that they have moved a long way towards the headless browser in all cases, but I’m skeptical about whether it would be worth their while to render every page they crawl with JavaScript given the expense of doing so and the fact that a large percentage of pages do not change substantially when you do.

My best guess is that they’re using a combination of trying to figure out the need for JavaScript execution on a given page, coupled with trust/authority metrics to decide whether (and with what priority) to render a page with JS.

Run a test, get publicity

I have a hypothesis that I would love to see someone test: That it’s possible to get a page indexed and ranking for a nonsense word contained in the served HTML, but not initially ranking for a different nonsense word added via JavaScript; then, to see the JS get indexed some period of time later and rank for both nonsense words. If you want to run that test, let me know the results — I’d be happy to publicize them.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Search engines have come a long way in their handling of JavaScript: from the escaped fragment approach my colleague Rob wrote about back in 2010 to the actual execution of JS in the indexing pipeline that we see today, at least at Google.

In this article, I want to explore some things we’ve seen about JS indexing behavior in the wild and in controlled tests and share some tentative conclusions I’ve drawn about how it must be working.

A brief introduction to JS indexing

At its most basic, the idea behind JavaScript-enabled indexing is to get closer to the search engine seeing the page as the user sees it. Most users browse with JavaScript enabled, and many sites either fail without it or are severely limited. While traditional indexing considers just the raw HTML source received from the server, users typically see a page rendered based on the DOM (Document Object Model) which can be modified by JavaScript running in their web browser. JS-enabled indexing considers all content in the rendered DOM, not just that which appears in the raw HTML.
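
To make that distinction concrete, here is a minimal, hypothetical sketch (the element ID and copy are invented for illustration). The raw HTML that traditional indexing sees contains only an empty container; the rendered DOM that JS-enabled indexing sees contains the injected text.

    // What the server sends (and what "view source" / traditional indexing sees):
    //   <div id="description"></div>
    //
    // What ends up in the rendered DOM once this script has run in the browser:
    document.getElementById('description').textContent =
      'This copy exists only in the DOM, not in the raw HTML source.';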

There are some complexities even in this basic definition (answers in brackets as I understand them):

  • What about JavaScript that requests additional content from the server? (This will generally be included, subject to timeout limits)
  • What about JavaScript that executes some time after the page loads? (This will generally only be indexed up to some time limit, possibly in the region of 5 seconds)
  • What about JavaScript that executes on some user interaction such as scrolling or clicking? (This will generally not be included)
  • What about JavaScript in external files rather than in-line? (This will generally be included, as long as those external files are not blocked from the robot — though see the caveat in experiments below)

For more on the technical details, I recommend my ex-colleague Justin’s writing on the subject.
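
To illustrate three of the cases in the list above, here is a rough sketch (the endpoint, delays, and element IDs are all invented): content requested from the server after load, content added after a delay, and content that only appears on a click.

    // 1) Content fetched after page load -- generally indexed, subject to timeouts:
    fetch('/api/product-description')
      .then(function (res) { return res.text(); })
      .then(function (text) {
        document.getElementById('async-content').textContent = text;
      });

    // 2) Content added after a delay -- only indexed if it lands inside the
    //    execution window (reportedly somewhere around 5 seconds):
    setTimeout(function () {
      document.getElementById('late-content').textContent = 'Added after 4 seconds';
    }, 4000);

    // 3) Content that requires user interaction -- generally NOT indexed:
    document.getElementById('load-more').addEventListener('click', function () {
      document.getElementById('more-content').textContent = 'Click-only copy';
    });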

A high-level overview of my view of JavaScript best practices

Despite the incredible work-arounds of the past (which always seemed like more effort than graceful degradation to me), the “right” answer has existed since at least 2012, with the introduction of PushState. Rob wrote about this one, too. Back then, however, it was pretty clunky and manual: it required a concerted effort to ensure that the URL was updated in the user’s browser for each view that should be considered a “page,” that the server could return full HTML for those pages in response to new requests for each URL, and that the back button was handled correctly by your JavaScript.
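
For reference, the client-side moving parts described above look roughly like this (a simplified sketch; renderView and the example path are placeholders, and the server-side half is only described in the final comment):

    // Placeholder for whatever your application does to draw a view client-side.
    function renderView(path) {
      document.getElementById('app').textContent = 'Rendered view for ' + path;
    }

    // Update the address bar when the user navigates to something that should
    // count as its own page:
    function navigateTo(path) {
      history.pushState({ path: path }, '', path); // e.g. '/category/shoes'
      renderView(path);
    }

    // Handle the back/forward buttons so each URL shows the right view:
    window.addEventListener('popstate', function (event) {
      if (event.state && event.state.path) {
        renderView(event.state.path);
      }
    });

    // The remaining requirement is server-side: a fresh request for
    // '/category/shoes' must return full HTML for that view, not just an
    // empty application shell.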

Along the way, in my opinion, too many sites got distracted by a separate prerendering step. This is an approach that does the equivalent of running a headless browser to generate static HTML pages that include any changes made by JavaScript on page load, then serving those snapshots instead of the JS-reliant page in response to requests from bots. It typically treats bots differently, in a way that Google tolerates, as long as the snapshots do represent the user experience. In my opinion, this approach is a poor compromise that’s too susceptible to silent failures and falling out of date. We’ve seen a bunch of sites suffer traffic drops due to serving Googlebot broken experiences that were not immediately detected because no regular users saw the prerendered pages.
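
For context, a prerendering setup typically boils down to something like the sketch below, written here in the shape of a Node/Express-style middleware (the bot pattern and lookupSnapshot are placeholders, not any particular vendor's API). The fact that only bots ever exercise the snapshot path is exactly why breakage can go unnoticed.

    // Generic sketch of prerendering middleware -- illustrative only.
    var BOT_PATTERN = /googlebot|bingbot|baiduspider/i;

    // Placeholder for "fetch the cached, pre-rendered HTML for this URL".
    function lookupSnapshot(url) {
      return '<html><body>Snapshot of ' + url + '</body></html>';
    }

    function serveSnapshotIfBot(req, res, next) {
      var userAgent = req.headers['user-agent'] || '';
      if (BOT_PATTERN.test(userAgent)) {
        res.end(lookupSnapshot(req.url)); // bots get the static snapshot
      } else {
        next(); // regular users get the normal JS-reliant page
      }
    }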

These days, if you need or want JS-enhanced functionality, more of the top frameworks have the ability to work the way Rob described in 2012, which is now called isomorphic (roughly meaning “the same”).

Isomorphic JavaScript serves HTML that corresponds to the rendered DOM for each URL, and updates the URL for each “view” that should exist as a separate page as the content is updated via JS. With this implementation, there is actually no need to render the page to index basic content, as it’s served in response to any fresh request.
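
A bare-bones sketch of that idea (the template, data, and URLs are invented): the same template function produces the HTML on the server for a fresh request and is reused in the browser for client-side navigation.

    // Shared between server and browser: one template for one view.
    function productTemplate(product) {
      return '<h1>' + product.name + '</h1><p>' + product.description + '</p>';
    }

    // Server-side: answer a fresh request for /products/123 with full HTML,
    // so the basic content is indexable without any rendering step.
    function renderPage(product) {
      return '<div id="app">' + productTemplate(product) + '</div>';
    }

    // Browser-side: reuse the same template on client-side navigation and
    // update the URL so the view exists as its own page.
    function showProduct(product) {
      document.getElementById('app').innerHTML = productTemplate(product);
      history.pushState({}, '', '/products/' + product.id);
    }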

I was fascinated by this piece of research published recently — you should go and read the whole study. In particular, you should watch this video (recommended in the post) in which the speaker — who is an Angular developer and evangelist — emphasizes the need for an isomorphic approach.

Resources for auditing JavaScript

If you work in SEO, you will increasingly find yourself called upon to figure out whether a particular implementation is correct (hopefully on a staging/development server before it’s deployed live, but who are we kidding? You’ll be doing this live, too).

To do that, here are some resources I’ve found useful:

  • Justin again, describing the difference between working with the DOM and viewing source
  • The developer tools built into Chrome are excellent, and some of the documentation is actually really good:
    • The console is where you can see errors and interact with the state of the page
    • As soon as you get past debugging the most basic JavaScript, you will want to start setting breakpoints, which allow you to step through the code from specified points
  • This post from Google’s John Mueller has a decent checklist of best practices
  • Although it’s about a broader set of technical skills, anyone who hasn’t already read it should definitely check out Mike’s post on the technical SEO renaissance.
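
Building on the console point above, one quick check I find useful is comparing the raw HTML response with the current DOM from the DevTools console; anything that only exists in the latter is content that depends on JS execution. A rough snippet (purely illustrative):

    // Paste into the DevTools console on the page you're auditing.
    fetch(location.href)
      .then(function (res) { return res.text(); })
      .then(function (rawHtml) {
        var rendered = document.documentElement.outerHTML;
        console.log('Raw HTML length:     ', rawHtml.length);
        console.log('Rendered DOM length: ', rendered.length);
        // A large gap suggests significant JS-dependent content worth checking.
      });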

Some surprising/interesting results

There are likely to be timeouts on JavaScript execution

I already linked above to the ScreamingFrog post that mentions experiments they have done to measure the timeout Google uses to determine when to stop executing JavaScript (they found a limit of around 5 seconds).

It may be more complicated than that, however. This segment of a thread is interesting. It’s from a Hacker News user who goes by the username KMag and who claims to have worked at Google on the JS execution part of the indexing pipeline from 2006–2010. It’s in relation to another user speculating that Google would not care about content loaded “async” (i.e. asynchronously — in other words, loaded as part of new HTTP requests that are triggered in the background while assets continue to download):

“Actually, we did care about this content. I’m not at liberty to explain the details, but we did execute setTimeouts up to some time limit.

If they’re smart, they actually make the exact timeout a function of a HMAC of the loaded source, to make it very difficult to experiment around, find the exact limits, and fool the indexing system. Back in 2010, it was still a fixed time limit.”

What that means is that although it was initially a fixed timeout, he’s speculating (or possibly sharing without directly doing so) that timeouts are programmatically determined (presumably based on page importance and JavaScript reliance) and that they may be tied to the exact source code (the reference to “HMAC” is to do with a technical mechanism for spotting if the page has changed).
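
If you want to probe timeout behaviour on your own pages, one crude (and purely illustrative) approach is to inject distinct markers at increasing delays and then see which of them show up in rendering tools or in the index:

    // Inject distinguishable markers at increasing delays; whichever markers
    // appear in rendered/indexed output bracket the effective time limit.
    [1, 3, 5, 8, 13].forEach(function (seconds) {
      setTimeout(function () {
        var p = document.createElement('p');
        p.textContent = 'timeout-marker-' + seconds + 's';
        document.body.appendChild(p);
      }, seconds * 1000);
    });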

It matters how your JS is executed

I referenced this recent study earlier. In it, the author found:

Inline vs. External vs. Bundled JavaScript makes a huge difference for Googlebot

The charts at the end show the extent to which popular JavaScript frameworks perform differently depending on how they’re called, with a range of performance from passing every test to failing almost every test. For example, here’s the chart for Angular:

Slide5.PNG

It’s definitely worth reading the whole thing and reviewing the performance of the different frameworks. There’s more evidence of Google saving computing resources in some areas, as well as surprising results between different frameworks.

CRO tests are getting indexed

When we first started seeing JavaScript-based split-testing platforms designed for testing changes aimed at improving conversion rate (CRO = conversion rate optimization), their inline changes to individual pages were invisible to the search engines. As Google in particular has moved up the JavaScript competency ladder through executing simple inline JS to more complex JS in external files, we are now seeing some CRO-platform-created changes being indexed. A simplified version of what’s happening is:

  • For users:
    • CRO platforms typically take a visitor to a page, check for the existence of a cookie, and if there isn’t one, randomly assign the visitor to group A or group B
    • Based on either the cookie value or the new assignment, the user is either served the page unchanged, or sees a version that is modified in their browser by JavaScript loaded from the CRO platform’s CDN (content delivery network)
    • A cookie is then set to make sure that the user sees the same version if they revisit that page later
  • For Googlebot:
    • The reliance on external JavaScript used to prevent both the bucketing and the inline changes from being indexed
    • With external JavaScript now being loaded, and with many of these inline changes being made using standard libraries (such as jQuery), Google is able to index the variant and hence we see CRO experiments sometimes being indexed

I might have expected the platforms to block their JS with robots.txt, but at least the main platforms I’ve looked at don’t do that. With Google being sympathetic towards testing, however, this shouldn’t be a major problem — just something to be aware of as you build out your user-facing CRO tests. All the more reason for your UX and SEO teams to work closely together and communicate well.
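
Stripped right down, the bucketing flow described above looks something like this (the cookie name, split, and on-page change are all illustrative):

    // Assign the visitor to a bucket, or reuse the one stored in the cookie.
    function getBucket() {
      var match = document.cookie.match(/(?:^|; )ab_bucket=([^;]+)/);
      if (match) { return match[1]; }                  // returning visitor
      var bucket = Math.random() < 0.5 ? 'A' : 'B';    // new visitor
      document.cookie = 'ab_bucket=' + bucket + '; path=/; max-age=2592000';
      return bucket;
    }

    // Variant B gets an inline change -- the kind of modification that can now
    // end up indexed once the external test script is loaded and executed.
    if (getBucket() === 'B') {
      var headline = document.querySelector('h1');
      if (headline) { headline.textContent = 'Alternative headline copy'; }
    }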

Split tests show SEO improvements from removing a reliance on JS

Although we would like to do a lot more to test the actual real-world impact of relying on JavaScript, we do have some early results. At the end of last week I published a post outlining the uplift we saw from removing a site’s reliance on JS to display content and links on category pages.

odn_additional_sessions.png

A simple test that removed the need for JavaScript on 50% of pages showed a >6% uplift in organic traffic — worth thousands of extra sessions a month. While we haven’t proven that JavaScript is always bad, nor understood the exact mechanism at work here, we have opened up a new avenue for exploration, and at least shown that it’s not a settled matter. To my mind, it highlights the importance of testing. It’s obviously our belief in the importance of SEO split-testing that led to us investing so much in the development of the ODN platform over the last 18 months or so.

Conclusion: How JavaScript indexing might work from a systems perspective

Based on all of the information we can piece together from the external behavior of the search results, public comments from Googlers, tests and experiments, and first principles, here’s how I think JavaScript indexing is working at Google at the moment: I think there is a separate queue for JS-enabled rendering, because the computational cost of running JavaScript over the entire web is hard to justify when many, many pages don’t need it. In detail, I think:

  • Googlebot crawls and caches HTML and core resources regularly
  • Heuristics (and probably machine learning) are used to prioritize JavaScript rendering for each page:
    • Some pages are indexed with no JS execution. There are many pages that can probably be easily identified as not needing rendering, and others which are such a low priority that it isn’t worth the computing resources.
    • Some pages get immediate rendering – or possibly immediate basic/regular indexing, along with high-priority rendering. This would enable the immediate indexation of pages in news results or other QDF results, but also allow pages that rely heavily on JS to get updated indexation when the rendering completes.
    • Many pages are rendered async in a separate process/queue from both crawling and regular indexing, thereby adding the page to the index for new words and phrases found only in the JS-rendered version when rendering completes, in addition to the words and phrases found in the unrendered version indexed initially.
  • In addition to adding pages to the index, the JS rendering:
    • May make modifications to the link graph
    • May add new URLs to the discovery/crawling queue for Googlebot

The idea of JavaScript rendering as a distinct and separate part of the indexing pipeline is backed up by this quote from KMag, who I mentioned previously for his contributions to this HN thread (direct link) [emphasis mine]:

“I was working on the lightweight high-performance JavaScript interpretation system that sandboxed pretty much just a JS engine and a DOM implementation that we could run on every web page on the index. Most of my work was trying to improve the fidelity of the system. My code analyzed every web page in the index.

Towards the end of my time there, there was someone in Mountain View working on a heavier, higher-fidelity system that sandboxed much more of a browser, and they were trying to improve performance so they could use it on a higher percentage of the index.”

This was the situation in 2010. It seems likely that they have moved a long way towards the headless browser in all cases, but I’m skeptical about whether it would be worth their while to render every page they crawl with JavaScript given the expense of doing so and the fact that a large percentage of pages do not change substantially when you do.

My best guess is that they’re using a combination of trying to figure out the need for JavaScript execution on a given page, coupled with trust/authority metrics to decide whether (and with what priority) to render a page with JS.
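
To be clear, this is speculation, but the decision I'm guessing at could be as simple as something like the following sketch (every signal and threshold here is invented; this is emphatically not Google's actual code):

    // Speculative sketch of the hypothesized render-prioritization decision.
    function decideRendering(page) {
      if (!page.appearsToRelyOnJs) {
        return { render: false };                   // raw HTML is enough
      }
      if (page.isLikelyNewsworthy || page.authority > 0.8) {
        return { render: true, priority: 'high' };  // render promptly
      }
      if (page.authority > 0.2) {
        return { render: true, priority: 'async' }; // queue for later rendering
      }
      return { render: false };                     // not worth the compute
    }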

Run a test, get publicity

I have a hypothesis that I would love to see someone test: That it’s possible to get a page indexed and ranking for a nonsense word contained in the served HTML, but not initially ranking for a different nonsense word added via JavaScript; then, to see the JS get indexed some period of time later and rank for both nonsense words. If you want to run that test, let me know the results — I’d be happy to publicize them.
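
The test page itself needs very little: one nonsense word in the served HTML and a second added only by JavaScript (the words below are obviously just placeholders; pick your own).

    // The served HTML would contain something like <p>flurbonglewort</p>.
    // This script adds the second nonsense word client-side only:
    var marker = document.createElement('p');
    marker.textContent = 'zindlecrofter'; // JS-injected nonsense word
    document.body.appendChild(marker);
    // Then track when the page ranks for each word over time.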



Should SEOs Care About Internal Links? – Whiteboard Friday

Posted by Rand Fishkin


Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat a little bit about internal links and internal link structures. Now, it is not the most exciting thing in the SEO world, but it’s something that you have to get right and getting it wrong can actually cause lots of problems.

Attributes of internal links

So let’s start by talking about some of the things that are true about internal links. Internal links, when I say that phrase, what I mean is a link that exists on a website, let’s say ABC.com here, that is linking to a page on the same website, so over here, linking to another page on ABC.com. We’ll do /A and /B. This is actually my shipping routes page. So you can see I’m linking from A to B with the anchor text “shipping routes.”

The idea of an internal link is really initially to drive visitors from one place to another, to show them where they need to go to navigate from one spot on your site to another spot. They’re different from external links only in that, in the HTML code, you’re pointing to the same fundamental root domain. In the initial early versions of the internet, that didn’t matter all that much, but for SEO, it matters quite a bit because external links are treated very differently from internal links. That is not to say, however, that internal links have no power or no ability to change rankings, to change crawling patterns and to change how a search engine views your site. That’s what we need to chat about.

1. Anchor text is something that can be considered. The search engines have generally minimized its importance, but it’s certainly something that’s in there for internal links.

2. The location on the page actually matters quite a bit, just as it does with external links. Internal links, it’s almost more so in that navigation and footers specifically have attributes around internal links that can be problematic.

Those are essentially when Google in particular sees manipulation in the internal link structure, specifically things like you’ve stuffed anchor text into all of the internal links trying to get this shipping routes page ranking by putting a little link down here in the footer of every single page and then pointing over here trying to game and manipulate us, they hate that. In fact, there is an algorithmic penalty for that kind of stuff, and we can see it very directly.

We’ve actually run tests where we’ve observed that jamming these kinds of anchor-text-rich links into footers or navigation gets a site, well, let’s not say indexed, let’s say ranking poorly, and that removing them gets it ranking well again. Google reverses that penalty pretty quickly too, which is nice. So if you are not ranking well and you’re like, “Oh no, Rand, I’ve been doing a lot of that,” maybe take it away. Your rankings might come right back. That’s great.

3. The link target matters obviously from one place to another.

4. The importance of the linking page, this is actually a big one with internal links. So it is generally the case that if a page on your website has lots of external links pointing to it, it gains authority and it has more ability to sort of generate a little bit, not nearly as much as external links, but a little bit of ranking power and influence by linking to other pages. So if you have two very well-linked pages on your site, you should make sure to link out from those to pages on your site that a) need it and b) are actually useful for your users. That’s another signal we’ll talk about.

5. The relevance of the link, so pointing to my shipping routes page from a page about other types of shipping information, totally great. Pointing to it from my dog food page, well, it doesn’t make great sense. Unless I’m talking about shipping routes of dog food specifically, it seems like it’s lacking some of that context, and search engines can pick up on that as well.

6. The first link on the page. So this matters mostly in terms of the anchor text, just as it does for external links. Basically, if you are linking in a bunch of different places to this page from this one, Google will usually, at least in all of our experiments so far, count the first anchor text only. So if I have six different links to this and the first link says “Click here,” “Click here” is the anchor text that Google is going to apply, not “Click here” and “shipping routes” and “shipping.” Those subsequent links won’t matter as much.

7. Then the type of link matters too. Obviously, I would recommend that you keep it in the HTML link format rather than trying to do something fancy with JavaScript. Even though Google can technically follow those, it looks to us like they’re not treated with quite the same authority and ranking influence. Text is slightly, slightly better than images in our testing, although that testing is a few years old at this point. So maybe image links are treated exactly the same. Either way, do make sure you have that. If you’re doing image links, by the way, remember that the alt attribute of that image is what becomes the anchor text of that link.

Internal versus external links

A. External links usually give more authority and ranking ability.

That shouldn’t be surprising. An external link is like a vote from an independent, hopefully independent, hopefully editorially given website to your website saying, “This is a good place for you to go for this type of information.” On your own site, it’s like a vote for yourself, so engines don’t treat it the same.

B. Anchor text of internal links generally has less influence.

So, as we mentioned, me pointing to my page with the phrase that I want to rank for isn’t necessarily a bad thing, but I shouldn’t do it in a manipulative way. I shouldn’t do it in a way that’s going to look spammy or sketchy to visitors, because if visitors stop clicking around my site or engaging with it or they bounce more, I will definitely lose ranking influence much faster than if I simply make those links credible and usable and useful to visitors. Besides, the anchor text of internal links is not as powerful anyway.

C. A lack of internal links can seriously hamper a page’s ability to get crawled + ranked.

It is, however, the case that a lack of internal links, like an orphan page that doesn’t have many internal or any internal links from the rest of its website, that can really hamper a page’s ability to rank. Sometimes it will happen. External links will point to a page. You’ll see that page in your analytics or in a report about your links from Moz or Ahrefs or Majestic, and then you go, “Oh my gosh, I’m not linking to that page at all from anywhere else on my site.” That’s a bad idea. Don’t do that. That is definitely problematic.

D. It’s still the case, by the way, that, broadly speaking, pages with more links on them will send less link value per link.

So, essentially, you remember the original PageRank formula from Google. It said basically like, “Oh, well, if there are five links, send one-fifth of the PageRank power to each of those, and if there are four links, send one-fourth.” Obviously, one-fourth is bigger than one-fifth. So taking away that fifth link could mean that each of the four pages that you’ve linked to get a little bit more ranking authority and influence in the original PageRank algorithm.

Look, PageRank is old, very, very old at this point, but at least the theories behind it are not completely gone. So it is the case that if you have a page with tons and tons of links on it, that tends to send out less authority and influence than a page with few links on it, which is why it can definitely pay to do some spring cleaning on your website and clear out any rubbish pages or rubbish links, ones that visitors don’t want, that search engines don’t want, that you don’t care about. Clearing that up can actually have a positive influence. We’ve seen that on a number of websites where they’ve cleaned up their information architecture, whittled down their links to just the stuff that matters the most and the pages that matter the most, and then seen increased rankings across the board from all sorts of signals, positive signals, user engagement signals, link signals, context signals that help the engines rank them better.

E. Internal link flow (aka PR sculpting) is rarely effective, and usually has only mild effects… BUT a little of the right internal linking can go a long way.

Then finally, I do want to point out that what was previously called — you probably have heard of it in the SEO world — PageRank sculpting. This was a practice that I’d say from maybe 2003, 2002 to about 2008, 2009, had this life where there would be panel discussions about PageRank sculpting and all these examples of how to do it and software that would crawl your site and show you the ideal PageRank sculpting system to use and which pages to link to and not.

When PageRank was the dominant algorithm inside of Google’s ranking system, yeah, it was the case that PageRank sculpting could have some real effect. These days, that is dramatically reduced. It’s not entirely gone because of some of these other principles that we’ve talked about, just having lots of links on a page for no particularly good reason is generally bad and can have harmful effects and having few carefully chosen ones has good effects. But most of the time, internal linking, optimizing internal linking beyond a certain point is not very valuable, not a great value add.

But a little of what I’m calling the right internal linking, that’s what we’re going to talk about, can go a long way. For example, if you have those orphan pages or pages that are clearly the next step in a process or that users want and they cannot find them or engines can’t find them through the link structure, it’s bad. Fixing that can have a positive impact.

Ideal internal link structures

So ideally, in an internal linking structure system, you want something kind of like this. This is a very rough illustration here. But the homepage, which has maybe 100 links on it to internal pages. One hop away from that, you’ve got your 100 different pages of whatever it is, subcategories or category pages, places that can get folks deeper into your website. Then from there, each of those has maybe a maximum of 100 unique links, and they get you 2 hops away from a homepage, which takes you to 10,000 pages that do the same thing.

I. No page should be more than 3 link “hops” away from another (on most small–>medium sites).

Now, the idea behind this is that basically in one, two, three hops, three links away from the homepage and three links away from any page on the site, I can get to up to a million pages. So when you talk about, “How many clicks do I have to get? How far away is this in terms of link distance from any other page on the site?” a great internal linking structure should be able to get you there in three or fewer link hops. If it’s a lot more, you might have an internal linking structure that’s really creating sort of these long pathways of forcing you to click before you can ever reach something, and that is not ideal, which is why it can make very good sense to build smart categories and subcategories to help people get in there.

I’ll give you the most basic example in the world, a traditional blog. In order to reach any post that was published two years ago, I’ve got to click Next, Next, Next, Next, Next, Next through all this pagination until I finally get there. Or if I’ve done a really good job with my categories and my subcategories, I can click on the category of that blog post and I can find it very quickly in a list of the last 50 blog posts in that particular category, great, or by author or by tag, however you’re doing your navigation.

II. Pages should contain links that visitors will find relevant and useful.

If no one ever clicks on a link, that is a bad signal for your site, and it is a bad signal for Google as well. I don’t just mean literally no one ever. If very, very few people ever click it, and many of those who do click it hit the back button because it wasn’t what they wanted, that’s also a bad sign.

III. Just as no two pages should be targeting the same keyword or searcher intent, likewise no two links should be using the same anchor text to point to different pages. Canonicalize!

For example, if over here I had a shipping routes link that pointed to this page and then another shipping routes link, same anchor text pointing to a separate page, page C, why am I doing that? Why am I creating competition between my own two pages? Why am I having two things that serve the same function or at least to visitors would appear to serve the same function and search engines too? I should canonicalize those. Canonicalize those links, canonicalize those pages. If a page is serving the same intent and keywords, keep it together.

IV. Limit use of rel=”nofollow” to UGC or specific untrusted external links. It won’t help your internal link flow efforts for SEO.

Rel=”nofollow” was sort of the classic way that people had been doing PageRank sculpting that we talked about earlier here. I would strongly recommend against using it for that purpose. Google said that they’ve put in some preventative measures so that rel=”nofollow” links sort of do this leaking PageRank thing, as they call it. I wouldn’t stress too much about that, but I certainly wouldn’t use rel=”nofollow.”

What I would do is if I’m trying to do internal link sculpting, I would just do careful curation of the links and pages that I’ve got. That is the best way to help your internal link flow. That’s things like…

V. Removing low-value content, low-engagement content and creating internal links that people actually do want. That is going to give you the best results.

VI. Don’t orphan! Make sure pages that matter have links to (and from) them. Last, but not least, there should never be an orphan. There should never be a page with no links to it, and certainly there should never be a page that is well linked to that isn’t linking back out to portions of your site that are of interest or value to visitors and to Google.

So following these practices, I think you can do some awesome internal link analysis, internal link optimization and help your SEO efforts and the value visitors get from your site. We’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com



It’s Here: The Finalized MozCon 2017 Agenda

There are also pre-MozCon SEO workshops on Sunday, July 16. Keep reading for more info.

You will, however, need a ticket to attend the event, so you might want to take care of that sooner rather than later, since it always sells out:

Buy my MozCon 2017 ticket!

Now for the meaty details you’ve been waiting for.

The MozCon 2017 Agenda

Monday


08:00–09:00am
Breakfast


Rand Fishkin

09:00–09:20am
Welcome to MozCon 2017

Rand Fishkin, Wizard of Moz
@randfish

Rand Fishkin is the founder and former CEO of Moz, co-author of a pair of books on SEO, and co-founder of Inbound.org. Rand’s an un-save-able addict of all things content, search, and social on the web.


09:20–10:05am
How to Get Big Links

Lisa Myers, Verve Search
@LisaDMyers

Everyone wants links and coverage from sites such as New York Times, the Wall Street Journal, and the BBC, but very few achieve it. This is how we cracked it. Over and over.

Lisa is the founder and CEO of award-winning SEO agency Verve Search and founder of Womeninsearch.net. Feminist, mother of two, and modern-day shield maiden.



10:05–10:35am
Data-Driven Design

Oli Gardner, Unbounce
@oligardner

Data-Driven Design (3D) is an actionable, evidence-based framework for creating websites & landing pages that will increase your leads, sales, and customers. In this session you’ll learn how to use the latest industry conversion data to inform copywriting and design decisions that impact conversions. Additionally, I’ll share a new methodology for prioritizing your marketing optimization that will show you which pages are awesome (leave them alone), which pages aren’t (massive ROI potential here), and help you develop a common language that your teams of marketers, designers, and copywriters can use to work better together to collectively increase your conversion rates.

Oli, founder of Unbounce, is on a mission to rid the world of marketing mediocrity by using data-informed copywriting, design, interaction, and psychology to create a more delightful experience for marketers and customers alike.


10:35–11:05am
AM Break


11:10–11:30am
How to Write Customer-Driven Copy That Converts

Joel Klettke, Business Casual Copywriting & Case Study Buddy
@JoelKlettke

If you want to write copy that converts, you need to get into your customers’ heads. But how do you do that? How do you know which pain points you need to address, features customers care about, or benefits your audience needs to hear? Marketers are sick and tired of hearing “it depends.” I’ll give the audience a practical framework for writing customer-driven copy that any business can apply.

Joel is a freelance conversion copywriter and strategist for Business Casual Copywriting. He also owns and runs Case Study Buddy, a done-for-you case studies service.


11:30–11:50am
What We Learned From Reddit & How It Can Help Your Brand Take Content Marketing to the Next Level

Daniel Russell, Go Fish Digital
@dnlRussell

It almost seems too good to be true — online forums where people automatically segment themselves into different markets and demographics and then vote on what content they like best. These forums, including Reddit, are treasure troves of content ideas. I’ll share actionable insights from three case studies that demonstrate how your marketing can benefit from content on Reddit.

Daniel is a director at Go Fish Digital whose work has hit the front page of Reddit, earned the #1 spot on YouTube, and been featured in Entrepreneur, Inc., The Washington Post, WSJ, and Fast Company.


11:50am–12:10pm
How to Build an SEO-Intent-Based Framework for Any Business

Kathryn Cunningham, Adept Marketing
@kac4509

Everyone knows intent behind the search matters. In e-commerce, intent is somewhat easy to see. B2B, or better yet healthcare, isn’t quite as easy. Matching persona intent to keywords requires a bit more thought. I will cover how to find intent modifiers during keyword research, how to organize those modifiers into the search funnel, and how to quickly find unique universal results at different levels of the search funnel to utilize.

Kathryn is an SEO consultant for Adept Marketing, although to many of her office mates she is known as the Excel nerd.


12:10–01:40pm
Lunch


01:45–02:30pm
Size Doesn’t Matter: Great Content by Teams of One

Ian Lurie, Portent, Inc.
@portentint

Feel the energy surge through your veins as you gain content creation powers THE LIKES OF WHICH YOU HAVE NEVER EXPERIENCED… Or, just learn a process for creating great content when it’s just you and your little teeny team. Because size doesn’t matter.

Ian Lurie is founder, CEO, and nerdiest marketing nerd at Portent, a digital marketing agency he started in the Cretaceous era, aka 1995. Ian’s meandering career includes marketing copywriting, expert dungeon master, bike messenger-ing, and office temp worker.



02:30–03:00pm
The Tie That Binds: Why Email is Key to Maximizing Marketing ROI

Justine Jordan, Litmus
@meladorri

If nailing the omnichannel experience (whatever that means!) is key to getting more traffic and converting more leads, what happens if we have our channel priorities out of order? Justine will show you how email — far from being an old-school afterthought — is core to hitting marketing goals, building lifetime value, and making customers happy.

Justine is obsessed with helping marketers create, test, and send better email. Named 2015 Email Marketer Thought Leader of the Year, she is strangely passionate about email marketing, hates being called a spammer, and still gets nervous when pressing send.


03:00–03:30pm
PM Break



03:35–04:05pm

How to Be a Happy Marketer: Survive the Content Crisis and Drive Results by Mastering Your Customer’s Transformational Journey

Tara-Nicholle Nelson, Transformational Consumer Insights
@taranicholle

Branded content is way up, but customer engagement with that content is plummeting. This whole scene makes it hard to get up in the morning, as a marketer. But there’s a new path beyond the epidemic of disengagement and, at the end of it, your brand and your content become regular stops along your customer’s everyday journey.

Tara-Nicholle Nelson is the CEO of Transformational Consumer Insights, the former VP of Marketing for MyFitnessPal, and author of The Transformational Consumer.


04:05–04:50pm
Thinking Smaller: Optimizing for the New Wave of Social Video Platforms

Phil Nottingham, Wistia
@philnottingham

SnapChat, Facebook, Twitter, Instagram, Periscope… the list goes on. All social networks are now video platforms, but it’s hard to know where to invest. In this session, Phil will be giving you all the tips and tricks for what to make, how to get your content in front of the right audiences, and how to get the most value from the investment you’re making in social video.

Phil Nottingham is a strategist who believes in the power of creative video content to improve the way companies speak to their customers, and regularly speaks around the world about video strategy, SEO, and technical marketing.


07:00–10:00pm
Monday Night #MozCrawl

The Monday night pub crawl is back.

For the uninitiated, “pub crawl” is not meant to convey what you do after a night of drinking.

Rather, during the MozCon pub crawl, attendees visit some of the best bars in Seattle.

(Each stop is sponsored by a trusted partner; You’ll need to bring your MozCon badge for free drinks and light appetizers. You’ll also need your US ID or passport.)

More deets to follow.


Tuesday


08:00–09:00am
Breakfast



09:05–09:50am
I’d Rather Be Thanked Than Ranked

Wil Reynolds, Seer Interactive
@wilreynolds

Ego and assumptions led me to choose the wrong keywords for my own site — yeah, me, Wil Reynolds, Mr. RCS. How did I spend three years optimizing my site and building links to finally crack the top three for six critical keywords, only to find out that I wasted all that time? However, in spite of targeting the wrong words, Seer grew the business. In this presentation, I’ll show you the mistakes I made and share with you two approaches that can help you to build content that gets you thanked.

A former teacher with a knack for advising, he’s been helping Fortune 500 companies develop SEO strategies since 1999. Today, Seer is home to over 100 employees across Philadelphia and San Diego.


09:50–10:35am
Winning Value Propositions for Crawlers and Consumers

Dawn Anderson, Move It Marketing/Manchester Metropolitan University
@dawnieando

In an evolving mobile-first web, we can utilize preempting solutions to create winning value propositions, which are designed to attract and satisfy search engine crawlers and keep consumers happy. I’ll outline a strategy and share tactics that help ensure increased organic reach, in addition to highlighting smart ways to view data, intent, consumer choice theory, and crawl optimization.

Dawn Anderson is an International and Technical SEO Consultant, Director of Move It Marketing, and a lecturer at Manchester Metropolitan University.


10:35–11:05am
AM Break


11:10–11:15am
MozCon Ignite Preview


11:15–11:35am
More Than SEO: 3 Ways To Prove UX Matters Too

Matthew Edgar, Elementive
@MatthewEdgarCO

Great SEO is increasingly dependent on having a website with a great user experience. To make your user experience great requires carefully tracking what people do so that you always know where to improve. But what do you track? In this 15-minute talk, I’ll cover three effective and advanced ways to use event tracking in Google Analytics to understand a website’s user experience.

Matthew is a web analytics and technical marketing consultant at Elementive.


11:35–11:55am
A Site Migration: Redirects, Resources, & Reflection

Jayna Grassel, Dick’s Sporting Goods
@jaynagrassel

Site. Migration. No two words elicit more fear, joy, or excitement to a digital marketer. When the idea was shared three years ago, the company was excited. They dreamed of new features and efficiency. But as SEOs, we knew better. We knew there would be midnight strategy sessions with IT. More UAT environments than we could track. Deadlines, requirements, and compromises forged through hallway chats. …The result was a stable transition with minimal dips in traffic. What we didn’t know, however, was the amount of cross-functional coordination that was required to pull it off.

Jayna is the SEO manager at Dick’s Sporting Goods and is the unofficial world’s second-fastest crocheter.


11:55am–12:15pm
The 8 Paid Promotion Tactics That Will Get You To Quit Organic Traffic

Kane Jamison, Content Harmony
@kanejamison

Digital marketers are ignoring huge opportunities to promote their content through paid channels, and I want to give them the tools to get started. How many brands out there are spending $500+ on a blog post, then moving on to the next one before that post has been seen by 500 people, or even 50? For some reason, everyone thinks about Outbrain and native ads when we talk about paid content distribution, but the real opportunity is in highly targeted paid social.

Kane is the founder of Content Harmony, a content marketing agency based here in Seattle. The Content Harmony team specializes in full funnel content marketing and content promotion.


12:15–01:45pm
Lunch


01:50–02:20pm
Marketing in a Conversational World: How to Get Discovered, Delight Your Customers and Earn the Conversion

Purna Virji, Microsoft
@purnavirji

Capturing and keeping attention is one of the hardest parts of our job today. Fact: It’s just going to get harder with the advent of new technology and conversational interfaces. In the brave new world we’re stepping into, the key questions are: How do we get discovered? How can we delight our audiences? And how can we grow revenue for our clients? Come to this session to learn how to make your marketing and advertising efforts something people are going to want to consume.

Named by PPC Hero as the #1 most influential PPC expert in the world, Purna specializes in SEM, SEO, and future search trends. She is a popular global keynote speaker and columnist, an avid traveler, aspiring top chef, and amateur knitter.



02:20–02:50pm
Up and to the Right: Growing Traffic, Conversions, & Revenue

Matthew Barby, HubSpot
@matthewbarby

So many of the case studies that document how a company has grown from 0 to X forget to mention that the solutions they found are applicable to their specific scenario and won’t work for everyone. This falls into the dangerous category of bad advice for generic problems. Instead of building up a list of other companies’ tactics, marketers need to understand how to diagnose and solve problems across their entire funnel. Illustrated with real-world examples, I’ll be talking you through the process that I take to come up with ideas that none of my competitors are thinking of.

Matt, who heads up user acquisition at HubSpot, is an award-winning blogger, startup advisor, and a lecturer.



02:50–03:20pm
How to Operationalize Growth for Maximum Revenue

Joanna Lord, ClassPass
@JoannaLord

Joanna will walk through tactical ways to organize your team, build system foundations, and create processes that fuel growth across the company. You’ll hear how to coordinate with product, engineering, CX, and sales to ensure you’re maximizing your opportunity to acquire, retain, and monetize your customers.

Joanna is the CMO of ClassPass, the world’s leading fitness membership. Prior to that she was VP of Marketing at Porch and CMO of BigDoor. She is a global keynote and digital evangelist. Joanna is a recognized thought leader in digital marketing and a startup mentor.


03:20–03:50pm
PM Break


03:55–04:25pm
Analytics to Drive Optimization & Personalization

Krista Seiden, Google
@kristaseiden

Getting the most out of your optimization efforts means understanding the data you’re collecting, from analytics implementation, to report setup, to analysis techniques. In this session, Krista walks you through several tips for using analytics data to empower your optimization efforts, and then takes it further to show you how to up-level your efforts to take advantage of personalization from mass scale all the way down to individual user actions.

Krista Seiden is the Analytics Advocate for Google, advocating for all things data, web, mobile, optimization, and more. Keynote speaker, practitioner, writer on Analytics and Optimization, and passionate supporter of #WomenInAnalytics.



04:25–05:10pm
Facing the Future: 5 Simple Tactics for 5 Scary Changes

Dr. Pete Meyers, Moz
@dr_pete

We’ve seen big changes to SEO recently, from an explosion in SERP features to RankBrain to voice search. These fundamental changes to organic search marketing can be daunting, and it’s hard to know where to get started. Dr. Pete will walk you through five big changes and five tactics for coping with those changes today.

Dr. Peter J. Meyers (aka “Dr. Pete”) is Marketing Scientist for Seattle-based Moz, where he works with the marketing and data science teams on product research and data-driven content.


07:00–10:00pm
MozCon Ignite

Join us for an evening of networking and passion-talks. Laugh, cheer, and be inspired as your peers share their 5-minute talks about their hobbies, passion projects, and life lessons.

Be sure to bring your MozCon badge.


Wednesday


09:00–10:00am
Breakfast


10:05–10:50am
The Truth About Mobile-First Indexing

Cindy Krum, MobileMoxie, LLC
@suzzicks

Mobile-first design has been a best practice for a while, and Google is finally about to support it with mobile-first indexing. But mobile-first design and mobile-first indexing are not the same thing. Mobile-first indexing is about cross-device accessibility of information, to help integrate digital assistants and web-enabled devices that don’t even have browsers to achieve Google’s larger goals. Learn how mobile-first indexing will give digital marketers their first real swing at influencing Google’s new AI (Artificial Intelligence) landscape. Marketers who embrace an accurate understanding of mobile-first indexing could see a huge first-mover advantage, similar to the early days of the web, and we all need to be prepared.

Cindy, the CEO and Founder of MobileMoxie, LLC, is the author of Mobile Marketing: Finding Your Customers No Matter Where They Are. She brings fresh and creative ideas to her clients, and regularly speaks at US and international digital marketing events.



10:50–11:20am
Powerful Brands Have Communities

Tara Reed, Apps Without Code
@TaraReed_

You are laser-focused on user growth. Meanwhile, you’re neglecting a gold mine of existing customers who desperately want to be part of your brand’s community. Tara Reed shares how to use communities, gamification, and membership content to grow your revenue.

Tara Reed is a tech entrepreneur & marketer. After running marketing initiatives at Google, Foursquare, & Microsoft, Tara branched out to launch her own apps & startups. Today, Tara helps people implement cutting-edge marketing into their businesses.


11:20–11:50am
AM Break


11:55am–12:25pm

From Anchor to Asset: How Agencies Can Wisely Create Data-Driven Content

Heather Physioc, VML
@HeatherPhysioc

Creative agencies are complicated and messy, often embracing chaos instead of process, and focusing exclusively on one-time campaign creative instead of continuous web content creation. Campaign creative can be costly, and not sustainable for most large brands. How can creative shops produce data-driven streams of high-quality content for the web that stays true to its creative roots — but faster, cheaper, and continuously? I’ll show you how.

Heather is director of Organic Search at global digital ad agency VML, which performs search engine optimization services for multinational brands like Hill’s Pet Nutrition, Electrolux/Frigidaire, Bridgestone, EXPRESS, and Wendy’s.


12:25–12:55pm
5 Secrets: How to Execute Lean SEO to Increase Qualified Leads

Britney Muller, Moz
@BritneyMuller

I invite you to steal some of the ideas I’ve gleaned from managing SEO for the behemoth bad-ass Moz.com. Learn what it takes to move the needle on qualified leads, execute quick wins, and keep your head above water. I’ll go over my biggest Moz.com successes, failures, tests, and lessons.

Britney is a Minnesota native who moved to Colorado to fulfill a dream of being a snowboard bum! After 50+ days on the mountain her first season, she got stir-crazy and taught herself how to program, then found her way into SEO while writing for a local realtor.


12:55–02:25pm
Lunch


02:30–03:15pm
SEO Experimentation for Big-Time Results

Stephanie Chang, Etsy
@stephpchang

One of the biggest business hurdles any brand faces is how to prioritize and validate SEO recommendations. This presentation describes an SEO experimentation framework you can use to effectively test how changes made to your pages affect SEO performance.

Stephanie currently leads the Global Acquisition & Retention Marketing teams at Etsy. Previously, she was a Senior Consultant at Distilled.



03:15–03:45pm
Reverse-Engineer Google’s Research to Serve Up the Best, Most Relevant Content for Your Audience

Rob Bucci, STAT Search Analytics
@STATrob

The SERP is the front-end to Google’s multi-billion dollar consumer research machine. They know what searchers want. In this data-heavy talk, Rob will teach you how to uncover what Google already knows about what web searchers are looking for. Using this knowledge, you can deliver the right content to the right searchers at the right time, every time.

Rob loves the challenge of staying ahead of the changes Google makes to their SERPs. When not working, you can usually find him hiking up a mountain, falling down a ski slope, or splashing around in the ocean.


03:45–04:15pm
PM Break


04:20–05:05pm
Inside the Googling Mind: An SEO’s Guide to Winning Clicks, Hearts, & Rankings in the Years Ahead

Rand Fishkin, Founder of Moz, doer of SEO, feminist
@randfish

Searcher behavior, intent, and satisfaction are on the verge of overtaking classic SEO inputs (keywords, links, on-page, etc). In this presentation, Rand will examine the shift that behavioral signals have caused, and list the step-by-step process to build a strategy that can thrive long-term in Google’s new reality.

Rand Fishkin is the founder and former CEO of Moz, co-author of a pair of books on SEO, and co-founder of Inbound.org. Rand’s an un-save-able addict of all things content, search, and social on the web.


07:00–11:30pm
MozCon Bash

Join us at Garage Billiards for an evening of networking, billiards, bowling, and karaoke with MozCon friends new and old. Don’t forget to bring your MozCon badge and US ID or passport.


Additional Pre-MozCon Sunday Workshops


12:30pm–5:05pm
SEO Intensive

Offered as 75-minute sessions, the five workshops will be taught by Mozzers Rand Fishkin, Britney Muller, Brian Childs, Russ Jones, and Dr. Pete. Topics include The 10 Jobs of SEO-focused Content, Keyword Targeting for RankBrain and Beyond, and Risk-Averse Link Building at Scale, among others.

These workshops are separate from MozCon; you’ll need a ticket to attend them.


Amped up for a talk or ten? Curious about new methods? Excited to learn? Get your ticket before they sell out:

Snag my ticket to MozCon 2017!

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Tackling Tag Sprawl: Crawl Budget, Duplicate Content, and User-Generated Content

Posted by Jacob Bohall, VP of Marketing at Hive Digital. Computational statistics services were provided by J.R. Oakes of Adapt Partners and Russ Jones of Moz. Let’s dive in.

What is tag sprawl?

We define tag sprawl as the unchecked growth of unique, user-contributed tags resulting in a large amount of near-duplicate pages and unnecessary crawl space. Tag sprawl generates URLs likely to be classified as doorway pages, pages appearing to exist only for the purpose of building an index across an exhaustive array of keywords. You’ve probably seen this in its most basic form in the tagging of posts across blogs, which is why most SEOs recommend a blanket “noindex, follow” across tag pages in WordPress sites. This simple approach can be an effective solution for small blog sites, but is not often the solution for major e-commerce sites that rely more heavily on tags for categorizing products.

The three following tag clouds represent a list of user-generated terms associated with different stock photos. Note: User behavior is generally to place as many tags as possible in an attempt to ensure maximum exposure for their products.

  1. USS Yorktown, Yorktown, cv, cvs-10, bonhomme richard, revolutionary war-ships, war-ships, naval ship, military ship, attack carriers, patriots point, landmarks, historic boats, essex class aircraft carrier, water, ocean
  2. ship, ships, Yorktown, war boats, Patriot pointe, old war ship, historic landmarks, aircraft carrier, war ship, naval ship, navy ship, see, ocean
  3. Yorktown ship, Warships and aircraft carriers, historic military vessels, the USS Yorktown aircraft carrier

As you can see, each user has generated valuable information for the photos, which we would want to use as a basis for creating indexable taxonomies for related stock images. However, at any type of scale, we have immediate threats of:

  • Thin content: Only a handful of products share the user-generated tag when a user creates a more specific/defining tag, e.g. “cvs-10”
  • Duplicate and similar content: Many of these tags will overlap, e.g. “USS Yorktown” vs. “Yorktown,” “ship” vs. “ships,” “cv” vs. “cvs-10,” etc.
  • Bad content: Created by improper formatting, misspellings, verbose tags, hyphenation, and similar mistakes made by users.

Now that you understand what tag sprawl is and how it negatively affects your site, how can we address this issue at scale?

The proposed solution

In correcting tag sprawl, we have some basic (at the surface) problems to solve. We need to effectively review each tag in our database and place them in groups so further action can be taken. First, we determine the quality of a tag (how likely is someone to search for this tag, is it spelled correctly, is it commercial, is it used for many products) and second, we determine if there is another tag very similar to it that has a higher quality.

  1. Identify good tags: We defined a good tag as a term capable of contributing meaning, and easily justifiable as an indexed page in search results. This also entailed identifying a “master” tag to represent groups of similar terms.
  2. Identify bad tags: We wanted to isolate tags that should not appear in our database due to misspellings, duplicates, poor format, high ambiguity, or a likelihood of producing a low-quality page.
  3. Relate bad tags to good tags: We assumed many of our initial “bad tags” could be a range of duplicates, i.e. plural/singular, technical/slang, hyphenated/non-hyphenated, conjugations, and other stems. There could also be two phrases which refer to the same thing, like “Yorktown ship” vs. “USS Yorktown.” We need to identify these relationships for every “bad” tag.

For the project inspiring this post, our sample tag database comprised over 2,000,000 “unique” tags, making this a nearly impossible feat to accomplish manually. While theoretically we could have leveraged Mechanical Turk or a similar platform to get “manual” review, early tests of this method proved to be unsuccessful. We would need a programmatic method (several methods, in fact) that we could later reproduce when adding new tags.

The methods

Keeping in mind the goal of identifying good tags, labeling bad tags, and relating bad tags to good tags, we employed more than a dozen methods, including: spell correction, bid value, tag search volume, unique visitors, tag count, Porter stemming, lemmatization, Jaccard index, Jaro-Winkler distance, Keyword Planner grouping, Wikipedia disambiguation, and K-Means clustering with word vectors. Each method either helped us determine whether a tag was valuable or, if it wasn’t, helped us identify an alternate tag that was.

  • Spell correction: We started with a built-in spell checker called Aspell, which we were able to use to fix a large volume of issues.
  • Benefits: This offered a quick, early win in that it was fairly easy to identify bad tags when they were composed of words that weren’t included in the dictionary or included characters that were simply inexplicable (like a semicolon in the middle of a word). Moreover, if the corrected word or phrase occurred in the tag list, we could trust the corrected phrase as a potentially good tag, and relate the misspelled term to the good tag. Thus, this method help us both filter bad tags (misspelled terms) and find good tags (the spell-corrected term)
  • Limitations: The biggest limitation with this methodology was that combinations of correctly spelled words or phrases aren’t necessarily useful for users or the search engine. For example, many of the tags in the database were concatenations of multiple tags where the user space-delimited rather than comma-delimited their submitted tags. Thus, a tag might consist of correctly spelled terms but still be useless in terms of search value. Moreover, there were substantial dictionary limitations, especially with domain names, brand names, and Internet slang. In order to accommodate this, we added a personal dictionary that included a list of the top 10,000 domains according to Quantcast, several thousand brands, and a slang dictionary. While this was helpful, there were still several false recommendations that needed to be handled. For example, we saw “purfect” correct to “perfect,” despite being a pop-culture reference for cat images. We also noticed some users reference this saying as “purrfect,” “purrrfect,” “purrrrfect,” “purrfeck,” etc. Ultimately, we had to rely on other metrics to determine whether we trusted the misspelling recommendations.
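
To make the spell-correction pass concrete, here is a minimal sketch in Python, assuming GNU Aspell is installed locally and invoked in its Ispell-compatible pipe mode; the tag list, helper name, and mapping rules are illustrative rather than the authors’ actual pipeline.

```python
# Minimal sketch: flag misspelled tag words with Aspell and, where the
# corrected phrase already exists as a tag, map the bad tag to the good one.
import subprocess

def aspell_suggestions(words, lang="en"):
    """Run words through `aspell -a` and return {misspelled_word: first_suggestion}."""
    payload = "\n".join("^" + w for w in words)   # "^" tells pipe mode to check the whole line as text
    result = subprocess.run(
        ["aspell", "-a", "--lang=" + lang],
        input=payload, capture_output=True, text=True, check=True,
    )
    fixes = {}
    for line in result.stdout.splitlines():
        if line.startswith("&"):                  # format: "& original count offset: sugg1, sugg2, ..."
            header, suggestions = line.split(":", 1)
            fixes[header.split()[1]] = suggestions.split(",")[0].strip()
    return fixes

tags = {"naval ship", "historic boats", "historic boatss"}
words = sorted({w for tag in tags for w in tag.split()})
fixes = aspell_suggestions(words)

for tag in tags:
    corrected = " ".join(fixes.get(w, w) for w in tag.split())
    if corrected != tag and corrected in tags:
        print(f"remap bad tag {tag!r} -> existing good tag {corrected!r}")
    elif corrected != tag:
        print(f"flag {tag!r} for review (suggested: {corrected!r})")
```
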
  • Porter stemming: The Porter stemming algorithm is available in the Snowball stemming library, and there are JavaScript implementations you can experiment with as well.
  • Benefits: Plural and possessive terms can be grouped by their stem for further analysis. Running Porter stemming on the terms “pony” and “ponies” will return “poni” as the stem, which can then be used to group terms for further analysis. You can also run Porter stemming on phrases. For example, “boating accident,” “boat accidents,” “boating accidents,” etc. share the stem “boat accid.” This can be a crude and quick method for grouping variations. Porter stemming also is able to clean text more kindly, where others stemmers can be too aggressive for our efforts; e.g., Lancaster stemmer reduces “woman” to “wom,” while Porter stemmer leaves it as “woman.”
  • Limitations: Stemming is intended for finding a common root for terms and phrases, and does not create any type of indication as to the proper form of a term. The Porter stemming method applies a fixed set of rules to the English language by blanket removing trailing “s,” “e,” “ance,” “ing,” and similar word endings to try and find the stem. For this to work well, you have to have all of the correct rules (and exceptions) in place to get the correct stems in all cases. This can be particularly problematic with words that end in S but are not plural, like “billiards” or “Brussels.” Additionally, this method does not help with mapping related terms such as “boat crash,” “crashed boat,” “boat accident,” etc. which would stem to “boat crash,” “crash boat,” and “boat acci.”
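
As a quick illustration of the stemming step (not code from the original write-up), NLTK’s Porter and Lancaster stemmers reproduce the behavior described above; the tag list is made up.

```python
# Group tag variants by the Porter stem of each word, e.g. "boating accidents" -> "boat accid".
from collections import defaultdict
from nltk.stem import PorterStemmer, LancasterStemmer

porter = PorterStemmer()

def stem_phrase(phrase):
    return " ".join(porter.stem(w) for w in phrase.lower().split())

tags = ["pony", "ponies", "boating accident", "boat accidents", "boating accidents", "woman"]
groups = defaultdict(list)
for tag in tags:
    groups[stem_phrase(tag)].append(tag)

for stem, variants in groups.items():
    print(stem, "<-", variants)       # e.g. "poni" <- ['pony', 'ponies']

# Porter is the gentler choice mentioned above: it leaves "woman" intact,
# while Lancaster truncates it to "wom".
print(porter.stem("woman"), LancasterStemmer().stem("woman"))
```
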
  • Lemmatization: Lemmatization looks words up in a lexical database such as WordNet and returns a canonical “lemma” of each word. A crude way to think about lemmatization is just simplifying a word.
  • Benefits: This method often works better than stemming. Terms like “ship,” “shipped,” and “ships” are all mapped to “ship” by this method, while “shipping” or “shipper,” which are terms that have distinct meaning despite the same stem, are retained. You can create an array of “lemma” from phrases which can be compared to other phrases resolving word order issues. This proved to be a more reliable method for grouping variations than stemming.
  • Limitations: As with many of the methods, context for mapping related terms can be difficult. Lemmatization can provide better filters for context, but to do so generally relies on identifying the word form (noun, adjective, etc) to appropriately map to a root term. Given the inconsistency of the user-generated content, it is inaccurate to assume all words are in adjective form (describing a product), or noun form (the product itself). This inconsistency can present wild results. For example, “strip socks” could be intended as as a tag for socks with a strip of color on them, such as as “striped socks,” or it could be “stripper socks” or some other leggings that would be a match only found if there other products and tags to compare for context. Additionally, it doesn’t create associations between all related words, just textual derivatives, so you are still seeking out a canonical between mailman, courier, shipper, etc.
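
Here is a similarly small sketch of the lemmatization idea using NLTK’s WordNet lemmatizer; the part-of-speech arguments and the tag-comparison helper are illustrative, and the POS guess is exactly the weak point called out in the limitations above.

```python
import nltk
from nltk.stem import WordNetLemmatizer

nltk.download("wordnet", quiet=True)               # one-time download of the WordNet data
lemmatizer = WordNetLemmatizer()

print(lemmatizer.lemmatize("ships", pos="n"))      # -> ship
print(lemmatizer.lemmatize("shipped", pos="v"))    # -> ship
print(lemmatizer.lemmatize("shipping", pos="n"))   # -> shipping (kept as its own noun)

def lemma_key(phrase, pos="n"):
    """Sorted tuple of lemmas, so word order no longer matters when comparing tags."""
    return tuple(sorted(lemmatizer.lemmatize(tok, pos=pos) for tok in phrase.lower().split()))

# "boat accidents" and "accident boats" collapse to the same key.
print(lemma_key("boat accidents") == lemma_key("accident boats"))   # True
```
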
  • Jaccard index: The Jaccard index is a similarity coefficient measured as Intersection over Union. Now, don’t run off just yet; it is actually quite straightforward.

    Imagine you had two piles with 3 marbles in each: Red, Green, and Blue in the first; Red, Green, and Yellow in the second. The “Intersection” of these two piles would be Red and Green, since both piles have those two colors. The “Union” would be Red, Green, Blue and Yellow, since that is the complete list of all the colors. The Jaccard index would be 2 (Red and Green) divided by 4 (Red, Green, Blue, and Yellow). Thus, the Jaccard index of these two piles would be .5. The higher the Jaccard index, the more similar the two sets.
    So what does this have to do with tags? Well, imagine we have two tags: “ocean” and “sea.” We can get a list of all of the products that have the tag “ocean” and a list of all of the products that have the tag “sea.” Finally, we get the Jaccard index of those two sets. The higher the score, the more related they are. Perhaps we find that 70% of the products with the tag “ocean” also have the tag “sea”; we now know that the two are fairly well-related. However, when we run the same measurement to compare “basement” and “casement,” we find that they only have a Jaccard index of .02. Even though they are very similar in terms of characters, they mean quite different things. We can rule out mapping the two terms together.

  • Benefits: The greatest benefit of using the Jaccard index is that it allows us to find highly related tags which may have absolutely no textual characteristics in common, and are more likely to have an overly similar or duplicate results set. While most of the metrics we have considered so far help us find “good” or “bad” tags, the Jaccard index helps us find “related” tags without having to do any complex machine learning.
  • Limitations: While certainly useful, the Jaccard index methodology has its own problems. The biggest issue we ran into had to do with tags that were used together nearly all the time but weren’t substitutes of one another. For example, consider the tags “babe ruth” and his nickname, “sultan of swat.” The latter tag only occurred on products which also had the “babe ruth” tag (since this was one of his nicknames), so they had quite a high Jaccard index. However, Google doesn’t map these two terms together in search, so we would prefer to keep the nickname and not simply redirect it to “babe ruth.” We needed to dig deeper if we were to determine when we should keep both tags or when we should redirect one to another. As a standalone, this method also was not sufficient at identifying cases where a user consistently misspelled tags or used incorrect syntax, as their products would essentially be orphans without “union.”
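
Because the Jaccard index only needs set arithmetic, the computation itself is a few lines of Python; the product IDs below are invented purely to mirror the “ocean”/“sea” and “basement”/“casement” examples.

```python
def jaccard(a, b):
    """Intersection over union of two sets (0 = no overlap, 1 = identical)."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

products_by_tag = {
    "ocean":    {101, 102, 103, 104, 105, 106, 107},
    "sea":      {101, 102, 103, 104, 105, 120},
    "basement": {301, 302},
    "casement": {401},
}

print(jaccard(products_by_tag["ocean"], products_by_tag["sea"]))          # high: likely related tags
print(jaccard(products_by_tag["basement"], products_by_tag["casement"]))  # 0.0: lookalike strings only
```
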
  • Jaro-Winkler distance: There are a number of edit distance and string similarity metrics that we used throughout this process. Edit distance is simply some measurement of how difficult it is to change one word to another. For example, the most basic edit distance metric, Levenshtein distance, between “Russ Jones” and “Russell Jones” is 3 (you have to add “E,” “L,” and “L” to transform Russ to Russell). This can be used to help us find similar words and phrases. In our case, we used a particular edit distance measure called “Jaro-Winkler distance” which gives higher precedence to words and phrases that are similar at the beginning. For example, “Baseball” would be closer to “Baseballer” than to “Basketball” because the differences are at the very end of the term.
  • Benefits: Edit distance metrics helped us find many very similar variants of tags, especially when the variants were not necessarily misspellings. This was particularly valuable when used in conjunction with the Jaccard index metrics, because we could apply a character-level metric on top of a character-agnostic metric (i.e. one that cares about the letters in the tag and one that doesn’t).
  • Limitations: Edit distance metrics can be kind of stupid. According to Jaro-Winkler distance, “Baseball” and “Basketball” are far more related to one another than “Baseball” and “Pitcher” or “Catcher.” “Round” and “Circle” have a horrible edit distance metric, while “Round” and “Pound” look very similar. Edit distance simply cannot be used in isolation to find similar tags.
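
The project used Jaro-Winkler (implementations exist in string-similarity libraries such as jellyfish), but the simpler Levenshtein metric described above is easy to write out directly, which makes the “Russ Jones” example and its pitfalls reproducible.

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions, or substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # delete a character from a
                curr[j - 1] + 1,           # insert a character into a
                prev[j - 1] + (ca != cb),  # substitute (free if the characters match)
            ))
        prev = curr
    return prev[-1]

print(levenshtein("Russ Jones", "Russell Jones"))  # 3
print(levenshtein("Round", "Pound"))               # 1, despite unrelated meanings
print(levenshtein("Round", "Circle"))              # large, despite related meanings
```
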
  • Wikipedia disambiguation: We also checked tags against Wikipedia, which either returns a matching article for the term (e.g. “pontoon boats”), or redirects you to a correction of the article (“disneyworld” becomes “Walt Disney World”). Wikipedia also tends to have entries for some pop culture references, so things that would get flagged as a misspelling, such as “lolcats,” can be vindicated by the existence of a matching Wikipedia article.
  • Limitations: While Wikipedia is effective at delivering a consistent formal tag for disambiguation, it can at times be more sterile than user-friendly. This can run counter to other signals such as CPC or traffic volume methods. For example, “pontoon boats” becomes “Pontoon (Boat)”, or “Lily” becomes “lilium.” All signals indicate the former case as the most popular, but Wikipedia disambiguation suggests the latter to be the correct usage. Wikipedia also contains entries for very broad terms, like each number, year, letter, etc. so simply applying a rule that any Wikipedia article is an allowed tag would continue to contribute to tag sprawl problems.
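
The post doesn’t show how the Wikipedia check was wired up; one way to approximate it is the public MediaWiki query API, as in this sketch (the interpretation of missing vs. redirected titles is our own simplification).

```python
import requests

def wikipedia_lookup(term):
    """Return the canonical Wikipedia title for a term, or None if no article exists."""
    data = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={"action": "query", "titles": term, "redirects": 1, "format": "json"},
        timeout=10,
    ).json()
    page = next(iter(data["query"]["pages"].values()))
    return None if "missing" in page else page["title"]

for tag in ["disneyworld", "lolcats", "zxqv boat"]:
    print(tag, "->", wikipedia_lookup(tag))
```
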
  • K-means clustering with word vectors: The last method relied on word embeddings and k-means clustering. Generally, the process involved transforming the tags into tokens (individual words), then refining by part-of-speech (noun, verb, adjective), and finally lemmatizing the tokens (“blue shirts” becomes “blue shirt”). From there, we transformed all the tokens into a custom Word2Vec embedding model based on adding the vectors of each resulting token array. We created a label array and a vector array of each tag in the dataset, then ran k-means with 10 percent of the total count of the tags as the value for the number of centroids. At first we tested on 30,000 tags and obtained reasonable results.
    Once k-means had completed, we pulled all of the centroids and obtained their nearest relative from the custom Word2Vec model, then we assigned the tags to their centroid category in the main dataset.

    Tag Tokens | Tag POS | Tag Lemma | Categorization
    ['beach', 'photographs'] | [('beach', 'NN'), ('photographs', 'NN')] | ['beach', 'photograph'] | beach photo
    ['seaside', 'photographs'] | [('seaside', 'NN'), ('photographs', 'NN')] | ['seaside', 'photograph'] | beach photo
    ['coastal', 'photographs'] | [('coastal', 'JJ'), ('photographs', 'NN')] | ['coastal', 'photograph'] | beach photo
    ['seaside', 'photographs'] | [('seaside', 'NN'), ('photographs', 'NN')] | ['seaside', 'photograph'] | beach photo
    ['seaside', 'posters'] | [('seaside', 'NN'), ('posters', 'NNS')] | ['seaside', 'poster'] | beach photo
    ['coast', 'photographs'] | [('coast', 'NN'), ('photographs', 'NN')] | ['coast', 'photograph'] | beach photo
    ['beach', 'photos'] | [('beach', 'NN'), ('photos', 'NNS')] | ['beach', 'photo'] | beach photo

    The Categorization column above was the centroid selected by k-means. Notice how it handled the matching of “seaside” to “beach” and “coastal” to “beach.”

  • Benefits: This method seemed to do a good job of finding associations between the tags and their categories that were more semantic than character-driven. “Blue shirt” might be matched to “clothing.” This was obviously not possible without the semantic relationships found within the vector space.
  • Limitations: Ultimately, the chief limitation that we encountered was trying to run k-means on the full two million tags while ending up with 200,000 categories (centroids). Sklearn for Python allows for multiple concurrent jobs, but only across the initialization of the centroids, which in this case was 11 — meaning that even if you ran on a 60-core processor, the number of concurrent jobs was limited by the number of initializations, which in this case was again 11. We tried PCA (principal component analysis) to reduce the vector sizes (300 to 10) but the results were overall poor. Additionally, because embeddings are generally built based on probabilistic closeness of terms in the corpus on which they were trained, there were matches that you could understand why they matched, but which would obviously not have been the correct category (e.g. “19th century art” was picked as a category for “18th century art”). Finally, context matters and the word embeddings obviously suffer from understanding the difference between “duck” (the animal) and “duck” (the action).
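
A compressed sketch of that clustering pipeline, using gensim (4.x parameter names) for the custom Word2Vec model and scikit-learn for k-means, is shown below; the toy tag list, vector size, and centroid count are placeholders for the real two-million-tag job.

```python
import nltk
import numpy as np
from gensim.models import Word2Vec
from nltk.stem import WordNetLemmatizer
from sklearn.cluster import KMeans

nltk.download("wordnet", quiet=True)
lemmatizer = WordNetLemmatizer()

tags = ["beach photographs", "seaside photographs", "coastal photographs",
        "seaside posters", "coast photographs", "beach photos", "blue shirt"]

# Tokenize and lemmatize each tag ("beach photos" -> ["beach", "photo"]).
token_lists = [[lemmatizer.lemmatize(w) for w in t.lower().split()] for t in tags]

# Train a tiny custom Word2Vec model on the tag corpus itself.
model = Word2Vec(sentences=token_lists, vector_size=50, min_count=1, seed=1)

# Represent each tag as the sum of its token vectors.
tag_vectors = np.array([np.sum([model.wv[w] for w in toks], axis=0) for toks in token_lists])

# Roughly 10% of the tag count as the number of centroids (floored at 2 for this toy set).
k = max(2, len(tags) // 10)
kmeans = KMeans(n_clusters=k, n_init=10, random_state=1).fit(tag_vectors)

# Label each tag with its cluster and the vocabulary term nearest the centroid.
for tag, label in zip(tags, kmeans.labels_):
    centroid = kmeans.cluster_centers_[label].astype(np.float32)
    nearest_term, _ = model.wv.similar_by_vector(centroid, topn=1)[0]
    print(tag, "-> cluster", label, "/ nearest term:", nearest_term)
```
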
Bringing it all together

Using a combination of the methods above, we were able to develop a series of methodology confidence scores that could be applied to any tag in our dataset, generating a heuristic for how to consider each tag going forward. These were case-level strategies to determine the appropriate methodology. We denoted these as follows (a simplified sketch of these rules as code follows the list):

    • Good Tags: This mostly started as our “do not touch” list of terms which already received traffic from Google. After some confirmation exercises, the list was expanded to include unique terms with rankings potential, commercial appeal, and unique product sets to deliver to customers. For example, a heuristic for this category might look like this:
      1. If tag is identical to Wikipedia entry and
      2. Tag + product has estimated search traffic and
      3. Tag has CPC value then
      4. Mark as “Good Tag”
    • Okay Tags: This represents terms that we would like to retain associated with products and their descriptions, as they could be used within the site to add context to a page, but do not warrant their own indexable space. These tags were mapped to be redirected or canonicaled to a “master,” but still included on a page for topical relevancy, natural language queries, long-tail searches, etc. For example, a heuristic for this category might look like this:
      1. If tag is identical to Wikipedia entry but
      2. Tag + product has no search volume and
      3. Vector tag matches a “Good Tag” then
      4. Mark as “Okay Tag” and redirect to “Good Tag”
    • Bad Tags to Remap: This grouping represents bad tags that were mapped to a replacement. These tags would literally be deleted and replaced with a corrected version. These were most often misspellings or terms discovered through stemming/lemmatization/etc. where a dominant replacement was identified. For example, a heuristic for this category might look like this:
      1. If tag is not identical to either Wikipedia or vector space and
      2. Tag + product has no search volume and
      3. Tag has no volume and
      4. Tag Wikipedia entry matches a “Good Tag” then
      5. Mark as “Bad Tag to Remap”
    • Bad Tags to Remove: These are tags that were flagged as bad tags that could not be related to a good tag. Essentially, these needed to be removed from our database completely. This final group represented the worst of the worst in the sense that the existence of the tag would likely be considered a negative indicator of site quality. Considerations were made for character length of tags, lack of Wikipedia entries, inability to map to word vectors, no previous traffic, no predicted traffic or CPC value, etc. In many cases, these were nonsense phrases.
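
The decision logic above can be reduced to something like the following function; every signal name and threshold here is a stand-in for the richer confidence scores described in the post, so treat it as a sketch rather than the authors’ implementation.

```python
def classify_tag(signals, good_tags):
    """Rough coding of the case-level heuristics: good / okay / remap / remove."""
    has_wiki   = signals.get("wiki_match", False)    # tag is identical to a Wikipedia entry
    volume     = signals.get("search_volume", 0)     # estimated tag + product search traffic
    cpc        = signals.get("cpc", 0.0)             # commercial value signal
    vec_match  = signals.get("vector_match")         # nearest "Good Tag" in vector space, if any
    wiki_canon = signals.get("wiki_redirect")        # canonical Wikipedia title, if any

    if has_wiki and volume > 0 and cpc > 0:
        return ("good", None)                        # keep and index
    if has_wiki and volume == 0 and vec_match in good_tags:
        return ("okay", vec_match)                   # keep on page, canonical/redirect to master
    if not has_wiki and vec_match not in good_tags and volume == 0 and wiki_canon in good_tags:
        return ("remap", wiki_canon)                 # delete and replace with the corrected tag
    return ("remove", None)                          # nothing ties it to a good tag

good_tags = {"uss yorktown", "aircraft carrier"}
print(classify_tag({"wiki_match": True, "search_volume": 40, "cpc": 1.2}, good_tags))
print(classify_tag({"wiki_match": False, "wiki_redirect": "uss yorktown"}, good_tags))
```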

Altogether, we were able to reduce the number of tags by 87.5%, consolidating the site down to a reasonable, targeted, and useful set of tags which properly organized the corpus without either wasting crawl budget or limiting user engagement.

Conclusions: Advanced white hat SEO

It was nearly nine years ago that a well-known black hat SEO called out white hat SEO as being simple, stale, and bereft of innovation. He claimed that “advanced white hat SEO” was an oxymoron — it simply did not exist. I was proud at the time to respond to his claims with a technique Hive Digital was using which I called “Second Page Poaching.” It was a great technique, but it paled in comparison to the sophistication of methods we now see today. I never envisioned either the depth or breadth of technical proficiency which would develop within the white hat SEO community for dealing with unique but persistent problems facing webmasters.

I sincerely doubt most of the readers here will have the specific tag sprawl problem described above. I’d be lucky if even a few of you have run into it. What I hope is that this post might disabuse us of any caricatures of white hat SEO as facile or stagnant and inspire those in our space to their best work.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

    Lessons from 1,000 Voice Searches (on Google Home)

    This post looks at the relationship between Featured Snippets in search and voice answers on Google Home.

    For example, let's say a hedgehog wanders into your house and you naturally find yourself wondering what you should feed it. You might search for "What do hedgehogs eat?" On desktop, you'd see a Featured Snippet like the following:

    Given that you're trying to wrangle a strange hedgehog, searching on your computer may not be practical, so you ask Google Home: "Ok, Google — what do hedgehogs eat?" and hear the following:

    Google Home leads with the attribution to Ark Wildlife (since a voice answer has no direct link), and then repeats a short form of the desktop snippet. The connection between the two answers is, I hope, apparent.

    Anecdotally, this is a pattern we've seen frequently on Google Home, but how consistent is it? How does Google handle Featured Snippets in other formats (including lists and tables)? Are some questions answered very differently by Google Home than on desktop search?

    Methodology (10K → 1K)

    To find the answers to these questions, I needed to start with a fairly large set of searches that were likely to generate answers in the form of Featured Snippets. My colleague Russ Jones pulled a set of roughly 10,000 popular searches beginning with question words (Who, What, Where, Why, When, How) from a third-party "clickstream" source (actual web activity from a large set of users).

    I ran those searches on desktop (automatically, of course) and found that just over half (53%) had Featured Snippets. As we've seen in other data sets, Google is clearly getting serious about direct answers.
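
    As a rough illustration, here's how that first pass might be reproduced, assuming a hypothetical CSV (popular_searches.csv with keyword and has_featured_snippet columns) logged from desktop SERP checks; the file name and columns are illustrative, not the actual data source used here.

    ```python
    import csv
    import re

    # Keep only queries that start with a question word
    QUESTION_WORDS = re.compile(r"^(who|what|where|why|when|how)\b", re.IGNORECASE)

    questions = []
    with open("popular_searches.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if QUESTION_WORDS.match(row["keyword"].strip()):
                questions.append(row)

    # Share of question searches that showed a Featured Snippet on desktop
    with_snippet = sum(1 for row in questions if row["has_featured_snippet"] == "1")
    print(f"{len(questions)} question searches, "
          f"{with_snippet / len(questions):.0%} with a Featured Snippet")
    ```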

    The overall set of popular questions was dominated by "What?" and "How?" phrases:

    Given the prevalence of "How to?" questions, I've broken them out in this chart. The purple bars show how many of those searches generated Featured Snippets. "How to?" questions were especially likely to display a Featured Snippet, with other question types displaying them less than half of the time.

    Of the roughly 5,300 searches in the full data set that had Featured Snippets, those snippets broke down into four types, as follows:

    Text snippets — paragraph-based answers like the one at the top of this post — accounted for roughly two-thirds of all the Featured Snippets in our original data set. List snippets accounted for just under one-third — these are bullet lists, like this one for "How to draw a dinosaur?":

    Step 1 – Draw a small oval. Step 5 – Dinosaur! It's as easy as that.

    Table snippets made up under 2% of the Featured Snippets in our starting data set. These snippets contain a small amount of tabular data, like this search for "What generation am I?":

    If you throw your money recklessly at an avocado toast habit instead of buying a home, you're probably a millennial (sorry, content marketing joke).

    Finally, video snippets are a special type of Featured Snippet with a large video thumbnail and a direct link (dominated by YouTube). Here's one for "Who is the spiciest memelord?":

    I'm honestly not sure what commentary I can add to that result. Since there's currently no way for a video to appear on Google Home, we excluded video snippets from the rest of the study.

    Google has also been testing some hybrid Featured Snippets. In some cases, for example, they attempt to extract a specific answer from the text, like this answer for "When was 1984 written?" (Hint: the answer is not 1984):

    For the purposes of this study, we treated these hybrids as text snippets. Given the concise answer at the top, these hybrids are well-suited to voice results.

    From the 5.3K questions with snippets, I selected 1,000, excluding video but intentionally including a disproportionate number of list and table types (to better see if and how those translated into voice).
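
    To make that over-sampling of list and table snippets concrete, here's a minimal stratified-sampling sketch; the quota numbers are made up for illustration and are not the split used in this study.

    ```python
    import random

    def stratified_sample(rows, quotas, seed=42):
        """Draw a fixed number of keywords per snippet type.

        rows: list of dicts with a "snippet_type" key ("text", "list", "table").
        quotas: e.g. {"text": 550, "list": 300, "table": 150} -- illustrative only.
        """
        random.seed(seed)
        sample = []
        for snippet_type, n in quotas.items():
            pool = [r for r in rows if r["snippet_type"] == snippet_type]
            sample.extend(random.sample(pool, min(n, len(pool))))
        return sample
    ```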

    Why only 1,000? Because, unlike desktop searches, there's no easy way to do this. Over the course of a few days, I had to run all of these voice searches manually on Google Home. It's possible that I went temporarily insane. At one point, I saw a spider on my Google Home staring back at me. Fearing that I was hallucinating, I took a picture and posted it on Twitter:

    I was assured that the spider was, in fact, not a figment of my imagination. I'm still not sure about the half-hour when the spider sang me selections from the Hamilton soundtrack.

    From snippets to voice answers

    So, how many of the 1,000 searches produced voice answers? The short answer is: 71%. Diving deeper, it turns out that this percentage depends strongly on the type of snippet:

    Text snippets in our 1K data set produced voice answers 87% of the time. List snippets dropped to just under half, and table snippets generated voice answers only one-third of the time. This makes sense — long lists and most tables are simply harder to translate into voice.
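
    The per-type rates above come down to simple counting. A minimal sketch, assuming each manual Google Home test was logged as a dict with snippet_type and got_voice_answer fields (hypothetical field names):

    ```python
    from collections import Counter

    def voice_rate_by_type(results):
        """Return the share of searches that produced a voice answer, per snippet type."""
        totals, hits = Counter(), Counter()
        for r in results:
            totals[r["snippet_type"]] += 1
            if r["got_voice_answer"]:
                hits[r["snippet_type"]] += 1
        return {t: hits[t] / totals[t] for t in totals}

    # Example:
    # voice_rate_by_type([{"snippet_type": "text", "got_voice_answer": True}, ...])
    # -> roughly {"text": 0.87, "list": 0.49, "table": 0.33} for results like ours
    ```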

    In the case of tables, some of these results came from different sites or were in a different format. In other words, the search generated both a Featured Snippet and a voice answer, but the voice answer was of a different type (text, for example) and attributed to a different source. Only 20% of Featured Snippets in table format generated voice answers that came from the same source.

    From an SEO perspective, text snippets are likely to produce a voice answer almost nine times out of ten. Optimizing for text/paragraph snippets is a good starting point for ranking in voice search and should generally be a win-win across devices.

    Special: Knowledge Graph

    What about the Featured Snippets that didn't generate voice answers? It turns out there's quite a variety of exceptions in play. One exception was answers that came from the Knowledge Graph on Google Home, with no attribution. For example, the question "What is the nuclear option?" produces this Featured Snippet (for me, at least) on desktop:

    On Google Home, though, I get an unattributed answer that appears to come from the Knowledge Graph:

    It's unclear why Google has chosen one over the other for voice in this situation. Across the 1,000-keyword set, there were about 30 keywords where something similar happened.

    Special: Device help

    Google Home appears to interpret some searches as device-specific help. For example, "How to change your name?" returns desktop results about legally changing your name as a person. On Google Home, I get the following:

    Other searches from our list that triggered device help include:

    • How to contact Google?
    • How to send a fax online?
    • What are you up to?

    Special: Easter eggs

    Google Home has some Easter eggs that appear to be unique to voice search. One of my personal favorites — the question "What is best in life?" — generates the following:

    Here are a few of the other Easter eggs in our 1,000-phrase data set:

    • How many letters are in the alphabet?
    • What are your strengths?
    • What came first, the chicken or the egg?
    • What generation am I?
    • What is the meaning of life?
    • What would you do for a Klondike bar?
    • Where do babies come from?
    • Where in the world is Carmen Sandiego?
    • Where's my iPhone?
    • Where's Waldo?
    • Who's your daddy?

    Easter eggs are a little less predictable than device help. In most cases, though, both are rare and shouldn't dissuade you from trying to rank well for Featured Snippets and voice answers.

    Special: General confusion

    In a number of cases, Google simply didn't understand the question or couldn't answer the exact question asked. For example, I could not get Google to understand the question "What does MAGA mean?" The answer I got back (maybe it's my Midwestern accent?) was:

    On second thought, maybe that isn’t entirely inaccurate.

    One interesting situation is when Google decides to answer a slightly different question. On desktop, if you search for "How to become a vampire?", you may see the following Featured Snippet:

    On Google Home, I'm asked to clarify my intent:

    I expect both of these cases to improve over time, as voice recognition continues to advance and Google gets better at surfacing answers.

    Special: Recipe results

    In April, Google launched a new set of recipe functions across search and Google Home. Many "How to?" questions related to cooking now generate something like this (the question I asked was "How to bake chicken breast?"):

    You can opt to find a recipe on Google search and send it to your Google Home, or Google can simply pick a recipe for you. Either way, it will guide you through step-by-step instructions.

    Special: Health problems

    A half-dozen or so health questions, from general queries to specific illnesses, generated results like the following. This one is for the question "Why do we sneeze?":

    This has no clear connection to desktop search results, and I'm not sure whether it's a sign of future, expanded functionality. It seems to be of limited use right now.

    Special: WikiHow

    A number of "How to?" questions triggered a special response. For example, if I ask Google Home "How to write a press release?" I get back:

    If I say "yes," I'm taken straight to a wikiHow assistant that uses a different voice. The wikiHow answers are much longer than text-based Featured Snippets.

    How do we adapt?

    Voice search and voice appliances (including Google Assistant and Google Home) are evolving rapidly right now, and it's difficult to know where any of this will be in the next couple of years. From an SEO perspective, I don't think it makes sense to drop everything to invest in voice, but I do believe we've reached a point where some forward momentum is prudent.

    First, I'd suggest simply being aware of how your industry and your major keywords/questions "appear" on Google Home (or Google Assistant on your mobile phone). Consider the recipe situation above — for 99%+ of the people reading this article, it's a novelty. If you're in the recipe space, though, it's game-changing, and it's likely a sign of more to come.

    Second, I feel strongly that Featured Snippets are a win-win right now. Almost 90% of the text-only Featured Snippets we tracked produced a voice answer. These snippets are also prominent in desktop and mobile searches. Featured Snippets are a good starting point for understanding the voice ecosystem and establishing your foothold.

    Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!


    Location Data + Reviews: The 1–2 Punch of Local SEO

    The following is among the most insightful comments on the Local Search Ranking Factors 2017 survey:

    “If I could drive home one topic in 2017 for local business owners, it would surround everything relating to reviews. This would include rating, consumer sentiment, velocity, authenticity, and owner responses, both on third-party platforms and native website reviews/testimonials pages. The influence of reviews is enormous; I have come to see them as almost as powerful as the NAP on your citations. NAP must be accurate for rankings and consumer direction, but reviews sell.”

    I’d like to take a few moments here to dive deeper into that list of review elements. It’s my hope that this post is one you can take to your clients, team or boss to urge creative and financial allocations for a review management campaign that reflects the central importance of this special form of marketing.

    Ratings: At-a-glance consumer impressions and impactful rankings filter

    Whether they’re stars or circles, the majority of rating icons send a 1–5 point signal to consumers that can be instantly understood. This symbol system has been around since at least the 1820s; it’s deeply ingrained in all our brains as a judgement of value.

    So, when a modern Internet user is making a snap decision, like where to grab a taco, the food truck with 5 Yelp stars is automatically going to look more appealing than the one with only 2. Ratings can also catch the eye when Schema (or Google serendipity) causes them to appear within organic SERPs or knowledge panels.
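
    If you're adding that markup yourself, the schema.org AggregateRating structure is what typically puts those stars in play. A minimal sketch that emits a JSON-LD block; the business name and numbers are placeholders, and the structure follows schema.org rather than any specific platform's requirements:

    ```python
    import json

    # Placeholder business and rating values; the AggregateRating shape follows schema.org
    markup = {
        "@context": "https://schema.org",
        "@type": "Restaurant",
        "name": "Example Taqueria",
        "aggregateRating": {
            "@type": "AggregateRating",
            "ratingValue": 4.6,
            "reviewCount": 128,
        },
    }

    # Emit a JSON-LD block to embed in the page's HTML
    print('<script type="application/ld+json">')
    print(json.dumps(markup, indent=2))
    print("</script>")
    ```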

    All of the above is well-understood, but while the exact impact of high star ratings on local pack rankings has long been speculative (it’s only factor #24 in this year’s Local Search Ranking Factors), we may have just reached a new day with Google. The ability to filter local finder results by rating has been around for some time, but in May, Google began testing the application of a “highly rated” snippet on hotel rankings in the local packs. Meanwhile, searches with the format of “best X in city” (e.g. best burrito in Dallas) appear to be defaulting to local results made up of businesses that have earned a minimum average of 4 stars. It’s early days yet, but totally safe for us to assume that Google is paying increased attention to numeric ratings as indicators of relevance.

    Because we’re now reaching the point from which we can comfortably speculate that high ratings will tend to correlate more frequently with high local rankings, it’s imperative for local businesses to view low ratings as the serious impediments to growth that they truly are. Big brands, in particular, must stop ignoring low star ratings, or they may find themselves not only having to close multiple store locations, but also losing the rankings competition for their open stores when smaller competitors surpass their standards of cleanliness, quality, and employee behavior.

    Consumer sentiment: The local business story your customers are writing for you

    Here is a randomly chosen Google 3-pack result when searching just for “tacos” in a small city in the San Francisco Bay Area:

    taco3pack.jpg

    We’ve just been talking about ratings, and you can look at a result like this to get that instant gut feeling about the 4-star-rated eateries vs. the 2-star place. Now, let’s open the book on business #3 and see precisely what kind of story its consumers are writing. This is the first step towards doing a professional review audit for any business whose troubling reviews may point to future closure if problems aren’t fixed. A full audit would look at all relevant review platforms, but we’ll be brief here and just look at Google and Yelp and sort negative sentiments by type:

    tacoaudit.jpg
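
    A tally like the one above doesn't require fancy tooling. Here's a minimal sketch that buckets low-star review text by theme using simple keyword matching; the theme names and keywords are illustrative, and a real audit would refine them per business and per platform:

    ```python
    from collections import Counter

    # Hypothetical theme keywords for bucketing complaints found in 1-2 star reviews
    THEMES = {
        "slow service": ["slow", "wait", "forever", "line"],
        "wrong orders": ["wrong order", "missing", "incorrect"],
        "rude staff": ["rude", "unfriendly", "attitude"],
        "cleanliness": ["dirty", "filthy", "gross"],
    }

    def audit_negative_reviews(reviews):
        """Count theme mentions across a list of review texts."""
        counts = Counter()
        for text in reviews:
            text = text.lower()
            for theme, keywords in THEMES.items():
                if any(k in text for k in keywords):
                    counts[theme] += 1
        return counts.most_common()
    ```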

    It’s easy to ding fast food chains. Their business model isn’t commonly associated with fine dining or the kind of high wages that tend to promote employee excellence. In some ways, I think of them as extreme examples. Yet, they serve as good teaching models for how even the most modest-quality offerings create certain expectations in the minds of consumers, and when those basic expectations aren’t met, it’s enough of a story for consumers to share in the form of reviews.

    This particular restaurant location has an obvious problem with slow service, orders being filled incorrectly, and employees who have not been trained to represent the brand in a knowledgeable, friendly, or accessible manner. Maybe a business you are auditing has pain points surrounding outdated fixtures or low standards of cleanliness.

    Whatever the case, when the incoming consumer turns to the review world, their eyes scan the story as it scrolls down their screen. Repeat mentions of a particular negative issue can create enough of a theme to turn the potential customer away. One survey says only 13% of people will choose a business that has wound up with a 1–2 star rating based on poor reviews. Who can afford to let the other 87% of consumers go elsewhere?

    There are 20 restaurants showing up in Google’s local finder for my “tacos” search, highlighted above. Taco Bell is managing to hold the #3 spot in the local pack right now, perhaps due to brand authority. My question is, what happens next, particularly if Google is going to amplify ratings and review sentiment in the overall local ranking mix? Will this chain location continue to beat out 4-star restaurants with 100+ positive reviews, or will it slip down as consumers continue to chronicle specific and unresolved issues?

    No third-party brand controls Google, but your brand can open the book right now and make maximum use of the story your customers are constantly publishing — for free. By taking review insights as real and representative of all the customers who don’t speak up, and by actively addressing repeatedly cited issues, you could be making one of the smartest decisions in your company’s history.

    Velocity/recency: Just enough of a timely good thing

    This is one of the easiest aspects of review management to teach clients. You can sum it up in one sentence: don’t get too many reviews at once on any given platform, but do get enough reviews on an ongoing basis to avoid looking like you’ve gone out of business.

    For a little more background on the first part of that statement, watch Mary Bowling describing in this LocalU video how she audited a law firm that went from zero to thirty 5-star reviews within a single month. Sudden gluts of reviews like this not only look odd to alert customers, but they can trip review platform filters, resulting in removal. Remember, reviews are a business lifetime effort, not a race. Get a few this month, a few next month, and a few the month after that. Keep going.

    The second half of the review timing paradigm relates to not running out of steam in your acquisition campaigns. One survey found that 73% of consumers don’t believe that reviews that are older than 3 months are still relevant to them, yet you will frequently encounter businesses that haven’t earned a new review in over a year. It makes you wonder if the place is still in business, or if it’s in business but is so unimpressive that no one is bothering to review it.

    While I’d argue that review recency may be more important in review-oriented industries (like restaurants) vs. those that aren’t quite as actively reviewed (like septic system servicing), the idea here is similar to that of velocity, in that you want to keep things going. Don’t run a big review acquisition campaign in January and then forget about outreach for the rest of the year. A moderate, steady pace of acquisition is ideal.
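
    If you want to keep an eye on pacing programmatically, a minimal sketch like the following can flag both problems from a list of review dates; the burst and staleness thresholds are illustrative choices, not platform rules:

    ```python
    from datetime import date

    def review_pacing(review_dates, today=None, burst_limit=10, stale_days=90):
        """Flag review bursts (too many in one month) and staleness (nothing recent).

        review_dates: list of datetime.date objects for a single platform.
        burst_limit and stale_days are illustrative thresholds.
        """
        today = today or date.today()
        per_month = {}
        for d in review_dates:
            key = (d.year, d.month)
            per_month[key] = per_month.get(key, 0) + 1
        burst_months = [m for m, n in per_month.items() if n > burst_limit]
        days_since_last = (today - max(review_dates)).days if review_dates else None
        return {
            "burst_months": burst_months,
            "stale": days_since_last is None or days_since_last > stale_days,
        }

    # Example: review_pacing([date(2017, 5, 2), date(2017, 5, 3)], today=date(2017, 9, 1))
    # -> {"burst_months": [], "stale": True}
    ```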

    Authenticity: Honesty is the only honest policy

    For me, this is one of the most prickly and interesting aspects of the review world. Three opposing forces meet on this playing field: business ethics, business education, and the temptations engendered by the obvious limitations of review platforms to police themselves.

    I recently began a basic audit of a family-owned restaurant for a friend of a friend. Within minutes, I realized that the family had been reviewing their own restaurant on Yelp (a glaring violation of Yelp’s policy). I felt sorry to see this, but being acquainted with the people involved (and knowing them to be quite nice!), I highly doubted they had done this out of some dark impulse to deceive the public. Rather, my guess was that they may have thought they were “getting the ball rolling” for their new business, hoping to inspire real reviews. My gut feeling was that they simply lacked the necessary education to understand that they were being dishonest with their community and how this could lead to them being publicly shamed by Yelp, if caught.

    In such a scenario, there is definitely opportunity for the marketer to offer the necessary education to describe the risks involved in tying a brand to misleading practices, highlighting how vital it is to build trust within the local community. Fake positive reviews aren’t building anything real on which a company can stake its future. Ethical business owners will catch on when you explain this in honest terms and can then begin marketing themselves in smarter ways.

    But then there’s the other side. Mike Blumenthal recently wrote of his discovery of the largest review spam network he’d ever encountered and there’s simply no way to confuse organized, global review spam with a busy small business making a wrong, novice move. Real temptation resides in this scenario, because, as Blumenthal states:

    “Review spam at this scale, unencumbered by any Google enforcement, calls into question every review that Google has. Fake business listings are bad, but businesses with 20, or 50, or 150 fake reviews are worse. They deceive the searcher and the buying public and they stain every real review, every honest business, and Google.”

    When a platform like Google makes it easy to “get away with” deception, companies lacking ethics will take advantage of the opportunity. All we can do, as marketers, is to offer the education that helps ethical businesses make honest choices. We can simply pose the question:

    Is it better to fake your business’ success or to actually achieve success?

    On a final note, authenticity is a two-way street in the review world. When spammers target good businesses with fake, negative reviews, this also presents a totally false picture to the consumer public. I highly recommend reading about Whitespark’s recent successes in getting fake Google reviews removed. No guarantees here, but excellent strategic advice.

    Owner responses: Your contributions to the consumer story

    In previous Moz blog posts, I’ve highlighted the five types of Google My Business reviews and how to respond to them, and I’ve diagrammed a real-world example of how a terrible owner response can make a bad situation even worse. If the world of owner responses is somewhat new to you, I hope you’ll take a gander at both of those. Here, I’d like to focus on a specific aspect of owner responses, as it relates to the story reviews are telling about your business.

    We’ve discussed above the tremendous insight consumer sentiment can provide into a company’s pain points. Negative reviews can be a roadmap to resolving repeatedly cited problems. They are inherently valuable in this regard, and by dint of their high visibility, they carry the inherent opportunity for the business owner to make a very public showing of accountability in the form of owner responses. A business can state all it wants on its website that it offers lightning-quick service, but when reviews complain of 20-minute waits for fast food, which source do you think the average consumer will trust?

    The truth is, the hypothetical restaurant has a problem. They’re not going to be able to resolve slow service overnight. Some issues are going to require real planning and real changes to overcome. So what can the owner do in this case?

    1. Whistle past the graveyard, claiming everything is actually fine now, guaranteeing further disappointed expectations and further negative reviews resulting therefrom?
    2. Be gutsy and honest, sharing exactly what realizations the business has had due to the negative reviews, what the obstacles are to fixing the problems, and what solutions the business is implementing to do their best to overcome those obstacles?

    Let’s look at this in living color:

    whistlinggutsy.jpg

    In yellow, the owner response is basically telling the story that the business is ignoring a legitimate complaint, and frankly, couldn’t care less. In blue, the owner has jumped right into the storyline, having the guts to take the blame, apologize, explain what happened and promise a fix — not an instant one, but a fix on the way. In the end, the narrative is going to go on with or without input from the owner, but in the blue example, the owner is taking the steering wheel into his own hands for at least part of the road trip. That initiative could save not just his franchise location, but the brand at large. Just ask Florian Huebner:

    “Over the course of 2013 customers of Yi-Ko Holding’s restaurants increasingly left public online reviews about “broken and dirty furniture,” “sleeping and indifferent staff,” and “mice running around in the kitchen.” Per the nature of a franchise system, to the typical consumer it was unclear that these problems were limited to this individual franchisee. Consequently, the Burger King brand as a whole began to deteriorate and customers reduced their consumption across all locations, leading to revenue declines of up to 33% for some other franchisees.”

    Positive news for small businesses working like mad to compete: You have more agility to put initiatives into quick action than the big brands do. Companies with 1,000 locations may let negative reviews go unanswered because they lack a clear policy or hierarchy for owner responses, but smaller enterprises can literally turn this around in a day. Just sit down at the nearest computer, claim your review profiles, and jump into the story with the goal of hearing, impressing, and keeping every single customer you can.

    Big brands: The challenge for you is larger, by dint of your size, but you’ve also likely got the infrastructure to make this task no problem. You just have to assign the right people to the job, with thoughtful guidelines for ensuring your brand is being represented in a winning way.

    NAP and reviews: The 1–2 punch combo every local business must practice

    When traveling salesman Duncan Hines first published his 1935 review guide Adventures in Good Eating, he was pioneering what we think of today as local SEO. Here is my color-coded version of his review of the business that would one day become KFC. It should look strangely familiar to every one of you who has ever tackled citation management:

    duncanhines.jpg

    No phone number on this “citation,” of course, but then again telephones were quite a luxury in 1935. Barring that element, this simple and historic review has the core earmarks of a modern local business listing. It has location data and review data; it’s the 1–2 punch combo every local business still needs to get right today. Without the NAP, the business can’t be found. Without the sentiment, the business gives little reason to be chosen.

    Are you heading to a team meeting today? Preparing to chat with an incoming client? Make the winning combo as simple as possible, like this:

    1. We’ve got to manage our local business listings so that they’re accessible, accurate, and complete. We can automate much of this (check out Moz Local) so that we get found.
    2. We’ve got to breathe life into the listings so that they act as interactive advertisements, helping us get chosen. We can do this by earning reviews and responding to them. This is our company heartbeat — our story.

    From Duncan Hines to the digital age, there may be nothing new under the sun in marketing, but when you spend year after year looking at the sadly neglected review portions of local business listings, you realize you may have something to teach that is new news to somebody. So go for it — communicate this stuff, and good luck at your next big meeting!

    Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

    Posted by

    Among the most insightful comments on the Local Search Ranking Factors 2017 survey was this one:

    “If I could drive home one topic in 2017 for local business owners, it would surround everything relating to reviews. This would include rating, consumer sentiment, velocity, authenticity, and owner responses, both on third-party platforms and native website reviews/testimonials pages. The influence of reviews is enormous; I have come to see them as almost as powerful as the NAP on your citations. NAP must be accurate for rankings and consumer direction, but reviews sell.”

    I’d like to take a few moments here to dive deeper into that list of review elements. It’s my hope that this post is one you can take to your clients, team or boss to urge creative and financial allocations for a review management campaign that reflects the central importance of this special form of marketing.

    Ratings: At-a-glance consumer impressions and impactful rankings filter

    Whether they’re stars or circles, the majority of rating icons send a 1–5 point signal to consumers that can be instantly understood. This symbol system has been around since at least the 1820s; it’s deeply ingrained in all our brains as a judgement of value.

    So, when a modern Internet user is making a snap decision, like where to grab a taco, the food truck with 5 Yelp stars is automatically going to look more appealing than the one with only 2. Ratings can also catch the eye when Schema (or Google serendipity) causes them to appear within organic SERPs or knowledge panels.

    All of the above is well-understood, but while the exact impact of high star ratings on local pack rankings has long been speculative (it’s only factor #24 in this year’s Local Search Ranking Factors), we may have just reached a new day with Google. The ability to filter local finder results by rating has been around for some time, but in May, Google began testing the application of a “highly rated” snippet on hotel rankings in the local packs. Meanwhile, searches with the format of “best X in city” (e.g. best burrito in Dallas) appear to be defaulting to local results made up of businesses that have earned a minimum average of 4 stars. It’s early days yet, but totally safe for us to assume that Google is paying increased attention to numeric ratings as indicators of relevance.

    Because we can now comfortably speculate that high ratings will correlate more and more frequently with high local rankings, it’s imperative for local businesses to view low ratings as the serious impediments to growth that they truly are. Big brands, in particular, must stop ignoring low star ratings, or they may find themselves not only having to close multiple store locations, but also losing the rankings battle for their open stores as smaller competitors surpass their standards of cleanliness, quality, and employee behavior.

    Consumer sentiment: The local business story your customers are writing for you

    Here is a randomly chosen Google 3-pack result when searching just for “tacos” in a small city in the San Francisco Bay Area:

    taco3pack.jpg

    We’ve just been talking about ratings, and you can look at a result like this to get that instant gut feeling about the 4-star-rated eateries vs. the 2-star place. Now, let’s open the book on business #3 and see precisely what kind of story its consumers are writing. This is the first step towards doing a professional review audit for any business whose troubling reviews may point to future closure if problems aren’t fixed. A full audit would look at all relevant review platforms, but we’ll be brief here and just look at Google and Yelp and sort negative sentiments by type:

    tacoaudit.jpg
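
    If you’re auditing reviews at any scale, a quick way to build a tally like the one above is to export the review text and count mentions of recurring complaint themes. Here’s a minimal sketch in Python, assuming you’ve already pulled reviews into a list of strings; the example reviews and theme keywords are hypothetical and would be tailored to the business being audited.

        from collections import Counter

        # Hypothetical exported review text (in practice, gathered from Google, Yelp, etc.)
        reviews = [
            "Waited 20 minutes for a burrito and the order was still wrong.",
            "Staff seemed untrained and the tables were dirty.",
            "Slow service, wrong order, won't be back.",
        ]

        # Map complaint themes to keywords worth counting; adjust these per business
        themes = {
            "slow service": ["slow", "wait", "waited", "20 minutes"],
            "order accuracy": ["wrong order", "order was wrong", "missing item"],
            "staff/training": ["rude", "untrained", "indifferent"],
            "cleanliness": ["dirty", "filthy", "mice"],
        }

        # Count how many reviews mention each theme at least once
        tally = Counter()
        for review in reviews:
            text = review.lower()
            for theme, keywords in themes.items():
                if any(keyword in text for keyword in keywords):
                    tally[theme] += 1

        for theme, count in tally.most_common():
            print(f"{theme}: {count} review(s)")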

    It’s easy to ding fast food chains. Their business model isn’t commonly associated with fine dining or the kind of high wages that tend to promote employee excellence. In some ways, I think of them as extreme examples. Yet, they serve as good teaching models for how even the most modest-quality offerings create certain expectations in the minds of consumers, and when those basic expectations aren’t met, it’s enough of a story for consumers to share in the form of reviews.

    This particular restaurant location has an obvious problem with slow service, orders being filled incorrectly, and employees who have not been trained to represent the brand in a knowledgeable, friendly, or accessible manner. Maybe a business you are auditing has pain points surrounding outdated fixtures or low standards of cleanliness.

    Whatever the case, when the incoming consumer turns to the review world, their eyes scan the story as it scrolls down their screen. Repeat mentions of a particular negative issue can create enough of a theme to turn the potential customer away. One survey says only 13% of people will choose a business that has wound up with a 1–2 star rating based on poor reviews. Who can afford to let the other 87% of consumers go elsewhere?

    There are 20 restaurants showing up in Google’s local finder for my “tacos” search, highlighted above. Taco Bell is managing to hold the #3 spot in the local pack right now, perhaps due to brand authority. My question is, what happens next, particularly if Google is going to amplify ratings and review sentiment in the overall local ranking mix? Will this chain location continue to beat out 4-star restaurants with 100+ positive reviews, or will it slip down as consumers continue to chronicle specific and unresolved issues?

    No third-party brand controls Google, but your brand can open the book right now and make maximum use of the story your customers are constantly publishing — for free. By taking review insights as real and representative of all the customers who don’t speak up, and by actively addressing repeatedly cited issues, you could be making one of the smartest decisions in your company’s history.

    Velocity/recency: Just enough of a timely good thing

    This is one of the easiest aspects of review management to teach clients. You can sum it up in one sentence: don’t get too many reviews at once on any given platform, but do get enough reviews on an ongoing basis to avoid looking like you’ve gone out of business.

    For a little more background on the first part of that statement, watch Mary Bowling describing in this LocalU video how she audited a law firm that went from zero to thirty 5-star reviews within a single month. Sudden gluts of reviews like this not only look odd to alert customers, but they can trip review platform filters, resulting in removal. Remember, reviews are a business lifetime effort, not a race. Get a few this month, a few next month, and a few the month after that. Keep going.

    The second half of the review timing paradigm relates to not running out of steam in your acquisition campaigns. One survey found that 73% of consumers don’t believe that reviews that are older than 3 months are still relevant to them, yet you will frequently encounter businesses that haven’t earned a new review in over a year. It makes you wonder if the place is still in business, or if it’s in business but is so unimpressive that no one is bothering to review it.

    While I’d argue that review recency may be more important in review-oriented industries (like restaurants) vs. those that aren’t quite as actively reviewed (like septic system servicing), the idea here is similar to that of velocity, in that you want to keep things going. Don’t run a big review acquisition campaign in January and then forget about outreach for the rest of the year. A moderate, steady pace of acquisition is ideal.
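
    If you want to keep an eye on both halves of that timing paradigm, review timestamps are all you need: check each month for sudden gluts and check how long it’s been since the newest review arrived. A minimal sketch, assuming you’ve exported review dates from your platforms; the 10-per-month and 90-day thresholds below are illustrative placeholders, not official platform limits.

        from collections import Counter
        from datetime import date, timedelta

        # Hypothetical exported review dates for one platform
        review_dates = [
            date(2017, 1, 3), date(2017, 1, 9), date(2017, 1, 11),
            date(2017, 4, 2), date(2017, 4, 20),
        ]

        GLUT_THRESHOLD = 10               # illustrative: this many reviews in one month may look unnatural
        STALE_AFTER = timedelta(days=90)  # illustrative: echoes the "older than 3 months" survey finding

        # Flag months with a suspicious burst of reviews
        per_month = Counter((d.year, d.month) for d in review_dates)
        for (year, month), count in sorted(per_month.items()):
            if count >= GLUT_THRESHOLD:
                print(f"Possible glut: {count} reviews in {year}-{month:02d}")

        # Flag a profile that has gone quiet for too long
        most_recent = max(review_dates)
        if date.today() - most_recent > STALE_AFTER:
            print(f"Recency warning: newest review is from {most_recent}; time for fresh outreach")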

    Authenticity: Honesty is the only honest policy

    For me, this is one of the most prickly and interesting aspects of the review world. Three opposing forces meet on this playing field: business ethics, business education, and the temptations engendered by the obvious limitations of review platforms to police themselves.

    I recently began a basic audit of a family-owned restaurant for a friend of a friend. Within minutes, I realized that the family had been reviewing their own restaurant on Yelp (a glaring violation of Yelp’s policy). I felt sorry to see this, but being acquainted with the people involved (and knowing them to be quite nice!), I highly doubted they had done this out of some dark impulse to deceive the public. Rather, my guess was that they may have thought they were “getting the ball rolling” for their new business, hoping to inspire real reviews. My gut feeling was that they simply lacked the necessary education to understand that they were being dishonest with their community and how this could lead to them being publicly shamed by Yelp, if caught.

    In such a scenario, there is definitely opportunity for the marketer to offer the necessary education to describe the risks involved in tying a brand to misleading practices, highlighting how vital it is to build trust within the local community. Fake positive reviews aren’t building anything real on which a company can stake its future. Ethical business owners will catch on when you explain this in honest terms and can then begin marketing themselves in smarter ways.

    But then there’s the other side. Mike Blumenthal recently wrote of his discovery of the largest review spam network he’d ever encountered and there’s simply no way to confuse organized, global review spam with a busy small business making a wrong, novice move. Real temptation resides in this scenario, because, as Blumenthal states:

    “Review spam at this scale, unencumbered by any Google enforcement, calls into question every review that Google has. Fake business listings are bad, but businesses with 20, or 50, or 150 fake reviews are worse. They deceive the searcher and the buying public and they stain every real review, every honest business, and Google.”

    When a platform like Google makes it easy to “get away with” deception, companies lacking ethics will take advantage of the opportunity. All we can do, as marketers, is to offer the education that helps ethical businesses make honest choices. We can simply pose the question:

    Is it better to fake your business’ success or to actually achieve success?

    On a final note, authenticity is a two-way street in the review world. When spammers target good businesses with fake, negative reviews, this also presents a totally false picture to the consumer public. I highly recommend reading about Whitespark’s recent successes in getting fake Google reviews removed. No guarantees here, but excellent strategic advice.

    Owner responses: Your contributions to the consumer story

    In previous Moz blog posts, I’ve highlighted the five types of Google My Business reviews and how to respond to them, and I’ve diagrammed a real-world example of how a terrible owner response can make a bad situation even worse. If the world of owner responses is somewhat new to you, I hope you’ll take a gander at both of those. Here, I’d like to focus on a specific aspect of owner responses, as it relates to the story reviews are telling about your business.

    We’ve discussed above the tremendous insight consumer sentiment can provide into a company’s pain points. Negative reviews can be a roadmap to resolving repeatedly cited problems. They are inherently valuable in this regard, and by dint of their high visibility, they carry the inherent opportunity for the business owner to make a very public showing of accountability in the form of owner responses. A business can state all it wants on its website that it offers lightning-quick service, but when reviews complain of 20-minute waits for fast food, which source do you think the average consumer will trust?

    The truth is, the hypothetical restaurant has a problem. They’re not going to be able to resolve slow service overnight. Some issues are going to require real planning and real changes to overcome. So what can the owner do in this case?

    1. Whistle past the graveyard, claiming everything is actually fine now, guaranteeing further disappointed expectations and further negative reviews resulting therefrom?
    2. Be gutsy and honest, sharing exactly what realizations the business has had due to the negative reviews, what the obstacles are to fixing the problems, and what solutions the business is implementing to do their best to overcome those obstacles?

    Let’s look at this in living color:

    whistlinggutsy.jpg

    In yellow, the owner response is basically telling the story that the business is ignoring a legitimate complaint, and frankly, couldn’t care less. In blue, the owner has jumped right into the storyline, having the guts to take the blame, apologize, explain what happened and promise a fix — not an instant one, but a fix on the way. In the end, the narrative is going to go on with or without input from the owner, but in the blue example, the owner is taking the steering wheel into his own hands for at least part of the road trip. That initiative could save not just his franchise location, but the brand at large. Just ask Florian Huebner:

    “Over the course of 2013 customers of Yi-Ko Holding’s restaurants increasingly left public online reviews about “broken and dirty furniture,” “sleeping and indifferent staff,” and “mice running around in the kitchen.” Per the nature of a franchise system, to the typical consumer it was unclear that these problems were limited to this individual franchisee. Consequently, the Burger King brand as a whole began to deteriorate and customers reduced their consumption across all locations, leading to revenue declines of up to 33% for some other franchisees.”

    Positive news for small businesses working like mad to compete: You have more agility to put initiatives into quick action than the big brands do. Companies with 1,000 locations may let negative reviews go unanswered because they lack a clear policy or hierarchy for owner responses, but smaller enterprises can literally turn this around in a day. Just sit down at the nearest computer, claim your review profiles, and jump into the story with the goal of hearing, impressing, and keeping every single customer you can.

    Big brands: The challenge for you is larger, by dint of your size, but you’ve also likely got the infrastructure to make this task no problem. You just have to assign the right people to the job, with thoughtful guidelines for ensuring your brand is being represented in a winning way.

    NAP and reviews: The 1–2 punch combo every local business must practice

    When traveling salesman Duncan Hines first published his 1935 review guide Adventures in Good Eating, he was pioneering what we think of today as local SEO. Here is my color-coded version of his review of the business that would one day become KFC. It should look strangely familiar to every one of you who has ever tackled citation management:

    duncanhines.jpg

    No phone number on this “citation,” of course, but then again telephones were quite a luxury in 1935. Barring that element, this simple and historic review has the core earmarks of a modern local business listing. It has location data and review data; it’s the 1–2 punch combo every local business still needs to get right today. Without the NAP, the business can’t be found. Without the sentiment, the business gives little reason to be chosen.

    Are you heading to a team meeting today? Preparing to chat with an incoming client? Make the winning combo as simple as possible, like this:

    1. We’ve got to manage our local business listings so that they’re accessible, accurate, and complete. We can automate much of this (check out Moz Local) so that we get found.
    2. We’ve got to breathe life into the listings so that they act as interactive advertisements, helping us get chosen. We can do this by earning reviews and responding to them. This is our company heartbeat — our story.

    From Duncan Hines to the digital age, there may be nothing new under the sun in marketing, but when you spend year after year looking at the sadly neglected review portions of local business listings, you realize you may have something to teach that is new news to somebody. So go for it — communicate this stuff, and good luck at your next big meeting!

    Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


    Blog Post Ideas: Maximize Your Reach with the Right Topics – Whiteboard Friday

    Posted by

    Blog post ideas

    Click on the whiteboard image above to open a high resolution version in a new tab!

    Video transcription

    Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week, we’re going to chat about blog post ideas, how to have great ones, how to make sure that the topics that you’re covering on your blog actually accomplish the goals that you want, and how to not run out of ideas as well.

    The goals of your blog

    So let’s start with the goals of a blog and then what an individual post needs to do, and then I’ll walk you through kind of six formats for coming up with great ideas for what to blog about. But generally speaking, you have created a blog, either on your company’s website or your personal website or for the project that you’re working on, because you want to:

    • Attract a certain audience, which is great.
    • Capture the attention and amplification, the sharing of certain types of influencers, so that you can grow that audience.
    • Rank highly in search engines. That’s not just necessarily a goal for the blog’s content itself. But one of the reasons that you started a blog is to grow the authority, the ranking signals, the ability to rank for the website as a whole, and the blog hopefully is helping with that.
    • Inspire some trust, some likeability, loyalty, and maybe even some evangelism from your readers.
    • Provide a reference point for your opinions. So if you are a writer, an author, a journalist, a contributor to all sorts of sources, a speaker, whatever it is, you’re trying to provide a home for your ideas and your content, potentially your opinions too.
    • Convert your audience to take an action. Then, finally, many times a blog is crafted with the idea that it is a first step in capturing an audience that will then take an action. That could be buying something from you, signing up for an email list, potentially taking a free trial of something, or some other action. A political blog might be about, “Call your Congressperson.” But those types of actions.

    What should an individual post do?

    From there, we get into an individual post. An individual post is supposed to help with these goals, but on its own it doesn’t do all of them, and it certainly doesn’t need to do more than one at a time, though it can hopefully do some. Generally speaking, a great blog post will do at least one of these four things, and hopefully two or even three.

    I. Help readers to accomplish a goal that they have.

    So if I’m trying to figure out which hybrid electric vehicle should I buy and I read a great blog post from someone who’s very, very knowledgeable in the field, and they have two or three recommendations to help me narrow down my search, that is wonderful. It helps me accomplish my goal of figuring out which hybrid car to buy. That accomplishment of goal, that helping of people hits a bunch of these very, very nicely.

    II. Designed to inform people and/or entertain them.

    So it doesn’t have to be purely informational. It doesn’t have to be purely entertainment, but some combination of those, or one of the two, about a particular topic. So you might be trying to make someone excited about something or give them knowledge around it. It may be knowledge that they didn’t previously know that they wanted, and they may not actually be trying to accomplish a goal, but they are interested in the information or interested in finding the humor.

    III. Inspiring some amplification and linking.

    So you’re trying to earn signals to your site that will help you rank in search engines, that will help you grow your audience, that will help you reach more influencers. Thus, inspiring that amplification behavior by creating content that is designed to be shared, designed to be referenced and linked to is another big goal.

    IV. Creating a more positive association with the brand.

    So you might have a post that doesn’t really do any of these things. Maybe it touches a little on informational or entertaining. But it is really about crafting a personal story, or sharing an experience that then draws the reader closer to you and creates that association of what we talked about up here — loyalty, trust, evangelism, likeability.

    6 paths to great blog topic ideas

    So knowing what our blog needs to do and what our individual posts are trying to do, what are some great ways that we can come up with the ideas, the actual topics that we should be covering? I have kind of six paths. These six paths actually cover almost everything you will read in every other article about how to come up with blog post ideas. But I think that’s what’s great. These frameworks will get you into the mindset that will lead you to the path that can give you an infinite number of blog post ideas.

    1. Are there any unanswered or poorly answered questions that are in your field, that your audience already has/is asking, and do you have a way to provide great answers to those?

    So that’s basically this process: I’m going to research my audience through a bunch of methodologies and come up with topics that I know I could cover. I could deliver something that would answer their preexisting questions, and I could come up with those through…

    • Surveys of my readers.
    • In-person meetings or emails or interviews.
    • Informal conversations just in passing around events, or if I’m interacting with members of my audience in any way, social settings.
    • Keyword research, especially questions.

    So you can use a tool like Moz’s Keyword Explorer (I think some of the other ones out there, like Ahrefs, have this as well) to filter by only questions. There are also free tools like Answer the Public, which many folks like, that show you what people are typing into Google, specifically in the form of questions, “Who? What? When? Where? Why? How? Do?” etc.
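
    If the tool you’re using doesn’t have a question filter built in, you can approximate one on any exported keyword list. A minimal sketch, assuming a plain list of exported phrases; the example keywords are hypothetical, and the question-word list simply mirrors the “Who? What? When? Where? Why? How? Do?” set above.

        # Hypothetical keyword phrases exported from any keyword research tool
        keywords = [
            "hydraulic doors",
            "how much do hydraulic doors cost",
            "remote work interview tips",
            "how do i sell myself in a remote interview",
            "best hydraulic door installers",
            "do hydraulic doors need maintenance",
        ]

        # First words that usually signal a question-style query
        QUESTION_WORDS = {"who", "what", "when", "where", "why", "how", "do", "does", "can", "is", "are"}

        # Keep only the phrases that start with a question word
        questions = [kw for kw in keywords if kw.lower().split()[0] in QUESTION_WORDS]

        for q in questions:
            print(q)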

    So I’m not just going to walk you through the ideas. I’m also going to challenge myself to give you some examples. So I’ve got two — one less challenging, one much more challenging. Two websites, both have blogs, and coming up with topic ideas based on this.

    So one is called Remoters. It’s remoters.net. It’s run by Aleyda Solis, who many of you in the SEO world might know. They talk about remote work, so people who are working remotely. It’s a content platform for them and a service for them. Then, the second one is a company, I think, called Schweiss Doors. They run hydraulicdoors.com. Very B2B. Very, very niche. Pretty challenging to come up with good blog topics, but I think we’ve got some.

    Remote Worker: I might say here, “You know what? One of the questions that’s asked very often by remote workers, but is not well-answered on the internet yet is: ‘How do I conduct myself in a remote interview and present myself as a remote worker in a way that I can be competitive with people who are actually, physically on premises and in the room? That is a big challenge. I feel like I’m always losing out to them. Remote workers, it seems, don’t get the benefits of being there in person.'” So a piece of content on how to sell yourself on a remote interview or as a remote worker could work great here.

    Hydraulic doors: One of the big things I see many people asking about online, in forums that actually rank well for these queries, is the cost and pricing of hydraulic doors. I think this is something that many companies are uncomfortable answering right on their sites. But if you can be transparent where no one else can, I think these Schweiss Doors guys have a shot at doing really well with that. So: how much do hydraulic doors cost versus alternatives? There you go.

    2. Do you have access to unique types of assets that other people don’t?

    That could be research. It could be data. It could be insights. It might be stories or narratives, experiences that can help you stand out in a topic area. This is a great way to come up with blog post content. So basically, the idea is you could say, “Gosh, for our quarterly internal report, we had to prepare some data on the state of the market. Actually, some of that data, if we got permission to share it, would be fascinating.”

    We can see through keyword research that people are talking about this or querying Google for it already. So we’re going to transform it into a piece of blog content, and we’re going to delight many, many people, except for maybe this guy. He seems unhappy about it. I don’t know what his problem is. We won’t worry about him. Wait. I can fix it. Look at that. So happy. Ignore that he kind of looks like the Joker now.

    We can get these through a bunch of methodologies:

    • Research, so statistical research, quantitative research.
    • Crowdsourcing. That could be through audiences that you’ve already got through email or Facebook or Twitter or LinkedIn.
    • Insider interviews, interviews with people on your sales team or your product team or your marketing team, people in your industry, buyers of yours.
    • Proprietary data, like what you’ve collected for your internal annual reports.
    • Curation of public data. So if there’s stuff out there on the web and it just needs to be publicly curated, you can figure out what that is. You can visit all those websites. You could use an extraction tool, or you could manually extract that data, or you could pay an intern to go extract that data for you, and then synthesize that in a useful way.
    • Multimedia talent. Maybe you have someone, like we happen to here at Moz, who has great talent with video production, or with audio production, or with design of visuals or photography, or whatever that might be in the multimedia realm that you could do.
    • Special access to people or information, or experiences that no one else does and you can present that.

    Those assets can become the topic of great content that can turn into really great blog posts and great post ideas.

    Remote Workers: They might say, “Well, gosh, we have access to data on the destinations people go and the budgets that they have around those destinations when they’re staying and working remotely, because of how our service interacts with them. Therefore, we can craft things like the most and least expensive places to work remotely on the planet,” which is very cool. That’s content that a lot of people are very interested in.

    Hydraulic doors: We can look at, “Hey, you know what? We actually have a visual overlay tool that helps an architect or a building owner visualize what it will look like if a hydraulic door were put into place. We can go use that in our downtime to show how notable locations in the city, or around the world, might look with hydraulic doors. We could potentially even create a tool where you could upload your own photograph and then see how the hydraulic door looked on there.” So now we can create images that will help this content get shared.

    3. Relating a personal experience or passion to your topic in a resonant way.

    I like this and I think that many personal bloggers use it well. I think far too few business bloggers do, but it can be quite powerful, and we’ve used it here at Moz, which is relating a personal experience you have or a passion to your topic in some way that resonates. So, for example, you have an interaction that is very complex, very nuanced, very passionate, perhaps even very angry. From that experience, you can craft a compelling story and a headline that draws people in, that creates intrigue and that describes something with an amount of emotion that is resonant, that makes them want to connect with it. Because of that, you can inspire people to further connect with the brand and potentially to inform and entertain.

    There’s a lot of value from that. Usually, it comes from your own personal creativity around experiences that you’ve had. I say “you,” you, the writer or the author, but it could be anyone in your organization too. Some resources I really like for that are:

    • Photos. Especially, if you are someone who photographs a reasonable portion of your life on your mobile device, that can help inspire you to remember things.
    • A journal can also do the same thing.
    • Conversations that you have can do that, conversations in person, over email, on social media.
    • Travel. I think any time you are outside your comfort zone, that tends to be those unique things.

    Remote workers: I visited an artist collective in Santa Fe, New Mexico, and I realized that, “My gosh, one of the most frustrating parts of remote work is that if you’re not just about remote working with a laptop and your brain, you’re almost removed from the experience. How can you do remote work if you require specialized equipment?” But in fact, there are ways. There are maker labs and artist labs in cities all over the planet at this point. So I think this is a topic that potentially hasn’t been well-covered, has a lot of interest, and that personal experience that I, the writer, had could dig into that.

    Hydraulic doors: So I’ve had some conversations with do-it-yourselfers, people who are very, very passionate about DIY stuff. It turns out, hydraulic doors, this is not a thing that most DIYers can do. In fact, this is a very, very dramatic investment. That is an intense type of project. Ninety-nine percent of DIYers will not do it, but it turns out there’s actually search volume for this.

    People do want to, or at least want to learn how to, DIY their own hydraulic doors. One of my favorite things, after realizing this, I searched, and then I found that Schweiss Doors actually created a product where they will ship you a DIY kit to build your own hydraulic door. So they did recognize this need. I thought that was very, very impressive. They didn’t just create a blog post for it. They even served it with a product. Super-impressive.

    4. Covering a topic that is “hot” in your field or trending in your field or in the news or on other blogs.

    The great part about this is it builds in the amplification piece. Because you’re talking about something that other people are already talking about and potentially you’re writing about what they’ve written about, you are including an element of pre-built-in amplification. Because if I write about what Darren Rowse at ProBlogger has written about last week, or what Danny Sullivan wrote about on Search Engine Land two weeks ago, now it’s not just my audience that I can reach, but it’s theirs as well. Potentially, they have some incentive to check out what I’ve written about them and share that.

    So I could see that someone potentially maybe posted something very interesting or inflammatory, or wrong, or really right on Twitter, and then I could say, “Oh, I agree with that,” or, “disagree,” or, “I have nuance,” or, “I have some exceptions to that.” Or, “Actually, I think that’s an interesting conversation to which I can add even more value,” and then I create content from that. Certainly, social networks like:

    • Twitter
    • Instagram
    • Forums
    • Subreddits

    I really like Pocket for this, where I’ll save a bunch of articles and then see which one might be very interesting to cover or write about in the future. News aggregators are great for this too. So that could be a Techmeme in the technology space, or a Memeorandum in the political space, or many others.

    Remote workers: You might note, well, health care, last week in the United States and for many months now, has been very hot in the political arena. So for remoters, that is a big problem and a big question, because if your health insurance is tied to your employer again, as it was before the Affordable Care Act, then you could be in real trouble. Then you might have a lot of problems and challenges. So what does the politics of health care mean for remote workers? Great. Now, you’ve created a real connection, and that could be something that other outlets would cover and that people who’ve written about health care might be willing to link to your piece.

    Hydraulic doors: One of the things that you might note is that Eater, which is a big blog in the restaurant space, has written about indoor and outdoor space trends in the restaurant industry. So you could, with the data that you’ve got and the hydraulic doors that you provide, which are very, very common, well moderately common, at least in the restaurant indoor/outdoor seating space, potentially cover that. That’s a great way to tie in your audience and Eater’s audience into something that’s interesting. Eater might be willing to cover that and link to you and talk about it, etc.

    The last two, I’m not going to go too into depth, because they’re a little more basic.

    5. Pure keyword research-driven.

    So this is using Google AdWords or keywordtool.io, or Moz’s Keyword Explorer, or any of the other keyword research tools that you like to figure out: What are people searching for around my topic? Can I cover it? Can I make great content there?

    6. Readers who care about my topics also care about ______________?

    Essentially taking any of these topics, but applying one level of abstraction. What I mean by that is there are people who care about your topic, but also there’s an overlap of people who care about this other topic and who also care about yours.

    Hydraulic doors: There’s a considerable overlap between people who care about restaurant building trends and people who care about hydraulic doors, and that is quite interesting.

    Remote workers: It could be something like, “I care about remote work. I also care about the gear that I use, my laptop and my bag, and those kinds of things.” So gear trends could be a very interesting intersect. Then, you can apply any of the other five processes to that intersection, that one level of abstraction.

    All right, everyone. We have done a tremendous amount here to cover a lot about blog topics. But I think you will have some great ideas from this, and I look forward to hearing about other processes that you’ve got in the comments. Hopefully, we’ll see you again next week for another edition of Whiteboard Friday. Take care.

    Video transcription by Speechpad.com

    Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

    Posted by Blog post ideas

    Click on the whiteboard image above to open a high resolution version in a new tab!

    Video transcription

    Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week, we’re going to chat about blog post ideas, how to have great ones, how to make sure that the topics that you’re covering on your blog actually accomplish the goals that you want, and how to not run out of ideas as well.

    The goals of your blog

    So let’s start with the goals of a blog and then what an individual post needs to do, and then I’ll walk you through kind of six formats for coming up with great ideas for what to blog about. But generally speaking, you have created a blog, either on your company’s website or your personal website or for the project that you’re working on, because you want to:

    • Attract a certain audience, which is great.
    • Capture the attention and amplification, the sharing of certain types of influencers, so that you can grow that audience.
    • Rank highly in search engines. That’s not just necessarily a goal for the blog’s content itself. But one of the reasons that you started a blog is to grow the authority, the ranking signals, the ability to rank for the website as a whole, and the blog hopefully is helping with that.
    • Inspire some trust, some likeability, loyalty, and maybe even some evangelism from your readers.
    • Provide a reference point for their opinions. So if you are a writer, an author, a journalist, a contributor to all sorts of sources, a speaker, whatever it is, you’re trying to provide a home for your ideas and your content, potentially your opinions too.
    • Covert our audience to take an action. Then, finally, many times a blog is crafted with the idea that it is a first step in capturing an audience that will then take an action. That could be buy something from you, sign up for an email list, potentially take a free trial of something, maybe take some action. A political blog might be about, “Call your Congress person.” But those types of actions.

    What should an individual post do?

    From there, we get into an individual post. An individual post is supposed to help with these goals, but on its own doesn’t do all of them. It certainly doesn’t need to do more than one at a time. It can hopefully do some. But one of those is, generally speaking, a great blog post will do one of these four things and hopefully two or even three.

    I. Help readers to accomplish a goal that they have.

    So if I’m trying to figure out which hybrid electric vehicle should I buy and I read a great blog post from someone who’s very, very knowledgeable in the field, and they have two or three recommendations to help me narrow down my search, that is wonderful. It helps me accomplish my goal of figuring out which hybrid car to buy. That accomplishment of goal, that helping of people hits a bunch of these very, very nicely.

    II. Designed to inform people and/or entertain them.

    So it doesn’t have to be purely informational. It doesn’t have to be purely entertainment, but some combination of those, or one of the two, about a particular topic. So you might be trying to make someone excited about something or give them knowledge around it. It may be knowledge that they didn’t previously know that they wanted, and they may not actually be trying to accomplish a goal, but they are interested in the information or interested in finding the humor.

    III. Inspiring some amplification and linking.

    So you’re trying to earn signals to your site that will help you rank in search engines, that will help you grow your audience, that will help you reach more influencers. Thus, inspiring that amplification behavior by creating content that is designed to be shared, designed to be referenced and linked to is another big goal.

    IV. Creating a more positive association with the brand.

    So you might have a post that doesn’t really do any of these things. Maybe it touches a little on informational or entertaining. But it is really about crafting a personal story, or sharing an experience that then draws the reader closer to you and creates that association of what we talked about up here — loyalty, trust, evangelism, likeability.

    6 paths to great blog topic ideas

    So knowing what our blog needs to do and what our individual posts are trying to do, what are some great ways that we can come up with the ideas, the actual topics that we should be covering? I have kind of six paths. These six paths actually cover almost everything you will read in every other article about how to come up with blog post ideas. But I think that’s what’s great. These frameworks will get you into the mindset that will lead you to the path that can give you an infinite number of blog post ideas.

    1. Are there any unanswered or poorly answered questions that are in your field, that your audience already has/is asking, and do you have a way to provide great answers to those?

    So that’s basically this process of I’m going to research my audience through a bunch of methodologies, going to come up with topics that I know I could cover. I could deliver something that would answer their preexisting questions, and I could come up with those through…

    • Surveys of my readers.
    • In-person meetings or emails or interviews.
    • Informal conversations just in passing around events, or if I’m interacting with members of my audience in any way, social settings.
    • Keyword research, especially questions.

    So if you’re using a tool like Moz’s Keyword Explorer, or I think some of the other ones out there, Ahrefs might have this as well, where you can filter by only questions. There are also free tools like Answer the Public, which many folks like, that show you what people are typing into Google, specifically in the form of questions, “Who? What? When? Where? Why? How? Do?” etc.

    So I’m not just going to walk you through the ideas. I’m also going to challenge myself to give you some examples. So I’ve got two — one less challenging, one much more challenging. Two websites, both have blogs, and coming up with topic ideas based on this.

    So one is called Remoters. It’s remoters.net. It’s run by Aleyda Solis, who many of you in the SEO world might know. They talk about remote work, so people who are working remotely. It’s a content platform for them and a service for them. Then, the second one is a company, I think, called Schweiss Doors. They run hydraulicdoors.com. Very B2B. Very, very niche. Pretty challenging to come up with good blog topics, but I think we’ve got some.

    Remote Worker: I might say here, “You know what? One of the questions that’s asked very often by remote workers, but is not well-answered on the internet yet is: ‘How do I conduct myself in a remote interview and present myself as a remote worker in a way that I can be competitive with people who are actually, physically on premises and in the room? That is a big challenge. I feel like I’m always losing out to them. Remote workers, it seems, don’t get the benefits of being there in person.'” So a piece of content on how to sell yourself on a remote interview or as a remote worker could work great here.

    Hydraulic doors: One of the big things that I see many people asking about online, both in forums which actually rank well for it, the questions that are asked in forums around this do rank around costs and prices for hydraulic doors. Therefore, I think this is something that many companies are uncomfortable answering right online. But if you can be transparent where no one else can, I think these Schweiss Doors guys have a shot at doing really well with that. So how much do hydraulic doors cost versus alternatives? There you go.

    2. Do you have access to unique types of assets that other people don’t?

    That could be research. It could be data. It could be insights. It might be stories or narratives, experiences that can help you stand out in a topic area. This is a great way to come up with blog post content. So basically, the idea is you could say, “Gosh, for our quarterly internal report, we had to prepare some data on the state of the market. Actually, some of that data, if we got permission to share it, would be fascinating.”

    We can see through keyword research that people are talking about this or querying Google for it already. So we’re going to transform it into a piece of blog content, and we’re going to delight many, many people, except for maybe this guy. He seems unhappy about it. I don’t know what his problem is. We won’t worry about him. Wait. I can fix it. Look at that. So happy. Ignore that he kind of looks like the Joker now.

    We can get these through a bunch of methodologies:

    • Research, so statistical research, quantitative research.
    • Crowdsourcing. That could be through audiences that you’ve already got through email or Facebook or Twitter or LinkedIn.
    • Insider interviews, interviews with people on your sales team or your product team or your marketing team, people in your industry, buyers of yours.
    • Proprietary data, like what you’ve collected for your internal annual reports.
    • Curation of public data. So if there’s stuff out there on the web and it just needs to be publicly curated, you can figure out what that is. You can visit all those websites. You could use an extraction tool, or you could manually extract that data, or you could pay an intern to go extract that data for you, and then synthesize that in a useful way.
    • Multimedia talent. Maybe you have someone, like we happen to here at Moz, who has great talent with video production, or with audio production, or with design of visuals or photography, or whatever that might be in the multimedia realm that you could do.
    • Special access to people or information, or experiences that no one else does and you can present that.

    Those assets can become the topic of great content that can turn into really great blog posts and great post ideas.

    Remote Workers: They might say, “Well, gosh, we have access to data on the destinations people go and the budgets that they have around those destinations when they’re staying and working remotely, because of how our service interacts with them. Therefore, we can craft things like the most and least expensive places to work remotely on the planet,” which is very cool. That’s content that a lot of people are very interested in.

    Hydraulic doors: We can look at, “Hey, you know what? We actually have a visual overlay tool that helps an architect or a building owner visualize what it will look like if a hydraulic door were put into place. We can go use that in our downtime to come up with we can see how notable locations in the city might look with hydraulic doors or notable locations around the world. We could potentially even create a tool, where you could upload your own visual, photograph, and then see how the hydraulic door looked on there.” So now we can create images that will help you share.

    3. Relating a personal experience or passion to your topic in a resonant way.

    I like this and I think that many personal bloggers use it well. I think far too few business bloggers do, but it can be quite powerful, and we’ve used it here at Moz, which is relating a personal experience you have or a passion to your topic in some way that resonates. So, for example, you have an interaction that is very complex, very nuanced, very passionate, perhaps even very angry. From that experience, you can craft a compelling story and a headline that draws people in, that creates intrigue and that describes something with an amount of emotion that is resonant, that makes them want to connect with it. Because of that, you can inspire people to further connect with the brand and potentially to inform and entertain.

    There’s a lot of value from that. Usually, it comes from your own personal creativity around experiences that you’ve had. I say “you,” you, the writer or the author, but it could be anyone in your organization too. Some resources I really like for that are:

    • Photos. Especially, if you are someone who photographs a reasonable portion of your life on your mobile device, that can help inspire you to remember things.
    • A journal can also do the same thing.
    • Conversations that you have can do that, conversations in person, over email, on social media.
    • Travel. I think any time you are outside your comfort zone, that tends to be those unique things.

    Remote workers: I visited an artist collective in Santa Fe, New Mexico, and I realized that, “My gosh, one of the most frustrating parts of remote work is that if you’re not just about remote working with a laptop and your brain, you’re almost removed from the experience. How can you do remote work if you require specialized equipment?” But in fact, there are ways. There are maker labs and artist labs in cities all over the planet at this point. So I think this is a topic that potentially hasn’t been well-covered, has a lot of interest, and that personal experience that I, the writer, had could dig into that.

    Hydraulic doors: So I’ve had some conversations with do-it-yourselfers, people who are very, very passionate about DIY stuff. It turns out, hydraulic doors, this is not a thing that most DIYers can do. In fact, this is a very, very dramatic investment. That is an intense type of project. Ninety-nine percent of DIYers will not do it, but it turns out there’s actually search volume for this.

    People do want to, or at least want to learn how to, DIY their own hydraulic doors. One of my favorite things, after realizing this, I searched, and then I found that Schweiss Doors actually created a product where they will ship you a DIY kit to build your own hydraulic door. So they did recognize this need. I thought that was very, very impressive. They didn’t just create a blog post for it. They even served it with a product. Super-impressive.

    4. Covering a topic that is “hot” in your field or trending in your field or in the news or on other blogs.

    The great part about this is it builds in the amplification piece. Because you’re talking about something that other people are already talking about and potentially you’re writing about what they’ve written about, you are including an element of pre-built-in amplification. Because if I write about what Darren Rowse at ProBlogger has written about last week, or what Danny Sullivan wrote about on Search Engine Land two weeks ago, now it’s not just my audience that I can reach, but it’s theirs as well. Potentially, they have some incentive to check out what I’ve written about them and share that.

    So I could see that someone potentially maybe posted something very interesting or inflammatory, or wrong, or really right on Twitter, and then I could say, “Oh, I agree with that,” or, “disagree,” or, “I have nuance,” or, “I have some exceptions to that.” Or, “Actually, I think that’s an interesting conversation to which I can add even more value,” and then I create content from that. Certainly, social networks like:

    • Twitter
    • Instagram
    • Forums
    • Subreddits. I really like Pocket for this, where I’ll save a bunch of articles, and then I’ll see which one might be very interesting to cover or write about in the future. News aggregators are great for this too. So that could be a Techmeme in the technology space, or a Memeorandum in the political space, or many others.

    Remote workers: You might note, well, health care, last week in the United States and for many months now, has been very hot in the political arena. So for remoters, that is a big problem and a big question, because if your health insurance is tied to your employer again, as it was before the American Care Act, then you could be in real trouble. Then you might have a lot of problems and challenges. So what does the politics of health care mean for remote workers? Great. Now, you’ve created a real connection, and that could be something that other outlets would cover and that people who’ve written about health care might be willing to link to your piece.

    Hydraulic doors: One of the things that you might note is that Eater, which is a big blog in the restaurant space, has written about indoor and outdoor space trends in the restaurant industry. So you could, with the data that you’ve got and the hydraulic doors that you provide, which are very, very common, well moderately common, at least in the restaurant indoor/outdoor seating space, potentially cover that. That’s a great way to tie in your audience and Eater’s audience into something that’s interesting. Eater might be willing to cover that and link to you and talk about it, etc.

    The last two, I’m not going to go too into depth, because they’re a little more basic.

    5. Pure keyword research-driven.

    So this is using Google AdWords or keywordtool.io, or Moz’s Keyword Explorer, or any of the other keyword research tools that you like to figure out: What are people searching for around my topic? Can I cover it? Can I make great content there?

    6. Readers who care about my topics also care about ______________?

    Essentially taking any of these topics, but applying one level of abstraction. What I mean by that is there are people who care about your topic, but also there’s an overlap of people who care about this other topic and who also care about yours.

    Hydraulic doors: There’s a considerable overlap between people who care about restaurant building trends and people who care about hydraulic doors, and that is quite interesting.

    Remote workers: It could be something like, “I care about remote work. I also care about the gear that I use, my laptop and my bag, and those kinds of things.” So gear trends could be a very interesting intersect. Then you can apply any of these other five processes onto that intersection, that one level of abstraction.

    All right, everyone. We have done a tremendous amount here to cover a lot about blog topics. But I think you will have some great ideas from this, and I look forward to hearing about other processes that you’ve got in the comments. Hopefully, we’ll see you again next week for another edition of Whiteboard Friday. Take care.

    Video transcription by Speechpad.com

    Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


    Tasty SEO Report Recipes to Save Time & Add Value for Clients [Next Level]

    In Keyword Explorer, you can save your foraged terms to lists. By bundling together similar “species,” you get a high-level view of the breadth and depth of search behavior within the categories of your niche. Easily compare volume, difficulty, opportunity, and potential to instigate a data-driven approach to site architecture. You’ll also know, at a glance, where you can expand on certain topics and apply more resources to content creation.

    With these metrics in hand, plus your client’s industry knowledge, you can cherry-pick keywords to track ranking positions week over week and add them to your Moz Pro campaign with the click of a button.

    What’s the recipe?

    Step One: Pluck keywords from the category pages of your client’s site.

    Step Two: Find keyword suggestions in Keyword Explorer.

    Step Three: Group by low lexical similarity to bundle together similar keywords and gather up that long tail.

    Step Four: Assess and save relevant results to a list.

    Step Five: Head to your Keyword Lists and compare the metrics: where’s the opportunity? Can you compete with the level of difficulty? Is there a high-volume long tail you can dig into?

    Step Six: Add sample keywords from your buckets directly to your campaign.

    Bonus step: Repeat for products or other topic segments of your niche.

    Don’t forget to drill into the keywords that are coming up here to see if there are categories and subcategories you hadn’t considered. These can be targeted in existing content to further extend the relevancy and reach of your client’s content. Or they might inspire new content that will help to build up the authority of the site.
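
    If you want to sanity-check the grouping outside the tool, a rough script can help. The sketch below is illustrative only: it assumes a hypothetical CSV export with “Keyword,” “Monthly Volume,” and “Difficulty” columns (your export’s column names may differ), and it buckets terms by a shared head word, which is far cruder than Keyword Explorer’s lexical grouping but fine for a quick gut check on the breadth, depth, and average difficulty of each bucket.

```python
# A minimal sketch (not the Moz workflow itself) of bundling exported keyword
# suggestions into rough lexical buckets and comparing aggregate metrics.
# Assumes a hypothetical CSV with "Keyword", "Monthly Volume", and "Difficulty"
# columns; adjust the names to match whatever your export actually contains.
from collections import defaultdict
import csv

STOP_WORDS = {"for", "the", "a", "of", "to", "in", "and"}
groups = defaultdict(list)

with open("keyword_suggestions.csv", newline="") as f:
    for row in csv.DictReader(f):
        keyword = row["Keyword"].lower()
        volume = int(row["Monthly Volume"] or 0)
        difficulty = float(row["Difficulty"] or 0)
        # Crude lexical grouping: bucket by the first word that isn't a stop word.
        head = next((w for w in keyword.split() if w not in STOP_WORDS), keyword)
        groups[head].append((keyword, volume, difficulty))

# Summarize each bucket: breadth (keyword count), depth (total volume),
# and how hard the bucket looks on average.
for head, rows in sorted(groups.items(), key=lambda kv: -sum(v for _, v, _ in kv[1])):
    total_volume = sum(v for _, v, _ in rows)
    avg_difficulty = sum(d for _, _, d in rows) / len(rows)
    print(f"{head:<20} keywords={len(rows):<4} volume={total_volume:<8} avg difficulty={avg_difficulty:.1f}")
```

    Swap in the real export and the buckets with high total volume and manageable average difficulty are the ones worth expanding first.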

    Why the client will be impressed

    Through solid, informed research, you can demonstrate why their site should be structured with certain categories in the top-level navigation, right down to product pages. You’ll also be able to prioritize work on building, improving, or refining content on certain sections of the site by understanding the breakdown of search behavior and demand. Are you seeing lots of keywords with a decent level of volume and lower difficulty? Or more in-depth long tail with low search volume? Or fewer distinct keywords with high search volume but stronger competition?

    Let the demand drive the machine forward and make sure you’re giving the hordes what they want.

    All of this helps you to further develop your understanding of the ways people search, so you can make informed decisions about which keywords to track.

    [Part two] Palate-cleansing lemon keyword label sorbet

    Before diving into the next course, you need to cleanse your palate with a lemon “label” sorbet.

    In Part One, we talked about the struggle of maintaining gigantic lists of keywords. We’ve sampled keywords from our foraged buckets, keeping them organized and segmented within our Moz Pro campaign.

    Now you want to give those tracked keywords a more defined purpose in life. This helps to reinforce to your client why you’re tracking these keywords, what the goal is for tracking them, and in what sort of timeframe you’re anticipating results.

    Types of labels might include:

    • Local keywords: Is the business serving local residents, like a mushroom walking tour? You can add geo modifiers to your keywords and label them as such.
    • Long-tail keywords: These may have lower search volume, but their focused intent can convert well for your client.
    • High-priority keywords: Where you’re shoveling more resources, these keywords are more likely to be impacting the other keyword segments.
    • Brand keywords: Mirror, mirror on the wall… yeah, we all want those vanity keywords, don’t lie. You can manage brand keywords automatically through “Manage Brand Rules” in Moz Pro.

    A generous scoop of tasty lemon “label” sorbet will make everything you do, and the progress you achieve, infinitely easier to report on with clear, actionable focus.
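
    If you’re labeling hundreds of tracked keywords, it can be quicker to draft the labels with a few simple rules and then apply them in bulk. The sketch below is only a rough illustration of that idea: the geo terms, brand pattern, hand-picked priority list, and the four-word long-tail cutoff are all placeholder assumptions, not Moz Pro’s own logic.

```python
# A minimal sketch of drafting keyword labels with simple rules before applying
# them in Moz Pro. Every rule below is a hypothetical placeholder; swap in your
# client's own geo modifiers, brand name, and priority terms.
import re

GEO_TERMS = {"seattle", "portland", "near me"}            # placeholder geo modifiers
BRAND_PATTERN = re.compile(r"\bexample\s*brand\b", re.I)  # placeholder brand name
HIGH_PRIORITY = {"hydraulic doors", "diy hydraulic door kit"}  # hand-picked terms

def draft_labels(keyword: str) -> list[str]:
    """Return a draft list of labels for one keyword, based on simple rules."""
    kw = keyword.lower()
    labels = []
    if any(term in kw for term in GEO_TERMS):
        labels.append("local")
    if len(kw.split()) >= 4:          # crude long-tail heuristic: four or more words
        labels.append("long-tail")
    if BRAND_PATTERN.search(kw):
        labels.append("brand")
    if kw in HIGH_PRIORITY:
        labels.append("high-priority")
    return labels or ["unlabeled"]

for kw in ["diy hydraulic door kit", "example brand doors", "hydraulic doors seattle"]:
    print(f"{kw:<28} -> {', '.join(draft_labels(kw))}")
```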

    What’s the recipe?

    Step One: Label keywords like a pro.

    Step Two: Filter by labels in the Rankings tab to assess Search Visibility for your keyword segments.

    In this example, I’m comparing our visibility for “learn” keywords against “guide” keywords:

    Step Three: Create a custom report for your keyword segments.

    Step Four: Add a drizzle of balsamic vinegar by activating the Optimize button — now you can send the latest on-page reporting along with your super-focused ranking report.

    Why the client will be impressed

    Your ranking reports will be like nothing the client has ever sampled. They’ll be tightly focused on the segments of keywords you’re working on, so they aren’t bamboozled by a new slew of keywords or a sudden downward trend. By clearly segmenting your piles of gorgeous keywords, you’ll be proactively answering those inevitable questions about why, when, and in what form the client will begin to see results.

    With the on-page scores updating automatically and shipping to your client’s inbox each month via a custom report, you’ll be effortlessly highlighting what your team has achieved.

    [Part three] Steak sandwich links with crispy competitor bacon

    You’re working with the client to publish content, amplifying it through social channels and driving brand awareness through PR campaigns.

    Now you want to keep them informed of the big wins you’ve had as a result of that grind. Link data in Moz Pro focuses on the highest-quality links in our Mozscape index, from the most prominent pages of authoritative sites. So, while you may not see every link for a site in our index, we’re reporting the most valuable ones.

    Alongside our top-quality steak sarnie, we’ll add some crispy competitor bacon so you can identify what content is working for the other sites in your industry.

    What’s the recipe?

    Step One: Check you have direct competitors set up in your campaign.

    Step Two: Compare link metrics for your site and your competitors (see the sketch after these steps for one way to lay the comparison out).

    Step Four: Head to Top Pages to see what those competitors are doing to get ahead.

    Step Five: Compile a scrumptious report sandwich!

    Step Six: Make another report for Top Pages for that bacon-filled sandwich experience.
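
    As a companion to the steps above, here’s a minimal sketch of laying link metrics out as a quick side-by-side comparison for the report. Everything in it is invented for illustration, including the domains, the metric values, and the field names; substitute the real figures from your campaign or a link metrics export.

```python
# A toy sketch of turning link metrics into a side-by-side competitor table.
# The domains, field names, and numbers below are invented for illustration.
sites = {
    "client-site.example":    {"domain_authority": 38, "linking_root_domains": 412},
    "competitor-one.example": {"domain_authority": 45, "linking_root_domains": 690},
    "competitor-two.example": {"domain_authority": 33, "linking_root_domains": 287},
}

header = f"{'Site':<26}{'DA':>5}{'Linking RDs':>14}"
print(header)
print("-" * len(header))
for site, metrics in sorted(sites.items(), key=lambda kv: -kv[1]["domain_authority"]):
    print(f"{site:<26}{metrics['domain_authority']:>5}{metrics['linking_root_domains']:>14}")

# Flag the gap so the "how can we improve?" conversation has a concrete number in it.
client = sites["client-site.example"]
leader = max(sites.values(), key=lambda m: m["linking_root_domains"])
gap = leader["linking_root_domains"] - client["linking_root_domains"]
print(f"\nLinking root domain gap to the front-runner: {gap}")
```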

    Why the client will be impressed

    Each quality established link gives the client a clear idea of the value of their content and the blood, sweat, and tears of your team.

    These little gems are in place and more likely to have an effect on their ranking potential. Remember to make an appointment with the client where you explain that a link’s impact on rankings takes time.

    By comparing this directly with the other sites battling it out for the best SERP real estate, it’s easier to identify progress and achievements.

    By highlighting those annoying competitors and their top pages by authority, you’re also getting ahead of that burning question: How can we improve?

    [Part four] Cinnamon-dusted ranking reports with cherry-glazed traffic

    Rankings are a staple ingredient in the SEO diet. Much like the ever-expanding keyword list, reporting on rankings has become something we do without thinking enough about what clients can do with this information.

    Dish up an all-singing, all-dancing cinnamon-dusted rankings report with cherry-glazed traffic by illustrating the direct impact these rankings have on organic traffic. Real people, cruising through the search results to your client’s site.

    Landing Pages in Moz Pro compares rankings with organic landing pages, imparting not only the ranking score but the value of those pages. Compliments to the chef, because that good work is down to you.

    What’s the recipe?

    Step One: Track your target keywords in Moz Pro.

    Step Two: Check you’ve connected Google Analytics for your tasty traffic data.

    Step Three: Discover landing pages and estimated traffic share.

    As your SEO work drives more traffic to those pages and your keyword rankings continue to climb, you will see your estimated traffic share increase.

    If your organic traffic from search is increasing but your ranking is dropping off, it’s a sign that this keyword isn’t the driving force.

    Now you can have a dig around and find out why that keyword isn’t performing, starting with your on-page optimization and following up with keyword research.
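
    For clients who ask where an “estimated traffic” figure could come from, it can help to walk through the reasoning with a toy calculation: monthly search volume multiplied by an assumed click-through rate for the ranking position, summed across the keywords a page ranks for. The sketch below does exactly that, but note that the CTR curve, keywords, volumes, and positions are all illustrative assumptions rather than Moz’s actual model.

```python
# A toy walkthrough of an "estimated traffic share" calculation.
# The CTR curve, keywords, volumes, and positions are illustrative assumptions.
ASSUMED_CTR_BY_POSITION = {1: 0.30, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05,
                           6: 0.04, 7: 0.03, 8: 0.03, 9: 0.02, 10: 0.02}

def estimated_monthly_traffic(volume: int, position: int) -> float:
    """Estimate organic visits for one keyword: search volume times assumed CTR at that rank."""
    return volume * ASSUMED_CTR_BY_POSITION.get(position, 0.0)

# Hypothetical keywords that all land on the same page.
keywords = [
    ("hydraulic doors", 1600, 1),
    ("diy hydraulic door kit", 390, 2),
    ("restaurant patio doors", 720, 9),
]

total = sum(estimated_monthly_traffic(v, p) for _, v, p in keywords)
for kw, volume, position in keywords:
    est = estimated_monthly_traffic(volume, position)
    share = est / total if total else 0.0
    print(f"{kw:<26} position {position:<3} est. visits/mo {est:6.0f}  share {share:5.1%}")
```

    Run against real volumes and positions, a table like this makes it obvious which ranking movements actually move the traffic needle.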

    Why the client will be impressed

    We all send ranking reports, and I’m sure clients just love them. But now you can dazzle them with an insight into what those rankings mean for the lifeblood of their site.

    You can also take action by directing more energy toward those well-performing keywords, or investigate what worked well for those pages and replicate it across other keywords and pages on your site.

    Overall

    It’s time to say “enough is enough” and inject some flavor into those bland old SEO reports. Your team will save time and your clients will appreciate the tasty buffet of reporting delight.


    Next Level is our educational series combining actionable SEO tips with the tools you’ll need to achieve them. Check out any of our past editions below:

    • Hunting Down SERP Features to Understand Intent & Drive Traffic
    • I’ve Optimized My Site, But I’m Still Not Ranking—Help!
    • Diving for Pearls: A Guide to Long Tail Keywords
    • Be Your Site’s Hero: An Audit Manifesto
    • How to Defeat Duplicate Content
    • Conquer Your Competitors with These Three Moz Tools
    • 10 Tips to Take the Moz Tools to the Next Level

    Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

