Lee Odden

Duplicate Content and Multiple Sites – SES Chicago

The second session on my liveblogging hit list for SES Chicago was “Duplicate Content and Multiple Sites,” moderated by Adam Audette. Unfortunately, Michael Gray was not able to be there, so things started with Susan Moskwa, Webmaster Trends Analyst at Google, and Shari Thurow was added on the fly.

What is duplicate content? Identical or substantially similar content. Also multiple URLs with the same content. Google realizes that duplication can be deliberate or accidental.

Basically, duplicate content in the context of a search engine is publishing different URLs that present the same content. My example: a single homepage might be reachable at all of these URLs, which is four addresses for one piece of content:

  • http://example.com
  • http://www.example.com
  • http://www.example.com/index.html
  • http://www.example.com/?sessionid=123

Why does Google care about duplicate content? Users don’t like to see 10 nearly identical results. Also, there’s no benefit in crawling multiple URLs with the same content; it’s a waste of resources for Googlebot to do that.

How does Google handle duplicate content? Google filters results at serve time, collapsing the duplicates and showing the most relevant page in the SERPs for the query. This will likely be the page where the content originated; the original page is determined by considering factors like age and authority. This is not an exact science, and the factors used to determine the “original” can change with the query. Think about what people are searching for and what they expect to see.

The myth of the duplicate content penalty. A lot of people think that if they have duplicate content that they’ll be penalized. In most cases, Google does not penalize sites for accidental duplication. Many, many, many sites have duplicate content.

Google may penalize sites for deliberate or manipulative duplication. For example: auto generated content, link networks or similar tactics designed to be manipulative. Don’t be afraid of duplicate content, be informed.

Tactics for controlling what version of your content does get shown:

  • Control which version of your content is shown, especially if you don’t like the version shown in search.
  • If you aggregate all the duplicates and redirect them to a canonical page, you consolidate the ranking signals. It also optimizes crawling of your site, focusing on canonical URLs rather than the duplicates.
  • You should establish contracts (when syndicating) that require linking back to the original version.
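As a sketch of the redirect-to-canonical approach above: on an Apache server, a duplicate hostname can be permanently redirected to the preferred one in .htaccess (example.com and the www preference are placeholders, not a recommendation from the panel):

```apache
# Permanently (301) redirect the bare domain to the preferred www version,
# so ranking signals consolidate on one canonical hostname.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```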

When should you not care about duplicate content?

  • Common, minimal duplication.
  • When you think the benefit outweighs potential ranking concerns. Consider your cost of fixing the duplicate content situation vs. the benefit you would receive.
  • Remember: duplication is common and search engines can handle it.

Ways to indicate your preferred version of content to Google:

  • Set your preferred domain in Google Webmaster Tools
  • Include your preferred URLs in your sitemap
  • Always use your preferred URLs when linking to your site
  • Use 301 redirects
  • Use rel=”canonical”
  • Google’s “Parameter handling” tool (now has default values)
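To illustrate the rel=”canonical” option above (URLs are placeholders): a duplicate page declares its preferred version with a link element in its head:

```html
<!-- On a duplicate such as http://www.example.com/product?color=red,
     point search engines at the preferred URL -->
<link rel="canonical" href="http://www.example.com/product">
```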

Google doesn’t recommend blocking duplicate URLs with robots.txt, because if they can’t crawl a URL they have to assume it’s unique. It’s better to let everything get crawled and to clearly indicate which URLs are duplicates. A number of SEOs, including Shari, who also spoke on this panel, do not agree with Google on this.

Robots.txt controls crawling, not indexing. Google may index something (because of a link to it from an external site) but not crawl it. That can create a duplicate content issue.
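The difference shows up in the two mechanisms themselves (paths are placeholders): robots.txt blocks crawling only, while a noindex meta tag, which the engine can only see if it is allowed to crawl the page, blocks indexing:

```
# robots.txt — stops compliant bots from crawling these URLs,
# but the URLs can still end up indexed via external links
User-agent: *
Disallow: /print/
```

```html
<!-- On the page itself — crawlable, but kept out of the index -->
<meta name="robots" content="noindex">
```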

International site duplicate content issues:

  • The same ideas in different languages are not considered duplicate content.
  • If you have the same content in one language for multiple countries: Consider having one page with the content they share, separate pages for what they don’t such as price, address, etc. Highlight what’s local or unique on each page.  Geotarget different folders or sub-domains to indicate that they target different users.

International sites: a new tag, rel=”alternate”, lets you indicate when a page is a duplicate of another except for the navigation. Check out the Google blog post on this.
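As I understand Google’s announcement, rel=”alternate” is paired with an hreflang attribute naming each version’s audience. A sketch with placeholder URLs, showing two English-language versions referencing each other:

```html
<!-- On each version of the page, declare the alternates and their audiences -->
<link rel="alternate" hreflang="en-us" href="http://www.example.com/us/" />
<link rel="alternate" hreflang="en-gb" href="http://www.example.com/uk/" />
```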

Takeaways: Avoid knee jerk reactions to duplicate content. Understand the effects and take informed action. Don’t robots.txt out duplicate URLs, indicate which ones are duplicates. Content matters more than the domain name when it comes to duplicate content signals.

Shari Thurow stepped in last minute to fill in after Susan and talked about duplicate content from a user experience perspective as well as SEO. Essentially, when you fix duplicate content issues, it results in your best pages being delivered to users in the search results.

Having duplicate content lowers your index count, which means fewer pages are available to rank well in the search results. A consequence of that is your best-converting pages might not appear for people searching.

Shari says that if you have too many links on a page, then the page doesn’t seem focused on content. If there are too few links to a page, it seems unimportant, so be sure to have a balance and focus on what would be most useful to a reader.

Here’s a (partial) checklist for duplicate content on a website:

  • Boilerplate similarity
  • Are linkage properties the same or similar (inbound or outbound)?
  • Does the same company control the content of both pages?
  • Are the same shingles (word sets) used on both pages?

If the answer to these questions is yes, then you have duplicate content.
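The “shingles” check can be done programmatically. A minimal sketch in Python (an illustration of the general technique, not any engine’s actual algorithm): break each page’s text into overlapping k-word windows and measure overlap with Jaccard similarity — a score near 1.0 means near-duplicate text.

```python
def shingles(text, k=3):
    """Return the set of k-word shingles (overlapping word windows) in text."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity of two shingle sets: |intersection| / |union|."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Two pages differing by a single word score high, flagging near-duplication.
page_a = "the quick brown fox jumps over the lazy dog"
page_b = "the quick brown fox leaps over the lazy dog"
sim = jaccard(shingles(page_a), shingles(page_b))  # 4 shared of 10 total shingles
```

In practice you would run this over page body text with boilerplate stripped, and pick a similarity threshold suited to your site.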

Duplicate content solutions:

  • Information architecture, site navigation, and page interlinking.
  • Are URLs linked to consistently throughout the site or the site network?
  • Are the links labeled consistently?
  • Are you preventing the web page from being spidered with robots.txt?
  • If articles are shared across your network of sites are you implementing the rel=canonical tag?
  • 301 redirects from duplicate versions to the original
  • Use the rel=nofollow attribute where appropriate. For ecommerce sites, put it on your “add to cart” buttons
  • Use webmaster tools and provide the engine with a sitemap (Note: don’t robots.txt out a page and then include it in an XML sitemap)
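The sitemap point can be sketched as an XML sitemap that lists only the preferred (canonical) URLs (example.com is a placeholder):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- List only canonical URLs here; never URLs you block in robots.txt -->
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/product</loc>
  </url>
</urlset>
```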

Audience Question: Noticed someone was planning on duplicating an entire site using different language navigation. What to do?

Answer: File a Digital Millennium Copyright Act (DMCA) complaint with Google.


About Lee Odden

@LeeOdden is the CEO of TopRank Marketing and editor of Online Marketing Blog. Cited for his expertise by The Economist, Forbes and the Wall Street Journal, he's the author of the book Optimize and presents internationally on B2B marketing topics including content, search, social media and influencer marketing. When not at conferences, consulting, or working with his talented team, he's likely running, traveling or cooking up something new.


  1. Webdesign Utrecht says

    PS: the “view comments” button on your website is generating a different URL than your canonical link… talking about duplicate content 😛

  2. Rahulbatra188 says

    Great post. Too much information on duplicate content and one can easily get solutions from your post.

  3. Hello,
    the fact that duplicate content was not considered, I think this information is to keep it handy and use it, before Google changes again 😉

    Sergio Vergara

  4. My question has always been about development sites. Yes, there is an argument that development can be done offline, but to work out bugs and glitches, it is helpful to have one online as well. I wonder how Google thinks about this. Typically .dev is put in front of the domain name to indicate development and these sites are intentional.

  5. Nice tips, adding “noindex” in meta tags helps to avoid getting indexed in search engines or simply restricting from robots.txt. If you are using wordpress then its quite easy to control duplicate content flow through out the blog (i am not sure about other systems) using several handy seo plugins.

  6. Josefina Argüello says

    How about coding similarities? In an example we have multiple clients in the same industry but different locations, that use a template design, therefore having the same (similar) code. However all textual content is written specifically for that client and is completely different than the other site(s). Is there any differentiation between “code similarities” and “content similarities”?