
Google Can Identify Duplicate Content Before Crawling

Google’s John Mueller revealed in a Webmaster Central hangout this week that Googlebot is capable of recognizing duplicate content even before it has crawled the pages.

A site owner asked if and when Google would consider a French version of a page to be a duplicate of the English version.

“Can Google determine when multiple pages share the same content in different languages? If so, how is that handled in search results?”

In his response, Mueller revealed that, in some instances, Google can detect that pages share the same content without even having to crawl them. This prediction is based largely on patterns in the pages’ URLs.

“What sometimes happens is we kind of proactively recognize that something is probably a duplicate, even before crawling it. This happens when we see that the difference, for example, is within the URL in a place where we’ve generally noticed that the content shown is not so relevant to the content on the page.

“So that could be something like a language parameter set to various terms. We might try “language=English,” “language=French,” “language=German,” … if we find that all these pages show the English content, except for maybe “language=Spanish” that shows the Spanish version, then we might assume that this language parameter is actually irrelevant to this page. Consequently, we might miss the one page that actually has unique content.”
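To make the pattern concrete, here is a minimal Python sketch of the kind of heuristic Mueller describes: group URLs that differ only in one query parameter, hash the content each returned in a small crawl sample, and write the parameter off as irrelevant if the content never changes. The function names and the sample data are hypothetical illustrations of the idea, not Google’s actual implementation.

```python
import hashlib
from urllib.parse import urlparse, parse_qsl, urlencode

def strip_param(url, param):
    """Return the URL with the given query parameter removed."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k != param]
    return parts._replace(query=urlencode(kept)).geturl()

def param_looks_irrelevant(sampled_pages, param):
    """
    sampled_pages: dict mapping URL -> fetched HTML body from a small crawl sample.
    Returns True if URLs that differ only in `param` always returned
    identical content in the sample.
    """
    groups = {}
    for url, body in sampled_pages.items():
        key = strip_param(url, param)
        groups.setdefault(key, set()).add(hashlib.sha256(body.encode()).hexdigest())
    # If every group of URLs differing only in `param` collapsed to a single
    # content hash, the parameter looks like it does not affect the page.
    return all(len(hashes) == 1 for hashes in groups.values())

# Hypothetical sample: three language values all served the English page.
sample = {
    "https://example.com/page?language=English": "<html>English content</html>",
    "https://example.com/page?language=French":  "<html>English content</html>",
    "https://example.com/page?language=German":  "<html>English content</html>",
}
print(param_looks_irrelevant(sample, "language"))  # True
```

Note that the failure mode is exactly the one Mueller warns about: if the one URL where the parameter does matter (language=Spanish in his example) never makes it into the sample, the entire parameter gets written off and that unique page may never be crawled.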

Let’s unpack this and look at it from a broader perspective. This particular example dealt with languages, but what Mueller said can apply to content in the same language as well.

What Mueller is saying is that Google may treat a page as a duplicate if its URL differs from the URLs of known duplicates only by a parameter that, elsewhere on the site, has made no difference to the content.

This is not ideal, as there may be instances where a page with unique content shares its URL parameter pattern with pages that are exact duplicates, and gets dismissed along with them.

Site owners can avoid having unique content dismissed as a duplicate by paying attention to how their site generates URL parameters.
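One practical safeguard, sketched below, is to give each language version its own distinct, crawlable URL and to declare the alternates explicitly with hreflang annotations rather than relying on a query parameter. hreflang is a documented Google mechanism for language versions; the helper function and the example.com URL scheme here are hypothetical.

```python
# Hypothetical helper: emit hreflang <link> tags so each language version
# lives at its own path-based URL and explicitly declares its alternates,
# instead of hiding the language in a query parameter Google may ignore.
LANGS = ("en", "fr", "de", "es")

def hreflang_tags(slug: str) -> str:
    """Build the <link rel="alternate"> block for a page identified by slug."""
    lines = [
        f'<link rel="alternate" hreflang="{code}" '
        f'href="https://example.com/{code}/{slug}" />'
        for code in LANGS
    ]
    # x-default points search engines at the fallback version.
    lines.append(
        '<link rel="alternate" hreflang="x-default" '
        f'href="https://example.com/en/{slug}" />'
    )
    return "\n".join(lines)

print(hreflang_tags("pricing"))
```

Path-based URLs such as /en/pricing and /fr/pricing sidestep the parameter-guessing heuristic entirely, and the explicit annotations signal that each version is intentional rather than a duplicate.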

Mueller acknowledged that it may not always be the webmaster’s fault when pages are treated as duplicates; sometimes Google has its own “bugs” as well.

The original question, along with Mueller’s response, can be seen in the video below starting at the 27:38 mark.
