The Strongest Cloaking Yet – The Cross-Domain Canonical Tag
Google has fought valiantly to stop these techniques and, by and large, has eliminated all but the most sophisticated of them. With the introduction of the new cross-domain canonical tag, however, they have fallen on their own sword.
The Canonical Tag
The rel=canonical tag was a god-send for most webmasters. It allowed us to defeat duplicate content issues by placing a single line of code in the head of the HTML page, unbeknownst to visitors, telling visiting search engine bots the intended URL for that piece of content. No matter how crafty, strange, or contrived the URL used to access the page was, the bot would ultimately know what it should be. Users needn’t be jostled by redirects, and webmasters needn’t rely on more complex server-side technologies to prevent duplicate content. All was well in the kingdom.
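For example, a page reachable at several URLs might declare its preferred address with a single line in the head (the URLs below are hypothetical):

```html
<head>
  <!-- Hypothetical example: whether a visitor arrives via
       /shoes?sort=price or /shoes?ref=newsletter, the bot is
       told the one canonical URL for this content. -->
  <link rel="canonical" href="http://www.example.com/shoes" />
</head>
```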
One important aspect of the canonical tag was that it only applied to same-domain content: the tag on one page could point only to another page on the same domain. This meant I could not sneak my canonical tag onto another webmaster’s site and expect to steal their PageRank.
This, however, has changed.
Cross-Domain Canonical Tag
The new cross-domain canonical tag allows webmasters to place a single tag that tells GoogleBot that the real page exists on another domain. The user does not see this tag unless they take the time to view the source. A cleverer webmaster could go further and cloak the page, so that the only thing that changes when GoogleBot visits is the presence of that one tag. How, then, could a webmaster use this to cloak? Simple: detect GoogleBot and serve it a version of the page containing a cross-domain canonical tag pointing at the site to be boosted.
All the PageRank, TrustRank, and potential rankings would be passed on to the target site.
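As a sketch of how such a cloak might look, assume a page on a hypothetical blackhat-controlled domain that already ranks, and a hypothetical target domain the blackhat wants to boost; both domain names are illustrative, not real:

```html
<!-- Hypothetical sketch. Version of the head served to human visitors: -->
<head>
  <title>Widget Reviews</title>
</head>

<!-- Version served only when the User-Agent looks like GoogleBot:
     identical except for one extra line, which tells Google that the
     "real" page lives on the blackhat's target domain. -->
<head>
  <title>Widget Reviews</title>
  <link rel="canonical" href="http://www.target-example.com/widgets" />
</head>
```

Because visitors never see the tag, and GoogleBot sees an otherwise identical page, the swap is difficult to detect from either side.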
A Solution in Search of a Problem
I don’t understand the cross-domain canonical tag. Nearly all webmasters with multi-domain duplicate content issues have server-side access, allowing them to prevent duplicate content easily with mod_rewrite, ISAPI Rewrite, and the like. Sites dealing with scrapers and content syndication will see no benefit from the cross-domain tag either, as it is highly unlikely that scraper sites will willingly put the canonical tag in place and forfeit any rankings they might have. Ultimately, Google has handed blackhatters a strong tool for cloaking, one that will require vigilance on Google’s part to detect and prevent.