Google posted a public service announcement saying you should disallow Googlebot from crawling your action URLs. Gary Illyes from Google posted on LinkedIn, “You should actually disallow crawling of your action URLs. Crawlers won’t buy that organic non-GMO scented candle, nor do they care for a wishlist.”
I mean, this isn’t new advice. Why let a spider crawl pages where it cannot actually take any actions? Googlebot cannot make purchases, cannot sign up for your newsletter, and so on.
Gary wrote:
A common complaint we get about crawling is that we’re crawling too much, which uses too much of the server’s resources (though doesn’t cause problems otherwise). Looking at what we’re crawling from the sites in the complaints, way too often it’s action URLs such as “add to cart” and “add to wishlist”. These are useless for crawlers and you likely don’t want them crawled.
If you have URLs like:
https://example.com/product/scented-candle-v1?add_to_cart
and
https://example.com/product/scented-candle-v1?add_to_wishlist
How should you block Googlebot? He said, “You should probably add a disallow rule for them in your robots.txt file. Converting them to HTTP POST method also works, though many crawlers can and will make POST requests, so keep that in mind.”
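For illustration, here is a minimal robots.txt sketch that would block crawling of the two example action URLs above. It assumes your action URLs use query parameters like Gary’s examples; adjust the patterns to match your own URL structure:

User-agent: *
Disallow: /*?add_to_cart
Disallow: /*?add_to_wishlist

Google’s robots.txt parser supports the * wildcard, so each rule matches any path that ends with the given query string.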
Now, a few years ago, we reported that Googlebot can add products to your cart to confirm your pricing is accurate. It seems to be part of the merchant shopping experience scorecard feature, so I would be a tad cautious with all of this.
Forum discussion at LinkedIn.
Note: This was pre-written and scheduled to be posted today; I’m currently offline for Shavuot.