Google’s John Mueller answered a question about why Google indexes pages that are disallowed from crawling by robots.txt and why it’s safe to ignore the related Search Console reports about those crawls.
Bot Traffic To Query Parameter URLs
The person asking the question reported that bots were creating links to non-existent query parameter URLs (?q=xyz) pointing at pages that have noindex meta tags and are also blocked in robots.txt. What prompted the question is that Google is crawling the links to those pages, getting blocked by robots.txt (without seeing the noindex robots meta tag), then reporting them in Google Search Console as “Indexed, though blocked by robots.txt.”
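As a hypothetical illustration (the actual rules from the site in the thread aren’t shown), a setup that produces this conflict could look like the following. The disallow pattern is an assumption based on the ?q=xyz example:

```
# robots.txt: blocks Googlebot from fetching the query parameter URLs
User-agent: *
Disallow: /*?q=

# Meanwhile, each of those pages carries a noindex directive in its HTML,
# but Googlebot never sees it, because crawling the page is disallowed:
<meta name="robots" content="noindex">
```

Because the robots.txt rule stops the crawl before the HTML is fetched, the noindex directive has no effect, which is exactly the situation Mueller addresses below.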
The person asked the following question:

“But here’s the big question: why would Google index pages when they can’t even see the content? What’s the advantage in that?”
Google’s John Mueller confirmed that if Google can’t crawl the page, it can’t see the noindex meta tag. He also makes an interesting mention of the site: search operator, advising people to ignore those results because “average” users won’t see them.
He wrote:
“Yes, you’re correct: if we can’t crawl the page, we can’t see the noindex. That said, if we can’t crawl the pages, then there’s not a lot for us to index. So while you might see some of those pages with a targeted site:-query, the average user won’t see them, so I wouldn’t fuss over it. Noindex is also fine (without robots.txt disallow), it just means the URLs will end up being crawled (and end up in the Search Console report for crawled/not indexed — neither of these statuses cause issues to the rest of the site). The important part is that you don’t make them crawlable + indexable.”
Takeaways:
1. Mueller’s answer confirms the limitations of using the site: advanced search operator for diagnostic purposes. One of those reasons is that it’s not connected to the regular search index; it’s a separate thing altogether.
Google’s John Mueller commented on the site: search operator in 2021:
“The short answer is that a site: query is not meant to be complete, nor used for diagnostics purposes.

A site query is a specific kind of search that limits the results to a certain website. It’s basically just the word site, a colon, and then the website’s domain.

This query limits the results to a specific website. It’s not meant to be a comprehensive collection of all the pages from that website.”
2. A noindex tag without a robots.txt disallow is fine for these kinds of situations where a bot is linking to non-existent pages that are getting discovered by Googlebot.
3. URLs with the noindex tag will generate a “crawled/not indexed” entry in Search Console, and those won’t have a negative effect on the rest of the website.
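Sketching Mueller’s recommendation under the same assumed ?q= pattern: drop the disallow rule so Googlebot can fetch the URLs and actually read the noindex directive:

```
# robots.txt: no Disallow rule for the ?q= URLs,
# so Googlebot is allowed to fetch them
User-agent: *
Disallow:

# On each page, the noindex directive Googlebot can now see:
<meta name="robots" content="noindex">

# Or, equivalently, as an HTTP response header (useful for non-HTML files):
X-Robots-Tag: noindex
```

With this setup the URLs show up as “crawled/not indexed” in Search Console, which, per Mueller, causes no issues for the rest of the site.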
Read the question and answer on LinkedIn:
Why would Google index pages when they can’t even see the content?
Featured Image by Shutterstock/Krakenimages.com