There is an interesting conversation on LinkedIn about a robots.txt that serves a 503 for two months while the rest of the site remains accessible. Gary Illyes from Google said that when the other pages on the site are reachable and available, that makes a big difference, but when those other pages are not, then "you're out of luck," he wrote.
Note, he specified the home page and other "important" pages as needing to be available...
The thread was posted by Carlos Sánchez Donate on LinkedIn, where he asked, "what would happen if the robots.txt is 503 for 2 months and the rest of the site is accessible?"
Gary Illyes from Google responded:
I'm not sure if we need to add more nuance to it; see last sentence. One aspect that's usually left out is whether our crawlers can consistently reach the homepage (or some other important pages? don't remember) while the robotstxt is unreachable. If it is, then the site might be in an okay, albeit limbo, state, but still served. If we get errors for the important page too, you're out of luck. With robotstxt http errors you really just want to focus on fixing the reachability as soon as possible.
The question was whether there should be more clarification on the robots.txt 5xx error handling in the documentation or whether to leave it alone.
This is a super interesting thread, so I recommend you scan through it if this topic interests you. Of course, most of you would say, just fix the 5xx errors and don't worry about this. But many SEOs like to wonder about the what-if situations.
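If you want a quick way to see how your own robots.txt and home page are responding, here is a minimal sketch using Python's standard library (the example.com URLs are placeholders for your own site) that prints the HTTP status codes and flags any 5xx server errors:

```python
# Minimal sketch: check whether robots.txt and the homepage return healthy
# HTTP status codes. The example.com URLs are placeholders for your own site.
import urllib.request
import urllib.error

URLS = {
    "robots.txt": "https://example.com/robots.txt",
    "homepage": "https://example.com/",
}

def status_of(url: str) -> int:
    """Return the HTTP status code for a GET request to url."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        # urlopen raises on 4xx/5xx; the exception still carries the code.
        return e.code

if __name__ == "__main__":
    for name, url in URLS.items():
        code = status_of(url)
        if 500 <= code < 600:
            print(f"{name}: {code} -- server error, fix reachability ASAP")
        else:
            print(f"{name}: {code}")
```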
Here is a screenshot of this conversation, but again, there is a lot more there, so check it out:
Forum discussion at LinkedIn.

