Google has quietly updated its Google crawlers and fetchers documentation to say that it will pick whichever protocol, HTTP/1.1 or HTTP/2, "provides the best crawling performance" for Googlebot. In fact, it may even switch protocols between crawling sessions if it needs to.
The change was so small that Google did not even document it on its updates page.
Google wrote:
Google's crawlers and fetchers support HTTP/1.1 and HTTP/2. The crawlers will use the protocol version that provides the best crawling performance and may switch protocols between crawling sessions depending on previous crawling statistics. The default protocol version used by Google's crawlers is HTTP/1.1; crawling over HTTP/2 may save computing resources (for example, CPU, RAM) for your site and Googlebot, but otherwise there's no Google-product specific benefit to the site (for example, no ranking boost in Google Search). To opt out of crawling over HTTP/2, instruct the server that's hosting your site to respond with a 421 HTTP status code when Google attempts to access your site over HTTP/2. If that's not feasible, you can send a message to the Crawling team (however this solution is temporary).
Google began crawling a limited number of URLs over HTTP/2 in November 2020, and a year later it was crawling about half the web over that protocol. HTTP/2 offers no direct SEO advantage, and you cannot force Google to crawl over HTTP/2.
Gagan Ghotra spotted this change and wrote on X that the documentation was updated on November 19th. He posted the before and after, which I, of course, verified:
Forum discussion at X.