“Sitemaps convey some very important metadata about the sites and pages which we could not infer otherwise, like the page’s priority and refresh cycle.”
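For readers unfamiliar with the format, the priority and refresh cycle mentioned above correspond to the optional `<priority>` and `<changefreq>` elements in the sitemaps.org protocol. A minimal entry (the URL and values here are illustrative) looks like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/</loc>
    <lastmod>2006-01-01</lastmod>
    <changefreq>daily</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>
```

Only `<loc>` is required; the other elements are hints that crawlers may weigh as they see fit.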
And how Google Sitemaps work:
“Sitemaps are downloaded periodically and then scanned to extract links and metadata. The valid URLs are passed along to the rest of our crawling pipeline — the pipeline takes input from ‘discovery crawl’ and from Sitemaps. The pipeline then sends out the Googlebots to fetch the URLs, downloads the pages and submits them to be considered for our different indices.”
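The "scan to extract links and metadata" step described above can be sketched with standard XML parsing. This is not Google's implementation, just a minimal illustration of pulling URLs and their hints out of a sitemap document; the sample XML and function name are assumptions for the example:

```python
import xml.etree.ElementTree as ET

# Hypothetical sitemap document; the namespace is the one defined
# by the sitemaps.org protocol.
SITEMAP_XML = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/</loc>
    <changefreq>daily</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>"""

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def extract_urls(xml_text):
    """Return (url, metadata) pairs found in a sitemap document."""
    root = ET.fromstring(xml_text)
    results = []
    for url in root.findall("sm:url", NS):
        loc = url.findtext("sm:loc", namespaces=NS)
        meta = {
            "changefreq": url.findtext("sm:changefreq", namespaces=NS),
            "priority": url.findtext("sm:priority", namespaces=NS),
        }
        # Only entries with a valid <loc> move on to the crawl queue.
        if loc:
            results.append((loc, meta))
    return results

for loc, meta in extract_urls(SITEMAP_XML):
    print(loc, meta)
```

In a real pipeline the extracted URLs would then be merged with discovery-crawl output before being handed to the fetchers, as the quote describes.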
Matt Cutts adds:
“It’s useful to remember that our crawling strategies change and improve over time. As Sitemaps gains more and more functionality, I wouldn’t be surprised to see this data become more important. It’s definitely a good idea to join Sitemaps so that you can be on the ‘ground floor’ and watch as Sitemaps improves.”
Also visit the Sitemaps community and blog for ongoing updates.
Tags: Google Sitemaps, Sebastian, Google, SEO, Search Engine Indexing, Sitemaps
About the author
Lee Odden has been recognized as a top B2B Marketing professional by Forbes, The Economist and the Wall Street Journal. For over 20 years he's worked with his team at TopRank Marketing to help elevate the B2B marketing industry through creative marketing programs that deliver more authentic, experiential and inclusive content for brands like LinkedIn, Dell and Adobe. Lee is the author of Optimize and has published over 1.4 million words on his agency's B2B marketing blog. As a trusted marketing thought leader, he has given nearly 300 presentations in 19 different countries on B2B content, search and influencer marketing. When not marketing, Lee is probably running, cooking or traveling.