Sebastian has an excellent interview with the Google Sitemaps team over at his Smart IT Consulting site.
Some excerpts:
“Sitemaps convey some very important metadata about the sites and pages which we could not infer otherwise, like the page’s priority and refresh cycle.”
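The priority and refresh cycle mentioned here correspond to the `<priority>` and `<changefreq>` tags of the sitemaps.org protocol (version 0.9). As a minimal sketch, here's how you might generate such a file in Python; the example.com URLs, dates, and values are placeholders:

```python
import xml.etree.ElementTree as ET

# Sitemaps protocol namespace (sitemaps.org, version 0.9).
NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

# Placeholder pages: <loc> is required, the rest are optional hints.
pages = [
    {"loc": "http://www.example.com/", "lastmod": "2005-11-18",
     "changefreq": "daily", "priority": "1.0"},
    {"loc": "http://www.example.com/archive.html", "lastmod": "2005-10-01",
     "changefreq": "monthly", "priority": "0.3"},
]

urlset = ET.Element("urlset", xmlns=NS)
for page in pages:
    url = ET.SubElement(urlset, "url")
    for tag, value in page.items():
        ET.SubElement(url, tag).text = value

ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8",
                             xml_declaration=True)
```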
And how Google Sitemaps work:
“Sitemaps are downloaded periodically and then scanned to extract links and metadata. The valid URLs are passed along to the rest of our crawling pipeline — the pipeline takes input from ‘discovery crawl’ and from Sitemaps. The pipeline then sends out the Googlebots to fetch the URLs, downloads the pages and submits them to be considered for our different indices.”
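Google's actual pipeline is of course far more involved, but the two-input design described above is easy to picture. A toy sketch, assuming nothing about Google's internals:

```python
from urllib.parse import urlsplit

def crawl_pipeline(discovery_urls, sitemap_urls, fetch):
    """Toy sketch: merge URLs from discovery crawl and from Sitemaps,
    dedupe, then hand each one to a fetcher (the 'Googlebots')."""
    seen = set()
    for url in list(discovery_urls) + list(sitemap_urls):
        # Keep only well-formed http(s) URLs, and fetch each once.
        parts = urlsplit(url)
        if parts.scheme in ("http", "https") and url not in seen:
            seen.add(url)
            fetch(url)

# Placeholder usage: a real fetcher would download the page and
# submit it to be considered for the indices.
crawl_pipeline(
    ["http://www.example.com/found-by-crawl.html"],
    ["http://www.example.com/from-sitemap.html"],
    fetch=print,
)
```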
Matt Cutts adds:
“It’s useful to remember that our crawling strategies change and improve over time. As Sitemaps gains more and more functionality, I wouldn’t be surprised to see this data become more important. It’s definitely a good idea to join Sitemaps so that you can be on the ‘ground floor’ and watch as Sitemaps improves.”
Also visit the Sitemaps community and blog for ongoing updates.
Tags: Google Sitemaps, Sebastian, Google, SEO, Search Engine Indexing, Sitemaps