How does Google know about so many sites serving so many interests all around the world? How does Google compile a list of millions of results from one simple search for something as mundane as “bread”? Do people pay money to have their name at the top of the search page? And what about the little guy – do they lose out on a chance at the top?
The more I learn about Google indexing, the more fascinated I am. Of course, I’ve said it before, I am decidedly right-brained and even the mention of the word “algorithm” makes the left side of my brain yawn, open one eye, sort of groan, and then complain about being woken up yet again to try to explain something to me that I am never going to get. Once we’ve all acknowledged this proclivity, it should come as no surprise that I really do picture tiny little robots going to town on all the pages of the web. (I mean, c’mon, this is the same person who relished the image of “hand-to-hand spam fighting” from last week.)
In all seriousness, I am thoroughly amazed by Google’s ability to index sites over and over again, registering new, out-of-date, and updated sites alike. I’m sure the hardware investment alone is significant, warranting this comment from its Webmaster Tools article on the Googlebot: “We use a huge set of computers to fetch (or ‘crawl’) billions of pages on the web.” (What does Google consider a “huge set of computers”?) The most welcome discovery was the end of my perception that people or organizations pay to be in the top ten. (I guess, in a way, they “pay” to optimize their chances of getting seen in a search, but they don’t pay Google.)
In this arena, everyone has a chance of being seen at the top of the search order if they’ve done what they needed to do in SEO on the front end — keyword placement, URLs, title tags, ALT tags, site maps, meta tags, back links, and more. Google’s ranking algorithm weighs more than 200 signals, and PageRank is just one of them. Every page is fair game for the spider, but it’s up to site managers to leave the right trail of breadcrumbs on the web.
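Of those signals, PageRank is the one with a famously simple core idea: a page is important if important pages link to it. A simplified version can be computed by repeated averaging over a link graph. The three-page graph below is hypothetical, and real PageRank involves far more refinement; this is just a sketch of the basic iteration:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Simplified PageRank by power iteration over a link graph.

    links: dict mapping each page to the list of pages it links to.
    A dangling page (no outlinks) spreads its rank evenly to all pages.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Everyone starts each round with the "random jump" share.
        new = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            targets = outlinks or pages  # dangling page links "everywhere"
            share = damping * rank[page] / len(targets)
            for t in targets:
                new[t] += share
        rank = new
    return rank

# Hypothetical three-page site for illustration.
graph = {"home": ["about", "blog"], "about": ["home"], "blog": ["home"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # → home (it has the most incoming links)
```

The page everything links to ends up with the highest score, which is the whole point: the spider follows the trail, but the links other sites leave behind decide who stands out.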
I’m wondering if there is a resource out there – a case study of sorts – that shows a “little” site doing things right enough to threaten one of the “big” ones. And with all these factors feeding the spider, is there a point where enough is enough?