Proper Coding Ethos
Separate from the obvious issues of usability are issues of search engine exposure. Search engines employ automated programs called "spiders" or "robots" to trawl the internet for content, which is then indexed according to each search engine's own specific (and secret) algorithm. The exact methods these spiders use to index a page are unknown, and they change frequently. Certain things hold true (for the time being): text rendered inside an image is not seen by the spider, but the "alt=" text most likely is. Spiders are code-based and thus incapable of "fuzzy" interpretations of your website. This is why proper code is crucial: if a spider hits an error, it may index your page improperly, or ignore it completely, severely limiting your exposure on search engines.
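To illustrate the alt-text point, consider a hypothetical logo graphic (the file name and wording below are invented for illustration). The words drawn inside the image file are invisible to a spider, while the alt attribute supplies text it can index:

    <!-- The text painted inside logo.gif cannot be read by a spider. -->
    <!-- The alt attribute below provides indexable text in its place. -->
    <img src="logo.gif" alt="Smith and Sons - Quality Hardware" width="400" height="80">

A spider skipping this image still records "Smith and Sons - Quality Hardware" as content on the page.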
It is impossible to know which glitches a spider can overlook, and which will stop it in its tracks. For this reason, we have taken the stance that the only code sure to be read properly by a spider is code that has passed W3C validation. The more of your code that validates, the more of it a spider can access. Once your site is fully indexed, more customers will find it among search results, increasing revenue. There is a demonstrable link between proper coding practice and revenue generation in today's online marketplaces, one that is unfortunately often ignored.
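To make the validation point concrete, here is a hypothetical snippet of the sort a browser will render forgivingly but the W3C validator will reject, followed by a corrected version (the file name and text are invented for illustration):

    <!-- Invalid: missing required alt attribute, mis-nested tags -->
    <img src="sale.gif">
    <p><b><i>Big summer sale</b></i> starts Monday.</p>

    <!-- Valid: alt text supplied, tags closed in the order they were opened -->
    <img src="sale.gif" alt="Big summer sale">
    <p><b><i>Big summer sale</i></b> starts Monday.</p>

A visitor's browser shows both versions identically, but only the second is guaranteed to parse cleanly for a spider.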
It is possible that search engine spiders aren't quite as strict as the validation rules, but on an issue of this importance it's better to be safe than sorry. Next week, we will delve into proper code declarations to maximize a spider's ability to detect and index your site.