As a search engine marketing consultant, I see a lot of websites that haven't been optimized for the search engines. Over the years, I continue to see the same problems on what I call "non-optimized websites." Typically, the biggest issues aren't bad web design; they're basic elements that simply weren't included when the website was developed. Most search engine optimization problems can be fixed by making sure nothing is overlooked when building a web page.

One reason I think the internet has been so successful is that website technology has allowed anyone to make a website and publish it on the internet. There are a lot of people out there who have absolutely no formal training when it comes to building websites, and a lot of people who build websites for a living are self-taught. That's actually a good thing, because it has allowed the internet to grow at such a rapid pace. There are website standards in place, such as those developed by the W3C (the World Wide Web Consortium). Those standards, though, have more to do with the underlying code than with what the typical website visitor actually sees.

Problems arise when search engine spiders try to crawl websites and figure out what each web page is "about." Search engine spiders aren't humans, so they cannot read and interpret a web page without help from the person who created it. Certain elements must be included in a web page to help the spiders figure out what the page is about, which ultimately helps your website be found in the search engine results.

Duplicate Content

The search engines do not want multiple copies of the same web page in their indexes. Duplicates take up unnecessary room in their databases and add to the processing they have to do on a regular basis. Google, in particular, has been removing web pages from its index that it deems to be duplicates. If you have duplicate web pages on your website, Google will keep the first copy it finds and throw out the others.

When I look at websites that haven't been optimized for the search engines, I typically see a lot of duplicate content, or web pages that Google thinks are duplicates. The website owners don't mean to have duplicate pages, but their pages are most likely considered duplicates by Google. As a rule of thumb, a web page needs to be at least 25 percent different from every other page in order to be considered unique. Using the same title tag and meta tags on every page is one major factor: the search engines treat the title tag and meta tags as part of the page, so if those are identical on every page, the pages can look like duplicates. The search engines then look at the overall content of the page. If there's not much text but a lot of graphics on the page, it could also be flagged as a duplicate. To keep your pages from being considered duplicates, every web page on your website needs a unique title tag, meta keywords tag, and meta description tag, plus enough indexable content on the page itself.
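To make that rule of thumb concrete, here is a minimal sketch of how such a duplicate check could work. The 25 percent figure is the article's heuristic, not a published search engine algorithm, and the sample pages and the similarity measure below are my own assumptions for illustration:

```python
from difflib import SequenceMatcher

# Hypothetical pages: same template, same title tag and meta keywords,
# and only a slightly different line of body text.
page_a = ("Acme Widgets | Home", "widgets, acme", "Welcome to our site.")
page_b = ("Acme Widgets | Home", "widgets, acme", "Welcome to our store.")

def similarity(page1, page2):
    # Treat title tag + meta keywords + body text as the indexable content,
    # and compare the two pages as plain strings.
    text1 = " ".join(page1)
    text2 = " ".join(page2)
    return SequenceMatcher(None, text1, text2).ratio()

ratio = similarity(page_a, page_b)
# Under the article's rule of thumb, pages less than 25 percent different
# (more than 75 percent similar) risk being treated as duplicates.
verdict = "possible duplicate" if ratio > 0.75 else "unique enough"
print(f"similarity: {ratio:.2f}", verdict)
```

Giving each page its own title tag, meta tags, and body text drives the ratio down, which is exactly the fix described above.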

Not Enough Content

Many web designers like to use fancy graphics on websites; they make the sites look cool and visually appealing. There's only one problem: search engine spiders cannot read text that appears inside graphics. So text that can be read by a human won't necessarily be read by the search engine spider. The spiders consider text, not graphics, to be content. Oftentimes I see websites that are very well designed, but because all the text on the web pages appears only in graphics, each page is most likely considered a duplicate of another. Only the image file name is referenced in the source code of the page, which doesn't satisfy the "25 percent unique" requirement of the search engines.
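To see why graphic-heavy pages come up empty, consider what a simple spider actually extracts from the markup. This sketch (the sample pages and file names are hypothetical) pulls out only the text a parser can see:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects the text a basic search engine spider can index."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        # Only literal text between tags is collected; words rendered
        # inside an image file are invisible here.
        if data.strip():
            self.chunks.append(data.strip())

def indexable_text(html):
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)

# A page whose "content" lives entirely in a graphic, versus a text-based page.
image_only = '<html><body><img src="header.gif"></body></html>'
text_based = '<html><body><p>Hand-made widgets, shipped worldwide.</p></body></html>'

print(repr(indexable_text(image_only)))   # '' (nothing for the spider to read)
print(repr(indexable_text(text_based)))   # 'Hand-made widgets, shipped worldwide.'
```

The image-only page yields an empty string: as far as the spider is concerned, the page has no content at all.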

The more text included on a web page, the better a search engine can figure out what that page is about. Title tags are essential; they should give a quick overview of the general subject of the page. And since meta tags are considered part of the content of a web page, the text in the meta tags helps make the page unique, which further helps satisfy the "25 percent unique" requirement of the search engines.

Search Engine Spider Issues

The search engine spiders, when visiting your website, need to be able to crawl their way through all the web pages on your site. It's absolutely necessary that they can follow the links on your web pages. If they can't follow the links, it's likely that only the home page of your website will be listed in the search engines. Unfortunately, there are a lot of website navigation techniques that look good and function well for human visitors but won't allow a search engine spider to follow the links. It's imperative that your website's navigation is search engine friendly and that you include a breadcrumb trail.
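The difference is easy to demonstrate. A basic spider collects the href values of plain anchor tags; navigation that depends on a script to load pages exposes no such value. In this sketch (the markup and the loadPage function are hypothetical), only the real anchors, including the breadcrumb trail, are discoverable:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href values a basic spider can follow."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

nav = """
<a href="/products.html">Products</a>
<span onclick="loadPage('about')">About Us</span>
<a href="/">Home</a> &gt; <a href="/products.html">Products</a> <!-- breadcrumb trail -->
"""

extractor = LinkExtractor()
extractor.feed(nav)
print(extractor.links)  # ['/products.html', '/', '/products.html']
```

The script-driven "About Us" link never appears in the list, so a spider relying on anchors would never reach that page.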

Lack of Links

In order for web pages to be crawled, indexed, and ranked well in the search results, they need links. Lately, the search engines have been relying more and more on linkage data (what other websites say about your website) to determine rankings. So not only is it important for your internal navigation to be search engine friendly, it's also important that your web pages have links from other websites. Links to your web pages help the search engine spiders find them, and the more links your pages have, the better. Websites that haven't been optimized typically lack both search engine friendly internal links and links from other websites.

By making sure your web pages aren't considered duplicates and by providing enough search engine friendly content on them, your website will benefit from increased visibility in the search engine results. Working on your website's internal navigation, including a breadcrumb trail, and getting more links to your web pages will increase the likelihood of the search engine spiders finding your pages. These are the most often overlooked issues, and they are the biggest contributors to poor search engine rankings.

November 4, 2005

Bill Hartzer currently runs a strategic online marketing consultancy offering services such as search engine optimization, social media marketing, and online reputation management.

Bill Hartzer formerly managed the Search Engine Marketing division of Vizion Interactive and MarketNet, leading interactive marketing and website design firms in the Dallas, Texas area.

Hartzer is a successful writer, blogger, and search engine and social media marketing expert. During the past fifteen years, his many accomplishments include:

Search Engine Marketing Manager, Vizion Interactive
Search Engine Marketing Manager, MarketNet
Search Engine Optimization Strategist, Intec Telecom Systems PLC
Webmaster, Intec Telecom Systems PLC
Founder, Dallas/Fort Worth Search Engine Marketing Association
Owner/Author, Corporate Web Site Marketing
Administrator, Search Engine Forums
Frequent Speaker, Search Engine Strategies Conferences
Frequent Speaker, WebmasterWorld's PubCon Search Engine and Internet Marketing Conference
