My last three posts covered a variety of questions regarding keyword usage, links and website architecture. In this post I'll address the final question, which has to do with the visual display of your pages, duplicate content and CSS.

While on the BBC website I noticed that they have an optional low graphics version for all of their pages. I am not sure how they do this, but I decided I could do the same by making a low graphics imitation of each of my pages, with a button on each page that allows people to switch back and forth. I have made all my low graphics pages not only with no or low graphics but with all web safe colors and web safe fonts. The text on the low graphics pages is identical to my regular pages.
Is this considered duplicate content? Will this hurt my search rankings? If it would hurt my rankings, could I avoid that by using nofollow tags on the links to the low graphics pages, or would they still get indexed and subsequently hurt my rankings? Is there something I could insert in a robots.txt file so that the spider would not go to those pages at all?

The way this is done is by creating multiple Cascading Style Sheets (CSS). The BBC allows its users to change display settings in a variety of ways, including six pre-set options. Each option, once selected, imports a different style sheet used to display the page. The page URL remains the same, but the way the content appears changes.
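As a rough sketch of how that kind of style switching can work (all file and function names here are hypothetical, not the BBC's actual implementation), the page keeps a single URL and simply swaps which stylesheet is applied:

```html
<!-- Hypothetical sketch: the same page, one URL, multiple stylesheets. -->
<head>
  <link id="theme" rel="stylesheet" href="/css/default.css">
  <script>
    // Swap the stylesheet in place. The page URL never changes,
    // so no duplicate URLs are created for the search engines to find.
    function setTheme(name) {
      document.getElementById('theme').href = '/css/' + name + '.css';
    }
  </script>
</head>
<body>
  <a href="#" onclick="setTheme('low-graphics'); return false;">Low graphics</a>
  <a href="#" onclick="setTheme('default'); return false;">Standard version</a>
</body>
```

The key point is that the switch happens on the same URL, which is exactly why this approach sidesteps the duplicate content problem described below.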

By using CSS to change the display, the BBC, or any site really, can have unlimited viewing options without creating any duplicate content issues. Based on the question above, the way the low-res version was implemented will produce multiple pages (URLs) that use the same content that is also used on the "normal" version. This creates duplicate content that will potentially be a problem for the search engines. And yes, it could affect your rankings.

If you want to implement multiple layouts of your pages, or even a printer-friendly version, CSS is the way to go. It's the easiest and cleanest way, and doesn't open the door to any potential duplicate content issues. However, if you want to do things the hard way, there are a couple of things you can do that will help prevent such duplicate content problems.

The first, as mentioned in the question, is to use the nofollow attribute. All links pointing to these alternate versions should be nofollowed. I'd also back that up with a robots.txt exclusion.
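That robots.txt exclusion might look like the following, assuming (hypothetically) that all the low graphics versions live under a single directory:

```
User-agent: *
Disallow: /low-graphics/
```

This tells compliant spiders not to crawl anything under that path. Keeping the alternate versions in one directory makes this kind of blanket rule possible; if they're scattered across the site, you'd need a Disallow line for each one.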

Secondly, you could implement the canonical tag. In the `<head>` section of your code place the following:

<link rel="canonical" href="" />

This tag would need to be placed on each alternate version of the page, with the link pointing back to the main version. This tells the search engines that the other versions are not the canonical one and therefore should not be treated as duplicate content.
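Filled in with hypothetical URLs, the tag on a low graphics page would look something like this:

```html
<!-- Placed in the <head> of the alternate version,
     e.g. http://www.example.com/low-graphics/widgets.html -->
<link rel="canonical" href="http://www.example.com/widgets.html" />
```

The href always points at the one version you want the engines to index and rank, no matter which alternate version the tag sits on.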

These are band-aid solutions and I wouldn't recommend them. Creating unique CSS is simpler, cleaner, and ultimately the better route to go.

April 22, 2009

Stoney deGeyter is the President of Pole Position Marketing, a leading search engine optimization and marketing firm helping businesses grow since 1998. Stoney is a frequent speaker at website marketing conferences and has published hundreds of helpful SEO, SEM and small business articles.

If you'd like Stoney deGeyter to speak at your conference, seminar, workshop or provide in-house training to your team, contact him via his site or by phone at 866-685-3374.

Stoney pioneered the concept of Destination Search Engine Marketing which is the driving philosophy of how Pole Position Marketing helps clients expand their online presence and grow their businesses. Stoney is Associate Editor at Search Engine Guide and has written several SEO and SEM e-books including E-Marketing Performance; The Best Damn Web Marketing Checklist, Period!; Keyword Research and Selection, Destination Search Engine Marketing, and more.

Stoney has five wonderful children and spends his free time reviewing restaurants and other things to do in Canton, Ohio.


Why would you not just put a "noindex" on the low res pages in the meta areas for those pages? You do not want them indexed anyway, and therefore you would be eliminating the duplicate content, as the spiders would not index those pages. Am I missing something here, or is that not the simplest way to do that?

The noindex would serve the purpose of keeping the page out of the search index but would not prevent the site from passing link juice to those pages. I actually meant to discuss using the robots.txt file but got sidetracked and forgot. :) The noindex would accomplish approximately the same thing. But even still, it's a band-aid approach. It fixes the duplicate content problem but not that of link juice.
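For reference, the meta tag in question would go in the `<head>` of each low graphics page, something like:

```html
<!-- Keeps the page out of the index; "follow" still lets
     spiders follow the links on the page itself. -->
<meta name="robots" content="noindex, follow">
```

As noted above, though, this only stops the page from being indexed; links pointing at the page still spend link juice on it.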

Thanks Stoney.

I have a similar question:

I've created several "almost" identical landing pages for phrases people use to find VoIP telephone service. Mostly I did this to increase my quality rating for Google AdWords (keeps costs down). I jerked the basic page around a bit, but didn't want to lose important information the visitor needs to see what we offer, so around 85% of the pages are the same (sometimes in different sequence).

Will this be a problem with "normal" rankings?

Al, if you are creating pages like these for AdWords landing pages (oftentimes a very good idea), you'll just want to make sure to keep those pages out of the search indexes. First, don't link to those landing pages, unless the content is significantly different and each provides unique value to the search engines. Second, use the robots.txt file to disallow the engines from indexing those pages. It probably would not hurt to add the canonical tag to each, pointing to the main optimized page for that product.

I've always wondered about this. There can be a lot of duplicate content when I talk about nj mortgages. I always wondered how the search engines do this, and handle duplicate graphics. I was under the impression that if at least 60% of your content was seen as duplicate, the search engines would slap you for it.


What is the % of content you are allowed to use as duplicate? I try to write unique content, but at times it is similar, and I wouldn't want to get penalized. This is all fairly new to me. Thanks, Steven

I don't think you'd want any more than 50% of an article to be duplicate. And at that, you probably don't want to have the same content duplicated on article after article.

"The noindex would serve the purpose of keeping the page out of the search index but would not prevent the site from passing link juice "

Is this possible? If a page is not indexed by crawlers, how could the page get any PR? Could you please explain it in more detail?


Search Engine Guide > Stoney deGeyter > Q&A: A Little Something You Need to Know About Duplicate Content and CSS