Don't let duplicate pages and bad URLs destroy your SEO. Kill it dead!

This series is pulled from a presentation given at SMX East. Part I of this series covered the problems duplicate content creates. Part II covered some of the causes of duplicate content. This post covers some of the solutions that will help you fix your duplicate content problems.

Quick Recap:
Part I: Duplicate Content Causes Problems. Duh!
Part II: There is No Single Cause of Duplicate Content. Don't collect them all!

Great! Now let's move on.

Only You Can Prevent Duplicate Content

The Solutions: The Power is Within You.

Finally! Now we can address some of the solutions to the problems duplicate content creates.

Not all duplicate content issues are easily fixable, and some may be outside of your own control. But, those that are in your control do need to be addressed sooner rather than later. Or, you could just sit back and wait for Google to figure it all out. Don't worry, it's all good. Google's got your back!

But, while you're praying to Google for lavish blessings, I'll be working with my clients to fix problems that are holding them back in the search results.


Solution: Search friendly links.

In Part II, I showed you two types of links that were not very search engine friendly. So what makes a link friendly? It doesn't use any JavaScript, and it has an "href" attribute that points directly to the URL being linked to. This may be gibberish to those of you who don't know HTML code, but it's important for you to know this so you can tell your developers exactly what kind of links you need.

This is a basic HTML link. Nothing fancy. That's not to say you can't do fancy things with it: these links can be styled with CSS and can even have JavaScript applied to them. The crucial thing is that the link itself is very search engine friendly. If all your links are built like this, you will always know the search engines can spider them.
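Here's a minimal sketch of what that looks like (the domain and anchor text are hypothetical), contrasted with the kind of JavaScript-driven link spiders struggle with:

```html
<!-- Search engine friendly: a plain anchor with a real URL in the href -->
<a href="http://www.example.com/widgets/">Browse our widgets</a>

<!-- Not friendly: the destination is buried in JavaScript, so there is
     no crawlable URL in the href for a spider to follow -->
<a href="#" onclick="window.location='/widgets/'; return false;">Browse our widgets</a>
```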


Solution: Link Consistency.

If you're going to link to a page, be consistent about it. We covered how the same page can be linked in several different ways. You can implement redirects and canonical tags (which I'll cover below), but regardless of the other solutions you put in place, be sure to be consistent in how you link to all pages in your site.

If you want to use the "www.", then use it on every link. If you want to link to the default page without using the file name of any directory or sub-directory, then do that consistently as well. Half of the problem with duplicate content is pages being linked inconsistently throughout the site. Fix your link structure first, then work on the rest of the solutions.
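To illustrate with a hypothetical domain, each of these URLs can serve the exact same home page, but to a search engine every one is a distinct address:

```
http://example.com/
http://www.example.com/
http://www.example.com/index.html
http://www.example.com/Index.html
```

Pick one form, say `http://www.example.com/`, and use it in every internal link on the site.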


Solution: Secure shopping path.

In Part II, I talked about the problems that happen when visitors move into the secure area of your site. Oftentimes these secure areas contain links back out, but maintain the secure "https" in the URL. This creates both a secure and a non-secure version of the same page. A dupe. The solution here is two-fold.

First, don't let the search engines enter your shopping cart area at all. Secure or not, keep them out! There is nothing there for them to see. Second, once visitors are in the secure area, be sure that any links back out of the checkout area go to the non-secure URLs, not secure versions of the same pages. It's OK for visitors to move in and out of the secure area, but what you don't want is them (or the search engines) accessing secure versions of pages that aren't meant to be secure.

Hard code all of your links out of your secure area to be sure they are not using the secure "https" in the URL. Problem solved.
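One way to handle the first half is a robots.txt disallow on the cart. The paths here are hypothetical; use whatever directories your cart actually lives under:

```
# robots.txt - keep spiders out of the cart and checkout entirely
User-agent: *
Disallow: /cart/
Disallow: /checkout/
```

The second half means writing the links out of checkout as absolute, non-secure URLs (domain hypothetical):

```html
<!-- Inside the secure checkout, link back out with an absolute http:// URL
     so visitors don't carry https into the rest of the site -->
<a href="http://www.example.com/widgets/">Continue shopping</a>
```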


Solution: Canonical URLs.

The canonical tag (or attribute. Whatever.) is the ultimate band-aid solution for duplicate content. The search engines released this as a way to give them a "hint" about which of all your duplicate pages is supposed to be the genuine URL.

This solution is only necessary if you can't get your pages properly redirected, or duplicate URLs eliminated, via smart linking and content management implementation. It's the ultimate "if I can't do anything else" solution. And really, I wouldn't worry about it unless you can't implement any other type of fix.

The idea here is to put the tag in the head code of each duplicate page, pointing to the URL of the "proper" page. The search engines are supposed to treat it much like a redirect when assigning link and other values to the page.
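For example, if `/widgets/?sessionid=123` were a duplicate of `/widgets/` (a hypothetical URL), each duplicate version would carry this in its head code:

```html
<head>
  <link rel="canonical" href="http://www.example.com/widgets/" />
</head>
```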


Solution: Link only to canonical pages.

If you can't eliminate your duplicate pages and must use the canonical tag, I would also do my best to link only to the canonical version of each page. I wouldn't rely on the search engines to transfer all your link values from the incorrect URL to the correct one. Maybe they will, maybe they won't. But if you make sure your internal links point only to the canonical page, you've accounted for half the problem.

The other half will be external links, which redirects (see below) will handle. Linking to the canonical page ensures that all internal linking value will be passed to the proper page without relying on the search engines to get the "hint". "Don't make them [the search engines] think" is still the best play.


Solution: Redirect old links.

The absolute best solution to maintaining link value to the pages that are supposed to receive it is the use of the redirect. Whether you have deleted or moved old pages, or have duplicates with a single canonical page, using the 301 redirect (along with linking to the correct page) is the best solution available.

This doesn't require any thinking on behalf of the search engine or the visitor, and you never have to worry about what URLs are being used in links to your site, because only the correct URL is being served. This is the Big Kahuna (along with linking to the correct page) of duplicate content and bad URL solutions.

If you don't know how to implement redirects, talk to your developers. They should know the best solution for you, but be sure they implement a 301 redirect, and nothing less.
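For sites on Apache, here's a sketch of what such 301s might look like in an .htaccess file, assuming mod_rewrite is available and example.com stands in for your domain:

```apache
# Force the www version of the domain with a permanent (301) redirect
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]

# 301 a deleted or moved page to its replacement
Redirect 301 /old-page.html http://www.example.com/new-page/
```

Other servers (IIS, nginx) have their own equivalents; the important part is that the response code is a 301, not a 302.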

Duplicate content can be problematic, but implementing these solutions will do wonders toward eliminating the problems and reducing the amount of online clutter your site may be producing. Once the clutter is eliminated, your site should perform significantly better in the search engines, which is the goal we should all be shooting for.

February 2, 2011

Stoney deGeyter is the President of Pole Position Marketing, a leading search engine optimization and marketing firm helping businesses grow since 1998. Stoney is a frequent speaker at website marketing conferences and has published hundreds of helpful SEO, SEM and small business articles.

If you'd like Stoney deGeyter to speak at your conference, seminar, workshop or provide in-house training to your team, contact him via his site or by phone at 866-685-3374.

Stoney pioneered the concept of Destination Search Engine Marketing which is the driving philosophy of how Pole Position Marketing helps clients expand their online presence and grow their businesses. Stoney is Associate Editor at Search Engine Guide and has written several SEO and SEM e-books including E-Marketing Performance; The Best Damn Web Marketing Checklist, Period!; Keyword Research and Selection, Destination Search Engine Marketing, and more.

Stoney has five wonderful children and spends his free time reviewing restaurants and other things to do in Canton, Ohio.


There's been a lot of recent development in the past week or so about Google and the Duplicate Content Myth; I just made a post about it on my blog. I'd have to say it's definitely a real thing for us to keep an eye out for these days. This is an EXCELLENT post that you made discussing it. Keep up the Great Work!

Very good article, and it works into what I'm doing now. I've actually been wondering this about our site that I'm redoing now. We have lots of books listed by their authors, or reviews, on their own detailed pages. We also have generated pages showing lists of the same books in different categories, linked back to the original book pages, so people can find them. Is that considered duplicate content?

Tom, I don't think any of that constitutes duplicate content, unless the additional pages are using content that is already on other pages. And then it's a matter of degree. If these pages are entirely made up of duplicate content, then I might want to fix that. The important thing is to have only ONE URL for each book, regardless of how the visitor navigated to it.

