Delivering syndicated content for localized searches – still duplicate content?

Today’s WebProNews article dropped in with a section highlighting a topic brewing in their forums. If you are syndicating your own content for the sake of “serving localized searches”, is that still considered duplicate content in Google’s eyes?

While it really makes sense that we should work to serve our customers and “not change the business just because of Google”, the big G has also not been very forgiving about duplicate content.

From my point of view, I still feel it is a no-no to have the same content on two or more sites – if you want both to perform well… unless you are already an authority site like most online newspapers, which are known to be in the business of reporting syndicated stories from all around.

Update: I was thinking about this one. Since Google already has things like the rel=nofollow tag and section targeting for AdSense publishers to demarcate which content their contextual ads should be based on, shouldn’t they create something similar to section targeting, but specifically for duplicate content? i.e. a way for people to demarcate a section saying, “This is syndicated content and I know it. I don’t expect this page to get listed in your rankings, because I’m doing this to serve my customers – but don’t penalise me for it”?

Update 2: Sheesh… I may be answering my own question, but perhaps a directive in the robots.txt file might do the trick, albeit not at as fine a scale as section targeting.
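
For instance, here is a minimal sketch of what such a directive might look like, assuming the syndicated copies are kept under their own directory (the /syndicated/ path here is purely hypothetical):

User-agent: *
Disallow: /syndicated/

This asks compliant crawlers not to crawl anything under that directory, so the syndicated copies can still serve your local visitors while being kept out of the crawl – though, as noted, it works at the level of whole URLs rather than individual sections of a page.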

What are your thoughts?

Comments

  1. Google has a problem with duplicate content and it’s been discussed for a while. Scraper sites use RSS to “hijack” your content and call it their own, and some of these sites may actually rank higher than the source because Google has no way of determining which is the original author.

    What about “duplicate” content on the same domain? I’ve not had problems with regular HTML/PHP pages but I just found out yesterday that many of my older blog posts have fallen into the dreaded Supplemental Index.

    It seems Google has a problem with WordPress too, i.e. it considers the same post appearing on the index page, category pages, archives and RSS feed as duplicates.

    I read a bit this morning and you’re right that robots.txt may be the best solution, although I did see one or two plugins.

  2. Pingback: SOB Business Cafe 06-15-07 - Liz Strauss at Successful Blog - Thinking, writing, business ideas . . . You’re only a stranger once.

  3. Hey Larry,

    Thanks for your thoughts. I read about some plugins for WordPress too. I guess Google really has to buck up a little in terms of identifying who is the original author or publisher of any content!

  4. Hi Kian Ann,

    Wow, this is scary alright. But a bit unethical too.
    Anyway, I came here for a purpose! Haha..

    I just posted on a video by Rick Warren and I hope you can take a look to learn something from it, ya?
    Because it will change your life for the better, certainly..

  5. Pingback: Singapore SEO & Online Marketing Blog

  6. Actually, I’m not too sure about your statement regarding Google coming down hard on duplicate content. There is a lot of extremely good content out there that is illegally or unknowingly duplicated by other websites, and it would hardly be right to penalize the originator. I think penalize is not the right word – discounted or ignored is perhaps better? You don’t get demerit points for having duplicate content; it’s just that you don’t gain additional points. My 2 cents 🙂

    By the way, I came across your blog and I must say it’s a great blog. Do check out my relatively new blog too. Would love your comments! Cheers.

  7. James,

    I agree with your reasoning. Google doesn’t penalize websites that publish duplicated stuff, it just ignores them. Going into “supplemental” isn’t a penalty; it is advice from Google that we’ll have to buck up with better content – fresh and unique.

    In my opinion, a penalty would be having a website blacklisted from Google and having none of your pages listed in Google.

    Use the ‘site:’ operator to find out how many of your pages are indexed by Google. At the Google search box, type:

    site:blogopreneur.com

    To find out who is using your content, try http://www.copyscape.com

  8. Hey James and Shi,

    Yes, James, you are right – I think we all should define the difference between “ignore” and “penalize”.

    Come to think of it, in the case of duplicated content, is it really necessary to use a robots.txt file? Is it better to keep those pages off the index altogether (using robots.txt), or to leave them in the supplemental index?

  9. Pingback: RSS 101 Tutorial - Part 1 | Internet Marketing and SEO by TrafficBoosterProV2.com

  10. Hi everybody,
    I also agree that Google doesn’t penalize websites that publish duplicated stuff.
    There is a lot of duplicated content on EzineArticles.com, and some bloggers also take content from there, so there ends up being even more of it.
    Google may discount or ignore it.