Part 3 in the SEO Death series
There are all sorts of things that can kill your search engine traffic, but perhaps none is more humiliating than leaving your development server accessible to search engine robots. If your test environment gets crawled and its pages get indexed, you can suddenly have a duplicate of your entire site competing with the live one, and your rankings are guaranteed to go any direction except up. To make matters worse, most of the pages on the dev site will probably have bad data, or no data at all.
A good way to check whether your dev server is in the index is to search for one of the dev URLs. Start with the homepage. If it's in the index, chances are you're screwed.
Note to engineers: make sure your dev server is inaccessible to robots (and anyone else) by implementing a login. If you want to be super cautious, which is always a good idea, triple bag that sucker by also disallowing robots in the robots.txt file and tagging all dev pages as “noindex” (just remember to remove the noindex tags when you push the pages live).
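For reference, the belt-and-suspenders setup described above looks something like this. This is just a sketch; the robots.txt goes at the root of the dev domain, and the meta tag goes in the `<head>` of every dev page:

```
# robots.txt on the dev server -- tells all well-behaved crawlers to stay out
User-agent: *
Disallow: /

<!-- and on each dev page, a noindex meta tag as a second layer -->
<!-- (remember to strip this when the page goes live) -->
<meta name="robots" content="noindex">
```

Neither of these actually blocks access, which is why the login is the real protection; robots.txt and noindex only work on crawlers that choose to obey them.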
Additional SEO=Death Posts:
3 Response Comments
Is it possible to sabotage your competitor by creating a doppelganger of their site?
Are there instances where this has happened?
I’ve had that happen as well, and here’s what I did to recover and prevent it from happening again:
1) Get the correct robots.txt implemented, and make the file read-only so it doesn’t accidentally get overwritten.
2) Verify the dev site in Google Webmaster Tools.
3) Request removal of the entire site, via GWT.
4) Go to Code Monitor (https://www.polepositionweb.com/roi/codemonitor/) and have that site monitor the robots.txt file on the dev site so you know right away if it gets changed.
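Steps 1 and its read-only trick can be sketched in a couple of shell commands. The path and permissions here are assumptions; adjust for your own dev server's web root:

```shell
# Write a blanket-disallow robots.txt for the dev site
cat > robots.txt <<'EOF'
User-agent: *
Disallow: /
EOF

# Make it read-only for everyone (owner included), so a careless
# deploy script can't silently overwrite it with the live version
chmod 444 robots.txt

# Verify the permissions stuck
ls -l robots.txt
```

Note that this only guards against accidental overwrites; anyone with sufficient privileges can still `chmod` it back, which is why the external monitoring in step 4 is worth having too.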
One of my old design guys still does this and refuses to fix it… And he threw up some AdSense above and below the sites he makes…
I no longer use them…