How does this prevent search engines from crawling the website?

Donotlink.com routes links to questionable sites through a unique intermediate URL that forwards the visitor to the destination via JavaScript. That URL is blocked in our robots.txt file, so (search engine) robots are discouraged from crawling it. The "noindex" and "nofollow" properties on the link and the intermediate page give robots another reminder not to crawl it. If a known robot decides to crawl the link anyway, our code identifies it and serves a blank page (403 Forbidden).
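The layered approach described above (robots.txt disallow, a "noindex, nofollow" interstitial, and a 403 for recognized crawlers) can be sketched roughly as follows. This is a hypothetical illustration, not donotlink.com's actual code; the `/out/` path, the bot signature list, and the function names are assumptions made for the example.

```python
# Minimal sketch of an interstitial-link service that discourages crawlers.
# The bot-signature list and URL layout are illustrative assumptions.
KNOWN_BOT_SIGNATURES = ("googlebot", "bingbot", "yandexbot", "duckduckbot")

# Layer 1: robots.txt blocks the intermediate URLs outright.
ROBOTS_TXT = """User-agent: *
Disallow: /out/
"""

def is_known_bot(user_agent: str) -> bool:
    """Crude user-agent sniffing for well-known search engine crawlers."""
    ua = user_agent.lower()
    return any(sig in ua for sig in KNOWN_BOT_SIGNATURES)

def serve_intermediate(user_agent: str, destination: str):
    """Return (status_code, body) for a request to the intermediate URL."""
    # Layer 3: recognized robots get a blank 403 Forbidden response.
    if is_known_bot(user_agent):
        return 403, ""
    # Layer 2: humans get a page tagged noindex/nofollow that forwards
    # them to the real destination via JavaScript.
    page = (
        "<html><head>"
        '<meta name="robots" content="noindex, nofollow">'
        f'<script>window.location.replace("{destination}");</script>'
        "</head><body></body></html>"
    )
    return 200, page
```

Because the redirect happens in JavaScript rather than via an HTTP redirect or a plain `<a href>`, even a crawler that ignores all three hints gains no followable link to the destination.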