Keyword stuffing, gateway pages, comment spam, and similar tactics earned the first search engine optimisers a deservedly poor reputation within the web community. Some companies continue to peddle these harmful techniques to unsuspecting website owners today, perpetuating the myth that optimising your website for Google or Bing is an inherently villainous practice. Needless to say, this is not true. The fundamental principles of technical SEO, including crawl efficiency, indexation control, and link profile maintenance, are legitimate and worth understanding:

  1. Search engines use automated bots, commonly known as spiders, to find and crawl content on the web. If Googlebot cannot crawl your website efficiently, the site will not perform well in organic search: regardless of its size, history, and popularity, severe crawl-accessibility issues will cripple its ability to rank organically (a quick way to check one aspect of this is sketched after this list).
  2. Disregard the popular advice that all information should be accessible within three clicks, and instead build your website in accordance with Information Architecture (IA) best practices. This architecture should be reflected in a static, human-readable URL structure, free of dynamic parameters where possible, which uses hyphens rather than underscores as word separators (see the example URLs after this list). Maintain a consistent internal linking pattern, and avoid creating outlying, orphaned pages. Remember that search crawlers cannot use search forms, so all content should be reachable via direct links.
  3. Block spiders from crawling pages with little value to organic search, such as customer account areas, heavily personalised pages, and staging sites under active development, and use robots meta tags to prevent them from indexing a page or following its links (both mechanisms are sketched below). Note that a robots.txt file does not provide a means of preventing URLs from appearing in search results, nor does it provide any kind of security or privacy protection.
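
To illustrate the kind of URL structure point 2 describes, here are some hypothetical examples (the domain and paths are invented for illustration):

```text
Static, readable, hyphen-separated:
  https://example.com/guides/technical-seo/crawl-budget

Harder for crawlers and users alike:
  https://example.com/index.php?cat=7&id=1432&sessionid=9f8a2c
  https://example.com/guides/technical_seo/crawl_budget
```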
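
Point 3 involves two distinct mechanisms, sketched below with hypothetical paths: robots.txt controls crawling, while a robots meta tag controls indexing and link-following. A noindex directive can only take effect if the page is not blocked in robots.txt, since the crawler must be able to fetch the page to see the tag.

```text
# robots.txt — asks well-behaved crawlers not to fetch these areas
User-agent: *
Disallow: /account/
Disallow: /staging/
```

```html
<!-- On the page itself: leave it crawlable, but ask engines not to
     index it or follow its links -->
<meta name="robots" content="noindex, nofollow">
```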
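
Finally, as a minimal sketch of the crawl-accessibility check mentioned in point 1: Python's standard library can parse a live robots.txt file and report whether Googlebot is permitted to fetch a given URL (the domain and paths are placeholders):

```python
from urllib import robotparser

# Fetch and parse the site's robots.txt (hypothetical domain)
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

# Ask whether Googlebot may fetch each URL
for url in ("https://example.com/",
            "https://example.com/account/settings"):
    verdict = "crawlable" if rp.can_fetch("Googlebot", url) else "blocked"
    print(f"{url} -> {verdict}")
```

This only reflects robots.txt rules; real crawl accessibility also depends on server responses, redirects, and internal linking.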
