Why Does Google Hate My Pages?


There are many different algorithms and factors that determine where Google places each page in its search results. Over the years, Google's ranking algorithms have grown increasingly complex, deliberately making it harder for webmasters to guess the formulas behind rankings; that complexity has also improved the overall search experience for users. Fortunately, a poor ranking is not always Google's doing; sometimes our own errors are the problem. Below, we'll discuss why some of your web pages may not be showing up in Google, and how that can be fixed.

Robots.txt Issues

Not every web page needs to be visible to Google; some webmasters deliberately keep certain content out of search. For many different reasons, a site may include a robots.txt file that excludes individual files, pages or directories from Google's crawlers. If your website has one of these files, check it and determine whether any of the affected pages fall under a path or directory excluded in the list. Even the smallest typo made when the robots.txt file was set up can prove disastrous for webmasters who are pinging links to Google.
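If you are unsure whether a particular URL is blocked, Python's standard library can parse your robots.txt and test individual paths. The snippet below is a minimal sketch; the domain and paths are placeholders, so substitute your own.

    # Check whether specific URLs are blocked by robots.txt
    # (example.com and the paths below are placeholders)
    from urllib.robotparser import RobotFileParser

    parser = RobotFileParser("https://www.example.com/robots.txt")
    parser.read()  # fetch and parse the live robots.txt file

    for url in ["https://www.example.com/",
                "https://www.example.com/private/page.html"]:
        allowed = parser.can_fetch("Googlebot", url)
        print(url, "->", "allowed" if allowed else "blocked")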

Incorrect HTTP Codes

While not as common as some other mistakes, an incorrect HTTP status code could be another reason why your pages are being ignored by Google. Every web page on the internet returns an HTTP status code of some kind. Almost everyone is familiar with the classic “404” error (page not found), but there are others, such as 301 (moved permanently) and 403 (forbidden). Pages that are functional and publicly indexable should return HTTP 200, meaning they are OK for the public to see and for search engines to index. Pages that return any other code generally will not be indexed at that URL.
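As a quick sanity check, you can request a page and print the status code it returns. Here is a minimal sketch using Python's standard library; the URL is a placeholder for a page you expect Google to index.

    # Print the HTTP status code a page returns
    # (the URL is a placeholder - substitute your own page)
    import urllib.request
    import urllib.error

    url = "https://www.example.com/some-page.html"
    try:
        with urllib.request.urlopen(url) as response:
            # note: urlopen follows redirects, so a 301 will show the
            # destination's code rather than 301 itself
            print(url, "returned", response.status)  # 200 means OK to index
    except urllib.error.HTTPError as err:
        # 4xx/5xx responses are raised as exceptions by urllib
        print(url, "returned", err.code)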

Additional Required Elements

Search engine crawlers and bots are very basic in nature, and it is easy to deter them from your web pages if extra elements are required to access them. The two most common obstacles to pinging links to search engines are JavaScript and cookie requirements. Because bots generally do not execute JavaScript or store cookies, they cannot read pages whose content is built largely with JavaScript or that otherwise require cookies. Particularly with the latter, this can be another way to make pages deliberately off-limits to search engines (as an alternative to robots.txt). Password-protected pages likewise cannot be read by search engine bots, so do not expect such pages to be indexed by Google.
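One rough way to approximate what a JavaScript-free, cookie-less bot sees is to fetch the raw HTML and check whether your important text actually appears in it. The sketch below makes that assumption; the URL and the phrase to look for are placeholders.

    # Fetch a page without executing JavaScript or sending cookies,
    # roughly approximating what a basic crawler sees
    # (the URL and phrase are placeholders)
    import urllib.request

    url = "https://www.example.com/product-page.html"
    important_phrase = "Acme Widget 3000"  # text crawlers should be able to see

    request = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(request) as response:
        html = response.read().decode("utf-8", errors="replace")

    if important_phrase in html:
        print("Phrase found in raw HTML - crawlers can likely see it")
    else:
        print("Phrase missing - it may depend on JavaScript, cookies or a login")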

Conclusion

Password protection, cookies, JavaScript, incorrect HTTP status codes and errors or typos in the robots.txt file are all common reasons why web pages fail to show up in search. By checking these elements first, you can eliminate the most likely culprits one by one.





