Many webmasters sooner or later face the need to remove site pages from the search engine that got there by mistake, are no longer relevant, are duplicates, or contain confidential customer information (the reasons may be different).
How to Remove a Page from Google Search Results?
1. 404 error
One of the easiest ways to remove a page from search results is to delete it from the site, provided that the server then returns a 404 error for the old address, telling robots that the page no longer exists.
HTTP/1.1 404 Not Found
In this case, you will have to wait until the robot visits the page again, which can take a significant amount of time depending on how the page got into the index. If the page must keep existing on the site while being removed from search, this method is unsuitable; use one of the methods below instead.
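As an illustrative sketch only (the handler and the `REMOVED_PATHS` list are hypothetical, not part of any real site), a server built on Python's standard `http.server` can answer with a 404 status for deleted addresses like this:

```python
from http.server import BaseHTTPRequestHandler

# Hypothetical list of paths that were deleted from the site
REMOVED_PATHS = {"/old-page.html"}

class SiteHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in REMOVED_PATHS:
            # Tell robots the page is gone so it drops out of the index
            body = b"404 Not Found"
            self.send_response(404, "Not Found")
        else:
            body = b"OK"
            self.send_response(200, "OK")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
```

Any real CMS or web framework has its own way of returning a 404 status; the point is only that the removed address must answer with the 404 code, not with a "page not found" text served under a 200 status.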
2. Robots.txt
A popular way to block entire sections or individual pages from indexing is the robots.txt file in the site root. There are many guides on configuring this file properly; here are just a few examples.
Block the admin section from being indexed by search engines:

Disallow: /admin/ # block the admin section
Block a specific page from indexing:

Disallow: /my_emails.html # block the my_emails.html page
Disallow: /search.php?q=* # block search result pages
With robots.txt you will also have to wait for reindexing before the robot removes a page or an entire section from the index. Some pages may nevertheless remain in the index if external links are what brought them there.
This method is inconvenient when you need to remove individual pages scattered across different sections and no common pattern for them can be written as a single Disallow directive in robots.txt.
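Python's standard `urllib.robotparser` module can be used to check how rules like the ones above are interpreted before deploying them; the rule set below is a hypothetical example mirroring this section:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules, similar to the examples above
rules = [
    "User-agent: *",
    "Disallow: /admin/",
    "Disallow: /my_emails.html",
]

rp = RobotFileParser()
rp.parse(rules)

# Blocked by the second Disallow rule
print(rp.can_fetch("*", "https://example.com/my_emails.html"))  # False
# Not matched by any rule, so fetching is allowed
print(rp.can_fetch("*", "https://example.com/about.html"))      # True
```

Checking rules this way is cheaper than waiting for a reindexing cycle to discover that a Disallow pattern was too broad or too narrow.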
3. Meta robots tag
This is an alternative to the previous method; only the rule is set directly in the page’s HTML code, between the <head> tags.
<meta name="robots" content="noindex,nofollow" />
The advantage of the meta tag is that it can be added (via the content management system) to every page that should stay out of the search engine index, while robots.txt stays simple and readable. The drawback is that on a dynamic site built around a single header.tpl template it is difficult to implement without special skills.
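A minimal sketch of how a single shared header can still emit the meta tag selectively; the helper functions and the list of pages to hide are hypothetical, not part of any particular CMS:

```python
# Hypothetical list of pages that should stay out of the index
NOINDEX_PATHS = {"/my_emails.html", "/search.php"}

def robots_meta(path: str) -> str:
    """Return the noindex meta tag only for pages that should be hidden."""
    if path in NOINDEX_PATHS:
        return '<meta name="robots" content="noindex,nofollow" />'
    return ""  # regular pages get no restriction

def render_head(path: str, title: str) -> str:
    """Sketch of a shared header template with a conditional robots tag."""
    return f"<head><title>{title}</title>{robots_meta(path)}</head>"

print(render_head("/my_emails.html", "Emails"))
print(render_head("/index.html", "Home"))
```

The same conditional logic can be expressed in any template engine; the decision of which pages to hide lives in one place instead of being copied into every page.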
4. X-Robots-Tag Headers
This method is used by foreign search engines, including Google, as an alternative to the previous one. Yandex has not yet officially announced support for this HTTP header, though that may change soon.
Its use is very similar to the robots meta tag, except that the directive is sent in the HTTP headers, which are not visible in the page's source code.
X-Robots-Tag: noindex, nofollow
In some cases, often unethical ones, this is very convenient: for example, when exchanging links, to hide the page where links are cleaned up.
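For illustration, a minimal sketch (the handler name and page content are hypothetical) of sending this header from Python's standard `http.server`:

```python
from http.server import BaseHTTPRequestHandler

class NoIndexHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>Page hidden from robots</body></html>"
        self.send_response(200)
        # The directive travels in the HTTP response headers,
        # so it never appears in the page's HTML source
        self.send_header("X-Robots-Tag", "noindex, nofollow")
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
```

On a typical production site the same header would be added in the web server configuration (for example, per location in Apache or nginx) rather than in application code.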
5. Manual removal from the webmaster panel
Finally, the fastest way to remove pages from the index is to delete them manually through the search engine's webmaster panel.
The only condition for manual deletion is that the pages must already be blocked from the robot by one of the previous methods (robots.txt, the meta tag, or a 404 error). Google has been observed to process removal requests within a few hours, while with Yandex you will have to wait for the next index update. Use this method when you urgently need to remove a small number of pages from search.