{"id":11686,"date":"2021-11-24T17:01:33","date_gmt":"2021-11-24T17:01:33","guid":{"rendered":"https:\/\/getdevdone.com\/blog\/?p=11686"},"modified":"2025-06-27T05:56:22","modified_gmt":"2025-06-27T05:56:22","slug":"learn-how-to-noindex-page-wisely","status":"publish","type":"post","link":"https:\/\/getdevdone.com\/blog\/learn-how-to-noindex-page-wisely.html","title":{"rendered":"Learn How to Noindex a Page Wisely"},"content":{"rendered":"\n<p>In this post, we\u2019ll explain what indexing is, why you might need to noindex or deindex a page, what techniques are available, and how to noindex a page smartly. Keep reading to broaden your SEO skills, learn professional tricks, and improve your website\u2019s ranking.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"what-is-search-engine-indexing\">What Is Search Engine Indexing?<\/h2>\n\n\n\n<p><strong>Search engine indexing<\/strong> is the process of gathering, analyzing, and storing information to make its retrieval faster and more accurate.<\/p>\n\n\n\n<p>Search engines are a tool for users who are trying to find some information. As users want to get necessary (and relevant) answers as fast as possible, search engines must organize the available information in advance. Web crawlers scan information from billions of web pages, collect it, and keep it in the search index, i.e., a huge database. <a href=\"https:\/\/www.google.com\/search\/howsearchworks\/crawling-indexing\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">According to Google<\/a>, its search index comprises more than 100,000,000 gigabytes of data.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"how-do-web-pages-get-into-the-search-engine-index\">How Do Web Pages Get Into the Search Engine Index?<\/h2>\n\n\n\n<p>Web crawlers analyze publicly available web pages. 
<a href=\"https:\/\/moz.com\/learn\/seo\/robots-meta-directives\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Robots meta directives<\/a> (or \u2018meta tags\u2019) are snippets of code that tell search bots how to crawl a website\u2019s pages. The two basic parameters are \u2018index\u2019 and \u2018noindex.\u2019 By default, a page has the \u2018index\u2019 parameter. Specifically, search bots can index a page when<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>the meta tag is absent;<\/li>\n\n\n\n<li>the page has an \u2018index\u2019 parameter indicated in the meta tag (<code>&lt;meta name=\"robots\" content=\"index,nofollow\"&gt;<\/code>);<\/li>\n\n\n\n<li>the meta tag does <strong>not<\/strong> include \u2018noindex\u2019 (<code>&lt;meta name=\"robots\" content=\"nofollow\"&gt;<\/code>).<\/li>\n<\/ul>\n\n\n\n<p>Accordingly, if you apply a \u2018noindex\u2019 meta tag or a directive in an HTTP response header, you\u2019re instructing search crawlers not to add your page to the index. To inform search engines about your website\u2019s list of pages available for crawling, you can use <a href=\"https:\/\/www.sitemaps.org\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">sitemaps<\/a>.<\/p>\n\n\n\n<p>As new websites constantly appear and search bots scan billions of pages every day, search engines clearly can\u2019t crawl every page daily. Search crawlers will revisit a page periodically, checking whether there are any changes. <\/p>\n\n\n\n<p>Additionally, to avoid overloading servers, search bots don\u2019t scan all the pages of a multipage website simultaneously. Thus, it can take from a few days to several months for the whole website to be inspected and indexed.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"when-should-a-page-not-be-indexed\">When Should a Page NOT Be Indexed?<\/h2>\n\n\n\n<p>There are certain cases when website pages should not be crawled (or indexed) by search engines. 
Let\u2019s have a look at them.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"website-s-development-stage\">Website\u2019s Development Stage<\/h3>\n\n\n\n<p>If you are still in the process of creating your website, it\u2019s necessary to prevent search engines from indexing it. It\u2019s better to noindex your website while it is in \u2018construction mode.\u2019 So, while you are installing a theme or template and customizing your pages, search crawlers should not have access to them. You never know when a search bot will visit your page. And you don\u2019t want incomplete information to get into the index, do you?<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"non-informative-or-thin-content-pages\">Non-informative or Thin Content Pages<\/h3>\n\n\n\n<p>Search engines care about the quality of pages and provide <a href=\"https:\/\/developers.google.com\/search\/docs\/advanced\/guidelines\/webmaster-guidelines?visit_id=637718989764422880-629319616&amp;rd=1#quality_guidelines\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">quality guidelines<\/a> on how to create useful pages. The basic principle is making your pages primarily for users, not search engines, and avoiding tricks to improve search engine rankings (such as hidden text or links). <\/p>\n\n\n\n<p>Consequently, if your page doesn\u2019t contain useful information for website users, search bots may consider it \u2018thin content\u2019 and even penalize the website.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"archives-author-and-tag-pages\">Archives, Author, and Tag Pages<\/h3>\n\n\n\n<p>Not all pages have the same value for users, and not all of them should be available in search engine results. Often, blogs have numerous archive, author, and tag pages that by themselves don\u2019t hold much importance for readers. When these archives or tags are indexed, people may see numerous results for the pages contained in archives and tags, alongside blog posts. 
<\/p>\n\n\n\n<p>Archive or tag pages don\u2019t add to the user experience and may confuse visitors. Thus, it can be wise to noindex archive and tag pages and not waste the <a href=\"https:\/\/developers.google.com\/search\/blog\/2017\/01\/what-crawl-budget-means-for-googlebot\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">crawl budget<\/a>. In other words, noindexing specific pages helps prioritize crawling and achieve better results on the SERP.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"internal-search-results-pages\">Internal Search Results Pages<\/h3>\n\n\n\n<p>If your website has an internal search, it\u2019s wise to noindex its results pages. This way, you are helping search crawlers to understand your website better and display only relevant results on the SERP.<\/p>\n\n\n\n<p>However, be careful with noindexing paginated pages on eCommerce websites. If your store has categories with multiple pages and you decide to noindex page 2 onwards, this can reduce the crawling of products listed on paginated categories. You can use self-referencing canonical tags with standard pagination to avoid duplicate content in search indexes.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"non-informative-pages\">Non-informative Pages<\/h3>\n\n\n\n<p>Frequently, websites include \u2018thank you\u2019 pages that express gratitude to new customers after a purchase or to new subscribers after they have signed up for a newsletter. These \u2018thank you\u2019 pages add no value to people who are using search engines to find helpful information. So, consider noindexing such pages. 
Likewise, most admin and login pages should have a noindex tag.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"pages-with-confidential-information\">Pages With Confidential Information <\/h3>\n\n\n\n<p>Content not intended for search engines, such as confidential or sensitive information, should also have a \u2018noindex\u2019 tag or directive. You can consider making the shopping cart or checkout pages of an online store \u2018noindex,\u2019 too.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"how-to-noindex-a-page\">How to Noindex a Page?<\/h2>\n\n\n\n<p>If you have decided that you need to noindex a page (or pages), you probably wonder how to do this quickly and safely. In this section, we\u2019ll consider various ways to noindex a page and the peculiarities of each.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"using-robots-txt\">Using Robots.txt<\/h3>\n\n\n\n<p><a href=\"https:\/\/developers.google.com\/search\/docs\/advanced\/robots\/intro\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Robots.txt<\/a> is a file that instructs search engine crawlers which URLs they can access on a website. This file is useful for managing crawler traffic to your website. A robots.txt file contains one or more rules that block or allow access for a specific crawler to a particular path. For example, a robots.txt file can let some search engines crawl your website while blocking pages for others.&nbsp;<\/p>\n\n\n\n<p>The two basic rules in robots.txt are \u2018<strong>allow\u2019<\/strong> and \u2018<strong>disallow.\u2019<\/strong> By default, a search bot can crawl any page or directory unless the file includes a \u2018disallow\u2019 rule for it. 
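<\/p>\n\n\n\n<p>If you want to double-check how a standards-compliant crawler interprets such rules, you can evaluate them locally with the robotparser module from the Python standard library. The rules below are a hypothetical example:<\/p>\n\n\n\n

```python
from urllib import robotparser

# Hypothetical robots.txt rules for illustration only.
RULES = """\
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-content/plugins/
"""

rp = robotparser.RobotFileParser()
rp.parse(RULES.splitlines())

# URLs under a disallowed prefix are blocked; everything else is crawlable.
print(rp.can_fetch("*", "https://example.com/wp-admin/options.php"))  # False
print(rp.can_fetch("*", "https://example.com/blog/some-post.html"))   # True
```

\n\n\n\n<p>Note that the standard-library parser only does simple prefix matching; wildcard patterns such as \u2018*\u2019 and \u2018$\u2019 in paths are extensions honored by some search engines but not by this module.<\/p>\n\n\n\n<p>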
If you want to prevent all search engines (i.e., user-agents) from crawling your website, you can use the following rules:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code class=\"\">User-agent: *<br>Disallow: \/<\/code><\/pre>\n\n\n\n<p>The \u2018User-agent: *\u2019 line means the rule applies to all search bots, and the \u2018Disallow: \/\u2019 rule states that you are blocking the whole website.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"when-to-use-blocking-via-robots-txt\">When to Use Blocking via Robots.txt?<\/h4>\n\n\n\n<p>If you want to avoid crawling unimportant pages, robots.txt is of great help as this file mainly works by exclusion. For instance, you can instruct search bots not to crawl your website\u2019s PDF files by adding two lines:&nbsp;<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code class=\"\">User-agent: *<br>Disallow: \/*.pdf$<\/code><\/pre>\n\n\n\n<p>Similarly, for a WordPress website, it\u2019s possible to block access to such directories as \/wp-admin\/, \/wp-content\/comments\/ or \/wp-content\/plugins\/. The rules will look this way:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code class=\"\">User-agent: *<br>Disallow: \/wp-admin\/<br>Disallow: \/wp-content\/comments\/<br>Disallow: \/wp-content\/plugins\/<\/code><\/pre>\n\n\n\n<p>Thus, you are disallowing bots from crawling your admin folder, comments, and plugins.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"why-should-you-avoid-noindexing-with-robots-txt\">Why Should You Avoid Noindexing With Robots.txt?<\/h4>\n\n\n\n<p>A robots.txt file is not the best way to keep a web page out of the search index. Firstly, this file can stop search engines from crawling pages, but not from indexing them. If crawlers find links to these pages (even on other websites) and consider them important, the URLs can still end up in the index. 
In that case, it might lead to missing descriptions in the snippets on the search engine results page.<\/p>\n\n\n\n<p>Secondly, it\u2019s up to search engines to follow the instructions, so search crawlers might just ignore your robots.txt file. Thirdly, anyone can access a robots.txt file and see what pages you wish to keep away from search crawlers. You can\u2019t hide pages with confidential information this way, either.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"using-password-protection\">Using Password Protection<\/h3>\n\n\n\n<p>Since search engines can\u2019t access pages protected by passwords, such pages will not be crawled and indexed. Accordingly, if you set password protection, a web page will not appear in the index. <\/p>\n\n\n\n<p>For instance, while you are creating a store on Shopify, your website has password protection and thus won\u2019t be indexed. When you are ready to launch the store, you should remove the password protection.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"applying-ip-filtering-or-bots-blocking-via-htaccess\">Applying IP Filtering or Bot Blocking via .htaccess<\/h3>\n\n\n\n<p>Unfortunately, not all bots are as friendly and harmless as Google\u2019s or Bing\u2019s, which crawl and index pages to provide users with the most relevant information. There are unwanted bots that can harm a website owner or even bring down a site. Spambots, hacker bots, and other malicious bots are just a few examples.<\/p>\n\n\n\n<p>Suppose you have spotted unwanted activity on your website pages and want to block a bad bot. In that case, you should identify either the bot\u2019s IP address or the \u2018User-agent string\u2019 that the bot is using.<\/p>\n\n\n\n<p>You can block the bot\u2019s activity by modifying the .htaccess file, but it\u2019s advisable to be very careful with it. 
To block a specific IP address, you need to add the following lines to the .htaccess file, inserting the necessary IP address:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code class=\"\">Order Deny,Allow<br>Deny from <em>IP address<\/em><\/code><\/pre>\n\n\n\n<p>If you need to disallow the bot\u2019s activity by a user-agent string, it\u2019s necessary to find a part of the user-agent string unique to that bot. Then, you\u2019ll have to add a rule that disallows this particular bot to the .htaccess file. For instance, if you know that a bot called SecuritySpider is affecting your website, you\u2019ll use this RewriteRule:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code class=\"\">RewriteEngine On<br>RewriteCond %{HTTP_USER_AGENT} SecuritySpider [NC]<br>RewriteRule .* - [F,L]<\/code><\/pre>\n\n\n\n<p>Alternatively, it\u2019s possible to use the BrowserMatchNoCase directive to block a bot:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code class=\"\">BrowserMatchNoCase \"SecuritySpider\" bots<br>Order Allow,Deny<br>Allow from all<br>Deny from env=bots<\/code><\/pre>\n\n\n\n<p>Unless you are an experienced developer who knows .htaccess directives well, we highly recommend backing up your .htaccess file before editing it.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"implementing-noindex-with-meta-tags\">Implementing \u2018Noindex\u2019 With Meta Tags<\/h3>\n\n\n\n<p>We have already mentioned <a href=\"https:\/\/developers.google.com\/search\/docs\/advanced\/robots\/robots_meta_tag\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">robots meta tags<\/a> that define the interaction of a search bot with a page. Note that search crawlers can see and obey these tags only if a robots.txt file doesn\u2019t block their access to the page.<\/p>\n\n\n\n<p>The basic syntax for making a page noindex is &lt;<code>meta name=\"robots\" content=\"noindex\"<\/code>&gt;. 
This tag should be a part of the <code>&lt;head&gt;<\/code> section of the page that you are blocking from indexing.<\/p>\n\n\n\n<p>If you are using WordPress, you can easily discourage search engines from indexing the whole website. All you have to do is go to the Reading section in the Settings of your admin panel and tick the box \u2018Discourage search engines from indexing this site.\u2019 Yet, as the admin panel warns you, it\u2019s up to search engines to honor this request.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"making-separate-pages-noindex\">Making Separate Pages \u2018Noindex\u2019<\/h4>\n\n\n\n<p>How you apply a noindex meta tag to individual pages depends on what platform, website builder, or content management system you use to create a website. More and more website builders are simplifying the process of optimizing websites for search engines, so blocking a page from indexing can be pretty straightforward. Let\u2019s have a look at just a few examples.<\/p>\n\n\n\n<h5 class=\"wp-block-heading\" id=\"1-wordpress\">1. WordPress<\/h5>\n\n\n\n<p>As the leading open-source CMS, WordPress offers numerous tools for the job. The easiest way to block a separate page from indexing is by using a plugin, such as <a href=\"https:\/\/wordpress.org\/plugins\/wordpress-seo\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Yoast<\/a> or <a href=\"https:\/\/wordpress.org\/plugins\/all-in-one-seo-pack\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">All-in-One SEO<\/a>. With these plugins, you can adjust the SEO settings of single pages or blog posts.<\/p>\n\n\n\n<p>Open the page you need to hide, scroll to the Yoast SEO box, open the Advanced section, and choose \u2018No\u2019 for \u2018Allow search engines to show this Page in search results?\u2019<\/p>\n\n\n\n<h5 class=\"wp-block-heading\" id=\"2-shopify\">2. Shopify<\/h5>\n\n\n\n<p>Being a powerful e-commerce platform, Shopify also lets you control SEO settings. 
Noindexing a URL may not be as straightforward as it is with plugins in WordPress: you\u2019ll have to edit your theme\u2019s code. To exclude a specific page from indexing in Shopify, you need to find the <em>theme.liquid<\/em> layout file in the Themes section of your Online Store and insert this code inside the <code>&lt;head&gt;<\/code> section:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code class=\"\">{% if handle contains 'page-handle-you-want-to-exclude' %}<br>&lt;meta name=\"robots\" content=\"noindex\"&gt;<br>{% endif %}<\/code><\/pre>\n\n\n\n<p>Remember to replace <em>page-handle-you-want-to-exclude<\/em> with the correct page handle.<\/p>\n\n\n\n<h5 class=\"wp-block-heading\" id=\"3-wix\">3. Wix<\/h5>\n\n\n\n<p>The Editor in Wix is quite user-friendly when you need to <a href=\"https:\/\/support.wix.com\/en\/article\/wix-editor-preventing-search-engines-from-indexing-your-page\">prevent crawlers from indexing the page<\/a>. Select the necessary page in the Pages section, click \u2018Show More,\u2019 choose \u2018SEO Basics,\u2019 and disable the toggle \u2018Let search engines index this page.\u2019<\/p>\n\n\n\n<h5 class=\"wp-block-heading\" id=\"4-hubspot\">4. HubSpot<\/h5>\n\n\n\n<p>The HubSpot CMS features tools that allow customers to change page settings and add the necessary <a href=\"https:\/\/blog.hubspot.com\/marketing\/how-to-unindex-pages-from-search-engines\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">noindex meta tag<\/a>. In the Advanced Options in Settings, you need to choose the \u2018Head HTML\u2019 field, insert the tag <code>&lt;meta name=\"robots\" content=\"noindex\"&gt;<\/code>, and save the changes.<\/p>\n\n\n\n<h5 class=\"wp-block-heading\" id=\"5-squarespace\">5. Squarespace<\/h5>\n\n\n\n<p>Another popular content management system, Squarespace, has two ways of applying a <a href=\"https:\/\/support.squarespace.com\/hc\/en-us\/articles\/360022347072-Hiding-pages-from-search-engine-results\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">noindex tag<\/a>. 
Firstly, you can take advantage of built-in options in Page Settings without touching any code. On the SEO tab, check \u2018Hide this page from search engine results\u2019 and save the settings. <\/p>\n\n\n\n<p>Secondly, it\u2019s possible to add the noindex tag via code injection. In the Page Settings, select the Advanced tab and insert the tag in the Page Header Code Injection field. Remember to save the settings.<\/p>\n\n\n\n<h5 class=\"wp-block-heading\" id=\"6-tilda\">6. Tilda<\/h5>\n\n\n\n<p>Page settings in Tilda also allow you to <a href=\"https:\/\/tilda.cc\/en\/answers\/a\/disallow-page-en\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">hide a specific page from indexing<\/a>. Simply go to Page Settings, open Facebook and SEO, choose \u2018Appearance in search results,\u2019 and check the box \u2018Forbid search engines from indexing this page.\u2019<\/p>\n\n\n\n<h5 class=\"wp-block-heading\" id=\"7-webflow\">7. Webflow<\/h5>\n\n\n\n<p>As a flexible CMS, Webflow lets you modify page code and <a href=\"https:\/\/university.webflow.com\/lesson\/remove-content-from-googles-index\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">apply the noindex tag<\/a> to a page. In the Page panel, select Page Settings and go to Custom code. Then, insert the necessary noindex tag, save, and publish the content.<\/p>\n\n\n\n<h5 class=\"wp-block-heading\" id=\"8-motocms\">8. MotoCMS<\/h5>\n\n\n\n<p>This website builder lets you modify SEO settings and apply \u2018noindex\u2019 right in the Page Properties of the admin panel. Make sure you have chosen the relevant page, tick the \u2018No-index\u2019 box, and save the settings.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"using-http-response-header\">Using HTTP Response Header<\/h4>\n\n\n\n<p>Besides noindex meta tags, you can use the X-Robots-Tag as an element of the HTTP response header for a specific URL. 
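<\/p>\n\n\n\n<p>If your pages are served by an application rather than a static file server, you can attach the header in application code. Below is a minimal sketch using the http.server module from the Python standard library; the handler name and paths are hypothetical, not part of any real site:<\/p>\n\n\n\n

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class NoindexPDFHandler(BaseHTTPRequestHandler):
    """Answers every GET, adding X-Robots-Tag: noindex to PDF URLs."""

    def do_GET(self):
        self.send_response(200)
        if self.path.lower().endswith(".pdf"):
            # Tell crawlers not to index this resource.
            self.send_header("X-Robots-Tag", "noindex, follow")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok")

# To try it locally:
# HTTPServer(("127.0.0.1", 8000), NoindexPDFHandler).serve_forever()
```

\n\n\n\n<p>Any backend can send the same header; in PHP, for instance, calling <code>header('X-Robots-Tag: noindex');<\/code> before any output has the same effect.<\/p>\n\n\n\n<p>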
You can use this method to block non-HTML resources, such as <a href=\"https:\/\/pdf.net\/add-text-to-pdf\" target=\"_blank\" rel=\"noopener\" title=\"\">PDF<\/a> files, video files, images, etc. To apply this tag, you should be able to access your website\u2019s header.php, .htaccess, or server configuration file. Here, you can see an example of the X-Robots-Tag blocking PDF files:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code class=\"\">&lt;Files ~ \".pdf$\"&gt;<br>Header set X-Robots-Tag \"noindex, follow\"<br>&lt;\/Files&gt;<\/code><\/pre>\n\n\n\n<p>It\u2019s necessary to add this code to your .htaccess or httpd.conf file.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"how-to-check-if-the-page-is-indexable\">How to Check if the Page Is Indexable?<\/h2>\n\n\n\n<p>After blocking the page from indexing, you may want to make sure that everything is working correctly. Here, we\u2019ll examine several means of checking whether a page is indexable.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"google-search-console\">Google Search Console<\/h3>\n\n\n\n<p>Firstly, Google Search Console offers the <a href=\"https:\/\/support.google.com\/webmasters\/answer\/9012289\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">URL Inspection Tool<\/a> that lets you see the current index status of a page. Additionally, you can test a URL with the <a href=\"https:\/\/support.google.com\/webmasters\/answer\/9012289#test_live_page\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Live URL test<\/a>. It examines the page in real time, so the data you\u2019ll get with this test may differ from the indexed page.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"browser-add-ons\">Browser Add-Ons<\/h3>\n\n\n\n<p>Various add-ons can check the page\u2019s indexability directly in the browser. 
In Chrome, user-friendly <a href=\"https:\/\/chrome.google.com\/webstore\/detail\/website-seo-checker-free\/nljcdkjpjnhlilgepggmmagnmebhadnk\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Website SEO Checker<\/a> or <a href=\"https:\/\/seominion.com\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">SEO Minion<\/a> are just a few examples. Similarly, in Firefox, you can find the <a href=\"https:\/\/addons.mozilla.org\/en-US\/firefox\/addon\/seoquake-seo-extension\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">SeoQuake SEO extension<\/a>, <a href=\"https:\/\/addons.mozilla.org\/en-US\/firefox\/addon\/detailed-seo-extension\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Detailed SEO Extension<\/a>, SEO Minion, etc.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"site-audit-tools\">Site Audit Tools<\/h3>\n\n\n\n<p>Do you prefer using tools on your PC and getting a complete audit? Take advantage of the <a href=\"https:\/\/www.screamingfrog.co.uk\/seo-spider\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Screaming Frog SEO Spider<\/a>. It will analyze your website and provide a list of pages blocked by robots.txt, meta robots, or X-Robots-Tag directives such as \u2018noindex\u2019 or \u2018nofollow.\u2019<\/p>\n\n\n\n<p>You can also use the online SEO platform Sitechecker. This platform will help you perform an SEO audit and regular checks of your website.&nbsp;<a href=\"https:\/\/sitechecker.pro\/page-counter\/\" target=\"_blank\" rel=\"noreferrer noopener\">Page Counter<\/a>&nbsp;by Sitechecker will help you find all pages of your site and check their indexing in search engines.&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"how-to-deindex-a-page\">How to Deindex a Page?<\/h2>\n\n\n\n<p>Sometimes you realize that your website has some pages that should be excluded from the search index. 
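<\/p>\n\n\n\n<p>You can also script a rough self-check of the two main noindex signals, similar to what the tools above report. The helper below is a simplified, hypothetical sketch, not a substitute for the URL Inspection tool:<\/p>\n\n\n\n

```python
import re

def noindex_signals(headers, html):
    """List the noindex signals found in response headers and HTML.

    Simplified: real crawlers also weigh robots.txt, canonicals, etc.
    """
    signals = []
    if "noindex" in headers.get("X-Robots-Tag", "").lower():
        signals.append("X-Robots-Tag header")
    for tag in re.findall(r"<meta[^>]*>", html, flags=re.IGNORECASE):
        low = tag.lower()
        if 'name="robots"' in low and "noindex" in low:
            signals.append("robots meta tag")
    return signals

page = '<head><meta name="robots" content="noindex"></head>'
print(noindex_signals({}, page))                                 # ['robots meta tag']
print(noindex_signals({"X-Robots-Tag": "noindex, follow"}, ""))  # ['X-Robots-Tag header']
```

\n\n\n\n<p>Feed it the response headers and body of any URL you have blocked; an empty list means neither signal is present.<\/p>\n\n\n\n<p>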
By the way, to check which pages are visible to Google, you can type <code>site:yourdomain<\/code> in Google Search and see all the results from the Google index for your website. So, let\u2019s study the most common cases when you need to deindex a page and how to do this.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"when-do-you-need-to-deindex-a-page\">When Do You Need to Deindex a Page?<\/h3>\n\n\n\n<p>Firstly, all pages in WordPress are indexable by default once a website is published. If you don\u2019t have much experience with SEO settings for a WordPress website, unnecessary pages can get indexed by mistake. For instance, besides pages, you may find archives, tags, menu items, theme parts, smart filters, etc., displayed on the SERP. <\/p>\n\n\n\n<p>Fortunately, the Yoast plugin lets you adjust content type settings that specify which types of content should appear in search results.<\/p>\n\n\n\n<p>Secondly, it\u2019s advisable to deindex page duplicates and hide those pages that don\u2019t add to the user experience. For instance, if your website contains a printer-friendly version of a page together with the ordinary one, only one of them should appear in the search index.<\/p>\n\n\n\n<p>Thirdly, if you happen to have hacked website pages (we hope it\u2019s not the case now), it\u2019s necessary to remove these pages from indexing. Google\u2019s hacking classifiers monitor websites and can block them in case of hacking to prevent other computers from getting infected. <\/p>\n\n\n\n<p>However, this automatic blocking doesn\u2019t occur very often, and attackers do real harm to websites. 
Thus, if any hacked URLs are showing in the search index, you should remove them manually.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"what-tools-can-you-use-to-deindex-a-page\">What Tools Can You Use to Deindex a Page?<\/h3>\n\n\n\n<p>First and foremost, as discussed before, you can choose various ways of making a page noindex (using noindex meta tags, disallowing specific pages via robots.txt, canonicalizing to other pages, etc.).<\/p>\n\n\n\n<p>Furthermore, Google Search Console provides instructions on how to remove a page from the Google index. You may hide pages from search results temporarily with the help of the <a href=\"https:\/\/support.google.com\/webmasters\/answer\/9689846\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Removal tool<\/a>.<\/p>\n\n\n\n<p>Keep in mind that this can be the first step of permanent page removal. The temporary removal lasts about six months, and then you have to decide which action to take.<\/p>\n\n\n\n<p>If you don\u2019t need a page anymore (as in the case of hacked pages), you can remove the content by deleting the page. Then, your server will return a 404 (Not Found) status code, and Google will remove the page from its index over time. <\/p>\n\n\n\n<p>Alternatively, as described earlier, you can choose other techniques to block access to the pages. 
In short, it\u2019s up to you to decide how to prevent Google from indexing a page in each particular situation.<\/p>\n\n\n\n<div itemscope=\"\" itemtype=\"https:\/\/schema.org\/FAQPage\">\n\n<h2>Why and How to Noindex a Page: FAQ<\/h2>\n<p>Let&#8217;s briefly summarize what indexing and noindexing are, why you might noindex or deindex a page, and how to do it.<\/p>\n<div itemscope=\"\" itemprop=\"mainEntity\" itemtype=\"https:\/\/schema.org\/Question\">\n<h3 itemprop=\"name\">What is indexing?<\/h3>\n<div itemscope=\"\" itemprop=\"acceptedAnswer\" itemtype=\"https:\/\/schema.org\/Answer\">\n<div itemprop=\"text\">\n<p>Search engine indexing, performed by bots or crawlers, is the collecting, analyzing, and storing of information. Indexes organize data from different pages to make it accessible and valuable. A page with an &#8216;index&#8217; status is available for crawlers.<\/p>\n<\/div>\n<\/div>\n<\/div>\n\n<div itemscope=\"\" itemprop=\"mainEntity\" itemtype=\"https:\/\/schema.org\/Question\">\n<h3 itemprop=\"name\">What is noindexing?<\/h3>\n<div itemscope=\"\" itemprop=\"acceptedAnswer\" itemtype=\"https:\/\/schema.org\/Answer\">\n<div itemprop=\"text\">\n<p>Noindexing is instructing search engines to exclude particular pages from their indexes so that they don&#8217;t appear in search engine results.<\/p>\n<\/div>\n<\/div>\n<\/div>\n\n<div itemscope=\"\" itemprop=\"mainEntity\" itemtype=\"https:\/\/schema.org\/Question\">\n<h3 itemprop=\"name\">Why make pages &#8216;noindex&#8217;?<\/h3>\n<div itemscope=\"\" itemprop=\"acceptedAnswer\" itemtype=\"https:\/\/schema.org\/Answer\">\n<div itemprop=\"text\">\n<p>When a website includes pages that don&#8217;t have much value and shouldn&#8217;t be visible on the SERP, make them &#8216;noindex.&#8217; The most typical examples are archive, tag, and author pages on blogs, or login and admin pages. 
Keeping confidential pages and thank you pages out of the index is also necessary.<\/p>\n<\/div>\n<\/div>\n<\/div>\n\n<div itemscope=\"\" itemprop=\"mainEntity\" itemtype=\"https:\/\/schema.org\/Question\">\n<h3 itemprop=\"name\">What are the best tools to noindex a page?<\/h3>\n<div itemscope=\"\" itemprop=\"acceptedAnswer\" itemtype=\"https:\/\/schema.org\/Answer\">\n<div itemprop=\"text\">\n<p>The safest means is applying a &#8216;noindex&#8217; meta tag or using the X-Robots-Tag in the HTTP response header. Additionally, you can restrict access to website directories, such as \/wp-admin\/, with &#8216;Disallow&#8217; rules in the robots.txt file.<\/p>\n<\/div>\n<\/div>\n<\/div>\n\n<div itemscope=\"\" itemprop=\"mainEntity\" itemtype=\"https:\/\/schema.org\/Question\">\n<h3 itemprop=\"name\">Can I deindex an existing page?<\/h3>\n<div itemscope=\"\" itemprop=\"acceptedAnswer\" itemtype=\"https:\/\/schema.org\/Answer\">\n<div itemprop=\"text\">\n<p>Indeed, you can. If the page became available to search engines by mistake, you can make it noindex.<\/p>\n<\/div>\n<\/div>\n<\/div>\n\n<div itemscope=\"\" itemprop=\"mainEntity\" itemtype=\"https:\/\/schema.org\/Question\">\n<h3 itemprop=\"name\">How can I remove a page from the Google index?<\/h3>\n<div itemscope=\"\" itemprop=\"acceptedAnswer\" itemtype=\"https:\/\/schema.org\/Answer\">\n<div itemprop=\"text\">\n<p>A quick but temporary solution is the Removal tool in Google Search Console. Afterward, remember to choose an appropriate way to deal with the situation (either delete the page entirely or make it noindex).<\/p>\n<\/div>\n<\/div>\n<\/div>\n\n\n<\/div>\n\n\n\n<p>We hope you find this information helpful. 
Remember to update your website regularly, revise the content, monitor performance, check the site\u2019s health, and deindex pages when necessary.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-css-opacity advgb-dyn-c3f4f0ad\"\/>\n\n\n\n<p>Still need help with noindexing a page or any other web development task? Just drop us a line. With more than 16 years in the industry, we have helped thousands of businesses and agencies meet their needs. <\/p>\n\n\n\n<p>We are a leading provider of web and mobile development services, from <a href=\"https:\/\/getdevdone.com\/front-end-development-instant-quote.html\" target=\"_blank\" rel=\"noreferrer noopener\">building custom themes<\/a> for websites based on various platforms (WordPress, Drupal, etc.) and <a href=\"https:\/\/getdevdone.com\/ecommerce-web-development.html\" target=\"_blank\" rel=\"noreferrer noopener\">eCommerce development<\/a> (<a href=\"https:\/\/getdevdone.com\/shopify-development.html\" target=\"_blank\" rel=\"noopener\" title=\"Shopify development\">Shopify development<\/a>, Magento, WooCommerce) to hand-crafting <a href=\"https:\/\/getdevdone.com\/psd-to-email-templates.html\" target=\"_blank\" rel=\"noreferrer noopener\">email templates<\/a> and engaging <a href=\"https:\/\/getdevdone.com\/html5-banners-development.html\" target=\"_blank\" rel=\"noreferrer noopener\">HTML5 banners<\/a>. <\/p>\n\n\n\n<p>Let&#8217;s <a href=\"https:\/\/getdevdone.com\/contact-us.html\" target=\"_blank\" rel=\"noreferrer noopener\">get in touch<\/a> to discuss your project details today! <\/p>\n","protected":false},"excerpt":{"rendered":"<p>This post is your ultimate guide to noindexing web pages. You will learn why you need to noindex a page, what noindexing methods you can apply, and other essentials.  
<\/p>\n","protected":false},"author":4,"featured_media":11690,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"advgb_blocks_editor_width":"","advgb_blocks_columns_visual_guide":"","footnotes":""},"categories":[739],"tags":[765],"class_list":["post-11686","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-guide","tag-technical-seo"],"acf":[],"aioseo_notices":[],"author_meta":{"display_name":"Dmytro Mashchenko","author_link":"https:\/\/getdevdone.com\/blog\/author\/dima"},"featured_img":"https:\/\/s3.amazonaws.com\/newblog.psd2html.com\/wp-content\/uploads\/2021\/11\/24152106\/Intro-preview-2-300x300.png","coauthors":[],"tax_additional":{"categories":{"linked":["<a href=\"https:\/\/getdevdone.com\/blog\/category\/guide\" class=\"advgb-post-tax-term\">Guide<\/a>"],"unlinked":["<span class=\"advgb-post-tax-term\">Guide<\/span>"]},"tags":{"linked":["<a href=\"https:\/\/getdevdone.com\/blog\/category\/guide\" class=\"advgb-post-tax-term\">Technical SEO<\/a>"],"unlinked":["<span class=\"advgb-post-tax-term\">Technical SEO<\/span>"]}},"comment_count":"0","relative_dates":{"created":"Posted 4 years ago","modified":"Updated 10 months ago"},"absolute_dates":{"created":"Posted on November 24, 2021","modified":"Updated on June 27, 2025"},"absolute_dates_time":{"created":"Posted on November 24, 2021 5:01 pm","modified":"Updated on June 27, 2025 5:56 
am"},"featured_img_caption":"","series_order":"","featured_image_urls":{"thumbnail_723x315":"https:\/\/s3.amazonaws.com\/newblog.psd2html.com\/wp-content\/uploads\/2021\/11\/24152106\/Intro-preview-2-400x315.png","thumbnail_723x315-2x":"https:\/\/s3.amazonaws.com\/newblog.psd2html.com\/wp-content\/uploads\/2021\/11\/24152106\/Intro-preview-2.png","thumbnail_723x315-3x":"https:\/\/s3.amazonaws.com\/newblog.psd2html.com\/wp-content\/uploads\/2021\/11\/24152106\/Intro-preview-2.png","thumbnail_770x510":"https:\/\/s3.amazonaws.com\/newblog.psd2html.com\/wp-content\/uploads\/2021\/11\/24152106\/Intro-preview-2.png","thumbnail_770x510-2x":"https:\/\/s3.amazonaws.com\/newblog.psd2html.com\/wp-content\/uploads\/2021\/11\/24152106\/Intro-preview-2.png","thumbnail_770x510-3x":"https:\/\/s3.amazonaws.com\/newblog.psd2html.com\/wp-content\/uploads\/2021\/11\/24152106\/Intro-preview-2.png"},"featured_post_color":"#e88080","author_avatar":"https:\/\/secure.gravatar.com\/avatar\/97bd036a871c68c70de0956108719ad9489849769ee15e25e0bee81f3bdd7286?s=96&d=mm&r=g","author_position":"COO of GetDevDone","reading_time":"<span class=\"span-reading-time rt-reading-time\"><span class=\"rt-label rt-prefix\"><\/span> <span class=\"rt-time\"> 12<\/span> <span class=\"rt-label rt-postfix\">min read<\/span><\/span>","prev_post":{"slug":"optimizing-image-alt-text-the-how-and-the-why","name":"Optimizing Image Alt Text: the How and the Why"},"next_post":{"slug":"html-meta-tags-that-matter-for-seo","name":"HTML Meta Tags That Matter for SEO in 
2021\/22"},"related_posts":["how-to-organize-products-on-shopify-some-helpful-tips","shopify-website-development-6-reasons-to-build-your-online-store-with-shopify","main-reasons-to-hire-a-shopify-developer-for-your-online-store"],"_links":{"self":[{"href":"https:\/\/getdevdone.com\/blog\/wp-json\/wp\/v2\/posts\/11686","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/getdevdone.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/getdevdone.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/getdevdone.com\/blog\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/getdevdone.com\/blog\/wp-json\/wp\/v2\/comments?post=11686"}],"version-history":[{"count":65,"href":"https:\/\/getdevdone.com\/blog\/wp-json\/wp\/v2\/posts\/11686\/revisions"}],"predecessor-version":[{"id":24712,"href":"https:\/\/getdevdone.com\/blog\/wp-json\/wp\/v2\/posts\/11686\/revisions\/24712"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/getdevdone.com\/blog\/wp-json\/wp\/v2\/media\/11690"}],"wp:attachment":[{"href":"https:\/\/getdevdone.com\/blog\/wp-json\/wp\/v2\/media?parent=11686"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/getdevdone.com\/blog\/wp-json\/wp\/v2\/categories?post=11686"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/getdevdone.com\/blog\/wp-json\/wp\/v2\/tags?post=11686"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}