
Meta Robots Tags: A Guide to Improving SEO with Meta Robots

Meta tags are HTML elements that offer information about a webpage to both search engines and users. Placed within the <head> section of an HTML document, these tags include metadata such as the title, description, and keywords, providing a summary of your page’s content to search engine crawlers. While the meta keywords tag was once significant, Google no longer uses it for ranking purposes.

But why do these snippets of code matter for SEO?

What Is the Meta Robots Tag, and Does It Matter?

The meta robots tag is an HTML element found in the <head> section of a webpage. It gives search engine crawlers specific instructions on how to handle the page, similar to the robots.txt file. This tag determines whether search engines should index or ignore the page.

To view the meta robots tag on a webpage, open the page’s source code (right-click and choose “View Page Source” in most browsers) and search the <head> section for “robots”.

Here’s an example of how it may look:

html

<meta name="robots" content="noindex" />

<meta name="googlebot" content="noindex" />

<meta name="googlebot-news" content="noindex" />

<meta name="slurp" content="noindex" />

<meta name="msnbot" content="noindex" />

The first line applies to all search engine crawlers, while the following lines target specific bots. In this case, the instructions prevent the page from being indexed but allow bots to follow its links.

The meta robots tag is super handy because it gives you extra control beyond what the robots.txt file can do. Let’s say a crawler skips over your robots.txt file by following an external link to one of your pages. Without the meta robots tag, that page could still get indexed. But with this tag in place, you can stop search engines from indexing the page, even if they land on it.
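As a rough illustration of that extra layer of control, here is a short Python sketch that pulls robots-related meta tags out of a page’s HTML using the standard library parser. The class name and sample page are made up for this example; it is not part of any SEO tool:

```python
# Illustrative sketch: extract meta robots directives from HTML
# using Python's built-in html.parser (no third-party libraries).
from html.parser import HTMLParser

class MetaRobotsParser(HTMLParser):
    """Collects name -> content for robots-related <meta> tags."""
    BOT_NAMES = {"robots", "googlebot", "googlebot-news",
                 "slurp", "msnbot", "bingbot"}

    def __init__(self):
        super().__init__()
        self.directives = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        name = (attrs.get("name") or "").lower()
        if name in self.BOT_NAMES:
            self.directives[name] = attrs.get("content", "")

# A hypothetical page using the tags shown above:
page = """<html><head>
<meta name="robots" content="noindex" />
<meta name="googlebot" content="noindex" />
</head><body></body></html>"""

parser = MetaRobotsParser()
parser.feed(page)
print(parser.directives)  # {'robots': 'noindex', 'googlebot': 'noindex'}
```

Running a script like this against your own pages is a quick way to audit which indexing instructions each crawler actually sees.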

How Does the Meta Robots Tag Work?

The meta robots tag has two main parts: name="" and content="". Here’s how they work:

The name part tells search engines which bots you are giving instructions to. It’s similar to the user-agent line in a robots.txt file. If you want to address all bots, simply use "robots" (no wildcard characters are needed here). That’s why it’s called the meta robots tag!

The content part is where you specify what those bots should do.

For example:

html

<meta name="robots" content="noindex, nofollow" />

<meta name="googlebot" content="index, follow" />

<meta name="bingbot" content="noindex" />

The first line tells all bots not to index the page or follow its links. The second line gives Googlebot permission to index the page and follow its links, while the third tells Bingbot not to index the page.

It’s that simple—a straightforward way to control how search engines handle your pages!

Types of Meta Robots Directives to Know

The meta robots tag is critical for controlling how search engines interact with your website. By specifying directives in the content attribute, you can instruct search engine crawlers on how to index pages, follow links, and more. Below are the most commonly used directives for the meta robots tag:

Indexing Directives: index allows search engines to include the page in results (the default), while noindex keeps the page out of search results.

Link Following Directives: follow lets crawlers follow the links on the page (the default), while nofollow tells them not to follow or pass authority through the page’s links.

Content Display Directives: nosnippet prevents search engines from showing a text snippet or video preview for the page in results.

Translation Directives: notranslate stops search engines from offering a translated version of the page in results.

Image Indexing Directives: noimageindex prevents images on the page from being indexed.

Cache Control Directives: noarchive prevents search engines from storing and showing a cached copy of the page.

How to Use the Meta Robots Tag for SEO?

SEO is not only about getting valuable pages into search results but also about keeping low-value pages out. Applying the “noindex” directive to less critical pages can help optimize your site’s crawl efficiency, ensuring critical pages are crawled more often.

In addition to the robots.txt file, the meta robots tag provides an extra layer of control over your pages. If a page is blocked in the robots.txt file, search engines could still index it if they discover it via an external backlink. Using the “noindex” directive ensures that search engines won’t index the page, even if they find it through other means.

Here’s how to prevent a page from being indexed and stop search engines from following links on that page:

html

<meta name="robots" content="noindex, nofollow" />

While “noindex” and “nofollow” are the most common directives, other options such as “noarchive”, “nosnippet”, and “noimageindex” can further refine how your content appears in search results.

Used carefully, the meta robots tag benefits your SEO in several ways: it keeps low-value pages out of the index, focuses crawl activity on your important pages, and stops duplicate or thin content from diluting your rankings.

Care is essential, though, as an error could lead to serious issues, such as unintentionally deindexing your entire site. By understanding how the meta robots tag works and applying it properly, you can enhance your site’s SEO and ensure it performs at its best.

Understanding the Syntax of Meta Robots Tags

Let’s start by covering some essential details about the syntax of meta robots tags.

1. Case Sensitivity

Meta robots directives aren’t case sensitive, so variations in capitalization won’t impact their function. All of these examples are valid:

html

<meta name="robots" content="noindex,follow" />

<meta name="ROBOTS" content="noindex,follow" />

<meta name="robots" content="NOINDEX,FOLLOW" />

2. Separating Directives for Google

Separating directives with commas is crucial when using meta robots tags with Google. Spaces alone won’t work. Here’s the correct format:

html

<meta name="robots" content="noindex,follow" />

3. Spacing Between Directives

Spaces after commas are optional and don’t affect how the directives are interpreted. Both of these examples are valid:

html

<meta name="robots" content="noindex,follow" />

<meta name="robots" content="noindex, follow" />

Understanding these syntax details helps ensure the proper implementation of SEO meta robot tags.
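The three syntax rules above can be captured in a few lines of Python. The parse_directives helper below is a hypothetical sketch, not a real library function:

```python
# Hypothetical sketch: normalize a meta robots content value the way
# the syntax rules above describe -- case-insensitive, comma-separated,
# with spaces after commas being optional.
def parse_directives(content):
    """Split a content="" value into a set of normalized directives."""
    return {part.strip().lower() for part in content.split(",") if part.strip()}

# All three spellings resolve to the same directive set:
print(parse_directives("noindex,follow"))   # {'noindex', 'follow'} (set order may vary)
print(parse_directives("NOINDEX,FOLLOW"))
print(parse_directives("noindex, follow"))
```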

Comprehensive Overview of Meta Robots Directives

Meta robots directives are HTML elements that guide search engine crawlers on how to interact with your web pages. By implementing these directives, you can control which pages are indexed, which links are followed, and how your content appears in search results.

Types of Meta Robots Directives

1. Meta Robots Tag: This tag is placed within the <head> section of your HTML document and provides instructions to all search engine crawlers. For example:

html

<meta name="robots" content="noindex,follow" />

In this example, the noindex directive instructs search engines not to index the page, while the follow directive allows them to follow the links on the page.

2. X-Robots-Tag: This directive is sent as an HTTP response header and can be applied to non-HTML content like PDFs or images. For instance:

http

X-Robots-Tag: noindex, follow

This header instructs search engines not to index the content but to follow any links.
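As a sketch of how such a header might be attached on the server side, the same directives simply travel with the HTTP response instead of sitting in the HTML. The function and filename below are hypothetical, for illustration only:

```python
# Hypothetical sketch: building response headers for a PDF that should
# stay out of the index. A PDF has no <head> for a meta tag, so the
# X-Robots-Tag HTTP header carries the directives instead.
def pdf_response_headers(filename):
    """Headers for serving a PDF that search engines should not index."""
    return {
        "Content-Type": "application/pdf",
        "Content-Disposition": f'inline; filename="{filename}"',
        # Same directives as a meta robots tag, delivered as a header:
        "X-Robots-Tag": "noindex, follow",
    }

headers = pdf_response_headers("report.pdf")
print(headers["X-Robots-Tag"])  # noindex, follow
```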

Ways to Combine Meta Robots Directives

By combining these directives, you can customize how different search engines crawl and index your pages. This approach is beneficial when managing the visibility and link equity distribution across various search platforms.

Combining Directives for Specific Crawlers

You can specify directives for individual crawlers using the meta tag’s name attribute. This allows you to provide distinct instructions to different search engines.

Example:

html

<meta name="robots" content="nofollow" />

<meta name="googlebot" content="noindex" />

Interpretation:

Googlebot: The googlebot meta tag instructs Google’s crawler to noindex the page, meaning it should not be included in search results.

Other Search Engines: The meta robots tag applies to all other crawlers, directing them to nofollow the links on the page, meaning they should not follow any links from this page.
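The precedence described above can be sketched in Python. The directives_for helper is hypothetical and follows this article’s reading, where the bot-specific tag wins for that crawler; real engines may combine conflicting tags more restrictively:

```python
# Hypothetical sketch of the interpretation above: a bot-specific meta
# tag takes precedence over the generic robots tag for that crawler.
def directives_for(bot, tags):
    """Return the content value a given crawler would act on."""
    return tags.get(bot.lower(), tags.get("robots", ""))

# The tags from the example: robots=nofollow, googlebot=noindex.
tags = {"robots": "nofollow", "googlebot": "noindex"}

print(directives_for("googlebot", tags))  # noindex
print(directives_for("bingbot", tags))    # nofollow
```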

You can refer to Google’s official documentation for more detailed information on implementing and combining meta robot directives.