Help Ukraine now!
astro-robots-txt
This Astro integration generates a robots.txt for your Astro project during build.
License: MIT


Why astro-robots-txt?

The robots.txt file informs search engines which pages on your website should be crawled. See Google's own advice on robots.txt to learn more.
For an Astro project you usually create the robots.txt in a text editor and place it in the public/ directory. In that case you must manually keep the site option in astro.config.* in sync with the Sitemap: record in robots.txt.
This breaks the DRY principle.
Sometimes, especially during development, it's necessary to prevent your site from being indexed. To achieve this you need to place the meta tag <meta name="robots" content="noindex"> into the <head> section of your pages or add X-Robots-Tag: noindex to the HTTP response header, then add the lines User-agent: * and Disallow: / to robots.txt.
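For example, a robots.txt that blocks all crawlers from the whole site looks like this:
robots.txt
User-agent: *
Disallow: /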
Again, you have to do it manually in two different places.
astro-robots-txt can help in both cases on the robots.txt side. See details in this demo repo.
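For instance, here is a minimal sketch that produces the same blocking robots.txt with this integration, using the policy and sitemap options described in the Configuration section below:
astro.config.mjs
import robotsTxt from 'astro-robots-txt';

export default {
  site: 'https://example.com',
  experimental: {
    integrations: true,
  },
  integrations: [
    robotsTxt({
      // Block all crawlers from the whole site.
      policy: [{ userAgent: '*', disallow: '/' }],
      // Also omit the Sitemap: record while the site shouldn't be indexed.
      sitemap: false,
    }),
  ],
};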

Installation

Quick Install
The experimental astro add command-line tool automates the installation for you. Run one of the following commands in a new terminal window. (If you aren't sure which package manager you're using, run the first command.) Then, follow the prompts, and type "y" in the terminal (meaning "yes") for each one.
# Using NPM
npx astro add astro-robots-txt

# Using Yarn
yarn astro add astro-robots-txt

# Using PNPM
pnpx astro add astro-robots-txt

Then, restart the dev server by typing CTRL-C and then npm run astro dev in the terminal window that was running Astro.
Because this command is new, it might not properly set things up. If that happens, log an issue on Astro GitHub and try the manual installation steps below.

Manual Install
First, install the astro-robots-txt package using your package manager. If you're using npm or aren't sure, run this in the terminal:
npm install --save-dev astro-robots-txt

Then, apply this integration to your astro.config.* file using the integrations property:
astro.config.mjs
import robotsTxt from 'astro-robots-txt';

export default {
  // ...
  integrations: [robotsTxt()],
}

Then, restart the dev server.

Usage

The astro-robots-txt integration requires a deployment / site URL for generation. Add your site's URL in your astro.config.* using the site property.
Then, apply this integration to your astro.config.* file using the integrations property.
astro.config.mjs
import { defineConfig } from 'astro/config';
import robotsTxt from 'astro-robots-txt';

export default defineConfig({
  site: 'https://example.com',

  // Important!
  // Only official '@astrojs/*' integrations are currently supported by Astro.
  // Add 'experimental.integrations: true' to make 'astro-robots-txt' work
  // with the 'astro build' command.
  experimental: {
    integrations: true,
  },
  integrations: [robotsTxt()],
});

Note that unlike other configuration options, site is set in the root defineConfig object, rather than inside the robotsTxt() call.
Now, build your site for production via the astro build command. You should find your robots.txt under dist/robots.txt!
Warning: If you forget to add a site, you'll get a friendly warning when you build, and the robots.txt file won't be generated.

Example of generated robots.txt file
robots.txt
User-agent: *
Allow: /
Sitemap: https://example.com/sitemap-index.xml


:exclamation: Important note: only official @astrojs/* integrations are currently supported by Astro.
There are two ways to make the astro-robots-txt integration work with the current version of Astro.
Set the experimental.integrations option to true in your astro.config.*.
astro.config.mjs
export default {
  // ...
  experimental: {
    integrations: true,
  },
};

Or use the --experimental-integrations flag for the build command.
astro build --experimental-integrations

Configuration

To configure this integration, pass an object to the robotsTxt() function call in astro.config.mjs.
astro.config.mjs
...
export default defineConfig({
  integrations: [robotsTxt({
    transform: ...
  })]
});

sitemap
| Type | Required | Default value |
| :---------------------------: | :------: | :-----------: |
| Boolean / String / String[] | No | true |
If you omit the sitemap parameter or set it to true, the generated robots.txt will contain the entry Sitemap: your-site-url/sitemap-index.xml.
If you want a robots.txt without the Sitemap: ... entry, set the sitemap parameter to false.
astro.config.mjs
import robotsTxt from 'astro-robots-txt';

export default {
  site: 'https://example.com',
  experimental: {
    integrations: true,
  },
  integrations: [
    robotsTxt({
      sitemap: false,
    }),
  ],
};

When sitemap is a String or String[], each value must be a valid URL. Only the http and https protocols are allowed.
astro.config.mjs
import robotsTxt from 'astro-robots-txt';

export default {
  site: 'https://example.com',
  experimental: {
    integrations: true,
  },
  integrations: [
    robotsTxt({
      sitemap: [
        'https://example.com/first-sitemap.xml',
        'http://another.com/second-sitemap.xml',
      ],
    }),
  ],
};
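With the settings above, the generated robots.txt should contain one Sitemap: record per URL, along these lines:
robots.txt
User-agent: *
Allow: /
Sitemap: https://example.com/first-sitemap.xml
Sitemap: http://another.com/second-sitemap.xml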


sitemapBaseFileName
| Type | Required | Default value |
| :----: | :------: | :-----------: |
| String | No | sitemap-index |
The sitemap file name, without the .xml extension. It is used only when the sitemap parameter is true or omitted.
:grey_exclamation: The @astrojs/sitemap and astro-sitemap integrations produce sitemap-index.xml as their primary output. That is why the default value of sitemapBaseFileName is sitemap-index.
astro.config.mjs
import robotsTxt from 'astro-robots-txt';

export default {
  site: 'https://example.com',
  experimental: {
    integrations: true,
  },
  integrations: [
    robotsTxt({
      sitemapBaseFileName: 'custom-sitemap',
    }),
  ],
};
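With this setting, the Sitemap: record in the generated robots.txt should point to the custom file name instead of sitemap-index.xml:
robots.txt
User-agent: *
Allow: /
Sitemap: https://example.com/custom-sitemap.xml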


host
| Type | Required | Default value |
| :----: | :------: | :-----------: |
| String | No | undefined |
Some crawlers (Yandex) support a Host directive, allowing websites with multiple mirrors to specify their preferred domain.
astro.config.mjs
import robotsTxt from 'astro-robots-txt';

export default {
  site: 'https://example.com',
  experimental: {
    integrations: true,
  },
  integrations: [
    robotsTxt({
      host: 'your-domain-name.com',
    }),
  ],
};
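With this option the generated robots.txt additionally contains the record (shown here in isolation):
robots.txt
Host: your-domain-name.com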


transform
| Type | Required | Default value |
| :---------------------------------------------------------------: | :------: | :-----------: |
| (content: String): String or (content: String): Promise<String> | No | undefined |
Sync or async function called just before writing the text output to disk.
astro.config.mjs
import robotsTxt from 'astro-robots-txt';

export default {
  site: 'https://example.com',
  experimental: {
    integrations: true,
  },
  integrations: [
    robotsTxt({
      transform(content) {
        return `# Some comments before the main content.\n# Second line.\n\n${content}`;        
      },
    }),
  ],
};
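With the transform above, the generated file should start with the two comment lines, followed by the unchanged default content:
robots.txt
# Some comments before the main content.
# Second line.

User-agent: *
Allow: /
Sitemap: https://example.com/sitemap-index.xml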


policy
| Type | Required | Default value |
| :------: | :------: | :------------------------------: |
| Policy[] | No | [{ allow: '/', userAgent: '*' }] |
List of Policy rules
Type Policy
| Name | Type | Required | Description |
| :--------: | :---------------: | :------: | :------------------------------------------------------------------------ |
| userAgent | String | Yes | Name of the automatic client (search engine crawler). Wildcards are allowed. |
| disallow | String / String[] | No | Disallowed paths for crawling |
| allow | String / String[] | No | Allowed paths for crawling |
| crawlDelay | Number | No | Minimum interval (in seconds) the crawler must wait after loading one page before starting the next |
| cleanParam | String / String[] | No | Indicates that the page's URL contains parameters that should be ignored during crawling. Maximum string length is 500. |
astro.config.mjs
import robotsTxt from 'astro-robots-txt';

export default {
  site: 'https://example.com',
  experimental: {
    integrations: true,
  },
  integrations: [
    robotsTxt({
      policy: [
        {
          userAgent: 'Googlebot',
          allow: '/',
          disallow: ['/search'],
          crawlDelay: 2,
        },
        {
          userAgent: 'OtherBot',
          allow: ['/allow-for-all-bots', '/allow-only-for-other-bot'],
          disallow: ['/admin', '/login'],
          crawlDelay: 2,
        },
        {
          userAgent: '*',
          allow: '/',
          disallow: '/search',
          crawlDelay: 10,
          cleanParam: 'ref /articles/',
        },
      ],
    }),
  ],
};
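For the configuration above, the generated robots.txt should look roughly like this (the exact directive order within a group may vary):
robots.txt
User-agent: Googlebot
Allow: /
Disallow: /search
Crawl-delay: 2

User-agent: OtherBot
Allow: /allow-for-all-bots
Allow: /allow-only-for-other-bot
Disallow: /admin
Disallow: /login
Crawl-delay: 2

User-agent: *
Allow: /
Disallow: /search
Crawl-delay: 10
Clean-param: ref /articles/

Sitemap: https://example.com/sitemap-index.xml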

External config file

You can configure the integration using the external file robots-txt.config.* (js, cjs, mjs). Put it in the application root folder (see about root in official docs).
The external config must contain the default export statement:
// ESM
export default {
  ...
};

or
// CommonJS
module.exports = {
  ...
};
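As a sketch, a robots-txt.config.mjs in the project root can reuse any option from the Configuration section above, for example:
robots-txt.config.mjs
export default {
  host: 'example.com',
  policy: [
    {
      userAgent: '*',
      allow: '/',
    },
  ],
};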

How does the integration internally resolve a config?
| Options parameter provided? | External config exists? | Result |
| :-------------------------- | :---------------------: | :----------------------------------------------- |
| No | No | Default config used |
| Yes | No | Options parameter used |
| No | Yes | External config used |
| Yes | Yes | External config is merged with options parameter |
The external configuration usage example is in the demo repo.
:exclamation: The current version of the integration doesn't support TypeScript configs.

Examples

| Example | Source | Playground |
| :------- | :----- | :---------- |
| basic | GitHub | Play Online |
| advanced | GitHub | Play Online |

Contributing

You're welcome to submit an issue or PR!

Changelog

See CHANGELOG.md for a history of changes to this integration.

Inspirations