docs: create SEO documentation page #5759


Merged
merged 8 commits into from
Nov 4, 2021

Conversation

cerkiewny
Contributor

@cerkiewny cerkiewny commented Oct 21, 2021

Motivation

Users are requesting help discovering SEO, resolves #5707

Have you read the Contributing Guidelines on pull requests?

Yes.

Test Plan

@facebook-github-bot facebook-github-bot added the CLA Signed Signed Facebook CLA label Oct 21, 2021
@netlify

netlify bot commented Oct 22, 2021

✔️ [V2]
Built without sensitive environment variables

🔨 Explore the source changes: a8ee9d8

🔍 Inspect the deploy log: https://app.netlify.com/sites/docusaurus-2/deploys/61837088492eff00079762bc

😎 Browse the preview: https://deploy-preview-5759--docusaurus-2.netlify.app

@github-actions

github-actions bot commented Oct 22, 2021

⚡️ Lighthouse report for the changes in this PR:

Category Score
🟠 Performance 86
🟢 Accessibility 98
🟢 Best practices 100
🟢 SEO 100
🟢 PWA 95

Lighthouse ran on https://deploy-preview-5759--docusaurus-2.netlify.app/

@Josh-Cena Josh-Cena added the pr: documentation This PR works on the website or other text documents in the repo. label Oct 28, 2021
@cerkiewny cerkiewny closed this Nov 1, 2021
@Josh-Cena
Collaborator

Josh-Cena commented Nov 1, 2021

@cerkiewny I think you've made a mistake here. When you force-push, the changes are gone.

In case you still need the doc, I have it on my machine which I've re-written a little.

seo.md
---
id: seo
title: Search engine optimization (SEO)
sidebar_label: SEO
keywords:
  - seo
  - positioning
---

Docusaurus supports search engine optimization (SEO) in a variety of ways.

```mdx-code-block
import TOCInline from '@theme/TOCInline';

<TOCInline toc={toc} />
```

## Global metadata {#global-metadata}

Provide global meta attributes for the entire site through the [site configuration](./configuration.md#site-metadata). The metadata will all be rendered in the HTML `<head>`, using the key-value pairs as the attribute name and value.

```js title="docusaurus.config.js"
module.exports = {
  themeConfig: {
    metadatas: [{name: 'keywords', content: 'cooking, blog'}],
    // This would become <meta name="keywords" content="cooking, blog"> in the generated HTML
  },
};
```

To read more about types of meta tags, visit [the MDN docs](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/meta).

## Single page metadata {#single-page-metadata}

Similar to [global metadata](#global-metadata), Docusaurus also allows adding meta information to individual pages. To add metadata to a single page, follow [this guide](./guides/markdown-features/markdown-features-head-metadatas.mdx) on extending the head tags.
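As a sketch of what the linked guide covers, a page-specific meta tag can be declared with a `<head>` tag inside the Markdown file itself (the tag names here are illustrative; check the guide for the exact supported syntax):

```md title="my-doc.md"
---
id: my-doc
---

<head>
  <meta name="keywords" content="cooking, blog" />
  <meta name="description" content="A page about cooking" />
</head>

# My doc

Page content...
```

Tags declared this way apply only to the page that contains them, while the global configuration applies everywhere.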

## Static HTML generation {#static-html-generation}

Docusaurus is a static site generator: HTML files are statically generated for every URL route, which helps search engines discover your content more easily.

## Image meta description {#image-meta-description}

Docusaurus supports alt text for your images; see [this section](./guides/markdown-features/markdown-features-assets.mdx#images) for more details.
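In standard Markdown, the alt text is the bracketed part of the image syntax; for example (file path is illustrative):

```md
![Banner showing the project logo](./img/banner.png)
```

The text in brackets becomes the image's `alt` attribute in the rendered HTML, which search engines use to understand the image.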

## Rich search information {#rich-search-information}

Docusaurus blogs support [rich search results](https://search.google.com/test/rich-results) out of the box. The information is generated from the meta information in your blog and global configuration. To benefit from rich search results, fill in the information about each post's publish date, authors, image, etc. Read more about the meta information [here](./blog.mdx).

## Robots file {#robots-file}

To add a `robots.txt` file, which regulates which pages search engine crawlers may access, provide it as a [static asset](./static-assets.md). The following would allow all user agents access to all sub-pages:

```text title="robots.txt"
User-agent: *
Disallow:
```

Read more about robots file in [the Google documentation](https://developers.google.com/search/docs/advanced/robots/intro).

:::caution

**Important**: the `robots.txt` file does **not** prevent HTML pages from being indexed. Use `<meta name="robots" content="noindex">` as [page metadata](#single-page-metadata) to prevent a page from appearing in search results entirely.

:::
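Combining this with the [single page metadata](#single-page-metadata) mechanism, a page you want excluded from search results could declare the `robots` meta tag in its head (a sketch; see the linked guide for the exact mechanism):

```md
<head>
  <meta name="robots" content="noindex" />
</head>
```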

## Sitemap file {#sitemap-file}

Docusaurus provides the [`@docusaurus/plugin-sitemap`](./api/plugins/plugin-sitemap.md) plugin, which is shipped with `preset-classic` by default. It autogenerates a `sitemap.xml` file which will be available at `https://example.com/<baseUrl>/sitemap.xml` after the production build. This sitemap metadata helps search engine crawlers crawl your site more accurately.
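If you need to tweak the generated sitemap, the plugin accepts options through the classic preset; the option names below (`changefreq`, `priority`) are taken from the plugin's API page and may differ across versions, so treat this as a sketch:

```js title="docusaurus.config.js"
module.exports = {
  presets: [
    [
      '@docusaurus/preset-classic',
      {
        // Options forwarded to @docusaurus/plugin-sitemap
        sitemap: {
          changefreq: 'weekly',
          priority: 0.5,
        },
      },
    ],
  ],
};
```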

## Human readable links {#human-readable-links}

Docusaurus uses your file names as links, but you can always change that using slugs; see this [tutorial](./guides/docs/docs-introduction.md#document-id) for more details.
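For instance, a doc's URL can be overridden with a `slug` field in its front matter (the field name follows the linked tutorial; the values are illustrative):

```md title="part1.md"
---
id: part1
slug: /getting-started
---
```

This serves the page at `/getting-started` instead of a URL derived from the file name.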

## Structured content {#structured-content}

Docusaurus gives you options to structure your content and control how it is presented to search engines; see this [configuration option](./guides/markdown-features/markdown-features-head-metadatas.mdx). The structure of the site itself follows the [CommonMark](https://spec.commonmark.org/0.30/#atx-headings) specification, which makes it easier for search engines to classify and read your content. By using Markdown consistently in your project, you make it easy for search engines to understand your content.
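For example, Markdown's heading levels map directly to the semantic HTML headings (`<h1>`, `<h2>`, `<h3>`) that crawlers rely on to outline a page:

```md
# Page title
## Section
### Subsection
```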

You may re-open a PR.

@cerkiewny
Contributor Author

Sorry for that, I wanted to clean my repo history to be identical with upstream; I forgot I was merging this one from main and not a dedicated branch.

@cerkiewny cerkiewny reopened this Nov 1, 2021
Collaborator

@slorber slorber left a comment


Thanks, LGTM overall

There are a few typos and weird sentences, can you double-check your content in a tool like Grammarly?

Also worth mentioning that the i18n feature adds hreflang headers:

Good SEO defaults: we set useful SEO headers like hreflang for you



## Structured content {#structured-content}

Docusaurus provides you with option to structure your content in terms of how your data are presented in search engine see this [configuration option](./guides/markdown-features/markdown-features-head-metadatas.mdx). The structure of the site itself is rigid through specification of [common markdown](https://spec.commonmark.org/0.30/#atx-headings), this makes it easier for the search engines to classify and read your content. By using Markdown consistently in your project you will make it easy for search engines to understand your content.
Collaborator


not sure to understand what you mean here?

Contributor Author

@cerkiewny cerkiewny Nov 3, 2021


You could have someone design a `<div>`-based website (like back in the day); that is really hard to do if you are using Markdown, as it provides good defaults. This in turn helps search engines spot things like `h1`, which allows them to understand the content more easily.

Collaborator


Oh I see thanks

I thought you wanted to talk about structured data, that we also use a bit on blog posts:
https://developers.google.com/search/docs/advanced/structured-data/article

@Josh-Cena Josh-Cena requested a review from slorber November 4, 2021 10:30
@slorber
Collaborator

slorber commented Nov 4, 2021

LGTM thanks 👍

@slorber slorber merged commit 4922764 into facebook:main Nov 4, 2021
Development

Successfully merging this pull request may close these issues.

Create dedicated page about SEO
5 participants