
Crawler does not seem to work on websites that use shadowDOM #552

Open
mantou132 opened this issue May 18, 2021 · 3 comments · May be fixed by #559

Comments

@mantou132

Hello Algolia Devs,

I tried to add a search function to my website, but I got a reply saying that no content could be crawled. Is it because my website uses shadow DOM?

Thank you.

@shortcuts
Member

Hi,

It indeed doesn't seem possible to access the DOM via query selectors when the content sits behind an open shadow root. I don't know much about shadow DOM, but it might be possible to make it work.

As long as you can query-select something from the console, our scraper will be able to get it, so you will be able to use DocSearch!
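
For illustration, here is a rough check one could run through Selenium rather than the console (the custom element name gem-book is borrowed from the site discussed below; the URL and heading selector are only placeholders):

from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://example.com/docs")  # placeholder URL

# From outside an open shadow root, a plain query selector finds nothing:
outside = driver.execute_script("return document.querySelector('gem-book h1')")
print(outside)  # None

# Stepping into the shadow root explicitly does work:
inside = driver.execute_script(
    "return document.querySelector('gem-book').shadowRoot.querySelector('h1')"
)
print(inside)  # a WebElement, assuming the shadow root really contains an h1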

@mantou132
Author

mantou132 commented May 18, 2021

The content of a shadow DOM cannot be selected through a CSS selector or XPath.

To select shadow DOM content with a CSS-selector-like syntax, the selector has to be extended, for example by using >> (from an outdated specification) to mark a shadow DOM boundary: gem-book >> gem-book-sidebar >> gem-active-link. When evaluating such a selector, each >> boundary is crossed by stepping into the element's shadowRoot, for example:

// Split the extended selector on the ">>" shadow boundaries, then resolve each
// segment with querySelectorAll, descending into shadowRoot between segments.
'body gem-book >> gem-book-sidebar >> gem-active-link >> a[href]'.split('>>').reduce(
  (p, c, index, arr) => {
    const isLastSelector = index === arr.length - 1;
    // Intermediate segments yield shadow roots to search next; the last segment yields the elements.
    return p.map((e) => [...e.querySelectorAll(c)].map((ce) => (isLastSelector ? ce : ce.shadowRoot))).flat();
  },
  [document],
);

[Screenshot: Screen Shot 2021-05-18 at 4.58.11 PM]

This is an example of use in the browser; if the scraper uses Selenium, there should be a similar API, as in the sketch below.
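
For instance, a rough sketch of how the same >> expansion could be driven from Selenium (the helper name query_shadow is made up; only execute_script is an actual Selenium API):

PIERCE_JS = """
return arguments[0].split('>>').reduce(
  (p, c, index, arr) => {
    const isLast = index === arr.length - 1;
    return p.map((e) => [...e.querySelectorAll(c)].map((ce) => (isLast ? ce : ce.shadowRoot))).flat();
  },
  [document],
);
"""

def query_shadow(driver, selector):
    # Runs the shadow-piercing selector in the page and returns the matches
    # as a list of Selenium WebElements.
    return driver.execute_script(PIERCE_JS, selector)

links = query_shadow(driver, 'body gem-book >> gem-book-sidebar >> gem-active-link >> a[href]')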

@mantou132
Author

mantou132 commented Sep 1, 2021

Hi, I looked at the source code today, and I found that only a small update is needed to support shadow DOM.

https://github.com/algolia/docsearch-scraper/blob/master/scraper/src/custom_downloader_middleware.py#L31

A custom downloader could be used to pull the whole DOM, including the shadow roots:

# pseudocode – per the declarative shadow DOM article linked below,
# the serialization API takes an options object to include shadow roots
driver.execute_script("return document.documentElement.getInnerHTML({ includeShadowRoots: true });")

https://web.dev/declarative-shadow-dom/

This will return something like:

<head>...</head>
<body>
<gem-book>
<template shadowroot="open">
... content
</template>
</gem-book>
</body>

Next, we only need to delete all the <template> tags (without deleting their content), perhaps with a regular expression; see the sketch below.
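
A rough sketch of that last step, assuming it runs on the HTML string obtained above (the function name and regex are illustrative, and this naive version strips every <template> tag, as suggested):

import re

# Matches opening and closing <template ...> tags, but not their content.
TEMPLATE_TAG_RE = re.compile(r"</?template[^>]*>", re.IGNORECASE)

def unwrap_declarative_shadow_dom(html: str) -> str:
    # Drop the <template shadowroot="open"> wrappers so the serialized shadow DOM
    # content becomes reachable with ordinary CSS selectors downstream.
    return TEMPLATE_TAG_RE.sub("", html)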

@mantou132 mantou132 linked a pull request Sep 3, 2021 that will close this issue