Releases: apify/crawlee

v3.0.3

11 Aug 13:41
b70740d

What's Changed

Full Changelog: v3.0.2...v3.0.3

v3.0.2

28 Jul 19:02
0aca5b5

What's Changed

  • fix: regression in resolving the base url for enqueue link filtering by @vladfrangu in #1422
  • fix: improve file saving on memory storage by @vladfrangu in #1421
  • fix: add UserData type argument to CheerioCrawlingContext and related interfaces by @B4nan in #1424
  • fix: always limit desiredConcurrency to the value of maxConcurrency by @B4nan in bcb689d
  • fix: wait for storage to finish before resolving crawler.run() by @B4nan in 9d62d56
  • fix: using explicitly typed router with CheerioCrawler by @B4nan in 07b7e69
  • fix: declare dependency on ow in @crawlee/cheerio package by @B4nan in be59f99
  • fix: use crawlee@^3.0.0 in the CLI templates by @B4nan in 6426f22
  • fix: fix building projects with TS when puppeteer and playwright are not installed by @B4nan in #1404
  • fix: enqueueLinks should respect full URL of the current request for relative link resolution by @B4nan in #1427
  • fix: use desiredConcurrency: 10 as the default for CheerioCrawler by @B4nan in #1428
  • feat: allow configuring what status codes will cause session retirement by @B4nan in #1423
  • feat: add support for middlewares to the Router via use method by @B4nan in #1431

Full Changelog: v3.0.1...v3.0.2

v3.0.1

26 Jul 11:34

What's Changed

  • fix: remove JSONData generic type arg from CheerioCrawler by @B4nan in #1402
  • fix: rename default storage folder to just storage by @B4nan in #1403
  • fix: remove trailing slash for proxyUrl by @AndreyBykov in #1405
  • fix: run browser crawlers in headless mode by default by @B4nan in #1409
  • fix: rename interface FailedRequestHandler to ErrorHandler by @B4nan in #1410
  • fix: ensure default route is not ignored in CheerioCrawler by @B4nan in #1411
  • fix: add headless option to BrowserCrawlerOptions by @B4nan in #1412
  • fix: processing custom cookies by @vladfrangu in #1414
  • fix: enqueue link not finding relative links if the checked page is redirected by @vladfrangu in #1416
  • fix: calling enqueueLinks in browser crawler on page without any links by @B4nan in 385ca27
  • fix: improve error message when no default route provided by @B4nan in 04c3b6a
  • feat: add parseWithCheerio for puppeteer & playwright by @AndreyBykov in #1418

Full Changelog: v3.0.0...v3.0.1

v3.0.0

13 Jul 19:20

Crawlee is the spiritual successor to Apify SDK, so we decided to keep the versioning and release Crawlee as v3.

Crawlee vs Apify SDK

Up until version 3 of apify, the package contained both the scraping-related tools and the Apify platform-related helper methods. With v3, we are splitting the whole project into two main parts:

  • Crawlee, the new web-scraping library, available as crawlee package on NPM
  • Actor SDK, helpers for the Apify platform, available as apify package on NPM

Moreover, the Crawlee library is published as several packages under @crawlee namespace:

  • @crawlee/core: the base for all the crawler implementations, also contains things like Request, RequestQueue, RequestList or Dataset classes
  • @crawlee/basic: exports BasicCrawler
  • @crawlee/cheerio: exports CheerioCrawler
  • @crawlee/browser: exports BrowserCrawler (which is used for creating @crawlee/playwright and @crawlee/puppeteer)
  • @crawlee/playwright: exports PlaywrightCrawler
  • @crawlee/puppeteer: exports PuppeteerCrawler
  • @crawlee/memory-storage: @apify/storage-local alternative
  • @crawlee/browser-pool: previously browser-pool package
  • @crawlee/utils: utility methods
  • @crawlee/types: holds TS interfaces mainly about the StorageClient

Installing Crawlee

As Crawlee is not yet released as latest, we need to install from the next distribution tag!

Most of the Crawlee packages are extending and reexporting each other, so it's enough to install just the one you plan on using, e.g. @crawlee/playwright if you plan on using playwright - it already contains everything from the @crawlee/browser package, which includes everything from @crawlee/basic, which includes everything from @crawlee/core.

npm install crawlee@next

Or if all we need is cheerio support, we can install only @crawlee/cheerio:

npm install @crawlee/cheerio@next

When using playwright or puppeteer, we still need to install those dependencies explicitly - this lets users control which version will be used.

npm install crawlee@next playwright
# or npm install @crawlee/playwright@next playwright

Alternatively, we can use the crawlee meta-package, which contains (re-exports) most of the @crawlee/* packages and therefore all the crawler classes.

Sometimes you might want to use some utility methods from @crawlee/utils, so you might want to install that package as well. It contains some utilities that were previously available under Apify.utils. Browser-related utilities can also be found in the crawler packages (e.g. @crawlee/playwright).
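
A minimal sketch of pulling in one of those helpers, assuming sleep is among the exported utilities in your version:

import { sleep } from '@crawlee/utils';

// pause for two seconds, e.g. between manual retries
await sleep(2000);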

Full TypeScript support

Both Crawlee and the Actor SDK are full TypeScript rewrites, so they include up-to-date types in the package. For your TypeScript crawlers we recommend using our predefined TypeScript configuration from the @apify/tsconfig package. Don't forget to set module and target to ES2022 or above to be able to use top-level await.

The @apify/tsconfig config has noImplicitAny enabled; you might want to disable it during initial development, as it will cause build failures if some values in your code end up with an implicit any type.

{
    "extends": "@apify/tsconfig",
    "compilerOptions": {
        "module": "ES2022",
        "target": "ES2022",
        "outDir": "dist",
        "lib": ["DOM"]
    },
    "include": [
        "./src/**/*"
    ]
}

Docker build

For the Dockerfile, we recommend using a multi-stage build so you don't install dev dependencies like TypeScript in your final image:

# using multistage build, as we need dev deps to build the TS source code
FROM apify/actor-node:16 AS builder

# copy all files, install all dependencies (including dev deps) and build the project
COPY . ./
RUN npm install --include=dev \
    && npm run build

# create final image
FROM apify/actor-node:16
# copy only necessary files
COPY --from=builder /usr/src/app/package*.json ./
COPY --from=builder /usr/src/app/README.md ./
COPY --from=builder /usr/src/app/dist ./dist
COPY --from=builder /usr/src/app/apify.json ./apify.json
COPY --from=builder /usr/src/app/INPUT_SCHEMA.json ./INPUT_SCHEMA.json

# install only prod deps
RUN npm --quiet set progress=false \
    && npm install --only=prod --no-optional \
    && echo "Installed NPM packages:" \
    && (npm list --only=prod --no-optional --all || true) \
    && echo "Node.js version:" \
    && node --version \
    && echo "NPM version:" \
    && npm --version

# run compiled code
CMD npm run start:prod
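
The Dockerfile above expects build and start:prod npm scripts to exist. A minimal sketch of the corresponding package.json section (the dist/main.js entry point is just an assumption for illustration):

{
    "scripts": {
        "build": "tsc",
        "start:prod": "node dist/main.js"
    }
}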

Browser fingerprints

Previously we had a magical stealth option in the puppeteer crawler that enabled several tricks aiming to mimic real users as closely as possible. While this worked to a certain degree, we decided to replace it with generated browser fingerprints.

In case we don't want to have dynamic fingerprints, we can disable this behaviour via useFingerprints in browserPoolOptions:

const crawler = new PlaywrightCrawler({
    browserPoolOptions: {
        useFingerprints: false,
    },
});

Session cookie method renames

Previously, if we wanted to get or add cookies for the session that would be used for the request, we had to call session.getPuppeteerCookies() or session.setPuppeteerCookies(). Since these methods could be used for any of our crawlers, not just PuppeteerCrawler, they have been renamed to session.getCookies() and session.setCookies() respectively. Otherwise, their usage is exactly the same!
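
A minimal sketch of the renamed methods used from a request handler (the handler body is illustrative only; the signatures mirror the old Puppeteer-specific methods):

import { CheerioCrawler } from 'crawlee';

const crawler = new CheerioCrawler({
    // shown explicitly here; the session pool is enabled by default in v3
    useSessionPool: true,
    async requestHandler({ session, request }) {
        // read the cookies the session currently holds for this URL
        const cookies = session.getCookies(request.url);
        // ...and store (possibly modified) cookies back on the session
        session.setCookies(cookies, request.url);
    },
});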

Memory storage

When we store some data or intermediate state (like the one RequestQueue holds), we now use @crawlee/memory-storage by default. It is an alternative to @apify/storage-local that keeps the state in memory (as opposed to the SQLite database used by @apify/storage-local). While the state is kept in memory, it is also dumped to the file system so we can observe it, and existing data stored in the KeyValueStore (e.g. the INPUT.json file) is respected.

When we want to run the crawler on the Apify platform, we need to use Actor.init or Actor.main, which will automatically switch the storage client to ApifyClient when running on the platform.

We can still use @apify/storage-local; to do so, install it and pass it to the Actor.init or Actor.main options:

Note: @apify/storage-local v2.1.0+ is required for Crawlee.

import { Actor } from 'apify';
import { ApifyStorageLocal } from '@apify/storage-local';

const storage = new ApifyStorageLocal(/* options like `enableWalMode` belong here */);
await Actor.init({ storage });

Purging of the default storage

Previously the state was preserved between local runs, and we had to use the --purge argument of the apify-cli. With Crawlee, this is now the default behaviour: the storage is purged automatically on the Actor.init/main call. We can opt out of it via purge: false in the Actor.init options.
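
For example, a short sketch of keeping the state from previous runs:

import { Actor } from 'apify';

// opt out of the automatic purge of the default storages
await Actor.init({ purge: false });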

Renamed crawler options and interfaces

Some options were renamed to better reflect what they do. We still support all the old parameter names too, but not at the TS level.

  • handleRequestFunction -> requestHandler
  • handlePageFunction -> requestHandler
  • handleRequestTimeoutSecs -> requestHandlerTimeoutSecs
  • handlePageTimeoutSecs -> requestHandlerTimeoutSecs
  • requestTimeoutSecs -> navigationTimeoutSecs
  • handleFailedRequestFunction -> failedRequestHandler

We also renamed the crawling context interfaces so that they follow the same convention and are more meaningful (see the sketch after this list):

  • CheerioHandlePageInputs -> CheerioCrawlingContext
  • PlaywrightHandlePageFunction -> PlaywrightCrawlingContext
  • PuppeteerHandlePageFunction -> PuppeteerCrawlingContext
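
A short sketch of the new names in use (the handler bodies are illustrative only):

import { CheerioCrawler } from 'crawlee';

const crawler = new CheerioCrawler({
    // formerly handlePageTimeoutSecs
    requestHandlerTimeoutSecs: 60,
    // formerly handlePageFunction
    async requestHandler({ request, $ }) {
        console.log(`${request.url}: ${$('title').text()}`);
    },
    // formerly handleFailedRequestFunction
    async failedRequestHandler({ request }) {
        console.log(`Request ${request.url} failed too many times.`);
    },
});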

Context aware helpers

Some utilities previously available under Apify.utils namespace are now moved to the crawling context and are context aware. This means they have some parameters automatically filled in from the context, like the current Request instance or current Page object, or the RequestQueue bound to the crawler.

Enqueuing links

One common helper that received more attention is enqueueLinks. As mentioned above, it is context aware - we no longer need to pass in the requestQueue or page arguments (or the cheerio handle $). In addition to that, it now offers three enqueuing strategies:

  • EnqueueStrategy.All ('all'): Matches any URLs found
  • EnqueueStrategy.SameHostname ('same-hostname'): Matches any URLs that have the same subdomain as the base URL (default)
  • EnqueueStrategy.SameDomain ('same-domain'): Matches any URLs that have the same domain name. For example, https://wow.an.example.com and https://example.com will both be matched for a base URL of https://example.com.

This means we can even call enqueueLinks() without any parameters. By default, it will go through all the links found on the current page and filter only those targeting the same subdomain.
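
For example, to widen the filtering to the whole domain, we can pass the strategy explicitly (a sketch, assuming the option is named strategy and takes the EnqueueStrategy values listed above):

import { PlaywrightCrawler, EnqueueStrategy } from 'crawlee';

const crawler = new PlaywrightCrawler({
    async requestHandler({ enqueueLinks }) {
        // enqueue links from the same domain, subdomains included
        await enqueueLinks({ strategy: EnqueueStrategy.SameDomain });
    },
});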

Moreover, we can specify patterns the URL should match via globs:

const crawler = new PlaywrightCrawler({
    async requestHandler({ enqueueLinks }) {
        await enqueueLinks({
            globs: ['https://apify.com/*/*'],
            // we can also use `regexps` and `pseudoUrls` keys here
        });
    },
});

Implicit RequestQueue instance

All crawlers now have the RequestQueue instance automatically available via the crawler.getRequestQueue() method. It will create the instance for you if it does not exist yet. This means we no longer need to create the RequestQueue instance manually, and we can just use the crawler.addRequests() method.
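
A minimal sketch of the new flow (the URL is just a placeholder):

import { CheerioCrawler } from 'crawlee';

const crawler = new CheerioCrawler({
    async requestHandler({ request }) {
        console.log(`Processing ${request.url}`);
    },
});

// the implicit RequestQueue is created behind the scenes
await crawler.addRequests(['https://crawlee.dev']);
await crawler.run();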

v2.3.2

05 May 14:03
d2d5ac7

What's Changed

  • fix: use default user agent for playwright with chrome by @B4nan in #1350
  • fix: always hide webdriver of chrome browsers

Full Changelog: v2.3.1...v2.3.2

v2.3.1

03 May 15:32

What's Changed

  • fix: utils.apifyClient early instantiation by @barjin in #1330
  • fix: ensure failed req count is correct when using RequestList by @mnmkng in #1347
  • fix: random puppeteer crawler (running in headful mode) failure by @AndreyBykov in #1348

    This should help with the "We either navigate top level or have old version of the navigated frame" bug in puppeteer.

  • fix(ts): allow returning falsy values in RequestTransform's return type
  • feat: add utils.playwright.injectJQuery by @barjin in #1337
  • feat: add keyValueStore option to Statistics class by @B4nan in #1345
  • perf(browser-pool): do not use page.authenticate as it disables cache

Full Changelog: v2.3.0...v2.3.1

v2.3.0

07 Apr 12:00

What's Changed

  • feat: accept more social media patterns by @lhotanok in #1286
  • feat: add multiple click support to enqueueLinksByClickingElements by @audiBookning in #1295
  • feat: instance-scoped "global" configuration by @barjin in #1315
  • feat: stealth deprecation by @petrpatek in #1314
  • feat: RequestList accepts ProxyConfiguration for requestsFromUrls by @barjin in #1317
  • feat: allow passing a stream to KeyValueStore.setRecord by @gahabeen in #1325
  • feat: update playwright to v1.20.2
  • feat: update puppeteer to v13.5.2

    We noticed that with this version of puppeteer, an actor run could crash with the "We either navigate top level or have old version of the navigated frame" error (see the related puppeteer issue). It should not happen while running the browser in headless mode. In case you need to run the browser in headful mode (headless: false), we recommend pinning the puppeteer version to 10.4.0 in the actor's package.json file.

  • fix: improve guessing of chrome executable path on windows by @audiBookning in #1294
  • fix: use correct apify-client instance for snapshotting by @B4nan in #1308
  • fix: prune CPU snapshots locally by @B4nan in #1313
  • fix: improve browser launcher types by @barjin in #1318
  • fix: reset RequestQueue state after 5 minutes of inactivity by @B4nan in #1324

0 concurrency mitigation

This release should resolve the 0 concurrency bug by automatically resetting the internal RequestQueue state after 5 minutes of inactivity.

We now track last activity done on a RequestQueue instance:

  • added new request
  • started processing a request (added to inProgress cache)
  • marked request as handled
  • reclaimed request

If we don't detect one of those actions in the last 5 minutes, and we have some requests in the inProgress cache, we try to reset the state. We can override this limit via the APIFY_INTERNAL_TIMEOUT env var.

This should finally resolve the 0 concurrency bug, as it was always about stuck requests in the inProgress cache.

Full Changelog: v2.2.2...v2.3.0

v2.2.2

14 Feb 14:17

What's Changed

  • fix: ensure request.headers is set by @B4nan in #1281
  • fix: cookies setting in preNavigationHooks by @AndreyBykov in #1283
  • refactor: improve logging for fetching next request and timeouts by @B4nan in #1292

This release should help with the infamous 0 concurrency bug. The problem is probably still there, but should be much less common. The main difference is that we now use shorter timeouts for API calls from RequestQueue.

Full Changelog: v2.2.1...v2.2.2

v2.2.1

03 Jan 15:01

What's Changed

  • fix: ignore requests that are no longer in progress by @B4nan in #1258
  • fix: do not use tryCancel() from inside sync callback by @B4nan in #1265
  • fix: revert to puppeteer 10.x by @B4nan in #1276
  • fix: wait when body is not available in infiniteScroll() from Puppeteer utils by @B4nan in #1277
  • fix: expose logger classes on the utils.log instance by @B4nan in #1278

Full Changelog: v2.2.0...v2.2.1

v2.2.0

17 Dec 13:26

Proxy per page

Up until now, browser crawlers used the same session (and therefore the same proxy) for all requests from a single browser - now they get a new proxy for each session. This means that with incognito pages, each page will get a new proxy, aligning the behaviour with CheerioCrawler.

This feature is not enabled by default. To use it, we need to enable the useIncognitoPages flag under launchContext:

new Apify.PlaywrightCrawler({
    launchContext: {
        useIncognitoPages: true,
    },
    // ...
});

Note that currently there is a performance overhead for using useIncognitoPages. Use this flag at your own discretion.

We are planning to enable this feature by default in SDK v3.0.

Abortable timeouts

Previously when a page function timed out, the task still kept running. This could lead to requests being processed multiple times. In v2.2 we now have abortable timeouts that will cancel the task as early as possible.

Mitigation of zero concurrency issue

Several new timeouts were added to the task function, which should help mitigate the zero concurrency bug. Namely, fetching the next request information and reclaiming failed requests back to the queue are now executed with a timeout, with 3 additional retries before the task fails. The timeout is always at least 300s (5 minutes), or handleRequestTimeoutSecs if that value is higher.

Full list of changes

  • fix RequestError: URI malformed in cheerio crawler (#1205)
  • only provide Cookie header if cookies are present (#1218)
  • handle extra cases for diffCookie (#1217)
  • implement proxy per page in browser crawlers (#1228)
  • add fingerprinting support (#1243)
  • implement abortable timeouts (#1245)
  • add timeouts with retries to runTaskFunction() (#1250)
  • automatically convert google spreadsheet URLs to CSV exports (#1255)