Crawlee is a web scraping and browser automation library for Node.js for building reliable crawlers, in JavaScript and TypeScript. Extract data for AI, LLMs, RAG, or GPTs. Download HTML, PDF, JPG, PNG, and other files from websites. Works with Puppeteer, Playwright, Cheerio, JSDOM, and raw HTTP, in both headful and headless mode, with proxy rotation.
Apache-2.0 License
Published by B4nan over 1 year ago
- userData option in enqueueLinksByClickingElements (#1749) (736f85d), closes #1617
- request.userData when creating new request object (#1728) (222ef59), closes #1725
- pendingRequestCount in request queue (#1765) (946535f)
- tslib (27e96c8), closes #1747
- ow (bf0e03c), closes #1716
Published by B4nan almost 2 years ago
Published by B4nan almost 2 years ago
- utils.playwright.blockRequests warning message (#1632) (76549eb)
- playwright is not installed (#1637) (de9db0c)

Published by B4nan about 2 years ago
- KeyValueStore.getValue with defaultValue (#1541) (e3cb509)
- label in enqueueLinksByClickingElements options (#1525) (18b7c25)
- request.noRetry after errorHandler (#1542) (2a2040e)
- this instead of the class (#1596) (2b14eb7)
- Cookie from crawlee metapackage (7b02ceb)
- Dataset.exportToCSV and Dataset.exportToJSON
- Dataset.getData() shortcut (522ed6e)
- utils.downloadListOfUrls to crawlee metapackage (7b33b0a)
- utils.parseOpenGraph() (#1555) (059f85e)
- utils.playwright.compileScript (#1559) (2e14162)
- utils.playwright.infiniteScroll (#1543) (60c8289), closes #1528
- utils.playwright.saveSnapshot (#1544) (a4ceef0)
- useState helper (#1551) (2b03177)
- forefront option to enqueueLinks helper (f8755b6), closes #1595
- INPUT.json to support comments (#1538) (09133ff)

Published by B4nan about 2 years ago
- headless option in browser crawlers by @B4nan in https://github.com/apify/crawlee/pull/1455
- CheerioCrawlerOptions type more loose by @B4nan in d871d8c
- utils.playwright.blockRequests() by @barjin in https://github.com/apify/crawlee/pull/1447
- /INPUT.json files for KeyValueStore.getInput() by @vladfrangu in https://github.com/apify/crawlee/pull/1453
- RetryRequestError + add error to the context for BC by @vladfrangu in https://github.com/apify/crawlee/pull/1443
- keepAlive to crawler options by @B4nan in https://github.com/apify/crawlee/pull/1452
Full Changelog: https://github.com/apify/crawlee/compare/v3.0.2...v3.0.3
Published by B4nan about 2 years ago
- UserData type argument to CheerioCrawlingContext and related interfaces by @B4nan in https://github.com/apify/crawlee/pull/1424
- desiredConcurrency to the value of maxConcurrency by @B4nan in https://github.com/apify/crawlee/commit/bcb689d4cb90835136295d879e710969ebaf29fa
- crawler.run() by @B4nan in https://github.com/apify/crawlee/commit/9d62d565c2ff8d058164c22333b07b7d2bf79ee0
- CheerioCrawler by @B4nan in https://github.com/apify/crawlee/commit/07b7e69e1a7b7c89b8a5538279eb6de8be0effde
- ow in @crawlee/cheerio package by @B4nan in https://github.com/apify/crawlee/commit/be59f992d2897ce5c02349bbcc62472d99bb2718
- crawlee@^3.0.0 in the CLI templates by @B4nan in https://github.com/apify/crawlee/commit/6426f22ce53fcce91b1d8686577557bae09fc0e9
- desiredConcurrency: 10 as the default for CheerioCrawler by @B4nan in https://github.com/apify/crawlee/pull/1428
- Router via use method by @B4nan in https://github.com/apify/crawlee/pull/1431
Full Changelog: https://github.com/apify/crawlee/compare/v3.0.1...v3.0.2
Published by B4nan about 2 years ago
- JSONData generic type arg from CheerioCrawler by @B4nan in https://github.com/apify/crawlee/pull/1402
- storage by @B4nan in https://github.com/apify/crawlee/pull/1403
- FailedRequestHandler to ErrorHandler by @B4nan in https://github.com/apify/crawlee/pull/1410
- CheerioCrawler by @B4nan in https://github.com/apify/crawlee/pull/1411
- headless option to BrowserCrawlerOptions by @B4nan in https://github.com/apify/crawlee/pull/1412
- enqueueLinks in browser crawler on page without any links by @B4nan in 385ca27
Full Changelog: https://github.com/apify/crawlee/compare/v3.0.0...v3.0.1
Published by B4nan over 2 years ago
Crawlee is the spiritual successor to Apify SDK, so we decided to keep the versioning and release Crawlee as v3.
Up until version 3 of apify, the package contained both scraping related tools and Apify platform related helper methods. With v3 we are splitting the whole project into two main parts:

- the crawlee package on NPM
- the apify package on NPM

Moreover, the Crawlee library is published as several packages under the @crawlee namespace:
- @crawlee/core: the base for all the crawler implementations, also contains things like the Request, RequestQueue, RequestList or Dataset classes
- @crawlee/basic: exports BasicCrawler
- @crawlee/cheerio: exports CheerioCrawler
- @crawlee/browser: exports BrowserCrawler (which is used for creating @crawlee/playwright and @crawlee/puppeteer)
- @crawlee/playwright: exports PlaywrightCrawler
- @crawlee/puppeteer: exports PuppeteerCrawler
- @crawlee/memory-storage: @apify/storage-local alternative
- @crawlee/browser-pool: previously the browser-pool package
- @crawlee/utils: utility methods
- @crawlee/types: holds TS interfaces mainly about the StorageClient
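In practice, the split means scraping code and platform code now come from different packages. A minimal sketch, assuming both packages are installed:

// scraping tools now come from the crawlee package (or a specific @crawlee/* package)
import { CheerioCrawler } from 'crawlee';
// Apify platform helpers now come from the apify package
import { Actor } from 'apify';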
As Crawlee is not yet released as latest, we need to install from the next distribution tag!
Most of the Crawlee packages extend and re-export each other, so it's enough to install just the one you plan on using, e.g. @crawlee/playwright if you plan on using playwright - it already contains everything from the @crawlee/browser package, which includes everything from @crawlee/basic, which includes everything from @crawlee/core.
npm install crawlee@next
Or if all we need is cheerio support, we can install only @crawlee/cheerio:
npm install @crawlee/cheerio@next
When using playwright or puppeteer, we still need to install those dependencies explicitly - this allows the users to be in control of which version will be used.
npm install crawlee@next playwright
# or npm install @crawlee/playwright@next playwright
Alternatively we can also use the crawlee meta-package, which contains (re-exports) most of the @crawlee/* packages, and therefore contains all the crawler classes.
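For example, the following two imports resolve to the same class, since the meta-package re-exports @crawlee/playwright - a minimal sketch:

import { PlaywrightCrawler } from 'crawlee';
// equivalent: import { PlaywrightCrawler } from '@crawlee/playwright';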
Sometimes you might want to use some utility methods from @crawlee/utils, so you might want to install that as well. This package contains some utilities that were previously available under Apify.utils. Browser related utilities can also be found in the crawler packages (e.g. @crawlee/playwright).
Both Crawlee and the Actor SDK are full TypeScript rewrites, so they include up-to-date types in the package. For your TypeScript crawlers we recommend using our predefined TypeScript configuration from the @apify/tsconfig package. Don't forget to set the module and target to ES2022 or above to be able to use top level await.
The @apify/tsconfig config has noImplicitAny enabled; you might want to disable it during the initial development, as it will cause build failures if you leave some unused local variables in your code.
{
"extends": "@apify/tsconfig",
"compilerOptions": {
"module": "ES2022",
"target": "ES2022",
"outDir": "dist",
"lib": ["DOM"]
},
"include": [
"./src/**/*"
]
}
For the Dockerfile we recommend using a multi-stage build, so you don't install dev dependencies like TypeScript in your final image:
# using multistage build, as we need dev deps to build the TS source code
FROM apify/actor-node:16 AS builder
# copy all files, install all dependencies (including dev deps) and build the project
COPY . ./
RUN npm install --include=dev \
&& npm run build
# create final image
FROM apify/actor-node:16
# copy only necessary files
COPY --from=builder /usr/src/app/package*.json ./
COPY --from=builder /usr/src/app/README.md ./
COPY --from=builder /usr/src/app/dist ./dist
COPY --from=builder /usr/src/app/apify.json ./apify.json
COPY --from=builder /usr/src/app/INPUT_SCHEMA.json ./INPUT_SCHEMA.json
# install only prod deps
RUN npm --quiet set progress=false \
&& npm install --only=prod --no-optional \
&& echo "Installed NPM packages:" \
&& (npm list --only=prod --no-optional --all || true) \
&& echo "Node.js version:" \
&& node --version \
&& echo "NPM version:" \
&& npm --version
# run compiled code
CMD npm run start:prod
Previously we had a magical stealth option in the puppeteer crawler that enabled several tricks aiming to mimic real users as much as possible. While this worked to a certain degree, we decided to replace it with generated browser fingerprints.

In case we don't want to have dynamic fingerprints, we can disable this behaviour via useFingerprints in browserPoolOptions:
const crawler = new PlaywrightCrawler({
browserPoolOptions: {
useFingerprints: false,
},
});
Previously, if we wanted to get or add cookies for the session that would be used for the request, we had to call session.getPuppeteerCookies() or session.setPuppeteerCookies(). Since these methods could be used with any of our crawlers, not just PuppeteerCrawler, they have been renamed to session.getCookies() and session.setCookies() respectively. Otherwise, their usage is exactly the same!
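A minimal before/after sketch of the rename, assuming a browser crawler with the session pool enabled (the handler body is illustrative only):

const crawler = new PlaywrightCrawler({
    async requestHandler({ session, request }) {
        // v2: const cookies = session.getPuppeteerCookies(request.url);
        const cookies = session.getCookies(request.url);
        // v2: session.setPuppeteerCookies(cookies, request.url);
        session.setCookies(cookies, request.url);
    },
});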
When we store some data or intermediate state (like the one RequestQueue holds), we now use @crawlee/memory-storage by default. It is an alternative to @apify/storage-local that stores the state in memory (as opposed to the SQLite database used by @apify/storage-local). While the state is stored in memory, it is also dumped to the file system, so we can observe it, and it respects the existing data stored in the KeyValueStore (e.g. the INPUT.json file).
When we want to run the crawler on the Apify platform, we need to use Actor.init or Actor.main, which will automatically switch the storage client to ApifyClient when on the Apify platform.
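A minimal sketch of that setup (the start URL is just an example):

import { Actor } from 'apify';
import { PlaywrightCrawler } from 'crawlee';

// ApifyClient storage on the Apify platform, memory storage locally
await Actor.init();

const crawler = new PlaywrightCrawler({
    async requestHandler({ request, log }) {
        log.info(`Processing ${request.url}`);
    },
});
await crawler.run(['https://crawlee.dev']);

await Actor.exit();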
We can still use @apify/storage-local. To do so, first install it and then pass it to the Actor.init or Actor.main options:

@apify/storage-local v2.1.0+ is required for crawlee
import { Actor } from 'apify';
import { ApifyStorageLocal } from '@apify/storage-local';
const storage = new ApifyStorageLocal(/* options like `enableWalMode` belong here */);
await Actor.init({ storage });
Previously the state was preserved between local runs, and we had to use the --purge argument of the apify-cli. With Crawlee, this is now the default behaviour; we purge the storage automatically on the Actor.init/main call. We can opt out of it via purge: false in the Actor.init options.
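For example, to keep the state from previous local runs:

import { Actor } from 'apify';

// opt out of the automatic purge on startup
await Actor.init({ purge: false });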
Some options were renamed to better reflect what they do (see the sketch after this list). We still support all the old parameter names too, but not at the TS level:

- handleRequestFunction -> requestHandler
- handlePageFunction -> requestHandler
- handleRequestTimeoutSecs -> requestHandlerTimeoutSecs
- handlePageTimeoutSecs -> requestHandlerTimeoutSecs
- requestTimeoutSecs -> navigationTimeoutSecs
- handleFailedRequestFunction -> failedRequestHandler
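A minimal sketch of the same crawler expressed with the old and the new option names (the handler bodies are illustrative only):

// v2:
// const crawler = new Apify.CheerioCrawler({
//     handlePageFunction: async (context) => { /* ... */ },
//     handlePageTimeoutSecs: 60,
//     handleFailedRequestFunction: async ({ request }) => { /* ... */ },
// });

// v3:
const crawler = new CheerioCrawler({
    requestHandler: async (context) => { /* ... */ },
    requestHandlerTimeoutSecs: 60,
    failedRequestHandler: async ({ request }) => { /* ... */ },
});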
We also renamed the crawling context interfaces, so they follow the same convention and are more meaningful:

- CheerioHandlePageInputs -> CheerioCrawlingContext
- PlaywrightHandlePageFunction -> PlaywrightCrawlingContext
- PuppeteerHandlePageFunction -> PuppeteerCrawlingContext
Some utilities previously available under the Apify.utils namespace have moved to the crawling context and are context aware. This means they have some parameters automatically filled in from the context, like the current Request instance, the current Page object, or the RequestQueue bound to the crawler.

One common helper that received more attention is enqueueLinks. As mentioned above, it is context aware - we no longer need to pass in the requestQueue or page arguments (or the cheerio handle $). In addition to that, it now offers 3 enqueuing strategies:
- EnqueueStrategy.All ('all'): Matches any URLs found
- EnqueueStrategy.SameHostname ('same-hostname'): Matches any URLs that have the same subdomain as the base URL (default)
- EnqueueStrategy.SameDomain ('same-domain'): Matches any URLs that have the same domain name. For example, https://wow.an.example.com and https://example.com will both be matched for a base url of https://example.com.

This means we can even call enqueueLinks() without any parameters. By default, it will go through all the links found on the current page and filter only those targeting the same subdomain.
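To pick a different strategy, we can pass it explicitly - a minimal sketch:

const crawler = new PlaywrightCrawler({
    async requestHandler({ enqueueLinks }) {
        // also follow links to other subdomains of the same domain
        await enqueueLinks({ strategy: 'same-domain' });
        // or: await enqueueLinks({ strategy: EnqueueStrategy.SameDomain });
    },
});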
Moreover, we can specify patterns the URL should match via globs:
const crawler = new PlaywrightCrawler({
async requestHandler({ enqueueLinks }) {
await enqueueLinks({
globs: ['https://apify.com/*/*'],
// we can also use `regexps` and `pseudoUrls` keys here
});
},
});
RequestQueue instance

All crawlers now have the RequestQueue instance automatically available via the crawler.getRequestQueue() method. It will create the instance for you if it does not exist yet. This means we no longer need to create the RequestQueue instance manually, and we can just use the crawler.addRequests() method described below.
We can still create the RequestQueue explicitly; the crawler.getRequestQueue() method will respect that and return the instance provided via crawler options.
crawler.addRequests()

We can now add multiple requests in batches. The newly added addRequests method will handle everything for us. It enqueues the first 1000 requests and resolves, while continuing with the rest in the background, again in smaller batches of 1000 items, so we don't fall into any API rate limits. This means the crawling will start almost immediately (within a few seconds at most), something that was previously possible only with a combination of RequestQueue and RequestList.
// will resolve right after the initial batch of 1000 requests is added
const result = await crawler.addRequests([/* many requests, can be even millions */]);
// if we want to wait for all the requests to be added, we can await the `waitForAllRequestsToBeAdded` promise
await result.waitForAllRequestsToBeAdded;
Previously an error thrown from inside the request handler resulted in the full error object being logged. With Crawlee, we log only the error message as a warning as long as we know the request will be retried. If you want to enable verbose logging like in v2, use the CRAWLEE_VERBOSE_LOG env var.
requestAsBrowser

In v1 we replaced the underlying implementation of requestAsBrowser to be just a proxy over calling got-scraping - our custom extension to got that tries to mimic real browsers as much as possible. With v3, we are removing requestAsBrowser, encouraging the use of got-scraping directly.
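A minimal sketch of calling got-scraping directly (the URL is just an example):

import { gotScraping } from 'got-scraping';

const { body } = await gotScraping({
    url: 'https://crawlee.dev',
    responseType: 'text',
});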
For easier migration, we also added the context.sendRequest() helper that allows processing the context-bound Request object through got-scraping:
const crawler = new BasicCrawler({
async requestHandler({ sendRequest, log }) {
// we can use the options parameter to override gotScraping options
const res = await sendRequest({ responseType: 'json' });
log.info('received body', res.body);
},
});
How to use sendRequest()?

The useInsecureHttpParser option has been removed. It's permanently set to true in order to better mimic browsers' behavior.

Got Scraping automatically performs protocol negotiation, hence we removed the useHttp2 option. It's set to true - 100% of browsers nowadays are capable of HTTP/2 requests. Oh, and more and more of the web is using it too!
In the requestAsBrowser approach, some of the options were named differently. Here's a list of the renamed options:

payload

This option represents the body to send. It could be a string or a Buffer. However, there is no payload option anymore; you need to use body instead. Or, if you wish to send JSON, use json. Here's an example:
// Before:
await Apify.utils.requestAsBrowser({ …, payload: 'Hello, world!' });
await Apify.utils.requestAsBrowser({ …, payload: Buffer.from('c0ffe', 'hex') });
await Apify.utils.requestAsBrowser({ …, json: { hello: 'world' } });
// After:
await gotScraping({ …, body: 'Hello, world!' });
await gotScraping({ …, body: Buffer.from('c0ffe', 'hex') });
await gotScraping({ …, json: { hello: 'world' } });
ignoreSslErrors

It has been renamed to https.rejectUnauthorized. By default it's set to false for convenience. However, if you want to make sure the connection is secure, you can do the following:
// Before:
await Apify.utils.requestAsBrowser({ …, ignoreSslErrors: false });
// After:
await gotScraping({ …, https: { rejectUnauthorized: true } });
Please note: the meanings are opposite! So we needed to invert the values as well.
header-generator options

useMobileVersion, languageCode and countryCode no longer exist. Instead, you need to use headerGeneratorOptions directly:
// Before:
await Apify.utils.requestAsBrowser({
…,
useMobileVersion: true,
languageCode: 'en',
countryCode: 'US',
});
// After:
await gotScraping({
…,
headerGeneratorOptions: {
devices: ['mobile'], // or ['desktop']
locales: ['en-US'],
},
});
timeoutSecs

In order to set a timeout, use timeout.request (which is in milliseconds now).
// Before:
await Apify.utils.requestAsBrowser({
…,
timeoutSecs: 30,
});
// After:
await gotScraping({
…,
timeout: {
request: 30 * 1000,
},
});
throwOnHttpErrors

throwOnHttpErrors → throwHttpErrors. This option throws on unsuccessful HTTP status codes, for example 404. By default, it's set to false.
decodeBody

decodeBody → decompress. This option decompresses the body. It defaults to true - please do not change this or websites will break (unless you know what you're doing!).
abortFunction

This function used to make the promise throw on specific responses if it returned true. However, it wasn't that useful. You probably want to cancel the request instead, which you can do in the following way:
const promise = gotScraping(…);
promise.on('request', request => {
// Please note this is not a Got Request instance, but a ClientRequest one.
// https://nodejs.org/api/http.html#class-httpclientrequest
if (request.protocol !== 'https:') {
// Insecure request, abort.
promise.cancel();
// If you set `isStream` to `true`, please use `stream.destroy()` instead.
}
});
const response = await promise;
Previously, you were able to have a browser pool that would mix Puppeteer and Playwright plugins (or even your own custom plugins if you've built any). As of this version, that is no longer allowed, and creating such a browser pool will cause an error to be thrown (it's expected that all plugins that will be used are of the same type).
:::info Confused?
As an example, this change disallows a pool to mix Puppeteer with Playwright. You can still create pools that use multiple Playwright plugins, each with a different launcher if you want!
:::
One small feature worth mentioning is the ability to handle requests with browser crawlers outside the browser. To do that, we can use a combination of Request.skipNavigation and context.sendRequest(). Take a look at how to achieve this by checking out the Skipping navigation for certain requests example, or the sketch below.
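A minimal sketch of the pattern, assuming a hypothetical JSON endpoint (the URL is illustrative only):

const crawler = new PlaywrightCrawler({
    async requestHandler({ request, sendRequest, page, log }) {
        if (request.skipNavigation) {
            // this request is handled outside the browser, via got-scraping
            const res = await sendRequest({ responseType: 'json' });
            log.info('received body', res.body);
            return;
        }
        // regular browser-based handling
        log.info(await page.title());
    },
});

await crawler.addRequests([
    { url: 'https://example.com/api/items', skipNavigation: true },
]);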
Crawlee exports the default log instance directly as a named export. We also have a scoped log instance provided in the crawling context - this one will log messages prefixed with the crawler name and should be preferred for logging inside the request handler.
const crawler = new CheerioCrawler({
async requestHandler({ log, request }) {
log.info(`Opened ${request.loadedUrl}`);
},
});
Every crawler instance now has a useState() method that will return a state object we can use. It will be automatically saved when the persistState event occurs. The value is cached, so we can freely call this method multiple times and get the exact same reference. No need to worry about saving the value either, as it will happen automatically.
const crawler = new CheerioCrawler({
async requestHandler({ crawler }) {
const state = await crawler.useState({ foo: [] as number[] });
// just change the value, no need to care about saving it
state.foo.push(123);
},
});
The Apify platform helpers can now be found in the Actor SDK (the apify NPM package). It exports the Actor class, which offers the following static helpers:

- ApifyClient shortcuts: addWebhook(), call(), callTask(), metamorph()
- lifecycle helpers: init(), exit(), fail(), main(), isAtHome(), createProxyConfiguration()
- storage helpers: getInput(), getValue(), openDataset(), openKeyValueStore(), openRequestQueue(), pushData(), setValue()
- event helpers: on(), off()
- other helpers: getEnv(), newClient(), reboot()
Actor.main is now just syntax sugar around calling Actor.init() at the beginning and Actor.exit() at the end (plus wrapping the user function in a try/catch block). All those methods are async and should be awaited - with Node 16 we can use top level await for that. In other words, the following two snippets are equivalent:
import { Actor } from 'apify';
await Actor.init();
// your code
await Actor.exit('Crawling finished!');
import { Actor } from 'apify';
await Actor.main(async () => {
// your code
}, { statusMessage: 'Crawling finished!' });
Actor.init() will conditionally set the storage implementation of Crawlee to the ApifyClient when running on the Apify platform, or keep the default (memory storage) implementation otherwise. It will also subscribe to the websocket events (or mimic them locally). Actor.exit() will handle the tear down and call process.exit() to ensure our process won't hang indefinitely for some reason.
The Apify SDK exports Apify.events, which is an EventEmitter instance. With Crawlee, the events are managed by the EventManager class instead. We can either access it via the Actor.eventManager getter, or use the Actor.on and Actor.off shortcuts instead.
-Apify.events.on(...);
+Actor.on(...);
We can also get the EventManager instance via Configuration.getEventManager().
In addition to the existing events, we now have an exit event fired when calling Actor.exit() (which is called at the end of Actor.main()). This event allows you to gracefully shut down any resources when Actor.exit is called.
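For example, a minimal sketch of releasing resources on exit:

import { Actor } from 'apify';

Actor.on('exit', () => {
    // close database connections, flush buffers, etc.
});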
- Apify.call() is now just a shortcut for running ApifyClient.actor(actorId).call(input, options), while also taking the token inside env vars into account
- Apify.callTask() is now just a shortcut for running ApifyClient.task(taskId).call(input, options), while also taking the token inside env vars into account
- Apify.metamorph() is now just a shortcut for running ApifyClient.task(taskId).metamorph(input, options), while also taking the ACTOR_RUN_ID inside env vars into account
- Apify.waitForRunToFinish() has been removed, use ApifyClient.waitForFinish() instead

Actor.main/init purges the storage by default

- removed the purgeLocalStorage helper, moved purging to the storage class directly
- the StorageClient interface now has an optional purge method
- purging happens automatically via Actor.init() (you can opt out via purge: false in the options of the init/main methods)

- QueueOperationInfo.request is no longer available
- Request.handledAt is now a string date in ISO format
- Request.inProgress and Request.reclaimed are now Sets instead of POJOs
- injectUnderscore from puppeteer utils has been removed
- APIFY_MEMORY_MBYTES is no longer taken into account, use CRAWLEE_AVAILABLE_MEMORY_RATIO instead
- some AutoscaledPool options are no longer available:
  - cpuSnapshotIntervalSecs and memorySnapshotIntervalSecs have been replaced with the top level systemInfoIntervalMillis configuration
  - maxUsedCpuRatio has been moved to the top level configuration
- ProxyConfiguration.newUrlFunction can be async. .newUrl() and .newProxyInfo() now return promises.
- prepareRequestFunction and postResponseFunction options are removed, use navigation hooks instead
- gotoFunction and gotoTimeoutSecs are removed

Request props

- fingerprintsOptions renamed to fingerprintOptions (fingerprints -> fingerprint)
- fingerprintOptions now accept useFingerprintCache and fingerprintCacheSize (instead of useFingerprintPerProxyCache and fingerprintPerProxyCacheSize, which are now no longer available). This is because the cached fingerprints are no longer connected to proxy URLs but to sessions.

Full Changelog: https://github.com/apify/crawlee/compare/v2.3.2...v3.0.0
Published by B4nan over 2 years ago
Full Changelog: https://github.com/apify/apify-js/compare/v2.3.1...v2.3.2
Published by B4nan over 2 years ago
- utils.apifyClient early instantiation by @barjin in https://github.com/apify/apify-js/pull/1330
- RequestList by @mnmkng in https://github.com/apify/apify-js/pull/1347

This should help with the 'We either navigate top level or have old version of the navigated frame' bug in puppeteer.

- RequestTransform's return type
- utils.playwright.injectJQuery by @barjin in https://github.com/apify/apify-js/pull/1337
- keyValueStore option to Statistics class by @B4nan in https://github.com/apify/apify-js/pull/1345
- page.authenticate as it disables cache

Full Changelog: https://github.com/apify/apify-js/compare/v2.3.0...v2.3.1
Published by B4nan over 2 years ago
- enqueueLinksByClickingElements by @audiBookning in https://github.com/apify/apify-js/pull/1295
- RequestList accepts ProxyConfiguration for requestsFromUrls by @barjin in https://github.com/apify/apify-js/pull/1317
- KeyValueStore.setRecord by @gahabeen in https://github.com/apify/apify-js/pull/1325
- update playwright to v1.20.2
- update puppeteer to v13.5.2

We noticed that with this version of puppeteer, actor runs could crash with the 'We either navigate top level or have old version of the navigated frame' error (puppeteer issue here). It should not happen while running the browser in headless mode. In case you need to run the browser in headful mode (headless: false), we recommend pinning the puppeteer version to 10.4.0 in the actor's package.json file.
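A minimal package.json sketch of that pinning (only the relevant part is shown):

{
  "dependencies": {
    "puppeteer": "10.4.0"
  }
}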
- reset RequestQueue state after 5 minutes of inactivity by @B4nan in https://github.com/apify/apify-js/pull/1324

This release should resolve the 0 concurrency bug by automatically resetting the internal RequestQueue state after 5 minutes of inactivity.

We now track the last activity done on a RequestQueue instance (including updates of the inProgress cache). If we don't detect one of those actions in the last 5 minutes, and we have some requests in the inProgress cache, we try to reset the state. We can override this limit via the APIFY_INTERNAL_TIMEOUT env var.

This should finally resolve the 0 concurrency bug, as it was always about stuck requests in the inProgress cache.
Full Changelog: https://github.com/apify/apify-js/compare/v2.2.2...v2.3.0
Published by B4nan over 2 years ago
- request.headers is set by @B4nan in https://github.com/apify/apify-js/pull/1281

This release should help with the infamous 0 concurrency bug. The problem is probably still there, but it should be much less common. The main difference is that we now use shorter timeouts for API calls from RequestQueue.
Full Changelog: https://github.com/apify/apify-js/compare/v2.2.1...v2.2.2
Published by B4nan almost 3 years ago
- tryCancel() from inside sync callback by @B4nan in https://github.com/apify/apify-js/pull/1265
- body is not available in infiniteScroll() from Puppeteer utils by @B4nan in https://github.com/apify/apify-js/pull/1277
- utils.log instance by @B4nan in https://github.com/apify/apify-js/pull/1278
Full Changelog: https://github.com/apify/apify-js/compare/v2.2.0...v2.2.1
Published by B4nan almost 3 years ago
Up until now, browser crawlers used the same session (and therefore the same proxy) for all requests from a single browser - now we get a new proxy for each session. This means that with incognito pages, each page will get a new proxy, aligning the behaviour with CheerioCrawler.
This feature is not enabled by default. To use it, we need to enable the useIncognitoPages flag under launchContext:
new Apify.PlaywrightCrawler({
launchContext: {
useIncognitoPages: true,
},
// ...
})
Note that currently there is a performance overhead for using useIncognitoPages. Use this flag at your own will. We are planning to enable this feature by default in SDK v3.0.
Previously when a page function timed out, the task still kept running. This could lead to requests being processed multiple times. In v2.2 we now have abortable timeouts that will cancel the task as early as possible.
Several new timeouts were added to the task function, which should help mitigate the zero concurrency bug. Namely, fetching of the next request information and reclaiming failed requests back to the queue are now executed with a timeout, with 3 additional retries before the task fails. The timeout is always at least 300s (5 minutes), or handleRequestTimeoutSecs if that value is higher.
- RequestError: URI malformed in cheerio crawler (#1205)
- diffCookie (#1217)
- runTaskFunction() (#1250)

Published by B4nan about 3 years ago
- purgeLocalStorage method by @vladfrangu in https://github.com/apify/apify-js/pull/1187
- pass forceCloud down to the KV store by @vladfrangu in https://github.com/apify/apify-js/pull/1186
- YOUTUBE_REGEX_STRING being too greedy by @B4nan in https://github.com/apify/apify-js/pull/1171
- fixUrl function by @szmarczak in https://github.com/apify/apify-js/pull/1184
Full Changelog: https://github.com/apify/apify-js/compare/v2.0.7...v2.1.0