Scrapy-crawlera

Latest version: v1.7.2

1.7.2
-------------------

- Use request.meta instead of response.meta in the middleware
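
A minimal sketch of the pattern this fix points at: reading per-request state
from ``request.meta`` inside a Scrapy downloader middleware. The class name and
the ``skip_processing`` flag below are illustrative only, not the library's
actual code::

    class RequestMetaMiddleware:
        """Illustrative downloader middleware that keys its decisions off
        request.meta, which is always tied to the originating request;
        response.meta is only a shortcut through response.request."""

        def process_response(self, request, response, spider):
            if request.meta.get("skip_processing"):  # hypothetical flag
                return response
            # ...normal handling would go here...
            return response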

1.7.1
-------------------

- Consider a response to be from Crawlera only if it contains the `X-Crawlera-Version` header (see the sketch after this list)
- Build the documentation in Travis CI and fail on documentation issues
- Update matrix of tests
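
A rough illustration of that header check, assuming a standard Scrapy response
object; the helper name is made up for the example::

    def is_crawlera_response(response):
        # Scrapy header lookups are case-insensitive; get() returns bytes or None.
        return response.headers.get("X-Crawlera-Version") is not None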

1.7.0
-------------------

- Added more stats to give a better understanding of the internal state.
- Log a warning when using the `https://` protocol.
- Add a default `http://` protocol when none is provided, and log a warning about it (see the sketch after this list).
- Fix duplicated requests when the response is not from Crawlera; this was causing an
infinite loop of retries when `dont_filter=True`.
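
A hedged sketch of the protocol-defaulting behaviour described above; the
helper name and warning text are assumptions for illustration, not the
middleware's actual implementation::

    import logging

    logger = logging.getLogger(__name__)

    def normalize_crawlera_url(url):
        """If the configured Crawlera URL carries no protocol, assume
        http:// and warn; also warn when https:// is used."""
        if "://" not in url:
            logger.warning("No protocol given for Crawlera URL %r; assuming http://", url)
            return "http://" + url
        if url.startswith("https://"):
            logger.warning("Using https:// to reach Crawlera")
        return url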

1.6.0
-------------------

- Enable Crawlera on demand by setting ``CRAWLERA_FORCE_ENABLE_ON_HTTP_CODES``
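
A ``settings.py`` sketch of that on-demand mode, assuming the documented
``scrapy_crawlera.CrawleraMiddleware`` path and the ``CRAWLERA_APIKEY`` /
``CRAWLERA_ENABLED`` settings; the status codes are examples only::

    # settings.py
    DOWNLOADER_MIDDLEWARES = {
        "scrapy_crawlera.CrawleraMiddleware": 610,
    }

    CRAWLERA_APIKEY = "<your API key>"
    # Keep Crawlera off by default and switch it on only after one of the
    # listed HTTP status codes is seen.
    CRAWLERA_ENABLED = False
    CRAWLERA_FORCE_ENABLE_ON_HTTP_CODES = [403, 503]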

1.5.1
-------------------

- Remove username and password from settings, since they have been removed from Crawlera.
- Include the affected spider in logs.
- Handle situations where Crawlera is restarted and replies with 407s for a few minutes
by retrying the requests with an exponential backoff system.
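
As a rough illustration of the exponential backoff idea (not the library's
code): a capped, jittered delay that grows with the retry count::

    import random

    def backoff_delay(attempt, base=1.0, cap=60.0):
        """Seconds to wait before retry number `attempt` (1, 2, 3, ...)."""
        return min(cap, random.uniform(0, base * (2 ** attempt)))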

1.5.0
-------------------

- Correctly check for bans in Crawlera (jobs will not get banned on non-ban 503s).
- Exponential backoff when Crawlera doesn't have proxies available.
- Fix the ``dont_proxy=False`` meta key disabling Crawlera when it is enabled.
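
For context, a usage sketch of that per-request opt-out, assuming the
``dont_proxy`` key lives in ``request.meta`` as the entry above suggests; the
spider and URL are placeholders::

    import scrapy

    class ExampleSpider(scrapy.Spider):
        name = "example"

        def start_requests(self):
            # A truthy dont_proxy skips Crawlera for this request only;
            # setting it to False should leave Crawlera enabled (the bug fixed here).
            yield scrapy.Request(
                "https://example.com/",
                meta={"dont_proxy": True},
            )

        def parse(self, response):
            self.logger.info("Fetched %s directly", response.url)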
