Twitterscraper

Latest version: v1.6.1

Page 2 of 6

1.2.1

Fixed
- PR 208: Fixed a typo in a print statement which was breaking twitterscraper
- Removed the use of the fake_useragent library

1.2.0

Added
- PR 186: adds the fields is_retweet, retweeter related information, and timestamp_epochs to the output.
- PR 184: use fake_useragent for generation of random user agent headers.
- Additionally scrapes 'is_verified' when scraping user profile pages.

1.1.0

Added
- PR 176: Use the billiard library instead of multiprocessing, so that this library can be used together with Celery.
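billiard is a maintained fork of multiprocessing that exposes the same Pool API, so the switch is typically just an import change. A minimal sketch (using the stdlib multiprocessing here for illustration; with billiard installed the import would read `from billiard import Pool` instead, and `scrape_query` is a hypothetical stand-in for a per-query scraping job):

```python
from multiprocessing import Pool

def scrape_query(query):
    # Hypothetical placeholder for a per-query scraping job.
    return f"results for {query!r}"

if __name__ == "__main__":
    # Fan the queries out over a small worker pool.
    with Pool(processes=2) as pool:
        results = pool.map(scrape_query, ["python", "rust"])
    print(results)
```

Because billiard mirrors this API, code written against multiprocessing's Pool keeps working unchanged when run inside a Celery worker.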

1.0.1

Fixed
- PR 191: Fixed a wrong argument being used in the method query_tweets_from_user()
- The CSV output file now uses ";" as the default separator.
- PR 173: Some small improvements on the profile page scraping.
Added
- Command line argument -ow / --overwrite to indicate if an existing output file should be overwritten.
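Since ";" is now the default separator, downstream consumers of the CSV output need to pass that delimiter explicitly. A sketch using the stdlib csv module (the two-column sample below is hypothetical, not twitterscraper's exact column layout):

```python
import csv
import io

# Hypothetical semicolon-separated sample standing in for a
# twitterscraper output file.
sample = "user;text\nalice;hello world\nbob;hi there\n"

# Read it back, telling the csv module about the ";" delimiter.
rows = list(csv.DictReader(io.StringIO(sample), delimiter=";"))
print(rows[0]["text"])  # hello world
```

The same `delimiter=";"` (or pandas' `sep=";"`) argument applies when loading a real output file from disk.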

1.0.0

Added
- PR 159: scrapes user profile pages for additional information.
Fixed
- Moved example scripts demonstrating use of get_user_info() functionality to examples folder
- removed screenshot demonstrating get_user_info() works
- Added command line argument to main.py which calls get_user_info() for all users in list of scraped tweets.

0.9.3

Fixed
- PR 143: cancels query if end-date is earlier than begin-date.
- PR 151: the returned json_resp['min_position'] is parsed in order to quote special characters.
- PR 153: cast Tweet attributes to proper data types (int instead of str)
- Use codecs.open() to write to file. Should fix issues 144 and 147.
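The first and third fixes above can be sketched as follows. `validate_dates` is a hypothetical helper illustrating the PR 143 behaviour, not twitterscraper's actual function name, and the retweet-count value is made up:

```python
import datetime as dt

def validate_dates(begindate: dt.date, enddate: dt.date) -> None:
    # PR 143 behaviour: refuse to run a query whose end date
    # precedes its begin date.
    if enddate < begindate:
        raise ValueError("end-date must not be earlier than begin-date")

# PR 153 behaviour: numeric attributes scraped from HTML arrive as
# strings and are cast to their proper types.
raw_retweets = "42"
retweets = int(raw_retweets)

validate_dates(dt.date(2019, 1, 1), dt.date(2019, 6, 1))  # ok
print(retweets)  # 42
```

With the validation in place, a reversed date range fails fast with a clear error instead of silently scraping nothing.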

