GPT Researcher

Latest version: v0.4.3


0.1.5

Another great release thanks to the amazing community! ❤️

Big shoutout to the following contributors:
proy9714 for adding newspaper3k support for better article scraping https://github.com/assafelovic/gpt-researcher/pull/365
jimmylin0979 for adding support for additional embeddings such as Mistral, Ollama and HuggingFace https://github.com/assafelovic/gpt-researcher/pull/375
assafelovic for adding support for PDF styling of research reports https://github.com/assafelovic/gpt-researcher/pull/396
WarrenTheRabbit for fixing a documentation typo https://github.com/assafelovic/gpt-researcher/pull/391

Thank you to everyone, and we're looking forward to more contributions!

0.1.4

Excited to introduce the latest version, which removes strict dependencies from requirements.txt, fixes some installation issues, and adds support for virtual environments and Poetry!

Big shoutout to contributor aaaastark for the PR: https://github.com/assafelovic/gpt-researcher/pull/319

0.1.3

Releasing a new version that resolves dependency issues with the latest version.

0.1.2

Excited to kick off the new year with a long awaited feature: Research report on specific urls! 🎉

You can now skip the search by providing urls directly to GPTResearcher and create a research report like so:

```python
import asyncio

from gpt_researcher import GPTResearcher

urls = [
    "https://docs.tavily.com/docs/tavily-api/introduction",
    "https://docs.tavily.com/docs/tavily-api/python-sdk",
    "https://docs.tavily.com/docs/tavily-api/rest_api",
]

query = "How can I integrate Tavily Rest API with my application?"

async def get_report(query: str, source_urls: list) -> str:
    # Passing source_urls skips the search phase and researches only these pages
    researcher = GPTResearcher(query=query, source_urls=source_urls)
    report = await researcher.run()
    return report

report = asyncio.run(get_report(query, urls))
print(report)
```


The release includes additional stability and performance improvements, along with updated library dependencies.

0.1.1

Excited to release the latest version aimed at improving overall research performance! 🎉

We're introducing a new approach to extracting relevant information from scraped sites using Contextual Compression. We now leverage embeddings to better store and retrieve information across the research lifecycle. This latest improvement reduces research task time by an average of 60%, increases quality by ~30% and reduces GPT costs by 50% (we don't summarize with GPT anymore).
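The core idea behind the compression step can be illustrated with a toy sketch: embed each scraped chunk, rank chunks by similarity to the research query, and keep only the most relevant ones so everything else never reaches the LLM. This is not the library's actual implementation — gpt-researcher uses real embedding models, while this stand-in uses a simple bag-of-words vector:

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in embedding: a bag-of-words vector. A real pipeline
    # would call an embeddings model instead.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def compress(query: str, chunks: list[str], top_k: int = 2) -> list[str]:
    # Keep only the chunks most similar to the query -- irrelevant
    # chunks are dropped before any LLM call, which is where the
    # cost and time savings come from.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:top_k]

chunks = [
    "Tavily's REST API returns search results as JSON.",
    "The mayor opened a new bridge downtown yesterday.",
    "Authenticate to the Tavily API with your API key.",
]
print(compress("How do I use the Tavily API?", chunks))
```

The off-topic chunk is filtered out before report generation, so the LLM only ever sees query-relevant context.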

In addition, thank you to our amazing contributors:
reasonmethis for the serp retriever fix: https://github.com/assafelovic/gpt-researcher/pull/261
devon-ye for the Chinese README addition: https://github.com/assafelovic/gpt-researcher/pull/254

0.1.0

We’re excited to release the next generation of GPT Researcher! We’ve completely refactored the code base to be more modular, customizable, stable and accurate. We’ve added many new features, improvements and bug fixes.

Below is a list of the main changes:
- Improved report generation prompt for better accuracy and quality.
- Added a new config structure including support for external JSON files.
- Redesigned the GPT Researcher library so it can be used as a standalone agent in any project (see example in repo).
- Added new structure for retrievers, enabling a better developer experience for adding and modifying information retrievers.
- Optimized configuration for the latest GPT-4 Turbo model.
- Fixed issues with scraping and improved overall stability and speed.
- Added support for scraping arXiv and PDF URLs.
- Added a rich documentation site with Docusaurus (see docs.tavily.com).
- Updated all Python packages to the latest versions for the most up-to-date performance and experience.
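As a rough illustration of the external JSON config mentioned above — the field names and values here are guesses for illustration only, not the exact schema (check the repo's config for the real keys) — an override file might look like:

```json
{
  "retriever": "tavily",
  "llm_provider": "openai",
  "fast_llm_model": "gpt-3.5-turbo-16k",
  "smart_llm_model": "gpt-4-1106-preview",
  "temperature": 0.55
}
```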

Just `git pull` the latest version and give it a run!
Next, we’re building embedding support and long term memory!
To see what’s next on our roadmap check it out here: https://trello.com/b/3O7KBePw/gpt-researcher-roadmap

