What's Changed
* Added preliminary support for local models with [Ollama](https://ollama.com) (a sketch of a local-model call follows this list)
* Fixed a bug in type stripping / equivalence processing that could prevent successful type annotation, by emeryberger
* Prevented bad code blocks from making it into the generated code; factored out prompt generation, by emeryberger in https://github.com/plasma-umass/commentator/pull/11
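
The following is a minimal sketch of what routing a commenting request to a locally served Ollama model might look like through LiteLLM (which the project also supports, per the notes below). The model name, prompt, and helper function are illustrative assumptions, not the tool's actual configuration.

```python
# Hypothetical example: send one function to a local Ollama model via LiteLLM.
from litellm import completion

def comment_function(source: str) -> str:
    """Ask a locally served Ollama model to add a docstring to one function."""
    response = completion(
        model="ollama/codellama",           # any model pulled with `ollama pull`
        api_base="http://localhost:11434",  # Ollama's default local endpoint
        messages=[
            {"role": "user",
             "content": f"Add a PEP 257 docstring to this Python function:\n\n{source}"}
        ],
    )
    return response.choices[0].message.content
```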
What's Changed
* Asynchronous operation for vastly higher throughput; parallelizes processing across and within files, by emeryberger (a rough sketch of this fan-out appears after this list)
* Improved validity checking, by emeryberger in https://github.com/plasma-umass/commentator/pull/6
* Changed to handle nested functions, by emeryberger in https://github.com/plasma-umass/commentator/pull/7
* LiteLLM and Bedrock support, by emeryberger in https://github.com/plasma-umass/commentator/pull/9
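
Here is a rough sketch of the kind of asynchronous fan-out described above: all files are processed concurrently, and the functions within each file are commented concurrently as well. The function names and the simulated LLM call are stand-ins, not the tool's actual code.

```python
import asyncio

async def comment_one_function(src: str) -> str:
    # Stand-in for an asynchronous LLM call; the real tool would await
    # a completion API here.
    await asyncio.sleep(0.1)
    return '"""TODO: generated docstring."""\n' + src

async def process_file(functions: list[str]) -> list[str]:
    # Comment every function in one file concurrently.
    return await asyncio.gather(*(comment_one_function(f) for f in functions))

async def process_all(files: dict[str, list[str]]) -> dict[str, list[str]]:
    # Process all files concurrently as well.
    results = await asyncio.gather(*(process_file(fns) for fns in files.values()))
    return dict(zip(files.keys(), results))

# asyncio.run(process_all({"a.py": ["def f(): pass"], "b.py": ["def g(): pass"]}))
```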
This early release uses ChatGPT to comment your code (currently limited to Python, with more languages to come!). It processes code one function at a time to avoid exceeding ChatGPT's token limits.
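
A minimal sketch of that one-function-at-a-time strategy: split a Python module into its individual function definitions with `ast`, so that each prompt stays well within the model's context window. This is illustrative only, not the tool's implementation.

```python
import ast

def functions_in(source: str) -> list[str]:
    """Return the source of each function definition in a module."""
    tree = ast.parse(source)
    return [
        ast.get_source_segment(source, node)
        for node in ast.walk(tree)
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
    ]

module = "def add(a, b):\n    return a + b\n\ndef sub(a, b):\n    return a - b\n"
for fn in functions_in(module):
    print(fn)  # each snippet would be sent to the model as its own request
```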