Support for new and cheaper `gpt-4o-2024-08-06` model
0.3.4
Fixed decoding issue
0.3.3
Full support for `gpt-4o-mini`
0.3.2
- Support for `gpt-4o` (including the new tokenizer)
- Fixed the maximum allowed tokens of the standard `gpt-3.5-turbo` model (now correctly 16K)
- Thrown `LLMException`s now carry valuable information inside the exception object, such as the compiled system and user prompt
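
The `LLMException` behavior described above can be illustrated with a minimal sketch. This is not the library's actual API: the class definition, constructor, and attribute names (`system_prompt`, `user_prompt`) below are assumptions made purely for illustration; consult the library's source for the real interface.

```python
# Hypothetical stand-in for the library's LLMException, illustrating the idea
# of an exception that carries the compiled prompts for debugging.
class LLMException(Exception):
    def __init__(self, message, system_prompt, user_prompt):
        super().__init__(message)
        self.system_prompt = system_prompt  # compiled system prompt (assumed attribute name)
        self.user_prompt = user_prompt      # compiled user prompt (assumed attribute name)


def call_model():
    # Stand-in for a failing API call; in the real library the exception is raised internally.
    raise LLMException(
        "Request failed",
        system_prompt="You are a helpful assistant.",
        user_prompt="Summarize this document.",
    )


try:
    call_model()
except LLMException as exc:
    # Inspect the compiled prompts carried by the exception when debugging a failed request.
    print("System prompt:", exc.system_prompt)
    print("User prompt:", exc.user_prompt)
```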