Added new optional parameters for prompt completion (usage sketch below):
- **temperature**: Controls the variety of the model's responses (range 0.0-2.0).
  - Lower values (e.g. 0.1-1.0) produce more conservative, factual responses.
  - Higher values (e.g. 1.0-2.0) produce more diverse, imaginative outputs.
- **max_tokens**: Sets the maximum number of tokens the model can generate in its response.
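
For illustration, a minimal sketch of how these parameters might be passed, assuming the `aio_straico_client` entry point and `prompt_completion` coroutine shown in the project README; the model name and API key below are placeholders:

```python
import asyncio
from aio_straico import aio_straico_client

async def main():
    # Placeholder API key; substitute your own Straico key.
    async with aio_straico_client(API_KEY="your-api-key") as client:
        reply = await client.prompt_completion(
            "openai/gpt-3.5-turbo-0125",  # placeholder model identifier
            "Summarize special relativity in one sentence.",
            temperature=0.2,  # low temperature: conservative, factual output
            max_tokens=100,   # cap the generated response at 100 tokens
        )
        print(reply)

asyncio.run(main())
```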
**Full Changelog**: https://github.com/jayrinaldime/aio-straico/compare/0.0.6...0.0.7