RealtimeTTS

Latest version: v0.4.48


0.4.13

🚀 New Features
**EdgeEngine**
- Introducing **EdgeEngine**, a free, extremely lightweight, and beginner-friendly engine.
- Designed for simplicity with no complex dependencies, making it ideal for lightweight projects or newcomers to TTS.
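
A minimal usage sketch (assuming EdgeEngine is constructed with default arguments like the other engines; check the RealtimeTTS README for its actual options):

```python
# Sketch only: assumes EdgeEngine works with a default constructor,
# like the other RealtimeTTS engines; not taken from official docs.
from RealtimeTTS import TextToAudioStream, EdgeEngine

engine = EdgeEngine()                  # lightweight engine, no heavy dependencies
stream = TextToAudioStream(engine)

stream.feed("Hello from the new EdgeEngine!")  # queue text for synthesis
stream.play()                                  # synthesize and play back in realtime
```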

🛠 Bug Fixes
- Resolved **ValueError**: `('Sample format not supported', -9994)` (#221).
- Fixed **RecursionError**: `maximum recursion depth exceeded` (#222).
- `resampy` no longer needs to be installed manually after installing RealtimeTTS.

0.4.11

- Optimizations for Linux
- Fixed setting the multiprocessing "spawn" start method on Linux
- If the TTS engine's output sample rate is not supported by the sound card, chunks now get resampled (see the sketch after this list)
- Added a mechanism to prevent potential stream buffer overflows
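
For illustration, on-the-fly resampling of an audio chunk with resampy looks roughly like this (a sketch of the general technique, not RealtimeTTS's internal code):

```python
# Illustrative sketch of resampling an audio chunk with resampy;
# not RealtimeTTS's internal code, just the general technique described above.
import numpy as np
import resampy

def resample_chunk(chunk: np.ndarray, engine_rate: int, device_rate: int) -> np.ndarray:
    """Resample a mono float32 chunk from the engine's rate to the sound card's rate."""
    if engine_rate == device_rate:
        return chunk
    return resampy.resample(chunk, engine_rate, device_rate)

# Example: an engine producing 24 kHz audio on a card that only accepts 48 kHz
chunk_48k = resample_chunk(np.zeros(2400, dtype=np.float32), 24000, 48000)
```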

0.4.10

- New stream2sentence version 0.2.7:
  - Bugfix for [#5](https://github.com/KoljaB/stream2sentence/issues/5) (whitespace between words sometimes got lost)
  - Upgrade to the latest NLTK and Stanza versions, including the new "punkt-tab" model
  - Allows an offline environment for Stanza
  - Adds support for async streams (preparation for async in RealtimeTTS)
- Dependency upgrades to the latest versions (coqui tts 0.24.2 ➡️ 0.24.3, elevenlabs 1.11.0 ➡️ 1.12.1, openai 1.52.2 ➡️ 1.54.3)
- Added `load_balancing` parameter to CoquiEngine (see the sketch below):
  - On a fast machine with a realtime factor well below 1, inference runs much faster than needed
  - This parameter lets inference run at a realtime factor closer to 1, so you still get streaming voice output while GPU load drops to the minimum needed to produce chunks in realtime
  - If you run LLM inference in parallel, it gets faster because TTS takes less load
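
A hedged sketch of enabling it (the parameter name comes from these notes; defaults and any companion tuning options may differ in your installed version):

```python
# Sketch only: load_balancing is the parameter named in these notes;
# exact defaults and related options may differ between RealtimeTTS versions.
from RealtimeTTS import TextToAudioStream, CoquiEngine

engine = CoquiEngine(load_balancing=True)   # keep realtime factor near 1, lowering GPU load
stream = TextToAudioStream(engine)

stream.feed("Load balancing keeps synthesis just fast enough for realtime playback.")
stream.play()
```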

0.4.9

- Added `print_realtime_factor` to CoquiEngine
- Removed a debug message that accidentally made it into the PyPI release

0.4.8

- Added **ParlerEngine**. Requires flash attention, and even then it barely runs fast enough for realtime inference on a 4090.

Parler Installation for Windows (after installing RealtimeTTS):

```bash
pip install git+https://github.com/huggingface/parler-tts.git
pip install torch==2.3.1+cu121 torchaudio==2.3.1 --index-url https://download.pytorch.org/whl/cu121
pip install https://github.com/oobabooga/flash-attention/releases/download/v2.6.3/flash_attn-2.6.3+cu122torch2.3.1cxx11abiFALSE-cp310-cp310-win_amd64.whl
pip install "numpy<2"
```

0.4.7

- Updated requirements.txt, minor README updates
