RealtimeSTT

Latest version: v0.3.94


0.3.94

- **New parameters for the `stop()` method of `AudioToTextRecorder`:**
  - `backdate_stop_seconds` (float, default=0.0):
    - **Description:** Specifies the number of seconds to backdate the stop time when ending a recording.
    - **Usage:** When invoking `stop()` after a wake-word detection or a speaker-diarization change event, this parameter compensates for detection latency, ensuring that only relevant audio is included in the recording and transcription.

  - `backdate_resume_seconds` (float, default=0.0):
    - **Description:** Specifies the number of seconds to backdate the resume time when listening restarts after a recording has stopped.
    - **Usage:** Typically set to the same value as `backdate_stop_seconds`; allows fine-tuning of when listening effectively resumes.
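
A minimal sketch of how these parameters might be used, assuming the `stop()` keywords behave as described above; the 0.5 s values and the manual `start()`/`stop()` flow are illustrative, not a recommendation:

```python
from RealtimeSTT import AudioToTextRecorder

recorder = AudioToTextRecorder()

recorder.start()  # begin recording manually

# ... later, e.g. when a wake word or a speaker-diarization change is detected ...
recorder.stop(
    backdate_stop_seconds=0.5,    # trim detection latency off the end of the recording
    backdate_resume_seconds=0.5,  # treat listening as having resumed 0.5 s earlier as well
)

print(recorder.text())  # transcription of the (backdated) recording
recorder.shutdown()
```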

0.3.93

- Fixed the stt-server, which was broken by an API change in an upgraded webserver dependency
- Added `initial_prompt_realtime` to `AudioToTextRecorder`, making it possible to give different prompts to the final and realtime models (see the sketch below)
- Added new parameters to the client/server (download root, batch sizes)
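
A rough illustration of the new parameter: `initial_prompt_realtime` comes from this release's notes, while the other constructor arguments shown are assumptions about a typical realtime setup.

```python
from RealtimeSTT import AudioToTextRecorder

recorder = AudioToTextRecorder(
    model="small.en",                                # final transcription model
    enable_realtime_transcription=True,
    realtime_model_type="tiny.en",                   # realtime transcription model
    initial_prompt="Formal meeting notes.",          # prompt for the final model
    initial_prompt_realtime="Short live captions.",  # prompt for the realtime model
)
```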

0.3.92

- Fixed dependencies that caused `ImportError: cannot import name 'BatchedInferencePipeline' from 'faster_whisper'`

0.3.91

- Upgraded to 0.3.91, since 0.3.9 had issues on PyPI

0.3.81

Enhanced [CLI Interface](https://github.com/KoljaB/RealtimeSTT/tree/master/RealtimeSTT_server)
- Introduced the `-sed` command for improved speech end detection
- Added the `-l` command to set the language
- Implemented the `-L` command to quickly display a list of all available audio input devices
- Enabled setting the audio input device index
- Improved piping support for seamless use with `>` or `|`

0.3.9

🚀 New Features

**Batched Transcription**
- Added support for **batched transcription** in both the main and real-time models, which improves performance and efficiency
- New parameters introduced:
- **`batch_size`**: Controls the batch size for main transcription tasks.
- **`realtime_batch_size`**: Configures batch size for real-time transcription.

This feature is designed to speed up processing. I can't say yet whether there are cases where the batching overhead hurts performance. It looked promising in my initial tests, but I need your feedback! Please report if you run into any issues or notice even slower transcription due to batching.
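
A hedged sketch of enabling the new parameters; the batch sizes and the other constructor arguments are illustrative assumptions, not tuned recommendations.

```python
from RealtimeSTT import AudioToTextRecorder

recorder = AudioToTextRecorder(
    model="small.en",
    batch_size=16,                       # batching for the main transcription model
    enable_realtime_transcription=True,
    realtime_batch_size=8,               # batching for the realtime transcription model
)

while True:
    print(recorder.text())               # blocks until a finished utterance is transcribed
```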
