cdp-backend

Latest version: v4.1.3

4.1.2

**Full Changelog**: https://github.com/CouncilDataProject/cdp-backend/compare/v4.1.1...v4.1.2

4.1.0

What's Changed
* feature/reduce-event-gather-complexity by evamaxfield in https://github.com/CouncilDataProject/cdp-backend/pull/232
* Remove Unnecessary Re-Encode If Video Is Already H264 by whargrove in https://github.com/CouncilDataProject/cdp-backend/pull/234
* Use requests stream and shutil.copyfileobj to constrain memory usage during resource copy by whargrove in https://github.com/CouncilDataProject/cdp-backend/pull/236
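
As a rough illustration of the two fixes above: the sketch below probes a file's codec before deciding to re-encode and streams a remote download to disk in chunks. The helper names (`is_already_h264`, `stream_copy`) and the exact ffprobe invocation are assumptions for this example, not the actual code from PRs #234 and #236.

```python
import json
import shutil
import subprocess

import requests


def is_already_h264(path: str) -> bool:
    """Ask ffprobe for the first video stream's codec; skip re-encoding when it is already H.264."""
    result = subprocess.run(
        [
            "ffprobe", "-v", "error",
            "-select_streams", "v:0",
            "-show_entries", "stream=codec_name",
            "-of", "json",
            path,
        ],
        capture_output=True,
        text=True,
        check=True,
    )
    streams = json.loads(result.stdout).get("streams", [])
    return bool(streams) and streams[0].get("codec_name") == "h264"


def stream_copy(url: str, dest: str, chunk_size: int = 1024 * 1024) -> None:
    """Download a resource in fixed-size chunks so memory usage stays roughly constant."""
    with requests.get(url, stream=True) as response:
        response.raise_for_status()
        response.raw.decode_content = True  # honour any Content-Encoding on the response
        with open(dest, "wb") as handle:
            shutil.copyfileobj(response.raw, handle, length=chunk_size)
```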


**Full Changelog**: https://github.com/CouncilDataProject/cdp-backend/compare/v4.0.9...v4.1.0

4.0.9

Pull in `faster-whisper` directly from PyPI; the new `faster-whisper` lib also pulls in the base library's changes that allow word-level timestamps (we no longer have to linearly interpolate!). Finally, this is an attempt to fix a JSON decode error during config reading.
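
As a loose sketch of what word-level timestamps look like through `faster-whisper` (the model size, device, and file name here are placeholders, not cdp-backend's actual configuration):

```python
from faster_whisper import WhisperModel

# Any installed model size works; "small" keeps the example light.
model = WhisperModel("small", device="cpu", compute_type="int8")

# word_timestamps=True asks the library for per-word start/end times,
# so there is no longer any need to linearly interpolate them.
segments, info = model.transcribe("meeting_audio.wav", word_timestamps=True)

for segment in segments:
    for word in segment.words:
        print(f"[{word.start:6.2f} -> {word.end:6.2f}] {word.word}")
```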

What's Changed
* docs/improve-dev-infra-setup by isaacna in https://github.com/CouncilDataProject/cdp-backend/pull/229


**Full Changelog**: https://github.com/CouncilDataProject/cdp-backend/compare/v4.0.8...v4.0.9

4.0.9.rc1

**Full Changelog**: https://github.com/CouncilDataProject/cdp-backend/compare/v4.0.9.rc0...v4.0.9.rc1

4.0.9.rc0

Pull in `faster-whisper` directly from PyPI; the new `faster-whisper` lib also pulls in the base library's changes that allow word-level timestamps (we no longer have to linearly interpolate!). Finally, this is an attempt to fix a JSON decode error during config reading.

What's Changed
* docs/improve-dev-infra-setup by isaacna in https://github.com/CouncilDataProject/cdp-backend/pull/229


**Full Changelog**: https://github.com/CouncilDataProject/cdp-backend/compare/v4.0.0...v4.0.9.rc0

4.0.0

There are two main changes for this release.

1. **We are swapping out Google Speech-to-Text for OpenAI's Whisper.**

Specifically, we are using a forked version called [faster-whisper](https://github.com/guillaumekln/faster-whisper). This new speech-to-text model performs much better (word error rates ranging from ~3.6% to ~9% on long audio files).

To use this new model efficiently, we need access to a GPU. Since GitHub Actions runners do not have GPUs available, we use a system that spins up a Google Cloud Compute Engine instance, connects to it, runs our job, and then tears it down, all within a single GitHub Actions workflow. From multiple tests, this should reduce both cost and processing time; however, with this release we will do more testing to get a better estimate.
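
A minimal sketch of a GPU-backed transcription call with `faster-whisper` (the model size, compute type, and file name are illustrative assumptions, not the pipeline's actual settings):

```python
from faster_whisper import WhisperModel

# On a Compute Engine instance with an attached GPU, faster-whisper can run
# with CUDA and half-precision weights for much faster inference.
model = WhisperModel("large-v2", device="cuda", compute_type="float16")

segments, _info = model.transcribe("session_recording.wav", beam_size=5)
for segment in segments:
    print(f"[{segment.start:8.2f} -> {segment.end:8.2f}] {segment.text}")
```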

2. **We have switched from the MIT License to the MPLv2 License.**

Unless you are trying to fork our code and take it private, this won't affect you.
