Garak

Latest version: v0.9.0.13.post1

0.9.0.8

* Rename ART to AG (Attack Generator)
* Add generator support for NeMo LLM
* Add generator support for OctoML
* Add generic REST connector, with configs
* Add option to parallelise requests
* Add option to parallelise attempts (see the example after this list)
* Include AutoDAN probe
* Added "interactive mode", where you get a garak CLI πŸŽ‰
* Fix continuation probe trigger alignment
* Fix RTP prompts to be aggressive
* Add support for langchain LLM interface
* Upgrade avidtools dependency
* Improve checking for detector names in probes
* Turn-by-turn visual indicator on attack generator probe
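
The parallelisation options above can be added to an ordinary run. A minimal sketch, assuming the flags are named `--parallel_requests`/`--parallel_attempts` and take a worker count:

```shell
# illustrative only: flag names and values are assumptions based on the features listed above
python -m garak --model_type huggingface --model_name gpt2 \
  --probes dan \
  --parallel_attempts 8
```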

0.9.0.7

* tests, tests, tests
* docstrings in many classes, also in the documentation (https://reference.garak.ai/)
* improved package hallucination probe prompts
* speedup on package hallucination detector scan

0.9.0.6

New in garak!

* **integrated vulnerability reporting:** vulnerabilities found with garak can now be directly reported to [AVID](https://avidml.org/) (thanks shubhobm)
* **package hallucination:** added a probe for detecting [package hallucination](https://vulcan.io/blog/ai-hallucinations-package-risk)
* **docs are up:** the reference guide is at https://reference.garak.ai/
* **primary/extended detectors:** it's now possible to designate a primary detector for a probe (when using the default probewise harness)
* **multiple payloads for encoding module:** as well as the default payloads, there are slur and XSS injection attempts; access them with `--probe_options '{"encoding.options": ["default", "slurs", "xss"]}'` (adjust to taste; a full invocation is sketched after this list)
* **fine-tune Perspective API backoff for bandwidth:** never wait the full sixty-second window used to determine the rate limit
* **doc fixes:** thanks mkonxd
* **hitlog entries now more self-contained:** each entry stores how many generations were targeted with that prompt
* **remove shortnames:** from probes and detectors
* **move encoding injection module to use triggers:** finer-grained detection means fewer false positives
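
For the encoding payload options above, a complete invocation could look like the following sketch; the target model is illustrative, and the probe option string is the one quoted in the list item:

```shell
# run the encoding probes with the extra payload sets; model choice is illustrative
python -m garak --model_type openai --model_name gpt-3.5-turbo \
  --probes encoding \
  --probe_options '{"encoding.options": ["default", "slurs", "xss"]}'
```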

0.9.0.5

New in `garak`

* enable reporting of vulnerabilities into [AVID](https://avidml.org/)
* de-prefix LLM output by default (strip the echoed prompt from the start of responses)
* add a data leakage/replay attack probe
* add a glitch token detection probe
* enable narrow-format CLI output (see the sketch after this list)
* extra payloads (secret level!) in encoding probe
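
A sketch of exercising the new probes with the narrower CLI output; the probe module names (`replay`, `glitch`) and the `--narrow_output` flag are assumptions about how these features are exposed:

```shell
# illustrative: probe module names and the narrow-output flag are assumptions
python -m garak --model_type openai --model_name gpt-3.5-turbo \
  --probes replay,glitch \
  --narrow_output
```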

0.9.0.4

New in `garak`

Happy 4th! πŸ‡ΊπŸ‡ΈπŸŽ†

* full exchange capture and better progress tracking in the auto-red-team module (`probes.art`)
* new generator: load Hugging Face models directly instead of via `transformers.pipeline` (see the sketch after this list)
* handle OpenAI server-side errors more gracefully
* remove default random seed
* support custom reporting locations with `--report_prefix` option
* add module documentation
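
Combining the direct Hugging Face loading with a custom report location might look like the sketch below; `huggingface.Model` as the direct (non-pipeline) generator name and the probe choice are assumptions:

```shell
# illustrative: huggingface.Model as the direct loader and the probe choice are assumptions
python -m garak --model_type huggingface.Model --model_name gpt2 \
  --probes lmrc \
  --report_prefix my_gpt2_run
```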

0.9.post3

Updates:

* detect exfiltration-via-markdown attack
* detect if models will help generate malware
* accept newer OpenAI generators
* broader test coverage
* refactoring for probe readability
* use smaller versions of snowball + promptinject by default
* add mappings to AVID taxonomy
* add a "hit log" to record successful attacks
* add analysis script for rough HTML report generation
* bug fixes around longer inputs
* handle server-side OpenAI API failures nicely


v0.9.post1-alpha
first alpha
