Llamator

Latest version: v2.3.1

Safety actively analyzes 723954 Python packages for vulnerabilities to keep your Python projects secure.

Scan your dependencies

Page 1 of 2

2.2.0

What's New

* Add Suffix Attack and New System Prompt Leakage Requests (we're happy to see in contributors Shine-afk)
* Add HarmBench Prompts to Harmful Behavior Attack (thanks NickoJo)
* Other minor improvements and bug fixes

We Need Your Feedback

If you have suggestions, encounter any issues, or want to share your experiences using LLAMATOR 2.1.0, please don't hesitate to reach out! You can find us in Telegram: **[llamator](https://t.me/llamator)**

2.1.0

What's New

* Add BON attack (NickoJo)
* Add Crescendo attack (nizamovtimur)
* Add Docker example with Jupyter Notebook and installed LLAMATOR (RomiconEZ)
* Improve attack system prompt for Prompt Leakage (nizamovtimur)
* Other minor improvements and bug fixes

We Need Your Feedback

If you have suggestions, encounter any issues, or want to share your experiences using LLAMATOR 2.1.0, please don't hesitate to reach out! You can find us in Telegram: **[llamator](https://t.me/llamator)**

2.0.1

What's New

* Add the `strip_client_responses` parameter for `ChatSession`
* Other small improvements in attacks

2.0.0

What's New

New Features & Enhancements
- **Introduced Multistage Attack**: We've added a novel `multistage_depth` parameter to the `start_testing()` fucntion, allowing users to specify the depth of a dialogue during testing, enabling more sophisticated and targeted LLM Red teaming strategies.
- **Refactored Sycophancy Attack**: The `sycophancy_test` has been renamed to `sycophancy`, transforming it into a multistage attack for increased effectiveness in uncovering model vulnerabilities.
- **Enhanced Logical Inconsistencies Attack**: The `logical_inconsistencies_test` has been renamed to `logical_inconsistencies` and restructured as a multistage attack to better detect and exploit logical weaknesses within language models.
- **New Multistage Harmful Behavior Attack**: Introducing `harmful_behaviour_multistage`, a more nuanced version of the original harmful behavior attack, designed for deeper penetration testing.
- **Innovative System Prompt Leakage Attack**: We've developed a new multistage attack, `system_prompt_leakage`, leveraging jailbreak examples from dataset to target and exploit model internals.

Improvements & Refinements
- Conducted extensive **refactoring** for improved code efficiency and maintainability across the framework.
- Made numerous small **improvements and optimizations** to enhance overall performance and user experience.

Community Engagement
- **Join Our Telegram Chat**: We have created a LLAMATOR channel on Telegram where we encourage all users to share feedback, discuss findings, and contribute to our community. You can find us here: **[llamator](https://t.me/llamator)**

---

Get Involved

We value your input in making LLAMATOR the best tool for LLM Red teaming. Your feedback is essential as we continue to evolve and improve. If you have suggestions, encounter any issues, or want to share your experiences using LLAMATOR 2.0.0, please don't hesitate to reach out!

---

*Thank you for choosing LLAMATOR. Let's make AI security better together!*

1.1.1

What's new

* Enhanced prompts for attacking and judging models in base64, harmful behavior, sycophancy, ethical compliance attacks
* Added new Logical inconsistencies attack
* Added more jailbreaks into DAN and UCAR datasets, all datasets in parquet format now
* Included a practical example for testing chatbots within WhatsApp
* Various small bug fixes and optimizations across the framework

---

We hope these changes enhance your capabilities in conducting effective LLM Red teaming exercises.

If you have any feedback or questions, feel free to reach out!

1.0.2

What's New

* Fix issue with missing attack parquet-datasets in PyPi

Page 1 of 2

© 2025 Safety CLI Cybersecurity Inc. All Rights Reserved.