ESBMC-AI

Latest version: v0.4.0.post0

0.4.0

🎉 Exciting News for Software Developers: ESBMC-AI Version 0.4.0 is Released!

We're happy to announce the latest release of ESBMC-AI, packed with new features and enhancements that improve your development workflow. Here's an overview of this release:
* Dynamic Prompting: Introducing a feature that improves bug resolution! With Dynamic Prompting, you can tailor the system messages used to fix code errors detected by ESBMC. Initial tests show the success rate jumping from 35% to 90%, ensuring smoother development cycles. (A small C example of the kind of error involved follows this list.)
* Token Tracking and Compression: Say goodbye to truncated conversations! ESBMC-AI now tracks the AI model's token usage and automatically compresses lengthy chats, ensuring seamless interactions even in extended sessions.
* User Configurations: Take control of your setup with user-configurable options. The configuration now lives in a known system location, so ESBMC-AI can be used from anywhere in your environment.
* Wheel Packaging: We're excited to announce that ESBMC-AI is now available as a PyPI package, streamlining installation and letting you run it from any directory.
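
To give a flavour of what the fix-code workflow operates on, here is a minimal, hypothetical C example of the kind of error ESBMC flags and the LLM is then prompted to repair (it is a sketch for illustration, not taken from the project's test suite):

```c
// Hypothetical example: the kind of defect ESBMC reports and the
// fix-code workflow (guided by dynamic prompting) asks the LLM to repair.
#include <stdio.h>

int main(void) {
    int values[4] = {1, 2, 3, 4};
    int sum = 0;

    // Off-by-one: when i == 4 the access reads past the end of the array,
    // which ESBMC reports as an array bounds violation.
    for (int i = 0; i <= 4; i++) {
        sum += values[i];
    }

    printf("sum = %d\n", sum);
    return 0;
}
```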

Setting up ESBMC-AI is now easier and takes just under 3 minutes! Check the links below to get started:

🚀 Download & Install: [PyPI - ESBMC-AI](https://pypi.org/project/esbmc-ai/)
📖 User Guide: [Initial Setup](https://github.com/Yiannis128/esbmc-ai/wiki/Initial-Setup)
🔗 GitHub Repository: [ESBMC-AI](https://github.com/Yiannis128/esbmc-ai)

Join us in deploying ESBMC-AI, backed by ESBMC, for code verification and security. Check out the project on GitHub: [ESBMC Repository](https://github.com/esbmc/esbmc)

Upgrade to ESBMC-AI 0.4.0 and improve your software development journey. Happy coding! 🚀👨‍💻🔥


What's Changed
* Dynamic Prompting by Yiannis128 in https://github.com/Yiannis128/esbmc-ai/pull/98
* BaseChatInterface ChatResponse always max bug by Yiannis128 in https://github.com/Yiannis128/esbmc-ai/pull/107
* Added Pipfile packages that were missing and updated regression tests by Yiannis128 in https://github.com/Yiannis128/esbmc-ai/pull/108


**Full Changelog**: https://github.com/Yiannis128/esbmc-ai/compare/v0.3.1...v0.4.0

0.3.1

This is a short release containing quality-of-life updates.

What's Changed

* Add `loading_hints` to the config; when false, loading animations are disabled.
* Add `allow_successful` to the config; when true, the command will run even if the backend reports `VERIFICATION SUCCESSFUL`.
* Add a signal to optimize-code mode so that the other modes receive the optimized solution.


**Full Changelog**: https://github.com/Yiannis128/esbmc-ai/compare/v0.3.0...v0.3.1

0.3.0

Version 0.3.0 brings in `optimize-code`, a new mode of operation similar to `fix-code`. This mode tries to optimize inefficiencies in the code using the specified LLM. _This is a very early prototype and will improve as future versions are released._ There are some limitations: optimize-code will not work with the C/C++ language features listed [here](https://github.com/Yiannis128/esbmc-ai/wiki/Optimize-Code-Mode#unsupported-cc-language-features).
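
As a hedged illustration of the kind of inefficiency the mode is aimed at (a hypothetical example, not taken from the project's documentation), consider a loop that recomputes the string length on every iteration; the LLM might propose hoisting the call out of the loop:

```c
// Hypothetical input for optimize-code: strlen() is recomputed on every
// loop iteration, turning a linear scan into quadratic work.
#include <stddef.h>
#include <string.h>

size_t count_spaces(const char *s) {
    size_t count = 0;
    for (size_t i = 0; i < strlen(s); i++) {  // inefficiency: strlen() in the loop condition
        if (s[i] == ' ')
            count++;
    }
    return count;
}

// The kind of rewrite the LLM might suggest: compute the length once.
size_t count_spaces_optimized(const char *s) {
    size_t count = 0;
    size_t len = strlen(s);
    for (size_t i = 0; i < len; i++) {
        if (s[i] == ' ')
            count++;
    }
    return count;
}
```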

What's Changed
* Starchat-beta improvements, Verbosity level 2, and optimize code work by Yiannis128 in https://github.com/Yiannis128/esbmc-ai/pull/73

**Full Changelog**: https://github.com/Yiannis128/esbmc-ai/compare/v0.2.0...v0.3.0

0.2.0

The second release of ESBMC-AI introduces some major changes to the project. The first is support for LLMs served via Hugging Face's [Text Generation Inference](https://github.com/huggingface/text-generation-inference), specifically [`starchat-beta`](https://huggingface.co/HuggingFaceH4/starchat-beta). In addition, custom LLM endpoints can now be specified in the config; for more information, see the [Wiki page](https://github.com/Yiannis128/esbmc-ai/wiki/AI-Models#custom-llm).

Another useful addition is the ability to automate the program: it can now launch directly into command mode. By specifying `-c COMMAND` in the arguments, ESBMC-AI will automatically run that command instead of entering user chat mode.

Lastly, the backend now uses [langchain](https://github.com/hwchase17/langchain) instead of calling the OpenAI and Hugging Face APIs directly, as it is easier to maintain. This took a while to implement because the entire backend had to be overhauled, but the change should make adding new LLMs much easier and faster!

What's Changed
* Feature: Command automation by Yiannis128 in https://github.com/Yiannis128/esbmc-ai/pull/47
* fix_code_command.py consecutive prompt delay bug. by Yiannis128 in https://github.com/Yiannis128/esbmc-ai/pull/52
* Update README.md by Yiannis128 in https://github.com/Yiannis128/esbmc-ai/pull/55
* Added Falcon LLM, custom AI model loading, and LangChain backend by Yiannis128 in https://github.com/Yiannis128/esbmc-ai/pull/58
* Update README.md by Yiannis128 in https://github.com/Yiannis128/esbmc-ai/pull/64
* Add per-message templates for AIModelTextGen by Yiannis128 in https://github.com/Yiannis128/esbmc-ai/pull/67
* Add stop sequences to custom AI by Yiannis128 in https://github.com/Yiannis128/esbmc-ai/pull/69
* Regression tests and conversation summarizer by Yiannis128 in https://github.com/Yiannis128/esbmc-ai/pull/71


**Full Changelog**: https://github.com/Yiannis128/esbmc-ai/compare/v0.1.0...v0.2.0

0.1.0

The first release of ESBMC-AI includes two features, with many more planned. This is a big first step. Currently, the only supported way to run the software is from the project folder, as there is no way to package and install it on your system yet. The next release will aim to package ESBMC-AI using [the build package](https://pypi.org/project/build/), allowing installation from repositories such as [PyPI](https://pypi.org).

User Chat Mode

The output from ESBMC can be technical in nature and often requires experts in the field to decipher. At the most basic level, ESBMC-AI uses the power of an LLM to simplify ESBMC's output so that it is more accessible. It also allows the user to ask the AI model a wide variety of questions in order to better understand the problem at hand.
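
For instance (a hypothetical example, not taken from the project), the snippet below contains a division by zero that ESBMC can detect; the counterexample it produces can be hard to read without expertise, and in user chat mode you can simply ask the model why the failure happens and how to guard against it:

```c
// Hypothetical example: average() divides by zero when called with an
// empty array. ESBMC can detect the violation, and in user chat mode the
// LLM can explain the failure in plain language.
int average(const int *values, int n) {
    int sum = 0;
    for (int i = 0; i < n; i++) {
        sum += values[i];
    }
    return sum / n;  // division by zero when n == 0
}

int main(void) {
    int empty[1] = {0};
    return average(empty, 0);
}
```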

Solution Generation Mode

When "/fix-code" is entered in the user chat mode interface, the Solution Generation Mode is activated. In Solution Generation Mode, the AI model is tasked with automatically fixing the software vulnerabilities. Solutions generated are checked with ESBMC to ensure the code generated is correct, and if not, the output from ESBMC is given back to Solution Generation Mode so that the AI can improve the solution.

Version 0.1 has a number of improvements over the previous posts, such as the ability to compress message conversations so that you can converse with the AI model indefinitely. Below is a more comprehensive changelog.

Features
* User chat mode
* `fix-code` command and `SolutionGenerator` - Solution evaluation loop by Yiannis128 in https://github.com/Yiannis128/esbmc-ai/pull/21
* Token management by Yiannis128 in https://github.com/Yiannis128/esbmc-ai/pull/39

**Full Changelog**: https://github.com/Yiannis128/esbmc-ai/commits/v0.1.0
