ShellGPT

Latest version: v1.4.3


1.4.3

What's Changed

* Fixed a bug when parsing a **.sgptrc** config file that contains multiple equals ("=") symbols.
* Added the ability to interrupt the LLM with Ctrl+C while it is actively generating (streaming) a response in REPL mode.
* Fixed a bug where function calls didn't work properly due to caching.
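The multi-"=" fix boils down to splitting each config line only on the first "=" so values containing "=" survive intact. A minimal sketch of the idea, not the project's actual parser (names are illustrative):

```python
def parse_sgptrc(text: str) -> dict[str, str]:
    """Parse KEY=VALUE lines, splitting only on the first '='
    so values like URLs with query strings stay whole."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        key, _, value = line.partition("=")  # split on the first '=' only
        config[key.strip()] = value.strip()
    return config

sample = "API_BASE_URL=https://example.com/v1?key=abc=def\nDEFAULT_MODEL=gpt-4"
print(parse_sgptrc(sample)["API_BASE_URL"])  # → https://example.com/v1?key=abc=def
```

Using `str.partition` (or `split("=", 1)`) instead of a bare `split("=")` is what keeps the remaining "=" symbols inside the value.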

Shoutout to all contributors: keiththomps, artsparkAI, save196.

1.4.0

What's Changed
* Added new options `--md` and `--no-md` to enable or disable Markdown output.
* Added new config variable `PRETTIFY_MARKDOWN` to enable or disable Markdown output by default.
* Added new config variable `USE_LITELLM` to enforce usage of the LiteLLM library.

OpenAI and LiteLLM
Because LiteLLM facilitates requests to numerous other LLM backends, it is heavy to import, adding 1-2 seconds to runtime. By default, ShellGPT uses OpenAI's library, which is suitable for most users. Optionally, ShellGPT can be installed with LiteLLM by running `pip install shell-gpt[litellm]`. To enforce LiteLLM usage, set `USE_LITELLM` to true in the config file `~/.config/shell_gpt/.sgptrc`.
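With the extra package installed, enforcing LiteLLM is a one-line config change. An illustrative `~/.config/shell_gpt/.sgptrc` fragment (the value shown is an example):

```text
# ~/.config/shell_gpt/.sgptrc
USE_LITELLM=true
```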

1.3.1

What's Changed
* Fix #422: Markdown formatting for chat history by jeanlucthumm in https://github.com/TheR1D/shell_gpt/pull/444
* New config variable `API_BASE_URL` (#473) and a fix for `REQUEST_TIMEOUT` by TheR1D in https://github.com/TheR1D/shell_gpt/pull/477
* Minor code optimisations.
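Both variables live in the same config file as the rest of ShellGPT's settings. Illustrative values (not defaults):

```text
# ~/.config/shell_gpt/.sgptrc
API_BASE_URL=https://api.openai.com/v1
REQUEST_TIMEOUT=60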

**Full Changelog**: https://github.com/TheR1D/shell_gpt/compare/1.3.0...1.3.1

1.3.0

What's Changed
* Added support for Ollama and other LLM backends.
* Markdown formatting now depends on role description.
* Code refactoring and optimisation.

Multiple LLM backends
ShellGPT can now work with multiple backends using [LiteLLM](https://github.com/BerriAI/litellm). You can use locally hosted open-source models, which are available for free. To use local models, you will need to run your own LLM backend server such as [Ollama](https://github.com/ollama/ollama). To set up ShellGPT with Ollama, please follow this comprehensive [guide](https://github.com/TheR1D/shell_gpt/wiki/Ollama). A full list of supported models and providers is available [here](https://docs.litellm.ai/docs/providers). **Note that ShellGPT is not optimized for local models and may not work as expected❗️**

Markdown formatting
Markdown formatting now depends on the role description. For instance, if the role includes `"APPLY MARKDOWN"` in its description, the output for that role will be Markdown-formatted. This applies to both default and custom roles. If you would like to disable Markdown formatting, edit the default role description in `~/.config/shell_gpt/roles`.
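The check itself can be as simple as a substring test on the role description. A minimal sketch of the idea, not the project's actual code (names are illustrative):

```python
MARKDOWN_MARKER = "APPLY MARKDOWN"

def should_prettify(role_description: str) -> bool:
    """Markdown output is enabled only when the role description
    contains the APPLY MARKDOWN marker."""
    return MARKDOWN_MARKER in role_description

default_role = "You are ShellGPT, a helpful assistant. APPLY MARKDOWN"
code_role = "Provide only code as output, without any extra text."
print(should_prettify(default_role), should_prettify(code_role))  # → True False
```

This is why editing the role description in `~/.config/shell_gpt/roles` is enough to toggle formatting: removing the marker makes the check fail.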

**Full Changelog**: https://github.com/TheR1D/shell_gpt/compare/1.2.0...1.3.0

1.2.0

* Added `--interaction / --no-interaction` option that works with the `--shell` option, e.g. `sgpt --shell --no-interaction` will output the suggested command to **stdout**. This is useful when you want to redirect the output somewhere else, for instance `sgpt -s "say hi" | pbcopy`.
* Fixed an issue with stdin and `--shell` not switching to interactive input mode.
* REPL mode can now accept stdin, a PROMPT argument, or both.
* Changed shell integrations to use the new `--no-interaction` option to generate shell commands.
* Moved shell integrations into a dedicated file, **integration.py**.
* Changed `--install-integration` logic; it no longer downloads a shell script.
* Removed validation for the PROMPT argument; it now defaults to an empty string.
* Fixed an issue when `sgpt` is called from non-interactive shell environments such as crontab.
* Fixed and optimised **Dockerfile**.
* GitHub codespaces setup.
* Improved tests.
* **README.md** improvements.
* New demo video 🐴.

❗️**Shell integration** logic has been updated, and it will not work with previous versions of the integration function in `~/.bashrc` or `~/.zshrc`. Run `sgpt --install-integration` to apply the new changes, and remove the old integration function from your shell profile if you were using it before.

https://github.com/TheR1D/shell_gpt/assets/16740832/9197283c-db6a-4b46-bfea-3eb776dd9093


REPL stdin
REPL mode can now accept stdin, a PROMPT argument, or even both. This is useful when you want to provide some initial context for your prompt.
```shell
sgpt --repl temp < my_app.py
```

```text
Entering REPL mode, press Ctrl+C to exit.
──────────────────────────────────── Input ────────────────────────────────────
name = input("What is your name?")
print(f"Hello {name}")
───────────────────────────────────────────────────────────────────────────────
>>> What is this code about?
The snippet of code you've provided is written in Python. It prompts the user...
>>> Follow up questions...
```

It is also possible to pass a PROMPT to REPL mode, `sgpt --repl temp "some initial prompt"`, or even both: `sgpt --repl temp "initial arg prompt" < text.txt`.

**Full Changelog**: https://github.com/TheR1D/shell_gpt/compare/1.1.0...1.2.0

1.1.0

https://github.com/TheR1D/shell_gpt/assets/16740832/721ddb19-97e7-428f-a0ee-107d027ddd59

OpenAI Library
ShellGPT has now integrated the OpenAI Python library for handling API requests. This integration simplifies the development and maintenance of the ShellGPT code base. Additionally, it enhances user experience by providing more user-friendly error messages, complete with descriptions and potential solutions.

Function Calling
[Function calling](https://platform.openai.com/docs/guides/function-calling) is a powerful feature OpenAI provides. It allows the LLM to execute functions on your system, which can be used to accomplish a variety of tasks. ShellGPT has a convenient [way to define functions](https://github.com/TheR1D/shell_gpt#function-calling) and use them. To install the [default functions](https://github.com/TheR1D/shell_gpt/tree/main/sgpt/default_functions/), run:
```shell
sgpt --install-functions
```

This will add functions for the LLM to execute shell commands and AppleScript (on macOS). More details are in the demo video and **README.md**.
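Under the hood, such a function maps onto OpenAI's function-calling ("tools") shape: a JSON schema the model sees, plus a local handler that runs when the model requests a call. A generic sketch of that shape, not ShellGPT's actual function format (`execute_shell_command` and the schema are illustrative):

```python
import subprocess

# Illustrative tool schema in the OpenAI function-calling format.
EXECUTE_SHELL_TOOL = {
    "type": "function",
    "function": {
        "name": "execute_shell_command",
        "description": "Execute a shell command and return its output.",
        "parameters": {
            "type": "object",
            "properties": {
                "command": {"type": "string", "description": "Shell command to run."},
            },
            "required": ["command"],
        },
    },
}

def execute_shell_command(command: str) -> str:
    """Run the command the model asked for and return its combined output."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout + result.stderr

# When the model responds with a tool call, dispatch it by name:
HANDLERS = {"execute_shell_command": execute_shell_command}

def dispatch(name: str, arguments: dict) -> str:
    return HANDLERS[name](**arguments)

print(dispatch("execute_shell_command", {"command": "echo hello"}).strip())  # → hello
```

The tool's text result is then sent back to the model as a tool message, which is also why the caching fix in 1.4.3 mattered: stale cached responses could skip this round trip.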

Options
* Shortcut option `-c` for `--code`.
* Shortcut option `-lc` for `--list-chats`.
* Shortcut option `-lr` for `--list-roles`.
* New `--functions` option, enables/disables function calling.
* New `--install-functions` option, installs default functions.

Config
* New config variable `OPENAI_FUNCTIONS_PATH`
* New config variable `OPENAI_USE_FUNCTIONS`
* New config variable `SHOW_FUNCTIONS_OUTPUT`

Minor Changes
* Code optimisation
* Cache optimisations for function calls

