Oobabot

Latest version: v0.2.3


0.2.3

Release v0.2.3

Note: version 0.2.2 only updated oobabot-plugin, not oobabot. This
shows changes to oobabot since the prior release, [v0.2.1](RELEASE-0.2.1.md).

What's Changed

Mainly a bugfix update for 0.2.1, with a few fixes and new configuration
parameters.

New Features

* Option to disable unsolicited replies entirely

Unsolicited replies are still enabled by default, but you can now disable them entirely by changing this setting in your config.yml:

```yaml
# If set, the bot will not reply to any messages that do not @-mention it or
# include a wakeword.  If unsolicited replies are disabled, the
# unsolicited_channel_cap setting will have no effect.
# default: False
disable_unsolicited_replies: true
```


The objective of this change is to support cases where
unsolicited replies are not desired, such as when the bot is used in a
channel with a high volume of messages.

Bug Fixes / Tech Improvements

* Unicode logging reliability fix in `ooba_client.py`

Unicode bugs in oobabooga seem to be a moving target, so
this change gates the fix applied in 0.2.1 so that it is only applied
in cases where oobabooga is known to be broken.
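As an illustration of this gating approach (hypothetical code, not the actual fix in `ooba_client.py`): apply the repair only when the text actually round-trips through the known mojibake pattern, and leave everything else untouched.

```python
def maybe_fix_unicode(text: str) -> str:
    """Repair UTF-8 text that was mis-decoded as Latin-1, but only when
    the text really round-trips; otherwise return it unchanged."""
    try:
        repaired = text.encode("latin-1").decode("utf-8")
    except (UnicodeEncodeError, UnicodeDecodeError):
        # Either the text contains characters outside Latin-1 (so it was
        # never mis-decoded) or it isn't valid UTF-8 after re-encoding:
        # not the known breakage, so don't touch it.
        return text
    return repaired
```

Plain ASCII is a no-op under this gate, and already-correct accented text fails the UTF-8 re-decode and passes through unchanged, so only the broken case is rewritten.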

* Security fix: Bump aiohttp from 3.8.4 to 3.8.5

Update dependency aiohttp to v3.8.5. This fixes [a security
issue in aiohttp](https://github.com/aio-libs/aiohttp/blob/v3.8.5/CHANGES.rst). On a quick scan it doesn't seem to be something
a user could exploit within oobabot, but better to update anyway.

* Preserve newlines when prompting the bot

In some cases the whitespace in user messages is important. One case is
described in [issue 76, reported by xydreen](https://github.com/chrisrude/oobabot/issues/76).

When sending a prompt to the bot, we will now preserve any newlines
that the bot itself had generated in the past.

We will still strip newlines from user-generated messages,
as otherwise users would have the ability to imitate our prompt format.
This would let users so inclined fool the bot into thinking a
message was sent by another user, or even by itself.
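The policy described above might be sketched like this (a hypothetical helper, not oobabot's actual code):

```python
def prepare_for_prompt(text: str, from_bot: bool) -> str:
    """Preserve the bot's own newlines; flatten newlines in user
    messages so users can't imitate the prompt's message framing."""
    if from_bot:
        return text
    # Join the user's lines with spaces, removing all newlines.
    return " ".join(text.splitlines())
```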

Full Changelog

[All changes from 0.2.1 to 0.2.3](https://github.com/chrisrude/oobabot/compare/v0.2.1...v0.2.3)

0.2.1

Release v0.2.1

Audio support is coming soon:
- oobabot will be able to join audio channels using the `/join_voice` command
- it will transcribe audio from the channel, recording which user said what
- it will listen to wake-words, and respond using voice synthesis
- if you're using `oobabot-plugin`, you'll get a pretty transcript of the
conversation

This has been a ton of work, and I'm eager to get to putting on the finishing
touches and get it out. In the meantime, I wanted to release the now-unified
backend, so that I can make sure that it is stable, so that I can focus on
polishing the audio work. Also, a few important bugs have been reported in
the meantime, and I don't want to hold those back.

Add new .yaml settings

stream_responses_speed_limit

When in "streaming" mode (i.e. when `stream_responses` is set to True), this limits the
rate at which we update the streaming message in Discord. We need this setting because Discord has rate-limiting logic; if we send updates "too fast", it will drastically slow down our updates, which appears as jerky streaming.

This value is the minimum delay in seconds between updates. That is, we will update Discord no more than once every this many seconds. The updates may come slower than this, for example on systems that take a long time to generate tokens; it's only guaranteed that they won't be any faster than this.

Previously, this value was hard-coded to 0.5. Now the default is 0.7, which was determined by user testing. Thanks to [jmoney7823956789378](https://github.com/jmoney7823956789378) for helping make this happen!
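A minimal sketch of this kind of minimum-interval throttle (illustrative only, not oobabot's actual implementation, which runs inside an async Discord client):

```python
import time


class EditThrottle:
    """Ensure at least `min_interval` seconds pass between message edits.

    0.7 is the new default for stream_responses_speed_limit."""

    def __init__(self, min_interval: float = 0.7) -> None:
        self.min_interval = min_interval
        self._last_edit = 0.0

    def wait_before_edit(self) -> None:
        # Sleep just long enough that edits are never faster than
        # min_interval apart; they may well be slower.
        delay = self.min_interval - (time.monotonic() - self._last_edit)
        if delay > 0:
            time.sleep(delay)
        self._last_edit = time.monotonic()
```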

`discrivener_location` and `discrivener_model_location`

These are new settings to add voice support to oobabot. Voice support means that the bot
can join voice chat channels, transcribe what is said, hear wakewords, and generate voice
responses in those channels. All of the audio processing -- text to speech, and speech to
text -- is handled in a binary called "discrivener", whose source lives at [github.com/chrisrude/discrivener](https://github.com/chrisrude/discriviner).

I've tested this to work on Linux and OSX, but there is still more work to do in documenting and packaging the software. So for now, these settings are blank by default, which will leave oobabot in text-only mode, as it has been.

command_lobotomize_response

A user noticed that there was no setting to customize the text that gets shown when you use the `/lobotomize` command. Whoops! Now here it is. This is of particular interest because the bot will see this text after a lobotomize occurs, so if you have specific character styling you don't want it to get confused about, you might want to put in custom text of your choosing here.

You can also use variables `{AI_NAME}` and `{USER_NAME}` to represent the name of the AI, and the name of the user who ran the `/lobotomize` command.
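These placeholders behave like standard Python `str.format` fields. For example (the template wording below is made up for illustration, not the actual default):

```python
# Hypothetical custom lobotomize response; {AI_NAME} and {USER_NAME}
# are the real placeholder names, the wording is invented.
template = "Ummm... what were we talking about, {USER_NAME}?"

# Unused placeholders are simply ignored by str.format's keyword form.
response = template.format(AI_NAME="MyBot", USER_NAME="alice")
```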

Show an error if a custom .yaml file could not be loaded

Previously, we would ignore any errors that occurred when loading a custom .yaml file, and just proceed with defaults if we could. Now, we will show an error message to the user displaying the full path to the yaml file we could not load, and the bot will not start.

This should help users self-diagnose a number of configuration issues, such as accidentally having a syntax error in their .yaml file.

Bug Fixes / Tech Improvements

- Fix [bug 38](https://github.com/chrisrude/oobabot/issues/38): the bot will now only
mark messages as replies if it was directly mentioned (by an @-mention or keyword). Also,
if it is configured to reply across several messages, it will only mark the first message
in the series as a reply. This reduces notification noise to users when using mobile clients.

- Increase default token space back to 2048. Users who have not set a custom token space value (aka `truncation_length`) will just have this updated automatically.
- Add new oobabooga request params: `epsilon_cutoff`, `eta_cutoff`, `tfs`, `top_a`, `mirostat_mode`, `mirostat_tau`, and `mirostat_eta`
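For reference, these keys go under `oobabooga` > `request_params` in `config.yml`. The values below are illustrative placeholders, not recommended settings:

```yaml
oobabooga:
  request_params:
    epsilon_cutoff: 0
    eta_cutoff: 0
    tfs: 1.0
    top_a: 0.0
    mirostat_mode: 0
    mirostat_tau: 5.0
    mirostat_eta: 0.1
```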

- If the user forgets to enable either `SERVER MEMBERS INTENT` or `MESSAGE CONTENT INTENT` for their bot's Discord account, show a specific error message letting them know.

Full Changelog

[All changes from 0.2.0 to 0.2.1](https://github.com/chrisrude/oobabot/compare/v0.2.0...v0.2.1)

0.2.0

Release v0.2.0

Long time since the last release, but tons of work!

New Features

Backend changes for AUDIO SUPPORT 🥳 (coming soon)

This release includes a lot of work to support audio
channels. This still needs to be documented and packaged,
but it is a thing that works! Look for full support in an upcoming release.

0.1.9

Release v0.1.9

Very minor release; mainly I want to get this out to support a big pending update in the new [Oobabot-plugin GUI for Oobabooga's Text Generation WebUI](https://github.com/chrisrude/oobabot-plugin).

New Features

Unsolicited Reply Cap

There's a new `unsolicited_channel_cap` option in the
`discord` section of `config.yml`. It does this:

FEATURE PREVIEW: Adds a limit to the number of channels
the bot will post unsolicited messages in at the same
time. This is to prevent the bot from being too noisy
in large servers.

When set, only the most recent N channels the bot has
been summoned in will have a chance of receiving an
unsolicited message. The bot will still respond to
@-mentions and wake words in any channel it can access.

Set to 0 to disable this feature.
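The recency cap described above could be sketched like this (hypothetical code, not oobabot's actual implementation):

```python
from collections import deque


class ChannelCap:
    """Track the most recent N channels the bot was summoned in;
    only those channels are eligible for unsolicited replies."""

    def __init__(self, cap: int) -> None:
        self.cap = cap
        # maxlen evicts the oldest channel once the cap is reached.
        self.recent = deque(maxlen=cap if cap > 0 else None)

    def record_summon(self, channel_id: int) -> None:
        # Re-summoning moves the channel back to "most recent".
        if channel_id in self.recent:
            self.recent.remove(channel_id)
        self.recent.append(channel_id)

    def may_reply_unsolicited(self, channel_id: int) -> bool:
        # cap == 0 disables the feature entirely (no limit).
        return self.cap == 0 or channel_id in self.recent
```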

Breaking Changes

Remove deprecated command-line options

The following CLI arguments have been removed:

- `diffusion_steps`
- `image_height`
- `image_width`
- `stable_diffusion_sampler`
- `sd_negative_prompt`
- `sd_negative_prompt_nsfw`

All of these settings are still changeable via the config file.

If you don't have a config file, you can generate one on your previous version by running, from the directory you run oobabot from:

```bash
oobabot [all your normal CLI arguments] --generate-config > config.yml
```

Now all your CLI arguments are stored in the yml file,
and you don't need to pass them anymore.

Bug Fixes / Tech Improvements

- add a method to generate / load yml config files from another package

- Discord's own logs are now included in the standard logging output, shown on a purple background

Full Changelog

[Changes from 0.1.8 to 0.1.9](https://github.com/chrisrude/oobabot/compare/v0.1.8...v0.1.9)

0.1.8

Release v0.1.8

Lots of bugfixes in this release, and a lot of behind-the-scenes work to support a proper plugin to Oobabooga. Coming Soon (tm)!

However, there are a number of small new features as well.

New Features

Reading Personas from a File

In `config.yml`, in `persona` > `persona_file`, you can now specify a path to a .yml, .json or .txt file containing a persona.

This file can be just a single string, a json file in the common "tavern" formats, or a yaml file in the Oobabooga format.

With a single string, the persona will be set to that string. Otherwise, the ai_name and persona will be overwritten with the values in the file. Also, the wakewords will be extended to include the character's own name.
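A sketch of this extension-based dispatch (the helper and returned field names are hypothetical, and the YAML branch is elided to stay dependency-free):

```python
import json
from pathlib import Path


def load_persona(path: str) -> dict:
    """Load a persona from a .txt, .json, or .yml/.yaml file."""
    raw = Path(path).read_text(encoding="utf-8")
    if path.endswith(".json"):
        # "Tavern"-style cards commonly keep the character under
        # "name" / "description" keys (field names are illustrative).
        data = json.loads(raw)
        return {"ai_name": data.get("name", ""),
                "persona": data.get("description", "")}
    if path.endswith((".yml", ".yaml")):
        raise NotImplementedError("a YAML parser would be used here")
    # Plain .txt: the whole file is the persona string.
    return {"ai_name": "", "persona": raw}
```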

Regex-based message splitting

This new setting is in `oobabooga` > `message_regex`.

Some newer chat-specific models are trained to generate specific delimiters to separate their response into individual messages.

This adds a setting to tell the bot what regex to use to split such responses into individual messages.

If this is set, it will only affect *how* message-splitting happens, not whether it happens. By default, the bot will still split messages. But if `stream_responses` or `dont_split_responses` is enabled, this setting will be ignored, as the messages won't be split anyway.
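For example, a delimiter regex could split a response like so (the `<new_message>` delimiter is made up for illustration, not a real model's):

```python
import re

# Hypothetical value for oobabooga > message_regex in config.yml.
message_regex = r"<new_message>"


def split_messages(response: str) -> list:
    """Split a model response into individual Discord messages."""
    parts = re.split(message_regex, response)
    # Drop empty fragments and surrounding whitespace.
    return [p.strip() for p in parts if p.strip()]
```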

`--invite-url` command line option

This will generate an invite URL for the bot, and print it to the console. This is useful for when you have a new bot, or want to generate a new invite URL to add it to a new server. It will also automatically be printed if we notice the bot is listening on zero servers.

Configurable logging level

In `config.yml`, in `discord` > `log_level`, you can now specify the logging level.

Breaking Changes

Reminder that the deprecated CLI methods are going away soon.

Bug Fixes / Tech Improvements

- Replace `<___user_id___>` tokens with the user's display name in history. These would confuse the AI and leak syntax into its regular chat.

- Add "draw me" to the list of words that will trigger a picture

- Inline stop tokens

With this change, we'll now look for stop tokens even if they're not on a separate line.
Also, automatically add anything from `oobabot` > `request_params` > `stopping_strings` into the list of stop tokens to look for.
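A sketch of inline stop-token handling (hypothetical code): truncate at the earliest occurrence of any stop token, wherever it appears in the text, rather than only at the start of a line.

```python
def truncate_at_stop_token(text: str, stop_tokens: list) -> str:
    """Cut the response at the earliest inline stop token, if any."""
    # Collect the position of each token that actually occurs.
    cut = min((i for t in stop_tokens if (i := text.find(t)) != -1),
              default=-1)
    return text if cut == -1 else text[:cut]
```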

- Don't allow the bot to @-mention anyone but the user it's replying
to. This is to prevent users from tricking the bot into pinging broad
groups, should the admin have granted them permission to do so.
@-mentions will still work when used via the /say command, which I am
presuming will be used by trusted users.

- The bot will now mark its responses to @-mentions or keywords by showing an explicit reply in Discord. When this happens, the bot will not see any history "after" the summon. Unsolicited replies will see the full message history, and will not show an explicit reply. This is to help make it clear when the bot is responding to a specific message, and when it's responding to the channel in general.

- Turn the 'token space too small' message from an error into a warning.
This is to allow users to crank it super high if they want, and let messages be dropped if they run out of space.

Full Changelog

[Changes from 0.1.7 to 0.1.8](https://github.com/chrisrude/oobabot/compare/v0.1.7...v0.1.8)

0.1.7

Release v0.1.7

New Features

- **Configure All the Things**

You can now configure every setting passed to Oobabooga
and Stable Diffusion, and more, via a config.yml file.

- **Streaming Responses**

That's right! It's a little janky, but you can now have the
bot stream its response into a single message. Just pass
the `--stream-responses` flag, or enable the `stream_responses`
flag in the config.yml file.

This works by continuously editing the bot's response message.

- **Stop Markers**

Some models were generating tokens that were appearing in the chat
output. There is a new config setting, `stop_markers`. We'll watch
the response for these markers, and if we see any one of them on its own
line, we'll stop responding.

```yaml
# A list of strings that will cause the bot to stop generating a response when
# encountered.
# default: [' end of transcript <|endoftext|>', '<|endoftext|>']
stop_markers:
  - ' End of Transcript <|endoftext|>'
  - <|endoftext|>
```
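The line-based matching can be sketched as follows (hypothetical code, not oobabot's actual implementation):

```python
def stop_at_marker(response: str, stop_markers: list) -> str:
    """Keep response lines until one consists solely of a stop marker."""
    kept = []
    markers = [m.strip() for m in stop_markers]
    for line in response.splitlines():
        # Only a marker on its own line stops generation.
        if line.strip() in markers:
            break
        kept.append(line)
    return "\n".join(kept)
```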


- **Extra Prompt Text** for Stable Diffusion

You can now add extra prompt text to every prompt sent to Stable Diffusion.

This could help customize the image generation to be more appropriate for
your character's persona, or influence the type of images generated in ways
that are more subtle than allowed by other settings.

To use it, generate or regenerate the config.yml file and then set

```yaml
# This will be appended to every image generation prompt sent to Stable Diffusion.
# default:
extra_prompt_text: "as a tattoo"
```


in the `stable_diffusion:` section.

Breaking Changes

The following command line arguments have been deprecated:

- `diffusion_steps`
- `image_height`
- `image_width`
- `stable_diffusion_sampler`
- `sd_negative_prompt`
- `sd_negative_prompt_nsfw`

They're deprecated because they're more naturally included in the
`stable_diffusion: request_params:` section in the new config file, and it would be
confusing to have the same setting in two places.

I'll keep them around for a while, but they will be removed in a
future release.

If you were generating a new config file anyway, then there's no
impact to you.

New Feature Q&A

What is in the new config file?

You can now configure **every parameter** sent to Oobabooga
and Stable Diffusion to generate responses. Some notable ones are:

- truncation_length (aka "max tokens")
- temperature (controls bot creativity)
- repetition_penalty
- early_stopping flag

In addition, this is done in a way so that anything in these
sections is just "passed through" to the underlying service.

This means that if a new release of Oobabooga or Stable Diffusion
adds a new parameter, you can just add it to the config.yml,
without needing a software update.

Creating a new `config.yml` file

Pass `--generate-config` to the CLI to print a fresh new config.yml
file to stdout. You can then redirect this to a file.

This file will include any other settings you've supplied on the
command line. So if you've been running with the CLI alone and are
upgrading from an earlier version, all you need to do is:

`oobabot {your normal args} --generate-config > config.yml`

and then

`oobabot`

Where to place `config.yml`

`oobabot` will look for a config.yml file in the current
directory by default. If you want to place it somewhere
else, you can specify a different location with the
`--config-file` flag, e.g.:

```bash
oobabot --config-file /path/to/config.yml
```


Upgrading from an earlier version

If you ever upgrade and want to regenerate the config.yml,
you can just do this:

```bash
cp config.yml config.yml.backup &&
oobabot --config config.yml.backup --generate-config > config.yml
```


> Note: it's important to **make a backup copy of your config.yml** first,
> because the redirect in the second line will overwrite it!

Your previous config.yml file will be read before generating the new one,
and the new one will include all the settings from the old one, plus
any new settings that have been added since the last time you generated
the config.yml file.


Notes on Streaming Response Jankiness

It's janky in the following ways:

- there are rate limits on how fast edits can be made,
so it's not silky smooth. We will wait at least 0.2 seconds
between edits, but it may be longer. The actual speed will depend
on how fast Discord allows edits.

- you'll see an "edited" tag on the message. But if you can
ignore that, that's cool.

- you won't be notified when the bot responds this way. This is
because Discord sends new message notifications immediately
on message send, so the notification would only contain a single
token. This would be annoying, so we don't do it.

I'm just impressed it works at all.

Bug Fixes / Tech Improvements

- Fixed an "image regeneration failed" error when regenerating
images against SD servers which took more than 5 seconds to render

- Fixed an issue where regenerating an image while the bot was
simultaneously generating a second reply in the channel would
sometimes cause an "image regeneration failed" error

- improve heuristic for detecting our own image posts. Sometimes
the bot would pick up on the UI elements of its own image posts.
This should be fixed now.

- Do a better job at logging exceptions that happen during
message responses

- new over-engineered config file parsing, should afford
easier parameter adding in the future

- fixes for pylint, added to precommit hooks

Full Changelog

[Changes from 0.1.6 to 0.1.7](https://github.com/chrisrude/oobabot/compare/v0.1.6...v0.1.7)
