Predacons

Latest version: v0.0.128

0.0.128

- **New Features**
  - Introduced streaming capabilities for text and chat generation, allowing real-time output.
  - Added new functions, `text_stream` and `chat_stream`, for enhanced streaming functionality.

- **Bug Fixes**
  - Updated error handling for streaming setup and generation with descriptive messages.

- **Examples**
```python
# %%
import predacons

# %%
predacons.rollout()

# %%
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'

import torch
if torch.cuda.is_available():
    torch.set_default_device('cuda')
    print("Using GPU")
else:
    print("No GPU available")

# %%
model_path = "Predacon/Pico-Lamma-3.2-1B-Reasoning-Instruct"

# %%
model = predacons.load_model(model_path)
tokenizer = predacons.load_tokenizer(model_path)

# %%
seq = "The quick brown fox jumps over the"

# %%
# With stream=True, text_generate returns a generation thread and a stream
# that yields text chunks as they are produced.
thread, stream = predacons.text_generate(model=model, tokenizer=tokenizer, sequence=seq, max_length=100, temperature=0.1, stream=True)

# %%
thread.start()
try:
    out = ""
    for new_text in stream:
        out = out + new_text
        print(new_text, end=" ")
finally:
    thread.join()

# %%
# New in this release: text_stream, a convenience entry point for streaming
# text generation.
a = predacons.text_stream(model=model, tokenizer=tokenizer, sequence=seq, max_length=100, temperature=0.1)

# %%
a

# %%
chat = [
    {"role": "user", "content": "A train travelling at a speed of 60 km/hr is stopped in 15 seconds by applying the brakes. Determine its retardation."},
]

# %%
thread, stream = predacons.chat_generate(model=model, tokenizer=tokenizer, sequence=chat, max_length=500, temperature=0.1, stream=True)

# %%
thread.start()
try:
    out = ""
    for new_text in stream:
        out = out + new_text
        print(new_text, end="")
finally:
    thread.join()

# %%
# New in this release: chat_stream, the streaming counterpart for chat generation.
b = predacons.chat_stream(model=model, tokenizer=tokenizer, sequence=chat, max_length=500, temperature=0.1)

# %%
b
```

- **Documentation**
  - Enhanced function documentation with detailed parameter descriptions.

- **Chores**
  - Incremented the version number to `0.0.128` and updated package dependencies, removing those that caused compatibility concerns.

Walkthrough
This pull request introduces significant enhancements to the `Generate` class and the `predacons` module by adding streaming capabilities for output generation. New methods are implemented to allow real-time streaming of both text and chat outputs. The `rollout` function in `predacons.py` is updated with a new version number and additional print statements for better documentation. The `setup.py` file reflects a version increment and removal of specific dependencies, indicating a shift in package requirements.
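
The `(thread, stream)` pair returned when `stream=True` suggests the common Hugging Face pattern of running `model.generate` on a worker thread while a `TextIteratorStreamer` yields text as it is produced. The predacons internals are not shown on this page, so the following is only a plausible minimal sketch of that pattern, written directly against the `transformers` API:

```python
# Plausible sketch of thread-plus-streamer generation (an assumption about
# the underlying pattern, not the actual predacons implementation).
from threading import Thread

from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer

model_path = "Predacon/Pico-Lamma-3.2-1B-Reasoning-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)

inputs = tokenizer("The quick brown fox jumps over the", return_tensors="pt")

# skip_prompt=True makes the streamer yield only newly generated text.
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# generate() blocks until finished, so it runs on a worker thread while the
# caller consumes text chunks from the streamer as they arrive.
thread = Thread(target=model.generate, kwargs=dict(**inputs, streamer=streamer, max_new_tokens=100))
thread.start()
try:
    for new_text in streamer:
        print(new_text, end="", flush=True)
finally:
    thread.join()
```

This mirrors the examples above: start the thread, iterate the stream, and join once the iterator is exhausted.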

Changes

| File | Change Summary |
|-------------------------------------|-----------------------------------------------------------------------------------------------------|
| app/predacons/src/generate.py | - Added methods for streaming outputs: `generate_output_stream`, `generate_chat_output_stream`, `generate_output_from_model_stream`, `generate_chat_output_from_model_stream`. <br> - Restructured `__generate_chat_output` to support streaming and updated error handling. |
| app/predacons/src/predacons.py | - Updated `rollout` function version from `v0.0.126` to `v0.0.128`. <br> - Added new functions: `text_stream`, `chat_stream`. <br> - Updated `generate`, `text_generate`, and `chat_generate` functions to include a `stream` parameter. |
| setup.py | - Updated version from `0.0.126` to `0.0.128`. <br> - Removed dependencies for `torch` and `bitsandbytes` from `install_requires`. |
| app/predacons/__init__.py | - Added `text_stream` and `chat_stream` to the list of exported functions. |

What's Changed
* Update README.md by shouryashashank in https://github.com/Predacons/predacons/pull/44
* Update predacons.py by shouryashashank in https://github.com/Predacons/predacons/pull/45
* Feature/quick fix by shouryashashank in https://github.com/Predacons/predacons/pull/46


**Full Changelog**: https://github.com/Predacons/predacons/compare/v0.0.126...v0.0.128

0.0.127

- **New Features**
  - Introduced streaming capabilities for text and chat generation, allowing real-time output.
  - Added new functions, `text_stream` and `chat_stream`, for enhanced streaming functionality.

- **Bug Fixes**
  - Updated error handling for streaming setup and generation with descriptive messages.

- **Examples**
  - Identical to the `0.0.128` examples shown above.


- **Documentation**
  - Enhanced function documentation with detailed parameter descriptions.

- **Chores**
  - Incremented the version number to `0.0.127` and updated package dependencies, removing those that caused compatibility concerns.

Walkthrough
This pull request introduces significant enhancements to the `Generate` class and the `predacons` module by adding streaming capabilities for output generation. New methods are implemented to allow real-time streaming of both text and chat outputs. The `rollout` function in `predacons.py` is updated with a new version number and additional print statements for better documentation. The `setup.py` file reflects a version increment and removal of specific dependencies, indicating a shift in package requirements.

Changes

| File | Change Summary |
|-------------------------------------|-----------------------------------------------------------------------------------------------------|
| app/predacons/src/generate.py | - Added methods for streaming outputs: `generate_output_stream`, `generate_chat_output_stream`, `generate_output_from_model_stream`, `generate_chat_output_from_model_stream`. <br> - Restructured `__generate_chat_output` to support streaming and updated error handling. |
| app/predacons/src/predacons.py | - Updated `rollout` function version from `v0.0.126` to `v0.0.127`. <br> - Added new functions: `text_stream`, `chat_stream`. <br> - Updated `generate`, `text_generate`, and `chat_generate` functions to include a `stream` parameter. |
| setup.py | - Updated version from `0.0.126` to `0.0.127`. <br> - Removed dependencies for `torch` and `bitsandbytes` from `install_requires`. |
| app/predacons/__init__.py | - Added `text_stream` and `chat_stream` to the list of exported functions. |

0.0.126

New Features

* Incremented version number of the predacons package to 0.0.126.
* Introduced the `PredaconsEmbedding` class for generating sentence embeddings using a pre-trained transformer model.

Bug Fixes

* Removed problematic dependencies (torch and bitsandbytes) from the installation requirements to prevent potential installation issues.

Chores

* Deleted the GitHub Actions workflow for automating package uploads to Test PyPI.
* Enhanced documentation within the predacons module to clarify function parameters and purposes.

Example
```python
# Generate embeddings for sentences
from predacons.src.embeddings import PredaconsEmbedding

# This embedding_model object can be used directly with any LangChain method
# that accepts an embedding model.
embedding_model = PredaconsEmbedding(model_name="sentence-transformers/paraphrase-MiniLM-L6-v2")
sentence_embeddings = embedding_model.get_embedding(["Your sentence here", "Another sentence here"])
```
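
As a follow-up beyond the release notes, a common use of sentence embeddings is comparing sentences by cosine similarity. This sketch assumes `get_embedding` returns one vector per input sentence in an array-like form (the `numpy` conversion is an assumption about the return type):

```python
import numpy as np

from predacons.src.embeddings import PredaconsEmbedding

embedding_model = PredaconsEmbedding(model_name="sentence-transformers/paraphrase-MiniLM-L6-v2")

sentences = ["Your sentence here", "Another sentence here"]
# Assumes the embeddings convert cleanly to a 2-D array, one row per sentence.
vectors = np.asarray(embedding_model.get_embedding(sentences))

# Cosine similarity between the two sentence vectors.
a, b = vectors[0], vectors[1]
similarity = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
print(f"cosine similarity: {similarity:.3f}")
```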


What's Changed
* removed prerelease workflow by shouryashashank in https://github.com/Predacons/predacons/pull/39
* Feature/add embedding by shouryashashank in https://github.com/Predacons/predacons/pull/41
* Feature/add embedding by shouryashashank in https://github.com/Predacons/predacons/pull/42


**Full Changelog**: https://github.com/Predacons/predacons/compare/v0.0.125...v0.0.126

0.0.125

New Features

* Introduced a new keyword argument `dont_print_output` in the chat generation function for improved output management (see the sketch below).

Version Updates

* Incremented the application version from 0.0.124 to 0.0.125.
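
A minimal usage sketch for `dont_print_output`: the flag name comes from these release notes, and the surrounding `chat_generate` call mirrors the examples elsewhere on this page (whether the function returns the generated output when printing is suppressed is an assumption):

```python
import predacons

model_path = "Predacon/Pico-Lamma-3.2-1B-Reasoning-Instruct"
model = predacons.load_model(model_path)
tokenizer = predacons.load_tokenizer(model_path)

chat = [
    {"role": "user", "content": "Explain the concept of acceleration in physics."},
]

# dont_print_output=True suppresses console printing so the caller can
# handle the generated output itself (assumed return behavior).
output = predacons.chat_generate(model=model,
                                 sequence=chat,
                                 max_length=200,
                                 tokenizer=tokenizer,
                                 dont_print_output=True)
```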

What's Changed
* added dont print by shouryashashank in https://github.com/Predacons/predacons/pull/38


**Full Changelog**: https://github.com/Predacons/predacons/compare/v0.0.124...v0.0.125

0.0.124

* Added auto-quantize for generation models, reducing the maximum memory requirement roughly fourfold.

Example
```python
import predacons

model_name = "Predacon/Pico-Lamma-3.2-1B-Reasoning-Instruct"  # any supported model

# Load model with auto_quantize
model = predacons.load_model(model_name, auto_quantize="4bit")
tokenizer = predacons.load_tokenizer(model_name)

# Generate response
sequence = "Explain the concept of acceleration in physics."
output, tokenizer = predacons.generate(model=model,
                                       sequence=sequence,
                                       max_length=500,
                                       tokenizer=tokenizer,
                                       trust_remote_code=True)

# Decode and print the generated text
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
```

What's Changed
* added auto quantize to all the model methods by shouryashashank in https://github.com/Predacons/predacons/pull/37


**Full Changelog**: https://github.com/Predacons/predacons/compare/v0.0.123...v0.0.124

0.0.123

* Added support for GGUF model files.

```python
import predacons

# model_id names a repo containing a GGUF checkpoint; gguf_file is the
# checkpoint's file name within that repo.
model = predacons.load_model(model_path=model_id, gguf_file=gguf_file)
tokenizer = predacons.load_tokenizer(model_id, gguf_file=gguf_file)

chat = [
    {"role": "system", "content": "You are a travel planner who plans trips and lists the places to visit at the destination."},
    {"role": "user", "content": "I want to plan a trip to New Delhi. Can you help me with that?"},
]

predacons.chat_generate(model=model,
                        sequence=chat,
                        max_length=200,
                        tokenizer=tokenizer,
                        trust_remote_code=True,
                        do_sample=True)
```


What's Changed
* added default chat template by shouryashashank in https://github.com/Predacons/predacons/pull/36


**Full Changelog**: https://github.com/Predacons/predacons/compare/v0.0.122...v0.0.123
