llm-inference

Latest version: v0.0.6


0.0.6

What's Changed
* refactor package by aniketmaurya in https://github.com/aniketmaurya/llm-inference/pull/13
* refactor apis by aniketmaurya in https://github.com/aniketmaurya/llm-inference/pull/14


**Full Changelog**: https://github.com/aniketmaurya/llm-inference/compare/v0.0.5...v0.0.6

0.0.5

What's Changed
* [pre-commit.ci] pre-commit suggestions by pre-commit-ci in https://github.com/aniketmaurya/llm-inference/pull/8
* fix build ci by aniketmaurya in https://github.com/aniketmaurya/llm-inference/pull/9
* Refactor packaging by aniketmaurya in https://github.com/aniketmaurya/llm-inference/pull/10
* refactor Chatbot by aniketmaurya in https://github.com/aniketmaurya/llm-inference/pull/11
* longchat chatbot by aniketmaurya in https://github.com/aniketmaurya/llm-inference/pull/12

[Chatbot demo video (chatbot.mov)](https://github.com/aniketmaurya/llm-inference/releases/download/v0.0.5/chatbot.mov)

New Contributors
* pre-commit-ci made their first contribution in https://github.com/aniketmaurya/llm-inference/pull/8

**Full Changelog**: https://github.com/aniketmaurya/llm-inference/compare/v0.0.4...v0.0.5

0.0.3

What's Changed
* Chatbot by aniketmaurya in https://github.com/aniketmaurya/LLaMA-Inference-API/pull/4
* Refactor bot by aniketmaurya in https://github.com/aniketmaurya/LLaMA-Inference-API/pull/5

How to use Chatbot

```python
from chatbot import LLaMAChatBot

# Paths to the model weights and tokenizer.
checkpoint_path = "state_dict.pth"
tokenizer_path = "tokenizer.model"

bot = LLaMAChatBot(
    checkpoint_path=checkpoint_path, tokenizer_path=tokenizer_path
)

print(bot.send("hi, what is the capital of France?"))
```



**Full Changelog**: https://github.com/aniketmaurya/LLaMA-Inference-API/compare/v0.0.2...v0.0.3

0.0.2

What's Changed
* Load finetuned weights by aniketmaurya in https://github.com/aniketmaurya/LLaMA-Inference-API/pull/2
* Refactor serve by aniketmaurya in https://github.com/aniketmaurya/LLaMA-Inference-API/pull/3

For inference

```python
import os

from llama_inference import LLaMAInference

# Directory containing the lit-llama weights, taken from the environment.
WEIGHTS_PATH = os.environ["WEIGHTS"]

checkpoint_path = f"{WEIGHTS_PATH}/lit-llama/7B/state_dict.pth"
tokenizer_path = f"{WEIGHTS_PATH}/lit-llama/tokenizer.model"

model = LLaMAInference(
    checkpoint_path=checkpoint_path,
    tokenizer_path=tokenizer_path,
    dtype="bfloat16",
)

print(model("New York is located in"))
```
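As a rough illustration of why `dtype="bfloat16"` is worth passing: bfloat16 stores each weight in 2 bytes instead of float32's 4, roughly halving the memory needed just to hold a 7B model's parameters. A back-of-the-envelope sketch (weights only; activations and KV cache are extra):

```python
# Rough parameter-memory estimate for a 7B-parameter model.
NUM_PARAMS = 7_000_000_000


def weight_memory_gib(num_params: int, bytes_per_param: int) -> float:
    """Memory needed to hold the weights alone, in GiB."""
    return num_params * bytes_per_param / 1024**3


fp32_gib = weight_memory_gib(NUM_PARAMS, 4)  # float32: 4 bytes per weight
bf16_gib = weight_memory_gib(NUM_PARAMS, 2)  # bfloat16: 2 bytes per weight

print(f"float32:  {fp32_gib:.1f} GiB")  # ~26.1 GiB
print(f"bfloat16: {bf16_gib:.1f} GiB")  # ~13.0 GiB
```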


For serving a REST API
```python
# app.py
import lightning as L

# PromptRequest is used below; importing it from the same module is an
# assumption about llama_inference.serve's exports.
from llama_inference.serve import PromptRequest, Response, ServeLLaMA

component = ServeLLaMA(input_type=PromptRequest, output_type=Response)
app = L.LightningApp(component)
```
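Once the app is running, a client can POST a prompt to the endpoint the Lightning app exposes. A minimal stdlib-only client sketch; the URL, port, and the `"prompt"` field name are illustrative assumptions, not taken from this repo:

```python
import json
from urllib import request


def build_payload(prompt: str) -> bytes:
    # JSON-encode the request body; the "prompt" field name is an assumption.
    return json.dumps({"prompt": prompt}).encode()


def query_llm(url: str, prompt: str) -> dict:
    # POST the prompt and return the parsed JSON response.
    req = request.Request(
        url,
        data=build_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)


# Example (requires the app from app.py to be running; URL is hypothetical):
# print(query_llm("http://127.0.0.1:7501/predict", "New York is located in"))
```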


**Full Changelog**: https://github.com/aniketmaurya/LLaMA-Inference-API/compare/v0.0.1...v0.0.2

0.0.1

What's Changed
* Deploy LLaMA with Lightning App by aniketmaurya in https://github.com/aniketmaurya/LLaMA-Inference-API/pull/1


**Full Changelog**: https://github.com/aniketmaurya/LLaMA-Inference-API/commits/v0.0.1
