GPTCache

Latest version: v0.1.44


0.1.27

🎉 Introduction to new functions of GPTCache

1. Support the UForm embedding, which can be used for **bilingual** (English + Chinese) text

Thanks to ashvardanian for the contribution.

```python
from gptcache.embedding import UForm

# English-only model
test_sentence = 'Hello, world.'
encoder = UForm(model='unum-cloud/uform-vl-english')
embed = encoder.to_embeddings(test_sentence)

# Multilingual model ("什么是Github" is Chinese for "What is Github")
test_sentence = '什么是Github'
encoder = UForm(model='unum-cloud/uform-vl-multilingual')
embed = encoder.to_embeddings(test_sentence)
```


What's Changed
* Fix the wrong LangChainChat comment by SimFG in https://github.com/zilliztech/GPTCache/pull/381
* Add UForm multi-modal embedding by SimFG in https://github.com/zilliztech/GPTCache/pull/382
* Support to config the cache storage data size by SimFG in https://github.com/zilliztech/GPTCache/pull/383
* Update the protobuf version in the doc by SimFG in https://github.com/zilliztech/GPTCache/pull/387
* Update the version to `0.1.27` by SimFG in https://github.com/zilliztech/GPTCache/pull/389


**Full Changelog**: https://github.com/zilliztech/GPTCache/compare/0.1.26...0.1.27

0.1.26

🎉 Introduction to new functions of GPTCache

1. Support the PaddleNLP embedding, thanks to vax521.

```python
from gptcache.embedding import PaddleNLP

test_sentence = 'Hello, world.'
encoder = PaddleNLP(model='ernie-3.0-medium-zh')
embed = encoder.to_embeddings(test_sentence)
```


2. Support [the OpenAI Moderation API](https://platform.openai.com/docs/api-reference/moderations)

```python
from gptcache.adapter import openai
from gptcache.adapter.api import init_similar_cache
from gptcache.processor.pre import get_openai_moderation_input

init_similar_cache(pre_func=get_openai_moderation_input)
openai.Moderation.create(
    input="hello, world",
)
```


3. Add the llama_index bootcamp, which shows how GPTCache works with LlamaIndex

details: [WebPage QA](https://gptcache.readthedocs.io/en/latest/bootcamp/llama_index/webpage_qa.html)
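
The bootcamp builds a LlamaIndex web-page QA pipeline on top of GPTCache; the caching layer underneath is GPTCache's OpenAI adapter. Below is a minimal sketch of that underlying pattern only (the model name and question are placeholders, and an `OPENAI_API_KEY` is assumed to be set); see the linked notebook for the actual LlamaIndex wiring.

```python
from gptcache.adapter import openai  # drop-in wrapper around the openai module
from gptcache.adapter.api import init_similar_cache

# Build a semantic cache; repeated or similar questions are served from the cache.
init_similar_cache()

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What does this web page say about GPTCache?"}],
)
print(response["choices"][0]["message"]["content"])
```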

What's Changed
* Replace summarization test model. by wxywb in https://github.com/zilliztech/GPTCache/pull/368
* Add the llama index bootcamp by SimFG in https://github.com/zilliztech/GPTCache/pull/371
* Update the llama index example url by SimFG in https://github.com/zilliztech/GPTCache/pull/372
* Support the openai moderation adapter by SimFG in https://github.com/zilliztech/GPTCache/pull/376
* Paddlenlp embedding support by SimFG in https://github.com/zilliztech/GPTCache/pull/377
* Update the cache config template file and example directory by SimFG in https://github.com/zilliztech/GPTCache/pull/380


**Full Changelog**: https://github.com/zilliztech/GPTCache/compare/0.1.25...0.1.26

0.1.25

🎉 Introduction to new functions of GPTCache
1. Support the DocArray vector database

```python
from gptcache.manager import manager_factory

data_manager = manager_factory("sqlite,docarray")
```


2. Add the RWKV model for embedding

```python
from gptcache.embedding import Rwkv

test_sentence = 'Hello, world.'
encoder = Rwkv(model='sgugger/rwkv-430M-pile')
embed = encoder.to_embeddings(test_sentence)
```


What's Changed
* [skip ci]Add workflow to publish release image by Bennu-Li in https://github.com/zilliztech/GPTCache/pull/345
* Update the doc directory by SimFG in https://github.com/zilliztech/GPTCache/pull/348
* Add the docker image doc by SimFG in https://github.com/zilliztech/GPTCache/pull/349
* DocArray as a vectorstore by jupyterjazz in https://github.com/zilliztech/GPTCache/pull/351
* Fix the doc generation failure by SimFG in https://github.com/zilliztech/GPTCache/pull/352
* Replace base image and simplify dockerfile by Chiiizzzy in https://github.com/zilliztech/GPTCache/pull/353
* Example with DocArray by jupyterjazz in https://github.com/zilliztech/GPTCache/pull/354
* Add comments for session by shiyu22 in https://github.com/zilliztech/GPTCache/pull/355
* DocArray example adjustment by jupyterjazz in https://github.com/zilliztech/GPTCache/pull/356
* Improve the generation doc script by SimFG in https://github.com/zilliztech/GPTCache/pull/358
* Change the model test cases as L2 cases by SimFG in https://github.com/zilliztech/GPTCache/pull/362
* Add concat_context. by wxywb in https://github.com/zilliztech/GPTCache/pull/365
* Add the pre/report/session docs by SimFG in https://github.com/zilliztech/GPTCache/pull/364
* Add rwkv model for embedding. by wxywb in https://github.com/zilliztech/GPTCache/pull/363
* Update the version to `0.1.25` by SimFG in https://github.com/zilliztech/GPTCache/pull/367

New Contributors
* jupyterjazz made their first contribution in https://github.com/zilliztech/GPTCache/pull/351

**Full Changelog**: https://github.com/zilliztech/GPTCache/compare/0.1.24...0.1.25

0.1.24

🎉 Introduction to new functions of GPTCache

1. Support the LangChain embedding

```python
from gptcache.embedding import LangChain
from langchain.embeddings.openai import OpenAIEmbeddings

test_sentence = 'Hello, world.'
embeddings = OpenAIEmbeddings(model="your-embeddings-deployment-name")
encoder = LangChain(embeddings=embeddings)
embed = encoder.to_embeddings(test_sentence)
```


2. Add the GPTCache client

```python
from gptcache import Client

client = Client()
client.put("Hi", "Hi back")
ans = client.get("Hi")
```


3. Support pgvector as a vector store

```python
from gptcache.manager import manager_factory

data_manager = manager_factory("sqlite,pgvector", vector_params={"dimension": 10})
```


4. Add the GPTCache server doc

reference: https://github.com/zilliztech/GPTCache/blob/main/docs/usage.md#Build-GPTCache-server
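
As a quick illustration of what the server doc covers, here is a sketch of talking to a running GPTCache server with the client from item 2. The server address and the `uri` keyword argument are assumptions made for illustration; check the linked doc for the exact startup command and client interface.

```python
from gptcache import Client

# Assumes a GPTCache server is already running locally; "uri" is the assumed
# name of the constructor argument for the server address.
client = Client(uri="http://localhost:8000")
client.put("Hi", "Hi back")
print(client.get("Hi"))  # expected: "Hi back"
```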

What's Changed
* Update the version to `0.1.24` by SimFG in https://github.com/zilliztech/GPTCache/pull/347


**Full Changelog**: https://github.com/zilliztech/GPTCache/compare/0.1.23...0.1.24

0.1.23

🎉 Introduction to new functions of GPTCache

1. Support the session for the `LangChainLLMs`

```python
from langchain import OpenAI
from gptcache.adapter.langchain_models import LangChainLLMs
from gptcache.session import Session

session = Session(name="sqlite-example")
llm = LangChainLLMs(llm=OpenAI(temperature=0), session=session)
```


2. Optimize the summarization context process

```python
from gptcache import cache
from gptcache.processor.context.summarization_context import SummarizationContextProcess

context_process = SummarizationContextProcess()
cache.init(
    pre_embedding_func=context_process.pre_process,
)
```


3. Add the BabyAGI bootcamp

details: https://github.com/zilliztech/GPTCache/blob/main/docs/bootcamp/langchain/baby_agi.ipynb
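
The notebook runs the LangChain BabyAGI agent on top of a cached LLM. Below is a minimal sketch of the caching setup it relies on, using only the `LangChainLLMs` wrapper and `init_similar_cache` shown elsewhere in these notes; the prompt is a placeholder, and the BabyAGI chain construction itself lives in the linked notebook.

```python
from langchain import OpenAI
from gptcache.adapter.api import init_similar_cache
from gptcache.adapter.langchain_models import LangChainLLMs

# Initialize a semantic cache, then wrap the LangChain LLM so every call
# made by the agent's chains goes through GPTCache first.
init_similar_cache()
cached_llm = LangChainLLMs(llm=OpenAI(temperature=0))

print(cached_llm("Write a todo list for planting a vegetable garden."))
```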

What's Changed
* Update langchain llms with session by shiyu22 in https://github.com/zilliztech/GPTCache/pull/327
* Wrap gptcache server in a docker image by Chiiizzzy in https://github.com/zilliztech/GPTCache/pull/329
* Fix requirements conflict for sphinx by jaelgu in https://github.com/zilliztech/GPTCache/pull/330
* Use self-hosted tokenizer and update summarization context. by wxywb in https://github.com/zilliztech/GPTCache/pull/331
* Optimize some code by SimFG in https://github.com/zilliztech/GPTCache/pull/333
* Add BabyAGI bootcamp by shiyu22 in https://github.com/zilliztech/GPTCache/pull/334
* Improve the api for the `import_ruamel` by SimFG in https://github.com/zilliztech/GPTCache/pull/336


**Full Changelog**: https://github.com/zilliztech/GPTCache/compare/0.1.22...0.1.23

0.1.22

🎉 Introduction to new functions of GPTCache

1. Process the dialog context through the context processing interface, which currently supports two approaches: summarization and selective context

```python
import transformers
from gptcache.processor.context.summarization_context import SummarizationContextProcess
from gptcache.processor.context.selective_context import SelectiveContextProcess
from gptcache import cache

# Option 1: summarize long dialog history before embedding
summarizer = transformers.pipeline("summarization", model="facebook/bart-large-cnn")
context_process = SummarizationContextProcess(summarizer, None, 512)
cache.init(
    pre_embedding_func=context_process.pre_process,
)

# Option 2: selectively prune low-information content instead of summarizing
context_process = SelectiveContextProcess()
cache.init(
    pre_embedding_func=context_process.pre_process,
)
```


What's Changed
* Add SummarizationContextProcess by wxywb in https://github.com/zilliztech/GPTCache/pull/316
* Support duckdb by SimFG in https://github.com/zilliztech/GPTCache/pull/323
* Add cmd for gptcache server by Chiiizzzy in https://github.com/zilliztech/GPTCache/pull/325
* Support the selective context processor by SimFG in https://github.com/zilliztech/GPTCache/pull/326


**Full Changelog**: https://github.com/zilliztech/GPTCache/compare/0.1.21...0.1.22
