- feat: Update llama.cpp to ggerganov/llama.cpp@cb49e0f8c906e5da49e9f6d64a57742a9a241c6a
- docs: fix typo in README.md embeddings example by @iamlemec in #1232
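The README embeddings example referenced in the docs fix follows the pattern below. This is a minimal sketch, not the exact snippet from the README; the model path is a placeholder.

```python
from llama_cpp import Llama

# Load a local GGUF model with embedding support enabled.
llm = Llama(model_path="./model.gguf", embedding=True)

# create_embedding returns an OpenAI-style response dict;
# the vector itself is under data[0]["embedding"].
response = llm.create_embedding("Hello, world!")
vector = response["data"][0]["embedding"]
print(len(vector))
```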
0.2.53
- feat: Update llama.cpp to ggerganov/llama.cpp@cb49e0f8c906e5da49e9f6d64a57742a9a241c6a
- fix: eos/bos_token set correctly for Jinja2ChatFormatter and automatic chat formatter by @CISC in #1230
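The eos/bos fix concerns `llama_cpp.llama_chat_format.Jinja2ChatFormatter`, which renders chat messages through a Jinja2 chat template. A rough usage sketch follows; the template and token strings are made up for illustration rather than read from a real model.

```python
from llama_cpp.llama_chat_format import Jinja2ChatFormatter

# Illustrative ChatML-style template; real templates come from model metadata.
template = (
    "{% for message in messages %}"
    "<|im_start|>{{ message.role }}\n{{ message.content }}<|im_end|>\n"
    "{% endfor %}"
    "{% if add_generation_prompt %}<|im_start|>assistant\n{% endif %}"
)

formatter = Jinja2ChatFormatter(
    template=template,
    eos_token="<|im_end|>",  # should match the model's actual EOS token
    bos_token="<s>",
)

result = formatter(messages=[{"role": "user", "content": "Hi"}])
print(result.prompt)
```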
0.2.52
- feat: Update llama.cpp to ggerganov/llama.cpp@a33e6a0d2a66104ea9a906bdbf8a94d050189d91
- fix: Llava15ChatHandler (this function takes at least 4 arguments) by @abetlen in 8383a9e5620f5df5a88f62da16813eac200dd706
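The Llava15ChatHandler fix relates to the multimodal chat handler. Below is a rough sketch of how it is typically wired into `Llama`; the model paths and image URL are placeholders, and `logits_all=True` follows the README example for this handler.

```python
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

# CLIP projector (mmproj) path is a placeholder.
chat_handler = Llava15ChatHandler(clip_model_path="./mmproj.gguf")

llm = Llama(
    model_path="./llava-v1.5-7b.Q4_K_M.gguf",  # placeholder path
    chat_handler=chat_handler,
    n_ctx=2048,        # larger context to leave room for the image embedding
    logits_all=True,   # as in the README example for the llava handler
)

response = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
                {"type": "text", "text": "Describe this image."},
            ],
        }
    ]
)
print(response["choices"][0]["message"]["content"])
```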
0.2.51
- feat: Update llama.cpp to ggerganov/llama.cpp@c39373398803c669056304090050fe3f44b41bf9
- fix: Restore type hints for low-level api by @abetlen in 19234aa0dbd0c3c87656e65dd2b064665371925b
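The restored type hints belong to the ctypes-level bindings in `llama_cpp.llama_cpp`. The sketch below shows the general shape of that low-level API, assuming a local GGUF file; exact function signatures vary between releases, so treat this as illustrative.

```python
import llama_cpp

# Initialize the llama.cpp backend once per process.
llama_cpp.llama_backend_init()

# Default parameter structs are plain ctypes structures; the restored
# type hints make their fields discoverable in editors.
model_params = llama_cpp.llama_model_default_params()
model_params.n_gpu_layers = 0

model = llama_cpp.llama_load_model_from_file(b"./model.gguf", model_params)

ctx_params = llama_cpp.llama_context_default_params()
ctx = llama_cpp.llama_new_context_with_model(model, ctx_params)

# ... run inference via the llama_* functions ...

llama_cpp.llama_free(ctx)
llama_cpp.llama_free_model(model)
llama_cpp.llama_backend_free()
```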
0.2.50
- docs: Update Functionary OpenAI Server Readme by @jeffrey-fong in #1193
- fix: LlamaHFTokenizer now receives pre_tokens by @abetlen in 47bad30dd716443652275099fa3851811168ff4a
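The LlamaHFTokenizer change affects the optional Hugging Face tokenizer that can be attached to a `Llama` instance, which the Functionary docs rely on. A rough sketch, assuming `Llama.from_pretrained` is available in this version; the repo id and filename are illustrative.

```python
from llama_cpp import Llama
from llama_cpp.llama_tokenizer import LlamaHFTokenizer

# Pull the matching tokenizer from the Hugging Face Hub (repo id is illustrative).
tokenizer = LlamaHFTokenizer.from_pretrained("meetkai/functionary-small-v2.2-GGUF")

llm = Llama.from_pretrained(
    repo_id="meetkai/functionary-small-v2.2-GGUF",   # illustrative
    filename="functionary-small-v2.2.q4_0.gguf",     # illustrative
    tokenizer=tokenizer,
    chat_format="functionary-v2",
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is the weather like today?"}]
)
print(response["choices"][0]["message"]["content"])
```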
0.2.49
- fix: module 'llama_cpp.llama_cpp' has no attribute 'c_uint8' in Llama.save_state by @abetlen in db776a885cd4c20811f22f8bd1a27ecc71dba927
- feat: Auto detect Mixtral's slightly different format by @lukestanley in #1214
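The `Llama.save_state` fix concerns the state snapshot API. A minimal sketch of saving and restoring the internal state, assuming a local model path (placeholder):

```python
from llama_cpp import Llama

llm = Llama(model_path="./model.gguf")  # placeholder path

# Run a completion, snapshot the internal state, then restore it later.
llm.create_completion("The quick brown fox", max_tokens=1)
state = llm.save_state()

llm.create_completion(" jumps over the lazy dog", max_tokens=16)

# Roll the context back to the snapshot.
llm.load_state(state)
```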