- feat: Update llama.cpp to ggerganov/llama.cpp 89febfed9322c8849520dc63c93ee4f5fd72556e
0.2.44
- feat: Update llama.cpp to ggerganov/llama.cpp 4524290e87b8e107cc2b56e1251751546f4b9051
- fix: create_embedding broken response for input type str by abetlen in 0ce66bc080fe537590b05b24bf442480bf2dd045
- fix: Use '\n' separator for EventSourceResponse by khimaros in 1188
- fix: Incorporate embedding pooling layer fixes by iamlemec in 1194
0.2.43
- feat: Update llama.cpp to ggerganov/llama.cpp 8084d554406b767d36b3250b3b787462d5dd626f
- feat: Support batch embeddings by iamlemec in 1186
- fix: submodule kompute is not included in sdist by abetlen in 7dbbfdecadebe7750be650d9409959640ff9a460
- fix: Update openbuddy prompt format by abetlen in 07a783779a62a4aac0b11161c7e0eb983ff215f8
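A quick sketch of what the batch-embedding support above means for callers: `create_embedding` can take a list of input strings and return one vector per input, in the OpenAI-style embeddings response shape. The `Llama(...)` call itself is left as a comment because it requires a local GGUF model file (the path shown is hypothetical), so only the request/response shapes are demonstrated here.

```python
# Batch request: a list of strings instead of a single string.
batch_inputs = ["first sentence", "second sentence"]

# With llama-cpp-python this would be invoked as (needs a local GGUF model):
#   from llama_cpp import Llama
#   llm = Llama(model_path="model.gguf", embedding=True)
#   result = llm.create_embedding(batch_inputs)
#
# The response follows the OpenAI embeddings shape, one entry per input.
# The vectors below are placeholders standing in for real embeddings:
example_response = {
    "object": "list",
    "data": [
        {"object": "embedding", "index": i, "embedding": [0.0, 0.0]}
        for i, _ in enumerate(batch_inputs)
    ],
    "model": "model.gguf",
}
vectors = [entry["embedding"] for entry in example_response["data"]]
```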
0.2.42
- feat: Update llama.cpp to ggerganov/llama.cpp ea9c8e11436ad50719987fa23a289c74b7b40d40
- fix: sample idx off-by-one error for logit_processors by lapp0 in 1179
- fix: chat formatting bugs in `chatml-function-calling` by abetlen in 4b0e3320bd8c2c209e29978d0b21e2e471cc9ee3 and 68fb71b6a26a1e57331868f959b47ab4b87851e1
0.2.41
- feat: Update llama.cpp to ggerganov/llama.cpp 895407f31b358e3d9335e847d13f033491ec8a5b
- fix: Don't change order of json schema object properties in generated grammar unless prop_order is passed by abetlen in d1822fed6b706f38bd1ff0de4dec5baaa3cf84fa
0.2.40
- feat: Update llama.cpp to ggerganov/llama.cpp 3bdc4cd0f595a6096cca4a64aa75ffa8a3503465
- feat: Generic chatml Function Calling using `chat_format="chatml-function-calling"` by abetlen in 957
- fix: Circular dependency preventing early Llama object free by notwa in 1176
- docs: Set the correct command for compiling with SYCL support by akarshanbiswas in 1172
- feat: use gpu backend for clip if available by iamlemec in 1175
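The `chatml-function-calling` chat format above accepts OpenAI-style tool definitions. Below is a sketch of the request payload one would pass to `create_chat_completion`; the `get_weather` tool is a hypothetical example, and the `Llama(...)` call is left as a comment because it needs a local GGUF model file.

```python
import json

# OpenAI-style tool schema accepted by create_chat_completion when the
# model is constructed with chat_format="chatml-function-calling".
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical example tool
        "description": "Look up the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Oslo?"}]

# With llama-cpp-python this payload would be sent as (needs a local model):
#   from llama_cpp import Llama
#   llm = Llama(model_path="model.gguf",
#               chat_format="chatml-function-calling")
#   llm.create_chat_completion(messages=messages, tools=tools)
print(json.dumps(tools[0]["function"]["name"]))
```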