RamaLama

Latest version: v0.7.2

0.7.2

What's Changed
* Bump to v0.7.1 by rhatdan in https://github.com/containers/ramalama/pull/1063
* Link ramalama-nvidia.1 to ramalama-cuda.1 by rhatdan in https://github.com/containers/ramalama/pull/1064
* Fix handling of $RAMALAMA_CONTAINER_ENGINE by rhatdan in https://github.com/containers/ramalama/pull/1065 (see the sketch after this list)
* Docker running of containers is blowing up by rhatdan in https://github.com/containers/ramalama/pull/1066
* args.engine can be None in this code path by ericcurtin in https://github.com/containers/ramalama/pull/1069
* Catch errors early about no support for --nocontainer by rhatdan in https://github.com/containers/ramalama/pull/1060
* Make sure build_rag.sh is in intel-gpu container image by rhatdan in https://github.com/containers/ramalama/pull/1075
* docs: fixes to ramalama-cuda by miabbott in https://github.com/containers/ramalama/pull/1070
* We should be pulling minor versions not latest by rhatdan in https://github.com/containers/ramalama/pull/1072
* Only install epel on rhel-based OSes by ericcurtin in https://github.com/containers/ramalama/pull/1080
* Fix gen of name in quadlet to be on its own line. by rhatdan in https://github.com/containers/ramalama/pull/1082
* Fix handling of entrypoint for Intel by rhatdan in https://github.com/containers/ramalama/pull/1081
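
The $RAMALAMA_CONTAINER_ENGINE fix in #1065 above restores explicit engine selection. A minimal sketch of typical usage; the model name is only a placeholder:

```bash
# Tell RamaLama which container engine to use instead of letting it
# auto-detect podman or docker.
export RAMALAMA_CONTAINER_ENGINE=podman

# Run a model as usual; "tinyllama" is a placeholder name.
ramalama run tinyllama
```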

New Contributors
* miabbott made their first contribution in https://github.com/containers/ramalama/pull/1070

**Full Changelog**: https://github.com/containers/ramalama/compare/v0.7.1...v0.7.2

0.7.1

What's Changed
* Bump to v0.7.0 by rhatdan in https://github.com/containers/ramalama/pull/1042
* Explain dryrun option better in container_build.sh by ericcurtin in https://github.com/containers/ramalama/pull/1041
* Add openvino to all images by rhatdan in https://github.com/containers/ramalama/pull/1045
* Print status message when emulating --pull=newer for docker by edmcman in https://github.com/containers/ramalama/pull/1047
* Remove unused variable by ericcurtin in https://github.com/containers/ramalama/pull/1044
* Default the number of threads to (nproc)/(2) by ericcurtin in https://github.com/containers/ramalama/pull/982
* Attempt to install openvino using pip by rhatdan in https://github.com/containers/ramalama/pull/1050
* feat: add --jinja to the list of arguments if MODEL_JINJA env var is true by benoitf in https://github.com/containers/ramalama/pull/1053 (example after this list)
* Never use entrypoint by rhatdan in https://github.com/containers/ramalama/pull/1046
* fix ramalama rag build code by rhatdan in https://github.com/containers/ramalama/pull/1049
* Combine Vulkan, Kompute and CPU inferencing into one image by ericcurtin in https://github.com/containers/ramalama/pull/1022
* Hardcode threads to 2 in this test by ericcurtin in https://github.com/containers/ramalama/pull/1056
* fixed chunk error by bmahabirbu in https://github.com/containers/ramalama/pull/1059
* Don't display server port when using run --rag by rhatdan in https://github.com/containers/ramalama/pull/1061
* Add support for /dev/accel being leaked into containers by rhatdan in https://github.com/containers/ramalama/pull/1055
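
PR #1053 above hooks an environment variable up to llama.cpp's --jinja flag. A hedged sketch of the intended use; the truthy value shown is an assumption:

```bash
# With MODEL_JINJA set to a true value, RamaLama appends --jinja to the
# llama-server arguments so Jinja-style chat templates are applied.
export MODEL_JINJA=true
ramalama serve mymodel   # "mymodel" is a placeholder
```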


**Full Changelog**: https://github.com/containers/ramalama/compare/v0.7.0...v0.7.1

0.7.0

This is a big release: we now have working support for RAG inside of RamaLama.
Try it out:

```bash
ramalama rag XYZ.pdf ABC.doc quay.io/NAME/myrag
ramalama run --rag quay.io/NAME/myrag MYMODEL
```
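
For context: the first command packages the listed documents into a RAG vector-database image, and the second runs a model with that database attached. A hedged end-to-end sketch using placeholder file, image, and model names, including the serve variant this release also wires up:

```bash
# Build a RAG image from local documents (all names are placeholders).
ramalama rag notes.pdf handbook.docx quay.io/example/docs-rag

# Chat interactively with a model, augmented by the RAG database.
ramalama run --rag quay.io/example/docs-rag granite

# Or expose the same setup as an HTTP server.
ramalama serve --rag quay.io/example/docs-rag granite
```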

What's Changed
* Default whisper-server.sh, llama-server.sh to /mnt/models/model.file by rhatdan in https://github.com/containers/ramalama/pull/984
* Improve intel-gpu to work with whisper-server and llama-server by rhatdan in https://github.com/containers/ramalama/pull/986
* whisper.cpp requires ffmpeg by ericcurtin in https://github.com/containers/ramalama/pull/985
* Fix container_build.sh to build all images by rhatdan in https://github.com/containers/ramalama/pull/989
* fix: use expected condition by benoitf in https://github.com/containers/ramalama/pull/992
* [CANN] Fix the bug that the openEuler repo does not have the ffmpeg-free package; use ffmpeg for openEuler instead by leo-pony in https://github.com/containers/ramalama/pull/994
* Add docling support version 2 by rhatdan in https://github.com/containers/ramalama/pull/979
* chore: use the reverse condition for models by benoitf in https://github.com/containers/ramalama/pull/995
* FIX: Ollama install with brew for CI by kush-gupt in https://github.com/containers/ramalama/pull/1002
* Add the ability to identify a wider set of Intel GPUs that have enough Execution Units to produce decent results by cgruver in https://github.com/containers/ramalama/pull/996
* Add ramalama client by ericcurtin in https://github.com/containers/ramalama/pull/997
* Fix errors found in RamaLama RAG by rhatdan in https://github.com/containers/ramalama/pull/998
* Turn on verbose logging in llama-server if --debug is on by ericcurtin in https://github.com/containers/ramalama/pull/1001
* Don't use relative paths for destination by rhatdan in https://github.com/containers/ramalama/pull/1003
* Red Hat Konflux update ramalama by red-hat-konflux in https://github.com/containers/ramalama/pull/1005
* Fix errors on python3.9 by rhatdan in https://github.com/containers/ramalama/pull/1007
* Use this container if we detect ROCm accelerator by ericcurtin in https://github.com/containers/ramalama/pull/1008
* Improve UX for ramalama-client by ericcurtin in https://github.com/containers/ramalama/pull/1013
* update docs for Intel GPU support. Clean up code comments by cgruver in https://github.com/containers/ramalama/pull/1011
* Generate quadlets with rag databases by rhatdan in https://github.com/containers/ramalama/pull/1012
* Keep conversation history by ericcurtin in https://github.com/containers/ramalama/pull/1014
* Fix ramalama serve --rag ABC --generate kube by rhatdan in https://github.com/containers/ramalama/pull/1015 (generation sketched after this list)
* Adds Rag chatbot to ramalama serve and preloads models for doc2rag and rag_framework by bmahabirbu in https://github.com/containers/ramalama/pull/1010
* Rag condition should be and instead of or by ericcurtin in https://github.com/containers/ramalama/pull/1016
* Show model name in API instead of model file path by bachp in https://github.com/containers/ramalama/pull/1009
* Make install script more aesthetically pleasing by ericcurtin in https://github.com/containers/ramalama/pull/1019
* Color each word individually by ericcurtin in https://github.com/containers/ramalama/pull/1017
* Add feature to turn off colored text by ericcurtin in https://github.com/containers/ramalama/pull/1021
* Fix up building of images by rhatdan in https://github.com/containers/ramalama/pull/1023
* Change default ROCM image to rocm-fedora by rhatdan in https://github.com/containers/ramalama/pull/1024
* Run build_rag.sh as root by rhatdan in https://github.com/containers/ramalama/pull/1027
* added hacky method to use 'run' instead of 'serve' for rag by bmahabirbu in https://github.com/containers/ramalama/pull/1026
* More fixes to build scripts by rhatdan in https://github.com/containers/ramalama/pull/1028
* Updated rag to have much better queries at the cost of a slight delay by bmahabirbu in https://github.com/containers/ramalama/pull/1029
* More fixes to build scripts by rhatdan in https://github.com/containers/ramalama/pull/1031
* Minor bugfix: remove self. from self.prompt by ericcurtin in https://github.com/containers/ramalama/pull/1032
* Added terminal name, fixed EOF bug, and added another model to rag_framework load by bmahabirbu in https://github.com/containers/ramalama/pull/1033
* Update registry.access.redhat.com/ubi9/ubi Docker tag to v9.5-1742918310 by renovate in https://github.com/containers/ramalama/pull/1035
* Typo in the webui by ericcurtin in https://github.com/containers/ramalama/pull/1039
* Fix errors on python3.9 by marceloleitner in https://github.com/containers/ramalama/pull/1038
* More updates for builds by rhatdan in https://github.com/containers/ramalama/pull/1036
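
PRs #1012 and #1015 in the list above cover generating deployment files for RAG setups rather than running them directly. A hedged sketch, reusing the same placeholder names as before:

```bash
# Emit Kubernetes YAML for a RAG-augmented server instead of starting it.
ramalama serve --rag quay.io/example/docs-rag --generate kube granite

# Emit a Podman quadlet unit for a systemd-managed deployment.
ramalama serve --rag quay.io/example/docs-rag --generate quadlet granite
```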

New Contributors
* red-hat-konflux made their first contribution in https://github.com/containers/ramalama/pull/1005
* bachp made their first contribution in https://github.com/containers/ramalama/pull/1009
* marceloleitner made their first contribution in https://github.com/containers/ramalama/pull/1038

**Full Changelog**: https://github.com/containers/ramalama/compare/v0.6.4...v0.7.0

0.6.4

What's Changed
* Print error when converting from an OCI Image by rhatdan in https://github.com/containers/ramalama/pull/932
* Make compatible with the macOS system python3 by ericcurtin in https://github.com/containers/ramalama/pull/933
* Bugfixes noticed while installing on Raspberry Pi by ericcurtin in https://github.com/containers/ramalama/pull/935
* Add note about updating nvidia.yaml file by rhatdan in https://github.com/containers/ramalama/pull/938
* Fix docker handling of GPUs. by rhatdan in https://github.com/containers/ramalama/pull/941
* macOS detection fix by ericcurtin in https://github.com/containers/ramalama/pull/942
* Add chat template support by engelmi in https://github.com/containers/ramalama/pull/917
* Consolidate gpu detection by ericcurtin in https://github.com/containers/ramalama/pull/943
* Implement RamaLama shell by ericcurtin in https://github.com/containers/ramalama/pull/915
* Add Linux x86-64 support for Ascend NPU accelerator in llama.cpp backend by leo-pony in https://github.com/containers/ramalama/pull/950
* Handle CNAI annotation deprecation by s3rj1k in https://github.com/containers/ramalama/pull/939
* Fix install.sh for OSTree system by ericcurtin in https://github.com/containers/ramalama/pull/951
* Let's run the container in all tests, to make sure it does not explode. by rhatdan in https://github.com/containers/ramalama/pull/946
* Added --chat-template-file support to ramalama serve by engelmi in https://github.com/containers/ramalama/pull/952
* Update registry.access.redhat.com/ubi9/ubi Docker tag to v9.5-1741850090 by renovate in https://github.com/containers/ramalama/pull/956
* Add specified nvidia-oci runtime by rhatdan in https://github.com/containers/ramalama/pull/953
* python3 validator by ericcurtin in https://github.com/containers/ramalama/pull/959
* There must be at least one CDI device present to use CUDA by ericcurtin in https://github.com/containers/ramalama/pull/954
* [NPU][Fix] Running the ramalama/cann container image fails when only the device number is specified and ascend-docker-runtime is not installed by leo-pony in https://github.com/containers/ramalama/pull/962
* Fix port rendering in README by andreadecorte in https://github.com/containers/ramalama/pull/963
* Update docker.io/nvidia/cuda Docker tag to v12.8.1 by renovate in https://github.com/containers/ramalama/pull/960
* Update llama.cpp to contain threads features by ericcurtin in https://github.com/containers/ramalama/pull/967
* Fix ENTRYPOINTS of whisper-server and llama-server by rhatdan in https://github.com/containers/ramalama/pull/965
* Add software to support using rag in RamaLama by rhatdan in https://github.com/containers/ramalama/pull/968
* Update llama.cpp for some Gemma features by ericcurtin in https://github.com/containers/ramalama/pull/973
* Only set this environment variable if we can resolve CDI by ericcurtin in https://github.com/containers/ramalama/pull/971
* feat(cpu): add --threads option to specify number of cpu threads by antheas in https://github.com/containers/ramalama/pull/966
* Asahi build is failing because of no python3-devel package by rhatdan in https://github.com/containers/ramalama/pull/974
* GPG Check is failing on the Intel Repo by cgruver in https://github.com/containers/ramalama/pull/976
* Add --runtime-arg option for run and serve by edmcman in https://github.com/containers/ramalama/pull/949 (combined usage sketched after this list)
* Fix handling of whisper-server and llama-server entrypoints by rhatdan in https://github.com/containers/ramalama/pull/975
* Bump to v0.6.4 by rhatdan in https://github.com/containers/ramalama/pull/978
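
The --threads (#966) and --runtime-arg (#949) options above can be combined; a hedged usage sketch, with the model name and the pass-through argument both placeholders:

```bash
# Pin inference to 8 CPU threads and pass one extra argument straight
# through to the underlying runtime (llama.cpp here).
ramalama run --threads 8 --runtime-arg "--ctx-size 4096" tinyllama
```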

New Contributors
* s3rj1k made their first contribution in https://github.com/containers/ramalama/pull/939
* antheas made their first contribution in https://github.com/containers/ramalama/pull/966
* edmcman made their first contribution in https://github.com/containers/ramalama/pull/949

**Full Changelog**: https://github.com/containers/ramalama/compare/v0.6.3...v0.6.4

0.6.3

What's Changed
* Check if terminal is compatible with emojis before using them by ericcurtin in https://github.com/containers/ramalama/pull/878
* Use vllm-openai upstream image by ericcurtin in https://github.com/containers/ramalama/pull/880
* The package available via dnf is in a good place by ericcurtin in https://github.com/containers/ramalama/pull/879
* Add Ollama to CI and system tests for its caching by kush-gupt in https://github.com/containers/ramalama/pull/881
* Moved pruning protocol from model to factory by engelmi in https://github.com/containers/ramalama/pull/882
* Remove emoji usage until linenoise.cpp and llama-run are compatible by ericcurtin in https://github.com/containers/ramalama/pull/884
* Inject config to cli functions by engelmi in https://github.com/containers/ramalama/pull/889
* Switch from tiny to smollm:135m by ericcurtin in https://github.com/containers/ramalama/pull/891
* benchmark failing because of lack of flag by ericcurtin in https://github.com/containers/ramalama/pull/888
* Update the README.md to point people at ramalama.ai web site by rhatdan in https://github.com/containers/ramalama/pull/894
* fix: handling of date with python 3.8/3.9/3.10 by benoitf in https://github.com/containers/ramalama/pull/897
* readme: fix artifactory link by alaviss in https://github.com/containers/ramalama/pull/903
* Added support for mac cpu and clear warning message by bmahabirbu in https://github.com/containers/ramalama/pull/902
* Use python variable instead of environment variable by ericcurtin in https://github.com/containers/ramalama/pull/907
* Update llama.cpp by ericcurtin in https://github.com/containers/ramalama/pull/908
* Build a non-kompute Vulkan container image by ericcurtin in https://github.com/containers/ramalama/pull/910
* Reintroduce emoji prompts by ericcurtin in https://github.com/containers/ramalama/pull/913
* Add new ramalama-*-core executables by ericcurtin in https://github.com/containers/ramalama/pull/909
* Detect & get info on hugging face repos, fix sizing of symlinked directories by kush-gupt in https://github.com/containers/ramalama/pull/901
* Add ramalama image built on Fedora using Fedora's rocm packages by maxamillion in https://github.com/containers/ramalama/pull/596
* Add new model store by engelmi in https://github.com/containers/ramalama/pull/905
* Add support for llama.cpp engine to use ascend NPU device by leo-pony in https://github.com/containers/ramalama/pull/911
* Extend make validate check to do more by ericcurtin in https://github.com/containers/ramalama/pull/916
* Modify GPU detection to match against env var value instead of prefix by cgruver in https://github.com/containers/ramalama/pull/919
* Add Intel ARC 155H to list of supported hardware by cgruver in https://github.com/containers/ramalama/pull/920
* Try to choose a free port on serve if default one is not available by andreadecorte in https://github.com/containers/ramalama/pull/898
* Add passing of environment variables to ramalama commands by rhatdan in https://github.com/containers/ramalama/pull/922
* Allow user to specify the images to use per hardware by rhatdan in https://github.com/containers/ramalama/pull/921 (see the sketch after this list)
* fix: CHAT_FORMAT variable should be expanded by benoitf in https://github.com/containers/ramalama/pull/926
* Update registry.access.redhat.com/ubi9/ubi Docker tag to v9.5-1741600006 by renovate in https://github.com/containers/ramalama/pull/928
* Bump to v0.6.3 by rhatdan in https://github.com/containers/ramalama/pull/931
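
PR #921 above makes the inference image user-selectable per hardware; the exact mechanism is defined by that PR, so the RAMALAMA_IMAGE variable below is one assumed way to express the override, and the image tag is a placeholder:

```bash
# Point RamaLama at a specific inference image rather than the
# auto-detected one.
export RAMALAMA_IMAGE=quay.io/ramalama/rocm-fedora

# Serve on an explicit port; per #898, serve now also picks a free
# port when the default one is taken.
ramalama serve --port 8081 smollm:135m
```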

New Contributors
* alaviss made their first contribution in https://github.com/containers/ramalama/pull/903
* leo-pony made their first contribution in https://github.com/containers/ramalama/pull/911
* andreadecorte made their first contribution in https://github.com/containers/ramalama/pull/898

**Full Changelog**: https://github.com/containers/ramalama/compare/v0.6.2...v0.6.3

0.6.2

What's Changed
* Introduce basic renovate.json file by gnaponie in https://github.com/containers/ramalama/pull/854
* Some tests around --network, --net options by ericcurtin in https://github.com/containers/ramalama/pull/840
* Add demos script to show the power of RamaLama by rhatdan in https://github.com/containers/ramalama/pull/855
* chore: add alias from llama-2 to llama2 by benoitf in https://github.com/containers/ramalama/pull/859
* Define Environment variables to use by rhatdan in https://github.com/containers/ramalama/pull/861
* Fix macOS GPU acceleration via podman by ericcurtin in https://github.com/containers/ramalama/pull/863
* Change rune to run by ericcurtin in https://github.com/containers/ramalama/pull/862
* Revert back to 12.6 version of cuda by rhatdan in https://github.com/containers/ramalama/pull/864
* Make CI build all images by ericcurtin in https://github.com/containers/ramalama/pull/831
* chore: do not format size for --json export in list command by benoitf in https://github.com/containers/ramalama/pull/870 (consumption example after this list)
* Added model factory by engelmi in https://github.com/containers/ramalama/pull/874
* feat: display emoji of the engine for the run in the prompt by benoitf in https://github.com/containers/ramalama/pull/872
* Fix up handling of image selection on generate by rhatdan in https://github.com/containers/ramalama/pull/856
* fix: use iso8601 for JSON modified field by benoitf in https://github.com/containers/ramalama/pull/873
* Bump to 0.6.2 by rhatdan in https://github.com/containers/ramalama/pull/875
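
The two JSON fixes above (#870, #873) make `ramalama list --json` output script-friendly: raw byte sizes and ISO 8601 timestamps. A hedged consumption example; the field names are assumed from the list columns:

```bash
# Pretty-print name, size, and modified time for each cached model.
ramalama list --json | jq '.[] | {name, size, modified}'
```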

New Contributors
* gnaponie made their first contribution in https://github.com/containers/ramalama/pull/854

**Full Changelog**: https://github.com/containers/ramalama/compare/v0.6.1...v0.6.2
