RamaLama

Latest version: v0.6.3


0.6.3

What's Changed
* Check if terminal is compatible with emojis before using them by ericcurtin in https://github.com/containers/ramalama/pull/878
* Use vllm-openai upstream image by ericcurtin in https://github.com/containers/ramalama/pull/880
* The package available via dnf is in a good place by ericcurtin in https://github.com/containers/ramalama/pull/879
* Add Ollama to CI and system tests for its caching by kush-gupt in https://github.com/containers/ramalama/pull/881
* Moved pruning protocol from model to factory by engelmi in https://github.com/containers/ramalama/pull/882
* Remove emoji usage until linenoise.cpp and llama-run are compatible by ericcurtin in https://github.com/containers/ramalama/pull/884
* Inject config to cli functions by engelmi in https://github.com/containers/ramalama/pull/889
* Switch from tiny to smollm:135m by ericcurtin in https://github.com/containers/ramalama/pull/891
* benchmark failing because of lack of flag by ericcurtin in https://github.com/containers/ramalama/pull/888
* Update the README.md to point people at ramalama.ai web site by rhatdan in https://github.com/containers/ramalama/pull/894
* fix: handling of date with python 3.8/3.9/3.10 by benoitf in https://github.com/containers/ramalama/pull/897
* readme: fix artifactory link by alaviss in https://github.com/containers/ramalama/pull/903
* Added support for mac cpu and clear warning message by bmahabirbu in https://github.com/containers/ramalama/pull/902
* Use python variable instead of environment variable by ericcurtin in https://github.com/containers/ramalama/pull/907
* Update llama.cpp by ericcurtin in https://github.com/containers/ramalama/pull/908
* Build a non-kompute Vulkan container image by ericcurtin in https://github.com/containers/ramalama/pull/910
* Reintroduce emoji prompts by ericcurtin in https://github.com/containers/ramalama/pull/913
* Add new ramalama-*-core executables by ericcurtin in https://github.com/containers/ramalama/pull/909
* Detect & get info on hugging face repos, fix sizing of symlinked directories by kush-gupt in https://github.com/containers/ramalama/pull/901
* Add ramalama image built on Fedora using Fedora's rocm packages by maxamillion in https://github.com/containers/ramalama/pull/596
* Add new model store by engelmi in https://github.com/containers/ramalama/pull/905
* Add support for llama.cpp engine to use ascend NPU device by leo-pony in https://github.com/containers/ramalama/pull/911
* Extend make validate check to do more by ericcurtin in https://github.com/containers/ramalama/pull/916
* Modify GPU detection to match against env var value instead of prefix by cgruver in https://github.com/containers/ramalama/pull/919
* Add Intel ARC 155H to list of supported hardware by cgruver in https://github.com/containers/ramalama/pull/920
* Try to choose a free port on serve if default one is not available by andreadecorte in https://github.com/containers/ramalama/pull/898
* Add passing of environment variables to ramalama commands by rhatdan in https://github.com/containers/ramalama/pull/922
* Allow user to specify the images to use per hardware by rhatdan in https://github.com/containers/ramalama/pull/921
* fix: CHAT_FORMAT variable should be expanded by benoitf in https://github.com/containers/ramalama/pull/926
* Update registry.access.redhat.com/ubi9/ubi Docker tag to v9.5-1741600006 by renovate in https://github.com/containers/ramalama/pull/928
* Bump to v0.6.3 by rhatdan in https://github.com/containers/ramalama/pull/931
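
Among the changes above, the free-port fallback for `serve` (pull 898) is easy to picture in code. The sketch below is illustrative only, with assumed names and ports, not ramalama's actual implementation: try the preferred port first, and if it is taken, ask the kernel for any free one.

```python
import socket

def choose_port(preferred=8080):
    # Try the preferred port first (8080 is an assumption for illustration).
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind(("127.0.0.1", preferred))
            return preferred  # preferred port is free
        except OSError:
            pass  # preferred port busy, fall through
    # Binding to port 0 asks the OS to pick any free ephemeral port.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]
```

Note this check-then-use pattern has an inherent race window; it is a sketch of the behavior, not a hardened implementation.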

New Contributors
* alaviss made their first contribution in https://github.com/containers/ramalama/pull/903
* leo-pony made their first contribution in https://github.com/containers/ramalama/pull/911
* andreadecorte made their first contribution in https://github.com/containers/ramalama/pull/898

**Full Changelog**: https://github.com/containers/ramalama/compare/v0.6.2...v0.6.3

0.6.2

What's Changed
* Introduce basic renovate.json file by gnaponie in https://github.com/containers/ramalama/pull/854
* Some tests around --network, --net options by ericcurtin in https://github.com/containers/ramalama/pull/840
* Add demos script to show the power of RamaLama by rhatdan in https://github.com/containers/ramalama/pull/855
* chore: add alias from llama-2 to llama2 by benoitf in https://github.com/containers/ramalama/pull/859
* Define Environment variables to use by rhatdan in https://github.com/containers/ramalama/pull/861
* Fix macOS GPU acceleration via podman by ericcurtin in https://github.com/containers/ramalama/pull/863
* Change rune to run by ericcurtin in https://github.com/containers/ramalama/pull/862
* Revert back to 12.6 version of cuda by rhatdan in https://github.com/containers/ramalama/pull/864
* Make CI build all images by ericcurtin in https://github.com/containers/ramalama/pull/831
* chore: do not format size for --json export in list command by benoitf in https://github.com/containers/ramalama/pull/870
* Added model factory by engelmi in https://github.com/containers/ramalama/pull/874
* feat: display emoji of the engine for the run in the prompt by benoitf in https://github.com/containers/ramalama/pull/872
* Fix up handling of image selection on generate by rhatdan in https://github.com/containers/ramalama/pull/856
* fix: use iso8601 for JSON modified field by benoitf in https://github.com/containers/ramalama/pull/873
* Bump to 0.6.2 by rhatdan in https://github.com/containers/ramalama/pull/875
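
The ISO 8601 change for the JSON `modified` field (pull 873) boils down to serializing timestamps with a standard, sortable format. A minimal sketch, with invented record contents:

```python
import json
from datetime import datetime, timezone

# Emit the "modified" field as ISO 8601 rather than a locale-formatted date.
modified = datetime(2025, 3, 10, 12, 0, 0, tzinfo=timezone.utc)
record = {"name": "smollm:135m", "modified": modified.isoformat()}
print(json.dumps(record))
```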

New Contributors
* gnaponie made their first contribution in https://github.com/containers/ramalama/pull/854

**Full Changelog**: https://github.com/containers/ramalama/compare/v0.6.1...v0.6.2

0.6.1

What's Changed
* chore: use absolute link for the RamaLama logo by benoitf in https://github.com/containers/ramalama/pull/781
* Reuse Ollama cached image when available by kush-gupt in https://github.com/containers/ramalama/pull/782
* Add env var RAMALAMA_GPU_DEVICE to allow for explicit declaration of the GPU device to use by cgruver in https://github.com/containers/ramalama/pull/773
* Change RAMALAMA_GPU_DEVICE to RAMALAMA_DEVICE for AI accelerator device override by cgruver in https://github.com/containers/ramalama/pull/786
* Add Security information to README.md by rhatdan in https://github.com/containers/ramalama/pull/787
* Fix exiting on llama-serve when user hits ^c by rhatdan in https://github.com/containers/ramalama/pull/785
* Check if file exists before sorting them into a list by kush-gupt in https://github.com/containers/ramalama/pull/784
* Add ramalama run --keepalive option by rhatdan in https://github.com/containers/ramalama/pull/789
* Stash output from container_manager by rhatdan in https://github.com/containers/ramalama/pull/790
* Install llama.cpp for mac and nocontainer tests by rhatdan in https://github.com/containers/ramalama/pull/792
* _engine is set to None or has a value by ericcurtin in https://github.com/containers/ramalama/pull/793
* Only run dnf commands on platforms that have dnf by ericcurtin in https://github.com/containers/ramalama/pull/794
* Add ramalama rag command by rhatdan in https://github.com/containers/ramalama/pull/501
* Attempt to use build_llama_and_whisper.sh by rhatdan in https://github.com/containers/ramalama/pull/795
* Change --network-mode to --network by ericcurtin in https://github.com/containers/ramalama/pull/800
* Add some more gfx values to the default list by ericcurtin in https://github.com/containers/ramalama/pull/806
* Update registry.access.redhat.com/ubi9/ubi Docker tag to v9.5-1739449058 by renovate in https://github.com/containers/ramalama/pull/808
* Prepare containers to run with ai-lab-recipes by rhatdan in https://github.com/containers/ramalama/pull/803
* If ngl is not specified by ericcurtin in https://github.com/containers/ramalama/pull/802
* feat: add ramalama labels about the execution on top of container by benoitf in https://github.com/containers/ramalama/pull/810
* Add run and serve arguments for --device and --privileged by cgruver in https://github.com/containers/ramalama/pull/809
* chore: rewrite readarray function to make it portable by benoitf in https://github.com/containers/ramalama/pull/815
* chore: replace RAMALAMA label by ai.ramalama by benoitf in https://github.com/containers/ramalama/pull/814
* Upgrade from 6.3.1 to 6.3.2 by ericcurtin in https://github.com/containers/ramalama/pull/816
* Removed error wrapping in urlopen by engelmi in https://github.com/containers/ramalama/pull/818
* Encountered a bug where this function was returning -1 by ericcurtin in https://github.com/containers/ramalama/pull/817
* Align runtime arguments with run, serve, bench, and perplexity by cgruver in https://github.com/containers/ramalama/pull/820
* README: fix inspect command description by kush-gupt in https://github.com/containers/ramalama/pull/826
* Pin dev dependencies to major version and improve formatting + linting by engelmi in https://github.com/containers/ramalama/pull/824
* README: Fix typo by bupd in https://github.com/containers/ramalama/pull/827
* Switch apt-get to apt by ericcurtin in https://github.com/containers/ramalama/pull/832
* Update registry.access.redhat.com/ubi9/ubi Docker tag to v9.5-1739751568 by renovate in https://github.com/containers/ramalama/pull/834
* Add entrypoint container images by rhatdan in https://github.com/containers/ramalama/pull/819
* HuggingFace Cache Implementation by kush-gupt in https://github.com/containers/ramalama/pull/833
* Make serve by default expose network by ericcurtin in https://github.com/containers/ramalama/pull/830
* Fix up man page help verification by rhatdan in https://github.com/containers/ramalama/pull/835
* Fix handling of --privileged flag by rhatdan in https://github.com/containers/ramalama/pull/821
* chore: fix links of llama.cpp repository by benoitf in https://github.com/containers/ramalama/pull/841
* Unify CLI options (verbosity, version) by mkesper in https://github.com/containers/ramalama/pull/685
* Add system tests to pull from the Hugging Face cache by kush-gupt in https://github.com/containers/ramalama/pull/846
* Just one add_argument call for --dryrun/--dry-run by ericcurtin in https://github.com/containers/ramalama/pull/847
* Fix ramalama info to display NVIDIA and AMD GPU information by rhatdan in https://github.com/containers/ramalama/pull/848
* Remove LICENSE header from gpu_detector.py by ericcurtin in https://github.com/containers/ramalama/pull/850
* Allowing modification of pull policy by rhatdan in https://github.com/containers/ramalama/pull/843
* Include instructions for installing on Fedora 42+ by stefwalter in https://github.com/containers/ramalama/pull/849
* Bump to 0.6.1 by rhatdan in https://github.com/containers/ramalama/pull/851
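
The Hugging Face cache work above (pulls 833 and 846) follows a common pattern: check local storage before downloading. The sketch below is a simplification with an invented on-disk layout; the real Hugging Face cache uses a different directory structure.

```python
from pathlib import Path
from typing import Optional

def cached_model_path(repo: str, cache_root: str = "~/.cache/huggingface") -> Optional[Path]:
    # If a local copy already exists under the cache root, return it
    # instead of re-downloading; otherwise signal a miss with None.
    candidate = Path(cache_root).expanduser() / repo.replace("/", "--")
    return candidate if candidate.exists() else None
```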

New Contributors
* benoitf made their first contribution in https://github.com/containers/ramalama/pull/781
* bupd made their first contribution in https://github.com/containers/ramalama/pull/827
* mkesper made their first contribution in https://github.com/containers/ramalama/pull/685
* stefwalter made their first contribution in https://github.com/containers/ramalama/pull/849

**Full Changelog**: https://github.com/containers/ramalama/compare/v0.6.0...v0.6.1

0.6.0

What's Changed
* fix error on macOS for M1 pro by volker48 in https://github.com/containers/ramalama/pull/687
* This should be a global variable by ericcurtin in https://github.com/containers/ramalama/pull/703
* Update registry.access.redhat.com/ubi9/ubi Docker tag to v9.5-1736404036 by renovate in https://github.com/containers/ramalama/pull/702
* Update install.sh to include "gpu_detector.py" by graystevens in https://github.com/containers/ramalama/pull/704
* add --ngl to specify the number of gpu layers, and --keep-groups so podman has access to gpu by khumarahn in https://github.com/containers/ramalama/pull/659
* We are displaying display driver info, scope creep by ericcurtin in https://github.com/containers/ramalama/pull/710
* Use CODEOWNERS file for autoassign by dougsland in https://github.com/containers/ramalama/pull/706
* common: general improvements by dougsland in https://github.com/containers/ramalama/pull/713
* Fix macOS emoji compatibility with Alacritty by ericcurtin in https://github.com/containers/ramalama/pull/716
* Makelint by dougsland in https://github.com/containers/ramalama/pull/715
* Adding slp, engelmi, also by ericcurtin in https://github.com/containers/ramalama/pull/711
* Report error when huggingface-cli is not available by rhatdan in https://github.com/containers/ramalama/pull/719
* Add --network-mode option by rhjostone in https://github.com/containers/ramalama/pull/674
* README: add convert to commands list by kush-gupt in https://github.com/containers/ramalama/pull/723
* Revert "Add --network-mode option" by ericcurtin in https://github.com/containers/ramalama/pull/731
* Check for apple,arm-platform in /proc by ericcurtin in https://github.com/containers/ramalama/pull/730
* Packit: downstream jobs for EPEL 9,10 by lsm5 in https://github.com/containers/ramalama/pull/728
* Add logic to build intel-gpu image to build_llama_and_whisper.sh by cgruver in https://github.com/containers/ramalama/pull/724
* Add --network-mode option by rhatdan in https://github.com/containers/ramalama/pull/734
* Honor RAMALAMA_IMAGE if set by rhatdan in https://github.com/containers/ramalama/pull/733
* ramalama container: Make it possible to build basic container on all RHEL architectures by jcajka in https://github.com/containers/ramalama/pull/722
* Add docs for using podman farm to build multi-arch images by cgruver in https://github.com/containers/ramalama/pull/735
* Update registry.access.redhat.com/ubi9/ubi Docker tag to v9.5-1738643550 by renovate in https://github.com/containers/ramalama/pull/729
* modify container_build.sh to add capability to use podman farm for multi-arch images by cgruver in https://github.com/containers/ramalama/pull/736
* There's a comma in the list of files in install.sh by ericcurtin in https://github.com/containers/ramalama/pull/739
* Make the default of ngl be -1 by ericcurtin in https://github.com/containers/ramalama/pull/707
* github actions: ramalama install by dougsland in https://github.com/containers/ramalama/pull/738
* [skip-ci] Update actions/checkout action to v4 by renovate in https://github.com/containers/ramalama/pull/740
* On macOS this was returning an incorrect path by ericcurtin in https://github.com/containers/ramalama/pull/741
* Begin process of packaging PRAGmatic by rhatdan in https://github.com/containers/ramalama/pull/597
* Allow users to build RAG versus Docling images by rhatdan in https://github.com/containers/ramalama/pull/744
* Update vLLM containers by ericcurtin in https://github.com/containers/ramalama/pull/746
* Update README.md by bmbouter in https://github.com/containers/ramalama/pull/748
* Update progress bar only once every 100ms by ericcurtin in https://github.com/containers/ramalama/pull/717
* Remove reference to non-existent docs in CONTRIBUTING.md by cgruver in https://github.com/containers/ramalama/pull/761
* Check if krunkit process is running with --all-providers by ericcurtin in https://github.com/containers/ramalama/pull/763
* update_progress only takes one parameter by ericcurtin in https://github.com/containers/ramalama/pull/764
* Detect Intel ARC GPU in Meteor Lake chipset by cgruver in https://github.com/containers/ramalama/pull/749
* Drop all capabilities and run with no-new-privileges by rhatdan in https://github.com/containers/ramalama/pull/765
* Progress bar fixes by ericcurtin in https://github.com/containers/ramalama/pull/767
* typo: Add quotes to intel-gpu argument in build llama and whisper script by hanthor in https://github.com/containers/ramalama/pull/766
* chore(deps): update registry.access.redhat.com/ubi9/ubi docker tag to v9.5-1738814488 by renovate in https://github.com/containers/ramalama/pull/771
* There would be one case where this wouldn't work by ericcurtin in https://github.com/containers/ramalama/pull/768
* docs: update ramalama.1.md by eltociear in https://github.com/containers/ramalama/pull/775
* Add community documents by rhatdan in https://github.com/containers/ramalama/pull/777
* Parse https://ollama.com/library/ syntax by ericcurtin in https://github.com/containers/ramalama/pull/648
* Use containers CODE-OF-CONDUCT.md by rhatdan in https://github.com/containers/ramalama/pull/778
* Add model inspect cli by engelmi in https://github.com/containers/ramalama/pull/776
* Cleanup READMEs and man pages. by rhatdan in https://github.com/containers/ramalama/pull/780
* Bump to v0.6.0 by rhatdan in https://github.com/containers/ramalama/pull/779
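
Honoring `RAMALAMA_IMAGE` if set (pull 733) is the standard environment-variable override pattern. A minimal sketch; the default image name below is an assumption for illustration, not necessarily ramalama's actual default:

```python
import os

def resolve_image(default="quay.io/ramalama/ramalama"):
    # An explicit RAMALAMA_IMAGE in the environment wins over the default.
    return os.environ.get("RAMALAMA_IMAGE", default)
```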

New Contributors
* volker48 made their first contribution in https://github.com/containers/ramalama/pull/687
* graystevens made their first contribution in https://github.com/containers/ramalama/pull/704
* khumarahn made their first contribution in https://github.com/containers/ramalama/pull/659
* rhjostone made their first contribution in https://github.com/containers/ramalama/pull/674
* jcajka made their first contribution in https://github.com/containers/ramalama/pull/722
* bmbouter made their first contribution in https://github.com/containers/ramalama/pull/748
* hanthor made their first contribution in https://github.com/containers/ramalama/pull/766
* eltociear made their first contribution in https://github.com/containers/ramalama/pull/775

**Full Changelog**: https://github.com/containers/ramalama/compare/v0.5.5...v0.6.0

0.5.5

What's Changed
* Add perplexity subcommand to RamaLama CLI by ericcurtin in https://github.com/containers/ramalama/pull/637
* Throw an exception when there is a failure in http_client.init by jhjaggars in https://github.com/containers/ramalama/pull/647
* Add container image to support Intel ARC GPU by cgruver in https://github.com/containers/ramalama/pull/644
* Guide users to install huggingface-cli to login to huggingface by pbabinca in https://github.com/containers/ramalama/pull/645
* Update intel-gpu Containerfile to reduce the size of the builder image by cgruver in https://github.com/containers/ramalama/pull/657
* Look for configs also in /usr/local/share/ramalama by jistr in https://github.com/containers/ramalama/pull/672
* remove ro as an option when mounting images by kush-gupt in https://github.com/containers/ramalama/pull/676
* Add generated man pages for section 7 into gitignore by jistr in https://github.com/containers/ramalama/pull/673
* Revert "Added --jinja to llama-run command" by ericcurtin in https://github.com/containers/ramalama/pull/683
* Pull the source model if it isn't already in local storage for the convert and push functions by kush-gupt in https://github.com/containers/ramalama/pull/680
* bump llama.cpp to latest release hash aa6fb13 by maxamillion in https://github.com/containers/ramalama/pull/692
* Introduce a mode so one can install from git by ericcurtin in https://github.com/containers/ramalama/pull/690
* Add ramalama gpu_detector by dougsland in https://github.com/containers/ramalama/pull/670
* Bump to v0.5.5 by rhatdan in https://github.com/containers/ramalama/pull/701
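
The convert/push change above (pull 680) describes a pull-if-missing step. A hedged sketch of that behavior, with all names hypothetical:

```python
def ensure_local(model, store, pull):
    # Before convert or push, fetch the source model into local
    # storage if it is not already there; reuse it otherwise.
    if model not in store:
        store[model] = pull(model)
    return store[model]
```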

New Contributors
* cgruver made their first contribution in https://github.com/containers/ramalama/pull/644
* pbabinca made their first contribution in https://github.com/containers/ramalama/pull/645
* jistr made their first contribution in https://github.com/containers/ramalama/pull/672
* kush-gupt made their first contribution in https://github.com/containers/ramalama/pull/676
* maxamillion made their first contribution in https://github.com/containers/ramalama/pull/692
* dougsland made their first contribution in https://github.com/containers/ramalama/pull/670

**Full Changelog**: https://github.com/containers/ramalama/compare/v0.5.4...v0.5.5

0.5.4

What's Changed
* Attempt to install podman by ericcurtin in https://github.com/containers/ramalama/pull/621
* Introduce ramalama bench by ericcurtin in https://github.com/containers/ramalama/pull/620
* Add man page for cuda support by rhatdan in https://github.com/containers/ramalama/pull/623
* Less verbose output by ericcurtin in https://github.com/containers/ramalama/pull/624
* Avoid dnf install on OSTree system by ericcurtin in https://github.com/containers/ramalama/pull/622
* Fix list in README - Credits section by kubealex in https://github.com/containers/ramalama/pull/627
* added mac cpu only support by bmahabirbu in https://github.com/containers/ramalama/pull/628
* Added --jinja to llama-run command by engelmi in https://github.com/containers/ramalama/pull/625
* Update llama.cpp version by ericcurtin in https://github.com/containers/ramalama/pull/630
* Add shortname for deepseek by rhatdan in https://github.com/containers/ramalama/pull/631
* fixed rocm detection by adding gfx targets in containerfile by bmahabirbu in https://github.com/containers/ramalama/pull/632
* Point macOS users to script install by kubealex in https://github.com/containers/ramalama/pull/635
* Update docker.io/nvidia/cuda Docker tag to v12.8.0 by renovate in https://github.com/containers/ramalama/pull/633
* feat: add argument to define amd gpu targets by jobcespedes in https://github.com/containers/ramalama/pull/634
* Bump to v0.5.4 by rhatdan in https://github.com/containers/ramalama/pull/641
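
The deepseek shortname above (pull 631), like the llama2 alias added in 0.6.2, amounts to an alias table consulted before model resolution. The mappings below are illustrative only, not the project's actual shortnames file:

```python
# Alias table: a short name expands to a fully qualified model reference.
SHORTNAMES = {
    "deepseek": "ollama://deepseek-r1",
    "llama2": "ollama://llama2",
}

def expand(name: str) -> str:
    # Unknown names pass through unchanged.
    return SHORTNAMES.get(name, name)
```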

New Contributors
* kubealex made their first contribution in https://github.com/containers/ramalama/pull/627
* engelmi made their first contribution in https://github.com/containers/ramalama/pull/625
* jobcespedes made their first contribution in https://github.com/containers/ramalama/pull/634

**Full Changelog**: https://github.com/containers/ramalama/compare/v0.5.3...v0.5.4
