RamaLama

Latest version: v0.7.2


0.6.1

What's Changed
* chore: use absolute link for the RamaLama logo by benoitf in https://github.com/containers/ramalama/pull/781
* Reuse Ollama cached image when available by kush-gupt in https://github.com/containers/ramalama/pull/782
* Add env var RAMALAMA_GPU_DEVICE to allow for explicit declaration of the GPU device to use by cgruver in https://github.com/containers/ramalama/pull/773
* Change RAMALAMA_GPU_DEVICE to RAMALAMA_DEVICE for AI accelerator device override by cgruver in https://github.com/containers/ramalama/pull/786
* Add Security information to README.md by rhatdan in https://github.com/containers/ramalama/pull/787
* Fix exiting on llama-serve when user hits ^c by rhatdan in https://github.com/containers/ramalama/pull/785
* Check if file exists before sorting them into a list by kush-gupt in https://github.com/containers/ramalama/pull/784
* Add ramalama run --keepalive option by rhatdan in https://github.com/containers/ramalama/pull/789
* Stash output from container_manager by rhatdan in https://github.com/containers/ramalama/pull/790
* Install llama.cpp for mac and nocontainer tests by rhatdan in https://github.com/containers/ramalama/pull/792
* _engine is set to None or has a value by ericcurtin in https://github.com/containers/ramalama/pull/793
* Only run dnf commands on platforms that have dnf by ericcurtin in https://github.com/containers/ramalama/pull/794
* Add ramalama rag command by rhatdan in https://github.com/containers/ramalama/pull/501
* Attempt to use build_llama_and_whisper.sh by rhatdan in https://github.com/containers/ramalama/pull/795
* Change --network-mode to --network by ericcurtin in https://github.com/containers/ramalama/pull/800
* Add some more gfx values to the default list by ericcurtin in https://github.com/containers/ramalama/pull/806
* Update registry.access.redhat.com/ubi9/ubi Docker tag to v9.5-1739449058 by renovate in https://github.com/containers/ramalama/pull/808
* Prepare containers to run with ai-lab-recipes by rhatdan in https://github.com/containers/ramalama/pull/803
* If ngl is not specified by ericcurtin in https://github.com/containers/ramalama/pull/802
* feat: add ramalama labels about the execution on top of container by benoitf in https://github.com/containers/ramalama/pull/810
* Add run and serve arguments for --device and --privileged by cgruver in https://github.com/containers/ramalama/pull/809
* chore: rewrite readarray function to make it portable by benoitf in https://github.com/containers/ramalama/pull/815
* chore: replace RAMALAMA label by ai.ramalama by benoitf in https://github.com/containers/ramalama/pull/814
* Upgrade from 6.3.1 to 6.3.2 by ericcurtin in https://github.com/containers/ramalama/pull/816
* Removed error wrapping in urlopen by engelmi in https://github.com/containers/ramalama/pull/818
* Encountered a bug where this function was returning -1 by ericcurtin in https://github.com/containers/ramalama/pull/817
* Align runtime arguments with run, serve, bench, and perplexity by cgruver in https://github.com/containers/ramalama/pull/820
* README: fix inspect command description by kush-gupt in https://github.com/containers/ramalama/pull/826
* Pin dev dependencies to major version and improve formatting + linting by engelmi in https://github.com/containers/ramalama/pull/824
* README: Fix typo by bupd in https://github.com/containers/ramalama/pull/827
* Switch apt-get to apt by ericcurtin in https://github.com/containers/ramalama/pull/832
* Update registry.access.redhat.com/ubi9/ubi Docker tag to v9.5-1739751568 by renovate in https://github.com/containers/ramalama/pull/834
* Add entrypoint container images by rhatdan in https://github.com/containers/ramalama/pull/819
* HuggingFace Cache Implementation by kush-gupt in https://github.com/containers/ramalama/pull/833
* Make serve by default expose network by ericcurtin in https://github.com/containers/ramalama/pull/830
* Fix up man page help verification by rhatdan in https://github.com/containers/ramalama/pull/835
* Fix handling of --privileged flag by rhatdan in https://github.com/containers/ramalama/pull/821
* chore: fix links of llama.cpp repository by benoitf in https://github.com/containers/ramalama/pull/841
* Unify CLI options (verbosity, version) by mkesper in https://github.com/containers/ramalama/pull/685
* Add system tests to pull from the Hugging Face cache by kush-gupt in https://github.com/containers/ramalama/pull/846
* Just one add_argument call for --dryrun/--dry-run by ericcurtin in https://github.com/containers/ramalama/pull/847
* Fix ramalama info to display NVIDIA and AMD GPU information by rhatdan in https://github.com/containers/ramalama/pull/848
* Remove LICENSE header from gpu_detector.py by ericcurtin in https://github.com/containers/ramalama/pull/850
* Allowing modification of pull policy by rhatdan in https://github.com/containers/ramalama/pull/843
* Include instructions for installing on Fedora 42+ by stefwalter in https://github.com/containers/ramalama/pull/849
* Bump to 0.6.1 by rhatdan in https://github.com/containers/ramalama/pull/851
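Several of the changes above add or rename user-facing CLI options. The sketch below is illustrative only, assembled from the PR titles; model names, the device path, and the keepalive duration are placeholders, and exact syntax may differ by version:

```shell
# Pin the AI accelerator device explicitly (RAMALAMA_GPU_DEVICE was renamed
# to RAMALAMA_DEVICE in #786); the device path here is an example value
export RAMALAMA_DEVICE=/dev/dri/renderD128

# Keep the container alive after the session ends (--keepalive, #789)
ramalama run --keepalive 10m tinyllama

# serve exposes the network by default now (#830);
# --network replaces the older --network-mode spelling (#800)
ramalama serve tinyllama

# Build a RAG-augmented image from local documents (#501);
# the document path and image name are placeholders
ramalama rag ./docs/ quay.io/example/mymodel-rag
```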

New Contributors
* benoitf made their first contribution in https://github.com/containers/ramalama/pull/781
* bupd made their first contribution in https://github.com/containers/ramalama/pull/827
* mkesper made their first contribution in https://github.com/containers/ramalama/pull/685
* stefwalter made their first contribution in https://github.com/containers/ramalama/pull/849

**Full Changelog**: https://github.com/containers/ramalama/compare/v0.6.0...v0.6.1

0.6.0

What's Changed
* fix error on macOS for M1 pro by volker48 in https://github.com/containers/ramalama/pull/687
* This should be a global variable by ericcurtin in https://github.com/containers/ramalama/pull/703
* Update registry.access.redhat.com/ubi9/ubi Docker tag to v9.5-1736404036 by renovate in https://github.com/containers/ramalama/pull/702
* Update install.sh to include "gpu_detector.py" by graystevens in https://github.com/containers/ramalama/pull/704
* add --ngl to specify the number of gpu layers, and --keep-groups so podman has access to gpu by khumarahn in https://github.com/containers/ramalama/pull/659
* We are displaying display driver info, scope creep by ericcurtin in https://github.com/containers/ramalama/pull/710
* Use CODEOWNERS file for autoassign by dougsland in https://github.com/containers/ramalama/pull/706
* common: general improvements by dougsland in https://github.com/containers/ramalama/pull/713
* Fix macOS emoji compatibility with Alacritty by ericcurtin in https://github.com/containers/ramalama/pull/716
* Makelint by dougsland in https://github.com/containers/ramalama/pull/715
* Adding slp, engelmi, also by ericcurtin in https://github.com/containers/ramalama/pull/711
* Report error when huggingface-cli is not available by rhatdan in https://github.com/containers/ramalama/pull/719
* Add --network-mode option by rhjostone in https://github.com/containers/ramalama/pull/674
* README: add convert to commands list by kush-gupt in https://github.com/containers/ramalama/pull/723
* Revert "Add --network-mode option" by ericcurtin in https://github.com/containers/ramalama/pull/731
* Check for apple,arm-platform in /proc by ericcurtin in https://github.com/containers/ramalama/pull/730
* Packit: downstream jobs for EPEL 9,10 by lsm5 in https://github.com/containers/ramalama/pull/728
* Add logic to build intel-gpu image to build_llama_and_whisper.sh by cgruver in https://github.com/containers/ramalama/pull/724
* Add --network-mode option by rhatdan in https://github.com/containers/ramalama/pull/734
* Honor RAMALAMA_IMAGE if set by rhatdan in https://github.com/containers/ramalama/pull/733
* ramalama container: Make it possible to build basic container on all RHEL architectures by jcajka in https://github.com/containers/ramalama/pull/722
* Add docs for using podman farm to build multi-arch images by cgruver in https://github.com/containers/ramalama/pull/735
* Update registry.access.redhat.com/ubi9/ubi Docker tag to v9.5-1738643550 by renovate in https://github.com/containers/ramalama/pull/729
* modify container_build.sh to add capability to use podman farm for multi-arch images by cgruver in https://github.com/containers/ramalama/pull/736
* There's a comma in the list of files in install.sh by ericcurtin in https://github.com/containers/ramalama/pull/739
* Make the default of ngl be -1 by ericcurtin in https://github.com/containers/ramalama/pull/707
* github actions: ramalama install by dougsland in https://github.com/containers/ramalama/pull/738
* [skip-ci] Update actions/checkout action to v4 by renovate in https://github.com/containers/ramalama/pull/740
* On macOS this was returning an incorrect path by ericcurtin in https://github.com/containers/ramalama/pull/741
* Begin process of packaging PRAGmatic by rhatdan in https://github.com/containers/ramalama/pull/597
* Allow users to build RAG versus Docling images by rhatdan in https://github.com/containers/ramalama/pull/744
* Update vLLM containers by ericcurtin in https://github.com/containers/ramalama/pull/746
* Update README.md by bmbouter in https://github.com/containers/ramalama/pull/748
* Update progress bar only once every 100ms by ericcurtin in https://github.com/containers/ramalama/pull/717
* Remove reference to non-existent docs in CONTRIBUTING.md by cgruver in https://github.com/containers/ramalama/pull/761
* Check if krunkit process is running with --all-providers by ericcurtin in https://github.com/containers/ramalama/pull/763
* update_progress only takes one parameter by ericcurtin in https://github.com/containers/ramalama/pull/764
* Detect Intel ARC GPU in Meteor Lake chipset by cgruver in https://github.com/containers/ramalama/pull/749
* Drop all capabilities and run with no-new-privileges by rhatdan in https://github.com/containers/ramalama/pull/765
* Progress bar fixes by ericcurtin in https://github.com/containers/ramalama/pull/767
* typo: Add quotes to intel-gpu argument in build llama and whisper script by hanthor in https://github.com/containers/ramalama/pull/766
* chore(deps): update registry.access.redhat.com/ubi9/ubi docker tag to v9.5-1738814488 by renovate in https://github.com/containers/ramalama/pull/771
* There would be one case where this wouldn't work by ericcurtin in https://github.com/containers/ramalama/pull/768
* docs: update ramalama.1.md by eltociear in https://github.com/containers/ramalama/pull/775
* Add community documents by rhatdan in https://github.com/containers/ramalama/pull/777
* Parse https://ollama.com/library/ syntax by ericcurtin in https://github.com/containers/ramalama/pull/648
* Use containers CODE-OF-CONDUCT.md by rhatdan in https://github.com/containers/ramalama/pull/778
* Add model inspect cli by engelmi in https://github.com/containers/ramalama/pull/776
* Cleanup READMEs and man pages. by rhatdan in https://github.com/containers/ramalama/pull/780
* Bump to v0.6.0 by rhatdan in https://github.com/containers/ramalama/pull/779
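A few of the 0.6.0 entries are directly usable from the command line. The following is a hedged sketch based only on the PR titles; the image name and model references are placeholders:

```shell
# Point RamaLama at a custom inference image (RAMALAMA_IMAGE is honored, #733);
# the image reference is an example value
export RAMALAMA_IMAGE=quay.io/example/custom-llama:latest

# Pull a model using the ollama.com/library/ URL syntax (#648)
ramalama pull https://ollama.com/library/tinyllama

# Inspect model metadata with the new inspect command (#776)
ramalama inspect tinyllama

# --ngl sets the number of GPU layers to offload; -1 became the default (#707)
ramalama run --ngl -1 tinyllama
```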

New Contributors
* volker48 made their first contribution in https://github.com/containers/ramalama/pull/687
* graystevens made their first contribution in https://github.com/containers/ramalama/pull/704
* khumarahn made their first contribution in https://github.com/containers/ramalama/pull/659
* rhjostone made their first contribution in https://github.com/containers/ramalama/pull/674
* jcajka made their first contribution in https://github.com/containers/ramalama/pull/722
* bmbouter made their first contribution in https://github.com/containers/ramalama/pull/748
* hanthor made their first contribution in https://github.com/containers/ramalama/pull/766
* eltociear made their first contribution in https://github.com/containers/ramalama/pull/775

**Full Changelog**: https://github.com/containers/ramalama/compare/v0.5.5...v0.6.0

0.5.5

What's Changed
* Add perplexity subcommand to RamaLama CLI by ericcurtin in https://github.com/containers/ramalama/pull/637
* Throw an exception when there is a failure in http_client.init by jhjaggars in https://github.com/containers/ramalama/pull/647
* Add container image to support Intel ARC GPU by cgruver in https://github.com/containers/ramalama/pull/644
* Guide users to install huggingface-cli to login to huggingface by pbabinca in https://github.com/containers/ramalama/pull/645
* Update intel-gpu Containerfile to reduce the size of the builder image by cgruver in https://github.com/containers/ramalama/pull/657
* Look for configs also in /usr/local/share/ramalama by jistr in https://github.com/containers/ramalama/pull/672
* remove ro as an option when mounting images by kush-gupt in https://github.com/containers/ramalama/pull/676
* Add generated man pages for section 7 into gitignore by jistr in https://github.com/containers/ramalama/pull/673
* Revert "Added --jinja to llama-run command" by ericcurtin in https://github.com/containers/ramalama/pull/683
* Pull the source model if it isn't already in local storage for the convert and push functions by kush-gupt in https://github.com/containers/ramalama/pull/680
* bump llama.cpp to latest release hash aa6fb13 by maxamillion in https://github.com/containers/ramalama/pull/692
* Introduce a mode so one can install from git by ericcurtin in https://github.com/containers/ramalama/pull/690
* Add ramalama gpu_detector by dougsland in https://github.com/containers/ramalama/pull/670
* Bump to v0.5.5 by rhatdan in https://github.com/containers/ramalama/pull/701
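The headline addition in 0.5.5 is the perplexity subcommand (#637). A minimal usage sketch, assuming a local model is already pulled; the model name is a placeholder:

```shell
# Evaluate model perplexity from the CLI (#637)
ramalama perplexity tinyllama
```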

New Contributors
* cgruver made their first contribution in https://github.com/containers/ramalama/pull/644
* pbabinca made their first contribution in https://github.com/containers/ramalama/pull/645
* jistr made their first contribution in https://github.com/containers/ramalama/pull/672
* kush-gupt made their first contribution in https://github.com/containers/ramalama/pull/676
* maxamillion made their first contribution in https://github.com/containers/ramalama/pull/692
* dougsland made their first contribution in https://github.com/containers/ramalama/pull/670

**Full Changelog**: https://github.com/containers/ramalama/compare/v0.5.4...v0.5.5

0.5.4

What's Changed
* Attempt to install podman by ericcurtin in https://github.com/containers/ramalama/pull/621
* Introduce ramalama bench by ericcurtin in https://github.com/containers/ramalama/pull/620
* Add man page for cuda support by rhatdan in https://github.com/containers/ramalama/pull/623
* Less verbose output by ericcurtin in https://github.com/containers/ramalama/pull/624
* Avoid dnf install on OSTree system by ericcurtin in https://github.com/containers/ramalama/pull/622
* Fix list in README - Credits section by kubealex in https://github.com/containers/ramalama/pull/627
* added mac cpu only support by bmahabirbu in https://github.com/containers/ramalama/pull/628
* Added --jinja to llama-run command by engelmi in https://github.com/containers/ramalama/pull/625
* Update llama.cpp version by ericcurtin in https://github.com/containers/ramalama/pull/630
* Add shortname for deepseek by rhatdan in https://github.com/containers/ramalama/pull/631
* fixed rocm detection by adding gfx targets in containerfile by bmahabirbu in https://github.com/containers/ramalama/pull/632
* Point macOS users to script install by kubealex in https://github.com/containers/ramalama/pull/635
* Update docker.io/nvidia/cuda Docker tag to v12.8.0 by renovate in https://github.com/containers/ramalama/pull/633
* feat: add argument to define amd gpu targets by jobcespedes in https://github.com/containers/ramalama/pull/634
* Bump to v0.5.4 by rhatdan in https://github.com/containers/ramalama/pull/641
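0.5.4 introduced benchmarking and a DeepSeek shortname. A hedged sketch; the exact shortname alias and model names are illustrative, taken only from the PR titles:

```shell
# Benchmark a model with the new bench subcommand (#620)
ramalama bench tinyllama

# Run DeepSeek via its new shortname (#631); the alias shown is illustrative
ramalama run deepseek
```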

New Contributors
* kubealex made their first contribution in https://github.com/containers/ramalama/pull/627
* engelmi made their first contribution in https://github.com/containers/ramalama/pull/625
* jobcespedes made their first contribution in https://github.com/containers/ramalama/pull/634

**Full Changelog**: https://github.com/containers/ramalama/compare/v0.5.3...v0.5.4

0.5.3

What's Changed
* We no longer have python dependencies by ericcurtin in https://github.com/containers/ramalama/pull/588
* container_build.sh works on MAC by rhatdan in https://github.com/containers/ramalama/pull/590
* Added vllm cuda support by bmahabirbu in https://github.com/containers/ramalama/pull/582
* Remove omlmd from OCI calls by rhatdan in https://github.com/containers/ramalama/pull/591
* Build with curl support by pepijndevos in https://github.com/containers/ramalama/pull/595
* Add model transport info to ramalama run/serve manpage by rhatdan in https://github.com/containers/ramalama/pull/593
* Various README.md updates by ericcurtin in https://github.com/containers/ramalama/pull/600
* Fix ROCm crash by adding a proper type cast for the env var by bmahabirbu in https://github.com/containers/ramalama/pull/602
* ROCm build broken by ericcurtin in https://github.com/containers/ramalama/pull/605
* Cleaner output if a machine executes this command by ericcurtin in https://github.com/containers/ramalama/pull/604
* Update to version that has command history by ericcurtin in https://github.com/containers/ramalama/pull/603
* Remove these lines they are unused by ericcurtin in https://github.com/containers/ramalama/pull/606
* Had to make this change for my laptop to support nvidia by rhatdan in https://github.com/containers/ramalama/pull/609
* Start making vllm work with RamaLama by rhatdan in https://github.com/containers/ramalama/pull/610
* Treat hf.co/ prefix the same as hf:// by ericcurtin in https://github.com/containers/ramalama/pull/612
* We need the rocm libraries in here by ericcurtin in https://github.com/containers/ramalama/pull/613
* A couple of cleanups in build_llama_and_whisper.sh by rhatdan in https://github.com/containers/ramalama/pull/615
* Bump to v0.5.3 by rhatdan in https://github.com/containers/ramalama/pull/614
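One user-visible change in 0.5.3 is that the hf.co/ prefix is treated the same as hf:// (#612). A sketch; the repository path is a placeholder, not a real model:

```shell
# These two model references are now equivalent (#612)
ramalama pull hf.co/example-org/example-model-gguf
ramalama pull hf://example-org/example-model-gguf
```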

New Contributors
* pepijndevos made their first contribution in https://github.com/containers/ramalama/pull/595

**Full Changelog**: https://github.com/containers/ramalama/compare/v0.5.2...v0.5.3

0.5.2

What's Changed
* This is all dead code which isn't called by ericcurtin in https://github.com/containers/ramalama/pull/574
* On ARM by default turn on GPU acceleration by ericcurtin in https://github.com/containers/ramalama/pull/573
* Capitalize constants in python files (CONSTANT_CASE) by swarajpande5 in https://github.com/containers/ramalama/pull/579
* Add flake by jim3692 in https://github.com/containers/ramalama/pull/581
* Update llama.cpp to include minor llama-run by ericcurtin in https://github.com/containers/ramalama/pull/580
* Simplify this comparison by ericcurtin in https://github.com/containers/ramalama/pull/576
* Fix ramalama run on docker to work correctly by rhatdan in https://github.com/containers/ramalama/pull/583
* granite-code models in Ollama are malformed by ericcurtin in https://github.com/containers/ramalama/pull/584
* Bump to v0.5.2 by rhatdan in https://github.com/containers/ramalama/pull/585

New Contributors
* jim3692 made their first contribution in https://github.com/containers/ramalama/pull/581

**Full Changelog**: https://github.com/containers/ramalama/compare/v0.5.1...v0.5.2
