RamaLama

Latest version: v0.1.2


0.1.2

What's Changed
* Bump to v0.1.1 by rhatdan in https://github.com/containers/ramalama/pull/450
* Update ggerganov/whisper.cpp digest to f19463e by renovate in https://github.com/containers/ramalama/pull/453
* Switch to llama-simple-chat by ericcurtin in https://github.com/containers/ramalama/pull/454
* Simplify container image build by ericcurtin in https://github.com/containers/ramalama/pull/451
* Update ggerganov/whisper.cpp digest to 83ac284 by renovate in https://github.com/containers/ramalama/pull/455
* cli.py: remove errant slash preventing the loading of user conf file(s) by FNGarvin in https://github.com/containers/ramalama/pull/457
* Update ggerganov/whisper.cpp digest to f02b40b by renovate in https://github.com/containers/ramalama/pull/456
* Switched DGGML_CUDA to ON in cuda containerfile by bmahabirbu in https://github.com/containers/ramalama/pull/459
* Update ggerganov/whisper.cpp digest to bb12cd9 by renovate in https://github.com/containers/ramalama/pull/460
* Update ggerganov/whisper.cpp digest to 01d3bd7 by renovate in https://github.com/containers/ramalama/pull/461
* Update ggerganov/whisper.cpp digest to d24f981 by renovate in https://github.com/containers/ramalama/pull/462
* Docu by atarlov in https://github.com/containers/ramalama/pull/464
* Update ggerganov/whisper.cpp digest to 6266a9f by renovate in https://github.com/containers/ramalama/pull/466
* Fix handling of ramalama login huggingface by rhatdan in https://github.com/containers/ramalama/pull/467
* Support huggingface-cli older than 0.25.0, like on Fedora 40 and 41 by debarshiray in https://github.com/containers/ramalama/pull/468
* Bump to v0.1.2 by rhatdan in https://github.com/containers/ramalama/pull/470

New Contributors
* FNGarvin made their first contribution in https://github.com/containers/ramalama/pull/457
* atarlov made their first contribution in https://github.com/containers/ramalama/pull/464
* debarshiray made their first contribution in https://github.com/containers/ramalama/pull/468

**Full Changelog**: https://github.com/containers/ramalama/compare/v0.1.1...v0.1.2

0.1.1

**Full Changelog**: https://github.com/containers/ramalama/compare/v0.1.0...v0.1.1

Mainly to fix an issue on PyPI

0.1.0

What's Changed
* We can now run models via Kompute in podman-machine by ericcurtin in https://github.com/containers/ramalama/pull/440
* Only do dnf install for cuda images by ericcurtin in https://github.com/containers/ramalama/pull/441
* Add --host=0.0.0.0 if running llama.cpp serve within a container by rhatdan in https://github.com/containers/ramalama/pull/444
* Document the host flag in ramalama.conf file by rhatdan in https://github.com/containers/ramalama/pull/447
* Add granite-8b to shortnames.conf by rhatdan in https://github.com/containers/ramalama/pull/448
* Fix RamaLama container image build by ericcurtin in https://github.com/containers/ramalama/pull/446
* Bump to v0.1.0 by rhatdan in https://github.com/containers/ramalama/pull/449
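Two of the entries above (PRs 444 and 447) concern the bind address used by `ramalama serve`. As a rough illustration only, a drop-in configuration might look like the sketch below; the section and key names are assumptions inferred from the PR titles, not taken from the shipped man page:

```toml
# Hypothetical ramalama.conf sketch; key name assumed from PR 447.
[ramalama]
# Bind address for `ramalama serve`. When serving from inside a
# container, the CLI adds --host=0.0.0.0 itself so the server is
# reachable from outside the container (PR 444).
host = "0.0.0.0"
```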


**Full Changelog**: https://github.com/containers/ramalama/compare/v0.0.23...v0.1.0

0.0.23

What's Changed
* Remove omlmd as a dependency by ericcurtin in https://github.com/containers/ramalama/pull/428
* Check versions match in CI by ericcurtin in https://github.com/containers/ramalama/pull/427
* Fix podman run oci://... by rhatdan in https://github.com/containers/ramalama/pull/429
* Attempt to remove OCI Image if removing as Ollama or Huggingface fails by rhatdan in https://github.com/containers/ramalama/pull/432
* Run does not have generate, so remove it by rhatdan in https://github.com/containers/ramalama/pull/434
* Run the command by default without stderr by rhatdan in https://github.com/containers/ramalama/pull/436
* Closing stderr on podman command is blocking progress information and… by rhatdan in https://github.com/containers/ramalama/pull/438
* Make it easier to test-run manually by rhatdan in https://github.com/containers/ramalama/pull/435
* Install llama-cpp-python[server] by ericcurtin in https://github.com/containers/ramalama/pull/430


**Full Changelog**: https://github.com/containers/ramalama/compare/v0.0.22...v0.0.23

0.0.22

What's Changed
* Bump to v0.0.21 by rhatdan in https://github.com/containers/ramalama/pull/410
* Update ggerganov/whisper.cpp digest to 0377596 by renovate in https://github.com/containers/ramalama/pull/409
* Use subpath for OCI Models by rhatdan in https://github.com/containers/ramalama/pull/411
* Consistency changes by ericcurtin in https://github.com/containers/ramalama/pull/408
* Split out kube.py from model.py by rhatdan in https://github.com/containers/ramalama/pull/412
* Fix mounting of Ollama AI Images into containers. by rhatdan in https://github.com/containers/ramalama/pull/414
* Start an Asahi version by ericcurtin in https://github.com/containers/ramalama/pull/369
* Generate MODEL.yaml file locally rather than just to stdout by rhatdan in https://github.com/containers/ramalama/pull/416
* Bugfix comma by ericcurtin in https://github.com/containers/ramalama/pull/421
* Fix nocontainer mode by rhatdan in https://github.com/containers/ramalama/pull/419
* Update ggerganov/whisper.cpp digest to 31aea56 by renovate in https://github.com/containers/ramalama/pull/425
* Add --generate quadlet/kube to create quadlet and kube.yaml by rhatdan in https://github.com/containers/ramalama/pull/423
* Allow default port to be specified in ramalama.conf file by rhatdan in https://github.com/containers/ramalama/pull/424
* Made run and serve consistent with model exec path. Fixes issue 413 by bmahabirbu in https://github.com/containers/ramalama/pull/426
* Bump to v0.0.22 by rhatdan in https://github.com/containers/ramalama/pull/415
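Among the entries above, PR 424 makes the default serve port configurable and PR 423 adds `--generate quadlet/kube`. A minimal sketch of the port setting, assuming the key is spelled `port` as the PR title suggests:

```toml
# Hypothetical ramalama.conf fragment; key name assumed from PR 424.
[ramalama]
# Default port for `ramalama serve` when --port is not given.
port = "8080"
```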


**Full Changelog**: https://github.com/containers/ramalama/compare/v0.0.21...v0.0.22

0.0.21

What's Changed
* Fix rpm build by rhatdan in https://github.com/containers/ramalama/pull/350
* Add environment variables for checksums to ramalama container by rhatdan in https://github.com/containers/ramalama/pull/355
* Change default container name for ROCm container image by ericcurtin in https://github.com/containers/ramalama/pull/360
* Allow removal of models specified as shortnames by rhatdan in https://github.com/containers/ramalama/pull/357
* Added a check to the zsh completions generation step by ericcurtin in https://github.com/containers/ramalama/pull/356
* Add vulkan image and show size by ericcurtin in https://github.com/containers/ramalama/pull/353
* Update ggerganov/whisper.cpp digest to 0fbaac9 by renovate in https://github.com/containers/ramalama/pull/363
* Allow pushing of oci images by rhatdan in https://github.com/containers/ramalama/pull/358
* Fix Makefile to be less stringent on failures of zsh by smooge in https://github.com/containers/ramalama/pull/368
* Add support for --authfile and --tls-verify for login by rhatdan in https://github.com/containers/ramalama/pull/364
* Fix incompatible Ollama paths by swarajpande5 in https://github.com/containers/ramalama/pull/370
* Fix shortname paths by swarajpande5 in https://github.com/containers/ramalama/pull/372
* Change to None instead of "" by ericcurtin in https://github.com/containers/ramalama/pull/371
* Kompute build is warning it is missing this package by ericcurtin in https://github.com/containers/ramalama/pull/366
* Add --debug option to show exec_cmd and run_cmd commands by rhatdan in https://github.com/containers/ramalama/pull/373
* Add support for pushing a file into an OCI Model image by rhatdan in https://github.com/containers/ramalama/pull/374
* Replace `huggingface-cli download` command with simple https client to pull models by swarajpande5 in https://github.com/containers/ramalama/pull/375
* Update registry.access.redhat.com/ubi9/ubi Docker tag to v9.4-1214.1729773476 by renovate in https://github.com/containers/ramalama/pull/380
* Update ggerganov/whisper.cpp digest to c0ea41f by renovate in https://github.com/containers/ramalama/pull/381
* Update ggerganov/whisper.cpp digest to fc49ee4 by renovate in https://github.com/containers/ramalama/pull/382
* Update dependency huggingface/huggingface_hub to v0.26.2 by renovate in https://github.com/containers/ramalama/pull/383
* Update dependency tqdm/tqdm to v4.66.6 - autoclosed by renovate in https://github.com/containers/ramalama/pull/385
* Update ggerganov/whisper.cpp digest to 1626b73 by renovate in https://github.com/containers/ramalama/pull/386
* Support listing and removing newly designed bundled images by rhatdan in https://github.com/containers/ramalama/pull/378
* Fix default conman check by rhatdan in https://github.com/containers/ramalama/pull/389
* Drop in config by ericcurtin in https://github.com/containers/ramalama/pull/379
* Update ggerganov/whisper.cpp digest to 55e4221 by renovate in https://github.com/containers/ramalama/pull/390
* Move run_container to model.py allowing models types to override by rhatdan in https://github.com/containers/ramalama/pull/388
* Update ggerganov/whisper.cpp digest to 19dca2b by renovate in https://github.com/containers/ramalama/pull/392
* Add man page information for ramalama.conf by rhatdan in https://github.com/containers/ramalama/pull/391
* More debug info by ericcurtin in https://github.com/containers/ramalama/pull/394
* Make transport use config by rhatdan in https://github.com/containers/ramalama/pull/395
* Enable containers on macOS to use the GPU by slp in https://github.com/containers/ramalama/pull/397
* chore(deps): update ggerganov/whisper.cpp digest to 4e10afb by renovate in https://github.com/containers/ramalama/pull/398
* Time for removal of huggingface_hub dependency by ericcurtin in https://github.com/containers/ramalama/pull/400
* Mount model.car volumes into container by rhatdan in https://github.com/containers/ramalama/pull/396
* Remove huggingface-hub references from spec file by ericcurtin in https://github.com/containers/ramalama/pull/401
* Packit: disable osh diff scan by lsm5 in https://github.com/containers/ramalama/pull/403
* Make minimal change to allow for ramalama to build on EL9 by smooge in https://github.com/containers/ramalama/pull/404
* reduced the size of the nvidia containerfile by bmahabirbu in https://github.com/containers/ramalama/pull/407
* Move /run/model to /mnt/models to match k8s model.car definition by rhatdan in https://github.com/containers/ramalama/pull/402
* Verify pyproject.py and setup.py have same version by rhatdan in https://github.com/containers/ramalama/pull/405
* Make quadlets work with OCI images by rhatdan in https://github.com/containers/ramalama/pull/406
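Several entries above introduce configuration support: PR 379 ("Drop in config"), PR 395 ("Make transport use config"), and PR 391 (man page information for ramalama.conf). As an illustrative sketch only, with the section and key names assumed from the PR titles rather than verified against the man page:

```toml
# Hypothetical ramalama.conf drop-in sketch; names assumed, not verified.
[ramalama]
# Default transport (e.g. ollama, huggingface, oci) applied when a
# model reference carries no explicit scheme (PR 395).
transport = "ollama"
```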


**Full Changelog**: https://github.com/containers/ramalama/compare/v0.0.20...v0.0.21
