Meadowrun

Latest version: v0.2.16


0.2.1

New features:
- Add a `wait_for_result` parameter to the `run_*` functions, allowing for fire-and-forget jobs (see the sketch after this list)
- Kubernetes: support mirror_local for code, ports, and resources
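
A minimal sketch of the fire-and-forget option. Passing `wait_for_result=False` is an assumption about the accepted value (check the documentation for the exact options); the rest of the call follows the `run_function` signature shown in the 0.2.0 notes below:

```python
import asyncio

import meadowrun


async def main():
    # Kick off the job without waiting for its result ("fire-and-forget").
    # wait_for_result=False is an assumption about the accepted value.
    await meadowrun.run_function(
        lambda: sum(range(1000)) / 1000,
        meadowrun.AllocCloudInstance(cloud_provider="EC2"),
        meadowrun.Resources(logical_cpu=1, memory_gb=4, max_eviction_rate=15),
        await meadowrun.Deployment.mirror_local(),
        wait_for_result=False,
    )


asyncio.run(main())
```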

Major improvements:
- Allow passing in `Deployment.mirror_local` as a coroutine rather than having to await it explicitly (see the example after this list)
- Improve `run_map` performance (more work happens asynchronously, results are always returned via S3 rather than SQS, which is faster, and only one SSH connection is opened per host)
- run_map no longer hangs forever if a worker exits unexpectedly
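
For the coroutine change, a sketch based on the `run_function` signature from the 0.2.0 notes below; the only difference from before is that `mirror_local()` is passed without an explicit `await`:

```python
import asyncio

import meadowrun


async def main():
    result = await meadowrun.run_function(
        lambda: sum(range(1000)) / 1000,
        meadowrun.AllocCloudInstance(cloud_provider="EC2"),
        meadowrun.Resources(logical_cpu=1, memory_gb=4, max_eviction_rate=15),
        # No explicit await needed anymore; the coroutine itself is accepted.
        meadowrun.Deployment.mirror_local(),
    )
    print(result)


asyncio.run(main())
```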

**Full Changelog**: https://github.com/meadowdata/meadowrun/compare/v0.2.0...v0.2.1

0.2.0

**Breaking API change:**

- The old `AllocCloudInstance` has been split into `Resources` (resource requirements) and a slimmed-down `AllocCloudInstance` (cloud provider):

Before:

```python
result = await run_function(
    lambda: sum(range(1000)) / 1000,
    AllocCloudInstance(
        logical_cpu_required=1,
        memory_gb_required=4,
        interruption_probability_threshold=15,
        cloud_provider="EC2"
    ),
    await Deployment.mirror_local()
)
```


After:

```python
result = await meadowrun.run_function(
    # this is where the function to run goes
    lambda: sum(range(1000)) / 1000,
    # run on a dynamically allocated AWS EC2 instance
    meadowrun.AllocCloudInstance(cloud_provider="EC2"),
    # requirements to choose an appropriate EC2 instance
    meadowrun.Resources(logical_cpu=1, memory_gb=4, max_eviction_rate=15),
    # mirror the local code and python environment
    await meadowrun.Deployment.mirror_local()
)
```

Equivalent changes apply to `run_command` and `run_map` as well. Also, `num_concurrent_tasks` has been moved out of `AllocCloudInstances` and is now a top-level parameter of `run_map`.
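
A hedged sketch of the new `run_map` call shape; the argument order (function, task arguments, then the same `AllocCloudInstance`/`Resources`/`Deployment` arguments as `run_function`) is an assumption, with `num_concurrent_tasks` now passed at the top level:

```python
import asyncio

import meadowrun


async def main():
    # num_concurrent_tasks is now passed directly to run_map rather than on
    # AllocCloudInstances. The argument order here mirrors run_function and is
    # an assumption, not taken verbatim from the release notes.
    results = await meadowrun.run_map(
        lambda x: x ** 2,
        [1, 2, 3, 4],
        meadowrun.AllocCloudInstance(cloud_provider="EC2"),
        meadowrun.Resources(logical_cpu=1, memory_gb=2, max_eviction_rate=15),
        await meadowrun.Deployment.mirror_local(),
        num_concurrent_tasks=4,
    )
    print(results)


asyncio.run(main())
```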

**New features:**
- Initial rudimentary support for Kubernetes
- Add support for sidecar containers (see the sketch after this list)
- `mirror_local` now allows specifying non .py files in the current working directory
- Add `meadowrun-manage-ec2 grant-permission-to-ecr-repo` and make identity-based access to ECR work for private containers
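
A hypothetical sketch of the sidecar container feature; the `sidecar_containers` parameter name and the `ContainerInterpreter("redis")` spec are assumptions based on the feature description, not taken from the release notes:

```python
import asyncio

import meadowrun


async def main():
    # Run a redis sidecar next to the main job. Both the parameter name and
    # the container spec below are assumptions; check the docs for the real API.
    result = await meadowrun.run_function(
        lambda: sum(range(1000)) / 1000,
        meadowrun.AllocCloudInstance(cloud_provider="EC2"),
        meadowrun.Resources(logical_cpu=1, memory_gb=4, max_eviction_rate=15),
        await meadowrun.Deployment.mirror_local(),
        sidecar_containers=[meadowrun.ContainerInterpreter("redis")],
    )
    print(result)


asyncio.run(main())
```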

**Improvements and bug fixes:**
- Allow large tasks/results in `run_map`; these will use S3 if we are above the SQS message limit
- Fix an issue where us-east-1 did not work in AWS
- Add support for specifying credentials on container images
- `grant-permission-to-s3-bucket` now grants `ListBucket` permissions as well
- `run_map`'s count of "running" tasks previously included completed tasks; now it only counts running tasks, as expected
- Handle more ways of specifying git dependencies in pip and poetry
- Fix `run_map` failure where multiple workers are trying to unzip to the same directory
- Make sure we are always in the current working directory on the remote machine (previously this would only happen if "" was first on `sys.path`)

**Full Changelog**: https://github.com/meadowdata/meadowrun/compare/v0.1.14...v0.2.0

0.1.14

New features:
- Added the ability to specify more requirements on EC2 instances, e.g. GPUs, GPU memory, AVX512, etc.
- Added support for git repo dependencies in pip requirements.txt and poetry project files.
- Added the ability to open an arbitrary port for a job
- Automatically set the working directory to be the remote equivalent of the current working directory so that relative paths mostly work as expected
- Add the ability to request arbitrary apt packages in addition to a pip/poetry/conda file
- Added a /meadowrun/machine_cache folder for containers on the same machine to share files
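
A small illustrative sketch of using the shared cache folder from job code; the `/meadowrun/machine_cache` path comes from the note above, while the file name and caching logic are hypothetical:

```python
import os
import pickle

# Shared across containers running on the same machine, per the note above.
MACHINE_CACHE = "/meadowrun/machine_cache"


def expensive_download():
    # Stand-in for whatever expensive work you want to share between jobs.
    return list(range(1_000_000))


def load_reference_data():
    # Hypothetical helper: cache an expensive download so that other jobs on
    # the same machine can reuse it instead of recomputing/re-downloading.
    cache_file = os.path.join(MACHINE_CACHE, "reference_data.pkl")
    if os.path.exists(cache_file):
        with open(cache_file, "rb") as f:
            return pickle.load(f)
    data = expensive_download()
    with open(cache_file, "wb") as f:
        pickle.dump(data, f)
    return data
```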

Improvements:
- SSH connections are much faster as a result of switching from fabric to asyncssh; this is most noticeable in `run_map`
- Change the behavior when instances can't be created because of a quota: previously we would just give up; now we try more expensive instances if they are available
- Stdout from the remote machine shows up on the local machine much more quickly
- Delete containers when we are done with them
- Deallocate jobs when the client is terminated. Also convert the deallocate_jobs.py cron job to a systemd unit so that it runs more frequently (every 30 seconds for now)
- Check for spot interruptions and prevent further allocations
- Set the idle timeout for automatically cleaning up machines to 5 minutes. Print out surviving machines on manual clean up.

Bug fixes:
- Fixes a bug where we did not take interruption probability into account when assigning jobs to existing instances
- Fixes a bug where the background deallocate_jobs.py process was not running correctly on Azure
- Fixes a bug where mirroring the current pip interpreter failed if pip was out of date

**Full Changelog**: https://github.com/meadowdata/meadowrun/compare/v0.1.13...v0.1.14

0.1.13

Improvements and bug fixes:
- Add better error messages for when you run into a spot instance quota
- Fix EC2 price caching; the cache was never being updated

0.1.12

Improvements:
- Make Python 3.7 work again
- Some improvements to startup time: cache EC2 prices, precompile pyc files

Note: the conda package could not be built; this will be fixed in a subsequent release

0.1.11

Breaking changes:
- Some AWS resources were renamed. Before upgrading, please run `meadowrun-manage-ec2 uninstall` with the old version, install the new package version, and then run `meadowrun-manage-ec2 install`

Major improvements:
- Add support for pip and poetry
- Azure: support local code upload
- Add support for specifying containers as interpreters

Minor bug fixes and improvements:
- Add a command to grant the AWS Meadowrun IAM role access to an S3 bucket
- Add the ability to respond correctly when AWS says an instance type is not available (rather than just giving up entirely)
- Azure: Register resource providers on install so that Meadowrun works with a brand new Azure account
