- Allow pipeline creation with static config files.
- Reduce the default resource monitoring frequency and allow monitoring to be disabled.
0.9.0
- Log stdout and stderr for each job in individual files.
- Rename the submit-jobs option `--num-processes` to `--num-parallel-processes-per-node`, preserving support for the old name.
- Store resource utilization stats in Parquet files (see the sketch after this list).
- Support the SLURM reservation option.
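A minimal sketch of post-processing the Parquet resource-utilization stats with pandas. The output directory layout and file names below are assumptions for illustration, not JADE's documented structure.

```python
# Load resource-utilization stats written as Parquet files and summarize them.
# Requires pandas plus a Parquet engine (pyarrow or fastparquet).
from pathlib import Path

import pandas as pd

output_dir = Path("output")  # hypothetical JADE runtime output directory

# Collect every Parquet file under the output directory and concatenate them.
stats_files = sorted(output_dir.rglob("*.parquet"))
frames = [pd.read_parquet(path) for path in stats_files]
stats = pd.concat(frames, ignore_index=True) if frames else pd.DataFrame()

print(stats.describe())
```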
0.8.2
- Add overall and per-node setup and teardown commands.
- Set environment variables for the runtime output directory as well as each job name (see the sketch after this list).
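A minimal sketch of a job script that reads these environment variables. The variable names used here are placeholders, not confirmed JADE names.

```python
# Read the environment variables JADE sets for the runtime output directory
# and the job name. JADE_RUNTIME_OUTPUT and JADE_JOB_NAME are hypothetical
# names chosen for this sketch.
import os
from pathlib import Path

output_dir = Path(os.environ.get("JADE_RUNTIME_OUTPUT", "."))  # hypothetical name
job_name = os.environ.get("JADE_JOB_NAME", "unknown-job")      # hypothetical name

# Write this job's results into its own subdirectory of the runtime output.
job_dir = output_dir / job_name
job_dir.mkdir(parents=True, exist_ok=True)
(job_dir / "result.txt").write_text(f"completed {job_name}\n")
```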
0.8.1
- Allow users to specify an alternative scratch directory for Spark.
0.8.0
- Add support for the NVIDIA RAPIDS Accelerator for Apache Spark.
- Include breaking changes to Spark commands; those commands now go through `jade spark`.