Caper

Latest version: v2.3.2

2.1.3

- Fixed `pip install` issues.
- Fixed the GCP instance creation shell script (upgraded Ubuntu 16.04 to 20.04).
- Fixed a `caper init` issue for SGE clusters.

2.1.2

Hotfix for the auto-troubleshooter (`stderr + '.background'` error).

2.1.1

Fixed an `sbatch` stalling issue on the `slurm` backend
- https://github.com/ENCODE-DCC/atac-seq-pipeline/issues/346

The troubleshooter can now show `stderr.background` (`stderr` from cluster engines).
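
A minimal usage sketch, assuming Caper's `caper troubleshoot` subcommand; the bracketed argument is a placeholder:

```bash
# Inspect a workflow; the troubleshooter prints stderr and, where available,
# stderr.background captured from the cluster engine.
$ caper troubleshoot [WORKFLOW_ID_OR_METADATA_JSON]
```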

2.1.0

Fixed all Singularity/Conda issues on HPC backends (`slurm`, `sge`, `pbs` and `lsf`)
- Fixes https://github.com/ENCODE-DCC/caper/issues/149
- The `docker` attribute in a WDL task's `runtime` is now safely ignored for HPC backends
- Fixed a Singularity directory binding issue (sharing soft-linked input/output files between tasks)

Creates a new Cromwell STDOUT file if one already exists
- For example, if `cromwell.out` exists in the CWD, then `cromwell.out.1` is created.
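
A minimal sketch of this rotation behavior, for illustration only (not Caper's actual implementation):

```bash
# Pick the first free name: cromwell.out, cromwell.out.1, cromwell.out.2, ...
out=cromwell.out
i=1
while [ -e "$out" ]; do
  out="cromwell.out.$i"
  i=$((i + 1))
done
echo "Cromwell STDOUT will be written to $out"
```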

2.0.0

Caper 2.0.0 is out.

There are no significant updates for the cloud-based backends (`gcp` and `aws`).
Most updates are for **HPC USERS**, fixing various annoying Singularity/Conda and cluster (SLURM, ...) issues.

For HPC users

HPC users must initialize Caper's configuration file. This adds new important parameters along with descriptions and instructions for setting them. Please make a copy of your original configuration file (`~/.caper/default.conf`) and then initialize it with `caper init`.
```bash
$ cp ~/.caper/default.conf ~/.caper/default.conf.bak
$ caper init [YOUR_BACKEND]  # local, slurm, sge, pbs, lsf. See README.md for details.
```


Follow the instruction comments in the configuration file. Most parameters will be the same as those in your original configuration file.


For Conda/Singularity users on HPC

**YOU DO NOT NEED TO ACTIVATE A CONDA ENVIRONMENT BEFORE RUNNING PIPELINES**. Just make sure that the pipeline's Conda environment is correctly installed, and add `--conda` to the command line: `caper run ... --conda`. Caper 2.0.0 runs each WDL task inside a Conda environment.

We strongly recommend using Singularity for the new ENCODE ATAC-seq and ChIP-seq pipelines (both >=v2.0.0). Use `caper run ... --singularity` if your cluster supports Singularity.
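
For example (a sketch; `pipeline.wdl` and `input.json` are placeholder names):

```bash
# Run each WDL task inside the pipeline's Conda environment:
$ caper run pipeline.wdl -i input.json --conda

# Or run each WDL task inside a Singularity container:
$ caper run pipeline.wdl -i input.json --singularity
```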

Added resource parameters for HPC

Added a resource parameter for each HPC backend so that users can customize resource parameters for the job submission command line (`sbatch`, `qsub`, ...) according to their cluster configuration. Its value is appended to the job submission command line, and WDL syntax is allowed in `${}` notation. Please find details in the configuration file after initialization and in the README. An illustrative example follows the list below.

- `slurm` backend: `slurm-resource-param`
- `sge` backend: `sge-resource-param`
- `pbs` backend: `pbs-resource-param`
- `lsf` backend: `lsf-resource-param`
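
A minimal sketch of such a parameter in `~/.caper/default.conf` for the `slurm` backend; the value shown is illustrative only, and `${cpu}`, `${memory_mb}` and `${time}` are assumed to refer to the WDL task's resources (see the comments in your initialized configuration file for the actual defaults):

```bash
# ~/.caper/default.conf (illustrative; adjust to your cluster's policy)
slurm-resource-param=-n 1 --ntasks-per-node=1 --cpus-per-task=${cpu} --mem=${memory_mb}M --time=${time}:00:00
```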

1.6.3

No code changes relative to v1.6.2; documentation updates only.

However, AWS users should reinstall AWS Batch with a new CloudFormation template built for Caper.
There were several issues with the old installation method (EBS autoscaling and bucket permission issues).

Please read [this instruction](https://github.com/ENCODE-DCC/caper/blob/master/scripts/aws_caper_server/README.md) carefully.
