Caper 2.0.0 is out.
There are no significant updates for cloud-based backends (`gcp` and `aws`).
Most updates are for **HPC USERS** to fix various annoying Singularity/Conda and cluster (SLURM, ...) issues.
## For HPC users
HPC users must re-initialize Caper's configuration file, which adds new important parameters along with descriptions and instructions for them. Make a copy of your original configuration file (`~/.caper/default.conf`) and then initialize it with `caper init`.
```bash
$ cp ~/.caper/default.conf ~/.caper/default.conf.bak
$ caper init [YOUR_BACKEND]  # local, slurm, sge, pbs or lsf. See README.md for details.
```
Follow the instructions in the comments of the configuration file. Most parameters will be the same as those in your original configuration file.
## For Conda/Singularity users on HPC
**YOU DO NOT NEED TO ACTIVATE A CONDA ENVIRONMENT BEFORE RUNNING PIPELINES**. Just make sure that the pipeline's Conda environment is correctly installed, then add `--conda` to the command line: `caper run ... --conda`. Caper 2.0.0 runs each WDL task inside a Conda environment.
We strongly recommend using Singularity for the new ENCODE ATAC-seq and ChIP-seq pipelines (both >=v2.0.0): `caper run ... --singularity` if your cluster supports Singularity.
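As an illustration, a typical invocation might look like the following (the WDL and input JSON file names here are placeholders, not part of this release):

```bash
# Run each task inside the pipeline's Conda environment (no manual activation needed)
$ caper run atac.wdl -i input.json --conda

# Recommended on clusters with Singularity: run each task inside a container
$ caper run atac.wdl -i input.json --singularity
```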
## Added resource parameters for HPC
Added a resource parameter for each HPC backend so that users can customize the resource part of the job submission command line (`sbatch`, `qsub`, ...) according to their cluster configuration. Its value is appended to the job submission command line, and WDL expressions are allowed in `${}` notation. You can find details in the configuration file after initialization and in the README.
- `slurm` backend: `slurm-resource-param`
- `sge` backend: `sge-resource-param`
- `pbs` backend: `pbs-resource-param`
- `lsf` backend: `lsf-resource-param`
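For example, a SLURM resource parameter in `~/.caper/default.conf` might look like the following. This value is purely illustrative; the exact default written by `caper init` may differ, so check the generated comments in your configuration file:

```
# Illustrative example: ${cpu} and ${memory_mb} are WDL expressions
# evaluated per task and appended to the sbatch command line.
slurm-resource-param=-n 1 --ntasks-per-node=1 --cpus-per-task=${cpu} --mem=${memory_mb}M
```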