radical.pilot

Latest version: v1.90.0


1.33.0

- add a resource definition for Rivanna at UVa
- add documentation for missing properties
- add an exception for RAPTOR workers regarding GPU sharing
- add an exception in case GPU sharing is used in SRun or MPIRun LMs
- add configuration discovery for `gpus_per_node` (Slurm)
- add `PMI_ID` env variable (related to Hydra)
- add rank env variable for MPIExec LM
- add resource config for Frontier (OLCF)
- add service task description verification
- add interactive config to UVA
- add raptor tasks to the API doc
- add rank documentation
- allow access to full node memory by default
- changed type of `task['resources']`, letting RADICAL-Analytics
  handle it
- changed type of `gpus_per_rank` attribute in `TaskDescription` from
  `int` to `float` (see the sketch after this list)
- enforce correct task mode for raptor master/workers
- ensure result_cb for executable tasks
- ensure `session._get_task_sandbox` for raptor tasks
- ensure that `wait_workers` raises RuntimeError during stop
- ensure worker termination on raptor shutdown
- fix CUDA env variable(s) setup for `pre_exec` (in POPEN executor)
- fix `gpu_map` in Scheduler and its usage
- fix ranks calculation
- fix slots estimation process
- fix tasks binding (e.g., bind task to a certain number of cores)
- fix requesting the correct number of cores/GPUs in case of blocked
  cores/GPUs
- fix task sandbox path
- fix wait_workers
- use Google-style docstrings
- use resource-description parameter `new_session_per_task` to
  control the `start_new_session` parameter of `subprocess.Popen`
- keep virtualenv as fallback if venv is missing
- let SRun LM get GPU info from configured slots
- make slot dumps dependent on debug level
- master RPC handles stop request
- move from custom virtualenv version to `venv` module
- MPI worker sync
- read resources from the created task description
- reconcile different worker submission paths
- recover `bootstrap_0_stop` event
- recover task description dump for raptor
- removed codecov from test requirements (codecov is handled by
  GitHub Actions)
- removed `gpus_per_node` - let SAGA handle GPUs
- removed obsolete configs (FUNCS leftover)
- re-order worker initialization steps, time out on registration
- support sandboxes for raptor tasks
- sync JSRun LM options according to defined slots
- update JSRun LM according to GPU sharing
- update slots estimation and `core/gpu_map` creation
- worker state update cb
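
The `gpus_per_rank` type change above makes fractional GPU assignments expressible. A minimal sketch, assuming the current `TaskDescription` API; the executable name and values are illustrative, not from this release:

```python
import radical.pilot as rp

# illustrative: two ranks sharing one GPU via the now float-typed
# `gpus_per_rank` attribute (executable and values are assumptions)
td = rp.TaskDescription()
td.executable     = './my_gpu_app'   # hypothetical application
td.ranks          = 2
td.cores_per_rank = 1
td.gpus_per_rank  = 0.5              # each rank gets half a GPU
```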


Past

Use past releases to reproduce earlier experiments.

--------------------------------------------------------------------------------

1.21.0

- add worker rank heartbeats to raptor
- ensure descr defaults for raptor worker submission
- move `blocked_cores/gpus` under `system_architecture` in resource
  config (see the sketch after this list)
- fix `blocked_cores/gpus` parameters in configs for ACCESS and ORNL
resources
- fix core-option in JSRun LM
- fix inconsistency in launch order when some LMs fail to be created
- fix thread-safety of PilotManager staging operations.
- add ANL's polaris and polaris_interactive support
- refactor raptor dispatchers to worker base class
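
A sketch of the resource-config layout implied by the `blocked_cores/gpus` entry above, written as a Python dict; the surrounding keys exist per this entry, but the index values are illustrative assumptions:

```python
# illustrative resource-config fragment: `blocked_cores`/`blocked_gpus`
# now live under `system_architecture` (index values are assumptions)
resource_cfg = {
    'system_architecture': {
        'blocked_cores': [0, 1],   # core indices RP must not schedule
        'blocked_gpus' : [0],      # GPU indices RP must not schedule
    },
}
```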


--------------------------------------------------------------------------------

1.20.1

- fix task cancellation call
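
For context, a minimal sketch of the cancellation call this fix concerns, assuming an existing `rp.TaskManager` instance `tmgr` and a task list `tasks` returned by `tmgr.submit_tasks(...)`:

```python
# illustrative: cancel submitted tasks by UID; `tmgr` and `tasks` are
# assumed to come from an existing radical.pilot workflow
uids = [task.uid for task in tasks]
tmgr.cancel_tasks(uids)
```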


--------------------------------------------------------------------------------

1.20.0

- add interactive Amarel cfg
- add docstring for run_task, remove sort
- add option `-r` (number of RS per node) in case of GPU tasks
- add `TaskDescription` attribute `pre_exec_sync` (see the sketch
  after this list)
- add test for `Master.wait`
- add test for task cancellation
- add test for TMGR StagingIn
- add comment for config addition (fixes #2089)
- add TASK_BULK_MKDIR_THRESHOLD as configurable (fixes #2089)
- agent does not need to pull failed tasks
- bump python test env to 3.7
- cleanup error reporting
- document attributes as `attr`, not `data`.
- extended tests for RM PBSPro
- fix `allocated_cores/gpus` in PMGR Launching
- fix commands per rank (either a single string command or list of
commands)
- fix JSRun test
- fix nodes indexing (`node_id`)
- fix option `-b` (`--bind`)
- fix setup procedure for agent staging test(s)
- fix executor test
- fix task cancellation when the task is waiting in the scheduler
  wait queue
- fix Sphinx syntax.
- fix worker state statistics
- implement task timeout for popen executor
- refactor popen task cancellation
- removed `pre_rank` and `post_rank` from Popen executor
- rename XSEDE to ACCESS (#2676)
- reorder env setup per rank (by RP) and consider (enforce) CPU/GPU
types
- reorganized task/rank execution processes and synced them with
  launch processes
- support schema aliases in resource configs
- task attribute `slots` is not required in an executor
- unify raptor and non-raptor prof traces
- update amarel cfg
- update RM Fork
- update RM PBSPro
- update SRun option `cpus-per-task` - set the option if
`cpu_threads > 0`
- update test for PMGR Launching
- update test for Popen (for pre/post_rank transformation)
- update test for RM Fork
- update test for JSRun (w/o ERF)
- update test for RM PBSPro
- update profile events for raptor tasks
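
A minimal sketch of the new `pre_exec_sync` attribute alongside the popen task timeout from this release; the executable, setup command, and values are illustrative assumptions:

```python
import radical.pilot as rp

# illustrative: `pre_exec` commands run before the task ranks start;
# `pre_exec_sync` (this release) synchronizes ranks after `pre_exec`,
# `timeout` relates to the new popen task timeout (values assumed)
td = rp.TaskDescription()
td.executable    = './my_mpi_app'         # hypothetical application
td.ranks         = 4
td.pre_exec      = ['module load cuda']   # illustrative setup command
td.pre_exec_sync = True
td.timeout       = 600                    # task timeout in seconds
```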


--------------------------------------------------------------------------------

1.18.1

- fix Amarel configuration


--------------------------------------------------------------------------------

1.18.0

- move raptor profiles and logfiles into sandboxes
- consistent use of task modes
- derive etypes from task modes
- clarify and troubleshoot raptor.py example
- docstring update
- make sure we issue a `bootstrap_0_stop` event
- raptor tasks now create `rank_start/rank_stop` events
- report allocated resources for RA
- set MPIRun as default LM for Summit
- task manager cancel won't block (fixes #2336)
- update task description (focus on `ranks`; see the sketch below)
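
A minimal sketch of a `ranks`-focused task description, assuming the current `TaskDescription` API; the executable name and value are illustrative:

```python
import radical.pilot as rp

# illustrative: `ranks` is the attribute this release centers task
# concurrency on (executable and value are assumptions)
td = rp.TaskDescription()
td.executable = './my_app'   # hypothetical application
td.ranks      = 8            # number of (MPI) ranks for the task
```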


--------------------------------------------------------------------------------
