aws-parallelcluster

Latest version: v3.11.1

3.7.0

-----

**ENHANCEMENTS**
- Allow configuration of static and dynamic node priorities in Slurm compute resources via the ParallelCluster configuration YAML file (see the first sketch after this list).
- Add support for Ubuntu 22.04.
- Allow memory-based scheduling when multiple instance types are specified for a Slurm Compute Resource.
- Add a queue-level parameter (`JobExclusiveAllocation`) to ensure nodes in the partition are exclusively allocated to a single job at any given time.
- Add support for login nodes.
- Add support for mounting an existing Amazon File Cache as shared storage (both shown in the second sketch after this list).
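
A minimal, hedged sketch of how the first two items combine in a 3.7.0 queue definition; every name, count, instance type and ID below is a placeholder, and the parameter shapes should be checked against the ParallelCluster documentation for your release:

```yaml
# Sketch: node priorities and exclusive allocation in one Slurm queue
# (all names, counts and IDs are placeholders).
Scheduling:
  Scheduler: slurm
  SlurmQueues:
    - Name: exclusive-queue
      JobExclusiveAllocation: true   # nodes serve one job at a time
      ComputeResources:
        - Name: compute
          InstanceType: c5.xlarge
          MinCount: 2
          MaxCount: 10
          StaticNodePriority: 1      # prefer idle static nodes...
          DynamicNodePriority: 1000  # ...over idle dynamic ones
      Networking:
        SubnetIds:
          - subnet-0123456789abcdef0
```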
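
Likewise, a hedged sketch of a login-node pool and an existing Amazon File Cache mounted as shared storage; the `LoginNodes` and `FileCacheSettings` shapes follow the 3.7.0 documentation, and every name and ID is a placeholder:

```yaml
# Sketch: login-node pool plus an existing Amazon File Cache mount
# (all names, counts and IDs are placeholders).
LoginNodes:
  Pools:
    - Name: login
      InstanceType: t3.large
      Count: 2
      Networking:
        SubnetIds:
          - subnet-0123456789abcdef0
SharedStorage:
  - MountDir: /fcache
    Name: existing-file-cache
    StorageType: FileCache
    FileCacheSettings:
      FileCacheId: fc-0123456789abcdef0
```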

**CHANGES**
- Assign Slurm dynamic nodes a priority (weight) of 1000 by default. This allows Slurm to prioritize idle static nodes over idle dynamic ones.
- Make `aws-parallelcluster-node` daemons handle only ParallelCluster-managed Slurm partitions.
- Increase EFS-utils watchdog poll interval to 10 seconds. Note: This change is meaningful only if [EncryptionInTransit](https://docs.aws.amazon.com/parallelcluster/latest/ug/SharedStorage-v3.html#yaml-SharedStorage-EfsSettings-EncryptionInTransit) is set to `true`, because the watchdog does not run otherwise.
- Upgrade EFA installer to `1.25.1`
  - Efa-driver: `efa-2.5.0-1`
  - Efa-config: `efa-config-1.15-1`
  - Efa-profile: `efa-profile-1.5-1`
  - Libfabric-aws: `libfabric-aws-1.18.1-1`
  - Rdma-core: `rdma-core-46.0-1`
  - Open MPI: `openmpi40-aws-4.1.5-4`
- Change the default value of `Imds/ImdsSupport` from `v1.0` to `v2.0` (see the sketch after this list).
- Upgrade Slurm to version 23.02.4.
- Deprecate Ubuntu 18.04.
- Update the default root volume size to 40 GB to account for limits on CentOS 7.
- Restrict permission on file `/tmp/wait_condition_handle.txt` within the head node so that only root can read it.
- Upgrade NVIDIA driver to version 535.54.03.
- Upgrade CUDA library to version 12.2.0.
- Upgrade NVIDIA Fabric Manager to `nvidia-fabricmanager-535`.
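
Since the `Imds/ImdsSupport` default flips to `v2.0` in this release, clusters whose tooling still requires IMDSv1 must opt back in explicitly. A minimal sketch of the relevant top-level section:

```yaml
# Sketch: keep IMDSv1 available after upgrading to 3.7.0
# (omit this section to accept the new v2.0 default).
Imds:
  ImdsSupport: v1.0
```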

**BUG FIXES**
- Add validation of the `ScaledownIdletime` value to prevent setting a value lower than `-1`.
- Fix issue causing dangling IAM policies to be created when creating ParallelCluster CloudFormation custom resource provider with `CustomLambdaRole`.
- Fix an issue that was causing misalignment of compute node DNS names on instances with multiple network interfaces when `SlurmSettings/Dns/UseEc2Hostnames` is set to `True`.

3.6.1

-----

**ENHANCEMENTS**
- Add support for Slurm accounting in US isolated regions.

**CHANGES**
- Avoid duplication of nodes seen by `clustermgtd` if compute nodes are added to multiple Slurm partitions.
- ParallelCluster AMIs for US isolated regions are now vended with preconfigured CA certificates to speed up node bootstrap.
- Replace `nvidia-persistenced` service with `parallelcluster_nvidia` service to avoid conflicts with DLAMI.

**BUG FIXES**
- Remove hardcoding of root volume device name (`/dev/sda1` and `/dev/xvda`) and retrieve it from the AMI(s) used during `create-cluster`.
- Fix cluster creation failure when using CloudFormation custom resource with `ElasticIp` set to `True`.
- Fix cluster creation/update failure when using CloudFormation custom resource with large configuration files.
- Fix an issue that was preventing `ptrace` protection from being disabled on Ubuntu and was not allowing Cross Memory Attach (CMA) in libfabric.
- Fix fast insufficient capacity fail-over logic when using multiple instance types and no instances are returned.

3.6.0

-----

**ENHANCEMENTS**
- Add support for RHEL 8.7.
- Add a CloudFormation custom resource for creating and managing clusters from CloudFormation (see the template sketch after this list).
- Add support for customizing the cluster Slurm configuration via the ParallelCluster configuration YAML file (see the configuration sketch after this list).
- Build Slurm with support for Lua.
- Increase the limit on the maximum number of queues per cluster from 10 to 50. Compute resources can be distributed flexibly across the various queues as long as the cluster contains a maximum of 50 compute resources.
- Allow specifying a sequence of multiple custom action scripts per event for the `OnNodeStart`, `OnNodeConfigured` and `OnNodeUpdated` parameters.
- Add new configuration section `HealthChecks/Gpu` for enabling the GPU Health Check in the compute node before job execution.
- Add support for `Tags` in the `SlurmQueues` and `SlurmQueues/ComputeResources` section.
- Add support for `DetailedMonitoring` in the `Monitoring` section.
- Add `mem_used_percent` and `disk_used_percent` metrics for head node memory and root volume disk utilization tracking on the ParallelCluster CloudWatch dashboard, and set up alarms for monitoring these metrics.
- Add log rotation support for ParallelCluster managed logs.
- Track common compute node errors and the longest dynamic node idle time on the CloudWatch dashboard.
- Enforce the DCV Authenticator Server to use at least the `TLS-1.2` protocol when creating the SSL socket.
- Install [NVIDIA Data Center GPU Manager (DCGM)](https://developer.nvidia.com/dcgm) package on all supported OSes except for aarch64 `centos7` and `alinux2`.
- Load kernel module [nvidia-uvm](https://developer.nvidia.com/blog/unified-memory-cuda-beginners/) by default to provide Unified Virtual Memory (UVM) functionality to the CUDA driver.
- Install [NVIDIA Persistence Daemon](https://docs.nvidia.com/deploy/driver-persistence/index.html) as a system service.
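
A hedged configuration sketch pulling several of these additions together; every name, script URL, tag value and ID below is a placeholder, and the parameter shapes (`CustomSlurmSettings`, `CustomActions/OnNodeConfigured/Sequence`, `HealthChecks/Gpu`, queue `Tags`, `Monitoring/DetailedMonitoring`) follow the 3.6.0 documentation:

```yaml
# Sketch of several 3.6.0 additions in one cluster config (placeholders only).
Monitoring:
  DetailedMonitoring: true          # new Monitoring-section option
Scheduling:
  Scheduler: slurm
  SlurmSettings:
    CustomSlurmSettings:            # raw Slurm parameters injected verbatim
      - TreeWidth: 60
  SlurmQueues:
    - Name: gpu-queue
      HealthChecks:
        Gpu:
          Enabled: true             # run the GPU health check before jobs
      Tags:
        - Key: team
          Value: research
      CustomActions:
        OnNodeConfigured:
          Sequence:                 # multiple scripts per event, run in order
            - Script: s3://my-bucket/setup-1.sh
            - Script: s3://my-bucket/setup-2.sh
              Args:
                - --verbose
      ComputeResources:
        - Name: gpu
          InstanceType: g4dn.xlarge
          MinCount: 0
          MaxCount: 4
      Networking:
        SubnetIds:
          - subnet-0123456789abcdef0
```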
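
For the CloudFormation custom resource, a heavily hedged template sketch: the provider is deployed as a nested stack exposing a `ServiceToken`, and the cluster is then declared as a `Custom::PclusterCluster` resource. The `TemplateURL` below is a placeholder, and the exact property names should be verified against the ParallelCluster documentation:

```yaml
# Sketch of a CloudFormation template using the cluster custom resource
# (TemplateURL and all values are placeholders).
Resources:
  PclusterClusterProvider:
    Type: AWS::CloudFormation::Stack
    Properties:
      # Region/version-specific provider template published by ParallelCluster;
      # see the official docs for the real URL.
      TemplateURL: https://example.com/parallelcluster/custom_resource/cluster.yaml
  MyCluster:
    Type: Custom::PclusterCluster
    Properties:
      ServiceToken: !GetAtt [PclusterClusterProvider, Outputs.ServiceToken]
      ClusterName: my-cluster
      ClusterConfiguration:
        Image:
          Os: alinux2
        HeadNode:
          InstanceType: t3.medium
          Networking:
            SubnetId: subnet-0123456789abcdef0
        Scheduling:
          Scheduler: slurm
          SlurmQueues:
            - Name: queue1
              ComputeResources:
                - Name: compute
                  InstanceType: c5.xlarge
                  MinCount: 0
                  MaxCount: 4
              Networking:
                SubnetIds:
                  - subnet-0123456789abcdef0
```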

**CHANGES**
- Note: 3.6 will be the last release to include support for Ubuntu 18.04. Subsequent releases will only support Ubuntu 20.04 and later.
- Upgrade Slurm to version 23.02.2.
- Upgrade munge to version 0.5.15.
- Set Slurm default `TreeWidth` to 30.
- Set Slurm prolog and epilog configurations to target a directory, `/opt/slurm/etc/scripts/prolog.d/` and `/opt/slurm/etc/scripts/epilog.d/` respectively.
- Set Slurm `BatchStartTimeout` to 3 minutes, allowing up to 3 minutes of prolog execution during compute node registration.
- Increase the default `RetentionInDays` of CloudWatch logs from 14 to 180 days.
- Upgrade EFA installer to `1.22.1`
  - Dkms: `2.8.3-2`
  - Efa-driver: `efa-2.1.1g`
  - Efa-config: `efa-config-1.13-1`
  - Efa-profile: `efa-profile-1.5-1`
  - Libfabric-aws: `libfabric-aws-1.17.1-1`
  - Rdma-core: `rdma-core-43.0-1`
  - Open MPI: `openmpi40-aws-4.1.5-1`
- Upgrade Lustre client version to 2.12 on Amazon Linux 2 (same version available on Ubuntu 20.04, 18.04 and CentOS >= 7.7).
- Upgrade Lustre client version to 2.10.8 on CentOS 7.6.
- Upgrade NVIDIA driver to version 470.182.03.
- Upgrade NVIDIA Fabric Manager to version 470.182.03.
- Upgrade NVIDIA CUDA Toolkit to version 11.8.0.
- Upgrade NVIDIA CUDA sample to version 11.8.0.
- Upgrade Intel MPI Library to 2021.9.0.43482.
- Upgrade NICE DCV to version `2023.0-15022`.
  - server: `2023.0.15022-1`
  - xdcv: `2023.0.547-1`
  - gl: `2023.0.1027-1`
  - web_viewer: `2023.0.15022-1`
- Upgrade `aws-cfn-bootstrap` to version 2.0-24.
- Upgrade image used by CodeBuild environment when building container images for AWS Batch clusters, from
`aws/codebuild/amazonlinux2-x86_64-standard:3.0` to `aws/codebuild/amazonlinux2-x86_64-standard:4.0` and from
`aws/codebuild/amazonlinux2-aarch64-standard:1.0` to `aws/codebuild/amazonlinux2-aarch64-standard:2.0`.
- Avoid resetting FSx and EFS shared folder permissions when mounting them on the compute nodes.

**BUG FIXES**
- Fix EFS and FSx network security group validators to avoid reporting false errors.
- Fix missing tagging of resources created by ImageBuilder during the `build-image` operation.
- Fix the update policy for `MaxCount` to always perform numerical comparisons on the `MaxCount` property.
- Fix an issue that was causing misalignment of compute node IP addresses on instances with multiple network interfaces.
- Fix replacement of `StoragePass` in `slurm_parallelcluster_slurmdbd.conf` when a queue parameter update is performed and the Slurm accounting configurations are not updated.
- Fix issue causing `cfn-hup` daemon to fail when it gets restarted.
- Fix issue causing dangling security groups to be created when creating a cluster with an existing EFS.
- Fix issue causing NVIDIA GPU compute nodes not to resume correctly after executing an `scontrol reboot` command.
- Fix tags parsing to show a meaningful error message when using a boolean in the `Value` field of `Tags`.

3.5.1

-----

**ENHANCEMENTS**
- Add a new way to distribute ParallelCluster as a self-contained executable shipped with a dedicated installer.
- Add support for the US isolated region us-isob-east-1.

**CHANGES**
- Upgrade EFA installer to `1.22.0`
  - Efa-driver: `efa-2.1.1g`
  - Efa-config: `efa-config-1.13-1`
  - Efa-profile: `efa-profile-1.5-1`
  - Libfabric-aws: `libfabric-aws-1.17.0-1`
  - Rdma-core: `rdma-core-43.0-1`
  - Open MPI: `openmpi40-aws-4.1.5-1`
- Upgrade NICE DCV to version `2022.2-14521`.
  - server: `2022.2.14521-1`
  - xdcv: `2022.2.519-1`
  - gl: `2022.2.1012-1`
  - web_viewer: `2022.2.14521-1`

**BUG FIXES**
- Fix an issue where updating a cluster to remove shared EBS volumes could cause node launch failures if `MountDir` matched the same pattern in `/etc/exports`.
- Fix the `compute_console_output` log file being truncated at every clustermgtd iteration.

3.5.0

-----

**ENHANCEMENTS**
- Add official versioned ParallelCluster policies in a CloudFormation template to allow customers to easily reference them in their workloads.
- Add a Python library to allow customers to use ParallelCluster functionalities in their own code.
- Add logging of compute node console output to CloudWatch on compute node bootstrap failure.
- Add failures field containing failure code and reason to `describe-cluster` output when cluster creation fails.

**CHANGES**
- Upgrade Slurm to version 22.05.8.
- Make Slurm controller logs more verbose and enable additional logging for the Slurm power save plugin.
- Upgrade EFA installer to `1.21.0`
  - Efa-driver: `efa-2.1.1-1`
  - Efa-config: `efa-config-1.12-1`
  - Efa-profile: `efa-profile-1.5-1`
  - Libfabric-aws: `libfabric-aws-1.16.1amzn3.0-1`
  - Rdma-core: `rdma-core-43.0-1`
  - Open MPI: `openmpi40-aws-4.1.4-3`

**BUG FIXES**
- Fix cluster DB creation by verifying the cluster name is no longer than 40 characters when Slurm accounting is enabled.
- Fix an issue in clustermgtd that caused compute nodes rebooted via Slurm to be replaced if their EC2 instance status checks failed.
- Fix an issue where compute nodes could not launch with capacity reservations shared by other accounts because of a wrong IAM policy on the head node.
- Fix an issue where custom AMI creation failed in Ubuntu 20.04 on MySQL packages installation.
- Fix an issue where the `pcluster configure` command failed when the account had no IPv4 CIDR subnet.

3.4.1

-----

**BUG FIXES**
- Fix an issue with the Slurm scheduler that might incorrectly apply updates to its internal registry of compute nodes. This might result in EC2 instances becoming inaccessible or being backed by an incorrect instance type.
