# Deckhouse Kubernetes Platform v1.59 Release Overview
## Please note
* **Support for the _linstor_ module has been discontinued**. Deckhouse Kubernetes Platform will not be able to perform an upgrade if the _linstor_ module is enabled. You must [switch](https://deckhouse.io/modules/sds-replicated-volume/stable/faq.html#migrating-from-the-deckhouse-kubernetes-platform-linstorhttpsdeckhouseiodocumentationv157modules041-linstor--built-in-module-to-sds-replicated-volume) to using the [sds-replicated-volume](https://deckhouse.io/modules/sds-replicated-volume/stable/) module before upgrading DKP.
* **A new method of capturing application traffic in Istio has been added.** Traffic is now intercepted by a CNI plugin instead of an init container. This, for example, removes restrictions that prevented _istio_ from being used together with the _admission-policy-engine_ module in some configurations. The new interception method is used by default; the switchover will be performed when DKP is updated. **Regressions are possible** for applications that make network requests from their init containers. Solutions are described in the [PR](https://github.com/deckhouse/deckhouse/pull/8353).
## Major changes
* **BE and SE editions** of Deckhouse Kubernetes Platform have been added. [Read more](https://deckhouse.io/products/kubernetes-platform/#revisions) about DKP editions and their features.
* **Support for Debian 12** has been added while support for **Debian 9 has been discontinued**.
* Enterprise Edition now features **support for zVirt**. The related cloud provider module is under active development (documentation will be available soon).
* **High availability mode for Deckhouse has been added.** In clusters with more than one master node, the DKP core will now automatically run in multiple replicas (like some other components). You can manage high availability mode [globally](https://deckhouse.io/documentation/v1.59/deckhouse-configure-global.html#parameters-highavailability) or at the module level ([highAvailability](https://deckhouse.io/documentation/v1.59/modules/002-deckhouse/configuration.html#parameters-highavailability) parameter of the `deckhouse` module).
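  As an illustrative sketch, the global switch can be toggled with a standard `ModuleConfig` resource (the settings schema should be verified against the linked documentation):

  ```yaml
  # Sketch: enable high availability mode globally via the standard
  # Deckhouse ModuleConfig pattern; verify the schema against the docs.
  apiVersion: deckhouse.io/v1alpha1
  kind: ModuleConfig
  metadata:
    name: global
  spec:
    version: 1
    settings:
      highAvailability: true
  ```

  The same `highAvailability` flag can be set in an individual module's `ModuleConfig` to override the global value for that module.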
* **An aggregation proxy for monitoring metrics** has been added. With it, you can use a single Grafana datasource that combines data from all Prometheus Main and Prometheus Longterm replicas.
* **Grafana v10 has been added.**
  * There is a separate domain for the new Grafana — `grafana-v10` (according to the [DNS names pattern](https://deckhouse.io/documentation/v1.59/deckhouse-configure-global.html#parameters-modules-publicdomaintemplate) set in the cluster), but in the future the new Grafana will replace the current version at its regular address.
  * Note that some dashboards will not work in Grafana v10 without fixes. For this reason, two Grafana instances will run simultaneously in the cluster for some time.
  * Alerts have been added for dashboards that have to be migrated because they use unsupported plugins or alerts. Each alert's description details the steps required to migrate the dashboard to Grafana v10.
* **A new method of L2 traffic balancing** has been added (the new [l2-load-balancer](https://deckhouse.io/documentation/v1.59/modules/381-l2-load-balancer/) module). Unlike the `metallb` module, it distributes traffic across multiple nodes instead of directing it all to a single node.
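  Assuming the module follows the usual Deckhouse enablement pattern, it can be switched on with a `ModuleConfig` resource; this is a sketch, not taken from the module's documentation:

  ```yaml
  # Sketch: enable the l2-load-balancer module using the standard
  # module-enablement pattern (assumption; check the module docs).
  apiVersion: deckhouse.io/v1alpha1
  kind: ModuleConfig
  metadata:
    name: l2-load-balancer
  spec:
    enabled: true
  ```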
* Module update policy can now be inherited from the DKP update settings. If no update policy is defined for a module ([ModuleUpdatePolicy](https://deckhouse.io/documentation/v1.59/cr.html#moduleupdatepolicy) resource), the update policy is inherited from the DKP settings (the [update](https://deckhouse.io/documentation/v1.59/modules/002-deckhouse/configuration.html#parameters-update) parameter section of the `deckhouse` module).
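  A hedged sketch of a per-module policy; the `releaseChannel` and `update.mode` fields are illustrative and should be checked against the ModuleUpdatePolicy reference linked above:

  ```yaml
  # Sketch: a per-module update policy pinning the Stable channel
  # with automatic updates. Field names are assumptions.
  apiVersion: deckhouse.io/v1alpha1
  kind: ModuleUpdatePolicy
  metadata:
    name: stable-auto
  spec:
    releaseChannel: Stable
    update:
      mode: Auto
  ```

  If no such resource matches a module, the module now falls back to the DKP-wide `update` settings described above.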
* OpenStack provider updates:
  * You can now group virtual machines for **master nodes** and specify a policy for distributing them across hypervisors (the [masterNodeGroup.serverGroup](https://deckhouse.io/documentation/v1.59/modules/030-cloud-provider-openstack/cluster_configuration.html#openstackclusterconfiguration-masternodegroup-servergroup) field). This way, you can avoid placing several master nodes on the same hypervisor.
  * You can now use an HTTP proxy during installation.
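  For illustration, a fragment of `OpenStackClusterConfiguration` using a server group for master nodes might look like this (the `policy` value is an assumption; consult the linked field reference):

  ```yaml
  # Sketch: spread master-node VMs across hypervisors.
  # Only the masterNodeGroup fragment is shown; "AntiAffinity" is an
  # assumed policy value — verify against the serverGroup reference.
  masterNodeGroup:
    replicas: 3
    serverGroup:
      policy: AntiAffinity
  ```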
## Security
* The `cilium`, `ebpf_exporter`, and `memcached` (Prometheus) components have been migrated to distroless images.
* Loki now only works over authenticated connections.
* You can now control whether pods are allowed to run with `automountServiceAccountToken` enabled in the [security policy](https://deckhouse.io/documentation/v1.59/modules/015-admission-policy-engine/cr.html#securitypolicy). On top of that, you can now define a list of cluster roles that are allowed to be bound to users (the [allowedClusterRoles](https://deckhouse.io/documentation/v1.59/modules/015-admission-policy-engine/cr.html#securitypolicy-v1alpha1-spec-policies-allowedclusterroles) field).
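  A hypothetical `SecurityPolicy` fragment combining both new controls; the exact field names (`automountServiceAccountToken`, `allowedClusterRoles`) are inferred from the reference anchors linked above and should be verified there:

  ```yaml
  # Sketch: forbid automounting service account tokens and restrict
  # which cluster roles may be bound to users. Field names are
  # assumptions based on the linked CR reference; match rules omitted.
  apiVersion: deckhouse.io/v1alpha1
  kind: SecurityPolicy
  metadata:
    name: restrict-tokens
  spec:
    enforcementAction: Deny
    policies:
      automountServiceAccountToken: false
      allowedClusterRoles:
        - view
  ```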
## Component version updates
* Kubernetes control plane: `v1.26.15`, `v1.27.12`, `v1.28.8`, `v1.29.3`
* Grafana: `v10.2.2`
* terraform: `0.14.8`
* aws-ebs-csi-driver: `v1.28.0`
* ebpf_exporter: `v2.3.0`
## Internal modules and components that will be restarted during the upgrade
- Kubernetes control plane
- Ingress controller v1.9, and all versions if the `enableIstioSidecar` parameter is enabled.
- Grafana
- cloud-data-discoverer (cloud-provider-openstack)
- cni-cilium
- ebpf_exporter
- kruise (ingress-nginx)
- log-shipper-agent
- loki
- vertical-pod-autoscaler
See [CHANGELOG v1.59](https://github.com/deckhouse/deckhouse/blob/main/CHANGELOG/CHANGELOG-v1.59.md) for more details.