Compare commits

...

42 Commits

Author SHA1 Message Date
github-actions[bot]
1648b754f6 Patch versions updates 2026-03-18 03:22:52 +00:00
Ali Afsharzadeh
e979e770f2 Fix calico api server permissions (#13101)
Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2026-03-17 16:19:50 +05:30
Ali Afsharzadeh
b1e3816b2f Add calico-tier-getter RBAC (#13100)
Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2026-03-17 16:19:42 +05:30
Vitaly
391b08c645 fix: use nodelocaldns_ip with ipv6 address (#13087) 2026-03-17 16:07:38 +05:30
NoNE
39b97464be Use async/poll on drain tasks to prevent SSH connection timeouts (#13081) 2026-03-17 14:31:36 +05:30
Kay Yan
3c6d368397 ci(openeuler): improve mirror selection and stabilize CI checks (#13094)
Enable openEuler metalink and clear dnf cache after repo updates so package downloads use refreshed mirror metadata. Keep openeuler24-calico in the main CI matrix with a longer package timeout, and clean up failed pods before running-state checks to reduce transient CI noise.


Made-with: Cursor

Signed-off-by: Kay Yan <kay.yan@daocloud.io>
2026-03-17 13:29:37 +05:30
Max Gautier
03d17fea92 proxy: Fix the no_proxy variable (#12981)
* CI: add no_proxy regression test

* proxy: Fix the no_proxy variable

Since 2.29, probably due to a change in Ansible templating, the no_proxy
variable is rendered as an array of characters rather than a string.

This results in a broken cluster in some cases.

Eliminate the custom Jinja looping and use filters with list flattening +
join instead.
Also simplify some things (no separate tasks file, just use `run_once`
instead of delegating to localhost)
2026-03-17 03:45:37 +05:30
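The character-array pitfall the no_proxy fix describes can be illustrated in plain Python (a hypothetical sketch, not Kubespray's actual templating code): iterating over a string yields individual characters, so a join or loop that expects a list of hosts can silently explode one host string into characters, while flattening real lists of hosts and joining once behaves correctly.

```python
# Illustrative only: mimics how Jinja/Python iteration over a string
# produces a character array instead of a list of hosts.
proxy_hosts = "localhost,127.0.0.1"

# Buggy pattern: treating the string itself as the iterable of hosts.
broken = ",".join(proxy_hosts)  # joins every single character with commas

# Fixed pattern: flatten actual lists of hosts, then join once.
host_lists = [["localhost", "127.0.0.1"], ["10.0.0.0/8"]]
flattened = [h for sub in host_lists for h in sub]
no_proxy = ",".join(flattened)
```

The same distinction applies in Jinja2: `| flatten | join(',')` over lists of hosts avoids ever iterating a bare string.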
Cheprasov Daniil
dbb8527560 docs(etcd): clarify etcd metrics scraping with listen-metrics-urls (#13059) 2026-03-16 14:37:39 +05:30
Shaleen Bathla
7acdc4df64 cilium: honor resource limits and requests by default (#13092)
Signed-off-by: Shaleen Bathla <shaleen.bathla@servicenow.com>
2026-03-16 08:49:40 +05:30
Ali Afsharzadeh
a51773e78f Upgrade cilium from 1.18.6 to 1.19.1 (#13095)
Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2026-03-14 09:39:34 +05:30
Jannick Kappelmann
096dd1875a Added kube_version check (#13071) 2026-03-13 17:25:36 +05:30
Viktor
e3b5c41ced fix: update volumesnapshotclass to v1 (#12775) 2026-03-11 20:31:38 +05:30
Srishti Jaiswal
ba70ed35f0 Remove kubeadm config api version: v1beta3 for kubeadm config template (#13027) 2026-03-11 20:27:38 +05:30
Ali Afsharzadeh
1bafb8e882 Update load balancer versions to Nginx 1.28.2 and Haproxy 3.2.13 (#13034)
Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2026-03-11 20:17:37 +05:30
Cheprasov Daniil
3bdd70c5d8 Feat: make kube-vip BGP source configurable (#13044) 2026-03-11 19:17:38 +05:30
Shaleen Bathla
979fe25521 cilium: remove unused bpf clock probe variable (#13050)
Signed-off-by: Shaleen Bathla <shaleen.bathla@servicenow.com>
2026-03-11 16:39:38 +05:30
dependabot[bot]
7e7b016a15 build(deps): bump molecule from 26.2.0 to 26.3.0 in the molecule group (#13079)
Bumps the molecule group with 1 update: [molecule](https://github.com/ansible-community/molecule).


Updates `molecule` from 26.2.0 to 26.3.0
- [Release notes](https://github.com/ansible-community/molecule/releases)
- [Commits](https://github.com/ansible-community/molecule/compare/v26.2.0...v26.3.0)

---
updated-dependencies:
- dependency-name: molecule
  dependency-version: 26.3.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: molecule
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-10 23:49:12 +05:30
Ali Afsharzadeh
da6539c7a0 Enable create_namespace option for custom_cni with helm (#13061)
Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2026-03-05 15:12:20 +05:30
dependabot[bot]
459f31034e build(deps): bump molecule from 25.12.0 to 26.2.0 in the molecule group (#13065)
Bumps the molecule group with 1 update: [molecule](https://github.com/ansible-community/molecule).


Updates `molecule` from 25.12.0 to 26.2.0
- [Release notes](https://github.com/ansible-community/molecule/releases)
- [Commits](https://github.com/ansible-community/molecule/compare/v25.12.0...v26.2.0)

---
updated-dependencies:
- dependency-name: molecule
  dependency-version: 26.2.0
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: molecule
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-05 09:22:17 +05:30
Max Gautier
f66e11e5cc CI: Fix terraform job not using correct extra vars (#13057) 2026-03-04 02:00:18 +05:30
Max Gautier
0c47a6891e Remove netcheker as we now use hydrophone for network tests (#13058) 2026-03-02 19:40:54 +05:30
NoNE
a866292279 Deduplicate GraphQL node IDs in update-hashes to fix 502 err (#13064)
* Deduplicate GraphQL node IDs in update-hashes to fix 502

* Bump component_hash_update version to 1.0.1

Avoids stale pip/uv installation cache in CI pipelines
after the GraphQL deduplication fix.
2026-03-02 16:00:13 +05:30
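The GraphQL deduplication fix above amounts to removing repeated node IDs before issuing a batched query. A minimal, order-preserving sketch (IDs are made up for illustration):

```python
# Hypothetical node IDs; duplicates in a batched GraphQL request can
# trigger server-side errors such as the 502 mentioned in the commit.
node_ids = ["MDQ6VXNlcjE=", "MDQ6VXNlcjI=", "MDQ6VXNlcjE="]

# dict.fromkeys preserves insertion order, giving a stable dedup.
unique_ids = list(dict.fromkeys(node_ids))
```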
ChengHao Yang
98ac2e40bf Test: fix vm_memory not enough for testing (#13060)
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2026-03-02 13:52:13 +05:30
Kubernetes Prow Robot
dcab5c8b23 Merge pull request #12754 from tico88612/test/hydrophone
Test: refactor check-network to hydrophone
2026-02-27 16:38:56 +05:30
Srishti Jaiswal
16ad53eac5 Revert "add Helm 4.x checksums and remove 3.16.* checksums (#13018)" (#13056)
This reverts commit 5e6fbc9769.
2026-02-27 16:28:56 +05:30
Srishti Jaiswal
5e6fbc9769 add Helm 4.x checksums and remove 3.16.* checksums (#13018) 2026-02-25 13:49:43 +05:30
Chen-Hsien, Huang
ba55ece670 Fix typo in variable name: ssh_bastion_confing__name -> ssh_bastion_config_name (#13046) 2026-02-24 18:15:37 +05:30
Timothy Liao
d80983875c docs: remove outdated ansible-galaxy install from CONTRIBUTING.md (#13038) 2026-02-24 12:21:35 +05:30
Max Gautier
4ca7c2f5c5 system_packages: skip action if package set is empty (#13015)
We currently invoke the package module even if the set of packages is
empty.

This apparently makes the package manager on openEuler regularly time
out.
2026-02-24 09:13:33 +05:30
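The guard described above is simple: short-circuit before calling the package manager when the computed package set is empty. A hedged sketch (names are illustrative, not Kubespray's actual task logic):

```python
def install_packages(packages, runner):
    """Skip the package-manager invocation entirely for an empty set,
    avoiding a pointless (and on openEuler, timeout-prone) dnf call."""
    if not packages:
        return "skipped"
    return runner(sorted(packages))

# Empty set: no package-manager call is made.
result_empty = install_packages(set(), lambda pkgs: f"installed {pkgs}")
# Non-empty set: the runner is invoked with a sorted list.
result_full = install_packages({"curl"}, lambda pkgs: f"installed {pkgs}")
```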
Max Gautier
55787bd1a2 cleanup 1.32 crio checksums (Kubernetes 1.32 is no longer supported) (#13042) 2026-02-23 20:47:36 +05:30
Max Gautier
9cce89dc5e kubeadm: delete unused node-kubeconfig template (#13012) 2026-02-17 23:41:38 +05:30
Max Gautier
e7f4c9f9f6 kubeadm_patches: remove old patches on inventory change (#13019)
Currently, if changing the inventory variable `kubeadm_patches`, new
patches will be created, but the existing ones will also be left on the
filesystem and applied by kubeadm; this means that removed or changed
configuration can linger.

Clean up old patches (the difference between the patches existing on the
filesystem and the ones created for the current run).
2026-02-17 09:56:01 +05:30
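The cleanup described above is a set difference: stale patches are whatever exists on disk but was not generated by the current run. A minimal sketch with hypothetical patch file names:

```python
# Patch files currently on the node's filesystem (illustrative names).
existing = {"kube-apiserver0+merge.yaml", "etcd0+merge.yaml"}
# Patch files generated by the current Kubespray run.
current = {"kube-apiserver0+merge.yaml"}

# Stale patches to delete so kubeadm no longer applies them.
to_remove = existing - current
```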
Max Gautier
0c75e97601 Remove outdated RHEL8 documentation (#13017) 2026-02-16 21:44:10 +05:30
Max Gautier
dcebecc6e0 CI: test disabled localhost LB (#13013) 2026-02-16 21:44:02 +05:30
Takuya Murakami
0bfd43499a Update etcd to 3.6 (#12634)
* [etcd] Update etcd 3.5.x to 3.6.5

- Add hashes for etcd 3.6.5
- Remove etcd v2 backup task for etcd 3.6
  etcd 3.6 removes the 'etcdctl backup' command with ETCDCTL_API=2
- Downgrade etcd to 3.5 in netchecker
  The netchecker does not work with etcd 3.6 because it removes v2 API support (--enable-v2),
  and netchecker does not support the v3 API.

* Fix: Change etcd config to clean up v2 store before upgrading etcd to 3.6

* Bump etcd to 3.6.8
2026-02-16 19:16:01 +05:30
dependabot[bot]
6a1243110c build(deps): bump cryptography from 46.0.4 to 46.0.5 (#13011)
Bumps [cryptography](https://github.com/pyca/cryptography) from 46.0.4 to 46.0.5.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/46.0.4...46.0.5)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-version: 46.0.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-16 16:20:01 +05:30
Chad Swenson
8af9f37b28 Undefined check for apiserver_loadbalancer_domain_name in apiserver_sans (#13009) 2026-02-16 15:22:01 +05:30
ChengHao Yang
f1fe9036ce Test: ram increase for hydrophone
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2026-01-31 00:24:54 +08:00
ChengHao Yang
0458d33698 Test: when hardening test needs enabled csr approve
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2026-01-31 00:24:54 +08:00
ChengHao Yang
275cdc70d4 Test: refactor check-network to hydrophone
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2026-01-31 00:24:54 +08:00
ChengHao Yang
c138157886 Test: add install hydrophone
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2026-01-31 00:24:53 +08:00
ChengHao Yang
78199c3bc3 Refactor: check csr request is separated from check network
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2026-01-31 00:24:53 +08:00
92 changed files with 594 additions and 1272 deletions

View File

@@ -57,6 +57,7 @@ pr:
- ubuntu24-kube-router-svc-proxy
- ubuntu24-ha-separate-etcd
- fedora40-flannel-crio-collection-scale
- openeuler24-calico
# This is for flakey test so they don't disrupt the PR worklflow too much.
# Jobs here MUST have a open issue so we don't lose sight of them
@@ -67,7 +68,6 @@ pr-flakey:
matrix:
- TESTCASE:
- flatcar4081-calico # https://github.com/kubernetes-sigs/kubespray/issues/12309
- openeuler24-calico # https://github.com/kubernetes-sigs/kubespray/issues/12877
# The ubuntu24-calico-all-in-one jobs are meant as early stages to prevent running the full CI if something is horribly broken
ubuntu24-calico-all-in-one:

View File

@@ -116,3 +116,4 @@ tf-elastx_ubuntu24-calico:
TF_VAR_flavor_k8s_node: 3f73fc93-ec61-4808-88df-2580d94c1a9b # v1-standard-2
TF_VAR_image: ubuntu-24.04-server-latest
TF_VAR_k8s_allowed_remote_ips: '["0.0.0.0/0"]'
TESTCASE: $CI_JOB_NAME

View File

@@ -12,7 +12,6 @@ To install development dependencies you can set up a python virtual env with the
virtualenv venv
source venv/bin/activate
pip install -r tests/requirements.txt
ansible-galaxy install -r tests/requirements.yml
```
#### Linting

View File

@@ -35,8 +35,8 @@ RUN --mount=type=bind,source=requirements.txt,target=requirements.txt \
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
RUN OS_ARCHITECTURE=$(dpkg --print-architecture) \
&& curl -L "https://dl.k8s.io/release/v1.35.1/bin/linux/${OS_ARCHITECTURE}/kubectl" -o /usr/local/bin/kubectl \
&& echo "$(curl -L "https://dl.k8s.io/release/v1.35.1/bin/linux/${OS_ARCHITECTURE}/kubectl.sha256")" /usr/local/bin/kubectl | sha256sum --check \
&& curl -L "https://dl.k8s.io/release/v1.35.2/bin/linux/${OS_ARCHITECTURE}/kubectl" -o /usr/local/bin/kubectl \
&& echo "$(curl -L "https://dl.k8s.io/release/v1.35.2/bin/linux/${OS_ARCHITECTURE}/kubectl.sha256")" /usr/local/bin/kubectl | sha256sum --check \
&& chmod a+x /usr/local/bin/kubectl
COPY *.yml ./

View File

@@ -89,13 +89,13 @@ vagrant up
- **Flatcar Container Linux by Kinvolk**
- **Debian** Bookworm, Bullseye, Trixie
- **Ubuntu** 22.04, 24.04
- **CentOS Stream / RHEL** [9, 10](docs/operating_systems/rhel.md#rhel-8)
- **CentOS Stream / RHEL** 9, 10
- **Fedora** 39, 40, 41, 42
- **Fedora CoreOS** (see [fcos Note](docs/operating_systems/fcos.md))
- **openSUSE** Leap 15.x/Tumbleweed
- **Oracle Linux** [9, 10](docs/operating_systems/rhel.md#rhel-8)
- **Alma Linux** [9, 10](docs/operating_systems/rhel.md#rhel-8)
- **Rocky Linux** [9, 10](docs/operating_systems/rhel.md#rhel-8) (experimental in 10: see [Rocky Linux 10 notes](docs/operating_systems/rhel.md#rocky-linux-10))
- **Oracle Linux** 9, 10
- **Alma Linux** 9, 10
- **Rocky Linux** 9, 10 (experimental in 10: see [Rocky Linux 10 notes](docs/operating_systems/rhel.md#rocky-linux-10))
- **Kylin Linux Advanced Server V10** (experimental: see [kylin linux notes](docs/operating_systems/kylinlinux.md))
- **Amazon Linux 2** (experimental: see [amazon linux notes](docs/operating_systems/amazonlinux.md))
- **UOS Linux** (experimental: see [uos linux notes](docs/operating_systems/uoslinux.md))
@@ -111,15 +111,15 @@ Note:
<!-- BEGIN ANSIBLE MANAGED BLOCK -->
- Core
- [kubernetes](https://github.com/kubernetes/kubernetes) 1.35.1
- [etcd](https://github.com/etcd-io/etcd) 3.5.27
- [kubernetes](https://github.com/kubernetes/kubernetes) 1.35.2
- [etcd](https://github.com/etcd-io/etcd) 3.6.8
- [docker](https://www.docker.com/) 28.3
- [containerd](https://containerd.io/) 2.2.1
- [cri-o](http://cri-o.io/) 1.35.0 (experimental: see [CRI-O Note](docs/CRI/cri-o.md). Only on fedora, ubuntu and centos based OS)
- [containerd](https://containerd.io/) 2.2.2
- [cri-o](http://cri-o.io/) 1.35.1 (experimental: see [CRI-O Note](docs/CRI/cri-o.md). Only on fedora, ubuntu and centos based OS)
- Network Plugin
- [cni-plugins](https://github.com/containernetworking/plugins) 1.8.0
- [calico](https://github.com/projectcalico/calico) 3.30.6
- [cilium](https://github.com/cilium/cilium) 1.18.6
- [cilium](https://github.com/cilium/cilium) 1.19.1
- [flannel](https://github.com/flannel-io/flannel) 0.27.3
- [kube-ovn](https://github.com/alauda/kube-ovn) 1.12.21
- [kube-router](https://github.com/cloudnativelabs/kube-router) 2.1.1

View File

@@ -245,7 +245,7 @@ cilium_operator_extra_volume_mounts:
## Choose Cilium version
```yml
cilium_version: "1.18.6"
cilium_version: "1.19.1"
```
## Add variable to config

View File

@@ -63,6 +63,8 @@ kube_vip_bgppeers:
# kube_vip_bgp_peeraddress:
# kube_vip_bgp_peerpass:
# kube_vip_bgp_peeras:
# kube_vip_bgp_sourceip:
# kube_vip_bgp_sourceif:
```
If using [control plane load-balancing](https://kube-vip.io/docs/about/architecture/#control-plane-load-balancing):

View File

@@ -6,9 +6,9 @@ The documentation also applies to Red Hat derivatives, including Alma Linux, Roc
The content of this section does not apply to open-source derivatives.
In order to install packages via yum or dnf, RHEL 7/8 hosts are required to be registered for a valid Red Hat support subscription.
In order to install packages via yum or dnf, RHEL hosts are required to be registered for a valid Red Hat support subscription.
You can apply for a 1-year Development support subscription by creating a [Red Hat Developers](https://developers.redhat.com/) account. Be aware though that as the Red Hat Developers subscription is limited to only 1 year, it should not be used to register RHEL 7/8 hosts provisioned in Production environments.
You can apply for a 1-year Development support subscription by creating a [Red Hat Developers](https://developers.redhat.com/) account. Be aware though that as the Red Hat Developers subscription is limited to only 1 year, it should not be used to register RHEL hosts provisioned in Production environments.
Once you have a Red Hat support account, simply add the credentials to the Ansible inventory parameters `rh_subscription_username` and `rh_subscription_password` prior to deploying Kubespray. If your company has a Corporate Red Hat support account, then obtain an **Organization ID** and **Activation Key**, and add these to the Ansible inventory parameters `rh_subscription_org_id` and `rh_subscription_activation_key` instead of using your Red Hat support account credentials.
@@ -29,15 +29,7 @@ rh_subscription_role: "Red Hat Enterprise Server"
rh_subscription_sla: "Self-Support"
```
If the RHEL 8/9 hosts are already registered to a valid Red Hat support subscription via an alternative configuration management approach prior to the deployment of Kubespray, the successful RHEL `subscription-manager` status check will simply result in the RHEL subscription registration tasks being skipped.
## RHEL 8
If you have containers that are using iptables in the host network namespace (`hostNetwork=true`),
you need to ensure they are using iptables-nft.
An example how k8s do the autodetection can be found [in this PR](https://github.com/kubernetes/kubernetes/pull/82966)
The kernel version is lower than the kubernetes 1.32 system validation, please refer to the [kernel requirements](../operations/kernel-requirements.md).
If the RHEL hosts are already registered to a valid Red Hat support subscription via an alternative configuration management approach prior to the deployment of Kubespray, the successful RHEL `subscription-manager` status check will simply result in the RHEL subscription registration tasks being skipped.
## Rocky Linux 10

View File

@@ -32,12 +32,12 @@ etcd_metrics_service_labels:
k8s-app: etcd
app.kubernetes.io/managed-by: Kubespray
app: kube-prometheus-stack-kube-etcd
release: prometheus-stack
release: kube-prometheus-stack
```
The last two labels in the above example allows to scrape the metrics from the
[kube-prometheus-stack](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack)
chart with the following Helm `values.yaml` :
chart when it is installed with the release name `kube-prometheus-stack` and the following Helm `values.yaml`:
```yaml
kubeEtcd:
@@ -45,8 +45,22 @@ kubeEtcd:
enabled: false
```
To fully override metrics exposition urls, define it in the inventory with:
If your Helm release name is different, adjust the `release` label accordingly.
To fully override metrics exposition URLs, define it in the inventory with:
```yaml
etcd_listen_metrics_urls: "http://0.0.0.0:2381"
```
If you choose to expose metrics on specific node IPs (for example `10.141.4.22`, `10.141.4.23`, `10.141.4.24`) in `etcd_listen_metrics_urls`,
you can configure kube-prometheus-stack to scrape those endpoints directly with:
```yaml
kubeEtcd:
enabled: true
endpoints:
- 10.141.4.22
- 10.141.4.23
- 10.141.4.24
```

View File

@@ -199,6 +199,8 @@ kube_vip_enabled: false
# kube_vip_leasename: plndr-cp-lock
# kube_vip_enable_node_labeling: false
# kube_vip_lb_fwdmethod: local
# kube_vip_bgp_sourceip:
# kube_vip_bgp_sourceif:
# Node Feature Discovery
node_feature_discovery_enabled: false

View File

@@ -361,8 +361,6 @@ cilium_l2announcements: false
# -- Enable the use of well-known identities.
# cilium_enable_well_known_identities: false
# cilium_enable_bpf_clock_probe: true
# -- Whether to enable CNP status updates.
# cilium_disable_cnp_status_updates: true

View File

@@ -46,8 +46,8 @@ ADD ./tests/requirements.txt /kubespray/tests/requirements.txt
RUN update-alternatives --install /usr/bin/python python /usr/bin/python3 1 \
&& pip install --break-system-packages --ignore-installed --no-compile --no-cache-dir pip -U \
&& pip install --break-system-packages --no-compile --no-cache-dir -r tests/requirements.txt \
&& curl -L https://dl.k8s.io/release/v1.35.1/bin/linux/$(dpkg --print-architecture)/kubectl -o /usr/local/bin/kubectl \
&& echo $(curl -L https://dl.k8s.io/release/v1.35.1/bin/linux/$(dpkg --print-architecture)/kubectl.sha256) /usr/local/bin/kubectl | sha256sum --check \
&& curl -L https://dl.k8s.io/release/v1.35.2/bin/linux/$(dpkg --print-architecture)/kubectl -o /usr/local/bin/kubectl \
&& echo $(curl -L https://dl.k8s.io/release/v1.35.2/bin/linux/$(dpkg --print-architecture)/kubectl.sha256) /usr/local/bin/kubectl | sha256sum --check \
&& chmod a+x /usr/local/bin/kubectl \
# Install Vagrant
&& curl -LO https://releases.hashicorp.com/vagrant/${VAGRANT_VERSION}/vagrant_${VAGRANT_VERSION}-1_$(dpkg --print-architecture).deb \

View File

@@ -16,6 +16,8 @@
- name: Gather and compute network facts
import_role:
name: network_facts
tags:
- always
- name: Gather minimal facts
setup:
gather_subset: '!all'

View File

@@ -1,6 +1,6 @@
ansible==11.13.0
# Needed for community.crypto module
cryptography==46.0.4
cryptography==46.0.5
# Needed for jinja2 json_query templating
jmespath==1.1.0
# Needed for ansible.utils.ipaddr

View File

@@ -1,2 +1,2 @@
---
ssh_bastion_confing__name: ssh-bastion.conf
ssh_bastion_config_name: ssh-bastion.conf

View File

@@ -8,8 +8,8 @@
tasks:
- name: Copy config to remote host
copy:
src: "{{ playbook_dir }}/{{ ssh_bastion_confing__name }}"
dest: "{{ ssh_bastion_confing__name }}"
src: "{{ playbook_dir }}/{{ ssh_bastion_config_name }}"
dest: "{{ ssh_bastion_config_name }}"
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
mode: "0644"

View File

@@ -17,6 +17,6 @@
delegate_to: localhost
connection: local
template:
src: "{{ ssh_bastion_confing__name }}.j2"
dest: "{{ playbook_dir }}/{{ ssh_bastion_confing__name }}"
src: "{{ ssh_bastion_config_name }}.j2"
dest: "{{ playbook_dir }}/{{ ssh_bastion_config_name }}"
mode: "0640"

View File

@@ -12,6 +12,10 @@ coreos_locksmithd_disable: false
# Install epel repo on Centos/RHEL
epel_enabled: false
## openEuler specific variables
# Enable metalink for openEuler repos (auto-selects fastest mirror by location)
openeuler_metalink_enabled: false
## Oracle Linux specific variables
# Install public repo on Oracle Linux
use_oracle_public_repo: true

View File

@@ -1,3 +1,43 @@
---
- name: Import Centos boostrap for openEuler
import_tasks: centos.yml
- name: Import CentOS bootstrap for openEuler
ansible.builtin.import_tasks: centos.yml
- name: Get existing openEuler repo sections
ansible.builtin.shell:
cmd: "set -o pipefail && grep '^\\[' /etc/yum.repos.d/openEuler.repo | tr -d '[]'"
executable: /bin/bash
register: _openeuler_repo_sections
changed_when: false
failed_when: false
check_mode: false
become: true
when: openeuler_metalink_enabled
- name: Enable metalink for openEuler repos
community.general.ini_file:
path: /etc/yum.repos.d/openEuler.repo
section: "{{ item.key }}"
option: metalink
value: "{{ item.value }}"
no_extra_spaces: true
mode: "0644"
loop: "{{ _openeuler_metalink_repos | dict2items | selectattr('key', 'in', _openeuler_repo_sections.stdout_lines | default([])) }}"
become: true
when: openeuler_metalink_enabled
register: _openeuler_metalink_result
vars:
_openeuler_metalink_repos:
OS: "https://mirrors.openeuler.org/metalink?repo=$releasever/OS&arch=$basearch"
everything: "https://mirrors.openeuler.org/metalink?repo=$releasever/everything&arch=$basearch"
EPOL: "https://mirrors.openeuler.org/metalink?repo=$releasever/EPOL/main&arch=$basearch"
debuginfo: "https://mirrors.openeuler.org/metalink?repo=$releasever/debuginfo&arch=$basearch"
source: "https://mirrors.openeuler.org/metalink?repo=$releasever&arch=source"
update: "https://mirrors.openeuler.org/metalink?repo=$releasever/update&arch=$basearch"
update-source: "https://mirrors.openeuler.org/metalink?repo=$releasever/update&arch=source"
- name: Clean dnf cache to apply metalink mirror selection
ansible.builtin.command: dnf clean all
become: true
when:
- openeuler_metalink_enabled
- _openeuler_metalink_result.changed

View File

@@ -1,9 +1,9 @@
apiVersion: kubeadm.k8s.io/{{ kubeadm_config_api_version }}
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
nodeRegistration:
criSocket: {{ cri_socket }}
---
apiVersion: kubeadm.k8s.io/{{ kubeadm_config_api_version }}
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
imageRepository: {{ kubeadm_image_repo }}
kubernetesVersion: v{{ kube_version }}

View File

@@ -34,6 +34,7 @@
when:
- etcd_data_dir_member.stat.exists
- etcd_cluster_is_healthy.rc == 0
- etcd_version is version('3.6.0', '<')
command: >-
{{ bin_dir }}/etcdctl backup
--data-dir {{ etcd_data_dir }}

View File

@@ -0,0 +1,43 @@
---
# When upgrading from etcd 3.5 to 3.6, need to clean up v2 store before upgrading.
# Without this, etcd 3.6 will crash with following error:
# "panic: detected disallowed v2 WAL for stage --v2-deprecation=write-only [recovered]"
- name: Cleanup v2 store when upgrade etcd from <3.6 to >=3.6
when:
- etcd_cluster_setup
- etcd_current_version != ''
- etcd_current_version is version('3.6.0', '<')
- etcd_version is version('3.6.0', '>=')
block:
- name: Ensure etcd version is >=3.5.26
when:
- etcd_current_version is version('3.5.26', '<')
fail:
msg: "You need to upgrade etcd to 3.5.26 or later before upgrade to 3.6. Current version is {{ etcd_current_version }}."
# Workarounds:
# Disable --enable-v2 (recommended in 20289) and do workaround of 20231 (MAX_WALS=1 and SNAPSHOT_COUNT=1)
# - https://github.com/etcd-io/etcd/issues/20809
# - https://github.com/etcd-io/etcd/discussions/20231#discussioncomment-13958051
- name: Change etcd configuration temporally to limit number of WALs and snapshots to clean up v2 store
ansible.builtin.lineinfile:
path: /etc/etcd.env
regexp: "{{ item.regexp }}"
line: "{{ item.line }}"
loop:
- { regexp: '^ETCD_SNAPSHOT_COUNT=', line: 'ETCD_SNAPSHOT_COUNT=1' }
- { regexp: '^ETCD_MAX_WALS=', line: 'ETCD_MAX_WALS=1' }
- { regexp: '^ETCD_MAX_SNAPSHOTS=', line: 'ETCD_MAX_SNAPSHOTS=1' }
- { regexp: '^ETCD_ENABLE_V2=', line: 'ETCD_ENABLE_V2=false' }
# Restart etcd to apply temporal configuration and prevent some upgrade failures
# See also: https://etcd.io/blog/2025/upgrade_from_3.5_to_3.6_issue_followup/
- name: Stop etcd
service:
name: etcd
state: stopped
- name: Start etcd
service:
name: etcd
state: started

View File

@@ -23,6 +23,14 @@
- etcd_events_cluster_setup
- etcd_image_tag not in etcd_events_current_docker_image.stdout | default('')
- name: Get currently-deployed etcd version as x.y.z format
set_fact:
etcd_current_version: "{{ (etcd_current_docker_image.stdout | regex_search('.*:v([0-9]+\\.[0-9]+\\.[0-9]+)', '\\1'))[0] | default('') }}"
when: etcd_cluster_setup
- name: Cleanup v2 store data
import_tasks: clean_v2_store.yml
- name: Install etcd launch script
template:
src: etcd.j2

View File

@@ -21,6 +21,14 @@
- etcd_events_cluster_setup
- etcd_version not in etcd_current_host_version.stdout | default('')
- name: Get currently-deployed etcd version as x.y.z format
set_fact:
etcd_current_version: "{{ (etcd_current_host_version.stdout | regex_search('etcd Version: ([0-9]+\\.[0-9]+\\.[0-9]+)', '\\1'))[0] | default('') }}"
when: etcd_cluster_setup
- name: Cleanup v2 store data
import_tasks: clean_v2_store.yml
- name: Install | Copy etcd binary from download dir
copy:
src: "{{ local_release_dir }}/etcd-v{{ etcd_version }}-linux-{{ host_architecture }}/{{ item }}"

View File

@@ -53,6 +53,12 @@
- control-plane
- network
- name: Install etcd
include_tasks: "install_{{ etcd_deployment_type }}.yml"
when: ('etcd' in group_names)
tags:
- upgrade
- name: Install etcdctl and etcdutl binary
import_role:
name: etcdctl_etcdutl
@@ -64,12 +70,6 @@
- ('etcd' in group_names)
- etcd_cluster_setup
- name: Install etcd
include_tasks: "install_{{ etcd_deployment_type }}.yml"
when: ('etcd' in group_names)
tags:
- upgrade
- name: Configure etcd
include_tasks: configure.yml
when: ('etcd' in group_names)

View File

@@ -25,8 +25,6 @@ ETCD_MAX_REQUEST_BYTES={{ etcd_max_request_bytes }}
ETCD_LOG_LEVEL={{ etcd_log_level }}
ETCD_MAX_SNAPSHOTS={{ etcd_max_snapshots }}
ETCD_MAX_WALS={{ etcd_max_wals }}
# Flannel need etcd v2 API
ETCD_ENABLE_V2=true
# TLS settings
ETCD_TRUSTED_CA_FILE={{ etcd_cert_dir }}/ca.pem

View File

@@ -88,36 +88,5 @@ dns_autoscaler_affinity: {}
# app: kube-prometheus-stack-kube-etcd
# release: prometheus-stack
# Netchecker
deploy_netchecker: false
netchecker_port: 31081
agent_report_interval: 15
netcheck_namespace: default
# Limits for netchecker apps
netchecker_agent_cpu_limit: 30m
netchecker_agent_memory_limit: 100M
netchecker_agent_cpu_requests: 15m
netchecker_agent_memory_requests: 64M
netchecker_server_cpu_limit: 100m
netchecker_server_memory_limit: 256M
netchecker_server_cpu_requests: 50m
netchecker_server_memory_requests: 64M
netchecker_etcd_cpu_limit: 200m
netchecker_etcd_memory_limit: 256M
netchecker_etcd_cpu_requests: 100m
netchecker_etcd_memory_requests: 128M
# SecurityContext (user/group)
netchecker_agent_user: 1000
netchecker_server_user: 1000
netchecker_agent_group: 1000
netchecker_server_group: 1000
# Log levels
netchecker_agent_log_level: 5
netchecker_server_log_level: 5
netchecker_etcd_log_level: info
# Policy Controllers
# policy_controller_extra_tolerations: [{effect: NoSchedule, operator: "Exists"}]

View File

@@ -87,25 +87,3 @@
when: etcd_metrics_port is defined and etcd_metrics_service_labels is defined
tags:
- etcd_metrics
- name: Kubernetes Apps | Netchecker
command:
cmd: "{{ kubectl_apply_stdin }}"
stdin: "{{ lookup('template', item) }}"
delegate_to: "{{ groups['kube_control_plane'][0] }}"
run_once: true
vars:
k8s_namespace: "{{ netcheck_namespace }}"
when: deploy_netchecker
tags:
- netchecker
loop:
- netchecker-ns.yml.j2
- netchecker-agent-sa.yml.j2
- netchecker-agent-ds.yml.j2
- netchecker-agent-hostnet-ds.yml.j2
- netchecker-server-sa.yml.j2
- netchecker-server-clusterrole.yml.j2
- netchecker-server-clusterrolebinding.yml.j2
- netchecker-server-deployment.yml.j2
- netchecker-server-svc.yml.j2

View File

@@ -1,56 +0,0 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
labels:
app: netchecker-agent
name: netchecker-agent
namespace: {{ netcheck_namespace }}
spec:
selector:
matchLabels:
app: netchecker-agent
template:
metadata:
name: netchecker-agent
labels:
app: netchecker-agent
spec:
priorityClassName: {% if netcheck_namespace == 'kube-system' %}system-node-critical{% else %}k8s-cluster-critical{% endif %}{{ '' }}
tolerations:
- effect: NoSchedule
operator: Exists
nodeSelector:
kubernetes.io/os: linux
containers:
- name: netchecker-agent
image: "{{ netcheck_agent_image_repo }}:{{ netcheck_agent_image_tag }}"
imagePullPolicy: {{ k8s_image_pull_policy }}
env:
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: MY_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
args:
- "-v={{ netchecker_agent_log_level }}"
- "-alsologtostderr=true"
- "-serverendpoint=netchecker-service:8081"
- "-reportinterval={{ agent_report_interval }}"
resources:
limits:
cpu: {{ netchecker_agent_cpu_limit }}
memory: {{ netchecker_agent_memory_limit }}
requests:
cpu: {{ netchecker_agent_cpu_requests }}
memory: {{ netchecker_agent_memory_requests }}
securityContext:
runAsUser: {{ netchecker_agent_user | default('0') }}
runAsGroup: {{ netchecker_agent_group | default('0') }}
serviceAccountName: netchecker-agent
updateStrategy:
rollingUpdate:
maxUnavailable: 100%
type: RollingUpdate

View File

@@ -1,58 +0,0 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
labels:
app: netchecker-agent-hostnet
name: netchecker-agent-hostnet
namespace: {{ netcheck_namespace }}
spec:
selector:
matchLabels:
app: netchecker-agent-hostnet
template:
metadata:
name: netchecker-agent-hostnet
labels:
app: netchecker-agent-hostnet
spec:
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
nodeSelector:
kubernetes.io/os: linux
priorityClassName: {% if netcheck_namespace == 'kube-system' %}system-node-critical{% else %}k8s-cluster-critical{% endif %}{{ '' }}
tolerations:
- effect: NoSchedule
operator: Exists
containers:
- name: netchecker-agent
image: "{{ netcheck_agent_image_repo }}:{{ netcheck_agent_image_tag }}"
imagePullPolicy: {{ k8s_image_pull_policy }}
env:
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: MY_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
args:
- "-v={{ netchecker_agent_log_level }}"
- "-alsologtostderr=true"
- "-serverendpoint=netchecker-service:8081"
- "-reportinterval={{ agent_report_interval }}"
resources:
limits:
cpu: {{ netchecker_agent_cpu_limit }}
memory: {{ netchecker_agent_memory_limit }}
requests:
cpu: {{ netchecker_agent_cpu_requests }}
memory: {{ netchecker_agent_memory_requests }}
securityContext:
runAsUser: {{ netchecker_agent_user | default('0') }}
runAsGroup: {{ netchecker_agent_group | default('0') }}
serviceAccountName: netchecker-agent
updateStrategy:
rollingUpdate:
maxUnavailable: 100%
type: RollingUpdate

View File

@@ -1,5 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: netchecker-agent
namespace: {{ netcheck_namespace }}

@@ -1,6 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
name: "{{ netcheck_namespace }}"
labels:
name: "{{ netcheck_namespace }}"

@@ -1,9 +0,0 @@
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: netchecker-server
namespace: {{ netcheck_namespace }}
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["list", "get"]

@@ -1,13 +0,0 @@
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: netchecker-server
namespace: {{ netcheck_namespace }}
subjects:
- kind: ServiceAccount
name: netchecker-server
namespace: {{ netcheck_namespace }}
roleRef:
kind: ClusterRole
name: netchecker-server
apiGroup: rbac.authorization.k8s.io

@@ -1,86 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: netchecker-server
namespace: {{ netcheck_namespace }}
labels:
app: netchecker-server
spec:
replicas: 1
selector:
matchLabels:
app: netchecker-server
template:
metadata:
name: netchecker-server
labels:
app: netchecker-server
spec:
priorityClassName: {% if netcheck_namespace == 'kube-system' %}system-cluster-critical{% else %}k8s-cluster-critical{% endif %}{{ '' }}
volumes:
- name: etcd-data
emptyDir: {}
containers:
- name: netchecker-server
image: "{{ netcheck_server_image_repo }}:{{ netcheck_server_image_tag }}"
imagePullPolicy: {{ k8s_image_pull_policy }}
resources:
limits:
cpu: {{ netchecker_server_cpu_limit }}
memory: {{ netchecker_server_memory_limit }}
requests:
cpu: {{ netchecker_server_cpu_requests }}
memory: {{ netchecker_server_memory_requests }}
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop: ['ALL']
runAsUser: {{ netchecker_server_user | default('0') }}
runAsGroup: {{ netchecker_server_group | default('0') }}
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
ports:
- containerPort: 8081
args:
- -v={{ netchecker_server_log_level }}
- -logtostderr
- -kubeproxyinit=false
- -endpoint=0.0.0.0:8081
- -etcd-endpoints=http://127.0.0.1:2379
- name: etcd
image: "{{ etcd_image_repo }}:{{ netcheck_etcd_image_tag }}"
imagePullPolicy: {{ k8s_image_pull_policy }}
env:
- name: ETCD_LOG_LEVEL
value: "{{ netchecker_etcd_log_level }}"
command:
- etcd
- --listen-client-urls=http://127.0.0.1:2379
- --advertise-client-urls=http://127.0.0.1:2379
- --data-dir=/var/lib/etcd
- --enable-v2
- --force-new-cluster
volumeMounts:
- mountPath: /var/lib/etcd
name: etcd-data
resources:
limits:
cpu: {{ netchecker_etcd_cpu_limit }}
memory: {{ netchecker_etcd_memory_limit }}
requests:
cpu: {{ netchecker_etcd_cpu_requests }}
memory: {{ netchecker_etcd_memory_requests }}
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop: ['ALL']
runAsUser: {{ netchecker_server_user | default('0') }}
runAsGroup: {{ netchecker_server_group | default('0') }}
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
tolerations:
- effect: NoSchedule
operator: Exists
serviceAccountName: netchecker-server

@@ -1,5 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: netchecker-server
namespace: {{ netcheck_namespace }}

@@ -1,15 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: netchecker-service
namespace: {{ netcheck_namespace }}
spec:
selector:
app: netchecker-server
ports:
-
protocol: TCP
port: 8081
targetPort: 8081
nodePort: {{ netchecker_port }}
type: NodePort

@@ -45,7 +45,7 @@ data:
force_tcp
}
prometheus {% if nodelocaldns_bind_metrics_host_ip %}{$MY_HOST_IP}{% endif %}:{{ nodelocaldns_prometheus_port }}
health {{ nodelocaldns_ip }}:{{ nodelocaldns_health_port }}
health {{ nodelocaldns_ip | ansible.utils.ipwrap }}:{{ nodelocaldns_health_port }}
{% if dns_etchosts | default(None) %}
hosts /etc/coredns/hosts {
fallthrough
@@ -132,7 +132,7 @@ data:
force_tcp
}
prometheus {% if nodelocaldns_bind_metrics_host_ip %}{$MY_HOST_IP}{% endif %}:{{ nodelocaldns_secondary_prometheus_port }}
health {{ nodelocaldns_ip }}:{{ nodelocaldns_second_health_port }}
health {{ nodelocaldns_ip | ansible.utils.ipwrap }}:{{ nodelocaldns_second_health_port }}
{% if dns_etchosts | default(None) %}
hosts /etc/coredns/hosts {
fallthrough
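
The switch to `ansible.utils.ipwrap` matters because a bare IPv6 address followed by `:<port>` is ambiguous (the colons blend into the address). A minimal Python sketch of what the filter does — this is an approximation for illustration, not the real `ansible.utils` plugin, and the port number is illustrative:

```python
import ipaddress

def ipwrap(value: str) -> str:
    # Sketch of ansible.utils.ipwrap: IPv6 addresses get square brackets so
    # that appending ":<port>" stays unambiguous; IPv4 addresses and
    # hostnames pass through unchanged.
    try:
        if ipaddress.ip_address(value).version == 6:
            return f"[{value}]"
    except ValueError:
        pass  # not a bare IP literal (e.g. a hostname)
    return value

print(f"health {ipwrap('169.254.25.10')}:9254")    # health 169.254.25.10:9254
print(f"health {ipwrap('fd00:20::10ca:1')}:9254")  # health [fd00:20::10ca:1]:9254
```

With the unfiltered variable, an IPv6 `nodelocaldns_ip` rendered an invalid `health` directive; wrapping fixes both the primary and secondary Corefile stanzas.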

@@ -1,7 +1,7 @@
{% for class in snapshot_classes %}
---
kind: VolumeSnapshotClass
apiVersion: snapshot.storage.k8s.io/v1beta1
apiVersion: snapshot.storage.k8s.io/v1
metadata:
name: "{{ class.name }}"
annotations:

@@ -36,7 +36,7 @@
- "localhost"
- "127.0.0.1"
- "::1"
- "{{ apiserver_loadbalancer_domain_name }}"
- "{{ apiserver_loadbalancer_domain_name | d('') }}"
- "{{ loadbalancer_apiserver.address | d('') }}"
- "{{ supplementary_addresses_in_ssl_keys }}"
- "{{ groups['kube_control_plane'] | map('extract', hostvars, 'main_access_ip') }}"
@@ -95,7 +95,7 @@
- name: Kubeadm | Create kubeadm config
template:
src: "kubeadm-config.{{ kubeadm_config_api_version }}.yaml.j2"
src: "kubeadm-config.v1beta4.yaml.j2"
dest: "{{ kube_config_dir }}/kubeadm-config.yaml"
mode: "0640"
validate: "{{ kubeadm_config_validate_enabled | ternary(bin_dir + '/kubeadm config validate --config %s', omit) }}"

@@ -2,44 +2,21 @@
- name: Ensure kube-apiserver is up before upgrade
import_tasks: check-api.yml
# kubeadm-config.v1beta4 with UpgradeConfiguration requires some values that were previously allowed as args to be specified in the config file
# TODO: Remove --skip-phases from command when v1beta4 UpgradeConfiguration supports skipPhases
- name: Kubeadm | Upgrade first control plane node to {{ kube_version }}
command: >-
timeout -k 600s 600s
{{ bin_dir }}/kubeadm upgrade apply -y v{{ kube_version }}
{%- if kubeadm_config_api_version == 'v1beta3' %}
--certificate-renewal={{ kubeadm_upgrade_auto_cert_renewal }}
--ignore-preflight-errors={{ kubeadm_ignore_preflight_errors | join(',') }}
--allow-experimental-upgrades
--etcd-upgrade={{ (etcd_deployment_type == "kubeadm") | lower }}
{% if kubeadm_patches | length > 0 %}--patches={{ kubeadm_patches_dir }}{% endif %}
--force
{%- else %}
--config={{ kube_config_dir }}/kubeadm-config.yaml
{%- endif %}
{%- if kube_version is version('1.32.0', '>=') %}
--skip-phases={{ kubeadm_init_phases_skip | join(',') }}
{%- endif %}
register: kubeadm_upgrade
when: inventory_hostname == first_kube_control_plane
failed_when: kubeadm_upgrade.rc != 0 and "field is immutable" not in kubeadm_upgrade.stderr
environment:
PATH: "{{ bin_dir }}:{{ ansible_env.PATH }}"
# TODO: When we retire kubeadm-config.v1beta3, remove --certificate-renewal, --ignore-preflight-errors, --etcd-upgrade, --patches, and --skip-phases from command, since v1beta4+ supports these in UpgradeConfiguration.node
- name: Kubeadm | Upgrade other control plane nodes to {{ kube_version }}
command: >-
{{ bin_dir }}/kubeadm upgrade node
{%- if kubeadm_config_api_version == 'v1beta3' %}
--certificate-renewal={{ kubeadm_upgrade_auto_cert_renewal }}
--ignore-preflight-errors={{ kubeadm_ignore_preflight_errors | join(',') }}
--etcd-upgrade={{ (etcd_deployment_type == "kubeadm") | lower }}
{% if kubeadm_patches | length > 0 %}--patches={{ kubeadm_patches_dir }}{% endif %}
{%- else %}
--config={{ kube_config_dir }}/kubeadm-config.yaml
{%- endif %}
--skip-phases={{ kubeadm_upgrade_node_phases_skip | join(',') }}
register: kubeadm_upgrade
when: inventory_hostname != first_kube_control_plane
failed_when: kubeadm_upgrade.rc != 0 and "field is immutable" not in kubeadm_upgrade.stderr

@@ -1,445 +0,0 @@
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
{% if kubeadm_token is defined %}
bootstrapTokens:
- token: "{{ kubeadm_token }}"
description: "kubespray kubeadm bootstrap token"
ttl: "24h"
{% endif %}
localAPIEndpoint:
advertiseAddress: "{{ kube_apiserver_address }}"
bindPort: {{ kube_apiserver_port }}
{% if kubeadm_certificate_key is defined %}
certificateKey: {{ kubeadm_certificate_key }}
{% endif %}
nodeRegistration:
{% if kube_override_hostname | default('') %}
name: "{{ kube_override_hostname }}"
{% endif %}
{% if 'kube_control_plane' in group_names and 'kube_node' not in group_names %}
taints:
- effect: NoSchedule
key: node-role.kubernetes.io/control-plane
{% else %}
taints: []
{% endif %}
criSocket: {{ cri_socket }}
{% if cloud_provider == "external" %}
kubeletExtraArgs:
cloud-provider: external
{% endif %}
{% if kubeadm_patches | length > 0 %}
patches:
directory: {{ kubeadm_patches_dir }}
{% endif %}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: {{ cluster_name }}
etcd:
{% if etcd_deployment_type != "kubeadm" %}
external:
endpoints:
{% for endpoint in etcd_access_addresses.split(',') %}
- "{{ endpoint }}"
{% endfor %}
caFile: {{ etcd_cert_dir }}/{{ kube_etcd_cacert_file }}
certFile: {{ etcd_cert_dir }}/{{ kube_etcd_cert_file }}
keyFile: {{ etcd_cert_dir }}/{{ kube_etcd_key_file }}
{% elif etcd_deployment_type == "kubeadm" %}
local:
imageRepository: "{{ etcd_image_repo | regex_replace("/etcd$","") }}"
imageTag: "{{ etcd_image_tag }}"
dataDir: "{{ etcd_data_dir }}"
extraArgs:
metrics: {{ etcd_metrics }}
election-timeout: "{{ etcd_election_timeout }}"
heartbeat-interval: "{{ etcd_heartbeat_interval }}"
auto-compaction-retention: "{{ etcd_compaction_retention }}"
{% if etcd_listen_metrics_urls is defined %}
listen-metrics-urls: "{{ etcd_listen_metrics_urls }}"
{% endif %}
snapshot-count: "{{ etcd_snapshot_count }}"
quota-backend-bytes: "{{ etcd_quota_backend_bytes }}"
max-request-bytes: "{{ etcd_max_request_bytes }}"
log-level: "{{ etcd_log_level }}"
{% for key, value in etcd_extra_vars.items() %}
{{ key }}: "{{ value }}"
{% endfor %}
serverCertSANs:
{% for san in etcd_cert_alt_names %}
- "{{ san }}"
{% endfor %}
{% for san in etcd_cert_alt_ips %}
- "{{ san }}"
{% endfor %}
peerCertSANs:
{% for san in etcd_cert_alt_names %}
- "{{ san }}"
{% endfor %}
{% for san in etcd_cert_alt_ips %}
- "{{ san }}"
{% endfor %}
{% endif %}
dns:
imageRepository: {{ coredns_image_repo | regex_replace('/coredns(?!/coredns).*$', '') }}
imageTag: {{ coredns_image_tag }}
networking:
dnsDomain: {{ dns_domain }}
serviceSubnet: "{{ kube_service_subnets }}"
{% if kube_network_plugin is defined and kube_network_plugin not in ["kube-ovn"] %}
podSubnet: "{{ kube_pods_subnets }}"
{% endif %}
{% if kubeadm_feature_gates %}
featureGates:
{% for feature in kubeadm_feature_gates %}
{{ feature | replace("=", ": ") }}
{% endfor %}
{% endif %}
kubernetesVersion: v{{ kube_version }}
{% if kubeadm_config_api_fqdn is defined %}
controlPlaneEndpoint: "{{ kubeadm_config_api_fqdn }}:{{ loadbalancer_apiserver.port | default(kube_apiserver_port) }}"
{% else %}
controlPlaneEndpoint: "{{ main_ip | ansible.utils.ipwrap }}:{{ kube_apiserver_port }}"
{% endif %}
certificatesDir: {{ kube_cert_dir }}
imageRepository: {{ kubeadm_image_repo }}
apiServer:
extraArgs:
etcd-compaction-interval: "{{ kube_apiserver_etcd_compaction_interval }}"
default-not-ready-toleration-seconds: "{{ kube_apiserver_pod_eviction_not_ready_timeout_seconds }}"
default-unreachable-toleration-seconds: "{{ kube_apiserver_pod_eviction_unreachable_timeout_seconds }}"
{% if kube_api_anonymous_auth is defined %}
{# TODO: rework once support for structured auth lands #}
anonymous-auth: "{{ kube_api_anonymous_auth }}"
{% endif %}
{% if kube_apiserver_use_authorization_config_file %}
authorization-config: "{{ kube_config_dir }}/apiserver-authorization-config-{{ kube_apiserver_authorization_config_api_version }}.yaml"
{% else %}
authorization-mode: {{ authorization_modes | join(',') }}
{% endif %}
bind-address: "{{ kube_apiserver_bind_address }}"
{% if kube_apiserver_enable_admission_plugins | length > 0 %}
enable-admission-plugins: {{ kube_apiserver_enable_admission_plugins | join(',') }}
{% endif %}
{% if kube_apiserver_admission_control_config_file %}
admission-control-config-file: {{ kube_config_dir }}/admission-controls.yaml
{% endif %}
{% if kube_apiserver_disable_admission_plugins | length > 0 %}
disable-admission-plugins: {{ kube_apiserver_disable_admission_plugins | join(',') }}
{% endif %}
apiserver-count: "{{ kube_apiserver_count }}"
endpoint-reconciler-type: lease
{% if etcd_events_cluster_enabled %}
etcd-servers-overrides: "/events#{{ etcd_events_access_addresses_semicolon }}"
{% endif %}
service-node-port-range: {{ kube_apiserver_node_port_range }}
service-cluster-ip-range: "{{ kube_service_subnets }}"
kubelet-preferred-address-types: "{{ kubelet_preferred_address_types }}"
profiling: "{{ kube_profiling }}"
request-timeout: "{{ kube_apiserver_request_timeout }}"
enable-aggregator-routing: "{{ kube_api_aggregator_routing }}"
{% if kube_token_auth %}
token-auth-file: {{ kube_token_dir }}/known_tokens.csv
{% endif %}
{% if kube_apiserver_service_account_lookup %}
service-account-lookup: "{{ kube_apiserver_service_account_lookup }}"
{% endif %}
{% if kube_oidc_auth and kube_oidc_url is defined and kube_oidc_client_id is defined %}
oidc-issuer-url: "{{ kube_oidc_url }}"
oidc-client-id: "{{ kube_oidc_client_id }}"
{% if kube_oidc_ca_file is defined %}
oidc-ca-file: "{{ kube_oidc_ca_file }}"
{% endif %}
{% if kube_oidc_username_claim is defined %}
oidc-username-claim: "{{ kube_oidc_username_claim }}"
{% endif %}
{% if kube_oidc_groups_claim is defined %}
oidc-groups-claim: "{{ kube_oidc_groups_claim }}"
{% endif %}
{% if kube_oidc_username_prefix is defined %}
oidc-username-prefix: "{{ kube_oidc_username_prefix }}"
{% endif %}
{% if kube_oidc_groups_prefix is defined %}
oidc-groups-prefix: "{{ kube_oidc_groups_prefix }}"
{% endif %}
{% endif %}
{% if kube_webhook_token_auth %}
authentication-token-webhook-config-file: {{ kube_config_dir }}/webhook-token-auth-config.yaml
{% endif %}
{% if kube_webhook_authorization and not kube_apiserver_use_authorization_config_file %}
authorization-webhook-config-file: {{ kube_config_dir }}/webhook-authorization-config.yaml
{% endif %}
{% if kube_encrypt_secret_data %}
encryption-provider-config: {{ kube_cert_dir }}/secrets_encryption.yaml
{% endif %}
storage-backend: {{ kube_apiserver_storage_backend }}
{% if kube_api_runtime_config | length > 0 %}
runtime-config: {{ kube_api_runtime_config | join(',') }}
{% endif %}
allow-privileged: "true"
{% if kubernetes_audit or kubernetes_audit_webhook %}
audit-policy-file: {{ audit_policy_file }}
{% endif %}
{% if kubernetes_audit %}
audit-log-path: "{{ audit_log_path }}"
audit-log-maxage: "{{ audit_log_maxage }}"
audit-log-maxbackup: "{{ audit_log_maxbackups }}"
audit-log-maxsize: "{{ audit_log_maxsize }}"
{% endif %}
{% if kubernetes_audit_webhook %}
audit-webhook-config-file: {{ audit_webhook_config_file }}
audit-webhook-mode: {{ audit_webhook_mode }}
{% if audit_webhook_mode == "batch" %}
audit-webhook-batch-max-size: "{{ audit_webhook_batch_max_size }}"
audit-webhook-batch-max-wait: "{{ audit_webhook_batch_max_wait }}"
{% endif %}
{% endif %}
{% for key in kube_kubeadm_apiserver_extra_args %}
{{ key }}: "{{ kube_kubeadm_apiserver_extra_args[key] }}"
{% endfor %}
{% if kube_apiserver_feature_gates or kube_feature_gates %}
feature-gates: "{{ kube_apiserver_feature_gates | default(kube_feature_gates, true) | join(',') }}"
{% endif %}
{% if tls_min_version is defined %}
tls-min-version: {{ tls_min_version }}
{% endif %}
{% if tls_cipher_suites is defined %}
tls-cipher-suites: {% for tls in tls_cipher_suites %}{{ tls }}{{ "," if not loop.last else "" }}{% endfor %}
{% endif %}
event-ttl: {{ event_ttl_duration }}
{% if kubelet_rotate_server_certificates %}
kubelet-certificate-authority: {{ kube_cert_dir }}/ca.crt
{% endif %}
{% if kube_apiserver_tracing %}
tracing-config-file: {{ kube_config_dir }}/tracing/apiserver-tracing.yaml
{% endif %}
{% if kubernetes_audit or kube_token_auth or kube_webhook_token_auth or apiserver_extra_volumes or ssl_ca_dirs | length %}
extraVolumes:
{% if kube_token_auth %}
- name: token-auth-config
hostPath: {{ kube_token_dir }}
mountPath: {{ kube_token_dir }}
{% endif %}
{% if kube_webhook_token_auth %}
- name: webhook-token-auth-config
hostPath: {{ kube_config_dir }}/webhook-token-auth-config.yaml
mountPath: {{ kube_config_dir }}/webhook-token-auth-config.yaml
{% endif %}
{% if kube_webhook_authorization %}
- name: webhook-authorization-config
hostPath: {{ kube_config_dir }}/webhook-authorization-config.yaml
mountPath: {{ kube_config_dir }}/webhook-authorization-config.yaml
{% endif %}
{% if kube_apiserver_use_authorization_config_file %}
- name: authorization-config
hostPath: {{ kube_config_dir }}/apiserver-authorization-config-{{ kube_apiserver_authorization_config_api_version }}.yaml
mountPath: {{ kube_config_dir }}/apiserver-authorization-config-{{ kube_apiserver_authorization_config_api_version }}.yaml
{% endif %}
{% if kubernetes_audit or kubernetes_audit_webhook %}
- name: {{ audit_policy_name }}
hostPath: {{ audit_policy_hostpath }}
mountPath: {{ audit_policy_mountpath }}
{% if audit_log_path != "-" %}
- name: {{ audit_log_name }}
hostPath: {{ audit_log_hostpath }}
mountPath: {{ audit_log_mountpath }}
readOnly: false
{% endif %}
{% endif %}
{% if kube_apiserver_admission_control_config_file %}
- name: admission-control-configs
hostPath: {{ kube_config_dir }}/admission-controls
mountPath: {{ kube_config_dir }}
readOnly: false
pathType: DirectoryOrCreate
{% endif %}
{% if kube_apiserver_tracing %}
- name: tracing
hostPath: {{ kube_config_dir }}/tracing
mountPath: {{ kube_config_dir }}/tracing
readOnly: true
pathType: DirectoryOrCreate
{% endif %}
{% for volume in apiserver_extra_volumes %}
- name: {{ volume.name }}
hostPath: {{ volume.hostPath }}
mountPath: {{ volume.mountPath }}
readOnly: {{ volume.readOnly | d(not (volume.writable | d(false))) }}
{% endfor %}
{% if ssl_ca_dirs | length %}
{% for dir in ssl_ca_dirs %}
- name: {{ dir | regex_replace('^/(.*)$', '\\1' ) | regex_replace('/', '-') }}
hostPath: {{ dir }}
mountPath: {{ dir }}
readOnly: true
{% endfor %}
{% endif %}
{% endif %}
certSANs:
{% for san in apiserver_sans %}
- "{{ san }}"
{% endfor %}
timeoutForControlPlane: 5m0s
controllerManager:
extraArgs:
node-monitor-grace-period: {{ kube_controller_node_monitor_grace_period }}
node-monitor-period: {{ kube_controller_node_monitor_period }}
{% if kube_network_plugin is defined and kube_network_plugin not in ["kube-ovn"] %}
cluster-cidr: "{{ kube_pods_subnets }}"
{% endif %}
service-cluster-ip-range: "{{ kube_service_subnets }}"
{% if kube_network_plugin is defined and kube_network_plugin == "calico" and not calico_ipam_host_local %}
allocate-node-cidrs: "false"
{% else %}
{% if ipv4_stack %}
node-cidr-mask-size-ipv4: "{{ kube_network_node_prefix }}"
{% endif %}
{% if ipv6_stack %}
node-cidr-mask-size-ipv6: "{{ kube_network_node_prefix_ipv6 }}"
{% endif %}
{% endif %}
profiling: "{{ kube_profiling }}"
terminated-pod-gc-threshold: "{{ kube_controller_terminated_pod_gc_threshold }}"
bind-address: "{{ kube_controller_manager_bind_address }}"
leader-elect-lease-duration: {{ kube_controller_manager_leader_elect_lease_duration }}
leader-elect-renew-deadline: {{ kube_controller_manager_leader_elect_renew_deadline }}
{% if kube_controller_feature_gates or kube_feature_gates %}
feature-gates: "{{ kube_controller_feature_gates | default(kube_feature_gates, true) | join(',') }}"
{% endif %}
{% for key in kube_kubeadm_controller_extra_args %}
{{ key }}: "{{ kube_kubeadm_controller_extra_args[key] }}"
{% endfor %}
{% if kube_network_plugin is defined and kube_network_plugin not in ["cloud"] %}
configure-cloud-routes: "false"
{% endif %}
{% if kubelet_flexvolumes_plugins_dir is defined %}
flex-volume-plugin-dir: {{ kubelet_flexvolumes_plugins_dir }}
{% endif %}
{% if tls_min_version is defined %}
tls-min-version: {{ tls_min_version }}
{% endif %}
{% if tls_cipher_suites is defined %}
tls-cipher-suites: {% for tls in tls_cipher_suites %}{{ tls }}{{ "," if not loop.last else "" }}{% endfor %}
{% endif %}
{% if controller_manager_extra_volumes %}
extraVolumes:
{% for volume in controller_manager_extra_volumes %}
- name: {{ volume.name }}
hostPath: {{ volume.hostPath }}
mountPath: {{ volume.mountPath }}
readOnly: {{ volume.readOnly | d(not (volume.writable | d(false))) }}
{% endfor %}
{% endif %}
scheduler:
extraArgs:
bind-address: "{{ kube_scheduler_bind_address }}"
config: {{ kube_config_dir }}/kubescheduler-config.yaml
{% if kube_scheduler_feature_gates or kube_feature_gates %}
feature-gates: "{{ kube_scheduler_feature_gates | default(kube_feature_gates, true) | join(',') }}"
{% endif %}
profiling: "{{ kube_profiling }}"
{% if kube_kubeadm_scheduler_extra_args | length > 0 %}
{% for key in kube_kubeadm_scheduler_extra_args %}
{{ key }}: "{{ kube_kubeadm_scheduler_extra_args[key] }}"
{% endfor %}
{% endif %}
{% if tls_min_version is defined %}
tls-min-version: {{ tls_min_version }}
{% endif %}
{% if tls_cipher_suites is defined %}
tls-cipher-suites: {% for tls in tls_cipher_suites %}{{ tls }}{{ "," if not loop.last else "" }}{% endfor %}
{% endif %}
extraVolumes:
- name: kubescheduler-config
hostPath: {{ kube_config_dir }}/kubescheduler-config.yaml
mountPath: {{ kube_config_dir }}/kubescheduler-config.yaml
readOnly: true
{% if scheduler_extra_volumes %}
{% for volume in scheduler_extra_volumes %}
- name: {{ volume.name }}
hostPath: {{ volume.hostPath }}
mountPath: {{ volume.mountPath }}
readOnly: {{ volume.readOnly | d(not (volume.writable | d(false))) }}
{% endfor %}
{% endif %}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
bindAddress: "{{ kube_proxy_bind_address }}"
clientConnection:
acceptContentTypes: {{ kube_proxy_client_accept_content_types }}
burst: {{ kube_proxy_client_burst }}
contentType: {{ kube_proxy_client_content_type }}
kubeconfig: {{ kube_proxy_client_kubeconfig }}
qps: {{ kube_proxy_client_qps }}
{% if kube_network_plugin is defined and kube_network_plugin not in ["kube-ovn"] %}
clusterCIDR: "{{ kube_pods_subnets }}"
{% endif %}
configSyncPeriod: {{ kube_proxy_config_sync_period }}
conntrack:
maxPerCore: {{ kube_proxy_conntrack_max_per_core }}
min: {{ kube_proxy_conntrack_min }}
tcpCloseWaitTimeout: {{ kube_proxy_conntrack_tcp_close_wait_timeout }}
tcpEstablishedTimeout: {{ kube_proxy_conntrack_tcp_established_timeout }}
enableProfiling: {{ kube_proxy_enable_profiling }}
healthzBindAddress: "{{ kube_proxy_healthz_bind_address }}"
hostnameOverride: "{{ kube_override_hostname }}"
iptables:
masqueradeAll: {{ kube_proxy_masquerade_all }}
masqueradeBit: {{ kube_proxy_masquerade_bit }}
minSyncPeriod: {{ kube_proxy_min_sync_period }}
syncPeriod: {{ kube_proxy_sync_period }}
ipvs:
excludeCIDRs: {{ kube_proxy_exclude_cidrs }}
minSyncPeriod: {{ kube_proxy_min_sync_period }}
scheduler: {{ kube_proxy_scheduler }}
syncPeriod: {{ kube_proxy_sync_period }}
strictARP: {{ kube_proxy_strict_arp }}
tcpTimeout: {{ kube_proxy_tcp_timeout }}
tcpFinTimeout: {{ kube_proxy_tcp_fin_timeout }}
udpTimeout: {{ kube_proxy_udp_timeout }}
metricsBindAddress: "{{ kube_proxy_metrics_bind_address }}"
mode: {{ kube_proxy_mode }}
nodePortAddresses: {{ kube_proxy_nodeport_addresses }}
oomScoreAdj: {{ kube_proxy_oom_score_adj }}
portRange: {{ kube_proxy_port_range }}
{% if kube_proxy_feature_gates or kube_feature_gates %}
{% set feature_gates = ( kube_proxy_feature_gates | default(kube_feature_gates, true) ) %}
featureGates:
{% for feature in feature_gates %}
{{ feature | replace("=", ": ") }}
{% endfor %}
{% endif %}
{# DNS settings for kubelet #}
{% if enable_nodelocaldns %}
{% set kubelet_cluster_dns = [nodelocaldns_ip] %}
{% elif dns_mode in ['coredns'] %}
{% set kubelet_cluster_dns = [skydns_server] %}
{% elif dns_mode == 'coredns_dual' %}
{% set kubelet_cluster_dns = [skydns_server,skydns_server_secondary] %}
{% elif dns_mode == 'manual' %}
{% set kubelet_cluster_dns = [manual_dns_server] %}
{% else %}
{% set kubelet_cluster_dns = [] %}
{% endif %}
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
{% if kube_version is version('1.35.0', '>=') %}
failCgroupV1: {{ kubelet_fail_cgroup_v1 }}
{% endif %}
clusterDNS:
{% for dns_address in kubelet_cluster_dns %}
- {{ dns_address }}
{% endfor %}
{% if kubelet_feature_gates or kube_feature_gates %}
{% set feature_gates = ( kubelet_feature_gates | default(kube_feature_gates, true) ) %}
featureGates:
{% for feature in feature_gates %}
{{ feature | replace("=", ": ") }}
{% endfor %}
{% endif %}
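
The `featureGates` loops in this template turn `Name=bool` strings into YAML mapping entries with a plain string replace. A small Python sketch of that transformation (helper name and two-space indentation are illustrative):

```python
def render_feature_gates(gates):
    # Each "Name=true" entry becomes a "Name: true" mapping line, mirroring
    # the Jinja loop's `replace("=", ": ")`.
    return ["featureGates:"] + [f"  {g.replace('=', ': ')}" for g in gates]

for line in render_feature_gates(["NodeSwap=true", "GracefulNodeShutdown=false"]):
    print(line)
```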

@@ -1,4 +1,4 @@
apiVersion: kubeadm.k8s.io/{{ kubeadm_config_api_version }}
apiVersion: kubeadm.k8s.io/v1beta4
kind: JoinConfiguration
discovery:
{% if kubeadm_use_file_discovery %}
@@ -15,13 +15,8 @@ discovery:
unsafeSkipCAVerification: true
{% endif %}
tlsBootstrapToken: {{ kubeadm_token }}
{# TODO: drop the if when we drop support for k8s<1.31 #}
{% if kubeadm_config_api_version == 'v1beta3' %}
timeout: {{ discovery_timeout }}
{% else %}
timeouts:
discovery: {{ discovery_timeout }}
{% endif %}
controlPlane:
localAPIEndpoint:
advertiseAddress: "{{ kube_apiserver_address }}"

@@ -1,5 +1,5 @@
---
apiVersion: kubeadm.k8s.io/{{ kubeadm_config_api_version }}
apiVersion: kubeadm.k8s.io/v1beta4
kind: JoinConfiguration
discovery:
{% if kubeadm_use_file_discovery %}
@@ -21,13 +21,8 @@ discovery:
{% endif %}
{% endif %}
tlsBootstrapToken: {{ kubeadm_token }}
{# TODO: drop the if when we drop support for k8s<1.31 #}
{% if kubeadm_config_api_version == 'v1beta3' %}
timeout: {{ discovery_timeout }}
{% else %}
timeouts:
discovery: {{ discovery_timeout }}
{% endif %}
caCertPath: {{ kube_cert_dir }}/ca.crt
{% if kubeadm_cert_controlplane is defined and kubeadm_cert_controlplane %}
controlPlane:

@@ -3,9 +3,19 @@
file:
path: "{{ kubeadm_patches_dir }}"
state: directory
mode: "0640"
mode: "0750"
when: kubeadm_patches | length > 0
- name: Kubeadm | List existing kubeadm patches
find:
paths:
- "{{ kubeadm_patches_dir }}"
file_type: file
use_regex: true
patterns:
- '^(kube-apiserver|kube-controller-manager|kube-scheduler|etcd|kubeletconfiguration)[0-9]+\+(strategic|json|merge).yaml$'
register: existing_kubeadm_patches
- name: Kubeadm | Copy kubeadm patches from inventory files
copy:
content: "{{ item.patch | to_yaml }}"
@@ -15,3 +25,13 @@
loop: "{{ kubeadm_patches }}"
loop_control:
index_var: suffix
register: current_kubeadm_patches
- name: Kubeadm | Delete old patches
loop: "{{ existing_kubeadm_patches.files | map(attribute='path') |
difference(
current_kubeadm_patches.results | map(attribute='dest')
) }}"
file:
state: absent
path: "{{ item }}"
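
The new cleanup computes the stale set as: files on disk matching the patch-name pattern, minus the files just rendered from inventory. A Python sketch of that selection (paths and helper name are illustrative; the `find` regex above is reproduced with the dot escaped):

```python
import os
import re

PATCH_NAME = re.compile(
    r'^(kube-apiserver|kube-controller-manager|kube-scheduler|etcd|'
    r'kubeletconfiguration)[0-9]+\+(strategic|json|merge)\.yaml$')

def stale_patches(existing, current):
    # Anything previously rendered that is no longer in the inventory's
    # patch list gets deleted, mirroring the `difference` filter above.
    keep = set(current)
    return [p for p in existing
            if PATCH_NAME.match(os.path.basename(p)) and p not in keep]

existing = ["/etc/kubernetes/patches/kube-apiserver0+strategic.yaml",
            "/etc/kubernetes/patches/etcd1+merge.yaml"]
current = ["/etc/kubernetes/patches/kube-apiserver0+strategic.yaml"]
print(stale_patches(existing, current))
```

Without this step, patches removed from `kubeadm_patches` would linger in `kubeadm_patches_dir` and keep being applied on the next upgrade.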

@@ -86,6 +86,8 @@ kube_vip_leaseduration: 5
kube_vip_renewdeadline: 3
kube_vip_retryperiod: 1
kube_vip_enable_node_labeling: false
kube_vip_bgp_sourceip:
kube_vip_bgp_sourceif:
# Requests for load balancer app
loadbalancer_apiserver_memory_requests: 32M

@@ -6,6 +6,17 @@
- kube_proxy_mode == 'ipvs' and not kube_proxy_strict_arp
- kube_vip_arp_enabled
- name: Kube-vip | Check mutually exclusive BGP source settings
vars:
kube_vip_bgp_sourceip_normalized: "{{ kube_vip_bgp_sourceip | default('', true) | string | trim }}"
kube_vip_bgp_sourceif_normalized: "{{ kube_vip_bgp_sourceif | default('', true) | string | trim }}"
assert:
that:
- kube_vip_bgp_sourceip_normalized == '' or kube_vip_bgp_sourceif_normalized == ''
fail_msg: "kube-vip allows only one of kube_vip_bgp_sourceip or kube_vip_bgp_sourceif."
when:
- kube_vip_bgp_enabled | default(false)
- name: Kube-vip | Check if super-admin.conf exists
stat:
path: "{{ kube_config_dir }}/super-admin.conf"

@@ -85,6 +85,16 @@ spec:
value: {{ kube_vip_bgp_peerpass | to_json }}
- name: bgp_peeras
value: {{ kube_vip_bgp_peeras | string | to_json }}
{% set kube_vip_bgp_sourceip_normalized = kube_vip_bgp_sourceip | default('', true) | string | trim %}
{% if kube_vip_bgp_sourceip_normalized %}
- name: bgp_sourceip
value: {{ kube_vip_bgp_sourceip_normalized | to_json }}
{% endif %}
{% set kube_vip_bgp_sourceif_normalized = kube_vip_bgp_sourceif | default('', true) | string | trim %}
{% if kube_vip_bgp_sourceif_normalized %}
- name: bgp_sourceif
value: {{ kube_vip_bgp_sourceif_normalized | to_json }}
{% endif %}
{% if kube_vip_bgppeers %}
- name: bgp_peers
value: {{ kube_vip_bgppeers | join(',') | to_json }}
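
The precheck and the template apply the same normalization before the mutual-exclusion test: `default('', true)` maps unset or empty values to `''`, then the value is stringified and trimmed, and at most one of the two settings may end up non-empty. A Python sketch of that rule (function name is illustrative):

```python
def check_bgp_source(sourceip=None, sourceif=None):
    # Same normalization as the Jinja above: None/empty -> "", stringify,
    # trim; then enforce that at most one value survives.
    ip = str(sourceip or "").strip()
    iface = str(sourceif or "").strip()
    if ip and iface:
        raise ValueError("kube-vip allows only one of kube_vip_bgp_sourceip "
                         "or kube_vip_bgp_sourceif.")
    return ip, iface

print(check_bgp_source("10.0.0.5", "  "))  # whitespace-only counts as unset
```

Trimming first means a value of `"  "` does not trip the assertion, and the template skips rendering the corresponding `bgp_sourceip`/`bgp_sourceif` env var entirely.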

@@ -1,19 +0,0 @@
---
apiVersion: v1
kind: Config
clusters:
- name: local
cluster:
certificate-authority: {{ kube_cert_dir }}/ca.pem
server: "{{ kube_apiserver_endpoint }}"
users:
- name: kubelet
user:
client-certificate: {{ kube_cert_dir }}/node-{{ inventory_hostname }}.pem
client-key: {{ kube_cert_dir }}/node-{{ inventory_hostname }}-key.pem
contexts:
- context:
cluster: local
user: kubelet
name: kubelet-{{ cluster_name }}
current-context: kubelet-{{ cluster_name }}

@@ -116,7 +116,7 @@ flannel_version: 0.27.3
flannel_cni_version: 1.7.1-flannel1
cni_version: "{{ (cni_binary_checksums['amd64'] | dict2items)[0].key }}"
cilium_version: "1.18.6"
cilium_version: "1.19.1"
cilium_cli_version: "{{ (ciliumcli_binary_checksums['amd64'] | dict2items)[0].key }}"
cilium_enable_hubble: false
@@ -232,12 +232,6 @@ calico_apiserver_image_repo: "{{ quay_image_repo }}/calico/apiserver"
calico_apiserver_image_tag: "v{{ calico_apiserver_version }}"
pod_infra_image_repo: "{{ kube_image_repo }}/pause"
pod_infra_image_tag: "{{ pod_infra_version }}"
netcheck_version: "1.2.2"
netcheck_agent_image_repo: "{{ docker_image_repo }}/mirantis/k8s-netchecker-agent"
netcheck_agent_image_tag: "v{{ netcheck_version }}"
netcheck_server_image_repo: "{{ docker_image_repo }}/mirantis/k8s-netchecker-server"
netcheck_server_image_tag: "v{{ netcheck_version }}"
netcheck_etcd_image_tag: "{{ etcd_image_tag }}"
cilium_image_repo: "{{ quay_image_repo }}/cilium/cilium"
cilium_image_tag: "v{{ cilium_version }}"
cilium_operator_image_repo: "{{ quay_image_repo }}/cilium/operator"
@@ -269,9 +263,9 @@ kube_vip_version: 1.0.3
kube_vip_image_repo: "{{ github_image_repo }}/kube-vip/kube-vip{{ '-iptables' if kube_vip_lb_fwdmethod == 'masquerade' else '' }}"
kube_vip_image_tag: "v{{ kube_vip_version }}"
nginx_image_repo: "{{ docker_image_repo }}/library/nginx"
nginx_image_tag: 1.28.0-alpine
nginx_image_tag: 1.28.2-alpine
haproxy_image_repo: "{{ docker_image_repo }}/library/haproxy"
haproxy_image_tag: 3.2.4-alpine
haproxy_image_tag: 3.2.13-alpine
# Coredns version should be supported by corefile-migration (or at least work with the version
# bundled with kubeadm); if not, a 'basic' upgrade can sometimes fail
@@ -379,24 +373,6 @@ node_feature_discovery_image_repo: "{{ kube_image_repo }}/nfd/node-feature-disco
node_feature_discovery_image_tag: "v{{ node_feature_discovery_version }}"
downloads:
netcheck_server:
enabled: "{{ deploy_netchecker }}"
container: true
repo: "{{ netcheck_server_image_repo }}"
tag: "{{ netcheck_server_image_tag }}"
checksum: "{{ netcheck_server_digest_checksum | default(None) }}"
groups:
- k8s_cluster
netcheck_agent:
enabled: "{{ deploy_netchecker }}"
container: true
repo: "{{ netcheck_agent_image_repo }}"
tag: "{{ netcheck_agent_image_tag }}"
checksum: "{{ netcheck_agent_digest_checksum | default(None) }}"
groups:
- k8s_cluster
etcd:
container: "{{ etcd_deployment_type != 'host' }}"
file: "{{ etcd_deployment_type == 'host' }}"

@@ -33,10 +33,6 @@ kube_version_min_required: "{{ (kubelet_checksums['amd64'] | dict2items)[-1].key
## Kube Proxy mode One of ['ipvs', 'iptables', 'nftables']
kube_proxy_mode: ipvs
# Kubeadm config api version
# If kube_version is v1.31 or higher, it will be v1beta4, otherwise it will be v1beta3.
kubeadm_config_api_version: "{{ 'v1beta4' if kube_version is version('1.31.0', '>=') else 'v1beta3' }}"
# Debugging option for the kubeadm config validate command
# Set to false only for development and testing scenarios where validation is expected to fail (pre-release Kubernetes versions, etc.)
kubeadm_config_validate_enabled: true
@@ -152,8 +148,6 @@ manual_dns_server: ""
# Can be host_resolvconf, docker_dns or none
resolvconf_mode: host_resolvconf
# Deploy netchecker app to verify DNS resolve as an HTTP service
deploy_netchecker: false
# Ip address of the kubernetes DNS service (called skydns for historical reasons)
skydns_server: "{{ kube_service_subnets.split(',') | first | ansible.utils.ipaddr('net') | ansible.utils.ipaddr(3) | ansible.utils.ipaddr('address') }}"
skydns_server_secondary: "{{ kube_service_subnets.split(',') | first | ansible.utils.ipaddr('net') | ansible.utils.ipaddr(4) | ansible.utils.ipaddr('address') }}"
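
The `skydns_server` filter chain above takes the first service CIDR and picks the 3rd and 4th addresses in it (index 0 is the network address). The same arithmetic in a short Python sketch; the example subnet `10.233.0.0/18` is only an illustrative default:

```python
import ipaddress

def skydns_addresses(kube_service_subnets: str) -> tuple:
    # First subnet from the comma-separated list, then addresses at
    # indexes 3 and 4, matching ipaddr('net') | ipaddr(3) / ipaddr(4).
    net = ipaddress.ip_network(kube_service_subnets.split(",")[0].strip())
    return str(net[3]), str(net[4])

primary, secondary = skydns_addresses("10.233.0.0/18,10.233.64.0/18")
print(primary, secondary)  # 10.233.0.3 10.233.0.4
```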


@@ -14,13 +14,16 @@ crictl_checksums:
1.33.0: sha256:4224acfef4d1deba2ba456b7d93fa98feb0a96063ef66024375294f1de2b064f
crio_archive_checksums:
arm64:
1.35.1: sha256:15fe5c7b87c985a3a78324227b920a01f3309fd1aa5eadfaa38fd48a4dd96d17
1.35.0: sha256:e57175a4d00387b78adfbe248d087d8127bed625afb529e34b2c90d08cfdaf87
1.34.6: sha256:ac189974bcc1cb6829e7b61a39bc3f34fc27a32e5c9d2628bdfc74f88edb6988
1.34.5: sha256:999a5dc2dc9854222aeff8a20897e0b34f0ba02c9b260b611d66c62e00e279e0
1.34.4: sha256:d176f6256d606a3fc279f9f2994ef4a4c4cbaaa0601f4d1bba1a19bec5674ce9
1.34.3: sha256:314595247054b53767a736e24bc3030a5f7c17552944c62b2e190c9e95fe4ca6
1.34.2: sha256:ac7530f7fc9d531a87bfdfcae9cf8bf81a8bbdb75e63a046ed96911aa7b68ebd
1.34.1: sha256:41a71cab6a61ae429ec447d572fd1cdea0a7e33d62aaa58c3b07467665b50b9f
1.34.0: sha256:3006658270477c5fb1e88e9124e40982d2ba7b34495fcc12f0fecd33bbab9a5a
1.33.10: sha256:1fb33599cccf590594b3a29ca1e3f45140bd25bdb836154dbcbd5eb3c4d21ace
1.33.9: sha256:bfcd534db3d1a9380dd7007d623e1eb3250ba64f7c4657e79e9e99b1d874f8f1
1.33.8: sha256:59c91726535dcadd0372df0c6aa8595e4d59590994b598b2d97ea2510b216359
1.33.7: sha256:af3ea22d3d6944c9a907c6c13d77e9fc4dbcf3972ffbde18dd6f37f1c2ffbd0d
@@ -31,28 +34,17 @@ crio_archive_checksums:
1.33.2: sha256:0a161cb1437a50fbdb04bf5ca11dbec8bfc567871d0597a5676737278a945a36
1.33.1: sha256:6bf135db438937f0ab7a533af64564a0fb1d2079a43723ce9255ecbf9556ae05
1.33.0: sha256:8a0dbee2879495d5b33e6fdeac32e5d86c356897bdcf3a94cd602851620ce8b5
1.32.13: sha256:f40004183d93bb203231385b5dd07a32e17eced47213817c1958ccc9eea73f70
1.32.12: sha256:26a5138f4e4f15d370630c3bb8bf04fe28b24c57ce2bb11717a2c9a2e1c54404
1.32.11: sha256:25c6ccfe9b70bf12222577b4cbf286ade9e2d112ab10c7d4507ba12cbcfad5ba
1.32.10: sha256:4e8ceb6f2c936e31a9b892a076deecc52be9feac4acf8af242fb6db817fda9b1
1.32.9: sha256:f854848dc5ae54ea03e48f2bc6d6ffbea2173de45c3d7a2abbc3af3abcb779f9
1.32.8: sha256:1da6d9bd9e3a7f2d2e17310353c1d41c68d5d77606b8933a95f399db1ec809c7
1.32.7: sha256:02a0f37f87eda1adf73a2f7145dbead4db9cb7470083cd474fe2970853bb32ff
1.32.6: sha256:8b9a3a0ec3a7d1476396e4893ae9358eff1448d7631c27725d651cbfc4071902
1.32.5: sha256:1725d914b2041b428e5346202a4d874796ed146bac0170084e09d8f430af3c2e
1.32.4: sha256:06ccee8b31963f80c0253bf8c6ba56afa222fc0608ca309b21ace2d8748e3023
1.32.3: sha256:f196bdc30c8effbbc8ec54f99e2598e34a901a7996a2f8a53f1f9134b0dc1b80
1.32.2: sha256:627df634df178baf2800c8eb68185489e82f78b0b33ea5bec2bf9ce55ad57647
1.32.1: sha256:f64da0ef41604575b476ad6d7288ca14f56fc06cc0ca138a5c3dc933427f7b32
1.32.0: sha256:b092eddabedac98a0f8449dc535acfec0e14c21f59cabe8f9703043d995a1a41
amd64:
1.35.1: sha256:cd819546f01ae9dddd4a85b82f220518b37596053555a85e4b4a3d200a6e9354
1.35.0: sha256:55b6d3e9fc9a5864ab5cdf0b24d54b1dcbaf6d4919274b3b9eb37bfc4b0b8cb5
1.34.6: sha256:9f17d9a7dc8d8c4fc16eccca65fe5db8177392f26156335dc6318a14215a5cd1
1.34.5: sha256:d6606fb6d686b8f814dfec801f0f3cf2ded974c194fa90facefda36075b6fab2
1.34.4: sha256:f6348a781c34b433fe1c5150da3408e51e828b610eacbe734405e9c31136d810
1.34.3: sha256:e269914f3bc4f36ac87cd593d74daaa43c390571994062180019248be32cc6f7
1.34.2: sha256:3a0012938ed389e9270a208bb73b250062d5f1be5798472b1728403d55ddc1da
1.34.1: sha256:22c1e4d68d9339aa58a1b0f1b40a8944102934a7505105abe461dc8a7e3de540
1.34.0: sha256:5a8bc5c3b8072cb9bde1cf025d5597f75bf21018712c5b72d5cb0657948595c8
1.33.10: sha256:1fcf2f23ef874b3df04957f15789fc14eeb34020550fe4307c9fc81fc0490acd
1.33.9: sha256:81c20a12866d9a7c08c6e381ed326141c917454b696a05b46ae27665fe3c5cfa
1.33.8: sha256:537adda39074377893f1f650a71b576ba487b3c4d2ee55e9b22f4e95fc188594
1.33.7: sha256:e2999436a272c77370241a4f962c80737698dd8c2400fe75e5c7cf2142c96001
@@ -63,28 +55,17 @@ crio_archive_checksums:
1.33.2: sha256:6e82739bbbeae12d571a277a88d85e8a0e23dbc87529414a91ee5f2e23792dcf
1.33.1: sha256:036063194028d24c75b9ce080e475ad97bacc955de796b7c895845294db8edbf
1.33.0: sha256:dad0cec9e09368b37b35ce824b0ef517a1b33365c4bb164fe82310c73c886f7e
1.32.13: sha256:27e2bf049f589a568d45c4fdd0eaf119680176c202bd09219f8726ba37f9c21e
1.32.12: sha256:13cb9676686c0ccd6bd7ffef9125f6370f803f08a559cf31f017193619891960
1.32.11: sha256:98424dbe3eb1377b314bb35b30842987ccc800faa2f8145d52eb2a9c1efa17be
1.32.10: sha256:b8e66bd33c885baf65535e671a120de4d7675833a75489403a9406e5fd2faa5e
1.32.9: sha256:59b861b9c8913328c9bc97b3bcb007951b0c3bf6c9f40fbad236be4b31534503
1.32.8: sha256:39b10999bc26ebea7714fb08d6eaef5f8bac63de3c8bbabae6a7d716c93cdb2e
1.32.7: sha256:2592c2aad6eabf7109d62f49417c14a78fabedd24eab0915770d92610e032f89
1.32.6: sha256:430ffcd8a140177b453ff75f4f11c22483378f4751f2e62379526b6ef817d184
1.32.5: sha256:e31f6d9acb955bb6065ae1bbb4bb71e23ecf61417b4c03ea87e152ff7ae45b5e
1.32.4: sha256:9934370708bfc641649bef83cd8df0745e8d3a3887b67062ae970d95b58003f4
1.32.3: sha256:860c53b91dbe547b0cf23837880506a92348783efd9a7003a7da3fff6555fa28
1.32.2: sha256:3ab6b4cc7641c2d181c2141ca42edecaac837d1409caef9311ebc57fb922fbb6
1.32.1: sha256:d35de1e765481018c7ccdc92edeb59b25938f3bd9d1670440e7ccd3d599f95a7
1.32.0: sha256:8f483f1429d2d9cd6bfa6db2e3a4263151701dd4f05f2b1c06cf8e67c44ea67e
ppc64le:
1.35.1: sha256:b4a23e9f70297f01da2840f94b82adf2ac67a4017e1d93f0c20526637df282ca
1.35.0: sha256:081ab73a6970ac3c68893dea9a03b0732ca22ab44a2aa8794fddac0bd4dfa749
1.34.6: sha256:395a475c0181a0c82e89e6dd8e258c6c0529f889a7fc9d0a54da3218b76f58f4
1.34.5: sha256:3a10d4c1406df01bd9ab88750eabc1273964e9c5f24c7d4a0b719ae77e6cfec2
1.34.4: sha256:dca59a28fe9b0b9163418eca1545c9ed01cf514179f108d14e462c6074fd103c
1.34.3: sha256:4dd782484eeb460b9a95e6e2e07474216fc02ad45a27ba871799d18f2b6ee0ae
1.34.2: sha256:d4c3c9ba24b1b0eabf3c11ddec98801dda7a87b0529706e9ede18b8cc9e4182a
1.34.1: sha256:cba0ac74e7202fe28cf8aa895b83f7a30d78b148666add78e19215259f629bb0
1.34.0: sha256:e9e41d14439db0ca88cf2cd8533038203f379c25cd612f37635c17908e050ebf
1.33.10: sha256:da8933e5b90be44e818f2a3d165957897adac3570f42f73131d91edab0201ad5
1.33.9: sha256:c0a9e60800f66f85c70615128fec5a8358ffde0f715a4058163707dbcca8eb94
1.33.8: sha256:1d69c01512e8ebdd51fc70fc64473a31d492e8db095c0ee5d3ee58722048150c
1.33.7: sha256:076e7519bfff72a43fb1121ce836eee3cc1fec5bb5a59a11747c514e9d162d26
@@ -95,29 +76,18 @@ crio_archive_checksums:
1.33.2: sha256:8ed65404a57262a9f8eb75b61afa37fcec134472eb1a6d81f1889a74ff32c651
1.33.1: sha256:12646aca33f65fe335c27d3af582c599584d3f51185f01044e7ddd0668bb2b4c
1.33.0: sha256:b4fa46b25538d8145197f8bf2e935486392c0ca2a9fa609aedd02b9f106d37a6
1.32.13: sha256:52e9c38bb1a11abfe4f271eb4d4675cc99cfbaef3d35fd5572be8e63659b08ab
1.32.12: sha256:9ba4f2c3be48c0f1f3228ef6322aeb3738f3ef461fd483a0cb4c2e5b067f080c
1.32.11: sha256:6c2036f2ed7134c596b5a453a06fbb7e646db9586bff0d993f5223dccf167420
1.32.10: sha256:ae4740c6bb6f346338f94508c74d5b1ec94f2691cb12f9a9add437fee5391f8d
1.32.9: sha256:604bd6f866be327951942656931847c3623cd1e138197f153dd4d5537dd19f11
1.32.8: sha256:b7be7a811d598c317b04db75769ac2a2e73633b4511513f1851f8f8fed71655e
1.32.7: sha256:cc4cb9e5337716fbd341e84dfd59e80a4cfd2c28b70a30223a29bbe2a7607203
1.32.6: sha256:f2b80598398dfbc5672696309dce2cb9c2ae80eda9d9b86141cc80995bc3bb92
1.32.5: sha256:2886b8392452ee6e91d87e7228d3720a21b89e4398291f7479ec68ddb0f4f7c0
1.32.4: sha256:533f6a6d252be8e78a9df4c911df5c3f4b361c608939427839fa4db682ade0a2
1.32.3: sha256:bab472e532ed31307f92781717b32016ad02dc25b9a7facf158eab0ff49531c5
1.32.2: sha256:680928bbeb84df7e87a17ad059679bb365a8d68781819798175e370629c293e6
1.32.1: sha256:e59948b183ca87bf3cf4e54ebd5d3ac9418b1e88af4dc92883323003bd16412a
1.32.0: sha256:e0544544c91f603afaf54ed814c8519883212bcb149f53a8be9bb0c749e9ec86
kubelet_checksums:
arm64:
1.35.2: sha256:eaf11f9c20c385624775671662e127b083c9b5d0b451fd613f9468e09ca5656c
1.35.1: sha256:73475c6db8fd8a9780b1b378fa2f917875e6146166c24603c1abc6eafd4493a8
1.35.0: sha256:aa658d077348b43d238f50966a583f4244b2a7d45590c77b3b165b7d44983ab8
1.34.5: sha256:a4cd54fa5af5ed6b4ed14cf172d30f53fd1f197d7d3f43af733b54b19cc87ac5
1.34.4: sha256:c78845473c434ee85a2444eeab87f8b20f524e3ab6854a078f79468f44aad8f5
1.34.3: sha256:765b740e3ad9c590852652a2623424ec60e2dddce2c6280d7f042f56c8c98619
1.34.2: sha256:3e31b1bee9ab32264a67af8a19679777cd372b1c3a04b5d7621289cf137b357c
1.34.1: sha256:6a66bc08d6c637fcea50c19063cf49e708fde1630a7f1d4ceca069a45a87e6f1
1.34.0: sha256:e45a7795391cd62ee226666039153832d3096c0f892266cd968936e18b2b40b0
1.33.9: sha256:c5719223cf378ac6bdd6a7ed79a75ba428bdc4468da70e641b2d4f73f70de6e0
1.33.8: sha256:e835f15be6d8b7b27b963a46c4a054f7663c26741f17e003bfcb8271350cf882
1.33.7: sha256:3035c44e0d429946d6b4b66c593d371cf5bbbfc85df39d7e2a03c422e4fe404a
1.33.6: sha256:7d8b7c63309cfe2da2331a1ae13cce070b9ba01e487099e7881a4281667c131d
@@ -128,13 +98,16 @@ kubelet_checksums:
1.33.1: sha256:10540261c311ae005b9af514d83c02694e12614406a8524fd2d0bad75296f70d
1.33.0: sha256:ae5a4fc6d733fc28ff198e2d80334e21fcb5c34e76b411c50fff9cb25accf05a
amd64:
1.35.2: sha256:20887f461c0de96b0cb14c7af6b897f92d424ac078f8642f98e83ef52a0bf03e
1.35.1: sha256:e7343310e03ff0d424df4397bdfa4468947d6d1f0f93dac586c1e8d6e4086d5d
1.35.0: sha256:2f4ed7778681649b81244426c29c5d98df60ccabf83d561d69e61c1cbb943ddf
1.34.5: sha256:660c7da5cca02dcd83ba21bb3df95a8983bb1c0169af396a8cc234a6d6e7297f
1.34.4: sha256:03b8fea715a7ef82eeaf518dee34c72670c57cc7bc40dc1320c04fbf4f15172f
1.34.3: sha256:0e759f40bbc717c05227ae3994b77786f58f59ffa0137a34958c6b26fa5bcbbd
1.34.2: sha256:9c5e717b774ee9b9285ce47e7d2150c29e84837eb19a7eaa24b60b1543c9d58f
1.34.1: sha256:5a72c596c253ea0b0e5bcc6f29903fd41d1d542a7cadf3700c165a2a041a8d82
1.34.0: sha256:5c0d28cea2a3a5c91861dda088a29d56c1b027e184dae1d792686f0710750076
1.33.9: sha256:d0cec9b15e1ba1b4e3595754aa2d5200d1c8c704892ac07afe5b04b44bdf288c
1.33.8: sha256:1caa69c5328cfa774218f75f0621a6f10a1b97e095af85015f468aeb8fdf956a
1.33.7: sha256:2cea40c8c6929330e799f8fc73233a4b61e63f208739669865e2a23a39c3a007
1.33.6: sha256:10cd08fe1f9169fd7520123bcdfff87e37b8a4e21c39481faa382f00355b6973
@@ -145,13 +118,16 @@ kubelet_checksums:
1.33.1: sha256:f7224648451dd4f9f2c4f79416f9874223c286ce41727788965fd0341ddb59c4
1.33.0: sha256:dd416d94850c342226d3dcdce838518b040ccea16548bfeaf2595934af88ef60
ppc64le:
1.35.2: sha256:6516f318a3711e83c0b3b8b069514a87c7dfeda931082c1b561043a7c54e312e
1.35.1: sha256:ec8b7f870043f711b5d73e528342af1705d6ad7f8308d7f31d74d967986b54f6
1.35.0: sha256:f24eb1244878a3876fe180e6052822cc9998033850478b2f4776e5c3b09baecd
1.34.5: sha256:625858ed19559873af61728a50cc250419432ecaa2c8062f78e86e41df358f3c
1.34.4: sha256:fab75e3eb1e0edf15aef7e8ba219256b44f047544ac421737d1778784fa46676
1.34.3: sha256:67dcceb6d91710e4da7af720eda7b20fd4e8c24237fc345602bb54439ad8ccca
1.34.2: sha256:a195f278b9bac26803f1e26b0f608e0dce66aad033e8c043e8555775612530c9
1.34.1: sha256:c4782dbf1987680e9b2baa3ecf5db9e66395772e82b251eb73a150fbfbe0b906
1.34.0: sha256:ed663fa4ff3e305276dd889885303e07989dfab073e95ef2da931b975f6686e8
1.33.9: sha256:2afd4985fa8ef88bbc8e40691b83ad44ccf8ae2e57a17297921f922b4aa6f438
1.33.8: sha256:392ed39b6c037bc5c510412c9b5cfd29238d31dd67d1a3cbae7ef4a274304c63
1.33.7: sha256:f96dd4272ca8eccf1f93fb5162323840b9286c5a42a5305fcc1b4d47889534d3
1.33.6: sha256:00ae91297503518efd237d40900af4de0067597ae4f2ab8250ddb629ffb6df05
@@ -163,13 +139,16 @@ kubelet_checksums:
1.33.0: sha256:6fa5abbc14d65b943b00fcfc8a6ac7eb39fd7e924271738c6f17e0b7e74c665b
kubectl_checksums:
arm:
1.35.2: sha256:02b6affb0da9837b88354e44a7a188cbe2ad522060c880c70eacd33f19944c35
1.35.1: sha256:dbe14e5b12184d72978b17b167aedc3f42f4a1faf249180025d6359eebcd983e
1.35.0: sha256:dca28f6af03b31ca6043baa1da7332472c7a3df743606a758534b9ac3ed7ecce
1.34.5: sha256:a042713dd3911a46c2118d2df3967cfcee0e3cdf4e78865e1d427a67cf58e7ba
1.34.4: sha256:3a6e631bdbb79e633d23055dadc97b45f45d325105ddf40e696de2a324a254c0
1.34.3: sha256:e0cf1eddede6abfd539e30ccbb4e50f65b2d6ff44b3bb9d9107ea8775a90a7e4
1.34.2: sha256:18e03c1c6ab1dbff6d2a648bf944213f627369d1daeea5b43a7890181ab33abf
1.34.1: sha256:ca6218ae8bf366bd8ccdcb440b756c67422a4e04936163845f74d8c056e786ee
1.34.0: sha256:69d2ce88274caf9d9117b359cc27656fb6f9dd6517c266cfd93c6513043968b8
1.33.9: sha256:d6b8a351fb5f1409e2c50c52452884eca09e56cabdaae03cfaa40467661d3ecc
1.33.8: sha256:734dea07663751c8b45926c843e2c250f13473d65f396555a1ecfe0c9c502fa8
1.33.7: sha256:f6b9ac99f4efb406c5184d0a51d9ed896690c80155387007291309cbb8cdd847
1.33.6: sha256:89bcef827ac8662781740d092cff410744c0653d828b68cc14051294fcd717e6
@@ -180,13 +159,16 @@ kubectl_checksums:
1.33.1: sha256:6b1cd6e2bf05c6adaa76b952f9c4ea775f5255913974ccdb12145175d4809e93
1.33.0: sha256:bbb4b4906d483f62b0fc3a0aea3ddac942820984679ad11635b81ee881d69ab3
arm64:
1.35.2: sha256:cd859449f54ad2cb05b491c490c13bb836cdd0886ae013c0aed3dd67ff747467
1.35.1: sha256:706256e21a4e9192ee62d1a007ac0bfcff2b0b26e92cc7baad487a6a5d08ff82
1.35.0: sha256:58f82f9fe796c375c5c4b8439850b0f3f4d401a52434052f2df46035a8789e25
1.34.5: sha256:2d433b53b99ea532f877df6fa5044286e3950d4933967ac3d99262760bc649fd
1.34.4: sha256:5b982c0644ab1e27780246b9085a5886651b4a7ed86243acbb2bacc1bea01dda
1.34.3: sha256:46913a7aa0327f6cc2e1cc2775d53c4a2af5e52f7fd8dacbfbfd098e757f19e9
1.34.2: sha256:95df604e914941f3172a93fa8feeb1a1a50f4011dfbe0c01e01b660afc8f9b85
1.34.1: sha256:420e6110e3ba7ee5a3927b5af868d18df17aae36b720529ffa4e9e945aa95450
1.34.0: sha256:00b182d103a8a73da7a4d11e7526d0543dcf352f06cc63a1fde25ce9243f49a0
1.33.9: sha256:af4dc943a6f447ecb070340efe63c7f8ee2808e6c0bc42126efe7cde0cc1e69b
1.33.8: sha256:76e284669f1f6343bd9fe2a011757809c8c01cf51da9f85ee6ef4eb93c8393a8
1.33.7: sha256:fa7ee98fdb6fba92ae05b5e0cde0abd5972b2d9a4a084f7052a1fd0dce6bc1de
1.33.6: sha256:3ab32d945a67a6000ba332bf16382fc3646271da6b7d751608b320819e5b8f38
@@ -197,13 +179,16 @@ kubectl_checksums:
1.33.1: sha256:d595d1a26b7444e0beb122e25750ee4524e74414bbde070b672b423139295ce6
1.33.0: sha256:48541d119455ac5bcc5043275ccda792371e0b112483aa0b29378439cf6322b9
amd64:
1.35.2: sha256:924eb50779153f20cb668117d141440b95df2f325a64452d78dff9469145e277
1.35.1: sha256:36e2f4ac66259232341dd7866952d64a958846470f6a9a6a813b9117bd965207
1.35.0: sha256:a2e984a18a0c063279d692533031c1eff93a262afcc0afdc517375432d060989
1.34.5: sha256:6a17dd8387783b3144a65535e38d02c351027e9718ea34a6c360476cb26d28bb
1.34.4: sha256:d50c359d95e0841eaad08ddc27c7be37cba8fdccfba5c8e2ded65e121ff112db
1.34.3: sha256:ab60ca5f0fd60c1eb81b52909e67060e3ba0bd27e55a8ac147cbc2172ff14212
1.34.2: sha256:9591f3d75e1581f3f7392e6ad119aab2f28ae7d6c6e083dc5d22469667f27253
1.34.1: sha256:7721f265e18709862655affba5343e85e1980639395d5754473dafaadcaa69e3
1.34.0: sha256:cfda68cba5848bc3b6c6135ae2f20ba2c78de20059f68789c090166d6abc3e2c
1.33.9: sha256:9e33e3234c0842cd44a12c13e334b4ce930145ea84b855ce7cc0a7b6bc670c22
1.33.8: sha256:7f9c3faab7c9f9cc3f318d49eb88efc60eb3b3a7ce9eee5feb39b1280e108a29
1.33.7: sha256:471d94e208a89be62eb776700fc8206cbef11116a8de2dc06fc0086b0015375b
1.33.6: sha256:d25d9b63335c038333bed785e9c6c4b0e41d791a09cac5f3e8df9862c684afbe
@@ -214,13 +199,16 @@ kubectl_checksums:
1.33.1: sha256:5de4e9f2266738fd112b721265a0c1cd7f4e5208b670f811861f699474a100a3
1.33.0: sha256:9efe8d3facb23e1618cba36fb1c4e15ac9dc3ed5a2c2e18109e4a66b2bac12dc
ppc64le:
1.35.2: sha256:2101400af7fb53d326cee8d3411c9348e0735ac85d7059cd61514632181b8810
1.35.1: sha256:bced44e491ce52cce11e2b4bd4bd9181f4f963ffe868438778d028d56485c5d9
1.35.0: sha256:8989809d0ac771244dabe50ed742249ac60eeb6d385cd234ee151eb40b7c32c4
1.34.5: sha256:22e5813388d20881c2225c86420cd2ff69ed7fc5e953842bb041d4842c38d7f7
1.34.4: sha256:b083c39879483816f34d1f7e2e31e70ec48984fcc1753c79f4b846cfedbf41ac
1.34.3: sha256:ae239b7f6f071e47014e1b5b20aa60626e06b32922a6b5054562ae2c5fa82c18
1.34.2: sha256:49a985986a9add6c229c628bf2a83addebbdeeef40469fce2a54e51b6f1bb05b
1.34.1: sha256:45499f0728b4a3428400db289edb444609d41787061f09b66f18028c0a73652f
1.34.0: sha256:1773805a0c128f4d267b2e11f4c74cac287e9a07fffaecc3f7af6df9c8aaf82c
1.33.9: sha256:0641f8f8a6153c13dc3ab90a86e242d095a218d30e13cf42c41000e9a7ccc9c3
1.33.8: sha256:aa079f403c80ba6017449c230733fed4e5d7b0a8700bd6590ee202161b8b12af
1.33.7: sha256:0807c38a1342ab8dea6435f33d5897a01527d348a968a5c4ca2929769f3d54f2
1.33.6: sha256:4b056b1749c619fab6a855247c3bd04123f2b61cf136ca6bddf69ff97a727e32
@@ -232,13 +220,16 @@ kubectl_checksums:
1.33.0: sha256:580d076c891711ec37afaf5994f72a8aad9d45c25413e6e94648e988a5a9933a
kubeadm_checksums:
arm64:
1.35.2: sha256:b540b734e9eaeb134e2cb7c7803dc62160589ac3b2615d717ce41acb22bbec98
1.35.1: sha256:80097a3c4ef824f4edfe131d2bd429772c4be3a460c42a44f2320164a917de32
1.35.0: sha256:1dac7dc2c6a56548bbc6bf8a7ecf4734f2e733fb336d7293d84541ebe52d0e50
1.34.5: sha256:cc8a59c0f04a1d7389790dc872b1c714e4cf4d3ba0c38ca2f4ca1388fb38b774
1.34.4: sha256:d8028b7e8c8d6c9b3fc3da6bc88d4d0cfb33df1b4b026a7d6e8c35d1471c9f6e
1.34.3: sha256:697cf3aa54f1a5740b883a3b18a5d051b4032fd68ba89af626781a43ec9bccc3
1.34.2: sha256:065f7de266c59831676cc48b50f404fd18d1f6464502d53980957158e4cab3a7
1.34.1: sha256:b0dc5cf091373caf87d069dc3678e661464837e4f10156f1436bd35a9a7db06b
1.34.0: sha256:6b7108016bb2b74132f7494e200501d6522682c01759db91892051a052079c77
1.33.9: sha256:d57594c581998d011b7c3ec77fde8b5a2d9e37885f21b749b423087715dc4511
1.33.8: sha256:b5248b51e66e4716261f2c926fe2f08a293795e6863099e7792b4d57dbb9109e
1.33.7: sha256:b24eeeff288f9565e11a2527e5aed42c21386596110537adb805a5a2a7b3e9ce
1.33.6: sha256:ef80c198ca15a0850660323655ebf5c32cc4ab00da7a5a59efe95e4bcf8503ab
@@ -249,13 +240,16 @@ kubeadm_checksums:
1.33.1: sha256:5b3e3a1e18d43522fdee0e15be13a42cee316e07ddcf47ef718104836edebb3e
1.33.0: sha256:746c0ee45f4d32ec5046fb10d4354f145ba1ff0c997f9712d46036650ad26340
amd64:
1.35.2: sha256:a51cb85c70c98ec6868fd3413ac786af5fab4ce51438963752ec5f58e68e5452
1.35.1: sha256:8a7ff344eef1bfba88f9a74b3fdc9ea4448c94f1b3cefb8c0aeeaf1f96e05053
1.35.0: sha256:729e7fb34e4f1bfcf2bdaf2a14891ed64bd18c47aaab42f8cc5030875276cfed
1.34.5: sha256:e381f7f5c41f2a6916432177d95c1ee95d5539f790bbeacafe9ef61dc68e8e35
1.34.4: sha256:b967f1fa0e36621c402d38bb560eb4a943954d5cf5a00e5150842f6f5da73455
1.34.3: sha256:f9ce265434d306e59d800b26f3049b8430ba71f815947f4bacdcdc33359417fb
1.34.2: sha256:6a2346006132f6e1ed0b5248e518098cf5abbce25bf11b8926fb1073091b83f4
1.34.1: sha256:20654fd7c5155057af5c30b86c52c9ba169db6229eee6ac7abab4309df4172e7
1.34.0: sha256:aecc23726768d1753fd417f6e7395cb1a350373295e8e9d9f80e95ed3618e38e
1.33.9: sha256:9732cc2383e73f64275326e02a5595c792a94ee0ebf84cea37a32fcbf926e6e5
1.33.8: sha256:8259af514dc3655e8abec1a69b637f31cce2ecb940a80ae4a268e5287890f009
1.33.7: sha256:c10813d54f58ef33bbe6675f3d39c8bd401867743ebc729afdd043265040c31d
1.33.6: sha256:c1b84cb3482dd79e26629012f432541ccb505c17f5073aa1fdbca26b1e4909fd
@@ -266,13 +260,16 @@ kubeadm_checksums:
1.33.1: sha256:9a481b0a5f1cee1e071bc9a0867ca0aad5524408c2580596c00767ba1a7df0bd
1.33.0: sha256:5a65cfec0648cabec124c41be8c61040baf2ba27a99f047db9ca08cac9344987
ppc64le:
1.35.2: sha256:8a3800d88070679a09d2c5cd5dd479d70539dce84415623e876cbaf366d94043
1.35.1: sha256:eec12948cfabc18115636c44aca894bf9abef3b2ea73cba180314ee3c218dcca
1.35.0: sha256:77a466e1b6a8e28362a729541269de0a7c4a6b9e7770cccefcd745502e656b90
1.34.5: sha256:5e5d7dba56b5bb663e70a5d7962b0f6d4328d9b9a8ee77ddbb2e9760265ff61d
1.34.4: sha256:69f1065e718ef2aa5f0287444ef97bd4a5fb8841fc0662f54ca8992a39865391
1.34.3: sha256:2b8b48b3b0eb657e04122a158cb7fcad964fba5bd2d8e07f8eeec6f856a63ecf
1.34.2: sha256:bea4ed6d971523da794a802de15910b08c09e23bc4c850ee3b953c4bdb0b7976
1.34.1: sha256:ddb6bd80bee0719924ae901672b99205226badab74fb13a9e1bb6d3de49fbb21
1.34.0: sha256:7201ba36f44187f408a036c4a545e2a3cd12943b1297092687bb66c9a1a9fed6
1.33.9: sha256:b644b9947f3b79d0ff4c19389cf23f436bb5d6f23166ed3b4e0aee09775b3065
1.33.8: sha256:d618fa97b5782b57512e0a8ab9ed17af190236907af7bd3c9c0776d81c78273f
1.33.7: sha256:db2e20d0c20928ae7d68d7603020f8ffd89dcdac4fdc160ef83f1da663868bed
1.33.6: sha256:58aaec7b5066b6e3705e0493a2f51c7f101b17165ce714c4d52a2b53861c078b
@@ -284,6 +281,15 @@ kubeadm_checksums:
1.33.0: sha256:26cb7ac57d522a59c84c4784b176097d23c7b4e61874fab84ae719d0e43ac0bc
etcd_binary_checksums:
arm64:
3.6.8: sha256:438f56a700d17ce761510a3e63e6fa5c1d587b2dd4d7a22c179c09a649366760
3.6.7: sha256:ef5fc443cf7cc5b82738f3c28363704896551900af90a6d622cae740b5644270
3.6.6: sha256:8a15f5427c111ff4692753682374970fb68878401d946c2c28bdad6857db652f
3.6.5: sha256:7010161787077b07de29b15b76825ceacbbcedcb77fe2e6832f509be102cab6b
3.6.4: sha256:323421fa279f4f3d7da4c7f2dfa17d9e49529cb2b4cdf40899a7416bccdde42d
3.6.3: sha256:4b39989093699da7502d1cdd649c412055a2bddd26b3d80ed87d0db31957075c
3.6.2: sha256:79d0a2488967aa07ecfde79158b1dab458158522f834810c2827eecac4695a31
3.6.1: sha256:5f8ed6e314df44128c218decbf0d146cf882583d05c6f6d9023ce905d232aaec
3.6.0: sha256:81477b120ef66ff338fe7de63d894e5feec17e6e1f1d98507676832e089d9b58
3.5.27: sha256:1277309f540c5a0329c428f95455c9f76d24f768c8d28fd2753e891c379053fa
3.5.26: sha256:93ac1667df0e178ea6d152476ce4088df4075604fe4bc7f85f4719e863cd030b
3.5.25: sha256:419dce0b679df31cc45201ef2449b7a6a48e9d241af01741957c9ac86a35badc
@@ -307,6 +313,15 @@ etcd_binary_checksums:
3.5.7: sha256:1a35314900da7db006b198dd917e923459b462128101736c63a3cda57ecdbf51
3.5.6: sha256:888e25c9c94702ac1254c7655709b44bb3711ebaabd3cb05439f3dd1f2b51a87
amd64:
3.6.8: sha256:cf9cfe91a4856cb90eed9c99e6aee4b708db2c7888b88a6f116281f04b0ea693
3.6.7: sha256:cf8af880c5a01ee5363cefa14a3e0cb7e5308dcf4ed17a6973099c9a7aee5a9a
3.6.6: sha256:887afaa4a99f22d802ccdfbe65730a5e79aa5c9ce2c8799c67e9d804c50ecedb
3.6.5: sha256:66bad39ed920f6fc15fd74adcb8bfd38ba9a6412f8c7852d09eb11670e88cac3
3.6.4: sha256:4d5f3101daa534e45ccaf3eec8d21c19b7222db377bcfd5e5a9144155238c105
3.6.3: sha256:3f3b4aa9785d86322c50b296eebdc7a0a57b27065190154b5858bf6a7512ac10
3.6.2: sha256:4b5d55d61e2218fab7c1cc1c00b341c469159ecde8cedd575fa858683f67e9f4
3.6.1: sha256:1324664bfe56d178d1362a57462ca5a7b26a6d2cbe9e1c94b6820e32cb82d673
3.6.0: sha256:42305b0dcbba7b6fdff0382d0c7b99c42026c88c44847a619ab58cde216725d9
3.5.27: sha256:0aad9a9e4e0817a021e933f9806a2b2960a62f949ad5a3d6436d8886945cb1bc
3.5.26: sha256:0a682a91201dc8351d507210bc30b021a11e254eab806f03224b51e8fad29abb
3.5.25: sha256:168af82b59772e1811a9af7b358d42f5c6df44e0d9767afb006ecf12c4bbd607
@@ -330,6 +345,15 @@ etcd_binary_checksums:
3.5.7: sha256:a43119af79c592a874e8f59c4f23832297849d0c479338f9df36e196b86bc396
3.5.6: sha256:4db32e3bc06dd0999e2171f76a87c1cffed8369475ec7aa7abee9023635670fb
ppc64le:
3.6.8: sha256:3b9bb486b0eb8d79b30410749ec26e174db075956c9ecb533b313b9263e7ba78
3.6.7: sha256:de3b1ed50fc8868cdd56b12b0cd81d6740bf53edbca570400a78e530e4829b7b
3.6.6: sha256:e4f528b63a731e9b96f5d10f55ce096223fb4e1bc1778aa2535a3d47e9a129e5
3.6.5: sha256:3cf99879c7c5b8678a0ec2edf9102b268ea934584db2850f049d89ed8e36b61c
3.6.4: sha256:2910fc73e42e1eeb9cc7da8080b821c7649558465e0e6122e49afce832e4b9da
3.6.3: sha256:de8ee412ee2669483fd9c730e915c5bd4fe113ba33be4a70305d13ff35e1f919
3.6.2: sha256:bf79b9d4c7e9f86e611e73de9fe54a195bc0ad54aeb17200b1c8bda3c4119705
3.6.1: sha256:bb87fcd0ea4b9fabf502703512c416ca1d9f4082679cb7f6dbc34bed3dfc13f6
3.6.0: sha256:1180d06e3a3787ab65078d9a488f778a4712c59cc82d614abde80c5d06efe38f
3.5.27: sha256:b41d488dcd579e780f49f5bd747e9386e17e1376ffb77bfff061f7944818a678
3.5.26: sha256:9678ddaced9fcd4878b76b0b76c9c2a3638a70bdc362c9f4cb25ecc48de2c6d3
3.5.25: sha256:0dee64e99a43a06dd9541a40a18b52c7309eb1682a2a32740d4bdf358296c007
@@ -638,6 +662,7 @@ cri_dockerd_archive_checksums:
0.3.5: sha256:30d47bd89998526d51a8518f9e8ef10baed408ab273879ee0e30350702092938
runc_checksums:
arm64:
1.3.5: sha256:bd843d75a788e612c9df286b1fa519a44fcbb7a7b8d01e2268431433cc7c718c
1.3.4: sha256:d6dcab36d1b6af1b72c7f0662e5fcf446a291271ba6006532b95c4144e19d428
1.3.3: sha256:3c9a8e9e6dafd00db61f4611692447ebab4a56388bae4f82192aed67b66df712
1.3.2: sha256:06fbccb4528ecd490f3f333d6dcf22c876bd72a024813a0c0a46312121f4c5fd
@@ -662,6 +687,7 @@ runc_checksums:
1.1.9: sha256:b43e9f561e85906f469eef5a7b7992fc586f750f44a0e011da4467e7008c33a0
1.1.8: sha256:7c22cb618116d1d5216d79e076349f93a672253d564b19928a099c20e4acd658
amd64:
1.3.5: sha256:66fa8390be8fb3b23dfbb60c767368bb5b51f1acfa88692bbff1a82953d4d9e9
1.3.4: sha256:5966ca40b6187b30e33bfc299c5f1fe72e8c1aa01cf3fefdadf391668f47f103
1.3.3: sha256:8781ab9f71c12f314d21c8e85f13ca1a82d90cf475aa5131a7b543fcc5487543
1.3.2: sha256:e7a8e30bd6d248f494aae9163521ff4eb112a30602ac56ada0871e3531269c2d
@@ -686,6 +712,7 @@ runc_checksums:
1.1.9: sha256:b9bfdd4cb27cddbb6172a442df165a80bfc0538a676fbca1a6a6c8f4c6933b43
1.1.8: sha256:1d05ed79854efc707841dfc7afbf3b86546fc1d0b3a204435ca921c14af8385b
ppc64le:
1.3.5: sha256:62e8f062291c2b2b29bd8ab8c983cef56409063287e256c50ab54fb54f5d98a7
1.3.4: sha256:268d9be1188f3efa82cad0d8e6b938d8da0d741427660d874ca9386c68d72937
1.3.3: sha256:c42394e7cf7cd508a91b090b72d57ff4df262effde742d5e29ea607e65f38b43
1.3.2: sha256:9373062bc547b5afe44fb0122a12aaa980763969d4b69dd17134a6a292838ce5
@@ -756,6 +783,10 @@ kata_containers_binary_checksums:
3.5.0: sha256:fa4cf67d010244c4f8d0e6d450d04e28d1bbce5ad1a3cbc0154adff628d56c0c
gvisor_runsc_binary_checksums:
arm64:
'20260309.0': sha512:85d37e7a0b249706f1b3b0ec5d84cc38f4dd53a4e490395f489eb406194529233631405be8bcd1648126db3fc0221a8a1b599743eca2b183b2a35ef3aca638fa
'20260302.0': sha512:1f5a81cd6080252d4bee22e0828ed796b438bc768b0845015457944ca659b37fa5a2c22acdf655963b1a27855d5bb2263887463238eaf930efb55b0e4ca801ed
'20260223.0': sha512:b3077dc8da3142b117829955ea8ecca7f75fda22f8bd59323a62ba34c6b836f62edea0b10ce0b745d3cd001814cd215175a3bd51d90abc07556bafe600683056
'20260216.0': sha512:e5e46739fe6eb26477f57224dc66152ac259706b1d76512391eab6cfc70ff235f5d0a506d54d685520534b76576f2837c1f909bbf33b19dbe2e6186010a58c0e
'20260209.1': sha512:e95170b4f70688d014c795ffa9b3d583753f865edfd8afb4e2969490869bdb46b60672f641741f788e2ffee8f29751a017e9a68b98c1e44f5194da9a64b0ff28
'20260202.0': sha512:5fbb9c68efdf3a404217fb57be55051b4b5f8b83ca631101204615b87ff5b6ea8680cd6599e434f1d87fecb9071367b65e90cd8ad5df3f0b9f0101796ecc8c43
'20260126.0': sha512:c1b42f5789c09a68eb006964048448c058776440477fac83c7fd9cef879cec40878fb2f5f2450315ca0e7f568889f0b52c842b84929784a57023961f6eb77d04
@@ -788,6 +819,10 @@ gvisor_runsc_binary_checksums:
'20250414.0': sha512:d1ba68b20057622e58e886f472e021a473222590c936a86951005d7b97366b446ef0342b91457ffc0d7e543d54c9c06a363f2883bdd6c594799c4ca1091dabd5
'20250407.0': sha512:cb590f72b0fbda45e89a2300e9247f12ff295a8c52653c8cf815c662d3fbbc774f9b915cdd4fad59e30694d8cc8737fe2a1a8186ab5136f7701bd6e6877a1662
amd64:
'20260309.0': sha512:2f2d5092d53cae40c53006dead1c5b75552f7c901b9f15bd63b967d2444b59953647c029548ee3950adac0de82be6411a7bf199b3aa7a1dfaab59a51509e4768
'20260302.0': sha512:67c0125cc0e3b2633c2557ab0d76f1f15f9fe7b47db77b5a0c4d1e025b184c3f10803b88109948378995cf72068a5d486b05bac8f7a69ca77bf38460ae43ccdc
'20260223.0': sha512:9ec46dca073545187cc6640350b728d5eb358740919236915f7f1619fdeb1ed14815dd4b117156ea2c6b00a9cf97024902dd6ee9b2b6be0693f1e930c59f40c2
'20260216.0': sha512:f9a0c0094e5fdddffabfa1f0f5d6d3e048a320f85c17686f8697df66f6472d594ab11a290348e97f9a78b614aeb3ab814dc60a8f4ecaa73b1bec49b5b33e8f76
'20260209.1': sha512:1e0e42f7d3f4b3eded4e96be5af4dcdbecc9bca7ce40f5b9fa191210690397d71771c7c0e0835c32221261b004250fe513a9265447e62d9bf92fb6a5f7276a68
'20260202.0': sha512:f7bb9cc5e3f5e36a6788f959361415f6d7f7cd0225b8b4d99728da4b1ac7e5c7ce9c72b4c61e424ba93db77c983109d56b54907a3b2e2b982b34058410611023
'20260126.0': sha512:cce974fa832c50d26c6ccc08ce50b4972921cd0818ebe8007587211d360cbc828ceea4ec8296703200afa208b679437d24f27a6dca31887b3c0fc6ee8be5eb05
@@ -821,6 +856,10 @@ gvisor_runsc_binary_checksums:
'20250407.0': sha512:097259d6d93548bf669e21cfec5ba6a47081e43f61d22c5d8a8a4c0c209c81ac9c4454162b826f98cec49e047bbdc29c270113ab6db5519ef3e6a90f302fa47b
gvisor_containerd_shim_binary_checksums:
arm64:
'20260309.0': sha512:43ffb3accd1b4e3d12319824917b3defd268190b1efcdce76bdaea862e7784c598dbd1dd4a3565d12db01452a02609fd1d0b68d9e29c4d26cac5dfde1329a8cb
'20260302.0': sha512:9bdc7ed0fe570223e6a60368d75e19dd2b211d1b4f84cfbc3d466b0be8b028ba75f49bd9b97e2836ab30140ef2910aec81bebdc444ae9e83805236e5e165b0a9
'20260223.0': sha512:9bdc7ed0fe570223e6a60368d75e19dd2b211d1b4f84cfbc3d466b0be8b028ba75f49bd9b97e2836ab30140ef2910aec81bebdc444ae9e83805236e5e165b0a9
'20260216.0': sha512:74474c2302ff5623ab55057b9f552edd81f8e41f2b8e702e6236e1c2c56295f6beabd9ff6eb788cf23a42c18fc6c2a328a2af70ac4ebc04fbc586eb9cd0616e3
'20260209.1': sha512:714ad3a53a28aa4acd891553d848278f5a873d0a1733836382eaf2bf701d62ece9cef324390602d2676af5e2e3a3d329486d2b18803c9cef5685220764757eb4
'20260202.0': sha512:714ad3a53a28aa4acd891553d848278f5a873d0a1733836382eaf2bf701d62ece9cef324390602d2676af5e2e3a3d329486d2b18803c9cef5685220764757eb4
'20260126.0': sha512:84abf41b68ba450ed2cbbdf544e7d347d30f6fd577572e2e58f2fa8e038689f557953148287e26c8f4ee5040c1e928670f113bebca6d81ed7ce014ec4e0ad256
@@ -853,6 +892,10 @@ gvisor_containerd_shim_binary_checksums:
'20250414.0': sha512:33b9c67bc7b73ca49154aff48da52029414a707b6a3a25eb4f71e861a94dec8fce220e63a162841670ddd4876f45b0e39abdf9f8c3235019c89f209684d3007d
'20250407.0': sha512:1c3838e10c905af0cb52697712bf6bd76b94c9e9d3d07a7643cd43dc2f8dab03b4ed4693c117e555e07a158e04ee583b6b1f1cf2fb9705244ffa5fdc4af67248
amd64:
'20260309.0': sha512:f5708fd1fdad4da12f440780c2c09ca6f2ca7cb47089e24e536851388bae54228927a462d61c584c3adf3426f03addea9f08e390be39c744cac008a84e7559e9
'20260302.0': sha512:7b0cbc9130b4f2fdffe45d0f811070a868bbcdafc9cc30264108ee35fa17ee3a54f1a1c7775619c6afa4b7fc623bd4acf0e80f4fb40fbc7f1f58a1cceb8a9f83
'20260223.0': sha512:7b0cbc9130b4f2fdffe45d0f811070a868bbcdafc9cc30264108ee35fa17ee3a54f1a1c7775619c6afa4b7fc623bd4acf0e80f4fb40fbc7f1f58a1cceb8a9f83
'20260216.0': sha512:f42c07dc741d52720ee531cd8928386ecb9a7605ccb4bd0805a8c3396213e05cd10572936dccc77049e6b4b8094cbdaf02752291047c852e7a48608d35832d58
'20260209.1': sha512:bd21b80502be25484d8b43168c88d66b6f3e853c78c0ae5b5206c5625e2a365e98c8b3ba259453d18c01d1aa08fb7c8c1e7f122fdcd7ef806bfc2f44f5837b5e
'20260202.0': sha512:bd21b80502be25484d8b43168c88d66b6f3e853c78c0ae5b5206c5625e2a365e98c8b3ba259453d18c01d1aa08fb7c8c1e7f122fdcd7ef806bfc2f44f5837b5e
'20260126.0': sha512:51c3b4bc21cb5c3d4e3baf9f43e5fecd86c327abf0c84d492510f480cdfb38c90d43f3b0dbf1887ada8846d3806da79a73729acaedc570894ba6ed7cf9e083ed
@@ -979,6 +1022,7 @@ nerdctl_archive_checksums:
1.7.0: sha256:e421ae655ff68461bad04b4a1a0ffe40c6f0fcfb0847d5730d66cd95a7fd10cd
containerd_archive_checksums:
arm64:
2.2.2: sha256:cb102473d6e353beb604178879d51cc456da0cdf368d9437d8d404ed01baf674
2.2.1: sha256:dac15a0d412a24be8bfe6a40cec8f51829062725169f1e72ac7d120a891ef5b6
2.2.0: sha256:8805c2123d3b7c7ee2030e9f8fc07a1167d8a3f871d6a7d7ec5d1deb0b51a4a7
2.1.6: sha256:88d6e32348c36628c8500a630c6dd4b3cb8c680b1d18dc8d1d19041f67757c6e
@@ -1028,6 +1072,7 @@ containerd_archive_checksums:
1.7.1: sha256:1f828dc063e3c24b0840b284c5635b5a11b1197d564c97f9e873b220bab2b41b
1.7.0: sha256:e7e5be2d9c92e076f1e2e15c9f0a6e0609ddb75f7616999b843cba92d01e4da2
amd64:
2.2.2: sha256:2c08c99cbde73b3388c6d5da68e0bcaebc70c9174f2b14d785695e4401b3ede0
2.2.1: sha256:f5d8e90ecb6c1c7e33ecddf8cc268a93b9e5b54e0e850320d765511d76624f41
2.2.0: sha256:b9626a94ab93b00bcbcbf13d98deef972c6fb064690e57940632df54ad39ee71
2.1.6: sha256:4793dc5c1f34ebf8402990d0050f3c294aa3c794cd5a4baa403c1cf10602326d
@@ -1077,6 +1122,7 @@ containerd_archive_checksums:
1.7.1: sha256:9504771bcb816d3b27fab37a6cf76928ee5e95a31eb41510a7d10ae726e01e85
1.7.0: sha256:b068b05d58025dc9f2fc336674cac0e377a478930f29b48e068f97c783a423f0
ppc64le:
2.2.2: sha256:8f7a8190f2a635cd0e5580a131408a275ba277f7a04edffba4a4005960093987
2.2.1: sha256:3de0f215ea649952a9e99040cb3888d8059bd3d35b04edbe6afb916c763f9ea7
2.2.0: sha256:e4ecd0b03200864e117371b25cce5335e39ce0b0a168a01d2ba6562a05020f0b
2.1.6: sha256:aef2b639a14ae79f2bbe43356b25e84ecfb2c7f269c87f41e41585e724073e54
@@ -1127,6 +1173,7 @@ containerd_archive_checksums:
1.7.0: sha256:051e897d3ee5b8c8097f65be447fea2d29226b583ca5d9ed78e9aebcf4e69889
containerd_static_archive_checksums:
arm64:
2.2.2: sha256:f22e03e12edd08dc49e139fec1fb0e0571950df0b6275577bbca718733acea9d
2.2.1: sha256:6b3b011ee388638eace05ac3be0eb32dfb4a43086695c29d06e997febd336f2e
2.2.0: sha256:5f2a7f451231ff35d8306f874c51606fc9da1e2db56048834a23260f68a78eef
2.1.6: sha256:9da292010d36d80afa3bb48cbd15f65d3bf38177217060272a1c3fd65351cfa4
@@ -1176,6 +1223,7 @@ containerd_static_archive_checksums:
1.7.1: sha256:f0435e7cda3c3abc40d3f27d403a8e24bd0b927a8a893a7e4dfaec5996fa9731
1.7.0: sha256:6e648cd832f026e23eb6998191e618da7c1ec0c0373263d503ff464e0ae3977a
amd64:
2.2.2: sha256:5db46232ce716f85bf1e71497a9038c87e63030574bf03f9d09557802188ad27
2.2.1: sha256:af3e82bac6abed58d45956c653244aa2be583359a9753614278ef652012f2883
2.2.0: sha256:2d20037947cbb0def12b8ac0c572b212284c1832bf3c921df1e58975515d1d08
2.1.6: sha256:577900a5a8684c27e344aeeb1fc64e355745f58cba7f83c53649235ba25abbbf
@@ -1225,6 +1273,7 @@ containerd_static_archive_checksums:
1.7.1: sha256:8b4e8ed8a650ea435aa71e115fa1a70701ab98bc1836b3ed33341af35bf85a3a
1.7.0: sha256:64ad6428cc4aca486db3a6148682052955d1e3134b69f079edf686c21d123fcd
ppc64le:
2.2.2: sha256:7e3d541c578fe06bcdb36ee58140e6e36dc97e784a9228e31c3ce99937cbad10
2.2.1: sha256:fc9235be9a3dd3005e7fe6a9d7bb80e42dbfbff4b119cdce6ea3ee66bc7ae9ca
2.2.0: sha256:d15a4edfe689ce71df8cc5a0c1837856f54aba8d7336170600e6592c2fbf3d8d
2.1.6: sha256:c64312b87181d900452b5c3360a90578acd39ec7664d0c2e060183b24a708766

View File

@@ -12,7 +12,7 @@ pod_infra_supported_versions:
'1.33': '3.10'
etcd_supported_versions:
'1.35': "{{ (etcd_binary_checksums['amd64'].keys() | select('version', '3.6', '<'))[0] }}"
'1.35': "{{ (etcd_binary_checksums['amd64'].keys() | select('version', '3.7', '<'))[0] }}"
'1.34': "{{ (etcd_binary_checksums['amd64'].keys() | select('version', '3.6', '<'))[0] }}"
'1.33': "{{ (etcd_binary_checksums['amd64'].keys() | select('version', '3.6', '<'))[0] }}"
# Kubespray constants

View File

@@ -0,0 +1,5 @@
---
# Additional hosts (comma-separated string) to inject into NO_PROXY
additional_no_proxy: ""
additional_no_proxy_list: "{{ additional_no_proxy | split(',') }}"
no_proxy_exclude_workers: false
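The new defaults file derives `additional_no_proxy_list` by splitting the string on commas. One subtlety worth noting: splitting an empty string yields a one-element list containing `''`, so downstream consumers must drop empty entries (the new `no_proxy` task does this with the `select` filter). A minimal Python sketch of that behavior (hypothetical helper name):

```python
def to_no_proxy_list(additional_no_proxy: str) -> list:
    # str.split(',') on "" returns [''] -- the empty entry must be
    # filtered out downstream (Ansible's `select` filter does this).
    return additional_no_proxy.split(',')

assert to_no_proxy_list("") == ['']
assert [h for h in to_no_proxy_list("") if h] == []
assert to_no_proxy_list("a.example,b.example") == ['a.example', 'b.example']
```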

View File

@@ -1,41 +1,63 @@
---
- name: Set facts variables
tags:
- always
block:
- name: Gather node IPs
setup:
gather_subset: '!all,!min,network'
filter: "ansible_default_ip*"
when: ansible_default_ipv4 is not defined or ansible_default_ipv6 is not defined
ignore_unreachable: true
- name: Gather node IPs
setup:
gather_subset: '!all,!min,network'
filter: "ansible_default_ip*"
when: ansible_default_ipv4 is not defined or ansible_default_ipv6 is not defined
ignore_unreachable: true
- name: Set computed IPs varables
vars:
fallback_ip: "{{ ansible_default_ipv4.address | d('127.0.0.1') }}"
fallback_ip6: "{{ ansible_default_ipv6.address | d('::1') }}"
# Set 127.0.0.1 as fallback IP if we do not have host facts for host
# ansible_default_ipv4 isn't what you think.
_ipv4: "{{ ip | default(fallback_ip) }}"
_access_ipv4: "{{ access_ip | default(_ipv4) }}"
_ipv6: "{{ ip6 | default(fallback_ip6) }}"
_access_ipv6: "{{ access_ip6 | default(_ipv6) }}"
_access_ips:
- "{{ _access_ipv4 if ipv4_stack }}"
- "{{ _access_ipv6 if ipv6_stack }}"
_ips:
- "{{ _ipv4 if ipv4_stack }}"
- "{{ _ipv6 if ipv6_stack }}"
set_fact:
cacheable: true
main_access_ip: "{{ _access_ipv4 if ipv4_stack else _access_ipv6 }}"
main_ip: "{{ _ipv4 if ipv4_stack else _ipv6 }}"
# Mixed IPs - for dualstack
main_access_ips: "{{ _access_ips | select }}"
main_ips: "{{ _ips | select }}"
- name: Set computed IPs variables
vars:
fallback_ip: "{{ ansible_default_ipv4.address | d('127.0.0.1') }}"
fallback_ip6: "{{ ansible_default_ipv6.address | d('::1') }}"
# Set 127.0.0.1 as fallback IP if we do not have host facts for host
# ansible_default_ipv4 isn't what you think.
_ipv4: "{{ ip | default(fallback_ip) }}"
_access_ipv4: "{{ access_ip | default(_ipv4) }}"
_ipv6: "{{ ip6 | default(fallback_ip6) }}"
_access_ipv6: "{{ access_ip6 | default(_ipv6) }}"
_access_ips:
- "{{ _access_ipv4 if ipv4_stack }}"
- "{{ _access_ipv6 if ipv6_stack }}"
_ips:
- "{{ _ipv4 if ipv4_stack }}"
- "{{ _ipv6 if ipv6_stack }}"
set_fact:
cacheable: true
main_access_ip: "{{ _access_ipv4 if ipv4_stack else _access_ipv6 }}"
main_ip: "{{ _ipv4 if ipv4_stack else _ipv6 }}"
# Mixed IPs - for dualstack
main_access_ips: "{{ _access_ips | select }}"
main_ips: "{{ _ips | select }}"
- name: Set no_proxy
import_tasks: no_proxy.yml
when:
- http_proxy is defined or https_proxy is defined
- no_proxy is not defined
- name: Set no_proxy to all assigned cluster IPs and hostnames
when:
- http_proxy is defined or https_proxy is defined
- no_proxy is not defined
vars:
groups_with_no_proxy:
- kube_control_plane
- "{{ '' if no_proxy_exclude_workers else 'kube_node' }}" # TODO: exclude by a boolean in inventory rather than global variable
- etcd
- calico_rr
hosts_with_no_proxy: "{{ groups_with_no_proxy | select | map('extract', groups) | select('defined') | flatten }}"
_hostnames: "{{ (hosts_with_no_proxy +
(hosts_with_no_proxy | map('extract', hostvars, morekeys=['ansible_hostname'])
| select('defined')))
| unique }}"
no_proxy_prepare:
- "{{ apiserver_loadbalancer_domain_name | d('') }}"
- "{{ loadbalancer_apiserver.address if loadbalancer_apiserver is defined else '' }}"
- "{{ hosts_with_no_proxy | map('extract', hostvars, morekeys=['main_access_ip']) }}"
- "{{ _hostnames }}"
- "{{ _hostnames | map('regex_replace', '$', '.' + dns_domain ) }}"
- "{{ additional_no_proxy_list }}"
- 127.0.0.1
- localhost
- "{{ kube_service_subnets }}"
- "{{ kube_pods_subnets }}"
- svc
- "svc.{{ dns_domain }}"
set_fact:
no_proxy: "{{ no_proxy_prepare | select | flatten | unique | join(',') }}"
run_once: true
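The replacement task builds `no_proxy` with standard filters (`select | flatten | unique | join(',')`) instead of hand-rolled Jinja looping, which per the commit message could render the variable as an array of characters rather than a string. A rough Python emulation of that filter pipeline (hypothetical function name, not part of the change):

```python
def build_no_proxy(entries):
    # select: drop falsy top-level entries ('' placeholders left by
    # the conditional expressions in no_proxy_prepare)
    selected = [e for e in entries if e]
    # flatten: expand nested lists of IPs/hostnames one level deep
    flat = []
    for e in selected:
        if isinstance(e, list):
            flat.extend(e)
        else:
            flat.append(e)
    # unique: dedupe while preserving first-seen order
    deduped = list(dict.fromkeys(flat))
    # join: the explicit join guarantees a single string, never a
    # list of characters
    return ','.join(deduped)
```

For example, `build_no_proxy(['lb.example', '', ['10.0.0.1', '10.0.0.2'], 'node1', 'node1'])` collapses to one deduplicated comma-separated string.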

View File

@@ -1,40 +0,0 @@
---
- name: Set no_proxy to all assigned cluster IPs and hostnames
set_fact:
# noqa: jinja[spacing]
no_proxy_prepare: >-
{%- if loadbalancer_apiserver is defined -%}
{{ apiserver_loadbalancer_domain_name }},
{{ loadbalancer_apiserver.address | default('') }},
{%- endif -%}
{%- if no_proxy_exclude_workers | default(false) -%}
{% set cluster_or_control_plane = 'kube_control_plane' %}
{%- else -%}
{% set cluster_or_control_plane = 'k8s_cluster' %}
{%- endif -%}
{%- for item in (groups[cluster_or_control_plane] + groups['etcd'] | default([]) + groups['calico_rr'] | default([])) | unique -%}
{{ hostvars[item]['main_access_ip'] }},
{%- if item != hostvars[item].get('ansible_hostname', '') -%}
{{ hostvars[item]['ansible_hostname'] }},
{{ hostvars[item]['ansible_hostname'] }}.{{ dns_domain }},
{%- endif -%}
{{ item }},{{ item }}.{{ dns_domain }},
{%- endfor -%}
{%- if additional_no_proxy is defined -%}
{{ additional_no_proxy }},
{%- endif -%}
127.0.0.1,localhost,{{ kube_service_subnets }},{{ kube_pods_subnets }},svc,svc.{{ dns_domain }}
delegate_to: localhost
connection: local
delegate_facts: true
become: false
run_once: true
- name: Populates no_proxy to all hosts
set_fact:
no_proxy: "{{ hostvars.localhost.no_proxy_prepare | select }}"
# noqa: jinja[spacing]
proxy_env: "{{ proxy_env | combine({
'no_proxy': hostvars.localhost.no_proxy_prepare,
'NO_PROXY': hostvars.localhost.no_proxy_prepare
}) }}"

View File

@@ -177,6 +177,9 @@ rules:
- blockaffinities
- caliconodestatuses
- tiers
- stagednetworkpolicies
- stagedglobalnetworkpolicies
- stagedkubernetesnetworkpolicies
verbs:
- get
- list

View File

@@ -215,3 +215,17 @@ rules:
- calico-cni-plugin
verbs:
- create
{% if calico_version is version('3.29.0', '>=') %}
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-tier-getter
rules:
- apiGroups:
- "projectcalico.org"
resources:
- "tiers"
verbs:
- "get"
{% endif %}

View File

@@ -26,3 +26,18 @@ subjects:
- kind: ServiceAccount
name: calico-cni-plugin
namespace: kube-system
{% if calico_version is version('3.29.0', '>=') %}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: calico-tier-getter
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-tier-getter
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: system:kube-controller-manager
{% endif %}

View File

@@ -305,12 +305,9 @@ cilium_enable_well_known_identities: false
# Only effective when monitor aggregation is set to "medium" or higher.
cilium_monitor_aggregation_flags: "all"
cilium_enable_bpf_clock_probe: true
# -- Enable BGP Control Plane
cilium_enable_bgp_control_plane: false
# -- Configure BGP Instances (New bgpv2 API v1.16+)
cilium_bgp_cluster_configs: []

View File

@@ -1,6 +1,6 @@
{% for cilium_bgp_advertisement in cilium_bgp_advertisements %}
---
apiVersion: "cilium.io/v2alpha1"
apiVersion: "cilium.io/v2"
kind: CiliumBGPAdvertisement
metadata:
name: "{{ cilium_bgp_advertisement.name }}"

View File

@@ -1,6 +1,6 @@
{% for cilium_bgp_cluster_config in cilium_bgp_cluster_configs %}
---
apiVersion: "cilium.io/v2alpha1"
apiVersion: "cilium.io/v2"
kind: CiliumBGPClusterConfig
metadata:
name: "{{ cilium_bgp_cluster_config.name }}"

View File

@@ -1,6 +1,6 @@
{% for cilium_bgp_node_config_override in cilium_bgp_node_config_overrides %}
---
apiVersion: "cilium.io/v2alpha1"
apiVersion: "cilium.io/v2"
kind: CiliumBGPNodeConfigOverride
metadata:
name: "{{ cilium_bgp_node_config_override.name }}"

View File

@@ -1,6 +1,6 @@
{% for cilium_bgp_peer_config in cilium_bgp_peer_configs %}
---
apiVersion: "cilium.io/v2alpha1"
apiVersion: "cilium.io/v2"
kind: CiliumBGPPeerConfig
metadata:
name: "{{ cilium_bgp_peer_config.name }}"

View File

@@ -1,6 +1,6 @@
{% for cilium_loadbalancer_ip_pool in cilium_loadbalancer_ip_pools %}
---
apiVersion: "cilium.io/v2alpha1"
apiVersion: "cilium.io/v2"
kind: CiliumLoadBalancerIPPool
metadata:
name: "{{ cilium_loadbalancer_ip_pool.name }}"

View File

@@ -143,6 +143,14 @@ cgroup:
enabled: {{ cilium_cgroup_auto_mount | to_json }}
hostRoot: {{ cilium_cgroup_host_root }}
resources:
limits:
memory: "{{ cilium_memory_limit }}"
cpu: "{{ cilium_cpu_limit }}"
requests:
memory: "{{ cilium_memory_requests }}"
cpu: "{{ cilium_cpu_requests }}"
operator:
image:
repository: {{ cilium_operator_image_repo }}

View File

@@ -14,6 +14,7 @@ dependencies:
chart_ref: "{{ custom_cni_chart_ref }}"
chart_version: "{{ custom_cni_chart_version }}"
wait: true
create_namespace: true
values: "{{ custom_cni_chart_values }}"
repositories:
- name: "{{ custom_cni_chart_repository_name }}"

View File

@@ -17,6 +17,8 @@
--grace-period {{ drain_grace_period }}
--timeout {{ drain_timeout }}
--delete-emptydir-data {{ kube_override_hostname }}
async: "{{ (drain_timeout | regex_replace('s$', '') | int) + 120 }}"
poll: 15
when:
- groups['kube_control_plane'] | length > 0
# ignore servers that are not nodes
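The drain tasks now run under `async`/`poll` so a long `kubectl drain` does not hold an idle SSH session open until it times out; the async budget is the drain timeout (with any trailing `s` stripped, mirroring `regex_replace('s$', '')`) plus a 120-second margin. A sketch of that computation, assuming timeout values like `"360s"`:

```python
def async_budget(drain_timeout: str, margin: int = 120) -> int:
    # Strip one trailing 's' (if present) and add slack so the async
    # window outlives the drain command itself.
    seconds = int(drain_timeout[:-1] if drain_timeout.endswith('s') else drain_timeout)
    return seconds + margin

assert async_budget('360s') == 480
assert async_budget('300') == 420
```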

View File

@@ -47,7 +47,7 @@
- name: Manage packages
package:
name: "{{ item.packages | dict2items | selectattr('value', 'ansible.builtin.all') | map(attribute='key') }}"
name: "{{ item.packages }}"
state: "{{ item.state }}"
update_cache: "{{ true if ansible_pkg_mgr in ['zypper', 'apt', 'dnf'] else omit }}"
cache_valid_time: "{{ 86400 if ansible_pkg_mgr == 'apt' else omit }}" # 24h
@@ -55,10 +55,17 @@
until: pkgs_task_result is succeeded
retries: "{{ pkg_install_retries }}"
delay: "{{ retry_stagger | random + 3 }}"
when: not (ansible_os_family in ["Flatcar", "Flatcar Container Linux by Kinvolk"] or is_fedora_coreos)
when:
- ansible_os_family not in ["Flatcar", "Flatcar Container Linux by Kinvolk"]
- not is_fedora_coreos
- item.packages != []
loop:
- { packages: "{{ pkgs_to_remove }}", state: "absent", action_label: "remove" }
- { packages: "{{ pkgs }}", state: "present", action_label: "install" }
- packages: "{{ pkgs_to_remove | dict2items | selectattr('value', 'ansible.builtin.all') | map(attribute='key') }}"
state: "absent"
action_label: "remove"
- packages: "{{ pkgs | dict2items | selectattr('value', 'ansible.builtin.all') | map(attribute='key') }}"
state: "present"
action_label: "install"
loop_control:
label: "{{ item.action_label }}"
tags:
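The refactored loop changes the package data shape: each loop item's `packages` is now pre-filtered from a dict mapping package name to a list of boolean conditions, keeping a package only when all conditions hold (`dict2items | selectattr('value', 'ansible.builtin.all') | map(attribute='key')`). Roughly, in Python (illustrative values, not the real variable contents):

```python
def select_packages(pkgs: dict) -> list:
    # Keep a package only if every condition in its value list is true,
    # mirroring dict2items | selectattr('value', 'all') | map('key').
    # Note all([]) is True, so an empty condition list means "always".
    return [name for name, conditions in pkgs.items() if all(conditions)]

pkgs = {
    'curl': [True],
    'nfs-common': [True, False],  # e.g. NFS disabled on this host
    'ebtables': [],               # unconditional
}
assert select_packages(pkgs) == ['curl', 'ebtables']
```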

View File

@@ -59,6 +59,8 @@
--timeout {{ drain_timeout }}
--delete-emptydir-data {{ kube_override_hostname | default(inventory_hostname) }}
{% if drain_pod_selector %}--pod-selector '{{ drain_pod_selector }}'{% endif %}
async: "{{ (drain_timeout | regex_replace('s$', '') | int) + 120 }}"
poll: 15
when: drain_nodes
register: result
failed_when:
@@ -82,6 +84,8 @@
--delete-emptydir-data {{ kube_override_hostname | default(inventory_hostname) }}
{% if drain_pod_selector %}--pod-selector '{{ drain_pod_selector }}'{% endif %}
--disable-eviction
async: "{{ (drain_fallback_timeout | regex_replace('s$', '') | int) + 120 }}"
poll: 15
register: drain_fallback_result
until: drain_fallback_result.rc == 0
retries: "{{ drain_fallback_retries }}"

View File

@@ -49,7 +49,6 @@
assert:
that:
- download_run_once | type_debug == 'bool'
- deploy_netchecker | type_debug == 'bool'
- download_always_pull | type_debug == 'bool'
- helm_enabled | type_debug == 'bool'
- openstack_lbaas_enabled | type_debug == 'bool'
@@ -214,3 +213,13 @@
when:
- kube_external_ca_mode
- not ignore_assert_errors
- name: Download_file | Check if requested Kubernetes are supported
assert:
that:
- kube_version in kubeadm_checksums[image_arch]
- kube_version in kubelet_checksums[image_arch]
- kube_version in kubectl_checksums[image_arch]
msg: >-
Kubernetes v{{ kube_version }} is not supported for {{ image_arch }}.
Please check roles/kubespray_defaults/vars/main/checksums.yml for supported versions.

View File

@@ -6,7 +6,7 @@ build-backend = "setuptools.build_meta"
[project]
name = "kubespray_component_hash_update"
version = "1.0.0"
version = "1.0.1"
dependencies = [
"more_itertools",
"ruamel.yaml",

View File

@@ -126,15 +126,20 @@ def download_hash(downloads: {str: {str: Any}}) -> None:
releases, tags = map(
dict, partition(lambda r: r[1].get("tags", False), downloads.items())
)
repos = {
"with_releases": [r["graphql_id"] for r in releases.values()],
"with_tags": [t["graphql_id"] for t in tags.values()],
}
unique_release_ids = list(dict.fromkeys(
r["graphql_id"] for r in releases.values()
))
unique_tag_ids = list(dict.fromkeys(
t["graphql_id"] for t in tags.values()
))
response = s.post(
"https://api.github.com/graphql",
json={
"query": files(__package__).joinpath("list_releases.graphql").read_text(),
"variables": repos,
"variables": {
"with_releases": unique_release_ids,
"with_tags": unique_tag_ids,
},
},
headers={
"Authorization": f"Bearer {os.environ['API_KEY']}",
@@ -155,31 +160,30 @@ def download_hash(downloads: {str: {str: Any}}) -> None:
except InvalidVersion:
return None
repos = response.json()["data"]
github_versions = dict(
zip(
chain(releases.keys(), tags.keys()),
[
{
v
for r in repo["releases"]["nodes"]
if not r["isPrerelease"]
and (v := valid_version(r["tagName"])) is not None
}
for repo in repos["with_releases"]
]
+ [
{
v
for t in repo["refs"]["nodes"]
if (v := valid_version(t["name"].removeprefix("release-")))
is not None
}
for repo in repos["with_tags"]
],
strict=True,
)
)
resp_data = response.json()["data"]
release_versions_by_id = {
gql_id: {
v
for r in repo["releases"]["nodes"]
if not r["isPrerelease"]
and (v := valid_version(r["tagName"])) is not None
}
for gql_id, repo in zip(unique_release_ids, resp_data["with_releases"])
}
tag_versions_by_id = {
gql_id: {
v
for t in repo["refs"]["nodes"]
if (v := valid_version(t["name"].removeprefix("release-")))
is not None
}
for gql_id, repo in zip(unique_tag_ids, resp_data["with_tags"])
}
github_versions = {}
for name, info in releases.items():
github_versions[name] = release_versions_by_id[info["graphql_id"]]
for name, info in tags.items():
github_versions[name] = tag_versions_by_id[info["graphql_id"]]
components_supported_arch = {
component.removesuffix("_checksums"): [a for a in archs.keys()]
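The script change dedupes GraphQL repository IDs before querying (several components can share one repository) and then maps versions back by ID, rather than relying on a positional `zip(..., strict=True)` over all components. The order-preserving dedup idiom it uses is `list(dict.fromkeys(...))`:

```python
def dedupe_preserving_order(ids):
    # dict keys are insertion-ordered and unique, so this drops
    # duplicates while keeping first-seen order (unlike set(), which
    # would scramble the order the API response is zipped back against).
    return list(dict.fromkeys(ids))

graphql_ids = ['R_abc', 'R_def', 'R_abc', 'R_ghi']
assert dedupe_preserving_order(graphql_ids) == ['R_abc', 'R_def', 'R_ghi']
```

Because the GraphQL response returns nodes in request order, zipping the deduplicated ID list against the response rebuilds a by-ID lookup that every component sharing a repo can read from.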

View File

@@ -4,7 +4,7 @@
vm_cpu_cores: 2
vm_cpu_sockets: 1
vm_cpu_threads: 2
vm_memory: 2048
vm_memory: 4096
releases_disk_size: 2Gi
# Request/Limit allocation settings

View File

@@ -1,6 +1,5 @@
---
# Kubespray settings for tests
deploy_netchecker: true
dns_min_replicas: 1
unsafe_show_logs: true
@@ -29,12 +28,15 @@ crio_registries:
- location: mirror.gcr.io
insecure: false
netcheck_agent_image_repo: "{{ quay_image_repo }}/kubespray/k8s-netchecker-agent"
netcheck_server_image_repo: "{{ quay_image_repo }}/kubespray/k8s-netchecker-server"
nginx_image_repo: "{{ quay_image_repo }}/kubespray/nginx"
flannel_image_repo: "{{ quay_image_repo }}/kubespray/flannel"
flannel_init_image_repo: "{{ quay_image_repo }}/kubespray/flannel-cni-plugin"
local_release_dir: "{{ '/tmp/releases' if inventory_hostname != 'localhost' else (lookup('env', 'PWD') + '/downloads') }}"
hydrophone_version: "0.7.0"
hydrophone_arch: "x86_64"
hydrophone_checksum: "sha256:15a6c09962f9bd4a1587af068b5edef1072327a77012d8fbb84992c7c87c0475"
hydrophone_parallel: 1
hydrophone_path: "{{ bin_dir }}/hydrophone"

View File

@@ -3,8 +3,11 @@
cloud_image: openeuler-2403
vm_memory: 3072
# Openeuler package mgmt is slow for some reason
pkg_install_timeout: "{{ 10 * 60 }}"
# Use metalink for faster package downloads (auto-selects closest mirror)
openeuler_metalink_enabled: true
# CI package installation takes ~7min; default 5min is too tight, use 15min for margin
pkg_install_timeout: "{{ 15 * 60 }}"
# Work around so the Kubernetes 1.35 tests can pass. We will discuss the openeuler support later.
kubeadm_ignore_preflight_errors:

View File

@@ -13,3 +13,21 @@ kube_owner: root
# Node Feature Discovery
node_feature_discovery_enabled: true
kube_asymmetric_encryption_algorithm: "ECDSA-P256"
# Testing no_proxy setup
# The proxy is not intended to be accessed at all, we're only testing
# the no_proxy construction
https_proxy: "http://some-proxy.invalid"
http_proxy: "http://some-proxy.invalid"
additional_no_proxy_list:
- github.com
- githubusercontent.com
- k8s.io
- rockylinux.org
- docker.io
- googleapis.com
- quay.io
- pkg.dev
- amazonaws.com
- cilium.io
skip_http_proxy_on_os_packages: true

View File

@@ -13,3 +13,5 @@ kube_owner: root
# Node Feature Discovery
node_feature_discovery_enabled: true
kube_asymmetric_encryption_algorithm: "ECDSA-P256"
loadbalancer_apiserver_localhost: false

View File

@@ -2,7 +2,7 @@
# Instance settings
cloud_image: ubuntu-2204
mode: all-in-one
vm_memory: 1800
vm_memory: 3072
# Kubespray settings
auto_renew_certificates: true

View File

@@ -2,7 +2,7 @@
# Instance settings
cloud_image: ubuntu-2204
mode: all-in-one
vm_memory: 1800
vm_memory: 3072
# Kubespray settings
auto_renew_certificates: true

View File

@@ -2,7 +2,7 @@
# Instance settings
cloud_image: ubuntu-2204
mode: all-in-one
vm_memory: 1800
vm_memory: 3072
# Kubespray settings
auto_renew_certificates: true

View File

@@ -2,7 +2,7 @@
# Instance settings
cloud_image: ubuntu-2404
mode: all-in-one
vm_memory: 1800
vm_memory: 3072
# Kubespray settings
auto_renew_certificates: true

View File

@@ -75,7 +75,10 @@ etcd_deployment_type: kubeadm
kubelet_authentication_token_webhook: true
kube_read_only_port: 0
kubelet_rotate_server_certificates: true
kubelet_csr_approver_enabled: false
kubelet_csr_approver_enabled: true # For hydrophone
kubelet_csr_approver_values:
# Do not check DNS resolution in testing (not recommended in production)
bypassDnsResolution: true
kubelet_protect_kernel_defaults: true
kubelet_event_record_qps: 1
kubelet_rotate_certificates: true

View File

@@ -2,7 +2,7 @@
# Instance settings
cloud_image: ubuntu-2404
mode: all-in-one
vm_memory: 1800
vm_memory: 3072
# Kubespray settings
auto_renew_certificates: true

View File

@@ -2,7 +2,7 @@
# Instance settings
cloud_image: ubuntu-2404
mode: node-etcd-client
vm_memory: 1800
vm_memory: 3072
# Kubespray settings
auto_renew_certificates: true

View File

@@ -2,7 +2,7 @@
# Instance settings
cloud_image: ubuntu-2404
mode: all-in-one
vm_memory: 1800
vm_memory: 3072
# Kubespray settings
container_manager: crio

View File

@@ -1,4 +1,4 @@
-r ../requirements.txt
distlib==0.4.0 # required for building collections
molecule==25.12.0
molecule==26.3.0
pytest-testinfra==10.2.2

View File

@@ -0,0 +1,13 @@
---
- name: Download hydrophone
get_url:
url: "https://github.com/kubernetes-sigs/hydrophone/releases/download/v{{ hydrophone_version }}/hydrophone_Linux_{{ hydrophone_arch }}.tar.gz"
dest: /tmp/hydrophone.tar.gz
checksum: "{{ hydrophone_checksum }}"
mode: "0644"
- name: Extract hydrophone
unarchive:
src: /tmp/hydrophone.tar.gz
dest: "{{ bin_dir }}"
copy: false

View File

@@ -0,0 +1,48 @@
---
- name: Check kubelet serving certificates approved with kubelet_csr_approver
when:
- kubelet_rotate_server_certificates | default(false)
- kubelet_csr_approver_enabled | default(kubelet_rotate_server_certificates | default(false))
vars:
csrs: "{{ csr_json.stdout | from_json }}"
block:
- name: Get certificate signing requests
command: "{{ bin_dir }}/kubectl get csr -o jsonpath-as-json={.items[*]}"
register: csr_json
changed_when: false
- name: Check there are csrs
assert:
that: csrs | length > 0
fail_msg: kubelet_rotate_server_certificates is {{ kubelet_rotate_server_certificates }} but no csr's found
- name: Check there are Denied/Pending csrs
assert:
that:
- csrs | rejectattr('status') | length == 0 # Pending == no status
- csrs | map(attribute='status.conditions') | flatten | selectattr('type', 'equalto', 'Denied') | length == 0 # Denied
fail_msg: kubelet_csr_approver is enabled but CSRs are not approved
- name: Approve kubelet serving certificates
when:
- kubelet_rotate_server_certificates | default(false)
- not (kubelet_csr_approver_enabled | default(kubelet_rotate_server_certificates | default(false)))
block:
- name: Get certificate signing requests
command: "{{ bin_dir }}/kubectl get csr -o name"
register: get_csr
changed_when: false
- name: Check there are csrs
assert:
that: get_csr.stdout_lines | length > 0
fail_msg: kubelet_rotate_server_certificates is {{ kubelet_rotate_server_certificates }} but no csr's found
- name: Approve certificates
command: "{{ bin_dir }}/kubectl certificate approve {{ get_csr.stdout_lines | join(' ') }}"
register: certificate_approve
when: get_csr.stdout_lines | length > 0
changed_when: certificate_approve.stdout
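The extracted check treats a CSR with no `status` as Pending and one carrying a `Denied` condition as rejected, matching the comments in the assertions above. A sketch of that classification over parsed `kubectl get csr` JSON (hypothetical function, simplified CSR objects):

```python
def classify_csrs(csrs):
    # A CSR with an empty/missing status is still Pending; a 'Denied'
    # condition means the approver rejected it.
    pending = [c for c in csrs if not c.get('status')]
    denied = [c for c in csrs
              if any(cond.get('type') == 'Denied'
                     for cond in c.get('status', {}).get('conditions', []))]
    return pending, denied

csrs = [
    {'metadata': {'name': 'csr-1'}, 'status': {'conditions': [{'type': 'Approved'}]}},
    {'metadata': {'name': 'csr-2'}, 'status': {}},
]
pending, denied = classify_csrs(csrs)
assert [c['metadata']['name'] for c in pending] == ['csr-2']
assert denied == []
```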

View File

@@ -1,114 +1,10 @@
---
- name: Check kubelet serving certificates approved with kubelet_csr_approver
when:
- kubelet_rotate_server_certificates | default(false)
- kubelet_csr_approver_enabled | default(kubelet_rotate_server_certificates | default(false))
- name: Run the hydrophone checks
vars:
csrs: "{{ csr_json.stdout | from_json }}"
networking_check: "\\[sig-network\\] Networking Granular Checks.+\\[Conformance\\]"
block:
- name: Get certificate signing requests
command: "{{ bin_dir }}/kubectl get csr -o jsonpath-as-json={.items[*]}"
register: csr_json
changed_when: false
- name: Check there are csrs
assert:
that: csrs | length > 0
fail_msg: kubelet_rotate_server_certificates is {{ kubelet_rotate_server_certificates }} but no csr's found
- name: Check there are Denied/Pending csrs
assert:
that:
- csrs | rejectattr('status') | length == 0 # Pending == no status
- csrs | map(attribute='status.conditions') | flatten | selectattr('type', 'equalto', 'Denied') | length == 0 # Denied
fail_msg: kubelet_csr_approver is enabled but CSRs are not approved
- name: Approve kubelet serving certificates
when:
- kubelet_rotate_server_certificates | default(false)
- not (kubelet_csr_approver_enabled | default(kubelet_rotate_server_certificates | default(false)))
block:
- name: Get certificate signing requests
command: "{{ bin_dir }}/kubectl get csr -o name"
register: get_csr
changed_when: false
- name: Check there are csrs
assert:
that: get_csr.stdout_lines | length > 0
fail_msg: kubelet_rotate_server_certificates is {{ kubelet_rotate_server_certificates }} but no csr's found
- name: Approve certificates
command: "{{ bin_dir }}/kubectl certificate approve {{ get_csr.stdout_lines | join(' ') }}"
register: certificate_approve
when: get_csr.stdout_lines | length > 0
changed_when: certificate_approve.stdout
- name: Create test namespace
command: "{{ bin_dir }}/kubectl create namespace test"
changed_when: false
- name: Run 2 agnhost pods in test ns
command:
cmd: "{{ bin_dir }}/kubectl apply --namespace test -f -"
stdin: |
apiVersion: apps/v1
kind: Deployment
metadata:
name: agnhost
spec:
replicas: 2
selector:
matchLabels:
app: agnhost
template:
metadata:
labels:
app: agnhost
spec:
containers:
- name: agnhost
image: {{ test_image_repo }}:{{ test_image_tag }}
command: ['/agnhost', 'netexec', '--http-port=8080']
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop: ['ALL']
runAsUser: 1000
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
changed_when: false
- name: Check that all pods are running and ready
vars:
pods: "{{ (pods_json.stdout | from_json)['items'] }}"
block:
- name: Check Deployment is ready
command: "{{ bin_dir }}/kubectl rollout status deploy --namespace test agnhost --timeout=180s"
changed_when: false
- name: Get pod names
command: "{{ bin_dir }}/kubectl get pods -n test -o json"
changed_when: false
register: pods_json
- name: Check pods IP are in correct network
assert:
that: pods
| selectattr('status.phase', '==', 'Running')
| selectattr('status.podIP', 'ansible.utils.in_network', kube_pods_subnet)
| length == 2
- name: Curl between pods is working
command: "{{ bin_dir }}/kubectl -n test exec {{ item[0].metadata.name }} -- curl {{ item[1].status.podIP | ansible.utils.ipwrap}}:8080"
with_nested:
- "{{ pods }}"
- "{{ pods }}"
loop_control:
label: "{{ item[0].metadata.name + ' --> ' + item[1].metadata.name }}"
- name: Run the networking granular checks
command: "{{ hydrophone_path }} --focus=\"{{ networking_check }}\" --parallel {{ hydrophone_parallel }}"
rescue:
- name: List pods cluster-wide
command: "{{ bin_dir }}/kubectl get pods --all-namespaces -owide"

View File

@@ -13,88 +13,6 @@
- import_role: # noqa name[missing]
name: cluster-dump
- name: Wait for netchecker server
command: "{{ bin_dir }}/kubectl get pods --field-selector=status.phase==Running -o jsonpath-as-json={.items[*].metadata.name} --namespace {{ netcheck_namespace }}"
register: pods_json
until:
- pods_json.stdout | from_json | select('match', 'netchecker-server.*') | length == 1
- (pods_json.stdout | from_json | select('match', 'netchecker-agent.*') | length)
>= (groups['k8s_cluster'] | intersect(ansible_play_hosts) | length * 2)
retries: 3
delay: 10
when: inventory_hostname == groups['kube_control_plane'][0]
- name: Get netchecker pods
command: "{{ bin_dir }}/kubectl -n {{ netcheck_namespace }} describe pod -l app={{ item }}"
run_once: true
delegate_to: "{{ groups['kube_control_plane'][0] }}"
with_items:
- netchecker-agent
- netchecker-agent-hostnet
when: not pods_json is success
- name: Perform netchecker tests
run_once: true
delegate_to: "{{ groups['kube_control_plane'][0] }}"
block:
- name: Get netchecker agents
uri:
url: "http://{{ (ansible_default_ipv6.address if not (ipv4_stack | default(true)) else ansible_default_ipv4.address) | ansible.utils.ipwrap }}:{{ netchecker_port }}/api/v1/agents/"
return_content: true
headers:
Accept: application/json
register: agents
retries: 18
delay: "{{ agent_report_interval }}"
until:
- agents is success
- (agents.content | from_json | length) == (groups['k8s_cluster'] | length * 2)
- name: Check netchecker status
uri:
url: "http://{{ (ansible_default_ipv6.address if not (ipv4_stack | default(true)) else ansible_default_ipv4.address) | ansible.utils.ipwrap }}:{{ netchecker_port }}/api/v1/connectivity_check"
return_content: true
headers:
Accept: application/json
register: connectivity_check
retries: 3
delay: "{{ agent_report_interval }}"
until:
- connectivity_check is success
- connectivity_check.content | from_json
rescue:
- name: Get kube-proxy logs
command: "{{ bin_dir }}/kubectl -n kube-system logs -l k8s-app=kube-proxy"
- name: Get logs from other apps
command: "{{ bin_dir }}/kubectl -n kube-system logs -l k8s-app={{ item }} --all-containers"
with_items:
- kube-router
- flannel
- canal-node
- calico-node
- cilium
- name: Netchecker tests failed
fail:
msg: "netchecker tests failed"
- name: Check connectivity with all netchecker agents
vars:
connectivity_check_result: "{{ connectivity_check.content | from_json }}"
agents_check_result: "{{ agents.content | from_json }}"
assert:
that:
- agents_check_result is defined
- connectivity_check_result is defined
- agents_check_result.keys() | length > 0
- not connectivity_check_result.Absent
- not connectivity_check_result.Outdated
msg: "Connectivity check to netchecker agents failed"
delegate_to: "{{ groups['kube_control_plane'][0] }}"
run_once: true
- name: Create macvlan network conf
command:
cmd: "{{ bin_dir }}/kubectl create -f -"

View File

@@ -11,6 +11,8 @@
- name: Import Kubespray variables
import_role:
name: ../../roles/kubespray_defaults
- name: Install the Hydrophone for tests
import_tasks: 000_install-hydrophone.yml
- name: Testcases for apiserver
import_tasks: 010_check-apiserver.yml
when:
@@ -24,21 +26,16 @@
- name: Testcases checking pods
import_tasks: 020_check-pods-running.yml
when: ('macvlan' not in testcase)
- name: Checking CSR approver
import_tasks: 025_check-csr-request.yml
- name: Testcases for network
import_tasks: 030_check-network.yml
when: ('macvlan' not in testcase)
vars:
test_image_repo: registry.k8s.io/e2e-test-images/agnhost
test_image_tag: "2.40"
- name: Testcases for calico / advanced network
import_tasks: 040_check-network-adv.yml
when:
- ('macvlan' not in testcase)
- ('hardening' not in testcase)
vars:
agent_report_interval: 10
netcheck_namespace: default
netchecker_port: 31081
- name: Testcases for kubernetes conformance
import_tasks: 100_check-k8s-conformance.yml
delegate_to: "{{ groups['kube_control_plane'][0] }}"