Compare commits


22 Commits

Author SHA1 Message Date
Max Gautier
3f6567bba0 Add patches versions checksums (k8s binaries, runc, containerd) (#10876)
Make runc 1.1.12 and containerd 1.7.13 default
Make kubernetes 1.27.10 default
2024-02-05 07:46:01 -08:00
bo.jiang
e09a7c02a6 Fix hardcoded pod infra version
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2024-01-22 17:23:25 +01:00
Kay Yan
48b3d9c56d cleanup-for-2.23.2 (#10801) 2024-01-18 11:23:30 +01:00
Max Gautier
ca271b8a65 [2.23] Update k8s and etcd hashes + default to latest patch version (#10797)
* k8s: add hashes for 1.25.16, 1.26.12, 1.27.9

Make 1.27.9 default

* [etcd] add 3.5.10 hashes (#10566)

* Update etcd version for 1.26 and 1.27

---------

Co-authored-by: Mohamed Omar Zaian <mohamedzaian@gmail.com>
2024-01-16 15:55:38 +01:00
Max Gautier
c264ae3016 Fix download retry when get_url has no status_code. (#10613) (#10791)
* Fix download retry when get_url has no status_code.

* Fix until clause in download role.

Co-authored-by: Romain <58464216+RomainMou@users.noreply.github.com>
2024-01-15 09:22:47 +01:00
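The retry fix above hinges on guarding an `until` condition against a missing `status_code` attribute (get_url can omit it when the connection itself fails, before any HTTP status is returned). A minimal sketch of that pattern, with illustrative task and variable names rather than the actual ones from the download role:

```yaml
# Sketch only: names are illustrative, not kubespray's download role.
- name: Download file (retry tolerates a missing status_code)
  ansible.builtin.get_url:
    url: "{{ file_url }}"
    dest: "{{ file_dest }}"
  register: get_url_result
  # default(-1) keeps the comparison valid when no HTTP status was
  # returned at all; 304 ("Not Modified") also counts as success here.
  until: >-
    get_url_result is succeeded
    or get_url_result.status_code | default(-1) == 304
  retries: 4
  delay: 5
```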
Max Gautier
1bcd7395fa [2.23] Bump galaxy.yml to next expected version (#10728)
* Bump galaxy.yml to next expected version

* Refactor check_galaxy + fix version (#10729)

* Remove checks for docs using exact tags

Instead, use more generic documentation for installing kubespray as a
collection from git.

* Check that we upgraded galaxy.yml to next version

This is only intended to check for human error. The version in galaxy
should be the next expected one (which differs depending on whether we're
on master or a release branch).

* Set collection version to KUBESPRAY_NEXT_VERSION
2024-01-12 10:42:48 +01:00
Max Gautier
3d76c30354 [2.23] Fix calico-node in etcd mode (#10768)
* CI: Document the 'all-in-one' layout + small refactoring (#10725)

* Rename aio to all-in-one and document it

ADTM.
Acronyms don't tell much.

* Refactor vm_count in tests provisioning

* Add test case for calico using etcd datastore (#10722)

* Add multinode ci layout

* Add test case for calico using etcd datastore

* Fix calico-node in etcd mode (#10438)

* Calico : add ETCD endpoints to install-cni container

* Calico : remove nodename from configmap in etcd mode

---------

Co-authored-by: Olivier Levitt <olivier.levitt@gmail.com>
2024-01-12 04:11:00 +01:00
Kay Yan
20a9e20c5a bump vagrant 2.3.7 (#10788) 2024-01-11 12:07:04 +01:00
Max Gautier
e4be213cf7 Disable podCIDR allocation from control-plane when using calico (#10639) (#10715)
* Disable control plane allocating podCIDR for nodes when using calico

Calico does not use the .spec.podCIDR field for its IP address
management.
Furthermore, it can cause false positives from the kube controller
manager if kube_network_node_prefix and calico_pool_blocksize are
unaligned, which is the case with the defaults shipped by kubespray.

If the subnets obtained from kube_network_node_prefix are bigger, the
control plane would at some point conclude that it has no subnets left
for a new node, while calico would keep working without problems.

Explicitly set a default value of false for calico_ipam_host_local to
facilitate its use in templates.

* Don't default to kube_network_node_prefix for calico_pool_blocksize

They have different semantics: kube_network_node_prefix is intended to
be the size of the subnet for all pods on a node, while there can be
more than one calico block of the specified size (they are allocated on
demand).

Besides, this commit does not actually change anything, because the
current code is buggy: we never default to kube_network_node_prefix,
since the variable is defined in the role defaults.
2023-12-13 11:30:18 +01:00
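The mismatch described in the commit above is simple arithmetic. The sketch below is illustrative, not kubespray code, and assumes the shipped defaults mentioned in the message: a /18 pod CIDR, kube_network_node_prefix of 24, and calico_pool_blocksize of 26.

```python
# Illustrative arithmetic, not kubespray code. Assumed defaults:
# pod CIDR /18, kube_network_node_prefix=24, calico_pool_blocksize=26.
pod_cidr_prefix = 18
kube_network_node_prefix = 24  # control plane hands out one /24 per node
calico_pool_blocksize = 26     # calico allocates /26 blocks on demand

# Node subnets the control plane believes the pool can hold:
node_subnets = 2 ** (kube_network_node_prefix - pod_cidr_prefix)
# Blocks calico can actually allocate from the same pool:
calico_blocks = 2 ** (calico_pool_blocksize - pod_cidr_prefix)

print(node_subnets)   # 64
print(calico_blocks)  # 256
```

With aligned prefixes the two counts would track each other; here the control plane can "run out" of /24 subnets while calico still has blocks to hand out.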
Max Gautier
0107dbc29c [2.23] kubernetes: hashes for 1.27.8, 1.26.11, default to 1.27.8 (#10706)
* kubernetes: add hashes for 1.27.8, 1.26.11

Make 1.27.8 default.

* Convert exoscale tf provider to new version (#10646)

This is untested. It passes terraform validate, to un-break the CI.

* Update 0040-verify-settings.yml (#10699)

remove embedded template

---------

Co-authored-by: piwinkler <9642809+piwinkler@users.noreply.github.com>
2023-12-11 17:26:26 +01:00
Khanh Ngo Van Kim
72da838519 fix: invalid version check in containerd jinja-template config (#10620) 2023-11-17 14:34:00 +01:00
Romain
10679ebb5d [download] Don't fail on 304 Not Modified (#10452) (#10559)
i.e. when the file was not modified since the last download

Co-authored-by: Mathieu Parent <mathieu.parent@insee.fr>
2023-10-30 17:28:43 +01:00
Mohamed Omar Zaian
8775dcf92f [ingress-nginx] Fix nginx controller leader election RBAC permissions (#10569) 2023-10-30 04:24:52 +01:00
Mohamed Omar Zaian
bd382a9c39 Change default cri-o versions for Kubernetes 1.25, 1.26 (#10563) 2023-10-30 04:24:45 +01:00
Mohamed Omar Zaian
ffacfe3ede Add crictl 1.26.1 for Kubernetes v1.26 (#10562) 2023-10-30 04:20:44 +01:00
Unai Arríen
7dcc22fe8c Migrate node-role.kubernetes.io/master to node-role.kubernetes.io/control-plane (#10532)
* Migrate node-role.kubernetes.io/master to node-role.kubernetes.io/control-plane

* Migrate node-role.kubernetes.io/master to node-role.kubernetes.io/control-plane

* Migrate node-role.kubernetes.io/master to node-role.kubernetes.io/control-plane

* Migrate node-role.kubernetes.io/master to node-role.kubernetes.io/control-plane
2023-10-25 18:14:32 +02:00
Mohamed Omar Zaian
47ed2b115d [kubernetes] Add hashes for kubernetes 1.27.7, 1.26.10, 1.25.15 (#10543) 2023-10-19 14:29:50 +02:00
Feruzjon Muyassarov
b9fc4ec43e Refactor NRI activation for containerd and CRI-O (#10470)
Refactor NRI (Node Resource Interface) activation in CRI-O and
containerd. Introduce a shared variable, nri_enabled, to streamline
the process. Currently, enabling NRI requires a separate update of
defaults for each container runtime independently, without any
verification of NRI support for the specific version of containerd
or CRI-O in use.

With this commit, the previous approach is replaced. Now, a single
variable, nri_enabled, handles this functionality. This commit also
moves the responsibility of verifying NRI-supported versions of
containerd and CRI-O from cluster administrators to Ansible.

Signed-off-by: Feruzjon Muyassarov <feruzjon.muyassarov@intel.com>
(cherry picked from commit 1fd31ccc28)
2023-10-06 23:24:19 +02:00
Feruzjon Muyassarov
7bd757da5f Add configuration option for NRI in crio & containerd (#10454)
* [containerd] Add Configuration option for Node Resource Interface

Node Resource Interface (NRI) is a common framework for plugging
domain- or vendor-specific custom logic into container runtimes like
containerd. With this commit, we introduce the
containerd_disable_nri configuration flag, giving cluster
administrators the flexibility to opt in or out (defaulting to 'out')
of this feature in containerd. In line with containerd's default
configuration, NRI is disabled by default in this containerd role's
defaults.

Signed-off-by: Feruzjon Muyassarov <feruzjon.muyassarov@intel.com>

* [cri-o] Add configuration option for Node Resource Interface

Node Resource Interface (NRI) is a common framework for plugging
domain- or vendor-specific custom logic into container
runtimes like containerd/crio. With this commit, we introduce the
crio_enable_nri configuration flag, giving cluster
administrators the flexibility to opt in or out (defaulting to 'out')
of this feature in the cri-o runtime. In line with crio's default
configuration, NRI is disabled by default in this cri-o role's
defaults.

Signed-off-by: Feruzjon Muyassarov <feruzjon.muyassarov@intel.com>

---------

Signed-off-by: Feruzjon Muyassarov <feruzjon.muyassarov@intel.com>
(cherry picked from commit f964b3438d)
2023-10-06 23:24:19 +02:00
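Taken together with the refactor above, enabling NRI moves from two per-runtime flags to one shared one. A hedged before/after sketch in inventory group_vars (the flag names come from the commits above; the placement is illustrative):

```yaml
# Before the refactor: per-runtime flags, updated independently.
# containerd_disable_nri: false   # containerd
# crio_enable_nri: true           # cri-o

# After the refactor: a single shared flag; Ansible checks that the
# containerd / cri-o version in use actually supports NRI.
nri_enabled: true
```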
Mohamed Omar Zaian
9dc2092042 [etcd] make etcd 3.5.9 default (#10483) 2023-09-29 00:22:45 -07:00
Hans Kristian Moen
c7cfd32c40 [cilium] fix: invalid hubble yaml if cilium_hubble_tls_generate is enabled (#10430) (#10476)
Co-authored-by: Toon Albers <45094749+toonalbers@users.noreply.github.com>
2023-09-26 05:01:28 -07:00
Boris Barnier
a4b0656d9b [2.23] Add hashes for kubernetes version 1.25.14, 1.27.6 & 1.26.9 (#10443)
* Add hashes for kubernetes version 1.27.6 & 1.26.9

Signed-off-by: Boris Barnier <bozzo@users.noreply.github.com>

* Add hashes for kubernetes version 1.25.14

Signed-off-by: Boris Barnier <bozzo@users.noreply.github.com>

---------

Signed-off-by: Boris Barnier <bozzo@users.noreply.github.com>
2023-09-18 07:18:32 -07:00
242 changed files with 2532 additions and 2214 deletions


@@ -5,4 +5,4 @@ roles/kubernetes/control-plane/defaults/main/main.yml jinja[spacing]
roles/kubernetes/kubeadm/defaults/main.yml jinja[spacing]
roles/kubernetes/node/defaults/main.yml jinja[spacing]
roles/kubernetes/preinstall/defaults/main.yml jinja[spacing]
roles/kubespray-defaults/defaults/main/main.yml jinja[spacing]
roles/kubespray-defaults/defaults/main.yaml jinja[spacing]

.github/ISSUE_TEMPLATE/bug-report.md vendored Normal file

@@ -0,0 +1,44 @@
---
name: Bug Report
about: Report a bug encountered while operating Kubernetes
labels: kind/bug
---
<!--
Please, be ready for followup questions, and please respond in a timely
manner. If we can't reproduce a bug or think a feature already exists, we
might close your issue. If we're wrong, PLEASE feel free to reopen it and
explain why.
-->
**Environment**:
- **Cloud provider or hardware configuration:**
- **OS (`printf "$(uname -srm)\n$(cat /etc/os-release)\n"`):**
- **Version of Ansible** (`ansible --version`):
- **Version of Python** (`python --version`):
**Kubespray version (commit) (`git rev-parse --short HEAD`):**
**Network plugin used**:
**Full inventory with variables (`ansible -i inventory/sample/inventory.ini all -m debug -a "var=hostvars[inventory_hostname]"`):**
<!-- We recommend using snippets services like https://gist.github.com/ etc. -->
**Command used to invoke ansible**:
**Output of ansible run**:
<!-- We recommend using snippets services like https://gist.github.com/ etc. -->
**Anything else we need to know**:
<!-- By running scripts/collect-info.yaml you can get a lot of useful information.
The script can be started with:
ansible-playbook -i <inventory_file_path> -u <ssh_user> -e ansible_ssh_user=<ssh_user> -b --become-user=root -e dir=`pwd` scripts/collect-info.yaml
(If you are using CoreOS, remember to add '-e ansible_python_interpreter=/opt/bin/python').
After running this command you can find the logs in `pwd`/logs.tar.gz. You can even upload the entire file somewhere and paste the link here.-->


@@ -1,117 +0,0 @@
---
name: Bug Report
description: Report a bug encountered while using Kubespray
labels: kind/bug
body:
- type: markdown
attributes:
value: |
Please, be ready for followup questions, and please respond in a timely
manner. If we can't reproduce a bug or think a feature already exists, we
might close your issue. If we're wrong, PLEASE feel free to reopen it and
explain why.
- type: textarea
id: problem
attributes:
label: What happened?
description: |
Please provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner.
validations:
required: true
- type: textarea
id: expected
attributes:
label: What did you expect to happen?
validations:
required: true
- type: textarea
id: repro
attributes:
label: How can we reproduce it (as minimally and precisely as possible)?
validations:
required: true
- type: markdown
attributes:
value: '### Environment'
- type: textarea
id: os
attributes:
label: OS
placeholder: 'printf "$(uname -srm)\n$(cat /etc/os-release)\n"'
validations:
required: true
- type: textarea
id: ansible_version
attributes:
label: Version of Ansible
placeholder: 'ansible --version'
validations:
required: true
- type: input
id: python_version
attributes:
label: Version of Python
placeholder: 'python --version'
validations:
required: true
- type: input
id: kubespray_version
attributes:
label: Version of Kubespray (commit)
placeholder: 'git rev-parse --short HEAD'
validations:
required: true
- type: dropdown
id: network_plugin
attributes:
label: Network plugin used
options:
- calico
- cilium
- cni
- custom_cni
- flannel
- kube-ovn
- kube-router
- macvlan
- meta
- multus
- ovn4nfv
- weave
validations:
required: true
- type: textarea
id: inventory
attributes:
label: Full inventory with variables
placeholder: 'ansible -i inventory/sample/inventory.ini all -m debug -a "var=hostvars[inventory_hostname]"'
- type: input
id: ansible_command
attributes:
label: Command used to invoke ansible
- type: textarea
id: ansible_output
attributes:
label: Output of ansible run
description: We recommend using snippets services like https://gist.github.com/ etc.
- type: textarea
id: anything_else
attributes:
label: Anything else we need to know
description: |
By running scripts/collect-info.yaml you can get a lot of useful information.
The script can be started with:
ansible-playbook -i <inventory_file_path> -u <ssh_user> -e ansible_ssh_user=<ssh_user> -b --become-user=root -e dir=`pwd` scripts/collect-info.yaml
(If you are using CoreOS, remember to add '-e ansible_python_interpreter=/opt/bin/python').
After running this command you can find the logs in `pwd`/logs.tar.gz. You can even upload the entire file somewhere and paste the link here


@@ -1,5 +0,0 @@
---
contact_links:
- name: Support Request
url: https://kubernetes.slack.com/channels/kubespray
about: Support request or question relating to Kubernetes

.github/ISSUE_TEMPLATE/enhancement.md vendored Normal file

@@ -0,0 +1,11 @@
---
name: Enhancement Request
about: Suggest an enhancement to the Kubespray project
labels: kind/feature
---
<!-- Please only use this template for submitting enhancement requests -->
**What would you like to be added**:
**Why is this needed**:


@@ -1,20 +0,0 @@
---
name: Enhancement Request
description: Suggest an enhancement to the Kubespray project
labels: kind/feature
body:
- type: markdown
attributes:
value: Please only use this template for submitting enhancement requests
- type: textarea
id: what
attributes:
label: What would you like to be added
validations:
required: true
- type: textarea
id: why
attributes:
label: Why is this needed
validations:
required: true

.github/ISSUE_TEMPLATE/failing-test.md vendored Normal file

@@ -0,0 +1,20 @@
---
name: Failing Test
about: Report test failures in Kubespray CI jobs
labels: kind/failing-test
---
<!-- Please only use this template for submitting reports about failing tests in Kubespray CI jobs -->
**Which jobs are failing**:
**Which test(s) are failing**:
**Since when has it been failing**:
**Testgrid link**:
**Reason for failure**:
**Anything else we need to know**:


@@ -1,41 +0,0 @@
---
name: Failing Test
description: Report test failures in Kubespray CI jobs
labels: kind/failing-test
body:
- type: markdown
attributes:
value: Please only use this template for submitting reports about failing tests in Kubespray CI jobs
- type: textarea
id: failing_jobs
attributes:
label: Which jobs are failing ?
validations:
required: true
- type: textarea
id: failing_tests
attributes:
label: Which tests are failing ?
validations:
required: true
- type: input
id: since_when
attributes:
label: Since when has it been failing ?
validations:
required: true
- type: textarea
id: failure_reason
attributes:
label: Reason for failure
description: If you don't know and have no guess, just put "Unknown"
validations:
required: true
- type: textarea
id: anything_else
attributes:
label: Anything else we need to know

.github/ISSUE_TEMPLATE/support.md vendored Normal file

@@ -0,0 +1,18 @@
---
name: Support Request
about: Support request or question relating to Kubespray
labels: kind/support
---
<!--
STOP -- PLEASE READ!
GitHub is not the right place for support requests.
If you're looking for help, check [Stack Overflow](https://stackoverflow.com/questions/tagged/kubespray) and the [troubleshooting guide](https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/).
You can also post your question on the [Kubernetes Slack](http://slack.k8s.io/) or the [Discuss Kubernetes](https://discuss.kubernetes.io/) forum.
If the matter is security related, please disclose it privately via https://kubernetes.io/security/.
-->


@@ -9,7 +9,7 @@ stages:
- deploy-special
variables:
KUBESPRAY_VERSION: v2.23.2
KUBESPRAY_VERSION: v2.22.1
FAILFASTCI_NAMESPACE: 'kargo-ci'
GITLAB_REPOSITORY: 'kargo-ci/kubernetes-sigs-kubespray'
ANSIBLE_FORCE_COLOR: "true"


@@ -27,14 +27,6 @@ ansible-lint:
- ansible-lint -v
except: ['triggers', 'master']
jinja-syntax-check:
extends: .job
stage: unit-tests
tags: [light]
script:
- "find -name '*.j2' -exec tests/scripts/check-templates.py {} +"
except: ['triggers', 'master']
syntax-check:
extends: .job
stage: unit-tests


@@ -265,11 +265,6 @@ packet_debian11-kubelet-csr-approver:
extends: .packet_pr
when: manual
packet_debian12-custom-cni-helm:
stage: deploy-part2
extends: .packet_pr
when: manual
# ### PR JOBS PART3
# Long jobs (45min+)


@@ -69,12 +69,3 @@ repos:
entry: tests/scripts/md-table/test.sh
language: script
pass_filenames: false
- id: jinja-syntax-check
name: jinja-syntax-check
entry: tests/scripts/check-templates.py
language: python
types:
- jinja
additional_dependencies:
- Jinja2


@@ -1,8 +1,5 @@
# syntax=docker/dockerfile:1
# Use immutable image tags rather than mutable tags (like ubuntu:22.04)
FROM ubuntu:22.04@sha256:149d67e29f765f4db62aa52161009e99e389544e25a8f43c8c89d4a445a7ca37
FROM ubuntu:jammy-20230308
# Some tools like yamllint need this
# Pip needs this as well at the moment to install ansible
# (and potentially other packages)
@@ -10,37 +7,7 @@ FROM ubuntu:22.04@sha256:149d67e29f765f4db62aa52161009e99e389544e25a8f43c8c89d4a
ENV LANG=C.UTF-8 \
DEBIAN_FRONTEND=noninteractive \
PYTHONDONTWRITEBYTECODE=1
WORKDIR /kubespray
# hadolint ignore=DL3008
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
apt-get update -q \
&& apt-get install -yq --no-install-recommends \
curl \
python3 \
python3-pip \
sshpass \
vim \
rsync \
openssh-client \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /var/log/*
RUN --mount=type=bind,source=requirements.txt,target=requirements.txt \
--mount=type=cache,sharing=locked,id=pipcache,mode=0777,target=/root/.cache/pip \
pip install --no-compile --no-cache-dir -r requirements.txt \
&& find /usr -type d -name '*__pycache__' -prune -exec rm -rf {} \;
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
RUN --mount=type=bind,source=roles/kubespray-defaults/defaults/main/main.yml,target=roles/kubespray-defaults/defaults/main/main.yml \
KUBE_VERSION=$(sed -n 's/^kube_version: //p' roles/kubespray-defaults/defaults/main/main.yml) \
OS_ARCHITECTURE=$(dpkg --print-architecture) \
&& curl -L "https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/${OS_ARCHITECTURE}/kubectl" -o /usr/local/bin/kubectl \
&& echo "$(curl -L "https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/${OS_ARCHITECTURE}/kubectl.sha256")" /usr/local/bin/kubectl | sha256sum --check \
&& chmod a+x /usr/local/bin/kubectl
COPY *.yml ./
COPY *.cfg ./
COPY roles ./roles
@@ -50,3 +17,29 @@ COPY library ./library
COPY extra_playbooks ./extra_playbooks
COPY playbooks ./playbooks
COPY plugins ./plugins
RUN apt update -q \
&& apt install -yq --no-install-recommends \
curl \
python3 \
python3-pip \
sshpass \
vim \
rsync \
openssh-client \
&& pip install --no-compile --no-cache-dir \
ansible==7.6.0 \
ansible-core==2.14.6 \
cryptography==41.0.1 \
jinja2==3.1.2 \
netaddr==0.8.0 \
jmespath==1.0.1 \
MarkupSafe==2.1.3 \
ruamel.yaml==0.17.21 \
passlib==1.7.4 \
&& KUBE_VERSION=$(sed -n 's/^kube_version: //p' roles/kubespray-defaults/defaults/main.yaml) \
&& curl -L https://dl.k8s.io/release/$KUBE_VERSION/bin/linux/$(dpkg --print-architecture)/kubectl -o /usr/local/bin/kubectl \
&& echo $(curl -L https://dl.k8s.io/release/$KUBE_VERSION/bin/linux/$(dpkg --print-architecture)/kubectl.sha256) /usr/local/bin/kubectl | sha256sum --check \
&& chmod a+x /usr/local/bin/kubectl \
&& rm -rf /var/lib/apt/lists/* /var/log/* \
&& find /usr -type d -name '*__pycache__' -prune -exec rm -rf {} \;


@@ -24,7 +24,6 @@ aliases:
- mzaian
- mrfreezeex
- erikjiang
- vannten
kubespray-emeritus_approvers:
- riverzhang
- atoms


@@ -75,8 +75,8 @@ You will then need to use [bind mounts](https://docs.docker.com/storage/bind-mou
to access the inventory and SSH key in the container, like this:
```ShellSession
git checkout v2.24.1
docker pull quay.io/kubespray/kubespray:v2.24.1
git checkout v2.23.2
docker pull quay.io/kubespray/kubespray:v2.23.2
docker run --rm -it --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \
--mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
quay.io/kubespray/kubespray:v2.23.2 bash
@@ -161,28 +161,28 @@ Note: Upstart/SysV init based OS types are not supported.
## Supported Components
- Core
- [kubernetes](https://github.com/kubernetes/kubernetes) v1.28.6
- [kubernetes](https://github.com/kubernetes/kubernetes) v1.27.10
- [etcd](https://github.com/etcd-io/etcd) v3.5.10
- [docker](https://www.docker.com/) v20.10 (see note)
- [containerd](https://containerd.io/) v1.7.13
- [cri-o](http://cri-o.io/) v1.27 (experimental: see [CRI-O Note](docs/cri-o.md). Only on fedora, ubuntu and centos based OS)
- Network Plugin
- [cni-plugins](https://github.com/containernetworking/plugins) v1.2.0
- [calico](https://github.com/projectcalico/calico) v3.26.4
- [calico](https://github.com/projectcalico/calico) v3.25.2
- [cilium](https://github.com/cilium/cilium) v1.13.4
- [flannel](https://github.com/flannel-io/flannel) v0.22.0
- [kube-ovn](https://github.com/alauda/kube-ovn) v1.11.5
- [kube-router](https://github.com/cloudnativelabs/kube-router) v2.0.0
- [kube-router](https://github.com/cloudnativelabs/kube-router) v1.5.1
- [multus](https://github.com/k8snetworkplumbingwg/multus-cni) v3.8
- [weave](https://github.com/weaveworks/weave) v2.8.1
- [kube-vip](https://github.com/kube-vip/kube-vip) v0.5.12
- Application
- [cert-manager](https://github.com/jetstack/cert-manager) v1.13.2
- [cert-manager](https://github.com/jetstack/cert-manager) v1.11.1
- [coredns](https://github.com/coredns/coredns) v1.10.1
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v1.9.4
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v1.8.1
- [krew](https://github.com/kubernetes-sigs/krew) v0.4.4
- [argocd](https://argoproj.github.io/) v2.8.4
- [helm](https://helm.sh/) v3.13.1
- [argocd](https://argoproj.github.io/) v2.8.0
- [helm](https://helm.sh/) v3.12.3
- [metallb](https://metallb.universe.tf/) v0.13.9
- [registry](https://github.com/distribution/distribution) v2.8.1
- Storage Plugin
@@ -202,7 +202,7 @@ Note: Upstart/SysV init based OS types are not supported.
## Requirements
- **Minimum required version of Kubernetes is v1.26**
- **Minimum required version of Kubernetes is v1.25**
- **Ansible v2.14+, Jinja 2.11+ and python-netaddr is installed on the machine that will run Ansible commands**
- The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](docs/offline-environment.md))
- The target servers are configured to allow **IPv4 forwarding**.


@@ -5,9 +5,7 @@
Container image collecting script for offline deployment
This script has two features:
(1) Get container images from an environment which is deployed online.
(2) Deploy a local container registry and register the container images to the registry.
Step (1) should be done at the online site as preparation; then we bring the obtained images
@@ -29,7 +27,7 @@ manage-offline-container-images.sh register
## generate_list.sh
This script generates the list of downloaded files and the list of container images from the `roles/kubespray-defaults/defaults/main/download.yml` file.
This script generates the list of downloaded files and the list of container images from the `roles/download/defaults/main/main.yml` file.
Running this script will execute the `generate_list.yml` playbook in the kubespray root directory and generate four files:
all downloaded file URLs in files.list, all container images in images.list, and jinja2 templates in *.template.


@@ -5,7 +5,7 @@ CURRENT_DIR=$(cd $(dirname $0); pwd)
TEMP_DIR="${CURRENT_DIR}/temp"
REPO_ROOT_DIR="${CURRENT_DIR%/contrib/offline}"
: ${DOWNLOAD_YML:="roles/kubespray-defaults/defaults/main/download.yml"}
: ${DOWNLOAD_YML:="roles/download/defaults/main/main.yml"}
mkdir -p ${TEMP_DIR}
@@ -19,7 +19,7 @@ sed -n '/^downloads:/,/download_defaults:/p' ${REPO_ROOT_DIR}/${DOWNLOAD_YML} \
| sed 'N;s#\n# #g' | tr ' ' ':' | sed 's/\"//g' > ${TEMP_DIR}/images.list.template
# add kube-* images to images list template
# Those container images are downloaded by kubeadm, then roles/kubespray-defaults/defaults/main/download.yml
# Those container images are downloaded by kubeadm, then roles/download/defaults/main/main.yml
# doesn't contain those images. That is why those images need to be put
# into the list separately.
KUBE_IMAGES="kube-apiserver kube-controller-manager kube-scheduler kube-proxy"


@@ -23,8 +23,8 @@ function create_container_image_tar() {
mkdir ${IMAGE_DIR}
cd ${IMAGE_DIR}
sudo ${runtime} pull registry:latest
sudo ${runtime} save -o registry-latest.tar registry:latest
sudo docker pull registry:latest
sudo docker save -o registry-latest.tar registry:latest
for image in ${IMAGES}
do
@@ -32,7 +32,7 @@ function create_container_image_tar() {
set +e
for step in $(seq 1 ${RETRY_COUNT})
do
sudo ${runtime} pull ${image}
sudo docker pull ${image}
if [ $? -eq 0 ]; then
break
fi
@@ -42,7 +42,7 @@ function create_container_image_tar() {
fi
done
set -e
sudo ${runtime} save -o ${FILE_NAME} ${image}
sudo docker save -o ${FILE_NAME} ${image}
# NOTE: Here we remove the following repo parts from each image
# so that these parts will be replaced with Kubespray.
@@ -95,16 +95,16 @@ function register_container_images() {
sed -i s@"HOSTNAME"@"${LOCALHOST_NAME}"@ ${TEMP_DIR}/registries.conf
sudo cp ${TEMP_DIR}/registries.conf /etc/containers/registries.conf
else
echo "runtime package (docker-ce, podman, nerdctl, etc.) should be installed"
echo "docker package(docker-ce, etc.) should be installed"
exit 1
fi
tar -zxvf ${IMAGE_TAR_FILE}
sudo ${runtime} load -i ${IMAGE_DIR}/registry-latest.tar
sudo docker load -i ${IMAGE_DIR}/registry-latest.tar
set +e
sudo ${runtime} container inspect registry >/dev/null 2>&1
sudo docker container inspect registry >/dev/null 2>&1
if [ $? -ne 0 ]; then
sudo ${runtime} run --restart=always -d -p 5000:5000 --name registry registry:latest
sudo docker run --restart=always -d -p 5000:5000 --name registry registry:latest
fi
set -e
@@ -112,8 +112,8 @@ function register_container_images() {
file_name=$(echo ${line} | awk '{print $1}')
raw_image=$(echo ${line} | awk '{print $2}')
new_image="${LOCALHOST_NAME}:5000/${raw_image}"
org_image=$(sudo ${runtime} load -i ${IMAGE_DIR}/${file_name} | head -n1 | awk '{print $3}')
image_id=$(sudo ${runtime} image inspect ${org_image} | grep "\"Id\":" | awk -F: '{print $3}'| sed s/'\",'//)
org_image=$(sudo docker load -i ${IMAGE_DIR}/${file_name} | head -n1 | awk '{print $3}')
image_id=$(sudo docker image inspect ${org_image} | grep "\"Id\":" | awk -F: '{print $3}'| sed s/'\",'//)
if [ -z "${file_name}" ]; then
echo "Failed to get file_name for line ${line}"
exit 1
@@ -130,9 +130,9 @@ function register_container_images() {
echo "Failed to get image_id for file ${file_name}"
exit 1
fi
sudo ${runtime} load -i ${IMAGE_DIR}/${file_name}
sudo ${runtime} tag ${image_id} ${new_image}
sudo ${runtime} push ${new_image}
sudo docker load -i ${IMAGE_DIR}/${file_name}
sudo docker tag ${image_id} ${new_image}
sudo docker push ${new_image}
done <<< "$(cat ${IMAGE_LIST})"
echo "Successfully registered container images to the local registry."
@@ -143,18 +143,6 @@ function register_container_images() {
echo "- quay_image_repo"
}
# get runtime command
if command -v nerdctl 1>/dev/null 2>&1; then
runtime="nerdctl"
elif command -v podman 1>/dev/null 2>&1; then
runtime="podman"
elif command -v docker 1>/dev/null 2>&1; then
runtime="docker"
else
echo "No supported container runtime found"
exit 1
fi
if [ "${OPTION}" == "create" ]; then
create_container_image_tar
elif [ "${OPTION}" == "register" ]; then


@@ -38,7 +38,7 @@ sudo "${runtime}" container inspect nginx >/dev/null 2>&1
if [ $? -ne 0 ]; then
sudo "${runtime}" run \
--restart=always -d -p ${NGINX_PORT}:80 \
--volume "${OFFLINE_FILES_DIR}":/usr/share/nginx/html/download \
--volume "${OFFLINE_FILES_DIR}:/usr/share/nginx/html/download" \
--volume "${CURRENT_DIR}"/nginx.conf:/etc/nginx/nginx.conf \
--name nginx nginx:alpine
fi


@@ -50,32 +50,70 @@ Example (this one assumes you are using Ubuntu)
ansible-playbook -i ./inventory/hosts ./cluster.yml -e ansible_user=ubuntu -b --become-user=root --flush-cache
```
## Using a distribution other than Ubuntu
***Using other distrib than Ubuntu***
If you want to use a distribution other than Ubuntu 18.04 (Bionic) LTS, you can modify the search filters of the 'data "aws_ami" "distro"' block in variables.tf.
To leverage a Linux distribution other than Ubuntu 18.04 (Bionic) LTS for your Terraform configurations, you can adjust the AMI search filters within the 'data "aws_ami" "distro"' block by utilizing variables in your `terraform.tfvars` file. This approach ensures a flexible configuration that adapts to various Linux distributions without directly modifying the core Terraform files.
For example, to use:
### Example Usages
- Debian Jessie, replace 'data "aws_ami" "distro"' in variables.tf with
- **Debian Jessie**: To configure the usage of Debian Jessie, insert the subsequent lines into your `terraform.tfvars`:
```ini
data "aws_ami" "distro" {
most_recent = true
```hcl
ami_name_pattern = "debian-jessie-amd64-hvm-*"
ami_owners = ["379101102735"]
```
filter {
name = "name"
values = ["debian-jessie-amd64-hvm-*"]
}
- **Ubuntu 16.04**: To utilize Ubuntu 16.04 instead, apply the following configuration in your `terraform.tfvars`:
filter {
name = "virtualization-type"
values = ["hvm"]
}
```hcl
ami_name_pattern = "ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-*"
ami_owners = ["099720109477"]
```
owners = ["379101102735"]
}
```
- **Centos 7**: For employing Centos 7, incorporate these lines into your `terraform.tfvars`:
- Ubuntu 16.04, replace 'data "aws_ami" "distro"' in variables.tf with
```hcl
ami_name_pattern = "dcos-centos7-*"
ami_owners = ["688023202711"]
```
```ini
data "aws_ami" "distro" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["099720109477"]
}
```
- Centos 7, replace 'data "aws_ami" "distro"' in variables.tf with
```ini
data "aws_ami" "distro" {
most_recent = true
filter {
name = "name"
values = ["dcos-centos7-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["688023202711"]
}
```
## Connecting to Kubernetes


@@ -20,38 +20,20 @@ variable "aws_cluster_name" {
description = "Name of AWS Cluster"
}
variable "ami_name_pattern" {
description = "The name pattern to use for AMI lookup"
type = string
default = "debian-10-amd64-*"
}
variable "ami_virtualization_type" {
description = "The virtualization type to use for AMI lookup"
type = string
default = "hvm"
}
variable "ami_owners" {
description = "The owners to use for AMI lookup"
type = list(string)
default = ["136693071363"]
}
data "aws_ami" "distro" {
most_recent = true
filter {
name = "name"
values = [var.ami_name_pattern]
values = ["debian-10-amd64-*"]
}
filter {
name = "virtualization-type"
values = [var.ami_virtualization_type]
values = ["hvm"]
}
owners = var.ami_owners
owners = ["136693071363"] # Debian-10
}
//AWS VPC Variables


@@ -7,7 +7,7 @@ terraform {
required_providers {
equinix = {
source = "equinix/equinix"
version = "1.24.0"
version = "~> 1.14"
}
}
}


@@ -1,5 +1,5 @@
.terraform
*.tfvars
!sample-inventory/cluster.tfvars
!sample-inventory\/cluster.tfvars
*.tfstate
*.tfstate.backup


@@ -318,7 +318,6 @@ k8s_nodes:
mount_path: string # Path to where the partition should be mounted
partition_start: string # Where the partition should start (e.g. 10GB ). Note, if you set the partition_start to 0 there will be no space left for the root partition
partition_end: string # Where the partition should end (e.g. 10GB or -1 for end of volume)
netplan_critical_dhcp_interface: string # Name of interface to set the dhcp flag critical = true, to circumvent [this issue](https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1776013).
```
For example:


@@ -19,8 +19,8 @@ data "cloudinit_config" "cloudinit" {
part {
content_type = "text/cloud-config"
content = templatefile("${path.module}/templates/cloudinit.yaml.tmpl", {
extra_partitions = [],
netplan_critical_dhcp_interface = ""
# template_file doesn't support lists
extra_partitions = ""
})
}
}
@@ -821,8 +821,7 @@ resource "openstack_compute_instance_v2" "k8s_nodes" {
flavor_id = each.value.flavor
key_pair = openstack_compute_keypair_v2.k8s.name
user_data = each.value.cloudinit != null ? templatefile("${path.module}/templates/cloudinit.yaml.tmpl", {
extra_partitions = each.value.cloudinit.extra_partitions,
netplan_critical_dhcp_interface = each.value.cloudinit.netplan_critical_dhcp_interface,
extra_partitions = each.value.cloudinit.extra_partitions
}) : data.cloudinit_config.cloudinit.rendered
dynamic "block_device" {


@@ -1,4 +1,4 @@
%{~ if length(extra_partitions) > 0 || netplan_critical_dhcp_interface != "" }
%{~ if length(extra_partitions) > 0 }
#cloud-config
bootcmd:
%{~ for idx, partition in extra_partitions }
@@ -8,26 +8,11 @@ bootcmd:
%{~ endfor }
runcmd:
%{~ if netplan_critical_dhcp_interface != "" }
- netplan apply
%{~ endif }
%{~ for idx, partition in extra_partitions }
- mkdir -p ${partition.mount_path}
- chown nobody:nogroup ${partition.mount_path}
- mount ${partition.partition_path} ${partition.mount_path}
%{~ endfor ~}
%{~ if netplan_critical_dhcp_interface != "" }
write_files:
- path: /etc/netplan/90-critical-dhcp.yaml
content: |
network:
version: 2
ethernets:
${ netplan_critical_dhcp_interface }:
dhcp4: true
critical: true
%{~ endif }
%{~ endfor }
mounts:
%{~ for idx, partition in extra_partitions }


@@ -142,14 +142,13 @@ variable "k8s_nodes" {
additional_server_groups = optional(list(string))
server_group = optional(string)
cloudinit = optional(object({
extra_partitions = optional(list(object({
extra_partitions = list(object({
volume_path = string
partition_path = string
partition_start = string
partition_end = string
mount_path = string
})), [])
netplan_critical_dhcp_interface = optional(string, "")
}))
}))
}))
}


@@ -140,4 +140,4 @@ terraform destroy --var-file cluster-settings.tfvars \
* `backend_servers`: List of servers that traffic to the port should be forwarded to.
* `server_groups`: Group servers together
* `servers`: The servers that should be included in the group.
* `anti_affinity_policy`: Defines whether a server group is an anti-affinity group. The value can be "strict", "yes" or "no". With "strict", servers in the same server group are never placed on the same compute host; "yes" is a best-effort policy that tries to put servers on different hosts, but this is not guaranteed.
* `anti_affinity`: If anti-affinity should be enabled, try to spread the VMs out on separate nodes.
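A minimal sketch of how these settings fit together in `cluster-settings.tfvars`, using the `anti_affinity_policy` form described above (the group and server names are illustrative):

```hcl
server_groups = {
  # Place all control plane nodes on separate compute hosts
  "control-plane" = {
    servers = [
      "control-plane-0"
    ]
    anti_affinity_policy = "strict"
  }
}
```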


@@ -18,7 +18,7 @@ ssh_public_keys = [
# check list of available plan https://developers.upcloud.com/1.3/7-plans/
machines = {
"control-plane-0" : {
"master-0" : {
"node_type" : "master",
# plan to use instead of custom cpu/mem
"plan" : null,
@@ -133,9 +133,9 @@ loadbalancers = {
server_groups = {
# "control-plane" = {
# servers = [
# "control-plane-0"
# "master-0"
# ]
# anti_affinity_policy = "strict"
# anti_affinity = true
# },
# "workers" = {
# servers = [
@@ -143,6 +143,6 @@ server_groups = {
# "worker-1",
# "worker-2"
# ]
# anti_affinity_policy = "yes"
# anti_affinity = true
# }
}


@@ -22,7 +22,7 @@ locals {
# If prefix is set, all resources will be prefixed with "${var.prefix}-"
# Else don't prefix with anything
resource-prefix = "%{if var.prefix != ""}${var.prefix}-%{endif}"
resource-prefix = "%{ if var.prefix != ""}${var.prefix}-%{ endif }"
}
resource "upcloud_network" "private" {
@@ -38,7 +38,7 @@ resource "upcloud_network" "private" {
resource "upcloud_storage" "additional_disks" {
for_each = {
for disk in local.disks : "${disk.node_name}_${disk.disk_name}" => disk.disk
for disk in local.disks: "${disk.node_name}_${disk.disk_name}" => disk.disk
}
size = each.value.size
@@ -165,7 +165,7 @@ resource "upcloud_firewall_rules" "master" {
for_each = upcloud_server.master
server_id = each.value.id
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = var.master_allowed_remote_ips
content {
@@ -181,7 +181,7 @@ resource "upcloud_firewall_rules" "master" {
}
}
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = length(var.master_allowed_remote_ips) > 0 ? [1] : []
content {
@@ -197,7 +197,7 @@ resource "upcloud_firewall_rules" "master" {
}
}
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = var.k8s_allowed_remote_ips
content {
@@ -213,7 +213,7 @@ resource "upcloud_firewall_rules" "master" {
}
}
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = length(var.k8s_allowed_remote_ips) > 0 ? [1] : []
content {
@@ -229,7 +229,7 @@ resource "upcloud_firewall_rules" "master" {
}
}
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = var.master_allowed_ports
content {
@@ -245,7 +245,7 @@ resource "upcloud_firewall_rules" "master" {
}
}
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = var.firewall_default_deny_in ? ["tcp", "udp"] : []
content {
@@ -261,7 +261,7 @@ resource "upcloud_firewall_rules" "master" {
}
}
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = var.firewall_default_deny_in ? ["tcp", "udp"] : []
content {
@@ -277,7 +277,7 @@ resource "upcloud_firewall_rules" "master" {
}
}
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = var.firewall_default_deny_in ? ["tcp", "udp"] : []
content {
@@ -293,7 +293,7 @@ resource "upcloud_firewall_rules" "master" {
}
}
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = var.firewall_default_deny_in ? ["tcp", "udp"] : []
content {
@@ -309,7 +309,7 @@ resource "upcloud_firewall_rules" "master" {
}
}
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = var.firewall_default_deny_in ? ["udp"] : []
content {
@@ -325,7 +325,7 @@ resource "upcloud_firewall_rules" "master" {
}
}
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = var.firewall_default_deny_in ? ["udp"] : []
content {
@@ -354,7 +354,7 @@ resource "upcloud_firewall_rules" "k8s" {
for_each = upcloud_server.worker
server_id = each.value.id
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = var.k8s_allowed_remote_ips
content {
@@ -370,7 +370,7 @@ resource "upcloud_firewall_rules" "k8s" {
}
}
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = length(var.k8s_allowed_remote_ips) > 0 ? [1] : []
content {
@@ -386,7 +386,7 @@ resource "upcloud_firewall_rules" "k8s" {
}
}
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = var.worker_allowed_ports
content {
@@ -402,7 +402,7 @@ resource "upcloud_firewall_rules" "k8s" {
}
}
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = var.firewall_default_deny_in ? ["tcp", "udp"] : []
content {
@@ -418,7 +418,7 @@ resource "upcloud_firewall_rules" "k8s" {
}
}
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = var.firewall_default_deny_in ? ["tcp", "udp"] : []
content {
@@ -434,7 +434,7 @@ resource "upcloud_firewall_rules" "k8s" {
}
}
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = var.firewall_default_deny_in ? ["tcp", "udp"] : []
content {
@@ -450,7 +450,7 @@ resource "upcloud_firewall_rules" "k8s" {
}
}
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = var.firewall_default_deny_in ? ["tcp", "udp"] : []
content {
@@ -466,7 +466,7 @@ resource "upcloud_firewall_rules" "k8s" {
}
}
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = var.firewall_default_deny_in ? ["udp"] : []
content {
@@ -482,7 +482,7 @@ resource "upcloud_firewall_rules" "k8s" {
}
}
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = var.firewall_default_deny_in ? ["udp"] : []
content {
@@ -535,7 +535,7 @@ resource "upcloud_loadbalancer_frontend" "lb_frontend" {
resource "upcloud_loadbalancer_static_backend_member" "lb_backend_member" {
for_each = {
for be_server in local.lb_backend_servers :
for be_server in local.lb_backend_servers:
"${be_server.server_name}-lb-backend-${be_server.lb_name}" => be_server
if var.loadbalancer_enabled
}
@@ -552,7 +552,7 @@ resource "upcloud_loadbalancer_static_backend_member" "lb_backend_member" {
resource "upcloud_server_group" "server_groups" {
for_each = var.server_groups
title = each.key
anti_affinity_policy = each.value.anti_affinity_policy
anti_affinity = each.value.anti_affinity
labels = {}
members = [for server in each.value.servers : merge(upcloud_server.master, upcloud_server.worker)[server].id]
}


@@ -3,8 +3,8 @@ output "master_ip" {
value = {
for instance in upcloud_server.master :
instance.hostname => {
"public_ip" : instance.network_interface[0].ip_address
"private_ip" : instance.network_interface[1].ip_address
"public_ip": instance.network_interface[0].ip_address
"private_ip": instance.network_interface[1].ip_address
}
}
}
@@ -13,8 +13,8 @@ output "worker_ip" {
value = {
for instance in upcloud_server.worker :
instance.hostname => {
"public_ip" : instance.network_interface[0].ip_address
"private_ip" : instance.network_interface[1].ip_address
"public_ip": instance.network_interface[0].ip_address
"private_ip": instance.network_interface[1].ip_address
}
}
}


@@ -99,7 +99,7 @@ variable "server_groups" {
description = "Server groups"
type = map(object({
anti_affinity_policy = string
anti_affinity = bool
servers = list(string)
}))
}


@@ -3,7 +3,7 @@ terraform {
required_providers {
upcloud = {
source = "UpCloudLtd/upcloud"
version = "~>2.12.0"
version = "~>2.7.1"
}
}
required_version = ">= 0.13"


@@ -18,7 +18,7 @@ ssh_public_keys = [
# check list of available plan https://developers.upcloud.com/1.3/7-plans/
machines = {
"control-plane-0" : {
"master-0" : {
"node_type" : "master",
# plan to use instead of custom cpu/mem
"plan" : null,
@@ -28,7 +28,7 @@ machines = {
"mem" : "4096"
# The size of the storage in GB
"disk_size" : 250
"additional_disks" : {}
"additional_disks": {}
},
"worker-0" : {
"node_type" : "worker",
@@ -40,7 +40,7 @@ machines = {
"mem" : "4096"
# The size of the storage in GB
"disk_size" : 250
"additional_disks" : {
"additional_disks": {
# "some-disk-name-1": {
# "size": 100,
# "tier": "maxiops",
@@ -61,7 +61,7 @@ machines = {
"mem" : "4096"
# The size of the storage in GB
"disk_size" : 250
"additional_disks" : {
"additional_disks": {
# "some-disk-name-1": {
# "size": 100,
# "tier": "maxiops",
@@ -82,7 +82,7 @@ machines = {
"mem" : "4096"
# The size of the storage in GB
"disk_size" : 250
"additional_disks" : {
"additional_disks": {
# "some-disk-name-1": {
# "size": 100,
# "tier": "maxiops",
@@ -134,9 +134,9 @@ loadbalancers = {
server_groups = {
# "control-plane" = {
# servers = [
# "control-plane-0"
# "master-0"
# ]
# anti_affinity_policy = "strict"
# anti_affinity = true
# },
# "workers" = {
# servers = [
@@ -144,6 +144,6 @@ server_groups = {
# "worker-1",
# "worker-2"
# ]
# anti_affinity_policy = "yes"
# anti_affinity = true
# }
}


@@ -136,7 +136,7 @@ variable "server_groups" {
description = "Server groups"
type = map(object({
anti_affinity_policy = string
anti_affinity = bool
servers = list(string)
}))


@@ -3,7 +3,7 @@ terraform {
required_providers {
upcloud = {
source = "UpCloudLtd/upcloud"
version = "~>2.12.0"
version = "~>2.7.1"
}
}
required_version = ">= 0.13"


@@ -13,7 +13,6 @@
* CNI
* [Calico](docs/calico.md)
* [Flannel](docs/flannel.md)
* [Cilium](docs/cilium.md)
* [Kube Router](docs/kube-router.md)
* [Kube OVN](docs/kube-ovn.md)
* [Weave](docs/weave.md)
@@ -30,6 +29,7 @@
* [Equinix Metal](/docs/equinix-metal.md)
* [vSphere](/docs/vsphere.md)
* [Operating Systems](docs/bootstrap-os.md)
* [Debian](docs/debian.md)
* [Flatcar Container Linux](docs/flatcar.md)
* [Fedora CoreOS](docs/fcos.md)
* [OpenSUSE](docs/opensuse.md)


@@ -32,7 +32,7 @@ Based on the table below and the available python version for your ansible host
| Ansible Version | Python Version |
|-----------------|----------------|
| >= 2.15.5 | 3.9-3.11 |
| 2.14 | 3.9-3.11 |
## Inventory


@@ -7,6 +7,10 @@ Kubespray supports multiple ansible versions but only the default (5.x) gets wid
## CentOS 8
CentOS 8 / Oracle Linux 8,9 / AlmaLinux 8,9 / Rocky Linux 8,9 ship only with iptables-nft (i.e. without iptables-legacy, similar to RHEL 8).
The only tested configuration for now uses the Calico CNI;
you need to add `calico_iptables_backend: "NFT"` to your configuration.
If you have containers that use iptables in the host network namespace (`hostNetwork=true`),
you need to ensure they use iptables-nft.
An example of how Kubernetes does the autodetection can be found [in this PR](https://github.com/kubernetes/kubernetes/pull/82966)
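As a sketch, assuming Calico is the chosen CNI, the setting goes into your inventory group vars (the file path below is just one common location, not mandated):

```yaml
# inventory/mycluster/group_vars/k8s_cluster/k8s-net-calico.yml (illustrative path)
calico_iptables_backend: "NFT"
```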


@@ -11,7 +11,7 @@ amazon | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
centos7 | :white_check_mark: | :x: | :x: | :white_check_mark: | :x: | :white_check_mark: | :x: | :white_check_mark: |
debian10 | :white_check_mark: | :white_check_mark: | :x: | :x: | :x: | :x: | :white_check_mark: | :x: |
debian11 | :white_check_mark: | :x: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: |
debian12 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: |
debian12 | :white_check_mark: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
fedora37 | :white_check_mark: | :x: | :x: | :x: | :x: | :white_check_mark: | :x: | :x: |
fedora38 | :x: | :x: | :x: | :x: | :white_check_mark: | :x: | :x: | :x: |
opensuse | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |


@@ -44,8 +44,6 @@ containerd_registries_mirrors:
image_command_tool: crictl
```
The `containerd_registries` and `containerd_insecure_registries` configs are deprecated.
### Containerd Runtimes
Containerd supports multiple runtime configurations that can be used with


@@ -42,22 +42,6 @@ crio_registries:
[CRI-O]: https://cri-o.io/
The following is a method to enable insecure registries.
```yaml
crio_insecure_registries:
- 10.0.0.2:5000
```
You can also configure authentication for these registries after `crio_insecure_registries`.
```yaml
crio_registry_auth:
- registry: 10.0.0.2:5000
username: user
password: pass
```
## Note about user namespaces
CRI-O has support for user namespaces. This feature is optional and can be enabled by setting the following two variables.

docs/debian.md Normal file

@@ -0,0 +1,41 @@
# Debian Jessie
Debian Jessie installation Notes:
- Add
```ini
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
```
to `/etc/default/grub`. Then update with
```ShellSession
sudo update-grub
sudo update-grub2
sudo reboot
```
- Add the [backports](https://backports.debian.org/Instructions/) which contain Systemd 2.30 and update Systemd.
```ShellSession
apt-get -t jessie-backports install systemd
```
(Necessary because the default Systemd version (2.15) does not support the "Delegate" directive in service files)
- Add the Ansible repository and install Ansible to get a proper version
```ShellSession
sudo add-apt-repository ppa:ansible/ansible
sudo apt-get update
sudo apt-get install ansible
```
- Install Jinja2 and Python-Netaddr
```ShellSession
sudo apt-get install python-jinja2=2.8-1~bpo8+1 python-netaddr
```
Now you can continue with [Preparing your deployment](getting-started.md#starting-custom-deployment)


@@ -97,9 +97,3 @@ Adding extra options to pass to the docker daemon:
## This string should be exactly as you wish it to appear.
docker_options: ""
```
For Debian based distributions, set the path to store the GPG key to avoid using the default one used in `apt_key` module (e.g. /etc/apt/trusted.gpg)
```yaml
docker_repo_key_keyring: /etc/apt/trusted.gpg.d/docker.gpg
```


@@ -54,11 +54,6 @@ kube_apiserver_enable_admission_plugins:
- PodNodeSelector
- PodSecurity
kube_apiserver_admission_control_config_file: true
# Creates config file for PodNodeSelector
# kube_apiserver_admission_plugins_needs_configuration: [PodNodeSelector]
# Define the default node selector, by default all the workloads will be scheduled on nodes
# with label network=srv1
# kube_apiserver_admission_plugins_podnodeselector_default_node_selector: "network=srv1"
# EventRateLimit plugin configuration
kube_apiserver_admission_event_rate_limits:
limit_1:
@@ -120,7 +115,7 @@ kube_pod_security_default_enforce: restricted
Let's take a deep look at the resultant **kubernetes** configuration:
* The `anonymous-auth` (on `kube-apiserver`) is set to `true` by default. This is fine, because it is considered safe if you enable `RBAC` for the `authorization-mode`.
* The `enable-admission-plugins` includes `PodSecurity` (for more details, please take a look here: <https://kubernetes.io/docs/concepts/security/pod-security-admission/>). Then, we set the `EventRateLimit` plugin, providing additional configuration files (that are automatically created under the hood and mounted inside the `kube-apiserver` container) to make it work.
* The `enable-admission-plugins` does not include the `PodSecurityPolicy` admission plugin, because it was definitively removed in **kubernetes** `v1.25`. For this reason we decided to set the newer `PodSecurity` plugin (for more details, please take a look here: <https://kubernetes.io/docs/concepts/security/pod-security-admission/>). Then, we set the `EventRateLimit` plugin, providing additional configuration files (that are automatically created under the hood and mounted inside the `kube-apiserver` container) to make it work.
* The `encryption-provider-config` provides encryption at rest. This means that the `kube-apiserver` encrypts data before it is stored, so the data is completely unreadable from `etcd` (in case an attacker manages to exploit it).
* The `rotateCertificates` in `KubeletConfiguration` is set to `true` along with `serverTLSBootstrap`. This can be used as an alternative to the `tlsCertFile` and `tlsPrivateKeyFile` parameters, and it automatically generates certificates by itself. By default the CSRs are approved automatically via [kubelet-csr-approver](https://github.com/postfinance/kubelet-csr-approver). You can customize approval configuration by modifying Helm values via `kubelet_csr_approver_values`.
See <https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/> for more information on the subject.
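For reference, the kubelet settings described above correspond to this upstream `KubeletConfiguration` fragment (a sketch of what the generated config amounts to, not a file you need to write yourself):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Rotate the kubelet client certificate automatically when close to expiry
rotateCertificates: true
# Request serving certificates via CSRs instead of tlsCertFile/tlsPrivateKeyFile
serverTLSBootstrap: true
```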


@@ -1,6 +1,6 @@
# AWS ALB Ingress Controller
**NOTE:** The current image version is `v1.1.9`. Please file any issues you find and note the version used.
**NOTE:** The current image version is `v1.1.6`. Please file any issues you find and note the version used.
The AWS ALB Ingress Controller satisfies Kubernetes [ingress resources](https://kubernetes.io/docs/concepts/services-networking/ingress/) by provisioning [Application Load Balancers](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html).


@@ -70,9 +70,3 @@ If using [control plane load-balancing](https://kube-vip.io/docs/about/architect
```yaml
kube_vip_lb_enable: true
```
In addition, [load-balancing method](https://kube-vip.io/docs/installation/flags/#environment-variables) could be changed:
```yaml
kube_vip_lb_fwdmethod: masquerade
```


@@ -29,6 +29,10 @@ metallb_config:
nodeselector:
kubernetes.io/os: linux
tolerations:
- key: "node-role.kubernetes.io/master"
operator: "Equal"
value: ""
effect: "NoSchedule"
- key: "node-role.kubernetes.io/control-plane"
operator: "Equal"
value: ""
@@ -69,6 +73,7 @@ metallb_config:
primary:
ip_range:
- 192.0.1.0-192.0.1.254
auto_assign: true
pool1:
ip_range:
@@ -77,8 +82,8 @@ metallb_config:
pool2:
ip_range:
- 192.0.3.0/24
avoid_buggy_ips: true # When set to true, .0 and .255 addresses will be avoided.
- 192.0.2.2-192.0.2.2
auto_assign: false
```
## Layer2 Mode


@@ -95,7 +95,7 @@ If you use the settings like the one above, you'll need to define in your invent
* `registry_host`: Container image registry. If you _don't_ use the same repository path for the container images that
the ones defined
in [kubesprays-defaults's role defaults](https://github.com/kubernetes-sigs/kubespray/blob/master/roles/kubespray-defaults/defaults/main/download.yml)
in [Download's role defaults](https://github.com/kubernetes-sigs/kubespray/blob/master/roles/download/defaults/main/main.yml)
, you need to override the `*_image_repo` for these container images. If you want to make your life easier, use the
same repository path, and you won't have to override anything else.
* `registry_addr`: Container image registry, but only have [domain or ip]:[port].
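A hedged sketch of how these variables might look in an inventory, assuming the Kubespray default image repository variables (`kube_image_repo`, `docker_image_repo`) and a purely illustrative registry address:

```yaml
# Illustrative private registry; adjust to your environment
registry_host: "10.0.0.2:5000/images"
registry_addr: "10.0.0.2:5000"
# Only needed if your repository paths differ from the defaults
kube_image_repo: "{{ registry_host }}"
docker_image_repo: "{{ registry_host }}"
```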


@@ -29,6 +29,10 @@ If the RHEL 7/8 hosts are already registered to a valid Red Hat support subscrip
## RHEL 8
RHEL 8 ships only with iptables-nft (i.e. without iptables-legacy).
The only tested configuration for now uses the Calico CNI;
you need to use K8S 1.17+ and add `calico_iptables_backend: "NFT"` to your configuration.
If you have containers that use iptables in the host network namespace (`hostNetwork=true`),
you need to ensure they use iptables-nft.
An example of how Kubernetes does the autodetection can be found [in this PR](https://github.com/kubernetes/kubernetes/pull/82966)


@@ -1,6 +1,6 @@
# Node Layouts
There are six node layout types: `default`, `separate`, `ha`, `scale`, `all-in-one`, and `node-etcd-client`.
There are six node layout types: `default`, `separate`, `ha`, `scale`, `all-in-one`, and `multinode`.
`default` is a non-HA two-node setup with one separate `kube_node`
and the `etcd` group merged with the `kube_control_plane`.
@@ -18,8 +18,7 @@ never actually deployed, but certificates are generated for them.
The `all-in-one` layout uses a single node, with `kube_control_plane`, `etcd` and `kube_node` merged.
`node-etcd-client` layout consists of a 4 nodes cluster, all of them in `kube_node`, first 3 in `etcd` and only one `kube_control_plane`.
This is necessary to test setups requiring that nodes are etcd clients (use of cilium as `network_plugin`, for instance).
`multinode` layout consists of two separate `kube_node` and a merged single `etcd+kube_control_plane` node.
Note: the canal network plugin deploys flannel as well as the calico policy controller.


@@ -186,8 +186,6 @@ Stack](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.m
* *containerd_additional_runtimes* - Sets the additional Containerd runtimes used by the Kubernetes CRI plugin.
[Default config](https://github.com/kubernetes-sigs/kubespray/blob/master/roles/container-engine/containerd/defaults/main.yml) can be overridden in inventory vars.
* *crio_criu_support_enabled* - When set to `true`, enables the container checkpoint/restore in CRI-O. It's required to install [CRIU](https://criu.org/Installation) on the host when dumping/restoring checkpoints. And it's recommended to enable the feature gate `ContainerCheckpoint` so that the kubelet get a higher level API to simplify the operations (**Note**: It's still in experimental stage, just for container analytics so far). You can follow the [documentation](https://kubernetes.io/blog/2022/12/05/forensic-container-checkpointing-alpha/).
* *http_proxy/https_proxy/no_proxy/no_proxy_exclude_workers/additional_no_proxy* - Proxy variables for deploying behind a
proxy. Note that no_proxy defaults to all internal cluster IPs and hostnames
that correspond to each node.
@@ -254,6 +252,8 @@ node_taints:
- "node.example.com/external=true:NoSchedule"
```
* *podsecuritypolicy_enabled* - When set to `true`, enables the PodSecurityPolicy admission controller and defines two policies: `privileged` (applying to all resources in the `kube-system` namespace and to the kubelet) and `restricted` (applying to all other namespaces).
Addons deployed in kube-system namespaces are handled.
* *kubernetes_audit* - When set to `true`, enables Auditing.
The auditing parameters can be tuned via the following variables (which default values are shown below):
* `audit_log_path`: /var/log/audit/kube-apiserver-audit.log
@@ -271,7 +271,6 @@ node_taints:
* `audit_webhook_mode`: batch
* `audit_webhook_batch_max_size`: 100
* `audit_webhook_batch_max_wait`: 1s
* *kubectl_alias* - Bash alias of kubectl to interact with Kubernetes cluster much easier.
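Putting the `kubernetes_audit` settings above together, a minimal inventory snippet enabling local audit logging could look like this (only variables listed above are used; the log path is the stated default):

```yaml
kubernetes_audit: true
audit_log_path: /var/log/audit/kube-apiserver-audit.log
```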
### Custom flags for Kube Components


@@ -12,7 +12,7 @@
hosts: kube_control_plane[0]
tasks:
- name: Include kubespray-default variables
include_vars: ../roles/kubespray-defaults/defaults/main/main.yml
include_vars: ../roles/kubespray-defaults/defaults/main.yaml
- name: Copy get_cinder_pvs.sh to master
copy:
src: get_cinder_pvs.sh


@@ -2,15 +2,13 @@
namespace: kubernetes_sigs
description: Deploy a production ready Kubernetes cluster
name: kubespray
version: 2.24.1
version: 2.23.3
readme: README.md
authors:
- luksi1
tags:
- infrastructure
repository: https://github.com/kubernetes-sigs/kubespray
dependencies:
ansible.utils: '>=2.5.0'
build_ignore:
- .github
- '*.tar.gz'


@@ -57,7 +57,7 @@ loadbalancer_apiserver_healthcheck_port: 8081
# https_proxy: ""
# https_proxy_cert_file: ""
## Refer to roles/kubespray-defaults/defaults/main/main.yml before modifying no_proxy
## Refer to roles/kubespray-defaults/defaults/main.yml before modifying no_proxy
# no_proxy: ""
## Some problems may occur when downloading files over https proxy due to ansible bug


@@ -1,8 +1,5 @@
# Registries defined within cri-o.
# crio_insecure_registries:
# - 10.0.0.2:5000
# Auth config for the registries
# crio_registry_auth:
# - registry: 10.0.0.2:5000
# username: user


@@ -24,12 +24,3 @@
### ETCD: disable peer client cert authentication.
# This affects ETCD_PEER_CLIENT_CERT_AUTH variable
# etcd_peer_client_auth: true
## Enable distributed tracing
## To enable this experimental feature, set the etcd_experimental_enable_distributed_tracing: true, along with the
## etcd_experimental_distributed_tracing_sample_rate to choose how many samples to collect per million spans,
## the default sampling rate is 0 https://etcd.io/docs/v3.5/op-guide/monitoring/#distributed-tracing
# etcd_experimental_enable_distributed_tracing: false
# etcd_experimental_distributed_tracing_sample_rate: 100
# etcd_experimental_distributed_tracing_address: "localhost:4317"
# etcd_experimental_distributed_tracing_service_name: etcd


@@ -103,6 +103,10 @@ ingress_publish_status_address: ""
# ingress_nginx_nodeselector:
# kubernetes.io/os: "linux"
# ingress_nginx_tolerations:
# - key: "node-role.kubernetes.io/master"
# operator: "Equal"
# value: ""
# effect: "NoSchedule"
# - key: "node-role.kubernetes.io/control-plane"
# operator: "Equal"
# value: ""
@@ -136,6 +140,8 @@ ingress_alb_enabled: false
cert_manager_enabled: false
# cert_manager_namespace: "cert-manager"
# cert_manager_tolerations:
# - key: node-role.kubernetes.io/master
# effect: NoSchedule
# - key: node-role.kubernetes.io/control-plane
# effect: NoSchedule
# cert_manager_affinity:
@@ -170,27 +176,33 @@ cert_manager_enabled: false
# MetalLB deployment
metallb_enabled: false
metallb_speaker_enabled: "{{ metallb_enabled }}"
# metallb_speaker_nodeselector:
# kubernetes.io/os: "linux"
# metallb_controller_nodeselector:
# kubernetes.io/os: "linux"
# metallb_speaker_tolerations:
# - key: "node-role.kubernetes.io/master"
# operator: "Equal"
# value: ""
# effect: "NoSchedule"
# - key: "node-role.kubernetes.io/control-plane"
# operator: "Equal"
# value: ""
# effect: "NoSchedule"
# metallb_controller_tolerations:
# - key: "node-role.kubernetes.io/master"
# operator: "Equal"
# value: ""
# effect: "NoSchedule"
# - key: "node-role.kubernetes.io/control-plane"
# operator: "Equal"
# value: ""
# effect: "NoSchedule"
# metallb_version: v0.13.9
# metallb_protocol: "layer2"
# metallb_port: "7472"
# metallb_memberlist_port: "7946"
# metallb_config:
# speaker:
# nodeselector:
# kubernetes.io/os: "linux"
# tollerations:
# - key: "node-role.kubernetes.io/control-plane"
# operator: "Equal"
# value: ""
# effect: "NoSchedule"
# controller:
# nodeselector:
# kubernetes.io/os: "linux"
# tolerations:
# - key: "node-role.kubernetes.io/control-plane"
# operator: "Equal"
# value: ""
# effect: "NoSchedule"
# address_pools:
# primary:
# ip_range:
@@ -232,7 +244,7 @@ metallb_speaker_enabled: "{{ metallb_enabled }}"
# - pool2
argocd_enabled: false
# argocd_version: v2.8.4
# argocd_version: v2.8.0
# argocd_namespace: argocd
# Default password:
# - https://argo-cd.readthedocs.io/en/stable/getting_started/#4-login-using-the-cli
@@ -247,14 +259,3 @@ argocd_enabled: false
# The plugin manager for kubectl
krew_enabled: false
krew_root_dir: "/usr/local/krew"
# Kube VIP
kube_vip_enabled: false
# kube_vip_arp_enabled: true
# kube_vip_controlplane_enabled: true
# kube_vip_address: 192.168.56.120
# loadbalancer_apiserver:
# address: "{{ kube_vip_address }}"
# port: 6443
# kube_vip_interface: eth0
# kube_vip_services_enabled: false


@@ -17,7 +17,7 @@ kube_token_dir: "{{ kube_config_dir }}/tokens"
kube_api_anonymous_auth: true
## Change this to use another Kubernetes version, e.g. a current beta release
kube_version: v1.28.6
kube_version: v1.27.7
# Where the binaries will be downloaded.
# Note: ensure that you've enough disk space (about 1G)
@@ -243,6 +243,15 @@ kubernetes_audit: false
# kubelet_config_dir:
default_kubelet_config_dir: "{{ kube_config_dir }}/dynamic_kubelet_dir"
# pod security policy (RBAC must be enabled either by having 'RBAC' in authorization_modes or kubeadm enabled)
podsecuritypolicy_enabled: false
# Custom PodSecurityPolicySpec for restricted policy
# podsecuritypolicy_restricted_spec: {}
# Custom PodSecurityPolicySpec for privileged policy
# podsecuritypolicy_privileged_spec: {}
# Make a copy of kubeconfig on the host that runs Ansible in {{ inventory_dir }}/artifacts
# kubeconfig_localhost: false
# Use ansible_host as external api ip when copying over kubeconfig.


@@ -1,51 +0,0 @@
---
# custom_cni network plugin configuration
# There are two deployment options to choose from, select one
## OPTION 1 - Static manifest files
## With this option, referred manifest file will be deployed
## as if the `kubectl apply -f` method was used with it.
#
## List of Kubernetes resource manifest files
## See tests/files/custom_cni/README.md for example
# custom_cni_manifests: []
## OPTION 1 EXAMPLE - Cilium static manifests in Kubespray tree
# custom_cni_manifests:
# - "{{ playbook_dir }}/../tests/files/custom_cni/cilium.yaml"
## OPTION 2 - Helm chart application
## This allows the CNI backend to be deployed to Kubespray cluster
## as common Helm application.
#
## Helm release name - how the local instance of deployed chart will be named
# custom_cni_chart_release_name: ""
#
## Kubernetes namespace to deploy into
# custom_cni_chart_namespace: "kube-system"
#
## Helm repository name - how the local record of Helm repository will be named
# custom_cni_chart_repository_name: ""
#
## Helm repository URL
# custom_cni_chart_repository_url: ""
#
## Helm chart reference - path to the chart in the repository
# custom_cni_chart_ref: ""
#
## Helm chart version
# custom_cni_chart_version: ""
#
## Custom Helm values to be used for deployment
# custom_cni_chart_values: {}
## OPTION 2 EXAMPLE - Cilium deployed from official public Helm chart
# custom_cni_chart_namespace: kube-system
# custom_cni_chart_release_name: cilium
# custom_cni_chart_repository_name: cilium
# custom_cni_chart_repository_url: https://helm.cilium.io
# custom_cni_chart_ref: cilium/cilium
# custom_cni_chart_version: 1.14.3
# custom_cni_chart_values:
# cluster:
# name: "cilium-demo"

View File

@@ -1,10 +1,4 @@
# See roles/network_plugin/kube-router/defaults/main.yml
# Kube router version
# Default to v2
# kube_router_version: "v2.0.0"
# Uncomment to use v1 (Deprecated)
# kube_router_version: "v1.6.0"
# See roles/network_plugin/kube-router//defaults/main.yml
# Enables Pod Networking -- Advertises and learns the routes to Pods via iBGP
# kube_router_run_router: true
@@ -25,9 +19,6 @@
# Add LoadBalancer IP of service status as set by the LB provider to the RIB so that it gets advertised to the BGP peers.
# kube_router_advertise_loadbalancer_ip: false
# Enables BGP graceful restarts
# kube_router_bgp_graceful_restart: true
# Adjust manifest of kube-router daemonset template with DSR needed changes
# kube_router_enable_dsr: false

View File

@@ -1,2 +1,2 @@
---
requires_ansible: '>=2.15.5'
requires_ansible: '>=2.14.0'

View File

@@ -40,11 +40,11 @@ WORKDIR /kubespray
RUN --mount=type=bind,target=./requirements.txt,src=./requirements.txt \
--mount=type=bind,target=./tests/requirements.txt,src=./tests/requirements.txt \
--mount=type=bind,target=./roles/kubespray-defaults/defaults/main/main.yml,src=./roles/kubespray-defaults/defaults/main/main.yml \
--mount=type=bind,target=./roles/kubespray-defaults/defaults/main.yaml,src=./roles/kubespray-defaults/defaults/main.yaml \
update-alternatives --install /usr/bin/python python /usr/bin/python3 1 \
&& pip install --no-compile --no-cache-dir pip -U \
&& pip install --no-compile --no-cache-dir -r tests/requirements.txt \
&& KUBE_VERSION=$(sed -n 's/^kube_version: //p' roles/kubespray-defaults/defaults/main/main.yml) \
&& KUBE_VERSION=$(sed -n 's/^kube_version: //p' roles/kubespray-defaults/defaults/main.yaml) \
&& curl -L https://dl.k8s.io/release/$KUBE_VERSION/bin/linux/$(dpkg --print-architecture)/kubectl -o /usr/local/bin/kubectl \
&& echo $(curl -L https://dl.k8s.io/release/$KUBE_VERSION/bin/linux/$(dpkg --print-architecture)/kubectl.sha256) /usr/local/bin/kubectl | sha256sum --check \
&& chmod a+x /usr/local/bin/kubectl \

View File
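The Dockerfile above downloads kubectl and verifies it against the published `.sha256` file with `sha256sum --check`. The same guarantee can be expressed in Ansible via `get_url`'s `checksum` parameter, which accepts a URL to a checksum file — a minimal sketch (version and architecture values are assumptions, not taken from this diff):

```yaml
- name: Download kubectl and verify its sha256 (illustrative sketch)
  get_url:
    url: "https://dl.k8s.io/release/{{ kube_version }}/bin/linux/amd64/kubectl"
    # checksum can point at a remote .sha256 file, mirroring `sha256sum --check`
    checksum: "sha256:https://dl.k8s.io/release/{{ kube_version }}/bin/linux/amd64/kubectl.sha256"
    dest: /usr/local/bin/kubectl
    mode: "0755"
  vars:
    kube_version: v1.27.10  # assumed; in Kubespray this comes from kubespray-defaults
```

If the downloaded binary does not match the checksum, the task fails instead of leaving a corrupt file in place.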

@@ -4,8 +4,8 @@
gather_facts: false
become: no
vars:
minimal_ansible_version: 2.15.5 # 2.15 versions before 2.15.5 are known to be buggy for kubespray
maximal_ansible_version: 2.17.0
minimal_ansible_version: 2.14.0
maximal_ansible_version: 2.15.0
ansible_connection: local
tags: always
tasks:

View File
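The play above pins the supported Ansible range between `minimal_ansible_version` and `maximal_ansible_version`. A hedged sketch of how such a bounds check can be expressed with the Jinja2 `version` test (variable values assumed to match this branch):

```yaml
- name: Check Ansible version bounds (illustrative sketch)
  hosts: localhost
  gather_facts: false
  vars:
    minimal_ansible_version: 2.14.0
    maximal_ansible_version: 2.15.0
  tasks:
    - name: Fail outside the supported range
      assert:
        that:
          - ansible_version.full is version(minimal_ansible_version, '>=')
          - ansible_version.full is version(maximal_ansible_version, '<')
        msg: "Ansible must be >= {{ minimal_ansible_version }} and < {{ maximal_ansible_version }}"
```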

@@ -1,8 +1,20 @@
---
- name: Common tasks for every playbook
import_playbook: boilerplate.yml
- name: Check ansible version
import_playbook: ansible_version.yml
- name: Ensure compatibility with old groups
import_playbook: legacy_groups.yml
- name: Install bastion ssh config
hosts: bastion[0]
gather_facts: False
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: bastion-ssh-config, tags: ["localhost", "bastion"] }
- name: Gather facts
tags: always
import_playbook: facts.yml
- name: Prepare for etcd install
@@ -17,7 +29,35 @@
- { role: download, tags: download, when: "not skip_downloads" }
- name: Install etcd
import_playbook: install_etcd.yml
hosts: etcd:kube_control_plane
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- role: etcd
tags: etcd
vars:
etcd_cluster_setup: true
etcd_events_cluster_setup: "{{ etcd_events_cluster_enabled }}"
when: etcd_deployment_type != "kubeadm"
- name: Install etcd certs on nodes if required
hosts: k8s_cluster
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- role: etcd
tags: etcd
vars:
etcd_cluster_setup: false
etcd_events_cluster_setup: false
when:
- etcd_deployment_type != "kubeadm"
- kube_network_plugin in ["calico", "flannel", "canal", "cilium"] or cilium_deploy_additionally | default(false) | bool
- kube_network_plugin != "calico" or calico_datastore == "etcd"
- name: Install Kubernetes nodes
hosts: k8s_cluster

View File

@@ -1,29 +0,0 @@
---
- name: Add worker nodes to the etcd play if needed
hosts: kube_node
roles:
- { role: kubespray-defaults }
tasks:
- name: Check if nodes need etcd client certs (depends on network_plugin)
group_by:
key: "_kubespray_needs_etcd"
when:
- kube_network_plugin in ["flannel", "canal", "cilium"] or
(cilium_deploy_additionally | default(false)) or
(kube_network_plugin == "calico" and calico_datastore == "etcd")
- etcd_deployment_type != "kubeadm"
tags: etcd
- name: Install etcd
hosts: etcd:kube_control_plane:_kubespray_needs_etcd
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- role: etcd
tags: etcd
vars:
etcd_cluster_setup: true
etcd_events_cluster_setup: "{{ etcd_events_cluster_enabled }}"
when: etcd_deployment_type != "kubeadm"

View File
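The removed playbook above relies on `group_by` to build the dynamic `_kubespray_needs_etcd` group, which a later play then targets alongside `etcd:kube_control_plane`. A minimal sketch of that pattern, with hypothetical group and variable names:

```yaml
- hosts: all
  gather_facts: false
  tasks:
    - name: Collect hosts that satisfy a condition into a dynamic group
      group_by:
        key: "_needs_extra_role"
      when: some_condition | default(false)

- hosts: _needs_extra_role
  gather_facts: false
  tasks:
    - name: Runs only on hosts grouped above
      debug:
        msg: "some_condition was true on {{ inventory_hostname }}"
```

Because the group is computed at runtime, the second play's host list adapts to the inventory without duplicating the condition in every play.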

@@ -1,8 +1,5 @@
---
- name: Check ansible version
import_playbook: ansible_version.yml
# These are inventory compatibility tasks to ensure we keep compatibility with old style group names
# This is an inventory compatibility playbook to ensure we keep compatibility with old style group names
- name: Add kube-master nodes to kube_control_plane
hosts: kube-master
@@ -48,11 +45,3 @@
- name: Add nodes to no-floating group
group_by:
key: 'no_floating'
- name: Install bastion ssh config
hosts: bastion[0]
gather_facts: False
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: bastion-ssh-config, tags: ["localhost", "bastion"] }

View File

@@ -1,6 +1,17 @@
---
- name: Common tasks for every playbook
import_playbook: boilerplate.yml
- name: Check ansible version
import_playbook: ansible_version.yml
- name: Ensure compatibility with old groups
import_playbook: legacy_groups.yml
- name: Install bastion ssh config
hosts: bastion[0]
gather_facts: False
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults}
- { role: bastion-ssh-config, tags: ["localhost", "bastion"]}
- name: Recover etcd
hosts: etcd[0]

View File

@@ -1,6 +1,17 @@
---
- name: Common tasks for every playbook
import_playbook: boilerplate.yml
- name: Check ansible version
import_playbook: ansible_version.yml
- name: Ensure compatibility with old groups
import_playbook: legacy_groups.yml
- name: Install bastion ssh config
hosts: bastion[0]
gather_facts: False
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: bastion-ssh-config, tags: ["localhost", "bastion"] }
- name: Confirm node removal
hosts: "{{ node | default('etcd:k8s_cluster:calico_rr') }}"

View File

@@ -1,6 +1,17 @@
---
- name: Common tasks for every playbook
import_playbook: boilerplate.yml
- name: Check ansible version
import_playbook: ansible_version.yml
- name: Ensure compatibility with old groups
import_playbook: legacy_groups.yml
- name: Install bastion ssh config
hosts: bastion[0]
gather_facts: False
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults}
- { role: bastion-ssh-config, tags: ["localhost", "bastion"]}
- name: Gather facts
import_playbook: facts.yml

View File

@@ -1,8 +1,20 @@
---
- name: Common tasks for every playbook
import_playbook: boilerplate.yml
- name: Check ansible version
import_playbook: ansible_version.yml
- name: Ensure compatibility with old groups
import_playbook: legacy_groups.yml
- name: Install bastion ssh config
hosts: bastion[0]
gather_facts: False
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: bastion-ssh-config, tags: ["localhost", "bastion"] }
- name: Gather facts
tags: always
import_playbook: facts.yml
- name: Generate the etcd certificates beforehand

View File

@@ -1,8 +1,20 @@
---
- name: Common tasks for every playbook
import_playbook: boilerplate.yml
- name: Check ansible version
import_playbook: ansible_version.yml
- name: Ensure compatibility with old groups
import_playbook: legacy_groups.yml
- name: Install bastion ssh config
hosts: bastion[0]
gather_facts: False
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: bastion-ssh-config, tags: ["localhost", "bastion"] }
- name: Gather facts
tags: always
import_playbook: facts.yml
- name: Download images to ansible host cache via first kube_control_plane node
@@ -36,7 +48,35 @@
- { role: container-engine, tags: "container-engine", when: deploy_container_engine }
- name: Install etcd
import_playbook: install_etcd.yml
hosts: etcd:kube_control_plane
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- role: etcd
tags: etcd
vars:
etcd_cluster_setup: true
etcd_events_cluster_setup: "{{ etcd_events_cluster_enabled }}"
when: etcd_deployment_type != "kubeadm"
- name: Install etcd certs on nodes if required
hosts: k8s_cluster
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- role: etcd
tags: etcd
vars:
etcd_cluster_setup: false
etcd_events_cluster_setup: false
when:
- etcd_deployment_type != "kubeadm"
- kube_network_plugin in ["calico", "flannel", "canal", "cilium"] or cilium_deploy_additionally | default(false) | bool
- kube_network_plugin != "calico" or calico_datastore == "etcd"
- name: Handle upgrades to master components first to maintain backwards compat.
gather_facts: False

View File

@@ -1,9 +1,9 @@
ansible==8.5.0
cryptography==41.0.4
ansible==7.6.0
cryptography==41.0.1
jinja2==3.1.2
jmespath==1.0.1
MarkupSafe==2.1.3
netaddr==0.9.0
netaddr==0.8.0
pbr==5.11.1
ruamel.yaml==0.17.35
ruamel.yaml.clib==0.2.8
ruamel.yaml==0.17.31
ruamel.yaml.clib==0.2.7

View File

@@ -0,0 +1,43 @@
import os
from pathlib import Path
import testinfra.utils.ansible_runner
import yaml
from ansible.cli.playbook import PlaybookCLI
from ansible.playbook import Playbook
testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner(
os.environ["MOLECULE_INVENTORY_FILE"]
).get_hosts("all")
def read_playbook(playbook):
cli_args = [os.path.realpath(playbook), testinfra_hosts]
cli = PlaybookCLI(cli_args)
cli.parse()
loader, inventory, variable_manager = cli._play_prereqs()
pb = Playbook.load(cli.args[0], variable_manager, loader)
for play in pb.get_plays():
yield variable_manager.get_vars(play)
def get_playbook():
playbooks_path = Path(__file__).parent.parent
with open(os.path.join(playbooks_path, "molecule.yml"), "r") as yamlfile:
data = yaml.load(yamlfile, Loader=yaml.FullLoader)
if "playbooks" in data["provisioner"].keys():
if "converge" in data["provisioner"]["playbooks"].keys():
return data["provisioner"]["playbooks"]["converge"]
else:
return os.path.join(playbooks_path, "converge.yml")
def test_user(host):
for vars in read_playbook(get_playbook()):
assert host.user(vars["user"]["name"]).exists
if "group" in vars["user"].keys():
assert host.group(vars["user"]["group"]).exists
else:
assert host.group(vars["user"]["name"]).exists

View File

@@ -0,0 +1,40 @@
import os
from pathlib import Path
import testinfra.utils.ansible_runner
import yaml
from ansible.cli.playbook import PlaybookCLI
from ansible.playbook import Playbook
testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner(
os.environ["MOLECULE_INVENTORY_FILE"]
).get_hosts("all")
def read_playbook(playbook):
cli_args = [os.path.realpath(playbook), testinfra_hosts]
cli = PlaybookCLI(cli_args)
cli.parse()
loader, inventory, variable_manager = cli._play_prereqs()
pb = Playbook.load(cli.args[0], variable_manager, loader)
for play in pb.get_plays():
yield variable_manager.get_vars(play)
def get_playbook():
playbooks_path = Path(__file__).parent.parent
with open(os.path.join(playbooks_path, "molecule.yml"), "r") as yamlfile:
data = yaml.load(yamlfile, Loader=yaml.FullLoader)
if "playbooks" in data["provisioner"].keys():
if "converge" in data["provisioner"]["playbooks"].keys():
return data["provisioner"]["playbooks"]["converge"]
else:
return os.path.join(playbooks_path, "converge.yml")
def test_ssh_config(host):
for vars in read_playbook(get_playbook()):
assert host.file(vars["ssh_bastion_confing__name"]).exists
assert host.file(vars["ssh_bastion_confing__name"]).is_file

View File

@@ -18,7 +18,6 @@ containerd_runc_runtime:
base_runtime_spec: cri-base.json
options:
systemdCgroup: "{{ containerd_use_systemd_cgroup | ternary('true', 'false') }}"
binaryName: "{{ bin_dir }}/runc"
containerd_additional_runtimes: []
# Example for Kata Containers as additional runtime:
@@ -48,6 +47,9 @@ containerd_metrics_address: ""
containerd_metrics_grpc_histogram: false
containerd_registries:
"docker.io": "https://registry-1.docker.io"
containerd_registries_mirrors:
- prefix: docker.io
mirrors:
@@ -102,6 +104,3 @@ containerd_supported_distributions:
- "UnionTech"
- "UniontechOS"
- "openEuler"
# Enable container device interface
enable_cdi: false

View File

@@ -1,4 +1,10 @@
---
- name: Restart containerd
command: /bin/true
notify:
- Containerd | restart containerd
- Containerd | wait for containerd
- name: Containerd | restart containerd
systemd:
name: containerd
@@ -6,7 +12,6 @@
enabled: yes
daemon-reload: yes
masked: no
listen: Restart containerd
- name: Containerd | wait for containerd
command: "{{ containerd_bin_dir }}/ctr images ls -q"
@@ -14,4 +19,3 @@
retries: 8
delay: 4
until: containerd_ready.rc == 0
listen: Restart containerd

View File
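The handler refactor above drops the `command: /bin/true` trampoline: instead of one handler notifying the others, every step declares `listen: Restart containerd`, so a single `notify` fires the whole ordered chain. A minimal sketch of the pattern for a hypothetical service:

```yaml
# tasks: a change notifies the shared handler topic
- name: Deploy myservice config (hypothetical service)
  template:
    src: myservice.conf.j2
    dest: /etc/myservice.conf
    mode: "0644"
  notify: Restart myservice

# handlers: each step subscribes to the same topic via `listen`
- name: Myservice | restart
  systemd:
    name: myservice
    state: restarted
    daemon_reload: true
  listen: Restart myservice

- name: Myservice | wait until healthy
  command: myservice-ctl ping  # hypothetical health-check command
  register: myservice_ready
  retries: 8
  delay: 4
  until: myservice_ready.rc == 0
  changed_when: false
  listen: Restart myservice
```

Handlers sharing a `listen` topic run in the order they are defined, which is why the restart step precedes the readiness wait, as in the containerd handlers above.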

@@ -61,9 +61,6 @@
src: containerd.service.j2
dest: /etc/systemd/system/containerd.service
mode: 0644
validate: "sh -c '[ -f /usr/bin/systemd/system/factory-reset.target ] || exit 0 && systemd-analyze verify %s:containerd.service'"
# FIXME: check that systemd version >= 250 (factory-reset.target was introduced in that release)
# Remove once we drop support for systemd < 250
notify: Restart containerd
- name: Containerd | Ensure containerd directories exist

View File

@@ -20,10 +20,6 @@ oom_score = {{ containerd_oom_score }}
max_container_log_line_size = {{ containerd_max_container_log_line_size }}
enable_unprivileged_ports = {{ containerd_enable_unprivileged_ports | default(false) | lower }}
enable_unprivileged_icmp = {{ containerd_enable_unprivileged_icmp | default(false) | lower }}
{% if enable_cdi %}
enable_cdi = true
cdi_spec_dirs = ["/etc/cdi", "/var/run/cdi"]
{% endif %}
[plugins."io.containerd.grpc.v1.cri".containerd]
default_runtime_name = "{{ containerd_default_runtime | default('runc') }}"
snapshotter = "{{ containerd_snapshotter | default('overlayfs') }}"
@@ -39,11 +35,7 @@ oom_score = {{ containerd_oom_score }}
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.{{ runtime.name }}.options]
{% for key, value in runtime.options.items() %}
{% if value | string != "true" and value | string != "false" %}
{{ key }} = "{{ value }}"
{% else %}
{{ key }} = {{ value }}
{% endif %}
{% endfor %}
{% endfor %}
{% if kata_containers_enabled %}

View File

@@ -2,6 +2,7 @@ server = "https://{{ item.prefix }}"
{% for mirror in item.mirrors %}
[host."{{ mirror.host }}"]
capabilities = ["{{ ([ mirror.capabilities ] | flatten ) | join('","') }}"]
{% if mirror.skip_verify is defined %}
skip_verify = {{ mirror.skip_verify | default('false') | string | lower }}
override_path = {{ mirror.override_path | default('false') | string | lower }}
{% endif %}
{% endfor %}

View File
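The `hosts.toml` template above iterates over `containerd_registries_mirrors`. Illustrative variable values (the mirror host is hypothetical) that match the fields the template consumes:

```yaml
containerd_registries_mirrors:
  - prefix: docker.io
    mirrors:
      - host: https://mirror.example.com   # hypothetical mirror endpoint
        capabilities: ["pull", "resolve"]
        skip_verify: false
```

With these values the template would render a `[host."https://mirror.example.com"]` section with `capabilities = ["pull","resolve"]`, and because `skip_verify` is defined, the `skip_verify`/`override_path` lines are emitted as well.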

@@ -1,31 +1,35 @@
---
- name: Restart and enable cri-dockerd
command: /bin/true
notify:
- Cri-dockerd | reload systemd
- Cri-dockerd | restart docker.service
- Cri-dockerd | reload cri-dockerd.socket
- Cri-dockerd | reload cri-dockerd.service
- Cri-dockerd | enable cri-dockerd service
- name: Cri-dockerd | reload systemd
systemd:
name: cri-dockerd
daemon_reload: true
masked: no
listen: Restart and enable cri-dockerd
- name: Cri-dockerd | restart docker.service
service:
name: docker.service
state: restarted
listen: Restart and enable cri-dockerd
- name: Cri-dockerd | reload cri-dockerd.socket
service:
name: cri-dockerd.socket
state: restarted
listen: Restart and enable cri-dockerd
- name: Cri-dockerd | reload cri-dockerd.service
service:
name: cri-dockerd.service
state: restarted
listen: Restart and enable cri-dockerd
- name: Cri-dockerd | enable cri-dockerd service
service:
name: cri-dockerd.service
enabled: yes
listen: Restart and enable cri-dockerd

View File

@@ -18,9 +18,6 @@
src: "{{ item }}.j2"
dest: "/etc/systemd/system/{{ item }}"
mode: 0644
validate: "sh -c '[ -f /usr/bin/systemd/system/factory-reset.target ] || exit 0 && systemd-analyze verify %s:{{ item }}'"
# FIXME: check that systemd version >= 250 (factory-reset.target was introduced in that release)
# Remove once we drop support for systemd < 250
with_items:
- cri-dockerd.service
- cri-dockerd.socket

View File

@@ -69,6 +69,10 @@ youki_runtime:
type: oci
root: /run/youki
# TODO(cristicalin): remove this after 2.21
crio_download_base: "download.opensuse.org/repositories/devel:kubic:libcontainers:stable"
crio_download_crio: "http://{{ crio_download_base }}:/cri-o:/"
# Reserve 16M uids and gids for user namespaces (256 pods * 65536 uids/gids)
# at the end of the uid/gid space
crio_remap_enable: false
@@ -93,6 +97,3 @@ crio_man_files:
8:
- crio
- crio-status
# If set to true, it will enable the CRIU support in cri-o
crio_criu_support_enabled: false

View File

@@ -1,12 +1,16 @@
---
- name: Restart crio
command: /bin/true
notify:
- CRI-O | reload systemd
- CRI-O | reload crio
- name: CRI-O | reload systemd
systemd:
daemon_reload: true
listen: Restart crio
- name: CRI-O | reload crio
service:
name: crio
state: restarted
enabled: yes
listen: Restart crio

View File

@@ -0,0 +1,120 @@
---
# TODO(cristicalin): drop this file after 2.21
- name: CRI-O kubic repo name for debian os family
set_fact:
crio_kubic_debian_repo_name: "{{ ((ansible_distribution == 'Ubuntu') | ternary('x', '')) ~ ansible_distribution ~ '_' ~ ansible_distribution_version }}"
when: ansible_os_family == "Debian"
- name: Remove legacy CRI-O kubic apt repo key
apt_key:
url: "https://{{ crio_download_base }}/{{ crio_kubic_debian_repo_name }}/Release.key"
state: absent
environment: "{{ proxy_env }}"
when: crio_kubic_debian_repo_name is defined
- name: Remove legacy CRI-O kubic apt repo
apt_repository:
repo: "deb http://{{ crio_download_base }}/{{ crio_kubic_debian_repo_name }}/ /"
state: absent
filename: devel-kubic-libcontainers-stable
when: crio_kubic_debian_repo_name is defined
- name: Remove legacy CRI-O kubic cri-o apt repo
apt_repository:
repo: "deb {{ crio_download_crio }}{{ crio_version }}/{{ crio_kubic_debian_repo_name }}/ /"
state: absent
filename: devel-kubic-libcontainers-stable-cri-o
when: crio_kubic_debian_repo_name is defined
- name: Remove legacy CRI-O kubic yum repo
yum_repository:
name: devel_kubic_libcontainers_stable
description: Stable Releases of Upstream github.com/containers packages (CentOS_$releasever)
baseurl: http://{{ crio_download_base }}/CentOS_{{ ansible_distribution_major_version }}/
state: absent
when:
- ansible_os_family == "RedHat"
- ansible_distribution not in ["Amazon", "Fedora"]
- name: Remove legacy CRI-O kubic yum repo
yum_repository:
name: "devel_kubic_libcontainers_stable_cri-o_{{ crio_version }}"
description: "CRI-O {{ crio_version }} (CentOS_$releasever)"
baseurl: "{{ crio_download_crio }}{{ crio_version }}/CentOS_{{ ansible_distribution_major_version }}/"
state: absent
when:
- ansible_os_family == "RedHat"
- ansible_distribution not in ["Amazon", "Fedora"]
- name: Remove legacy CRI-O kubic yum repo
yum_repository:
name: devel_kubic_libcontainers_stable
description: Stable Releases of Upstream github.com/containers packages
baseurl: http://{{ crio_download_base }}/Fedora_{{ ansible_distribution_major_version }}/
state: absent
when:
- ansible_distribution in ["Fedora"]
- not is_ostree
- name: Remove legacy CRI-O kubic yum repo
yum_repository:
name: "devel_kubic_libcontainers_stable_cri-o_{{ crio_version }}"
description: "CRI-O {{ crio_version }}"
baseurl: "{{ crio_download_crio }}{{ crio_version }}/Fedora_{{ ansible_distribution_major_version }}/"
state: absent
when:
- ansible_distribution in ["Fedora"]
- not is_ostree
- name: Remove legacy CRI-O kubic yum repo
yum_repository:
name: devel_kubic_libcontainers_stable
description: Stable Releases of Upstream github.com/containers packages
baseurl: http://{{ crio_download_base }}/CentOS_7/
state: absent
when: ansible_distribution in ["Amazon"]
- name: Remove legacy CRI-O kubic yum repo
yum_repository:
name: "devel_kubic_libcontainers_stable_cri-o_{{ crio_version }}"
description: "CRI-O {{ crio_version }}"
baseurl: "{{ crio_download_crio }}{{ crio_version }}/CentOS_7/"
state: absent
when: ansible_distribution in ["Amazon"]
- name: Disable modular repos for CRI-O
community.general.ini_file:
path: "/etc/yum.repos.d/{{ item.repo }}.repo"
section: "{{ item.section }}"
option: enabled
value: 0
mode: 0644
become: true
when: is_ostree
loop:
- repo: "fedora-updates-modular"
section: "updates-modular"
- repo: "fedora-modular"
section: "fedora-modular"
# Disable any older module version if we enabled them before
- name: Disable CRI-O ex module
command: "rpm-ostree ex module disable cri-o:{{ item }}"
become: true
when:
- is_ostree
- ostree_version is defined and ostree_version.stdout is version('2021.9', '>=')
with_items:
- 1.22
- 1.23
- 1.24
- name: Cri-o | remove installed packages
package:
name: "{{ item }}"
state: absent
when: not is_ostree
with_items:
- cri-o
- cri-o-runc
- oci-systemd-hook

View File
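The cleanup tasks above remove the retired kubic repositories by declaring them with `state: absent`, making every removal idempotent: re-running the play against an already-clean host is a safe no-op. A minimal sketch of the same pattern with a hypothetical repository:

```yaml
- name: Remove a legacy apt repository (hypothetical repo)
  apt_repository:
    repo: "deb https://repo.example.com/ /"
    state: absent
    filename: legacy-example
```

This is also why the `state: present` hunk shown further below stands out — reinstating a repo inside a reset playbook inverts the intended cleanup.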

@@ -27,6 +27,9 @@
import_tasks: "setup-amazon.yaml"
when: ansible_distribution in ["Amazon"]
- name: Cri-o | clean up legacy repos
import_tasks: "cleanup.yaml"
- name: Cri-o | build a list of crio runtimes with Katacontainers runtimes
set_fact:
crio_runtimes: "{{ crio_runtimes + kata_runtimes }}"

View File

@@ -17,7 +17,7 @@
- name: CRI-O | Remove cri-o apt repo
apt_repository:
repo: "deb {{ crio_download_crio }}{{ crio_version }}/{{ crio_kubic_debian_repo_name }}/ /"
state: absent
state: present
filename: devel-kubic-libcontainers-stable-cri-o
when: crio_kubic_debian_repo_name is defined
tags:

View File

@@ -273,11 +273,6 @@ pinns_path = ""
pinns_path = "{{ bin_dir }}/pinns"
{% endif %}
{% if crio_criu_support_enabled %}
# Enable CRIU integration, requires that the criu binary is available in $PATH.
enable_criu_support = true
{% endif %}
# The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
# The runtime to use is picked based on the runtime_handler provided by the CRI.
# If no runtime_handler is provided, the runtime will be picked based on the level
@@ -382,7 +377,7 @@ enable_metrics = {{ crio_enable_metrics | bool | lower }}
# The port on which the metrics server will listen.
metrics_port = {{ crio_metrics_port }}
{% if nri_enabled and crio_version is version('v1.26.0', operator='>=') %}
{% if nri_enabled and crio_version >= v1.26.0 %}
[crio.nri]
enable_nri=true

View File
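The template above gates the `[crio.nri]` section on `crio_version is version('v1.26.0', operator='>=')`. The Jinja2 `version` test compares release numbers component-wise, which a plain string comparison would get wrong (lexically, `"v1.9.0" > "v1.26.0"`). A hedged sketch, with an assumed version value:

```yaml
- name: Demonstrate the version test (illustrative sketch)
  debug:
    msg: "NRI {{ 'enabled' if crio_version is version('v1.26.0', '>=') else 'unavailable' }}"
  vars:
    crio_version: v1.28.1  # hypothetical installed version
```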

@@ -5,9 +5,6 @@ docker_cli_version: "{{ docker_version }}"
docker_package_info:
pkgs:
# Path where to store repo key
# docker_repo_key_keyring: /etc/apt/trusted.gpg.d/docker.gpg
docker_repo_key_info:
repo_keys:

View File

@@ -1,25 +1,28 @@
---
- name: Restart docker
command: /bin/true
notify:
- Docker | reload systemd
- Docker | reload docker.socket
- Docker | reload docker
- Docker | wait for docker
- name: Docker | reload systemd
systemd:
name: docker
daemon_reload: true
masked: no
listen: Restart docker
- name: Docker | reload docker.socket
service:
name: docker.socket
state: restarted
when: ansible_os_family in ['Flatcar', 'Flatcar Container Linux by Kinvolk'] or is_fedora_coreos
listen: Restart docker
- name: Docker | reload docker
service:
name: docker
state: restarted
listen: Restart docker
- name: Docker | wait for docker
command: "{{ docker_bin_dir }}/docker images"
@@ -27,4 +30,3 @@
retries: 20
delay: 1
until: docker_ready.rc == 0
listen: Restart docker

View File

@@ -57,7 +57,6 @@
apt_key:
id: "{{ item }}"
url: "{{ docker_repo_key_info.url }}"
keyring: "{{ docker_repo_key_keyring|default(omit) }}"
state: present
register: keyserver_task_result
until: keyserver_task_result is succeeded

View File

@@ -7,4 +7,3 @@ kata_containers_qemu_default_memory: "{{ ansible_memtotal_mb }}"
kata_containers_qemu_debug: 'false'
kata_containers_qemu_sandbox_cgroup_only: 'true'
kata_containers_qemu_enable_mem_prealloc: 'false'
kata_containers_virtio_fs_cache: 'always'

View File

@@ -1,12 +1,11 @@
# Copyright (c) 2017-2019 Intel Corporation
# Copyright (c) 2021 Adobe Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
# XXX: WARNING: this file is auto-generated.
# XXX:
# XXX: Source file: "config/configuration-qemu.toml.in"
# XXX: Source file: "cli/config/configuration-qemu.toml.in"
# XXX: Project:
# XXX: Name: Kata Containers
# XXX: Type: kata
@@ -19,46 +18,20 @@ kernel = "/opt/kata/share/kata-containers/vmlinux.container"
kernel = "/opt/kata/share/kata-containers/vmlinuz.container"
{% endif %}
image = "/opt/kata/share/kata-containers/kata-containers.img"
# initrd = "/opt/kata/share/kata-containers/kata-containers-initrd.img"
machine_type = "q35"
# rootfs filesystem type:
# - ext4 (default)
# - xfs
# - erofs
rootfs_type="ext4"
# Enable confidential guest support.
# Toggling that setting may trigger different hardware features, ranging
# from memory encryption to both memory and CPU-state encryption and integrity.
# The Kata Containers runtime dynamically detects the available feature set and
# aims at enabling the largest possible one, returning an error if none is
# available, or none is supported by the hypervisor.
#
# Known limitations:
# * Does not work by design:
# - CPU Hotplug
# - Memory Hotplug
# - NVDIMM devices
#
# aims at enabling the largest possible one.
# Default false
# confidential_guest = true
# Choose AMD SEV-SNP confidential guests
# In case of using confidential guests on AMD hardware that supports both SEV
# and SEV-SNP, the following enables SEV-SNP guests. SEV guests are default.
# Default false
# sev_snp_guest = true
# Enable running QEMU VMM as a non-root user.
# By default QEMU VMM run as root. When this is set to true, QEMU VMM process runs as
# a non-root random user. See documentation for the limitations of this mode.
# rootless = true
# List of valid annotation names for the hypervisor
# Each member of the list is a regular expression, which is the base name
# of the annotation, e.g. "path" for io.katacontainers.config.hypervisor.path"
enable_annotations = ["enable_iommu"]
enable_annotations = []
# List of valid annotations values for the hypervisor
# Each member of the list is a path pattern as described by glob(3).
@@ -82,25 +55,11 @@ kernel_params = ""
# If you want that qemu uses the default firmware leave this option empty
firmware = ""
# Path to the firmware volume.
# firmware TDVF or OVMF can be split into FIRMWARE_VARS.fd (UEFI variables
# as configuration) and FIRMWARE_CODE.fd (UEFI program image). UEFI variables
# can be customized per each user while UEFI code is kept same.
firmware_volume = ""
# Machine accelerators
# comma-separated list of machine accelerators to pass to the hypervisor.
# For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"`
machine_accelerators=""
# Qemu seccomp sandbox feature
# comma-separated list of seccomp sandbox features to control the syscall access.
# For example, `seccompsandbox= "on,obsolete=deny,spawn=deny,resourcecontrol=deny"`
# Note: "elevateprivileges=deny" doesn't work with daemonize option, so it's removed from the seccomp sandbox
# Another note: enabling this feature may reduce performance, you may enable
# /proc/sys/net/core/bpf_jit_enable to reduce the impact. see https://man7.org/linux/man-pages/man8/bpfc.8.html
#seccompsandbox="on,obsolete=deny,spawn=deny,resourcecontrol=deny"
# CPU features
# comma-separated list of cpu features to pass to the cpu
# For example, `cpu_features = "pmu=off,vmx=off"
@@ -151,12 +110,6 @@ default_memory = {{ kata_containers_qemu_default_memory }}
# This will determine how many times memory can be hot-added to the sandbox/VM.
#memory_slots = 10
# Default maximum memory in MiB per SB / VM
# unspecified or == 0 --> will be set to the actual amount of physical RAM
# > 0 <= amount of physical RAM --> will be set to the specified number
# > amount of physical RAM --> will be set to the actual amount of physical RAM
default_maxmemory = 0
# The size in MiB will be added to the max memory of the hypervisor.
# It is the memory address space for the NVDIMM device.
# If set block storage driver (block_device_driver) to "nvdimm",
@@ -175,13 +128,12 @@ default_maxmemory = 0
# root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons.
# This flag prevents the block device from being passed to the hypervisor,
# virtio-fs is used instead to pass the rootfs.
# 9pfs is used instead to pass the rootfs.
disable_block_device_use = false
# Shared file system type:
# - virtio-fs (default)
# - virtio-9p
# - virtio-fs-nydus
{% if kata_containers_version is version('2.2.0', '>=') %}
shared_fs = "virtio-fs"
{% else %}
@@ -189,39 +141,27 @@ shared_fs = "virtio-9p"
{% endif %}
# Path to vhost-user-fs daemon.
{% if kata_containers_version is version('2.5.0', '>=') %}
virtio_fs_daemon = "/opt/kata/libexec/virtiofsd"
{% else %}
virtio_fs_daemon = "/opt/kata/libexec/kata-qemu/virtiofsd"
{% endif %}
# List of valid annotations values for the virtiofs daemon
# The default if not set is empty (all annotations rejected.)
# Your distribution recommends: ["/opt/kata/libexec/virtiofsd"]
valid_virtio_fs_daemon_paths = [
"/opt/kata/libexec/virtiofsd",
"/opt/kata/libexec/kata-qemu/virtiofsd",
]
# Your distribution recommends: ["/opt/kata/libexec/kata-qemu/virtiofsd"]
valid_virtio_fs_daemon_paths = ["/opt/kata/libexec/kata-qemu/virtiofsd"]
# Default size of DAX cache in MiB
virtio_fs_cache_size = 0
# Default size of virtqueues
virtio_fs_queue_size = 1024
# Extra args for virtiofsd daemon
#
# Format example:
# ["--arg1=xxx", "--arg2=yyy"]
# Examples:
# Set virtiofsd log level to debug : ["--log-level=debug"]
# ["-o", "arg1=xxx,arg2", "-o", "hello world", "--arg3=yyy"]
#
# see `virtiofsd -h` for possible options.
virtio_fs_extra_args = ["--thread-pool-size=1", "--announce-submounts"]
virtio_fs_extra_args = ["--thread-pool-size=1"]
# Cache mode:
#
# - never
# - none
# Metadata, data, and pathname lookup are not cached in guest. They are
# always fetched from host and any changes are immediately pushed to host.
#
@@ -232,27 +172,13 @@ virtio_fs_extra_args = ["--thread-pool-size=1", "--announce-submounts"]
#
# - always
# Metadata, data, and pathname lookup are cached in guest and never expire.
virtio_fs_cache = "{{ kata_containers_virtio_fs_cache }}"
virtio_fs_cache = "always"
# Block storage driver to be used for the hypervisor in case the container
# rootfs is backed by a block device. This is virtio-scsi, virtio-blk
# or nvdimm.
block_device_driver = "virtio-scsi"
# aio is the I/O mechanism used by qemu
# Options:
#
# - threads
# Pthread based disk I/O.
#
# - native
# Native Linux I/O.
#
# - io_uring
# Linux io_uring API. This provides the fastest I/O operations on Linux, requires kernel>5.1 and
# qemu >=5.0.
block_device_aio = "io_uring"
# Specifies cache-related options will be set to block devices or not.
# Default false
#block_device_cache_set = true
@@ -316,11 +242,6 @@ vhost_user_store_path = "/var/run/kata-containers/vhost-user"
# Your distribution recommends: ["/var/run/kata-containers/vhost-user"]
valid_vhost_user_store_paths = ["/var/run/kata-containers/vhost-user"]
# The timeout for reconnecting on non-server spdk sockets when the remote end goes away.
# qemu will delay this many seconds and then attempt to reconnect.
# Zero disables reconnecting, and the default is zero.
vhost_user_reconnect_timeout_sec = 0
# Enable file based guest memory support. The default is an empty string which
# will disable this feature. In the case of virtio-fs, this is enabled
# automatically and '/dev/shm' is used as the backing folder.
@@ -332,12 +253,17 @@ vhost_user_reconnect_timeout_sec = 0
# Your distribution recommends: [""]
valid_file_mem_backends = [""]
# Enable swap of vm memory. Default false.
# The behaviour is undefined if mem_prealloc is also set to true
#enable_swap = true
# -pflash can add image file to VM. The arguments of it should be in format
# of ["/path/to/flash0.img", "/path/to/flash1.img"]
pflashes = []
# This option changes the default hypervisor and kernel parameters
# to enable debug output where available. And Debug also enables the hmp socket.
# to enable debug output where available. This extra output is added
# to the proxy logs, but only when proxy debug is also enabled.
#
# Default false
enable_debug = {{ kata_containers_qemu_debug }}
@@ -352,18 +278,21 @@ enable_debug = {{ kata_containers_qemu_debug }}
# used for 9p packet payload.
#msize_9p = 8192
# If true and vsocks are supported, use vsocks to communicate directly
# with the agent and no proxy is started, otherwise use unix
# sockets and start a proxy to communicate with the agent.
# Default false
#use_vsock = true
# If false and nvdimm is supported, use nvdimm device to plug guest image.
# Otherwise virtio-block device is used.
#
# nvdimm is not supported when `confidential_guest = true`.
#
# Default is false
#disable_image_nvdimm = true
# VFIO devices are hotplugged on a bridge by default.
# Enable hotplugging on root bus. This may be required for devices with
# a large PCI bar, as this is a current limitation with hotplugging on
# a bridge.
# a bridge. This value is valid for "pc" machine type.
# Default false
#hotplug_vfio_on_root_bus = true
@@ -400,15 +329,15 @@ valid_entropy_sources = ["/dev/urandom","/dev/random",""]
# the OCI spec passed to the runtime.
#
# You can create a rootfs with hooks by customizing the osbuilder scripts:
# https://github.com/kata-containers/kata-containers/tree/main/tools/osbuilder
# https://github.com/kata-containers/osbuilder
#
# Hooks must be stored in a subdirectory of guest_hook_path according to their
# hook type, i.e. "guest_hook_path/{prestart,poststart,poststop}".
# hook type, i.e. "guest_hook_path/{prestart,postart,poststop}".
# The agent will scan these directories for executable files and add them, in
# lexicographical order, to the lifecycle of the guest container.
# Hooks are executed in the runtime namespace of the guest. See the official documentation:
# https://github.com/opencontainers/runtime-spec/blob/v1.0.1/config.md#posix-platform-hooks
# Warnings will be logged if any error is encountered while scanning for hooks,
# Warnings will be logged if any error is encountered will scanning for hooks,
# but it will not abort container execution.
#guest_hook_path = "/usr/share/oci/hooks"
#
@@ -453,19 +382,6 @@ valid_entropy_sources = ["/dev/urandom","/dev/random",""]
# be default_memory.
#enable_guest_swap = true
# use legacy serial for guest console if available and implemented for architecture. Default false
#use_legacy_serial = true
# disable applying SELinux on the VMM process (default false)
disable_selinux=false
# disable applying SELinux on the container process
# If set to false, the type `container_t` is applied to the container process by default.
# Note: To enable guest SELinux, the guest rootfs must be CentOS that is created and built
# with `SELINUX=yes`.
# (default: true)
disable_guest_selinux=true
[factory]
# VM templating support. Once enabled, new VMs are created from template
# using vm cloning. They will share the same initial kernel, initramfs and
@@ -509,6 +425,31 @@ disable_guest_selinux=true
# Default /var/run/kata-containers/cache.sock
#vm_cache_endpoint = "/var/run/kata-containers/cache.sock"
[proxy.kata]
path = "/opt/kata/libexec/kata-containers/kata-proxy"
# If enabled, proxy messages will be sent to the system log
# (default: disabled)
enable_debug = {{ kata_containers_qemu_debug }}
[shim.kata]
path = "/opt/kata/libexec/kata-containers/kata-shim"
# If enabled, shim messages will be sent to the system log
# (default: disabled)
enable_debug = {{ kata_containers_qemu_debug }}
# If enabled, the shim will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
#
# Note: By default, the shim runs in a separate network namespace. Therefore,
# to allow it to send trace details to the Jaeger agent running on the host,
# it is necessary to set 'disable_new_netns=true' so that it runs in the host
# network namespace.
#
# (default: disabled)
#enable_tracing = true
[agent.kata]
# If enabled, make the agent display debug-level messages.
# (default: disabled)
@@ -516,17 +457,24 @@ enable_debug = {{ kata_containers_qemu_debug }}
# Enable agent tracing.
#
# If enabled, the agent will generate OpenTelemetry trace spans.
# If enabled, the default trace mode is "dynamic" and the
# default trace type is "isolated". The trace mode and type are set
# explicitly with the `trace_type=` and `trace_mode=` options.
#
# Notes:
#
# - If the runtime also has tracing enabled, the agent spans will be
# associated with the appropriate runtime parent span.
# - If enabled, the runtime will wait for the container to shutdown,
# increasing the container shutdown time slightly.
# - Tracing is ONLY enabled when `enable_tracing` is set: explicitly
# setting `trace_mode=` and/or `trace_type=` without setting `enable_tracing`
# will NOT activate agent tracing.
#
# - See https://github.com/kata-containers/agent/blob/master/TRACING.md for
# full details.
#
# (default: disabled)
#enable_tracing = true
#
#trace_mode = "dynamic"
#trace_type = "isolated"
# Comma separated list of kernel modules and their parameters.
# These modules will be loaded in the guest kernel using modprobe(8).
@@ -552,6 +500,21 @@ kernel_modules=[]
# (default: 30)
#dial_timeout = 30
[netmon]
# If enabled, the network monitoring process gets started when the
# sandbox is created. This allows detection of additional networks
# being added to the existing network namespace after the sandbox
# has been created.
# (default: disabled)
#enable_netmon = true
# Specify the path to the netmon binary.
path = "/opt/kata/libexec/kata-containers/kata-netmon"
# If enabled, netmon messages will be sent to the system log
# (default: disabled)
enable_debug = {{ kata_containers_qemu_debug }}
[runtime]
# If enabled, the runtime will log additional debug messages to the
# system log
@@ -583,19 +546,6 @@ internetworking_model="tcfilter"
# (default: true)
disable_guest_seccomp=true
# vCPUs pinning settings
# if enabled, each vCPU thread will be scheduled to a fixed CPU
# qualified condition: num(vCPU threads) == num(CPUs in sandbox's CPUSet)
# enable_vcpus_pinning = false
# Apply a custom SELinux security policy to the container process inside the VM.
# This is used when you want to apply a type other than the default `container_t`,
# so general users should not uncomment and apply it.
# (format: "user:role:type")
# Note: You cannot specify MCS policy with the label because the sensitivity levels and
# categories are determined automatically by high-level container runtimes such as containerd.
#guest_selinux_label="system_u:system_r:container_t"
# If enabled, the runtime will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
# (default: disabled)
@@ -613,9 +563,11 @@ disable_guest_seccomp=true
# If enabled, the runtime will not create a network namespace for shim and hypervisor processes.
# This option may have potential impacts on your host. It should only be used when you know what you're doing.
# `disable_new_netns` conflicts with `enable_netmon`
# `disable_new_netns` conflicts with `internetworking_model=tcfilter` and `internetworking_model=macvtap`. It works only
# with `internetworking_model=none`. The tap device will be in the host network namespace and can connect to a bridge
# (like OVS) directly.
# If you are using docker, `disable_new_netns` only works with `docker run --net=none`
# (default: false)
#disable_new_netns = true
@@ -624,49 +576,15 @@ disable_guest_seccomp=true
# The runtime caller is free to restrict or collect cgroup stats of the overall Kata sandbox.
# The sandbox cgroup path is the parent cgroup of a container with the PodSandbox annotation.
# The sandbox cgroup is constrained if there is no container type annotation.
# See: https://pkg.go.dev/github.com/kata-containers/kata-containers/src/runtime/virtcontainers#ContainerType
# See: https://godoc.org/github.com/kata-containers/runtime/virtcontainers#ContainerType
sandbox_cgroup_only={{ kata_containers_qemu_sandbox_cgroup_only }}
# If enabled, the runtime will attempt to determine appropriate sandbox size (memory, CPU) before booting the virtual machine. In
# this case, the runtime will not dynamically update the amount of memory and CPU in the virtual machine. This is generally helpful
# when a hardware architecture or hypervisor solution is utilized which does not support CPU and/or memory hotplug.
# Compatibility for determining appropriate sandbox (VM) size:
# - When running with pods, sandbox sizing information will only be available if using Kubernetes >= 1.23 and containerd >= 1.6. CRI-O
# does not yet support sandbox sizing annotations.
# - When running single containers using a tool like ctr, container sizing information will be available.
static_sandbox_resource_mgmt=false
# If specified, sandbox_bind_mounts identifies host paths to be mounted (ro) into the sandbox's shared path.
# This is only valid if filesystem sharing is utilized. The provided path(s) will be bindmounted into the shared fs directory.
# If defaults are utilized, these mounts should be available in the guest at `/run/kata-containers/shared/containers/sandbox-mounts`
# These will not be exposed to the container workloads, and are only provided for potential guest services.
sandbox_bind_mounts=[]
# VFIO Mode
# Determines how VFIO devices should be presented to the container.
# Options:
#
# - vfio
# Matches behaviour of OCI runtimes (e.g. runc) as much as
# possible. VFIO devices will appear in the container as VFIO
# character devices under /dev/vfio. The exact names may differ
# from the host (they need to match the VM's IOMMU group numbers
# rather than the host's)
#
# - guest-kernel
# This is a Kata-specific behaviour that's useful in certain cases.
# The VFIO device is managed by whatever driver in the VM kernel
# claims it. This means it will appear as one or more device nodes
# or network interfaces depending on the nature of the device.
# Using this mode requires specially built workloads that know how
# to locate the relevant device interfaces within the VM.
#
vfio_mode="guest-kernel"
# If enabled, the runtime will not create Kubernetes emptyDir mounts on the guest filesystem. Instead, emptyDir mounts will
# be created on the host and shared via virtio-fs. This is potentially slower, but allows sharing of files from host to guest.
disable_guest_empty_dir=false
# Enabled experimental feature list, format: ["a", "b"].
# Experimental features are features not stable enough for production,
# they may break compatibility, and are prepared for a big version bump.


@@ -1,34 +1,31 @@
---
crictl_checksums:
arm:
v1.28.0: 1ea267f3872f4b7f311963ab43ce6653ceeaf8727206c889b56587c95497e9dd
v1.27.1: ec24fb7e4d45b7f3f3df254b22333839f9bdbde585187a51c93d695abefbf147
v1.27.0: 0b6983195cc62bfc98de1f3fc2ee297a7274fb79ccabf413b8a20765f12d522a
v1.26.1: f6b537fd74aed9ccb38be2f49dc9a18859dffb04ed73aba796d3265a1bdb3c57
v1.26.0: 88891ee29eab097ab1ed88d55094e7bf464f3347bc9f056140e45efeddd15b33
v1.25.0: c4efe3649af5542f2b07cdfc0be62e9e13c7bb846a9b59d57e190c764f28dae4
arm64:
v1.28.0: 06e9224e42bc5e23085751e93cccdac89f7930ba6f7a45b8f8fc70ef663c37c4
v1.27.1: 322bf64d12f9e5cd9540987d47446bf9b0545ceb1900ef93376418083ad88241
v1.27.0: 9317560069ded8e7bf8b9488fdb110d9e62f0fbc0e33ed09fe972768b47752bd
v1.26.1: cfa28be524b5da1a6dded455bb497dfead27b1fd089e1161eb008909509be585
v1.26.0: b632ca705a98edc8ad7806f4279feaff956ac83aa109bba8a85ed81e6b900599
v1.25.0: 651c939eca010bbf48cc3932516b194028af0893025f9e366127f5b50ad5c4f4
amd64:
v1.28.0: 8dc78774f7cbeaf787994d386eec663f0a3cf24de1ea4893598096cb39ef2508
v1.27.1: b70e8d7bde8ec6ab77c737b6c69be8cb518ce446365734c6db95f15c74a93ce8
v1.27.0: d335d6e16c309fbc3ff1a29a7e49bb253b5c9b4b030990bf7c6b48687f985cee
v1.26.1: 0c1a0f9900c15ee7a55e757bcdc220faca5dd2e1cfc120459ad1f04f08598127
v1.26.0: cda5e2143bf19f6b548110ffba0fe3565e03e8743fadd625fee3d62fc4134eed
v1.25.0: 86ab210c007f521ac4cdcbcf0ae3fb2e10923e65f16de83e0e1db191a07f0235
ppc64le:
v1.28.0: b70fb7bee5982aa1318ba25088319f1d0d1415567f1f76cd69011b8a14da4daf
v1.27.1: c408bb5e797bf02215acf9604c43007bd09cf69353cefa8f20f2c16ab1728a85
v1.27.0: 3e4301c2d4b561d861970004002fe15d49af907963de06c70d326f2af1f145e0
v1.26.1: e3026d88722b40deec87711c897df99db3585e2caea17ebd79df5c78f9296583
v1.26.0: 5538c88b8ccde419e6158ab9c06dfcca1fa0abecf33d0a75b2d22ceddd283f0d
v1.25.0: 1b77d1f198c67b2015104eee6fe7690465b8efa4675ea6b4b958c63d60a487e7
crio_archive_checksums:
arm:
v1.28.2: 0
v1.28.1: 0
v1.28.0: 0
v1.27.1: 0
v1.27.0: 0
v1.26.4: 0
@@ -36,10 +33,10 @@ crio_archive_checksums:
v1.26.2: 0
v1.26.1: 0
v1.26.0: 0
v1.25.4: 0
v1.25.3: 0
v1.25.2: 0
arm64:
v1.28.2: 739923cb744a862039557f23823f4cc12feba121bd26ca3cc01d80cc8aaa1efb
v1.28.1: 98a96c6b6bdf20c60e1a7948847c28b57d9e6e47e396b2e405811ea2c24ab9dc
v1.28.0: c8ea800244d9e4ce74af85126afadea2939cd6f7ddd152d0f09fafbf294ef1cc
v1.27.1: ddf601e28dc22d878cdd34549402a236afaa47e0a08f39b09e65bab7034b1b97
v1.27.0: c6615360311bff7fdfe1933e8d5030a2e9926b7196c4e7a07fcb10e51a676272
v1.26.4: dbc64d796eb9055f2e070476bb1f32ab7b7bf42ef0ec23212c51beabfd5ac43f
@@ -47,10 +44,10 @@ crio_archive_checksums:
v1.26.2: 8bd9c912de7f8805c162e089a34ca29e607c48a149940193466ccf7bdf74f606
v1.26.1: 30fe91a60c54b627962da0c21f947424d3cdf484067bc5cda3b3777c10c85384
v1.26.0: 8605b166d00c674e6363ee2336600fa6c6730a6a724f03ab3b72a0d5f9efcd1d
v1.25.4: eb46a48b62124fe9e18c85c3c26638630099744a8b9c21577f1da92dcbb4c243
v1.25.3: 09cabe5499a12013618fb5d8d9c71df56d68f54f6cbe67cab75e6d886eb30214
v1.25.2: a5f77e13001794145d96306d0db779f0ff855a8d6e556e6a0e4a1f56b624b430
amd64:
v1.28.2: c8002a622e268b73f8d45b0adbdff9422b832106a23be137fabdc8a233b3f787
v1.28.1: 63cee2e67e283e29d790caa52531bcca7bc59473fb73bde75f4fd8daa169d4bf
v1.28.0: fa87497c12815766d18f332b38a4d823fa6ad6bb3d159e383a5557e6c912eb3b
v1.27.1: 23c0b26f9df65671f20c042466c0e6c543e16ba769bbf63aa26abef170f393ba
v1.27.0: 8f99db9aeea00299cb3f28ee61646472014cac91930e4c7551c9153f8f720093
v1.26.4: cfeca97f1ca612813ae0a56a05d33a9f94e3b1fd8df1debb16f322676819314a
@@ -58,10 +55,10 @@ crio_archive_checksums:
v1.26.2: 7e030b2e89d4eb2701d9164e67c804fcb872c29accd76f29bcc148a86a920531
v1.26.1: cc2fc263f9f88072c744e019ba1c919d9ce2d71603b1b72d288c47c82a86bf08
v1.26.0: 79837d8b7af95547b92dbab105268dd6382ce2a7afbddad93cc168ab0ca766c8
v1.25.4: d667d20f032793356006ef97d4318dee9f57d62692ab80ee55bc5915555a2ad5
v1.25.3: b990709ca45784726489aac77bb1e309d9148944483870d737a3a595f6305491
v1.25.2: 2e8128a897ab4e608e7f958337705611ebb7d27e14189135439262588edfc738
ppc64le:
v1.28.2: 0
v1.28.1: 0
v1.28.0: 0
v1.27.1: 0
v1.27.0: 0
v1.26.4: 0
@@ -69,20 +66,13 @@ crio_archive_checksums:
v1.26.2: 0
v1.26.1: 0
v1.26.0: 0
v1.25.4: 0
v1.25.3: 0
v1.25.2: 0
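The `0` entries above are placeholders for architectures with no published checksum. For entries carrying real SHA-256 values, a downloaded artifact is verified by comparing its digest against the map. A minimal sketch of that check, using a stand-in file (the path and contents are placeholders, not a real kubelet binary):

```python
# Verify a downloaded artifact against an expected SHA-256 digest.
# The stand-in file below replaces the real binary; in practice the
# expected value would come from the matching checksums entry above.
import hashlib
import pathlib

blob = pathlib.Path("/tmp/checksum-demo")
blob.write_bytes(b"stand-in-binary")

expected = hashlib.sha256(b"stand-in-binary").hexdigest()
actual = hashlib.sha256(blob.read_bytes()).hexdigest()

# A mismatch means the download is corrupt or tampered with and must be retried.
assert actual == expected, "checksum mismatch: discard and re-download"
print("checksum OK")
```

Failing hard on mismatch is what makes download retries (see the `get_url` retry fix in this compare) safe: a partial or corrupt file is never silently installed.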
# Checksum
# Kubernetes versions above Kubespray's current target version are untested and should be used with caution.
kubelet_checksums:
arm:
v1.29.1: 0
v1.29.0: 0
v1.28.6: 0
v1.28.5: 0
v1.28.4: 0
v1.28.3: 0
v1.28.2: 0
v1.28.1: 0
v1.28.0: 0
v1.27.9: 0
v1.27.8: 0
v1.27.7: 0
v1.27.6: 0
@@ -106,16 +96,24 @@ kubelet_checksums:
v1.26.2: 24af93f03e514bb66c6bbacb9f00f393ed57df6e43b2846337518ec0b4d1b717
v1.26.1: fe940be695f73c03275f049cb17f2bf2eb137014334930ce5c6de12573c1f21f
v1.26.0: cabf702fc542fcbb1173c713f1cbec72fd1d9ded36cdcdbbd05d9c308d8360d1
v1.25.16: 41af783a67bae265495afd13414cde94d4e72a7690c6b08ed0b5a3ec5cfcc4c0
v1.25.15: 3c29e3347674e341d9dbbaf549ba71ca688f3ca8fd55e557374df9aede5ec1c1
v1.25.14: 23ae6b8022872bea9b5d09344f6ae38433893c318973a5032afa4c5cf9a037a9
v1.25.13: f31075c2c6df02074cc9fe6e02162c930439cd69071442cde590c1c1571a246d
v1.25.12: 0b1bc0624e33e450b5e1edb47042c73179d09501b330ce3ef72153e9fc5d5018
v1.25.11: 36c60e804e41aae3c5c830f398c8a7743a349f887c155077f34445d402ed858e
v1.25.10: 3078a5ed11ebb84fc613da259904db59ec3e620665602bd45af16ef9983800cc
v1.25.9: 31cdc6043660cf6fe7c3360439ea8ed49dbc56cb2d01e65aff6d26d0ba7faacc
v1.25.8: f9e920805ad9d2fa761010c6be94deed1885b6a1ba892ff18e57ad97d275490d
v1.25.7: 1f6dc3c81435747d3a9e48f930ae1baea024f505a3b7716866430fef1cb5dc29
v1.25.6: 9dcc25bcb16ae966642286b4a2ac97f12a0d4ff96204958c9cedda37c1fcd029
v1.25.5: fdaade890ed44ce55a1086dd1b1bde44daac02f90eacd9faf14fd182af1ffda0
v1.25.4: 1af9c17daa07c215a8ce40f7e65896279276e11b6f7a7d9ae850a0561e149ad8
v1.25.3: 9745a48340ca61b00f0094e4b8ff210839edcf05420f0d57b3cb1748cb887060
v1.25.2: 995f885543fa61a08bd4f1008ba6d7417a1c45bd2a8e0f70c67a83e53b46eea5
v1.25.1: 6fe430ad91e1ed50cf5cc396aa204fda0889c36b8a3b84619d633cd9a6a146e2
v1.25.0: ad45ac3216aa186648fd034dec30a00c1a2d2d1187cab8aae21aa441a13b4faa
arm64:
v1.29.1: e46417ab1ceae995f0e00d4177959a36ed34b807829422bc9dda70b263fe5c5d
v1.29.0: 0e0e4544c2a0a3475529154b7534d0d58683466efa04a2bb2e763b476db0bb16
v1.28.6: ee2c060deff330d3338e24aec9734c9e5d5aea4fea1905c0795bccff6997a65e
v1.28.5: 28ddb696eb6e076f2a2f59ccaa2e409785a63346e5bda819717c6e0f58297702
v1.28.4: bf203989dd9b3987b8a0d2331dcce6319f834b57df810fafba5a4805d54823ac
v1.28.3: 64f56e9c55183919153fe59df2c9015dff09c56de13a3cbccc0f04a95b76dab9
v1.28.2: 32269e9ec38c561d028b65c3048ea6a100e1292cbe9e505565222455c8096577
v1.28.1: 9b7fa64b2785da4a38768377961e227f8da629c56a5df43ca1b665dd07b56f3c
v1.28.0: 05dd12e35783cab4960e885ec0e7d0e461989b94297e7bea9018ccbd15c4dce9
v1.27.10: 0edadc44ef36be8d8106cad9972360c0477540e2d8c0bbeb38fd97fd1d7801d5
v1.27.9: 8a14bc3739f5ca3b23d08301c2e769ee58c8d1cecb7243b46b1c098ae77effd7
v1.27.8: 71849182ceb018dc084f499ad28b7b1afb7f23e35ccaf8421941dd5dafef0d4c
@@ -141,16 +139,24 @@ kubelet_checksums:
v1.26.2: 33e77f93d141d3b9e207ae50ff050186dea084ac26f9ec88280f85bab9dad310
v1.26.1: f4b514162b52d19909cf0ddf0b816d8d7751c5f1de60eda90cd84dcccc56c399
v1.26.0: fb033c1d079cac8babb04a25abecbc6cc1a2afb53f56ef1d73f8dc3b15b3c09e
v1.25.16: 5f379fc59db0efc288236dbd0abd32b1b0206d1c435001b9c0c3996171e20ffd
v1.25.15: 9ca686d5fac093bd3dfe72e8614a5d8d482b7e22d6a78ff5a2a639fc54e603b6
v1.25.14: 3a3d4ac26b26baef43188a6f52d40a20043db3ffdbcbefab8be222b58ce0f713
v1.25.13: 7a29aabb40a984f104b88c09312e897bb710e6bb68022537f8700e70971b984b
v1.25.12: 3402d0fcec5105bb08b917bb5a29a979c674aa10a12a1dfe4e0d80b292f9fe56
v1.25.11: 0140cf3aee0b9386fc8430c32bc94c169a6e50640947933733896e01490cbf6c
v1.25.10: 16ad59fb38337b19afa29ca7d05ccd61e859599c598445d0e11c760fc1086e6b
v1.25.9: 5af11a2948c87076540bb78ee99563b373301c9d8bb694395e2ce7fb14a76344
v1.25.8: 6c995b05b54cc0ce4eb6bf3097565167069b2ce45ba965972430c631c467d239
v1.25.7: cb9176563c7a75be1e8ea23d8e366ced97becabd4626fde01620ec71d2eb1fc2
v1.25.6: 6dade59b6fe4b94f03ee173692f5713e023b0cd1abaa8f5ebe4263b49a63df38
v1.25.5: 18aa53ff59740a11504218905b51b29cc78fb8b5dd818a619141afa9dafb8f5a
v1.25.4: 8ff80a12381fad2e96c9cec6712591018c830cdd327fc7bd825237aa51a6ada3
v1.25.3: 929d25fc3f901749b058141a9c624ff379759869e09df49b75657c0be3141091
v1.25.2: c9348c0bae1d723a39235fc041053d9453be6b517082f066b3a089c3edbdd2ae
v1.25.1: b6baa99b99ecc1f358660208a9a27b64c65f3314ff95a84c73091b51ac98484b
v1.25.0: 69572a7b3d179d4a479aa2e0f90e2f091d8d84ef33a35422fc89975dc137a590
amd64:
v1.29.1: 1b1975c58d38be1a99a8bcba4564ac489afd223b0abe9f2ab08bbde89d2412a3
v1.29.0: e1c38137db8d8777eed8813646b59bf4d22d19b9011ab11dc28e2e34f6b80a05
v1.28.6: 8506df1f20a5f8bba0592f5a4cf5d0cc541047708e664cb88580735400d0b26f
v1.28.5: bf37335da58182783a8c63866ec1f895b4c436e3ed96bdd87fe3f8ae8004ba1d
v1.28.4: db2a473b73c3754d4011590f2f0aa877657608499590c6b0f8b40bec96a3e9ba
v1.28.3: a3a058b4ba30da01ffe1801cd38fcad58a9022a2d39e080b4b2e0e9749a75ad5
v1.28.2: 17edb866636f14eceaad58c56eab12af7ab3be3c78400aff9680635d927f1185
v1.28.1: 2bc22332f44f8fcd3fce57879fd873f977949ebd261571fbae31fbb2713a5dd3
v1.28.0: bfb6b977100963f2879a33e5fbaa59a5276ba829a957a6819c936e9c1465f981
v1.27.10: 25a34bf98bb8a296ea07f1ebbcb496b1e6b6c6da3247695288a7c99fc8c1be2c
v1.27.9: ede60eea3acbac3f35dbb23d7b148f45cf169ebbb20af102d3ce141fc0bac60c
v1.27.8: 2e0557b38c5b9a1263eed25a0b84d741453ed9c0c7bd916f80eadaf7edfb7784
@@ -176,16 +182,24 @@ kubelet_checksums:
v1.26.2: e6dd2ee432a093492936ff8505f084b5ed41662f50231f1c11ae08ee8582a3f5
v1.26.1: 8b99dd73f309ca1ac4005db638e82f949ffcfb877a060089ec0e729503db8198
v1.26.0: b64949fe696c77565edbe4100a315b6bf8f0e2325daeb762f7e865f16a6e54b5
v1.25.16: b159f4b0ce7987385902faf6b97530489a6340d728a9688c5791d8d18144b4b7
v1.25.15: 1136c5717df316c6d4efd96a676574825f771666b7a9148338f0079bb9412720
v1.25.14: b9d1dbd9e7c1d3bda6249f38d7cd4f63e4188fa31cddd80d5e8ac1ce3a9a4d96
v1.25.13: 0399cfd7031cf5f3d7f8485b243a5ef37230e63d105d5f29966f0f81a58a8f6d
v1.25.12: 7aa7d0b4512e6d79ada2017c054b07aaf30d4dc0d740449364a5e2c26e2c1842
v1.25.11: 4801700e29405e49a7e51cccb806decd65ca3a5068d459a40be3b4c5846b9a46
v1.25.10: 280515c431b8c966e475de1b953b960242549cb86f0821ad819224085b449c9b
v1.25.9: 0eb951237379ef023aa52deedb0df5eae54fa71caeb52bdb30a4660806bed23e
v1.25.8: 3aa821165da6f1bb9fdb82a91b294b7f4abfc4fdfb21a94fa1e09a9785876516
v1.25.7: 2e3216ac291c78d82fb8988c15d9fd4cf14e2ddd9b17ff91e3abf2e5f3e14fd9
v1.25.6: 8485ac4a60455b77a9b518c13b3aeb0d32338ab7e9894a0b5d217fea585cd2be
v1.25.5: 16b23e1254830805b892cfccf2687eb3edb4ea54ffbadb8cc2eee6d3b1fab8e6
v1.25.4: 7f7437e361f829967ee02e30026d7e85219693432ac5e930cc98dd9c7ddb2fac
v1.25.3: d5c89c5e5dae6afa5f06a3e0e653ac3b93fa9a93c775a715531269ec91a54abe
v1.25.2: 631e31b3ec648f920292fdc1bde46053cca5d5c71d622678d86907d556efaea3
v1.25.1: 63e38bcbc4437ce10227695f8722371ec0d178067f1031d09fe1f59b6fcf214a
v1.25.0: 7f9183fce12606818612ce80b6c09757452c4fb50aefea5fc5843951c5020e24
ppc64le:
v1.29.1: 467d2b457205363f53f72081295ea390fc25215b0ccc29dc04c4f82925266067
v1.29.0: 67f09f866d3e4aee8211ce9887ec8bc427b188474a882a7af999fc0fee939028
v1.28.6: 8f79f40bef88aaedfdf7256de48a972295b0069ae0ddefa90dff3f8690c825ce
v1.28.5: ae9fe81804ba67ee81e8a5fe1dc18fe285267764c61f831886a25245a11d8528
v1.28.4: d79c97811fb10c1b1f48b69573f1164f108630631d9dba0d991fe924bd305f20
v1.28.3: f20cfb8c9de73cdc66fbbecd03bb936ce57fe86ebced8ea93aa64ebda0235c21
v1.28.2: 79f568ac700d29f88d669c6b6a09adb3b726bdd13c10aa0839cbc70b414372e5
v1.28.1: 547fc76f0c1d78352fad841ebeacd387fe48750b2648565dfd49197621622fbb
v1.28.0: 22de59965f2d220afa24bf04f4c6d6b65a4bb1cd80756c13381973b1ac3b4578
v1.27.10: c5014bed224347245fadec3d763846ec33ccd7a580d0c4ee19a45a948392f20c
v1.27.9: f270051c9b0f36da10a5d27011783be042edd396e8c729709c2396f29b72b6d2
v1.27.8: 2354fdb19b5018cabe43fde1979965686afd3c95b75531e678a0064c4a30b4e9
@@ -211,17 +225,25 @@ kubelet_checksums:
v1.26.2: 6f03bc34a34856a3e0e667bea8d6817596c53c7093482d44516163639ce39625
v1.26.1: bf795bec9b01a9497f46f47b2f3466628fba11f7834c9b2a0f3fa1028c061144
v1.26.0: df13099611f4eada791e5f41948ef94f7b52c468dff1a6fcad54f4c56b467a7f
v1.25.16: b4287a95c2f0c36d9669c62a25cc0491a56c2e9170ae0ba084eb6b55be834baa
v1.25.15: 1e63c65b08ba9a8599e326f0fef1af82947f0bfac7453914530641f0ad546b8d
v1.25.14: 23ae6b8022872bea9b5d09344f6ae38433893c318973a5032afa4c5cf9a037a9
v1.25.13: 33860d57189d30305740caab9c543c5fe96b08983e2fa0f37d68b4fa689a46a2
v1.25.12: 3e4bee69a79064e4058cb839bbee829d330b3d50a3164fefbb065d4c59922e33
v1.25.11: 5e3aad37146dfd7ea39df39ce4d3e5b32b975d1ff74b2fef61af61c295af3838
v1.25.10: 933aa6e25c5d1401705eca42d6b98592c920178a8bc7517c9c97cb42722c14c9
v1.25.9: 090c3ab9eeab564a8191954c041e9e4e840b835bbaa134e42752337b3e652d0a
v1.25.8: 98927a677d63079da20a1101f892d977a84b25bab5f811aebcd66687566a4b2a
v1.25.7: c591355869ee7e664f04a1ebe6208a4acee48a16bfe926df1b699184ad903ba2
v1.25.6: 148b0b7c23ed0db4a62a6398e2a43b65c53b3a5fb92f1fe79f8069ef2caced26
v1.25.5: 3071e26e648ff50880d699ccabd677537b9e2762d1ece9e11401adde664f8e28
v1.25.4: 3d4806fae6f39f091ea3d9fb195aa6d3e1ef779f56e485b6afbb328c25e15bdc
v1.25.3: 447a8b34646936bede22c93ca85f0a98210c9f61d6963a7d71f7f6a5152af1d1
v1.25.2: a45dc00ac3a8074c3e9ec6a45b63c0a654529a657d929f28bd79c550a0d213d7
v1.25.1: c1e3373ac088e934635fb13004a21ada39350033bfa0e4b258c114cb86b69138
v1.25.0: 8015f88d1364cf77436c157de8a5d3ab87f1cb2dfaa9289b097c92a808845491
kubectl_checksums:
arm:
v1.29.1: a4b478cc0e9adaab0c5bb3627c20c5228ea0fe2aeff9e805d611eb3edb761972
v1.29.0: a2388eb458d07ec734e4fa02fd0147456a1922a7d6b8e67a32db9d64a4d7621c
v1.28.6: 2358d98d4970c177a3af0ae1c2398f69922074a961a61cdff6ae4a7f13106dc1
v1.28.5: 0819c9d0ea66a1e20d74d9a455090e1f67fe07d671866be342ab55532203f4bc
v1.28.4: 835ef8d72f8dec4493b855ddd8e4163f107053496d923c89c216489a45757df6
v1.28.3: b252ec9e97abde80fe067eb215a1acb69a8c83022cba897fd2c4d387bd45f5ca
v1.28.2: 6576aa70413ff00c593a07b549b8b9d9e5ef73c42bb39ab4af475e0fdb540613
v1.28.1: eaa05dab1bffb8593d8e5caa612530ee5c914ee2be73429b7ce36c3becad893f
v1.28.0: 372c4e7bbe98c7067c4b7820c4a440c931ad77f7cb83d3237b439ca3c14d3d37
v1.27.10: 4d81649935ec127f9aa21954697f82e0796f61e8e6406fd058b3a8b80e858c8e
v1.27.9: 89b76aa415018377f2c5fc33fc4d45f4997cc63677336f1768ee8a11593515ce
v1.27.8: 2f2936f950beb3f08ee0e45fbf80d020163829b95aa11c99ec726ee1a922329c
@@ -247,16 +269,24 @@ kubectl_checksums:
v1.26.2: a8944021fc9022f73976d8ab2736f21b64b30de3b2a6ccfddd0316ca1d3c6a1d
v1.26.1: e067d59ac19e287026b5c2b75a1077b1312ba82ad64ee01dff2cdafd57720f39
v1.26.0: 8eef310d0de238c582556d81ab8cbe8d6fca3c0e43ee337a905dcdd3578f9dda
v1.25.16: c3456afacfc37a38404ba9a0b096903164bff5840c1212ff699edc9c17cf1bd2
v1.25.15: c8164490620c6a6fbb869f6fdb017c4ee7482055a80230743e05caa213186900
v1.25.14: 3d2c4ed5158225979ebef1b8de36b018cac651d6d757bd228868274d4802c043
v1.25.13: 6782043d984705b08babade6cc379e0160027a90fb525745f253586ab85ff933
v1.25.12: fd31a737a179653b911bbb705b633353cd7499ec01ce5915160ffebe5e0db5e9
v1.25.11: e405402a30caccc22fad9d5b51b5a03c3ebf96815c2a724d7cf322f9485c83be
v1.25.10: 252dcefaa23709524db6554870877831c6208f1023beca6fb2dd2bb3d271673b
v1.25.9: 56040c7eb3b5f0a0a11cec70e4170ce71985e33ee30292d020b083d303c39eb2
v1.25.8: 0d2563c2656be268c977a79e076b6b4eb95eef17f16ac6723da61768f638c8d3
v1.25.7: 63b48d41992c2fe779c983428d388423ff1a08e01726a10218752036924e93cc
v1.25.6: f42042c831a7b8ad2d2c1f01502ed630a5047d68a62a1de1ed908ba5fb003bb1
v1.25.5: fec9a0f7cd922744935dd5dfc2366ab307424ef4c533299d67edf7de15346e51
v1.25.4: 49ab7f05bb27a710575c2d77982cbfb4a09247ec94a8e21af28a6e300b698a44
v1.25.3: 59e1dba0951f19d4d18eb04db50fcd437c1d57460f2008bc03e668f71b8ea685
v1.25.2: d6b581a41b010ef86a9364102f8612d2ee7fbc7dd2036e40ab7c85adb52331cb
v1.25.1: e8c6bfd8797e42501d14c7d75201324630f15436f712c4f7e46ce8c8067d9adc
v1.25.0: 0b907cfdcabafae7d2d4ac7de55e3ef814df999acdf6b1bd0ecf6abbef7c7131
arm64:
v1.29.1: 96d6dc7b2bdcd344ce58d17631c452225de5bbf59b83fd3c89c33c6298fb5d8b
v1.29.0: 8f7a4bd6bae900a4ddab12bd1399aa652c0d59ea508f39b910e111d248893ff7
v1.28.6: 0de705659a80c3fef01df43cc0926610fe31482f728b0f992818abd9bdcd2cb9
v1.28.5: f87fe017ae3ccfd93df03bf17edd4089672528107f230563b8c9966909661ef2
v1.28.4: edf1e17b41891ec15d59dd3cc62bcd2cdce4b0fd9c2ee058b0967b17534457d7
v1.28.3: 06511f03e34d8ee350bd55717845e27ebec3116526db7c60092eeb33a475a337
v1.28.2: ea6d89b677a8d9df331a82139bb90d9968131530b94eab26cee561531eff4c53
v1.28.1: 46954a604b784a8b0dc16754cfc3fa26aabca9fd4ffd109cd028bfba99d492f6
v1.28.0: f5484bd9cac66b183c653abed30226b561f537d15346c605cc81d98095f1717c
v1.27.10: 2e1996379d5a8b132e0606fcd3df3c8689e11882630b75cca3b7135126847871
v1.27.9: bda475539fdeda9d8a85a84b967af361af264d0826c121b23b0b62ee9b00cd2d
v1.27.8: 97ed6739e2803e63fd2d9de78be22d5ba6205bb63179a16ec773063526525a8e
@@ -282,16 +312,24 @@ kubectl_checksums:
v1.26.2: 291e85bef77e8440205c873686e9938d7f87c0534e9a491de64e3cc0584295b6
v1.26.1: 4027cb0a2840bc14ec3f18151b3360dd2d1f6ce730ed5ac28bd846c17e7d73f5
v1.26.0: 79b14e4ddada9e81d2989f36a89faa9e56f8abe6e0246e7bdc305c93c3731ea4
v1.25.16: d6c23c80828092f028476743638a091f2f5e8141273d5228bf06c6671ef46924
v1.25.15: ae213606b3965872b4e97ceb58fce5be796e7b26ea680681e8a3c2b549fe1701
v1.25.14: a52ec9119e390ad872a74fc560a6569b1758a4217fd2b03e966f77aaa2a2b706
v1.25.13: 90bb3c9126b64f5eee2bef5a584da8bf0a38334e341b427b6986261af5f0d49b
v1.25.12: 315a1515b7fe254d7aa4f5928007b4f4e586bfd91ea6cbf392718099920dcb8a
v1.25.11: 2eb5109735c1442dd3b91a15ff74e24748efd967a3d7bf1a2b16e7aa78400677
v1.25.10: d5ade4f3962dc89ac80fb47010231f79b3f83b2c9569183941c0189157e514fa
v1.25.9: 741e65b681a22074aaf9459b57dbcef6a9e993472b3019a87f57c191bc68575f
v1.25.8: 28cf5f666cb0c11a8a2b3e5ae4bf93e56b74ab6051720c72bb231887bfc1a7c6
v1.25.7: 2c60befa0fefd3bb601e9aa0fc81ae6fb204b514849fe7fa30bea0285449a84b
v1.25.6: 1a4e2850e94d44039c73eae7a6e005b3e1435c00a62bd58df7643bdeb8475cfd
v1.25.5: 7bc650f28a5b4436df2abcfae5905e461728ba416146beac17a2634fa82a6f0a
v1.25.4: a8e9cd3c6ca80b67091fc41bc7fe8e9f246835925c835823a08a20ed9bcea1ba
v1.25.3: cfd5092ce347a69fe49c93681a164d9a8376d69eef587da894207c62ec7d6a5d
v1.25.2: b26aa656194545699471278ad899a90b1ea9408d35f6c65e3a46831b9c063fd5
v1.25.1: 73602eabf20b877f88642fafcbe1eda439162c2c1dbcc9ed09fdd4d7ac9919ea
v1.25.0: 24db547bbae294c5c44f2b4a777e45f0e2f3d6295eace0d0c4be2b2dfa45330d
amd64:
v1.29.1: 69ab3a931e826bf7ac14d38ba7ca637d66a6fcb1ca0e3333a2cafdf15482af9f
v1.29.0: 0e03ab096163f61ab610b33f37f55709d3af8e16e4dcc1eb682882ef80f96fd5
v1.28.6: c8351fe0611119fd36634dd3f53eb94ec1a2d43ef9e78b92b4846df5cc7aa7e3
v1.28.5: 2a44c0841b794d85b7819b505da2ff3acd5950bd1bcd956863714acc80653574
v1.28.4: 893c92053adea6edbbd4e959c871f5c21edce416988f968bec565d115383f7b8
v1.28.3: 0c680c90892c43e5ce708e918821f92445d1d244f9b3d7513023bcae9a6246d1
v1.28.2: c922440b043e5de1afa3c1382f8c663a25f055978cbc6e8423493ec157579ec5
v1.28.1: e7a7d6f9d06fab38b4128785aa80f65c54f6675a0d2abef655259ddd852274e1
v1.28.0: 4717660fd1466ec72d59000bb1d9f5cdc91fac31d491043ca62b34398e0799ce
v1.27.10: bfb219643c28d9842fceae51590776f06987835d93fc3cb9b0149c9111c741ac
v1.27.9: d0caae91072297b2915dd65f6ef3055d27646dce821ec67d18da35ba9a8dc85b
v1.27.8: 027b3161e99fa0a7fa529e8f17f73ee2c0807c81c721ca7cf307f6b41c17bc57
@@ -317,16 +355,24 @@ kubectl_checksums:
v1.26.2: fcf86d21fb1a49b012bce7845cf00081d2dd7a59f424b28621799deceb5227b3
v1.26.1: d57be22cfa25f7427cfb538cfc8853d763878f8b36c76ce93830f6f2d67c6e5d
v1.26.0: b6769d8ac6a0ed0f13b307d289dc092ad86180b08f5b5044af152808c04950ae
v1.25.16: 5a9bc1d3ebfc7f6f812042d5f97b82730f2bdda47634b67bddf36ed23819ab17
v1.25.15: 6428297af0b06d1bb87601258fb61c13d82bf3187b2329b5f38b6f0fec5be575
v1.25.14: 06351e043b8ecd1206854643a2094ccf218180c1b3fab5243f78d2ccfc630ca2
v1.25.13: 22c5d5cb95b671ea7d7accd77e60e4a787b6d40a6b8ba4d6c364cb3ca818c29a
v1.25.12: 75842752ea07cb8ee2210df40faa7c61e1317e76d5c7968e380cae83447d4a0f
v1.25.11: d12bc7d26313546827683ff7b79d0cb2e7ac17cdad4dce138ed518e478b148a7
v1.25.10: 62129056c9e390b23253aadfce1fe23e43316cb3d79a73303d687d86d73707f2
v1.25.9: aaa5ea3b3630730d2b8a8ef3cccb14b47754602c7207c7b0717158ae83c7cb10
v1.25.8: 80e70448455f3d19c3cb49bd6ff6fc913677f4f240d368fa2b9f0d400c8cd16e
v1.25.7: 6cdbaf3fdd1032fc8e560ccc0a75b5bd6fa5b6cb45491e9677872f511131ad3d
v1.25.6: ba876aef0e9d7e2e8fedac036ec194de5ec9b6d2953e30ff82a2758c6ba32174
v1.25.5: 6a660cd44db3d4bfe1563f6689cbe2ffb28ee4baf3532e04fff2d7b909081c29
v1.25.4: e4e569249798a09f37e31b8b33571970fcfbdecdd99b1b81108adc93ca74b522
v1.25.3: f57e568495c377407485d3eadc27cda25310694ef4ffc480eeea81dea2b60624
v1.25.2: 8639f2b9c33d38910d706171ce3d25be9b19fc139d0e3d4627f38ce84f9040eb
v1.25.1: 9cc2d6ce59740b6acf6d5d4a04d4a7d839b0a81373248ef0ce6c8d707143435b
v1.25.0: e23cc7092218c95c22d8ee36fb9499194a36ac5b5349ca476886b7edc0203885
ppc64le:
v1.29.1: b7780124ccfe9640f3a37d242d31e8dbb252bcd379bd0d7bf3776d15baf15ca3
v1.29.0: ea926d8cf25e2ce982ff5c375da32b51ccbd122b721b1bc4a32f52a9a0d073ab
v1.28.6: 60fdb4386b5499dd6a6e3a369f35eef63c99647f7a0436fdbeb4db8c052d14f6
v1.28.5: 4448a9f95421cbe69726aa4d2967d706bc43466b9c656c7425b55431b1c20dd4
v1.28.4: 816ca2cef39c0d1ac8ad60c05ae6f6ea5c4a0ca33748240bd1f019381244ca23
v1.28.3: 2b7331a91f558a748167672c18458aa205d4d6d2794654dfd308942e9a376ca4
v1.28.2: 87cca30846fec99a4fbea122b21e938717b309631bd2220de52049fce30d2e81
v1.28.1: 81b45c27abbdf2be6c5203dfccfd76ded1ac273f9f7672e6dcdf3440aa191324
v1.28.0: 7a9dcb4c75b33b9dac497c1a756b1f12c7c63f86fc0f321452360fbe1a79ce0f
v1.27.10: 445928336932248cb104d99919e659696afa60f8dd8513821f92775e893d0dcb
v1.27.9: 2464d947370b8902e1245b0a75a4ecf55fe2aeee5bc87f2add7da00b73535a59
v1.27.8: e25a09dea99192ff43ee13af61bfadd7c79eb538dc8e85376b6c590b4d471204
@@ -352,19 +398,25 @@ kubectl_checksums:
v1.26.2: 88ab57859ac8c4559834d901884ed906c88ea4fa41191e576b34b6c8bd6b7a16
v1.26.1: 5cfd9fea8dea939a2bd914e1caa7829aa537702ddf14e02a59bf451146942fde
v1.26.0: 9e2b2a03ee5fc726ebd39b6a09498b6508ade8831262859c280d39df33f8830d
v1.25.16: 8146007196c882685b5f8c0ae284557e9e9898310794a44a2331015f5515f2d6
v1.25.15: 4a17d611557001fe8ce786b86739dbdebef3eb863044c1344f1fa809898d651a
v1.25.14: 75ae23850056f129995b3193406ed9dad7f8fd87dcfe6de41428e5805b0a8226
v1.25.13: 18449025a441c41435058c6991eabf5782cfe3ffd8d60e44d22c5756f4c8c88c
v1.25.12: 890bdad38904a0747726aa3a9b4b2e598ac235fc9527830218b2a2c742e77f10
v1.25.11: 46eb5c04ab0e7c42fcc453828e2fd0997233a07693d27beb80169da586c9ee97
v1.25.10: f5fb1795d7397318e41a73fedc9158edece5a7476cc1d681292cdfa9db7673f5
v1.25.9: 07ce8e81e6d511df81778d40df1234e01b7796dcbe64f0b01480059269531086
v1.25.8: 4a86d6b9dcc0ce19f9299c62c1e93d52e4d50004d21eb219b9bb0a5ebba6df6f
v1.25.7: 2841880968f4deb4b1bac6632da618a0a4d23f8aa4bac8e9ef6d4eb28fb1ff96
v1.25.6: 0b07cd68ce0286793528241e57506ee963eee2dfe841cc0b200aff174d6f2493
v1.25.5: 816b6bfcbe312a4e6fbaaa459f52620af307683470118b9a4afb0f8e1054beb8
v1.25.4: 23f5cec67088fa0c3efc17110ede5f6120d3ad18ad6b996846642c2f46b43da0
v1.25.3: bd59ac682fffa37806f768328fee3cb791772c4a12bcb155cc64b5c81b6c47ce
v1.25.2: 1e3665de15a591d52943e6417f3102b5d413bc1d86009801ad0def04e8c920c5
v1.25.1: 957170066abc4d4c178ac8d84263a191d351e98978b86b0916c1b8c061da8282
v1.25.0: dffe15c626d7921d77e85f390b15f13ebc3a9699785f6b210cd13fa6f4653513
kubeadm_checksums:
arm:
v1.29.1: 0
v1.29.0: 0
v1.28.6: 0
v1.28.5: 0
v1.28.4: 0
v1.28.3: 0
v1.28.2: 0
v1.28.1: 0
v1.28.0: 0
v1.27.10: 0
v1.27.9: 0
v1.27.8: 0
v1.27.7: 0
v1.27.6: 0
@@ -388,16 +440,24 @@ kubeadm_checksums:
v1.26.2: 84982a359967571baf572aa1a61397906f987d989854ebb67a03b09ea4502af4
v1.26.1: 0dbd0a197013a3fdc5cb3e012fa8b0d50f38fd3dda56254f4648e08ac867fb60
v1.26.0: 3368537a5e78fdbfa3cbcae0d19102372a0f4eb6b6a78e7b6b187d5db86d6c9e
v1.25.16: 82b4f084b2a5a4fd2e96b2f792ec356cc538671eb3f22ec22f51439e353f1289
v1.25.15: 3fc4a94e379b98770d162701412a134c8d90dab3ca33b51f23fefdaca00bd7b5
v1.25.14: b100c352408e542123002d2a9fa0b02fa0238f96b68ef83219e782ee60c69f66
v1.25.13: 5b0098ceb20ed06cafe14db88b7cb488d6a5d0821d45702575905726980d09af
v1.25.12: cbef860af5577defb738179f79dac41be210579d9ce299fb05e3d825ccc56e4e
v1.25.11: 0179f8650321fce8b51f3969edb4c923c62b1ddc4480ea0e784d018287a5d458
v1.25.10: d76e23ed5b3c041c58f7dd3829d3ef40ef4ed57f7d91d4e90371bd743b192743
v1.25.9: 653641f84fb5152858f2f2c8817e37a0216b2882dc35e545b6bdf77aa3b7048c
v1.25.8: 3ea26bc2f880db6c3e462601fc639b34501373ee849a364f5aaa6eb263c6ffba
v1.25.7: 77c37ed2a1ea238f1e0b4dee6fcb1bcb9c9f0fe3846c3ba52542adc07d9ce314
v1.25.6: ac94908d61d06ea7a99e2775aff308f0dbd1195eeed92411e2240b10dc88e21f
v1.25.5: c1753bffff88e3f192acc46f2ea4b7058a920c593f475cfb0ea015e6d9667ee1
v1.25.4: a20379513e5d91073a52a0a3e7a9201e2d7b23daa55d68456465d8c9ef69427c
v1.25.3: 3f357e1e57936ec7812d35681be249b079bbdc1c7f13a75e6159379398e37d5e
v1.25.2: 2f794569c3322bb66309c7f67126b7f88155dfb1f70eea789bec0edf4e10015e
v1.25.1: ecb7a459ca23dfe527f4eedf33fdb0df3d55519481a8be3f04a5c3a4d41fa588
v1.25.0: 67b6b58cb6abd5a4c9024aeaca103f999077ce6ec8e2ca13ced737f5139ad2f0
arm64:
v1.29.1: 3bff8c50c104c45e416cce9991706c6ac46365f0defbcd54f8cf4ace0fa68dcf
v1.29.0: bbddee2d46d2e1643ae3623698b45b13aa2e858616d61c642f2f49e5bb14c980
v1.28.6: 4298cad464e92eec19cdf3e6a607a82a1d626ae70fedba7956175152ab983457
v1.28.5: 22bb6b3377204e93d008f33ac4924d77adca1478f1ae3b515c03476ba54f1adc
v1.28.4: a4422780020954436b8e76ab1c59b68c5581a54432dd3e566c4709bb40c8d4f9
v1.28.3: dcb37d78ccdfe9d8dd6f100e188ddc6e3f5570d0c49db68470073683b453a1e7
v1.28.2: 010789a94cf512d918ec4a3ef8ec734dea0061d89a8293059ef9101ca1bf6bff
v1.28.1: 7d2f68917470a5d66bd2a7d62897f59cb4afaeffb2f26c028afa119acd8c3fc8
v1.28.0: b9b473d2d9136559b19eb465006af77df45c09862cd7ce6673a33aae517ff5ab
v1.27.10: ed0447155a7e967ae23480b06b31b2c0aaa871e7c59dfd82ae25b03a1eccf6e6
v1.27.9: d3d022842b0b8e4661222e8873249f5acafdbef52fd1bfb98152a582352b3c40
v1.27.8: 0d0f5b2781d663d314e785d14361aa5a09cfaf6e1694aa3cc731f4f06342ec13
@@ -423,16 +483,24 @@ kubeadm_checksums:
v1.26.2: f210d8617acf7c601196294f7ca97e4330b75dad00df6b8dd12393730c501473
v1.26.1: db101c4bb8e33bd69241de227ed317feee6d44dbd674891e1b9e11c6e8b369bb
v1.26.0: 652844c9518786273e094825b74a1988c871552dc6ccf71366558e67409859d1
v1.25.16: 55cc8e3c5985858b9f683bf6c7352d76f073d3dc136f450e8761c0ed7092c0f3
v1.25.15: e3f152ae529dd6363b2748f39d219f87490f3e995e8bd5332e756c4df692f5f4
v1.25.14: 525181225d963ddbc17765587a7b5919aa68d7264a197f6a1359f32e7f4a2e03
v1.25.13: d5380bd3f0562aee30d888f22b5650c7af54da83d9fe5187821bcedf21885a11
v1.25.12: 6a22e2e830f9df16a96a1ac5a4034b950b89a0cc90b19dc1fb104b268e4cd251
v1.25.11: 570d87d56a24778bd0854270eeddc8bcfb275f1c711cced5b5948f631e0c3ede
v1.25.10: 373766681e012768d7302ce571f06c82ea18636ae95fe0b6f0fa474e51bf4deb
v1.25.9: cd15d440685f54a2c7a6cf1d96093fa7a55b9dfae8c48448b3ea92a922c07c72
v1.25.8: e7f0c738e48d905eae145631497a9ef59e792300e5247be2a1fbaa0a8907b308
v1.25.7: 64bd532bed1814a28d021e227d18d81cf5b93ac05763a5a5fa6a38b6bb55caee
v1.25.6: 54256821830d23b458a29f5d4695331d600e57644604ef68d0e35e5d2a4ffb4b
v1.25.5: 426dddad1c60b7617f4095507cef524d76ec268a0201c1df154c108287a0b98e
v1.25.4: 3f5b273e8852d13fa39892a30cf64928465c32d0eb741118ba89714b51f03cd5
v1.25.3: 61bb61eceff78b44be62a12bce7c62fb232ce1338928e4207deeb144f82f1d06
v1.25.2: 437dc97b0ca25b3fa8d74b39e4059a77397b55c1a6d16bddfd5a889d91490ce0
v1.25.1: f4d57d89c53b7fb3fe347c9272ed40ec55eab120f4f09cd6b684e97cb9cbf1f0
v1.25.0: 07d9c6ffd3676502acd323c0ca92f44328a1f0e89a7d42a664099fd3016cf16b
amd64:
v1.29.1: d4d81d9020b550c896376fb9e0586a9f15a332175890d061619b52b3e9bc6cbd
v1.29.0: 629d4630657caace9c819fd3797f4a70c397fbd41a2a7e464a0507dad675d52c
v1.28.6: bda3eda8d51e8746a42b535b7eab7df52b091a796227c3212dc30909a8f1b431
v1.28.5: 2b54078c5ea9e85b27f162f508e0bf834a2753e52a57e896812ec3dca92fe9cd
v1.28.4: b4d2531b7cddf782f59555436bc098485b5fa6c05afccdeecf0d62d21d84f5bd
v1.28.3: ce3848b1dfa562e0fa2f911a3d8e3bb07ba040eea76654d68e213315c8846ac0
v1.28.2: 6a4808230661c69431143db2e200ea2d021c7f1b1085e6353583075471310d00
v1.28.1: 6134dbc92dcb83c3bae1a8030f7bb391419b5d13ea94badd3a79b7ece75b2736
v1.28.0: 12ea68bfef0377ccedc1a7c98a05ea76907decbcf1e1ec858a60a7b9b73211bb
v1.27.10: 23985e958443ac1aabdbeeedc675358abc0638eb580707829fd42b0996a0aae5
v1.27.9: 78dddac376fa2f04116022cb44ed39ccb9cb0104e05c5b21b220d5151e5c0f86
v1.27.8: f8864769b8b2d7a14f53eb983f23317ff14d68ab76aba71e9de17ce84c38d4eb
@@ -458,16 +526,24 @@ kubeadm_checksums:
v1.26.2: 277d880dc6d79994fd333e49d42943b7c9183b1c4ffdbf9da59f806acec7fd82
v1.26.1: 1531abfe96e2e9d8af9219192c65d04df8507a46a081ae1e101478e95d2b63da
v1.26.0: 72631449f26b7203701a1b99f6914f31859583a0e247c3ac0f6aaf59ca80af19
v1.25.16: 11c70502ac5bad303b5b4103b9eb5b2a83376cf6a1bce878b6018c6ca44a7d6e
v1.25.15: f79ab4d8f3a11c9f6f1b2c040552d4e1b0462fa9f9ddde7431b990e4b6e387ff
v1.25.14: 6cce9224e8b939bb0c218ab1b047a934a8c2b23f07c7ade4586b5e1a4013c80f
v1.25.13: 4694df9c5d700280c186980907ec8e695364f461b20e868336a71edabac2253e
v1.25.12: 293252f0a1727bfad4ef4fe99d704a56ecea45e39b0ea77f629c55da39e377da
v1.25.11: 6ff43cc8266a21c7b62878a0a9507b085bbb079a37b095fab5bcd31f2dbd80e0
v1.25.10: 7300211efa962d1ca27121ae68be6f06c7f2dca4ca8e5087a2a69f36daa6b9dc
v1.25.9: 157be24d998111a51d52db016f9010cd6418506a028f87b5a712f518b392a3f3
v1.25.8: 2ae844776ac48273d868f92a7ed9d54b4a6e9b0e4d05874d77b7c0f4bfa60379
v1.25.7: 54e369043d5c7ac320ccbd51757019274dbfefce36c9abee746e387ac8203704
v1.25.6: d8bf16d1a808dce10d4eb9b391ddd6ee8a81e94c669441f20b1227083dbc4417
v1.25.5: af0b25c7a995c2d208ef0b9d24b70fe6f390ebb1e3987f4e0f548854ba9a3b87
v1.25.4: b8a6119d2a3a7c6add43dcf8f920436bf7fe71a77a086e96e40aa9d6f70be826
v1.25.3: 01b59ce429263c62b85d2db18f0ccdef076b866962ed63971ff2bd2864deea7b
v1.25.2: 63ee3de0c386c6f3c155874b46b07707cc72ce5b9e23f336befd0b829c1bd2ad
v1.25.1: adaa1e65c1cf9267a01e889d4c13884f883cf27948f00abb823f10486f1a8420
v1.25.0: 10b30b87af2cdc865983d742891eba467d038f94f3926bf5d0174f1abf6628f8
ppc64le:
v1.29.1: 3ec6d90c05dd8e4c6bb1f42fd2fe0f091d85317efaf47d9baebd9af506b3878b
v1.29.0: 4c414a463ed4277e9062c797d1c0435aa7aec2fd1688c5d34e3161c898113cb5
v1.28.6: 71fc8af0f80599a991ece0c31b21ca85f3ce49322941a305048d9287c249446c
v1.28.5: a9bf8b18711639d9d002f63cebc22c8df1627737891c640f2229461d19b8c321
v1.28.4: 24e4b42b1d0ec68fc291fcc57fa88ec34b9e8ba758e01639873ef2068222af4a
v1.28.3: 0ae62912b057f3228dd7a9fbe2492c4b8c3a661f27a1d46e70b0b6627ccf60fb
v1.28.2: fdc28482a4316c84d61b0997c29c4d4c7b11459af9c654fdee3b4a3031f0fcb7
v1.28.1: 73e06f2b614ed5665951f7c059e225a7b0b31319c64a3f57e146fbe7a77fe54e
v1.28.0: 146fe9194486e46accd5054fa93939f9608fdbeefefc4bc68e4c40fb4a84ccc9
v1.27.10: c928ad330bae724b1ef9775e07285408727513a024e3d86e3d72e05768859db8
v1.27.9: 92da9084fa9f8b8b55436b61ec3c697ef951b0b0416a3b3a7f0dd0e5e4d8cd88
v1.27.8: d65b972cd661cb28972f0df731f9e5b65d959920275bad5ef44ff94d3bb8331d
@@ -493,6 +569,23 @@ kubeadm_checksums:
v1.26.2: f5d610c4a8a4f99ac6dd07f8cbc0db1de602d5a8895cdaa282c72e36183e310b
v1.26.1: 89ad4d60d266e32147c51e7fb972a9aa6c382391822fa00e27a20f769f3586e8
v1.26.0: 2431061b3980caa9950a9faaafdfb5cd641e0f787d381db5d10737c03ad800c6
v1.25.16: 2da24210b868d7799875f298fd489acd7488ad8d4552976ed6109d3627495913
v1.25.15: 867cac59e5b5962af5d5dfd99b65663b6f931b1d8b63617fe9b7133d258c1ee3
v1.25.14: d577396f08916db62ef8e797b0dde32b9456c4db14c65a993d514c6f4eb6ecee
v1.25.13: e023c0a82bc35ef505798033e5ef9a4c1aa414ad0f2ab9e921010569b348d03f
v1.25.12: 0cf32deb153472988290ec322bbf7e372182fb02151bc20966fb3ed7b2e463f2
v1.25.11: 96e9f98ed3808e82548f9aceda746680692e395cdabc483080d61aefe7899a56
v1.25.10: 62a7eaac2b478e8ed42789ba30c77685700e90290586123f2226aa70a2d13de4
v1.25.9: 0be1986ce05fba717fb59b281fb6cddc98daf580ab466cf43b511b1d0b826a43
v1.25.8: b3ea1579aa1b44d8499e35d6daa7da56a90387b99068a6e0692d6fba0899728d
v1.25.7: af166691b24f976bf4bf1fe95ff2882615b78653fb626466d388a24bbb597078
v1.25.6: f38785bd9ca8b3d7f99b55c25a6d88d0c3afbfd742f66ce51bb06d318bd7bf79
v1.25.5: d69b73af9e327cba5c771daf8320821ccda703f38506ee4ec5b1ff3776a6eb8f
v1.25.4: 9703e40cb0df48052c3cfb0afc85dc582e600558ab687d6409f40c382f147976
v1.25.3: 8fe9a69db91c779a8f29b216134508ba49f999fa1e36b295b99444f31266da17
v1.25.2: a53101ed297299bcf1c4f44ec67ff1cb489ab2d75526d8be10c3068f161601a7
v1.25.1: c7e2c8d2b852e1b30894b64875191ce388a3a416d41311b21f2d8594872fe944
v1.25.0: 31bc72e892f3a6eb5db78003d6b6200ba56da46a746455991cb422877afc153d
etcd_binary_checksums:
arm:
v3.5.10: 0
@@ -565,9 +658,6 @@ cni_binary_checksums:
v0.9.1: 5bd3c82ef248e5c6cc388f25545aa5a7d318778e5f9bc0a31475361bb27acefe
calicoctl_binary_checksums:
arm:
v3.26.4: 0
v3.26.3: 0
v3.26.2: 0
v3.26.1: 0
v3.26.0: 0
v3.25.2: 0
@@ -590,9 +680,6 @@ calicoctl_binary_checksums:
v3.22.4: 0
v3.22.3: 0
arm64:
v3.26.4: d647d9443ce89df62da6619643375a4f577f5a7fa4e1162416403df521826c2d
v3.26.3: c50272a39658a3b358b33c03fe10d1dde894764413279fecc72d40b95535b398
v3.26.2: 44de9118f481a1125e2d50cdfbb55073e744dd8e71d2be45eeb2757302910c67
v3.26.1: bba2fbdd6d2998bca144ae12c2675d65c4fbf51c0944d69b1b2f20e08cd14c22
v3.26.0: b88c4fd34293fa95d4291b7631502f6b9ad38b5f5a3889bb8012f36f001ff170
v3.25.2: 1cf28599dc1d52ef7c888731f508662a187129ff7bb3294f58319d79c517085c
@@ -615,9 +702,6 @@ calicoctl_binary_checksums:
v3.22.4: e84ba529091818282012fd460e7509995156e50854781c031c81e4f6c715a39a
v3.22.3: 3a3e70828c020efd911181102d21cb4390b7b68669898bd40c0c69b64d11bb63
amd64:
v3.26.4: 9960357ef6d61eda7abf80bd397544c1952f89d61e5eaf9f6540dae379a3ef61
v3.26.3: 82bd7d12b0f6973f9593fb62f5410ad6a81ff6b79e92f1afd3e664202e8387cf
v3.26.2: eba9bc34f44801a513c48f730a409dc1ece0ebfd9c1acc21fd3adf0eff93ecdc
v3.26.1: c8f61c1c8e2504410adaff4a7255c65785fe7805eebfd63340ccd3c472aa42cf
v3.26.0: 19ce069f121f9e245f785a7517521e20fe3294ce1add9d1b2bbcbb0a9b9de24e
v3.25.2: b6f6017b1c9520d8eaea101442d82020123d1efc622964b20d97d3e08e198eed
@@ -640,9 +724,6 @@ calicoctl_binary_checksums:
v3.22.4: cc412783992abeba6dc01d7bc67bdb2e3a0cf2f27fc3334bdfc02d326c3c9e15
v3.22.3: a9e5f6bad4ad8c543f6bdcd21d3665cdd23edc780860d8e52a87881a7b3e203c
ppc64le:
v3.26.4: 41cfa77cc27cfe89a046ddb033cf71a46512f4b81251e28c69fca2cee13617ff
v3.26.3: 30a32acbe71894a9783e350ed44294e739b3322f157b2c224ad3c058473e5701
v3.26.2: c4d42a85afb67020e9cf9dcafe184af6ad60c5609d60001b9505b1a83959b246
v3.26.1: 7f8baf18f4d7954b6f16b1ddcdadbc818cae2fe1f72137464ccc7b8e6fef03a0
v3.26.0: b82b931e3aa53248d87b24f00969abfe5ea4518c56a85b5894187c7b47dc452e
v3.25.2: a5e19931ce50953a36387c67b596e56c4b4fc7903893f1ad25248177027ad0dd
@@ -710,9 +791,6 @@ ciliumcli_binary_checksums:
v0.14.1: 0
v0.14.0: 0
calico_crds_archive_checksums:
v3.26.4: 481e52de684c049f3f7f7bac78f0f6f4ae424d643451adc9e3d3fa9d03fb6d57
v3.26.3: b51817e7ae5189b0737ccc901b7b5950a4f84b6029eebfdcc3e3b851bd410d03
v3.26.2: 8c15b29db525c4cab7bea304357c942a0d55483c03d9c2a0ed3303f66b8f9ff8
v3.26.1: 6d0afbbd4bdfe4deb18d0ac30adb7165eb08b0114ec5a00d016f37a8caf88849
v3.26.0: a263e507e79c0a131fdc2a49422d3bdf0456ea5786eb44ace2659aba879b5c7c
v3.25.2: 6a6e95a51a8ebf65d41d671f20854319cca1f26cd87fbcfc30d1382a06ecfee0
@@ -788,9 +866,6 @@ krew_archive_checksums:
v0.4.2: 0
helm_archive_checksums:
arm:
v3.13.2: 06e8436bde78d53ddb5095ba146fe6c7001297c7dceb9ef6b68992c3ecfde770
v3.13.1: a9c188c1a79d2eb1721aece7c4e7cfcd56fa76d1e37bd7c9c05d3969bb0499b4
v3.13.0: bb2cdde0d12c55f65e88e7c398e67463e74bc236f68b7f307a73174b35628c2e
v3.12.3: 6b67cf5fc441c1fcb4a860629b2ec613d0e6c8ac536600445f52a033671e985e
v3.12.2: 39cc63757901eaea5f0c30b464d3253a5d034ffefcb9b9d3c9e284887b9bb381
v3.12.1: 6ae6d1cb3b9f7faf68d5cd327eaa53c432f01e8fd67edba4e4c744dcbd8a0883
@@ -801,9 +876,6 @@ helm_archive_checksums:
v3.11.0: cddbef72886c82a123038883f32b04e739cc4bd7b9e5f869740d51e50a38be01
v3.10.3: dca718eb68c72c51fc7157c4c2ebc8ce7ac79b95fc9355c5427ded99e913ec4c
arm64:
v3.13.2: f5654aaed63a0da72852776e1d3f851b2ea9529cb5696337202703c2e1ed2321
v3.13.1: 8c4a0777218b266a7b977394aaf0e9cef30ed2df6e742d683e523d75508d6efe
v3.13.0: d12a0e73a7dbff7d89d13e0c6eb73f5095f72d70faea30531941d320678904d2
v3.12.3: 79ef06935fb47e432c0c91bdefd140e5b543ec46376007ca14a52e5ed3023088
v3.12.2: cfafbae85c31afde88c69f0e5053610c8c455826081c1b2d665d9b44c31b3759
v3.12.1: 50548d4fedef9d8d01d1ed5a2dd5c849271d1017127417dc4c7ef6777ae68f7e
@@ -814,9 +886,6 @@ helm_archive_checksums:
v3.11.0: 57d36ff801ce8c0201ce9917c5a2d3b4da33e5d4ea154320962c7d6fb13e1f2c
v3.10.3: 260cda5ff2ed5d01dd0fd6e7e09bc80126e00d8bdc55f3269d05129e32f6f99d
amd64:
v3.13.2: 55a8e6dce87a1e52c61e0ce7a89bf85b38725ba3e8deb51d4a08ade8a2c70b2d
v3.13.1: 98c363564d00afd0cc3088e8f830f2a0eeb5f28755b3d8c48df89866374a1ed0
v3.13.0: 138676351483e61d12dfade70da6c03d471bbdcac84eaadeb5e1d06fa114a24f
v3.12.3: 1b2313cd198d45eab00cc37c38f6b1ca0a948ba279c29e322bdf426d406129b5
v3.12.2: 2b6efaa009891d3703869f4be80ab86faa33fa83d9d5ff2f6492a8aebe97b219
v3.12.1: 1a7074f58ef7190f74ce6db5db0b70e355a655e2013c4d5db2317e63fa9e3dea
@@ -827,9 +896,6 @@ helm_archive_checksums:
v3.11.0: 6c3440d829a56071a4386dd3ce6254eab113bc9b1fe924a6ee99f7ff869b9e0b
v3.10.3: 950439759ece902157cf915b209b8d694e6f675eaab5099fb7894f30eeaee9a2
ppc64le:
v3.13.2: 11d96134cc4ec106c23cd8c163072e9aed6cd73e36a3da120e5876d426203f37
v3.13.1: f0d4ae95b4db25d03ced987e30d424564bd4727af6a4a0b7fca41f14203306fb
v3.13.0: d9be0057c21ce5994885630340b4f2725a68510deca6e3c455030d83336e4797
v3.12.3: 8f2182ae53dd129a176ee15a09754fa942e9e7e9adab41fd60a39833686fe5e6
v3.12.2: fb0313bfd6ec5a08d8755efb7e603f76633726160040434fd885e74b6c10e387
v3.12.1: 32b25dba14549a4097bf3dd62221cf6df06279ded391f7479144e3a215982aaf
@@ -841,9 +907,6 @@ helm_archive_checksums:
v3.10.3: 93cdf398abc68e388d1b46d49d8e1197544930ecd3e81cc58d0a87a4579d60ed
cri_dockerd_archive_checksums:
arm:
0.3.7: 0
0.3.6: 0
0.3.5: 0
0.3.4: 0
0.3.3: 0
0.3.2: 0
@@ -852,9 +915,6 @@ cri_dockerd_archive_checksums:
0.2.6: 0
0.2.5: 0
arm64:
0.3.7: 8da54563ee7ddee36b1adf1f96b3b7b97ec2bc0ec23559b89d9af8eae5e62d9e
0.3.6: 793b8f57cecf734c47bface10387a8e90994c570b516cb755900f21ebd0a663b
0.3.5: c20014dc5a71e6991a3bd7e1667c744e3807b5675b1724b26bb7c70093582cfe
0.3.4: 598709c96585936729140d31a76be778e86f9e31180ff3622a44b63806f37779
0.3.3: fa0aa587fc7615248f814930c2e0c9a252afb18dc37c8f4d6d0263faed45d5a7
0.3.2: b24ae82808bb5ee531348c952152746241ab9b1b7477466ba6c47a7698ef16ae
@@ -863,9 +923,6 @@ cri_dockerd_archive_checksums:
0.2.6: 90122641e45e8ff81dbdd4d84c06fd9744b807b87bff5d0db7f826ded326a9fd
0.2.5: 067242bf5e4b39fece10500a239612c7b0723ce9766ba309dbd22acaf1a2def2
amd64:
0.3.7: 518c5d5345085f36d311f274208705d7fdb79337a80c256871ce941d5a7d47a1
0.3.6: cf271d65abee88c0c0a6d9dacb151913bf37d25d45913a7e04b09efe408eae18
0.3.5: 30d47bd89998526d51a8518f9e8ef10baed408ab273879ee0e30350702092938
0.3.4: b77a1fbd70d12e5b1dacfa24e5824619ec54184dbc655e721b8523572651adeb
0.3.3: 169dce95e7252165c719e066a90b4a64af64119f9ee74fdca73bf9386bcf96c8
0.3.2: 93acc0b8c73c68720c9e40b89c2a220a2df315eb2cd3d162b294337c4dcb2193
@@ -874,9 +931,6 @@ cri_dockerd_archive_checksums:
0.2.6: 5d57b160d5a1f75333149823bec3e291a1a0960383ddc9ddd6e4ff177382c755
0.2.5: 1660052586390fd2668421d16265dfcc2bbdba79d923c7ede268cf91935657c1
ppc64le:
0.3.7: 0
0.3.6: 0
0.3.5: 0
0.3.4: 0
0.3.3: 0
0.3.2: 0
@@ -886,7 +940,6 @@ cri_dockerd_archive_checksums:
0.2.5: 0
runc_checksums:
arm:
v1.1.10: 0
v1.1.9: 0
v1.1.8: 0
v1.1.7: 0
@@ -929,11 +982,6 @@ runc_checksums:
v1.1.3: 3b1b7f953fc8402dec53dcf2de05b6b72d86850737efa9766f8ffefc7cae3c0a
crun_checksums:
arm:
1.11.2: 0
1.11.1: 0
1.9.2: 0
1.9.1: 0
1.8.7: 0
1.8.6: 0
1.8.5: 0
1.8.4: 0
@@ -943,11 +991,6 @@ crun_checksums:
1.7.2: 0
1.7.1: 0
arm64:
1.11.2: 9e1aeb86bce609eccff46a8b976ed06994bca27d639e564fd45756786c4d0123
1.11.1: c8b0d243f6ac4fb02665c157b5404e5184bdc9240dbdcdde0ccef2db352ce97a
1.9.2: 1ad8bd3c1aa693f59133c480aa13bbdf6d81e4528e72ce955612c6bae8cb1720
1.9.1: fab460328d425a72cfd1a70f8fc25c888b6f17cfd95abdace61035a80c3dfe4a
1.8.7: 004f40b48ec28e963eee79929002b9dfb88496be5699e6052358c67e47fdc88a
1.8.6: 1f86d20292284f29593594df8d8556d5363a9e087e169626604cc212c77d1727
1.8.5: 77032341af7c201db03a53e46707ba8b1af11cdd788530426f2da6ccb9535202
1.8.4: 29bbb848881868c58908933bab252e73ee055672d00b7f40cea751441ca74fa4
@@ -957,11 +1000,6 @@ crun_checksums:
1.7.2: 576a39ca227a911e0e758db8381d2786f782bfbd40b54684be4af5e1fe67b018
1.7.1: 8d458c975f6bf754e86ebedda9927abc3942cbebe4c4cb34a2f1df5acd399690
amd64:
1.11.2: acb62839ab8615f0e2485e8d71272b5659cbe35182eb24c5e96bd213240567fe
1.11.1: ca8c9cef23f4a3f7a635ee58a3d9fa35e768581fda89dc3b6baed219cc407a02
1.9.2: 2bb60bcd5652cb17e44f66f0b8ae48195434bd1d66593db97fba85c7778eac53
1.9.1: a2bc565c8bbcb1074b70cdec0c39ca93e4aa84f1188641d160531f4a8aae80f0
1.8.7: f26e90ab197df8b1cb81d70bcb2cd36a80299d6445470b3c1a84ceda59a34199
1.8.6: 23cd9901106ad7a8ebf33725a16b99a14b95368a085d6ffc2ede0b0c9b002bde
1.8.5: 75062fa96a7cabd70e6f6baf1e11da00131584cc74a2ef682a172769178d8731
1.8.4: 99be7d3c9ba3196c35d64b63fa14e9f5c37d1e91b194cfdbfa92dbcbebd651bc
@@ -971,11 +1009,6 @@ crun_checksums:
1.7.2: 2bd2640d43bc78be598e0e09dd5bb11631973fc79829c1b738b9a1d73fdc7997
1.7.1: 8e095f258eee554bb94b42af07aa5c54e0672a403d56b2cfecd49153a11d6760
ppc64le:
1.11.2: 0
1.11.1: 0
1.9.2: 0
1.9.1: 0
1.8.7: 0
1.8.6: 0
1.8.5: 0
1.8.4: 0
@@ -986,8 +1019,6 @@ crun_checksums:
1.7.1: 0
youki_checksums:
arm:
0.3.0: 0
0.2.0: 0
0.1.0: 0
0.0.5: 0
0.0.4: 0
@@ -995,8 +1026,6 @@ youki_checksums:
0.0.2: 0
0.0.1: 0
arm64:
0.3.0: 0
0.2.0: 0
0.1.0: 0
0.0.5: 0
0.0.4: 0
@@ -1004,8 +1033,6 @@ youki_checksums:
0.0.2: 0
0.0.1: 0
amd64:
0.3.0: 741ba3cd85d768bebba02598cedcf3b15a2160e4d6ce33a3d5c4e1b3080f9c1c
0.2.0: b268689a91db07feebfd41d5806b10c7d051fbcbf7efb15076e2228763ac0762
0.1.0: f00677e9674215b44f140f0c0f4b79b0001c72c073d2c5bb514b7a9dcb13bdbc
0.0.5: 8504f4c35a24b96782b9e0feb7813aba4e7262c55a39b8368e94c80c9a4ec564
0.0.4: c213376393cb16462ef56586e68fef9ec5b5dd80787e7152f911d7cfd72d952e
@@ -1013,8 +1040,6 @@ youki_checksums:
0.0.2: dd61f1c3af204ec8a29a52792897ca0d0f21dca0b0ec44a16d84511a19e4a569
0.0.1: 8bd712fe95c8a81194bfbc54c70516350f95153d67044579af95788fbafd943b
ppc64le:
0.3.0: 0
0.2.0: 0
0.1.0: 0
0.0.5: 0
0.0.4: 0
@@ -1023,7 +1048,6 @@ youki_checksums:
0.0.1: 0
kata_containers_binary_checksums:
arm:
3.2.0: 0
3.1.3: 0
3.1.2: 0
3.1.1: 0
@@ -1035,7 +1059,6 @@ kata_containers_binary_checksums:
2.5.1: 0
2.5.0: 0
arm64:
3.2.0: 0
3.1.3: 0
3.1.2: 0
3.1.1: 0
@@ -1047,7 +1070,6 @@ kata_containers_binary_checksums:
2.5.1: 0
2.5.0: 0
amd64:
3.2.0: 21bb8484a060450d6522f29bed7d88d773c28520774eaa2c522b6f47fd12c4a1
3.1.3: 266c906222c85b67867dea3c9bdb58c6da0b656be3a29f9e0bed227c939f3f26
3.1.2: 11a2921242cdacf08a72bbce85418fc21c2772615cec6f3de7fd371e04188388
3.1.1: 999bab0b362cdf856be6448d1ac4c79fa8d33e79a7dfd1cadaafa544f22ade83
@@ -1059,7 +1081,6 @@ kata_containers_binary_checksums:
2.5.1: 4e4fe5204ae9aea43aa9d9bee467a780d4ae9d52cd716edd7e28393a881377ad
2.5.0: 044e257c16b8dfa1df92663bd8e4b7f62dbef3e431bc427cdd498ff1b2163515
ppc64le:
3.2.0: 0
3.1.3: 0
3.1.2: 0
3.1.1: 0
@@ -1087,15 +1108,6 @@ gvisor_runsc_binary_checksums:
20230508: 0
20230501: 0
20230417: 0
20231030: 0
20231023: 0
20231016: 0
20231009: 0
20231003: 0
20230925: 0
20230920: 0
20230911: 0
20230904: 0
arm64:
20230807: 562c629abb6576d02a4b5a5c32cb4706e29122f72737c55a2bf87d012682117f
20230801: 69f4b7fd068fcc9a30181657ae5dcdd259e5fe71111d86e7cb0065e190b82fc3
@@ -1112,15 +1124,6 @@ gvisor_runsc_binary_checksums:
20230508: b1cffc3c3071fe92f2d6c14aa946d50f01b0650ce8a8ed51b240cebc2ae2d1f0
20230501: b0e0e74ca92efbb65cfa2de1fbb00f767056c2797ca1b1b091ecee9ae0be8122
20230417: 21d01bb86f31812d5bca09fa89129ceee6561e5dd2722afcc52e28649383f311
20231030: c4a11ed7066bff777db048167b01a8662d0e1a48672a8c78ab7c3d5e5a5297c7
20231023: 90572e057cc05360c052aa2a161038e65328b9860d85d1f5db6c24c4c6a2433e
20231016: 2bd8aee1ca3563e08afdd7019783d02c8c701c63e67fac3be6ee1243c5b0ee21
20231009: c6730e8ba356dd763b451b00e0206d0c69ff9857fd0a7ad456546db192b3ca4b
20231003: 5b18676dab77d2725da02489de61445336e590018678898b7d8ff0afcda4d9f8
20230925: 64b6f59a7ec247fa01db5b9ee0a66c5a2e4c5ddf891dda8c8db65cbf8a4f0ae2
20230920: 8db0d62d750c510e7c0458e7644926be5d8d11d099dde3ed97591f6ecc10e278
20230911: 34aa27e693666335c69d233747be2fccfa8665ef88381d451c0b1a33a2050ae1
20230904: 990734ea106aaac65bd97ed496f7f87115fc743896ee4ef897e12c10204aea9d
amd64:
20230807: bb5055d820a3698181b593e3f6d2b44e8e957a6df91bea7776fee030c007814f
20230801: 9df74be6ed44f4b35d5aa5ba1956bb3959680c6909748009a2f9476e04b0921e
@@ -1137,15 +1140,6 @@ gvisor_runsc_binary_checksums:
20230508: 2a1385d3ef6e31058671e2d2a1ce83130e934081fa2c5c93589eebf7856f5681
20230501: b60dccad63a07553809065d1c4d094dfc5e862353cc61a933b05100ffd1f831c
20230417: 7c0ccb6144861e45bd14e2ccd02f3fdb935219447256d16c71f6c8e42f90a73d
20231030: 61467bc39f58109dd7a2115e7bcecff460565fc681b0e89436a88d5c316300c8
20231023: 99363d5d432bf466f2de2f9de6be140fa1eee0ffc3fceceec87ef0fad907015e
20231016: dc06735fd3bb333a08294931c55c6867e679dc484713e9262ec5dc258e8a08d6
20231009: 9532413b235dace99911192ae82414fd20be4af7f1d3479624412e49be707500
20231003: e925c65f51c879ee29ceb9d346a400b2efd8e36deb68028845f3fea1ccb57710
20230925: 72a0ef23ae6d487e164aa9749c2e1f81fb0512a18a1f776d9adc7b0c156ed194
20230920: 1fe85e4a6963fea744cb70044d38063e4b884dccf7ed7308c1e75cb802fbc276
20230911: 2048af00625f45d7134a651cd836f0be29c2edfc8f061162ea3fde2b212c443b
20230904: 5c107f15c5be5cfacd1be0e3745fb680bee516ee613a0c6d1e9314795c911458
ppc64le:
20230807: 0
20230801: 0
@@ -1162,15 +1156,6 @@ gvisor_runsc_binary_checksums:
20230508: 0
20230501: 0
20230417: 0
20231030: 0
20231023: 0
20231016: 0
20231009: 0
20231003: 0
20230925: 0
20230920: 0
20230911: 0
20230904: 0
gvisor_containerd_shim_binary_checksums:
arm:
20230807: 0
@@ -1188,15 +1173,6 @@ gvisor_containerd_shim_binary_checksums:
20230508: 0
20230501: 0
20230417: 0
20231030: 0
20231023: 0
20231016: 0
20231009: 0
20231003: 0
20230925: 0
20230920: 0
20230911: 0
20230904: 0
arm64:
20230807: 0b80fba82a7c492dc8c7d8585d172d399116970155c2d35f3a29d37aa5eeb80d
20230801: 08323b1db170fe611f39306922bf56c8c4ee354d6881385fae0a27d84d6f5a62
@@ -1213,15 +1189,6 @@ gvisor_containerd_shim_binary_checksums:
20230508: d16d59d076b0856242d67eda95ee1b2301b04f14abd95ef4fe6c08509f304617
20230501: a5f4361897a634ac5832b269e1cc5bc1993825c06e4b0080770a948b36584754
20230417: 575163e65e1fda019cb34ee56120d5ecf63b0a3a00dda28c0fc138ce58a5bbff
20231030: 6b2a6ff15d37e3cffe09cacf8dc22255759dc87f6c2290253c9bc82b4ba15909
20231023: ac80cb6a9be9697eefdbe0b1f60e00657f8b97552b9f7d3105b03fa30fb9e99b
20231016: 3f513a2c042096f7636e6c2e1313ed43d2fae43b2428dcc3135f6f844e559f8d
20231009: 638167817a34b73b3c737f4755a7564abc51ab42327334e652d7a70116e8fada
20231003: ed8a6df203dcb80f44d239452c88ef4b86123e85443c3d7f019512d37611bab5
20230925: ed8a6df203dcb80f44d239452c88ef4b86123e85443c3d7f019512d37611bab5
20230920: a397cdf0f3a08d5f9b54ce40e01cc07373b270649caf065e2a21400f98c99687
20230911: f62538996f4680ace4c38dccb7fe433ad7c95622a65b1afad0f9cb35d249e1ab
20230904: 24480f319a51d8d66a4617d80b113069ca56e1157c731fb39a0f412a1d3e4176
amd64:
20230807: fa16b92b3a36665783951fa6abb541bca52983322d135eb1d91ae03b93f4b477
20230801: 372a7c60054ab3351f5d32c925fb19bb0467b7eb9cd6032f846ba3f5b23451f8
@@ -1238,15 +1205,6 @@ gvisor_containerd_shim_binary_checksums:
20230508: 7e4c74b8fc05738a7cdf7c8d99d8030007894cbc7dda774533482c5c459a8ba9
20230501: f951c2b8d017005787437d9f849c3edfff54c79d39eb172f92abe745652ef94d
20230417: 61c3d75a46c8d2652306b5b3ab33e4fb519b03a3852bea738c2700ec650afe4e
20231030: ea485191cd95d57d7e7fd1a59f7e42a624432a5634bd9b6e4f3d37e86ab0e935
20231023: 5045ac4983701aea469ec9934b9fd37c292259682c35e2e25f664633d183db93
20231016: a9f85a10b914526f78465142958c82c058484cbab1f6af5bf4cf1d95a0322ea3
20231009: 274eeb298a538d899ca19a659120b8249dd3169f3128a1e5794308a241ce3fcc
20231003: c96d0f062d979249e03355421659363a353310169df86511861505f329ab1614
20230925: c96d0f062d979249e03355421659363a353310169df86511861505f329ab1614
20230920: 09eafdf9d37e51fa9719a3752c11564124a5ba509fa50c651f81967a0051781d
20230911: 6f1f1fc33f0dc1255642ece98b909ebccf3a54a1a2c950d1d84928b78a7c3e22
20230904: 73b2e9c4761622ef82eefe74a3346196834728ad8b980101e2d87c6e3c48015c
ppc64le:
20230807: 0
20230801: 0
@@ -1263,22 +1221,8 @@ gvisor_containerd_shim_binary_checksums:
20230508: 0
20230501: 0
20230417: 0
20231030: 0
20231023: 0
20231016: 0
20231009: 0
20231003: 0
20230925: 0
20230920: 0
20230911: 0
20230904: 0
nerdctl_archive_checksums:
arm:
1.7.1: 799d35de7a182da35d850308c7f1787cd7321404348ff2d5ba64ad43b06b395a
1.7.0: 8b9e7cccbcc0a472685d1bc285f591f41005f8699e7265ea5438a3e06aefdcfd
1.6.2: 69363f4dbf2616d5238647bfbff60525b7b59417a26de8eb255b6d6a09171175
1.6.1: 89187ff46c5a515a5635a4017a476d82cdc1fc3de906135273c64329189b906e
1.6.0: 20dc5f6912de321d4b6aa8647ce77c261cd6d5726d0b5dfae56bd9cdbd7c28fb
1.5.0: 36c44498b08a08b652d01812e5f857009373fba64ce9c8ff22e101b205bbc5fb
1.4.0: b81bece6e8a10762a132f04f54df60d043df4856b5c5ce35d8e6c6936db0b6a0
1.3.1: ed24086dbea22612dbcc3d14ee6f1d152b0eb6905bd8c84d3413c1f4c8d45d10
@@ -1288,11 +1232,6 @@ nerdctl_archive_checksums:
1.1.0: cc3bc31b4df015806717149f13b3b329f8fb62e3631aa2abdbae71664ce5c40d
1.0.0: 8fd283a2f2272b15f3df43cd79642c25f19f62c3c56ad58bb68afb7ed92904c2
arm64:
1.7.1: 46affa0564bb74f595a817e7d5060140099d9cfd9e00e1272b4dbe8b0b85c655
1.7.0: 1255eea5bc2dbac9339d0a9acfb0651dda117504d52cd52b38cf3c2251db4f39
1.6.2: ece848045290dd61f542942248587e91125563af46c0ea972a7c908d0d39c96c
1.6.1: b91ec17a6f7bcb148ed7ad086da6c470ee33f7218c769d5d490e0a1d6a45fdb4
1.6.0: d5f1ed3cda151385d313f9007afc708cae0018c9da581088b092328db154d0c6
1.5.0: 1bb613049a91871614d407273e883057040e8393ef7be9508598a92b2efda4b7
1.4.0: 0edb064a7d68d0425152ed59472ce7566700b4e547afb300481498d4c7fc6cf1
1.3.1: 9e82a6a34c89d3e6a65dc8d77a3723d796d71e0784f54a0c762a2a1940294e3b
@@ -1302,11 +1241,6 @@ nerdctl_archive_checksums:
1.1.0: a0b57b39341b9d67a3f0ae74e19985c72e930bad14291cbbd8479ed6a6a64e83
1.0.0: 27622c9d95efe6d807d5f3770d24ddd71719c6ae18f76b5fc89663a51bcd6208
amd64:
1.7.1: 5fc0a6e8c3a71cbba95fbdb6833fb8a7cd8e78f53de10988362d4029c14b905a
1.7.0: 844c47b175a3d6bc8eaad0c51f23624a5ef10c09e55607803ec2bc846fb04df9
1.6.2: 67991fc144b03596f15be6c20ca112d10bd92ad467414e95b0f1d60d332ae34e
1.6.1: 992e4ffd3d88cf197f78b78333ac345faf5f184a119d43ad8a106f560781fd89
1.6.0: fc3e7eef775eff85eb6c16b2761a574e83de444831312bc92e755a1f5577872d
1.5.0: 6dc945e3dfdc38e77ceafd2ec491af753366a3cf83fefccb1debaed3459829f1
1.4.0: d8dcd4e270ae76ab294be3a451a2d8299010e69dce6ae559bc3193535610e4cc
1.3.1: 3ab552877100b336ebe3167aa57f66f99db763742f2bce9e6233644ef77fb7c9
@@ -1316,11 +1250,6 @@ nerdctl_archive_checksums:
1.1.0: fcfd36b0b9441541aab0793c0f586599e6d774781c74f16468a3300026120c0e
1.0.0: 3e993d714e6b88d1803a58d9ff5a00d121f0544c35efed3a3789e19d6ab36964
ppc64le:
1.7.1: 09fd0cbef25c98e08c5cc2d1e39da279cbf66c430fdf6c8738e56ce8f949dad9
1.7.0: e421ae655ff68461bad04b4a1a0ffe40c6f0fcfb0847d5730d66cd95a7fd10cd
1.6.2: 3b0d6e4c42b99e2dd8059ded81cde69f42b065d9f486142f3c9b0861ba7effef
1.6.1: 3924467d9430df991ebdf4e78211bac2b29e9a066d5000d98f8d4ebde2bb7b4c
1.6.0: c47717ed176f55b291d2068ed6e2445481c391936bd322614e0ff9effe06eb4d
1.5.0: 169d546d35ba3e6ef088cc81e101f58d5ecb08e71c5ed776c99482854ea3ef8a
1.4.0: 306d5915b387637407db67ceb96cd89ff7069f0024fb1bbc948a6602638eceaa
1.3.1: 21700f5fe8786ed7749b61b3dbd49e3f2345461e88fe2014b418a1bdeffbfb99
@@ -1331,20 +1260,12 @@ nerdctl_archive_checksums:
1.0.0: 2fb02e629a4be16b194bbfc64819132a72ede1f52596bd8e1ec2beaf7c28c117
containerd_archive_checksums:
arm:
1.7.11: 0
1.7.10: 0
1.7.9: 0
1.7.8: 0
1.7.7: 0
1.7.6: 0
1.7.5: 0
1.7.4: 0
1.7.3: 0
1.7.2: 0
1.7.1: 0
1.7.0: 0
1.6.25: 0
1.6.24: 0
1.6.23: 0
1.6.22: 0
1.6.21: 0
@@ -1523,7 +1444,6 @@ containerd_archive_checksums:
1.5.14: 0
skopeo_binary_checksums:
arm:
v1.13.3: 0
v1.13.2: 0
v1.13.1: 0
v1.13.0: 0
@@ -1535,7 +1455,6 @@ skopeo_binary_checksums:
v1.9.3: 0
v1.9.2: 0
arm64:
v1.13.3: 1f7726b020ff9bc931ce16caa13c29999738a231f1414028282cd8f8661eb747
v1.13.2: 520cc31c15796405b82d01c78629d5b581eced3512ca0b6b184ed82f5e18dc86
v1.13.1: 3b7db2b827fea432aa8a861b5caa250271c05da70bd240aa4045f692eba52e24
v1.13.0: d23e43323c0a441d1825f9da483b07c7f265f2bd0a4728f7daac4239460600a3
@@ -1547,7 +1466,6 @@ skopeo_binary_checksums:
v1.9.3: 27c88183de036ebd4ffa5bc5211329666e3c40ac69c5d938bcdab9b9ec248fd4
v1.9.2: 1b7b4411c9723dbbdda4ae9dde23a33d8ab093b54c97d3323784b117d3e9413f
amd64:
v1.13.3: 65707992885b1a4a446af6342874749478a1af7e17ab3f4df8fb89509e8b1966
v1.13.2: 2f00be6ee1c4cbfa7f2452be90a1a2ce88fd92a6d0f6a2e9d901bd2087bd9092
v1.13.1: 8c15c56a6caffeb863c17d73a6361218c04c7763e020fffc8d5d6745cacfa901
v1.13.0: 8cb477ee25010497fc9df53a6205dbd9fe264dd8a5ea4e934b9ec24d5bdc126c
@@ -1559,7 +1477,6 @@ skopeo_binary_checksums:
v1.9.3: 6e00cf4661c081fb1d010ce60904dccb880788a52bf10de16a40f32082415a87
v1.9.2: 5c82f8fc2bcb2502cf7cdf9239f54468d52f5a2a8072893c75408b78173c4ba6
ppc64le:
v1.13.3: 0
v1.13.2: 0
v1.13.1: 0
v1.13.0: 0
@@ -1572,8 +1489,6 @@ skopeo_binary_checksums:
v1.9.2: 0
yq_checksums:
arm:
v4.40.1: 5a005b89cb63994f999785716ab196160042516cad53a0535244f76aad468966
v4.35.2: 000e1a8e82be5e99341c507a2abe93e104f0d4619dc7df742e88043206544c7e
v4.35.1: b2349bc220394329bc95865375feb5d777f5a5177bcdede272788b218f057a05
v4.34.2: 161f2b64e7bf277614983014b2b842e9ae9c1f234a9ea12593b0e5ebe5a89681
v4.34.1: dfda7fc51bdf44d3551c4bca78ecd52c13d7137d99ec3f7b466c50333e0a0b7c
@@ -1584,8 +1499,6 @@ yq_checksums:
v4.32.1: 0fc6c3e41af7613dcd0e2af5c3c448e7d0a46eab8b38a22f682d71e35720daed
v4.31.2: 4873c86de5571487cb2dcfd68138fc9e0aa9a1382db958527fa8bc02349b5b26
arm64:
-v4.40.1: 11491c62fa0af9995f26a64e9cce97c8404018bb6b0acd7d7ac1be0f437ecf28
-v4.35.2: 6ea822bc966e7dc23bb7d675a1ff36bc2e7a9a9f88c402129eafbd6b19d8ff8a
v4.35.1: 1d830254fe5cc2fb046479e6c781032976f5cf88f9d01a6385898c29182f9bed
v4.34.2: 6ea70418755aa805b6d03e78a1c8a1bf236220f187ba3fb4f30663a35c43b4c1
v4.34.1: c1410df7b1266d34a89a91dcfeaf8eb27cb1c3f69822d72040d167ec61917ba0
@@ -1596,8 +1509,6 @@ yq_checksums:
v4.32.1: db4eba6ced2656e1c40e4d0f406ee189773bdda1054cbd097c1dba471e04dd4d
v4.31.2: 590f19f0a696103376e383c719fe0df28c62515627bf44e5e69403073ba83cbf
amd64:
-v4.40.1: 97e931eb40791b7f0cf02363065684d807bdbc0c5973b97a37d60a7a71e8cf73
-v4.35.2: 8afd786b3b8ba8053409c5e7d154403e2d4ed4cf3e93c237462dc9ef75f38c8d
v4.35.1: bd695a6513f1196aeda17b174a15e9c351843fb1cef5f9be0af170f2dd744f08
v4.34.2: 1952f93323e871700325a70610d2b33bafae5fe68e6eb4aec0621214f39a4c1e
v4.34.1: c5a92a572b3bd0024c7b1fe8072be3251156874c05f017c23f9db7b3254ae71a
@@ -1608,8 +1519,6 @@ yq_checksums:
v4.32.1: e53b82caa86477bd96cf447138c72c9a0a857142a5bcdd34440b2644693ed18f
v4.31.2: 71ef4141dbd9aec3f7fb45963b92460568d044245c945a7390831a5a470623f7
ppc64le:
-v4.40.1: 7434cac727b3bc544e2e91abffb52bafa7180cc344cd84b5019ccbd2eebf9f0c
-v4.35.2: 33242c57d1cab1b880b37ea7235c09966a8525319edc41ced2c70290c6a7c924
v4.35.1: 713e2c40c5d659cbed7bf093f4c718674a75f9fe5b10ac96fd422372af198684
v4.34.2: e149b36f93a1318414c0af971755a1488df4844356b6e9e052adf099a72e3a3a
v4.34.1: 3e629c51a07302920110893796f54f056a6ef232f791b9c67fdbe95362921a03
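The tables above pin a sha256 digest per upstream version and architecture, with `0` apparently serving as a placeholder where no digest is pinned (e.g. arm builds of containerd). A minimal sketch of how a download step might verify a fetched binary against such a table — the helper names are hypothetical, and skipping verification on a `0` entry is an assumption about the placeholder's intent, not kubespray code:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through sha256 in chunks, as a downloader would."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path: str, expected) -> bool:
    # Assumption: a checksum of 0 is a placeholder (no published digest),
    # so there is nothing to verify against.
    if expected in (0, "0"):
        return True
    return sha256_of(path) == expected
```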

@@ -57,8 +57,7 @@ download_retries: 4
docker_image_pull_command: "{{ docker_bin_dir }}/docker pull"
docker_image_info_command: "{{ docker_bin_dir }}/docker images -q | xargs -i {{ '{{' }} docker_bin_dir }}/docker inspect -f {% raw %}'{{ '{{' }} if .RepoTags }}{{ '{{' }} join .RepoTags \",\" }}{{ '{{' }} end }}{{ '{{' }} if .RepoDigests }},{{ '{{' }} join .RepoDigests \",\" }}{{ '{{' }} end }}' {% endraw %} {} | tr '\n' ','"
nerdctl_image_info_command: "{{ bin_dir }}/nerdctl -n k8s.io images --format '{% raw %}{{ .Repository }}:{{ .Tag }}{% endraw %}' 2>/dev/null | grep -v ^:$ | tr '\n' ','"
-# Using ctr instead of nerdctl to work around https://github.com/kubernetes-sigs/kubespray/issues/10670
-nerdctl_image_pull_command: "{{ bin_dir }}/ctr -n k8s.io images pull{% if containerd_registries_mirrors is defined %} --hosts-dir {{ containerd_cfg_dir }}/certs.d{%- endif -%}"
+nerdctl_image_pull_command: "{{ bin_dir }}/nerdctl -n k8s.io pull --quiet"
crictl_image_info_command: "{{ bin_dir }}/crictl images --verbose | awk -F ': ' '/RepoTags|RepoDigests/ {print $2}' | tr '\n' ','"
crictl_image_pull_command: "{{ bin_dir }}/crictl pull"
@@ -101,7 +100,7 @@ github_image_repo: "ghcr.io"
# TODO(mattymo): Move calico versions to roles/network_plugins/calico/defaults
# after migration to container download
-calico_version: "v3.26.4"
+calico_version: "v3.25.2"
calico_ctl_version: "{{ calico_version }}"
calico_cni_version: "{{ calico_version }}"
calico_flexvol_version: "{{ calico_version }}"
@@ -115,7 +114,6 @@ flannel_version: "v0.22.0"
flannel_cni_version: "v1.1.2"
cni_version: "v1.3.0"
weave_version: 2.8.1
-pod_infra_version: "3.9"
cilium_version: "v1.13.4"
cilium_cli_version: "v0.15.0"
@@ -123,35 +121,41 @@ cilium_enable_hubble: false
kube_ovn_version: "v1.11.5"
kube_ovn_dpdk_version: "19.11-{{ kube_ovn_version }}"
-kube_router_version: "v2.0.0"
+kube_router_version: "v1.5.1"
multus_version: "v3.8"
-helm_version: "v3.13.1"
-nerdctl_version: "1.7.1"
+helm_version: "v3.12.3"
+nerdctl_version: "1.5.0"
krew_version: "v0.4.4"
skopeo_version: "v1.13.2"
# Get kubernetes major version (i.e. 1.17.4 => 1.17)
kube_major_version: "{{ kube_version | regex_replace('^v([0-9])+\\.([0-9]+)\\.[0-9]+', 'v\\1.\\2') }}"
+pod_infra_supported_version:
+  v1.27: "3.9"
+  v1.26: "3.9"
+  v1.25: "3.8"
+pod_infra_version: "{{ pod_infra_supported_version[kube_major_version] }}"
etcd_supported_versions:
-v1.28: "v3.5.10"
v1.27: "v3.5.10"
v1.26: "v3.5.10"
+v1.25: "v3.5.9"
etcd_version: "{{ etcd_supported_versions[kube_major_version] }}"
crictl_supported_versions:
-v1.28: "v1.28.0"
v1.27: "v1.27.1"
v1.26: "v1.26.1"
+v1.25: "v1.25.0"
crictl_version: "{{ crictl_supported_versions[kube_major_version] }}"
crio_supported_versions:
-v1.28: v1.28.1
v1.27: v1.27.1
v1.26: v1.26.4
+v1.25: v1.25.4
crio_version: "{{ crio_supported_versions[kube_major_version] }}"
-yq_version: "v4.35.2"
+yq_version: "v4.35.1"
# Download URLs
kubelet_download_url: "https://dl.k8s.io/release/{{ kube_version }}/bin/linux/{{ image_arch }}/kubelet"
@@ -281,7 +285,7 @@ coredns_image_is_namespaced: "{{ (coredns_version is version('v1.7.1', '>=')) }}
coredns_image_repo: "{{ kube_image_repo }}{{ '/coredns/coredns' if (coredns_image_is_namespaced | bool) else '/coredns' }}"
coredns_image_tag: "{{ coredns_version if (coredns_image_is_namespaced | bool) else (coredns_version | regex_replace('^v', '')) }}"
-nodelocaldns_version: "1.22.28"
+nodelocaldns_version: "1.22.20"
nodelocaldns_image_repo: "{{ kube_image_repo }}/dns/k8s-dns-node-cache"
nodelocaldns_image_tag: "{{ nodelocaldns_version }}"
@@ -307,14 +311,14 @@ rbd_provisioner_image_tag: "{{ rbd_provisioner_version }}"
local_path_provisioner_version: "v0.0.24"
local_path_provisioner_image_repo: "{{ docker_image_repo }}/rancher/local-path-provisioner"
local_path_provisioner_image_tag: "{{ local_path_provisioner_version }}"
-ingress_nginx_version: "v1.9.4"
+ingress_nginx_version: "v1.8.1"
ingress_nginx_controller_image_repo: "{{ kube_image_repo }}/ingress-nginx/controller"
ingress_nginx_controller_image_tag: "{{ ingress_nginx_version }}"
ingress_nginx_kube_webhook_certgen_image_repo: "{{ kube_image_repo }}/ingress-nginx/kube-webhook-certgen"
-ingress_nginx_kube_webhook_certgen_image_tag: "v20231011-8b53cabe0"
+ingress_nginx_kube_webhook_certgen_image_tag: "v20230407"
alb_ingress_image_repo: "{{ docker_image_repo }}/amazon/aws-alb-ingress-controller"
alb_ingress_image_tag: "v1.1.9"
-cert_manager_version: "v1.13.2"
+cert_manager_version: "v1.11.1"
cert_manager_controller_image_repo: "{{ quay_image_repo }}/jetstack/cert-manager-controller"
cert_manager_controller_image_tag: "{{ cert_manager_version }}"
cert_manager_cainjector_image_repo: "{{ quay_image_repo }}/jetstack/cert-manager-cainjector"
@@ -336,9 +340,9 @@ csi_livenessprobe_image_repo: "{{ kube_image_repo }}/sig-storage/livenessprobe"
csi_livenessprobe_image_tag: "v2.5.0"
snapshot_controller_supported_versions:
-v1.28: "v4.2.1"
v1.27: "v4.2.1"
v1.26: "v4.2.1"
+v1.25: "v4.2.1"
snapshot_controller_image_repo: "{{ kube_image_repo }}/sig-storage/snapshot-controller"
snapshot_controller_image_tag: "{{ snapshot_controller_supported_versions[kube_major_version] }}"
@@ -695,7 +699,7 @@ downloads:
enabled: "{{ kube_network_plugin == 'cilium' or cilium_deploy_additionally | default(false) | bool }}"
file: true
version: "{{ cilium_cli_version }}"
-dest: "{{ local_release_dir }}/cilium-{{ cilium_cli_version }}-{{ image_arch }}.tar.gz"
+dest: "{{ local_release_dir }}/cilium-{{ cilium_cli_version }}-{{ image_arch }}"
sha256: "{{ ciliumcli_binary_checksum }}"
url: "{{ ciliumcli_download_url }}"
unarchive: true
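Several of the defaults above key component versions off `kube_major_version`, which the `regex_replace` filter derives from `kube_version` (e.g. `v1.27.10` becomes `v1.27`) before indexing maps like `etcd_supported_versions`. A sketch of that lookup pattern in plain Python — the function names are illustrative, and the map below copies only the entries visible in this diff:

```python
import re

def kube_major_version(kube_version: str) -> str:
    # Mirrors the Jinja regex_replace in the defaults: v1.27.10 -> v1.27
    return re.sub(r'^v(\d+)\.(\d+)\.\d+', r'v\1.\2', kube_version)

# Entries as shown in the diff above.
etcd_supported_versions = {
    "v1.27": "v3.5.10",
    "v1.26": "v3.5.10",
    "v1.25": "v3.5.9",
}

def etcd_version_for(kube_version: str) -> str:
    # Same shape as: etcd_version: "{{ etcd_supported_versions[kube_major_version] }}"
    return etcd_supported_versions[kube_major_version(kube_version)]
```

An unsupported Kubernetes minor raises `KeyError` here; in Ansible the undefined lookup similarly fails the play rather than silently picking a version.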

@@ -120,10 +120,3 @@ etcd_experimental_initial_corrupt_check: true
# may contain some private data, so it is recommended to set it to false
# in the production environment.
unsafe_show_logs: false
-# Enable distributed tracing
-# https://etcd.io/docs/v3.5/op-guide/monitoring/#distributed-tracing
-etcd_experimental_enable_distributed_tracing: false
-etcd_experimental_distributed_tracing_sample_rate: 100
-etcd_experimental_distributed_tracing_address: "localhost:4317"
-etcd_experimental_distributed_tracing_service_name: etcd

@@ -1,14 +1,22 @@
---
+- name: Backup etcd data
+command: /bin/true
+notify:
+- Refresh Time Fact
+- Set Backup Directory
+- Create Backup Directory
+- Stat etcd v2 data directory
+- Backup etcd v2 data
+- Backup etcd v3 data
+when: etcd_cluster_is_healthy.rc == 0
- name: Refresh Time Fact
setup:
filter: ansible_date_time
-listen: Restart etcd
-when: etcd_cluster_is_healthy.rc == 0
- name: Set Backup Directory
set_fact:
etcd_backup_directory: "{{ etcd_backup_prefix }}/etcd-{{ ansible_date_time.date }}_{{ ansible_date_time.time }}"
listen: Restart etcd
- name: Create Backup Directory
file:
@@ -17,8 +25,6 @@
owner: root
group: root
mode: 0600
-listen: Restart etcd
-when: etcd_cluster_is_healthy.rc == 0
- name: Stat etcd v2 data directory
stat:
@@ -27,13 +33,9 @@
get_checksum: no
get_mime: no
register: etcd_data_dir_member
-listen: Restart etcd
-when: etcd_cluster_is_healthy.rc == 0
- name: Backup etcd v2 data
-when:
-- etcd_data_dir_member.stat.exists
-- etcd_cluster_is_healthy.rc == 0
+when: etcd_data_dir_member.stat.exists
command: >-
{{ bin_dir }}/etcdctl backup
--data-dir {{ etcd_data_dir }}
@@ -44,7 +46,6 @@
register: backup_v2_command
until: backup_v2_command.rc == 0
delay: "{{ retry_stagger | random + 3 }}"
-listen: Restart etcd
- name: Backup etcd v3 data
command: >-
@@ -60,5 +61,3 @@
register: etcd_backup_v3_command
until: etcd_backup_v3_command.rc == 0
delay: "{{ retry_stagger | random + 3 }}"
-listen: Restart etcd
-when: etcd_cluster_is_healthy.rc == 0
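Both backup tasks above wrap `etcdctl` in Ansible's `register`/`until`/`delay` retry pattern, with the delay computed as `retry_stagger | random + 3`. A rough Python analogue of that loop, as an illustration of the semantics rather than anything kubespray ships — the function name and defaults are made up:

```python
import random
import subprocess
import time

def run_with_retries(cmd, retries=4, retry_stagger=5):
    """Retry a command until rc == 0, sleeping a staggered delay between
    attempts -- approximating Ansible's register/until/delay pattern."""
    for attempt in range(retries + 1):
        result = subprocess.run(cmd, capture_output=True)
        if result.returncode == 0:
            return result
        # delay: "{{ retry_stagger | random + 3 }}" picks 0..retry_stagger-1, plus 3
        time.sleep(random.randrange(retry_stagger) + 3)
    raise RuntimeError(f"{cmd!r} failed after {retries + 1} attempts")
```

The stagger spreads retries across hosts so an etcd cluster under load is not hammered in lockstep.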

@@ -1,18 +1,12 @@
---
- name: Find old etcd backups
ansible.builtin.find:
file_type: directory
recurse: false
paths: "{{ etcd_backup_prefix }}"
patterns: "etcd-*"
register: _etcd_backups
when: etcd_backup_retention_count >= 0
listen: Restart etcd
- name: Cleanup etcd backups
command: /bin/true
notify:
- Remove old etcd backups
- name: Remove old etcd backups
ansible.builtin.file:
state: absent
path: "{{ item }}"
loop: "{{ (_etcd_backups.files | sort(attribute='ctime', reverse=True))[etcd_backup_retention_count:] | map(attribute='path') }}"
shell:
chdir: "{{ etcd_backup_prefix }}"
cmd: "set -o pipefail && find . -name 'etcd-*' -type d | sort -n | head -n -{{ etcd_backup_retention_count }} | xargs rm -rf"
executable: /bin/bash
when: etcd_backup_retention_count >= 0
listen: Restart etcd
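The two cleanup implementations in this hunk prune old backups differently: one lists `etcd-*` directories with the `find` module and deletes everything past the newest `etcd_backup_retention_count` entries sorted by `ctime`, while the shell variant sorts by name (`find | sort -n | head -n -N | xargs rm -rf`). A sketch of the ctime-based selection, with a hypothetical function name, to make the slice semantics concrete:

```python
def backups_to_remove(backup_dirs, retention_count):
    """Return the backup paths to delete, keeping the `retention_count`
    newest by ctime. `backup_dirs` is a list of (path, ctime) pairs,
    standing in for the `find` module's registered results."""
    if retention_count < 0:
        # Negative retention disables cleanup entirely (the `when:` guard).
        return []
    newest_first = sorted(backup_dirs, key=lambda d: d[1], reverse=True)
    # Everything after the first `retention_count` entries is removed,
    # mirroring: sort(attribute='ctime', reverse=True)[etcd_backup_retention_count:]
    return [path for path, _ in newest_first[retention_count:]]
```

Note the name-sorted shell pipeline only matches this when directory names sort in creation order, which the `etcd-<date>_<time>` naming generally provides.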

@@ -1,27 +1,38 @@
---
- name: Restart etcd
command: /bin/true
notify:
- Backup etcd data
- Etcd | reload systemd
- Reload etcd
- Wait for etcd up
- Cleanup etcd backups
- name: Restart etcd-events
command: /bin/true
notify:
- Etcd | reload systemd
- Reload etcd-events
- Wait for etcd-events up
- name: Backup etcd
import_tasks: backup.yml
- name: Etcd | reload systemd
systemd:
daemon_reload: true
listen:
- Restart etcd
- Restart etcd-events
- name: Reload etcd
service:
name: etcd
state: restarted
when: is_etcd_master
listen: Restart etcd
- name: Reload etcd-events
service:
name: etcd-events
state: restarted
when: is_etcd_master
listen: Restart etcd-events
- name: Wait for etcd up
uri:
@@ -33,7 +44,6 @@
until: result.status is defined and result.status == 200
retries: 60
delay: 1
-listen: Restart etcd
- name: Cleanup etcd backups
import_tasks: backup_cleanup.yml
@@ -48,7 +58,6 @@
until: result.status is defined and result.status == 200
retries: 60
delay: 1
-listen: Restart etcd-events
- name: Set etcd_secret_changed
set_fact:

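The handlers hunk above swaps between two wiring styles: individual handlers subscribing to a topic via `listen: Restart etcd`, versus a meta-handler (`command: /bin/true`) that fans out to named handlers through `notify`. A toy dispatcher to illustrate the ordering difference — explicitly not Ansible internals (real Ansible queues notified handlers and flushes them later; this inline version only shows who fires and in what order):

```python
class Handlers:
    """Toy model of Ansible-style handler dispatch (illustration only)."""

    def __init__(self):
        self.by_name = {}
        self.by_topic = {}
        self.ran = []

    def add(self, name, listen=None, notify=()):
        def task():
            self.ran.append(name)
            for target in notify:      # notify chain: run named handlers next
                self.by_name[target]()
        self.by_name[name] = task
        if listen:                     # listen: subscribe to a topic
            self.by_topic.setdefault(listen, []).append(task)

    def trigger(self, topic):
        # Every handler subscribed to the topic fires, in registration order.
        for task in self.by_topic.get(topic, []):
            task()
```

With `listen`, ordering depends on where each subscriber is defined; the meta-handler style makes the sequence (backup, reload systemd, restart, wait, cleanup) explicit in one `notify` list, which appears to be the motivation for the restructuring.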
Some files were not shown because too many files have changed in this diff.