Compare commits


22 Commits

Author SHA1 Message Date
ChengHao Yang
45140b5582 Fix: galaxy.yml set version to 2.27.1 (#12345)
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-06-27 07:00:33 -07:00
k8s-infra-cherrypick-robot
16760787ad Add version pinning for AWS tf provider to fix CI (#12326)
Co-authored-by: Chad Swenson <chadswen@gmail.com>
2025-06-19 19:48:51 -07:00
k8s-infra-cherrypick-robot
266117d174 fix manage-offline-container-images.sh get image_id (#12314)
Co-authored-by: DearJay <zhongtianjieyi143@gmail.com>
2025-06-15 07:46:57 -07:00
Ali Afsharzadeh
c59833b2e5 [release-2.27] Patch versions update (#12231)
* [release-2.27] Patch versions update

* Add calico crds archive checksum for v3.29.3

* Update kube_version in roles/kubespray-defaults/defaults/main/main.yml

* Revert crio version upgrade

* Upgrade calico to v3.29.4
2025-06-05 09:00:38 -07:00
Max Gautier
55194fcf6d Move 'pretend certificates' **after** cert distribution (#12221)
The link target will only exist after we distribute the certs on each node.
2025-05-16 07:43:14 -07:00
k8s-infra-cherrypick-robot
d10000ee90 Workaround missing etcd certs on control plane node (#12192)
Co-authored-by: Max Gautier <mg@max.gautier.name>
2025-05-06 09:31:16 -07:00
Ali Afsharzadeh
6a67d28fab [release-2.27] Make fallback_ip cacheable in facts (#12182)
* Make fallback_ip cacheable in facts

* Move cacheable property after fallback_ip variable

Co-authored-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

---------

Co-authored-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-05-02 22:03:55 -07:00
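The `cacheable` property referenced in this commit stores a fact in Ansible's fact cache so later runs can reuse it without re-resolving. A minimal hedged sketch (the expression here is illustrative, not Kubespray's actual default):

```yaml
- name: Make fallback_ip survive in the fact cache
  set_fact:
    # Illustrative expression; Kubespray derives fallback_ip differently.
    fallback_ip: "{{ ansible_default_ipv4.address | default(ansible_host) }}"
    # cacheable: true writes the fact to the configured fact cache
    # (e.g. jsonfile), so it persists across playbook runs.
    cacheable: true
```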
Chad Swenson
bf68231a5a Refactor control plane upgrades with reconfiguration support (#12015) (#12103)
* Refactor control plane upgrades with reconfiguration support

Adds revised support for:
- The previously removed `--config` argument for `kubeadm upgrade apply`
- Changes to `ClusterConfiguration` as part of the `upgrade-cluster.yml` playbook lifecycle
- kubeadm-config `v1beta4` `UpgradeConfiguration` for the `kubeadm upgrade apply` command: [UpgradeConfiguration v1beta4](https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta4/#kubeadm-k8s-io-v1beta4-UpgradeConfiguration).

* Add kubeadm upgrade node support

Per discussion:
- Use `kubeadm upgrade node` on secondary control plane upgrades
- Add support for UpgradeConfiguration.node in kubeadm-config.v1beta4
- Remove redundant `allowRCUpgrades` config
- Revert from `block` for first and secondary control plane back to unblocked tasks since they no longer share much code and it's more readable this way
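The v1beta4 `UpgradeConfiguration` mentioned above carries separate sections for `kubeadm upgrade apply` (first control plane) and `kubeadm upgrade node` (secondary control planes). A minimal sketch of such a kubeadm-config document — field values are illustrative, not Kubespray defaults:

```yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: UpgradeConfiguration
# Consumed by `kubeadm upgrade apply` on the first control plane
apply:
  certificateRenewal: true
# Consumed by `kubeadm upgrade node` on secondary control planes
node:
  certificateRenewal: true
```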

* Add kubelet and kube-proxy reconfiguration to upgrades

* Fix task to use `kubeadm init phase etcd local`

* Rebase with changes from "Adapt checksums and versions to new hashes updater" PR

* Add `imagePullPolicy` and `imagePullSerial` to kubeadm-config v1beta4 `InitConfiguration.nodeRegistration`

(cherry picked from commit b551fe083d)
2025-04-02 23:18:38 -07:00
ChengHao Yang
de25806c56 Bump ingress-nginx to 1.12.1 and certgen to 1.5.2 (#12080)
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-03-27 00:44:34 -07:00
ChengHao Yang
bbabe496c4 [calico] fix v3.29.2 crds archive checksum (#12082)
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-03-26 02:14:33 -07:00
k8s-infra-cherrypick-robot
6073fee806 build(deps): bump cryptography from 44.0.1 to 44.0.2 (#12062)
Bumps [cryptography](https://github.com/pyca/cryptography) from 44.0.1 to 44.0.2.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/44.0.1...44.0.2)

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-21 06:32:32 -07:00
k8s-infra-cherrypick-robot
e354295476 fix: kubecontrollersconfigurations list permission (#12039)
[WARNING][1] kube-controllers/runconfig.go 193: unable to list KubeControllersConfiguration(default) error=connection is unauthorized: kubecontrollersconfigurations.crd.projectcalico.org "default" is forbidden: User "system:serviceaccount:kube-system:calico-kube-controllers" cannot list resource "kubecontrollersconfigurations" in API group "crd.projectcalico.org" at the cluster scope

Co-authored-by: darkobas <marko@datafund.io>
2025-03-15 09:15:47 -07:00
Kubernetes Prow Robot
1af53ce9a6 Merge pull request #12031 from VannTen/2.27-update-versions
[release-2.27] Patch versions update
2025-03-14 01:27:48 -07:00
Max Gautier
26779c01a9 CI: switch crio testing to ubuntu20
The switch to crun as the default runtime does not work on RHEL 8-like
OSes because of the cgroups v2 default.

https://github.com/cri-o/cri-o/issues/8743
2025-03-13 15:43:14 +01:00
Max Gautier
5e083a5370 Update defaults versions to last checksums 2025-03-13 12:09:40 +01:00
Max Gautier
1528bdda39 Checksums updates 2025-03-13 12:05:40 +01:00
k8s-infra-cherrypick-robot
ccf2abb5b1 Remove amazon-linux2 from CI: issue with vm creation (#12017)
Co-authored-by: ant31 <2t.antoine@gmail.com>
2025-03-04 04:35:43 -08:00
k8s-infra-cherrypick-robot
ecd5b73c5e build(deps): bump cryptography from 44.0.0 to 44.0.1 (#11973)
Bumps [cryptography](https://github.com/pyca/cryptography) from 44.0.0 to 44.0.1.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/44.0.0...44.0.1)

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-19 01:08:27 -08:00
k8s-infra-cherrypick-robot
3514ae8d04 [release-2.27] Fix incorrect syntax for secondary nodelocaldns manifest (#11957)
* Fix incorrect syntax

* Fix incorrect syntax

---------

Co-authored-by: Raul Butuc <raulbutuc@gmail.com>
2025-02-07 08:57:56 -08:00
k8s-infra-cherrypick-robot
99e2bfe2fa [release-2.27] Fix CI by exclude the .ansible in .ansible-lint & remove ctr image pull workaround (#11956)
* exclude .ansible in ansible-lint

* remove `ctr i pull` workaround

Signed-off-by: Kay Yan <kay.yan@daocloud.io>

---------

Signed-off-by: Kay Yan <kay.yan@daocloud.io>
Co-authored-by: Kay Yan <kay.yan@daocloud.io>
2025-02-07 08:05:58 -08:00
k8s-infra-cherrypick-robot
7d14c4283a [release-2.27] Updated sample in inventory (#11922)
* Updated sample in inventory

* Review changes

---------

Co-authored-by: Anshuman <anshuman@ibm.com>
2025-01-24 00:39:21 -08:00
k8s-infra-cherrypick-robot
eb413e4719 [release-2.27] Add manual option to the external_cloud_provider variable (#11884)
* Add `manual` option in the `external_cloud_provider` value

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

* Update external cloud provider description in roles & sample inventory

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

---------

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
Co-authored-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-01-13 08:24:33 -08:00
117 changed files with 1584 additions and 1195 deletions

View File

@@ -37,5 +37,6 @@ exclude_paths:
- tests/files/custom_cni/cilium.yaml
- venv
- .github
- .ansible
mock_modules:
- gluster.gluster.gluster_volume

View File

@@ -7,8 +7,3 @@ updates:
labels:
- dependencies
- release-note-none
groups:
molecule:
patterns:
- molecule
- molecule-plugins*

View File

@@ -6,10 +6,11 @@ stages:
- deploy-extended
variables:
KUBESPRAY_VERSION: v2.27.0
KUBESPRAY_VERSION: v2.26.0
FAILFASTCI_NAMESPACE: 'kargo-ci'
GITLAB_REPOSITORY: 'kargo-ci/kubernetes-sigs-kubespray'
ANSIBLE_FORCE_COLOR: "true"
ANSIBLE_STDOUT_CALLBACK: "debug"
MAGIC: "ci check this"
GS_ACCESS_KEY_ID: $GS_KEY
GS_SECRET_ACCESS_KEY: $GS_SECRET
@@ -42,8 +43,6 @@ before_script:
- cluster-dump/
needs:
- pipeline-image
variables:
ANSIBLE_STDOUT_CALLBACK: "debug"
.job-moderated:
extends: .job
@@ -56,6 +55,7 @@ before_script:
.testcases: &testcases
extends: .job-moderated
retry: 1
interruptible: true
before_script:
- update-alternatives --install /usr/bin/python python /usr/bin/python3 1

View File

@@ -25,7 +25,6 @@
--label 'git-branch'=$CI_COMMIT_REF_SLUG
--label 'git-tag=$CI_COMMIT_TAG'
--destination $PIPELINE_IMAGE
--log-timestamp=true
pipeline-image:
extends: .build-container

View File

@@ -7,7 +7,7 @@ pre-commit:
variables:
PRE_COMMIT_HOME: /pre-commit-cache
script:
- pre-commit run --all-files --show-diff-on-failure
- pre-commit run --all-files
cache:
key: pre-commit-all
paths:

View File

@@ -75,7 +75,7 @@ packet_ubuntu20-calico-all-in-one:
# ### PR JOBS PART2
packet_ubuntu20-crio:
extends: .packet_pr_manual
extends: .packet_pr
packet_ubuntu22-calico-all-in-one:
extends: .packet_pr
@@ -88,10 +88,10 @@ packet_ubuntu22-calico-all-in-one-upgrade:
packet_ubuntu24-calico-etcd-datastore:
extends: .packet_pr
packet_almalinux9-crio:
extends: .packet_pr
packet_almalinux8-crio:
extends: .packet_pr_manual
packet_almalinux9-kube-ovn:
packet_almalinux8-kube-ovn:
extends: .packet_pr
packet_debian11-calico-collection:
@@ -103,9 +103,6 @@ packet_debian11-macvlan:
packet_debian12-cilium:
extends: .packet_pr
packet_almalinux8-calico:
extends: .packet_pr
packet_rockylinux8-calico:
extends: .packet_pr
@@ -114,8 +111,13 @@ packet_rockylinux9-cilium:
variables:
RESET_CHECK: "true"
# Need an update of the container image to use schema v2
# update: quay.io/kubespray/vm-amazon-linux-2:latest
packet_amazon-linux-2-all-in-one:
extends: .packet_pr
extends: .packet_pr_manual
rules:
- when: manual
allow_failure: true
packet_opensuse-docker-cilium:
extends: .packet_pr
@@ -139,7 +141,7 @@ packet_debian12-docker:
packet_debian12-calico:
extends: .packet_pr_extended
packet_almalinux9-calico-remove-node:
packet_almalinux8-calico-remove-node:
extends: .packet_pr_extended
variables:
REMOVE_NODE_CHECK: "true"
@@ -148,10 +150,10 @@ packet_almalinux9-calico-remove-node:
packet_rockylinux9-calico:
extends: .packet_pr_extended
packet_almalinux9-calico:
packet_almalinux8-calico:
extends: .packet_pr_extended
packet_almalinux9-docker:
packet_almalinux8-docker:
extends: .packet_pr_extended
packet_ubuntu24-calico-all-in-one:
@@ -182,10 +184,10 @@ packet_ubuntu20-flannel-ha-once:
packet_fedora39-calico-swap-selinux:
extends: .packet_pr_manual
packet_almalinux9-calico-ha-ebpf:
packet_almalinux8-calico-ha-ebpf:
extends: .packet_pr_manual
packet_almalinux9-calico-nodelocaldns-secondary:
packet_almalinux8-calico-nodelocaldns-secondary:
extends: .packet_pr_manual
packet_debian11-custom-cni:

View File

@@ -20,6 +20,12 @@ repos:
- id: yamllint
args: [--strict]
- repo: https://github.com/markdownlint/markdownlint
rev: v0.12.0
hooks:
- id: markdownlint
exclude: "^.github|(^docs/_sidebar\\.md$)"
- repo: https://github.com/shellcheck-py/shellcheck-py
rev: v0.10.0.1
hooks:
@@ -29,7 +35,7 @@ repos:
files: "\\.sh$"
- repo: https://github.com/ansible/ansible-lint
rev: v25.1.0
rev: v24.12.2
hooks:
- id: ansible-lint
additional_dependencies:
@@ -45,6 +51,12 @@ repos:
- repo: local
hooks:
- id: check-readme-versions
name: check-readme-versions
entry: tests/scripts/check_readme_versions.sh
language: script
pass_filenames: false
- id: collection-build-install
name: Build and install kubernetes-sigs.kubespray Ansible collection
language: python
@@ -78,17 +90,3 @@ repos:
- jinja
additional_dependencies:
- jinja2
- id: render-readme-versions
name: Update versions in README.md to match their defaults values
language: python
additional_dependencies:
- ansible-core>=2.16.4
entry: scripts/render_readme_version.yml
pass_filenames: false
- repo: https://github.com/markdownlint/markdownlint
rev: v0.12.0
hooks:
- id: markdownlint
exclude: "^.github|(^docs/_sidebar\\.md$)"

View File

@@ -77,47 +77,43 @@ vagrant up
- **Flatcar Container Linux by Kinvolk**
- **Debian** Bookworm, Bullseye
- **Ubuntu** 20.04, 22.04, 24.04
- **CentOS/RHEL** [8, 9](docs/operating_systems/rhel.md#rhel-8)
- **CentOS/RHEL** [8, 9](docs/operating_systems/centos.md#centos-8)
- **Fedora** 39, 40
- **Fedora CoreOS** (see [fcos Note](docs/operating_systems/fcos.md))
- **openSUSE** Leap 15.x/Tumbleweed
- **Oracle Linux** [8, 9](docs/operating_systems/rhel.md#rhel-8)
- **Alma Linux** [8, 9](docs/operating_systems/rhel.md#rhel-8)
- **Rocky Linux** [8, 9](docs/operating_systems/rhel.md#rhel-8)
- **Oracle Linux** [8, 9](docs/operating_systems/centos.md#centos-8)
- **Alma Linux** [8, 9](docs/operating_systems/centos.md#centos-8)
- **Rocky Linux** [8, 9](docs/operating_systems/centos.md#centos-8)
- **Kylin Linux Advanced Server V10** (experimental: see [kylin linux notes](docs/operating_systems/kylinlinux.md))
- **Amazon Linux 2** (experimental: see [amazon linux notes](docs/operating_systems/amazonlinux.md))
- **UOS Linux** (experimental: see [uos linux notes](docs/operating_systems/uoslinux.md))
- **openEuler** (experimental: see [openEuler notes](docs/operating_systems/openeuler.md))
Note:
- Upstart/SysV init based OS types are not supported.
- [Kernel requirements](docs/operations/kernel-requirements.md) (please read if the OS kernel version is < 4.19).
Note: Upstart/SysV init based OS types are not supported.
## Supported Components
<!-- BEGIN ANSIBLE MANAGED BLOCK -->
- Core
- [kubernetes](https://github.com/kubernetes/kubernetes) v1.32.0
- [etcd](https://github.com/etcd-io/etcd) v3.5.16
- [kubernetes](https://github.com/kubernetes/kubernetes) v1.31.9
- [etcd](https://github.com/etcd-io/etcd) v3.5.21
- [docker](https://www.docker.com/) v26.1
- [containerd](https://containerd.io/) v1.7.24
- [cri-o](http://cri-o.io/) v1.32.0 (experimental: see [CRI-O Note](docs/CRI/cri-o.md). Only on fedora, ubuntu and centos based OS)
- [containerd](https://containerd.io/) v1.7.27
- [cri-o](http://cri-o.io/) v1.31.6 (experimental: see [CRI-O Note](docs/CRI/cri-o.md). Only on fedora, ubuntu and centos based OS)
- Network Plugin
- [cni-plugins](https://github.com/containernetworking/plugins) v1.4.0
- [calico](https://github.com/projectcalico/calico) v3.29.1
- [cni-plugins](https://github.com/containernetworking/plugins) v1.4.1
- [calico](https://github.com/projectcalico/calico) v3.29.4
- [cilium](https://github.com/cilium/cilium) v1.15.9
- [flannel](https://github.com/flannel-io/flannel) v0.22.0
- [kube-ovn](https://github.com/alauda/kube-ovn) v1.12.21
- [kube-router](https://github.com/cloudnativelabs/kube-router) v2.0.0
- [multus](https://github.com/k8snetworkplumbingwg/multus-cni) v4.1.0
- [multus](https://github.com/k8snetworkplumbingwg/multus-cni) v3.8
- [weave](https://github.com/rajch/weave) v2.8.7
- [kube-vip](https://github.com/kube-vip/kube-vip) v0.8.0
- Application
- [cert-manager](https://github.com/jetstack/cert-manager) v1.15.3
- [coredns](https://github.com/coredns/coredns) v1.11.3
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v1.12.0
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v1.12.1
- [krew](https://github.com/kubernetes-sigs/krew) v0.4.4
- [argocd](https://argoproj.github.io/) v2.11.0
- [helm](https://helm.sh/) v3.16.4
- [metallb](https://metallb.universe.tf/) v0.13.9
@@ -133,15 +129,13 @@ Note:
- [local-volume-provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) v2.5.0
- [node-feature-discovery](https://github.com/kubernetes-sigs/node-feature-discovery) v0.16.4
<!-- END ANSIBLE MANAGED BLOCK -->
## Container Runtime Notes
- The cri-o version should be aligned with the respective kubernetes version (i.e. kube_version=1.20.x, crio_version=1.20)
## Requirements
- **Minimum required version of Kubernetes is v1.30**
- **Minimum required version of Kubernetes is v1.29**
- **Ansible v2.14+, Jinja 2.11+ and python-netaddr is installed on the machine that will run Ansible commands**
- The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](docs/operations/offline-environment.md))
- The target servers are configured to allow **IPv4 forwarding**.
@@ -155,10 +149,10 @@ Note:
Hardware:
These limits are safeguarded by Kubespray. Actual requirements for your workload can differ. For a sizing guide go to the [Building Large Clusters](https://kubernetes.io/docs/setup/cluster-large/#size-of-master-and-master-components) guide.
- Control Plane
- Memory: 2 GB
- Worker Node
- Memory: 1 GB
- Master
- Memory: 1500 MB
- Node
- Memory: 1024 MB
## Network Plugins

Vagrantfile vendored
View File

@@ -26,7 +26,6 @@ SUPPORTED_OS = {
"centos8-bento" => {box: "bento/centos-8", user: "vagrant"},
"almalinux8" => {box: "almalinux/8", user: "vagrant"},
"almalinux8-bento" => {box: "bento/almalinux-8", user: "vagrant"},
"almalinux9" => {box: "almalinux/9", user: "vagrant"},
"rockylinux8" => {box: "rockylinux/8", user: "vagrant"},
"rockylinux9" => {box: "rockylinux/9", user: "vagrant"},
"fedora39" => {box: "fedora/39-cloud-base", user: "vagrant"},
@@ -58,7 +57,8 @@ $subnet ||= "172.18.8"
$subnet_ipv6 ||= "fd3c:b398:0698:0756"
$os ||= "ubuntu2004"
$network_plugin ||= "flannel"
$inventories ||= []
$inventory ||= "inventory/sample"
$inventories ||= [$inventory]
# Setting multi_networking to true will install Multus: https://github.com/k8snetworkplumbingwg/multus-cni
$multi_networking ||= "False"
$download_run_once ||= "True"

View File

@@ -67,23 +67,3 @@ Step(2) download files and run nginx container
```
When the nginx container is running, it can be accessed at <http://127.0.0.1:8080/>.
## upload2artifactory.py
After the steps above, this script can recursively upload each file under a directory to a generic repository in Artifactory.
Environment Variables:
- USERNAME -- Requires at least the 'Deploy/Cache' and 'Delete/Overwrite' permissions.
- TOKEN -- Generate this with 'Set Me Up' in your user profile.
- BASE_URL -- The URL including the repository name.
Step(3) (optional) upload files to Artifactory
```shell
cd kubespray/contrib/offline/offline-files
export USERNAME=admin
export TOKEN=...
export BASE_URL=https://artifactory.example.com/artifactory/a-generic-repo/
./upload2artifactory.py
```

View File

@@ -146,7 +146,7 @@ function register_container_images() {
if [ "${org_image}" == "ID:" ]; then
org_image=$(echo "${load_image}" | awk '{print $4}')
fi
image_id=$(sudo ${runtime} image inspect ${org_image} | grep "\"Id\":" | awk -F: '{print $3}'| sed s/'\",'//)
image_id=$(sudo ${runtime} image inspect --format "{{.Id}}" "${org_image}")
if [ -z "${file_name}" ]; then
echo "Failed to get file_name for line ${line}"
exit 1
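The one-line fix above swaps brittle text parsing for the runtime's own Go-template formatter. A self-contained sketch of the difference, using a simulated inspect output since the real command needs a container runtime and a pulled image:

```shell
# Simulated output of `sudo ${runtime} image inspect ${org_image}` —
# the real command requires docker/nerdctl and an image present locally.
inspect_json='{
    "Id": "sha256:abc123def456",
    "RepoTags": ["nginx:latest"]
}'

# Old approach: grep/awk/sed text parsing. It silently depends on the
# JSON field order, indentation, and quoting staying exactly the same.
old_id=$(echo "${inspect_json}" | grep '"Id":' | awk -F: '{print $3}' | sed s/'",'//)
echo "parsed: ${old_id}"

# New approach (the commit above): ask the runtime to render the field
# via a Go template, which is stable across runtimes and output formats:
#   image_id=$(sudo ${runtime} image inspect --format "{{.Id}}" "${org_image}")
```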

View File

@@ -1,65 +0,0 @@
#!/usr/bin/env python3
"""This is a helper script to manage-offline-files.sh.
After running manage-offline-files.sh, you can run upload2artifactory.py
to recursively upload each file to a generic repository in Artifactory.
This script recurses the current working directory and is intended to
be started from 'kubespray/contrib/offline/offline-files'
Environment Variables:
USERNAME -- At least permissions'Deploy/Cache' and 'Delete/Overwrite'.
TOKEN -- Generate this with 'Set Me Up' in your user.
BASE_URL -- The URL including the repository name.
"""
import os
import urllib.request
import base64
def upload_file(file_path, destination_url, username, token):
"""Helper function to upload a single file"""
try:
with open(file_path, 'rb') as f:
file_data = f.read()
request = urllib.request.Request(destination_url, data=file_data, method='PUT') # NOQA
auth_header = base64.b64encode(f"{username}:{token}".encode()).decode()
request.add_header("Authorization", f"Basic {auth_header}")
with urllib.request.urlopen(request) as response:
if response.status in [200, 201]:
print(f"Success: Uploaded {file_path}")
else:
print(f"Failed: {response.status} {response.read().decode('utf-8')}") # NOQA
except urllib.error.HTTPError as e:
print(f"HTTPError: {e.code} {e.reason} for {file_path}")
except urllib.error.URLError as e:
print(f"URLError: {e.reason} for {file_path}")
except OSError as e:
print(f"OSError: {e.strerror} for {file_path}")
def upload_files(base_url, username, token):
""" Recurse current dir and upload each file using urllib.request """
for root, _, files in os.walk(os.getcwd()):
for file in files:
file_path = os.path.join(root, file)
relative_path = os.path.relpath(file_path, os.getcwd())
destination_url = f"{base_url}/{relative_path}"
print(f"Uploading {file_path} to {destination_url}")
upload_file(file_path, destination_url, username, token)
if __name__ == "__main__":
a_user = os.getenv("USERNAME")
a_token = os.getenv("TOKEN")
a_url = os.getenv("BASE_URL")
if not a_user or not a_token or not a_url:
print(
"Error: Environment variables USERNAME, TOKEN, and BASE_URL must be set." # NOQA
)
exit()
upload_files(a_url, a_user, a_token)

contrib/terraform/OWNERS Normal file
View File

@@ -0,0 +1,3 @@
# See the OWNERS docs at https://go.k8s.io/owners
approvers:
- miouge1

View File

@@ -1,5 +1,11 @@
terraform {
required_version = ">= 0.12.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
}
provider "aws" {

docs/_sidebar.md generated
View File

@@ -68,6 +68,7 @@
* Operating Systems
* [Amazonlinux](/docs/operating_systems/amazonlinux.md)
* [Bootstrap-os](/docs/operating_systems/bootstrap-os.md)
* [Centos](/docs/operating_systems/centos.md)
* [Fcos](/docs/operating_systems/fcos.md)
* [Flatcar](/docs/operating_systems/flatcar.md)
* [Kylinlinux](/docs/operating_systems/kylinlinux.md)
@@ -82,7 +83,6 @@
* [Ha-mode](/docs/operations/ha-mode.md)
* [Hardening](/docs/operations/hardening.md)
* [Integration](/docs/operations/integration.md)
* [Kernel-requirements](/docs/operations/kernel-requirements.md)
* [Large-deployments](/docs/operations/large-deployments.md)
* [Mirror](/docs/operations/mirror.md)
* [Nodes](/docs/operations/nodes.md)

View File

@@ -106,6 +106,7 @@ The following tags are defined in playbooks:
| iptables | Flush and clear iptable when resetting |
| k8s-pre-upgrade | Upgrading K8s cluster |
| kata-containers | Configuring kata-containers runtime |
| krew | Install and manage krew |
| kubeadm | Roles linked to kubeadm tasks |
| kube-apiserver | Configuring static pod kube-apiserver |
| kube-controller-manager | Configuring static pod kube-controller-manager |
@@ -208,11 +209,11 @@ You will then need to use [bind mounts](https://docs.docker.com/storage/bind-mou
to access the inventory and SSH key in the container, like this:
```ShellSession
git checkout v2.27.0
docker pull quay.io/kubespray/kubespray:v2.27.0
git checkout v2.26.0
docker pull quay.io/kubespray/kubespray:v2.26.0
docker run --rm -it --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \
--mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
quay.io/kubespray/kubespray:v2.27.0 bash
quay.io/kubespray/kubespray:v2.26.0 bash
# Inside the container you may now run the kubespray playbooks:
ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa cluster.yml
```

View File

@@ -6,8 +6,7 @@ To generate this Matrix run `./tests/scripts/md-table/main.py`
| OS / CNI | calico | cilium | custom_cni | flannel | kube-ovn | kube-router | macvlan |
|---| --- | --- | --- | --- | --- | --- | --- |
almalinux8 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
almalinux9 | :white_check_mark: | :x: | :x: | :x: | :white_check_mark: | :x: | :x: |
almalinux8 | :white_check_mark: | :x: | :x: | :x: | :white_check_mark: | :x: | :x: |
amazon | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
debian11 | :white_check_mark: | :x: | :white_check_mark: | :x: | :x: | :x: | :white_check_mark: |
debian12 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :x: | :x: | :x: | :x: |
@@ -25,8 +24,7 @@ ubuntu24 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
| OS / CNI | calico | cilium | custom_cni | flannel | kube-ovn | kube-router | macvlan |
|---| --- | --- | --- | --- | --- | --- | --- |
almalinux8 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
almalinux9 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
almalinux8 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
amazon | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
debian11 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
debian12 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
@@ -44,8 +42,7 @@ ubuntu24 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
| OS / CNI | calico | cilium | custom_cni | flannel | kube-ovn | kube-router | macvlan |
|---| --- | --- | --- | --- | --- | --- | --- |
almalinux8 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
almalinux9 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
almalinux8 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
amazon | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
debian11 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
debian12 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |

View File

@@ -0,0 +1,7 @@
# CentOS and derivatives
## CentOS 8
If you have containers that are using iptables in the host network namespace (`hostNetwork=true`),
you need to ensure they are using iptables-nft.
An example of how Kubernetes does the autodetection can be found [in this PR](https://github.com/kubernetes/kubernetes/pull/82966)

View File

@@ -1,11 +1,7 @@
# Red Hat Enterprise Linux (RHEL)
The documentation also applies to Red Hat derivatives, including Alma Linux, Rocky Linux, Oracle Linux, and CentOS.
## RHEL Support Subscription Registration
The content of this section does not apply to open-source derivatives.
In order to install packages via yum or dnf, RHEL 7/8 hosts are required to be registered for a valid Red Hat support subscription.
You can apply for a 1-year Development support subscription by creating a [Red Hat Developers](https://developers.redhat.com/) account. Be aware though that as the Red Hat Developers subscription is limited to only 1 year, it should not be used to register RHEL 7/8 hosts provisioned in Production environments.
@@ -29,12 +25,10 @@ rh_subscription_role: "Red Hat Enterprise Server"
rh_subscription_sla: "Self-Support"
```
If the RHEL 8/9 hosts are already registered to a valid Red Hat support subscription via an alternative configuration management approach prior to the deployment of Kubespray, the successful RHEL `subscription-manager` status check will simply result in the RHEL subscription registration tasks being skipped.
If the RHEL 7/8 hosts are already registered to a valid Red Hat support subscription via an alternative configuration management approach prior to the deployment of Kubespray, the successful RHEL `subscription-manager` status check will simply result in the RHEL subscription registration tasks being skipped.
## RHEL 8
If you have containers that are using iptables in the host network namespace (`hostNetwork=true`),
you need to ensure they are using iptables-nft.
An example of how Kubernetes does the autodetection can be found [in this PR](https://github.com/kubernetes/kubernetes/pull/82966)
The RHEL 8 kernel version is lower than what the Kubernetes 1.32 system validation requires; please refer to the [kernel requirements](../operations/kernel-requirements.md).

View File

@@ -1,35 +0,0 @@
# Kernel Requirements
For Kubernetes >=1.32.0, the recommended kernel LTS version from the 4.x series is 4.19. Any 5.x or 6.x versions are also supported. For cgroups v2 support, the minimum version is 4.15 and the recommended version is 5.8+. Refer to [this link](https://github.com/kubernetes/kubernetes/blob/v1.32.0/vendor/k8s.io/system-validators/validators/types_unix.go#L33). For more information, see [kernel version requirements](https://kubernetes.io/docs/reference/node/kernel-version-requirements).
If the OS kernel version is lower than required, add the following configuration to ignore the kubeadm preflight errors:
```yaml
kubeadm_ignore_preflight_errors:
- SystemVerification
```
The kernel version matrix:
| OS Version | Kernel Version | Kernel >= 4.19 |
|--- | --- | --- |
| RHEL 9 | 5.14 | :white_check_mark: |
| RHEL 8 | 4.18 | :x: |
| Alma Linux 9 | 5.14 | :white_check_mark: |
| Alma Linux 8 | 4.18 | :x: |
| Rocky Linux 9 | 5.14 | :white_check_mark: |
| Rocky Linux 8 | 4.18 | :x: |
| Oracle Linux 9 | 5.14 | :white_check_mark: |
| Oracle Linux 8 | 4.18 | :x: |
| Ubuntu 24.04 | 6.6 | :white_check_mark: |
| Ubuntu 22.04 | 5.15 | :white_check_mark: |
| Ubuntu 20.04 | 5.4 | :white_check_mark: |
| Debian 12 | 6.1 | :white_check_mark: |
| Debian 11 | 5.10 | :white_check_mark: |
| Fedora 40 | 6.8 | :white_check_mark: |
| Fedora 39 | 6.5 | :white_check_mark: |
| openSUSE Leap 15.5 | 5.14 | :white_check_mark: |
| Amazon Linux 2 | 4.14 | :x: |
| openEuler 24.03 | 6.6 | :white_check_mark: |
| openEuler 22.03 | 5.10 | :white_check_mark: |
| openEuler 20.03 | 4.19 | :white_check_mark: |

View File

@@ -2,7 +2,7 @@
namespace: kubernetes_sigs
description: Deploy a production ready Kubernetes cluster
name: kubespray
version: 2.28.0
version: 2.27.1
readme: README.md
authors:
- The Kubespray maintainers (https://kubernetes.slack.com/channels/kubespray)

View File

@@ -78,6 +78,8 @@
# gvisor_runsc_download_url: "{{ files_repo }}/storage.googleapis.com/gvisor/releases/release/{{ gvisor_version }}/{{ ansible_architecture }}/runsc"
# gvisor_containerd_shim_runsc_download_url: "{{ files_repo }}/storage.googleapis.com/gvisor/releases/release/{{ gvisor_version }}/{{ ansible_architecture }}/containerd-shim-runsc-v1"
# [Optional] Krew: only if you set krew_enabled: true
# krew_download_url: "{{ files_repo }}/github.com/kubernetes-sigs/krew/releases/download/{{ krew_version }}/krew-{{ host_os }}_{{ image_arch }}.tar.gz"
## CentOS/Redhat/AlmaLinux
### For EL8, baseos and appstream must be available,

View File

@@ -255,6 +255,8 @@ argocd_enabled: false
# argocd_admin_password: "password"
# The plugin manager for kubectl
krew_enabled: false
krew_root_dir: "/usr/local/krew"
# Kube VIP
kube_vip_enabled: false

View File

@@ -17,7 +17,7 @@ kube_token_dir: "{{ kube_config_dir }}/tokens"
kube_api_anonymous_auth: true
## Change this to use another Kubernetes version, e.g. a current beta release
kube_version: v1.32.0
kube_version: v1.31.9
# Where the binaries will be downloaded.
# Note: ensure that you've enough disk space (about 1G)
@@ -60,7 +60,7 @@ credentials_dir: "{{ inventory_dir }}/credentials"
# kube_webhook_token_auth_url: https://...
# kube_webhook_token_auth_url_skip_tls_verify: false
## For webhook authorization, authorization_modes must include Webhook or kube_apiserver_authorization_config_authorizers must configure a type: Webhook
## For webhook authorization, authorization_modes must include Webhook
# kube_webhook_authorization: false
# kube_webhook_authorization_url: https://...
# kube_webhook_authorization_url_skip_tls_verify: false

logo/OWNERS Normal file
View File

@@ -0,0 +1,4 @@
# See the OWNERS docs at https://go.k8s.io/owners
approvers:
- thomeced

View File

@@ -1,6 +1,6 @@
ansible==9.13.0
# Needed for community.crypto module
cryptography==44.0.0
cryptography==44.0.2
# Needed for jinja2 json_query templating
jmespath==1.0.1
# Needed for ansible.utils.ipaddr

View File

@@ -19,8 +19,8 @@ platforms:
memory: 1024
provider_options:
driver: kvm
- name: almalinux9
box: almalinux/9
- name: almalinux8
box: almalinux/8
cpus: 1
memory: 512
provider_options:

View File

@@ -62,8 +62,6 @@ containerd_registries_mirrors:
- host: https://registry-1.docker.io
capabilities: ["pull", "resolve"]
skip_verify: false
# ca: ["/etc/certs/mirror.pem"]
# client: [["/etc/certs/client.pem", ""],["/etc/certs/client.cert", "/etc/certs/client.key"]]
containerd_max_container_log_line_size: 16384

View File

@@ -25,8 +25,8 @@ platforms:
- k8s_cluster
provider_options:
driver: kvm
- name: almalinux9
box: almalinux/9
- name: almalinux8
box: almalinux/8
cpus: 1
memory: 1024
groups:

View File

@@ -4,10 +4,4 @@ server = "{{ item.server | default("https://" + item.prefix) }}"
capabilities = ["{{ ([ mirror.capabilities ] | flatten ) | join('","') }}"]
skip_verify = {{ mirror.skip_verify | default('false') | string | lower }}
override_path = {{ mirror.override_path | default('false') | string | lower }}
{% if mirror.ca is defined %}
ca = ["{{ ([ mirror.ca ] | flatten ) | join('","') }}"]
{% endif %}
{% if mirror.client is defined %}
client = [{% for pair in mirror.client %}["{{ pair[0] }}", "{{ pair[1] }}"]{% if not loop.last %},{% endif %}{% endfor %}]
{% endif %}
{% endfor %}
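With the `ca`/`client` blocks dropped, a mirror entry such as the Docker Hub default above renders only the base fields; an illustrative `hosts.toml` fragment (server value taken from the defaults shown earlier, other values illustrative):

```toml
# Illustrative output of the template above for one mirror entry
server = "https://registry-1.docker.io"
capabilities = ["pull","resolve"]
skip_verify = false
override_path = false
```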


@@ -5,8 +5,8 @@ driver:
provider:
name: libvirt
platforms:
- name: almalinux9
box: almalinux/9
- name: almalinux8
box: almalinux/8
cpus: 1
memory: 1024
nested: true


@@ -15,8 +15,8 @@ platforms:
- k8s_cluster
provider_options:
driver: kvm
- name: almalinux9
box: almalinux/9
- name: almalinux8
box: almalinux/8
cpus: 2
memory: 1024
groups:


@@ -14,8 +14,8 @@ platforms:
- kube_control_plane
provider_options:
driver: kvm
- name: almalinux9
box: almalinux/9
- name: almalinux8
box: almalinux/8
cpus: 1
memory: 1024
nested: true


@@ -0,0 +1,6 @@
# See the OWNERS docs at https://go.k8s.io/owners
approvers:
- pasqualet
reviewers:
- pasqualet


@@ -14,8 +14,8 @@ platforms:
- kube_control_plane
provider_options:
driver: kvm
- name: almalinux9
box: almalinux/9
- name: almalinux8
box: almalinux/8
cpus: 1
memory: 1024
nested: true


@@ -153,3 +153,25 @@
owner: "{{ etcd_owner }}"
mode: "{{ etcd_cert_dir_mode }}"
recurse: true
# This is a hack around the fact kubeadm expects the same certs path on all kube_control_plane
# TODO: fix certs generation to have the same file everywhere
# OR work with kubeadm on node-specific config
- name: Gen_certs | Pretend all control plane have all certs (with symlinks)
file:
state: link
src: "{{ etcd_cert_dir }}/node-{{ inventory_hostname }}{{ item[0] }}.pem"
dest: "{{ etcd_cert_dir }}/node-{{ item[1] }}{{ item[0] }}.pem"
mode: "0640"
loop: "{{ suffixes | product(groups['kube_control_plane']) }}"
vars:
suffixes:
- ''
- '-key'
when:
- ('kube_control_plane' in group_names)
- item[1] != inventory_hostname
register: symlink_created
failed_when:
- symlink_created is failed
- ('refusing to convert from file to symlink' not in symlink_created.msg)
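The symlink workaround above can be sketched outside Ansible; the hostnames `node1`-`node3` and the temp cert directory here are illustrative, not Kubespray defaults:

```shell
# Sketch of the layout the task creates: on node1, cert paths for the other
# control plane nodes become symlinks to node1's own certs, so kubeadm finds
# identical file paths on every node.
cert_dir=$(mktemp -d)
touch "${cert_dir}/node-node1.pem" "${cert_dir}/node-node1-key.pem"
for other in node2 node3; do
  for suffix in "" "-key"; do
    # ln -sf is idempotent, mirroring the task's failed_when escape hatch
    ln -sf "node-node1${suffix}.pem" "${cert_dir}/node-${other}${suffix}.pem"
  done
done
ls -l "${cert_dir}"
```

Re-running the loop simply overwrites the same links, which is why the task only tolerates the "refusing to convert from file to symlink" failure and nothing else.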


@@ -9,7 +9,7 @@
- name: Generate etcd certs
include_tasks: "gen_certs_script.yml"
when:
- cert_management == "script"
- cert_management | d('script') == "script"
tags:
- etcd-secrets


@@ -13,10 +13,10 @@ coredns_manifests:
- coredns-sa.yml.j2
- coredns-svc.yml.j2
- "{{ dns_autoscaler_manifests if enable_dns_autoscaler else [] }}"
- "{{ coredns-poddisruptionbudget.yml.j2 if coredns_pod_disruption_budget else [] }}"
- "{{ 'coredns-poddisruptionbudget.yml.j2' if coredns_pod_disruption_budget else [] }}"
nodelocaldns_manifests:
- nodelocaldns-config.yml.j2
- nodelocaldns-daemonset.yml.j2
- nodelocaldns-sa.yml.j2
- "{{ nodelocaldns-second-daemonset.yml.j2 if enable_nodelocaldns_secondary else [] }}"
- "{{ 'nodelocaldns-second-daemonset.yml.j2' if enable_nodelocaldns_secondary else [] }}"


@@ -0,0 +1,6 @@
# See the OWNERS docs at https://go.k8s.io/owners
approvers:
reviewers:
- alijahnas
- luckySB


@@ -0,0 +1,6 @@
# See the OWNERS docs at https://go.k8s.io/owners
approvers:
reviewers:
- alijahnas
- luckySB


@@ -0,0 +1,6 @@
# See the OWNERS docs at https://go.k8s.io/owners
approvers:
- kubespray-approvers
reviewers:
- kubespray-reviewers


@@ -6,7 +6,6 @@ ingress_nginx_service_nodeport_http: ""
ingress_nginx_service_nodeport_https: ""
ingress_nginx_service_annotations: {}
ingress_publish_status_address: ""
ingress_nginx_publish_service: "{{ ingress_nginx_namespace }}/ingress-nginx"
ingress_nginx_nodeselector:
kubernetes.io/os: "linux"
ingress_nginx_tolerations: []


@@ -79,12 +79,11 @@ spec:
{% if ingress_nginx_without_class %}
- --watch-ingress-without-class=true
{% endif %}
{% if ingress_nginx_host_network %}
- --report-node-internal-ip-address
{% endif %}
{% if ingress_publish_status_address != "" %}
- --publish-status-address={{ ingress_publish_status_address }}
{% elif ingress_nginx_host_network %}
- --report-node-internal-ip-address
{% elif ingress_nginx_publish_service != "" %}
- --publish-service={{ ingress_nginx_publish_service }}
{% endif %}
{% for extra_arg in ingress_nginx_extra_args %}
- {{ extra_arg }}
@@ -126,26 +125,6 @@ spec:
{% if not ingress_nginx_host_network %}
hostPort: {{ ingress_nginx_metrics_port }}
{% endif %}
{% if ingress_nginx_configmap_tcp_services %}
{% for port in ingress_nginx_configmap_tcp_services.keys() %}
- name: tcp-port-{{ port }}
containerPort: "{{ port | int }}"
protocol: TCP
{% if not ingress_nginx_host_network %}
hostPort: "{{ port | int }}"
{% endif %}
{% endfor %}
{% endif %}
{% if ingress_nginx_configmap_udp_services %}
{% for port in ingress_nginx_configmap_udp_services.keys() %}
- name: udp-port-{{ port }}
containerPort: "{{ port | int }}"
protocol: UDP
{% if not ingress_nginx_host_network %}
hostPort: "{{ port | int }}"
{% endif %}
{% endfor %}
{% endif %}
{% if ingress_nginx_webhook_enabled %}
- name: webhook
containerPort: 8443


@@ -27,22 +27,6 @@ spec:
protocol: TCP
{% if (ingress_nginx_service_type == 'NodePort' or ingress_nginx_service_type == 'LoadBalancer') and ingress_nginx_service_nodeport_https %}
nodePort: {{ingress_nginx_service_nodeport_https | int}}
{% endif %}
{% if ingress_nginx_configmap_tcp_services %}
{% for port in ingress_nginx_configmap_tcp_services.keys() %}
- name: tcp-port-{{ port }}
port: "{{ port | int }}"
targetPort: "{{ port | int }}"
protocol: TCP
{% endfor %}
{% endif %}
{% if ingress_nginx_configmap_udp_services %}
{% for port in ingress_nginx_configmap_udp_services.keys() %}
- name: udp-port-{{ port }}
port: "{{ port | int }}"
targetPort: "{{ port | int }}"
protocol: UDP
{% endfor %}
{% endif %}
selector:
app.kubernetes.io/name: ingress-nginx


@@ -0,0 +1,5 @@
---
krew_enabled: false
krew_root_dir: "/usr/local/krew"
krew_default_index_uri: https://github.com/kubernetes-sigs/krew-index.git
krew_no_upgrade_check: 0


@@ -0,0 +1,38 @@
---
- name: Krew | Download krew
include_tasks: "../../../download/tasks/download_file.yml"
vars:
download: "{{ download_defaults | combine(downloads.krew) }}"
- name: Krew | krew env
template:
src: krew.j2
dest: /etc/bash_completion.d/krew
mode: "0644"
- name: Krew | Copy krew manifest
template:
src: krew.yml.j2
dest: "{{ local_release_dir }}/krew.yml"
mode: "0644"
- name: Krew | Install krew # noqa command-instead-of-shell
shell: "{{ local_release_dir }}/krew-{{ host_os }}_{{ image_arch }} install --archive={{ local_release_dir }}/krew-{{ host_os }}_{{ image_arch }}.tar.gz --manifest={{ local_release_dir }}/krew.yml"
environment:
KREW_ROOT: "{{ krew_root_dir }}"
KREW_DEFAULT_INDEX_URI: "{{ krew_default_index_uri | default('') }}"
- name: Krew | Get krew completion
command: "{{ local_release_dir }}/krew-{{ host_os }}_{{ image_arch }} completion bash"
changed_when: false
register: krew_completion
check_mode: false
ignore_errors: true # noqa ignore-errors
- name: Krew | Install krew completion
copy:
dest: /etc/bash_completion.d/krew.sh
content: "{{ krew_completion.stdout }}"
mode: "0755"
become: true
when: krew_completion.rc == 0


@@ -0,0 +1,10 @@
---
- name: Krew | install krew on kube_control_plane
import_tasks: krew.yml
- name: Krew | install krew on localhost
import_tasks: krew.yml
delegate_to: localhost
connection: local
run_once: true
when: kubectl_localhost


@@ -0,0 +1,7 @@
# krew bash env(kubespray)
export KREW_ROOT="{{ krew_root_dir }}"
{% if krew_default_index_uri is defined %}
export KREW_DEFAULT_INDEX_URI='{{ krew_default_index_uri }}'
{% endif %}
export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
export KREW_NO_UPGRADE_CHECK={{ krew_no_upgrade_check }}


@@ -0,0 +1,100 @@
apiVersion: krew.googlecontainertools.github.com/v1alpha2
kind: Plugin
metadata:
name: krew
spec:
version: "{{ krew_version }}"
homepage: https://krew.sigs.k8s.io/
shortDescription: Package manager for kubectl plugins.
caveats: |
krew is now installed! To start using kubectl plugins, you need to add
krew's installation directory to your PATH:
* macOS/Linux:
- Add the following to your ~/.bashrc or ~/.zshrc:
export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
- Restart your shell.
* Windows: Add %USERPROFILE%\.krew\bin to your PATH environment variable
To list krew commands and to get help, run:
$ kubectl krew
For a full list of available plugins, run:
$ kubectl krew search
You can find documentation at
https://krew.sigs.k8s.io/docs/user-guide/quickstart/.
platforms:
- uri: {{ krew_download_url }}
sha256: {{ krew_archive_checksum }}
bin: krew
files:
- from: ./krew-darwin_amd64
to: krew
- from: ./LICENSE
to: .
selector:
matchLabels:
os: darwin
arch: amd64
- uri: {{ krew_download_url }}
sha256: {{ krew_archive_checksum }}
bin: krew
files:
- from: ./krew-darwin_arm64
to: krew
- from: ./LICENSE
to: .
selector:
matchLabels:
os: darwin
arch: arm64
- uri: {{ krew_download_url }}
sha256: {{ krew_archive_checksum }}
bin: krew
files:
- from: ./krew-linux_amd64
to: krew
- from: ./LICENSE
to: .
selector:
matchLabels:
os: linux
arch: amd64
- uri: {{ krew_download_url }}
sha256: {{ krew_archive_checksum }}
bin: krew
files:
- from: ./krew-linux_arm
to: krew
- from: ./LICENSE
to: .
selector:
matchLabels:
os: linux
arch: arm
- uri: {{ krew_download_url }}
sha256: {{ krew_archive_checksum }}
bin: krew
files:
- from: ./krew-linux_arm64
to: krew
- from: ./LICENSE
to: .
selector:
matchLabels:
os: linux
arch: arm64
- uri: {{ krew_download_url }}
sha256: {{ krew_archive_checksum }}
bin: krew.exe
files:
- from: ./krew-windows_amd64.exe
to: krew.exe
- from: ./LICENSE
to: .
selector:
matchLabels:
os: windows
arch: amd64


@@ -10,6 +10,12 @@ dependencies:
tags:
- helm
- role: kubernetes-apps/krew
when:
- krew_enabled
tags:
- krew
- role: kubernetes-apps/registry
when:
- registry_enabled


@@ -0,0 +1,5 @@
# See the OWNERS docs at https://go.k8s.io/owners
approvers:
reviewers:
- oomichi


@@ -0,0 +1,6 @@
# See the OWNERS docs at https://go.k8s.io/owners
approvers:
- bozzo
reviewers:
- bozzo


@@ -0,0 +1,5 @@
# See the OWNERS docs at https://go.k8s.io/owners
approvers:
- alijahnas
reviewers:


@@ -101,6 +101,7 @@ rules:
verbs:
# read its own config
- get
- list
# create a default if none exists
- create
# update status


@@ -248,6 +248,9 @@ kube_apiserver_tracing_sampling_rate_per_million: 100
# Enable kubeadm file discovery if anonymous access has been removed
kubeadm_use_file_discovery: "{{ remove_anonymous_access }}"
# imagePullSerial specifies if image pulling performed by kubeadm must be done serially or in parallel. Default: true
kubeadm_image_pull_serial: true
# Supported asymmetric encryption algorithm types for the cluster's keys and certificates.
# can be one of RSA-2048(default), RSA-3072, RSA-4096, ECDSA-P256
# ref: https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta4/#kubeadm-k8s-io-v1beta4-ClusterConfiguration


@@ -0,0 +1,10 @@
---
- name: Kubeadm | Check api is up
uri:
url: "https://{{ ip | default(fallback_ip) }}:{{ kube_apiserver_port }}/healthz"
validate_certs: false
when: ('kube_control_plane' in group_names)
register: _result
retries: 60
delay: 5
until: _result.status == 200


@@ -24,11 +24,11 @@
- name: Parse certificate key if not set
set_fact:
kubeadm_certificate_key: "{{ hostvars[first_kube_control_plane]['kubeadm_upload_cert'].stdout_lines[-1] | trim }}"
kubeadm_certificate_key: "{{ hostvars[groups['kube_control_plane'][0]]['kubeadm_upload_cert'].stdout_lines[-1] | trim }}"
run_once: true
when:
- hostvars[first_kube_control_plane]['kubeadm_upload_cert'] is defined
- hostvars[first_kube_control_plane]['kubeadm_upload_cert'] is not skipped
- hostvars[groups['kube_control_plane'][0]]['kubeadm_upload_cert'] is defined
- hostvars[groups['kube_control_plane'][0]]['kubeadm_upload_cert'] is not skipped
- name: Create kubeadm ControlPlane config
template:


@@ -228,7 +228,7 @@
- name: Kubeadm | Join other control plane nodes
include_tasks: kubeadm-secondary.yml
- name: Kubeadm | upgrade kubernetes cluster
- name: Kubeadm | upgrade kubernetes cluster to {{ kube_version }}
include_tasks: kubeadm-upgrade.yml
when:
- upgrade_cluster_setup


@@ -1,56 +1,81 @@
---
- name: Kubeadm | Check api is up
uri:
url: "https://{{ ip | default(fallback_ip) }}:{{ kube_apiserver_port }}/healthz"
validate_certs: false
when: ('kube_control_plane' in group_names)
register: _result
retries: 60
delay: 5
until: _result.status == 200
- name: Ensure kube-apiserver is up before upgrade
import_tasks: check-api.yml
# kubeadm-config.v1beta4 with UpgradeConfiguration requires some values that were previously allowed as args to be specified in the config file
- name: Kubeadm | Upgrade first control plane node
command: >-
timeout -k 600s 600s
{{ bin_dir }}/kubeadm
upgrade apply -y {{ kube_version }}
{{ bin_dir }}/kubeadm upgrade apply -y {{ kube_version }}
{%- if kubeadm_config_api_version == 'v1beta3' %}
--certificate-renewal={{ kubeadm_upgrade_auto_cert_renewal }}
--ignore-preflight-errors={{ kubeadm_ignore_preflight_errors | join(',') }}
--allow-experimental-upgrades
--etcd-upgrade={{ (etcd_deployment_type == "kubeadm") | lower }}
{% if kubeadm_patches | length > 0 %}--patches={{ kubeadm_patches_dir }}{% endif %}
--force
{%- else %}
--config={{ kube_config_dir }}/kubeadm-config.yaml
{%- endif -%}
register: kubeadm_upgrade
# Retry is because upload config sometimes fails
retries: 3
until: kubeadm_upgrade.rc == 0
when: inventory_hostname == first_kube_control_plane
failed_when: kubeadm_upgrade.rc != 0 and "field is immutable" not in kubeadm_upgrade.stderr
environment:
PATH: "{{ bin_dir }}:{{ ansible_env.PATH }}"
notify: Control plane | restart kubelet
- name: Kubeadm | Upgrade other control plane nodes
command: >-
timeout -k 600s 600s
{{ bin_dir }}/kubeadm
upgrade apply -y {{ kube_version }}
{{ bin_dir }}/kubeadm upgrade node
{%- if kubeadm_config_api_version == 'v1beta3' %}
--certificate-renewal={{ kubeadm_upgrade_auto_cert_renewal }}
--ignore-preflight-errors={{ kubeadm_ignore_preflight_errors | join(',') }}
--allow-experimental-upgrades
--etcd-upgrade={{ (etcd_deployment_type == "kubeadm") | lower }}
{% if kubeadm_patches | length > 0 %}--patches={{ kubeadm_patches_dir }}{% endif %}
--force
{%- else %}
--config={{ kube_config_dir }}/kubeadm-config.yaml
{%- endif -%}
register: kubeadm_upgrade
# Retry is because upload config sometimes fails
retries: 3
until: kubeadm_upgrade.rc == 0
when: inventory_hostname != first_kube_control_plane
failed_when: kubeadm_upgrade.rc != 0 and "field is immutable" not in kubeadm_upgrade.stderr
environment:
PATH: "{{ bin_dir }}:{{ ansible_env.PATH }}"
# kubeadm upgrade no longer reconciles ClusterConfiguration and KubeProxyConfiguration changes, this must be done separately after upgrade to ensure the latest config is applied
- name: Update kubeadm and kubelet configmaps after upgrade
command: "{{ bin_dir }}/kubeadm init phase upload-config all --config {{ kube_config_dir }}/kubeadm-config.yaml"
register: kubeadm_upload_config
# Retry is because upload config sometimes fails
retries: 3
until: kubeadm_upload_config.rc == 0
when:
- inventory_hostname == first_kube_control_plane
- name: Update kube-proxy configmap after upgrade
command: "{{ bin_dir }}/kubeadm init phase addon kube-proxy --config {{ kube_config_dir }}/kubeadm-config.yaml"
register: kube_proxy_upload_config
# Retry is because upload config sometimes fails
retries: 3
until: kube_proxy_upload_config.rc == 0
when:
- inventory_hostname == first_kube_control_plane
- ('addon/kube-proxy' not in kubeadm_init_phases_skip)
- name: Rewrite kubeadm managed etcd static pod manifests with updated configmap
command: "{{ bin_dir }}/kubeadm init phase etcd local --config {{ kube_config_dir }}/kubeadm-config.yaml"
when:
- etcd_deployment_type == "kubeadm"
notify: Control plane | restart kubelet
- name: Rewrite kubernetes control plane static pod manifests with updated configmap
command: "{{ bin_dir }}/kubeadm init phase control-plane all --config {{ kube_config_dir }}/kubeadm-config.yaml"
notify: Control plane | restart kubelet
- name: Flush kubelet handlers
meta: flush_handlers
- name: Ensure kube-apiserver is up after upgrade and control plane configuration updates
import_tasks: check-api.yml
- name: Kubeadm | Remove binding to anonymous user
command: "{{ kubectl }} -n kube-public delete rolebinding kubeadm:bootstrap-signer-clusterinfo --ignore-not-found"
when: remove_anonymous_access
@@ -60,8 +85,8 @@
path: "{{ item }}"
state: absent
with_items:
- /root/.kube/cache
- /root/.kube/http-cache
- /root/.kube/cache
- /root/.kube/http-cache
# FIXME: https://github.com/kubernetes/kubeadm/issues/1318
- name: Kubeadm | scale down coredns replicas to 0 if not using coredns dns_mode
@@ -75,6 +100,6 @@
until: scale_down_coredns is succeeded
run_once: true
when:
- kubeadm_scale_down_coredns_enabled
- dns_mode not in ['coredns', 'coredns_dual']
- kubeadm_scale_down_coredns_enabled
- dns_mode not in ['coredns', 'coredns_dual']
changed_when: false
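The v1beta3/v1beta4 branching in the upgrade tasks above boils down to choosing between CLI flags and a config file; a minimal shell sketch (the version string and config path are illustrative, not taken from Kubespray variables):

```shell
# Sketch of the command the Jinja branch builds for the first control plane node
upgrade_cmd() {
  api_version="$1"
  cmd="kubeadm upgrade apply -y v1.31.0"
  if [ "$api_version" = "v1beta3" ]; then
    # v1beta3: upgrade options are passed as CLI flags
    cmd="$cmd --certificate-renewal=true --force"
  else
    # v1beta4: the options live in UpgradeConfiguration inside kubeadm-config.yaml
    cmd="$cmd --config=/etc/kubernetes/kubeadm-config.yaml"
  fi
  echo "$cmd"
}
upgrade_cmd v1beta4
```

Because `kubeadm upgrade` no longer reconciles `ClusterConfiguration` changes itself, the playbook then re-runs the `upload-config`, `addon kube-proxy`, `etcd local`, and `control-plane all` init phases against the same config file.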


@@ -18,18 +18,6 @@
mode: "0640"
when: kube_webhook_authorization | default(false)
- name: Create structured AuthorizationConfiguration file
copy:
content: "{{ authz_config | to_nice_yaml(indent=2, sort_keys=false) }}"
dest: "{{ kube_config_dir }}/apiserver-authorization-config.yaml"
mode: "0640"
vars:
authz_config:
apiVersion: apiserver.config.k8s.io/{{ 'v1alpha1' if kube_version is version('v1.30.0', '<') else 'v1beta1' if kube_version is version('v1.32.0', '<') else 'v1' }}
kind: AuthorizationConfiguration
authorizers: "{{ kube_apiserver_authorization_config_authorizers }}"
when: kube_apiserver_use_authorization_config_file
- name: Create kube-scheduler config
template:
src: kubescheduler-config.yaml.j2
@@ -55,7 +43,7 @@
- name: Install kubectl bash completion
shell: "{{ bin_dir }}/kubectl completion bash >/etc/bash_completion.d/kubectl.sh"
when: ansible_os_family in ["Debian","RedHat", "Suse"]
when: ansible_os_family in ["Debian","RedHat"]
tags:
- kubectl
ignore_errors: true # noqa ignore-errors
@@ -66,7 +54,7 @@
owner: root
group: root
mode: "0755"
when: ansible_os_family in ["Debian","RedHat", "Suse"]
when: ansible_os_family in ["Debian","RedHat"]
tags:
- kubectl
- upgrade
@@ -85,7 +73,7 @@
state: present
marker: "# Ansible entries {mark}"
when:
- ansible_os_family in ["Debian","RedHat", "Suse"]
- ansible_os_family in ["Debian","RedHat"]
- kubectl_alias is defined and kubectl_alias != ""
tags:
- kubectl


@@ -126,11 +126,7 @@ apiServer:
{% if kube_api_anonymous_auth is defined %}
anonymous-auth: "{{ kube_api_anonymous_auth }}"
{% endif %}
{% if kube_apiserver_use_authorization_config_file %}
authorization-config: "{{ kube_config_dir }}/apiserver-authorization-config.yaml"
{% else %}
authorization-mode: {{ authorization_modes | join(',') }}
{% endif %}
bind-address: {{ kube_apiserver_bind_address }}
{% if kube_apiserver_enable_admission_plugins | length > 0 %}
enable-admission-plugins: {{ kube_apiserver_enable_admission_plugins | join(',') }}
@@ -180,7 +176,7 @@ apiServer:
{% if kube_webhook_token_auth | default(false) %}
authentication-token-webhook-config-file: {{ kube_config_dir }}/webhook-token-auth-config.yaml
{% endif %}
{% if kube_webhook_authorization and not kube_apiserver_use_authorization_config_file %}
{% if kube_webhook_authorization | default(false) %}
authorization-webhook-config-file: {{ kube_config_dir }}/webhook-authorization-config.yaml
{% endif %}
{% if kube_encrypt_secret_data %}
@@ -247,11 +243,6 @@ apiServer:
hostPath: {{ kube_config_dir }}/webhook-authorization-config.yaml
mountPath: {{ kube_config_dir }}/webhook-authorization-config.yaml
{% endif %}
{% if kube_apiserver_use_authorization_config_file %}
- name: authorization-config
hostPath: {{ kube_config_dir }}/apiserver-authorization-config.yaml
mountPath: {{ kube_config_dir }}/apiserver-authorization-config.yaml
{% endif %}
{% if kubernetes_audit or kubernetes_audit_webhook %}
- name: {{ audit_policy_name }}
hostPath: {{ audit_policy_hostpath }}


@@ -29,6 +29,8 @@ nodeRegistration:
- name: cloud-provider
value: external
{% endif %}
imagePullPolicy: {{ k8s_image_pull_policy }}
imagePullSerial: {{ kubeadm_image_pull_serial | lower }}
{% if kubeadm_patches | length > 0 %}
patches:
directory: {{ kubeadm_patches_dir }}
@@ -142,13 +144,8 @@ apiServer:
- name: anonymous-auth
value: "{{ kube_api_anonymous_auth }}"
{% endif %}
{% if kube_apiserver_use_authorization_config_file %}
- name: authorization-config
value: "{{ kube_config_dir }}/apiserver-authorization-config.yaml"
{% else %}
- name: authorization-mode
value: "{{ authorization_modes | join(',') }}"
{% endif %}
- name: bind-address
value: "{{ kube_apiserver_bind_address }}"
{% if kube_apiserver_enable_admission_plugins | length > 0 %}
@@ -217,7 +214,7 @@ apiServer:
- name: authentication-token-webhook-config-file
value: "{{ kube_config_dir }}/webhook-token-auth-config.yaml"
{% endif %}
{% if kube_webhook_authorization and not kube_apiserver_use_authorization_config_file %}
{% if kube_webhook_authorization | default(false) %}
- name: authorization-webhook-config-file
value: "{{ kube_config_dir }}/webhook-authorization-config.yaml"
{% endif %}
@@ -304,11 +301,6 @@ apiServer:
hostPath: {{ kube_config_dir }}/webhook-authorization-config.yaml
mountPath: {{ kube_config_dir }}/webhook-authorization-config.yaml
{% endif %}
{% if kube_apiserver_use_authorization_config_file %}
- name: authorization-config
hostPath: {{ kube_config_dir }}/apiserver-authorization-config.yaml
mountPath: {{ kube_config_dir }}/apiserver-authorization-config.yaml
{% endif %}
{% if kubernetes_audit or kubernetes_audit_webhook %}
- name: {{ audit_policy_name }}
hostPath: {{ audit_policy_hostpath }}
@@ -478,6 +470,42 @@ scheduler:
{% endfor %}
{% endif %}
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: UpgradeConfiguration
apply:
kubernetesVersion: {{ kube_version }}
allowExperimentalUpgrades: true
certificateRenewal: {{ kubeadm_upgrade_auto_cert_renewal | lower }}
etcdUpgrade: {{ (etcd_deployment_type == "kubeadm") | lower }}
forceUpgrade: true
{% if kubeadm_ignore_preflight_errors | length > 0 %}
ignorePreflightErrors:
{% for ignore_error in kubeadm_ignore_preflight_errors %}
- "{{ ignore_error }}"
{% endfor %}
{% endif %}
{% if kubeadm_patches | length > 0 %}
patches:
directory: {{ kubeadm_patches_dir }}
{% endif %}
imagePullPolicy: {{ k8s_image_pull_policy }}
imagePullSerial: {{ kubeadm_image_pull_serial | lower }}
node:
certificateRenewal: {{ kubeadm_upgrade_auto_cert_renewal | lower }}
etcdUpgrade: {{ (etcd_deployment_type == "kubeadm") | lower }}
{% if kubeadm_ignore_preflight_errors | length > 0 %}
ignorePreflightErrors:
{% for ignore_error in kubeadm_ignore_preflight_errors %}
- "{{ ignore_error }}"
{% endfor %}
{% endif %}
{% if kubeadm_patches | length > 0 %}
patches:
directory: {{ kubeadm_patches_dir }}
{% endif %}
imagePullPolicy: {{ k8s_image_pull_policy }}
imagePullSerial: {{ kubeadm_image_pull_serial | lower }}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
bindAddress: {{ kube_proxy_bind_address }}


@@ -1,8 +1,7 @@
---
# Set to true to allow pre-checks to fail and continue deployment
ignore_assert_errors: false
# Set to false to disable the backup parameter, set to true to accumulate backups of config files.
leave_etc_backup_files: true
nameservers: []
cloud_resolver: []
disable_host_nameservers: false


@@ -22,11 +22,12 @@
- name: Stop if etcd group is empty in external etcd mode
assert:
that: groups.get('etcd') or etcd_deployment_type == 'kubeadm'
that: groups.get('etcd')
fail_msg: "Group 'etcd' cannot be empty in external etcd mode"
run_once: true
when:
- not ignore_assert_errors
- etcd_deployment_type != "kubeadm"
- name: Stop if non systemd OS type
assert:
@@ -39,12 +40,21 @@
msg: "{{ ansible_distribution }} is not a known OS"
when: not ignore_assert_errors
- name: Warn if `kube_network_plugin` is `none`
- name: Stop if unknown network plugin
assert:
that: kube_network_plugin in ['calico', 'flannel', 'weave', 'cloud', 'cilium', 'cni', 'kube-ovn', 'kube-router', 'macvlan', 'custom_cni', 'none']
msg: "{{ kube_network_plugin }} is not supported"
when:
- kube_network_plugin is defined
- not ignore_assert_errors
- name: Warn the user if they are still using `etcd_kubeadm_enabled`
debug:
msg: |
msg: >
"WARNING! => `kube_network_plugin` is set to `none`. The network configuration will be skipped.
The cluster won't be ready to use, we recommend to select one of the available plugins"
changed_when: true
when:
- kube_network_plugin is defined
- kube_network_plugin == 'none'
- name: Stop if unsupported version of Kubernetes
@@ -53,23 +63,26 @@
msg: "The current release of Kubespray only support newer version of Kubernetes than {{ kube_version_min_required }} - You are trying to apply {{ kube_version }}"
when: not ignore_assert_errors
# simplify this items-list when https://github.com/ansible/ansible/issues/15753 is resolved
- name: "Stop if known booleans are set as strings (Use JSON format on CLI: -e \"{'key': true }\")"
assert:
that:
- download_run_once | type_debug == 'bool'
- deploy_netchecker | type_debug == 'bool'
- download_always_pull | type_debug == 'bool'
- helm_enabled | type_debug == 'bool'
- openstack_lbaas_enabled | type_debug == 'bool'
that: item.value | type_debug == 'bool'
msg: "{{ item.value }} isn't a bool"
run_once: true
with_items:
- { name: download_run_once, value: "{{ download_run_once }}" }
- { name: deploy_netchecker, value: "{{ deploy_netchecker }}" }
- { name: download_always_pull, value: "{{ download_always_pull }}" }
- { name: helm_enabled, value: "{{ helm_enabled }}" }
- { name: openstack_lbaas_enabled, value: "{{ openstack_lbaas_enabled }}" }
when: not ignore_assert_errors
- name: Stop if even number of etcd hosts
assert:
that: groups.get('etcd', groups.kube_control_plane) | length is not divisibleby 2
run_once: true
that: groups.etcd | length is not divisibleby 2
when:
- not ignore_assert_errors
- inventory_hostname in groups.get('etcd',[])
- name: Stop if memory is too small for control plane nodes
assert:
@@ -104,7 +117,8 @@
when:
- not ignore_assert_errors
- ('k8s_cluster' in group_names)
- kube_network_plugin not in ['calico', 'none']
- kube_network_node_prefix is defined
- kube_network_plugin != 'calico'
- name: Stop if ip var does not match local ips
assert:
@@ -208,37 +222,82 @@
when: kube_network_plugin != 'calico'
run_once: true
- name: Stop if unsupported options selected
- name: Stop if unknown dns mode
assert:
that:
- kube_network_plugin in ['calico', 'flannel', 'weave', 'cloud', 'cilium', 'cni', 'kube-ovn', 'kube-router', 'macvlan', 'custom_cni', 'none']
- dns_mode in ['coredns', 'coredns_dual', 'manual', 'none']
- kube_proxy_mode in ['iptables', 'ipvs']
- cert_management in ['script', 'none']
- resolvconf_mode in ['docker_dns', 'host_resolvconf', 'none']
- etcd_deployment_type in ['host', 'docker', 'kubeadm']
- etcd_deployment_type in ['host', 'kubeadm'] or container_manager == 'docker'
- container_manager in ['docker', 'crio', 'containerd']
msg: The selected choice is not supported
that: dns_mode in ['coredns', 'coredns_dual', 'manual', 'none']
msg: "dns_mode can only be 'coredns', 'coredns_dual', 'manual' or 'none'"
when: dns_mode is defined
run_once: true
- name: Stop if /etc/resolv.conf has no configured nameservers
assert:
that: configured_nameservers | length>0
fail_msg: "nameserver should not be empty in /etc/resolv.conf"
fail_msg: "nameserver should not empty in /etc/resolv.conf"
when:
- upstream_dns_servers | length == 0
- not disable_host_nameservers
- dns_mode in ['coredns', 'coredns_dual']
# TODO: Clean this task up after 2.28 is released
- name: Stop if etcd_kubeadm_enabled is defined
run_once: true
- name: Stop if unknown kube proxy mode
assert:
that: etcd_kubeadm_enabled is not defined
msg: |
`etcd_kubeadm_enabled` is removed.
You can set `etcd_deployment_type` to `kubeadm` instead of setting `etcd_kubeadm_enabled` to `true`."
that: kube_proxy_mode in ['iptables', 'ipvs']
msg: "kube_proxy_mode can only be 'iptables' or 'ipvs'"
when: kube_proxy_mode is defined
run_once: true
- name: Stop if unknown cert_management
assert:
that: cert_management | d('script') in ['script', 'none']
msg: "cert_management can only be 'script' or 'none'"
run_once: true
- name: Stop if unknown resolvconf_mode
assert:
that: resolvconf_mode in ['docker_dns', 'host_resolvconf', 'none']
msg: "resolvconf_mode can only be 'docker_dns', 'host_resolvconf' or 'none'"
when: resolvconf_mode is defined
run_once: true
- name: Stop if etcd deployment type is not host, docker or kubeadm
assert:
that: etcd_deployment_type in ['host', 'docker', 'kubeadm']
msg: "The etcd deployment type, 'etcd_deployment_type', must be host, docker or kubeadm"
when:
- inventory_hostname in groups.get('etcd',[])
- name: Stop if container manager is not docker, crio or containerd
assert:
that: container_manager in ['docker', 'crio', 'containerd']
msg: "The container manager, 'container_manager', must be docker, crio or containerd"
run_once: true
- name: Stop if etcd deployment type is not host or kubeadm when container_manager != docker
assert:
that: etcd_deployment_type in ['host', 'kubeadm']
msg: "The etcd deployment type, 'etcd_deployment_type', must be host or kubeadm when container_manager is not docker"
when:
- inventory_hostname in groups.get('etcd',[])
- container_manager != 'docker'
# TODO: Clean this task up when we drop backward compatibility support for `etcd_kubeadm_enabled`
- name: Stop if etcd deployment type is not host or kubeadm when container_manager != docker and etcd_kubeadm_enabled is not defined
run_once: true
when: etcd_kubeadm_enabled is defined
block:
- name: Warn the user if they are still using `etcd_kubeadm_enabled`
debug:
msg: >
"WARNING! => `etcd_kubeadm_enabled` is deprecated and will be removed in a future release.
You can set `etcd_deployment_type` to `kubeadm` instead of setting `etcd_kubeadm_enabled` to `true`."
changed_when: true
- name: Stop if `etcd_kubeadm_enabled` is defined and `etcd_deployment_type` is not `kubeadm` or `host`
assert:
that: etcd_deployment_type == 'kubeadm'
msg: >
It is not possible to use `etcd_kubeadm_enabled` when `etcd_deployment_type` is set to {{ etcd_deployment_type }}.
Unset the `etcd_kubeadm_enabled` variable and set `etcd_deployment_type` to desired deployment type (`host`, `kubeadm`, `docker`) instead."
when: etcd_kubeadm_enabled
- name: Stop if download_localhost is enabled but download_run_once is not
assert:
@@ -273,6 +332,14 @@
- containerd_version not in ['latest', 'edge', 'stable']
- container_manager == 'containerd'
- name: Stop if using deprecated containerd_config variable
assert:
that: containerd_config is not defined
msg: "Variable containerd_config is now deprecated. See https://github.com/kubernetes-sigs/kubespray/blob/master/inventory/sample/group_vars/all/containerd.yml for details."
when:
- containerd_config is defined
- not ignore_assert_errors
- name: Stop if auto_renew_certificates is enabled when certificates are managed externally (kube_external_ca_mode is true)
assert:
that: not auto_renew_certificates
@@ -281,6 +348,14 @@
- kube_external_ca_mode
- not ignore_assert_errors
- name: Stop if using deprecated comma separated list for admission plugins
assert:
that: "',' not in kube_apiserver_enable_admission_plugins[0]"
msg: "Comma-separated list for kube_apiserver_enable_admission_plugins is now deprecated, use separate list items for each plugin."
when:
- kube_apiserver_enable_admission_plugins is defined
- kube_apiserver_enable_admission_plugins | length > 0
- name: Verify that the packages list is sorted
vars:
pkgs_lists: "{{ pkgs.keys() | list }}"

View File

@@ -6,7 +6,7 @@
option: servers
value: "{{ nameserverentries | join(',') }}"
mode: '0600'
backup: "{{ leave_etc_backup_files }}"
backup: true
when:
- ('127.0.0.53' not in nameserverentries
or systemd_resolved_enabled.rc != 0)
@@ -24,7 +24,7 @@
option: searches
value: "{{ (default_searchdomains | default([]) + searchdomains) | join(',') }}"
mode: '0600'
backup: "{{ leave_etc_backup_files }}"
backup: true
notify: Preinstall | update resolvconf for networkmanager
- name: NetworkManager | Add DNS options to NM configuration
@@ -34,5 +34,5 @@
option: options
value: "ndots:{{ ndots }},timeout:{{ dns_timeout | default('2') }},attempts:{{ dns_attempts | default('2') }}"
mode: '0600'
backup: "{{ leave_etc_backup_files }}"
backup: true
notify: Preinstall | update resolvconf for networkmanager


@@ -28,7 +28,7 @@
line: "precedence ::ffff:0:0/96 100"
state: present
create: true
backup: "{{ leave_etc_backup_files }}"
backup: true
mode: "0644"
when:
- disable_ipv6_dns


@@ -20,7 +20,7 @@
block: "{{ hostvars.localhost.etc_hosts_inventory_block }}"
state: "{{ 'present' if populate_inventory_to_hosts_file else 'absent' }}"
create: true
backup: "{{ leave_etc_backup_files }}"
backup: true
unsafe_writes: true
marker: "# Ansible inventory hosts {mark}"
mode: "0644"
@@ -31,7 +31,7 @@
regexp: ".*{{ apiserver_loadbalancer_domain_name }}$"
line: "{{ loadbalancer_apiserver.address }} {{ apiserver_loadbalancer_domain_name }}"
state: present
backup: "{{ leave_etc_backup_files }}"
backup: true
unsafe_writes: true
when:
- populate_loadbalancer_apiserver_to_hosts_file
@@ -69,7 +69,7 @@
line: "{{ item.key }} {{ item.value | join(' ') }}"
regexp: "^{{ item.key }}.*$"
state: present
backup: "{{ leave_etc_backup_files }}"
backup: true
unsafe_writes: true
loop: "{{ etc_hosts_localhosts_dict_target | default({}) | dict2items }}"


@@ -10,7 +10,7 @@
create: true
state: present
insertbefore: BOF
backup: "{{ leave_etc_backup_files }}"
backup: true
marker: "# Ansible entries {mark}"
mode: "0644"
notify: Preinstall | propagate resolvconf to k8s components


@@ -7,7 +7,7 @@
blockinfile:
path: "{{ dhclientconffile }}"
state: absent
backup: "{{ leave_etc_backup_files }}"
backup: true
marker: "# Ansible entries {mark}"
notify: Preinstall | propagate resolvconf to k8s components

File diff suppressed because it is too large


@@ -57,8 +57,7 @@ download_retries: 4
docker_image_pull_command: "{{ docker_bin_dir }}/docker pull"
docker_image_info_command: "{{ docker_bin_dir }}/docker images -q | xargs -i {{ '{{' }} docker_bin_dir }}/docker inspect -f {% raw %}'{{ '{{' }} if .RepoTags }}{{ '{{' }} join .RepoTags \",\" }}{{ '{{' }} end }}{{ '{{' }} if .RepoDigests }},{{ '{{' }} join .RepoDigests \",\" }}{{ '{{' }} end }}' {% endraw %} {} | tr '\n' ','"
nerdctl_image_info_command: "{{ bin_dir }}/nerdctl -n k8s.io images --format '{% raw %}{{ .Repository }}:{{ .Tag }}{% endraw %}' 2>/dev/null | grep -v ^:$ | tr '\n' ','"
# Using ctr instead of nerdctl to work around https://github.com/kubernetes-sigs/kubespray/issues/10670
nerdctl_image_pull_command: "{{ bin_dir }}/ctr -n k8s.io images pull{% if containerd_registries_mirrors is defined %} --hosts-dir {{ containerd_cfg_dir }}/certs.d{%- endif -%}"
nerdctl_image_pull_command: "{{ bin_dir }}/nerdctl -n k8s.io pull --quiet"
crictl_image_info_command: "{{ bin_dir }}/crictl images --verbose | awk -F ': ' '/RepoTags|RepoDigests/ {print $2}' | tr '\n' ','"
crictl_image_pull_command: "{{ bin_dir }}/crictl pull"
@@ -75,12 +74,12 @@ image_arch: "{{ host_architecture | default('amd64') }}"
# Versions
crun_version: 1.17
runc_version: v1.2.3
runc_version: v1.2.6
kata_containers_version: 3.1.3
youki_version: 0.4.1
gvisor_version: 20240305
containerd_version: 1.7.24
cri_dockerd_version: 0.3.11
containerd_version: 1.7.27
cri_dockerd_version: 0.3.16
# this is relevant when container_manager == 'docker'
docker_containerd_version: 1.6.32
@@ -100,7 +99,7 @@ github_image_repo: "ghcr.io"
# TODO(mattymo): Move calico versions to roles/network_plugins/calico/defaults
# after migration to container download
calico_version: "v3.29.1"
calico_version: "v3.29.4"
calico_ctl_version: "{{ calico_version }}"
calico_cni_version: "{{ calico_version }}"
calico_policy_version: "{{ calico_version }}"
@@ -112,10 +111,10 @@ calico_apiserver_enabled: false
flannel_version: "v0.22.0"
flannel_cni_version: "v1.1.2"
weave_version: 2.8.7
cni_version: "v1.4.0"
cni_version: "v1.4.1"
cilium_version: "v1.15.9"
cilium_cli_version: "v0.16.0"
cilium_cli_version: "v0.16.24"
cilium_enable_hubble: false
kube_ovn_version: "v1.12.21"
@@ -124,33 +123,34 @@ kube_router_version: "v2.0.0"
multus_version: "v4.1.0"
helm_version: "v3.16.4"
nerdctl_version: "1.7.7"
krew_version: "v0.4.4"
skopeo_version: "v1.16.1"
# Get the Kubernetes major.minor version (e.g. 1.17.4 => 1.17)
kube_major_version: "{{ kube_version | regex_replace('^v([0-9])+\\.([0-9]+)\\.[0-9]+', 'v\\1.\\2') }}"
pod_infra_supported_versions:
v1.32: "3.10"
v1.31: "3.10"
v1.30: "3.9"
v1.29: "3.9"
pod_infra_version: "{{ pod_infra_supported_versions[kube_major_version] }}"
etcd_supported_versions:
v1.32: "v3.5.16"
v1.31: "v3.5.16"
v1.30: "v3.5.16"
v1.31: "v3.5.21"
v1.30: "v3.5.21"
v1.29: "v3.5.21"
etcd_version: "{{ etcd_supported_versions[kube_major_version] }}"
crictl_supported_versions:
v1.32: "v1.32.0"
v1.31: "v1.31.1"
v1.30: "v1.30.1"
v1.29: "v1.29.0"
crictl_version: "{{ crictl_supported_versions[kube_major_version] }}"
crio_supported_versions:
v1.32: v1.32.0
v1.31: v1.31.3
v1.30: v1.30.3
v1.31: v1.31.6
v1.30: v1.30.11
v1.29: v1.29.13
crio_version: "{{ crio_supported_versions[kube_major_version] }}"
# Scheduler plugins don't build for K8s 1.29 yet
@@ -187,6 +187,7 @@ kata_containers_download_url: "{{ github_url }}/kata-containers/kata-containers/
gvisor_runsc_download_url: "{{ storage_googleapis_url }}/gvisor/releases/release/{{ gvisor_version }}/{{ ansible_architecture }}/runsc"
gvisor_containerd_shim_runsc_download_url: "{{ storage_googleapis_url }}/gvisor/releases/release/{{ gvisor_version }}/{{ ansible_architecture }}/containerd-shim-runsc-v1"
nerdctl_download_url: "{{ github_url }}/containerd/nerdctl/releases/download/v{{ nerdctl_version }}/nerdctl-{{ nerdctl_version }}-{{ ansible_system | lower }}-{{ image_arch }}.tar.gz"
krew_download_url: "{{ github_url }}/kubernetes-sigs/krew/releases/download/{{ krew_version }}/krew-{{ host_os }}_{{ image_arch }}.tar.gz"
containerd_download_url: "{{ github_url }}/containerd/containerd/releases/download/v{{ containerd_version }}/containerd-{{ containerd_version }}-linux-{{ image_arch }}.tar.gz"
cri_dockerd_download_url: "{{ github_url }}/Mirantis/cri-dockerd/releases/download/v{{ cri_dockerd_version }}/cri-dockerd-{{ cri_dockerd_version }}.{{ image_arch }}.tgz"
skopeo_download_url: "{{ github_url }}/lework/skopeo-binary/releases/download/{{ skopeo_version }}/skopeo-linux-{{ image_arch }}"
@@ -212,6 +213,7 @@ kata_containers_binary_checksum: "{{ kata_containers_binary_checksums[image_arch
gvisor_runsc_binary_checksum: "{{ gvisor_runsc_binary_checksums[image_arch][gvisor_version] }}"
gvisor_containerd_shim_binary_checksum: "{{ gvisor_containerd_shim_binary_checksums[image_arch][gvisor_version] }}"
nerdctl_archive_checksum: "{{ nerdctl_archive_checksums[image_arch][nerdctl_version] }}"
krew_archive_checksum: "{{ krew_archive_checksums[host_os][image_arch][krew_version] }}"
containerd_archive_checksum: "{{ containerd_archive_checksums[image_arch][containerd_version] }}"
skopeo_binary_checksum: "{{ skopeo_binary_checksums[image_arch][skopeo_version] }}"
@@ -326,13 +328,13 @@ rbd_provisioner_image_tag: "{{ rbd_provisioner_version }}"
local_path_provisioner_version: "v0.0.24"
local_path_provisioner_image_repo: "{{ docker_image_repo }}/rancher/local-path-provisioner"
local_path_provisioner_image_tag: "{{ local_path_provisioner_version }}"
ingress_nginx_version: "v1.12.0"
ingress_nginx_version: "v1.12.1"
ingress_nginx_controller_image_repo: "{{ kube_image_repo }}/ingress-nginx/controller"
ingress_nginx_opentelemetry_image_repo: "{{ kube_image_repo }}/ingress-nginx/opentelemetry"
ingress_nginx_controller_image_tag: "{{ ingress_nginx_version }}"
ingress_nginx_opentelemetry_image_tag: "v20230721-3e2062ee5"
ingress_nginx_kube_webhook_certgen_image_repo: "{{ kube_image_repo }}/ingress-nginx/kube-webhook-certgen"
ingress_nginx_kube_webhook_certgen_image_tag: "v1.5.0"
ingress_nginx_kube_webhook_certgen_image_tag: "v1.5.2"
alb_ingress_image_repo: "{{ docker_image_repo }}/amazon/aws-alb-ingress-controller"
alb_ingress_image_tag: "v1.1.9"
cert_manager_version: "v1.15.3"
@@ -357,9 +359,9 @@ csi_livenessprobe_image_repo: "{{ kube_image_repo }}/sig-storage/livenessprobe"
csi_livenessprobe_image_tag: "v2.5.0"
snapshot_controller_supported_versions:
v1.32: "v7.0.2"
v1.31: "v7.0.2"
v1.30: "v7.0.2"
v1.29: "v7.0.2"
snapshot_controller_image_repo: "{{ kube_image_repo }}/sig-storage/snapshot-controller"
snapshot_controller_image_tag: "{{ snapshot_controller_supported_versions[kube_major_version] }}"
@@ -943,6 +945,19 @@ downloads:
groups:
- kube_control_plane
krew:
enabled: "{{ krew_enabled }}"
file: true
version: "{{ krew_version }}"
dest: "{{ local_release_dir }}/krew-{{ host_os }}_{{ image_arch }}.tar.gz"
sha256: "{{ krew_archive_checksum }}"
url: "{{ krew_download_url }}"
unarchive: true
owner: "root"
mode: "0755"
groups:
- kube_control_plane
registry:
enabled: "{{ registry_enabled }}"
container: true
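The `kube_major_version` regex in the hunk above trims a full version to its major.minor prefix. A minimal Python sketch of the same transformation (the original is a Jinja `regex_replace` filter; this stand-in is for illustration only):

```python
import re

def kube_major_version(kube_version: str) -> str:
    # v1.31.9 -> v1.31, mirroring the regex_replace in kubespray-defaults
    return re.sub(r'^v(\d+)\.(\d+)\.\d+', r'v\1.\2', kube_version)
```

The resulting `vX.Y` string is what keys the `*_supported_versions` lookup tables (etcd, crictl, cri-o) in this file.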


@@ -18,10 +18,10 @@ kubelet_fail_swap_on: true
kubelet_swap_behavior: LimitedSwap
## Change this to use another Kubernetes version, e.g. a current beta release
kube_version: v1.32.0
kube_version: v1.31.9
## The minimum version working
kube_version_min_required: v1.30.0
kube_version_min_required: v1.29.0
## Kube Proxy mode One of ['iptables', 'ipvs']
kube_proxy_mode: ipvs
@@ -411,6 +411,7 @@ dashboard_enabled: false
# Addons which can be enabled
helm_enabled: false
krew_enabled: false
registry_enabled: false
metrics_server_enabled: false
enable_network_policy: true
@@ -487,62 +488,7 @@ external_hcloud_cloud:
## the k8s cluster. Only 'AlwaysAllow', 'AlwaysDeny', 'Node' and
## 'RBAC' modes are tested. Order is important.
authorization_modes: ['Node', 'RBAC']
## Structured authorization config
## Structured AuthorizationConfiguration is a new feature in Kubernetes v1.29+ (GA in v1.32) that configures the API server's authorization modes with a structured configuration file.
## AuthorizationConfiguration files offer features not available with the `--authorization-mode` flag, although Kubespray supports both methods and authorization-mode remains the default for now.
## Note: Because the `--authorization-config` and `--authorization-mode` flags are mutually exclusive, the `authorization_modes` ansible variable is ignored when `kube_apiserver_use_authorization_config_file` is set to true. The two features cannot be used at the same time.
## Docs: https://kubernetes.io/docs/reference/access-authn-authz/authorization/#configuring-the-api-server-using-an-authorization-config-file
## Examples: https://kubernetes.io/blog/2024/04/26/multi-webhook-and-modular-authorization-made-much-easier/
## KEP: https://github.com/kubernetes/enhancements/tree/master/keps/sig-auth/3221-structured-authorization-configuration
kube_apiserver_use_authorization_config_file: false
kube_apiserver_authorization_config_authorizers:
- type: Node
name: node
- type: RBAC
name: rbac
## Example for use with kube_webhook_authorization: true
# - type: Webhook
# name: webhook
# webhook:
# connectionInfo:
# type: KubeConfigFile
# kubeConfigFile: "{{ kube_config_dir }}/webhook-authorization-config.yaml"
# subjectAccessReviewVersion: v1beta1
# matchConditionSubjectAccessReviewVersion: v1
# timeout: 3s
# failurePolicy: NoOpinion
# matchConditions:
# # Documentation on CEL: https://kubernetes.io/docs/reference/using-api/cel/
# # only send resource requests to the webhook
# - expression: has(request.resourceAttributes)
# # Don't intercept requests from kube-system service accounts
# - expression: "!('system:serviceaccounts:kube-system' in request.groups)"
# ## Below expressions avoid issues with kubeadm init and other system components that should be authorized by Node and RBAC
# # Don't process node and bootstrap token requests with the webhook
# - expression: "!('system:nodes' in request.groups)"
# - expression: "!('system:bootstrappers' in request.groups)"
# - expression: "!('system:bootstrappers:kubeadm:default-node-token' in request.groups)"
# # Don't process kubeadm requests with the webhook
# - expression: "!('kubeadm:cluster-admins' in request.groups)"
# - expression: "!('system:masters' in request.groups)"
## Two workarounds are required to use AuthorizationConfiguration with kubeadm v1.29.x:
## 1. Enable the StructuredAuthorizationConfiguration feature gate:
# kube_apiserver_feature_gates:
# - StructuredAuthorizationConfiguration=true
## 2. Use the following kubeadm_patches to remove defaulted authorization-mode flags (Workaround for a kubeadm defaulting bug on v1.29.x. fixed in 1.30+ via: https://github.com/kubernetes/kubernetes/pull/123654)
# kubeadm_patches:
# - target: kube-apiserver
# type: strategic
# patch:
# spec:
# containers:
# - name: kube-apiserver
# $deleteFromPrimitiveList/command:
# - --authorization-mode=Node,RBAC
rbac_enabled: "{{ ('RBAC' in authorization_modes and not kube_apiserver_use_authorization_config_file) or (kube_apiserver_use_authorization_config_file and kube_apiserver_authorization_config_authorizers | selectattr('type', 'equalto', 'RBAC') | list | length > 0) }}"
rbac_enabled: "{{ 'RBAC' in authorization_modes }}"
# When enabled, API bearer tokens (including service account tokens) can be used to authenticate to the kubelet's HTTPS endpoint
kubelet_authentication_token_webhook: true
@@ -745,6 +691,9 @@ proxy_disable_env:
https_proxy: ''
no_proxy: ''
# krew root dir
krew_root_dir: "/usr/local/krew"
# sysctl_file_path to add sysctl conf to
sysctl_file_path: "/etc/sysctl.d/99-sysctl.conf"
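The `rbac_enabled` hunk above replaces the structured-authorization-aware expression with a plain membership test. The removed expression's logic can be sketched as a hypothetical Python helper (names are illustrative, not part of the playbooks):

```python
def rbac_enabled(authorization_modes, use_config_file=False, authorizers=()):
    # With a structured AuthorizationConfiguration file, authorization_modes is
    # ignored, so RBAC must be detected from the configured authorizer entries.
    if use_config_file:
        return any(a.get("type") == "RBAC" for a in authorizers)
    # Flag-based path: the simplified expression kept on the release branch.
    return "RBAC" in authorization_modes
```

Since the release branch drops `kube_apiserver_use_authorization_config_file`, only the final membership test survives.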


@@ -16,6 +16,7 @@
- name: Set fallback_ip
set_fact:
fallback_ip: "{{ ansible_default_ipv4.address | d('127.0.0.1') }}"
cacheable: true
when: fallback_ip is not defined
- name: Set no_proxy
@@ -23,3 +24,12 @@
when:
- http_proxy is defined or https_proxy is defined
- no_proxy is not defined
# TODO: Clean this task up when we drop backward compatibility support for `etcd_kubeadm_enabled`
- name: Set `etcd_deployment_type` to "kubeadm" if `etcd_kubeadm_enabled` is true
set_fact:
etcd_deployment_type: kubeadm
when:
- etcd_kubeadm_enabled is defined and etcd_kubeadm_enabled
tags:
- always


@@ -352,9 +352,7 @@ spec:
privileged: true
resources:
limits:
{% if calico_node_cpu_limit != "0" %}
cpu: {{ calico_node_cpu_limit }}
{% endif %}
memory: {{ calico_node_memory_limit }}
requests:
cpu: {{ calico_node_cpu_requests }}


@@ -126,10 +126,6 @@ spec:
- name: TYPHA_PROMETHEUSMETRICSPORT
value: "{{ typha_prometheusmetricsport }}"
{% endif %}
{% if calico_ipam_host_local %}
- name: USE_POD_CIDR
value: "true"
{% endif %}
{% if typha_secure %}
volumeMounts:
- mountPath: /etc/typha
@@ -139,6 +135,10 @@ spec:
subPath: ca.crt
name: cacert
readOnly: true
{% endif %}
{% if calico_ipam_host_local %}
- name: USE_POD_CIDR
value: "true"
{% endif %}
livenessProbe:
httpGet:


@@ -58,7 +58,7 @@ calico_felix_floating_ips: Disabled
# Limits for apps
calico_node_memory_limit: 500M
calico_node_cpu_limit: "0"
calico_node_cpu_limit: 300m
calico_node_memory_requests: 64M
calico_node_cpu_requests: 150m
calico_felix_chaininsertmode: Insert


@@ -0,0 +1,4 @@
# See the OWNERS docs at https://go.k8s.io/owners
emeritus_approvers:
- oilbeater


@@ -0,0 +1,6 @@
# See the OWNERS docs at https://go.k8s.io/owners
approvers:
- bozzo
reviewers:
- bozzo


@@ -0,0 +1,6 @@
# See the OWNERS docs at https://go.k8s.io/owners
approvers:
- simon
reviewers:
- simon


@@ -0,0 +1,8 @@
# See the OWNERS docs at https://go.k8s.io/owners
approvers:
- qvicksilver
- yujunz
reviewers:
- qvicksilver
- yujunz


@@ -347,6 +347,9 @@
- /etc/bash_completion.d/kubectl.sh
- /etc/bash_completion.d/crictl
- /etc/bash_completion.d/nerdctl
- /etc/bash_completion.d/krew
- /etc/bash_completion.d/krew.sh
- "{{ krew_root_dir }}"
- /etc/modules-load.d/kube_proxy-ipvs.conf
- /etc/modules-load.d/kubespray-br_netfilter.conf
- /etc/modules-load.d/kubespray-kata-containers.conf


@@ -1,35 +0,0 @@
[build-system]
requires = ["setuptools >= 61.0",
"setuptools_scm >= 8.0",
]
build-backend = "setuptools.build_meta"
[project]
name = "kubespray_component_hash_update"
version = "1.0.0"
dependencies = [
"more_itertools",
"ruamel.yaml",
"requests",
"packaging",
]
requires-python = ">= 3.10"
authors = [
{ name = "Craig Rodrigues", email = "rodrigc@crodrigues.org" },
{ name = "Simon Wessel" },
{ name = "Max Gautier", email = "mg@max.gautier.name" },
]
maintainers = [
{ name = "The Kubespray maintainers" },
]
description = "Download or compute hashes for new versions of components deployed by Kubespray"
classifiers = [
"License :: OSI Approved :: Apache-2.0",
]
[project.scripts]
update-hashes = "component_hash_update.download:main"


@@ -1,94 +0,0 @@
"""
Static download metadata for components updated by the update-hashes command.
"""
infos = {
"calicoctl_binary": {
"url": "https://github.com/projectcalico/calico/releases/download/v{version}/SHA256SUMS",
"graphql_id": "R_kgDOA87D0g",
},
"ciliumcli_binary": {
"url": "https://github.com/cilium/cilium-cli/releases/download/v{version}/cilium-{os}-{arch}.tar.gz.sha256sum",
"graphql_id": "R_kgDOE0nmLg",
},
"cni_binary": {
"url": "https://github.com/containernetworking/plugins/releases/download/v{version}/cni-plugins-{os}-{arch}-v{version}.tgz.sha256",
"graphql_id": "R_kgDOBQqEpg",
},
"containerd_archive": {
"url": "https://github.com/containerd/containerd/releases/download/v{version}/containerd-{version}-{os}-{arch}.tar.gz.sha256sum",
"graphql_id": "R_kgDOAr9FWA",
},
"cri_dockerd_archive": {
"binary": True,
"url": "https://github.com/Mirantis/cri-dockerd/releases/download/v{version}/cri-dockerd-{version}.{arch}.tgz",
"graphql_id": "R_kgDOEvvLcQ",
},
"crictl": {
"url": "https://github.com/kubernetes-sigs/cri-tools/releases/download/v{version}/crictl-v{version}-{os}-{arch}.tar.gz.sha256",
"graphql_id": "R_kgDOBMdURA",
},
"crio_archive": {
"url": "https://storage.googleapis.com/cri-o/artifacts/cri-o.{arch}.v{version}.tar.gz.sha256sum",
"graphql_id": "R_kgDOBAr5pg",
},
"crun": {
"url": "https://github.com/containers/crun/releases/download/{version}/crun-{version}-linux-{arch}",
"binary": True,
"graphql_id": "R_kgDOBip3vA",
},
"etcd_binary": {
"url": "https://github.com/etcd-io/etcd/releases/download/v{version}/SHA256SUMS",
"graphql_id": "R_kgDOAKtHtg",
},
"gvisor_containerd_shim_binary": {
"url": "https://storage.googleapis.com/gvisor/releases/release/{version}/{alt_arch}/containerd-shim-runsc-v1.sha512",
"hashtype": "sha512",
"tags": True,
"graphql_id": "R_kgDOB9IlXg",
},
"gvisor_runsc_binary": {
"url": "https://storage.googleapis.com/gvisor/releases/release/{version}/{alt_arch}/runsc.sha512",
"hashtype": "sha512",
"tags": True,
"graphql_id": "R_kgDOB9IlXg",
},
"kata_containers_binary": {
"url": "https://github.com/kata-containers/kata-containers/releases/download/{version}/kata-static-{version}-{arch}.tar.xz",
"binary": True,
"graphql_id": "R_kgDOBsJsHQ",
},
"kubeadm": {
"url": "https://dl.k8s.io/release/v{version}/bin/linux/{arch}/kubeadm.sha256",
"graphql_id": "R_kgDOAToIkg",
},
"kubectl": {
"url": "https://dl.k8s.io/release/v{version}/bin/linux/{arch}/kubectl.sha256",
"graphql_id": "R_kgDOAToIkg",
},
"kubelet": {
"url": "https://dl.k8s.io/release/v{version}/bin/linux/{arch}/kubelet.sha256",
"graphql_id": "R_kgDOAToIkg",
},
"nerdctl_archive": {
"url": "https://github.com/containerd/nerdctl/releases/download/v{version}/SHA256SUMS",
"graphql_id": "R_kgDOEvuRnQ",
},
"runc": {
"url": "https://github.com/opencontainers/runc/releases/download/v{version}/runc.sha256sum",
"graphql_id": "R_kgDOAjP4QQ",
},
"skopeo_binary": {
"url": "https://github.com/lework/skopeo-binary/releases/download/v{version}/skopeo-{os}-{arch}.sha256",
"graphql_id": "R_kgDOHQ6J9w",
},
"youki": {
"url": "https://github.com/youki-dev/youki/releases/download/v{version}/youki-{version}-{alt_arch}-gnu.tar.gz",
"binary": True,
"graphql_id": "R_kgDOFPvgPg",
},
"yq": {
"url": "https://github.com/mikefarah/yq/releases/download/v{version}/checksums-bsd", # see https://github.com/mikefarah/yq/pull/1691 for why we use this url
"graphql_id": "R_kgDOApOQGQ",
},
}


@@ -1,335 +0,0 @@
#!/usr/bin/env python3
# After a new version of Kubernetes has been released,
# run this script to update roles/kubespray-defaults/defaults/main/download.yml
# with new hashes.
import sys
import os
import logging
import subprocess
from itertools import groupby, chain
from more_itertools import partition
from functools import cache
import argparse
import requests
import hashlib
from datetime import datetime
from ruamel.yaml import YAML
from packaging.version import Version, InvalidVersion
from importlib.resources import files
from pathlib import Path
from typing import Optional, Any
from . import components
CHECKSUMS_YML = Path("roles/kubespray-defaults/defaults/main/checksums.yml")
logger = logging.getLogger(__name__)
def open_yaml(file: Path):
yaml = YAML()
yaml.explicit_start = True
yaml.preserve_quotes = True
yaml.width = 4096
with open(file, "r") as checksums_yml:
data = yaml.load(checksums_yml)
return data, yaml
arch_alt_name = {
"amd64": "x86_64",
"arm64": "aarch64",
"ppc64le": None,
"arm": None,
}
# TODO: downloads not supported
# gvisor: sha512 checksums
# helm_archive: PGP signatures
# krew_archive: different yaml structure (in our download)
# calico_crds_archive: different yaml structure (in our download)
# TODO:
# noarch support -> k8s manifests, helm charts
# different checksum format (needs download role changes)
# different verification methods (gpg, cosign) ( needs download role changes) (or verify the sig in this script and only use the checksum in the playbook)
# perf improvements (async)
def download_hash(downloads: {str: {str: Any}}) -> None:
# Handle files with multiple hashes, in various formats.
# the lambda is expected to produce a dictionary of hashes indexed by arch name
download_hash_extract = {
"calicoctl_binary": lambda hashes: {
line.split("-")[-1]: line.split()[0]
for line in hashes.strip().split("\n")
if line.count("-") == 2 and line.split("-")[-2] == "linux"
},
"etcd_binary": lambda hashes: {
line.split("-")[-1].removesuffix(".tar.gz"): line.split()[0]
for line in hashes.strip().split("\n")
if line.split("-")[-2] == "linux"
},
"nerdctl_archive": lambda hashes: {
line.split()[1].removesuffix(".tar.gz").split("-")[3]: line.split()[0]
for line in hashes.strip().split("\n")
if [x for x in line.split(" ") if x][1].split("-")[2] == "linux"
},
"runc": lambda hashes: {
parts[1].split(".")[1]: parts[0]
for parts in (line.split() for line in hashes.split("\n")[3:9])
},
"yq": lambda rhashes_bsd: {
pair[0].split("_")[-1]: pair[1]
# pair = (yq_<os>_<arch>, <hash>)
for pair in (
(line.split()[1][1:-1], line.split()[3])
for line in rhashes_bsd.splitlines()
if line.startswith("SHA256")
)
if pair[0].startswith("yq")
and pair[0].split("_")[1] == "linux"
and not pair[0].endswith(".tar.gz")
},
}
checksums_file = (
Path(
subprocess.Popen(
["git", "rev-parse", "--show-toplevel"], stdout=subprocess.PIPE
)
.communicate()[0]
.rstrip()
.decode("utf-8")
)
/ CHECKSUMS_YML
)
logger.info("Opening checksums file %s...", checksums_file)
data, yaml = open_yaml(checksums_file)
s = requests.Session()
@cache
def _get_hash_by_arch(download: str, version: str) -> {str: str}:
hash_file = s.get(
downloads[download]["url"].format(
version=version,
os="linux",
),
allow_redirects=True,
)
hash_file.raise_for_status()
return download_hash_extract[download](hash_file.content.decode())
releases, tags = map(
dict, partition(lambda r: r[1].get("tags", False), downloads.items())
)
repos = {
"with_releases": [r["graphql_id"] for r in releases.values()],
"with_tags": [t["graphql_id"] for t in tags.values()],
}
response = s.post(
"https://api.github.com/graphql",
json={
"query": files(__package__).joinpath("list_releases.graphql").read_text(),
"variables": repos,
},
headers={
"Authorization": f"Bearer {os.environ['API_KEY']}",
},
)
if "x-ratelimit-used" in response.headers._store:
logger.info(
"Github graphQL API ratelimit status: used %s of %s. Next reset at %s",
response.headers["X-RateLimit-Used"],
response.headers["X-RateLimit-Limit"],
datetime.fromtimestamp(int(response.headers["X-RateLimit-Reset"])),
)
response.raise_for_status()
def valid_version(possible_version: str) -> Optional[Version]:
try:
return Version(possible_version)
except InvalidVersion:
return None
repos = response.json()["data"]
github_versions = dict(
zip(
chain(releases.keys(), tags.keys()),
[
{
v
for r in repo["releases"]["nodes"]
if not r["isPrerelease"]
and (v := valid_version(r["tagName"])) is not None
}
for repo in repos["with_releases"]
]
+ [
{
v
for t in repo["refs"]["nodes"]
if (v := valid_version(t["name"].removeprefix("release-")))
is not None
}
for repo in repos["with_tags"]
],
strict=True,
)
)
components_supported_arch = {
component.removesuffix("_checksums"): [a for a in archs.keys()]
for component, archs in data.items()
}
new_versions = {
c: {
v
for v in github_versions[c]
if any(
v > version
and (
(v.major, v.minor) == (version.major, version.minor)
or c.startswith("gvisor")
)
for version in [
max(minors)
for _, minors in groupby(cur_v, lambda v: (v.minor, v.major))
]
)
# only get:
# - patch versions (no minor or major bump) (exception for gvisor, which does not have a major.minor.patch scheme)
# - newer ones (don't get old patch version)
}
- set(cur_v)
for component, archs in data.items()
if (c := component.removesuffix("_checksums")) in downloads.keys()
# this is only to bind cur_v in this scope
and (
cur_v := sorted(
Version(str(k)) for k in next(archs.values().__iter__()).keys()
)
)
}
hash_set_to_0 = {
c: {
Version(str(v))
for v, h in chain.from_iterable(a.items() for a in archs.values())
if h == 0
}
for component, archs in data.items()
if (c := component.removesuffix("_checksums")) in downloads.keys()
}
def get_hash(component: str, version: Version, arch: str):
if component in download_hash_extract:
hashes = _get_hash_by_arch(component, version)
return hashes[arch]
else:
hash_file = s.get(
downloads[component]["url"].format(
version=version,
os="linux",
arch=arch,
alt_arch=arch_alt_name[arch],
),
allow_redirects=True,
)
hash_file.raise_for_status()
if downloads[component].get("binary", False):
return hashlib.new(
downloads[component].get("hashtype", "sha256"), hash_file.content
).hexdigest()
return hash_file.content.decode().split()[0]
for component, versions in chain(new_versions.items(), hash_set_to_0.items()):
c = component + "_checksums"
for arch in components_supported_arch[component]:
for version in versions:
data[c][arch][
str(version)
] = f"{downloads[component].get('hashtype', 'sha256')}:{get_hash(component, version, arch)}"
data[c] = {
arch: {
v: versions[v]
for v in sorted(
versions.keys(), key=lambda v: Version(str(v)), reverse=True
)
}
for arch, versions in data[c].items()
}
with open(checksums_file, "w") as checksums_yml:
yaml.dump(data, checksums_yml)
logger.info("Updated %s", checksums_file)
def main():
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
parser = argparse.ArgumentParser(
description=f"Add new patch versions hashes in {CHECKSUMS_YML}",
formatter_class=argparse.RawTextHelpFormatter,
epilog=f"""
This script only lookup new patch versions relative to those already existing
in the data in {CHECKSUMS_YML},
which means it won't add new major or minor versions.
In order to add one of these, edit {CHECKSUMS_YML}
by hand, adding the new versions with a patch number of 0 (or the lowest relevant patch versions)
and a hash value of 0; then run this script.
Note that the script will try to add the versions on all
architecture keys already present for a given download target.
EXAMPLES:
crictl_checksums:
...
amd64:
+ 1.30.0: 0
1.29.0: d16a1ffb3938f5a19d5c8f45d363bd091ef89c0bc4d44ad16b933eede32fdcbb
1.28.0: 8dc78774f7cbeaf787994d386eec663f0a3cf24de1ea4893598096cb39ef2508""",
)
# Workaround for https://github.com/python/cpython/issues/53834#issuecomment-2060825835
# Fixed in python 3.14
class Choices(tuple):
def __init__(self, _iterable=None, default=None):
self.default = default or []
def __contains__(self, item):
return super().__contains__(item) or item == self.default
choices = Choices(components.infos.keys(), default=list(components.infos.keys()))
parser.add_argument(
"only",
nargs="*",
choices=choices,
help="if provided, only obtain hashes for these components",
default=choices.default,
)
parser.add_argument(
"-e",
"--exclude",
action="append",
choices=components.infos.keys(),
help="do not obtain hashes for this component",
default=[],
)
args = parser.parse_args()
download_hash(
{k: components.infos[k] for k in (set(args.only) - set(args.exclude))}
)
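The `new_versions` comprehension in the removed module above keeps only candidate releases that are newer patches of a major.minor line already present in the checksums file. A simplified, stdlib-only sketch of that selection (the real module compares `packaging.version.Version` objects and special-cases gvisor's date-based scheme; helper names here are mine):

```python
def parse(v):
    # tiny stand-in for packaging.version.Version, enough for X.Y.Z strings
    return tuple(int(x) for x in v.split("."))

def new_patch_versions(current, candidates):
    # latest known patch per (major, minor) line
    latest = {}
    for v in map(parse, current):
        key = v[:2]
        if key not in latest or v > latest[key]:
            latest[key] = v
    # keep only candidates that are newer patches of an existing line;
    # new major/minor lines are intentionally skipped (added by hand instead)
    return sorted(
        v for v in map(parse, candidates)
        if v[:2] in latest and v > latest[v[:2]]
    )
```

This matches the epilog's description: the script never introduces new major or minor lines on its own.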


@@ -1,24 +0,0 @@
query($with_releases: [ID!]!, $with_tags: [ID!]!) {
with_releases: nodes(ids: $with_releases) {
... on Repository {
releases(first: 100) {
nodes {
tagName
isPrerelease
}
}
}
}
with_tags: nodes(ids: $with_tags) {
... on Repository {
refs(refPrefix: "refs/tags/", last: 25) {
nodes {
name
}
}
}
}
}
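The `etcd_binary` extraction lambda, present in both versions of the hash-update script, parses an upstream SHA256SUMS listing into an arch-indexed dict. A standalone sketch with illustrative sample data (the hash values are made up):

```python
def extract_etcd_hashes(hashes: str) -> dict:
    # lines look like: <hash>  etcd-v3.5.21-linux-<arch>.tar.gz
    return {
        line.split("-")[-1].removesuffix(".tar.gz"): line.split()[0]
        for line in hashes.strip().split("\n")
        if line.split("-")[-2] == "linux"
    }

sample = (
    "abc123  etcd-v3.5.21-linux-amd64.tar.gz\n"
    "def456  etcd-v3.5.21-linux-arm64.tar.gz\n"
    "0ff99a  etcd-v3.5.21-darwin-amd64.zip\n"
)
```

Non-Linux entries are filtered out by the `line.split("-")[-2] == "linux"` guard.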

scripts/download_hash.py (new file)

@@ -0,0 +1,205 @@
#!/usr/bin/env python3
# After a new version of Kubernetes has been released,
# run this script to update roles/kubespray-defaults/defaults/main/download.yml
# with new hashes.
import sys
from itertools import count, groupby
from collections import defaultdict
from functools import cache
import argparse
import requests
from ruamel.yaml import YAML
from packaging.version import Version
CHECKSUMS_YML = "../roles/kubespray-defaults/defaults/main/checksums.yml"
def open_checksums_yaml():
yaml = YAML()
yaml.explicit_start = True
yaml.preserve_quotes = True
yaml.width = 4096
with open(CHECKSUMS_YML, "r") as checksums_yml:
data = yaml.load(checksums_yml)
return data, yaml
def version_compare(version):
return Version(version.removeprefix("v"))
downloads = {
"calicoctl_binary": "https://github.com/projectcalico/calico/releases/download/{version}/SHA256SUMS",
"ciliumcli_binary": "https://github.com/cilium/cilium-cli/releases/download/{version}/cilium-{os}-{arch}.tar.gz.sha256sum",
"cni_binary": "https://github.com/containernetworking/plugins/releases/download/{version}/cni-plugins-{os}-{arch}-{version}.tgz.sha256",
"containerd_archive": "https://github.com/containerd/containerd/releases/download/v{version}/containerd-{version}-{os}-{arch}.tar.gz.sha256sum",
"crictl": "https://github.com/kubernetes-sigs/cri-tools/releases/download/{version}/crictl-{version}-{os}-{arch}.tar.gz.sha256",
"crio_archive": "https://storage.googleapis.com/cri-o/artifacts/cri-o.{arch}.{version}.tar.gz.sha256sum",
"etcd_binary": "https://github.com/etcd-io/etcd/releases/download/{version}/SHA256SUMS",
"kubeadm": "https://dl.k8s.io/release/{version}/bin/linux/{arch}/kubeadm.sha256",
"kubectl": "https://dl.k8s.io/release/{version}/bin/linux/{arch}/kubectl.sha256",
"kubelet": "https://dl.k8s.io/release/{version}/bin/linux/{arch}/kubelet.sha256",
"nerdctl_archive": "https://github.com/containerd/nerdctl/releases/download/v{version}/SHA256SUMS",
"runc": "https://github.com/opencontainers/runc/releases/download/{version}/runc.sha256sum",
"skopeo_binary": "https://github.com/lework/skopeo-binary/releases/download/{version}/skopeo-{os}-{arch}.sha256",
"yq": "https://github.com/mikefarah/yq/releases/download/{version}/checksums-bsd", # see https://github.com/mikefarah/yq/pull/1691 for why we use this url
}
# TODO: downloads not supported
# youki: no checksums in releases
# kata: no checksums in releases
# gvisor: sha512 checksums
# crun : PGP signatures
# cri_dockerd: no checksums or signatures
# helm_archive: PGP signatures
# krew_archive: different yaml structure
# calico_crds_archive: different yaml structure
# TODO:
# noarch support -> k8s manifests, helm charts
# different checksum format (needs download role changes)
# different verification methods (gpg, cosign) ( needs download role changes) (or verify the sig in this script and only use the checksum in the playbook)
# perf improvements (async)
def download_hash(only_downloads: list[str]) -> None:
    # Handle files containing multiple hashes, in various formats.
    # Each lambda is expected to produce a dictionary of hashes indexed by arch name.
download_hash_extract = {
"calicoctl_binary": lambda hashes : {
line.split('-')[-1] : line.split()[0]
for line in hashes.strip().split('\n')
if line.count('-') == 2 and line.split('-')[-2] == "linux"
},
"etcd_binary": lambda hashes : {
line.split('-')[-1].removesuffix('.tar.gz') : line.split()[0]
for line in hashes.strip().split('\n')
if line.split('-')[-2] == "linux"
},
"nerdctl_archive": lambda hashes : {
line.split()[1].removesuffix('.tar.gz').split('-')[3] : line.split()[0]
for line in hashes.strip().split('\n')
if [x for x in line.split(' ') if x][1].split('-')[2] == "linux"
},
"runc": lambda hashes : {
parts[1].split('.')[1] : parts[0]
for parts in (line.split()
for line in hashes.split('\n')[3:9])
},
"yq": lambda rhashes_bsd : {
pair[0].split('_')[-1] : pair[1]
# pair = (yq_<os>_<arch>, <hash>)
for pair in ((line.split()[1][1:-1], line.split()[3])
for line in rhashes_bsd.splitlines()
if line.startswith("SHA256"))
if pair[0].startswith("yq")
and pair[0].split('_')[1] == "linux"
and not pair[0].endswith(".tar.gz")
},
}
data, yaml = open_checksums_yaml()
s = requests.Session()
@cache
    def _get_hash_by_arch(download: str, version: str) -> dict[str, str] | None:
hash_file = s.get(downloads[download].format(
version = version,
os = "linux",
),
allow_redirects=True)
if hash_file.status_code == 404:
print(f"Unable to find {download} hash file for version {version} at {hash_file.url}")
return None
hash_file.raise_for_status()
return download_hash_extract[download](hash_file.content.decode())
for download, url in (downloads if only_downloads == []
else {k:downloads[k] for k in downloads.keys() & only_downloads}).items():
checksum_name = f"{download}_checksums"
# Propagate new patch versions to all architectures
for arch in data[checksum_name].values():
for arch2 in data[checksum_name].values():
arch.update({
v:("NONE" if arch2[v] == "NONE" else 0)
for v in (set(arch2.keys()) - set(arch.keys()))
if v.split('.')[2] == '0'})
        # this is necessary to make the script idempotent,
        # by only adding a vX.X.0 version (=minor release) in each arch
        # and letting the rest of the script populate the potential
        # patch versions
for arch, versions in data[checksum_name].items():
for minor, patches in groupby(versions.copy().keys(), lambda v : '.'.join(v.split('.')[:-1])):
for version in (f"{minor}.{patch}" for patch in
count(start=int(max(patches, key=version_compare).split('.')[-1]),
step=1)):
                    # These generators do the following:
                    # group all patch versions by minor number, take the newest,
                    # and count upward from there to find new versions
if version in versions and versions[version] != 0:
continue
if download in download_hash_extract:
hashes = _get_hash_by_arch(download, version)
                        if hashes is None:
                            break
                        sha256sum = hashes.get(arch)
                        if sha256sum is None:
                            break
else:
hash_file = s.get(downloads[download].format(
version = version,
os = "linux",
arch = arch
),
allow_redirects=True)
if hash_file.status_code == 404:
print(f"Unable to find {download} hash file for version {version} (arch: {arch}) at {hash_file.url}")
break
hash_file.raise_for_status()
sha256sum = hash_file.content.decode().split()[0]
if len(sha256sum) != 64:
raise Exception(f"Checksum has an unexpected length: {len(sha256sum)} (binary: {download}, arch: {arch}, release: {version}, checksum: '{sha256sum}')")
data[checksum_name][arch][version] = sha256sum
data[checksum_name] = {arch : {r : releases[r] for r in sorted(releases.keys(),
key=version_compare,
reverse=True)}
for arch, releases in data[checksum_name].items()}
with open(CHECKSUMS_YML, "w") as checksums_yml:
yaml.dump(data, checksums_yml)
print(f"\n\nUpdated {CHECKSUMS_YML}\n")
parser = argparse.ArgumentParser(description=f"Add new patch versions hashes in {CHECKSUMS_YML}",
formatter_class=argparse.RawTextHelpFormatter,
epilog=f"""
This script only looks up new patch versions relative to those already present
in the data in {CHECKSUMS_YML},
which means it won't add new major or minor versions.
To add one of those, edit {CHECKSUMS_YML}
by hand, adding the new version with a patch number of 0 (or the lowest relevant patch version),
then run this script.
Note that the script will try to add the versions for all
architecture keys already present for a given download target.
The '0' value for a version hash is treated as a missing hash, so the script will try to download it again.
To mark a non-existent version (yanked, or upstream version numbers that are not monotonically increasing),
use the special value 'NONE'.
EXAMPLES:
crictl_checksums:
...
amd64:
+ v1.30.0: 0
v1.29.0: d16a1ffb3938f5a19d5c8f45d363bd091ef89c0bc4d44ad16b933eede32fdcbb
v1.28.0: 8dc78774f7cbeaf787994d386eec663f0a3cf24de1ea4893598096cb39ef2508"""
)
parser.add_argument('binaries', nargs='*', choices=downloads.keys())
args = parser.parse_args()
download_hash(args.binaries)
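The `yq` entry in `download_hash_extract` above parses mikefarah/yq's BSD-style checksum file (see the linked PR for why that URL is used). As a standalone sketch, with placeholder digests rather than real yq checksums, the extraction logic behaves like this:

```python
# Standalone sketch of the BSD-style checksum parsing used for yq above.
# Lines look like: SHA256 (yq_<os>_<arch>) = <digest>
# The digests below are placeholders, not real yq checksums.
sample = """\
SHA256 (yq_linux_amd64) = 1111aaaa
SHA256 (yq_linux_arm64) = 2222bbbb
SHA256 (yq_darwin_amd64) = 3333cccc
SHA512 (yq_linux_amd64) = 4444dddd
SHA256 (yq_linux_amd64.tar.gz) = 5555eeee
"""

def parse_bsd_checksums(rhashes_bsd: str) -> dict[str, str]:
    return {
        pair[0].split('_')[-1]: pair[1]
        # pair = (yq_<os>_<arch>, <hash>)
        for pair in ((line.split()[1][1:-1], line.split()[3])
                     for line in rhashes_bsd.splitlines()
                     if line.startswith("SHA256"))
        # keep only SHA256 entries for linux binaries, not archives
        if pair[0].startswith("yq")
        and pair[0].split('_')[1] == "linux"
        and not pair[0].endswith(".tar.gz")
    }

print(parse_bsd_checksums(sample))  # {'amd64': '1111aaaa', 'arm64': '2222bbbb'}
```

The SHA512 line, the darwin binary, and the `.tar.gz` archive are all filtered out, leaving one digest per linux architecture.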

@@ -1,51 +0,0 @@
#!/bin/sh
gh api graphql -H "X-Github-Next-Global-ID: 1" -f query='{
calicoctl_binary: repository(owner: "projectcalico", name: "calico") {
id
}
ciliumcli_binary: repository(owner: "cilium", name: "cilium-cli") {
id
}
crictl: repository(owner: "kubernetes-sigs", name: "cri-tools") {
id
}
crio_archive: repository(owner: "cri-o", name: "cri-o") {
id
}
etcd_binary: repository(owner: "etcd-io", name: "etcd") {
id
}
kubectl: repository(owner: "kubernetes", name: "kubernetes") {
id
}
nerdctl_archive: repository(owner: "containerd", name: "nerdctl") {
id
}
runc: repository(owner: "opencontainers", name: "runc") {
id
}
skopeo_binary: repository(owner: "lework", name: "skopeo-binary") {
id
}
yq: repository(owner: "mikefarah", name: "yq") {
id
}
youki: repository(owner: "youki-dev", name: "youki") {
id
}
kubernetes: repository(owner: "kubernetes", name: "kubernetes") {
id
}
cri_dockerd: repository(owner: "Mirantis", name: "cri-dockerd") {
id
}
kata: repository(owner: "kata-containers", name: "kata-containers") {
id
}
crun: repository(owner: "containers", name: "crun") {
id
}
gvisor: repository(owner: "google", name: "gvisor") {
id
}
}'
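The query above aliases each repository to the download name used in checksums, so `gh api graphql` returns one node ID per alias inside the standard GraphQL response envelope. Flattening that envelope into an alias-to-ID map is a one-liner; a minimal sketch (the response below is fabricated for illustration, real node IDs differ):

```python
import json

# Fabricated example of the GraphQL response envelope returned by `gh api graphql`.
raw = '{"data": {"runc": {"id": "R_kgDOB0AAAA"}, "yq": {"id": "R_kgDOB1BBBB"}}}'

response = json.loads(raw)
# Each alias maps to a {"id": ...} node; keep just the IDs.
ids = {alias: node["id"] for alias, node in response["data"].items()}
print(ids)  # {'runc': 'R_kgDOB0AAAA', 'yq': 'R_kgDOB1BBBB'}
```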

@@ -1,34 +0,0 @@
- Core
- [kubernetes](https://github.com/kubernetes/kubernetes) {{ kube_version }}
- [etcd](https://github.com/etcd-io/etcd) {{ etcd_version }}
- [docker](https://www.docker.com/) v{{ docker_version }}
- [containerd](https://containerd.io/) v{{ containerd_version }}
- [cri-o](http://cri-o.io/) {{ crio_version }} (experimental: see [CRI-O Note](docs/CRI/cri-o.md). Only on Fedora, Ubuntu and CentOS based OSes)
- Network Plugin
- [cni-plugins](https://github.com/containernetworking/plugins) {{ cni_version }}
- [calico](https://github.com/projectcalico/calico) {{ calico_version }}
- [cilium](https://github.com/cilium/cilium) {{ cilium_version }}
- [flannel](https://github.com/flannel-io/flannel) {{ flannel_version }}
- [kube-ovn](https://github.com/alauda/kube-ovn) {{ kube_ovn_version }}
- [kube-router](https://github.com/cloudnativelabs/kube-router) {{ kube_router_version }}
- [multus](https://github.com/k8snetworkplumbingwg/multus-cni) {{ multus_version }}
- [weave](https://github.com/rajch/weave) v{{ weave_version }}
- [kube-vip](https://github.com/kube-vip/kube-vip) {{ kube_vip_version }}
- Application
- [cert-manager](https://github.com/jetstack/cert-manager) {{ cert_manager_version }}
- [coredns](https://github.com/coredns/coredns) {{ coredns_version }}
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) {{ ingress_nginx_version }}
- [argocd](https://argoproj.github.io/) {{ argocd_version }}
- [helm](https://helm.sh/) {{ helm_version }}
- [metallb](https://metallb.universe.tf/) {{ metallb_version }}
- [registry](https://github.com/distribution/distribution) v{{ registry_version }}
- Storage Plugin
- [cephfs-provisioner](https://github.com/kubernetes-incubator/external-storage) {{ cephfs_provisioner_version }}
- [rbd-provisioner](https://github.com/kubernetes-incubator/external-storage) {{ rbd_provisioner_version }}
- [aws-ebs-csi-plugin](https://github.com/kubernetes-sigs/aws-ebs-csi-driver) {{ aws_ebs_csi_plugin_version }}
- [azure-csi-plugin](https://github.com/kubernetes-sigs/azuredisk-csi-driver) {{ azure_csi_plugin_version }}
- [cinder-csi-plugin](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md) {{ cinder_csi_plugin_version }}
- [gcp-pd-csi-plugin](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver) {{ gcp_pd_csi_plugin_version }}
- [local-path-provisioner](https://github.com/rancher/local-path-provisioner) {{ local_path_provisioner_version }}
- [local-volume-provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) {{ local_volume_provisioner_version }}
- [node-feature-discovery](https://github.com/kubernetes-sigs/node-feature-discovery) {{ node_feature_discovery_version }}

@@ -1,22 +0,0 @@
#!/usr/bin/env ansible-playbook
---
- name: Update README.md versions
hosts: localhost
connection: local
gather_facts: false
vars:
fallback_ip: 'bypass tasks in kubespray-defaults'
roles:
- kubespray-defaults
tasks:
- name: Include versions not in kubespray-defaults
include_vars: "{{ item }}"
loop:
- ../roles/container-engine/docker/defaults/main.yml
- ../roles/kubernetes/node/defaults/main.yml
- ../roles/kubernetes-apps/argocd/defaults/main.yml
- name: Render versions in README.md
blockinfile:
marker: '<!-- {mark} ANSIBLE MANAGED BLOCK -->'
block: "\n{{ lookup('ansible.builtin.template', 'readme_versions.md.j2') }}\n\n"
path: ../README.md

@@ -1,8 +1,8 @@
# See the OWNERS docs at https://go.k8s.io/owners
approvers:
- yankay
- woopstar
- ant31
reviewers:
- yankay
- woopstar
- ant31

@@ -76,13 +76,6 @@ images:
converted: true
tag: "latest"
almalinux-9:
filename: AlmaLinux-9-GenericCloud-9.5-20241120.x86_64.qcow2
url: https://repo.almalinux.org/almalinux/9.5/cloud/x86_64/images/AlmaLinux-9-GenericCloud-9.5-20241120.x86_64.qcow2
checksum: sha256:abddf01589d46c841f718cec239392924a03b34c4fe84929af5d543c50e37e37
converted: true
tag: "latest"
rockylinux-8:
filename: Rocky-8-GenericCloud-8.6-20220515.x86_64.qcow2
url: https://download.rockylinux.org/pub/rocky/8.6/images/Rocky-8-GenericCloud-8.6-20220515.x86_64.qcow2
@@ -145,10 +138,3 @@ images:
checksum: sha256:c6af522d36d659b66da668cc4eb86b032a9cff05a95a0e37505a63e70ed585dc
converted: true
tag: "latest"
flatcar-4081:
filename: flatcar_production_kubevirt_image.qcow2
url: https://stable.release.flatcar-linux.net/amd64-usr/4081.2.1/flatcar_production_kubevirt_image.qcow2
checksum: sha512:6999ef068380c9842e4caf7afc2a1c66d4d03309f7bfa2f5f500757c36d1f935961f5662cc69376aa3d701e4c2d264f4356d4daadbb68e55becb710067e22c5d
converted: true
tag: latest

@@ -9,22 +9,19 @@ create-tf:
delete-tf:
./scripts/delete-tf.sh
$(INVENTORY_DIR):
mkdir $@
create-packet: init-packet | $(INVENTORY_DIR)
create-packet: init-packet
ansible-playbook cloud_playbooks/create-packet.yml -c local \
-e @"files/${CI_JOB_NAME}.yml" \
-e test_name="$(subst .,-,$(CI_PIPELINE_ID)-$(CI_JOB_ID))" \
-e branch="$(CI_COMMIT_BRANCH)" \
-e pipeline_id="$(CI_PIPELINE_ID)" \
-e inventory_path=$|
-e inventory_path=$(INVENTORY_DIR)
delete-packet: ;
create-vagrant: | $(INVENTORY_DIR)
create-vagrant:
vagrant up
cp $(CI_PROJECT_DIR)/.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory $|
cp $(CI_PROJECT_DIR)/.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory $(INVENTORY_DIR)
delete-vagrant:
vagrant destroy -f

@@ -27,7 +27,6 @@ mode: all-in-one
cloud_init:
centos-8: "I2Nsb3VkLWNvbmZpZwpzeXN0ZW1faW5mbzoKICBkaXN0cm86IHJoZWwKdXNlcnM6CiAtIG5hbWU6IGt1YmVzcHJheQogICBncm91cHM6IHdoZWVsCiAgIHN1ZG86ICdBTEw9KEFMTCkgTk9QQVNTV0Q6QUxMJwogICBzaGVsbDogL2Jpbi9iYXNoCiAgIGxvY2tfcGFzc3dkOiBGYWxzZQogICBob21lOiAvaG9tZS9rdWJlc3ByYXkKICAgc3NoX2F1dGhvcml6ZWRfa2V5czoKICAgICAtIHNzaC1yc2EgQUFBQUIzTnphQzF5YzJFQUFBQURBUUFCQUFBQkFRQ2FuVGkvZUt4MCt0SFlKQWVEaHErc0ZTMk9iVVAxL0k2OWY3aVYzVXRrS2xUMjBKZlcxZjZGZVh0LzA0VmYyN1dRcStOcXM2dkdCcUQ5UVhTWXVmK3QwL3M3RVBMalRlaTltZTFtcHFyK3VUZStLRHRUUDM5cGZEMy9lVkNhZUI3MjZHUDJGa2FEMEZ6cG1FYjY2TzNOcWh4T1E5Nkd4LzlYVHV3L0szbGxqNE9WRDZHcmpSM0I3YzRYdEVCc1pjWnBwTUovb0gxbUd5R1hkaDMxbVdRU3FBUk8vUDhVOEd3dDArSEdwVXdoL2hkeTN0K1NZb1RCMkd3VmIwem95Vnd0VnZmRFF6c204ZnEzYXY0S3ZlejhrWXVOREp2MDV4NGx2VVpnUjE1WkRSWHNBbmRoUXlxb1hkQ0xBZTArZWFLWHE5QmtXeEtGYjloUGUwQVVqamE1Cgo="
almalinux-8: "I2Nsb3VkLWNvbmZpZwpzeXN0ZW1faW5mbzoKICBkaXN0cm86IHJoZWwKdXNlcnM6CiAtIG5hbWU6IGt1YmVzcHJheQogICBncm91cHM6IHdoZWVsCiAgIHN1ZG86ICdBTEw9KEFMTCkgTk9QQVNTV0Q6QUxMJwogICBzaGVsbDogL2Jpbi9iYXNoCiAgIGxvY2tfcGFzc3dkOiBGYWxzZQogICBob21lOiAvaG9tZS9rdWJlc3ByYXkKICAgc3NoX2F1dGhvcml6ZWRfa2V5czoKICAgICAtIHNzaC1yc2EgQUFBQUIzTnphQzF5YzJFQUFBQURBUUFCQUFBQkFRQ2FuVGkvZUt4MCt0SFlKQWVEaHErc0ZTMk9iVVAxL0k2OWY3aVYzVXRrS2xUMjBKZlcxZjZGZVh0LzA0VmYyN1dRcStOcXM2dkdCcUQ5UVhTWXVmK3QwL3M3RVBMalRlaTltZTFtcHFyK3VUZStLRHRUUDM5cGZEMy9lVkNhZUI3MjZHUDJGa2FEMEZ6cG1FYjY2TzNOcWh4T1E5Nkd4LzlYVHV3L0szbGxqNE9WRDZHcmpSM0I3YzRYdEVCc1pjWnBwTUovb0gxbUd5R1hkaDMxbVdRU3FBUk8vUDhVOEd3dDArSEdwVXdoL2hkeTN0K1NZb1RCMkd3VmIwem95Vnd0VnZmRFF6c204ZnEzYXY0S3ZlejhrWXVOREp2MDV4NGx2VVpnUjE1WkRSWHNBbmRoUXlxb1hkQ0xBZTArZWFLWHE5QmtXeEtGYjloUGUwQVVqamE1Cgo="
almalinux-9: "I2Nsb3VkLWNvbmZpZwpzeXN0ZW1faW5mbzoKICBkaXN0cm86IHJoZWwKdXNlcnM6CiAtIG5hbWU6IGt1YmVzcHJheQogICBncm91cHM6IHdoZWVsCiAgIHN1ZG86ICdBTEw9KEFMTCkgTk9QQVNTV0Q6QUxMJwogICBzaGVsbDogL2Jpbi9iYXNoCiAgIGxvY2tfcGFzc3dkOiBGYWxzZQogICBob21lOiAvaG9tZS9rdWJlc3ByYXkKICAgc3NoX2F1dGhvcml6ZWRfa2V5czoKICAgICAtIHNzaC1yc2EgQUFBQUIzTnphQzF5YzJFQUFBQURBUUFCQUFBQkFRQ2FuVGkvZUt4MCt0SFlKQWVEaHErc0ZTMk9iVVAxL0k2OWY3aVYzVXRrS2xUMjBKZlcxZjZGZVh0LzA0VmYyN1dRcStOcXM2dkdCcUQ5UVhTWXVmK3QwL3M3RVBMalRlaTltZTFtcHFyK3VUZStLRHRUUDM5cGZEMy9lVkNhZUI3MjZHUDJGa2FEMEZ6cG1FYjY2TzNOcWh4T1E5Nkd4LzlYVHV3L0szbGxqNE9WRDZHcmpSM0I3YzRYdEVCc1pjWnBwTUovb0gxbUd5R1hkaDMxbVdRU3FBUk8vUDhVOEd3dDArSEdwVXdoL2hkeTN0K1NZb1RCMkd3VmIwem95Vnd0VnZmRFF6c204ZnEzYXY0S3ZlejhrWXVOREp2MDV4NGx2VVpnUjE1WkRSWHNBbmRoUXlxb1hkQ0xBZTArZWFLWHE5QmtXeEtGYjloUGUwQVVqamE1Cgo="
rockylinux-8: "I2Nsb3VkLWNvbmZpZwpwYWNrYWdlczoKIC0gc3VkbwogLSBob3N0bmFtZQpzeXN0ZW1faW5mbzoKICBkaXN0cm86IHJoZWwKdXNlcnM6CiAtIG5hbWU6IGt1YmVzcHJheQogICBncm91cHM6IHdoZWVsCiAgIHN1ZG86ICdBTEw9KEFMTCkgTk9QQVNTV0Q6QUxMJwogICBzaGVsbDogL2Jpbi9iYXNoCiAgIGxvY2tfcGFzc3dkOiBGYWxzZQogICBob21lOiAvaG9tZS9rdWJlc3ByYXkKICAgc3NoX2F1dGhvcml6ZWRfa2V5czoKICAgICAtIHNzaC1yc2EgQUFBQUIzTnphQzF5YzJFQUFBQURBUUFCQUFBQkFRQ2FuVGkvZUt4MCt0SFlKQWVEaHErc0ZTMk9iVVAxL0k2OWY3aVYzVXRrS2xUMjBKZlcxZjZGZVh0LzA0VmYyN1dRcStOcXM2dkdCcUQ5UVhTWXVmK3QwL3M3RVBMalRlaTltZTFtcHFyK3VUZStLRHRUUDM5cGZEMy9lVkNhZUI3MjZHUDJGa2FEMEZ6cG1FYjY2TzNOcWh4T1E5Nkd4LzlYVHV3L0szbGxqNE9WRDZHcmpSM0I3YzRYdEVCc1pjWnBwTUovb0gxbUd5R1hkaDMxbVdRU3FBUk8vUDhVOEd3dDArSEdwVXdoL2hkeTN0K1NZb1RCMkd3VmIwem95Vnd0VnZmRFF6c204ZnEzYXY0S3ZlejhrWXVOREp2MDV4NGx2VVpnUjE1WkRSWHNBbmRoUXlxb1hkQ0xBZTArZWFLWHE5QmtXeEtGYjloUGUwQVVqamE1Cgo="
rockylinux-9: "I2Nsb3VkLWNvbmZpZwpwYWNrYWdlczoKIC0gc3VkbwogLSBob3N0bmFtZQpzeXN0ZW1faW5mbzoKICBkaXN0cm86IHJoZWwKdXNlcnM6CiAtIG5hbWU6IGt1YmVzcHJheQogICBncm91cHM6IHdoZWVsCiAgIHN1ZG86ICdBTEw9KEFMTCkgTk9QQVNTV0Q6QUxMJwogICBzaGVsbDogL2Jpbi9iYXNoCiAgIGxvY2tfcGFzc3dkOiBGYWxzZQogICBob21lOiAvaG9tZS9rdWJlc3ByYXkKICAgc3NoX2F1dGhvcml6ZWRfa2V5czoKICAgICAtIHNzaC1yc2EgQUFBQUIzTnphQzF5YzJFQUFBQURBUUFCQUFBQkFRQ2FuVGkvZUt4MCt0SFlKQWVEaHErc0ZTMk9iVVAxL0k2OWY3aVYzVXRrS2xUMjBKZlcxZjZGZVh0LzA0VmYyN1dRcStOcXM2dkdCcUQ5UVhTWXVmK3QwL3M3RVBMalRlaTltZTFtcHFyK3VUZStLRHRUUDM5cGZEMy9lVkNhZUI3MjZHUDJGa2FEMEZ6cG1FYjY2TzNOcWh4T1E5Nkd4LzlYVHV3L0szbGxqNE9WRDZHcmpSM0I3YzRYdEVCc1pjWnBwTUovb0gxbUd5R1hkaDMxbVdRU3FBUk8vUDhVOEd3dDArSEdwVXdoL2hkeTN0K1NZb1RCMkd3VmIwem95Vnd0VnZmRFF6c204ZnEzYXY0S3ZlejhrWXVOREp2MDV4NGx2VVpnUjE1WkRSWHNBbmRoUXlxb1hkQ0xBZTArZWFLWHE5QmtXeEtGYjloUGUwQVVqamE1Cgo="
debian-11: "I2Nsb3VkLWNvbmZpZwogdXNlcnM6CiAgLSBuYW1lOiBrdWJlc3ByYXkKICAgIHN1ZG86IEFMTD0oQUxMKSBOT1BBU1NXRDpBTEwKICAgIHNoZWxsOiAvYmluL2Jhc2gKICAgIGxvY2tfcGFzc3dkOiBGYWxzZQogICAgaG9tZTogL2hvbWUva3ViZXNwcmF5CiAgICBzc2hfYXV0aG9yaXplZF9rZXlzOgogICAgICAtIHNzaC1yc2EgQUFBQUIzTnphQzF5YzJFQUFBQURBUUFCQUFBQkFRQ2FuVGkvZUt4MCt0SFlKQWVEaHErc0ZTMk9iVVAxL0k2OWY3aVYzVXRrS2xUMjBKZlcxZjZGZVh0LzA0VmYyN1dRcStOcXM2dkdCcUQ5UVhTWXVmK3QwL3M3RVBMalRlaTltZTFtcHFyK3VUZStLRHRUUDM5cGZEMy9lVkNhZUI3MjZHUDJGa2FEMEZ6cG1FYjY2TzNOcWh4T1E5Nkd4LzlYVHV3L0szbGxqNE9WRDZHcmpSM0I3YzRYdEVCc1pjWnBwTUovb0gxbUd5R1hkaDMxbVdRU3FBUk8vUDhVOEd3dDArSEdwVXdoL2hkeTN0K1NZb1RCMkd3VmIwem95Vnd0VnZmRFF6c204ZnEzYXY0S3ZlejhrWXVOREp2MDV4NGx2VVpnUjE1WkRSWHNBbmRoUXlxb1hkQ0xBZTArZWFLWHE5QmtXeEtGYjloUGUwQVVqamE1"

@@ -4,8 +4,11 @@
- name: Start vms for CI job
vars:
# Workaround for compatibility when testing upgrades with old == before e9d406ed088d4291ef1d9018c170a4deed2bf928
# TODO: drop after 2.27.0
legacy_groups: "{{ (['kube_control_plane', 'kube_node', 'calico_rr'] | intersect(item) | length > 0) | ternary(['k8s_cluster'], []) }}"
tvars:
kubespray_groups: "{{ item }}"
kubespray_groups: "{{ item + legacy_groups }}"
kubernetes.core.k8s:
definition: "{{ lookup('template', 'vm.yml.j2', template_vars=tvars) }}"
loop: "{{ scenarios[mode | d('default')] }}"
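The `legacy_groups` expression above combines Jinja's `intersect` and `ternary` filters; in plain Python terms it behaves roughly like this sketch:

```python
def legacy_groups(item: list[str]) -> list[str]:
    # Add the legacy 'k8s_cluster' group when the VM belongs to any of the
    # groups that were previously its children (see the workaround comment).
    legacy_parents = {'kube_control_plane', 'kube_node', 'calico_rr'}
    return ['k8s_cluster'] if legacy_parents & set(item) else []

print(legacy_groups(['kube_node', 'etcd']))  # ['k8s_cluster']
print(legacy_groups(['etcd']))               # []
```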

@@ -8,6 +8,9 @@ unsafe_show_logs: true
docker_registry_mirrors:
- "https://mirror.gcr.io"
containerd_grpc_max_recv_message_size: 16777216
containerd_grpc_max_send_message_size: 16777216
containerd_registries_mirrors:
- prefix: docker.io
mirrors:
@@ -17,6 +20,9 @@ containerd_registries_mirrors:
- host: https://registry-1.docker.io
capabilities: ["pull", "resolve"]
skip_verify: false
containerd_max_container_log_line_size: 16384
crio_registries:
- prefix: docker.io
insecure: false

@@ -1,6 +1,6 @@
---
# Instance settings
cloud_image: almalinux-9
cloud_image: almalinux-8
mode: ha
vm_memory: 3072

@@ -1,6 +1,6 @@
---
# Instance settings
cloud_image: almalinux-9
cloud_image: almalinux-8
mode: default
vm_memory: 3072

@@ -1,6 +1,6 @@
---
# Instance settings
cloud_image: almalinux-9
cloud_image: almalinux-8
mode: ha
# Kubespray settings

@@ -4,6 +4,19 @@ cloud_image: almalinux-8
mode: default
vm_memory: 3072
# Workaround for RHEL8: kernel version 4.18 is lower than Kubernetes system verification.
kubeadm_ignore_preflight_errors:
- SystemVerification
# Kubespray settings
metrics_server_enabled: true
dashboard_namespace: "kube-dashboard"
dashboard_enabled: true
loadbalancer_apiserver_type: haproxy
local_path_provisioner_enabled: true
# NTP management
ntp_enabled: true
ntp_timezone: Etc/UTC
ntp_manage_config: true
ntp_tinker_panic: true
ntp_force_sync_immediately: true
# Scheduler plugins
scheduler_plugins_enabled: true

Some files were not shown because too many files have changed in this diff.