Compare commits


78 Commits

Author SHA1 Message Date
Antoine Legrand
2720c8137a Revert "Add support for ipv6 only cluster via "enable_ipv6only_stack_networks…"
This reverts commit 76c0a3aa75.
2025-02-03 13:53:39 +01:00
Bas
59e1638ae1 Bugfix/11936 - backup: "{{ leave_etc_backup_files }}" (#11937)
* Adding the var: leave_etc_backup_files

* Fix for #11936 - backup: "{{ leave_etc_backup_files }}"
2025-01-30 06:19:23 -08:00
dependabot[bot]
6af849089e build(deps): bump the molecule group with 2 updates (#11933)
Bumps the molecule group with 2 updates: [molecule](https://github.com/ansible-community/molecule) and [molecule-plugins[vagrant]](https://github.com/ansible-community/molecule-plugins).


Updates `molecule` from 24.12.0 to 25.1.0
- [Release notes](https://github.com/ansible-community/molecule/releases)
- [Commits](https://github.com/ansible-community/molecule/compare/v24.12.0...v25.1.0)

Updates `molecule-plugins[vagrant]` from 23.6.0 to 23.7.0
- [Release notes](https://github.com/ansible-community/molecule-plugins/releases)
- [Commits](https://github.com/ansible-community/molecule-plugins/compare/v23.6.0...v23.7.0)

---
updated-dependencies:
- dependency-name: molecule
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: molecule
- dependency-name: molecule-plugins[vagrant]
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: molecule
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-01-28 07:05:26 -08:00
Arthur Outhenin-Chalandre
46e1fbcdd9 dependabot: add group for molecule (#11927)
Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>
2025-01-28 00:59:23 -08:00
Max Gautier
1567e8ee6c Add timestamp to kaniko builds (#11923)
The build steps at the start of CI take about 2 minutes; now that we
have greatly reduced the overall duration, this is not an insignificant
impact.

Add timestamps to the build process to measure which steps of the
image build take the most time.
2025-01-27 06:17:23 -08:00
Boris
76c0a3aa75 Add support for ipv6 only cluster via "enable_ipv6only_stack_networks" (#11831) 2025-01-27 04:15:22 -08:00
Qasim Mehmood
e107022b4b Publish the ingress-nginx service address if manual address not defined and not using host network (#11879) 2025-01-24 00:47:21 -08:00
Anshuman Agarwala
ebcf9c3fff Updated sample in inventory (#11895)
* Updated sample in inventory

* Review changes
2025-01-23 21:39:21 -08:00
Max Gautier
d23c1464c9 Remove krew support (#11824)
* Remove krew installation support

Krew exists fundamentally to install kubectl plugins, which are eminently
a client-side concern.
It's also not difficult to install on a client machine.

* Remove krew cleanup
2025-01-23 20:45:21 -08:00
Kubernetes Prow Robot
cbd0b7bbc3 Merge pull request #11901 from VannTen/cleanup/verify_settings
Cleanup of preinstall assertions
2025-01-23 08:40:58 -08:00
Max Gautier
67a73764e4 Remove deprecation checks admission plugins list
This assertion has been present since 2022; users' inventories should be
clean of it by now.
2025-01-23 14:32:43 +01:00
Max Gautier
fba31beb07 Remove containerd_config assert
This assert has been present since 2021; we can assume users have
removed it from their inventories by now.
2025-01-23 14:32:43 +01:00
Max Gautier
775361206c Drop compatibility for etcd_kubeadm_enabled
This has been deprecated for a long time; time to pull the plug.
We leave an assert for one release to have a straightforward failure if
some users were still using the variable.
2025-01-23 14:32:42 +01:00
Max Gautier
12a2c5eaa8 verify_settings: consolidate choices validation 2025-01-23 14:32:42 +01:00
Max Gautier
ed789c9b97 etcd_kubeadm simplify assert 2025-01-23 14:32:41 +01:00
Max Gautier
85d9e3e2ae Don't check address space when using 'none' network plugin
Since 'none' can be, for instance, a manual calico deployment, don't
check whether there are enough IPs for pods on a node, because the plugin
can use another mechanism than the podCIDR to allocate IPs.
2025-01-23 14:32:40 +01:00
Max Gautier
98cdb5348c verify settings: fix etcd assertion with implicit etcd group
When the etcd group is not specified, we assume it's kube_control_plane.
In that case, the etcd member count still can't be even, so instead of only
checking the etcd group we need to default to kube_control_plane
2025-01-23 14:30:28 +01:00
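The fallback logic of that fix can be sketched in plain Python (the group names follow Kubespray's inventory conventions; the function itself is a hypothetical stand-in for the Ansible assertion):

```python
def etcd_parity_check(groups):
    """Sketch of the fixed assertion: when no explicit 'etcd' group is
    defined, fall back to kube_control_plane before checking parity."""
    hosts = groups.get("etcd") or groups["kube_control_plane"]
    # etcd quorum requires an odd number of members
    if len(hosts) % 2 == 0:
        raise ValueError("etcd must have an odd number of members")
    return hosts
```

The point of the fix is that the parity check must use the same fallback as the rest of the playbooks, instead of only looking at an `etcd` group that may not exist.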
Max Gautier
f53552e56b verify_settings: Consolidate assert loop in one task 2025-01-23 14:30:26 +01:00
Max Gautier
277ab7339a verify_settings: fix bad task name + remove redundant conditions 2025-01-23 14:29:48 +01:00
Max Gautier
191f71afea Drop explicit k8s_cluster group in CI inventory (#11858)
This removes compatibility with releases below 2.27.0, now that it has
been released and that we're testing upgrades against it.
2025-01-23 02:34:58 -08:00
Max Gautier
bfe858ba06 CI: cleanup dependencies, pre-commit autoupdate (#11904)
ansible-lint and yamllint are run as pre-commit hooks, which are
installed by pre-commit directly. So there is no need to put them in
tests/requirements.txt.

So remove them and make it leaner.
2025-01-23 01:56:59 -08:00
Max Gautier
f8c4d5a899 Fix: hide 'ansible managed' markers in README.md (#11919)
[//]: -> apparently does not work for hiding comments in GitHub markdown
2025-01-23 01:34:58 -08:00
c-romeo
9008c40d0e fix Calico typha deployment issue: #11916 (#11917) 2025-01-23 01:05:01 -08:00
Kubernetes Prow Robot
5a7e1be070 Merge pull request #11905 from VannTen/feat/readme_template_version
Update README.md versions automatically in pre-commit
2025-01-22 19:42:37 -08:00
Max Gautier
2a7b50a016 calico: don't set calico-node cpu limits by default (#11914)
Upstream calico isn't doing that, and:
- this can cause throttling
- the cpu needed by calico is very cluster / workload dependent
- missing cpu limits will not starve other pods (unlike missing memory
  requests), because the kernel scheduler will still give priority to
  other processes in pods not exceeding their requests
2025-01-22 19:24:36 -08:00
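A hypothetical container spec illustrating the resulting shape (the values are illustrative, not Kubespray's defaults):

```yaml
resources:
  requests:
    cpu: 150m        # still informs scheduling and CFS priority
    memory: 64Mi
  limits:
    memory: 300Mi    # memory stays capped; no cpu limit, so no CFS throttling
```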
Max Gautier
d2e51e777c CI: cleanup vars identical to kubespray defaults (#11903) 2025-01-21 05:46:37 -08:00
Max Gautier
89476b48e5 CI: scope stdout debug callback to kubespray test runs
The debug callback apparently breaks using ansible-playbook in
pre-commit, so scope the variables to only where we're using it instead.
2025-01-21 14:07:32 +01:00
Max Gautier
3f01d4725d Apply new pre-commit version updater 2025-01-21 12:10:43 +01:00
Max Gautier
a142f40e25 Update versions in README.md with pre-commit
Currently, versions in README.md need to be manually updated, and we
check it's done with a bash script.

Add a small utility playbook to add versions in README.md from their
actual default values, automatically.
This is done in pre-commit, and replaces the scripted check; instead it
will autofix the README.md, and fail in CI if needed.

We move markdownlint behind the local hooks to give it the opportunity
to catch a problem with the rendering.
2025-01-21 12:10:21 +01:00
Max Gautier
0e91000a04 CI: remove retry from jobs (#11899)
Since e8ee42280 (CI: remove deletion tasks of 'packet' VMs, 2024-09-13),
our tests appear not to be flaky anymore.
The current retry slows down the testing feedback on pull requests.

Since it's not needed anymore, don't retry and fail fast.
2025-01-19 18:38:35 -08:00
Kubernetes Prow Robot
e73c2d081c Merge pull request #11898 from VannTen/cleanup/ci/run_without_sample
Run CI without the sample inventory
2025-01-17 08:00:36 -08:00
Max Gautier
5862bff044 ci: show pre-commit diff on failure
Sometimes the changes done by pre-commit are not obvious; this should
help.
2025-01-17 16:22:58 +01:00
Max Gautier
b548ccbe7f Adapt CI/vagrant to run without sample inventory 2025-01-17 16:22:57 +01:00
Kubernetes Prow Robot
a5142e7dfd Merge pull request #11891 from VannTen/download_graphql
Overhaul of the python hashes updater
2025-01-17 04:16:07 -08:00
ChengHao Yang
3930919283 Cleanup OWNERS files in each folders (#11892)
* Clean up OWNERS entries for members not in k-sigs

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

* Cleanup inactive members on Kubespray

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

---------

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-01-15 12:10:34 -08:00
Kay Yan
b104bb7a57 [kubernetes] Support Kubernetes v1.32.0 with RHEL8 (#11885)
* [kubernetes] Support Kubernetes v1.32.0

* add workaround for RHEL8

Signed-off-by: Kay Yan <kay.yan@daocloud.io>

---------

Signed-off-by: Kay Yan <kay.yan@daocloud.io>
Co-authored-by: Mohamed Zaian <mohamedzaian@gmail.com>
2025-01-15 08:54:35 -08:00
Max Gautier
bc36e9d440 hash-updater: apply formatter 2025-01-15 14:34:48 +01:00
Max Gautier
d8629b8e7e download: separate static metadata into its own file
By separating logic from data, we should make it easier to add new
components.
2025-01-15 14:32:49 +01:00
Bas
c84336b48c Contrib: upload2artifactory.py (#11886)
* Contrib: upload2artifactory.py

Signed-off-by: Bas Meijer <bas.meijer@enexis.nl>

* Pythonic

Signed-off-by: Bas Meijer <bas.meijer@enexis.nl>

* Suggested

Co-authored-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>

* upload2artifactory.py documentation.

---------

Signed-off-by: Bas Meijer <bas.meijer@enexis.nl>
Co-authored-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>
2025-01-15 05:18:33 -08:00
Christian Kröger
403a73ac11 [ingress-nginx] expose custom tcp and udp ports in ingress-nginx-controller (#11850) 2025-01-15 05:14:33 -08:00
Fredrik Liv
5ca23e3bfe Changed to use first_kube_control_plane to parse kubeadm_certificate_key (#11875)
Co-authored-by: nvalembois <nvalembois@live.com>
2025-01-14 08:34:34 -08:00
Max Gautier
4d3f06e69e download: cleanup graphQL query
- remove unused parts in the response
- clarify variables names
2025-01-14 17:04:29 +01:00
Max Gautier
d17bd286ea download: allow excluding some component
This is handy when some component's release is buggy (missing file at the
download links), so it doesn't block everything else.

Move the filtering up the stack so we don't have to do it multiple
times.
2025-01-14 17:04:28 +01:00
Max Gautier
55cff4f3d3 download: get checksums file relative to git root
This means the update-hashes command can be run anywhere in the Kubespray
repository without having to figure out the correct path.
2025-01-14 17:04:28 +01:00
Max Gautier
76e07daa12 download: put graphQL query in package + read from importlib 2025-01-14 17:04:27 +01:00
Max Gautier
a551922c84 Adapt download.py to run as a package script 2025-01-14 17:04:27 +01:00
Max Gautier
ba3258d7f0 Move download_hash.py into a python package
Can operate on several branches without the need for backport
2025-01-14 17:04:26 +01:00
Max Gautier
9b56840d51 download: create pyproject.toml 2025-01-14 17:04:24 +01:00
Max Gautier
4351b47ebe download: convert to logging 2025-01-14 17:04:18 +01:00
Max Gautier
b08c5e8b14 download: Log Github rate-limit status 2025-01-14 17:02:29 +01:00
Kay Yan
3527cb1916 Update CI test from AlmaLinux8 to AlmaLinux9 (#11889)
Signed-off-by: Kay Yan <kay.yan@daocloud.io>
2025-01-14 02:50:32 -08:00
Max Gautier
81790cab91 download: remove unneeded imports 2025-01-14 10:41:42 +01:00
Max Gautier
9fbc566d98 download: Support adding new versions and update the doc 2025-01-14 10:41:41 +01:00
Max Gautier
ff768cc9fe download: support multiple hash algorithm 2025-01-14 10:41:41 +01:00
Max Gautier
ff3d9a0443 download: Support for gvisor (part 2)
Gvisor releases, besides only being tags, have some particularities:
- they are of the form yyyymmdd.p -> this gets interpreted as a yaml
  float, so we need to explicitly convert it to a string to make it work.
- there is no semver-like attached to the version numbers, but the API
  (= OCI container runtime interface) is expected to be stable (see
  linked discussion)
- some older tags don't have hashes for some archs

Link: https://groups.google.com/g/gvisor-users/c/SxMeHt0Yb6Y/m/Xtv7seULCAAJ
2025-01-14 10:41:40 +01:00
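The float pitfall described above can be reproduced with plain Python arithmetic, which applies the same coercion a YAML parser applies to an unquoted yyyymmdd.p tag (the helper function is hypothetical):

```python
# A gvisor tag like '20240212.10' read as a float silently loses its
# trailing zero, corrupting the patch component:
assert str(float("20240212.10")) == "20240212.1"


def gvisor_version(tag):
    """Hypothetical helper: always force the tag to a string so it is
    quoted when dumped back into the checksums YAML."""
    return str(tag)
```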
Max Gautier
6608efb2c4 download: compute version from Github tags for gvisor
Gvisor is the only one of our deployed components which uses tags instead
of proper releases. So the tags scraping support will, for now, cater to
gvisor particularities, notably in the tag name format and the fact that
some older releases don't have the same URL scheme.
2025-01-14 10:41:39 +01:00
Max Gautier
479fda6355 download: support cri-dockerd, youki, kata, crun 2025-01-14 10:41:39 +01:00
Max Gautier
3a44411aa1 Support project using alternates names for arch
(the url should use `alt_arch` instead of `arch` for those)
2025-01-14 10:41:38 +01:00
Max Gautier
9334bc1fee support components with no premade hashes 2025-01-14 10:41:38 +01:00
Max Gautier
c94daa4ff5 download: Update yaml data with new hashes 2025-01-14 10:41:37 +01:00
Max Gautier
5be8155394 remove old loops and generators 2025-01-14 10:41:36 +01:00
Max Gautier
08913c4aa0 Don't use 'checksum' in the components names 2025-01-14 10:41:36 +01:00
Max Gautier
38dd224ffe Extract get_hash into its own function
Also, always raise even for 404 not found (should not happen now that
we'll use GraphQL to find the exact set of versions)
2025-01-14 10:41:36 +01:00
Max Gautier
24c59cee59 download_hash: adapt download urls to v-less versions 2025-01-14 10:41:35 +01:00
Max Gautier
2be54b2bd7 Filter new versions for new ones and same minor releases
We're only interested in new patch releases for auto-update.
2025-01-14 10:41:35 +01:00
Max Gautier
ae68766015 Filter by github results InvalidVersion
Containerd uses the same repository for releases of its gRPC API (which
we are not interested in).
Conveniently, those releases have tags which are not valid version
numbers (being prefixed with 'api/').

This could also be potentially useful for similar cases.
The risk of missing releases because of this is low, since it would
require that a project issue a new release with an invalid format, then
switch back to the previous format (or we miss the fact it's not
updating for a long period of time).
2025-01-14 10:41:34 +01:00
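A sketch of that filtering using only the standard library (the actual script presumably relies on the packaging library's InvalidVersion; `parse_version` here is a simplified stand-in):

```python
def parse_version(tag):
    """Stand-in for a strict version parser: raise on tags that are not
    plain dotted version numbers, e.g. containerd's 'api/1.8.0' tags."""
    parts = tag.lstrip("v").split(".")
    if not all(p.isdigit() for p in parts):
        raise ValueError(tag)
    return tuple(int(p) for p in parts)


def valid_versions(tags):
    out = []
    for tag in tags:
        try:
            parse_version(tag)
        except ValueError:
            continue  # drop gRPC API releases and other non-version tags
        out.append(tag)
    return out
```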
Max Gautier
9f58ba60f3 download: compute new versions from Github API
We obtain the set of versions from Github, then for each component we do
a set comparison to determine which versions we don't have.
2025-01-14 10:41:34 +01:00
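Combined with the patch-release filter from the earlier commit, the set comparison might look like this sketch (the function name and exact semantics are assumptions, not the script's actual code):

```python
def new_patch_versions(known, upstream):
    """Keep only upstream versions that are newer patches within a
    minor release we already ship."""
    def as_tuple(v):
        return tuple(int(p) for p in v.split("."))

    latest = {}  # (major, minor) -> highest known patch
    for v in map(as_tuple, known):
        latest[v[:2]] = max(latest.get(v[:2], -1), v[2])
    # set difference against known, restricted to known minors
    new = {v for v in map(as_tuple, upstream)
           if v[:2] in latest and v[2] > latest[v[:2]]}
    return sorted(".".join(map(str, v)) for v in new)
```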
Max Gautier
a6219c84c9 Put graphql query in its own file 2025-01-14 10:41:33 +01:00
Max Gautier
7941be127d downloads: add graphql node ids
The Github graphQL API needs IDs for querying a variable array of
repositories.

Use a dict for components instead of an array of url and record the
corresponding node ID for each component (there are duplicates because
some binaries are provided by the same project/repository).
2025-01-14 10:41:33 +01:00
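The shape of such a query can be sketched as follows — GitHub's GraphQL `nodes(ids:)` field takes an array of node IDs, which is why each component records its repository ID. The exact fields selected here are illustrative, not necessarily what the script requests:

```python
import json

QUERY = """
query($ids: [ID!]!) {
  nodes(ids: $ids) {
    ... on Repository {
      nameWithOwner
      releases(last: 50) { nodes { tagName } }
    }
  }
}
"""


def build_payload(node_ids):
    """Build the POST body for the GitHub GraphQL endpoint."""
    return json.dumps({"query": QUERY, "variables": {"ids": node_ids}})
```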
Max Gautier
c938dfa634 scripts: get_nodes_ids.sh
Add the script used to obtain graphql node IDs from Github so it's
easier to add a new component.
2025-01-14 10:41:31 +01:00
ChengHao Yang
5a353cb04f Add manual option to the external_cloud_provider variable (#11883)
* Add `manual` option in the `external_cloud_provider` value

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

* Update external cloud provider description in roles & sample inventory

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

---------

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-01-13 00:12:34 -08:00
kyrie
1f186ed451 add containerd registry mirror certificate configuration (#11857)
Signed-off-by: KubeKyrie <shaolong.qin@daocloud.io>
2025-01-09 01:48:31 -08:00
Chad Swenson
8443f370d4 Structured AuthorizationConfiguration (#11852)
Adds the ability to configure the Kubernetes API server with a structured authorization configuration file.

Structured AuthorizationConfiguration is a new feature in Kubernetes v1.29+ (GA in v1.32) that configures the API server's authorization modes with a structured configuration file.
AuthorizationConfiguration files offer features not available with the `--authorization-mode` flag, although Kubespray supports both methods and authorization-mode remains the default for now.

Note: Because the `--authorization-config` and `--authorization-mode` flags are mutually exclusive, the `authorization_modes` ansible variable is ignored when `kube_apiserver_use_authorization_config_file` is set to true. The two features cannot be used at the same time.

Docs: https://kubernetes.io/docs/reference/access-authn-authz/authorization/#configuring-the-api-server-using-an-authorization-config-file
Blog + Examples: https://kubernetes.io/blog/2024/04/26/multi-webhook-and-modular-authorization-made-much-easier/
KEP: https://github.com/kubernetes/enhancements/tree/master/keps/sig-auth/3221-structured-authorization-configuration

I tested this all the way back to k8s v1.29 when AuthorizationConfiguration was first introduced as an alpha feature, although v1.29 required some additional workarounds with `kubeadm_patches`, which I included in example comments.

I also included some example comments with CEL expressions that allowed me to configure webhook authorizers without hitting kubeadm 1.29+ issues that block cluster creation and upgrades such as this one: https://github.com/kubernetes/cloud-provider-openstack/issues/2575.
My workaround configures the webhook to ignore requests from kubeadm and system components, which prevents fatal errors from webhooks that are not available yet, and should be authorized by Node or RBAC anyway.
2025-01-07 09:14:28 +01:00
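A minimal sketch of such an AuthorizationConfiguration file, assuming a kubeconfig-based webhook (names, paths, and the CEL expression are illustrative; see the linked docs for the authoritative schema):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AuthorizationConfiguration
authorizers:
  - type: Webhook
    name: example-webhook
    webhook:
      timeout: 3s
      authorizedTTL: 300s
      unauthorizedTTL: 30s
      subjectAccessReviewVersion: v1
      matchConditionSubjectAccessReviewVersion: v1
      failurePolicy: NoOpinion
      connectionInfo:
        type: KubeConfigFile
        kubeConfigFile: /etc/kubernetes/webhook.kubeconfig
      matchConditions:
        # skip system components so an unavailable webhook cannot block
        # kubeadm or the kubelet during bootstrap; Node/RBAC cover them
        - expression: "!('system:nodes' in request.groups)"
  - type: Node
    name: node
  - type: RBAC
    name: rbac
```

Authorizers are evaluated in order, so placing the webhook first with `failurePolicy: NoOpinion` lets Node and RBAC still authorize requests the webhook cannot answer.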
ChengHao Yang
1801debaea Add Flatcar 4081.2.1 image to test-infra (#11849)
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-01-07 08:38:28 +01:00
Kay Yan
369be00960 increase the memory requirement to 2GB (#11864)
Signed-off-by: Kay Yan <kay.yan@daocloud.io>
2025-01-07 08:00:28 +01:00
Kay Yan
ae1805587b cleaup for 2.27.0 (#11854)
Signed-off-by: Kay Yan <kay.yan@daocloud.io>
2025-01-07 05:06:29 +01:00
Noam
55d1e4a4b5 enable bash completion tasks for Suse OS family (#11860)
* remove check for os family on bash completion tasks

* add Suse
2025-01-06 15:36:16 +01:00
Max Gautier
ac9b76eb2e Ignore Mem preflight errors on ubuntu upgrade testcase (#11859) 2025-01-06 11:52:16 +01:00
108 changed files with 1138 additions and 1007 deletions


@@ -7,3 +7,8 @@ updates:
labels:
- dependencies
- release-note-none
groups:
molecule:
patterns:
- molecule
- molecule-plugins*


@@ -6,11 +6,10 @@ stages:
- deploy-extended
variables:
KUBESPRAY_VERSION: v2.26.0
KUBESPRAY_VERSION: v2.27.0
FAILFASTCI_NAMESPACE: 'kargo-ci'
GITLAB_REPOSITORY: 'kargo-ci/kubernetes-sigs-kubespray'
ANSIBLE_FORCE_COLOR: "true"
ANSIBLE_STDOUT_CALLBACK: "debug"
MAGIC: "ci check this"
GS_ACCESS_KEY_ID: $GS_KEY
GS_SECRET_ACCESS_KEY: $GS_SECRET
@@ -43,6 +42,8 @@ before_script:
- cluster-dump/
needs:
- pipeline-image
variables:
ANSIBLE_STDOUT_CALLBACK: "debug"
.job-moderated:
extends: .job
@@ -55,7 +56,6 @@ before_script:
.testcases: &testcases
extends: .job-moderated
retry: 1
interruptible: true
before_script:
- update-alternatives --install /usr/bin/python python /usr/bin/python3 1


@@ -25,6 +25,7 @@
--label 'git-branch'=$CI_COMMIT_REF_SLUG
--label 'git-tag=$CI_COMMIT_TAG'
--destination $PIPELINE_IMAGE
--log-timestamp=true
pipeline-image:
extends: .build-container


@@ -7,7 +7,7 @@ pre-commit:
variables:
PRE_COMMIT_HOME: /pre-commit-cache
script:
- pre-commit run --all-files
- pre-commit run --all-files --show-diff-on-failure
cache:
key: pre-commit-all
paths:


@@ -88,10 +88,10 @@ packet_ubuntu22-calico-all-in-one-upgrade:
packet_ubuntu24-calico-etcd-datastore:
extends: .packet_pr
packet_almalinux8-crio:
packet_almalinux9-crio:
extends: .packet_pr
packet_almalinux8-kube-ovn:
packet_almalinux9-kube-ovn:
extends: .packet_pr
packet_debian11-calico-collection:
@@ -103,6 +103,9 @@ packet_debian11-macvlan:
packet_debian12-cilium:
extends: .packet_pr
packet_almalinux8-calico:
extends: .packet_pr
packet_rockylinux8-calico:
extends: .packet_pr
@@ -136,7 +139,7 @@ packet_debian12-docker:
packet_debian12-calico:
extends: .packet_pr_extended
packet_almalinux8-calico-remove-node:
packet_almalinux9-calico-remove-node:
extends: .packet_pr_extended
variables:
REMOVE_NODE_CHECK: "true"
@@ -145,10 +148,10 @@ packet_almalinux8-calico-remove-node:
packet_rockylinux9-calico:
extends: .packet_pr_extended
packet_almalinux8-calico:
packet_almalinux9-calico:
extends: .packet_pr_extended
packet_almalinux8-docker:
packet_almalinux9-docker:
extends: .packet_pr_extended
packet_ubuntu24-calico-all-in-one:
@@ -179,10 +182,10 @@ packet_ubuntu20-flannel-ha-once:
packet_fedora39-calico-swap-selinux:
extends: .packet_pr_manual
packet_almalinux8-calico-ha-ebpf:
packet_almalinux9-calico-ha-ebpf:
extends: .packet_pr_manual
packet_almalinux8-calico-nodelocaldns-secondary:
packet_almalinux9-calico-nodelocaldns-secondary:
extends: .packet_pr_manual
packet_debian11-custom-cni:


@@ -20,12 +20,6 @@ repos:
- id: yamllint
args: [--strict]
- repo: https://github.com/markdownlint/markdownlint
rev: v0.12.0
hooks:
- id: markdownlint
exclude: "^.github|(^docs/_sidebar\\.md$)"
- repo: https://github.com/shellcheck-py/shellcheck-py
rev: v0.10.0.1
hooks:
@@ -35,7 +29,7 @@ repos:
files: "\\.sh$"
- repo: https://github.com/ansible/ansible-lint
rev: v24.12.2
rev: v25.1.0
hooks:
- id: ansible-lint
additional_dependencies:
@@ -51,12 +45,6 @@ repos:
- repo: local
hooks:
- id: check-readme-versions
name: check-readme-versions
entry: tests/scripts/check_readme_versions.sh
language: script
pass_filenames: false
- id: collection-build-install
name: Build and install kubernetes-sigs.kubespray Ansible collection
language: python
@@ -90,3 +78,17 @@ repos:
- jinja
additional_dependencies:
- jinja2
- id: render-readme-versions
name: Update versions in README.md to match their defaults values
language: python
additional_dependencies:
- ansible-core>=2.16.4
entry: scripts/render_readme_version.yml
pass_filenames: false
- repo: https://github.com/markdownlint/markdownlint
rev: v0.12.0
hooks:
- id: markdownlint
exclude: "^.github|(^docs/_sidebar\\.md$)"


@@ -77,43 +77,47 @@ vagrant up
- **Flatcar Container Linux by Kinvolk**
- **Debian** Bookworm, Bullseye
- **Ubuntu** 20.04, 22.04, 24.04
- **CentOS/RHEL** [8, 9](docs/operating_systems/centos.md#centos-8)
- **CentOS/RHEL** [8, 9](docs/operating_systems/rhel.md#rhel-8)
- **Fedora** 39, 40
- **Fedora CoreOS** (see [fcos Note](docs/operating_systems/fcos.md))
- **openSUSE** Leap 15.x/Tumbleweed
- **Oracle Linux** [8, 9](docs/operating_systems/centos.md#centos-8)
- **Alma Linux** [8, 9](docs/operating_systems/centos.md#centos-8)
- **Rocky Linux** [8, 9](docs/operating_systems/centos.md#centos-8)
- **Oracle Linux** [8, 9](docs/operating_systems/rhel.md#rhel-8)
- **Alma Linux** [8, 9](docs/operating_systems/rhel.md#rhel-8)
- **Rocky Linux** [8, 9](docs/operating_systems/rhel.md#rhel-8)
- **Kylin Linux Advanced Server V10** (experimental: see [kylin linux notes](docs/operating_systems/kylinlinux.md))
- **Amazon Linux 2** (experimental: see [amazon linux notes](docs/operating_systems/amazonlinux.md))
- **UOS Linux** (experimental: see [uos linux notes](docs/operating_systems/uoslinux.md))
- **openEuler** (experimental: see [openEuler notes](docs/operating_systems/openeuler.md))
Note: Upstart/SysV init based OS types are not supported.
Note:
- Upstart/SysV init based OS types are not supported.
- [Kernel requirements](docs/operations/kernel-requirements.md) (please read if the OS kernel version is < 4.19).
## Supported Components
<!-- BEGIN ANSIBLE MANAGED BLOCK -->
- Core
- [kubernetes](https://github.com/kubernetes/kubernetes) v1.31.4
- [kubernetes](https://github.com/kubernetes/kubernetes) v1.32.0
- [etcd](https://github.com/etcd-io/etcd) v3.5.16
- [docker](https://www.docker.com/) v26.1
- [containerd](https://containerd.io/) v1.7.24
- [cri-o](http://cri-o.io/) v1.31.0 (experimental: see [CRI-O Note](docs/CRI/cri-o.md). Only on fedora, ubuntu and centos based OS)
- [cri-o](http://cri-o.io/) v1.32.0 (experimental: see [CRI-O Note](docs/CRI/cri-o.md). Only on fedora, ubuntu and centos based OS)
- Network Plugin
- [cni-plugins](https://github.com/containernetworking/plugins) v1.2.0
- [cni-plugins](https://github.com/containernetworking/plugins) v1.4.0
- [calico](https://github.com/projectcalico/calico) v3.29.1
- [cilium](https://github.com/cilium/cilium) v1.15.9
- [flannel](https://github.com/flannel-io/flannel) v0.22.0
- [kube-ovn](https://github.com/alauda/kube-ovn) v1.12.21
- [kube-router](https://github.com/cloudnativelabs/kube-router) v2.0.0
- [multus](https://github.com/k8snetworkplumbingwg/multus-cni) v3.8
- [multus](https://github.com/k8snetworkplumbingwg/multus-cni) v4.1.0
- [weave](https://github.com/rajch/weave) v2.8.7
- [kube-vip](https://github.com/kube-vip/kube-vip) v0.8.0
- Application
- [cert-manager](https://github.com/jetstack/cert-manager) v1.15.3
- [coredns](https://github.com/coredns/coredns) v1.11.3
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v1.12.0
- [krew](https://github.com/kubernetes-sigs/krew) v0.4.4
- [argocd](https://argoproj.github.io/) v2.11.0
- [helm](https://helm.sh/) v3.16.4
- [metallb](https://metallb.universe.tf/) v0.13.9
@@ -129,13 +133,15 @@ Note: Upstart/SysV init based OS types are not supported.
- [local-volume-provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) v2.5.0
- [node-feature-discovery](https://github.com/kubernetes-sigs/node-feature-discovery) v0.16.4
<!-- END ANSIBLE MANAGED BLOCK -->
## Container Runtime Notes
- The cri-o version should be aligned with the respective kubernetes version (i.e. kube_version=1.20.x, crio_version=1.20)
## Requirements
- **Minimum required version of Kubernetes is v1.29**
- **Minimum required version of Kubernetes is v1.30**
- **Ansible v2.14+, Jinja 2.11+ and python-netaddr is installed on the machine that will run Ansible commands**
- The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](docs/operations/offline-environment.md))
- The target servers are configured to allow **IPv4 forwarding**.
@@ -149,10 +155,10 @@ Note: Upstart/SysV init based OS types are not supported.
Hardware:
These limits are safeguarded by Kubespray. Actual requirements for your workload can differ. For a sizing guide go to the [Building Large Clusters](https://kubernetes.io/docs/setup/cluster-large/#size-of-master-and-master-components) guide.
- Master
- Memory: 1500 MB
- Node
- Memory: 1024 MB
- Control Plane
- Memory: 2 GB
- Worker Node
- Memory: 1 GB
## Network Plugins

Vagrantfile vendored

@@ -26,6 +26,7 @@ SUPPORTED_OS = {
"centos8-bento" => {box: "bento/centos-8", user: "vagrant"},
"almalinux8" => {box: "almalinux/8", user: "vagrant"},
"almalinux8-bento" => {box: "bento/almalinux-8", user: "vagrant"},
"almalinux9" => {box: "almalinux/9", user: "vagrant"},
"rockylinux8" => {box: "rockylinux/8", user: "vagrant"},
"rockylinux9" => {box: "rockylinux/9", user: "vagrant"},
"fedora39" => {box: "fedora/39-cloud-base", user: "vagrant"},
@@ -57,8 +58,7 @@ $subnet ||= "172.18.8"
$subnet_ipv6 ||= "fd3c:b398:0698:0756"
$os ||= "ubuntu2004"
$network_plugin ||= "flannel"
$inventory ||= "inventory/sample"
$inventories ||= [$inventory]
$inventories ||= []
# Setting multi_networking to true will install Multus: https://github.com/k8snetworkplumbingwg/multus-cni
$multi_networking ||= "False"
$download_run_once ||= "True"


@@ -67,3 +67,23 @@ Step(2) download files and run nginx container
```
when the nginx container is running, it can be accessed through <http://127.0.0.1:8080/>.
## upload2artifactory.py
After the steps above, this script can recursively upload each file under a directory to a generic repository in Artifactory.
Environment Variables:
- USERNAME -- At least the permissions 'Deploy/Cache' and 'Delete/Overwrite'.
- TOKEN -- Generate this with 'Set Me Up' in your user.
- BASE_URL -- The URL including the repository name.
Step(3) (optional) upload files to Artifactory
```shell
cd kubespray/contrib/offline/offline-files
export USERNAME=admin
export TOKEN=...
export BASE_URL=https://artifactory.example.com/artifactory/a-generic-repo/
./upload2artifactory.py
```


@@ -0,0 +1,65 @@
#!/usr/bin/env python3
"""This is a helper script to manage-offline-files.sh.

After running manage-offline-files.sh, you can run upload2artifactory.py
to recursively upload each file to a generic repository in Artifactory.
This script recurses the current working directory and is intended to
be started from 'kubespray/contrib/offline/offline-files'

Environment Variables:
    USERNAME -- At least the permissions 'Deploy/Cache' and 'Delete/Overwrite'.
    TOKEN -- Generate this with 'Set Me Up' in your user profile.
    BASE_URL -- The URL including the repository name.
"""
import base64
import os
import urllib.error
import urllib.request


def upload_file(file_path, destination_url, username, token):
    """Helper function to upload a single file"""
    try:
        with open(file_path, 'rb') as f:
            file_data = f.read()
        request = urllib.request.Request(destination_url, data=file_data, method='PUT')  # NOQA
        auth_header = base64.b64encode(f"{username}:{token}".encode()).decode()
        request.add_header("Authorization", f"Basic {auth_header}")
        with urllib.request.urlopen(request) as response:
            if response.status in [200, 201]:
                print(f"Success: Uploaded {file_path}")
            else:
                print(f"Failed: {response.status} {response.read().decode('utf-8')}")  # NOQA
    except urllib.error.HTTPError as e:
        print(f"HTTPError: {e.code} {e.reason} for {file_path}")
    except urllib.error.URLError as e:
        print(f"URLError: {e.reason} for {file_path}")
    except OSError as e:
        print(f"OSError: {e.strerror} for {file_path}")


def upload_files(base_url, username, token):
    """Recurse current dir and upload each file using urllib.request"""
    for root, _, files in os.walk(os.getcwd()):
        for file in files:
            file_path = os.path.join(root, file)
            relative_path = os.path.relpath(file_path, os.getcwd())
            destination_url = f"{base_url}/{relative_path}"
            print(f"Uploading {file_path} to {destination_url}")
            upload_file(file_path, destination_url, username, token)


if __name__ == "__main__":
    a_user = os.getenv("USERNAME")
    a_token = os.getenv("TOKEN")
    a_url = os.getenv("BASE_URL")
    if not a_user or not a_token or not a_url:
        print(
            "Error: Environment variables USERNAME, TOKEN, and BASE_URL must be set."  # NOQA
        )
        exit()
    upload_files(a_url, a_user, a_token)


@@ -1,3 +0,0 @@
# See the OWNERS docs at https://go.k8s.io/owners
approvers:
- miouge1

docs/_sidebar.md generated

@@ -68,7 +68,6 @@
* Operating Systems
* [Amazonlinux](/docs/operating_systems/amazonlinux.md)
* [Bootstrap-os](/docs/operating_systems/bootstrap-os.md)
* [Centos](/docs/operating_systems/centos.md)
* [Fcos](/docs/operating_systems/fcos.md)
* [Flatcar](/docs/operating_systems/flatcar.md)
* [Kylinlinux](/docs/operating_systems/kylinlinux.md)
@@ -83,6 +82,7 @@
* [Ha-mode](/docs/operations/ha-mode.md)
* [Hardening](/docs/operations/hardening.md)
* [Integration](/docs/operations/integration.md)
* [Kernel-requirements](/docs/operations/kernel-requirements.md)
* [Large-deployments](/docs/operations/large-deployments.md)
* [Mirror](/docs/operations/mirror.md)
* [Nodes](/docs/operations/nodes.md)


@@ -106,7 +106,6 @@ The following tags are defined in playbooks:
| iptables | Flush and clear iptable when resetting |
| k8s-pre-upgrade | Upgrading K8s cluster |
| kata-containers | Configuring kata-containers runtime |
| krew | Install and manage krew |
| kubeadm | Roles linked to kubeadm tasks |
| kube-apiserver | Configuring static pod kube-apiserver |
| kube-controller-manager | Configuring static pod kube-controller-manager |
@@ -209,11 +208,11 @@ You will then need to use [bind mounts](https://docs.docker.com/storage/bind-mou
to access the inventory and SSH key in the container, like this:
```ShellSession
git checkout v2.26.0
docker pull quay.io/kubespray/kubespray:v2.26.0
git checkout v2.27.0
docker pull quay.io/kubespray/kubespray:v2.27.0
docker run --rm -it --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \
--mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
quay.io/kubespray/kubespray:v2.26.0 bash
quay.io/kubespray/kubespray:v2.27.0 bash
# Inside the container you may now run the kubespray playbooks:
ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa cluster.yml
```

View File

@@ -6,7 +6,8 @@ To generate this Matrix run `./tests/scripts/md-table/main.py`
| OS / CNI | calico | cilium | custom_cni | flannel | kube-ovn | kube-router | macvlan |
|---| --- | --- | --- | --- | --- | --- | --- |
almalinux8 | :white_check_mark: | :x: | :x: | :x: | :white_check_mark: | :x: | :x: |
almalinux8 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
almalinux9 | :white_check_mark: | :x: | :x: | :x: | :white_check_mark: | :x: | :x: |
amazon | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
debian11 | :white_check_mark: | :x: | :white_check_mark: | :x: | :x: | :x: | :white_check_mark: |
debian12 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :x: | :x: | :x: | :x: |
@@ -24,7 +25,8 @@ ubuntu24 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
| OS / CNI | calico | cilium | custom_cni | flannel | kube-ovn | kube-router | macvlan |
|---| --- | --- | --- | --- | --- | --- | --- |
almalinux8 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
almalinux8 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
almalinux9 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
amazon | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
debian11 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
debian12 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
@@ -42,7 +44,8 @@ ubuntu24 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
| OS / CNI | calico | cilium | custom_cni | flannel | kube-ovn | kube-router | macvlan |
|---| --- | --- | --- | --- | --- | --- | --- |
almalinux8 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
almalinux8 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
almalinux9 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
amazon | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
debian11 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
debian12 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |

View File

@@ -1,7 +0,0 @@
# CentOS and derivatives
## CentOS 8
If you have containers that are using iptables in the host network namespace (`hostNetwork=true`),
you need to ensure they are using iptables-nft.
An example of how Kubernetes does the autodetection can be found [in this PR](https://github.com/kubernetes/kubernetes/pull/82966)

View File

@@ -1,7 +1,11 @@
# Red Hat Enterprise Linux (RHEL)
The documentation also applies to Red Hat derivatives, including Alma Linux, Rocky Linux, Oracle Linux, and CentOS.
## RHEL Support Subscription Registration
The content of this section does not apply to open-source derivatives.
In order to install packages via yum or dnf, RHEL 7/8 hosts are required to be registered for a valid Red Hat support subscription.
You can apply for a 1-year Development support subscription by creating a [Red Hat Developers](https://developers.redhat.com/) account. Be aware, though, that because the Red Hat Developers subscription is limited to 1 year, it should not be used to register RHEL 7/8 hosts provisioned in production environments.
@@ -25,10 +29,12 @@ rh_subscription_role: "Red Hat Enterprise Server"
rh_subscription_sla: "Self-Support"
```
If the RHEL 7/8 hosts are already registered to a valid Red Hat support subscription via an alternative configuration management approach prior to the deployment of Kubespray, the successful RHEL `subscription-manager` status check will simply result in the RHEL subscription registration tasks being skipped.
If the RHEL 8/9 hosts are already registered to a valid Red Hat support subscription via an alternative configuration management approach prior to the deployment of Kubespray, the successful RHEL `subscription-manager` status check will simply result in the RHEL subscription registration tasks being skipped.
## RHEL 8
If you have containers that are using iptables in the host network namespace (`hostNetwork=true`),
you need to ensure they are using iptables-nft.
An example of how Kubernetes does the autodetection can be found [in this PR](https://github.com/kubernetes/kubernetes/pull/82966)
The RHEL 8 kernel version is lower than the Kubernetes 1.32 system validation requires; please refer to the [kernel requirements](../operations/kernel-requirements.md).

View File

@@ -0,0 +1,35 @@
# Kernel Requirements
For Kubernetes >=1.32.0, the recommended kernel LTS version from the 4.x series is 4.19. Any 5.x or 6.x versions are also supported. For cgroups v2 support, the minimum version is 4.15 and the recommended version is 5.8+. Refer to [this link](https://github.com/kubernetes/kubernetes/blob/v1.32.0/vendor/k8s.io/system-validators/validators/types_unix.go#L33). For more information, see [kernel version requirements](https://kubernetes.io/docs/reference/node/kernel-version-requirements).
If the OS kernel version is lower than required, add the following configuration to ignore the kubeadm preflight errors:
```yaml
kubeadm_ignore_preflight_errors:
- SystemVerification
```
The kernel version matrix:
| OS Version | Kernel Version | Kernel >=4.19 |
|--- | --- | --- |
| RHEL 9 | 5.14 | :white_check_mark: |
| RHEL 8 | 4.18 | :x: |
| Alma Linux 9 | 5.14 | :white_check_mark: |
| Alma Linux 8 | 4.18 | :x: |
| Rocky Linux 9 | 5.14 | :white_check_mark: |
| Rocky Linux 8 | 4.18 | :x: |
| Oracle Linux 9 | 5.14 | :white_check_mark: |
| Oracle Linux 8 | 4.18 | :x: |
| Ubuntu 24.04 | 6.6 | :white_check_mark: |
| Ubuntu 22.04 | 5.15 | :white_check_mark: |
| Ubuntu 20.04 | 5.4 | :white_check_mark: |
| Debian 12 | 6.1 | :white_check_mark: |
| Debian 11 | 5.10 | :white_check_mark: |
| Fedora 40 | 6.8 | :white_check_mark: |
| Fedora 39 | 6.5 | :white_check_mark: |
| openSUSE Leap 15.5 | 5.14 | :white_check_mark: |
| Amazon Linux 2 | 4.14 | :x: |
| openEuler 24.03 | 6.6 | :white_check_mark: |
| openEuler 22.03 | 5.10 | :white_check_mark: |
| openEuler 20.03 | 4.19 | :white_check_mark: |
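The 4.19 floor in the table above can be checked mechanically before a deployment. A minimal sketch in Python (the comparison helper is an illustration, not part of Kubespray):

```python
import platform

MIN_KERNEL = (4, 19)  # minimum major.minor for Kubernetes >= 1.32


def kernel_meets_minimum(release: str, minimum=MIN_KERNEL) -> bool:
    """Return True if a kernel release string satisfies the minimum.

    Accepts strings such as "5.14.0-362.el9" or "4.18.0-477.el8";
    only the numeric major.minor prefix is compared.
    """
    numeric = release.split("-")[0]
    fields = numeric.split(".")
    major = int(fields[0])
    minor = int(fields[1]) if len(fields) > 1 else 0
    return (major, minor) >= minimum


# Check the kernel of the current host.
print(kernel_meets_minimum(platform.release()))
```

Hosts that fail the check either need a newer kernel or the `kubeadm_ignore_preflight_errors` override shown earlier.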

View File

@@ -2,7 +2,7 @@
namespace: kubernetes_sigs
description: Deploy a production ready Kubernetes cluster
name: kubespray
version: 2.27.0
version: 2.28.0
readme: README.md
authors:
- The Kubespray maintainers (https://kubernetes.slack.com/channels/kubespray)

View File

@@ -45,9 +45,11 @@ loadbalancer_apiserver_healthcheck_port: 8081
## If set the possible values only 'external' after K8s v1.31.
# cloud_provider:
## When cloud_provider is set to 'external', you can set the cloud controller to deploy
## Supported cloud controllers are: 'openstack', 'vsphere', 'huaweicloud' and 'hcloud'
## When openstack or vsphere are used make sure to source in the required fields
# External Cloud Controller Manager (Formerly known as cloud provider)
# cloud_provider must be "external", otherwise this setting is invalid.
# Supported external cloud controllers are: 'openstack', 'vsphere', 'oci', 'huaweicloud', 'hcloud' and 'manual'
# 'manual' does not install the cloud controller manager used by Kubespray.
# If you fill in a value other than the above, the check will fail.
# external_cloud_provider:
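As a worked example of the comments above, enabling an external cloud controller would pair the two variables like this (the provider choice is illustrative):

```yaml
# Illustrative group_vars fragment: deploy the OpenStack cloud controller.
# cloud_provider must be "external" for external_cloud_provider to take effect.
cloud_provider: external
external_cloud_provider: openstack
```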
## Set these proxy values in order to update package manager and docker daemon to use proxies and custom CA for https_proxy if needed

View File

@@ -78,8 +78,6 @@
# gvisor_runsc_download_url: "{{ files_repo }}/storage.googleapis.com/gvisor/releases/release/{{ gvisor_version }}/{{ ansible_architecture }}/runsc"
# gvisor_containerd_shim_runsc_download_url: "{{ files_repo }}/storage.googleapis.com/gvisor/releases/release/{{ gvisor_version }}/{{ ansible_architecture }}/containerd-shim-runsc-v1"
# [Optional] Krew: only if you set krew_enabled: true
# krew_download_url: "{{ files_repo }}/github.com/kubernetes-sigs/krew/releases/download/{{ krew_version }}/krew-{{ host_os }}_{{ image_arch }}.tar.gz"
## CentOS/Redhat/AlmaLinux
### For EL8, baseos and appstream must be available,

View File

@@ -255,8 +255,6 @@ argocd_enabled: false
# argocd_admin_password: "password"
# The plugin manager for kubectl
krew_enabled: false
krew_root_dir: "/usr/local/krew"
# Kube VIP
kube_vip_enabled: false

View File

@@ -17,7 +17,7 @@ kube_token_dir: "{{ kube_config_dir }}/tokens"
kube_api_anonymous_auth: true
## Change this to use another Kubernetes version, e.g. a current beta release
kube_version: v1.31.4
kube_version: v1.32.0
# Where the binaries will be downloaded.
# Note: ensure that you've enough disk space (about 1G)
@@ -60,7 +60,7 @@ credentials_dir: "{{ inventory_dir }}/credentials"
# kube_webhook_token_auth_url: https://...
# kube_webhook_token_auth_url_skip_tls_verify: false
## For webhook authorization, authorization_modes must include Webhook
## For webhook authorization, authorization_modes must include Webhook or kube_apiserver_authorization_config_authorizers must configure a type: Webhook
# kube_webhook_authorization: false
# kube_webhook_authorization_url: https://...
# kube_webhook_authorization_url_skip_tls_verify: false
@@ -268,11 +268,6 @@ default_kubelet_config_dir: "{{ kube_config_dir }}/dynamic_kubelet_dir"
# kube_cpu_reserved: 100m
# kube_ephemeral_storage_reserved: 2Gi
# kube_pid_reserved: "1000"
# Reservation for control plane hosts
# kube_master_memory_reserved: 512Mi
# kube_master_cpu_reserved: 200m
# kube_master_ephemeral_storage_reserved: 2Gi
# kube_master_pid_reserved: "1000"
## Optionally reserve resources for OS system daemons.
# system_reserved: true
@@ -283,10 +278,6 @@ default_kubelet_config_dir: "{{ kube_config_dir }}/dynamic_kubelet_dir"
# system_memory_reserved: 512Mi
# system_cpu_reserved: 500m
# system_ephemeral_storage_reserved: 2Gi
## Reservation for master hosts
# system_master_memory_reserved: 256Mi
# system_master_cpu_reserved: 250m
# system_master_ephemeral_storage_reserved: 2Gi
## Eviction Thresholds to avoid system OOMs
# https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#eviction-thresholds

View File

@@ -0,0 +1,11 @@
# Reservation for control plane kubernetes components
# kube_memory_reserved: 512Mi
# kube_cpu_reserved: 200m
# kube_ephemeral_storage_reserved: 2Gi
# kube_pid_reserved: "1000"
# Reservation for control plane host system
# system_memory_reserved: 256Mi
# system_cpu_reserved: 250m
# system_ephemeral_storage_reserved: 2Gi
# system_pid_reserved: "1000"

View File

@@ -1,4 +0,0 @@
# See the OWNERS docs at https://go.k8s.io/owners
approvers:
- thomeced

View File

@@ -19,8 +19,8 @@ platforms:
memory: 1024
provider_options:
driver: kvm
- name: almalinux8
box: almalinux/8
- name: almalinux9
box: almalinux/9
cpus: 1
memory: 512
provider_options:

View File

@@ -62,6 +62,8 @@ containerd_registries_mirrors:
- host: https://registry-1.docker.io
capabilities: ["pull", "resolve"]
skip_verify: false
# ca: ["/etc/certs/mirror.pem"]
# client: [["/etc/certs/client.pem", ""],["/etc/certs/client.cert", "/etc/certs/client.key"]]
containerd_max_container_log_line_size: 16384
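With the new `ca`/`client` keys uncommented, a full mirror entry might look like the following sketch (certificate paths and the `prefix`/`mirrors` nesting follow the sample inventory; adjust for your registry):

```yaml
containerd_registries_mirrors:
  - prefix: docker.io
    mirrors:
      - host: https://registry-1.docker.io
        capabilities: ["pull", "resolve"]
        skip_verify: false
        ca: ["/etc/certs/mirror.pem"]
        client: [["/etc/certs/client.cert", "/etc/certs/client.key"]]
```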

View File

@@ -25,8 +25,8 @@ platforms:
- k8s_cluster
provider_options:
driver: kvm
- name: almalinux8
box: almalinux/8
- name: almalinux9
box: almalinux/9
cpus: 1
memory: 1024
groups:

View File

@@ -4,4 +4,10 @@ server = "{{ item.server | default("https://" + item.prefix) }}"
capabilities = ["{{ ([ mirror.capabilities ] | flatten ) | join('","') }}"]
skip_verify = {{ mirror.skip_verify | default('false') | string | lower }}
override_path = {{ mirror.override_path | default('false') | string | lower }}
{% if mirror.ca is defined %}
ca = ["{{ ([ mirror.ca ] | flatten ) | join('","') }}"]
{% endif %}
{% if mirror.client is defined %}
client = [{% for pair in mirror.client %}["{{ pair[0] }}", "{{ pair[1] }}"]{% if not loop.last %},{% endif %}{% endfor %}]
{% endif %}
{% endfor %}
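For a mirror that defines both optional keys, the template additions above render a `hosts.toml` fragment along these lines (host URL and paths are illustrative, and the enclosing `[host."…"]` table comes from the rest of the template):

```toml
[host."https://registry.example.com"]
  capabilities = ["pull", "resolve"]
  skip_verify = false
  override_path = false
  ca = ["/etc/certs/mirror.pem"]
  client = [["/etc/certs/client.cert", "/etc/certs/client.key"]]
```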

View File

@@ -5,8 +5,8 @@ driver:
provider:
name: libvirt
platforms:
- name: almalinux8
box: almalinux/8
- name: almalinux9
box: almalinux/9
cpus: 1
memory: 1024
nested: true

View File

@@ -15,8 +15,8 @@ platforms:
- k8s_cluster
provider_options:
driver: kvm
- name: almalinux8
box: almalinux/8
- name: almalinux9
box: almalinux/9
cpus: 2
memory: 1024
groups:

View File

@@ -14,8 +14,8 @@ platforms:
- kube_control_plane
provider_options:
driver: kvm
- name: almalinux8
box: almalinux/8
- name: almalinux9
box: almalinux/9
cpus: 1
memory: 1024
nested: true

View File

@@ -1,6 +0,0 @@
# See the OWNERS docs at https://go.k8s.io/owners
approvers:
- pasqualet
reviewers:
- pasqualet

View File

@@ -14,8 +14,8 @@ platforms:
- kube_control_plane
provider_options:
driver: kvm
- name: almalinux8
box: almalinux/8
- name: almalinux9
box: almalinux/9
cpus: 1
memory: 1024
nested: true

View File

@@ -9,7 +9,7 @@
- name: Generate etcd certs
include_tasks: "gen_certs_script.yml"
when:
- cert_management | d('script') == "script"
- cert_management == "script"
tags:
- etcd-secrets

View File

@@ -1,6 +0,0 @@
# See the OWNERS docs at https://go.k8s.io/owners
approvers:
reviewers:
- alijahnas
- luckySB

View File

@@ -1,6 +0,0 @@
# See the OWNERS docs at https://go.k8s.io/owners
approvers:
reviewers:
- alijahnas
- luckySB

View File

@@ -1,6 +0,0 @@
# See the OWNERS docs at https://go.k8s.io/owners
approvers:
- kubespray-approvers
reviewers:
- kubespray-reviewers

View File

@@ -6,6 +6,7 @@ ingress_nginx_service_nodeport_http: ""
ingress_nginx_service_nodeport_https: ""
ingress_nginx_service_annotations: {}
ingress_publish_status_address: ""
ingress_nginx_publish_service: "{{ ingress_nginx_namespace }}/ingress-nginx"
ingress_nginx_nodeselector:
kubernetes.io/os: "linux"
ingress_nginx_tolerations: []

View File

@@ -79,11 +79,12 @@ spec:
{% if ingress_nginx_without_class %}
- --watch-ingress-without-class=true
{% endif %}
{% if ingress_nginx_host_network %}
- --report-node-internal-ip-address
{% endif %}
{% if ingress_publish_status_address != "" %}
- --publish-status-address={{ ingress_publish_status_address }}
{% elif ingress_nginx_host_network %}
- --report-node-internal-ip-address
{% elif ingress_nginx_publish_service != "" %}
- --publish-service={{ ingress_nginx_publish_service }}
{% endif %}
{% for extra_arg in ingress_nginx_extra_args %}
- {{ extra_arg }}
@@ -125,6 +126,26 @@ spec:
{% if not ingress_nginx_host_network %}
hostPort: {{ ingress_nginx_metrics_port }}
{% endif %}
{% if ingress_nginx_configmap_tcp_services %}
{% for port in ingress_nginx_configmap_tcp_services.keys() %}
- name: tcp-port-{{ port }}
containerPort: "{{ port | int }}"
protocol: TCP
{% if not ingress_nginx_host_network %}
hostPort: "{{ port | int }}"
{% endif %}
{% endfor %}
{% endif %}
{% if ingress_nginx_configmap_udp_services %}
{% for port in ingress_nginx_configmap_udp_services.keys() %}
- name: udp-port-{{ port }}
containerPort: "{{ port | int }}"
protocol: UDP
{% if not ingress_nginx_host_network %}
hostPort: "{{ port | int }}"
{% endif %}
{% endfor %}
{% endif %}
{% if ingress_nginx_webhook_enabled %}
- name: webhook
containerPort: 8443

View File

@@ -27,6 +27,22 @@ spec:
protocol: TCP
{% if (ingress_nginx_service_type == 'NodePort' or ingress_nginx_service_type == 'LoadBalancer') and ingress_nginx_service_nodeport_https %}
nodePort: {{ingress_nginx_service_nodeport_https | int}}
{% endif %}
{% if ingress_nginx_configmap_tcp_services %}
{% for port in ingress_nginx_configmap_tcp_services.keys() %}
- name: tcp-port-{{ port }}
port: "{{ port | int }}"
targetPort: "{{ port | int }}"
protocol: TCP
{% endfor %}
{% endif %}
{% if ingress_nginx_configmap_udp_services %}
{% for port in ingress_nginx_configmap_udp_services.keys() %}
- name: udp-port-{{ port }}
port: "{{ port | int }}"
targetPort: "{{ port | int }}"
protocol: UDP
{% endfor %}
{% endif %}
selector:
app.kubernetes.io/name: ingress-nginx

View File

@@ -1,5 +0,0 @@
---
krew_enabled: false
krew_root_dir: "/usr/local/krew"
krew_default_index_uri: https://github.com/kubernetes-sigs/krew-index.git
krew_no_upgrade_check: 0

View File

@@ -1,38 +0,0 @@
---
- name: Krew | Download krew
include_tasks: "../../../download/tasks/download_file.yml"
vars:
download: "{{ download_defaults | combine(downloads.krew) }}"
- name: Krew | krew env
template:
src: krew.j2
dest: /etc/bash_completion.d/krew
mode: "0644"
- name: Krew | Copy krew manifest
template:
src: krew.yml.j2
dest: "{{ local_release_dir }}/krew.yml"
mode: "0644"
- name: Krew | Install krew # noqa command-instead-of-shell
shell: "{{ local_release_dir }}/krew-{{ host_os }}_{{ image_arch }} install --archive={{ local_release_dir }}/krew-{{ host_os }}_{{ image_arch }}.tar.gz --manifest={{ local_release_dir }}/krew.yml"
environment:
KREW_ROOT: "{{ krew_root_dir }}"
KREW_DEFAULT_INDEX_URI: "{{ krew_default_index_uri | default('') }}"
- name: Krew | Get krew completion
command: "{{ local_release_dir }}/krew-{{ host_os }}_{{ image_arch }} completion bash"
changed_when: false
register: krew_completion
check_mode: false
ignore_errors: true # noqa ignore-errors
- name: Krew | Install krew completion
copy:
dest: /etc/bash_completion.d/krew.sh
content: "{{ krew_completion.stdout }}"
mode: "0755"
become: true
when: krew_completion.rc == 0

View File

@@ -1,10 +0,0 @@
---
- name: Krew | install krew on kube_control_plane
import_tasks: krew.yml
- name: Krew | install krew on localhost
import_tasks: krew.yml
delegate_to: localhost
connection: local
run_once: true
when: kubectl_localhost

View File

@@ -1,7 +0,0 @@
# krew bash env(kubespray)
export KREW_ROOT="{{ krew_root_dir }}"
{% if krew_default_index_uri is defined %}
export KREW_DEFAULT_INDEX_URI='{{ krew_default_index_uri }}'
{% endif %}
export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
export KREW_NO_UPGRADE_CHECK={{ krew_no_upgrade_check }}

View File

@@ -1,100 +0,0 @@
apiVersion: krew.googlecontainertools.github.com/v1alpha2
kind: Plugin
metadata:
name: krew
spec:
version: "{{ krew_version }}"
homepage: https://krew.sigs.k8s.io/
shortDescription: Package manager for kubectl plugins.
caveats: |
krew is now installed! To start using kubectl plugins, you need to add
krew's installation directory to your PATH:
* macOS/Linux:
- Add the following to your ~/.bashrc or ~/.zshrc:
export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
- Restart your shell.
* Windows: Add %USERPROFILE%\.krew\bin to your PATH environment variable
To list krew commands and to get help, run:
$ kubectl krew
For a full list of available plugins, run:
$ kubectl krew search
You can find documentation at
https://krew.sigs.k8s.io/docs/user-guide/quickstart/.
platforms:
- uri: {{ krew_download_url }}
sha256: {{ krew_archive_checksum }}
bin: krew
files:
- from: ./krew-darwin_amd64
to: krew
- from: ./LICENSE
to: .
selector:
matchLabels:
os: darwin
arch: amd64
- uri: {{ krew_download_url }}
sha256: {{ krew_archive_checksum }}
bin: krew
files:
- from: ./krew-darwin_arm64
to: krew
- from: ./LICENSE
to: .
selector:
matchLabels:
os: darwin
arch: arm64
- uri: {{ krew_download_url }}
sha256: {{ krew_archive_checksum }}
bin: krew
files:
- from: ./krew-linux_amd64
to: krew
- from: ./LICENSE
to: .
selector:
matchLabels:
os: linux
arch: amd64
- uri: {{ krew_download_url }}
sha256: {{ krew_archive_checksum }}
bin: krew
files:
- from: ./krew-linux_arm
to: krew
- from: ./LICENSE
to: .
selector:
matchLabels:
os: linux
arch: arm
- uri: {{ krew_download_url }}
sha256: {{ krew_archive_checksum }}
bin: krew
files:
- from: ./krew-linux_arm64
to: krew
- from: ./LICENSE
to: .
selector:
matchLabels:
os: linux
arch: arm64
- uri: {{ krew_download_url }}
sha256: {{ krew_archive_checksum }}
bin: krew.exe
files:
- from: ./krew-windows_amd64.exe
to: krew.exe
- from: ./LICENSE
to: .
selector:
matchLabels:
os: windows
arch: amd64

View File

@@ -10,12 +10,6 @@ dependencies:
tags:
- helm
- role: kubernetes-apps/krew
when:
- krew_enabled
tags:
- krew
- role: kubernetes-apps/registry
when:
- registry_enabled

View File

@@ -1,5 +0,0 @@
# See the OWNERS docs at https://go.k8s.io/owners
approvers:
reviewers:
- oomichi

View File

@@ -1,6 +0,0 @@
# See the OWNERS docs at https://go.k8s.io/owners
approvers:
- bozzo
reviewers:
- bozzo

View File

@@ -1,5 +0,0 @@
# See the OWNERS docs at https://go.k8s.io/owners
approvers:
- alijahnas
reviewers:

View File

@@ -24,11 +24,11 @@
- name: Parse certificate key if not set
set_fact:
kubeadm_certificate_key: "{{ hostvars[groups['kube_control_plane'][0]]['kubeadm_upload_cert'].stdout_lines[-1] | trim }}"
kubeadm_certificate_key: "{{ hostvars[first_kube_control_plane]['kubeadm_upload_cert'].stdout_lines[-1] | trim }}"
run_once: true
when:
- hostvars[groups['kube_control_plane'][0]]['kubeadm_upload_cert'] is defined
- hostvars[groups['kube_control_plane'][0]]['kubeadm_upload_cert'] is not skipped
- hostvars[first_kube_control_plane]['kubeadm_upload_cert'] is defined
- hostvars[first_kube_control_plane]['kubeadm_upload_cert'] is not skipped
- name: Create kubeadm ControlPlane config
template:

View File

@@ -18,6 +18,18 @@
mode: "0640"
when: kube_webhook_authorization | default(false)
- name: Create structured AuthorizationConfiguration file
copy:
content: "{{ authz_config | to_nice_yaml(indent=2, sort_keys=false) }}"
dest: "{{ kube_config_dir }}/apiserver-authorization-config.yaml"
mode: "0640"
vars:
authz_config:
apiVersion: apiserver.config.k8s.io/{{ 'v1alpha1' if kube_version is version('v1.30.0', '<') else 'v1beta1' if kube_version is version('v1.32.0', '<') else 'v1' }}
kind: AuthorizationConfiguration
authorizers: "{{ kube_apiserver_authorization_config_authorizers }}"
when: kube_apiserver_use_authorization_config_file
- name: Create kube-scheduler config
template:
src: kubescheduler-config.yaml.j2
@@ -43,7 +55,7 @@
- name: Install kubectl bash completion
shell: "{{ bin_dir }}/kubectl completion bash >/etc/bash_completion.d/kubectl.sh"
when: ansible_os_family in ["Debian","RedHat"]
when: ansible_os_family in ["Debian","RedHat", "Suse"]
tags:
- kubectl
ignore_errors: true # noqa ignore-errors
@@ -54,7 +66,7 @@
owner: root
group: root
mode: "0755"
when: ansible_os_family in ["Debian","RedHat"]
when: ansible_os_family in ["Debian","RedHat", "Suse"]
tags:
- kubectl
- upgrade
@@ -73,7 +85,7 @@
state: present
marker: "# Ansible entries {mark}"
when:
- ansible_os_family in ["Debian","RedHat"]
- ansible_os_family in ["Debian","RedHat", "Suse"]
- kubectl_alias is defined and kubectl_alias != ""
tags:
- kubectl
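The structured file written by the task above, for Kubernetes v1.32+ with a typical authorizer list, would look roughly like this (the authorizers shown are an example, not Kubespray defaults):

```yaml
# Illustrative apiserver-authorization-config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AuthorizationConfiguration
authorizers:
  - type: Node
    name: node
  - type: RBAC
    name: rbac
```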

View File

@@ -126,7 +126,11 @@ apiServer:
{% if kube_api_anonymous_auth is defined %}
anonymous-auth: "{{ kube_api_anonymous_auth }}"
{% endif %}
{% if kube_apiserver_use_authorization_config_file %}
authorization-config: "{{ kube_config_dir }}/apiserver-authorization-config.yaml"
{% else %}
authorization-mode: {{ authorization_modes | join(',') }}
{% endif %}
bind-address: {{ kube_apiserver_bind_address }}
{% if kube_apiserver_enable_admission_plugins | length > 0 %}
enable-admission-plugins: {{ kube_apiserver_enable_admission_plugins | join(',') }}
@@ -176,7 +180,7 @@ apiServer:
{% if kube_webhook_token_auth | default(false) %}
authentication-token-webhook-config-file: {{ kube_config_dir }}/webhook-token-auth-config.yaml
{% endif %}
{% if kube_webhook_authorization | default(false) %}
{% if kube_webhook_authorization and not kube_apiserver_use_authorization_config_file %}
authorization-webhook-config-file: {{ kube_config_dir }}/webhook-authorization-config.yaml
{% endif %}
{% if kube_encrypt_secret_data %}
@@ -243,6 +247,11 @@ apiServer:
hostPath: {{ kube_config_dir }}/webhook-authorization-config.yaml
mountPath: {{ kube_config_dir }}/webhook-authorization-config.yaml
{% endif %}
{% if kube_apiserver_use_authorization_config_file %}
- name: authorization-config
hostPath: {{ kube_config_dir }}/apiserver-authorization-config.yaml
mountPath: {{ kube_config_dir }}/apiserver-authorization-config.yaml
{% endif %}
{% if kubernetes_audit or kubernetes_audit_webhook %}
- name: {{ audit_policy_name }}
hostPath: {{ audit_policy_hostpath }}

View File

@@ -142,8 +142,13 @@ apiServer:
- name: anonymous-auth
value: "{{ kube_api_anonymous_auth }}"
{% endif %}
{% if kube_apiserver_use_authorization_config_file %}
- name: authorization-config
value: "{{ kube_config_dir }}/apiserver-authorization-config.yaml"
{% else %}
- name: authorization-mode
value: "{{ authorization_modes | join(',') }}"
{% endif %}
- name: bind-address
value: "{{ kube_apiserver_bind_address }}"
{% if kube_apiserver_enable_admission_plugins | length > 0 %}
@@ -212,7 +217,7 @@ apiServer:
- name: authentication-token-webhook-config-file
value: "{{ kube_config_dir }}/webhook-token-auth-config.yaml"
{% endif %}
{% if kube_webhook_authorization | default(false) %}
{% if kube_webhook_authorization and not kube_apiserver_use_authorization_config_file %}
- name: authorization-webhook-config-file
value: "{{ kube_config_dir }}/webhook-authorization-config.yaml"
{% endif %}
@@ -299,6 +304,11 @@ apiServer:
hostPath: {{ kube_config_dir }}/webhook-authorization-config.yaml
mountPath: {{ kube_config_dir }}/webhook-authorization-config.yaml
{% endif %}
{% if kube_apiserver_use_authorization_config_file %}
- name: authorization-config
hostPath: {{ kube_config_dir }}/apiserver-authorization-config.yaml
mountPath: {{ kube_config_dir }}/apiserver-authorization-config.yaml
{% endif %}
{% if kubernetes_audit or kubernetes_audit_webhook %}
- name: {{ audit_policy_name }}
hostPath: {{ audit_policy_hostpath }}

View File

@@ -1,7 +1,8 @@
---
# Set to true to allow pre-checks to fail and continue deployment
ignore_assert_errors: false
# Set to false to disable the backup parameter; set to true to accumulate backups of config files.
leave_etc_backup_files: true
nameservers: []
cloud_resolver: []
disable_host_nameservers: false

View File

@@ -22,12 +22,11 @@
- name: Stop if etcd group is empty in external etcd mode
assert:
that: groups.get('etcd')
that: groups.get('etcd') or etcd_deployment_type == 'kubeadm'
fail_msg: "Group 'etcd' cannot be empty in external etcd mode"
run_once: true
when:
- not ignore_assert_errors
- etcd_deployment_type != "kubeadm"
- name: Stop if non systemd OS type
assert:
@@ -40,21 +39,12 @@
msg: "{{ ansible_distribution }} is not a known OS"
when: not ignore_assert_errors
- name: Stop if unknown network plugin
assert:
that: kube_network_plugin in ['calico', 'flannel', 'weave', 'cloud', 'cilium', 'cni', 'kube-ovn', 'kube-router', 'macvlan', 'custom_cni', 'none']
msg: "{{ kube_network_plugin }} is not supported"
when:
- kube_network_plugin is defined
- not ignore_assert_errors
- name: Warn the user if they are still using `etcd_kubeadm_enabled`
- name: Warn if `kube_network_plugin` is `none`
debug:
msg: >
msg: |
"WARNING! => `kube_network_plugin` is set to `none`. The network configuration will be skipped.
The cluster won't be ready to use, we recommend to select one of the available plugins"
changed_when: true
when:
- kube_network_plugin is defined
- kube_network_plugin == 'none'
- name: Stop if unsupported version of Kubernetes
@@ -63,26 +53,23 @@
msg: "The current release of Kubespray only support newer version of Kubernetes than {{ kube_version_min_required }} - You are trying to apply {{ kube_version }}"
when: not ignore_assert_errors
# simplify this items-list when https://github.com/ansible/ansible/issues/15753 is resolved
- name: "Stop if known booleans are set as strings (Use JSON format on CLI: -e \"{'key': true }\")"
assert:
that: item.value | type_debug == 'bool'
msg: "{{ item.value }} isn't a bool"
that:
- download_run_once | type_debug == 'bool'
- deploy_netchecker | type_debug == 'bool'
- download_always_pull | type_debug == 'bool'
- helm_enabled | type_debug == 'bool'
- openstack_lbaas_enabled | type_debug == 'bool'
run_once: true
with_items:
- { name: download_run_once, value: "{{ download_run_once }}" }
- { name: deploy_netchecker, value: "{{ deploy_netchecker }}" }
- { name: download_always_pull, value: "{{ download_always_pull }}" }
- { name: helm_enabled, value: "{{ helm_enabled }}" }
- { name: openstack_lbaas_enabled, value: "{{ openstack_lbaas_enabled }}" }
when: not ignore_assert_errors
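The JSON hint in the task name above matters because `-e key=true` on the CLI produces a string, not a boolean. A quick sketch of the type distinction the assert enforces:

```python
# "-e download_run_once=true" reaches Ansible as the string "true":
cli_value = "true"
print(type(cli_value).__name__)   # str -> fails the `type_debug == 'bool'` check

# "-e '{\"download_run_once\": true}'" is parsed as JSON and yields a real boolean:
json_value = True
print(type(json_value).__name__)  # bool -> passes
```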
- name: Stop if even number of etcd hosts
assert:
that: groups.etcd | length is not divisibleby 2
that: groups.get('etcd', groups.kube_control_plane) | length is not divisibleby 2
run_once: true
when:
- not ignore_assert_errors
- inventory_hostname in groups.get('etcd',[])
- name: Stop if memory is too small for control plane nodes
assert:
@@ -117,8 +104,7 @@
when:
- not ignore_assert_errors
- ('k8s_cluster' in group_names)
- kube_network_node_prefix is defined
- kube_network_plugin != 'calico'
- kube_network_plugin not in ['calico', 'none']
- name: Stop if ip var does not match local ips
assert:
@@ -185,7 +171,7 @@
- name: Check external_cloud_provider value
assert:
that: external_cloud_provider in ['hcloud', 'huaweicloud', 'oci', 'openstack', 'vsphere']
that: external_cloud_provider in ['hcloud', 'huaweicloud', 'oci', 'openstack', 'vsphere', 'manual']
when:
- cloud_provider == 'external'
- not ignore_assert_errors
@@ -222,82 +208,37 @@
when: kube_network_plugin != 'calico'
run_once: true
- name: Stop if unknown dns mode
- name: Stop if unsupported options selected
assert:
that: dns_mode in ['coredns', 'coredns_dual', 'manual', 'none']
msg: "dns_mode can only be 'coredns', 'coredns_dual', 'manual' or 'none'"
when: dns_mode is defined
that:
- kube_network_plugin in ['calico', 'flannel', 'weave', 'cloud', 'cilium', 'cni', 'kube-ovn', 'kube-router', 'macvlan', 'custom_cni', 'none']
- dns_mode in ['coredns', 'coredns_dual', 'manual', 'none']
- kube_proxy_mode in ['iptables', 'ipvs']
- cert_management in ['script', 'none']
- resolvconf_mode in ['docker_dns', 'host_resolvconf', 'none']
- etcd_deployment_type in ['host', 'docker', 'kubeadm']
- etcd_deployment_type in ['host', 'kubeadm'] or container_manager == 'docker'
- container_manager in ['docker', 'crio', 'containerd']
msg: The selected choice is not supported
run_once: true
- name: Stop if /etc/resolv.conf has no configured nameservers
assert:
that: configured_nameservers | length>0
fail_msg: "nameserver should not empty in /etc/resolv.conf"
fail_msg: "nameserver should not be empty in /etc/resolv.conf"
when:
- upstream_dns_servers | length == 0
- not disable_host_nameservers
- dns_mode in ['coredns', 'coredns_dual']
- name: Stop if unknown kube proxy mode
assert:
that: kube_proxy_mode in ['iptables', 'ipvs']
msg: "kube_proxy_mode can only be 'iptables' or 'ipvs'"
when: kube_proxy_mode is defined
# TODO: Clean this task up after 2.28 is released
- name: Stop if etcd_kubeadm_enabled is defined
run_once: true
- name: Stop if unknown cert_management
assert:
that: cert_management | d('script') in ['script', 'none']
msg: "cert_management can only be 'script' or 'none'"
run_once: true
- name: Stop if unknown resolvconf_mode
assert:
that: resolvconf_mode in ['docker_dns', 'host_resolvconf', 'none']
msg: "resolvconf_mode can only be 'docker_dns', 'host_resolvconf' or 'none'"
when: resolvconf_mode is defined
run_once: true
- name: Stop if etcd deployment type is not host, docker or kubeadm
assert:
that: etcd_deployment_type in ['host', 'docker', 'kubeadm']
msg: "The etcd deployment type, 'etcd_deployment_type', must be host, docker or kubeadm"
when:
- inventory_hostname in groups.get('etcd',[])
- name: Stop if container manager is not docker, crio or containerd
assert:
that: container_manager in ['docker', 'crio', 'containerd']
msg: "The container manager, 'container_manager', must be docker, crio or containerd"
run_once: true
- name: Stop if etcd deployment type is not host or kubeadm when container_manager != docker
assert:
that: etcd_deployment_type in ['host', 'kubeadm']
msg: "The etcd deployment type, 'etcd_deployment_type', must be host or kubeadm when container_manager is not docker"
when:
- inventory_hostname in groups.get('etcd',[])
- container_manager != 'docker'
# TODO: Clean this task up when we drop backward compatibility support for `etcd_kubeadm_enabled`
- name: Stop if etcd deployment type is not host or kubeadm when container_manager != docker and etcd_kubeadm_enabled is not defined
run_once: true
when: etcd_kubeadm_enabled is defined
block:
- name: Warn the user if they are still using `etcd_kubeadm_enabled`
debug:
msg: >
"WARNING! => `etcd_kubeadm_enabled` is deprecated and will be removed in a future release.
You can set `etcd_deployment_type` to `kubeadm` instead of setting `etcd_kubeadm_enabled` to `true`."
changed_when: true
- name: Stop if `etcd_kubeadm_enabled` is defined and `etcd_deployment_type` is not `kubeadm` or `host`
assert:
that: etcd_deployment_type == 'kubeadm'
msg: >
It is not possible to use `etcd_kubeadm_enabled` when `etcd_deployment_type` is set to {{ etcd_deployment_type }}.
Unset the `etcd_kubeadm_enabled` variable and set `etcd_deployment_type` to desired deployment type (`host`, `kubeadm`, `docker`) instead."
when: etcd_kubeadm_enabled
that: etcd_kubeadm_enabled is not defined
msg: |
`etcd_kubeadm_enabled` is removed.
You can set `etcd_deployment_type` to `kubeadm` instead of setting `etcd_kubeadm_enabled` to `true`."
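Per the deprecation message above, an inventory still carrying the old flag could be migrated roughly as follows (variable names are the ones used in these tasks):

```yaml
# Before (removed flag):
# etcd_kubeadm_enabled: true

# After — the equivalent supported setting:
etcd_deployment_type: kubeadm
```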
- name: Stop if download_localhost is enabled but download_run_once is not
assert:
@@ -332,14 +273,6 @@
- containerd_version not in ['latest', 'edge', 'stable']
- container_manager == 'containerd'
- name: Stop if using deprecated containerd_config variable
assert:
that: containerd_config is not defined
msg: "Variable containerd_config is now deprecated. See https://github.com/kubernetes-sigs/kubespray/blob/master/inventory/sample/group_vars/all/containerd.yml for details."
when:
- containerd_config is defined
- not ignore_assert_errors
- name: Stop if auto_renew_certificates is enabled when certificates are managed externally (kube_external_ca_mode is true)
assert:
that: not auto_renew_certificates
@@ -348,14 +281,6 @@
- kube_external_ca_mode
- not ignore_assert_errors
- name: Stop if using deprecated comma separated list for admission plugins
assert:
that: "',' not in kube_apiserver_enable_admission_plugins[0]"
msg: "Comma-separated list for kube_apiserver_enable_admission_plugins is now deprecated, use separate list items for each plugin."
when:
- kube_apiserver_enable_admission_plugins is defined
- kube_apiserver_enable_admission_plugins | length > 0
- name: Verify that the packages list is sorted
vars:
pkgs_lists: "{{ pkgs.keys() | list }}"

View File

@@ -6,7 +6,7 @@
option: servers
value: "{{ nameserverentries | join(',') }}"
mode: '0600'
backup: true
backup: "{{ leave_etc_backup_files }}"
when:
- ('127.0.0.53' not in nameserverentries
or systemd_resolved_enabled.rc != 0)
@@ -24,7 +24,7 @@
option: searches
value: "{{ (default_searchdomains | default([]) + searchdomains) | join(',') }}"
mode: '0600'
backup: true
backup: "{{ leave_etc_backup_files }}"
notify: Preinstall | update resolvconf for networkmanager
- name: NetworkManager | Add DNS options to NM configuration
@@ -34,5 +34,5 @@
option: options
value: "ndots:{{ ndots }},timeout:{{ dns_timeout | default('2') }},attempts:{{ dns_attempts | default('2') }}"
mode: '0600'
backup: true
backup: "{{ leave_etc_backup_files }}"
notify: Preinstall | update resolvconf for networkmanager
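The `leave_etc_backup_files` toggle threaded through these tasks might be overridden in inventory like so (assuming it defaults to `true` to preserve the previous `backup: true` behaviour — the default is not shown in this diff):

```yaml
# Hypothetical inventory override: don't keep Ansible backup copies of edited /etc files
leave_etc_backup_files: false
```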

View File

@@ -28,7 +28,7 @@
line: "precedence ::ffff:0:0/96 100"
state: present
create: true
backup: true
backup: "{{ leave_etc_backup_files }}"
mode: "0644"
when:
- disable_ipv6_dns

View File

@@ -20,7 +20,7 @@
block: "{{ hostvars.localhost.etc_hosts_inventory_block }}"
state: "{{ 'present' if populate_inventory_to_hosts_file else 'absent' }}"
create: true
backup: true
backup: "{{ leave_etc_backup_files }}"
unsafe_writes: true
marker: "# Ansible inventory hosts {mark}"
mode: "0644"
@@ -31,7 +31,7 @@
regexp: ".*{{ apiserver_loadbalancer_domain_name }}$"
line: "{{ loadbalancer_apiserver.address }} {{ apiserver_loadbalancer_domain_name }}"
state: present
backup: true
backup: "{{ leave_etc_backup_files }}"
unsafe_writes: true
when:
- populate_loadbalancer_apiserver_to_hosts_file
@@ -69,7 +69,7 @@
line: "{{ item.key }} {{ item.value | join(' ') }}"
regexp: "^{{ item.key }}.*$"
state: present
backup: true
backup: "{{ leave_etc_backup_files }}"
unsafe_writes: true
loop: "{{ etc_hosts_localhosts_dict_target | default({}) | dict2items }}"

View File

@@ -10,7 +10,7 @@
create: true
state: present
insertbefore: BOF
backup: true
backup: "{{ leave_etc_backup_files }}"
marker: "# Ansible entries {mark}"
mode: "0644"
notify: Preinstall | propagate resolvconf to k8s components

View File

@@ -7,7 +7,7 @@
blockinfile:
path: "{{ dhclientconffile }}"
state: absent
backup: true
backup: "{{ leave_etc_backup_files }}"
marker: "# Ansible entries {mark}"
notify: Preinstall | propagate resolvconf to k8s components

View File

@@ -1,70 +1,77 @@
---
crictl_checksums:
arm:
v1.32.0: 0
v1.31.1: 0
v1.31.0: 0
v1.30.1: 0
v1.30.0: 0
v1.29.0: 0
arm64:
v1.32.0: f2f4e20658b72d00897f41e4b57093c8080e2d800ee894a5f4351f31d1833e30
v1.31.1: cd70f9b2f75c9619f40450d4b6e2c74aaab619917da517eff6787b442f8b0e56
v1.31.0: f9879541e92fd302db00b9d28ef617744bb8b8b62520bd4c0479819d7d4ae869
v1.30.1: 61da7c11926fd29b27e191c3c25d64f2cb51d39dff72a7c90c1fbbc8d5c70f85
v1.30.0: 3769043fc6018a9e1697fcb768bb89ecd429176bd71e849058916f79a46a07a8
v1.29.0: 0b615cfa00c331fb9c4524f3d4058a61cc487b33a3436d1269e7832cf283f925
amd64:
v1.32.0: f050b71d3a73a91a4e0990b90143ed04dcd100cc66f953736fcb6a2730e283c4
v1.31.1: 0a03ba6b1e4c253d63627f8d210b2ea07675a8712587e697657b236d06d7d231
v1.31.0: 9daa32308090aedee5a7f2ab1f1428fef6f669a64e993f0b5b98db8ef6edd71b
v1.30.1: 71873cdeeeb6c9ee0f79c27b45db38066da81f0c30dcda909b4eedc3aff63f59
v1.30.0: 3dd03954565808eaeb3a7ffc0e8cb7886a64a9aa94b2bfdfbdc6e2ed94842e49
v1.29.0: d16a1ffb3938f5a19d5c8f45d363bd091ef89c0bc4d44ad16b933eede32fdcbb
ppc64le:
v1.32.0: 4ffaf29bbda8df42ed2dda4f1ad33cc785987701dc8d1e0043c17cfea9af43e0
v1.31.1: 8a9f39335601ae3a749d90287a3f0980de01366748b83c0b067c0bf05228ad7d
v1.31.0: ed545379a61deff415172ea3ca6b847166c5d116c7a1271866286cd0242c09a2
v1.30.1: bafdeb709f714619c1f91579d485f46b44f1bbce2dc94227a3db761fbeb58664
v1.30.0: ada550cecb5647014f16dd3ff6c59d7ef7d942ca8cb6c51c15ed019622f39ee9
v1.29.0: 2803a1865045077f29f798b9c569e1db7d44b5c329a546a0fd183e906925b99f
crio_archive_checksums:
arm:
v1.32.0: 0
v1.31.3: 0
v1.31.2: 0
v1.31.1: 0
v1.31.0: 0
v1.30.3: 0
v1.30.2: 0
v1.30.1: 0
v1.30.0: 0
v1.29.2: 0
v1.29.1: 0
v1.29.0: 0
arm64:
v1.32.0: b092eddabedac98a0f8449dc535acfec0e14c21f59cabe8f9703043d995a1a41
v1.31.3: 150e828420848d7dc4d190c13b313c7033f9a255a6e656e32b98f27ac574daa0
v1.31.2: ba0e71699aa7a0e995ac2563b8aee2f2a3358ac120edb8b951e151824f16d5a4
v1.31.1: 760d00cecaf1b6bf5a3bfae39daa5e46a74408f7a6869cbb41716a5610a7a18f
v1.31.0: d54afe0140afde0bed09136bd923d8fb415c9016189e7f1b719565ec84edf737
v1.30.3: 2e47b4b307788b15263256e0e423574c60eec80e17576704df736a7ccc13d7bd
v1.30.2: 6c0ed1a8a38c65fda45d8b725b8742d247e9f658d8cd6c56baa05bd749b9ccbe
v1.30.1: 371a6da24dfc7c9e01f29191b36a0629474a37cd8300fa8a36483647a7859b72
v1.30.0: 7e7c934cebff6433594e4cdc440e1ceb5602741a35d74b2342dac6fb585c3549
v1.29.2: e2ddaeb9d46b6a39057e67f77f5840e79d2226839014d77eb6ef243b88761f7a
v1.29.1: f7d7ca187b44ec490f4511e32f5a6bdf2d5ff14fb3dd1b452e330d7369e69c29
v1.29.0: 2bf11aeb85362ce4b25a7d9fc17bbe80659013425430e5efb922b4388031a027
amd64:
v1.32.0: 8f483f1429d2d9cd6bfa6db2e3a4263151701dd4f05f2b1c06cf8e67c44ea67e
v1.31.3: 9241c5676934b1cba216abcd573361b72b5a88fe0696ada0ff338db7cee77b4b
v1.31.2: d035a728c0c3e05e734d69d4a488d7509ac281fa12ae0c228dee257e9da41237
v1.31.1: ea51b7db06ca97ecf7a76d0341ca168dca102a21fb14f97b1fc139c8e7fb1d47
v1.31.0: 3cc88ce3c19b2f9bbdfaa1bd42eea64bd7d5ffac6e714a83abbdea40df9ef8c2
v1.30.3: 622809ec7e21350a3ff7897c7d2cabdf4367b1a5904d346514adc485de3c7172
v1.30.2: 10be07d2626a093b58a29110e84256029d4c46aeb06a6b41e8bddc30bcfcaa4b
v1.30.1: 7293f51295d89106e59fe0f83af9599e71fe4f446e1b13c40687ef63ecc1b194
v1.30.0: c2b189febc9f9cb51f84eecad0da955182e31b98a9f456314546bb83ee2a901a
v1.29.2: 55e71ef1bceb1cd9490ec85fdbfc889d3f3a9dd2ef3b8954dcbcf33cb6609167
v1.29.1: 127ca9f57c2a3ad44dde2e64e0ec94169886245dffb74c12e68eedc80756c260
v1.29.0: 79c161d8db8ee7f0f4807d6232283d481ef0c20c514b61289238258f66734ac6
ppc64le:
v1.32.0: e0544544c91f603afaf54ed814c8519883212bcb149f53a8be9bb0c749e9ec86
v1.31.3: 0ce44aab3645256ac68840e54aa9720ac559c4b877e1566829b4c4b193999b75
v1.31.2: 57596bb63aef508e86f3b41672816f02a6dee3b1a71ce472756d2c7aed836407
v1.31.1: 94b3b1b8cebd3a3b3483cbefd11826fadaa240302c4b61f98c29bd2bf3dd72ee
v1.31.0: 46d901644f86d25dd62f12c16bd88cf26a0b9c400405f571fc5b68abdfefad95
v1.30.3: 44ed039a1c0c492b14212bbe59c63fe804e3cc525102f47475a5bc0ffd08f4e8
v1.30.2: 19169b1ef3324c749a0b0105b47288c0ef4949964b340c85229d00234e6148a1
v1.30.1: e6fb05de749a06316d046e46f8ff4345a413264e63f63dc9e3f1db2cb8a7c962
v1.30.0: e6fe5c39fa7b7cf8167bb59b94dc9028f8def0c4fec4c1c9028ec4b84da6c53a
v1.29.2: 6577d1476124bcd6bcfd25419bb0d1dc01585dc6e8246a986a7769ad2af407fa
v1.29.1: e26613e038d48271ad83877e5db5ad6f2116181d202495de849d378ab4a76062
v1.29.0: 8adddaf6cf0ed2905820dc162ca5ef541baa7b251368ee00c75435a872a886fb
# Checksum
# Kubernetes versions above Kubespray's current target version are untested and should be used with caution.
kubelet_checksums:
arm:
v1.32.0: 0
v1.31.4: 0
v1.31.3: 0
v1.31.2: 0
v1.31.1: 0
v1.31.0: 0
@@ -75,18 +82,8 @@ kubelet_checksums:
v1.30.2: 0
v1.30.1: 0
v1.30.0: 0
v1.29.10: 0
v1.29.9: 0
v1.29.8: 0
v1.29.7: 0
v1.29.6: 0
v1.29.5: 0
v1.29.4: 0
v1.29.3: 0
v1.29.2: 0
v1.29.1: 0
v1.29.0: 0
arm64:
v1.32.0: bda9b2324c96693b38c41ecea051bab4c7c434be5683050b5e19025b50dbc0bf
v1.31.4: fb6f02f3324a72307acc11998eb5b1c3778167ae165c98f9d49bd011498e72f8
v1.31.3: 0ec590052f2d1cee158a789d705ca931cbc2556ceed364c4ad754fd36c61be28
v1.31.2: 118e1b0e85357a81557f9264521c083708f295d7c5f954a4113500fd1afca8f8
@@ -101,20 +98,8 @@ kubelet_checksums:
v1.30.2: 72ceb082311b42032827a936f80cd2437b8eee03053d05dbe36ba48585febfb8
v1.30.1: c45049b829af876588ec1a30def3884ce77c2c175cd77485d49c78d2064a38fb
v1.30.0: fa887647422d34f3c7cc5b30fefcf97084d2c3277eff237c5808685ba8e4b15a
v1.29.12: 92237be83840bf8dd2318cb281ce309e907e0b665cac6b7629a5fa43a11ae606
v1.29.11: c0d0a26a4e0c2e0ec00a26afe7cd0b667bd52c6b2629314a9a93164e02f8f69a
v1.29.10: 1c750d983e3d2fddeb829d1b9bdcf83c7d81e6f9cf7e1b50ccd9daad47807915
v1.29.9: 2bfcfc069bc0ea86a5cecc14c8f442dcc0871a7ba5cc3161b6311592967c040f
v1.29.8: f645f4309cadf76b7567d89feeebf979cf46940ea23618c776165712798c0704
v1.29.7: f088079f26fb3bffc8a1c467e1caa5ad807023b63e70013e874163df87be6829
v1.29.6: 0f0fa9429d0bcf04f271dcf4f666582dd4a4b15d6f116a45f17b5fcda90c2d2c
v1.29.5: 0d4328a3c67e4f0dbf270fa49343f3eab9316adde1a1bd2a857fa56876a9aff1
v1.29.4: dc4bb6ea6cd35b024d63cc20d1c1800a9c695bd6f70411c57358d7c407513b00
v1.29.3: 891dce19ed0eae34050c2eca0454204892e97bfe1a926f988cd044a987a9c7c9
v1.29.2: 9b4aa572d4cd51a41b1067161d961423d0d12b120fb636ea887a12a975d4b19a
v1.29.1: e46417ab1ceae995f0e00d4177959a36ed34b807829422bc9dda70b263fe5c5d
v1.29.0: 0e0e4544c2a0a3475529154b7534d0d58683466efa04a2bb2e763b476db0bb16
amd64:
v1.32.0: 5ad4965598773d56a37a8e8429c3dc3d86b4c5c26d8417ab333ae345c053dae2
v1.31.4: 9062fbb2b6054ecab07b9e841b0a49ab4acc224860b01c218d01ba95017c5e49
v1.31.3: a5c9e871541251db15436fc307d945217e160d12920730070417ba8037e090df
v1.31.2: b0de6290267bbb4f6bcd9c4d50bb331e335f8dc47653644ae278844bb04c1fb6
@@ -129,20 +114,8 @@ kubelet_checksums:
v1.30.2: 6923abe67ef069afca61c71c585023840426e802b198298055af3a82e11a4e52
v1.30.1: 87bd6e5de9c0769c605da5fedb77a35c8b764e3bda1632447883c935dcf219d3
v1.30.0: 32a32ec3d7e7f8b2648c9dd503ce9ef63b4af1d1677f5b5aed7846fb02d66f18
v1.29.12: 45475d908f6c44bfbf994fec91a4d5ceebf41d93c9f3867e687b2fa67b57b5b0
v1.29.11: 1aaa9025cceac0c9a4df295a58aa79d8932a5b13a43c8910412c9ef970c42d21
v1.29.10: 4cc094062cd1cff49ca551208635669ab86e3982d38e8d0a77ab833a941ff708
v1.29.9: 1bf6e9e9a5612ea4f6e1a8f541a08be93fdf144b1430147de6916dae363c34a2
v1.29.8: df6e130928403af8b4f49f1197e26f2873a147cd0e23aa6597a26c982c652ae0
v1.29.7: f16329e64f5b2204c1cb906f694abebb7f6869d56e6e8b60b54afa0057006b84
v1.29.6: a946789d4fef64e6f5905dbd7dca01d4c3abd302d0da7958fdaa924fe2729c0b
v1.29.5: 261dc3f3c384d138835fe91a02071c642af94abb0cca56ebc04719240440944c
v1.29.4: 58571f0ed62543a9bbac541e52c15d8385083113a463e23aec1341d0b5043939
v1.29.3: d8b55a2f8a87c8cd2cbf867d76d1d7f98b7198a740db19bad6ed7b8b813de771
v1.29.2: f71a85039b71fe08f1c063a93d61a1c952dc8f9a8c6be9b13fbdac8f0d9ff960
v1.29.1: 1b1975c58d38be1a99a8bcba4564ac489afd223b0abe9f2ab08bbde89d2412a3
v1.29.0: e1c38137db8d8777eed8813646b59bf4d22d19b9011ab11dc28e2e34f6b80a05
ppc64le:
v1.32.0: 99d409a8023224d84c361e29cdf21ac0458a5449f03e12550288aa654539e3a1
v1.31.4: 184154c5aa25539cf0547bbcde6d8bee7b8e05984f28da8c996a513787eef8ed
v1.31.3: 46bd2fcd44ce9ec2a77009ae8248a3fd652305e9866c562b01524a99b18cda7a
v1.31.2: b7eb859eaa5494273c587b0dcbb75a5a27251df5e140087de542cb7e358d79b1
@@ -157,21 +130,9 @@ kubelet_checksums:
v1.30.2: 268dfbb7ee3abcb8ff9fd0a88f81204e40dd33d177f7878941c9ff6b7cca0474
v1.30.1: 1ac58eae0aa02fefad47d2318bfa5846ae0d7d11a5b691850cd86b2b614ceffe
v1.30.0: 8d4aa6b10bcddae9a7c754492743cfea88c1c6a4628cab98cdd29bb18d505d03
v1.29.12: 77cfe8af37201bb6ab1173c90eded7c8bbc264a731c189a43db6336ca89b8930
v1.29.11: aaf4384f7c2c7c2cb734480394a159ff51bba59b5b0db310f2fc15d2c762a3a1
v1.29.10: fbe2c747416ae82690cb255d1122266fd251d182a409e6b6c99a03836254efe1
v1.29.9: 0ad253452a40c87dec85bb6621e8ec9658c11856707868e5a65d2283b19ed2fa
v1.29.8: 916a7887ee0a9469b11f82645eecc5b97466faaa9dc4c7d13473c6ff22f2f305
v1.29.7: 52a70e6c9cab9f123cc0f2677b65ac6426cfc549d375c64008b43bcb8fae1d76
v1.29.6: 77c2256d6863ac0e33a0e8e8c4cc798618ae73aac91b4f18b9e87d8e62973c61
v1.29.5: b0caa52184a3e89a7f529c776ebabd7d34aecad560614f787fe08cff777a43cb
v1.29.4: 1ecc89b6f17df357835e3e56f553ec27f2aea69a5865dfb39cff77e6e70e6adb
v1.29.3: 811f2b17f443cd694b8650f5ec2c7e3a59394f8bf3e25d16182549aaab16a420
v1.29.2: b0eb5e0362a4e153ed1239c65b0abb02b2d9fbbca6846d0bab8b285de8c84fca
v1.29.1: 467d2b457205363f53f72081295ea390fc25215b0ccc29dc04c4f82925266067
v1.29.0: 67f09f866d3e4aee8211ce9887ec8bc427b188474a882a7af999fc0fee939028
kubectl_checksums:
arm:
v1.32.0: 6b33ea8c80f785fb07be4d021301199ae9ee4f8d7ea037a8ae544d5a7514684e
v1.31.4: 055d1672f63fda86c6dfa5a2354d627f908f68bde6bf8394fdc9a99cadc4de19
v1.31.3: e0d00fbac98e67b774ff1ed9a0e6fc5be5c1f08cc69b0c8b483904ed15ad8c50
v1.31.2: f2a638bdaa4764e82259ed1548ce2c86056e33a3d09147f7f0c2d4ee5b5e300c
@@ -186,20 +147,8 @@ kubectl_checksums:
v1.30.2: 2dab982920d87bc9a17c539bfa4f94b758afc454bb044029dee06144e8dbee08
v1.30.1: b05c4c4b1c440e8797445b8b15e9f4a00010f1365533a2420b9e68428da19d89
v1.30.0: ff54e96c73f4b87d740768f77edada7df8f2003f278d3c79bbbaa047b1fc708d
v1.29.12: 64e736bd7e6fdbad029ba20a3c2f0a97a26ecea90ec2bd7d2d02c9c46f587daa
v1.29.11: 3ee9c2a1d5de61cd2cb90440533af91b80f079db9288e697a8fc643a9ba241aa
v1.29.10: 5062cab3e174a98d5a6d44f31a0055caa4f98f48839687b5742f086315bdbb58
v1.29.9: 7e1e681c2ea5f620a444922b0883bfdb201c1f5c3a54238ff6a55206a0ce3d76
v1.29.8: f59f597d5e6174479185b54d0014e0bf84b7110c707fe07b133f94a7d7ae45be
v1.29.7: cf875cbbdca7ea0e190075c7a4b3f2fa59864079c1fe9da482f8806b1ad64364
v1.29.6: 7762244b8da5564d2ee6a65403dd3aa3f94e8e9b16887c51936a4e941de8fd95
v1.29.5: f3c83a9674098c5a4f27defed001934719f487897dd61db1992057e5ed103b3e
v1.29.4: ff4a1f437dc902b73505841a7705a6405694856a798e962ec2fdf7793f0aeadb
v1.29.3: 12f72bd88eaa04cd8f09827c64195a695fdd5fb64e11c98524c83d21bcb0e37a
v1.29.2: f1bab202f0ce0c4209af0a977fc3dd4076397b1983544e09942ca4f586dff900
v1.29.1: a4b478cc0e9adaab0c5bb3627c20c5228ea0fe2aeff9e805d611eb3edb761972
v1.29.0: a2388eb458d07ec734e4fa02fd0147456a1922a7d6b8e67a32db9d64a4d7621c
arm64:
v1.32.0: ba4004f98f3d3a7b7d2954ff0a424caa2c2b06b78c17b1dccf2acc76a311a896
v1.31.4: b97e93c20e3be4b8c8fa1235a41b4d77d4f2022ed3d899230dbbbbd43d26f872
v1.31.3: a3953ad2b32eca0b429249a5fbdf4f8ef7d57223c45cc0401fd80fd12c7b9071
v1.31.2: bb9fd6e5a92c2e2378954a2f1a8b4ccb2e8ba5a3635f870c3f306a53b359f971
@@ -214,20 +163,8 @@ kubectl_checksums:
v1.30.2: 56becf07105fbacd2b70f87f3f696cfbed226cb48d6d89ed7f65ba4acae3f2f8
v1.30.1: d90446719b815e3abfe7b2c46ddf8b3fda17599f03ab370d6e47b1580c0e869e
v1.30.0: 669af0cf520757298ea60a8b6eb6b719ba443a9c7d35f36d3fb2fd7513e8c7d2
v1.29.12: 1cf2c00bb4f5ee6df69678e95af8ba9a4d4b1050ddefb0ae9d84b5c6f6c0e817
v1.29.11: d0fcb8ead20f45ffab2d680b84a93c8e459b2c7c1d6dadf566769cf59f04c506
v1.29.10: 4cfa950fbd354bdc655cc425494aa77fe81710bc8f7d3f95285338aac223cc82
v1.29.9: 0fc73b3e4bf5395e0182ae62df24a96d5870baa44fabcc50b5eb2d8dcf22dd78
v1.29.8: adf0007e702e05f59fb8de159463765c4440f872515bd04c24939d9c8fb5e4c7
v1.29.7: 7b6649aaa298be728c5fb7ccb65f98738a4e8bda0741afbd5a9ed9e488c0e725
v1.29.6: 21816488cf3af4cf2b956ee58f7afc5b4964c29488f63756f5ddcf09b0df5be9
v1.29.5: 9ee9168def12ac6a6c0c6430e0f73175e756ed262db6040f8aa2121ad2c1f62e
v1.29.4: 61537408eedcad064d7334384aed508a8aa1ea786311b87b505456a2e0535d36
v1.29.3: 191a96b27e3c6ae28b330da4c9bfefc9592762670727df4fcf124c9f1d5a466a
v1.29.2: 3507ecb4224cf05ae2151a98d4932253624e7762159936d5347b19fe037655ca
v1.29.1: 96d6dc7b2bdcd344ce58d17631c452225de5bbf59b83fd3c89c33c6298fb5d8b
v1.29.0: 8f7a4bd6bae900a4ddab12bd1399aa652c0d59ea508f39b910e111d248893ff7
amd64:
v1.32.0: 646d58f6d98ee670a71d9cdffbf6625aeea2849d567f214bc43a35f8ccb7bf70
v1.31.4: 298e19e9c6c17199011404278f0ff8168a7eca4217edad9097af577023a5620f
v1.31.3: 981f6b49577068bc174275184d8ee7105d8e54f40733792c519cd85023984c0f
v1.31.2: 399e9d1995da80b64d2ef3606c1a239018660d8b35209fba3f7b0bc11c631c68
@@ -242,20 +179,8 @@ kubectl_checksums:
v1.30.2: c6e9c45ce3f82c90663e3c30db3b27c167e8b19d83ed4048b61c1013f6a7c66e
v1.30.1: 5b86f0b06e1a5ba6f8f00e2b01e8ed39407729c4990aeda961f83a586f975e8a
v1.30.0: 7c3807c0f5c1b30110a2ff1e55da1d112a6d0096201f1beb81b269f582b5d1c5
v1.29.12: 35fc028853e6f5299a53f22ab58273ea2d882c0f261ead0a2eed5b844b12dbfb
v1.29.11: 14d7ea4ada60ff15ef3b7734a83c4d05cff164d4843b6f4c081a50b86547c17d
v1.29.10: 24f2f09a635d36b2ce36eaebf191326e2b25097eec541a3e47fee6726ef06cef
v1.29.9: 7b0de2466458cc3c12cf8742dc800c77d4fa72e831aa522df65e510d33b329e2
v1.29.8: 038454e0d79748aab41668f44ca6e4ac8affd1895a94f592b9739a0ae2a5f06a
v1.29.7: e3df008ef60ea50286ea93c3c40a020e178a338cea64a185b4e21792d88c75d6
v1.29.6: 339553c919874ebe3b719e9e1fcd68b55bc8875f9b5a005cf4c028738d54d309
v1.29.5: 603c8681fc0d8609c851f9cc58bcf55eeb97e2934896e858d0232aa8d1138366
v1.29.4: 10e343861c3cb0010161e703307ba907add2aeeeaffc6444779ad915f9889c88
v1.29.3: 89c0435cec75278f84b62b848b8c0d3e15897d6947b6c59a49ddccd93d7312bf
v1.29.2: 7816d067740f47f949be826ac76943167b7b3a38c4f0c18b902fffa8779a5afa
v1.29.1: 69ab3a931e826bf7ac14d38ba7ca637d66a6fcb1ca0e3333a2cafdf15482af9f
v1.29.0: 0e03ab096163f61ab610b33f37f55709d3af8e16e4dcc1eb682882ef80f96fd5
ppc64le:
v1.32.0: 9f3f239e2601ce53ec4e70b80b7684f9c89817cc9938ed0bb14f125a3c4f8c8f
v1.31.4: 5089625fc8f4dc7082c6e0186a839c8d4e791ad15bcbbc586d4839f25f12a3df
v1.31.3: a5855c5fb02cc40c68eee603f08a5c5bcf86d85e6c9e757f450d4fd6138e89d4
v1.31.2: 3a9405b1f8f606f282abb03bf3f926d160be454c21b3867505f15ad2123d4139
@@ -270,21 +195,11 @@ kubectl_checksums:
v1.30.2: 738bc1bad45df79fc4313d167a68ed5a1cf747f1f94e4434f0733e3126989f2e
v1.30.1: ef01ae21e91600469db3df01172144fac6c61083e7d3282bef72ce732d76d0d8
v1.30.0: f8a9eac6e12bc8ab7debe6c197d6536f5b3a9f199e8837afd8e4405291351811
v1.29.12: 439ebbe9d1710cccfeb3fe1fede02cffbee68c2f3ff1af17d25ada49620af1ba
v1.29.11: 6f9a43ae26afac5ab2e5d7fb6db21bd99187ca35f5eca909dd4e05137c6151a5
v1.29.10: 306973163a10c01a76ebab384b0e900b3293b4f6cc935dfd517f0bb69399be54
v1.29.9: 65bc83b5c7bda9ab04b8fe59d7f81e74bc70a4b2d1b7cb0e83fdd7b4ac284211
v1.29.8: 5a9d8631cf3200bdd714bfe7be1fd25a8e33e1dbd6a79f2df3bfe9076f525ed2
v1.29.7: fd2bb7de3d46a375c63499f8235dc22901b563a9554f315f7606e0bac78fff94
v1.29.6: cc145dc1f27f56c81aa2c96c97370e1341b41fbb4fc64cfde4ef4956230fc0e9
v1.29.5: 1d2635f6bd0218c53037c113171479e15e51b60823f7f1b93afb48ae1d9e5b09
v1.29.4: 10a1a7e4423483a386ab1ab9237cda1e9d24423c2cf23b7fa514f533aa23ce87
v1.29.3: 84292286ed2941e52a9df9ccaaf30c3bfebe02a096b67e553d8b643295f231f0
v1.29.2: 382552d15a1aa7ec5a316b2a912e7fbdaaff2f3c714cd38b2b0c6a48b670fed8
v1.29.1: b7780124ccfe9640f3a37d242d31e8dbb252bcd379bd0d7bf3776d15baf15ca3
v1.29.0: ea926d8cf25e2ce982ff5c375da32b51ccbd122b721b1bc4a32f52a9a0d073ab
kubeadm_checksums:
arm:
v1.32.0: 0
v1.31.4: 0
v1.31.3: 0
v1.31.2: 0
v1.31.1: 0
v1.31.0: 0
@@ -295,18 +210,8 @@ kubeadm_checksums:
v1.30.2: 0
v1.30.1: 0
v1.30.0: 0
v1.29.10: 0
v1.29.9: 0
v1.29.8: 0
v1.29.7: 0
v1.29.6: 0
v1.29.5: 0
v1.29.4: 0
v1.29.3: 0
v1.29.2: 0
v1.29.1: 0
v1.29.0: 0
arm64:
v1.32.0: 5da9746a449a3b8a8312b6dd8c48dcb861036cf394306cfbc66a298ba1e8fbde
v1.31.4: 4598c2f0c69e60feb47a070376da358f16efe0e1403c6aca97fa8f7ab1d0e7c0
v1.31.3: 8113900524bd1c8b3ce0b3ece0d37f96291cbf359946afae58a596319a5575c8
v1.31.2: 0f9d231569b3195504f8458415e9b3080e23fb6a749fe7752abfc7a2884efadf
@@ -321,20 +226,8 @@ kubeadm_checksums:
v1.30.2: 7268762b7afd44bf07619985dd52c376b63e47d73b8f9a3b08cc49624a8fbd55
v1.30.1: bda423cb4b9d056f99a2ef116bdf227fadbc1c3309fa3d76da571427a7f41478
v1.30.0: c36afd28921303e6db8e58274de16c60a80a1e75030fc3c4e9c4ed6249b6b696
v1.29.12: d953ed504c2ddd08272d45cc94439fc69b7ffd77ff1d0c78917b3275a5c9c044
v1.29.11: c1482dec2a478e7142b4c4d6fe9434cc04b02e4f760c19bab9287ec05df2d539
v1.29.10: a10b015db9b5b5a29420a9f8e696c39f11d171bb9d67ad39c1a6b05f7da6d823
v1.29.9: aa3bd8120fb4c2e8d6eb1b2b9bb27dad92f08ce53275b7246f0f32eff3a025c1
v1.29.8: 37ff550d5c1af726032a38206daeedfd2e0de6124c41379b314dbbcba394d0b1
v1.29.7: d0ad904dc3823821c3920499fc151fc83fb6cb9e1c920e39173f96720ad0e053
v1.29.6: 3ba6879ef491cdd8433647020d345d86c0ea8e77f726375bc4b5495888bbf778
v1.29.5: d4db8c514f2764edc039462c218dbcd316577f76f21b209b76e9a4b1f08e3100
v1.29.4: 438287a91e08cbefecab79be8ac893a935c3dbf6e87bea895fb99f2bc38cf06e
v1.29.3: ce2e4c230f954e59ae77e34c4ff2ae08cad3970505ae1e21b6337e6d83b21682
v1.29.2: e05720feb9d2d67eff25b0156a5c22e2de37be2ffab4e1f4d31e8c526fafd0e1
v1.29.1: 3bff8c50c104c45e416cce9991706c6ac46365f0defbcd54f8cf4ace0fa68dcf
v1.29.0: bbddee2d46d2e1643ae3623698b45b13aa2e858616d61c642f2f49e5bb14c980
amd64:
v1.32.0: 8a10abe691a693d6deeeb1c992bc75da9d8c76718a22327688f7eb1d7c15f0d6
v1.31.4: 6c8e2fd2fa2cab51debf215fcb9149b94e7046f69ff558290066875200975cf6
v1.31.3: dcfcc6eb79e94994d5f1b04a7746239214030ce8a2e8b0e21a4772938f911d12
v1.31.2: e3d3f1051d9f7e431aabaf433f121c76fcf6d8401b7ea51f4c7af65af44f1e54
@@ -349,20 +242,8 @@ kubeadm_checksums:
v1.30.2: 672b0cae2accce5eac10a1fe4ea6b166e5b518c79ccf71a2fbe7b53c2ca74062
v1.30.1: 651faa3bbbfb368ed00460e4d11732614310b690b767c51810a7b638cc0961a2
v1.30.0: 29f4232c50e6524abba3443ff3b9948d386964d79eb8dfefb409e1f8a8434c14
v1.29.12: bce712631bc425726b45930e58b00790c2ab3deec4282f86af353ea907817c46
v1.29.11: 6cf3567bd69a14859fb80fb39a09196dc2de1729ae72566e7e4819c5600e49c6
v1.29.10: 9098c908e0f3a601e8bef9b2cdb4a9777e18204595a6542be58b3928c7b51440
v1.29.9: e313046d79c3f78d487804fc32248c0575af129bc8a31bca74b191efa036e0b1
v1.29.8: fe054355e0ae8dc35d868a3d3bc408ccdff0969c20bf7a231ae9b71484e41be3
v1.29.7: 7699c6f06fbc8e813766b8237de69a095ad820fe484856ffd921a7894b5af605
v1.29.6: 8f1e04079e614dd549e36be8114ee7022517d646ea715b5778e7c6ab353eb354
v1.29.5: e424dcdbe661314b6ca1fcc94726eb554bc3f4392b060b9626f9df8d7d44d42c
v1.29.4: ea20ab064f716ab7f69a36d72df340257b31c9721ea86e1cf9d70b35999ddeea
v1.29.3: 6abaa1208bf40b6d1f49e518bd68c8ae4a1be0c5b7d3e45d87979999ab070d8b
v1.29.2: 2d4e4fa8685bcbfb661cb41050cd4756f50a7aa147f68492d51a99f9cdfd69ac
v1.29.1: d4d81d9020b550c896376fb9e0586a9f15a332175890d061619b52b3e9bc6cbd
v1.29.0: 629d4630657caace9c819fd3797f4a70c397fbd41a2a7e464a0507dad675d52c
ppc64le:
v1.32.0: d79fe8cbd1d98bcbe56b8c0c3a64716603581cecf274951af49aa07748bf175a
v1.31.4: 9d0a6abf9595f79660f29625ed649df4f64369e1552aa68eb7ad49b45455ab04
v1.31.3: 646130bbb60949bdc9b7a449298537369b0eff0ff39b9f7f4222a4760ab824be
v1.31.2: 57771542703fbb18916728b3701298fda62f28a1d9f144ae3712846d2bb50f8a
@@ -377,19 +258,6 @@ kubeadm_checksums:
v1.30.2: 8aee71554003411470a5933cdff7896736ae1182055c0de6bb3782d0a7581c71
v1.30.1: dc529fae8227422a23a8d4f70e28161fa207a4da7cb24d340aae0592dd729ea5
v1.30.0: a77badcaff292862df8324e17f74ab7ce3c6ea9f390647878f1838a3a832f413
v1.29.12: e3763de339534dc0eb6faca17a06c183cefb5cf0486f00527233a9eea0788373
v1.29.11: 48bf5ef0a67ad4072a5f956c30d1710a19550df2b441c34f214297d7d7455809
v1.29.10: 5f90095f3bd107fef219984e0aade9a7ab12c960392596e9ec08f31dca73acc1
v1.29.9: c3d815d39d53c8baa61ee6cd1185c59fb29ecf49775585b689a2ae7dde62ca7a
v1.29.8: 8a43d185d3eea5abcf98abe8d4b8ca9fe8d7afef65073e026a3bb16cf5c9df99
v1.29.7: 8570e534f3712511284b2e0122d8fe46e36050a0c009df852b69b2de931c53b7
v1.29.6: 577cdd37fc929be0ffcdc2aa5337bba36a409e00f538da0dcca611a4161be461
v1.29.5: 05c92f52d75268f0aaff5056e0d6b3e03002b2d17432360750100ada9b2c381b
v1.29.4: ec47a2dbe1969b9513b0313b5b07b72a870e5da54864d9c8391ec5e857404659
v1.29.3: c0e1f6e9451f28c7b8abf7d3a081fe97578ada69908135e3390f5783511ff7f8
v1.29.2: a0f8ffa8cbfa4bb061ff028df2f6dbb31a9527c561d8c0186d679559f9f347b4
v1.29.1: 3ec6d90c05dd8e4c6bb1f42fd2fe0f091d85317efaf47d9baebd9af506b3878b
v1.29.0: 4c414a463ed4277e9062c797d1c0435aa7aec2fd1688c5d34e3161c898113cb5
etcd_binary_checksums:
arm:
v3.5.16: 0
@@ -578,46 +446,6 @@ calico_crds_archive_checksums:
v3.27.2: 8154bb4aad887f2a5500b505fe203a918f72c4e602b04c688c4b94f76a26e925
v3.27.1: 76abb0db222af279e3514cfae02be9259097b565bbb2ffcb776ca00566480edb
v3.27.0: 2a4b5132035dfd6ac4abc8d545f33de139350eca523e0c5cfe4ac32e43fcb2f1
krew_archive_checksums:
darwin:
arm:
v0.4.4: 0
v0.4.3: 0
arm64:
v0.4.4: e6ac776140b228d6bdfda11247baf4e9b11068f42005d0975fc260c629954464
v0.4.3: 22f29ce3c3c9c030e2eaf3939d2b00f0187dfdbbfaee37fba8ffaadc46e51372
amd64:
v0.4.4: 5f4d2f34868a87cf1188212cf7cb598e76a32f389054089aad1fa46e6daf1e1b
v0.4.3: 6f6a774f03ad4190a709d7d4dcbb4af956ca0eb308cb0d0a44abc90777b0b21a
ppc64le:
v0.4.4: 0
v0.4.3: 0
linux:
arm:
v0.4.4: 4f3d550227e014f3ba7c72031108ffda0654cb755f70eb96be413a5102d23333
v0.4.3: 68eb9e9f5bba29c7c19fb52bfc43a31300f92282a4e81f0c51ad26ed2c73eb03
arm64:
v0.4.4: f8f0cdbf698ed3e8cb46e7bd213754701341a10e11ccb69c90d4863e0cf5a16a
v0.4.3: 0994923848882ad0d4825d5af1dc227687a10a02688f785709b03549dd34d71d
amd64:
v0.4.4: e471396b0ed4f2be092b4854cc030dfcbb12b86197972e7bef0cb89ad9c72477
v0.4.3: 5df32eaa0e888a2566439c4ccb2ef3a3e6e89522f2f2126030171e2585585e4f
ppc64le:
v0.4.4: 0
v0.4.3: 0
windows:
arm:
v0.4.4: 0
v0.4.3: 0
arm64:
v0.4.4: 0
v0.4.3: 0
amd64:
v0.4.4: da0dfeb2a598f11fb9ce871ee7f3b1a69beb371a45f531ee65a71b2201511d28
v0.4.3: d1343a366a867e9de60b23cc3d8ee935ee185af25ff8f717a5e696ba3cae7c85
ppc64le:
v0.4.4: 0
v0.4.3: 0
helm_archive_checksums:
arm:
v3.16.4: 432e774d1087d3773737888d384c62477b399227662b42cbf0c32e95e6e72556

View File

@@ -124,34 +124,33 @@ kube_router_version: "v2.0.0"
multus_version: "v4.1.0"
helm_version: "v3.16.4"
nerdctl_version: "1.7.7"
krew_version: "v0.4.4"
skopeo_version: "v1.16.1"
# Get kubernetes major version (i.e. 1.17.4 => 1.17)
kube_major_version: "{{ kube_version | regex_replace('^v([0-9])+\\.([0-9]+)\\.[0-9]+', 'v\\1.\\2') }}"
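A minimal sketch of what that filter produces (the task form is illustrative; the regex is the one above):

```yaml
- name: Show the derived minor series
  debug:
    msg: "{{ 'v1.32.0' | regex_replace('^v([0-9])+\\.([0-9]+)\\.[0-9]+', 'v\\1.\\2') }}"  # -> v1.32
```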
pod_infra_supported_versions:
v1.32: "3.10"
v1.31: "3.10"
v1.30: "3.9"
v1.29: "3.9"
pod_infra_version: "{{ pod_infra_supported_versions[kube_major_version] }}"
etcd_supported_versions:
v1.32: "v3.5.16"
v1.31: "v3.5.16"
v1.30: "v3.5.16"
v1.29: "v3.5.16"
etcd_version: "{{ etcd_supported_versions[kube_major_version] }}"
crictl_supported_versions:
v1.32: "v1.32.0"
v1.31: "v1.31.1"
v1.30: "v1.30.1"
v1.29: "v1.29.0"
crictl_version: "{{ crictl_supported_versions[kube_major_version] }}"
crio_supported_versions:
v1.31: v1.31.0
v1.32: v1.32.0
v1.31: v1.31.3
v1.30: v1.30.3
v1.29: v1.29.1
crio_version: "{{ crio_supported_versions[kube_major_version] }}"
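The `*_supported_versions` maps above all follow the same two-step resolution — derive `kube_major_version`, then index the map; a sketch (task form illustrative, value taken from the map above):

```yaml
- name: Show the cri-o version pinned for the current minor release
  debug:
    msg: "{{ crio_supported_versions[kube_major_version] }}"  # v1.32 -> v1.32.0 per the map above
```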
# Scheduler plugins doesn't build for K8s 1.29 yet
@@ -188,7 +187,6 @@ kata_containers_download_url: "{{ github_url }}/kata-containers/kata-containers/
gvisor_runsc_download_url: "{{ storage_googleapis_url }}/gvisor/releases/release/{{ gvisor_version }}/{{ ansible_architecture }}/runsc"
gvisor_containerd_shim_runsc_download_url: "{{ storage_googleapis_url }}/gvisor/releases/release/{{ gvisor_version }}/{{ ansible_architecture }}/containerd-shim-runsc-v1"
nerdctl_download_url: "{{ github_url }}/containerd/nerdctl/releases/download/v{{ nerdctl_version }}/nerdctl-{{ nerdctl_version }}-{{ ansible_system | lower }}-{{ image_arch }}.tar.gz"
krew_download_url: "{{ github_url }}/kubernetes-sigs/krew/releases/download/{{ krew_version }}/krew-{{ host_os }}_{{ image_arch }}.tar.gz"
containerd_download_url: "{{ github_url }}/containerd/containerd/releases/download/v{{ containerd_version }}/containerd-{{ containerd_version }}-linux-{{ image_arch }}.tar.gz"
cri_dockerd_download_url: "{{ github_url }}/Mirantis/cri-dockerd/releases/download/v{{ cri_dockerd_version }}/cri-dockerd-{{ cri_dockerd_version }}.{{ image_arch }}.tgz"
skopeo_download_url: "{{ github_url }}/lework/skopeo-binary/releases/download/{{ skopeo_version }}/skopeo-linux-{{ image_arch }}"
@@ -214,7 +212,6 @@ kata_containers_binary_checksum: "{{ kata_containers_binary_checksums[image_arch
gvisor_runsc_binary_checksum: "{{ gvisor_runsc_binary_checksums[image_arch][gvisor_version] }}"
gvisor_containerd_shim_binary_checksum: "{{ gvisor_containerd_shim_binary_checksums[image_arch][gvisor_version] }}"
nerdctl_archive_checksum: "{{ nerdctl_archive_checksums[image_arch][nerdctl_version] }}"
krew_archive_checksum: "{{ krew_archive_checksums[host_os][image_arch][krew_version] }}"
containerd_archive_checksum: "{{ containerd_archive_checksums[image_arch][containerd_version] }}"
skopeo_binary_checksum: "{{ skopeo_binary_checksums[image_arch][skopeo_version] }}"
@@ -360,9 +357,9 @@ csi_livenessprobe_image_repo: "{{ kube_image_repo }}/sig-storage/livenessprobe"
csi_livenessprobe_image_tag: "v2.5.0"
snapshot_controller_supported_versions:
v1.32: "v7.0.2"
v1.31: "v7.0.2"
v1.30: "v7.0.2"
v1.29: "v7.0.2"
snapshot_controller_image_repo: "{{ kube_image_repo }}/sig-storage/snapshot-controller"
snapshot_controller_image_tag: "{{ snapshot_controller_supported_versions[kube_major_version] }}"
@@ -946,19 +943,6 @@ downloads:
groups:
- kube_control_plane
krew:
enabled: "{{ krew_enabled }}"
file: true
version: "{{ krew_version }}"
dest: "{{ local_release_dir }}/krew-{{ host_os }}_{{ image_arch }}.tar.gz"
sha256: "{{ krew_archive_checksum }}"
url: "{{ krew_download_url }}"
unarchive: true
owner: "root"
mode: "0755"
groups:
- kube_control_plane
registry:
enabled: "{{ registry_enabled }}"
container: true

View File

@@ -18,10 +18,10 @@ kubelet_fail_swap_on: true
kubelet_swap_behavior: LimitedSwap
## Change this to use another Kubernetes version, e.g. a current beta release
kube_version: v1.31.4
kube_version: v1.32.0
## The minimum version working
kube_version_min_required: v1.29.0
kube_version_min_required: v1.30.0
## Kube Proxy mode One of ['iptables', 'ipvs']
kube_proxy_mode: ipvs
@@ -285,7 +285,8 @@ kubelet_shutdown_grace_period_critical_pods: 20s
cloud_provider: ""
# External Cloud Controller Manager (Formerly known as cloud provider)
# cloud_provider must be "external", otherwise this setting is invalid.
# Supported external cloud controllers are: 'openstack', 'vsphere', 'oci', 'huaweicloud' and 'hcloud'
# Supported external cloud controllers are: 'openstack', 'vsphere', 'oci', 'huaweicloud', 'hcloud' and 'manual'
# 'manual' does not install the cloud controller manager used by Kubespray.
# If you fill in a value other than the above, the check will fail.
external_cloud_provider: ""
@@ -410,7 +411,6 @@ dashboard_enabled: false
# Addons which can be enabled
helm_enabled: false
krew_enabled: false
registry_enabled: false
metrics_server_enabled: false
enable_network_policy: true
@@ -487,7 +487,62 @@ external_hcloud_cloud:
## the k8s cluster. Only 'AlwaysAllow', 'AlwaysDeny', 'Node' and
## 'RBAC' modes are tested. Order is important.
authorization_modes: ['Node', 'RBAC']
rbac_enabled: "{{ 'RBAC' in authorization_modes }}"
## Structured authorization config
## Structured AuthorizationConfiguration is a new feature in Kubernetes v1.29+ (GA in v1.32) that configures the API server's authorization modes with a structured configuration file.
## AuthorizationConfiguration files offer features not available with the `--authorization-mode` flag, although Kubespray supports both methods and authorization-mode remains the default for now.
## Note: Because the `--authorization-config` and `--authorization-mode` flags are mutually exclusive, the `authorization_modes` ansible variable is ignored when `kube_apiserver_use_authorization_config_file` is set to true. The two features cannot be used at the same time.
## Docs: https://kubernetes.io/docs/reference/access-authn-authz/authorization/#configuring-the-api-server-using-an-authorization-config-file
## Examples: https://kubernetes.io/blog/2024/04/26/multi-webhook-and-modular-authorization-made-much-easier/
## KEP: https://github.com/kubernetes/enhancements/tree/master/keps/sig-auth/3221-structured-authorization-configuration
kube_apiserver_use_authorization_config_file: false
kube_apiserver_authorization_config_authorizers:
- type: Node
name: node
- type: RBAC
name: rbac
## Example for use with kube_webhook_authorization: true
# - type: Webhook
# name: webhook
# webhook:
# connectionInfo:
# type: KubeConfigFile
# kubeConfigFile: "{{ kube_config_dir }}/webhook-authorization-config.yaml"
# subjectAccessReviewVersion: v1beta1
# matchConditionSubjectAccessReviewVersion: v1
# timeout: 3s
# failurePolicy: NoOpinion
# matchConditions:
# # Documentation on CEL: https://kubernetes.io/docs/reference/using-api/cel/
# # only send resource requests to the webhook
# - expression: has(request.resourceAttributes)
# # Don't intercept requests from kube-system service accounts
# - expression: "!('system:serviceaccounts:kube-system' in request.groups)"
# ## Below expressions avoid issues with kubeadm init and other system components that should be authorized by Node and RBAC
# # Don't process node and bootstrap token requests with the webhook
# - expression: "!('system:nodes' in request.groups)"
# - expression: "!('system:bootstrappers' in request.groups)"
# - expression: "!('system:bootstrappers:kubeadm:default-node-token' in request.groups)"
# # Don't process kubeadm requests with the webhook
# - expression: "!('kubeadm:cluster-admins' in request.groups)"
# - expression: "!('system:masters' in request.groups)"
## Two workarounds are required to use AuthorizationConfiguration with kubeadm v1.29.x:
## 1. Enable the StructuredAuthorizationConfiguration feature gate:
# kube_apiserver_feature_gates:
# - StructuredAuthorizationConfiguration=true
## 2. Use the following kubeadm_patches to remove defaulted authorization-mode flags (Workaround for a kubeadm defaulting bug on v1.29.x. fixed in 1.30+ via: https://github.com/kubernetes/kubernetes/pull/123654)
# kubeadm_patches:
# - target: kube-apiserver
# type: strategic
# patch:
# spec:
# containers:
# - name: kube-apiserver
# $deleteFromPrimitiveList/command:
# - --authorization-mode=Node,RBAC
rbac_enabled: "{{ ('RBAC' in authorization_modes and not kube_apiserver_use_authorization_config_file) or (kube_apiserver_use_authorization_config_file and kube_apiserver_authorization_config_authorizers | selectattr('type', 'equalto', 'RBAC') | list | length > 0) }}"
# When enabled, API bearer tokens (including service account tokens) can be used to authenticate to the kubelet's HTTPS endpoint
kubelet_authentication_token_webhook: true
@@ -690,9 +745,6 @@ proxy_disable_env:
https_proxy: ''
no_proxy: ''
# krew root dir
krew_root_dir: "/usr/local/krew"
# sysctl_file_path to add sysctl conf to
sysctl_file_path: "/etc/sysctl.d/99-sysctl.conf"


@@ -23,12 +23,3 @@
when:
- http_proxy is defined or https_proxy is defined
- no_proxy is not defined
# TODO: Clean this task up when we drop backward compatibility support for `etcd_kubeadm_enabled`
- name: Set `etcd_deployment_type` to "kubeadm" if `etcd_kubeadm_enabled` is true
set_fact:
etcd_deployment_type: kubeadm
when:
- etcd_kubeadm_enabled is defined and etcd_kubeadm_enabled
tags:
- always


@@ -352,7 +352,9 @@ spec:
privileged: true
resources:
limits:
{% if calico_node_cpu_limit != "0" %}
cpu: {{ calico_node_cpu_limit }}
{% endif %}
memory: {{ calico_node_memory_limit }}
requests:
cpu: {{ calico_node_cpu_requests }}


@@ -126,6 +126,10 @@ spec:
- name: TYPHA_PROMETHEUSMETRICSPORT
value: "{{ typha_prometheusmetricsport }}"
{% endif %}
{% if calico_ipam_host_local %}
- name: USE_POD_CIDR
value: "true"
{% endif %}
{% if typha_secure %}
volumeMounts:
- mountPath: /etc/typha
@@ -135,10 +139,6 @@ spec:
subPath: ca.crt
name: cacert
readOnly: true
{% endif %}
{% if calico_ipam_host_local %}
- name: USE_POD_CIDR
value: "true"
{% endif %}
livenessProbe:
httpGet:


@@ -58,7 +58,7 @@ calico_felix_floating_ips: Disabled
# Limits for apps
calico_node_memory_limit: 500M
calico_node_cpu_limit: 300m
calico_node_cpu_limit: "0"
calico_node_memory_requests: 64M
calico_node_cpu_requests: 150m
calico_felix_chaininsertmode: Insert


@@ -1,4 +0,0 @@
# See the OWNERS docs at https://go.k8s.io/owners
emeritus_approvers:
- oilbeater


@@ -1,6 +0,0 @@
# See the OWNERS docs at https://go.k8s.io/owners
approvers:
- bozzo
reviewers:
- bozzo


@@ -1,6 +0,0 @@
# See the OWNERS docs at https://go.k8s.io/owners
approvers:
- simon
reviewers:
- simon


@@ -1,8 +0,0 @@
# See the OWNERS docs at https://go.k8s.io/owners
approvers:
- qvicksilver
- yujunz
reviewers:
- qvicksilver
- yujunz


@@ -347,9 +347,6 @@
- /etc/bash_completion.d/kubectl.sh
- /etc/bash_completion.d/crictl
- /etc/bash_completion.d/nerdctl
- /etc/bash_completion.d/krew
- /etc/bash_completion.d/krew.sh
- "{{ krew_root_dir }}"
- /etc/modules-load.d/kube_proxy-ipvs.conf
- /etc/modules-load.d/kubespray-br_netfilter.conf
- /etc/modules-load.d/kubespray-kata-containers.conf


@@ -0,0 +1,35 @@
[build-system]
requires = ["setuptools >= 61.0",
"setuptools_scm >= 8.0",
]
build-backend = "setuptools.build_meta"
[project]
name = "kubespray_component_hash_update"
version = "1.0.0"
dependencies = [
"more_itertools",
"ruamel.yaml",
"requests",
"packaging",
]
requires-python = ">= 3.10"
authors = [
{ name = "Craig Rodrigues", email = "rodrigc@crodrigues.org" },
{ name = "Simon Wessel" },
{ name = "Max Gautier", email = "mg@max.gautier.name" },
]
maintainers = [
{ name = "The Kubespray maintainers" },
]
description = "Download or compute hashes for new versions of components deployed by Kubespray"
classifiers = [
"License :: OSI Approved :: Apache-2.0",
]
[project.scripts]
update-hashes = "component_hash_update.download:main"


@@ -0,0 +1,94 @@
"""
Static download metadata for components updated by the update-hashes command.
"""
infos = {
"calicoctl_binary": {
"url": "https://github.com/projectcalico/calico/releases/download/v{version}/SHA256SUMS",
"graphql_id": "R_kgDOA87D0g",
},
"ciliumcli_binary": {
"url": "https://github.com/cilium/cilium-cli/releases/download/v{version}/cilium-{os}-{arch}.tar.gz.sha256sum",
"graphql_id": "R_kgDOE0nmLg",
},
"cni_binary": {
"url": "https://github.com/containernetworking/plugins/releases/download/v{version}/cni-plugins-{os}-{arch}-v{version}.tgz.sha256",
"graphql_id": "R_kgDOBQqEpg",
},
"containerd_archive": {
"url": "https://github.com/containerd/containerd/releases/download/v{version}/containerd-{version}-{os}-{arch}.tar.gz.sha256sum",
"graphql_id": "R_kgDOAr9FWA",
},
"cri_dockerd_archive": {
"binary": True,
"url": "https://github.com/Mirantis/cri-dockerd/releases/download/v{version}/cri-dockerd-{version}.{arch}.tgz",
"graphql_id": "R_kgDOEvvLcQ",
},
"crictl": {
"url": "https://github.com/kubernetes-sigs/cri-tools/releases/download/v{version}/crictl-v{version}-{os}-{arch}.tar.gz.sha256",
"graphql_id": "R_kgDOBMdURA",
},
"crio_archive": {
"url": "https://storage.googleapis.com/cri-o/artifacts/cri-o.{arch}.v{version}.tar.gz.sha256sum",
"graphql_id": "R_kgDOBAr5pg",
},
"crun": {
"url": "https://github.com/containers/crun/releases/download/{version}/crun-{version}-linux-{arch}",
"binary": True,
"graphql_id": "R_kgDOBip3vA",
},
"etcd_binary": {
"url": "https://github.com/etcd-io/etcd/releases/download/v{version}/SHA256SUMS",
"graphql_id": "R_kgDOAKtHtg",
},
"gvisor_containerd_shim_binary": {
"url": "https://storage.googleapis.com/gvisor/releases/release/{version}/{alt_arch}/containerd-shim-runsc-v1.sha512",
"hashtype": "sha512",
"tags": True,
"graphql_id": "R_kgDOB9IlXg",
},
"gvisor_runsc_binary": {
"url": "https://storage.googleapis.com/gvisor/releases/release/{version}/{alt_arch}/runsc.sha512",
"hashtype": "sha512",
"tags": True,
"graphql_id": "R_kgDOB9IlXg",
},
"kata_containers_binary": {
"url": "https://github.com/kata-containers/kata-containers/releases/download/{version}/kata-static-{version}-{arch}.tar.xz",
"binary": True,
"graphql_id": "R_kgDOBsJsHQ",
},
"kubeadm": {
"url": "https://dl.k8s.io/release/v{version}/bin/linux/{arch}/kubeadm.sha256",
"graphql_id": "R_kgDOAToIkg",
},
"kubectl": {
"url": "https://dl.k8s.io/release/v{version}/bin/linux/{arch}/kubectl.sha256",
"graphql_id": "R_kgDOAToIkg",
},
"kubelet": {
"url": "https://dl.k8s.io/release/v{version}/bin/linux/{arch}/kubelet.sha256",
"graphql_id": "R_kgDOAToIkg",
},
"nerdctl_archive": {
"url": "https://github.com/containerd/nerdctl/releases/download/v{version}/SHA256SUMS",
"graphql_id": "R_kgDOEvuRnQ",
},
"runc": {
"url": "https://github.com/opencontainers/runc/releases/download/v{version}/runc.sha256sum",
"graphql_id": "R_kgDOAjP4QQ",
},
"skopeo_binary": {
"url": "https://github.com/lework/skopeo-binary/releases/download/v{version}/skopeo-{os}-{arch}.sha256",
"graphql_id": "R_kgDOHQ6J9w",
},
"youki": {
"url": "https://github.com/youki-dev/youki/releases/download/v{version}/youki-{version}-{alt_arch}-gnu.tar.gz",
"binary": True,
"graphql_id": "R_kgDOFPvgPg",
},
"yq": {
"url": "https://github.com/mikefarah/yq/releases/download/v{version}/checksums-bsd", # see https://github.com/mikefarah/yq/pull/1691 for why we use this url
"graphql_id": "R_kgDOApOQGQ",
},
}


@@ -0,0 +1,335 @@
#!/usr/bin/env python3
# After a new version of Kubernetes has been released,
# run this script to update roles/kubespray-defaults/defaults/main/download.yml
# with new hashes.
import sys
import os
import logging
import subprocess
from itertools import groupby, chain
from more_itertools import partition
from functools import cache
import argparse
import requests
import hashlib
from datetime import datetime
from ruamel.yaml import YAML
from packaging.version import Version, InvalidVersion
from importlib.resources import files
from pathlib import Path
from typing import Optional, Any
from . import components
CHECKSUMS_YML = Path("roles/kubespray-defaults/defaults/main/checksums.yml")
logger = logging.getLogger(__name__)
def open_yaml(file: Path):
yaml = YAML()
yaml.explicit_start = True
yaml.preserve_quotes = True
yaml.width = 4096
with open(file, "r") as checksums_yml:
data = yaml.load(checksums_yml)
return data, yaml
arch_alt_name = {
"amd64": "x86_64",
"arm64": "aarch64",
"ppc64le": None,
"arm": None,
}
# TODO: downloads not supported
# gvisor: sha512 checksums
# helm_archive: PGP signatures
# krew_archive: different yaml structure (in our download)
# calico_crds_archive: different yaml structure (in our download)
# TODO:
# noarch support -> k8s manifests, helm charts
# different checksum format (needs download role changes)
# different verification methods (gpg, cosign) (needs download role changes) (or verify the sig in this script and only use the checksum in the playbook)
# perf improvements (async)
def download_hash(downloads: {str: {str: Any}}) -> None:
# Handle files with multiple hashes, in various formats.
# The lambda is expected to produce a dictionary of hashes indexed by arch name.
download_hash_extract = {
"calicoctl_binary": lambda hashes: {
line.split("-")[-1]: line.split()[0]
for line in hashes.strip().split("\n")
if line.count("-") == 2 and line.split("-")[-2] == "linux"
},
"etcd_binary": lambda hashes: {
line.split("-")[-1].removesuffix(".tar.gz"): line.split()[0]
for line in hashes.strip().split("\n")
if line.split("-")[-2] == "linux"
},
"nerdctl_archive": lambda hashes: {
line.split()[1].removesuffix(".tar.gz").split("-")[3]: line.split()[0]
for line in hashes.strip().split("\n")
if [x for x in line.split(" ") if x][1].split("-")[2] == "linux"
},
"runc": lambda hashes: {
parts[1].split(".")[1]: parts[0]
for parts in (line.split() for line in hashes.split("\n")[3:9])
},
"yq": lambda rhashes_bsd: {
pair[0].split("_")[-1]: pair[1]
# pair = (yq_<os>_<arch>, <hash>)
for pair in (
(line.split()[1][1:-1], line.split()[3])
for line in rhashes_bsd.splitlines()
if line.startswith("SHA256")
)
if pair[0].startswith("yq")
and pair[0].split("_")[1] == "linux"
and not pair[0].endswith(".tar.gz")
},
}
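To make the extractor contract concrete, here is the `etcd_binary` lambda from the table above applied to a made-up SHA256SUMS snippet (the file names mimic etcd's release layout; the digests are placeholders, not real checksums):

```python
# The etcd_binary extractor from download_hash_extract, reproduced standalone.
etcd_extract = lambda hashes: {
    line.split("-")[-1].removesuffix(".tar.gz"): line.split()[0]
    for line in hashes.strip().split("\n")
    if line.split("-")[-2] == "linux"
}

# Hypothetical SHA256SUMS content; "aaa111"/"bbb222"/"ccc333" are placeholders.
sample = """\
aaa111  etcd-v3.5.16-linux-amd64.tar.gz
bbb222  etcd-v3.5.16-linux-arm64.tar.gz
ccc333  etcd-v3.5.16-darwin-amd64.zip
"""
print(etcd_extract(sample))  # {'amd64': 'aaa111', 'arm64': 'bbb222'}
```

The darwin line is filtered out because its second-to-last dash-separated field is not `linux`.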
checksums_file = (
Path(
subprocess.Popen(
["git", "rev-parse", "--show-toplevel"], stdout=subprocess.PIPE
)
.communicate()[0]
.rstrip()
.decode("utf-8")
)
/ CHECKSUMS_YML
)
logger.info("Opening checksums file %s...", checksums_file)
data, yaml = open_yaml(checksums_file)
s = requests.Session()
@cache
def _get_hash_by_arch(download: str, version: str) -> {str: str}:
hash_file = s.get(
downloads[download]["url"].format(
version=version,
os="linux",
),
allow_redirects=True,
)
hash_file.raise_for_status()
return download_hash_extract[download](hash_file.content.decode())
releases, tags = map(
dict, partition(lambda r: r[1].get("tags", False), downloads.items())
)
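`more_itertools.partition` splits `downloads.items()` into release-based and tag-based repositories by the `tags` flag. A stdlib-only stand-in (hypothetical entries, same `(false_items, true_items)` ordering) behaves like this:

```python
def partition(pred, iterable):
    # Minimal stand-in for more_itertools.partition: returns
    # (items where pred is false, items where pred is true).
    falses, trues = [], []
    for item in iterable:
        (trues if pred(item) else falses).append(item)
    return falses, trues

# Hypothetical download entries; only gvisor-style repos carry "tags": True.
downloads = {
    "etcd_binary": {"graphql_id": "R_a"},
    "gvisor_runsc_binary": {"graphql_id": "R_b", "tags": True},
}
releases, tags = map(dict, partition(lambda r: r[1].get("tags", False), downloads.items()))
print(sorted(releases), sorted(tags))  # ['etcd_binary'] ['gvisor_runsc_binary']
```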
repos = {
"with_releases": [r["graphql_id"] for r in releases.values()],
"with_tags": [t["graphql_id"] for t in tags.values()],
}
response = s.post(
"https://api.github.com/graphql",
json={
"query": files(__package__).joinpath("list_releases.graphql").read_text(),
"variables": repos,
},
headers={
"Authorization": f"Bearer {os.environ['API_KEY']}",
},
)
if "x-ratelimit-used" in response.headers._store:
logger.info(
"Github graphQL API ratelimit status: used %s of %s. Next reset at %s",
response.headers["X-RateLimit-Used"],
response.headers["X-RateLimit-Limit"],
datetime.fromtimestamp(int(response.headers["X-RateLimit-Reset"])),
)
response.raise_for_status()
def valid_version(possible_version: str) -> Optional[Version]:
try:
return Version(possible_version)
except InvalidVersion:
return None
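`valid_version` leans on `packaging` to silently drop tags that are not version numbers. A minimal sketch of the same try/except pattern (assuming `packaging` is installed, as the pyproject above declares):

```python
from typing import Optional
from packaging.version import Version, InvalidVersion

def valid_version(possible_version: str) -> Optional[Version]:
    # Same guard as above: parse, or return None for non-version tags.
    try:
        return Version(possible_version)
    except InvalidVersion:
        return None

tags = ["v1.32.0", "1.31.4", "latest", "tryfix-build"]
print([str(v) for t in tags if (v := valid_version(t)) is not None])
# ['1.32.0', '1.31.4'] -- the "v" prefix is normalized away, junk tags dropped
```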
repos = response.json()["data"]
github_versions = dict(
zip(
chain(releases.keys(), tags.keys()),
[
{
v
for r in repo["releases"]["nodes"]
if not r["isPrerelease"]
and (v := valid_version(r["tagName"])) is not None
}
for repo in repos["with_releases"]
]
+ [
{
v
for t in repo["refs"]["nodes"]
if (v := valid_version(t["name"].removeprefix("release-")))
is not None
}
for repo in repos["with_tags"]
],
strict=True,
)
)
components_supported_arch = {
component.removesuffix("_checksums"): [a for a in archs.keys()]
for component, archs in data.items()
}
new_versions = {
c: {
v
for v in github_versions[c]
if any(
v > version
and (
(v.major, v.minor) == (version.major, version.minor)
or c.startswith("gvisor")
)
for version in [
max(minors)
for _, minors in groupby(cur_v, lambda v: (v.minor, v.major))
]
)
# only get:
# - patch versions (no minor or major bump) (exception for gvisor, which does not have a major.minor.patch scheme)
# - newer ones (don't get old patch version)
}
- set(cur_v)
for component, archs in data.items()
if (c := component.removesuffix("_checksums")) in downloads.keys()
# this is only to bind cur_v in this scope
and (
cur_v := sorted(
Version(str(k)) for k in next(archs.values().__iter__()).keys()
)
)
}
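The `new_versions` comprehension above is dense; the core rule is "keep only upstream versions that are newer patches of a minor release we already track". A simplified standalone sketch with plain `(major, minor, patch)` tuples (ignoring the gvisor exception and the hash-set-to-0 handling):

```python
def new_patches(known, upstream):
    # known/upstream: iterables of (major, minor, patch) tuples.
    # Track the newest patch we already have per (major, minor)...
    latest = {}
    for v in known:
        if v[:2] not in latest or v > latest[v[:2]]:
            latest[v[:2]] = v
    # ...and keep only strictly newer patches of those same minors.
    return sorted(v for v in upstream if v[:2] in latest and v > latest[v[:2]])

known = [(1, 30, 0), (1, 30, 2), (1, 31, 1)]
upstream = [(1, 30, 3), (1, 31, 2), (1, 32, 0), (1, 30, 1)]
print(new_patches(known, upstream))  # [(1, 30, 3), (1, 31, 2)]; 1.32 is a new minor, skipped
```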
hash_set_to_0 = {
c: {
Version(str(v))
for v, h in chain.from_iterable(a.items() for a in archs.values())
if h == 0
}
for component, archs in data.items()
if (c := component.removesuffix("_checksums")) in downloads.keys()
}
def get_hash(component: str, version: Version, arch: str):
if component in download_hash_extract:
hashes = _get_hash_by_arch(component, version)
return hashes[arch]
else:
hash_file = s.get(
downloads[component]["url"].format(
version=version,
os="linux",
arch=arch,
alt_arch=arch_alt_name[arch],
),
allow_redirects=True,
)
hash_file.raise_for_status()
if downloads[component].get("binary", False):
return hashlib.new(
downloads[component].get("hashtype", "sha256"), hash_file.content
).hexdigest()
return hash_file.content.decode().split()[0]
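`get_hash` covers two upstream conventions: either the project publishes a checksum file whose first whitespace-separated field is the digest, or only the raw artifact is published (the `binary: True` entries) and the digest must be computed locally. Both paths sketched self-contained, with made-up payloads:

```python
import hashlib

def digest_from_checksum_file(content: bytes) -> str:
    # Upstream publishes e.g. "deadbeef  kubelet"; the first field is the digest.
    return content.decode().split()[0]

def digest_from_binary(content: bytes, hashtype: str = "sha256") -> str:
    # Upstream publishes only the artifact; compute the digest ourselves.
    return hashlib.new(hashtype, content).hexdigest()

print(digest_from_checksum_file(b"deadbeef  kubelet\n"))  # deadbeef
print(len(digest_from_binary(b"fake artifact bytes")))    # 64 (hex chars of a sha256)
```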
for component, versions in chain(new_versions.items(), hash_set_to_0.items()):
c = component + "_checksums"
for arch in components_supported_arch[component]:
for version in versions:
data[c][arch][
str(version)
] = f"{downloads[component].get('hashtype', 'sha256')}:{get_hash(component, version, arch)}"
data[c] = {
arch: {
v: versions[v]
for v in sorted(
versions.keys(), key=lambda v: Version(str(v)), reverse=True
)
}
for arch, versions in data[c].items()
}
with open(checksums_file, "w") as checksums_yml:
yaml.dump(data, checksums_yml)
logger.info("Updated %s", checksums_file)
def main():
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
parser = argparse.ArgumentParser(
description=f"Add new patch versions hashes in {CHECKSUMS_YML}",
formatter_class=argparse.RawTextHelpFormatter,
epilog=f"""
This script only looks up new patch versions relative to those already existing
in the data in {CHECKSUMS_YML},
which means it won't add new major or minor versions.
In order to add one of these, edit {CHECKSUMS_YML}
by hand, adding the new versions with a patch number of 0 (or the lowest relevant patch version)
and a hash value of 0; then run this script.
Note that the script will try to add the versions on all
architecture keys already present for a given download target.
EXAMPLES:
crictl_checksums:
...
amd64:
+ 1.30.0: 0
1.29.0: d16a1ffb3938f5a19d5c8f45d363bd091ef89c0bc4d44ad16b933eede32fdcbb
1.28.0: 8dc78774f7cbeaf787994d386eec663f0a3cf24de1ea4893598096cb39ef2508""",
)
# Workaround for https://github.com/python/cpython/issues/53834#issuecomment-2060825835
# Fixed in python 3.14
class Choices(tuple):
def __init__(self, _iterable=None, default=None):
self.default = default or []
def __contains__(self, item):
return super().__contains__(item) or item == self.default
choices = Choices(components.infos.keys(), default=list(components.infos.keys()))
parser.add_argument(
"only",
nargs="*",
choices=choices,
help="if provided, only obtain hashes for these components",
default=choices.default,
)
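The `Choices` tuple subclass works around argparse validating the list default against `choices` when `nargs='*'` is used (the default list itself is not one of the choices); the containment trick can be checked standalone:

```python
class Choices(tuple):
    # With nargs='*' and a list default, argparse checks the default
    # against choices too; accept the default list as a pseudo-member
    # (workaround for cpython issue #53834, fixed in 3.14).
    def __init__(self, _iterable=None, default=None):
        self.default = default or []

    def __contains__(self, item):
        return super().__contains__(item) or item == self.default

choices = Choices(("kubectl", "kubelet"), default=["kubectl", "kubelet"])
print("kubectl" in choices)               # True: a real choice
print(["kubectl", "kubelet"] in choices)  # True: the whole default list
print("runc" in choices)                  # False
```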
parser.add_argument(
"-e",
"--exclude",
action="append",
choices=components.infos.keys(),
help="do not obtain hashes for this component",
default=[],
)
args = parser.parse_args()
download_hash(
{k: components.infos[k] for k in (set(args.only) - set(args.exclude))}
)


@@ -0,0 +1,24 @@
query($with_releases: [ID!]!, $with_tags: [ID!]!) {
with_releases: nodes(ids: $with_releases) {
... on Repository {
releases(first: 100) {
nodes {
tagName
isPrerelease
}
}
}
}
with_tags: nodes(ids: $with_tags) {
... on Repository {
refs(refPrefix: "refs/tags/", last: 25) {
nodes {
name
}
}
}
}
}


@@ -1,205 +0,0 @@
#!/usr/bin/env python3
# After a new version of Kubernetes has been released,
# run this script to update roles/kubespray-defaults/defaults/main/download.yml
# with new hashes.
import sys
from itertools import count, groupby
from collections import defaultdict
from functools import cache
import argparse
import requests
from ruamel.yaml import YAML
from packaging.version import Version
CHECKSUMS_YML = "../roles/kubespray-defaults/defaults/main/checksums.yml"
def open_checksums_yaml():
yaml = YAML()
yaml.explicit_start = True
yaml.preserve_quotes = True
yaml.width = 4096
with open(CHECKSUMS_YML, "r") as checksums_yml:
data = yaml.load(checksums_yml)
return data, yaml
def version_compare(version):
return Version(version.removeprefix("v"))
downloads = {
"calicoctl_binary": "https://github.com/projectcalico/calico/releases/download/{version}/SHA256SUMS",
"ciliumcli_binary": "https://github.com/cilium/cilium-cli/releases/download/{version}/cilium-{os}-{arch}.tar.gz.sha256sum",
"cni_binary": "https://github.com/containernetworking/plugins/releases/download/{version}/cni-plugins-{os}-{arch}-{version}.tgz.sha256",
"containerd_archive": "https://github.com/containerd/containerd/releases/download/v{version}/containerd-{version}-{os}-{arch}.tar.gz.sha256sum",
"crictl": "https://github.com/kubernetes-sigs/cri-tools/releases/download/{version}/crictl-{version}-{os}-{arch}.tar.gz.sha256",
"crio_archive": "https://storage.googleapis.com/cri-o/artifacts/cri-o.{arch}.{version}.tar.gz.sha256sum",
"etcd_binary": "https://github.com/etcd-io/etcd/releases/download/{version}/SHA256SUMS",
"kubeadm": "https://dl.k8s.io/release/{version}/bin/linux/{arch}/kubeadm.sha256",
"kubectl": "https://dl.k8s.io/release/{version}/bin/linux/{arch}/kubectl.sha256",
"kubelet": "https://dl.k8s.io/release/{version}/bin/linux/{arch}/kubelet.sha256",
"nerdctl_archive": "https://github.com/containerd/nerdctl/releases/download/v{version}/SHA256SUMS",
"runc": "https://github.com/opencontainers/runc/releases/download/{version}/runc.sha256sum",
"skopeo_binary": "https://github.com/lework/skopeo-binary/releases/download/{version}/skopeo-{os}-{arch}.sha256",
"yq": "https://github.com/mikefarah/yq/releases/download/{version}/checksums-bsd", # see https://github.com/mikefarah/yq/pull/1691 for why we use this url
}
# TODO: downloads not supported
# youki: no checksums in releases
# kata: no checksums in releases
# gvisor: sha512 checksums
# crun : PGP signatures
# cri_dockerd: no checksums or signatures
# helm_archive: PGP signatures
# krew_archive: different yaml structure
# calico_crds_archive: different yaml structure
# TODO:
# noarch support -> k8s manifests, helm charts
# different checksum format (needs download role changes)
# different verification methods (gpg, cosign) ( needs download role changes) (or verify the sig in this script and only use the checksum in the playbook)
# perf improvements (async)
def download_hash(only_downloads: [str]) -> None:
# Handle file with multiples hashes, with various formats.
# the lambda is expected to produce a dictionary of hashes indexed by arch name
download_hash_extract = {
"calicoctl_binary": lambda hashes : {
line.split('-')[-1] : line.split()[0]
for line in hashes.strip().split('\n')
if line.count('-') == 2 and line.split('-')[-2] == "linux"
},
"etcd_binary": lambda hashes : {
line.split('-')[-1].removesuffix('.tar.gz') : line.split()[0]
for line in hashes.strip().split('\n')
if line.split('-')[-2] == "linux"
},
"nerdctl_archive": lambda hashes : {
line.split()[1].removesuffix('.tar.gz').split('-')[3] : line.split()[0]
for line in hashes.strip().split('\n')
if [x for x in line.split(' ') if x][1].split('-')[2] == "linux"
},
"runc": lambda hashes : {
parts[1].split('.')[1] : parts[0]
for parts in (line.split()
for line in hashes.split('\n')[3:9])
},
"yq": lambda rhashes_bsd : {
pair[0].split('_')[-1] : pair[1]
# pair = (yq_<os>_<arch>, <hash>)
for pair in ((line.split()[1][1:-1], line.split()[3])
for line in rhashes_bsd.splitlines()
if line.startswith("SHA256"))
if pair[0].startswith("yq")
and pair[0].split('_')[1] == "linux"
and not pair[0].endswith(".tar.gz")
},
}
data, yaml = open_checksums_yaml()
s = requests.Session()
@cache
def _get_hash_by_arch(download: str, version: str) -> {str: str}:
hash_file = s.get(downloads[download].format(
version = version,
os = "linux",
),
allow_redirects=True)
if hash_file.status_code == 404:
print(f"Unable to find {download} hash file for version {version} at {hash_file.url}")
return None
hash_file.raise_for_status()
return download_hash_extract[download](hash_file.content.decode())
for download, url in (downloads if only_downloads == []
else {k:downloads[k] for k in downloads.keys() & only_downloads}).items():
checksum_name = f"{download}_checksums"
# Propagate new patch versions to all architectures
for arch in data[checksum_name].values():
for arch2 in data[checksum_name].values():
arch.update({
v:("NONE" if arch2[v] == "NONE" else 0)
for v in (set(arch2.keys()) - set(arch.keys()))
if v.split('.')[2] == '0'})
# this is necessary to make the script idempotent,
# by only adding a vX.X.0 version (=minor release) in each arch
# and letting the rest of the script populate the potential
# patch versions
for arch, versions in data[checksum_name].items():
for minor, patches in groupby(versions.copy().keys(), lambda v : '.'.join(v.split('.')[:-1])):
for version in (f"{minor}.{patch}" for patch in
count(start=int(max(patches, key=version_compare).split('.')[-1]),
step=1)):
# Those barbaric generators do the following:
# Group all patches versions by minor number, take the newest and start from that
# to find new versions
if version in versions and versions[version] != 0:
continue
if download in download_hash_extract:
hashes = _get_hash_by_arch(download, version)
if hashes == None:
break
sha256sum = hashes.get(arch)
if sha256sum == None:
break
else:
hash_file = s.get(downloads[download].format(
version = version,
os = "linux",
arch = arch
),
allow_redirects=True)
if hash_file.status_code == 404:
print(f"Unable to find {download} hash file for version {version} (arch: {arch}) at {hash_file.url}")
break
hash_file.raise_for_status()
sha256sum = hash_file.content.decode().split()[0]
if len(sha256sum) != 64:
raise Exception(f"Checksum has an unexpected length: {len(sha256sum)} (binary: {download}, arch: {arch}, release: {version}, checksum: '{sha256sum}')")
data[checksum_name][arch][version] = sha256sum
data[checksum_name] = {arch : {r : releases[r] for r in sorted(releases.keys(),
key=version_compare,
reverse=True)}
for arch, releases in data[checksum_name].items()}
with open(CHECKSUMS_YML, "w") as checksums_yml:
yaml.dump(data, checksums_yml)
print(f"\n\nUpdated {CHECKSUMS_YML}\n")
parser = argparse.ArgumentParser(description=f"Add new patch versions hashes in {CHECKSUMS_YML}",
formatter_class=argparse.RawTextHelpFormatter,
epilog=f"""
This script only looks up new patch versions relative to those already existing
in the data in {CHECKSUMS_YML},
which means it won't add new major or minor versions.
In order to add one of these, edit {CHECKSUMS_YML}
by hand, adding the new versions with a patch number of 0 (or the lowest relevant patch versions)
; then run this script.
Note that the script will try to add the versions on all
architecture keys already present for a given download target.
The '0' value for a version hash is treated as a missing hash, so the script will try to download it again.
To notify a non-existing version (yanked, or upstream does not have monotonically increasing versions numbers),
use the special value 'NONE'.
EXAMPLES:
crictl_checksums:
...
amd64:
+ v1.30.0: 0
v1.29.0: d16a1ffb3938f5a19d5c8f45d363bd091ef89c0bc4d44ad16b933eede32fdcbb
v1.28.0: 8dc78774f7cbeaf787994d386eec663f0a3cf24de1ea4893598096cb39ef2508"""
)
parser.add_argument('binaries', nargs='*', choices=downloads.keys())
args = parser.parse_args()
download_hash(args.binaries)

scripts/get_node_ids.sh Executable file

@@ -0,0 +1,51 @@
#!/bin/sh
gh api graphql -H "X-Github-Next-Global-ID: 1" -f query='{
calicoctl_binary: repository(owner: "projectcalico", name: "calico") {
id
}
ciliumcli_binary: repository(owner: "cilium", name: "cilium-cli") {
id
}
crictl: repository(owner: "kubernetes-sigs", name: "cri-tools") {
id
}
crio_archive: repository(owner: "cri-o", name: "cri-o") {
id
}
etcd_binary: repository(owner: "etcd-io", name: "etcd") {
id
}
kubectl: repository(owner: "kubernetes", name: "kubernetes") {
id
}
nerdctl_archive: repository(owner: "containerd", name: "nerdctl") {
id
}
runc: repository(owner: "opencontainers", name: "runc") {
id
}
skopeo_binary: repository(owner: "lework", name: "skopeo-binary") {
id
}
yq: repository(owner: "mikefarah", name: "yq") {
id
}
youki: repository(owner: "youki-dev", name: "youki") {
id
}
kubernetes: repository(owner: "kubernetes", name: "kubernetes") {
id
}
cri_dockerd: repository(owner: "Mirantis", name: "cri-dockerd") {
id
}
kata: repository(owner: "kata-containers", name: "kata-containers") {
id
}
crun: repository(owner: "containers", name: "crun") {
id
}
gvisor: repository(owner: "google", name: "gvisor") {
id
}
}'


@@ -0,0 +1,34 @@
- Core
- [kubernetes](https://github.com/kubernetes/kubernetes) {{ kube_version }}
- [etcd](https://github.com/etcd-io/etcd) {{ etcd_version }}
- [docker](https://www.docker.com/) v{{ docker_version }}
- [containerd](https://containerd.io/) v{{ containerd_version }}
- [cri-o](http://cri-o.io/) {{ crio_version }} (experimental: see [CRI-O Note](docs/CRI/cri-o.md). Only on fedora, ubuntu and centos based OS)
- Network Plugin
- [cni-plugins](https://github.com/containernetworking/plugins) {{ cni_version }}
- [calico](https://github.com/projectcalico/calico) {{ calico_version }}
- [cilium](https://github.com/cilium/cilium) {{ cilium_version }}
- [flannel](https://github.com/flannel-io/flannel) {{ flannel_version }}
- [kube-ovn](https://github.com/alauda/kube-ovn) {{ kube_ovn_version }}
- [kube-router](https://github.com/cloudnativelabs/kube-router) {{ kube_router_version }}
- [multus](https://github.com/k8snetworkplumbingwg/multus-cni) {{ multus_version }}
- [weave](https://github.com/rajch/weave) v{{ weave_version }}
- [kube-vip](https://github.com/kube-vip/kube-vip) {{ kube_vip_version }}
- Application
- [cert-manager](https://github.com/jetstack/cert-manager) {{ cert_manager_version }}
- [coredns](https://github.com/coredns/coredns) {{ coredns_version }}
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) {{ ingress_nginx_version }}
- [argocd](https://argoproj.github.io/) {{ argocd_version }}
- [helm](https://helm.sh/) {{ helm_version }}
- [metallb](https://metallb.universe.tf/) {{ metallb_version }}
- [registry](https://github.com/distribution/distribution) v{{ registry_version }}
- Storage Plugin
- [cephfs-provisioner](https://github.com/kubernetes-incubator/external-storage) {{ cephfs_provisioner_version }}
- [rbd-provisioner](https://github.com/kubernetes-incubator/external-storage) {{ rbd_provisioner_version }}
- [aws-ebs-csi-plugin](https://github.com/kubernetes-sigs/aws-ebs-csi-driver) {{ aws_ebs_csi_plugin_version }}
- [azure-csi-plugin](https://github.com/kubernetes-sigs/azuredisk-csi-driver) {{ azure_csi_plugin_version }}
- [cinder-csi-plugin](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md) {{ cinder_csi_plugin_version }}
- [gcp-pd-csi-plugin](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver) {{ gcp_pd_csi_plugin_version }}
- [local-path-provisioner](https://github.com/rancher/local-path-provisioner) {{ local_path_provisioner_version }}
- [local-volume-provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) {{ local_volume_provisioner_version }}
- [node-feature-discovery](https://github.com/kubernetes-sigs/node-feature-discovery) {{ node_feature_discovery_version }}


@@ -0,0 +1,22 @@
#!/usr/bin/env ansible-playbook
---
- name: Update README.md versions
hosts: localhost
connection: local
gather_facts: false
vars:
fallback_ip: 'bypass tasks in kubespray-defaults'
roles:
- kubespray-defaults
tasks:
- name: Include versions not in kubespray-defaults
include_vars: "{{ item }}"
loop:
- ../roles/container-engine/docker/defaults/main.yml
- ../roles/kubernetes/node/defaults/main.yml
- ../roles/kubernetes-apps/argocd/defaults/main.yml
- name: Render versions in README.md
blockinfile:
marker: '<!-- {mark} ANSIBLE MANAGED BLOCK -->'
block: "\n{{ lookup('ansible.builtin.template', 'readme_versions.md.j2') }}\n\n"
path: ../README.md


@@ -1,8 +1,8 @@
# See the OWNERS docs at https://go.k8s.io/owners
approvers:
- woopstar
- yankay
- ant31
reviewers:
- woopstar
- yankay
- ant31


@@ -76,6 +76,13 @@ images:
    converted: true
    tag: "latest"
  almalinux-9:
    filename: AlmaLinux-9-GenericCloud-9.5-20241120.x86_64.qcow2
    url: https://repo.almalinux.org/almalinux/9.5/cloud/x86_64/images/AlmaLinux-9-GenericCloud-9.5-20241120.x86_64.qcow2
    checksum: sha256:abddf01589d46c841f718cec239392924a03b34c4fe84929af5d543c50e37e37
    converted: true
    tag: "latest"
  rockylinux-8:
    filename: Rocky-8-GenericCloud-8.6-20220515.x86_64.qcow2
    url: https://download.rockylinux.org/pub/rocky/8.6/images/Rocky-8-GenericCloud-8.6-20220515.x86_64.qcow2
@@ -138,3 +145,10 @@ images:
    checksum: sha256:c6af522d36d659b66da668cc4eb86b032a9cff05a95a0e37505a63e70ed585dc
    converted: true
    tag: "latest"
  flatcar-4081:
    filename: flatcar_production_kubevirt_image.qcow2
    url: https://stable.release.flatcar-linux.net/amd64-usr/4081.2.1/flatcar_production_kubevirt_image.qcow2
    checksum: sha512:6999ef068380c9842e4caf7afc2a1c66d4d03309f7bfa2f5f500757c36d1f935961f5662cc69376aa3d701e4c2d264f4356d4daadbb68e55becb710067e22c5d
    converted: true
    tag: latest

View File

@@ -9,19 +9,22 @@ create-tf:
delete-tf:
	./scripts/delete-tf.sh
create-packet: init-packet
$(INVENTORY_DIR):
	mkdir $@
create-packet: init-packet | $(INVENTORY_DIR)
	ansible-playbook cloud_playbooks/create-packet.yml -c local \
		-e @"files/${CI_JOB_NAME}.yml" \
		-e test_name="$(subst .,-,$(CI_PIPELINE_ID)-$(CI_JOB_ID))" \
		-e branch="$(CI_COMMIT_BRANCH)" \
		-e pipeline_id="$(CI_PIPELINE_ID)" \
		-e inventory_path=$(INVENTORY_DIR)
		-e inventory_path=$|
delete-packet: ;
create-vagrant:
create-vagrant: | $(INVENTORY_DIR)
	vagrant up
	cp $(CI_PROJECT_DIR)/.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory $(INVENTORY_DIR)
	cp $(CI_PROJECT_DIR)/.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory $|
delete-vagrant:
	vagrant destroy -f
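The Makefile change above relies on GNU make order-only prerequisites: anything after `|` in a prerequisite list must exist before the recipe runs, but its timestamp never forces the target to rebuild — the right semantics for a directory, whose mtime changes whenever a file inside it does. `$@` expands to the target and `$|` to the order-only prerequisites. A minimal sketch of the same pattern (hypothetical `OUT_DIR`/`result` names):

```makefile
OUT_DIR := build

$(OUT_DIR):
	mkdir $@

# '|' marks $(OUT_DIR) as order-only: it is created before the recipe
# runs, but touching files inside it does not make 'result' stale.
result: | $(OUT_DIR)
	touch $|/result.txt
```

Using `$|` in the recipe, as the diff does for `inventory_path`, keeps the directory name in one place instead of repeating `$(INVENTORY_DIR)`.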

View File

@@ -27,6 +27,7 @@ mode: all-in-one
cloud_init:
centos-8: "I2Nsb3VkLWNvbmZpZwpzeXN0ZW1faW5mbzoKICBkaXN0cm86IHJoZWwKdXNlcnM6CiAtIG5hbWU6IGt1YmVzcHJheQogICBncm91cHM6IHdoZWVsCiAgIHN1ZG86ICdBTEw9KEFMTCkgTk9QQVNTV0Q6QUxMJwogICBzaGVsbDogL2Jpbi9iYXNoCiAgIGxvY2tfcGFzc3dkOiBGYWxzZQogICBob21lOiAvaG9tZS9rdWJlc3ByYXkKICAgc3NoX2F1dGhvcml6ZWRfa2V5czoKICAgICAtIHNzaC1yc2EgQUFBQUIzTnphQzF5YzJFQUFBQURBUUFCQUFBQkFRQ2FuVGkvZUt4MCt0SFlKQWVEaHErc0ZTMk9iVVAxL0k2OWY3aVYzVXRrS2xUMjBKZlcxZjZGZVh0LzA0VmYyN1dRcStOcXM2dkdCcUQ5UVhTWXVmK3QwL3M3RVBMalRlaTltZTFtcHFyK3VUZStLRHRUUDM5cGZEMy9lVkNhZUI3MjZHUDJGa2FEMEZ6cG1FYjY2TzNOcWh4T1E5Nkd4LzlYVHV3L0szbGxqNE9WRDZHcmpSM0I3YzRYdEVCc1pjWnBwTUovb0gxbUd5R1hkaDMxbVdRU3FBUk8vUDhVOEd3dDArSEdwVXdoL2hkeTN0K1NZb1RCMkd3VmIwem95Vnd0VnZmRFF6c204ZnEzYXY0S3ZlejhrWXVOREp2MDV4NGx2VVpnUjE1WkRSWHNBbmRoUXlxb1hkQ0xBZTArZWFLWHE5QmtXeEtGYjloUGUwQVVqamE1Cgo="
almalinux-8: "I2Nsb3VkLWNvbmZpZwpzeXN0ZW1faW5mbzoKICBkaXN0cm86IHJoZWwKdXNlcnM6CiAtIG5hbWU6IGt1YmVzcHJheQogICBncm91cHM6IHdoZWVsCiAgIHN1ZG86ICdBTEw9KEFMTCkgTk9QQVNTV0Q6QUxMJwogICBzaGVsbDogL2Jpbi9iYXNoCiAgIGxvY2tfcGFzc3dkOiBGYWxzZQogICBob21lOiAvaG9tZS9rdWJlc3ByYXkKICAgc3NoX2F1dGhvcml6ZWRfa2V5czoKICAgICAtIHNzaC1yc2EgQUFBQUIzTnphQzF5YzJFQUFBQURBUUFCQUFBQkFRQ2FuVGkvZUt4MCt0SFlKQWVEaHErc0ZTMk9iVVAxL0k2OWY3aVYzVXRrS2xUMjBKZlcxZjZGZVh0LzA0VmYyN1dRcStOcXM2dkdCcUQ5UVhTWXVmK3QwL3M3RVBMalRlaTltZTFtcHFyK3VUZStLRHRUUDM5cGZEMy9lVkNhZUI3MjZHUDJGa2FEMEZ6cG1FYjY2TzNOcWh4T1E5Nkd4LzlYVHV3L0szbGxqNE9WRDZHcmpSM0I3YzRYdEVCc1pjWnBwTUovb0gxbUd5R1hkaDMxbVdRU3FBUk8vUDhVOEd3dDArSEdwVXdoL2hkeTN0K1NZb1RCMkd3VmIwem95Vnd0VnZmRFF6c204ZnEzYXY0S3ZlejhrWXVOREp2MDV4NGx2VVpnUjE1WkRSWHNBbmRoUXlxb1hkQ0xBZTArZWFLWHE5QmtXeEtGYjloUGUwQVVqamE1Cgo="
almalinux-9: "I2Nsb3VkLWNvbmZpZwpzeXN0ZW1faW5mbzoKICBkaXN0cm86IHJoZWwKdXNlcnM6CiAtIG5hbWU6IGt1YmVzcHJheQogICBncm91cHM6IHdoZWVsCiAgIHN1ZG86ICdBTEw9KEFMTCkgTk9QQVNTV0Q6QUxMJwogICBzaGVsbDogL2Jpbi9iYXNoCiAgIGxvY2tfcGFzc3dkOiBGYWxzZQogICBob21lOiAvaG9tZS9rdWJlc3ByYXkKICAgc3NoX2F1dGhvcml6ZWRfa2V5czoKICAgICAtIHNzaC1yc2EgQUFBQUIzTnphQzF5YzJFQUFBQURBUUFCQUFBQkFRQ2FuVGkvZUt4MCt0SFlKQWVEaHErc0ZTMk9iVVAxL0k2OWY3aVYzVXRrS2xUMjBKZlcxZjZGZVh0LzA0VmYyN1dRcStOcXM2dkdCcUQ5UVhTWXVmK3QwL3M3RVBMalRlaTltZTFtcHFyK3VUZStLRHRUUDM5cGZEMy9lVkNhZUI3MjZHUDJGa2FEMEZ6cG1FYjY2TzNOcWh4T1E5Nkd4LzlYVHV3L0szbGxqNE9WRDZHcmpSM0I3YzRYdEVCc1pjWnBwTUovb0gxbUd5R1hkaDMxbVdRU3FBUk8vUDhVOEd3dDArSEdwVXdoL2hkeTN0K1NZb1RCMkd3VmIwem95Vnd0VnZmRFF6c204ZnEzYXY0S3ZlejhrWXVOREp2MDV4NGx2VVpnUjE1WkRSWHNBbmRoUXlxb1hkQ0xBZTArZWFLWHE5QmtXeEtGYjloUGUwQVVqamE1Cgo="
rockylinux-8: "I2Nsb3VkLWNvbmZpZwpwYWNrYWdlczoKIC0gc3VkbwogLSBob3N0bmFtZQpzeXN0ZW1faW5mbzoKICBkaXN0cm86IHJoZWwKdXNlcnM6CiAtIG5hbWU6IGt1YmVzcHJheQogICBncm91cHM6IHdoZWVsCiAgIHN1ZG86ICdBTEw9KEFMTCkgTk9QQVNTV0Q6QUxMJwogICBzaGVsbDogL2Jpbi9iYXNoCiAgIGxvY2tfcGFzc3dkOiBGYWxzZQogICBob21lOiAvaG9tZS9rdWJlc3ByYXkKICAgc3NoX2F1dGhvcml6ZWRfa2V5czoKICAgICAtIHNzaC1yc2EgQUFBQUIzTnphQzF5YzJFQUFBQURBUUFCQUFBQkFRQ2FuVGkvZUt4MCt0SFlKQWVEaHErc0ZTMk9iVVAxL0k2OWY3aVYzVXRrS2xUMjBKZlcxZjZGZVh0LzA0VmYyN1dRcStOcXM2dkdCcUQ5UVhTWXVmK3QwL3M3RVBMalRlaTltZTFtcHFyK3VUZStLRHRUUDM5cGZEMy9lVkNhZUI3MjZHUDJGa2FEMEZ6cG1FYjY2TzNOcWh4T1E5Nkd4LzlYVHV3L0szbGxqNE9WRDZHcmpSM0I3YzRYdEVCc1pjWnBwTUovb0gxbUd5R1hkaDMxbVdRU3FBUk8vUDhVOEd3dDArSEdwVXdoL2hkeTN0K1NZb1RCMkd3VmIwem95Vnd0VnZmRFF6c204ZnEzYXY0S3ZlejhrWXVOREp2MDV4NGx2VVpnUjE1WkRSWHNBbmRoUXlxb1hkQ0xBZTArZWFLWHE5QmtXeEtGYjloUGUwQVVqamE1Cgo="
rockylinux-9: "I2Nsb3VkLWNvbmZpZwpwYWNrYWdlczoKIC0gc3VkbwogLSBob3N0bmFtZQpzeXN0ZW1faW5mbzoKICBkaXN0cm86IHJoZWwKdXNlcnM6CiAtIG5hbWU6IGt1YmVzcHJheQogICBncm91cHM6IHdoZWVsCiAgIHN1ZG86ICdBTEw9KEFMTCkgTk9QQVNTV0Q6QUxMJwogICBzaGVsbDogL2Jpbi9iYXNoCiAgIGxvY2tfcGFzc3dkOiBGYWxzZQogICBob21lOiAvaG9tZS9rdWJlc3ByYXkKICAgc3NoX2F1dGhvcml6ZWRfa2V5czoKICAgICAtIHNzaC1yc2EgQUFBQUIzTnphQzF5YzJFQUFBQURBUUFCQUFBQkFRQ2FuVGkvZUt4MCt0SFlKQWVEaHErc0ZTMk9iVVAxL0k2OWY3aVYzVXRrS2xUMjBKZlcxZjZGZVh0LzA0VmYyN1dRcStOcXM2dkdCcUQ5UVhTWXVmK3QwL3M3RVBMalRlaTltZTFtcHFyK3VUZStLRHRUUDM5cGZEMy9lVkNhZUI3MjZHUDJGa2FEMEZ6cG1FYjY2TzNOcWh4T1E5Nkd4LzlYVHV3L0szbGxqNE9WRDZHcmpSM0I3YzRYdEVCc1pjWnBwTUovb0gxbUd5R1hkaDMxbVdRU3FBUk8vUDhVOEd3dDArSEdwVXdoL2hkeTN0K1NZb1RCMkd3VmIwem95Vnd0VnZmRFF6c204ZnEzYXY0S3ZlejhrWXVOREp2MDV4NGx2VVpnUjE1WkRSWHNBbmRoUXlxb1hkQ0xBZTArZWFLWHE5QmtXeEtGYjloUGUwQVVqamE1Cgo="
debian-11: "I2Nsb3VkLWNvbmZpZwogdXNlcnM6CiAgLSBuYW1lOiBrdWJlc3ByYXkKICAgIHN1ZG86IEFMTD0oQUxMKSBOT1BBU1NXRDpBTEwKICAgIHNoZWxsOiAvYmluL2Jhc2gKICAgIGxvY2tfcGFzc3dkOiBGYWxzZQogICAgaG9tZTogL2hvbWUva3ViZXNwcmF5CiAgICBzc2hfYXV0aG9yaXplZF9rZXlzOgogICAgICAtIHNzaC1yc2EgQUFBQUIzTnphQzF5YzJFQUFBQURBUUFCQUFBQkFRQ2FuVGkvZUt4MCt0SFlKQWVEaHErc0ZTMk9iVVAxL0k2OWY3aVYzVXRrS2xUMjBKZlcxZjZGZVh0LzA0VmYyN1dRcStOcXM2dkdCcUQ5UVhTWXVmK3QwL3M3RVBMalRlaTltZTFtcHFyK3VUZStLRHRUUDM5cGZEMy9lVkNhZUI3MjZHUDJGa2FEMEZ6cG1FYjY2TzNOcWh4T1E5Nkd4LzlYVHV3L0szbGxqNE9WRDZHcmpSM0I3YzRYdEVCc1pjWnBwTUovb0gxbUd5R1hkaDMxbVdRU3FBUk8vUDhVOEd3dDArSEdwVXdoL2hkeTN0K1NZb1RCMkd3VmIwem95Vnd0VnZmRFF6c204ZnEzYXY0S3ZlejhrWXVOREp2MDV4NGx2VVpnUjE1WkRSWHNBbmRoUXlxb1hkQ0xBZTArZWFLWHE5QmtXeEtGYjloUGUwQVVqamE1"
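Each `cloud_init` value above is base64-encoded cloud-config user data; decoding the prefix shared by the entries shows the `#cloud-config` header:

```python
import base64

# First 20 base64 characters shared by the cloud_init entries above.
prefix = base64.b64decode("I2Nsb3VkLWNvbmZpZwpz")
print(prefix.decode())  # "#cloud-config" header plus the start of the YAML body
```

The payloads differ only in the OS-specific details (e.g. the rhel `system_info` distro override and extra packages for Rocky Linux); all of them create a `kubespray` user with passwordless sudo and the CI SSH key.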

View File

@@ -4,11 +4,8 @@
- name: Start vms for CI job
  vars:
    # Workaround for compatibility when testing upgrades with old == before e9d406ed088d4291ef1d9018c170a4deed2bf928
    # TODO: drop after 2.27.0
    legacy_groups: "{{ (['kube_control_plane', 'kube_node', 'calico_rr'] | intersect(item) | length > 0) | ternary(['k8s_cluster'], []) }}"
    tvars:
      kubespray_groups: "{{ item + legacy_groups }}"
      kubespray_groups: "{{ item }}"
  kubernetes.core.k8s:
    definition: "{{ lookup('template', 'vm.yml.j2', template_vars=tvars) }}"
  loop: "{{ scenarios[mode | d('default')] }}"
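The removed `legacy_groups` expression added the `k8s_cluster` group whenever the VM's groups overlapped the control-plane/node/calico_rr set. In plain Python (hypothetical function name, mirroring the Jinja `intersect`/`ternary` filters) the logic is:

```python
def legacy_groups(groups):
    """Mirror of the removed Jinja expression:
    (['kube_control_plane', 'kube_node', 'calico_rr'] | intersect(item) | length > 0)
    | ternary(['k8s_cluster'], [])
    """
    legacy = {"kube_control_plane", "kube_node", "calico_rr"}
    # Non-empty intersection -> the node belongs in the umbrella group.
    return ["k8s_cluster"] if legacy & set(groups) else []
```

Dropping it means VMs are now created with exactly the groups listed in the scenario, without the implicit `k8s_cluster` membership the upgrade-compatibility workaround used to inject.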

View File

@@ -8,9 +8,6 @@ unsafe_show_logs: true
docker_registry_mirrors:
  - "https://mirror.gcr.io"
containerd_grpc_max_recv_message_size: 16777216
containerd_grpc_max_send_message_size: 16777216
containerd_registries_mirrors:
  - prefix: docker.io
    mirrors:
@@ -20,9 +17,6 @@ containerd_registries_mirrors:
      - host: https://registry-1.docker.io
        capabilities: ["pull", "resolve"]
        skip_verify: false
containerd_max_container_log_line_size: 16384
crio_registries:
  - prefix: docker.io
    insecure: false

View File

@@ -4,19 +4,6 @@ cloud_image: almalinux-8
mode: default
vm_memory: 3072
# Kubespray settings
metrics_server_enabled: true
dashboard_namespace: "kube-dashboard"
dashboard_enabled: true
loadbalancer_apiserver_type: haproxy
local_path_provisioner_enabled: true
# NTP management
ntp_enabled: true
ntp_timezone: Etc/UTC
ntp_manage_config: true
ntp_tinker_panic: true
ntp_force_sync_immediately: true
# Scheduler plugins
scheduler_plugins_enabled: true
# Workaround for RHEL8: kernel version 4.18 is lower than Kubernetes system verification.
kubeadm_ignore_preflight_errors:
- SystemVerification

View File

@@ -1,6 +1,6 @@
---
# Instance settings
cloud_image: almalinux-8
cloud_image: almalinux-9
mode: ha
vm_memory: 3072

View File

@@ -1,6 +1,6 @@
---
# Instance settings
cloud_image: almalinux-8
cloud_image: almalinux-9
mode: default
vm_memory: 3072

View File

@@ -1,6 +1,6 @@
---
# Instance settings
cloud_image: almalinux-8
cloud_image: almalinux-9
mode: ha
# Kubespray settings

View File

@@ -0,0 +1,22 @@
---
# Instance settings
cloud_image: almalinux-9
mode: default
vm_memory: 3072
# Kubespray settings
metrics_server_enabled: true
dashboard_namespace: "kube-dashboard"
dashboard_enabled: true
loadbalancer_apiserver_type: haproxy
local_path_provisioner_enabled: true
# NTP management
ntp_enabled: true
ntp_timezone: Etc/UTC
ntp_manage_config: true
ntp_tinker_panic: true
ntp_force_sync_immediately: true
# Scheduler plugins
scheduler_plugins_enabled: true

View File

@@ -1,6 +1,6 @@
---
# Instance settings
cloud_image: almalinux-8
cloud_image: almalinux-9
mode: default
# Kubespray settings

View File

@@ -1,6 +1,6 @@
---
# Instance settings
cloud_image: almalinux-8
cloud_image: almalinux-9
mode: default
vm_memory: 3072

View File

@@ -1,6 +1,6 @@
---
# Instance settings
cloud_image: almalinux-8
cloud_image: almalinux-9
mode: default
vm_memory: 3072

View File

@@ -2,3 +2,7 @@
# Instance settings
cloud_image: amazon-linux-2
mode: all-in-one
# Workaround for RHEL8: kernel version 4.18 is lower than Kubernetes system verification.
kubeadm_ignore_preflight_errors:
- SystemVerification

View File

@@ -9,3 +9,7 @@ metrics_server_enabled: true
dashboard_namespace: "kube-dashboard"
dashboard_enabled: true
loadbalancer_apiserver_type: haproxy
# Workaround for RHEL8: kernel version 4.18 is lower than Kubernetes system verification.
kubeadm_ignore_preflight_errors:
- SystemVerification

View File

@@ -8,7 +8,6 @@ kubeadm_certificate_key: 3998c58db6497dd17d909394e62d515368c06ec617710d02edea31c
kube_proxy_mode: iptables
kube_network_plugin: flannel
helm_enabled: true
krew_enabled: true
kubernetes_audit: true
etcd_events_cluster_enabled: true
local_volume_provisioner_enabled: true

View File

@@ -2,12 +2,10 @@
# Instance settings
cloud_image: ubuntu-2204
mode: all-in-one
vm_memory: 1600
vm_memory: 1800
# Kubespray settings
auto_renew_certificates: true
kubeadm_ignore_preflight_errors:
- Mem
# Currently ipvs is not available on KVM: https://packages.ubuntu.com/search?suite=focal&arch=amd64&mode=exactfilename&searchon=contents&keywords=ip_vs_sh.ko
kube_proxy_mode: iptables

View File

@@ -2,7 +2,7 @@
# Instance settings
cloud_image: ubuntu-2204
mode: all-in-one
vm_memory: 1600
vm_memory: 1800
# Kubespray settings
auto_renew_certificates: true

Some files were not shown because too many files have changed in this diff.