Compare commits

...

58 Commits

Author SHA1 Message Date
Joshua N Haupt
422e7366ec Fix Gluster image_id and update openstack_blockstorage_volume_v3 (#12910)
This fixes the Terraform Gluster compute image_id bug and updates openstack_blockstorage_volume_v2 to
openstack_blockstorage_volume_v3.

Resolves:
[Bug] OpenStack Compute variable handling of image_id and image_name for Gluster nodes is broken

https://github.com/kubernetes-sigs/kubespray/issues/12902

Update openstack_blockstorage_volume_v2 to openstack_blockstorage_volume_v3

https://github.com/kubernetes-sigs/kubespray/issues/12901

Signed-off-by: Joshua Nathaniel Haupt <joshua@hauptj.com>
2026-02-04 11:08:26 +05:30
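The corresponding Terraform hunk appears near the end of this compare; the core of the fix is switching the GlusterFS node resource from `image_name` to `image_id`:

```hcl
# Before: an image ID was being passed through image_name, which breaks
# boot-from-image GlusterFS nodes.
image_name = var.gfs_root_volume_size_in_gb == 0 ? local.image_to_use_gfs : null

# After: the ID goes into image_id, matching the other node resources.
image_id = var.gfs_root_volume_size_in_gb == 0 ? local.image_to_use_gfs : null
```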
Tushar240503
bf69e67240 refactor/dynamic-role-loading-network (#12933)
Signed-off-by: Tushar Sharma <tusharkumargzb6@gmail.com>
2026-02-03 21:58:29 +05:30
Tushar240503
c5c2cf16a0 Move inline defaults to defaults/main.yml (#12926) 2026-02-03 14:14:29 +05:30
Ali Afsharzadeh
69e042bd9e Remove software-properties-common from pipeline.Dockerfile (#12945)
Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2026-02-02 20:04:32 +05:30
dependabot[bot]
20da3bb1b0 build(deps): bump cryptography from 46.0.3 to 46.0.4 (#12944)
Bumps [cryptography](https://github.com/pyca/cryptography) from 46.0.3 to 46.0.4.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/46.0.3...46.0.4)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-version: 46.0.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-02 09:26:30 +05:30
Ieere Song
4d4058ee8e fix: typo in validate_inventory task name (missing backtick) (#12940)
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-01-31 20:02:24 +05:30
Tushar240503
f071fccc33 updated prometheus-operator crd checksum autobump (#12939)
* updated prometheus-operator crd checksum autobump

Signed-off-by: Tushar Sharma <tusharkumargzb6@gmail.com>

* updated to Next-Gen format

Signed-off-by: Tushar Sharma <tusharkumargzb6@gmail.com>

---------

Signed-off-by: Tushar Sharma <tusharkumargzb6@gmail.com>
2026-01-31 19:44:24 +05:30
Eugene Shutov
70daea701a local_path_provisioner: add resources (#12548)
* local_path_provisioner: add resources

* Update roles/kubernetes-apps/external_provisioner/local_path_provisioner/templates/local-path-storage-deployment.yml.j2

Co-authored-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

---------

Co-authored-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2026-01-31 00:08:25 +05:30
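The PR adds a `resources` block to the local-path-provisioner deployment template. A representative block of this kind (the values here are illustrative, not the template's defaults, which Kubespray exposes through variables) looks like:

```yaml
containers:
  - name: local-path-provisioner
    # Illustrative requests/limits; the actual template parameterizes these.
    resources:
      requests:
        cpu: 50m
        memory: 64Mi
      limits:
        cpu: 200m
        memory: 256Mi
```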
Ali Afsharzadeh
3e42b84e94 Upgrade Dockerfile base image from Ubuntu 22.04 to 24.04 (#12935)
* Upgrade Dockerfile base image from Ubuntu 22.04 to 24.04

* Add --break-system-packages flag to testcases_run.sh file
2026-01-30 19:57:44 +05:30
Max Gautier
868ff3cea9 Auto-bump checksums on last 3 branches (#12934)
We now have all supported release branches (last 3) using the new
checksums format, which means they all work with the auto-bump tooling.
2026-01-30 15:39:44 +05:30
Max Gautier
0b69a18e35 Remove nifcloud terraform provider support (it is no longer available) (#12936)
The nifcloud terraform provider has been deleted, so remove support and
CI.
2026-01-30 15:05:44 +05:30
ChengHao Yang
e30076016c Releng: Galaxy version upgrade to 2.31.0 (#12909)
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2026-01-30 13:35:43 +05:30
ChengHao Yang
f4ccdb5e72 Docs: update 2.29.0 to 2.30.0 (#12899)
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2026-01-29 23:45:50 +05:30
Max Gautier
fcecaf6943 wait for control plane node to become ready after joining (#12794)
When joining a control plane node and "upgrading" the cluster setup (for
example, to update etcd addresses after adding a new etcd) in the same
playbook run, the node can take a bit of time to become ready after
joining.
This trips a kubeadm preflight check (ControlPlaneNodesReady) in
kubeadm upgrade, which runs directly after the join tasks.

Add a configurable wait for the control plane node to become Ready to
fix this race condition.
2026-01-28 22:15:51 +05:30
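A minimal sketch of such a wait, assuming a kubectl-based task (the variable names here are illustrative, not necessarily the ones the PR introduces):

```yaml
- name: Wait for the joined control plane node to become Ready
  command: >-
    {{ bin_dir }}/kubectl --kubeconfig /etc/kubernetes/admin.conf
    wait --for=condition=Ready node/{{ inventory_hostname }}
    --timeout={{ control_plane_ready_timeout | default('300s') }}
  changed_when: false
```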
Max Gautier
37f7a86014 etcd-certs: only change necessary permissions (#12908)
We currently **recursively** set the permissions of /etc/ssl/etcd/ssl
(the default path) to 700. But this removes group permissions from the files
under it, and certain components (like calico with the etcd datastore) rely
on them; thus, a cluster upgrade can fail because the
calico-kube-controller can't access the certs, and therefore can't reach etcd.

This works in other cases because, as far as I can tell, the apiserver
(which does access etcd) runs as root, the owner of the files, not just
the "group owner".

We also, for some reason, do this twice.

Only create the etcd cert directory with the correct permissions once,
not recursively.
2026-01-27 20:25:52 +05:30
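The fix (visible in the last hunk of this compare) boils down to a plain directory task without `recurse: true`; a sketch, with variable names per the role defaults:

```yaml
- name: Gen_certs | create etcd cert dir
  file:
    path: "{{ etcd_cert_dir }}"
    state: directory
    owner: "{{ etcd_owner }}"
    group: "{{ etcd_cert_group }}"
    mode: "0700"
    # no recurse: the certificate files underneath keep their own
    # (group-readable) modes
```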
Max Gautier
fff7f10a85 Patch versions updates (#12912)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-01-27 20:21:53 +05:30
ChengHao Yang
dc09298f7e Docs: cilium_kube_proxy_replacement change boolean (#12898)
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2026-01-27 16:43:48 +05:30
dependabot[bot]
680db0c921 build(deps): bump jmespath from 1.0.1 to 1.1.0 (#12905)
Bumps [jmespath](https://github.com/jmespath/jmespath.py) from 1.0.1 to 1.1.0.
- [Changelog](https://github.com/jmespath/jmespath.py/blob/develop/CHANGELOG.rst)
- [Commits](https://github.com/jmespath/jmespath.py/compare/1.0.1...1.1.0)

---
updated-dependencies:
- dependency-name: jmespath
  dependency-version: 1.1.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-27 16:39:49 +05:30
dependabot[bot]
9977d4dc10 build(deps): bump actions/checkout from 6.0.1 to 6.0.2 (#12906)
Bumps [actions/checkout](https://github.com/actions/checkout) from 6.0.1 to 6.0.2.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](8e8c483db8...de0fac2e45)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: 6.0.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-26 20:41:53 +05:30
dependabot[bot]
1b6129566b build(deps): bump peter-evans/create-pull-request from 8.0.0 to 8.1.0 (#12907)
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 8.0.0 to 8.1.0.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](98357b18bf...c0f553fe54)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-version: 8.1.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-26 20:37:51 +05:30
Ali Afsharzadeh
c3404c3685 Upgrade cilium from 1.18.5 to 1.18.6 (#12900)
Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2026-01-26 20:21:50 +05:30
Max Gautier
fba8708486 RELEASE.md: fix minor typo (#12891) 2026-01-22 16:43:29 +05:30
accuROAMC
8dacb9cd16 cri-o: fix duplicate top-level "auths" keys in registry config template (#12845)
The config.json.j2 template was generating invalid JSON when multiple
crio_registry_auth entries were defined, resulting in multiple top-level
"auths" objects being rendered, e.g.:

{
  "auths": { "registry1": { "auth": "xxxx" } },
  "auths": { "registry2": { "auth": "yyyy" } }
}

This change moves the loop inside the "auths" object so that all registries
are rendered as siblings under a single "auths" key, producing valid JSON:

{
  "auths": {
    "registry1": { "auth": "xxxx" },
    "registry2": { "auth": "yyyy" }
  }
}
2026-01-20 19:20:50 +05:30
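A sketch of the corrected template structure, simplified from the actual `config.json.j2` hunk at the bottom of this compare:

```jinja
{# One top-level "auths" object; the registry loop is nested inside it. #}
{
  "auths": {
{% for reg in crio_registry_auth %}
    "{{ reg.registry }}": {
      "auth": "{{ (reg.username + ':' + reg.password) | b64encode }}"
    }{{ "," if not loop.last }}
{% endfor %}
  }
}
```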
Max Gautier
df3f0a2341 k8s-certs-renew: fix broken script (#12876)
Improper quoting of a variable assignment makes the shell interpret it as
a command; since the variable is unused anyway, just delete it.
2026-01-19 22:57:47 +05:30
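For illustration (not the exact script line), this is the class of bug:

```bash
# Broken: quoting the whole assignment turns it into a command name, so
# the shell tries to execute a program literally called
# `CERT_DIR=/etc/kubernetes/pki` and fails.
"CERT_DIR=/etc/kubernetes/pki"

# Correct: quote only the value, never the assignment itself.
CERT_DIR="/etc/kubernetes/pki"
```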
Kubernetes Prow Robot
62e90b3122 Merge pull request #12872 from VannTen/fix/defaut_lb_address
Use loadbalancer IP as default apiserver endpoint if no LB hostname is used
2026-01-19 21:45:50 +05:30
Max Gautier
6b5cc5bdfb Fix defaults for apiserver_loadbalancer_domain_name
Since we're no longer injecting pseudo DNS into /etc/hosts,
'lb-apiserver.kubernetes.local' (the previous default) won't resolve to
anything.

Instead, default to the loadbalancer IP if defined, or to the node-local
loadbalancer if it's in use (see the sketch below).

Make the necessary adjustments at the use sites to deal with IP addresses as
well as hostnames.
2026-01-19 09:43:48 +01:00
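A hedged sketch of that fallback order, using Kubespray's existing `loadbalancer_apiserver` / `loadbalancer_apiserver_localhost` variables (the exact expression in the PR may differ):

```yaml
apiserver_loadbalancer_domain_name: >-
  {%- if (loadbalancer_apiserver | default({})).address is defined -%}
    {{ loadbalancer_apiserver.address }}
  {%- elif loadbalancer_apiserver_localhost | default(true) -%}
    localhost
  {%- endif -%}
```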
dependabot[bot]
a277cfdee7 build(deps): bump stefanbuck/github-issue-parser from 3.2.2 to 3.2.3 (#12874)
Bumps [stefanbuck/github-issue-parser](https://github.com/stefanbuck/github-issue-parser) from 3.2.2 to 3.2.3.
- [Release notes](https://github.com/stefanbuck/github-issue-parser/releases)
- [Commits](25f1485edf...10dcc54158)

---
updated-dependencies:
- dependency-name: stefanbuck/github-issue-parser
  dependency-version: 3.2.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-19 09:35:10 +05:30
Max Gautier
bc5528f585 Patch versions updates (#12854)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-01-17 23:57:09 +05:30
Max Gautier
2740c13c0c Do not use apiserver LB in etcd certificates
etcd does not use the apiserver load balancer, so there is no reason to
include its DNS name in the etcd certificates.
2026-01-15 16:50:45 +01:00
Bas
52b68bccad Fix: ansible_facts.selinux.status added. (#12861) 2026-01-14 23:31:40 +05:30
Will Xiang
82c4c0afdf fix syntax in haproxy.cfg.j2 for IPv6 binding (#12862) 2026-01-14 12:33:35 +05:30
Kirill Statsenko
63a43cf6db add metallb_namespace default value (#12860) 2026-01-13 20:55:43 +05:30
Ali Afsharzadeh
666a3a9500 Upgrade containerd and nerdctl from 2.1.6 to 2.2.1 (#12825) 2026-01-12 15:24:10 +05:30
Max Gautier
28f9c126bf ansible-lint: disable jinja[spacing] warning (#12848)
This pollutes the ansible-lint output and forces us to scroll to check what
the actual issues are.
The spacing warnings are minor and very opinionated, so suppressing them is
no great loss.
2026-01-12 13:42:07 +05:30
Sivaram Singana
d41b629be3 updated elastx_ubuntu20 to ubuntu24 (#12844)
* Updated the job name to elastx_ubuntu24 and ci matrix and test file

Signed-off-by: sivaram <singana.sivaram.naidu@ibm.com>

* remove unused OVH CI tf file (tf-ovh_ubuntu20-calico.yml)

Signed-off-by: sivaram <singana.sivaram.naidu@ibm.com>

* remove ubuntu20 for pre-commit fix

Signed-off-by: sivaram <singana.sivaram.naidu@ibm.com>

---------

Signed-off-by: sivaram <singana.sivaram.naidu@ibm.com>
2026-01-10 23:35:56 +05:30
Ali Afsharzadeh
851abbc2e3 Disable discard_unpacked_layers for containerd >= 2.1 (#12821)
Only set `discard_unpacked_layers` in the CRI image config for containerd
versions earlier than 2.1.0.

Starting with containerd v2.1, the CRI plugin uses the Transfer Service for
image pulls by default. The `discard_unpacked_layers` option is incompatible
with the Transfer Service and triggers containerd to fall back to local
image pulls, logging a warning.

This change prevents unsupported configuration from being applied on newer
containerd versions, avoiding runtime warnings and ensuring default image
pull behavior.

Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2026-01-08 19:39:40 +05:30
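The resulting template logic (the exact hunk appears later in this compare) only renders the option for older containerd:

```toml
# Rendered into config.toml only when containerd_version < 2.1.0;
# on 2.1+ the key is omitted so the Transfer Service stays in use.
[plugins."io.containerd.cri.v1.images"]
  discard_unpacked_layers = true
```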
Qasim Mehmood
17c72367bc kube-vip: Fix template, drop all capabilities and use kube_vip_version (#12835)
* Drop capabilities in kube-vip and use kube_vip_version

* Preserve trailing newline for kube_vip_cidr env var
2026-01-07 07:43:38 +05:30
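A sketch of the hardening described above; the add-back list is illustrative (kube-vip commonly needs NET_ADMIN and NET_RAW for VIP and ARP handling), not necessarily the PR's exact set:

```yaml
securityContext:
  capabilities:
    drop:
      - ALL
    add:
      - NET_ADMIN
      - NET_RAW
```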
Max Gautier
d91c7d7576 Fix ansible-lint config error (#12842) 2026-01-06 20:15:11 +05:30
Kubernetes Prow Robot
14b20ad2a2 Merge pull request #12832 from VannTen/cleanup/network_facts
network_facts: streamline set_fact and setup calls
2026-01-06 15:01:10 +05:30
Max Gautier
72cb1356ef ci: make opentofu elastx not optional 2026-01-05 15:55:01 +01:00
Max Gautier
51304d57e2 network_facts: streamline set_fact and setup calls
- invoke the setup module only once to gather IPv4 and IPv6 addresses
- eliminate the remaining uses of `fallback_ip` and `fallback_ip6`, allowing
  us to define (with `set_fact`) all the "computed" IP variables in one
  go, since there is no longer a dependency between them.
2026-01-05 15:54:56 +01:00
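A minimal sketch of the streamlined pattern, with illustrative variable names:

```yaml
- name: Gather network facts once
  setup:
    gather_subset:
      - "!all"
      - network

- name: Compute all node IPs in one go
  set_fact:
    main_ip: "{{ ip | default(ansible_default_ipv4.address | default('')) }}"
    main_ip6: "{{ ip6 | default(ansible_default_ipv6.address | default('')) }}"
```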
Goutham K
a0d7bef90e Remove deprecated kubelet flag (#12639) 2026-01-05 18:56:42 +05:30
Max Gautier
a1ec88e290 openstack-cleanup: delete old keypairs as well (#12833)
* openstack-cleanup: format and logging

* openstack-cleanup: delete old keypairs as well
2026-01-05 17:42:37 +05:30
Kubernetes Prow Robot
c9ff62944e Merge pull request #12355 from tico88612/feat/rocky-10-support
RockyLinux 10 support (experimental)
2026-01-05 14:32:37 +05:30
LawiK974
20ab9179af Update kube-vip to v1.0.3 (#12815) 2026-01-04 22:52:37 +05:30
ChengHao Yang
5be35c811a Docs: Rocky Linux 10 experimental description
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2026-01-03 15:40:01 +08:00
ChengHao Yang
ad522d4aab Docs: add rockylinux-10-extra description
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2026-01-03 00:52:30 +08:00
ChengHao Yang
9c511069cc CI: change rockylinux 10 image with kernel-module-extra
How to build RockyLinux 10 + `kernel-module-extra` with dib
https://github.com/kubernetes-sigs/kubespray/pull/12355#issuecomment-3705400093

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2026-01-02 23:10:41 +08:00
ChengHao Yang
ed270fcab4 Docs: update support system RHEL-based variants
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2026-01-02 23:10:41 +08:00
ChengHao Yang
0615929727 CI: add cilium test for rockylinux 10
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2026-01-02 23:10:41 +08:00
ChengHao Yang
48c25d9ebf CI: add calico test for rockylinux 10
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2026-01-02 23:10:40 +08:00
LawiK974
0bffcacbe7 Add rbac for calico kube-controllers to access services (#12828) 2026-01-02 20:04:35 +05:30
R. P. Taylor
c857252225 terraform openstack: allow ICMPv6 by default (#12805) 2026-01-02 14:50:38 +05:30
Ali Afsharzadeh
a0f00761ac Removed deprecated keys from containerd config (#12820)
Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2026-01-02 14:26:35 +05:30
r3m8
3a3e5d6954 fix(cilium): add dynamic api server endpoint configuration (#12624) 2026-01-01 17:26:34 +05:30
ChengHao Yang
2d6e508084 Fix: molecule 25.12.0 test (#12808)
* Bump molecule to 25.12.0

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

* Fixed ansible role not found in molecule after 25.2.0

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

---------

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-12-31 15:12:34 +05:30
Ali Afsharzadeh
6d850a0dc5 Update pause image to 3.10.1 for Kubernetes 1.34 (#12827) 2025-12-31 13:48:35 +05:30
Max Gautier
6a517e165e Fix kubeadm init retry (#12785)
We currently always fail on the kubeadm init retry, because of the
remnants of the first try.

Ignore the related errors in the retry to unblock it.
2025-12-25 15:14:31 +05:30
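A hedged sketch of the pattern, with illustrative preflight-error names (the exact checks ignored by the PR may differ):

```yaml
- name: Kubeadm | Initialize first control plane node
  command: >-
    {{ bin_dir }}/kubeadm init
    --config {{ kube_config_dir }}/kubeadm-config.yaml
    --ignore-preflight-errors=DirAvailable--var-lib-etcd,Port-10250
  register: kubeadm_init
  retries: 3
  until: kubeadm_init is succeeded
```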
85 changed files with 385 additions and 1107 deletions

View File

@@ -1,5 +1,4 @@
---
parseable: true
skip_list:
# see https://docs.ansible.com/ansible-lint/rules/default_rules.html for a list of all default rules
@@ -34,6 +33,8 @@ skip_list:
# Disable run-once check with free strategy
# (Disabled in June 2023 after ansible upgrade; FIXME)
- 'run-once[task]'
- 'jinja[spacing]'
exclude_paths:
# Generated files
- tests/files/custom_cni/cilium.yaml

View File

@@ -13,10 +13,10 @@ jobs:
issues: write
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd
- name: Parse issue form
uses: stefanbuck/github-issue-parser@25f1485edffc1fee3ea68eb9f59a72e58720ffc4
uses: stefanbuck/github-issue-parser@10dcc54158ba4c137713d9d69d70a2da63b6bda3
id: issue-parser
with:
template-path: .github/ISSUE_TEMPLATE/bug-report.yaml

View File

@@ -20,7 +20,7 @@ jobs:
query get_release_branches($owner:String!, $name:String!) {
repository(owner:$owner, name:$name) {
refs(refPrefix: "refs/heads/",
first: 2, # TODO increment once we have release branch with the new checksums format
first: 3,
query: "release-",
orderBy: {
field: ALPHABETICAL,

View File

@@ -11,7 +11,7 @@ jobs:
update-patch-versions:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd
with:
ref: ${{ inputs.branch }}
- uses: actions/setup-python@v6
@@ -29,7 +29,7 @@ jobs:
~/.cache/pre-commit
- run: pre-commit run --all-files propagate-ansible-variables
continue-on-error: true
- uses: peter-evans/create-pull-request@98357b18bf14b5342f975ff684046ec3b2a07725
- uses: peter-evans/create-pull-request@c0f553fe549906ede9cf27b5156039d195d2ece0
with:
commit-message: Patch versions updates
title: Patch versions updates - ${{ inputs.branch }}

View File

@@ -43,6 +43,7 @@ pr:
- fedora39-kube-router
- openeuler24-calico
- rockylinux9-cilium
- rockylinux10-cilium
- ubuntu22-calico-all-in-one
- ubuntu22-calico-all-in-one-upgrade
- ubuntu24-calico-etcd-datastore
@@ -127,6 +128,7 @@ pr_extended:
- debian12-docker
- debian13-calico
- rockylinux9-calico
- rockylinux10-calico
- ubuntu22-all-in-one-docker
- ubuntu24-all-in-one-docker
- ubuntu24-calico-all-in-one

View File

@@ -37,7 +37,6 @@ terraform_validate:
- hetzner
- vsphere
- upcloud
- nifcloud
.terraform_apply:
extends: .terraform_install
@@ -89,11 +88,10 @@ tf-elastx_cleanup:
- ./scripts/openstack-cleanup/main.py
allow_failure: true
tf-elastx_ubuntu20-calico:
tf-elastx_ubuntu24-calico:
extends: .terraform_apply
stage: deploy-part1
when: on_success
allow_failure: true
variables:
<<: *elastx_variables
PROVIDER: openstack

View File

@@ -1,7 +1,7 @@
# syntax=docker/dockerfile:1
# Use immutable image tags rather than mutable tags (like ubuntu:22.04)
FROM ubuntu:22.04@sha256:149d67e29f765f4db62aa52161009e99e389544e25a8f43c8c89d4a445a7ca37
# Use immutable image tags rather than mutable tags (like ubuntu:24.04)
FROM ubuntu:noble-20260113@sha256:cd1dba651b3080c3686ecf4e3c4220f026b521fb76978881737d24f200828b2b
# Some tools like yamllint need this
# Pip needs this as well at the moment to install ansible
@@ -29,7 +29,7 @@ RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
RUN --mount=type=bind,source=requirements.txt,target=requirements.txt \
--mount=type=cache,sharing=locked,id=pipcache,mode=0777,target=/root/.cache/pip \
pip install --no-compile --no-cache-dir -r requirements.txt \
pip install --break-system-packages --no-compile --no-cache-dir -r requirements.txt \
&& find /usr -type d -name '*__pycache__' -prune -exec rm -rf {} \;
SHELL ["/bin/bash", "-o", "pipefail", "-c"]

View File

@@ -22,7 +22,7 @@ Ensure you have installed Docker then
```ShellSession
docker run --rm -it --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \
--mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
quay.io/kubespray/kubespray:v2.29.0 bash
quay.io/kubespray/kubespray:v2.30.0 bash
# Inside the container you may now run the kubespray playbooks:
ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa cluster.yml
```
@@ -89,13 +89,13 @@ vagrant up
- **Flatcar Container Linux by Kinvolk**
- **Debian** Bookworm, Bullseye, Trixie
- **Ubuntu** 22.04, 24.04
- **CentOS/RHEL** [8, 9](docs/operating_systems/rhel.md#rhel-8)
- **CentOS Stream / RHEL** [9, 10](docs/operating_systems/rhel.md#rhel-8)
- **Fedora** 39, 40
- **Fedora CoreOS** (see [fcos Note](docs/operating_systems/fcos.md))
- **openSUSE** Leap 15.x/Tumbleweed
- **Oracle Linux** [8, 9](docs/operating_systems/rhel.md#rhel-8)
- **Alma Linux** [8, 9](docs/operating_systems/rhel.md#rhel-8)
- **Rocky Linux** [8, 9](docs/operating_systems/rhel.md#rhel-8)
- **Oracle Linux** [9, 10](docs/operating_systems/rhel.md#rhel-8)
- **Alma Linux** [9, 10](docs/operating_systems/rhel.md#rhel-8)
- **Rocky Linux** [9, 10](docs/operating_systems/rhel.md#rhel-8) (experimental in 10: see [Rocky Linux 10 notes](docs/operating_systems/rhel.md#rocky-linux-10))
- **Kylin Linux Advanced Server V10** (experimental: see [kylin linux notes](docs/operating_systems/kylinlinux.md))
- **Amazon Linux 2** (experimental: see [amazon linux notes](docs/operating_systems/amazonlinux.md))
- **UOS Linux** (experimental: see [uos linux notes](docs/operating_systems/uoslinux.md))
@@ -114,17 +114,17 @@ Note:
- [kubernetes](https://github.com/kubernetes/kubernetes) 1.34.3
- [etcd](https://github.com/etcd-io/etcd) 3.5.26
- [docker](https://www.docker.com/) 28.3
- [containerd](https://containerd.io/) 2.1.6
- [cri-o](http://cri-o.io/) 1.34.3 (experimental: see [CRI-O Note](docs/CRI/cri-o.md). Only on fedora, ubuntu and centos based OS)
- [containerd](https://containerd.io/) 2.2.1
- [cri-o](http://cri-o.io/) 1.34.4 (experimental: see [CRI-O Note](docs/CRI/cri-o.md). Only on fedora, ubuntu and centos based OS)
- Network Plugin
- [cni-plugins](https://github.com/containernetworking/plugins) 1.8.0
- [calico](https://github.com/projectcalico/calico) 3.30.5
- [cilium](https://github.com/cilium/cilium) 1.18.5
- [calico](https://github.com/projectcalico/calico) 3.30.6
- [cilium](https://github.com/cilium/cilium) 1.18.6
- [flannel](https://github.com/flannel-io/flannel) 0.27.3
- [kube-ovn](https://github.com/alauda/kube-ovn) 1.12.21
- [kube-router](https://github.com/cloudnativelabs/kube-router) 2.1.1
- [multus](https://github.com/k8snetworkplumbingwg/multus-cni) 4.2.2
- [kube-vip](https://github.com/kube-vip/kube-vip) 0.8.0
- [kube-vip](https://github.com/kube-vip/kube-vip) 1.0.3
- Application
- [cert-manager](https://github.com/jetstack/cert-manager) 1.15.3
- [coredns](https://github.com/coredns/coredns) 1.12.1

View File

@@ -15,7 +15,7 @@ The Kubespray Project is released on an as-needed basis. The process is as follo
1. The release issue is closed
1. An announcement email is sent to `dev@kubernetes.io` with the subject `[ANNOUNCE] Kubespray $VERSION is released`
1. The topic of the #kubespray channel is updated with `vX.Y.Z is released! | ...`
1. Create/Update Issue for upgradeing kubernetes and [k8s-conformance](https://github.com/cncf/k8s-conformance)
1. Create/Update Issue for upgrading kubernetes and [k8s-conformance](https://github.com/cncf/k8s-conformance)
## Major/minor releases and milestones

View File

@@ -20,7 +20,6 @@ function create_container_image_tar() {
kubectl describe cronjobs,jobs,pods --all-namespaces | grep " Image:" | awk '{print $2}' | sort | uniq > "${IMAGES}"
# NOTE: etcd and pause cannot be seen as pods.
# The pause image is used for --pod-infra-container-image option of kubelet.
kubectl cluster-info dump | grep -E "quay.io/coreos/etcd:|registry.k8s.io/pause:" | sed s@\"@@g >> "${IMAGES}"
else
echo "Getting images from file \"${IMAGES_FROM_FILE}\""

View File

@@ -1,5 +0,0 @@
*.tfstate*
.terraform.lock.hcl
.terraform
sample-inventory/inventory.ini

View File

@@ -1,138 +0,0 @@
# Kubernetes on NIFCLOUD with Terraform
Provision a Kubernetes cluster on [NIFCLOUD](https://pfs.nifcloud.com/) using Terraform and Kubespray
## Overview
The setup looks like following
```text
Kubernetes cluster
+----------------------------+
+---------------+ | +--------------------+ |
| | | | +--------------------+ |
| API server LB +---------> | | | |
| | | | | Control Plane/etcd | |
+---------------+ | | | node(s) | |
| +-+ | |
| +--------------------+ |
| ^ |
| | |
| v |
| +--------------------+ |
| | +--------------------+ |
| | | | |
| | | Worker | |
| | | node(s) | |
| +-+ | |
| +--------------------+ |
+----------------------------+
```
## Requirements
* Terraform 1.3.7
## Quickstart
### Export Variables
* Your NIFCLOUD credentials:
```bash
export NIFCLOUD_ACCESS_KEY_ID=<YOUR ACCESS KEY>
export NIFCLOUD_SECRET_ACCESS_KEY=<YOUR SECRET ACCESS KEY>
```
* The SSH KEY used to connect to the instance:
* FYI: [Cloud Help(SSH Key)](https://pfs.nifcloud.com/help/ssh.htm)
```bash
export TF_VAR_SSHKEY_NAME=<YOUR SSHKEY NAME>
```
* The IP address to connect to bastion server:
```bash
export TF_VAR_working_instance_ip=$(curl ifconfig.me)
```
### Create The Infrastructure
* Run terraform:
```bash
terraform init
terraform apply -var-file ./sample-inventory/cluster.tfvars
```
### Setup The Kubernetes
* Generate cluster configuration file:
```bash
./generate-inventory.sh > sample-inventory/inventory.ini
```
* Export Variables:
```bash
BASTION_IP=$(terraform output -json | jq -r '.kubernetes_cluster.value.bastion_info | to_entries[].value.public_ip')
API_LB_IP=$(terraform output -json | jq -r '.kubernetes_cluster.value.control_plane_lb')
CP01_IP=$(terraform output -json | jq -r '.kubernetes_cluster.value.control_plane_info | to_entries[0].value.private_ip')
export ANSIBLE_SSH_ARGS="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ProxyCommand=\"ssh root@${BASTION_IP} -W %h:%p\""
```
* Set ssh-agent"
```bash
eval `ssh-agent`
ssh-add <THE PATH TO YOUR SSH KEY>
```
* Run cluster.yml playbook:
```bash
cd ./../../../
ansible-playbook -i contrib/terraform/nifcloud/inventory/inventory.ini cluster.yml
```
### Connecting to Kubernetes
* [Install kubectl](https://kubernetes.io/docs/tasks/tools/) on the localhost
* Fetching kubeconfig file:
```bash
mkdir -p ~/.kube
scp -o ProxyCommand="ssh root@${BASTION_IP} -W %h:%p" root@${CP01_IP}:/etc/kubernetes/admin.conf ~/.kube/config
```
* Rewrite /etc/hosts
```bash
sudo echo "${API_LB_IP} lb-apiserver.kubernetes.local" >> /etc/hosts
```
* Run kubectl
```bash
kubectl get node
```
## Variables
* `region`: Region where to run the cluster
* `az`: Availability zone where to run the cluster
* `private_ip_bn`: Private ip address of bastion server
* `private_network_cidr`: Subnet of private network
* `instances_cp`: Machine to provision as Control Plane. Key of this object will be used as part of the machine' name
* `private_ip`: private ip address of machine
* `instances_wk`: Machine to provision as Worker Node. Key of this object will be used as part of the machine' name
* `private_ip`: private ip address of machine
* `instance_key_name`: The key name of the Key Pair to use for the instance
* `instance_type_bn`: The instance type of bastion server
* `instance_type_wk`: The instance type of worker node
* `instance_type_cp`: The instance type of control plane
* `image_name`: OS image used for the instance
* `working_instance_ip`: The IP address to connect to bastion server
* `accounting_type`: Accounting type. (1: monthly, 2: pay per use)

View File

@@ -1,64 +0,0 @@
#!/bin/bash
#
# Generates a inventory file based on the terraform output.
# After provisioning a cluster, simply run this command and supply the terraform state file
# Default state file is terraform.tfstate
#
set -e
TF_OUT=$(terraform output -json)
CONTROL_PLANES=$(jq -r '.kubernetes_cluster.value.control_plane_info | to_entries[]' <(echo "${TF_OUT}"))
WORKERS=$(jq -r '.kubernetes_cluster.value.worker_info | to_entries[]' <(echo "${TF_OUT}"))
mapfile -t CONTROL_PLANE_NAMES < <(jq -r '.key' <(echo "${CONTROL_PLANES}"))
mapfile -t WORKER_NAMES < <(jq -r '.key' <(echo "${WORKERS}"))
API_LB=$(jq -r '.kubernetes_cluster.value.control_plane_lb' <(echo "${TF_OUT}"))
echo "[all]"
# Generate control plane hosts
i=1
for name in "${CONTROL_PLANE_NAMES[@]}"; do
private_ip=$(jq -r '. | select( .key=='"\"${name}\""' ) | .value.private_ip' <(echo "${CONTROL_PLANES}"))
echo "${name} ansible_user=root ansible_host=${private_ip} access_ip=${private_ip} ip=${private_ip} etcd_member_name=etcd${i}"
i=$(( i + 1 ))
done
# Generate worker hosts
for name in "${WORKER_NAMES[@]}"; do
private_ip=$(jq -r '. | select( .key=='"\"${name}\""' ) | .value.private_ip' <(echo "${WORKERS}"))
echo "${name} ansible_user=root ansible_host=${private_ip} access_ip=${private_ip} ip=${private_ip}"
done
API_LB=$(jq -r '.kubernetes_cluster.value.control_plane_lb' <(echo "${TF_OUT}"))
echo ""
echo "[all:vars]"
echo "upstream_dns_servers=['8.8.8.8','8.8.4.4']"
echo "loadbalancer_apiserver={'address':'${API_LB}','port':'6443'}"
echo ""
echo "[kube_control_plane]"
for name in "${CONTROL_PLANE_NAMES[@]}"; do
echo "${name}"
done
echo ""
echo "[etcd]"
for name in "${CONTROL_PLANE_NAMES[@]}"; do
echo "${name}"
done
echo ""
echo "[kube_node]"
for name in "${WORKER_NAMES[@]}"; do
echo "${name}"
done
echo ""
echo "[k8s_cluster:children]"
echo "kube_control_plane"
echo "kube_node"

View File

@@ -1,36 +0,0 @@
provider "nifcloud" {
region = var.region
}
module "kubernetes_cluster" {
source = "./modules/kubernetes-cluster"
availability_zone = var.az
prefix = "dev"
private_network_cidr = var.private_network_cidr
instance_key_name = var.instance_key_name
instances_cp = var.instances_cp
instances_wk = var.instances_wk
image_name = var.image_name
instance_type_bn = var.instance_type_bn
instance_type_cp = var.instance_type_cp
instance_type_wk = var.instance_type_wk
private_ip_bn = var.private_ip_bn
additional_lb_filter = [var.working_instance_ip]
}
resource "nifcloud_security_group_rule" "ssh_from_bastion" {
security_group_names = [
module.kubernetes_cluster.security_group_name.bastion
]
type = "IN"
from_port = 22
to_port = 22
protocol = "TCP"
cidr_ip = var.working_instance_ip
}

View File

@@ -1,301 +0,0 @@
#################################################
##
## Local variables
##
locals {
# e.g. east-11 is 11
az_num = reverse(split("-", var.availability_zone))[0]
# e.g. east-11 is e11
az_short_name = "${substr(reverse(split("-", var.availability_zone))[1], 0, 1)}${local.az_num}"
# Port used by the protocol
port_ssh = 22
port_kubectl = 6443
port_kubelet = 10250
# calico: https://docs.tigera.io/calico/latest/getting-started/kubernetes/requirements#network-requirements
port_bgp = 179
port_vxlan = 4789
port_etcd = 2379
}
#################################################
##
## General
##
# data
data "nifcloud_image" "this" {
image_name = var.image_name
}
# private lan
resource "nifcloud_private_lan" "this" {
private_lan_name = "${var.prefix}lan"
availability_zone = var.availability_zone
cidr_block = var.private_network_cidr
accounting_type = var.accounting_type
}
#################################################
##
## Bastion
##
resource "nifcloud_security_group" "bn" {
group_name = "${var.prefix}bn"
description = "${var.prefix} bastion"
availability_zone = var.availability_zone
}
resource "nifcloud_instance" "bn" {
instance_id = "${local.az_short_name}${var.prefix}bn01"
security_group = nifcloud_security_group.bn.group_name
instance_type = var.instance_type_bn
user_data = templatefile("${path.module}/templates/userdata.tftpl", {
private_ip_address = var.private_ip_bn
ssh_port = local.port_ssh
hostname = "${local.az_short_name}${var.prefix}bn01"
})
availability_zone = var.availability_zone
accounting_type = var.accounting_type
image_id = data.nifcloud_image.this.image_id
key_name = var.instance_key_name
network_interface {
network_id = "net-COMMON_GLOBAL"
}
network_interface {
network_id = nifcloud_private_lan.this.network_id
ip_address = "static"
}
# The image_id changes when the OS image type is demoted from standard to public.
lifecycle {
ignore_changes = [
image_id,
user_data,
]
}
}
#################################################
##
## Control Plane
##
resource "nifcloud_security_group" "cp" {
group_name = "${var.prefix}cp"
description = "${var.prefix} control plane"
availability_zone = var.availability_zone
}
resource "nifcloud_instance" "cp" {
for_each = var.instances_cp
instance_id = "${local.az_short_name}${var.prefix}${each.key}"
security_group = nifcloud_security_group.cp.group_name
instance_type = var.instance_type_cp
user_data = templatefile("${path.module}/templates/userdata.tftpl", {
private_ip_address = each.value.private_ip
ssh_port = local.port_ssh
hostname = "${local.az_short_name}${var.prefix}${each.key}"
})
availability_zone = var.availability_zone
accounting_type = var.accounting_type
image_id = data.nifcloud_image.this.image_id
key_name = var.instance_key_name
network_interface {
network_id = "net-COMMON_GLOBAL"
}
network_interface {
network_id = nifcloud_private_lan.this.network_id
ip_address = "static"
}
# The image_id changes when the OS image type is demoted from standard to public.
lifecycle {
ignore_changes = [
image_id,
user_data,
]
}
}
resource "nifcloud_load_balancer" "this" {
load_balancer_name = "${local.az_short_name}${var.prefix}cp"
accounting_type = var.accounting_type
balancing_type = 1 // Round-Robin
load_balancer_port = local.port_kubectl
instance_port = local.port_kubectl
instances = [for v in nifcloud_instance.cp : v.instance_id]
filter = concat(
[for k, v in nifcloud_instance.cp : v.public_ip],
[for k, v in nifcloud_instance.wk : v.public_ip],
var.additional_lb_filter,
)
filter_type = 1 // Allow
}
#################################################
##
## Worker
##
resource "nifcloud_security_group" "wk" {
group_name = "${var.prefix}wk"
description = "${var.prefix} worker"
availability_zone = var.availability_zone
}
resource "nifcloud_instance" "wk" {
for_each = var.instances_wk
instance_id = "${local.az_short_name}${var.prefix}${each.key}"
security_group = nifcloud_security_group.wk.group_name
instance_type = var.instance_type_wk
user_data = templatefile("${path.module}/templates/userdata.tftpl", {
private_ip_address = each.value.private_ip
ssh_port = local.port_ssh
hostname = "${local.az_short_name}${var.prefix}${each.key}"
})
availability_zone = var.availability_zone
accounting_type = var.accounting_type
image_id = data.nifcloud_image.this.image_id
key_name = var.instance_key_name
network_interface {
network_id = "net-COMMON_GLOBAL"
}
network_interface {
network_id = nifcloud_private_lan.this.network_id
ip_address = "static"
}
# The image_id changes when the OS image type is demoted from standard to public.
lifecycle {
ignore_changes = [
image_id,
user_data,
]
}
}
#################################################
##
## Security Group Rule: Kubernetes
##
# ssh
resource "nifcloud_security_group_rule" "ssh_from_bastion" {
security_group_names = [
nifcloud_security_group.wk.group_name,
nifcloud_security_group.cp.group_name,
]
type = "IN"
from_port = local.port_ssh
to_port = local.port_ssh
protocol = "TCP"
source_security_group_name = nifcloud_security_group.bn.group_name
}
# kubectl
resource "nifcloud_security_group_rule" "kubectl_from_worker" {
security_group_names = [
nifcloud_security_group.cp.group_name,
]
type = "IN"
from_port = local.port_kubectl
to_port = local.port_kubectl
protocol = "TCP"
source_security_group_name = nifcloud_security_group.wk.group_name
}
# kubelet
resource "nifcloud_security_group_rule" "kubelet_from_worker" {
security_group_names = [
nifcloud_security_group.cp.group_name,
]
type = "IN"
from_port = local.port_kubelet
to_port = local.port_kubelet
protocol = "TCP"
source_security_group_name = nifcloud_security_group.wk.group_name
}
resource "nifcloud_security_group_rule" "kubelet_from_control_plane" {
security_group_names = [
nifcloud_security_group.wk.group_name,
]
type = "IN"
from_port = local.port_kubelet
to_port = local.port_kubelet
protocol = "TCP"
source_security_group_name = nifcloud_security_group.cp.group_name
}
#################################################
##
## Security Group Rule: calico
##
# vslan
resource "nifcloud_security_group_rule" "vxlan_from_control_plane" {
security_group_names = [
nifcloud_security_group.wk.group_name,
]
type = "IN"
from_port = local.port_vxlan
to_port = local.port_vxlan
protocol = "UDP"
source_security_group_name = nifcloud_security_group.cp.group_name
}
resource "nifcloud_security_group_rule" "vxlan_from_worker" {
security_group_names = [
nifcloud_security_group.cp.group_name,
]
type = "IN"
from_port = local.port_vxlan
to_port = local.port_vxlan
protocol = "UDP"
source_security_group_name = nifcloud_security_group.wk.group_name
}
# bgp
resource "nifcloud_security_group_rule" "bgp_from_control_plane" {
security_group_names = [
nifcloud_security_group.wk.group_name,
]
type = "IN"
from_port = local.port_bgp
to_port = local.port_bgp
protocol = "TCP"
source_security_group_name = nifcloud_security_group.cp.group_name
}
resource "nifcloud_security_group_rule" "bgp_from_worker" {
security_group_names = [
nifcloud_security_group.cp.group_name,
]
type = "IN"
from_port = local.port_bgp
to_port = local.port_bgp
protocol = "TCP"
source_security_group_name = nifcloud_security_group.wk.group_name
}
# etcd
resource "nifcloud_security_group_rule" "etcd_from_worker" {
security_group_names = [
nifcloud_security_group.cp.group_name,
]
type = "IN"
from_port = local.port_etcd
to_port = local.port_etcd
protocol = "TCP"
source_security_group_name = nifcloud_security_group.wk.group_name
}

View File

@@ -1,48 +0,0 @@
output "control_plane_lb" {
description = "The DNS name of LB for control plane"
value = nifcloud_load_balancer.this.dns_name
}
output "security_group_name" {
description = "The security group used in the cluster"
value = {
bastion = nifcloud_security_group.bn.group_name,
control_plane = nifcloud_security_group.cp.group_name,
worker = nifcloud_security_group.wk.group_name,
}
}
output "private_network_id" {
description = "The private network used in the cluster"
value = nifcloud_private_lan.this.id
}
output "bastion_info" {
description = "The basion information in cluster"
value = { (nifcloud_instance.bn.instance_id) : {
instance_id = nifcloud_instance.bn.instance_id,
unique_id = nifcloud_instance.bn.unique_id,
private_ip = nifcloud_instance.bn.private_ip,
public_ip = nifcloud_instance.bn.public_ip,
} }
}
output "worker_info" {
description = "The worker information in cluster"
value = { for v in nifcloud_instance.wk : v.instance_id => {
instance_id = v.instance_id,
unique_id = v.unique_id,
private_ip = v.private_ip,
public_ip = v.public_ip,
} }
}
output "control_plane_info" {
description = "The control plane information in cluster"
value = { for v in nifcloud_instance.cp : v.instance_id => {
instance_id = v.instance_id,
unique_id = v.unique_id,
private_ip = v.private_ip,
public_ip = v.public_ip,
} }
}

View File

@@ -1,45 +0,0 @@
#!/bin/bash
#################################################
##
## IP Address
##
configure_private_ip_address () {
cat << EOS > /etc/netplan/01-netcfg.yaml
network:
version: 2
renderer: networkd
ethernets:
ens192:
dhcp4: yes
dhcp6: yes
dhcp-identifier: mac
ens224:
dhcp4: no
dhcp6: no
addresses: [${private_ip_address}]
EOS
netplan apply
}
configure_private_ip_address
#################################################
##
## SSH
##
configure_ssh_port () {
sed -i 's/^#*Port [0-9]*/Port ${ssh_port}/' /etc/ssh/sshd_config
}
configure_ssh_port
#################################################
##
## Hostname
##
hostnamectl set-hostname ${hostname}
#################################################
##
## Disable swap files genereated by systemd-gpt-auto-generator
##
systemctl mask "dev-sda3.swap"

View File

@@ -1,9 +0,0 @@
terraform {
required_version = ">=1.3.7"
required_providers {
nifcloud = {
source = "nifcloud/nifcloud"
version = ">= 1.8.0, < 2.0.0"
}
}
}

View File

@@ -1,81 +0,0 @@
variable "availability_zone" {
description = "The availability zone"
type = string
}
variable "prefix" {
description = "The prefix for the entire cluster"
type = string
validation {
condition = length(var.prefix) <= 5
error_message = "Must be a less than 5 character long."
}
}
variable "private_network_cidr" {
description = "The subnet of private network"
type = string
validation {
condition = can(cidrnetmask(var.private_network_cidr))
error_message = "Must be a valid IPv4 CIDR block address."
}
}
variable "private_ip_bn" {
description = "Private IP of bastion server"
type = string
}
variable "instances_cp" {
type = map(object({
private_ip = string
}))
}
variable "instances_wk" {
type = map(object({
private_ip = string
}))
}
variable "instance_key_name" {
description = "The key name of the Key Pair to use for the instance"
type = string
}
variable "instance_type_bn" {
description = "The instance type of bastion server"
type = string
}
variable "instance_type_wk" {
description = "The instance type of worker"
type = string
}
variable "instance_type_cp" {
description = "The instance type of control plane"
type = string
}
variable "image_name" {
description = "The name of image"
type = string
}
variable "additional_lb_filter" {
description = "Additional LB filter"
type = list(string)
}
variable "accounting_type" {
type = string
default = "1"
validation {
condition = anytrue([
var.accounting_type == "1", // Monthly
var.accounting_type == "2", // Pay per use
])
error_message = "Must be a 1 or 2."
}
}

View File

@@ -1,3 +0,0 @@
output "kubernetes_cluster" {
value = module.kubernetes_cluster
}

View File

@@ -1,22 +0,0 @@
region = "jp-west-1"
az = "west-11"
instance_key_name = "deployerkey"
instance_type_bn = "e-medium"
instance_type_cp = "e-medium"
instance_type_wk = "e-medium"
private_network_cidr = "192.168.30.0/24"
instances_cp = {
"cp01" : { private_ip : "192.168.30.11/24" }
"cp02" : { private_ip : "192.168.30.12/24" }
"cp03" : { private_ip : "192.168.30.13/24" }
}
instances_wk = {
"wk01" : { private_ip : "192.168.30.21/24" }
"wk02" : { private_ip : "192.168.30.22/24" }
}
private_ip_bn = "192.168.30.10/24"
image_name = "Ubuntu Server 22.04 LTS"

View File

@@ -1 +0,0 @@
../../../../inventory/sample/group_vars

View File

@@ -1,9 +0,0 @@
terraform {
required_version = ">=1.3.7"
required_providers {
nifcloud = {
source = "nifcloud/nifcloud"
version = "1.8.0"
}
}
}

View File

@@ -1,77 +0,0 @@
variable "region" {
description = "The region"
type = string
}
variable "az" {
description = "The availability zone"
type = string
}
variable "private_ip_bn" {
description = "Private IP of bastion server"
type = string
}
variable "private_network_cidr" {
description = "The subnet of private network"
type = string
validation {
condition = can(cidrnetmask(var.private_network_cidr))
error_message = "Must be a valid IPv4 CIDR block address."
}
}
variable "instances_cp" {
type = map(object({
private_ip = string
}))
}
variable "instances_wk" {
type = map(object({
private_ip = string
}))
}
variable "instance_key_name" {
description = "The key name of the Key Pair to use for the instance"
type = string
}
variable "instance_type_bn" {
description = "The instance type of bastion server"
type = string
}
variable "instance_type_wk" {
description = "The instance type of worker"
type = string
}
variable "instance_type_cp" {
description = "The instance type of control plane"
type = string
}
variable "image_name" {
description = "The name of image"
type = string
}
variable "working_instance_ip" {
description = "The IP address to connect to bastion server."
type = string
}
variable "accounting_type" {
type = string
default = "2"
validation {
condition = anytrue([
var.accounting_type == "1", // Monthly
var.accounting_type == "2", // Pay per use
])
error_message = "Must be a 1 or 2."
}
}

View File

@@ -281,9 +281,9 @@ For your cluster, edit `inventory/$CLUSTER/cluster.tfvars`.
|`k8s_allowed_remote_ips_ipv6` | List of IPv6 CIDR allowed to initiate a SSH connection, empty by default |
|`k8s_allowed_egress_ipv6_ips` | List of IPv6 CIDRs allowed for egress traffic, `["::/0"]` by default |
|`worker_allowed_ports` | List of ports to open on worker nodes, `[{ "protocol" = "tcp", "port_range_min" = 30000, "port_range_max" = 32767, "remote_ip_prefix" = "0.0.0.0/0"}]` by default |
|`worker_allowed_ports_ipv6` | List of ports to open on worker nodes for IPv6 CIDR blocks, `[{ "protocol" = "tcp", "port_range_min" = 30000, "port_range_max" = 32767, "remote_ip_prefix" = "::/0"}]` by default |
|`worker_allowed_ports_ipv6` | List of ports to open on worker nodes for IPv6 CIDR blocks, `[{ "protocol" = "tcp", "port_range_min" = 30000, "port_range_max" = 32767, "remote_ip_prefix" = "::/0"}, { "protocol" = "ipv6-icmp", "port_range_min" = 0, "port_range_max" = 0, "remote_ip_prefix" = "::/0"}]` by default |
|`master_allowed_ports` | List of ports to open on master nodes, expected format is `[{ "protocol" = "tcp", "port_range_min" = 443, "port_range_max" = 443, "remote_ip_prefix" = "0.0.0.0/0"}]`, empty by default |
|`master_allowed_ports_ipv6` | List of ports to open on master nodes for IPv6 CIDR blocks, expected format is `[{ "protocol" = "tcp", "port_range_min" = 443, "port_range_max" = 443, "remote_ip_prefix" = "::/0"}]`, empty by default |
|`master_allowed_ports_ipv6` | List of ports to open on master nodes for IPv6 CIDR blocks, `[{ "protocol" = "ipv6-icmp", "port_range_min" = 0, "port_range_max" = 0, "remote_ip_prefix" = "::/0"}]` by default |
|`node_root_volume_size_in_gb` | Size of the root volume for nodes, 0 to use ephemeral storage |
|`master_root_volume_size_in_gb` | Size of the root volume for masters, 0 to use ephemeral storage |
|`master_volume_type` | Volume type of the root volume for control_plane, 'Default' by default |

View File

@@ -1006,7 +1006,7 @@ resource "openstack_compute_instance_v2" "glusterfs_node_no_floating_ip" {
name = "${var.cluster_name}-gfs-node-nf-${count.index + 1}"
count = var.number_of_gfs_nodes_no_floating_ip
availability_zone = element(var.az_list, count.index)
image_name = var.gfs_root_volume_size_in_gb == 0 ? local.image_to_use_gfs : null
image_id = var.gfs_root_volume_size_in_gb == 0 ? local.image_to_use_gfs : null
flavor_id = var.flavor_gfs_node
key_pair = openstack_compute_keypair_v2.k8s.name
@@ -1078,7 +1078,7 @@ resource "openstack_networking_floatingip_associate_v2" "k8s_nodes" {
port_id = openstack_networking_port_v2.k8s_nodes_port[each.key].id
}
resource "openstack_blockstorage_volume_v2" "glusterfs_volume" {
resource "openstack_blockstorage_volume_v3" "glusterfs_volume" {
name = "${var.cluster_name}-glusterfs_volume-${count.index + 1}"
count = var.gfs_root_volume_size_in_gb == 0 ? var.number_of_gfs_nodes_no_floating_ip : 0
description = "Non-ephemeral volume for GlusterFS"
@@ -1088,5 +1088,5 @@ resource "openstack_blockstorage_volume_v2" "glusterfs_volume" {
resource "openstack_compute_volume_attach_v2" "glusterfs_volume" {
count = var.gfs_root_volume_size_in_gb == 0 ? var.number_of_gfs_nodes_no_floating_ip : 0
instance_id = element(openstack_compute_instance_v2.glusterfs_node_no_floating_ip.*.id, count.index)
volume_id = element(openstack_blockstorage_volume_v2.glusterfs_volume.*.id, count.index)
volume_id = element(openstack_blockstorage_volume_v3.glusterfs_volume.*.id, count.index)
}

View File

@@ -271,7 +271,14 @@ variable "master_allowed_ports" {
variable "master_allowed_ports_ipv6" {
type = list(any)
default = []
default = [
{
"protocol" = "ipv6-icmp"
"port_range_min" = 0
"port_range_max" = 0
"remote_ip_prefix" = "::/0"
},
]
}
variable "worker_allowed_ports" {
@@ -297,6 +304,12 @@ variable "worker_allowed_ports_ipv6" {
"port_range_max" = 32767
"remote_ip_prefix" = "::/0"
},
{
"protocol" = "ipv6-icmp"
"port_range_min" = 0
"port_range_max" = 0
"remote_ip_prefix" = "::/0"
},
]
}

View File

@@ -245,7 +245,7 @@ cilium_operator_extra_volume_mounts:
## Choose Cilium version
```yml
cilium_version: "1.18.5"
cilium_version: "1.18.6"
```
## Add variable to config

View File

@@ -65,9 +65,8 @@ In kubespray, the default runtime name is "runc", and it can be configured with
containerd_runc_runtime:
name: runc
type: "io.containerd.runc.v2"
engine: ""
root: ""
options:
Root: ""
SystemdCgroup: "false"
BinaryName: /usr/local/bin/my-runc
base_runtime_spec: cri-base.json

View File

@@ -193,11 +193,11 @@ You will then need to use [bind mounts](https://docs.docker.com/storage/bind-mou
to access the inventory and SSH key in the container, like this:
```ShellSession
git checkout v2.29.0
docker pull quay.io/kubespray/kubespray:v2.29.0
git checkout v2.30.0
docker pull quay.io/kubespray/kubespray:v2.30.0
docker run --rm -it --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \
--mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
quay.io/kubespray/kubespray:v2.29.0 bash
quay.io/kubespray/kubespray:v2.30.0 bash
# Inside the container you may now run the kubespray playbooks:
ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa cluster.yml
```

View File

@@ -15,8 +15,8 @@ fedora39 | :white_check_mark: | :x: | :x: | :x: | :x: | :white_check_mark: | :x
fedora40 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
flatcar4081 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
openeuler24 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
rockylinux10 | :white_check_mark: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: |
rockylinux9 | :white_check_mark: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: |
ubuntu20 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu22 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu24 | :white_check_mark: | :white_check_mark: | :x: | :white_check_mark: | :x: | :white_check_mark: | :x: |
@@ -33,8 +33,8 @@ fedora39 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
fedora40 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
flatcar4081 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
openeuler24 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
rockylinux10 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
rockylinux9 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu20 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu22 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu24 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
@@ -51,7 +51,7 @@ fedora39 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
fedora40 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
flatcar4081 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
openeuler24 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
rockylinux10 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
rockylinux9 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu20 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu22 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu24 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |

View File

@@ -21,6 +21,12 @@ metallb_enabled: true
metallb_speaker_enabled: true
```
By default, MetalLB resources are deployed into the `metallb-system` namespace. You can override this namespace using a variable.
```yaml
metallb_namespace: woodenlb-system
```
By default only the MetalLB BGP speaker is allowed to run on control plane nodes. If you have a single node cluster or a cluster where control plane are also worker nodes you may need to enable tolerations for the MetalLB controller:
```yaml

View File

@@ -38,3 +38,11 @@ you need to ensure they are using iptables-nft.
An example how k8s do the autodetection can be found [in this PR](https://github.com/kubernetes/kubernetes/pull/82966)
The kernel version is lower than the kubernetes 1.32 system validation, please refer to the [kernel requirements](../operations/kernel-requirements.md).
## Rocky Linux 10
(Experimental in Kubespray CI)
The official Rocky Linux 10 cloud image does not include `kernel-module-extra`. Both Kube Proxy and CNI rely on this package, and since it relates to kernel version compatibility (which may require VM reboots, etc.), we haven't found an ideal solution.
However, some users report that it doesn't affect them (minimal version). Therefore, the Kubespray CI Rocky Linux 10 image is built by Kubespray maintainers using `diskimage-builder`. For detailed methods, please refer to [the comments](https://github.com/kubernetes-sigs/kubespray/pull/12355#issuecomment-3705400093).

View File

@@ -2,7 +2,7 @@
namespace: kubernetes_sigs
description: Deploy a production ready Kubernetes cluster
name: kubespray
version: 2.30.0
version: 2.31.0
readme: README.md
authors:
- The Kubespray maintainers (https://kubernetes.slack.com/channels/kubespray)

View File

@@ -11,15 +11,15 @@
# containerd_runc_runtime:
# name: runc
# type: "io.containerd.runc.v2"
# engine: ""
# root: ""
# options:
# Root: ""
# containerd_additional_runtimes:
# Example for Kata Containers as additional runtime:
# - name: kata
# type: "io.containerd.kata.v2"
# engine: ""
# root: ""
# options:
# Root: ""
# containerd_grpc_max_recv_message_size: 16777216
# containerd_grpc_max_send_message_size: 16777216

View File

@@ -56,8 +56,8 @@ cilium_l2announcements: false
#
# Only effective when monitor aggregation is set to "medium" or higher.
# cilium_monitor_aggregation_flags: "all"
# Kube Proxy Replacement mode (strict/partial)
# cilium_kube_proxy_replacement: partial
# Kube Proxy Replacement mode (true/false)
# cilium_kube_proxy_replacement: false
# If upgrading from Cilium < 1.5, you may want to override some of these options
# to prevent service disruptions. See also:

View File

@@ -1,5 +1,5 @@
# Use immutable image tags rather than mutable tags (like ubuntu:22.04)
FROM ubuntu:jammy-20230308
# Use immutable image tags rather than mutable tags (like ubuntu:24.04)
FROM ubuntu:noble-20260113@sha256:cd1dba651b3080c3686ecf4e3c4220f026b521fb76978881737d24f200828b2b
# Some tools like yamllint need this
# Pip needs this as well at the moment to install ansible
# (and potentially other packages)
@@ -27,14 +27,14 @@ RUN apt update -q \
ca-certificates \
curl \
gnupg2 \
software-properties-common \
unzip \
libvirt-clients \
qemu-utils \
qemu-kvm \
dnsmasq \
&& curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - \
&& add-apt-repository "deb [arch=$(dpkg --print-architecture)] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" \
&& curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc \
&& echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | tee /etc/apt/sources.list.d/docker.list \
&& apt update -q \
&& apt install --no-install-recommends -yq docker-ce \
&& apt autoremove -yqq --purge && apt clean && rm -rf /var/lib/apt/lists/* /var/log/*
@@ -44,9 +44,8 @@ ADD ./requirements.txt /kubespray/requirements.txt
ADD ./tests/requirements.txt /kubespray/tests/requirements.txt
RUN update-alternatives --install /usr/bin/python python /usr/bin/python3 1 \
&& pip install --no-compile --no-cache-dir pip -U \
&& pip install --no-compile --no-cache-dir -r tests/requirements.txt \
&& pip install --no-compile --no-cache-dir -r requirements.txt \
&& pip install --break-system-packages --ignore-installed --no-compile --no-cache-dir pip -U \
&& pip install --break-system-packages --no-compile --no-cache-dir -r tests/requirements.txt \
&& curl -L https://dl.k8s.io/release/v1.34.3/bin/linux/$(dpkg --print-architecture)/kubectl -o /usr/local/bin/kubectl \
&& echo $(curl -L https://dl.k8s.io/release/v1.34.3/bin/linux/$(dpkg --print-architecture)/kubectl.sha256) /usr/local/bin/kubectl | sha256sum --check \
&& chmod a+x /usr/local/bin/kubectl \
@@ -56,5 +55,5 @@ RUN update-alternatives --install /usr/bin/python python /usr/bin/python3 1 \
&& rm vagrant_${VAGRANT_VERSION}-1_$(dpkg --print-architecture).deb \
&& vagrant plugin install vagrant-libvirt \
# Install Kubernetes collections
&& pip install --no-compile --no-cache-dir kubernetes \
&& pip install --break-system-packages --no-compile --no-cache-dir kubernetes \
&& ansible-galaxy collection install kubernetes.core

View File

@@ -1,7 +1,7 @@
ansible==10.7.0
# Needed for community.crypto module
cryptography==46.0.3
cryptography==46.0.4
# Needed for jinja2 json_query templating
jmespath==1.0.1
jmespath==1.1.0
# Needed for ansible.utils.ipaddr
netaddr==1.3.0

View File

@@ -9,6 +9,8 @@ platforms:
vm_memory: 512
provisioner:
name: ansible
env:
ANSIBLE_ROLES_PATH: ../../../
config_options:
defaults:
callbacks_enabled: profile_tasks

View File

@@ -9,6 +9,8 @@ platforms:
vm_memory: 512
provisioner:
name: ansible
env:
ANSIBLE_ROLES_PATH: ../../../
config_options:
defaults:
callbacks_enabled: profile_tasks

View File

@@ -21,6 +21,8 @@ platforms:
vm_memory: 512
provisioner:
name: ansible
env:
ANSIBLE_ROLES_PATH: ../../../
config_options:
defaults:
callbacks_enabled: profile_tasks

View File

@@ -13,10 +13,9 @@ containerd_snapshotter: "overlayfs"
containerd_runc_runtime:
name: runc
type: "io.containerd.runc.v2"
engine: ""
root: ""
base_runtime_spec: cri-base.json
options:
Root: ""
SystemdCgroup: "{{ containerd_use_systemd_cgroup | ternary('true', 'false') }}"
BinaryName: "{{ bin_dir }}/runc"
@@ -24,8 +23,8 @@ containerd_additional_runtimes: []
# Example for Kata Containers as additional runtime:
# - name: kata
# type: "io.containerd.kata.v2"
# engine: ""
# root: ""
# options:
# Root: ""
containerd_base_runtime_spec_rlimit_nofile: 65535
@@ -36,8 +35,8 @@ containerd_default_base_runtime_spec_patch:
hard: "{{ containerd_base_runtime_spec_rlimit_nofile }}"
soft: "{{ containerd_base_runtime_spec_rlimit_nofile }}"
# Can help reduce disk usage
# https://github.com/containerd/containerd/discussions/6295
# Only for containerd < 2.1; discard unpacked layers to save disk space
# https://github.com/containerd/containerd/blob/release/2.1/docs/cri/config.md#image-pull-configuration-since-containerd-v21
containerd_discard_unpacked_layers: true
containerd_base_runtime_specs:

View File

@@ -52,8 +52,6 @@ oom_score = {{ containerd_oom_score }}
{% for runtime in [containerd_runc_runtime] + containerd_additional_runtimes %}
[plugins."io.containerd.cri.v1.runtime".containerd.runtimes.{{ runtime.name }}]
runtime_type = "{{ runtime.type }}"
runtime_engine = "{{ runtime.engine }}"
runtime_root = "{{ runtime.root }}"
{% if runtime.base_runtime_spec is defined %}
base_runtime_spec = "{{ containerd_cfg_dir }}/{{ runtime.base_runtime_spec }}"
{% endif %}
@@ -78,7 +76,9 @@ oom_score = {{ containerd_oom_score }}
[plugins."io.containerd.cri.v1.images"]
snapshotter = "{{ containerd_snapshotter }}"
{% if containerd_discard_unpacked_layers and containerd_version is version('2.1.0', '<') %}
discard_unpacked_layers = {{ containerd_discard_unpacked_layers | lower }}
{% endif %}
image_pull_progress_timeout = "{{ containerd_image_pull_progress_timeout }}"
[plugins."io.containerd.cri.v1.images".pinned_images]
sandbox = "{{ pod_infra_image_repo }}:{{ pod_infra_image_tag }}"

View File

@@ -1,16 +1,16 @@
{% if crio_registry_auth is defined and crio_registry_auth|length %}
{
{% for reg in crio_registry_auth %}
"auths": {
{% for reg in crio_registry_auth %}
"{{ reg.registry }}": {
"auth": "{{ (reg.username + ':' + reg.password) | string | b64encode }}"
}
{% if not loop.last %}
},
},
{% else %}
}
}
{% endif %}
{% endfor %}
}
}
{% else %}
{}
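For reference, the `auths` JSON this template renders can be sketched in plain Python; the credentials below are hypothetical and the snippet is illustrative only:

```python
import base64
import json

# Hypothetical registry credentials, mirroring crio_registry_auth entries
crio_registry_auth = [
    {"registry": "registry.example.com", "username": "bob", "password": "s3cret"},
]

# Each "auth" value is base64("username:password"), same as the template's
# (reg.username + ':' + reg.password) | b64encode
auths = {
    reg["registry"]: {
        "auth": base64.b64encode(
            f"{reg['username']}:{reg['password']}".encode()
        ).decode()
    }
    for reg in crio_registry_auth
}

print(json.dumps({"auths": auths}, indent=2))
```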

View File

@@ -55,7 +55,7 @@
register: keyserver_task_result
until: keyserver_task_result is succeeded
retries: 4
delay: "{{ retry_stagger | d(3) }}"
delay: "{{ retry_stagger }}"
with_items: "{{ docker_repo_key_info.repo_keys }}"
environment: "{{ proxy_env }}"
when: ansible_pkg_mgr == 'apt'
@@ -128,7 +128,7 @@
register: docker_task_result
until: docker_task_result is succeeded
retries: 4
delay: "{{ retry_stagger | d(3) }}"
delay: "{{ retry_stagger }}"
notify: Restart docker
when:
- not ansible_os_family in ["Flatcar", "Flatcar Container Linux by Kinvolk"]

View File

@@ -5,8 +5,7 @@
group: "{{ etcd_cert_group }}"
state: directory
owner: "{{ etcd_owner }}"
mode: "{{ etcd_cert_dir_mode }}"
recurse: true
mode: "0700"
- name: "Gen_certs | create etcd script dir (on {{ groups['etcd'][0] }})"
file:
@@ -145,15 +144,6 @@
- ('k8s_cluster' in group_names) and
sync_certs | default(false) and inventory_hostname not in groups['etcd']
- name: Gen_certs | check certificate permissions
file:
path: "{{ etcd_cert_dir }}"
group: "{{ etcd_cert_group }}"
state: directory
owner: "{{ etcd_owner }}"
mode: "{{ etcd_cert_dir_mode }}"
recurse: true
# This is a hack around the fact kubeadm expect the same certs path on all kube_control_plane
# TODO: fix certs generation to have the same file everywhere
# OR work with kubeadm on node-specific config

View File

@@ -32,23 +32,16 @@ DNS.{{ counter["dns"] }} = {{ hostvars[host]['etcd_access_address'] }}{{ increme
{# This will always expand to inventory_hostname, which can be a completely arbitrary name that etcd will not know or care about, hence this line is (probably) redundant. #}
DNS.{{ counter["dns"] }} = {{ host }}{{ increment(counter, 'dns') }}
{% endfor %}
{% if apiserver_loadbalancer_domain_name is defined %}
DNS.{{ counter["dns"] }} = {{ apiserver_loadbalancer_domain_name }}{{ increment(counter, 'dns') }}
{% endif %}
{% for etcd_alt_name in etcd_cert_alt_names %}
DNS.{{ counter["dns"] }} = {{ etcd_alt_name }}{{ increment(counter, 'dns') }}
{% endfor %}
{% for host in groups['etcd'] %}
{% if hostvars[host]['access_ip'] is defined %}
IP.{{ counter["ip"] }} = {{ hostvars[host]['access_ip'] }}{{ increment(counter, 'ip') }}
{% endif %}
{% if hostvars[host]['access_ip6'] is defined %}
IP.{{ counter["ip"] }} = {{ hostvars[host]['access_ip6'] }}{{ increment(counter, 'ip') }}
{% endif %}
{% if ipv6_stack %}
IP.{{ counter["ip"] }} = {{ hostvars[host]['ip6'] | default(hostvars[host]['fallback_ip6']) }}{{ increment(counter, 'ip') }}
{% endif %}
IP.{{ counter["ip"] }} = {{ hostvars[host]['main_ip'] }}{{ increment(counter, 'ip') }}
{% for address in hostvars[host]['main_access_ips'] %}
IP.{{ counter["ip"] }} = {{ address }}{{ increment(counter, 'ip') }}
{% endfor %}
{% for address in hostvars[host]['main_ips'] %}
IP.{{ counter["ip"] }} = {{ address }}{{ increment(counter, 'ip') }}
{% endfor %}
{% endfor %}
{% for cert_alt_ip in etcd_cert_alt_ips %}
IP.{{ counter["ip"] }} = {{ cert_alt_ip }}{{ increment(counter, 'ip') }}
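A rough Python sketch of the simplified SAN generation above, with hypothetical host facts standing in for `hostvars[host]['main_access_ips']` and `['main_ips']` (the real template advances the index via its `increment(counter, 'ip')` macro):

```python
# Hypothetical host facts for a single etcd member
hosts = {
    "etcd-0": {
        "main_access_ips": ["10.0.0.10"],
        "main_ips": ["10.0.0.10", "2001:db8::10"],
    },
}

san_lines, n = [], 1
for host, hv in hosts.items():
    # One IP.N entry per address, in the same order as the template's two loops
    for addr in hv["main_access_ips"] + hv["main_ips"]:
        san_lines.append(f"IP.{n} = {addr}")
        n += 1

print("\n".join(san_lines))
# IP.1 = 10.0.0.10
# IP.2 = 10.0.0.10
# IP.3 = 2001:db8::10
```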

View File

@@ -18,7 +18,6 @@ etcd_backup_retention_count: -1
force_etcd_cert_refresh: true
etcd_config_dir: /etc/ssl/etcd
etcd_cert_dir: "{{ etcd_config_dir }}/ssl"
etcd_cert_dir_mode: "0700"
etcd_cert_group: root
# Note: This does not set up DNS entries. It simply adds the following DNS
# entries to the certificate

View File

@@ -8,3 +8,4 @@ local_path_provisioner_is_default_storageclass: "true"
local_path_provisioner_debug: false
local_path_provisioner_helper_image_repo: "busybox"
local_path_provisioner_helper_image_tag: "latest"
local_path_provisioner_resources: {}

View File

@@ -35,6 +35,10 @@ spec:
valueFrom:
fieldRef:
fieldPath: metadata.namespace
{% if local_path_provisioner_resources %}
resources:
{{ local_path_provisioner_resources | to_nice_yaml | indent(10) | trim }}
{% endif %}
volumes:
- name: config-volume
configMap:

View File

@@ -1,6 +1,7 @@
---
metallb_enabled: false
metallb_log_level: info
metallb_namespace: "metallb-system"
metallb_port: "7472"
metallb_memberlist_port: "7946"
metallb_speaker_enabled: "{{ metallb_enabled }}"

View File

@@ -26,6 +26,16 @@ rules:
verbs:
- watch
- list
# Services are monitored for service LoadBalancer IP allocation
- apiGroups: [""]
resources:
- services
- services/status
verbs:
- get
- list
- update
- watch
{% elif calico_datastore == "kdd" %}
# Nodes are watched to monitor for deletions.
- apiGroups: [""]

View File

@@ -2,6 +2,9 @@
# disable upgrade cluster
upgrade_cluster_setup: false
# Number of retries (with a 5-second interval) to check that new control plane nodes
# are in Ready condition after joining
control_plane_node_become_ready_tries: 24
# By default the external API listens on all interfaces, this can be changed to
# listen on a specific address/interface.
# NOTE: If you specify an address/interface and use loadbalancer_apiserver_localhost

View File

@@ -1,7 +1,7 @@
---
- name: Kubeadm | Check api is up
uri:
url: "https://{{ ip | default(fallback_ip) }}:{{ kube_apiserver_port }}/healthz"
url: "https://{{ main_ip | ansible.utils.ipwrap }}:{{ kube_apiserver_port }}/healthz"
validate_certs: false
when: ('kube_control_plane' in group_names)
register: _result

View File

@@ -98,3 +98,18 @@
when:
- inventory_hostname != first_kube_control_plane
- kubeadm_already_run is not defined or not kubeadm_already_run.stat.exists
- name: Wait for new control plane nodes to be Ready
when: kubeadm_already_run.stat.exists
run_once: true
command: >
{{ kubectl }} get nodes --selector node-role.kubernetes.io/control-plane
-o jsonpath-as-json="{.items[*].status.conditions[?(@.type == 'Ready')]}"
register: control_plane_node_ready_conditions
retries: "{{ control_plane_node_become_ready_tries }}"
delay: 5
delegate_to: "{{ groups['kube_control_plane'][0] }}"
until: >
control_plane_node_ready_conditions.stdout
| from_json | selectattr('status', '==', 'True')
| length == (groups['kube_control_plane'] | length)
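The `until` expression is easier to read as plain Python; this sketch assumes the same JSON shape that kubectl's `jsonpath-as-json` output produces (a flat array of Ready conditions, one per control plane node):

```python
import json

# Sample of what `kubectl get nodes ... -o jsonpath-as-json=...` returns
stdout = """
[
  {"type": "Ready", "status": "True"},
  {"type": "Ready", "status": "False"},
  {"type": "Ready", "status": "True"}
]
"""

control_plane_group = ["cp-0", "cp-1", "cp-2"]  # stand-in for groups['kube_control_plane']

ready = [c for c in json.loads(stdout) if c["status"] == "True"]
# The task keeps retrying (up to control_plane_node_become_ready_tries times)
# until every control plane node reports Ready.
all_ready = len(ready) == len(control_plane_group)
print(all_ready)  # False here, since only two of three conditions are True
```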

View File

@@ -90,7 +90,7 @@
# Nginx LB (default). If kubeadm_config_api_fqdn is defined, use another LB via kubeadm controlPlaneEndpoint.
- name: Set kubeadm_config_api_fqdn define
set_fact:
kubeadm_config_api_fqdn: "{{ apiserver_loadbalancer_domain_name | default('lb-apiserver.kubernetes.local') }}"
kubeadm_config_api_fqdn: "{{ apiserver_loadbalancer_domain_name }}"
when: loadbalancer_apiserver is defined
- name: Kubeadm | Create kubeadm config
@@ -179,9 +179,10 @@
timeout -k {{ kubeadm_init_timeout }} {{ kubeadm_init_timeout }}
{{ bin_dir }}/kubeadm init
--config={{ kube_config_dir }}/kubeadm-config.yaml
--ignore-preflight-errors={{ kubeadm_ignore_preflight_errors | join(',') }}
--ignore-preflight-errors={{ _ignore_errors | flatten | join(',') }}
--skip-phases={{ kubeadm_init_phases_skip | join(',') }}
{{ kube_external_ca_mode | ternary('', '--upload-certs') }}
_ignore_errors: "{{ kubeadm_ignore_preflight_errors }}"
environment:
PATH: "{{ bin_dir }}:{{ ansible_env.PATH }}"
notify: Control plane | restart kubelet
@@ -195,6 +196,15 @@
# This retry task is separated from 1st task to show log of failure of 1st task.
- name: Kubeadm | Initialize first control plane node (retry)
command: "{{ kubeadm_init_first_control_plane_cmd }}"
vars:
_errors_from_first_try:
- 'FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml'
- 'FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml'
- 'FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml'
- 'Port-10250'
_ignore_errors:
- "{{ kubeadm_ignore_preflight_errors }}"
- "{{ _errors_from_first_try if 'all' not in kubeadm_ignore_preflight_errors else [] }}"
register: kubeadm_init
retries: 2
until: kubeadm_init is succeeded or "field is immutable" in kubeadm_init.stderr

View File

@@ -5,7 +5,6 @@ echo "## Check Expiration before renewal ##"
{{ bin_dir }}/kubeadm certs check-expiration
days_buffer=7 # set a time margin, because we should not renew at the last moment
calendar={{ auto_renew_certificates_systemd_calendar }}
next_time=$(systemctl show k8s-certs-renew.timer -p NextElapseUSecRealtime --value)
if [ "${next_time}" == "" ]; then

View File

@@ -0,0 +1,2 @@
---
node_taints: []

View File

@@ -14,13 +14,13 @@
- name: Populate inventory node taint
set_fact:
inventory_node_taints: "{{ inventory_node_taints + ['%s' | format(item)] }}"
loop: "{{ node_taints | d([]) }}"
inventory_node_taints: "{{ inventory_node_taints + node_taints }}"
when:
- node_taints is defined
- node_taints is not string
- node_taints is not mapping
- node_taints is iterable
- debug: # noqa name[missing]
var: role_node_taints
- debug: # noqa name[missing]

View File

@@ -61,8 +61,6 @@ eviction_hard_control_plane: {}
kubelet_status_update_frequency: 10s
# kube-vip
kube_vip_version: 0.8.0
kube_vip_arp_enabled: false
kube_vip_interface:
kube_vip_services_interface:

View File

@@ -32,7 +32,7 @@ frontend healthz
frontend kube_api_frontend
bind 127.0.0.1:{{ loadbalancer_apiserver_port|default(kube_apiserver_port) }}
{% if ipv6_stack -%}
bind [::1]:{{ loadbalancer_apiserver_port|default(kube_apiserver_port) }};
bind [::1]:{{ loadbalancer_apiserver_port|default(kube_apiserver_port) }}
{% endif -%}
mode tcp
option tcplog

View File

@@ -1,4 +1,4 @@
# Inspired by https://github.com/kube-vip/kube-vip/blob/v0.8.0/pkg/kubevip/config_generator.go#L103
# Inspired by https://github.com/kube-vip/kube-vip/blob/v1.0.3/pkg/kubevip/config_generator.go#L103
apiVersion: v1
kind: Pod
metadata:
@@ -27,7 +27,7 @@ spec:
value: {{ kube_vip_services_interface | string | to_json }}
{% endif %}
{% if kube_vip_cidr %}
- name: vip_cidr
- name: vip_{{ "subnet" if kube_vip_version is version('0.9.0', '>=') else "cidr" }}
value: {{ kube_vip_cidr | string | to_json }}
{% endif %}
{% if kube_vip_dns_mode %}
@@ -113,6 +113,8 @@ spec:
add:
- NET_ADMIN
- NET_RAW
drop:
- ALL
{% endif %}
volumeMounts:
- mountPath: /etc/kubernetes/admin.conf
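The env var name flips from `vip_cidr` to `vip_subnet` depending on the kube-vip release. A minimal sketch of the same comparison using the `packaging` package (an assumption for illustration; the template itself relies on Jinja's `version` test):

```python
from packaging.version import Version  # third-party: pip install packaging

def vip_cidr_env_name(kube_vip_version: str) -> str:
    # Mirrors the template's:
    # vip_{{ "subnet" if kube_vip_version is version('0.9.0', '>=') else "cidr" }}
    return "vip_subnet" if Version(kube_vip_version) >= Version("0.9.0") else "vip_cidr"

print(vip_cidr_env_name("0.8.9"))  # vip_cidr
print(vip_cidr_env_name("1.0.3"))  # vip_subnet
```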

View File

@@ -88,6 +88,7 @@
when:
- ntp_timezone
- ansible_os_family == "RedHat"
- ansible_facts.selinux.status == 'enabled'
- ansible_facts.selinux.mode == 'enforcing'
- name: Set ntp_timezone

View File

@@ -116,7 +116,7 @@ flannel_version: 0.27.3
flannel_cni_version: 1.7.1-flannel1
cni_version: "{{ (cni_binary_checksums['amd64'] | dict2items)[0].key }}"
cilium_version: "1.18.5"
cilium_version: "1.18.6"
cilium_cli_version: "{{ (ciliumcli_binary_checksums['amd64'] | dict2items)[0].key }}"
cilium_enable_hubble: false
@@ -265,8 +265,9 @@ multus_image_tag: "v{{ multus_version }}"
external_openstack_cloud_controller_image_repo: "{{ kube_image_repo }}/provider-os/openstack-cloud-controller-manager"
external_openstack_cloud_controller_image_tag: "v1.32.0"
kube_vip_version: 1.0.3
kube_vip_image_repo: "{{ github_image_repo }}/kube-vip/kube-vip{{ '-iptables' if kube_vip_lb_fwdmethod == 'masquerade' else '' }}"
kube_vip_image_tag: v0.8.9
kube_vip_image_tag: "v{{ kube_vip_version }}"
nginx_image_repo: "{{ docker_image_repo }}/library/nginx"
nginx_image_tag: 1.28.0-alpine
haproxy_image_repo: "{{ docker_image_repo }}/library/haproxy"

View File

@@ -643,10 +643,10 @@ first_kube_control_plane_address: "{{ hostvars[groups['kube_control_plane'][0]][
loadbalancer_apiserver_localhost: "{{ loadbalancer_apiserver is not defined }}"
loadbalancer_apiserver_type: "nginx"
# applied only if an external loadbalancer_apiserver is defined, otherwise ignored
apiserver_loadbalancer_domain_name: "lb-apiserver.kubernetes.local"
apiserver_loadbalancer_domain_name: "{{ 'localhost' if loadbalancer_apiserver_localhost else (loadbalancer_apiserver.address | d(undef())) }}"
kube_apiserver_global_endpoint: |-
{% if loadbalancer_apiserver is defined -%}
https://{{ apiserver_loadbalancer_domain_name }}:{{ loadbalancer_apiserver.port | default(kube_apiserver_port) }}
https://{{ apiserver_loadbalancer_domain_name | ansible.utils.ipwrap }}:{{ loadbalancer_apiserver.port | default(kube_apiserver_port) }}
{%- elif loadbalancer_apiserver_localhost -%}
https://localhost:{{ loadbalancer_apiserver_port | default(kube_apiserver_port) }}
{%- else -%}
@@ -654,7 +654,7 @@ kube_apiserver_global_endpoint: |-
{%- endif %}
kube_apiserver_endpoint: |-
{% if loadbalancer_apiserver is defined -%}
https://{{ apiserver_loadbalancer_domain_name }}:{{ loadbalancer_apiserver.port | default(kube_apiserver_port) }}
https://{{ apiserver_loadbalancer_domain_name | ansible.utils.ipwrap }}:{{ loadbalancer_apiserver.port | default(kube_apiserver_port) }}
{%- elif ('kube_control_plane' not in group_names) and loadbalancer_apiserver_localhost -%}
https://localhost:{{ loadbalancer_apiserver_port | default(kube_apiserver_port) }}
{%- elif 'kube_control_plane' in group_names -%}
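Both endpoints now pass the host through `ansible.utils.ipwrap`, which brackets bare IPv6 literals so they can be joined with a port. A rough Python analogue (a simplified stand-in, not the filter's actual implementation):

```python
import ipaddress

def ipwrap(host: str) -> str:
    # Bracket bare IPv6 literals; leave IPv4 addresses and hostnames untouched.
    try:
        if ipaddress.ip_address(host).version == 6:
            return f"[{host}]"
    except ValueError:
        pass  # not an IP literal (e.g. a DNS name)
    return host

print(f"https://{ipwrap('2001:db8::1')}:6443/")    # https://[2001:db8::1]:6443/
print(f"https://{ipwrap('10.0.0.1')}:6443/")       # https://10.0.0.1:6443/
print(f"https://{ipwrap('lb.example.com')}:6443/") # https://lb.example.com:6443/
```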

View File

@@ -14,10 +14,12 @@ crictl_checksums:
1.32.0: sha256:4ffaf29bbda8df42ed2dda4f1ad33cc785987701dc8d1e0043c17cfea9af43e0
crio_archive_checksums:
arm64:
1.34.4: sha256:d176f6256d606a3fc279f9f2994ef4a4c4cbaaa0601f4d1bba1a19bec5674ce9
1.34.3: sha256:314595247054b53767a736e24bc3030a5f7c17552944c62b2e190c9e95fe4ca6
1.34.2: sha256:ac7530f7fc9d531a87bfdfcae9cf8bf81a8bbdb75e63a046ed96911aa7b68ebd
1.34.1: sha256:41a71cab6a61ae429ec447d572fd1cdea0a7e33d62aaa58c3b07467665b50b9f
1.34.0: sha256:3006658270477c5fb1e88e9124e40982d2ba7b34495fcc12f0fecd33bbab9a5a
1.33.8: sha256:59c91726535dcadd0372df0c6aa8595e4d59590994b598b2d97ea2510b216359
1.33.7: sha256:af3ea22d3d6944c9a907c6c13d77e9fc4dbcf3972ffbde18dd6f37f1c2ffbd0d
1.33.6: sha256:6ee49e746d1a5be1a664a6f801c68b169cb181a9aaf12218eed121e2b151bfdb
1.33.5: sha256:ef1b5e2162b0f55722e0966db0cfe387f3ba7cb91d6a803f627121733132792d
@@ -26,6 +28,7 @@ crio_archive_checksums:
1.33.2: sha256:0a161cb1437a50fbdb04bf5ca11dbec8bfc567871d0597a5676737278a945a36
1.33.1: sha256:6bf135db438937f0ab7a533af64564a0fb1d2079a43723ce9255ecbf9556ae05
1.33.0: sha256:8a0dbee2879495d5b33e6fdeac32e5d86c356897bdcf3a94cd602851620ce8b5
1.32.12: sha256:26a5138f4e4f15d370630c3bb8bf04fe28b24c57ce2bb11717a2c9a2e1c54404
1.32.11: sha256:25c6ccfe9b70bf12222577b4cbf286ade9e2d112ab10c7d4507ba12cbcfad5ba
1.32.10: sha256:4e8ceb6f2c936e31a9b892a076deecc52be9feac4acf8af242fb6db817fda9b1
1.32.9: sha256:f854848dc5ae54ea03e48f2bc6d6ffbea2173de45c3d7a2abbc3af3abcb779f9
@@ -39,10 +42,12 @@ crio_archive_checksums:
1.32.1: sha256:f64da0ef41604575b476ad6d7288ca14f56fc06cc0ca138a5c3dc933427f7b32
1.32.0: sha256:b092eddabedac98a0f8449dc535acfec0e14c21f59cabe8f9703043d995a1a41
amd64:
1.34.4: sha256:f6348a781c34b433fe1c5150da3408e51e828b610eacbe734405e9c31136d810
1.34.3: sha256:e269914f3bc4f36ac87cd593d74daaa43c390571994062180019248be32cc6f7
1.34.2: sha256:3a0012938ed389e9270a208bb73b250062d5f1be5798472b1728403d55ddc1da
1.34.1: sha256:22c1e4d68d9339aa58a1b0f1b40a8944102934a7505105abe461dc8a7e3de540
1.34.0: sha256:5a8bc5c3b8072cb9bde1cf025d5597f75bf21018712c5b72d5cb0657948595c8
1.33.8: sha256:537adda39074377893f1f650a71b576ba487b3c4d2ee55e9b22f4e95fc188594
1.33.7: sha256:e2999436a272c77370241a4f962c80737698dd8c2400fe75e5c7cf2142c96001
1.33.6: sha256:4d0d446f73d9db6d5bf2c03ecdc39d9d702836886f4715886c15dc2f461cc810
1.33.5: sha256:b8883e51837ee7fd45c88c762f37ca4b96d80ec6a7b46ec989381089e762aa7f
@@ -51,6 +56,7 @@ crio_archive_checksums:
1.33.2: sha256:6e82739bbbeae12d571a277a88d85e8a0e23dbc87529414a91ee5f2e23792dcf
1.33.1: sha256:036063194028d24c75b9ce080e475ad97bacc955de796b7c895845294db8edbf
1.33.0: sha256:dad0cec9e09368b37b35ce824b0ef517a1b33365c4bb164fe82310c73c886f7e
1.32.12: sha256:13cb9676686c0ccd6bd7ffef9125f6370f803f08a559cf31f017193619891960
1.32.11: sha256:98424dbe3eb1377b314bb35b30842987ccc800faa2f8145d52eb2a9c1efa17be
1.32.10: sha256:b8e66bd33c885baf65535e671a120de4d7675833a75489403a9406e5fd2faa5e
1.32.9: sha256:59b861b9c8913328c9bc97b3bcb007951b0c3bf6c9f40fbad236be4b31534503
@@ -64,10 +70,12 @@ crio_archive_checksums:
1.32.1: sha256:d35de1e765481018c7ccdc92edeb59b25938f3bd9d1670440e7ccd3d599f95a7
1.32.0: sha256:8f483f1429d2d9cd6bfa6db2e3a4263151701dd4f05f2b1c06cf8e67c44ea67e
ppc64le:
1.34.4: sha256:dca59a28fe9b0b9163418eca1545c9ed01cf514179f108d14e462c6074fd103c
1.34.3: sha256:4dd782484eeb460b9a95e6e2e07474216fc02ad45a27ba871799d18f2b6ee0ae
1.34.2: sha256:d4c3c9ba24b1b0eabf3c11ddec98801dda7a87b0529706e9ede18b8cc9e4182a
1.34.1: sha256:cba0ac74e7202fe28cf8aa895b83f7a30d78b148666add78e19215259f629bb0
1.34.0: sha256:e9e41d14439db0ca88cf2cd8533038203f379c25cd612f37635c17908e050ebf
1.33.8: sha256:1d69c01512e8ebdd51fc70fc64473a31d492e8db095c0ee5d3ee58722048150c
1.33.7: sha256:076e7519bfff72a43fb1121ce836eee3cc1fec5bb5a59a11747c514e9d162d26
1.33.6: sha256:3643eefe295604288f5b652fb9c672a60f96dc803e63edaf9ee64ed4047a50dd
1.33.5: sha256:cf85062f39d755418da0ee4f869c7a4817bf95daee6e35df53010ad29be37c88
@@ -76,6 +84,7 @@ crio_archive_checksums:
1.33.2: sha256:8ed65404a57262a9f8eb75b61afa37fcec134472eb1a6d81f1889a74ff32c651
1.33.1: sha256:12646aca33f65fe335c27d3af582c599584d3f51185f01044e7ddd0668bb2b4c
1.33.0: sha256:b4fa46b25538d8145197f8bf2e935486392c0ca2a9fa609aedd02b9f106d37a6
1.32.12: sha256:9ba4f2c3be48c0f1f3228ef6322aeb3738f3ef461fd483a0cb4c2e5b067f080c
1.32.11: sha256:6c2036f2ed7134c596b5a453a06fbb7e646db9586bff0d993f5223dccf167420
1.32.10: sha256:ae4740c6bb6f346338f94508c74d5b1ec94f2691cb12f9a9add437fee5391f8d
1.32.9: sha256:604bd6f866be327951942656931847c3623cd1e138197f153dd4d5537dd19f11
@@ -431,6 +440,7 @@ cni_binary_checksums:
1.6.0: sha256:d8d4bd74247407c8c73de057bc00adac28bb1ed2d2ee60a9dda278e3b398bcc2
calicoctl_binary_checksums:
arm64:
3.30.6: sha256:47ecc00bdd797f82e4bac0ff3904c3a5143ba2d61e8ae1cbbce286ca76d3790a
3.30.5: sha256:7611343e7a56e770b95e2bb882dda787efbbd4331b1dd6316ff8ea189238dfaa
3.30.4: sha256:b21fbbc55b6f5d50c1c0faae714242cae3e013185cb8e26ce56981bd10da260d
3.30.3: sha256:2ae0474b88a6042e5489d7410d2669a9d443c9d5c51e2bdc8ebe4d6dd98f2475
@@ -452,6 +462,7 @@ calicoctl_binary_checksums:
3.28.1: sha256:c062d13534498a427c793a4a9190be4df3cf796a3feb29e4a501e1d6f48daa7c
3.28.0: sha256:c4ca8563d2a920729116a3a30171c481580c8c447938ce974ce14d7ce25a31bf
amd64:
3.30.6: sha256:2017e19727dca689d8bb73a9d8dff3c6a8ba7d8c75049f99ee207272161b5749
3.30.5: sha256:6cdfb17b0276f648f4fdb051a5d75617a50b3c328d4cccfc40d087b96c361d80
3.30.4: sha256:7e2e5e75b25c55683b68eabeb9b00390b1d359e72bf57f7ec2b76bb006fd175f
3.30.3: sha256:a7d017d1abf6ef5d6e03267187c0dd68c32f5e937b64decd29d003be44fa6b94
@@ -473,6 +484,7 @@ calicoctl_binary_checksums:
3.28.1: sha256:22ec5727c38dbe19001792b4ca64ac760a6e2985d5c1a231d919dbebe5bca171
3.28.0: sha256:4ea270699e67ca29e5533ddb0a68d370cb0005475796c7e841f83047da6297b6
ppc64le:
3.30.6: sha256:9a9c368499b1e3d08418dfbb566379483e15c50d08dd1bcaf6148c115d82ed36
3.30.5: sha256:5b6de49da1af2633549bff5e8f4d8a573a175b65c47c29d327ef6a0760d39a93
3.30.4: sha256:8fc8ef492d463e184e714bc6d31b05f9066c8af3445928efef233850f036bb92
3.30.3: sha256:ccd13ced62baf633fb4347fbe6c9fdc0d3b1b7deb1794c83c015507a0cb8238e
@@ -570,6 +582,7 @@ ciliumcli_binary_checksums:
0.16.0: sha256:da98675f961833d4ffd68b1046d907b228a7d394ded2abd70a50b20eaca171c4
calico_crds_archive_checksums:
no_arch:
3.30.6: sha256:d61aa5bcddfc78b0094acd54e0358009fa79e1cbe6d8c23bdacb34ff7a2c6c82
3.30.5: sha256:3a38f91596c204b43c70f642a3e686d8c3fbfdfa5caa7824b716aa2f4a4e568b
3.30.4: sha256:a9398f6de6cce8f683e0ad649a21f3d3b8bb5fe4cd26e7b26b33b9a8c740274f
3.30.3: sha256:36c50905b9b62a78638bcfb9d1c4faf1efa08e2013265dcd694ec4e370b78dd7
@@ -649,6 +662,8 @@ helm_archive_checksums:
3.16.0: sha256:d13a4b87b31a5b50c8d93dd9988dfb312a61e56504102f466a4004e5a3ab8e9e
cri_dockerd_archive_checksums:
arm64:
0.3.23: sha256:a78037d2d2e9c52c48372a5cbba7b94b1c57be5759449beef29cfe03cbe6f14b
0.3.22: sha256:3260b214c9b12dbf0cbf4d60410c45aacfc31ba52aa7b74164135968e8950cb6
0.3.21: sha256:35de6b1e8eba11d8ba6d71fa7499cb3d610a1e7b866c9d43b7f87029e3a769cd
0.3.20: sha256:e6b4661c51c832ee1cbbb75d1c8b086fa803acc153d400454c3b8cf324547d89
0.3.18: sha256:d16204a4f01685ba67319adb3acc6a6f3e62d8bcfd87bc67f5e08f7332515a9d
@@ -666,6 +681,8 @@ cri_dockerd_archive_checksums:
0.3.6: sha256:793b8f57cecf734c47bface10387a8e90994c570b516cb755900f21ebd0a663b
0.3.5: sha256:c20014dc5a71e6991a3bd7e1667c744e3807b5675b1724b26bb7c70093582cfe
amd64:
0.3.23: sha256:c7fe5db7f9396186193b58ded0e62a31eca7b3c58ad8691d57017986f96482ee
0.3.22: sha256:6621a96a885c82844d12318de00f510eae3459871cf1ad47317f38dd242f9a03
0.3.21: sha256:6c35838bc4b1aef74f9113670e114ca729a5f295f9457b226791e18e86e91698
0.3.20: sha256:2ce46d6bbd7f6a7e06e211836c201fdc2311111913eccc63a03f6ef4fe1958fc
0.3.18: sha256:937578ddcdb28c71afded3fda25d555e0c9e6d396668977ff98228d55886dc79
@@ -802,6 +819,8 @@ kata_containers_binary_checksums:
3.5.0: sha256:fa4cf67d010244c4f8d0e6d450d04e28d1bbce5ad1a3cbc0154adff628d56c0c
gvisor_runsc_binary_checksums:
arm64:
'20260112.0': sha512:3b7925d26d71fdcb8cb552950c88bcfed658c06ad6b1211906bfe86d13bc56d8005ac90a4d9ab4c8b6a48eb62ec51ebcdfd45a64067ac5190274e710961e51ea
'20260105.0': sha512:cc98ad73e8d181f4738c97883180bc76cf8b2eb773c11f3a44f1636d0b0e00f2ee9228e4eecd414f94d6410f4877e6c93260b8070130fba767583026115d1038
'20251215.0': sha512:5e7d6206bce4164c9109d37dfb0b169d1c59cc256910de42799a868c3f9ba5560ef5c05c0de3fad4f0856f906463588ff25c9bce3b25e0d3f20874521dffe767
'20251208.0': sha512:db07dc2def9b1e0b13e17bec5f98e9cd794159955ac999432fad16d1ec747924a05cd5e854b4d45f11147c090208c0ce7d915a0734cf2960047bd4daaba0465b
'20251201.0': sha512:fb527cea4d165478f297a918734f10acabf5230a4a0d29b19709cb6a69a389d32c2a0da328146f72ef0d8776aca35d97647db82ff46be60e85ad02305f631896
@@ -829,6 +848,8 @@ gvisor_runsc_binary_checksums:
'20250414.0': sha512:d1ba68b20057622e58e886f472e021a473222590c936a86951005d7b97366b446ef0342b91457ffc0d7e543d54c9c06a363f2883bdd6c594799c4ca1091dabd5
'20250407.0': sha512:cb590f72b0fbda45e89a2300e9247f12ff295a8c52653c8cf815c662d3fbbc774f9b915cdd4fad59e30694d8cc8737fe2a1a8186ab5136f7701bd6e6877a1662
amd64:
'20260112.0': sha512:b36de90cdad4cfe0b9b66318407da79c035dd6dcf4c1374250011f34e511c0a29e335fe04eabb0d3fe7140131925f619f724a4702b37c49557bdeb25924b4dc8
'20260105.0': sha512:15c8adabc9f1006d469177b0ec3962d4993e01c85be17d381a4979029eacc7db37ef354e3eafd279573135a1adf81baffc5c19f2bbfac932c79386f6ac74e52f
'20251215.0': sha512:ea82bb66ce61a80adb6edaa61e2f2b1cd6339c504a55dd6663555010ed7f96c6234ac787bd9ecdb29ed4058e806e829fa45f14093466913dafc44d56055a5acb
'20251208.0': sha512:4b9a29a6f887aedbc10de5f5f0900eb64026c3472b5522ee21a6d2b3d30ac3ebc084a78b97e371d3bf830dcba4f61a5809922ea768650d52ef120221b4a9b19f
'20251201.0': sha512:8534bc833d9b1e286b8876abb17dd6fe202c40a75a36dd62b0ce892bf9dceb1773e71447848e7acab120ce99283c22d2f4e4a6171008c9c5f3d5fe6ad6f1cc75
@@ -857,6 +878,8 @@ gvisor_runsc_binary_checksums:
'20250407.0': sha512:097259d6d93548bf669e21cfec5ba6a47081e43f61d22c5d8a8a4c0c209c81ac9c4454162b826f98cec49e047bbdc29c270113ab6db5519ef3e6a90f302fa47b
gvisor_containerd_shim_binary_checksums:
arm64:
'20260112.0': sha512:3215952718bd1636173649c4742e3d8e1978c410abd71bb8252c8ad6d28130cb6d66684aa089f61a0eda0b8786553620a08a9f1b5ab824bb27b1b0cf47bfb25b
'20260105.0': sha512:cfe8a07c304dca21171e5a76614ac3605f5b1ec8f9ed2eeac014a44bc00821864f219db0e25fcc1c56cedbe335bbf34a7fa6bc57335888dcd04278bc0263f5cc
'20251215.0': sha512:2b3a00ec2d646a1c26c1944781b5caf039ce7035dd72281ccff8e244af55606e01667de311febee1a0a03ebd2633af6ebb0ad72d27b8a966743ffe31563b3a5a
'20251208.0': sha512:f3a6d9ff32dae45c62ae831580e5dfbd28fed38f1ca9daf09e6a9960a5373da7e29fbf61e0846676102f053ed38f23a0ad41349f5326fb3a2991b296d33c853a
'20251201.0': sha512:9546236a7ddad9a2ccd51c41f2f309b7f4016fdf489581f77b1b803ed73ca72501af2de3e3d0b58daa633384baa0d46ecd515760165ed51bfb6c0900649c6306
@@ -884,6 +907,8 @@ gvisor_containerd_shim_binary_checksums:
'20250414.0': sha512:33b9c67bc7b73ca49154aff48da52029414a707b6a3a25eb4f71e861a94dec8fce220e63a162841670ddd4876f45b0e39abdf9f8c3235019c89f209684d3007d
'20250407.0': sha512:1c3838e10c905af0cb52697712bf6bd76b94c9e9d3d07a7643cd43dc2f8dab03b4ed4693c117e555e07a158e04ee583b6b1f1cf2fb9705244ffa5fdc4af67248
amd64:
'20260112.0': sha512:89f55750488559796fe51d2c10c289a8b0617fb9f6498714c026825268eeed449941d23e8cd5b285b69c1b032005ddeec278345198301c50d89ff6d3f66871a5
'20260105.0': sha512:7f3f5a864fda5f4e2de9db20dd5edad60b6aa467cc7c22d13f40cdce811783d66018f2c28fb74b907c6d6ac0e39f6d0e1047f1f33447b8a8682f1fbaa25edeb4
'20251215.0': sha512:538a04d88a39de1679afd9868806bd5fdc63737a4871955fc8a8c8e183942c6cc3dbd6b34b2f5589f5f474b4826427f149d5c6abec4ca8d09db363ff5f149b4f
'20251208.0': sha512:8f1e41374785bdfdf69c5798cfbdec53a833ff6724d36dd644a387b2f6e151c513389b2e5b3c0d5347c5c2ff214910db3c8c164b4d5bb2fe963c5a5eb70ca1a4
'20251201.0': sha512:216a937437cb1747d5e84edd9ae7274c5a2c4f712f4601e7e0ca06e0a688bebfac267707028b78845276302023d305ec9a93f5b200c9c3c3cdf86a2f41817703
@@ -912,6 +937,8 @@ gvisor_containerd_shim_binary_checksums:
'20250407.0': sha512:09acdc895cea6706ba528939da2e6ddab148dfee56addb0d52d7af74378454f4e05cfd47cbb29ad0569139c49cf298be9d4b94a3c2d28b75c05f713e425746e8
nerdctl_archive_checksums:
arm:
2.2.1: sha256:8d49681ac806dd3acb2477675daba3574b7d019aea26513ac1960549473738df
2.2.0: sha256:91f286e4babbd6e000e743f55e2ec6fd6b93f5b227386175f7932d247ab5a431
2.1.6: sha256:9523cce6ed87d379fe06d9b043936398f1b047917037d0faae151de83acb3b4d
2.1.5: sha256:a946db17dca42c0835d4ef891057afda03998501f600db3825f8f421f9c0180e
2.1.4: sha256:dcb2786c45913903e84ee2cbc064f9812b3f7e82bb208b14d622cd4707882063
@@ -933,6 +960,8 @@ nerdctl_archive_checksums:
1.7.1: sha256:799d35de7a182da35d850308c7f1787cd7321404348ff2d5ba64ad43b06b395a
1.7.0: sha256:8b9e7cccbcc0a472685d1bc285f591f41005f8699e7265ea5438a3e06aefdcfd
arm64:
2.2.1: sha256:abc83c9ac3d843c3442eedfb61c6456b8b59b1e4cd69f69598ca1582acc7c094
2.2.0: sha256:37b353122e0785578d1680fb1d7be546f4c64d0a4aed7875d3a216b2c44be76d
2.1.6: sha256:5c30be3ec118eb222bf635f8049c2f96b4c46d9343ec445058e9bc2ee9531c28
2.1.5: sha256:d8d7caea291e0ad828dfb885fed2b9ee6703432be54ff3b62ed81aedd8431f63
2.1.4: sha256:aaf5acbbb044d82038518780cebebcc2a901afde2db465d3581b8987c2d6f6fc
@@ -954,6 +983,8 @@ nerdctl_archive_checksums:
1.7.1: sha256:46affa0564bb74f595a817e7d5060140099d9cfd9e00e1272b4dbe8b0b85c655
1.7.0: sha256:1255eea5bc2dbac9339d0a9acfb0651dda117504d52cd52b38cf3c2251db4f39
amd64:
2.2.1: sha256:34144de7f12756aa4b9dc42a907fd95b0c5eb82a63566a650ca10c8abe7a26a0
2.2.0: sha256:1b3390a832eaeaa1459cf42357da983205da2dd72300a015ad018b3499fc455e
2.1.6: sha256:22857b373edea479a4534ba62cae1c77f2af38f0aac4c91c1c68cf09e29d6f9e
2.1.5: sha256:9ff862624084fa1b2c6272c4498754801bff3f4ce09421ac6eba58a760878544
2.1.4: sha256:d6e91d3e275bfb3404959b1f95ed25ff6cd83e3181d17b93afe2d39cd025501d
@@ -975,6 +1006,8 @@ nerdctl_archive_checksums:
1.7.1: sha256:5fc0a6e8c3a71cbba95fbdb6833fb8a7cd8e78f53de10988362d4029c14b905a
1.7.0: sha256:844c47b175a3d6bc8eaad0c51f23624a5ef10c09e55607803ec2bc846fb04df9
ppc64le:
2.2.1: sha256:05c3573e0468fbe6ccecce497b8129beec0fa1d8afadeba244e3d5ac63047fce
2.2.0: sha256:cc9f55ffec892498bb27db1f6b0eef16b591ee4ce873b61f2fd9a9a30930c620
2.1.6: sha256:807678bc5042cccf81dbe13b00bbe8e18dc24412481c3cc68eafa316ae43842d
2.1.5: sha256:18c72aa80d974394452058472e0dfcfe6200307969a17a33ffe1a85606a2663c
2.1.4: sha256:82b3c3ccbf314e27591977268c81f92660bf6b78fc568e4e7dc1b583f61b622d
@@ -997,6 +1030,8 @@ nerdctl_archive_checksums:
1.7.0: sha256:e421ae655ff68461bad04b4a1a0ffe40c6f0fcfb0847d5730d66cd95a7fd10cd
containerd_archive_checksums:
arm64:
2.2.1: sha256:dac15a0d412a24be8bfe6a40cec8f51829062725169f1e72ac7d120a891ef5b6
2.2.0: sha256:8805c2123d3b7c7ee2030e9f8fc07a1167d8a3f871d6a7d7ec5d1deb0b51a4a7
2.1.6: sha256:88d6e32348c36628c8500a630c6dd4b3cb8c680b1d18dc8d1d19041f67757c6e
2.1.5: sha256:fe81122c0cc8222470fa3be51f42fa918ac29ffd956ccd2fc408c1997babd2ca
2.1.4: sha256:846d13bc2bf1c01ae2f20d13beb9b3a1e50b52c86e955b4ac7d658f5847f2b0e
@@ -1044,6 +1079,8 @@ containerd_archive_checksums:
1.7.1: sha256:1f828dc063e3c24b0840b284c5635b5a11b1197d564c97f9e873b220bab2b41b
1.7.0: sha256:e7e5be2d9c92e076f1e2e15c9f0a6e0609ddb75f7616999b843cba92d01e4da2
amd64:
2.2.1: sha256:f5d8e90ecb6c1c7e33ecddf8cc268a93b9e5b54e0e850320d765511d76624f41
2.2.0: sha256:b9626a94ab93b00bcbcbf13d98deef972c6fb064690e57940632df54ad39ee71
2.1.6: sha256:4793dc5c1f34ebf8402990d0050f3c294aa3c794cd5a4baa403c1cf10602326d
2.1.5: sha256:403af72d9f956ed8a5ad5b0ac0f1e8e371a1488f2b9edf9b4ba13db0653936ea
2.1.4: sha256:316d510a0428276d931023f72c09fdff1a6ba81d6cc36f31805fea6a3c88f515
@@ -1091,6 +1128,8 @@ containerd_archive_checksums:
1.7.1: sha256:9504771bcb816d3b27fab37a6cf76928ee5e95a31eb41510a7d10ae726e01e85
1.7.0: sha256:b068b05d58025dc9f2fc336674cac0e377a478930f29b48e068f97c783a423f0
ppc64le:
2.2.1: sha256:3de0f215ea649952a9e99040cb3888d8059bd3d35b04edbe6afb916c763f9ea7
2.2.0: sha256:e4ecd0b03200864e117371b25cce5335e39ce0b0a168a01d2ba6562a05020f0b
2.1.6: sha256:aef2b639a14ae79f2bbe43356b25e84ecfb2c7f269c87f41e41585e724073e54
2.1.5: sha256:dc95edc01958d18f8475ab4d415e8c92cb3bad580167db8b0054374fd9004f78
2.1.4: sha256:d519e40e266f39cdd68f2c31e2e4e9b70eda09b96f3c3de343a7a3e11d49ad4c
@@ -1139,6 +1178,8 @@ containerd_archive_checksums:
1.7.0: sha256:051e897d3ee5b8c8097f65be447fea2d29226b583ca5d9ed78e9aebcf4e69889
containerd_static_archive_checksums:
arm64:
2.2.1: sha256:6b3b011ee388638eace05ac3be0eb32dfb4a43086695c29d06e997febd336f2e
2.2.0: sha256:5f2a7f451231ff35d8306f874c51606fc9da1e2db56048834a23260f68a78eef
2.1.6: sha256:9da292010d36d80afa3bb48cbd15f65d3bf38177217060272a1c3fd65351cfa4
2.1.5: sha256:d1a1e64c4334e17d6f9f40093d5ff9f810b95ac34c7dcba55e7d2226d2a8ab79
2.1.4: sha256:c5f0957064e6ed5a67905ea3f8e451dadd16530334b86baaad678dd357205c30
@@ -1186,6 +1227,8 @@ containerd_static_archive_checksums:
1.7.1: sha256:f0435e7cda3c3abc40d3f27d403a8e24bd0b927a8a893a7e4dfaec5996fa9731
1.7.0: sha256:6e648cd832f026e23eb6998191e618da7c1ec0c0373263d503ff464e0ae3977a
amd64:
2.2.1: sha256:af3e82bac6abed58d45956c653244aa2be583359a9753614278ef652012f2883
2.2.0: sha256:2d20037947cbb0def12b8ac0c572b212284c1832bf3c921df1e58975515d1d08
2.1.6: sha256:577900a5a8684c27e344aeeb1fc64e355745f58cba7f83c53649235ba25abbbf
2.1.5: sha256:9389a4bff0112258bd953c66382709c2a4a11e25e376c8c4c7075b39de156882
2.1.4: sha256:50e53500800f4f74d0d8b2e57e939297eab68b0fe11a0957b771d5faa61fae5b
@@ -1233,6 +1276,8 @@ containerd_static_archive_checksums:
1.7.1: sha256:8b4e8ed8a650ea435aa71e115fa1a70701ab98bc1836b3ed33341af35bf85a3a
1.7.0: sha256:64ad6428cc4aca486db3a6148682052955d1e3134b69f079edf686c21d123fcd
ppc64le:
2.2.1: sha256:fc9235be9a3dd3005e7fe6a9d7bb80e42dbfbff4b119cdce6ea3ee66bc7ae9ca
2.2.0: sha256:d15a4edfe689ce71df8cc5a0c1837856f54aba8d7336170600e6592c2fbf3d8d
2.1.6: sha256:c64312b87181d900452b5c3360a90578acd39ec7664d0c2e060183b24a708766
2.1.5: sha256:4404b918b9e101274baa072188054766a1af16be8d22f02a51a5f6ee4e5d159f
2.1.4: sha256:9bc1ac45ba197873a4d47045313e0cc55910802937739bf57aded125abe55c8c

View File

@@ -7,7 +7,7 @@ kube_next: "{{ ((kube_version | split('.'))[1] | int) + 1 }}"
kube_major_next_version: "1.{{ kube_next }}"
pod_infra_supported_versions:
'1.34': '3.10'
'1.34': '3.10.1'
'1.33': '3.10'
'1.32': '3.10'

View File

@@ -3,54 +3,36 @@
tags:
- always
block:
- name: Gather ansible_default_ipv4
- name: Gather node IPs
setup:
gather_subset: '!all,network'
filter: "ansible_default_ipv4"
when: ansible_default_ipv4 is not defined
gather_subset: '!all,!min,network'
filter: "ansible_default_ip*"
when: ansible_default_ipv4 is not defined or ansible_default_ipv6 is not defined
ignore_unreachable: true
# Set 127.0.0.1 as fallback IP if we do not have host facts for host
# ansible_default_ipv4 isn't what you think.
# https://medium.com/opsops/ansible-default-ipv4-is-not-what-you-think-edb8ab154b10
# TODO: discard this and update all the location relying on it in "looping on hostvars" templates
- name: Set fallback_ip
set_fact:
- name: Set computed IPs variables
vars:
fallback_ip: "{{ ansible_default_ipv4.address | d('127.0.0.1') }}"
when: fallback_ip is not defined
- name: Gather ansible_default_ipv6
setup:
gather_subset: '!all,network'
filter: "ansible_default_ipv6"
when: ansible_default_ipv6 is not defined
ignore_unreachable: true
- name: Set fallback_ip6
set_fact:
fallback_ip6: "{{ ansible_default_ipv6.address | d('::1') }}"
when: fallback_ip6 is not defined
- name: Set main access ip(access_ip based on ipv4_stack/ipv6_stack options).
# Set 127.0.0.1 as fallback IP if we do not have host facts for host
# ansible_default_ipv4 isn't what you think.
_ipv4: "{{ ip | default(fallback_ip) }}"
_access_ipv4: "{{ access_ip | default(_ipv4) }}"
_ipv6: "{{ ip6 | default(fallback_ip6) }}"
_access_ipv6: "{{ access_ip6 | default(_ipv6) }}"
_access_ips:
- "{{ _access_ipv4 if ipv4_stack }}"
- "{{ _access_ipv6 if ipv6_stack }}"
_ips:
- "{{ _ipv4 if ipv4_stack }}"
- "{{ _ipv6 if ipv6_stack }}"
set_fact:
cacheable: true
main_access_ip: >-
{%- if ipv4_stack -%}
{{ access_ip | default(ip | default(fallback_ip)) }}
{%- else -%}
{{ access_ip6 | default(ip6 | default(fallback_ip6)) }}
{%- endif -%}
- name: Set main ip(ip based on ipv4_stack/ipv6_stack options).
set_fact:
cacheable: true
main_ip: "{{ (ip | default(fallback_ip)) if ipv4_stack else (ip6 | default(fallback_ip6)) }}"
- name: Set main access ips(mixed ips for dualstack).
set_fact:
main_access_ips: ["{{ (main_access_ip + ',' + (access_ip6 | default(ip6 | default(fallback_ip6)))) if (ipv4_stack and ipv6_stack) else main_access_ip }}"]
- name: Set main ips(mixed ips for dualstack).
set_fact:
main_ips: ["{{ (main_ip + ',' + (ip6 | default(fallback_ip6))) if (ipv4_stack and ipv6_stack) else main_ip }}"]
main_access_ip: "{{ _access_ipv4 if ipv4_stack else _access_ipv6 }}"
main_ip: "{{ _ipv4 if ipv4_stack else _ipv6 }}"
# Mixed IPs - for dualstack
main_access_ips: "{{ _access_ips | select }}"
main_ips: "{{ _ips | select }}"
- name: Set no_proxy
import_tasks: no_proxy.yml
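The refactored block leans on two Jinja behaviors: `{{ x if cond }}` renders as an empty string when the condition is false, and `| select` with no test drops falsy entries. A minimal Python equivalent with made-up addresses:

```python
ipv4_stack, ipv6_stack = True, True          # dual-stack example
_ipv4, _ipv6 = "10.0.0.10", "2001:db8::10"   # ip | default(fallback_ip), etc.

# Jinja's `{{ _ipv4 if ipv4_stack }}` renders '' when the stack is disabled;
# `| select` then filters the list down to truthy entries, like this:
_ips = [_ipv4 if ipv4_stack else "", _ipv6 if ipv6_stack else ""]
main_ips = [ip for ip in _ips if ip]

print(main_ips)  # ['10.0.0.10', '2001:db8::10']; single-stack yields one entry
```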

View File

@@ -4,7 +4,7 @@
# noqa: jinja[spacing]
no_proxy_prepare: >-
{%- if loadbalancer_apiserver is defined -%}
{{ apiserver_loadbalancer_domain_name | default('') }},
{{ apiserver_loadbalancer_domain_name }},
{{ loadbalancer_apiserver.address | default('') }},
{%- endif -%}
{%- if no_proxy_exclude_workers | default(false) -%}
@@ -32,7 +32,7 @@
- name: Populates no_proxy to all hosts
set_fact:
no_proxy: "{{ hostvars.localhost.no_proxy_prepare }}"
no_proxy: "{{ hostvars.localhost.no_proxy_prepare | select }}"
# noqa: jinja[spacing]
proxy_env: "{{ proxy_env | combine({
'no_proxy': hostvars.localhost.no_proxy_prepare,

View File

@@ -7,8 +7,8 @@ image:
repository: {{ cilium_image_repo }}
tag: {{ cilium_image_tag }}
k8sServiceHost: "auto"
k8sServicePort: "auto"
k8sServiceHost: "{{ kube_apiserver_global_endpoint | urlsplit('hostname') }}"
k8sServicePort: "{{ kube_apiserver_global_endpoint | urlsplit('port') }}"
ipv4:
enabled: {{ cilium_enable_ipv4 | to_json }}

View File

@@ -1,44 +0,0 @@
---
dependencies:
- role: network_plugin/cni
when: kube_network_plugin != 'none'
- role: network_plugin/cilium
when: kube_network_plugin == 'cilium' or cilium_deploy_additionally
tags:
- cilium
- role: network_plugin/calico
when: kube_network_plugin == 'calico'
tags:
- calico
- role: network_plugin/flannel
when: kube_network_plugin == 'flannel'
tags:
- flannel
- role: network_plugin/macvlan
when: kube_network_plugin == 'macvlan'
tags:
- macvlan
- role: network_plugin/kube-ovn
when: kube_network_plugin == 'kube-ovn'
tags:
- kube-ovn
- role: network_plugin/kube-router
when: kube_network_plugin == 'kube-router'
tags:
- kube-router
- role: network_plugin/custom_cni
when: kube_network_plugin == 'custom_cni'
tags:
- custom_cni
- role: network_plugin/multus
when: kube_network_plugin_multus
tags:
- multus

View File

@@ -0,0 +1,47 @@
---
- name: Container Network Interface plugin
include_role:
name: network_plugin/cni
when: kube_network_plugin != 'none'
- name: Network plugin
include_role:
name: "network_plugin/{{ kube_network_plugin }}"
apply:
tags:
- "{{ kube_network_plugin }}"
- network
when:
- kube_network_plugin != 'none'
tags:
- cilium
- calico
- flannel
- macvlan
- kube-ovn
- kube-router
- custom_cni
- name: Cilium additional
include_role:
name: network_plugin/cilium
apply:
tags:
- cilium
- network
when:
- kube_network_plugin != 'cilium'
- cilium_deploy_additionally
tags:
- cilium
- name: Multus
include_role:
name: network_plugin/multus
apply:
tags:
- multus
- network
when: kube_network_plugin_multus
tags:
- multus

View File

@@ -20,7 +20,7 @@
when:
- not ignore_assert_errors
- name: Warn if `kube_network_plugin` is `none
- name: Warn if `kube_network_plugin` is `none`
debug:
msg: |
"WARNING! => `kube_network_plugin` is set to `none`. The network configuration will be skipped.

View File

@@ -1,7 +1,7 @@
# syntax=docker/dockerfile:1
# Use immutable image tags rather than mutable tags (like ubuntu:22.04)
FROM ubuntu:22.04@sha256:149d67e29f765f4db62aa52161009e99e389544e25a8f43c8c89d4a445a7ca37
# Use immutable image tags rather than mutable tags (like ubuntu:24.04)
FROM ubuntu:noble-20260113@sha256:cd1dba651b3080c3686ecf4e3c4220f026b521fb76978881737d24f200828b2b
# Some tools like yamllint need this
# Pip needs this as well at the moment to install ansible
@@ -29,7 +29,7 @@ RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
RUN --mount=type=bind,source=requirements.txt,target=requirements.txt \
--mount=type=cache,sharing=locked,id=pipcache,mode=0777,target=/root/.cache/pip \
pip install --no-compile --no-cache-dir -r requirements.txt \
pip install --break-system-packages --no-compile --no-cache-dir -r requirements.txt \
&& find /usr -type d -name '*__pycache__' -prune -exec rm -rf {} \;
SHELL ["/bin/bash", "-o", "pipefail", "-c"]

View File

@@ -116,4 +116,9 @@ infos = {
"graphql_id": "R_kgDODQ6RZw",
"binary": True,
},
"prometheus_operator_crds": {
"url": "https://github.com/prometheus-operator/prometheus-operator/releases/download/v{version}/stripped-down-crds.yaml",
"graphql_id": "R_kgDOBBxPpw",
"binary": True,
},
}

View File

@@ -5,42 +5,38 @@ import logging
import datetime
import time
DATE_FORMAT = '%Y-%m-%dT%H:%M:%SZ'
PAUSE_SECONDS = 5
log = logging.getLogger(__name__)
log = logging.getLogger('openstack-cleanup')
parser = argparse.ArgumentParser(description="Cleanup OpenStack resources")
parser = argparse.ArgumentParser(description='Cleanup OpenStack resources')
parser.add_argument('-v', '--verbose', action='store_true',
help='Increase verbosity')
parser.add_argument('--hours', type=int, default=4,
help='Age (in hours) of VMs to cleanup (default: 4h)')
parser.add_argument('--dry-run', action='store_true',
help='Do not delete anything')
parser.add_argument(
"--hours",
type=int,
default=4,
help="Age (in hours) of VMs to cleanup (default: 4h)",
)
parser.add_argument("--dry-run", action="store_true", help="Do not delete anything")
args = parser.parse_args()
oldest_allowed = datetime.datetime.now() - datetime.timedelta(hours=args.hours)
oldest_allowed = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(
hours=args.hours
)
def main():
logging.basicConfig(level=logging.INFO)
if args.dry_run:
print('Running in dry-run mode')
else:
print('This will delete resources... (ctrl+c to cancel)')
time.sleep(PAUSE_SECONDS)
log.info("Running in dry-run mode")
conn = openstack.connect()
print('Servers...')
map_if_old(conn.compute.delete_server,
conn.compute.servers())
log.info("Deleting servers...")
map_if_old(conn.compute.delete_server, conn.compute.servers())
print('Ports...')
log.info("Deleting ports...")
try:
map_if_old(conn.network.delete_port,
conn.network.ports())
map_if_old(conn.network.delete_port, conn.network.ports())
except openstack.exceptions.ConflictException as ex:
# Need to find subnet-id which should be removed from a router
for sn in conn.network.subnets():
@@ -48,40 +44,41 @@ def main():
fn_if_old(conn.network.delete_subnet, sn)
except openstack.exceptions.ConflictException:
for r in conn.network.routers():
print("Deleting subnet %s from router %s", sn, r)
log.info("Deleting subnet %s from router %s", sn, r)
try:
conn.network.remove_interface_from_router(
r, subnet_id=sn.id)
conn.network.remove_interface_from_router(r, subnet_id=sn.id)
except Exception as ex:
print("Failed to delete subnet from router as %s", ex)
log.error("Failed to delete subnet from router", exc_info=True)
for ip in conn.network.ips():
fn_if_old(conn.network.delete_ip, ip)
# After removing unnecessary subnet from router, retry to delete ports
map_if_old(conn.network.delete_port,
conn.network.ports())
map_if_old(conn.network.delete_port, conn.network.ports())
print('Security groups...')
log.info("Deleting security groups...")
try:
map_if_old(conn.network.delete_security_group,
conn.network.security_groups())
map_if_old(conn.network.delete_security_group, conn.network.security_groups())
except openstack.exceptions.ConflictException as ex:
# Need to delete ports when security groups are in use
map_if_old(conn.network.delete_port,
conn.network.ports())
map_if_old(conn.network.delete_security_group,
conn.network.security_groups())
map_if_old(conn.network.delete_port, conn.network.ports())
map_if_old(conn.network.delete_security_group, conn.network.security_groups())
print('Subnets...')
map_if_old(conn.network.delete_subnet,
conn.network.subnets())
log.info("Deleting Subnets...")
map_if_old(conn.network.delete_subnet, conn.network.subnets())
print('Networks...')
log.info("Deleting networks...")
for n in conn.network.networks():
if not n.is_router_external:
fn_if_old(conn.network.delete_network, n)
log.info("Deleting keypairs...")
map_if_old(
conn.compute.delete_keypair,
(conn.compute.get_keypair(x.name) for x in conn.compute.keypairs()),
# LIST API for keypairs does not give us a created_at (WTF)
)
# runs the given fn on all elements of the list that are older than allowed
def map_if_old(fn, items):
@@ -91,15 +88,19 @@ def map_if_old(fn, items):
# run the given fn only if the passed item is older than allowed
def fn_if_old(fn, item):
created_at = datetime.datetime.strptime(item.created_at, DATE_FORMAT)
created_at = datetime.datetime.fromisoformat(item.created_at)
if created_at.tzinfo is None:
created_at = created_at.replace(tzinfo=datetime.timezone.utc)
# Handle TZ unaware object by assuming UTC
# Can't compare to oldest_allowed otherwise
if item.name == "default": # skip default security group
return
if created_at < oldest_allowed:
print('Will delete %(name)s (%(id)s)' % item)
log.info("Will delete %s %s)", item.name, item.id)
if not args.dry_run:
fn(item)
if __name__ == '__main__':
if __name__ == "__main__":
# execute only if run as a script
main()
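The `fromisoformat` change also has to normalize TZ-unaware timestamps before comparing against the timezone-aware `oldest_allowed`, otherwise the comparison raises `TypeError`. A standalone sketch (the `.replace("Z", ...)` shim is an assumption for Python < 3.11, where `fromisoformat` does not accept a trailing `Z`):

```python
import datetime

def parse_created_at(created_at: str) -> datetime.datetime:
    # fromisoformat handles offset-bearing timestamps directly; the Z shim
    # covers 'Z'-suffixed values on older interpreters.
    ts = datetime.datetime.fromisoformat(created_at.replace("Z", "+00:00"))
    if ts.tzinfo is None:
        # Assume UTC for TZ-unaware values so the comparison below is
        # aware-vs-aware and cannot raise TypeError.
        ts = ts.replace(tzinfo=datetime.timezone.utc)
    return ts

oldest_allowed = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(hours=4)
print(parse_created_at("2020-01-01T00:00:00Z") < oldest_allowed)  # True
```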

View File

@@ -1,5 +1,5 @@
# Use immutable image tags rather than mutable tags (like ubuntu:22.04)
FROM ubuntu:jammy-20230308
# Use immutable image tags rather than mutable tags (like ubuntu:24.04)
FROM ubuntu:noble-20260113@sha256:cd1dba651b3080c3686ecf4e3c4220f026b521fb76978881737d24f200828b2b
# Some tools like yamllint need this
# Pip needs this as well at the moment to install ansible
# (and potentially other packages)
@@ -27,14 +27,14 @@ RUN apt update -q \
ca-certificates \
curl \
gnupg2 \
software-properties-common \
unzip \
libvirt-clients \
qemu-utils \
qemu-kvm \
dnsmasq \
&& curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - \
&& add-apt-repository "deb [arch=$(dpkg --print-architecture)] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" \
&& curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc \
&& echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | tee /etc/apt/sources.list.d/docker.list \
&& apt update -q \
&& apt install --no-install-recommends -yq docker-ce \
&& apt autoremove -yqq --purge && apt clean && rm -rf /var/lib/apt/lists/* /var/log/*
@@ -44,9 +44,8 @@ ADD ./requirements.txt /kubespray/requirements.txt
ADD ./tests/requirements.txt /kubespray/tests/requirements.txt
RUN update-alternatives --install /usr/bin/python python /usr/bin/python3 1 \
&& pip install --no-compile --no-cache-dir pip -U \
&& pip install --no-compile --no-cache-dir -r tests/requirements.txt \
&& pip install --no-compile --no-cache-dir -r requirements.txt \
&& pip install --break-system-packages --ignore-installed --no-compile --no-cache-dir pip -U \
&& pip install --break-system-packages --no-compile --no-cache-dir -r tests/requirements.txt \
&& curl -L https://dl.k8s.io/release/v{{ kube_version }}/bin/linux/$(dpkg --print-architecture)/kubectl -o /usr/local/bin/kubectl \
&& echo $(curl -L https://dl.k8s.io/release/v{{ kube_version }}/bin/linux/$(dpkg --print-architecture)/kubectl.sha256) /usr/local/bin/kubectl | sha256sum --check \
&& chmod a+x /usr/local/bin/kubectl \
@@ -56,5 +55,5 @@ RUN update-alternatives --install /usr/bin/python python /usr/bin/python3 1 \
&& rm vagrant_${VAGRANT_VERSION}-1_$(dpkg --print-architecture).deb \
&& vagrant plugin install vagrant-libvirt \
# Install Kubernetes collections
&& pip install --no-compile --no-cache-dir kubernetes \
&& pip install --break-system-packages --no-compile --no-cache-dir kubernetes \
&& ansible-galaxy collection install kubernetes.core

View File

@@ -118,6 +118,10 @@ images:
converted: true
tag: "latest"
# rockylinux-10-extra:
# the default cloud image doesn't include `kernel-modules-extra`. How to build RockyLinux 10 + `kernel-modules-extra` with dib:
# https://github.com/kubernetes-sigs/kubespray/pull/12355#issuecomment-3705400093
debian-10:
filename: debian-10-openstack-amd64.qcow2
url: https://cdimage.debian.org/cdimage/openstack/current-10/debian-10-openstack-amd64.qcow2

View File

@@ -0,0 +1,10 @@
---
# Instance settings
cloud_image: rockylinux-10-extra
vm_memory: 3072
# Kubespray settings
metrics_server_enabled: true
dashboard_namespace: "kube-dashboard"
dashboard_enabled: true
loadbalancer_apiserver_type: haproxy

View File

@@ -0,0 +1 @@
RESET_CHECK=true

View File

@@ -0,0 +1,15 @@
---
# Instance settings
cloud_image: rockylinux-10-extra
vm_memory: 3072
# Kubespray settings
kube_network_plugin: cilium
cilium_kube_proxy_replacement: true
kube_owner: root
# Node Feature Discovery
node_feature_discovery_enabled: true
kube_asymmetric_encryption_algorithm: "ECDSA-P256"

View File

@@ -1,7 +0,0 @@
---
sonobuoy_enabled: true
pkg_install_retries: 25
retry_stagger: 10
# Ignore ping errors
ignore_assert_errors: true

View File

@@ -1,4 +1,4 @@
-r ../requirements.txt
distlib==0.4.0 # required for building collections
molecule==25.1.0
molecule==25.12.0
pytest-testinfra==10.2.2

View File

@@ -18,7 +18,7 @@ if [ "${UPGRADE_TEST}" != "false" ]; then
# Checkout the current tests/ directory; even when testing an old version,
# we want the up-to-date test setup/provisioning
git checkout "${CI_COMMIT_SHA}" -- tests/
pip install --no-compile --no-cache-dir -r requirements.txt
pip install --break-system-packages --no-compile --no-cache-dir -r requirements.txt
fi
export ANSIBLE_BECOME=true
@@ -58,7 +58,7 @@ fi
if [ "${UPGRADE_TEST}" != "false" ]; then
git checkout "${CI_COMMIT_SHA}"
pip install --no-compile --no-cache-dir -r requirements.txt
pip install --break-system-packages --no-compile --no-cache-dir -r requirements.txt
case "${UPGRADE_TEST}" in
"basic")