Compare commits

...

36 Commits

Author SHA1 Message Date
ChengHao Yang
6965d8ded9 Support Fedora 41 (#12138)
* Add Fedora 41 CI support

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

* Docs: add fedora41 support

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

* Add Fedora 41 local vagrant test

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

* Fix: Fedora 41+ needs python3-libdnf5 for package management

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

---------

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2026-02-11 08:26:01 +05:30
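For context, a minimal Ansible sketch of the libdnf5 fix (the task shape and conditions are assumptions, not the exact Kubespray task): Fedora 41 ships dnf5, and Ansible's dnf-based modules require the python3-libdnf5 bindings, which have to be bootstrapped with a raw dnf call because the package module itself cannot run without them.

```yaml
# Sketch: bootstrap the dnf5 Python bindings on Fedora 41+.
# The regular package module cannot be used here, since it is the
# module that needs python3-libdnf5 in the first place.
- name: Install python3-libdnf5 on Fedora 41+
  ansible.builtin.command: dnf install -y python3-libdnf5
  when:
    - ansible_distribution == "Fedora"
    - ansible_distribution_major_version | int >= 41
  register: libdnf5_install
  changed_when: "'Nothing to do' not in libdnf5_install.stdout"
```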
Meza
8bd5045ecf cleanup: Deprecate Ingress-Nginx from kubernetes-apps (#12767)
* [docs] Remove ingress-nginx references in docs and scripts jinja

Signed-off-by: Meza <meza-xyz@proton.me>

* Remove ingress-nginx doc and remove references in readme and sidebar

Signed-off-by: Meza <meza-xyz@proton.me>

* Delete ingress-nginx dir from kubernetes-apps

Signed-off-by: Meza <meza-xyz@proton.me>

* Delete ingress-nginx from inventory addons

Signed-off-by: Meza <meza-xyz@proton.me>

* Delete ingress_nginx_enabled from default main

Signed-off-by: Meza <meza-xyz@proton.me>

* Delete ingress_nginx from download

Signed-off-by: Meza <meza-xyz@proton.me>

* Delete ingress_nginx from dependencies

Signed-off-by: Meza <meza-xyz@proton.me>

* Remove ingress_nginx from registry task

Signed-off-by: Meza <meza-xyz@proton.me>

---------

Signed-off-by: Meza <meza-xyz@proton.me>
2026-02-10 20:22:04 +05:30
Micke Nordin
8f73dc9c2f Add services RBAC for calico-kube-controllers in KDD mode (#12928)
Commit 5fb85dc added service permissions for etcd datastore mode,
but the same permissions are needed for KDD (Kubernetes datastore) mode.

Signed-off-by: Micke Nordin <kano@sunet.se>
2026-02-10 19:52:02 +05:30
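The added rule would look roughly like this fragment of the calico-kube-controllers ClusterRole (a sketch of the shape of the change; the exact template in Kubespray may differ):

```yaml
# Sketch: in KDD mode, calico-kube-controllers also needs read access to
# Services, mirroring what commit 5fb85dc granted for the etcd datastore mode.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: calico-kube-controllers
rules:
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get", "list", "watch"]
```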
Ali Afsharzadeh
cc05dd4d14 Upgrade ansible from 10.7.0 to 11.13.0 (#12903)
Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2026-02-10 19:48:07 +05:30
Mark Tsai
9582ab3dcd image_updates: update openstack-cloud-controller to v1.35.0 (#12972) 2026-02-10 14:58:01 +05:30
Mohamed Omar Zaian
a77221d12b [kubernetes] Support Kubernetes v1.35.0 (#12812) 2026-02-10 14:54:02 +05:30
Max Gautier
57364f4085 Patch versions updates (#12973)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-02-09 21:13:25 +05:30
Max Gautier
34f43d21e3 Revert "kubelet: conditionalize staticPodPath location (#12433)" (#12970)
* Revert "kubelet: conditionalize staticPodPath location (#12433)"

This reverts commit 082507cff2.

* Add kubelet_static_pod_path to removed variables
2026-02-09 07:31:09 +05:30
Srishti Jaiswal
052846aa28 removed deprecated containerd_registries from test file (#12969) 2026-02-08 11:11:08 +05:30
neo
a563431c68 Remove Kubernetes Dashboard support (#12858) 2026-02-07 22:49:08 +05:30
Max Gautier
3aa0c0cc64 coredns: allow to customize service name (#12951) 2026-02-06 09:52:29 +05:30
chun
9bbef44e32 Bump: Prometheus Operator CRD to 0.88.1 (#12968)
Signed-off-by: hcc429 <dev.hcc29@gmail.com>
2026-02-06 08:36:30 +05:30
Srishti Jaiswal
03cfdbf2a9 add removed var validation to validate_inventory (#12942) 2026-02-05 15:34:31 +05:30
Jordan Liggitt
b5b599ecf8 Clean up unused nodes/proxy permission from node-feature-discovery-gc (#12955) 2026-02-05 15:30:34 +05:30
Max Gautier
4245ddcee8 Make etcd node removal idempotent (#12949) 2026-02-05 11:40:28 +05:30
Joshua N Haupt
422e7366ec Fix Gluster image_id and update openstack_blockstorage_volume_v3 (#12910)
This fixes the Terraform Gluster Compute image_id bug and updates
openstack_blockstorage_volume_v2 to openstack_blockstorage_volume_v3.

Resolves:
[Bug] OpenStack Compute variable handling of image_id and image_name for Gluster nodes is broken

https://github.com/kubernetes-sigs/kubespray/issues/12902

Update openstack_blockstorage_volume_v2 to openstack_blockstorage_volume_v3

https://github.com/kubernetes-sigs/kubespray/issues/12901

Signed-off-by: Joshua Nathaniel Haupt <joshua@hauptj.com>
2026-02-04 11:08:26 +05:30
Tushar240503
bf69e67240 refactor/dynamic-role-loading-network (#12933)
Signed-off-by: Tushar Sharma <tusharkumargzb6@gmail.com>
2026-02-03 21:58:29 +05:30
Tushar240503
c5c2cf16a0 Move inline defaults to defaults/main.yml (#12926) 2026-02-03 14:14:29 +05:30
Ali Afsharzadeh
69e042bd9e Remove software-properties-common from pipeline.Dockerfile (#12945)
Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2026-02-02 20:04:32 +05:30
dependabot[bot]
20da3bb1b0 build(deps): bump cryptography from 46.0.3 to 46.0.4 (#12944)
Bumps [cryptography](https://github.com/pyca/cryptography) from 46.0.3 to 46.0.4.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/46.0.3...46.0.4)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-version: 46.0.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-02 09:26:30 +05:30
Ieere Song
4d4058ee8e fix: typo in validate_inventory task name (missing backtick) (#12940)
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-01-31 20:02:24 +05:30
Tushar240503
f071fccc33 updated prometheus-operator crd checksum autobump (#12939)
* updated prometheus-operator crd checksum autobump

Signed-off-by: Tushar Sharma <tusharkumargzb6@gmail.com>

* updated to Next-Gen format

Signed-off-by: Tushar Sharma <tusharkumargzb6@gmail.com>

---------

Signed-off-by: Tushar Sharma <tusharkumargzb6@gmail.com>
2026-01-31 19:44:24 +05:30
Eugene Shutov
70daea701a local_path_provisioner: add resources (#12548)
* local_path_provisioner: add resources

* Update roles/kubernetes-apps/external_provisioner/local_path_provisioner/templates/local-path-storage-deployment.yml.j2

Co-authored-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

---------

Co-authored-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2026-01-31 00:08:25 +05:30
Ali Afsharzadeh
3e42b84e94 Upgrade Dockerfile base image from Ubuntu 22.04 to 24.04 (#12935)
* Upgrade Dockerfile base image from Ubuntu 22.04 to 24.04

* Add --break-system-packages flag to testcases_run.sh file
2026-01-30 19:57:44 +05:30
Max Gautier
868ff3cea9 Auto-bump checksums on last 3 branches (#12934)
We now have all supported release branches (last 3) using the new
checksums format, which means they all work with the auto-bump tooling.
2026-01-30 15:39:44 +05:30
Max Gautier
0b69a18e35 Remove nifcloud terraform provider support (it is no longer available) (#12936)
The nifcloud terraform provider has been deleted, so remove support and
CI.
2026-01-30 15:05:44 +05:30
ChengHao Yang
e30076016c Releng: Galaxy version upgrade to 2.31.0 (#12909)
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2026-01-30 13:35:43 +05:30
ChengHao Yang
f4ccdb5e72 Docs: update 2.29.0 to 2.30.0 (#12899)
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2026-01-29 23:45:50 +05:30
Max Gautier
fcecaf6943 wait for control plane node to become ready after joining (#12794)
When joining a control plane node and "upgrading" the cluster setup (for
example, to update etcd addresses after adding a new etcd) in the same
playbook run, the node can take a bit of time to become ready after
joining.
This triggers a kubeadm preflight check (ControlPlaneNodesReady) in
kubeadm upgrade, which is run directly after the join tasks.

Add a configurable wait for the control plane node to become Ready to
fix this race condition.
2026-01-28 22:15:51 +05:30
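A minimal sketch of such a wait, assuming a hypothetical `control_plane_ready_timeout` variable (the real task and variable names in the PR may differ):

```yaml
# Sketch: block until the freshly joined control plane node reports Ready,
# so the ControlPlaneNodesReady preflight check in 'kubeadm upgrade' passes.
- name: Wait for control plane node to become Ready
  ansible.builtin.command: >-
    kubectl --kubeconfig /etc/kubernetes/admin.conf
    wait --for=condition=Ready
    node/{{ inventory_hostname }}
    --timeout={{ control_plane_ready_timeout | default('300s') }}
  changed_when: false
```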
Max Gautier
37f7a86014 etcd-certs: only change necessary permissions (#12908)
We currently **recursively** set the permissions of /etc/ssl/etcd/ssl
(default path) to 700. But this removes group permissions from the files
under it, and certain components (like calico with the etcd datastore) rely
on them; thus, upgrading a cluster can fail because calico-kube-controllers
can't access the certs, and therefore can't reach etcd.

This works in other cases because, as far as I can tell, the apiserver,
which does access etcd, runs as root (the owner of the files, not just
the "group owner").

We also, for some reason, do this twice.

Only create the etcd cert directory with the correct permissions once,
not recursively.
2026-01-27 20:25:52 +05:30
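In Ansible terms the fix boils down to a non-recursive file task, roughly like this sketch (not the exact Kubespray task):

```yaml
# Sketch: set the mode on the directory itself only. A recursive chmod
# would strip group permissions from the certificate files underneath,
# which components like calico-kube-controllers rely on to reach etcd.
- name: Create etcd cert dir with correct permissions (non-recursive)
  ansible.builtin.file:
    path: /etc/ssl/etcd/ssl
    state: directory
    mode: "0700"
```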
Max Gautier
fff7f10a85 Patch versions updates (#12912)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-01-27 20:21:53 +05:30
ChengHao Yang
dc09298f7e Docs: cilium_kube_proxy_replacement change boolean (#12898)
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2026-01-27 16:43:48 +05:30
dependabot[bot]
680db0c921 build(deps): bump jmespath from 1.0.1 to 1.1.0 (#12905)
Bumps [jmespath](https://github.com/jmespath/jmespath.py) from 1.0.1 to 1.1.0.
- [Changelog](https://github.com/jmespath/jmespath.py/blob/develop/CHANGELOG.rst)
- [Commits](https://github.com/jmespath/jmespath.py/compare/1.0.1...1.1.0)

---
updated-dependencies:
- dependency-name: jmespath
  dependency-version: 1.1.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-27 16:39:49 +05:30
dependabot[bot]
9977d4dc10 build(deps): bump actions/checkout from 6.0.1 to 6.0.2 (#12906)
Bumps [actions/checkout](https://github.com/actions/checkout) from 6.0.1 to 6.0.2.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](8e8c483db8...de0fac2e45)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: 6.0.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-26 20:41:53 +05:30
dependabot[bot]
1b6129566b build(deps): bump peter-evans/create-pull-request from 8.0.0 to 8.1.0 (#12907)
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 8.0.0 to 8.1.0.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](98357b18bf...c0f553fe54)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-version: 8.1.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-26 20:37:51 +05:30
Ali Afsharzadeh
c3404c3685 Upgrade cilium from 1.18.5 to 1.18.6 (#12900)
Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2026-01-26 20:21:50 +05:30
107 changed files with 312 additions and 2574 deletions

View File

@@ -13,7 +13,7 @@ jobs:
issues: write
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd
- name: Parse issue form
uses: stefanbuck/github-issue-parser@10dcc54158ba4c137713d9d69d70a2da63b6bda3

View File

@@ -20,7 +20,7 @@ jobs:
query get_release_branches($owner:String!, $name:String!) {
repository(owner:$owner, name:$name) {
refs(refPrefix: "refs/heads/",
first: 2, # TODO increment once we have release branch with the new checksums format
first: 3,
query: "release-",
orderBy: {
field: ALPHABETICAL,

View File

@@ -11,7 +11,7 @@ jobs:
update-patch-versions:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd
with:
ref: ${{ inputs.branch }}
- uses: actions/setup-python@v6
@@ -29,7 +29,7 @@ jobs:
~/.cache/pre-commit
- run: pre-commit run --all-files propagate-ansible-variables
continue-on-error: true
- uses: peter-evans/create-pull-request@98357b18bf14b5342f975ff684046ec3b2a07725
- uses: peter-evans/create-pull-request@c0f553fe549906ede9cf27b5156039d195d2ece0
with:
commit-message: Patch versions updates
title: Patch versions updates - ${{ inputs.branch }}

View File

@@ -41,6 +41,7 @@ pr:
- debian12-cilium
- debian13-cilium
- fedora39-kube-router
- fedora41-kube-router
- openeuler24-calico
- rockylinux9-cilium
- rockylinux10-cilium
@@ -91,6 +92,8 @@ pr_full:
- debian12-custom-cni-helm
- fedora39-calico-swap-selinux
- fedora39-crio
- fedora41-calico-swap-selinux
- fedora41-crio
- ubuntu24-calico-ha-wireguard
- ubuntu24-flannel-ha
- ubuntu24-flannel-ha-once
@@ -150,6 +153,7 @@ periodic:
- debian12-cilium-svc-proxy
- fedora39-calico-selinux
- fedora40-docker-calico
- fedora41-calico-selinux
- ubuntu24-calico-etcd-kubeadm-upgrade-ha
- ubuntu24-calico-ha-recover
- ubuntu24-calico-ha-recover-noquorum

View File

@@ -37,7 +37,6 @@ terraform_validate:
- hetzner
- vsphere
- upcloud
- nifcloud
.terraform_apply:
extends: .terraform_install

View File

@@ -1,7 +1,7 @@
# syntax=docker/dockerfile:1
# Use immutable image tags rather than mutable tags (like ubuntu:22.04)
FROM ubuntu:22.04@sha256:149d67e29f765f4db62aa52161009e99e389544e25a8f43c8c89d4a445a7ca37
# Use immutable image tags rather than mutable tags (like ubuntu:24.04)
FROM ubuntu:noble-20260113@sha256:cd1dba651b3080c3686ecf4e3c4220f026b521fb76978881737d24f200828b2b
# Some tools like yamllint need this
# Pip needs this as well at the moment to install ansible
@@ -29,14 +29,14 @@ RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
RUN --mount=type=bind,source=requirements.txt,target=requirements.txt \
--mount=type=cache,sharing=locked,id=pipcache,mode=0777,target=/root/.cache/pip \
pip install --no-compile --no-cache-dir -r requirements.txt \
pip install --break-system-packages --no-compile --no-cache-dir -r requirements.txt \
&& find /usr -type d -name '*__pycache__' -prune -exec rm -rf {} \;
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
RUN OS_ARCHITECTURE=$(dpkg --print-architecture) \
&& curl -L "https://dl.k8s.io/release/v1.34.3/bin/linux/${OS_ARCHITECTURE}/kubectl" -o /usr/local/bin/kubectl \
&& echo "$(curl -L "https://dl.k8s.io/release/v1.34.3/bin/linux/${OS_ARCHITECTURE}/kubectl.sha256")" /usr/local/bin/kubectl | sha256sum --check \
&& curl -L "https://dl.k8s.io/release/v1.35.0/bin/linux/${OS_ARCHITECTURE}/kubectl" -o /usr/local/bin/kubectl \
&& echo "$(curl -L "https://dl.k8s.io/release/v1.35.0/bin/linux/${OS_ARCHITECTURE}/kubectl.sha256")" /usr/local/bin/kubectl | sha256sum --check \
&& chmod a+x /usr/local/bin/kubectl
COPY *.yml ./

View File

@@ -22,7 +22,7 @@ Ensure you have installed Docker then
```ShellSession
docker run --rm -it --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \
--mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
quay.io/kubespray/kubespray:v2.29.0 bash
quay.io/kubespray/kubespray:v2.30.0 bash
# Inside the container you may now run the kubespray playbooks:
ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa cluster.yml
```
@@ -90,7 +90,7 @@ vagrant up
- **Debian** Bookworm, Bullseye, Trixie
- **Ubuntu** 22.04, 24.04
- **CentOS Stream / RHEL** [9, 10](docs/operating_systems/rhel.md#rhel-8)
- **Fedora** 39, 40
- **Fedora** 39, 40, 41
- **Fedora CoreOS** (see [fcos Note](docs/operating_systems/fcos.md))
- **openSUSE** Leap 15.x/Tumbleweed
- **Oracle Linux** [9, 10](docs/operating_systems/rhel.md#rhel-8)
@@ -111,15 +111,15 @@ Note:
<!-- BEGIN ANSIBLE MANAGED BLOCK -->
- Core
- [kubernetes](https://github.com/kubernetes/kubernetes) 1.34.3
- [kubernetes](https://github.com/kubernetes/kubernetes) 1.35.0
- [etcd](https://github.com/etcd-io/etcd) 3.5.26
- [docker](https://www.docker.com/) 28.3
- [containerd](https://containerd.io/) 2.2.1
- [cri-o](http://cri-o.io/) 1.34.4 (experimental: see [CRI-O Note](docs/CRI/cri-o.md). Only on fedora, ubuntu and centos based OS)
- [cri-o](http://cri-o.io/) 1.35.0 (experimental: see [CRI-O Note](docs/CRI/cri-o.md). Only on fedora, ubuntu and centos based OS)
- Network Plugin
- [cni-plugins](https://github.com/containernetworking/plugins) 1.8.0
- [calico](https://github.com/projectcalico/calico) 3.30.5
- [cilium](https://github.com/cilium/cilium) 1.18.5
- [calico](https://github.com/projectcalico/calico) 3.30.6
- [cilium](https://github.com/cilium/cilium) 1.18.6
- [flannel](https://github.com/flannel-io/flannel) 0.27.3
- [kube-ovn](https://github.com/alauda/kube-ovn) 1.12.21
- [kube-router](https://github.com/cloudnativelabs/kube-router) 2.1.1
@@ -127,8 +127,7 @@ Note:
- [kube-vip](https://github.com/kube-vip/kube-vip) 1.0.3
- Application
- [cert-manager](https://github.com/jetstack/cert-manager) 1.15.3
- [coredns](https://github.com/coredns/coredns) 1.12.1
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) 1.13.3
- [coredns](https://github.com/coredns/coredns) 1.12.4
- [argocd](https://argoproj.github.io/) 2.14.5
- [helm](https://helm.sh/) 3.18.4
- [metallb](https://metallb.universe.tf/) 0.13.9
@@ -202,8 +201,6 @@ See also [Network checker](docs/advanced/netcheck.md).
## Ingress Plugins
- [nginx](https://kubernetes.github.io/ingress-nginx): the NGINX Ingress Controller.
- [metallb](docs/ingress/metallb.md): the MetalLB bare-metal service LoadBalancer provider.
## Community docs and resources

Vagrantfile (vendored, 2 changes)
View File

@@ -35,6 +35,8 @@ SUPPORTED_OS = {
"fedora40" => {box: "fedora/40-cloud-base", user: "vagrant"},
"fedora39-arm64" => {box: "bento/fedora-39-arm64", user: "vagrant"},
"fedora40-arm64" => {box: "bento/fedora-40", user: "vagrant"},
"fedora41" => {box: "fedora/41-cloud-base", user: "vagrant"},
"fedora41-bento" => {box: "bento/fedora-41", user: "vagrant"},
"opensuse" => {box: "opensuse/Leap-15.6.x86_64", user: "vagrant"},
"opensuse-tumbleweed" => {box: "opensuse/Tumbleweed.x86_64", user: "vagrant"},
"oraclelinux" => {box: "generic/oracle7", user: "vagrant"},

View File

@@ -1,5 +0,0 @@
*.tfstate*
.terraform.lock.hcl
.terraform
sample-inventory/inventory.ini

View File

@@ -1,138 +0,0 @@
# Kubernetes on NIFCLOUD with Terraform
Provision a Kubernetes cluster on [NIFCLOUD](https://pfs.nifcloud.com/) using Terraform and Kubespray
## Overview
The setup looks like the following:
```text
                           Kubernetes cluster
                      +----------------------------+
+---------------+     |  +--------------------+    |
|               |     |  | +--------------------+  |
| API server LB +------->| |                    |  |
|               |     |  | | Control Plane/etcd |  |
+---------------+     |  | | node(s)            |  |
                      |  +-+                    |  |
                      |    +--------------------+  |
                      |              ^             |
                      |              |             |
                      |              v             |
                      |  +--------------------+    |
                      |  | +--------------------+  |
                      |  | |                    |  |
                      |  | | Worker             |  |
                      |  | | node(s)            |  |
                      |  +-+                    |  |
                      |    +--------------------+  |
                      +----------------------------+
```
## Requirements
* Terraform 1.3.7
## Quickstart
### Export Variables
* Your NIFCLOUD credentials:
```bash
export NIFCLOUD_ACCESS_KEY_ID=<YOUR ACCESS KEY>
export NIFCLOUD_SECRET_ACCESS_KEY=<YOUR SECRET ACCESS KEY>
```
* The SSH KEY used to connect to the instance:
* FYI: [Cloud Help(SSH Key)](https://pfs.nifcloud.com/help/ssh.htm)
```bash
export TF_VAR_SSHKEY_NAME=<YOUR SSHKEY NAME>
```
* The IP address to connect to bastion server:
```bash
export TF_VAR_working_instance_ip=$(curl ifconfig.me)
```
### Create The Infrastructure
* Run terraform:
```bash
terraform init
terraform apply -var-file ./sample-inventory/cluster.tfvars
```
### Setup The Kubernetes
* Generate cluster configuration file:
```bash
./generate-inventory.sh > sample-inventory/inventory.ini
```
* Export Variables:
```bash
BASTION_IP=$(terraform output -json | jq -r '.kubernetes_cluster.value.bastion_info | to_entries[].value.public_ip')
API_LB_IP=$(terraform output -json | jq -r '.kubernetes_cluster.value.control_plane_lb')
CP01_IP=$(terraform output -json | jq -r '.kubernetes_cluster.value.control_plane_info | to_entries[0].value.private_ip')
export ANSIBLE_SSH_ARGS="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ProxyCommand=\"ssh root@${BASTION_IP} -W %h:%p\""
```
* Set ssh-agent"
```bash
eval `ssh-agent`
ssh-add <THE PATH TO YOUR SSH KEY>
```
* Run cluster.yml playbook:
```bash
cd ./../../../
ansible-playbook -i contrib/terraform/nifcloud/inventory/inventory.ini cluster.yml
```
### Connecting to Kubernetes
* [Install kubectl](https://kubernetes.io/docs/tasks/tools/) on the localhost
* Fetch the kubeconfig file:
```bash
mkdir -p ~/.kube
scp -o ProxyCommand="ssh root@${BASTION_IP} -W %h:%p" root@${CP01_IP}:/etc/kubernetes/admin.conf ~/.kube/config
```
* Rewrite /etc/hosts
```bash
sudo echo "${API_LB_IP} lb-apiserver.kubernetes.local" >> /etc/hosts
```
* Run kubectl
```bash
kubectl get node
```
## Variables
* `region`: Region in which to run the cluster
* `az`: Availability zone in which to run the cluster
* `private_ip_bn`: Private IP address of the bastion server
* `private_network_cidr`: Subnet of the private network
* `instances_cp`: Machines to provision as control plane nodes. The key of each entry is used as part of the machine's name
* `private_ip`: private IP address of the machine
* `instances_wk`: Machines to provision as worker nodes. The key of each entry is used as part of the machine's name
* `private_ip`: private IP address of the machine
* `instance_key_name`: The key name of the Key Pair to use for the instance
* `instance_type_bn`: The instance type of bastion server
* `instance_type_wk`: The instance type of worker node
* `instance_type_cp`: The instance type of control plane
* `image_name`: OS image used for the instance
* `working_instance_ip`: The IP address to connect to bastion server
* `accounting_type`: Accounting type. (1: monthly, 2: pay per use)

View File

@@ -1,64 +0,0 @@
#!/bin/bash
#
# Generates an inventory file based on the terraform output.
# After provisioning a cluster, simply run this command and supply the terraform state file
# Default state file is terraform.tfstate
#
set -e
TF_OUT=$(terraform output -json)
CONTROL_PLANES=$(jq -r '.kubernetes_cluster.value.control_plane_info | to_entries[]' <(echo "${TF_OUT}"))
WORKERS=$(jq -r '.kubernetes_cluster.value.worker_info | to_entries[]' <(echo "${TF_OUT}"))
mapfile -t CONTROL_PLANE_NAMES < <(jq -r '.key' <(echo "${CONTROL_PLANES}"))
mapfile -t WORKER_NAMES < <(jq -r '.key' <(echo "${WORKERS}"))
API_LB=$(jq -r '.kubernetes_cluster.value.control_plane_lb' <(echo "${TF_OUT}"))
echo "[all]"
# Generate control plane hosts
i=1
for name in "${CONTROL_PLANE_NAMES[@]}"; do
private_ip=$(jq -r '. | select( .key=='"\"${name}\""' ) | .value.private_ip' <(echo "${CONTROL_PLANES}"))
echo "${name} ansible_user=root ansible_host=${private_ip} access_ip=${private_ip} ip=${private_ip} etcd_member_name=etcd${i}"
i=$(( i + 1 ))
done
# Generate worker hosts
for name in "${WORKER_NAMES[@]}"; do
private_ip=$(jq -r '. | select( .key=='"\"${name}\""' ) | .value.private_ip' <(echo "${WORKERS}"))
echo "${name} ansible_user=root ansible_host=${private_ip} access_ip=${private_ip} ip=${private_ip}"
done
API_LB=$(jq -r '.kubernetes_cluster.value.control_plane_lb' <(echo "${TF_OUT}"))
echo ""
echo "[all:vars]"
echo "upstream_dns_servers=['8.8.8.8','8.8.4.4']"
echo "loadbalancer_apiserver={'address':'${API_LB}','port':'6443'}"
echo ""
echo "[kube_control_plane]"
for name in "${CONTROL_PLANE_NAMES[@]}"; do
echo "${name}"
done
echo ""
echo "[etcd]"
for name in "${CONTROL_PLANE_NAMES[@]}"; do
echo "${name}"
done
echo ""
echo "[kube_node]"
for name in "${WORKER_NAMES[@]}"; do
echo "${name}"
done
echo ""
echo "[k8s_cluster:children]"
echo "kube_control_plane"
echo "kube_node"

View File

@@ -1,36 +0,0 @@
provider "nifcloud" {
region = var.region
}
module "kubernetes_cluster" {
source = "./modules/kubernetes-cluster"
availability_zone = var.az
prefix = "dev"
private_network_cidr = var.private_network_cidr
instance_key_name = var.instance_key_name
instances_cp = var.instances_cp
instances_wk = var.instances_wk
image_name = var.image_name
instance_type_bn = var.instance_type_bn
instance_type_cp = var.instance_type_cp
instance_type_wk = var.instance_type_wk
private_ip_bn = var.private_ip_bn
additional_lb_filter = [var.working_instance_ip]
}
resource "nifcloud_security_group_rule" "ssh_from_bastion" {
security_group_names = [
module.kubernetes_cluster.security_group_name.bastion
]
type = "IN"
from_port = 22
to_port = 22
protocol = "TCP"
cidr_ip = var.working_instance_ip
}

View File

@@ -1,301 +0,0 @@
#################################################
##
## Local variables
##
locals {
# e.g. east-11 is 11
az_num = reverse(split("-", var.availability_zone))[0]
# e.g. east-11 is e11
az_short_name = "${substr(reverse(split("-", var.availability_zone))[1], 0, 1)}${local.az_num}"
# Port used by the protocol
port_ssh = 22
port_kubectl = 6443
port_kubelet = 10250
# calico: https://docs.tigera.io/calico/latest/getting-started/kubernetes/requirements#network-requirements
port_bgp = 179
port_vxlan = 4789
port_etcd = 2379
}
#################################################
##
## General
##
# data
data "nifcloud_image" "this" {
image_name = var.image_name
}
# private lan
resource "nifcloud_private_lan" "this" {
private_lan_name = "${var.prefix}lan"
availability_zone = var.availability_zone
cidr_block = var.private_network_cidr
accounting_type = var.accounting_type
}
#################################################
##
## Bastion
##
resource "nifcloud_security_group" "bn" {
group_name = "${var.prefix}bn"
description = "${var.prefix} bastion"
availability_zone = var.availability_zone
}
resource "nifcloud_instance" "bn" {
instance_id = "${local.az_short_name}${var.prefix}bn01"
security_group = nifcloud_security_group.bn.group_name
instance_type = var.instance_type_bn
user_data = templatefile("${path.module}/templates/userdata.tftpl", {
private_ip_address = var.private_ip_bn
ssh_port = local.port_ssh
hostname = "${local.az_short_name}${var.prefix}bn01"
})
availability_zone = var.availability_zone
accounting_type = var.accounting_type
image_id = data.nifcloud_image.this.image_id
key_name = var.instance_key_name
network_interface {
network_id = "net-COMMON_GLOBAL"
}
network_interface {
network_id = nifcloud_private_lan.this.network_id
ip_address = "static"
}
# The image_id changes when the OS image type is demoted from standard to public.
lifecycle {
ignore_changes = [
image_id,
user_data,
]
}
}
#################################################
##
## Control Plane
##
resource "nifcloud_security_group" "cp" {
group_name = "${var.prefix}cp"
description = "${var.prefix} control plane"
availability_zone = var.availability_zone
}
resource "nifcloud_instance" "cp" {
for_each = var.instances_cp
instance_id = "${local.az_short_name}${var.prefix}${each.key}"
security_group = nifcloud_security_group.cp.group_name
instance_type = var.instance_type_cp
user_data = templatefile("${path.module}/templates/userdata.tftpl", {
private_ip_address = each.value.private_ip
ssh_port = local.port_ssh
hostname = "${local.az_short_name}${var.prefix}${each.key}"
})
availability_zone = var.availability_zone
accounting_type = var.accounting_type
image_id = data.nifcloud_image.this.image_id
key_name = var.instance_key_name
network_interface {
network_id = "net-COMMON_GLOBAL"
}
network_interface {
network_id = nifcloud_private_lan.this.network_id
ip_address = "static"
}
# The image_id changes when the OS image type is demoted from standard to public.
lifecycle {
ignore_changes = [
image_id,
user_data,
]
}
}
resource "nifcloud_load_balancer" "this" {
load_balancer_name = "${local.az_short_name}${var.prefix}cp"
accounting_type = var.accounting_type
balancing_type = 1 // Round-Robin
load_balancer_port = local.port_kubectl
instance_port = local.port_kubectl
instances = [for v in nifcloud_instance.cp : v.instance_id]
filter = concat(
[for k, v in nifcloud_instance.cp : v.public_ip],
[for k, v in nifcloud_instance.wk : v.public_ip],
var.additional_lb_filter,
)
filter_type = 1 // Allow
}
#################################################
##
## Worker
##
resource "nifcloud_security_group" "wk" {
group_name = "${var.prefix}wk"
description = "${var.prefix} worker"
availability_zone = var.availability_zone
}
resource "nifcloud_instance" "wk" {
for_each = var.instances_wk
instance_id = "${local.az_short_name}${var.prefix}${each.key}"
security_group = nifcloud_security_group.wk.group_name
instance_type = var.instance_type_wk
user_data = templatefile("${path.module}/templates/userdata.tftpl", {
private_ip_address = each.value.private_ip
ssh_port = local.port_ssh
hostname = "${local.az_short_name}${var.prefix}${each.key}"
})
availability_zone = var.availability_zone
accounting_type = var.accounting_type
image_id = data.nifcloud_image.this.image_id
key_name = var.instance_key_name
network_interface {
network_id = "net-COMMON_GLOBAL"
}
network_interface {
network_id = nifcloud_private_lan.this.network_id
ip_address = "static"
}
# The image_id changes when the OS image type is demoted from standard to public.
lifecycle {
ignore_changes = [
image_id,
user_data,
]
}
}
#################################################
##
## Security Group Rule: Kubernetes
##
# ssh
resource "nifcloud_security_group_rule" "ssh_from_bastion" {
security_group_names = [
nifcloud_security_group.wk.group_name,
nifcloud_security_group.cp.group_name,
]
type = "IN"
from_port = local.port_ssh
to_port = local.port_ssh
protocol = "TCP"
source_security_group_name = nifcloud_security_group.bn.group_name
}
# kubectl
resource "nifcloud_security_group_rule" "kubectl_from_worker" {
security_group_names = [
nifcloud_security_group.cp.group_name,
]
type = "IN"
from_port = local.port_kubectl
to_port = local.port_kubectl
protocol = "TCP"
source_security_group_name = nifcloud_security_group.wk.group_name
}
# kubelet
resource "nifcloud_security_group_rule" "kubelet_from_worker" {
security_group_names = [
nifcloud_security_group.cp.group_name,
]
type = "IN"
from_port = local.port_kubelet
to_port = local.port_kubelet
protocol = "TCP"
source_security_group_name = nifcloud_security_group.wk.group_name
}
resource "nifcloud_security_group_rule" "kubelet_from_control_plane" {
security_group_names = [
nifcloud_security_group.wk.group_name,
]
type = "IN"
from_port = local.port_kubelet
to_port = local.port_kubelet
protocol = "TCP"
source_security_group_name = nifcloud_security_group.cp.group_name
}
#################################################
##
## Security Group Rule: calico
##
# vxlan
resource "nifcloud_security_group_rule" "vxlan_from_control_plane" {
security_group_names = [
nifcloud_security_group.wk.group_name,
]
type = "IN"
from_port = local.port_vxlan
to_port = local.port_vxlan
protocol = "UDP"
source_security_group_name = nifcloud_security_group.cp.group_name
}
resource "nifcloud_security_group_rule" "vxlan_from_worker" {
security_group_names = [
nifcloud_security_group.cp.group_name,
]
type = "IN"
from_port = local.port_vxlan
to_port = local.port_vxlan
protocol = "UDP"
source_security_group_name = nifcloud_security_group.wk.group_name
}
# bgp
resource "nifcloud_security_group_rule" "bgp_from_control_plane" {
security_group_names = [
nifcloud_security_group.wk.group_name,
]
type = "IN"
from_port = local.port_bgp
to_port = local.port_bgp
protocol = "TCP"
source_security_group_name = nifcloud_security_group.cp.group_name
}
resource "nifcloud_security_group_rule" "bgp_from_worker" {
security_group_names = [
nifcloud_security_group.cp.group_name,
]
type = "IN"
from_port = local.port_bgp
to_port = local.port_bgp
protocol = "TCP"
source_security_group_name = nifcloud_security_group.wk.group_name
}
# etcd
resource "nifcloud_security_group_rule" "etcd_from_worker" {
security_group_names = [
nifcloud_security_group.cp.group_name,
]
type = "IN"
from_port = local.port_etcd
to_port = local.port_etcd
protocol = "TCP"
source_security_group_name = nifcloud_security_group.wk.group_name
}

View File

@@ -1,48 +0,0 @@
output "control_plane_lb" {
description = "The DNS name of LB for control plane"
value = nifcloud_load_balancer.this.dns_name
}
output "security_group_name" {
description = "The security group used in the cluster"
value = {
bastion = nifcloud_security_group.bn.group_name,
control_plane = nifcloud_security_group.cp.group_name,
worker = nifcloud_security_group.wk.group_name,
}
}
output "private_network_id" {
description = "The private network used in the cluster"
value = nifcloud_private_lan.this.id
}
output "bastion_info" {
description = "The basion information in cluster"
value = { (nifcloud_instance.bn.instance_id) : {
instance_id = nifcloud_instance.bn.instance_id,
unique_id = nifcloud_instance.bn.unique_id,
private_ip = nifcloud_instance.bn.private_ip,
public_ip = nifcloud_instance.bn.public_ip,
} }
}
output "worker_info" {
description = "The worker information in cluster"
value = { for v in nifcloud_instance.wk : v.instance_id => {
instance_id = v.instance_id,
unique_id = v.unique_id,
private_ip = v.private_ip,
public_ip = v.public_ip,
} }
}
output "control_plane_info" {
description = "The control plane information in cluster"
value = { for v in nifcloud_instance.cp : v.instance_id => {
instance_id = v.instance_id,
unique_id = v.unique_id,
private_ip = v.private_ip,
public_ip = v.public_ip,
} }
}

View File

@@ -1,45 +0,0 @@
#!/bin/bash
#################################################
##
## IP Address
##
configure_private_ip_address () {
cat << EOS > /etc/netplan/01-netcfg.yaml
network:
version: 2
renderer: networkd
ethernets:
ens192:
dhcp4: yes
dhcp6: yes
dhcp-identifier: mac
ens224:
dhcp4: no
dhcp6: no
addresses: [${private_ip_address}]
EOS
netplan apply
}
configure_private_ip_address
#################################################
##
## SSH
##
configure_ssh_port () {
sed -i 's/^#*Port [0-9]*/Port ${ssh_port}/' /etc/ssh/sshd_config
}
configure_ssh_port
#################################################
##
## Hostname
##
hostnamectl set-hostname ${hostname}
#################################################
##
## Disable swap files generated by systemd-gpt-auto-generator
##
systemctl mask "dev-sda3.swap"

View File

@@ -1,9 +0,0 @@
terraform {
required_version = ">=1.3.7"
required_providers {
nifcloud = {
source = "nifcloud/nifcloud"
version = ">= 1.8.0, < 2.0.0"
}
}
}

View File

@@ -1,81 +0,0 @@
variable "availability_zone" {
description = "The availability zone"
type = string
}
variable "prefix" {
description = "The prefix for the entire cluster"
type = string
validation {
condition = length(var.prefix) <= 5
error_message = "Must be a less than 5 character long."
}
}
variable "private_network_cidr" {
description = "The subnet of private network"
type = string
validation {
condition = can(cidrnetmask(var.private_network_cidr))
error_message = "Must be a valid IPv4 CIDR block address."
}
}
variable "private_ip_bn" {
description = "Private IP of bastion server"
type = string
}
variable "instances_cp" {
type = map(object({
private_ip = string
}))
}
variable "instances_wk" {
type = map(object({
private_ip = string
}))
}
variable "instance_key_name" {
description = "The key name of the Key Pair to use for the instance"
type = string
}
variable "instance_type_bn" {
description = "The instance type of bastion server"
type = string
}
variable "instance_type_wk" {
description = "The instance type of worker"
type = string
}
variable "instance_type_cp" {
description = "The instance type of control plane"
type = string
}
variable "image_name" {
description = "The name of image"
type = string
}
variable "additional_lb_filter" {
description = "Additional LB filter"
type = list(string)
}
variable "accounting_type" {
type = string
default = "1"
validation {
condition = anytrue([
var.accounting_type == "1", // Monthly
var.accounting_type == "2", // Pay per use
])
error_message = "Must be a 1 or 2."
}
}

View File

@@ -1,3 +0,0 @@
output "kubernetes_cluster" {
value = module.kubernetes_cluster
}

View File

@@ -1,22 +0,0 @@
region = "jp-west-1"
az = "west-11"
instance_key_name = "deployerkey"
instance_type_bn = "e-medium"
instance_type_cp = "e-medium"
instance_type_wk = "e-medium"
private_network_cidr = "192.168.30.0/24"
instances_cp = {
"cp01" : { private_ip : "192.168.30.11/24" }
"cp02" : { private_ip : "192.168.30.12/24" }
"cp03" : { private_ip : "192.168.30.13/24" }
}
instances_wk = {
"wk01" : { private_ip : "192.168.30.21/24" }
"wk02" : { private_ip : "192.168.30.22/24" }
}
private_ip_bn = "192.168.30.10/24"
image_name = "Ubuntu Server 22.04 LTS"

View File

@@ -1 +0,0 @@
../../../../inventory/sample/group_vars

View File

@@ -1,9 +0,0 @@
terraform {
required_version = ">=1.3.7"
required_providers {
nifcloud = {
source = "nifcloud/nifcloud"
version = "1.8.0"
}
}
}

View File

@@ -1,77 +0,0 @@
variable "region" {
description = "The region"
type = string
}
variable "az" {
description = "The availability zone"
type = string
}
variable "private_ip_bn" {
description = "Private IP of bastion server"
type = string
}
variable "private_network_cidr" {
description = "The subnet of private network"
type = string
validation {
condition = can(cidrnetmask(var.private_network_cidr))
error_message = "Must be a valid IPv4 CIDR block address."
}
}
variable "instances_cp" {
type = map(object({
private_ip = string
}))
}
variable "instances_wk" {
type = map(object({
private_ip = string
}))
}
variable "instance_key_name" {
description = "The key name of the Key Pair to use for the instance"
type = string
}
variable "instance_type_bn" {
description = "The instance type of bastion server"
type = string
}
variable "instance_type_wk" {
description = "The instance type of worker"
type = string
}
variable "instance_type_cp" {
description = "The instance type of control plane"
type = string
}
variable "image_name" {
description = "The name of image"
type = string
}
variable "working_instance_ip" {
description = "The IP address to connect to bastion server."
type = string
}
variable "accounting_type" {
type = string
default = "2"
validation {
condition = anytrue([
var.accounting_type == "1", // Monthly
var.accounting_type == "2", // Pay per use
])
error_message = "Must be a 1 or 2."
}
}

View File

@@ -1006,7 +1006,7 @@ resource "openstack_compute_instance_v2" "glusterfs_node_no_floating_ip" {
name = "${var.cluster_name}-gfs-node-nf-${count.index + 1}"
count = var.number_of_gfs_nodes_no_floating_ip
availability_zone = element(var.az_list, count.index)
image_name = var.gfs_root_volume_size_in_gb == 0 ? local.image_to_use_gfs : null
image_id = var.gfs_root_volume_size_in_gb == 0 ? local.image_to_use_gfs : null
flavor_id = var.flavor_gfs_node
key_pair = openstack_compute_keypair_v2.k8s.name
@@ -1078,7 +1078,7 @@ resource "openstack_networking_floatingip_associate_v2" "k8s_nodes" {
port_id = openstack_networking_port_v2.k8s_nodes_port[each.key].id
}
resource "openstack_blockstorage_volume_v2" "glusterfs_volume" {
resource "openstack_blockstorage_volume_v3" "glusterfs_volume" {
name = "${var.cluster_name}-glusterfs_volume-${count.index + 1}"
count = var.gfs_root_volume_size_in_gb == 0 ? var.number_of_gfs_nodes_no_floating_ip : 0
description = "Non-ephemeral volume for GlusterFS"
@@ -1088,5 +1088,5 @@ resource "openstack_blockstorage_volume_v2" "glusterfs_volume" {
resource "openstack_compute_volume_attach_v2" "glusterfs_volume" {
count = var.gfs_root_volume_size_in_gb == 0 ? var.number_of_gfs_nodes_no_floating_ip : 0
instance_id = element(openstack_compute_instance_v2.glusterfs_node_no_floating_ip.*.id, count.index)
volume_id = element(openstack_blockstorage_volume_v2.glusterfs_volume.*.id, count.index)
volume_id = element(openstack_blockstorage_volume_v3.glusterfs_volume.*.id, count.index)
}

View File

@@ -245,7 +245,7 @@ cilium_operator_extra_volume_mounts:
## Choose Cilium version
```yml
cilium_version: "1.18.5"
cilium_version: "1.18.6"
```
## Add variable to config

docs/_sidebar.md (generated, 1 change)
View File

@@ -57,7 +57,6 @@
* [Setting-up-your-first-cluster](/docs/getting_started/setting-up-your-first-cluster.md)
* Ingress
* [Alb Ingress Controller](/docs/ingress/alb_ingress_controller.md)
* [Ingress Nginx](/docs/ingress/ingress_nginx.md)
* [Kube-vip](/docs/ingress/kube-vip.md)
* [Metallb](/docs/ingress/metallb.md)
* Operating Systems

View File

@@ -30,14 +30,7 @@ If you don't have a TLS Root CA certificate and key available, you can create th
A common use-case for cert-manager is requesting TLS signed certificates to secure your ingress resources. This can be done by simply adding annotations to your Ingress resources and cert-manager will facilitate creating the Certificate resource for you. A small sub-component of cert-manager, ingress-shim, is responsible for this.
To enable the Nginx Ingress controller as part of your Kubespray deployment, simply edit your K8s cluster addons inventory e.g. `inventory\sample\group_vars\k8s_cluster\addons.yml` and set `ingress_nginx_enabled` to true.
```ini
# Nginx ingress controller deployment
ingress_nginx_enabled: true
```
For example, if you're using the Nginx ingress controller, you can secure the Prometheus ingress by adding the annotation `cert-manager.io/cluster-issuer: ca-issuer` and the `spec.tls` section to the `Ingress` resource definition.
For example, if you're using the Traefik ingress controller, you can secure the Prometheus ingress by adding the annotation `cert-manager.io/cluster-issuer: ca-issuer` and the `spec.tls` section to the `Ingress` resource definition.
```yaml
apiVersion: networking.k8s.io/v1
@@ -48,9 +41,9 @@ metadata:
labels:
prometheus: k8s
annotations:
kubernetes.io/ingress.class: "nginx"
cert-manager.io/cluster-issuer: ca-issuer
spec:
ingressClassName: "traefik"
tls:
- hosts:
- prometheus.example.com
@@ -72,8 +65,8 @@ Once deployed to your K8s cluster, every 3 months cert-manager will automaticall
Please consult the official upstream documentation:
- [cert-manager Ingress Usage](https://cert-manager.io/v1.5-docs/usage/ingress/)
- [cert-manager Ingress Tutorial](https://cert-manager.io/v1.5-docs/tutorials/acme/ingress/#step-3-assign-a-dns-name)
- [cert-manager Ingress Usage](https://cert-manager.io/usage/ingress/)
- [cert-manager Ingress Tutorial](https://cert-manager.io/tutorials/acme/ingress/#step-3-assign-a-dns-name)
### ACME
@@ -81,12 +74,12 @@ The ACME Issuer type represents a single account registered with the Automated C
Certificates issued by public ACME servers are typically trusted by clients computers by default. This means that, for example, visiting a website that is backed by an ACME certificate issued for that URL, will be trusted by default by most clients web browsers. ACME certificates are typically free.
- [ACME Configuration](https://cert-manager.io/v1.5-docs/configuration/acme/)
- [ACME HTTP Validation](https://cert-manager.io/v1.5-docs/tutorials/acme/http-validation/)
- [HTTP01 Challenges](https://cert-manager.io/v1.5-docs/configuration/acme/http01/)
- [ACME DNS Validation](https://cert-manager.io/v1.5-docs/tutorials/acme/dns-validation/)
- [DNS01 Challenges](https://cert-manager.io/v1.5-docs/configuration/acme/dns01/)
- [ACME FAQ](https://cert-manager.io/v1.5-docs/faq/acme/)
- [ACME Configuration](https://cert-manager.io/docs/configuration/acme/)
- [ACME HTTP Validation](https://cert-manager.io/docs/tutorials/acme/http-validation/)
- [HTTP01 Challenges](https://cert-manager.io/docs/configuration/acme/http01/)
- [ACME DNS Validation](https://cert-manager.io/docs/tutorials/acme/dns-validation/)
- [DNS01 Challenges](https://cert-manager.io/docs/configuration/acme/dns01/)
- [ACME FAQ](https://cert-manager.io/docs/troubleshooting/acme/)
#### ACME With An Internal Certificate Authority

View File

@@ -30,9 +30,9 @@ If the latest version supported according to pip is 6.7.0 it means you are runni
Based on the table below and the available python version for your ansible host you should choose the appropriate ansible version to use with kubespray.
| Ansible Version | Python Version |
|-----------------|----------------|
| >= 2.17.3 | 3.10-3.12 |
| Ansible Version | Python Version |
|-------------------|----------------|
| >=2.18.0, <2.19.0 | 3.11-3.13 |
## Customize Ansible vars
@@ -78,7 +78,6 @@ The following tags are defined in playbooks:
| crio | Configuring crio container engine for hosts |
| crun | Configuring crun runtime |
| csi-driver | Configuring csi driver |
| dashboard | Installing and configuring the Kubernetes Dashboard |
| dns | Remove dns entries when resetting |
| docker | Configuring docker engine runtime for hosts |
| download | Fetching container images to a delegate host |
@@ -193,11 +192,11 @@ You will then need to use [bind mounts](https://docs.docker.com/storage/bind-mou
to access the inventory and SSH key in the container, like this:
```ShellSession
git checkout v2.29.0
docker pull quay.io/kubespray/kubespray:v2.29.0
git checkout v2.30.0
docker pull quay.io/kubespray/kubespray:v2.30.0
docker run --rm -it --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \
--mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
quay.io/kubespray/kubespray:v2.29.0 bash
quay.io/kubespray/kubespray:v2.30.0 bash
# Inside the container you may now run the kubespray playbooks:
ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa cluster.yml
```

View File

@@ -145,7 +145,6 @@ upstream_dns_servers:
- 1.0.0.1
# Extensions
ingress_nginx_enabled: True
helm_enabled: True
cert_manager_enabled: True
metrics_server_enabled: True

View File

@@ -13,6 +13,7 @@ debian12 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :x: |
debian13 | :white_check_mark: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: |
fedora39 | :white_check_mark: | :x: | :x: | :x: | :x: | :white_check_mark: | :x: |
fedora40 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
fedora41 | :white_check_mark: | :x: | :x: | :x: | :x: | :white_check_mark: | :x: |
flatcar4081 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
openeuler24 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
rockylinux10 | :white_check_mark: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: |
@@ -31,6 +32,7 @@ debian12 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
debian13 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
fedora39 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
fedora40 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
fedora41 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
flatcar4081 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
openeuler24 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
rockylinux10 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
@@ -49,6 +51,7 @@ debian12 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
debian13 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
fedora39 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
fedora40 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
fedora41 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
flatcar4081 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
openeuler24 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
rockylinux10 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |

View File

@@ -83,32 +83,6 @@ authentication. One can get a kubeconfig from kube_control_plane hosts
For more information on kubeconfig and accessing a Kubernetes cluster, refer to
the Kubernetes [documentation](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/).
## Accessing Kubernetes Dashboard
Supported version is kubernetes-dashboard v2.0.x:
- Login option: token/kubeconfig by default
- Deployed by default in "kube-system" namespace, can be overridden with `dashboard_namespace: kubernetes-dashboard` in inventory,
- Only serves over https
Access is described in the [dashboard docs](https://github.com/kubernetes/dashboard/tree/master/docs/user/accessing-dashboard). With kubespray's default deployment in the kube-system namespace (instead of kubernetes-dashboard):
- Proxy URL is <http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#/login>
- kubectl commands must be run with "-n kube-system"
Accessing through Ingress is highly recommended. For proxy access, please note that the proxy must listen on [localhost](https://github.com/kubernetes/dashboard/issues/692#issuecomment-220492484) (`proxy --address="x.x.x.x"` will not work)
For token authentication, a guide to creating a Service Account is provided in the [dashboard sample user](https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md) doc. Still, take care with the default namespace.
Access can also be achieved via an SSH tunnel on a control plane:
```bash
# localhost:8081 will be sent to control-plane-1's own localhost:8081
ssh -L8001:localhost:8001 user@control-plane-1
sudo -i
kubectl proxy
```
## Accessing Kubernetes API
The main client of Kubernetes is `kubectl`. It is installed on each kube_control_plane

View File

@@ -1,203 +0,0 @@
# Installation Guide
## Contents
- [Prerequisite Generic Deployment Command](#prerequisite-generic-deployment-command)
- [Provider Specific Steps](#provider-specific-steps)
- [Docker for Mac](#docker-for-mac)
- [minikube](#minikube)
- [AWS](#aws)
- [GCE - GKE](#gce-gke)
- [Azure](#azure)
- [Bare-metal](#bare-metal)
- [Verify installation](#verify-installation)
- [Detect installed version](#detect-installed-version)
- [Using Helm](#using-helm)
## Prerequisite Generic Deployment Command
!!! attention
The default configuration watches Ingress objects from *all the namespaces*.
To change this behavior use the flag `--watch-namespace` to limit the scope to a particular namespace.
!!! warning
If multiple Ingresses define different paths for the same host, the ingress controller will merge the definitions.
!!! attention
If you're using GKE you need to initialize your user as a cluster-admin with the following command:
```console
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin \
--user $(gcloud config get-value account)
```
The following **Mandatory Command** is required for all deployments except for AWS. See below for the AWS version.
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.13.3/deploy/static/provider/cloud/deploy.yaml
```
### Provider Specific Steps
There are cloud provider specific yaml files.
#### Docker for Mac
Kubernetes is available in Docker for Mac (from [version 18.06.0-ce](https://docs.docker.com/docker-for-mac/release-notes/#stable-releases-of-2018))
First you need to [enable kubernetes](https://docs.docker.com/docker-for-mac/#kubernetes).
Then you have to create a service:
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml
```
#### minikube
For standard usage:
```console
minikube addons enable ingress
```
For development:
1. Disable the ingress addon:
```console
minikube addons disable ingress
```
1. Execute `make dev-env`
1. Confirm the `nginx-ingress-controller` deployment exists:
```console
$ kubectl get pods -n ingress-nginx
NAME READY STATUS RESTARTS AGE
default-http-backend-66b447d9cf-rrlf9 1/1 Running 0 12s
nginx-ingress-controller-fdcdcd6dd-vvpgs 1/1 Running 0 11s
```
#### AWS
In AWS we use an Elastic Load Balancer (ELB) to expose the NGINX Ingress controller behind a Service of `Type=LoadBalancer`.
Since Kubernetes v1.9.0 it is possible to use a classic load balancer (ELB) or network load balancer (NLB)
Please check the [elastic load balancing AWS details page](https://aws.amazon.com/elasticloadbalancing/details/)
##### Elastic Load Balancer - ELB
This setup requires choosing in which layer (L4 or L7) to configure the Load Balancer:
- [Layer 4](https://en.wikipedia.org/wiki/OSI_model#Layer_4:_Transport_Layer): Use a Network Load Balancer (NLB) with TCP as the listener protocol for ports 80 and 443.
- [Layer 7](https://en.wikipedia.org/wiki/OSI_model#Layer_7:_Application_Layer): Use an Elastic Load Balancer (ELB) with HTTP as the listener protocol for port 80 and terminate TLS in the ELB
For L4:
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/aws/deploy.yaml
```
For L7:
Change the value of `service.beta.kubernetes.io/aws-load-balancer-ssl-cert` in the file `provider/aws/deploy-tls-termination.yaml` replacing the dummy id with a valid one. The dummy value is `"arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX"`
Check that no change is necessary with regards to the ELB idle timeout. In some scenarios, users may want to modify the ELB idle timeout, so please check the [ELB Idle Timeouts section](#elb-idle-timeouts) for additional information. If a change is required, users will need to update the value of `service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout` in `provider/aws/deploy-tls-termination.yaml`
Then execute:
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/aws/deploy-tls-termination.yaml
```
This example creates an ELB with just two listeners, one on port 80 and another on port 443
![Listeners](https://github.com/kubernetes/ingress-nginx/blob/main/docs/images/elb-l7-listener.png)
##### ELB Idle Timeouts
In some scenarios users will need to modify the value of the ELB idle timeout.
Users need to ensure the idle timeout is less than the [keepalive_timeout](http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout) that is configured for NGINX.
By default NGINX `keepalive_timeout` is set to `75s`.
The default ELB idle timeout will work for most scenarios, unless the NGINX [keepalive_timeout](http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout) has been modified,
in which case `service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout` will need to be modified to ensure it is less than the `keepalive_timeout` the user has configured.
*Please Note: An idle timeout of `3600s` is recommended when using WebSockets.*
More information with regards to idle timeouts for your Load Balancer can be found in the [official AWS documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/config-idle-timeout.html).
##### Network Load Balancer (NLB)
This type of load balancer is supported since v1.10.0 as an ALPHA feature.
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/aws/service-nlb.yaml
```
#### GCE-GKE
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml
```
**Important Note:** proxy protocol is not supported in GCE/GKE
#### Azure
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml
```
#### Bare-metal
Using [NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport):
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/baremetal/deploy.yaml
```
!!! tip
For extended notes regarding deployments on bare-metal, see [Bare-metal considerations](https://github.com/kubernetes/ingress-nginx/blob/main/docs/deploy/baremetal.md).
### Verify installation
To check if the ingress controller pods have started, run the following command:
```console
kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx --watch
```
Once the operator pods are running, you can cancel the above command by typing `Ctrl+C`.
Now, you are ready to create your first ingress.
### Detect installed version
To detect which version of the ingress controller is running, exec into the pod and run `nginx-ingress-controller version` command.
```console
POD_NAMESPACE=ingress-nginx
POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/component=controller -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version
```
## Using Helm
NGINX Ingress controller can be installed via [Helm](https://helm.sh/) using the chart [ingress-nginx/ingress-nginx](https://kubernetes.github.io/ingress-nginx).
Official documentation is [here](https://kubernetes.github.io/ingress-nginx/deploy/#using-helm)
To install the chart with the release name `my-nginx`:
```console
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install my-nginx ingress-nginx/ingress-nginx
```
Detect installed version:
```console
POD_NAME=$(kubectl get pods -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it $POD_NAME -- /nginx-ingress-controller --version
```

View File

@@ -100,8 +100,6 @@ kubelet_make_iptables_util_chains: true
kubelet_feature_gates: ["RotateKubeletServerCertificate=true"]
kubelet_seccomp_default: true
kubelet_systemd_hardening: true
# To disable kubelet's staticPodPath (for nodes that don't use static pods like worker nodes)
kubelet_static_pod_path: ""
# In case you have multiple interfaces in your
# control plane nodes and you want to specify the right
# IP addresses, kubelet_secure_addresses allows you

View File

@@ -2,7 +2,7 @@
namespace: kubernetes_sigs
description: Deploy a production ready Kubernetes cluster
name: kubespray
version: 2.30.0
version: 2.31.0
readme: README.md
authors:
- The Kubespray maintainers (https://kubernetes.slack.com/channels/kubespray)

View File

@@ -1,8 +1,4 @@
---
# Kubernetes dashboard
# RBAC required. see docs/getting-started.md for access details.
# dashboard_enabled: false
# Helm deployment
helm_enabled: false
@@ -67,39 +63,6 @@ local_volume_provisioner_enabled: false
# Gateway API CRDs
gateway_api_enabled: false
# Nginx ingress controller deployment
ingress_nginx_enabled: false
# ingress_nginx_host_network: false
# ingress_nginx_service_type: LoadBalancer
# ingress_nginx_service_annotations:
# example.io/loadbalancerIPs: 1.2.3.4
# ingress_nginx_service_nodeport_http: 30080
# ingress_nginx_service_nodeport_https: 30081
ingress_publish_status_address: ""
# ingress_nginx_nodeselector:
# kubernetes.io/os: "linux"
# ingress_nginx_tolerations:
# - key: "node-role.kubernetes.io/control-plane"
# operator: "Equal"
# value: ""
# effect: "NoSchedule"
# ingress_nginx_namespace: "ingress-nginx"
# ingress_nginx_insecure_port: 80
# ingress_nginx_secure_port: 443
# ingress_nginx_configmap:
# map-hash-bucket-size: "128"
# ssl-protocols: "TLSv1.2 TLSv1.3"
# ingress_nginx_configmap_tcp_services:
# 9000: "default/example-go:8080"
# ingress_nginx_configmap_udp_services:
# 53: "kube-system/coredns:53"
# ingress_nginx_extra_args:
# - --default-ssl-certificate=default/foo-tls
# ingress_nginx_termination_grace_period_seconds: 300
# ingress_nginx_class: nginx
# ingress_nginx_without_class: true
# ingress_nginx_default: false
# ALB ingress controller deployment
ingress_alb_enabled: false
# alb_ingress_aws_region: "us-east-1"

View File

@@ -56,8 +56,8 @@ cilium_l2announcements: false
#
# Only effective when monitor aggregation is set to "medium" or higher.
# cilium_monitor_aggregation_flags: "all"
# Kube Proxy Replacement mode (strict/partial)
# cilium_kube_proxy_replacement: partial
# Kube Proxy Replacement mode (true/false)
# cilium_kube_proxy_replacement: false
# If upgrading from Cilium < 1.5, you may want to override some of these options
# to prevent service disruptions. See also:
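A minimal override sketch for upgrades from older configs (assuming Cilium ≥ 1.14 semantics, where the strict/partial modes were replaced by a boolean):
```yaml
# Hypothetical group_vars override: enable full kube-proxy replacement
cilium_kube_proxy_replacement: true
```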

View File

@@ -1,2 +1,2 @@
---
requires_ansible: ">=2.17.3"
requires_ansible: ">=2.18.0,<2.19.0"

View File

@@ -1,5 +1,5 @@
# Use immutable image tags rather than mutable tags (like ubuntu:22.04)
FROM ubuntu:jammy-20230308
# Use immutable image tags rather than mutable tags (like ubuntu:24.04)
FROM ubuntu:noble-20260113@sha256:cd1dba651b3080c3686ecf4e3c4220f026b521fb76978881737d24f200828b2b
# Some tools like yamllint need this
# Pip needs this as well at the moment to install ansible
# (and potentially other packages)
@@ -27,14 +27,14 @@ RUN apt update -q \
ca-certificates \
curl \
gnupg2 \
software-properties-common \
unzip \
libvirt-clients \
qemu-utils \
qemu-kvm \
dnsmasq \
&& curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - \
&& add-apt-repository "deb [arch=$(dpkg --print-architecture)] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" \
&& curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc \
&& echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | tee /etc/apt/sources.list.d/docker.list \
&& apt update -q \
&& apt install --no-install-recommends -yq docker-ce \
&& apt autoremove -yqq --purge && apt clean && rm -rf /var/lib/apt/lists/* /var/log/*
@@ -44,11 +44,10 @@ ADD ./requirements.txt /kubespray/requirements.txt
ADD ./tests/requirements.txt /kubespray/tests/requirements.txt
RUN update-alternatives --install /usr/bin/python python /usr/bin/python3 1 \
&& pip install --no-compile --no-cache-dir pip -U \
&& pip install --no-compile --no-cache-dir -r tests/requirements.txt \
&& pip install --no-compile --no-cache-dir -r requirements.txt \
&& curl -L https://dl.k8s.io/release/v1.34.3/bin/linux/$(dpkg --print-architecture)/kubectl -o /usr/local/bin/kubectl \
&& echo $(curl -L https://dl.k8s.io/release/v1.34.3/bin/linux/$(dpkg --print-architecture)/kubectl.sha256) /usr/local/bin/kubectl | sha256sum --check \
&& pip install --break-system-packages --ignore-installed --no-compile --no-cache-dir pip -U \
&& pip install --break-system-packages --no-compile --no-cache-dir -r tests/requirements.txt \
&& curl -L https://dl.k8s.io/release/v1.35.0/bin/linux/$(dpkg --print-architecture)/kubectl -o /usr/local/bin/kubectl \
&& echo $(curl -L https://dl.k8s.io/release/v1.35.0/bin/linux/$(dpkg --print-architecture)/kubectl.sha256) /usr/local/bin/kubectl | sha256sum --check \
&& chmod a+x /usr/local/bin/kubectl \
# Install Vagrant
&& curl -LO https://releases.hashicorp.com/vagrant/${VAGRANT_VERSION}/vagrant_${VAGRANT_VERSION}-1_$(dpkg --print-architecture).deb \
@@ -56,5 +55,5 @@ RUN update-alternatives --install /usr/bin/python python /usr/bin/python3 1 \
&& rm vagrant_${VAGRANT_VERSION}-1_$(dpkg --print-architecture).deb \
&& vagrant plugin install vagrant-libvirt \
# Install Kubernetes collections
&& pip install --no-compile --no-cache-dir kubernetes \
&& pip install --break-system-packages --no-compile --no-cache-dir kubernetes \
&& ansible-galaxy collection install kubernetes.core

View File

@@ -5,8 +5,8 @@
become: false
run_once: true
vars:
minimal_ansible_version: 2.17.3
maximal_ansible_version: 2.18.0
minimal_ansible_version: 2.18.0
maximal_ansible_version: 2.19.0
tags: always
tasks:
- name: "Check {{ minimal_ansible_version }} <= Ansible version < {{ maximal_ansible_version }}"

View File

@@ -1,7 +1,7 @@
ansible==10.7.0
ansible==11.13.0
# Needed for community.crypto module
cryptography==46.0.3
cryptography==46.0.4
# Needed for jinja2 json_query templating
jmespath==1.0.1
jmespath==1.1.0
# Needed for ansible.utils.ipaddr
netaddr==1.3.0

View File

@@ -55,7 +55,7 @@
register: keyserver_task_result
until: keyserver_task_result is succeeded
retries: 4
delay: "{{ retry_stagger | d(3) }}"
delay: "{{ retry_stagger }}"
with_items: "{{ docker_repo_key_info.repo_keys }}"
environment: "{{ proxy_env }}"
when: ansible_pkg_mgr == 'apt'
@@ -128,7 +128,7 @@
register: docker_task_result
until: docker_task_result is succeeded
retries: 4
delay: "{{ retry_stagger | d(3) }}"
delay: "{{ retry_stagger }}"
notify: Restart docker
when:
- not ansible_os_family in ["Flatcar", "Flatcar Container Linux by Kinvolk"]

View File

@@ -5,8 +5,7 @@
group: "{{ etcd_cert_group }}"
state: directory
owner: "{{ etcd_owner }}"
mode: "{{ etcd_cert_dir_mode }}"
recurse: true
mode: "0700"
- name: "Gen_certs | create etcd script dir (on {{ groups['etcd'][0] }})"
file:
@@ -145,15 +144,6 @@
- ('k8s_cluster' in group_names) and
sync_certs | default(false) and inventory_hostname not in groups['etcd']
- name: Gen_certs | check certificate permissions
file:
path: "{{ etcd_cert_dir }}"
group: "{{ etcd_cert_group }}"
state: directory
owner: "{{ etcd_owner }}"
mode: "{{ etcd_cert_dir_mode }}"
recurse: true
# This is a hack around the fact kubeadm expect the same certs path on all kube_control_plane
# TODO: fix certs generation to have the same file everywhere
# OR work with kubeadm on node-specific config

View File

@@ -18,7 +18,6 @@ etcd_backup_retention_count: -1
force_etcd_cert_refresh: true
etcd_config_dir: /etc/ssl/etcd
etcd_cert_dir: "{{ etcd_config_dir }}/ssl"
etcd_cert_dir_mode: "0700"
etcd_cert_group: root
# Note: This does not set up DNS entries. It simply adds the following DNS
# entries to the certificate

View File

@@ -11,6 +11,7 @@ dns_nodes_per_replica: 16
dns_cores_per_replica: 256
dns_prevent_single_point_failure: "{{ 'true' if dns_min_replicas | int > 1 else 'false' }}"
enable_coredns_reverse_dns_lookups: true
coredns_svc_name: "coredns"
coredns_ordinal_suffix: ""
# dns_extra_tolerations: [{effect: NoSchedule, operator: "Exists"}]
coredns_affinity:
@@ -118,29 +119,5 @@ netchecker_agent_log_level: 5
netchecker_server_log_level: 5
netchecker_etcd_log_level: info
# Dashboard
dashboard_replicas: 1
# Namespace for dashboard
dashboard_namespace: kube-system
# Limits for dashboard
dashboard_cpu_limit: 100m
dashboard_memory_limit: 256M
dashboard_cpu_requests: 50m
dashboard_memory_requests: 64M
# Set dashboard_use_custom_certs to true if overriding dashboard_certs_secret_name with a secret that
# contains dashboard_tls_key_file and dashboard_tls_cert_file instead of using the initContainer provisioned certs
dashboard_use_custom_certs: false
dashboard_certs_secret_name: kubernetes-dashboard-certs
dashboard_tls_key_file: dashboard.key
dashboard_tls_cert_file: dashboard.crt
dashboard_master_toleration: true
# Override dashboard default settings
dashboard_token_ttl: 900
dashboard_skip_login: false
# Policy Controllers
# policy_controller_extra_tolerations: [{effect: NoSchedule, operator: "Exists"}]

View File

@@ -109,15 +109,3 @@
- netchecker-server-clusterrolebinding.yml.j2
- netchecker-server-deployment.yml.j2
- netchecker-server-svc.yml.j2
- name: Kubernetes Apps | Dashboard
command:
cmd: "{{ kubectl_apply_stdin }}"
stdin: "{{ lookup('template', 'dashboard.yml.j2') }}"
delegate_to: "{{ groups['kube_control_plane'][0] }}"
run_once: true
vars:
k8s_namespace: "{{ dashboard_namespace }}"
when: dashboard_enabled
tags:
- dashboard

View File

@@ -2,7 +2,7 @@
apiVersion: v1
kind: Service
metadata:
name: coredns{{ coredns_ordinal_suffix }}
name: {{ coredns_svc_name }}{{ coredns_ordinal_suffix }}
namespace: kube-system
labels:
k8s-app: kube-dns{{ coredns_ordinal_suffix }}
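A minimal usage sketch for the new variable (hypothetical value), which renames the rendered Service while keeping the `kube-dns` selector labels:
```yaml
# Hypothetical group_vars override: Service renders as "custom-dns" plus any ordinal suffix
coredns_svc_name: "custom-dns"
```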

View File

@@ -1,323 +0,0 @@
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.8.
#
# Example usage: kubectl create -f <this_file>
{% if k8s_namespace != 'kube-system' %}
---
apiVersion: v1
kind: Namespace
metadata:
name: {{ k8s_namespace }}
labels:
name: {{ k8s_namespace }}
{% endif %}
---
# ------------------- Dashboard Secrets ------------------- #
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-csrf
type: Opaque
data:
csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-key-holder
type: Opaque
---
# ------------------- Dashboard ConfigMap ------------------- #
kind: ConfigMap
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-settings
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
rules:
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster", "dashboard-metrics-scraper"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: {{ k8s_namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: {{ k8s_namespace }}
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
spec:
replicas: {{ dashboard_replicas }}
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
securityContext:
seccompProfile:
type: RuntimeDefault
priorityClassName: system-cluster-critical
containers:
- name: kubernetes-dashboard
image: {{ dashboard_image_repo }}:{{ dashboard_image_tag }}
imagePullPolicy: {{ k8s_image_pull_policy }}
resources:
limits:
cpu: {{ dashboard_cpu_limit }}
memory: {{ dashboard_memory_limit }}
requests:
cpu: {{ dashboard_cpu_requests }}
memory: {{ dashboard_memory_requests }}
ports:
- containerPort: 8443
protocol: TCP
args:
- --namespace={{ k8s_namespace }}
{% if dashboard_use_custom_certs %}
- --tls-key-file={{ dashboard_tls_key_file }}
- --tls-cert-file={{ dashboard_tls_cert_file }}
{% else %}
- --auto-generate-certificates
{% endif %}
{% if dashboard_skip_login %}
- --enable-skip-login
{% endif %}
- --authentication-mode=token
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
- --token-ttl={{ dashboard_token_ttl }}
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: {{ dashboard_certs_secret_name }}
- name: tmp-volume
emptyDir: {}
serviceAccountName: kubernetes-dashboard
{% if dashboard_master_toleration %}
tolerations:
- key: node-role.kubernetes.io/control-plane
effect: NoSchedule
{% endif %}
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
spec:
ports:
- port: 443
targetPort: 8443
selector:
k8s-app: kubernetes-dashboard
---
# ------------------- Metrics Scraper Service Account ------------------- #
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
rules:
# Allow Metrics Scraper to get metrics from the Metrics server
- apiGroups: ["metrics.k8s.io"]
resources: ["pods", "nodes"]
verbs: ["get", "list", "watch"]
---
# ------------------- Metrics Scraper Service ------------------- #
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-metrics-scraper
name: dashboard-metrics-scraper
spec:
ports:
- port: 8000
targetPort: 8000
selector:
k8s-app: kubernetes-metrics-scraper
---
# ------------------- Metrics Scraper Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: kubernetes-metrics-scraper
name: kubernetes-metrics-scraper
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-metrics-scraper
template:
metadata:
labels:
k8s-app: kubernetes-metrics-scraper
spec:
securityContext:
seccompProfile:
type: RuntimeDefault
priorityClassName: system-cluster-critical
containers:
- name: kubernetes-metrics-scraper
image: {{ dashboard_metrics_scraper_repo }}:{{ dashboard_metrics_scraper_tag }}
ports:
- containerPort: 8000
protocol: TCP
livenessProbe:
httpGet:
scheme: HTTP
path: /
port: 8000
initialDelaySeconds: 30
timeoutSeconds: 30
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
volumeMounts:
- mountPath: /tmp
name: tmp-volume
serviceAccountName: kubernetes-dashboard
volumes:
- name: tmp-volume
emptyDir: {}
{% if dashboard_master_toleration %}
tolerations:
- key: node-role.kubernetes.io/control-plane
effect: NoSchedule
{% endif %}

View File

@@ -21,7 +21,7 @@ external_openstack_cacert: "{{ lookup('env', 'OS_CACERT') }}"
## arg1: "value1"
## arg2: "value2"
external_openstack_cloud_controller_extra_args: {}
external_openstack_cloud_controller_image_tag: "v1.32.0"
external_openstack_cloud_controller_image_tag: "v1.35.0"
external_openstack_cloud_controller_bind_address: 127.0.0.1
external_openstack_cloud_controller_dns_policy: ClusterFirst

View File

@@ -8,3 +8,4 @@ local_path_provisioner_is_default_storageclass: "true"
local_path_provisioner_debug: false
local_path_provisioner_helper_image_repo: "busybox"
local_path_provisioner_helper_image_tag: "latest"
local_path_provisioner_resources: {}

View File

@@ -35,6 +35,10 @@ spec:
valueFrom:
fieldRef:
fieldPath: metadata.namespace
{% if local_path_provisioner_resources %}
resources:
{{ local_path_provisioner_resources | to_nice_yaml | indent(10) | trim }}
{% endif %}
volumes:
- name: config-volume
configMap:
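A minimal sketch of the new variable in use (hypothetical request/limit values), rendered into the Deployment by the template block above:
```yaml
# Hypothetical group_vars override for the provisioner's resources
local_path_provisioner_resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 200m
    memory: 256Mi
```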

View File

@@ -1,28 +0,0 @@
---
ingress_nginx_namespace: "ingress-nginx"
ingress_nginx_host_network: false
ingress_nginx_service_type: LoadBalancer
ingress_nginx_service_nodeport_http: ""
ingress_nginx_service_nodeport_https: ""
ingress_nginx_service_annotations: {}
ingress_publish_status_address: ""
ingress_nginx_publish_service: "{{ ingress_nginx_namespace }}/ingress-nginx"
ingress_nginx_nodeselector:
kubernetes.io/os: "linux"
ingress_nginx_tolerations: []
ingress_nginx_insecure_port: 80
ingress_nginx_secure_port: 443
ingress_nginx_metrics_port: 10254
ingress_nginx_configmap: {}
ingress_nginx_configmap_tcp_services: {}
ingress_nginx_configmap_udp_services: {}
ingress_nginx_extra_args: []
ingress_nginx_termination_grace_period_seconds: 300
ingress_nginx_class: nginx
ingress_nginx_without_class: true
ingress_nginx_default: false
ingress_nginx_webhook_enabled: false
ingress_nginx_webhook_job_ttl: 1800
ingress_nginx_opentelemetry_enabled: false
ingress_nginx_probe_initial_delay_seconds: 10

View File

@@ -1,69 +0,0 @@
---
- name: NGINX Ingress Controller | Create addon dir
file:
path: "{{ kube_config_dir }}/addons/ingress_nginx"
state: directory
owner: root
group: root
mode: "0755"
when:
- inventory_hostname == groups['kube_control_plane'][0]
- name: NGINX Ingress Controller | Templates list
set_fact:
ingress_nginx_templates:
- { name: 00-namespace, file: 00-namespace.yml, type: ns }
- { name: cm-ingress-nginx, file: cm-ingress-nginx.yml, type: cm }
- { name: cm-tcp-services, file: cm-tcp-services.yml, type: cm }
- { name: cm-udp-services, file: cm-udp-services.yml, type: cm }
- { name: sa-ingress-nginx, file: sa-ingress-nginx.yml, type: sa }
- { name: clusterrole-ingress-nginx, file: clusterrole-ingress-nginx.yml, type: clusterrole }
- { name: clusterrolebinding-ingress-nginx, file: clusterrolebinding-ingress-nginx.yml, type: clusterrolebinding }
- { name: role-ingress-nginx, file: role-ingress-nginx.yml, type: role }
- { name: rolebinding-ingress-nginx, file: rolebinding-ingress-nginx.yml, type: rolebinding }
- { name: ingressclass-nginx, file: ingressclass-nginx.yml, type: ingressclass }
- { name: ds-ingress-nginx-controller, file: ds-ingress-nginx-controller.yml, type: ds }
ingress_nginx_template_for_service:
- { name: svc-ingress-nginx, file: svc-ingress-nginx.yml, type: svc }
ingress_nginx_templates_for_webhook:
- { name: admission-webhook-configuration, file: admission-webhook-configuration.yml, type: sa }
- { name: sa-admission-webhook, file: sa-admission-webhook.yml, type: sa }
- { name: clusterrole-admission-webhook, file: clusterrole-admission-webhook.yml, type: clusterrole }
- { name: clusterrolebinding-admission-webhook, file: clusterrolebinding-admission-webhook.yml, type: clusterrolebinding }
- { name: role-admission-webhook, file: role-admission-webhook.yml, type: role }
- { name: rolebinding-admission-webhook, file: rolebinding-admission-webhook.yml, type: rolebinding }
- { name: admission-webhook-job, file: admission-webhook-job.yml, type: job }
- { name: svc-ingress-nginx-controller-admission, file: svc-ingress-nginx-controller-admission.yml, type: svc }
- name: NGINX Ingress Controller | Append extra templates to NGINX Ingress Template list for service
set_fact:
ingress_nginx_templates: "{{ ingress_nginx_templates + ingress_nginx_template_for_service }}"
when: not ingress_nginx_host_network
- name: NGINX Ingress Controller | Append extra templates to NGINX Ingress Templates list for webhook
set_fact:
ingress_nginx_templates: "{{ ingress_nginx_templates + ingress_nginx_templates_for_webhook }}"
when: ingress_nginx_webhook_enabled
- name: NGINX Ingress Controller | Create manifests
template:
src: "{{ item.file }}.j2"
dest: "{{ kube_config_dir }}/addons/ingress_nginx/{{ item.file }}"
mode: "0644"
with_items: "{{ ingress_nginx_templates }}"
register: ingress_nginx_manifests
when:
- inventory_hostname == groups['kube_control_plane'][0]
- name: NGINX Ingress Controller | Apply manifests
kube:
name: "{{ item.item.name }}"
namespace: "{{ ingress_nginx_namespace }}"
kubectl: "{{ bin_dir }}/kubectl"
resource: "{{ item.item.type }}"
filename: "{{ kube_config_dir }}/addons/ingress_nginx/{{ item.item.file }}"
state: "latest"
with_items: "{{ ingress_nginx_manifests.results }}"
when:
- inventory_hostname == groups['kube_control_plane'][0]

View File

@@ -1,7 +0,0 @@
---
apiVersion: v1
kind: Namespace
metadata:
name: {{ ingress_nginx_namespace }}
labels:
name: {{ ingress_nginx_namespace }}

View File

@@ -1,30 +0,0 @@
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
name: ingress-nginx-admission
webhooks:
- admissionReviewVersions:
- v1
clientConfig:
service:
name: ingress-nginx-controller-admission
namespace: {{ ingress_nginx_namespace }}
path: /networking/v1/ingresses
port: 443
failurePolicy: Fail
matchPolicy: Equivalent
name: validate.nginx.ingress.kubernetes.io
rules:
- apiGroups:
- networking.k8s.io
apiVersions:
- v1
operations:
- CREATE
- UPDATE
resources:
- ingresses
sideEffects: None

View File

@@ -1,96 +0,0 @@
---
apiVersion: batch/v1
kind: Job
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
name: ingress-nginx-admission-create
namespace: {{ ingress_nginx_namespace }}
spec:
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
name: ingress-nginx-admission-create
spec:
containers:
- args:
- create
- --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
- --namespace=$(POD_NAMESPACE)
- --secret-name=ingress-nginx-admission
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
image: "{{ ingress_nginx_kube_webhook_certgen_image_repo }}:{{ ingress_nginx_kube_webhook_certgen_image_tag }}"
imagePullPolicy: {{ k8s_image_pull_policy }}
name: create
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsGroup: 65532
runAsNonRoot: true
runAsUser: 65532
seccompProfile:
type: RuntimeDefault
nodeSelector:
kubernetes.io/os: linux
restartPolicy: OnFailure
serviceAccountName: ingress-nginx-admission
ttlSecondsAfterFinished: {{ ingress_nginx_webhook_job_ttl }}
---
apiVersion: batch/v1
kind: Job
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
name: ingress-nginx-admission-patch
namespace: {{ ingress_nginx_namespace }}
spec:
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
name: ingress-nginx-admission-patch
spec:
containers:
- args:
- patch
- --webhook-name=ingress-nginx-admission
- --namespace=$(POD_NAMESPACE)
- --patch-mutating=false
- --secret-name=ingress-nginx-admission
- --patch-failure-policy=Fail
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
image: "{{ ingress_nginx_kube_webhook_certgen_image_repo }}:{{ ingress_nginx_kube_webhook_certgen_image_tag }}"
imagePullPolicy: {{ k8s_image_pull_policy }}
name: patch
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsGroup: 65532
runAsNonRoot: true
runAsUser: 65532
seccompProfile:
type: RuntimeDefault
nodeSelector:
kubernetes.io/os: linux
restartPolicy: OnFailure
serviceAccountName: ingress-nginx-admission
ttlSecondsAfterFinished: {{ ingress_nginx_webhook_job_ttl }}

View File

@@ -1,15 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
name: ingress-nginx-admission
rules:
- apiGroups:
- admissionregistration.k8s.io
resources:
- validatingwebhookconfigurations
verbs:
- get
- update

View File

@@ -1,36 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups: [""]
resources: ["configmaps", "endpoints", "nodes", "pods", "secrets", "namespaces"]
verbs: ["list", "watch"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get"]
- apiGroups: [""]
resources: ["services"]
verbs: ["get", "list", "watch"]
- apiGroups: ["networking.k8s.io"]
resources: ["ingresses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "patch"]
- apiGroups: ["networking.k8s.io"]
resources: ["ingresses/status"]
verbs: ["update"]
- apiGroups: ["networking.k8s.io"]
resources: ["ingressclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
verbs: ["list", "watch"]
- apiGroups: ["discovery.k8s.io"]
resources: ["endpointslices"]
verbs: ["get", "list", "watch"]

View File

@@ -1,16 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
name: ingress-nginx-admission
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
name: ingress-nginx-admission
namespace: {{ ingress_nginx_namespace }}

View File

@@ -1,16 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress-nginx
subjects:
- kind: ServiceAccount
name: ingress-nginx
namespace: {{ ingress_nginx_namespace }}

View File

@@ -1,13 +0,0 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: ingress-nginx
namespace: {{ ingress_nginx_namespace }}
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
{% if ingress_nginx_configmap %}
data:
{{ ingress_nginx_configmap | to_nice_yaml | indent(2) }}
{%- endif %}

View File

@@ -1,13 +0,0 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: tcp-services
namespace: {{ ingress_nginx_namespace }}
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
{% if ingress_nginx_configmap_tcp_services %}
data:
{{ ingress_nginx_configmap_tcp_services | to_nice_yaml | indent(2) }}
{%- endif %}

View File

@@ -1,13 +0,0 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: udp-services
namespace: {{ ingress_nginx_namespace }}
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
{% if ingress_nginx_configmap_udp_services %}
data:
{{ ingress_nginx_configmap_udp_services | to_nice_yaml | indent(2) }}
{%- endif %}

View File

@@ -1,201 +0,0 @@
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: ingress-nginx-controller
namespace: {{ ingress_nginx_namespace }}
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
annotations:
prometheus.io/port: "10254"
prometheus.io/scrape: "true"
spec:
serviceAccountName: ingress-nginx
terminationGracePeriodSeconds: {{ ingress_nginx_termination_grace_period_seconds }}
{% if ingress_nginx_opentelemetry_enabled %}
initContainers:
- name: opentelemetry
command:
- /init_module
image: {{ ingress_nginx_opentelemetry_image_repo }}:{{ ingress_nginx_opentelemetry_image_tag }}
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_BIND_SERVICE
drop:
- ALL
readOnlyRootFilesystem: false
runAsGroup: 82
runAsNonRoot: true
runAsUser: 101
seccompProfile:
type: RuntimeDefault
volumeMounts:
- mountPath: /modules_mount
name: modules
{% endif %}
{% if ingress_nginx_host_network %}
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
{% endif %}
{% if ingress_nginx_nodeselector %}
nodeSelector:
{{ ingress_nginx_nodeselector | to_nice_yaml | indent(width=8) }}
{%- endif %}
{% if ingress_nginx_tolerations %}
tolerations:
{{ ingress_nginx_tolerations | to_nice_yaml(indent=2) | indent(width=8) }}
{% endif %}
priorityClassName: {% if ingress_nginx_namespace == 'kube-system' %}system-node-critical{% else %}k8s-cluster-critical{% endif %}{{ '' }}
containers:
- name: ingress-nginx-controller
image: {{ ingress_nginx_controller_image_repo }}:{{ ingress_nginx_controller_image_tag }}
imagePullPolicy: {{ k8s_image_pull_policy }}
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
args:
- /nginx-ingress-controller
- --configmap=$(POD_NAMESPACE)/ingress-nginx
- --election-id=ingress-controller-leader-{{ ingress_nginx_class }}
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
- --udp-services-configmap=$(POD_NAMESPACE)/udp-services
- --annotations-prefix=nginx.ingress.kubernetes.io
- --ingress-class={{ ingress_nginx_class }}
{% if ingress_nginx_without_class %}
- --watch-ingress-without-class=true
{% endif %}
{% if ingress_publish_status_address != "" %}
- --publish-status-address={{ ingress_publish_status_address }}
{% elif ingress_nginx_host_network %}
- --report-node-internal-ip-address
{% elif ingress_nginx_publish_service != "" %}
- --publish-service={{ ingress_nginx_publish_service }}
{% endif %}
{% for extra_arg in ingress_nginx_extra_args %}
- {{ extra_arg }}
{% endfor %}
{% if ingress_nginx_webhook_enabled %}
- --validating-webhook=:8443
- --validating-webhook-certificate=/usr/local/certificates/cert
- --validating-webhook-key=/usr/local/certificates/key
{% endif %}
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_BIND_SERVICE
drop:
- ALL
readOnlyRootFilesystem: false
runAsGroup: 82
runAsNonRoot: true
runAsUser: 101
seccompProfile:
type: RuntimeDefault
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: LD_PRELOAD
value: /usr/local/lib/libmimalloc.so
ports:
- name: http
containerPort: 80
hostPort: {{ ingress_nginx_insecure_port }}
- name: https
containerPort: 443
hostPort: {{ ingress_nginx_secure_port }}
- name: metrics
containerPort: 10254
{% if not ingress_nginx_host_network %}
hostPort: {{ ingress_nginx_metrics_port }}
{% endif %}
{% if ingress_nginx_configmap_tcp_services %}
{% for port in ingress_nginx_configmap_tcp_services.keys() %}
- name: tcp-port-{{ port }}
containerPort: {{ port | int }}
protocol: TCP
{% if not ingress_nginx_host_network %}
hostPort: {{ port | int }}
{% endif %}
{% endfor %}
{% endif %}
{% if ingress_nginx_configmap_udp_services %}
{% for port in ingress_nginx_configmap_udp_services.keys() %}
- name: udp-port-{{ port }}
containerPort: {{ port | int }}
protocol: UDP
{% if not ingress_nginx_host_network %}
hostPort: {{ port | int }}
{% endif %}
{% endfor %}
{% endif %}
{% if ingress_nginx_webhook_enabled %}
- name: webhook
containerPort: 8443
protocol: TCP
{% endif %}
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: {{ ingress_nginx_probe_initial_delay_seconds }}
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 3
readinessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: {{ ingress_nginx_probe_initial_delay_seconds }}
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 3
{% if ingress_nginx_webhook_enabled or ingress_nginx_opentelemetry_enabled %}
volumeMounts:
{% if ingress_nginx_webhook_enabled %}
- mountPath: /usr/local/certificates/
name: webhook-cert
readOnly: true
{% endif %}
{% if ingress_nginx_opentelemetry_enabled %}
- name: modules
mountPath: /modules_mount
{% endif %}
{% endif %}
{% if ingress_nginx_webhook_enabled or ingress_nginx_opentelemetry_enabled %}
volumes:
{% if ingress_nginx_webhook_enabled %}
- name: webhook-cert
secret:
secretName: ingress-nginx-admission
{% endif %}
{% if ingress_nginx_opentelemetry_enabled %}
- name: modules
emptyDir: {}
{% endif %}
{% endif %}

View File

@@ -1,13 +0,0 @@
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
name: {{ ingress_nginx_class }}
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
{% if ingress_nginx_default %}
annotations:
ingressclass.kubernetes.io/is-default-class: "true"
{% endif %}
spec:
controller: k8s.io/ingress-nginx

View File

@@ -1,17 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
name: ingress-nginx-admission
namespace: {{ ingress_nginx_namespace }}
rules:
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- create

View File

@@ -1,47 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: ingress-nginx
namespace: {{ ingress_nginx_namespace }}
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups: [""]
resources: ["namespaces"]
verbs: ["get"]
- apiGroups: [""]
resources: ["configmaps", "pods", "secrets", "endpoints"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["services"]
verbs: ["get", "list", "watch"]
- apiGroups: ["networking.k8s.io"]
resources: ["ingresses"]
verbs: ["get", "list", "watch"]
- apiGroups: ["networking.k8s.io"]
resources: ["ingresses/status"]
verbs: ["update"]
- apiGroups: ["networking.k8s.io"]
resources: ["ingressclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
# Defaults to "<election-id>", defined in
# ds-ingress-nginx-controller.yml.j2
# by a command-line argument.
#
# This is the correct behaviour for ingress-controller
# version 1.8.1
resourceNames: ["ingress-controller-leader-{{ ingress_nginx_class }}"]
verbs: ["get", "update"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "patch"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
verbs: ["create"]
- apiGroups: ["discovery.k8s.io"]
resources: ["endpointslices"]
verbs: ["get", "list", "watch"]

View File

@@ -1,17 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
name: ingress-nginx-admission
namespace: {{ ingress_nginx_namespace }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
name: ingress-nginx-admission
namespace: {{ ingress_nginx_namespace }}

View File

@@ -1,17 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: ingress-nginx
namespace: {{ ingress_nginx_namespace }}
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress-nginx
subjects:
- kind: ServiceAccount
name: ingress-nginx
namespace: {{ ingress_nginx_namespace }}

View File

@@ -1,8 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: ingress-nginx-admission
namespace: {{ ingress_nginx_namespace }}
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx

View File

@@ -1,9 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: ingress-nginx
namespace: {{ ingress_nginx_namespace }}
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx

View File

@@ -1,18 +0,0 @@
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
name: ingress-nginx-controller-admission
namespace: {{ ingress_nginx_namespace }}
spec:
type: ClusterIP
ports:
- appProtocol: https
name: https-webhook
port: 443
targetPort: webhook
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx

View File

@@ -1,50 +0,0 @@
{% if not ingress_nginx_host_network %}
apiVersion: v1
kind: Service
metadata:
name: ingress-nginx
namespace: {{ ingress_nginx_namespace }}
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
{% if ingress_nginx_service_annotations %}
annotations:
{{ ingress_nginx_service_annotations | to_nice_yaml(indent=2, width=1337) | indent(width=4) }}
{% endif %}
spec:
type: {{ ingress_nginx_service_type }}
ports:
- name: http
port: 80
targetPort: 80
protocol: TCP
{% if (ingress_nginx_service_type == 'NodePort' or ingress_nginx_service_type == 'LoadBalancer') and ingress_nginx_service_nodeport_http %}
nodePort: {{ingress_nginx_service_nodeport_http | int}}
{% endif %}
- name: https
port: 443
targetPort: 443
protocol: TCP
{% if (ingress_nginx_service_type == 'NodePort' or ingress_nginx_service_type == 'LoadBalancer') and ingress_nginx_service_nodeport_https %}
nodePort: {{ingress_nginx_service_nodeport_https | int}}
{% endif %}
{% if ingress_nginx_configmap_tcp_services %}
{% for port in ingress_nginx_configmap_tcp_services.keys() %}
- name: tcp-port-{{ port }}
port: {{ port | int }}
targetPort: {{ port | int }}
protocol: TCP
{% endfor %}
{% endif %}
{% if ingress_nginx_configmap_udp_services %}
{% for port in ingress_nginx_configmap_udp_services.keys() %}
- name: udp-port-{{ port }}
port: {{ port | int }}
targetPort: {{ port | int }}
protocol: UDP
{% endfor %}
{% endif %}
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
{% endif %}

View File

@@ -1,12 +1,5 @@
---
dependencies:
- role: kubernetes-apps/ingress_controller/ingress_nginx
when: ingress_nginx_enabled
tags:
- apps
- ingress-controller
- ingress-nginx
- role: kubernetes-apps/ingress_controller/cert_manager
when: cert_manager_enabled
tags:

View File

@@ -58,12 +58,6 @@ rules:
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes/proxy
verbs:
- get
- apiGroups:
- topology.node.k8s.io
resources:

View File

@@ -114,4 +114,14 @@ rules:
- update
# watch for changes
- watch
# Services are monitored for service LoadBalancer IP allocation
- apiGroups: [""]
resources:
- services
- services/status
verbs:
- get
- list
- update
- watch
{% endif %}

View File

@@ -43,12 +43,12 @@
- { name: registry-cm, file: registry-cm.yml, type: cm }
- { name: registry-rs, file: registry-rs.yml, type: rs }
- name: Registry | Append nginx ingress templates to Registry Templates list when ingress enabled
- name: Registry | Append ingress templates to Registry Templates list when ALB ingress enabled
set_fact:
registry_templates: "{{ registry_templates + [item] }}"
with_items:
- [{ name: registry-ing, file: registry-ing.yml, type: ing }]
when: ingress_nginx_enabled or ingress_alb_enabled
when: ingress_alb_enabled
- name: Registry | Create manifests
template:

View File

@@ -2,6 +2,9 @@
# disable upgrade cluster
upgrade_cluster_setup: false
# Number of retries (with 5 seconds interval) to check that new control plane nodes
# are in Ready condition after joining
control_plane_node_become_ready_tries: 24
# By default the external API listens on all interfaces, this can be changed to
# listen on a specific address/interface.
# NOTE: If you specify an address/interface and use loadbalancer_apiserver_localhost
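A hypothetical override sketch, extending the wait to ten minutes (120 tries at the fixed 5-second interval):
```yaml
control_plane_node_become_ready_tries: 120
```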

View File

@@ -98,3 +98,18 @@
when:
- inventory_hostname != first_kube_control_plane
- kubeadm_already_run is not defined or not kubeadm_already_run.stat.exists
- name: Wait for new control plane nodes to be Ready
when: kubeadm_already_run.stat.exists
run_once: true
command: >
{{ kubectl }} get nodes --selector node-role.kubernetes.io/control-plane
-o jsonpath-as-json="{.items[*].status.conditions[?(@.type == 'Ready')]}"
register: control_plane_node_ready_conditions
retries: "{{ control_plane_node_become_ready_tries }}"
delay: 5
delegate_to: "{{ groups['kube_control_plane'][0] }}"
until: >
control_plane_node_ready_conditions.stdout
| from_json | selectattr('status', '==', 'True')
| length == (groups['kube_control_plane'] | length)

View File

@@ -429,6 +429,9 @@ featureGates:
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
{% if kube_version is version('1.35.0', '>=') %}
failCgroupV1: {{ kubelet_fail_cgroup_v1 }}
{% endif %}
clusterDNS:
{% for dns_address in kubelet_cluster_dns %}
- {{ dns_address }}

View File

@@ -563,6 +563,9 @@ featureGates:
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
{% if kube_version is version('1.35.0', '>=') %}
failCgroupV1: {{ kubelet_fail_cgroup_v1 }}
{% endif %}
clusterDNS:
{% for dns_address in kubelet_cluster_dns %}
- {{ dns_address }}

View File

@@ -0,0 +1,2 @@
---
node_taints: []

View File

@@ -14,13 +14,13 @@
- name: Populate inventory node taint
set_fact:
inventory_node_taints: "{{ inventory_node_taints + ['%s' | format(item)] }}"
loop: "{{ node_taints | d([]) }}"
inventory_node_taints: "{{ inventory_node_taints + node_taints }}"
when:
- node_taints is defined
- node_taints is not string
- node_taints is not mapping
- node_taints is iterable
- debug: # noqa name[missing]
var: role_node_taints
- debug: # noqa name[missing]
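A minimal inventory sketch for `node_taints` (hypothetical taint), which the task above now appends directly as a list:
```yaml
# Hypothetical host_vars entry: strings in key=value:Effect form
node_taints:
  - "dedicated=ingress:NoSchedule"
```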

View File

@@ -180,9 +180,6 @@ kube_proxy_ipvs_modules:
- ip_vs_wlc
- ip_vs_lc
# Set this option to "" (empty) to disable staticPodPath (See docs/operations/hardening.md)
kubelet_static_pod_path: "{{ kube_manifest_dir }}"
## Enable distributed tracing for kubelet
kubelet_tracing: false
kubelet_tracing_endpoint: "[::]:4317"

View File

@@ -15,6 +15,9 @@ authorization:
{% else %}
mode: AlwaysAllow
{% endif %}
{% if kube_version is version('1.35.0', '>=') %}
failCgroupV1: {{ kubelet_fail_cgroup_v1 }}
{% endif %}
{% if kubelet_enforce_node_allocatable is defined and kubelet_enforce_node_allocatable != "\"\"" %}
{% set kubelet_enforce_node_allocatable_list = kubelet_enforce_node_allocatable.split(",") %}
enforceNodeAllocatable:
@@ -22,7 +25,7 @@ enforceNodeAllocatable:
- {{ item }}
{% endfor %}
{% endif %}
staticPodPath: "{{ kubelet_static_pod_path }}"
staticPodPath: {{ kube_manifest_dir }}
cgroupDriver: {{ kubelet_cgroup_driver | default('systemd') }}
containerLogMaxFiles: {{ kubelet_logfiles_max_nr }}
containerLogMaxSize: {{ kubelet_logfiles_max_size }}

View File

@@ -116,7 +116,7 @@ flannel_version: 0.27.3
flannel_cni_version: 1.7.1-flannel1
cni_version: "{{ (cni_binary_checksums['amd64'] | dict2items)[0].key }}"
cilium_version: "1.18.5"
cilium_version: "1.18.6"
cilium_cli_version: "{{ (ciliumcli_binary_checksums['amd64'] | dict2items)[0].key }}"
cilium_enable_hubble: false
@@ -263,7 +263,7 @@ kube_router_image_tag: "v{{ kube_router_version }}"
multus_image_repo: "{{ github_image_repo }}/k8snetworkplumbingwg/multus-cni"
multus_image_tag: "v{{ multus_version }}"
external_openstack_cloud_controller_image_repo: "{{ kube_image_repo }}/provider-os/openstack-cloud-controller-manager"
external_openstack_cloud_controller_image_tag: "v1.32.0"
external_openstack_cloud_controller_image_tag: "v1.35.0"
kube_vip_version: 1.0.3
kube_vip_image_repo: "{{ github_image_repo }}/kube-vip/kube-vip{{ '-iptables' if kube_vip_lb_fwdmethod == 'masquerade' else '' }}"
@@ -277,9 +277,9 @@ haproxy_image_tag: 3.2.4-alpine
# bundle with kubeadm; if not 'basic' upgrade can sometimes fail
coredns_supported_versions:
'1.35': 1.12.4
'1.34': 1.12.1
'1.33': 1.12.0
'1.32': 1.11.3
coredns_version: "{{ coredns_supported_versions[kube_major_version] }}"
coredns_image_repo: "{{ kube_image_repo }}{{ '/coredns' if coredns_version is version('1.7.1', '>=') else '' }}/coredns"
coredns_image_tag: "{{ 'v' if coredns_version is version('1.7.1', '>=') else '' }}{{ coredns_version }}"
@@ -309,13 +309,6 @@ local_volume_provisioner_image_tag: "v{{ local_volume_provisioner_version }}"
local_path_provisioner_version: "0.0.32"
local_path_provisioner_image_repo: "{{ docker_image_repo }}/rancher/local-path-provisioner"
local_path_provisioner_image_tag: "v{{ local_path_provisioner_version }}"
ingress_nginx_version: "1.13.3"
ingress_nginx_controller_image_repo: "{{ kube_image_repo }}/ingress-nginx/controller"
ingress_nginx_opentelemetry_image_repo: "{{ kube_image_repo }}/ingress-nginx/opentelemetry"
ingress_nginx_controller_image_tag: "v{{ ingress_nginx_version }}"
ingress_nginx_opentelemetry_image_tag: "v20230721-3e2062ee5"
ingress_nginx_kube_webhook_certgen_image_repo: "{{ kube_image_repo }}/ingress-nginx/kube-webhook-certgen"
ingress_nginx_kube_webhook_certgen_image_tag: "v1.6.3"
alb_ingress_image_repo: "{{ docker_image_repo }}/amazon/aws-alb-ingress-controller"
alb_ingress_image_tag: "v1.1.9"
cert_manager_version: "1.15.3"
@@ -340,9 +333,9 @@ csi_livenessprobe_image_repo: "{{ kube_image_repo }}/sig-storage/livenessprobe"
csi_livenessprobe_image_tag: "v2.11.0"
snapshot_controller_supported_versions:
'1.35': "v7.0.2"
'1.34': "v7.0.2"
'1.33': "v7.0.2"
'1.32': "v7.0.2"
snapshot_controller_image_repo: "{{ kube_image_repo }}/sig-storage/snapshot-controller"
snapshot_controller_image_tag: "{{ snapshot_controller_supported_versions[kube_major_version] }}"
@@ -376,11 +369,6 @@ gcp_pd_csi_attacher_image_tag: "v2.1.1-gke.0"
gcp_pd_csi_resizer_image_tag: "v0.4.0-gke.0"
gcp_pd_csi_registrar_image_tag: "v1.2.0-gke.0"
dashboard_image_repo: "{{ docker_image_repo }}/kubernetesui/dashboard"
dashboard_image_tag: "v2.7.0"
dashboard_metrics_scraper_repo: "{{ docker_image_repo }}/kubernetesui/metrics-scraper"
dashboard_metrics_scraper_tag: "v1.0.8"
metallb_speaker_image_repo: "{{ quay_image_repo }}/metallb/speaker"
metallb_controller_image_repo: "{{ quay_image_repo }}/metallb/controller"
metallb_version: 0.13.9
@@ -924,15 +912,6 @@ downloads:
groups:
- kube_node
ingress_nginx_controller:
enabled: "{{ ingress_nginx_enabled }}"
container: true
repo: "{{ ingress_nginx_controller_image_repo }}"
tag: "{{ ingress_nginx_controller_image_tag }}"
checksum: "{{ ingress_nginx_controller_digest_checksum | default(None) }}"
groups:
- kube_node
ingress_alb_controller:
enabled: "{{ ingress_alb_enabled }}"
container: true
@@ -1074,24 +1053,6 @@ downloads:
groups:
- kube_node
dashboard:
enabled: "{{ dashboard_enabled }}"
container: true
repo: "{{ dashboard_image_repo }}"
tag: "{{ dashboard_image_tag }}"
checksum: "{{ dashboard_digest_checksum | default(None) }}"
groups:
- kube_control_plane
dashboard_metrics_scrapper:
enabled: "{{ dashboard_enabled }}"
container: true
repo: "{{ dashboard_metrics_scraper_repo }}"
tag: "{{ dashboard_metrics_scraper_tag }}"
checksum: "{{ dashboard_digest_checksum | default(None) }}"
groups:
- kube_control_plane
metallb_speaker:
enabled: "{{ metallb_speaker_enabled }}"
container: true

View File

@@ -17,6 +17,9 @@ kube_api_anonymous_auth: true
# Default value, but will be set to true automatically if detected
is_fedora_coreos: false
# Kubernetes 1.35+: fail on cgroup v1 by default
kubelet_fail_cgroup_v1: true
# Swap settings
kubelet_fail_swap_on: true
kubelet_swap_behavior: LimitedSwap
@@ -436,10 +439,6 @@ credentials_dir: "{{ inventory_dir }}/credentials"
# K8s image pull policy (imagePullPolicy)
k8s_image_pull_policy: IfNotPresent
# Kubernetes dashboard
# RBAC required. see docs/getting-started.md for access details.
dashboard_enabled: false
# Addons which can be enabled
helm_enabled: false
registry_enabled: false
@@ -456,7 +455,6 @@ vsphere_csi_enabled: false
upcloud_csi_enabled: false
csi_snapshot_controller_enabled: false
persistent_volumes_enabled: false
ingress_nginx_enabled: false
ingress_alb_enabled: false
cert_manager_enabled: false
expand_persistent_volumes: false
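For hosts still on cgroup v1, a hypothetical override sketch to keep the kubelet starting on Kubernetes 1.35+, where `failCgroupV1` now defaults to true as shown above:
```yaml
kubelet_fail_cgroup_v1: false
```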

View File

@@ -1,24 +1,27 @@
---
crictl_checksums:
arm64:
1.35.0: sha256:519071de89b64c43e2a1661bb5489c6c3fd5e9e5fcef75e50e542b0c891f1118
1.34.0: sha256:c31d252e203df5f4cf37f314bd3092eb79087e791631c1e607087c74b6d0423f
1.33.0: sha256:e1f34918d77d5b4be85d48f5d713ca617698a371b049ea1486000a5e86ab1ff3
1.32.0: sha256:f2f4e20658b72d00897f41e4b57093c8080e2d800ee894a5f4351f31d1833e30
amd64:
1.35.0: sha256:2e141e5b22cb189c40365a11807d69b76b9b3caced89fac2f4ec879408ce2177
1.34.0: sha256:a8ff2a3edb37a98daf3aba7c3b284fe0aa5bff24166d896ab9ef64c8913c9f51
1.33.0: sha256:8307399e714626e69d1213a4cd18c8dec3d0201ecdac009b1802115df8973f0f
1.32.0: sha256:f050b71d3a73a91a4e0990b90143ed04dcd100cc66f953736fcb6a2730e283c4
ppc64le:
1.35.0: sha256:786522b14d684604c8b435312a310972bc1b460cddb1bb216a298098cd86b22e
1.34.0: sha256:1da50181f2f6f6f6332b9dbc7d7cc020457ccd542620167953c0e288535acc93
1.33.0: sha256:4224acfef4d1deba2ba456b7d93fa98feb0a96063ef66024375294f1de2b064f
1.32.0: sha256:4ffaf29bbda8df42ed2dda4f1ad33cc785987701dc8d1e0043c17cfea9af43e0
crio_archive_checksums:
arm64:
1.35.0: sha256:e57175a4d00387b78adfbe248d087d8127bed625afb529e34b2c90d08cfdaf87
1.34.5: sha256:999a5dc2dc9854222aeff8a20897e0b34f0ba02c9b260b611d66c62e00e279e0
1.34.4: sha256:d176f6256d606a3fc279f9f2994ef4a4c4cbaaa0601f4d1bba1a19bec5674ce9
1.34.3: sha256:314595247054b53767a736e24bc3030a5f7c17552944c62b2e190c9e95fe4ca6
1.34.2: sha256:ac7530f7fc9d531a87bfdfcae9cf8bf81a8bbdb75e63a046ed96911aa7b68ebd
1.34.1: sha256:41a71cab6a61ae429ec447d572fd1cdea0a7e33d62aaa58c3b07467665b50b9f
1.34.0: sha256:3006658270477c5fb1e88e9124e40982d2ba7b34495fcc12f0fecd33bbab9a5a
1.33.9: sha256:bfcd534db3d1a9380dd7007d623e1eb3250ba64f7c4657e79e9e99b1d874f8f1
1.33.8: sha256:59c91726535dcadd0372df0c6aa8595e4d59590994b598b2d97ea2510b216359
1.33.7: sha256:af3ea22d3d6944c9a907c6c13d77e9fc4dbcf3972ffbde18dd6f37f1c2ffbd0d
1.33.6: sha256:6ee49e746d1a5be1a664a6f801c68b169cb181a9aaf12218eed121e2b151bfdb
@@ -28,6 +31,7 @@ crio_archive_checksums:
1.33.2: sha256:0a161cb1437a50fbdb04bf5ca11dbec8bfc567871d0597a5676737278a945a36
1.33.1: sha256:6bf135db438937f0ab7a533af64564a0fb1d2079a43723ce9255ecbf9556ae05
1.33.0: sha256:8a0dbee2879495d5b33e6fdeac32e5d86c356897bdcf3a94cd602851620ce8b5
1.32.13: sha256:f40004183d93bb203231385b5dd07a32e17eced47213817c1958ccc9eea73f70
1.32.12: sha256:26a5138f4e4f15d370630c3bb8bf04fe28b24c57ce2bb11717a2c9a2e1c54404
1.32.11: sha256:25c6ccfe9b70bf12222577b4cbf286ade9e2d112ab10c7d4507ba12cbcfad5ba
1.32.10: sha256:4e8ceb6f2c936e31a9b892a076deecc52be9feac4acf8af242fb6db817fda9b1
@@ -42,11 +46,14 @@ crio_archive_checksums:
1.32.1: sha256:f64da0ef41604575b476ad6d7288ca14f56fc06cc0ca138a5c3dc933427f7b32
1.32.0: sha256:b092eddabedac98a0f8449dc535acfec0e14c21f59cabe8f9703043d995a1a41
amd64:
1.35.0: sha256:55b6d3e9fc9a5864ab5cdf0b24d54b1dcbaf6d4919274b3b9eb37bfc4b0b8cb5
1.34.5: sha256:d6606fb6d686b8f814dfec801f0f3cf2ded974c194fa90facefda36075b6fab2
1.34.4: sha256:f6348a781c34b433fe1c5150da3408e51e828b610eacbe734405e9c31136d810
1.34.3: sha256:e269914f3bc4f36ac87cd593d74daaa43c390571994062180019248be32cc6f7
1.34.2: sha256:3a0012938ed389e9270a208bb73b250062d5f1be5798472b1728403d55ddc1da
1.34.1: sha256:22c1e4d68d9339aa58a1b0f1b40a8944102934a7505105abe461dc8a7e3de540
1.34.0: sha256:5a8bc5c3b8072cb9bde1cf025d5597f75bf21018712c5b72d5cb0657948595c8
1.33.9: sha256:81c20a12866d9a7c08c6e381ed326141c917454b696a05b46ae27665fe3c5cfa
1.33.8: sha256:537adda39074377893f1f650a71b576ba487b3c4d2ee55e9b22f4e95fc188594
1.33.7: sha256:e2999436a272c77370241a4f962c80737698dd8c2400fe75e5c7cf2142c96001
1.33.6: sha256:4d0d446f73d9db6d5bf2c03ecdc39d9d702836886f4715886c15dc2f461cc810
@@ -56,6 +63,7 @@ crio_archive_checksums:
1.33.2: sha256:6e82739bbbeae12d571a277a88d85e8a0e23dbc87529414a91ee5f2e23792dcf
1.33.1: sha256:036063194028d24c75b9ce080e475ad97bacc955de796b7c895845294db8edbf
1.33.0: sha256:dad0cec9e09368b37b35ce824b0ef517a1b33365c4bb164fe82310c73c886f7e
1.32.13: sha256:27e2bf049f589a568d45c4fdd0eaf119680176c202bd09219f8726ba37f9c21e
1.32.12: sha256:13cb9676686c0ccd6bd7ffef9125f6370f803f08a559cf31f017193619891960
1.32.11: sha256:98424dbe3eb1377b314bb35b30842987ccc800faa2f8145d52eb2a9c1efa17be
1.32.10: sha256:b8e66bd33c885baf65535e671a120de4d7675833a75489403a9406e5fd2faa5e
@@ -70,11 +78,14 @@ crio_archive_checksums:
1.32.1: sha256:d35de1e765481018c7ccdc92edeb59b25938f3bd9d1670440e7ccd3d599f95a7
1.32.0: sha256:8f483f1429d2d9cd6bfa6db2e3a4263151701dd4f05f2b1c06cf8e67c44ea67e
ppc64le:
1.35.0: sha256:081ab73a6970ac3c68893dea9a03b0732ca22ab44a2aa8794fddac0bd4dfa749
1.34.5: sha256:3a10d4c1406df01bd9ab88750eabc1273964e9c5f24c7d4a0b719ae77e6cfec2
1.34.4: sha256:dca59a28fe9b0b9163418eca1545c9ed01cf514179f108d14e462c6074fd103c
1.34.3: sha256:4dd782484eeb460b9a95e6e2e07474216fc02ad45a27ba871799d18f2b6ee0ae
1.34.2: sha256:d4c3c9ba24b1b0eabf3c11ddec98801dda7a87b0529706e9ede18b8cc9e4182a
1.34.1: sha256:cba0ac74e7202fe28cf8aa895b83f7a30d78b148666add78e19215259f629bb0
1.34.0: sha256:e9e41d14439db0ca88cf2cd8533038203f379c25cd612f37635c17908e050ebf
1.33.9: sha256:c0a9e60800f66f85c70615128fec5a8358ffde0f715a4058163707dbcca8eb94
1.33.8: sha256:1d69c01512e8ebdd51fc70fc64473a31d492e8db095c0ee5d3ee58722048150c
1.33.7: sha256:076e7519bfff72a43fb1121ce836eee3cc1fec5bb5a59a11747c514e9d162d26
1.33.6: sha256:3643eefe295604288f5b652fb9c672a60f96dc803e63edaf9ee64ed4047a50dd
@@ -84,6 +95,7 @@ crio_archive_checksums:
1.33.2: sha256:8ed65404a57262a9f8eb75b61afa37fcec134472eb1a6d81f1889a74ff32c651
1.33.1: sha256:12646aca33f65fe335c27d3af582c599584d3f51185f01044e7ddd0668bb2b4c
1.33.0: sha256:b4fa46b25538d8145197f8bf2e935486392c0ca2a9fa609aedd02b9f106d37a6
1.32.13: sha256:52e9c38bb1a11abfe4f271eb4d4675cc99cfbaef3d35fd5572be8e63659b08ab
1.32.12: sha256:9ba4f2c3be48c0f1f3228ef6322aeb3738f3ef461fd483a0cb4c2e5b067f080c
1.32.11: sha256:6c2036f2ed7134c596b5a453a06fbb7e646db9586bff0d993f5223dccf167420
1.32.10: sha256:ae4740c6bb6f346338f94508c74d5b1ec94f2691cb12f9a9add437fee5391f8d
@@ -99,6 +111,7 @@ crio_archive_checksums:
1.32.0: sha256:e0544544c91f603afaf54ed814c8519883212bcb149f53a8be9bb0c749e9ec86
kubelet_checksums:
arm64:
1.35.0: sha256:aa658d077348b43d238f50966a583f4244b2a7d45590c77b3b165b7d44983ab8
1.34.3: sha256:765b740e3ad9c590852652a2623424ec60e2dddce2c6280d7f042f56c8c98619
1.34.2: sha256:3e31b1bee9ab32264a67af8a19679777cd372b1c3a04b5d7621289cf137b357c
1.34.1: sha256:6a66bc08d6c637fcea50c19063cf49e708fde1630a7f1d4ceca069a45a87e6f1
@@ -111,19 +124,8 @@ kubelet_checksums:
1.33.2: sha256:0fa15aca9b90fe7aef1ed3aad31edd1d9944a8c7aae34162963a6aaaf726e065
1.33.1: sha256:10540261c311ae005b9af514d83c02694e12614406a8524fd2d0bad75296f70d
1.33.0: sha256:ae5a4fc6d733fc28ff198e2d80334e21fcb5c34e76b411c50fff9cb25accf05a
1.32.11: sha256:7d1c3aaae0dffa8d5c90bbaed49f25d32f98332801bde55cfea6efaead639491
1.32.10: sha256:21cc3d98550d3a23052d649e77956f2557e7f6119ff1e27dc82b852d006136cd
1.32.9: sha256:29037381c79152409adacee83448a2bdb67e113f003613663c7589286200ded8
1.32.8: sha256:d5527714fac08eac4c1ddcbd8a3c6db35f3acd335d43360219d733273b672cce
1.32.7: sha256:b862a8d550875924c8abed6c15ba22564f7e232c239aa6a2e88caf069a0ab548
1.32.6: sha256:b045d4f8f96bf934c894f9704ab2931ffa3c6cf78a8d98e457482a6c455dab6d
1.32.5: sha256:034753a2e308afeb4ce3cf332d38346c6e660252eac93b268fac0e112a56ff46
1.32.4: sha256:91117b71eb2bb3dd79ec3ed444e058a347349108bf661838f53ee30d2a0ff168
1.32.3: sha256:5c3c98e6e0fa35d209595037e05022597954b8d764482417a9588e15218f0fe2
1.32.2: sha256:d74b659bbde5adf919529d079975900e51e10bc807f0fda9dc9f6bb07c4a3a7b
1.32.1: sha256:8e6d0eeedd9f0b8b38d4f600ee167816f71cf4dacfa3d9a9bb6c3561cc884e95
1.32.0: sha256:bda9b2324c96693b38c41ecea051bab4c7c434be5683050b5e19025b50dbc0bf
amd64:
1.35.0: sha256:2f4ed7778681649b81244426c29c5d98df60ccabf83d561d69e61c1cbb943ddf
1.34.3: sha256:0e759f40bbc717c05227ae3994b77786f58f59ffa0137a34958c6b26fa5bcbbd
1.34.2: sha256:9c5e717b774ee9b9285ce47e7d2150c29e84837eb19a7eaa24b60b1543c9d58f
1.34.1: sha256:5a72c596c253ea0b0e5bcc6f29903fd41d1d542a7cadf3700c165a2a041a8d82
@@ -136,19 +138,8 @@ kubelet_checksums:
1.33.2: sha256:77fa5d29995653fe7e2855759a909caf6869c88092e2f147f0b84cbdba98c8f3
1.33.1: sha256:f7224648451dd4f9f2c4f79416f9874223c286ce41727788965fd0341ddb59c4
1.33.0: sha256:dd416d94850c342226d3dcdce838518b040ccea16548bfeaf2595934af88ef60
1.32.11: sha256:02b25e87a3fe14e9ea74c10d3b1e204d12af30b8ce7ed11af2a985b49ddb0b83
1.32.10: sha256:bfff8f244992162c0491f8f42d807165ed5c685aecfb3e8000412535ad18a873
1.32.9: sha256:fd7711d1f0c1e263e9332004858fc4a6c39462e3e2ee485706eea5297966ed9c
1.32.8: sha256:7dfca4da9cdf592c0f70800e09fb42553765bc0951cade3d6e0c571daf3f23ee
1.32.7: sha256:7ab96898436475640cbd416b2446f33aba1c2cb62dae876302ff7775d850041c
1.32.6: sha256:aa37219c4796a2fbf5af7f37fb7f11998947f9fd0d0f30dbeb40c47d4e9c8777
1.32.5: sha256:2b2988edd1646bf139dee6956d4283c520ff151a36febd10701ffda4852b8250
1.32.4: sha256:3e0c265fe80f3ea1b7271a00879d4dbd5e6ea1e91ecf067670c983e07c33a6f4
1.32.3: sha256:024bb7faffa787c7717a2b37398a8c6df35694a8585a73074b052c3f4c4906ce
1.32.2: sha256:9927fee1678202719075d8d546390bcda86c9e519b811fb7f4820b6823f84cab
1.32.1: sha256:967dc8984651c48230a2ff5319e22cbf858452e974104a19bbade5d1708f72ad
1.32.0: sha256:5ad4965598773d56a37a8e8429c3dc3d86b4c5c26d8417ab333ae345c053dae2
ppc64le:
1.35.0: sha256:f24eb1244878a3876fe180e6052822cc9998033850478b2f4776e5c3b09baecd
1.34.3: sha256:67dcceb6d91710e4da7af720eda7b20fd4e8c24237fc345602bb54439ad8ccca
1.34.2: sha256:a195f278b9bac26803f1e26b0f608e0dce66aad033e8c043e8555775612530c9
1.34.1: sha256:c4782dbf1987680e9b2baa3ecf5db9e66395772e82b251eb73a150fbfbe0b906
@@ -161,20 +152,9 @@ kubelet_checksums:
1.33.2: sha256:be8412cb9bf30125e3a88ecb9bfca4df1ff5d4e650947c46222683071f1a17d7
1.33.1: sha256:c1bc01115a513eaec76d56dc52a52aeb05f866a6d07c55335c1fff56c868543d
1.33.0: sha256:6fa5abbc14d65b943b00fcfc8a6ac7eb39fd7e924271738c6f17e0b7e74c665b
1.32.11: sha256:17baef329a468f958658f3e4c3f04689dd2506077214e36d4495b8d0c6776da9
1.32.10: sha256:277e68bcf192ea91f3426b8fb540c4951e2e3bffc659a7b39b98c749e828acc7
1.32.9: sha256:81ba713e8b51644336d428dfa5654cc4e2e4a4ea742976b56ddf965a347330e5
1.32.8: sha256:ec5a2e045dc49b7e1d34a0c78fbc645ce568b2275e807b6313da46e584f56f68
1.32.7: sha256:4ddc5a0b42100295896a43a1a637180872293c9f7305a90dd3377681b1401469
1.32.6: sha256:fd0140949b02c82539ff84db15d0d406445f34221d0547e7ee31245cd982ff47
1.32.5: sha256:b9cb7bf4b5518e1b5542717c82a753663154e08c84e336feba424cf3575313a3
1.32.4: sha256:62e7854ea84bf0fd5a9c47a1ab7ade7a74b4f160efdf486320ed913b4e8e7f79
1.32.3: sha256:efc2b01d4ab74f283ab4ff2bad4369e2b9f66fa875673b72627aa6e7a7b507cb
1.32.2: sha256:3602474e25b0b42a4b0f43ece2ca1e03fe5f3864f0936537256920bbb2eb9acd
1.32.1: sha256:623889368808042a236d7078d85a23ce5ef0e43b6fadc09bcacfdf704ac876b4
1.32.0: sha256:99d409a8023224d84c361e29cdf21ac0458a5449f03e12550288aa654539e3a1
kubectl_checksums:
arm:
1.35.0: sha256:dca28f6af03b31ca6043baa1da7332472c7a3df743606a758534b9ac3ed7ecce
1.34.3: sha256:e0cf1eddede6abfd539e30ccbb4e50f65b2d6ff44b3bb9d9107ea8775a90a7e4
1.34.2: sha256:18e03c1c6ab1dbff6d2a648bf944213f627369d1daeea5b43a7890181ab33abf
1.34.1: sha256:ca6218ae8bf366bd8ccdcb440b756c67422a4e04936163845f74d8c056e786ee
@@ -187,19 +167,8 @@ kubectl_checksums:
1.33.2: sha256:f3992382aa0ea21f71a976b6fd6a213781c9b58be60c42013950110cf2184f2a
1.33.1: sha256:6b1cd6e2bf05c6adaa76b952f9c4ea775f5255913974ccdb12145175d4809e93
1.33.0: sha256:bbb4b4906d483f62b0fc3a0aea3ddac942820984679ad11635b81ee881d69ab3
1.32.11: sha256:358dafd910cec676f05e04fbed44ea26ec393cd60b5b885bc60c27e1aaf383c9
1.32.10: sha256:b42bc77586238b43b8c5cdd06086f1ab00190245dd8b66b28822785b177fbde4
1.32.9: sha256:84629d460b60693ca954e148ce522defd34d18bc5c934836cfaf0268930713dd
1.32.8: sha256:ed54b52631fdf5ecc4ddb12c47df481f84b5890683beaeaa55dc84e43d2cd023
1.32.7: sha256:c5416b59afdf897c4fbf08867c8a32b635f83f26e40980d38233fad6b345e37c
1.32.6: sha256:77fec65c6f08c28f8695de4db877d82d74c881ed3ed110ebfd88cbd4ee3d01dc
1.32.5: sha256:7270e6ac4b82b5e4bd037dccae1631964634214baa66a9548deb5edd3f79de31
1.32.4: sha256:bf28793213039690d018bbfa9bcfcfed76a9aa8e18dc299eced8709ca542fcdd
1.32.3: sha256:f990c878e54e5fac82eac7398ef643acca9807838b19014f1816fa9255b2d3d9
1.32.2: sha256:e1e6a2fd4571cd66c885aa42b290930660d34a7331ffb576fcab9fd1a0941a83
1.32.1: sha256:8ccf69be2578d3a324e9fc7d4f3b29bc9743cc02d72f33ba2d0fe30389014bc8
1.32.0: sha256:6b33ea8c80f785fb07be4d021301199ae9ee4f8d7ea037a8ae544d5a7514684e
arm64:
1.35.0: sha256:58f82f9fe796c375c5c4b8439850b0f3f4d401a52434052f2df46035a8789e25
1.34.3: sha256:46913a7aa0327f6cc2e1cc2775d53c4a2af5e52f7fd8dacbfbfd098e757f19e9
1.34.2: sha256:95df604e914941f3172a93fa8feeb1a1a50f4011dfbe0c01e01b660afc8f9b85
1.34.1: sha256:420e6110e3ba7ee5a3927b5af868d18df17aae36b720529ffa4e9e945aa95450
@@ -212,19 +181,8 @@ kubectl_checksums:
1.33.2: sha256:54dc02c8365596eaa2b576fae4e3ac521db9130e26912385e1e431d156f8344d
1.33.1: sha256:d595d1a26b7444e0beb122e25750ee4524e74414bbde070b672b423139295ce6
1.33.0: sha256:48541d119455ac5bcc5043275ccda792371e0b112483aa0b29378439cf6322b9
1.32.11: sha256:b1c91c106ec20e61c5dff869e9a39e6af4fb96572bddaac9cce307dfa3ed2348
1.32.10: sha256:1f4229526e16bf9f5b854fbf3bdb9c7040404a29c1d1e4193258b8a73de06e92
1.32.9: sha256:d5f6b45ad81b7d199187a28589e65f83406e0610b036491a9abaa49bfd04a708
1.32.8: sha256:8a7371e54187249389a9aa222b150d61a4a745c121ab24dbcbb56d1ac2d0b912
1.32.7: sha256:232f6e517633fbb4696c9eb7a0431ee14b3fccbb47360b4843d451e0d8c9a3a2
1.32.6: sha256:f7bac84f8c35f55fb2c6ad167beb59eba93de5924b50bbaa482caa14ff480eec
1.32.5: sha256:9edee84103e63c40a37cd15bd11e04e7835f65cb3ff5a50972058ffc343b4d96
1.32.4: sha256:c6f96d0468d6976224f5f0d81b65e1a63b47195022646be83e49d38389d572c2
1.32.3: sha256:6c2c91e760efbf3fa111a5f0b99ba8975fb1c58bb3974eca88b6134bcf3717e2
1.32.2: sha256:7381bea99c83c264100f324c2ca6e7e13738a73b8928477ac805991440a065cd
1.32.1: sha256:98206fd83a4fd17f013f8c61c33d0ae8ec3a7c53ec59ef3d6a0a9400862dc5b2
1.32.0: sha256:ba4004f98f3d3a7b7d2954ff0a424caa2c2b06b78c17b1dccf2acc76a311a896
amd64:
1.35.0: sha256:a2e984a18a0c063279d692533031c1eff93a262afcc0afdc517375432d060989
1.34.3: sha256:ab60ca5f0fd60c1eb81b52909e67060e3ba0bd27e55a8ac147cbc2172ff14212
1.34.2: sha256:9591f3d75e1581f3f7392e6ad119aab2f28ae7d6c6e083dc5d22469667f27253
1.34.1: sha256:7721f265e18709862655affba5343e85e1980639395d5754473dafaadcaa69e3
@@ -237,19 +195,8 @@ kubectl_checksums:
1.33.2: sha256:33d0cdec6967817468f0a4a90f537dfef394dcf815d91966ca651cc118393eea
1.33.1: sha256:5de4e9f2266738fd112b721265a0c1cd7f4e5208b670f811861f699474a100a3
1.33.0: sha256:9efe8d3facb23e1618cba36fb1c4e15ac9dc3ed5a2c2e18109e4a66b2bac12dc
1.32.11: sha256:48581d0e808bd8b7d3c3fc014e86b170e25a987df04c8a879b982b28a5180815
1.32.10: sha256:6e14ef4e509e9f3d1dfc2815643f832f853d2d9f6622d4a0f83f77c7e4014b57
1.32.9: sha256:509ae171bac7ad3b98cc49f5594d6bc84900cf6860f155968d1059fde3be5286
1.32.8: sha256:0fc709a8262be523293a18965771fedfba7466eda7ab4337feaa5c028aa46b1b
1.32.7: sha256:b8f24d467a8963354b028796a85904824d636132bef00988394cadacffe959c9
1.32.6: sha256:0e31ebf882578b50e50fe6c43e3a0e3db61f6a41c9cded46485bc74d03d576eb
1.32.5: sha256:aaa7e6ff3bd28c262f2d95c8c967597e097b092e9b79bcb37de699e7488e3e7b
1.32.4: sha256:10d739e9af8a59c9e7a730a2445916e04bc9cbb44bc79d22ce460cd329fa076c
1.32.3: sha256:ab209d0c5134b61486a0486585604a616a5bb2fc07df46d304b3c95817b2d79f
1.32.2: sha256:4f6a959dcc5b702135f8354cc7109b542a2933c46b808b248a214c1f69f817ea
1.32.1: sha256:e16c80f1a9f94db31063477eb9e61a2e24c1a4eee09ba776b029048f5369db0c
1.32.0: sha256:646d58f6d98ee670a71d9cdffbf6625aeea2849d567f214bc43a35f8ccb7bf70
ppc64le:
1.35.0: sha256:8989809d0ac771244dabe50ed742249ac60eeb6d385cd234ee151eb40b7c32c4
1.34.3: sha256:ae239b7f6f071e47014e1b5b20aa60626e06b32922a6b5054562ae2c5fa82c18
1.34.2: sha256:49a985986a9add6c229c628bf2a83addebbdeeef40469fce2a54e51b6f1bb05b
1.34.1: sha256:45499f0728b4a3428400db289edb444609d41787061f09b66f18028c0a73652f
@@ -262,20 +209,9 @@ kubectl_checksums:
1.33.2: sha256:d1cdf13cb786c1ee6d5bf6d85034f496aa2fee97b287028043eb14c5dc74993f
1.33.1: sha256:f922dd8f558dc616ebaa34908ceb7964ebb8caadd7c48699d0b791ffff2be1aa
1.33.0: sha256:580d076c891711ec37afaf5994f72a8aad9d45c25413e6e94648e988a5a9933a
1.32.11: sha256:4310edfc10fbc64cc69a25d27a1a8c4e134ad6642f8c83a8b0b612768ac63e84
1.32.10: sha256:544722455bc0a3f57b68e9aafe8bffa0af25d4f0f383848f03ba7aff2cab7e10
1.32.9: sha256:bdc8af9c1aed9737d58442f59034ad0125efe3a2dfad9f6ec14f1264e7020cc3
1.32.8: sha256:52cc07556a8f0076d4e48003aa416b486c729e9679dbe2ea92bbd88e5be5cc93
1.32.7: sha256:c0fb655243a98c4b063f39f2208c7b9d3cbe77b302a8b8b683aabe42e47fc556
1.32.6: sha256:808e2b86128a9f25922bdb099ebf276ba4220dbf53c63a033348ee119697b22a
1.32.5: sha256:1fc869a9d620982f16104f3b33c393aba54dd41136d18009bf6fc39accf6465c
1.32.4: sha256:61a8c1f441900b4e61defcb83bb54f61f883f9e75810897cfabfd6860ae7e195
1.32.3: sha256:11e1a377f404bdab6e3587375f7c2ee432df80b56d7ccf6151d4e48cd8063f55
1.32.2: sha256:c25500027cd331ae3e65bed2612491c5307721894e9d39e869f24ca14973677f
1.32.1: sha256:46d98d3463e065dff035d76f6c2b604c990d79634cc574d43b0c21f0367bbf0c
1.32.0: sha256:9f3f239e2601ce53ec4e70b80b7684f9c89817cc9938ed0bb14f125a3c4f8c8f
kubeadm_checksums:
arm64:
1.35.0: sha256:1dac7dc2c6a56548bbc6bf8a7ecf4734f2e733fb336d7293d84541ebe52d0e50
1.34.3: sha256:697cf3aa54f1a5740b883a3b18a5d051b4032fd68ba89af626781a43ec9bccc3
1.34.2: sha256:065f7de266c59831676cc48b50f404fd18d1f6464502d53980957158e4cab3a7
1.34.1: sha256:b0dc5cf091373caf87d069dc3678e661464837e4f10156f1436bd35a9a7db06b
@@ -288,19 +224,8 @@ kubeadm_checksums:
1.33.2: sha256:21efc1ba54a1cf25ac68208b7dde2e67f6d0331259f432947d83e70b975ad4cc
1.33.1: sha256:5b3e3a1e18d43522fdee0e15be13a42cee316e07ddcf47ef718104836edebb3e
1.33.0: sha256:746c0ee45f4d32ec5046fb10d4354f145ba1ff0c997f9712d46036650ad26340
1.32.11: sha256:0190c49b61b065409b1e99c70e5ec3c52576bf8902432fb2c97bf1d0d2777b69
1.32.10: sha256:a201f246be3d2c35ffa7fc51a1d2596797628f9b1455da52a246b42ce8e1f779
1.32.9: sha256:377349141e865849355140c78063fa2b87443bf1aecb06319be4de4df8dbd918
1.32.8: sha256:8dbd3fa2d94335d763b983caaf2798caae2d4183f6a95ebff28289f2e86edf68
1.32.7: sha256:a2aad7f7b320c3c847dea84c08e977ba8b5c84d4b7102b46ffd09d41af6c4b51
1.32.6: sha256:f786731c37ce6e89e6b71d5a7518e4d1c633337237e3803615056eb4640bfc8e
1.32.5: sha256:2956c694ff2891acdc4690b807f87ab48419b4925d3fad2ac52ace2a1160bd17
1.32.4: sha256:1b9d97b44758dc4da20d31e3b6d46f50af75ac48be887793e16797a43d9c30e7
1.32.3: sha256:f9d007aaf1468ea862ef2a1a1a3f6f34cc57358742ceaff518e1533f5a794181
1.32.2: sha256:fd8a8c1c41d719de703bf49c6f56692dd6477188d8f43dcb77019fd8bc30cbd3
1.32.1: sha256:55a57145708aaa37f716f140ef774ca64b7088b6df5ee8eae182936ad6580328
1.32.0: sha256:5da9746a449a3b8a8312b6dd8c48dcb861036cf394306cfbc66a298ba1e8fbde
amd64:
1.35.0: sha256:729e7fb34e4f1bfcf2bdaf2a14891ed64bd18c47aaab42f8cc5030875276cfed
1.34.3: sha256:f9ce265434d306e59d800b26f3049b8430ba71f815947f4bacdcdc33359417fb
1.34.2: sha256:6a2346006132f6e1ed0b5248e518098cf5abbce25bf11b8926fb1073091b83f4
1.34.1: sha256:20654fd7c5155057af5c30b86c52c9ba169db6229eee6ac7abab4309df4172e7
@@ -313,19 +238,8 @@ kubeadm_checksums:
1.33.2: sha256:5c623ec9a9b8584beba510da5c2b775c41cf51c0accdfb43af093bc084563845
1.33.1: sha256:9a481b0a5f1cee1e071bc9a0867ca0aad5524408c2580596c00767ba1a7df0bd
1.33.0: sha256:5a65cfec0648cabec124c41be8c61040baf2ba27a99f047db9ca08cac9344987
1.32.11: sha256:5e191b7329897a16ea87aed75b66f561e7243691620d6b792f34d488285484ce
1.32.10: sha256:1c5033ee113d9072a53ee1ef3a3b18e566721bb3879b49c6813c67066687afbc
1.32.9: sha256:183b3b12e39b3ed2dc2db25cbc17769610cdd5f02e9d1325ba747d54978d8f5f
1.32.8: sha256:da4cc996800db14f82fce8813caa55be318e52ef69d82e50e728ef4cfa18b69f
1.32.7: sha256:dcd40af0042c559f3218dbd23bf318b850a5213528b428e1637ccb357ac32498
1.32.6: sha256:7092527a63e5380a6be05cf6041c849ba8d13bf41a2adb2a029f44717f53439f
1.32.5: sha256:9070c3d469f5a3e777948b63a7a5e6c5bd7682c7416547770a78880fe4293ea9
1.32.4: sha256:445cdebd140dc0a9f4d18505821dcca77d7a21992133bf6731777f5724968255
1.32.3: sha256:be42caa726b85b7723605ca8fea22e4a26e0d439b789a3d9d6e636a7078b3db4
1.32.2: sha256:fb3a90f1bfc78146a8a03b50eb59aaf957a023c1c5a2b166062ef9412550bba6
1.32.1: sha256:5ed13bb4bc1d5fb4579b8cc8c7c2245356837122f9a3fd729c2f6d1338f58dcf
1.32.0: sha256:8a10abe691a693d6deeeb1c992bc75da9d8c76718a22327688f7eb1d7c15f0d6
ppc64le:
1.35.0: sha256:77a466e1b6a8e28362a729541269de0a7c4a6b9e7770cccefcd745502e656b90
1.34.3: sha256:2b8b48b3b0eb657e04122a158cb7fcad964fba5bd2d8e07f8eeec6f856a63ecf
1.34.2: sha256:bea4ed6d971523da794a802de15910b08c09e23bc4c850ee3b953c4bdb0b7976
1.34.1: sha256:ddb6bd80bee0719924ae901672b99205226badab74fb13a9e1bb6d3de49fbb21
@@ -338,18 +252,6 @@ kubeadm_checksums:
1.33.2: sha256:1b818900ac7af72a14f50300d6c6ad600eecdc578c37b75fa488cc654ca08c25
1.33.1: sha256:a772834ba22478c9119f03ecca2a27a70234623d74ff1d7671ee85675a4e830b
1.33.0: sha256:26cb7ac57d522a59c84c4784b176097d23c7b4e61874fab84ae719d0e43ac0bc
1.32.11: sha256:c7bb0bbac734290666f6deaba731f4eae46045c94ae53501153e4167dad51d34
1.32.10: sha256:5cfda89b98b6308f4d28e77eabc0111c3eb3c7b64baccf644ecdbcac90b258d0
1.32.9: sha256:fcc5aa3401d130156e0b73dab192631108b77e778f3d87838419993aea1ef8d5
1.32.8: sha256:b5e4f0da030de98f1179a148f6563d69fbfb4c35c2dd1de1d30f000805d12412
1.32.7: sha256:d87ec6c40aef05df1cb23298aff4a7a6c5af64c8a7a1671d4274385a0601b6cb
1.32.6: sha256:ec3fdb5f563b000c824bc4438664ae62797bf75cdcee1448e617f296cbd3e955
1.32.5: sha256:9ace8b24eba37d960a9cafd947015722c383bd695767b7a7c8449a4f6a3f3e9e
1.32.4: sha256:fb0223765d57c59ff4202445b3768e848b6d383dfac058b5882696bca0286053
1.32.3: sha256:68cc7669e47575ead58563c39abf89c7faf1c70fb6733ea9c727f303f2af1abf
1.32.2: sha256:02573483126e39c6b25c769131cf30ea7c470ad635374be343d5e76845a4ecdb
1.32.1: sha256:ff7f1dd3f1a6a5c0cf2c9977ec7c474bd22908850e33358dd40aeba17d8375b0
1.32.0: sha256:d79fe8cbd1d98bcbe56b8c0c3a64716603581cecf274951af49aa07748bf175a
etcd_binary_checksums:
arm64:
3.5.26: sha256:93ac1667df0e178ea6d152476ce4088df4075604fe4bc7f85f4719e863cd030b
@@ -440,6 +342,7 @@ cni_binary_checksums:
1.6.0: sha256:d8d4bd74247407c8c73de057bc00adac28bb1ed2d2ee60a9dda278e3b398bcc2
calicoctl_binary_checksums:
arm64:
3.30.6: sha256:47ecc00bdd797f82e4bac0ff3904c3a5143ba2d61e8ae1cbbce286ca76d3790a
3.30.5: sha256:7611343e7a56e770b95e2bb882dda787efbbd4331b1dd6316ff8ea189238dfaa
3.30.4: sha256:b21fbbc55b6f5d50c1c0faae714242cae3e013185cb8e26ce56981bd10da260d
3.30.3: sha256:2ae0474b88a6042e5489d7410d2669a9d443c9d5c51e2bdc8ebe4d6dd98f2475
@@ -461,6 +364,7 @@ calicoctl_binary_checksums:
3.28.1: sha256:c062d13534498a427c793a4a9190be4df3cf796a3feb29e4a501e1d6f48daa7c
3.28.0: sha256:c4ca8563d2a920729116a3a30171c481580c8c447938ce974ce14d7ce25a31bf
amd64:
3.30.6: sha256:2017e19727dca689d8bb73a9d8dff3c6a8ba7d8c75049f99ee207272161b5749
3.30.5: sha256:6cdfb17b0276f648f4fdb051a5d75617a50b3c328d4cccfc40d087b96c361d80
3.30.4: sha256:7e2e5e75b25c55683b68eabeb9b00390b1d359e72bf57f7ec2b76bb006fd175f
3.30.3: sha256:a7d017d1abf6ef5d6e03267187c0dd68c32f5e937b64decd29d003be44fa6b94
@@ -482,6 +386,7 @@ calicoctl_binary_checksums:
3.28.1: sha256:22ec5727c38dbe19001792b4ca64ac760a6e2985d5c1a231d919dbebe5bca171
3.28.0: sha256:4ea270699e67ca29e5533ddb0a68d370cb0005475796c7e841f83047da6297b6
ppc64le:
3.30.6: sha256:9a9c368499b1e3d08418dfbb566379483e15c50d08dd1bcaf6148c115d82ed36
3.30.5: sha256:5b6de49da1af2633549bff5e8f4d8a573a175b65c47c29d327ef6a0760d39a93
3.30.4: sha256:8fc8ef492d463e184e714bc6d31b05f9066c8af3445928efef233850f036bb92
3.30.3: sha256:ccd13ced62baf633fb4347fbe6c9fdc0d3b1b7deb1794c83c015507a0cb8238e
@@ -579,6 +484,7 @@ ciliumcli_binary_checksums:
0.16.0: sha256:da98675f961833d4ffd68b1046d907b228a7d394ded2abd70a50b20eaca171c4
calico_crds_archive_checksums:
no_arch:
3.30.6: sha256:d61aa5bcddfc78b0094acd54e0358009fa79e1cbe6d8c23bdacb34ff7a2c6c82
3.30.5: sha256:3a38f91596c204b43c70f642a3e686d8c3fbfdfa5caa7824b716aa2f4a4e568b
3.30.4: sha256:a9398f6de6cce8f683e0ad649a21f3d3b8bb5fe4cd26e7b26b33b9a8c740274f
3.30.3: sha256:36c50905b9b62a78638bcfb9d1c4faf1efa08e2013265dcd694ec4e370b78dd7
@@ -658,6 +564,7 @@ helm_archive_checksums:
3.16.0: sha256:d13a4b87b31a5b50c8d93dd9988dfb312a61e56504102f466a4004e5a3ab8e9e
cri_dockerd_archive_checksums:
arm64:
0.3.23: sha256:a78037d2d2e9c52c48372a5cbba7b94b1c57be5759449beef29cfe03cbe6f14b
0.3.22: sha256:3260b214c9b12dbf0cbf4d60410c45aacfc31ba52aa7b74164135968e8950cb6
0.3.21: sha256:35de6b1e8eba11d8ba6d71fa7499cb3d610a1e7b866c9d43b7f87029e3a769cd
0.3.20: sha256:e6b4661c51c832ee1cbbb75d1c8b086fa803acc153d400454c3b8cf324547d89
@@ -676,6 +583,7 @@ cri_dockerd_archive_checksums:
0.3.6: sha256:793b8f57cecf734c47bface10387a8e90994c570b516cb755900f21ebd0a663b
0.3.5: sha256:c20014dc5a71e6991a3bd7e1667c744e3807b5675b1724b26bb7c70093582cfe
amd64:
0.3.23: sha256:c7fe5db7f9396186193b58ded0e62a31eca7b3c58ad8691d57017986f96482ee
0.3.22: sha256:6621a96a885c82844d12318de00f510eae3459871cf1ad47317f38dd242f9a03
0.3.21: sha256:6c35838bc4b1aef74f9113670e114ca729a5f295f9457b226791e18e86e91698
0.3.20: sha256:2ce46d6bbd7f6a7e06e211836c201fdc2311111913eccc63a03f6ef4fe1958fc
@@ -813,6 +721,8 @@ kata_containers_binary_checksums:
3.5.0: sha256:fa4cf67d010244c4f8d0e6d450d04e28d1bbce5ad1a3cbc0154adff628d56c0c
gvisor_runsc_binary_checksums:
arm64:
'20260202.0': sha512:5fbb9c68efdf3a404217fb57be55051b4b5f8b83ca631101204615b87ff5b6ea8680cd6599e434f1d87fecb9071367b65e90cd8ad5df3f0b9f0101796ecc8c43
'20260126.0': sha512:c1b42f5789c09a68eb006964048448c058776440477fac83c7fd9cef879cec40878fb2f5f2450315ca0e7f568889f0b52c842b84929784a57023961f6eb77d04
'20260112.0': sha512:3b7925d26d71fdcb8cb552950c88bcfed658c06ad6b1211906bfe86d13bc56d8005ac90a4d9ab4c8b6a48eb62ec51ebcdfd45a64067ac5190274e710961e51ea
'20260105.0': sha512:cc98ad73e8d181f4738c97883180bc76cf8b2eb773c11f3a44f1636d0b0e00f2ee9228e4eecd414f94d6410f4877e6c93260b8070130fba767583026115d1038
'20251215.0': sha512:5e7d6206bce4164c9109d37dfb0b169d1c59cc256910de42799a868c3f9ba5560ef5c05c0de3fad4f0856f906463588ff25c9bce3b25e0d3f20874521dffe767
@@ -842,6 +752,8 @@ gvisor_runsc_binary_checksums:
'20250414.0': sha512:d1ba68b20057622e58e886f472e021a473222590c936a86951005d7b97366b446ef0342b91457ffc0d7e543d54c9c06a363f2883bdd6c594799c4ca1091dabd5
'20250407.0': sha512:cb590f72b0fbda45e89a2300e9247f12ff295a8c52653c8cf815c662d3fbbc774f9b915cdd4fad59e30694d8cc8737fe2a1a8186ab5136f7701bd6e6877a1662
amd64:
'20260202.0': sha512:f7bb9cc5e3f5e36a6788f959361415f6d7f7cd0225b8b4d99728da4b1ac7e5c7ce9c72b4c61e424ba93db77c983109d56b54907a3b2e2b982b34058410611023
'20260126.0': sha512:cce974fa832c50d26c6ccc08ce50b4972921cd0818ebe8007587211d360cbc828ceea4ec8296703200afa208b679437d24f27a6dca31887b3c0fc6ee8be5eb05
'20260112.0': sha512:b36de90cdad4cfe0b9b66318407da79c035dd6dcf4c1374250011f34e511c0a29e335fe04eabb0d3fe7140131925f619f724a4702b37c49557bdeb25924b4dc8
'20260105.0': sha512:15c8adabc9f1006d469177b0ec3962d4993e01c85be17d381a4979029eacc7db37ef354e3eafd279573135a1adf81baffc5c19f2bbfac932c79386f6ac74e52f
'20251215.0': sha512:ea82bb66ce61a80adb6edaa61e2f2b1cd6339c504a55dd6663555010ed7f96c6234ac787bd9ecdb29ed4058e806e829fa45f14093466913dafc44d56055a5acb
@@ -872,6 +784,8 @@ gvisor_runsc_binary_checksums:
'20250407.0': sha512:097259d6d93548bf669e21cfec5ba6a47081e43f61d22c5d8a8a4c0c209c81ac9c4454162b826f98cec49e047bbdc29c270113ab6db5519ef3e6a90f302fa47b
gvisor_containerd_shim_binary_checksums:
arm64:
'20260202.0': sha512:714ad3a53a28aa4acd891553d848278f5a873d0a1733836382eaf2bf701d62ece9cef324390602d2676af5e2e3a3d329486d2b18803c9cef5685220764757eb4
'20260126.0': sha512:84abf41b68ba450ed2cbbdf544e7d347d30f6fd577572e2e58f2fa8e038689f557953148287e26c8f4ee5040c1e928670f113bebca6d81ed7ce014ec4e0ad256
'20260112.0': sha512:3215952718bd1636173649c4742e3d8e1978c410abd71bb8252c8ad6d28130cb6d66684aa089f61a0eda0b8786553620a08a9f1b5ab824bb27b1b0cf47bfb25b
'20260105.0': sha512:cfe8a07c304dca21171e5a76614ac3605f5b1ec8f9ed2eeac014a44bc00821864f219db0e25fcc1c56cedbe335bbf34a7fa6bc57335888dcd04278bc0263f5cc
'20251215.0': sha512:2b3a00ec2d646a1c26c1944781b5caf039ce7035dd72281ccff8e244af55606e01667de311febee1a0a03ebd2633af6ebb0ad72d27b8a966743ffe31563b3a5a
@@ -901,6 +815,8 @@ gvisor_containerd_shim_binary_checksums:
'20250414.0': sha512:33b9c67bc7b73ca49154aff48da52029414a707b6a3a25eb4f71e861a94dec8fce220e63a162841670ddd4876f45b0e39abdf9f8c3235019c89f209684d3007d
'20250407.0': sha512:1c3838e10c905af0cb52697712bf6bd76b94c9e9d3d07a7643cd43dc2f8dab03b4ed4693c117e555e07a158e04ee583b6b1f1cf2fb9705244ffa5fdc4af67248
amd64:
'20260202.0': sha512:bd21b80502be25484d8b43168c88d66b6f3e853c78c0ae5b5206c5625e2a365e98c8b3ba259453d18c01d1aa08fb7c8c1e7f122fdcd7ef806bfc2f44f5837b5e
'20260126.0': sha512:51c3b4bc21cb5c3d4e3baf9f43e5fecd86c327abf0c84d492510f480cdfb38c90d43f3b0dbf1887ada8846d3806da79a73729acaedc570894ba6ed7cf9e083ed
'20260112.0': sha512:89f55750488559796fe51d2c10c289a8b0617fb9f6498714c026825268eeed449941d23e8cd5b285b69c1b032005ddeec278345198301c50d89ff6d3f66871a5
'20260105.0': sha512:7f3f5a864fda5f4e2de9db20dd5edad60b6aa467cc7c22d13f40cdce811783d66018f2c28fb74b907c6d6ac0e39f6d0e1047f1f33447b8a8682f1fbaa25edeb4
'20251215.0': sha512:538a04d88a39de1679afd9868806bd5fdc63737a4871955fc8a8c8e183942c6cc3dbd6b34b2f5589f5f474b4826427f149d5c6abec4ca8d09db363ff5f149b4f
@@ -1401,6 +1317,15 @@ gateway_api_experimental_crds_checksums:
1.0.0: sha256:6c601dced7872a940d76fa667ae126ba718cb4c6db970d0bab49128ecc1192a3
prometheus_operator_crds_checksums:
no_arch:
0.88.1: sha256:b827b8ec478e6b31cc1b85c1736570a3575953fe9f470fc29d0ffdb2803d94c4
0.88.0: sha256:11ee66653657f3abc1bc8c41e17aa950eadb66035edb7f84cd3a1cbe4c67b2a4
0.87.1: sha256:62490f7c1863539d61295f53784e27d70deec96a3b465832ba3cf96120e298b5
0.87.0: sha256:a5282133ffa634405b0414d2fdc07e6fe393124d1d5072073af363689dac6a62
0.86.2: sha256:7c9d455333ac5ea7837d5f0e4edd966698e44edd79108bafdd8508f2da503b5b
0.86.1: sha256:9a30912ba9970a2968d7a8bf030a9f6579a5e8b312961018b5fe4c1153fc5fce
0.86.0: sha256:0d2a590b288c79a98515e9fc4315451cfbde964c7977eb527696f7c2ebf47f58
0.85.0: sha256:30e1b1b034ebc750d50a77dc19841176d698d524edf677276a760f9e228e1208
0.84.1: sha256:f4a186ac58f354793e27a0b4b6f8baf5a31a9d10045e5085c23b0570dbfd30dd
0.84.0: sha256:8990f6837ccff4461df9abe19d31d532fef11386d85d861b392249fff2502255
argocd_install_checksums:
no_arch:


@@ -7,14 +7,14 @@ kube_next: "{{ ((kube_version | split('.'))[1] | int) + 1 }}"
kube_major_next_version: "1.{{ kube_next }}"
pod_infra_supported_versions:
'1.35': '3.10.1'
'1.34': '3.10.1'
'1.33': '3.10'
'1.32': '3.10'
etcd_supported_versions:
'1.35': "{{ (etcd_binary_checksums['amd64'].keys() | select('version', '3.6', '<'))[0] }}"
'1.34': "{{ (etcd_binary_checksums['amd64'].keys() | select('version', '3.6', '<'))[0] }}"
'1.33': "{{ (etcd_binary_checksums['amd64'].keys() | select('version', '3.6', '<'))[0] }}"
'1.32': "{{ (etcd_binary_checksums['amd64'].keys() | select('version', '3.6', '<'))[0] }}"
# Kubespray constants
kube_proxy_deployed: "{{ 'addon/kube-proxy' not in kubeadm_init_phases_skip }}"
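The etcd pin above leans on key order: etcd_binary_checksums lists versions newest-first, so after filtering the keys with the version test for "< 3.6", element [0] is the newest 3.5.x release. A minimal standalone sketch of the same selection (hypothetical checksums, localhost play for illustration only):

    - hosts: localhost
      gather_facts: false
      vars:
        etcd_binary_checksums:
          amd64:
            3.6.1: sha256:0000000000000000   # hypothetical
            3.5.26: sha256:1111111111111111  # hypothetical
            3.5.25: sha256:2222222222222222  # hypothetical
      tasks:
        - name: Pick the newest etcd release below 3.6
          debug:
            msg: "{{ (etcd_binary_checksums['amd64'].keys() | select('version', '3.6', '<'))[0] }}"
          # Prints "3.5.26": dict keys keep insertion order, so the first match is the newest.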


@@ -1,44 +0,0 @@
---
dependencies:
- role: network_plugin/cni
when: kube_network_plugin != 'none'
- role: network_plugin/cilium
when: kube_network_plugin == 'cilium' or cilium_deploy_additionally
tags:
- cilium
- role: network_plugin/calico
when: kube_network_plugin == 'calico'
tags:
- calico
- role: network_plugin/flannel
when: kube_network_plugin == 'flannel'
tags:
- flannel
- role: network_plugin/macvlan
when: kube_network_plugin == 'macvlan'
tags:
- macvlan
- role: network_plugin/kube-ovn
when: kube_network_plugin == 'kube-ovn'
tags:
- kube-ovn
- role: network_plugin/kube-router
when: kube_network_plugin == 'kube-router'
tags:
- kube-router
- role: network_plugin/custom_cni
when: kube_network_plugin == 'custom_cni'
tags:
- custom_cni
- role: network_plugin/multus
when: kube_network_plugin_multus
tags:
- multus


@@ -0,0 +1,47 @@
---
- name: Container Network Interface plugin
include_role:
name: network_plugin/cni
when: kube_network_plugin != 'none'
- name: Network plugin
include_role:
name: "network_plugin/{{ kube_network_plugin }}"
apply:
tags:
- "{{ kube_network_plugin }}"
- network
when:
- kube_network_plugin != 'none'
tags:
- cilium
- calico
- flannel
- macvlan
- kube-ovn
- kube-router
- custom_cni
- name: Cilium additional
include_role:
name: network_plugin/cilium
apply:
tags:
- cilium
- network
when:
- kube_network_plugin != 'cilium'
- cilium_deploy_additionally
tags:
- cilium
- name: Multus
include_role:
name: network_plugin/multus
apply:
tags:
- multus
- network
when: kube_network_plugin_multus
tags:
- multus
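A note on the include_role pattern used in this new tasks file: with a dynamic include, the outer tags: only decide whether the include task itself is selected, while apply: pushes the tags onto the tasks inside the included role. Both are needed so that a tag-limited run still descends into the right role, e.g. (hypothetical invocation):

    ansible-playbook cluster.yml --tags calico

Without the outer tags the include task would be skipped entirely under --tags; without apply the included role's tasks would not inherit them.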


@@ -21,6 +21,10 @@
- "{{ bin_dir }}/etcdctl"
- member
- remove
- "{{ '%x' | format(((etcd_members.stdout | from_json).members | selectattr('peerURLs.0', '==', etcd_peer_url))[0].ID) }}"
- "{{ '%x' | format(etcd_removed_nodes[0].ID) }}"
vars:
etcd_removed_nodes: "{{ (etcd_members.stdout | from_json).members | selectattr('peerURLs.0', '==', etcd_peer_url) }}"
# This should always have at most one member, since the etcd_peer_url should be unique in the etcd cluster
when: etcd_removed_nodes != []
register: etcd_removal_output
changed_when: "'Removed member' in etcd_removal_output.stdout"
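One detail worth spelling out: etcdctl member list --write-out=json reports member IDs as decimal integers, while etcdctl member remove expects the ID in hexadecimal, hence the '%x' | format(...) conversion above. A hedged sketch with a made-up member ID:

    - debug:
        msg: "{{ '%x' | format(3735928559) }}"
      # -> "deadbeef", the hex form etcdctl accepts on the command line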


@@ -34,6 +34,17 @@
tags:
- bootstrap_os
# Remove this after ansible-core >= 2.19.0
# See https://github.com/kubernetes-sigs/kubespray/pull/12138#issuecomment-3019304574
- name: Install python3-libdnf5 on Fedora >= 41
raw: >
dnf install --assumeyes python3-libdnf5
become: true
retries: "{{ pkg_install_retries }}"
when:
- ansible_distribution == "Fedora"
- ansible_distribution_major_version | int >= 41
- name: Manage packages
package:
name: "{{ item.packages | dict2items | selectattr('value', 'ansible.builtin.all') | map(attribute='key') }}"
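The raw module in the task above is deliberate: Ansible's dnf5-backed package modules need the python3-libdnf5 bindings on the target, which Fedora 41 does not ship by default, so the bootstrap has to run before any Python-based module can. An equivalent ad-hoc call for a manual bootstrap (hypothetical host pattern):

    ansible fedora41_nodes -b -m raw -a "dnf install --assumeyes python3-libdnf5"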


@@ -6,6 +6,17 @@
# -> nothing depending on facts or similar cluster state
# Checks depending on current state (of the nodes or the cluster)
# should be in roles/kubernetes/preinstall/tasks/0040-verify-settings.yml
- name: Fail if removed variables are used
vars:
# Always remove items from this list after the release in comments
removed_vars:
- kubelet_static_pod_path # 2.31.0
removed_vars_found: "{{ query('varnames', '^' + (removed_vars | join('|')) + '$') }}"
assert:
that: removed_vars_found | length == 0
fail_msg: "Removed variables present: {{ removed_vars_found | join(', ') }}"
run_once: true
- name: Stop if kube_control_plane group is empty
assert:
that: groups.get( 'kube_control_plane' )
@@ -20,7 +31,7 @@
when:
- not ignore_assert_errors
- name: Warn if `kube_network_plugin` is `none
- name: Warn if `kube_network_plugin` is `none`
debug:
msg: |
"WARNING! => `kube_network_plugin` is set to `none`. The network configuration will be skipped.
@@ -67,13 +78,6 @@
- kube_network_plugin not in ['calico', 'none']
- ipv4_stack | bool
- name: Stop if RBAC is not enabled when dashboard is enabled
assert:
that: rbac_enabled
when:
- dashboard_enabled
- not ignore_assert_errors
- name: Check cloud_provider value
assert:
that: cloud_provider == 'external'
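The removed-variable guard at the top of this file builds a regular expression from the list and hands it to the varnames lookup, which returns every defined variable name matching it. With the single entry the pattern is ^kubelet_static_pod_path$; if more names are ever joined with |, a grouped form like ^(a|b)$ would anchor more defensively, since ^a|b$ only anchors the outer alternatives. A minimal sketch that trips the check (variable set here purely to demonstrate the failure):

    - hosts: localhost
      gather_facts: false
      vars:
        kubelet_static_pod_path: /etc/kubernetes/manifests  # removed in 2.31.0, set only for the demo
      tasks:
        - assert:
            that: query('varnames', '^kubelet_static_pod_path$') | length == 0
            fail_msg: "Removed variables present: kubelet_static_pod_path"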


@@ -1,7 +1,7 @@
# syntax=docker/dockerfile:1
# Use immutable image tags rather than mutable tags (like ubuntu:22.04)
FROM ubuntu:22.04@sha256:149d67e29f765f4db62aa52161009e99e389544e25a8f43c8c89d4a445a7ca37
# Use immutable image tags rather than mutable tags (like ubuntu:24.04)
FROM ubuntu:noble-20260113@sha256:cd1dba651b3080c3686ecf4e3c4220f026b521fb76978881737d24f200828b2b
# Some tools like yamllint need this
# Pip needs this as well at the moment to install ansible
@@ -29,7 +29,7 @@ RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
RUN --mount=type=bind,source=requirements.txt,target=requirements.txt \
--mount=type=cache,sharing=locked,id=pipcache,mode=0777,target=/root/.cache/pip \
pip install --no-compile --no-cache-dir -r requirements.txt \
pip install --break-system-packages --no-compile --no-cache-dir -r requirements.txt \
&& find /usr -type d -name '*__pycache__' -prune -exec rm -rf {} \;
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
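Two things drive this hunk: Ubuntu 24.04 (noble) marks its system Python as externally managed per PEP 668, so a system-wide pip install now needs --break-system-packages (or a virtualenv), and pinning the base image by digest keeps the tag immutable, as the comment above advises.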


@@ -116,4 +116,9 @@ infos = {
"graphql_id": "R_kgDODQ6RZw",
"binary": True,
},
"prometheus_operator_crds": {
"url": "https://github.com/prometheus-operator/prometheus-operator/releases/download/v{version}/stripped-down-crds.yaml",
"graphql_id": "R_kgDOBBxPpw",
"binary": True,
},
}
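For a given release the url template expands by substituting {version}: 0.88.1, matching the checksum added above, yields https://github.com/prometheus-operator/prometheus-operator/releases/download/v0.88.1/stripped-down-crds.yaml. The graphql_id is the repository's GitHub GraphQL node ID used to look up releases; binary presumably flags a direct file download rather than an archive (an assumption from the field name).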


@@ -1,5 +1,5 @@
# Use immutable image tags rather than mutable tags (like ubuntu:22.04)
FROM ubuntu:jammy-20230308
# Use immutable image tags rather than mutable tags (like ubuntu:24.04)
FROM ubuntu:noble-20260113@sha256:cd1dba651b3080c3686ecf4e3c4220f026b521fb76978881737d24f200828b2b
# Some tools like yamllint need this
# Pip needs this as well at the moment to install ansible
# (and potentially other packages)
@@ -27,14 +27,14 @@ RUN apt update -q \
ca-certificates \
curl \
gnupg2 \
software-properties-common \
unzip \
libvirt-clients \
qemu-utils \
qemu-kvm \
dnsmasq \
&& curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - \
&& add-apt-repository "deb [arch=$(dpkg --print-architecture)] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" \
&& curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc \
&& echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | tee /etc/apt/sources.list.d/docker.list \
&& apt update -q \
&& apt install --no-install-recommends -yq docker-ce \
&& apt autoremove -yqq --purge && apt clean && rm -rf /var/lib/apt/lists/* /var/log/*
@@ -44,9 +44,8 @@ ADD ./requirements.txt /kubespray/requirements.txt
ADD ./tests/requirements.txt /kubespray/tests/requirements.txt
RUN update-alternatives --install /usr/bin/python python /usr/bin/python3 1 \
&& pip install --no-compile --no-cache-dir pip -U \
&& pip install --no-compile --no-cache-dir -r tests/requirements.txt \
&& pip install --no-compile --no-cache-dir -r requirements.txt \
&& pip install --break-system-packages --ignore-installed --no-compile --no-cache-dir pip -U \
&& pip install --break-system-packages --no-compile --no-cache-dir -r tests/requirements.txt \
&& curl -L https://dl.k8s.io/release/v{{ kube_version }}/bin/linux/$(dpkg --print-architecture)/kubectl -o /usr/local/bin/kubectl \
&& echo $(curl -L https://dl.k8s.io/release/v{{ kube_version }}/bin/linux/$(dpkg --print-architecture)/kubectl.sha256) /usr/local/bin/kubectl | sha256sum --check \
&& chmod a+x /usr/local/bin/kubectl \
@@ -56,5 +55,5 @@ RUN update-alternatives --install /usr/bin/python python /usr/bin/python3 1 \
&& rm vagrant_${VAGRANT_VERSION}-1_$(dpkg --print-architecture).deb \
&& vagrant plugin install vagrant-libvirt \
# Install Kubernetes collections
&& pip install --no-compile --no-cache-dir kubernetes \
&& pip install --break-system-packages --no-compile --no-cache-dir kubernetes \
&& ansible-galaxy collection install kubernetes.core
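The repository setup follows current Debian/Ubuntu guidance: apt-key is deprecated, so the Docker GPG key is dropped under /etc/apt/keyrings and referenced via signed-by in the sources entry, scoping the key to that single repository instead of trusting it globally.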


@@ -16,7 +16,6 @@
- Application
- [cert-manager](https://github.com/jetstack/cert-manager) {{ cert_manager_version }}
- [coredns](https://github.com/coredns/coredns) {{ coredns_version }}
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) {{ ingress_nginx_version }}
- [argocd](https://argoproj.github.io/) {{ argocd_version }}
- [helm](https://helm.sh/) {{ helm_version }}
- [metallb](https://metallb.universe.tf/) {{ metallb_version }}


@@ -5,8 +5,6 @@ vm_memory: 3072
# Kubespray settings
metrics_server_enabled: true
dashboard_namespace: "kube-dashboard"
dashboard_enabled: true
loadbalancer_apiserver_type: haproxy
local_path_provisioner_enabled: true


@@ -0,0 +1,9 @@
---
# Instance settings
cloud_image: fedora-41
# Kubespray settings
auto_renew_certificates: true
# Test with SELinux in enforcing mode
preinstall_selinux_state: enforcing


@@ -0,0 +1,14 @@
---
# Instance settings
cloud_image: fedora-41
# Kubespray settings
auto_renew_certificates: true
# Test with SELinux in enforcing mode
preinstall_selinux_state: enforcing
# Test the NodeSwap feature by leveraging the zswap default config in Fedora 41
kubelet_fail_swap_on: false
kube_feature_gates:
- "NodeSwap=True"


@@ -0,0 +1,7 @@
---
# Instance settings
cloud_image: fedora-41
# Kubespray settings
container_manager: crio
auto_renew_certificates: true

Some files were not shown because too many files have changed in this diff.