Compare commits

23 Commits

Author SHA1 Message Date
VoidQuark
fae47ab9e6 fix(cilium): quote empty string defaults to prevent null in Helm values (#13109) 2026-03-18 12:43:42 +05:30
Ali Afsharzadeh
e979e770f2 Fix calico api server permissions (#13101)
Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2026-03-17 16:19:50 +05:30
Ali Afsharzadeh
b1e3816b2f Add calico-tier-getter RBAC (#13100)
Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2026-03-17 16:19:42 +05:30
Vitaly
391b08c645 fix: use nodelocaldns_ip with ipv6 address (#13087) 2026-03-17 16:07:38 +05:30
NoNE
39b97464be Use async/poll on drain tasks to prevent SSH connection timeouts (#13081) 2026-03-17 14:31:36 +05:30
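The async/poll approach in the drain change above can be sketched as a minimal Ansible example (the task, command, and values here are illustrative assumptions, not the exact Kubespray code):

```yaml
# Illustrative sketch only: run a long-running drain in the background with
# async, then poll periodically, so the SSH session is not held open long
# enough to hit a connection timeout.
- name: Drain node (async to avoid SSH timeouts)
  ansible.builtin.command: >-
    kubectl drain {{ inventory_hostname }}
    --ignore-daemonsets --delete-emptydir-data
  async: 600   # allow up to 10 minutes of background execution
  poll: 10     # check back every 10 seconds for completion
```

With `poll` greater than zero, Ansible reconnects at each interval instead of keeping a single long-lived connection open for the whole drain.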
Kay Yan
3c6d368397 ci(openeuler): improve mirror selection and stabilize CI checks (#13094)
Enable openEuler metalink and clear dnf cache after repo updates so package downloads use refreshed mirror metadata. Keep openeuler24-calico in the main CI matrix with a longer package timeout, and clean up failed pods before running-state checks to reduce transient CI noise.


Made-with: Cursor
Signed-off-by: Kay Yan <kay.yan@daocloud.io>
2026-03-17 13:29:37 +05:30
Max Gautier
03d17fea92 proxy: Fix the no_proxy variable (#12981)
* CI: add no_proxy regression test

* proxy: Fix the no_proxy variable

Since 2.29, probably due to a change in Ansible templating, the no_proxy
variable is rendered as an array of characters rather than a string.

This results in broken clusters in some cases.

Eliminate the custom Jinja looping and use filters with list flattening +
join instead.
Also simplify some things (no separate tasks file; just use `run_once`
instead of delegating to localhost)
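The filter-based approach described in this commit message can be sketched as follows (variable names are illustrative, not the actual Kubespray template):

```yaml
# Illustrative sketch only: build no_proxy as one comma-joined string via
# filters, rather than custom Jinja looping, which could render the value
# as an array of characters instead of a string.
- name: Build no_proxy as a single string
  ansible.builtin.set_fact:
    no_proxy: "{{ [default_entries, host_ips] | flatten | unique | join(',') }}"
  run_once: true  # computed once for the play, no delegation to localhost
```

`flatten` collapses the nested lists, `unique` drops duplicates, and `join(',')` guarantees the result is a plain string.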
2026-03-17 03:45:37 +05:30
Cheprasov Daniil
dbb8527560 docs(etcd): clarify etcd metrics scraping with listen-metrics-urls (#13059) 2026-03-16 14:37:39 +05:30
Shaleen Bathla
7acdc4df64 cilium: honor resource limits and requests by default (#13092)
Signed-off-by: Shaleen Bathla <shaleen.bathla@servicenow.com>
2026-03-16 08:49:40 +05:30
Ali Afsharzadeh
a51773e78f Upgrade cilium from 1.18.6 to 1.19.1 (#13095)
Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2026-03-14 09:39:34 +05:30
Jannick Kappelmann
096dd1875a Added kube_version check (#13071) 2026-03-13 17:25:36 +05:30
Viktor
e3b5c41ced fix: update volumesnapshotclass to v1 (#12775) 2026-03-11 20:31:38 +05:30
Srishti Jaiswal
ba70ed35f0 Remove kubeadm config api version: v1beta3 for kubeadm config template (#13027) 2026-03-11 20:27:38 +05:30
Ali Afsharzadeh
1bafb8e882 Update load balancer versions to Nginx 1.28.2 and Haproxy 3.2.13 (#13034)
Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2026-03-11 20:17:37 +05:30
Cheprasov Daniil
3bdd70c5d8 Feat: make kube-vip BGP source configurable (#13044) 2026-03-11 19:17:38 +05:30
Shaleen Bathla
979fe25521 cilium: remove unused bpf clock probe variable (#13050)
Signed-off-by: Shaleen Bathla <shaleen.bathla@servicenow.com>
2026-03-11 16:39:38 +05:30
dependabot[bot]
7e7b016a15 build(deps): bump molecule from 26.2.0 to 26.3.0 in the molecule group (#13079)
Bumps the molecule group with 1 update: [molecule](https://github.com/ansible-community/molecule).


Updates `molecule` from 26.2.0 to 26.3.0
- [Release notes](https://github.com/ansible-community/molecule/releases)
- [Commits](https://github.com/ansible-community/molecule/compare/v26.2.0...v26.3.0)

---
updated-dependencies:
- dependency-name: molecule
  dependency-version: 26.3.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: molecule
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-10 23:49:12 +05:30
Ali Afsharzadeh
da6539c7a0 Enable create_namespace option for custom_cni with helm (#13061)
Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2026-03-05 15:12:20 +05:30
dependabot[bot]
459f31034e build(deps): bump molecule from 25.12.0 to 26.2.0 in the molecule group (#13065)
Bumps the molecule group with 1 update: [molecule](https://github.com/ansible-community/molecule).


Updates `molecule` from 25.12.0 to 26.2.0
- [Release notes](https://github.com/ansible-community/molecule/releases)
- [Commits](https://github.com/ansible-community/molecule/compare/v25.12.0...v26.2.0)

---
updated-dependencies:
- dependency-name: molecule
  dependency-version: 26.2.0
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: molecule
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-05 09:22:17 +05:30
Max Gautier
f66e11e5cc CI: Fix terraform job not using correct extra vars (#13057) 2026-03-04 02:00:18 +05:30
Max Gautier
0c47a6891e Remove netcheker as we now use hydrophone for network tests (#13058) 2026-03-02 19:40:54 +05:30
NoNE
a866292279 Deduplicate GraphQL node IDs in update-hashes to fix 502 err (#13064)
* Deduplicate GraphQL node IDs in update-hashes to fix 502

* Bump component_hash_update version to 1.0.1

Avoids stale pip/uv installation cache in CI pipelines
after the GraphQL deduplication fix.
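The actual fix lives in the update-hashes helper, but the deduplication idea can be sketched in the repository's own Ansible/Jinja idiom (names here are illustrative assumptions):

```yaml
# Illustrative sketch only: deduplicate IDs before issuing a batched query,
# since repeated node IDs in a single GraphQL request can trigger server
# errors such as 502.
- name: Deduplicate GraphQL node IDs
  ansible.builtin.set_fact:
    node_ids: "{{ raw_node_ids | unique }}"
```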
2026-03-02 16:00:13 +05:30
ChengHao Yang
98ac2e40bf Test: fix vm_memory not enough for testing (#13060)
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2026-03-02 13:52:13 +05:30
61 changed files with 296 additions and 1050 deletions

View File

@@ -57,6 +57,7 @@ pr:
- ubuntu24-kube-router-svc-proxy
- ubuntu24-ha-separate-etcd
- fedora40-flannel-crio-collection-scale
- openeuler24-calico
# This is for flaky tests so they don't disrupt the PR workflow too much.
# Jobs here MUST have an open issue so we don't lose sight of them
@@ -67,7 +68,6 @@ pr-flakey:
matrix:
- TESTCASE:
- flatcar4081-calico # https://github.com/kubernetes-sigs/kubespray/issues/12309
- openeuler24-calico # https://github.com/kubernetes-sigs/kubespray/issues/12877
# The ubuntu24-calico-all-in-one jobs are meant as early stages to prevent running the full CI if something is horribly broken
ubuntu24-calico-all-in-one:

View File

@@ -116,3 +116,4 @@ tf-elastx_ubuntu24-calico:
TF_VAR_flavor_k8s_node: 3f73fc93-ec61-4808-88df-2580d94c1a9b # v1-standard-2
TF_VAR_image: ubuntu-24.04-server-latest
TF_VAR_k8s_allowed_remote_ips: '["0.0.0.0/0"]'
TESTCASE: $CI_JOB_NAME

View File

@@ -119,7 +119,7 @@ Note:
- Network Plugin
- [cni-plugins](https://github.com/containernetworking/plugins) 1.8.0
- [calico](https://github.com/projectcalico/calico) 3.30.6
- [cilium](https://github.com/cilium/cilium) 1.18.6
- [cilium](https://github.com/cilium/cilium) 1.19.1
- [flannel](https://github.com/flannel-io/flannel) 0.27.3
- [kube-ovn](https://github.com/alauda/kube-ovn) 1.12.21
- [kube-router](https://github.com/cloudnativelabs/kube-router) 2.1.1

View File

@@ -245,7 +245,7 @@ cilium_operator_extra_volume_mounts:
## Choose Cilium version
```yml
cilium_version: "1.18.6"
cilium_version: "1.19.1"
```
## Add variable to config

View File

@@ -63,6 +63,8 @@ kube_vip_bgppeers:
# kube_vip_bgp_peeraddress:
# kube_vip_bgp_peerpass:
# kube_vip_bgp_peeras:
# kube_vip_bgp_sourceip:
# kube_vip_bgp_sourceif:
```
If using [control plane load-balancing](https://kube-vip.io/docs/about/architecture/#control-plane-load-balancing):

View File

@@ -32,12 +32,12 @@ etcd_metrics_service_labels:
k8s-app: etcd
app.kubernetes.io/managed-by: Kubespray
app: kube-prometheus-stack-kube-etcd
release: prometheus-stack
release: kube-prometheus-stack
```
The last two labels in the above example allow scraping the metrics from the
[kube-prometheus-stack](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack)
chart with the following Helm `values.yaml` :
chart when it is installed with the release name `kube-prometheus-stack` and the following Helm `values.yaml`:
```yaml
kubeEtcd:
@@ -45,8 +45,22 @@ kubeEtcd:
enabled: false
```
To fully override metrics exposition urls, define it in the inventory with:
If your Helm release name is different, adjust the `release` label accordingly.
To fully override metrics exposition URLs, define it in the inventory with:
```yaml
etcd_listen_metrics_urls: "http://0.0.0.0:2381"
```
If you choose to expose metrics on specific node IPs (for example `10.141.4.22`, `10.141.4.23`, `10.141.4.24`) in `etcd_listen_metrics_urls`,
you can configure kube-prometheus-stack to scrape those endpoints directly with:
```yaml
kubeEtcd:
enabled: true
endpoints:
- 10.141.4.22
- 10.141.4.23
- 10.141.4.24
```

View File

@@ -199,6 +199,8 @@ kube_vip_enabled: false
# kube_vip_leasename: plndr-cp-lock
# kube_vip_enable_node_labeling: false
# kube_vip_lb_fwdmethod: local
# kube_vip_bgp_sourceip:
# kube_vip_bgp_sourceif:
# Node Feature Discovery
node_feature_discovery_enabled: false

View File

@@ -361,8 +361,6 @@ cilium_l2announcements: false
# -- Enable the use of well-known identities.
# cilium_enable_well_known_identities: false
# cilium_enable_bpf_clock_probe: true
# -- Whether to enable CNP status updates.
# cilium_disable_cnp_status_updates: true

View File

@@ -16,6 +16,8 @@
- name: Gather and compute network facts
import_role:
name: network_facts
tags:
- always
- name: Gather minimal facts
setup:
gather_subset: '!all'

View File

@@ -12,6 +12,10 @@ coreos_locksmithd_disable: false
# Install epel repo on Centos/RHEL
epel_enabled: false
## openEuler specific variables
# Enable metalink for openEuler repos (auto-selects fastest mirror by location)
openeuler_metalink_enabled: false
## Oracle Linux specific variables
# Install public repo on Oracle Linux
use_oracle_public_repo: true

View File

@@ -1,3 +1,43 @@
---
- name: Import Centos boostrap for openEuler
import_tasks: centos.yml
- name: Import CentOS bootstrap for openEuler
ansible.builtin.import_tasks: centos.yml
- name: Get existing openEuler repo sections
ansible.builtin.shell:
cmd: "set -o pipefail && grep '^\\[' /etc/yum.repos.d/openEuler.repo | tr -d '[]'"
executable: /bin/bash
register: _openeuler_repo_sections
changed_when: false
failed_when: false
check_mode: false
become: true
when: openeuler_metalink_enabled
- name: Enable metalink for openEuler repos
community.general.ini_file:
path: /etc/yum.repos.d/openEuler.repo
section: "{{ item.key }}"
option: metalink
value: "{{ item.value }}"
no_extra_spaces: true
mode: "0644"
loop: "{{ _openeuler_metalink_repos | dict2items | selectattr('key', 'in', _openeuler_repo_sections.stdout_lines | default([])) }}"
become: true
when: openeuler_metalink_enabled
register: _openeuler_metalink_result
vars:
_openeuler_metalink_repos:
OS: "https://mirrors.openeuler.org/metalink?repo=$releasever/OS&arch=$basearch"
everything: "https://mirrors.openeuler.org/metalink?repo=$releasever/everything&arch=$basearch"
EPOL: "https://mirrors.openeuler.org/metalink?repo=$releasever/EPOL/main&arch=$basearch"
debuginfo: "https://mirrors.openeuler.org/metalink?repo=$releasever/debuginfo&arch=$basearch"
source: "https://mirrors.openeuler.org/metalink?repo=$releasever&arch=source"
update: "https://mirrors.openeuler.org/metalink?repo=$releasever/update&arch=$basearch"
update-source: "https://mirrors.openeuler.org/metalink?repo=$releasever/update&arch=source"
- name: Clean dnf cache to apply metalink mirror selection
ansible.builtin.command: dnf clean all
become: true
when:
- openeuler_metalink_enabled
- _openeuler_metalink_result.changed

View File

@@ -1,9 +1,9 @@
apiVersion: kubeadm.k8s.io/{{ kubeadm_config_api_version }}
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
nodeRegistration:
criSocket: {{ cri_socket }}
---
apiVersion: kubeadm.k8s.io/{{ kubeadm_config_api_version }}
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
imageRepository: {{ kubeadm_image_repo }}
kubernetesVersion: v{{ kube_version }}

View File

@@ -88,36 +88,5 @@ dns_autoscaler_affinity: {}
# app: kube-prometheus-stack-kube-etcd
# release: prometheus-stack
# Netchecker
deploy_netchecker: false
netchecker_port: 31081
agent_report_interval: 15
netcheck_namespace: default
# Limits for netchecker apps
netchecker_agent_cpu_limit: 30m
netchecker_agent_memory_limit: 100M
netchecker_agent_cpu_requests: 15m
netchecker_agent_memory_requests: 64M
netchecker_server_cpu_limit: 100m
netchecker_server_memory_limit: 256M
netchecker_server_cpu_requests: 50m
netchecker_server_memory_requests: 64M
netchecker_etcd_cpu_limit: 200m
netchecker_etcd_memory_limit: 256M
netchecker_etcd_cpu_requests: 100m
netchecker_etcd_memory_requests: 128M
# SecurityContext (user/group)
netchecker_agent_user: 1000
netchecker_server_user: 1000
netchecker_agent_group: 1000
netchecker_server_group: 1000
# Log levels
netchecker_agent_log_level: 5
netchecker_server_log_level: 5
netchecker_etcd_log_level: info
# Policy Controllers
# policy_controller_extra_tolerations: [{effect: NoSchedule, operator: "Exists"}]

View File

@@ -87,25 +87,3 @@
when: etcd_metrics_port is defined and etcd_metrics_service_labels is defined
tags:
- etcd_metrics
- name: Kubernetes Apps | Netchecker
command:
cmd: "{{ kubectl_apply_stdin }}"
stdin: "{{ lookup('template', item) }}"
delegate_to: "{{ groups['kube_control_plane'][0] }}"
run_once: true
vars:
k8s_namespace: "{{ netcheck_namespace }}"
when: deploy_netchecker
tags:
- netchecker
loop:
- netchecker-ns.yml.j2
- netchecker-agent-sa.yml.j2
- netchecker-agent-ds.yml.j2
- netchecker-agent-hostnet-ds.yml.j2
- netchecker-server-sa.yml.j2
- netchecker-server-clusterrole.yml.j2
- netchecker-server-clusterrolebinding.yml.j2
- netchecker-server-deployment.yml.j2
- netchecker-server-svc.yml.j2

View File

@@ -1,56 +0,0 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
labels:
app: netchecker-agent
name: netchecker-agent
namespace: {{ netcheck_namespace }}
spec:
selector:
matchLabels:
app: netchecker-agent
template:
metadata:
name: netchecker-agent
labels:
app: netchecker-agent
spec:
priorityClassName: {% if netcheck_namespace == 'kube-system' %}system-node-critical{% else %}k8s-cluster-critical{% endif %}{{ '' }}
tolerations:
- effect: NoSchedule
operator: Exists
nodeSelector:
kubernetes.io/os: linux
containers:
- name: netchecker-agent
image: "{{ netcheck_agent_image_repo }}:{{ netcheck_agent_image_tag }}"
imagePullPolicy: {{ k8s_image_pull_policy }}
env:
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: MY_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
args:
- "-v={{ netchecker_agent_log_level }}"
- "-alsologtostderr=true"
- "-serverendpoint=netchecker-service:8081"
- "-reportinterval={{ agent_report_interval }}"
resources:
limits:
cpu: {{ netchecker_agent_cpu_limit }}
memory: {{ netchecker_agent_memory_limit }}
requests:
cpu: {{ netchecker_agent_cpu_requests }}
memory: {{ netchecker_agent_memory_requests }}
securityContext:
runAsUser: {{ netchecker_agent_user | default('0') }}
runAsGroup: {{ netchecker_agent_group | default('0') }}
serviceAccountName: netchecker-agent
updateStrategy:
rollingUpdate:
maxUnavailable: 100%
type: RollingUpdate

View File

@@ -1,58 +0,0 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
labels:
app: netchecker-agent-hostnet
name: netchecker-agent-hostnet
namespace: {{ netcheck_namespace }}
spec:
selector:
matchLabels:
app: netchecker-agent-hostnet
template:
metadata:
name: netchecker-agent-hostnet
labels:
app: netchecker-agent-hostnet
spec:
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
nodeSelector:
kubernetes.io/os: linux
priorityClassName: {% if netcheck_namespace == 'kube-system' %}system-node-critical{% else %}k8s-cluster-critical{% endif %}{{ '' }}
tolerations:
- effect: NoSchedule
operator: Exists
containers:
- name: netchecker-agent
image: "{{ netcheck_agent_image_repo }}:{{ netcheck_agent_image_tag }}"
imagePullPolicy: {{ k8s_image_pull_policy }}
env:
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: MY_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
args:
- "-v={{ netchecker_agent_log_level }}"
- "-alsologtostderr=true"
- "-serverendpoint=netchecker-service:8081"
- "-reportinterval={{ agent_report_interval }}"
resources:
limits:
cpu: {{ netchecker_agent_cpu_limit }}
memory: {{ netchecker_agent_memory_limit }}
requests:
cpu: {{ netchecker_agent_cpu_requests }}
memory: {{ netchecker_agent_memory_requests }}
securityContext:
runAsUser: {{ netchecker_agent_user | default('0') }}
runAsGroup: {{ netchecker_agent_group | default('0') }}
serviceAccountName: netchecker-agent
updateStrategy:
rollingUpdate:
maxUnavailable: 100%
type: RollingUpdate

View File

@@ -1,5 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: netchecker-agent
namespace: {{ netcheck_namespace }}

View File

@@ -1,6 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
name: "{{ netcheck_namespace }}"
labels:
name: "{{ netcheck_namespace }}"

View File

@@ -1,9 +0,0 @@
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: netchecker-server
namespace: {{ netcheck_namespace }}
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["list", "get"]

View File

@@ -1,13 +0,0 @@
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: netchecker-server
namespace: {{ netcheck_namespace }}
subjects:
- kind: ServiceAccount
name: netchecker-server
namespace: {{ netcheck_namespace }}
roleRef:
kind: ClusterRole
name: netchecker-server
apiGroup: rbac.authorization.k8s.io

View File

@@ -1,86 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: netchecker-server
namespace: {{ netcheck_namespace }}
labels:
app: netchecker-server
spec:
replicas: 1
selector:
matchLabels:
app: netchecker-server
template:
metadata:
name: netchecker-server
labels:
app: netchecker-server
spec:
priorityClassName: {% if netcheck_namespace == 'kube-system' %}system-cluster-critical{% else %}k8s-cluster-critical{% endif %}{{ '' }}
volumes:
- name: etcd-data
emptyDir: {}
containers:
- name: netchecker-server
image: "{{ netcheck_server_image_repo }}:{{ netcheck_server_image_tag }}"
imagePullPolicy: {{ k8s_image_pull_policy }}
resources:
limits:
cpu: {{ netchecker_server_cpu_limit }}
memory: {{ netchecker_server_memory_limit }}
requests:
cpu: {{ netchecker_server_cpu_requests }}
memory: {{ netchecker_server_memory_requests }}
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop: ['ALL']
runAsUser: {{ netchecker_server_user | default('0') }}
runAsGroup: {{ netchecker_server_group | default('0') }}
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
ports:
- containerPort: 8081
args:
- -v={{ netchecker_server_log_level }}
- -logtostderr
- -kubeproxyinit=false
- -endpoint=0.0.0.0:8081
- -etcd-endpoints=http://127.0.0.1:2379
- name: etcd
image: "{{ etcd_image_repo }}:{{ netcheck_etcd_image_tag }}"
imagePullPolicy: {{ k8s_image_pull_policy }}
env:
- name: ETCD_LOG_LEVEL
value: "{{ netchecker_etcd_log_level }}"
command:
- etcd
- --listen-client-urls=http://127.0.0.1:2379
- --advertise-client-urls=http://127.0.0.1:2379
- --data-dir=/var/lib/etcd
- --enable-v2
- --force-new-cluster
volumeMounts:
- mountPath: /var/lib/etcd
name: etcd-data
resources:
limits:
cpu: {{ netchecker_etcd_cpu_limit }}
memory: {{ netchecker_etcd_memory_limit }}
requests:
cpu: {{ netchecker_etcd_cpu_requests }}
memory: {{ netchecker_etcd_memory_requests }}
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop: ['ALL']
runAsUser: {{ netchecker_server_user | default('0') }}
runAsGroup: {{ netchecker_server_group | default('0') }}
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
tolerations:
- effect: NoSchedule
operator: Exists
serviceAccountName: netchecker-server

View File

@@ -1,5 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: netchecker-server
namespace: {{ netcheck_namespace }}

View File

@@ -1,15 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: netchecker-service
namespace: {{ netcheck_namespace }}
spec:
selector:
app: netchecker-server
ports:
-
protocol: TCP
port: 8081
targetPort: 8081
nodePort: {{ netchecker_port }}
type: NodePort

View File

@@ -45,7 +45,7 @@ data:
force_tcp
}
prometheus {% if nodelocaldns_bind_metrics_host_ip %}{$MY_HOST_IP}{% endif %}:{{ nodelocaldns_prometheus_port }}
health {{ nodelocaldns_ip }}:{{ nodelocaldns_health_port }}
health {{ nodelocaldns_ip | ansible.utils.ipwrap }}:{{ nodelocaldns_health_port }}
{% if dns_etchosts | default(None) %}
hosts /etc/coredns/hosts {
fallthrough
@@ -132,7 +132,7 @@ data:
force_tcp
}
prometheus {% if nodelocaldns_bind_metrics_host_ip %}{$MY_HOST_IP}{% endif %}:{{ nodelocaldns_secondary_prometheus_port }}
health {{ nodelocaldns_ip }}:{{ nodelocaldns_second_health_port }}
health {{ nodelocaldns_ip | ansible.utils.ipwrap }}:{{ nodelocaldns_second_health_port }}
{% if dns_etchosts | default(None) %}
hosts /etc/coredns/hosts {
fallthrough

View File

@@ -1,7 +1,7 @@
{% for class in snapshot_classes %}
---
kind: VolumeSnapshotClass
apiVersion: snapshot.storage.k8s.io/v1beta1
apiVersion: snapshot.storage.k8s.io/v1
metadata:
name: "{{ class.name }}"
annotations:

View File

@@ -95,7 +95,7 @@
- name: Kubeadm | Create kubeadm config
template:
src: "kubeadm-config.{{ kubeadm_config_api_version }}.yaml.j2"
src: "kubeadm-config.v1beta4.yaml.j2"
dest: "{{ kube_config_dir }}/kubeadm-config.yaml"
mode: "0640"
validate: "{{ kubeadm_config_validate_enabled | ternary(bin_dir + '/kubeadm config validate --config %s', omit) }}"

View File

@@ -2,44 +2,21 @@
- name: Ensure kube-apiserver is up before upgrade
import_tasks: check-api.yml
# kubeadm-config.v1beta4 with UpgradeConfiguration requires some values that were previously allowed as args to be specified in the config file
# TODO: Remove --skip-phases from command when v1beta4 UpgradeConfiguration supports skipPhases
- name: Kubeadm | Upgrade first control plane node to {{ kube_version }}
command: >-
timeout -k 600s 600s
{{ bin_dir }}/kubeadm upgrade apply -y v{{ kube_version }}
{%- if kubeadm_config_api_version == 'v1beta3' %}
--certificate-renewal={{ kubeadm_upgrade_auto_cert_renewal }}
--ignore-preflight-errors={{ kubeadm_ignore_preflight_errors | join(',') }}
--allow-experimental-upgrades
--etcd-upgrade={{ (etcd_deployment_type == "kubeadm") | lower }}
{% if kubeadm_patches | length > 0 %}--patches={{ kubeadm_patches_dir }}{% endif %}
--force
{%- else %}
--config={{ kube_config_dir }}/kubeadm-config.yaml
{%- endif %}
{%- if kube_version is version('1.32.0', '>=') %}
--skip-phases={{ kubeadm_init_phases_skip | join(',') }}
{%- endif %}
register: kubeadm_upgrade
when: inventory_hostname == first_kube_control_plane
failed_when: kubeadm_upgrade.rc != 0 and "field is immutable" not in kubeadm_upgrade.stderr
environment:
PATH: "{{ bin_dir }}:{{ ansible_env.PATH }}"
# TODO: When we retire kubeadm-config.v1beta3, remove --certificate-renewal, --ignore-preflight-errors, --etcd-upgrade, --patches, and --skip-phases from command, since v1beta4+ supports these in UpgradeConfiguration.node
- name: Kubeadm | Upgrade other control plane nodes to {{ kube_version }}
command: >-
{{ bin_dir }}/kubeadm upgrade node
{%- if kubeadm_config_api_version == 'v1beta3' %}
--certificate-renewal={{ kubeadm_upgrade_auto_cert_renewal }}
--ignore-preflight-errors={{ kubeadm_ignore_preflight_errors | join(',') }}
--etcd-upgrade={{ (etcd_deployment_type == "kubeadm") | lower }}
{% if kubeadm_patches | length > 0 %}--patches={{ kubeadm_patches_dir }}{% endif %}
{%- else %}
--config={{ kube_config_dir }}/kubeadm-config.yaml
{%- endif %}
--skip-phases={{ kubeadm_upgrade_node_phases_skip | join(',') }}
register: kubeadm_upgrade
when: inventory_hostname != first_kube_control_plane
failed_when: kubeadm_upgrade.rc != 0 and "field is immutable" not in kubeadm_upgrade.stderr

View File

@@ -1,445 +0,0 @@
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
{% if kubeadm_token is defined %}
bootstrapTokens:
- token: "{{ kubeadm_token }}"
description: "kubespray kubeadm bootstrap token"
ttl: "24h"
{% endif %}
localAPIEndpoint:
advertiseAddress: "{{ kube_apiserver_address }}"
bindPort: {{ kube_apiserver_port }}
{% if kubeadm_certificate_key is defined %}
certificateKey: {{ kubeadm_certificate_key }}
{% endif %}
nodeRegistration:
{% if kube_override_hostname | default('') %}
name: "{{ kube_override_hostname }}"
{% endif %}
{% if 'kube_control_plane' in group_names and 'kube_node' not in group_names %}
taints:
- effect: NoSchedule
key: node-role.kubernetes.io/control-plane
{% else %}
taints: []
{% endif %}
criSocket: {{ cri_socket }}
{% if cloud_provider == "external" %}
kubeletExtraArgs:
cloud-provider: external
{% endif %}
{% if kubeadm_patches | length > 0 %}
patches:
directory: {{ kubeadm_patches_dir }}
{% endif %}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: {{ cluster_name }}
etcd:
{% if etcd_deployment_type != "kubeadm" %}
external:
endpoints:
{% for endpoint in etcd_access_addresses.split(',') %}
- "{{ endpoint }}"
{% endfor %}
caFile: {{ etcd_cert_dir }}/{{ kube_etcd_cacert_file }}
certFile: {{ etcd_cert_dir }}/{{ kube_etcd_cert_file }}
keyFile: {{ etcd_cert_dir }}/{{ kube_etcd_key_file }}
{% elif etcd_deployment_type == "kubeadm" %}
local:
imageRepository: "{{ etcd_image_repo | regex_replace("/etcd$","") }}"
imageTag: "{{ etcd_image_tag }}"
dataDir: "{{ etcd_data_dir }}"
extraArgs:
metrics: {{ etcd_metrics }}
election-timeout: "{{ etcd_election_timeout }}"
heartbeat-interval: "{{ etcd_heartbeat_interval }}"
auto-compaction-retention: "{{ etcd_compaction_retention }}"
{% if etcd_listen_metrics_urls is defined %}
listen-metrics-urls: "{{ etcd_listen_metrics_urls }}"
{% endif %}
snapshot-count: "{{ etcd_snapshot_count }}"
quota-backend-bytes: "{{ etcd_quota_backend_bytes }}"
max-request-bytes: "{{ etcd_max_request_bytes }}"
log-level: "{{ etcd_log_level }}"
{% for key, value in etcd_extra_vars.items() %}
{{ key }}: "{{ value }}"
{% endfor %}
serverCertSANs:
{% for san in etcd_cert_alt_names %}
- "{{ san }}"
{% endfor %}
{% for san in etcd_cert_alt_ips %}
- "{{ san }}"
{% endfor %}
peerCertSANs:
{% for san in etcd_cert_alt_names %}
- "{{ san }}"
{% endfor %}
{% for san in etcd_cert_alt_ips %}
- "{{ san }}"
{% endfor %}
{% endif %}
dns:
imageRepository: {{ coredns_image_repo | regex_replace('/coredns(?!/coredns).*$', '') }}
imageTag: {{ coredns_image_tag }}
networking:
dnsDomain: {{ dns_domain }}
serviceSubnet: "{{ kube_service_subnets }}"
{% if kube_network_plugin is defined and kube_network_plugin not in ["kube-ovn"] %}
podSubnet: "{{ kube_pods_subnets }}"
{% endif %}
{% if kubeadm_feature_gates %}
featureGates:
{% for feature in kubeadm_feature_gates %}
{{ feature | replace("=", ": ") }}
{% endfor %}
{% endif %}
kubernetesVersion: v{{ kube_version }}
{% if kubeadm_config_api_fqdn is defined %}
controlPlaneEndpoint: "{{ kubeadm_config_api_fqdn }}:{{ loadbalancer_apiserver.port | default(kube_apiserver_port) }}"
{% else %}
controlPlaneEndpoint: "{{ main_ip | ansible.utils.ipwrap }}:{{ kube_apiserver_port }}"
{% endif %}
certificatesDir: {{ kube_cert_dir }}
imageRepository: {{ kubeadm_image_repo }}
apiServer:
extraArgs:
etcd-compaction-interval: "{{ kube_apiserver_etcd_compaction_interval }}"
default-not-ready-toleration-seconds: "{{ kube_apiserver_pod_eviction_not_ready_timeout_seconds }}"
default-unreachable-toleration-seconds: "{{ kube_apiserver_pod_eviction_unreachable_timeout_seconds }}"
{% if kube_api_anonymous_auth is defined %}
{# TODO: rework once suppport for structured auth lands #}
anonymous-auth: "{{ kube_api_anonymous_auth }}"
{% endif %}
{% if kube_apiserver_use_authorization_config_file %}
authorization-config: "{{ kube_config_dir }}/apiserver-authorization-config-{{ kube_apiserver_authorization_config_api_version }}.yaml"
{% else %}
authorization-mode: {{ authorization_modes | join(',') }}
{% endif %}
bind-address: "{{ kube_apiserver_bind_address }}"
{% if kube_apiserver_enable_admission_plugins | length > 0 %}
enable-admission-plugins: {{ kube_apiserver_enable_admission_plugins | join(',') }}
{% endif %}
{% if kube_apiserver_admission_control_config_file %}
admission-control-config-file: {{ kube_config_dir }}/admission-controls.yaml
{% endif %}
{% if kube_apiserver_disable_admission_plugins | length > 0 %}
disable-admission-plugins: {{ kube_apiserver_disable_admission_plugins | join(',') }}
{% endif %}
apiserver-count: "{{ kube_apiserver_count }}"
endpoint-reconciler-type: lease
{% if etcd_events_cluster_enabled %}
etcd-servers-overrides: "/events#{{ etcd_events_access_addresses_semicolon }}"
{% endif %}
service-node-port-range: {{ kube_apiserver_node_port_range }}
service-cluster-ip-range: "{{ kube_service_subnets }}"
kubelet-preferred-address-types: "{{ kubelet_preferred_address_types }}"
profiling: "{{ kube_profiling }}"
request-timeout: "{{ kube_apiserver_request_timeout }}"
enable-aggregator-routing: "{{ kube_api_aggregator_routing }}"
{% if kube_token_auth %}
token-auth-file: {{ kube_token_dir }}/known_tokens.csv
{% endif %}
{% if kube_apiserver_service_account_lookup %}
service-account-lookup: "{{ kube_apiserver_service_account_lookup }}"
{% endif %}
{% if kube_oidc_auth and kube_oidc_url is defined and kube_oidc_client_id is defined %}
oidc-issuer-url: "{{ kube_oidc_url }}"
oidc-client-id: "{{ kube_oidc_client_id }}"
{% if kube_oidc_ca_file is defined %}
oidc-ca-file: "{{ kube_oidc_ca_file }}"
{% endif %}
{% if kube_oidc_username_claim is defined %}
oidc-username-claim: "{{ kube_oidc_username_claim }}"
{% endif %}
{% if kube_oidc_groups_claim is defined %}
oidc-groups-claim: "{{ kube_oidc_groups_claim }}"
{% endif %}
{% if kube_oidc_username_prefix is defined %}
oidc-username-prefix: "{{ kube_oidc_username_prefix }}"
{% endif %}
{% if kube_oidc_groups_prefix is defined %}
oidc-groups-prefix: "{{ kube_oidc_groups_prefix }}"
{% endif %}
{% endif %}
{% if kube_webhook_token_auth %}
authentication-token-webhook-config-file: {{ kube_config_dir }}/webhook-token-auth-config.yaml
{% endif %}
{% if kube_webhook_authorization and not kube_apiserver_use_authorization_config_file %}
authorization-webhook-config-file: {{ kube_config_dir }}/webhook-authorization-config.yaml
{% endif %}
{% if kube_encrypt_secret_data %}
encryption-provider-config: {{ kube_cert_dir }}/secrets_encryption.yaml
{% endif %}
storage-backend: {{ kube_apiserver_storage_backend }}
{% if kube_api_runtime_config | length > 0 %}
runtime-config: {{ kube_api_runtime_config | join(',') }}
{% endif %}
allow-privileged: "true"
{% if kubernetes_audit or kubernetes_audit_webhook %}
audit-policy-file: {{ audit_policy_file }}
{% endif %}
{% if kubernetes_audit %}
audit-log-path: "{{ audit_log_path }}"
audit-log-maxage: "{{ audit_log_maxage }}"
audit-log-maxbackup: "{{ audit_log_maxbackups }}"
audit-log-maxsize: "{{ audit_log_maxsize }}"
{% endif %}
{% if kubernetes_audit_webhook %}
audit-webhook-config-file: {{ audit_webhook_config_file }}
audit-webhook-mode: {{ audit_webhook_mode }}
{% if audit_webhook_mode == "batch" %}
audit-webhook-batch-max-size: "{{ audit_webhook_batch_max_size }}"
audit-webhook-batch-max-wait: "{{ audit_webhook_batch_max_wait }}"
{% endif %}
{% endif %}
{% for key in kube_kubeadm_apiserver_extra_args %}
{{ key }}: "{{ kube_kubeadm_apiserver_extra_args[key] }}"
{% endfor %}
{% if kube_apiserver_feature_gates or kube_feature_gates %}
feature-gates: "{{ kube_apiserver_feature_gates | default(kube_feature_gates, true) | join(',') }}"
{% endif %}
{% if tls_min_version is defined %}
tls-min-version: {{ tls_min_version }}
{% endif %}
{% if tls_cipher_suites is defined %}
tls-cipher-suites: {% for tls in tls_cipher_suites %}{{ tls }}{{ "," if not loop.last else "" }}{% endfor %}
{% endif %}
event-ttl: {{ event_ttl_duration }}
{% if kubelet_rotate_server_certificates %}
kubelet-certificate-authority: {{ kube_cert_dir }}/ca.crt
{% endif %}
{% if kube_apiserver_tracing %}
tracing-config-file: {{ kube_config_dir }}/tracing/apiserver-tracing.yaml
{% endif %}
{% if kubernetes_audit or kube_token_auth or kube_webhook_token_auth or apiserver_extra_volumes or ssl_ca_dirs | length %}
extraVolumes:
{% if kube_token_auth %}
- name: token-auth-config
hostPath: {{ kube_token_dir }}
mountPath: {{ kube_token_dir }}
{% endif %}
{% if kube_webhook_token_auth %}
- name: webhook-token-auth-config
hostPath: {{ kube_config_dir }}/webhook-token-auth-config.yaml
mountPath: {{ kube_config_dir }}/webhook-token-auth-config.yaml
{% endif %}
{% if kube_webhook_authorization %}
- name: webhook-authorization-config
hostPath: {{ kube_config_dir }}/webhook-authorization-config.yaml
mountPath: {{ kube_config_dir }}/webhook-authorization-config.yaml
{% endif %}
{% if kube_apiserver_use_authorization_config_file %}
- name: authorization-config
hostPath: {{ kube_config_dir }}/apiserver-authorization-config-{{ kube_apiserver_authorization_config_api_version }}.yaml
mountPath: {{ kube_config_dir }}/apiserver-authorization-config-{{ kube_apiserver_authorization_config_api_version }}.yaml
{% endif %}
{% if kubernetes_audit or kubernetes_audit_webhook %}
- name: {{ audit_policy_name }}
hostPath: {{ audit_policy_hostpath }}
mountPath: {{ audit_policy_mountpath }}
{% if audit_log_path != "-" %}
- name: {{ audit_log_name }}
hostPath: {{ audit_log_hostpath }}
mountPath: {{ audit_log_mountpath }}
readOnly: false
{% endif %}
{% endif %}
{% if kube_apiserver_admission_control_config_file %}
- name: admission-control-configs
hostPath: {{ kube_config_dir }}/admission-controls
mountPath: {{ kube_config_dir }}
readOnly: false
pathType: DirectoryOrCreate
{% endif %}
{% if kube_apiserver_tracing %}
- name: tracing
hostPath: {{ kube_config_dir }}/tracing
mountPath: {{ kube_config_dir }}/tracing
readOnly: true
pathType: DirectoryOrCreate
{% endif %}
{% for volume in apiserver_extra_volumes %}
- name: {{ volume.name }}
hostPath: {{ volume.hostPath }}
mountPath: {{ volume.mountPath }}
readOnly: {{ volume.readOnly | d(not (volume.writable | d(false))) }}
{% endfor %}
{% if ssl_ca_dirs | length %}
{% for dir in ssl_ca_dirs %}
- name: {{ dir | regex_replace('^/(.*)$', '\\1' ) | regex_replace('/', '-') }}
hostPath: {{ dir }}
mountPath: {{ dir }}
readOnly: true
{% endfor %}
{% endif %}
{% endif %}
certSANs:
{% for san in apiserver_sans %}
- "{{ san }}"
{% endfor %}
timeoutForControlPlane: 5m0s
controllerManager:
extraArgs:
node-monitor-grace-period: {{ kube_controller_node_monitor_grace_period }}
node-monitor-period: {{ kube_controller_node_monitor_period }}
{% if kube_network_plugin is defined and kube_network_plugin not in ["kube-ovn"] %}
cluster-cidr: "{{ kube_pods_subnets }}"
{% endif %}
service-cluster-ip-range: "{{ kube_service_subnets }}"
{% if kube_network_plugin is defined and kube_network_plugin == "calico" and not calico_ipam_host_local %}
allocate-node-cidrs: "false"
{% else %}
{% if ipv4_stack %}
node-cidr-mask-size-ipv4: "{{ kube_network_node_prefix }}"
{% endif %}
{% if ipv6_stack %}
node-cidr-mask-size-ipv6: "{{ kube_network_node_prefix_ipv6 }}"
{% endif %}
{% endif %}
profiling: "{{ kube_profiling }}"
terminated-pod-gc-threshold: "{{ kube_controller_terminated_pod_gc_threshold }}"
bind-address: "{{ kube_controller_manager_bind_address }}"
leader-elect-lease-duration: {{ kube_controller_manager_leader_elect_lease_duration }}
leader-elect-renew-deadline: {{ kube_controller_manager_leader_elect_renew_deadline }}
{% if kube_controller_feature_gates or kube_feature_gates %}
feature-gates: "{{ kube_controller_feature_gates | default(kube_feature_gates, true) | join(',') }}"
{% endif %}
{% for key in kube_kubeadm_controller_extra_args %}
{{ key }}: "{{ kube_kubeadm_controller_extra_args[key] }}"
{% endfor %}
{% if kube_network_plugin is defined and kube_network_plugin not in ["cloud"] %}
configure-cloud-routes: "false"
{% endif %}
{% if kubelet_flexvolumes_plugins_dir is defined %}
flex-volume-plugin-dir: {{ kubelet_flexvolumes_plugins_dir }}
{% endif %}
{% if tls_min_version is defined %}
tls-min-version: {{ tls_min_version }}
{% endif %}
{% if tls_cipher_suites is defined %}
tls-cipher-suites: {{ tls_cipher_suites | join(',') }}
{% endif %}
{% if controller_manager_extra_volumes %}
extraVolumes:
{% for volume in controller_manager_extra_volumes %}
- name: {{ volume.name }}
hostPath: {{ volume.hostPath }}
mountPath: {{ volume.mountPath }}
readOnly: {{ volume.readOnly | d(not (volume.writable | d(false))) }}
{% endfor %}
{% endif %}
scheduler:
extraArgs:
bind-address: "{{ kube_scheduler_bind_address }}"
config: {{ kube_config_dir }}/kubescheduler-config.yaml
{% if kube_scheduler_feature_gates or kube_feature_gates %}
feature-gates: "{{ kube_scheduler_feature_gates | default(kube_feature_gates, true) | join(',') }}"
{% endif %}
profiling: "{{ kube_profiling }}"
{% if kube_kubeadm_scheduler_extra_args | length > 0 %}
{% for key in kube_kubeadm_scheduler_extra_args %}
{{ key }}: "{{ kube_kubeadm_scheduler_extra_args[key] }}"
{% endfor %}
{% endif %}
{% if tls_min_version is defined %}
tls-min-version: {{ tls_min_version }}
{% endif %}
{% if tls_cipher_suites is defined %}
tls-cipher-suites: {{ tls_cipher_suites | join(',') }}
{% endif %}
extraVolumes:
- name: kubescheduler-config
hostPath: {{ kube_config_dir }}/kubescheduler-config.yaml
mountPath: {{ kube_config_dir }}/kubescheduler-config.yaml
readOnly: true
{% if scheduler_extra_volumes %}
{% for volume in scheduler_extra_volumes %}
- name: {{ volume.name }}
hostPath: {{ volume.hostPath }}
mountPath: {{ volume.mountPath }}
readOnly: {{ volume.readOnly | d(not (volume.writable | d(false))) }}
{% endfor %}
{% endif %}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
bindAddress: "{{ kube_proxy_bind_address }}"
clientConnection:
acceptContentTypes: {{ kube_proxy_client_accept_content_types }}
burst: {{ kube_proxy_client_burst }}
contentType: {{ kube_proxy_client_content_type }}
kubeconfig: {{ kube_proxy_client_kubeconfig }}
qps: {{ kube_proxy_client_qps }}
{% if kube_network_plugin is defined and kube_network_plugin not in ["kube-ovn"] %}
clusterCIDR: "{{ kube_pods_subnets }}"
{% endif %}
configSyncPeriod: {{ kube_proxy_config_sync_period }}
conntrack:
maxPerCore: {{ kube_proxy_conntrack_max_per_core }}
min: {{ kube_proxy_conntrack_min }}
tcpCloseWaitTimeout: {{ kube_proxy_conntrack_tcp_close_wait_timeout }}
tcpEstablishedTimeout: {{ kube_proxy_conntrack_tcp_established_timeout }}
enableProfiling: {{ kube_proxy_enable_profiling }}
healthzBindAddress: "{{ kube_proxy_healthz_bind_address }}"
hostnameOverride: "{{ kube_override_hostname }}"
iptables:
masqueradeAll: {{ kube_proxy_masquerade_all }}
masqueradeBit: {{ kube_proxy_masquerade_bit }}
minSyncPeriod: {{ kube_proxy_min_sync_period }}
syncPeriod: {{ kube_proxy_sync_period }}
ipvs:
excludeCIDRs: {{ kube_proxy_exclude_cidrs }}
minSyncPeriod: {{ kube_proxy_min_sync_period }}
scheduler: {{ kube_proxy_scheduler }}
syncPeriod: {{ kube_proxy_sync_period }}
strictARP: {{ kube_proxy_strict_arp }}
tcpTimeout: {{ kube_proxy_tcp_timeout }}
tcpFinTimeout: {{ kube_proxy_tcp_fin_timeout }}
udpTimeout: {{ kube_proxy_udp_timeout }}
metricsBindAddress: "{{ kube_proxy_metrics_bind_address }}"
mode: {{ kube_proxy_mode }}
nodePortAddresses: {{ kube_proxy_nodeport_addresses }}
oomScoreAdj: {{ kube_proxy_oom_score_adj }}
portRange: {{ kube_proxy_port_range }}
{% if kube_proxy_feature_gates or kube_feature_gates %}
{% set feature_gates = ( kube_proxy_feature_gates | default(kube_feature_gates, true) ) %}
featureGates:
{% for feature in feature_gates %}
{{ feature | replace("=", ": ") }}
{% endfor %}
{% endif %}
{# DNS settings for kubelet #}
{% if enable_nodelocaldns %}
{% set kubelet_cluster_dns = [nodelocaldns_ip] %}
{% elif dns_mode in ['coredns'] %}
{% set kubelet_cluster_dns = [skydns_server] %}
{% elif dns_mode == 'coredns_dual' %}
{% set kubelet_cluster_dns = [skydns_server, skydns_server_secondary] %}
{% elif dns_mode == 'manual' %}
{% set kubelet_cluster_dns = [manual_dns_server] %}
{% else %}
{% set kubelet_cluster_dns = [] %}
{% endif %}
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
{% if kube_version is version('1.35.0', '>=') %}
failCgroupV1: {{ kubelet_fail_cgroup_v1 }}
{% endif %}
clusterDNS:
{% for dns_address in kubelet_cluster_dns %}
- {{ dns_address }}
{% endfor %}
{% if kubelet_feature_gates or kube_feature_gates %}
{% set feature_gates = ( kubelet_feature_gates | default(kube_feature_gates, true) ) %}
featureGates:
{% for feature in feature_gates %}
{{ feature | replace("=", ": ") }}
{% endfor %}
{% endif %}
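The `featureGates` loops above rely on a single Jinja `replace` filter to turn each `GateName=true` string into a YAML mapping entry. A minimal Python sketch of that transform (the function name is illustrative, not part of Kubespray):

```python
def feature_gates_yaml(gates):
    # Analogue of the Jinja loop body `{{ feature | replace("=", ": ") }}`:
    # each "GateName=true" entry becomes a "GateName: true" mapping line
    # under featureGates.
    return "\n".join(gate.replace("=", ": ") for gate in gates)
```

This works because feature-gate strings contain exactly one `=`, separating the gate name from a boolean.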

View File

@@ -1,4 +1,4 @@
apiVersion: kubeadm.k8s.io/{{ kubeadm_config_api_version }}
apiVersion: kubeadm.k8s.io/v1beta4
kind: JoinConfiguration
discovery:
{% if kubeadm_use_file_discovery %}
@@ -15,13 +15,8 @@ discovery:
unsafeSkipCAVerification: true
{% endif %}
tlsBootstrapToken: {{ kubeadm_token }}
{# TODO: drop the if when we drop support for k8s<1.31 #}
{% if kubeadm_config_api_version == 'v1beta3' %}
timeout: {{ discovery_timeout }}
{% else %}
timeouts:
discovery: {{ discovery_timeout }}
{% endif %}
controlPlane:
localAPIEndpoint:
advertiseAddress: "{{ kube_apiserver_address }}"

View File

@@ -1,5 +1,5 @@
---
apiVersion: kubeadm.k8s.io/{{ kubeadm_config_api_version }}
apiVersion: kubeadm.k8s.io/v1beta4
kind: JoinConfiguration
discovery:
{% if kubeadm_use_file_discovery %}
@@ -21,13 +21,8 @@ discovery:
{% endif %}
{% endif %}
tlsBootstrapToken: {{ kubeadm_token }}
{# TODO: drop the if when we drop support for k8s<1.31 #}
{% if kubeadm_config_api_version == 'v1beta3' %}
timeout: {{ discovery_timeout }}
{% else %}
timeouts:
discovery: {{ discovery_timeout }}
{% endif %}
caCertPath: {{ kube_cert_dir }}/ca.crt
{% if kubeadm_cert_controlplane is defined and kubeadm_cert_controlplane %}
controlPlane:

View File

@@ -86,6 +86,8 @@ kube_vip_leaseduration: 5
kube_vip_renewdeadline: 3
kube_vip_retryperiod: 1
kube_vip_enable_node_labeling: false
kube_vip_bgp_sourceip:
kube_vip_bgp_sourceif:
# Requests for load balancer app
loadbalancer_apiserver_memory_requests: 32M

View File

@@ -6,6 +6,17 @@
- kube_proxy_mode == 'ipvs' and not kube_proxy_strict_arp
- kube_vip_arp_enabled
- name: Kube-vip | Check mutually exclusive BGP source settings
vars:
kube_vip_bgp_sourceip_normalized: "{{ kube_vip_bgp_sourceip | default('', true) | string | trim }}"
kube_vip_bgp_sourceif_normalized: "{{ kube_vip_bgp_sourceif | default('', true) | string | trim }}"
assert:
that:
- kube_vip_bgp_sourceip_normalized == '' or kube_vip_bgp_sourceif_normalized == ''
fail_msg: "kube-vip allows only one of kube_vip_bgp_sourceip or kube_vip_bgp_sourceif."
when:
- kube_vip_bgp_enabled | default(false)
- name: Kube-vip | Check if super-admin.conf exists
stat:
path: "{{ kube_config_dir }}/super-admin.conf"
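The normalization pattern used in the assert above — `default('', true) | string | trim` — collapses unset, empty, and whitespace-only values to `''` before the mutual-exclusion check. A rough Python analogue (function names are illustrative):

```python
def normalize_bgp_source(value):
    # Analogue of `| default('', true) | string | trim`: None, empty, and
    # whitespace-only values all collapse to "" so the corresponding
    # bgp_sourceip/bgp_sourceif env var is omitted from the manifest.
    return str(value).strip() if value is not None else ""

def mutually_exclusive(sourceip, sourceif):
    # Mirrors the assert: at most one of the two may be non-empty.
    return normalize_bgp_source(sourceip) == "" or normalize_bgp_source(sourceif) == ""
```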

View File

@@ -85,6 +85,16 @@ spec:
value: {{ kube_vip_bgp_peerpass | to_json }}
- name: bgp_peeras
value: {{ kube_vip_bgp_peeras | string | to_json }}
{% set kube_vip_bgp_sourceip_normalized = kube_vip_bgp_sourceip | default('', true) | string | trim %}
{% if kube_vip_bgp_sourceip_normalized %}
- name: bgp_sourceip
value: {{ kube_vip_bgp_sourceip_normalized | to_json }}
{% endif %}
{% set kube_vip_bgp_sourceif_normalized = kube_vip_bgp_sourceif | default('', true) | string | trim %}
{% if kube_vip_bgp_sourceif_normalized %}
- name: bgp_sourceif
value: {{ kube_vip_bgp_sourceif_normalized | to_json }}
{% endif %}
{% if kube_vip_bgppeers %}
- name: bgp_peers
value: {{ kube_vip_bgppeers | join(',') | to_json }}

View File

@@ -116,7 +116,7 @@ flannel_version: 0.27.3
flannel_cni_version: 1.7.1-flannel1
cni_version: "{{ (cni_binary_checksums['amd64'] | dict2items)[0].key }}"
cilium_version: "1.18.6"
cilium_version: "1.19.1"
cilium_cli_version: "{{ (ciliumcli_binary_checksums['amd64'] | dict2items)[0].key }}"
cilium_enable_hubble: false
@@ -232,13 +232,6 @@ calico_apiserver_image_repo: "{{ quay_image_repo }}/calico/apiserver"
calico_apiserver_image_tag: "v{{ calico_apiserver_version }}"
pod_infra_image_repo: "{{ kube_image_repo }}/pause"
pod_infra_image_tag: "{{ pod_infra_version }}"
netcheck_version: "1.2.2"
netcheck_agent_image_repo: "{{ docker_image_repo }}/mirantis/k8s-netchecker-agent"
netcheck_agent_image_tag: "v{{ netcheck_version }}"
netcheck_server_image_repo: "{{ docker_image_repo }}/mirantis/k8s-netchecker-server"
netcheck_server_image_tag: "v{{ netcheck_version }}"
# netchecker doesn't work with etcd>=3.6 because etcd v2 API is removed
netcheck_etcd_image_tag: "v{{ (etcd_binary_checksums['amd64'].keys() | select('version', '3.6', '<'))[0] }}"
cilium_image_repo: "{{ quay_image_repo }}/cilium/cilium"
cilium_image_tag: "v{{ cilium_version }}"
cilium_operator_image_repo: "{{ quay_image_repo }}/cilium/operator"
@@ -270,9 +263,9 @@ kube_vip_version: 1.0.3
kube_vip_image_repo: "{{ github_image_repo }}/kube-vip/kube-vip{{ '-iptables' if kube_vip_lb_fwdmethod == 'masquerade' else '' }}"
kube_vip_image_tag: "v{{ kube_vip_version }}"
nginx_image_repo: "{{ docker_image_repo }}/library/nginx"
nginx_image_tag: 1.28.0-alpine
nginx_image_tag: 1.28.2-alpine
haproxy_image_repo: "{{ docker_image_repo }}/library/haproxy"
haproxy_image_tag: 3.2.4-alpine
haproxy_image_tag: 3.2.13-alpine
# Coredns version should be supported by corefile-migration (or at least work with)
# bundle with kubeadm; if not 'basic' upgrade can sometimes fail
@@ -380,24 +373,6 @@ node_feature_discovery_image_repo: "{{ kube_image_repo }}/nfd/node-feature-disco
node_feature_discovery_image_tag: "v{{ node_feature_discovery_version }}"
downloads:
netcheck_server:
enabled: "{{ deploy_netchecker }}"
container: true
repo: "{{ netcheck_server_image_repo }}"
tag: "{{ netcheck_server_image_tag }}"
checksum: "{{ netcheck_server_digest_checksum | default(None) }}"
groups:
- k8s_cluster
netcheck_agent:
enabled: "{{ deploy_netchecker }}"
container: true
repo: "{{ netcheck_agent_image_repo }}"
tag: "{{ netcheck_agent_image_tag }}"
checksum: "{{ netcheck_agent_digest_checksum | default(None) }}"
groups:
- k8s_cluster
etcd:
container: "{{ etcd_deployment_type != 'host' }}"
file: "{{ etcd_deployment_type == 'host' }}"

View File

@@ -33,10 +33,6 @@ kube_version_min_required: "{{ (kubelet_checksums['amd64'] | dict2items)[-1].key
## Kube Proxy mode One of ['ipvs', 'iptables', 'nftables']
kube_proxy_mode: ipvs
# Kubeadm config api version
# If kube_version is v1.31 or higher, it will be v1beta4, otherwise it will be v1beta3.
kubeadm_config_api_version: "{{ 'v1beta4' if kube_version is version('1.31.0', '>=') else 'v1beta3' }}"
# Debugging option for the kubeadm config validate command
# Set to false only for development and testing scenarios where validation is expected to fail (pre-release Kubernetes versions, etc.)
kubeadm_config_validate_enabled: true
@@ -152,8 +148,6 @@ manual_dns_server: ""
# Can be host_resolvconf, docker_dns or none
resolvconf_mode: host_resolvconf
# Deploy netchecker app to verify DNS resolve as an HTTP service
deploy_netchecker: false
# Ip address of the kubernetes DNS service (called skydns for historical reasons)
skydns_server: "{{ kube_service_subnets.split(',') | first | ansible.utils.ipaddr('net') | ansible.utils.ipaddr(3) | ansible.utils.ipaddr('address') }}"
skydns_server_secondary: "{{ kube_service_subnets.split(',') | first | ansible.utils.ipaddr('net') | ansible.utils.ipaddr(4) | ansible.utils.ipaddr('address') }}"

View File

@@ -0,0 +1,5 @@
---
# Additional string host to inject into NO_PROXY
additional_no_proxy: ""
additional_no_proxy_list: "{{ additional_no_proxy | split(',') }}"
no_proxy_exclude_workers: false
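The `split(',')` on the new default is worth noting: splitting the default empty string yields `['']`, a single empty entry that the later `select` filtering drops. A sketch of that behavior:

```python
def additional_no_proxy_list(additional_no_proxy):
    # Mirrors `additional_no_proxy | split(',')`: the default empty string
    # yields [''], which the downstream `select` filter discards, so an
    # unset variable contributes nothing to no_proxy.
    return additional_no_proxy.split(",")
```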

View File

@@ -1,41 +1,63 @@
---
- name: Set facts variables
tags:
- always
block:
- name: Gather node IPs
setup:
gather_subset: '!all,!min,network'
filter: "ansible_default_ip*"
when: ansible_default_ipv4 is not defined or ansible_default_ipv6 is not defined
ignore_unreachable: true
- name: Gather node IPs
setup:
gather_subset: '!all,!min,network'
filter: "ansible_default_ip*"
when: ansible_default_ipv4 is not defined or ansible_default_ipv6 is not defined
ignore_unreachable: true
- name: Set computed IPs varables
vars:
fallback_ip: "{{ ansible_default_ipv4.address | d('127.0.0.1') }}"
fallback_ip6: "{{ ansible_default_ipv6.address | d('::1') }}"
# Set 127.0.0.1 as fallback IP if we do not have host facts for host
# ansible_default_ipv4 isn't what you think.
_ipv4: "{{ ip | default(fallback_ip) }}"
_access_ipv4: "{{ access_ip | default(_ipv4) }}"
_ipv6: "{{ ip6 | default(fallback_ip6) }}"
_access_ipv6: "{{ access_ip6 | default(_ipv6) }}"
_access_ips:
- "{{ _access_ipv4 if ipv4_stack }}"
- "{{ _access_ipv6 if ipv6_stack }}"
_ips:
- "{{ _ipv4 if ipv4_stack }}"
- "{{ _ipv6 if ipv6_stack }}"
set_fact:
cacheable: true
main_access_ip: "{{ _access_ipv4 if ipv4_stack else _access_ipv6 }}"
main_ip: "{{ _ipv4 if ipv4_stack else _ipv6 }}"
# Mixed IPs - for dualstack
main_access_ips: "{{ _access_ips | select }}"
main_ips: "{{ _ips | select }}"
- name: Set computed IPs variables
vars:
fallback_ip: "{{ ansible_default_ipv4.address | d('127.0.0.1') }}"
fallback_ip6: "{{ ansible_default_ipv6.address | d('::1') }}"
# Set 127.0.0.1 as fallback IP if we do not have host facts for host.
# ansible_default_ipv4 is derived from the default route's interface,
# which is not necessarily the node's primary address.
_ipv4: "{{ ip | default(fallback_ip) }}"
_access_ipv4: "{{ access_ip | default(_ipv4) }}"
_ipv6: "{{ ip6 | default(fallback_ip6) }}"
_access_ipv6: "{{ access_ip6 | default(_ipv6) }}"
_access_ips:
- "{{ _access_ipv4 if ipv4_stack }}"
- "{{ _access_ipv6 if ipv6_stack }}"
_ips:
- "{{ _ipv4 if ipv4_stack }}"
- "{{ _ipv6 if ipv6_stack }}"
set_fact:
cacheable: true
main_access_ip: "{{ _access_ipv4 if ipv4_stack else _access_ipv6 }}"
main_ip: "{{ _ipv4 if ipv4_stack else _ipv6 }}"
# Mixed IPs - for dualstack
main_access_ips: "{{ _access_ips | select }}"
main_ips: "{{ _ips | select }}"
- name: Set no_proxy
import_tasks: no_proxy.yml
when:
- http_proxy is defined or https_proxy is defined
- no_proxy is not defined
- name: Set no_proxy to all assigned cluster IPs and hostnames
when:
- http_proxy is defined or https_proxy is defined
- no_proxy is not defined
vars:
groups_with_no_proxy:
- kube_control_plane
- "{{ '' if no_proxy_exclude_workers else 'kube_node' }}" # TODO: exclude by a boolean in inventory rather than global variable
- etcd
- calico_rr
hosts_with_no_proxy: "{{ groups_with_no_proxy | select | map('extract', groups) | select('defined') | flatten }}"
_hostnames: "{{ (hosts_with_no_proxy +
(hosts_with_no_proxy | map('extract', hostvars, morekeys=['ansible_hostname'])
| select('defined')))
| unique }}"
no_proxy_prepare:
- "{{ apiserver_loadbalancer_domain_name | d('') }}"
- "{{ loadbalancer_apiserver.address if loadbalancer_apiserver is defined else '' }}"
- "{{ hosts_with_no_proxy | map('extract', hostvars, morekeys=['main_access_ip']) }}"
- "{{ _hostnames }}"
- "{{ _hostnames | map('regex_replace', '$', '.' + dns_domain ) }}"
- "{{ additional_no_proxy_list }}"
- 127.0.0.1
- localhost
- "{{ kube_service_subnets }}"
- "{{ kube_pods_subnets }}"
- svc
- "svc.{{ dns_domain }}"
set_fact:
no_proxy: "{{ no_proxy_prepare | select | flatten | unique | join(',') }}"
run_once: true
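The filter chain in the final `set_fact` replaces the old character-by-character Jinja looping: empty placeholders are dropped, nested lists flattened, duplicates removed, and the result joined into one string. A rough Python analogue of `select | flatten | unique | join(',')` (ordering of the flatten/filter steps differs slightly but the result is the same for these inputs):

```python
def build_no_proxy(candidates):
    # Flatten one level of nesting, drop empty entries, deduplicate while
    # preserving order, then join into the comma-separated no_proxy string.
    flat = []
    for item in candidates:
        if isinstance(item, list):
            flat.extend(item)
        else:
            flat.append(item)
    deduped = []
    for item in flat:
        if item and item not in deduped:
            deduped.append(item)
    return ",".join(deduped)
```

Because the output is a joined string rather than a list, this avoids the 2.29 regression where `no_proxy` was rendered as an array of characters.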

View File

@@ -1,40 +0,0 @@
---
- name: Set no_proxy to all assigned cluster IPs and hostnames
set_fact:
# noqa: jinja[spacing]
no_proxy_prepare: >-
{%- if loadbalancer_apiserver is defined -%}
{{ apiserver_loadbalancer_domain_name }},
{{ loadbalancer_apiserver.address | default('') }},
{%- endif -%}
{%- if no_proxy_exclude_workers | default(false) -%}
{% set cluster_or_control_plane = 'kube_control_plane' %}
{%- else -%}
{% set cluster_or_control_plane = 'k8s_cluster' %}
{%- endif -%}
{%- for item in (groups[cluster_or_control_plane] + groups['etcd'] | default([]) + groups['calico_rr'] | default([])) | unique -%}
{{ hostvars[item]['main_access_ip'] }},
{%- if item != hostvars[item].get('ansible_hostname', '') -%}
{{ hostvars[item]['ansible_hostname'] }},
{{ hostvars[item]['ansible_hostname'] }}.{{ dns_domain }},
{%- endif -%}
{{ item }},{{ item }}.{{ dns_domain }},
{%- endfor -%}
{%- if additional_no_proxy is defined -%}
{{ additional_no_proxy }},
{%- endif -%}
127.0.0.1,localhost,{{ kube_service_subnets }},{{ kube_pods_subnets }},svc,svc.{{ dns_domain }}
delegate_to: localhost
connection: local
delegate_facts: true
become: false
run_once: true
- name: Populates no_proxy to all hosts
set_fact:
no_proxy: "{{ hostvars.localhost.no_proxy_prepare | select }}"
# noqa: jinja[spacing]
proxy_env: "{{ proxy_env | combine({
'no_proxy': hostvars.localhost.no_proxy_prepare,
'NO_PROXY': hostvars.localhost.no_proxy_prepare
}) }}"

View File

@@ -177,6 +177,9 @@ rules:
- blockaffinities
- caliconodestatuses
- tiers
- stagednetworkpolicies
- stagedglobalnetworkpolicies
- stagedkubernetesnetworkpolicies
verbs:
- get
- list

View File

@@ -215,3 +215,17 @@ rules:
- calico-cni-plugin
verbs:
- create
{% if calico_version is version('3.29.0', '>=') %}
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-tier-getter
rules:
- apiGroups:
- "projectcalico.org"
resources:
- "tiers"
verbs:
- "get"
{% endif %}

View File

@@ -26,3 +26,18 @@ subjects:
- kind: ServiceAccount
name: calico-cni-plugin
namespace: kube-system
{% if calico_version is version('3.29.0', '>=') %}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: calico-tier-getter
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-tier-getter
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: system:kube-controller-manager
{% endif %}

View File

@@ -305,12 +305,9 @@ cilium_enable_well_known_identities: false
# Only effective when monitor aggregation is set to "medium" or higher.
cilium_monitor_aggregation_flags: "all"
cilium_enable_bpf_clock_probe: true
# -- Enable BGP Control Plane
cilium_enable_bgp_control_plane: false
# -- Configure BGP Instances (New bgpv2 API v1.16+)
cilium_bgp_cluster_configs: []

View File

@@ -1,6 +1,6 @@
{% for cilium_bgp_advertisement in cilium_bgp_advertisements %}
---
apiVersion: "cilium.io/v2alpha1"
apiVersion: "cilium.io/v2"
kind: CiliumBGPAdvertisement
metadata:
name: "{{ cilium_bgp_advertisement.name }}"

View File

@@ -1,6 +1,6 @@
{% for cilium_bgp_cluster_config in cilium_bgp_cluster_configs %}
---
apiVersion: "cilium.io/v2alpha1"
apiVersion: "cilium.io/v2"
kind: CiliumBGPClusterConfig
metadata:
name: "{{ cilium_bgp_cluster_config.name }}"

View File

@@ -1,6 +1,6 @@
{% for cilium_bgp_node_config_override in cilium_bgp_node_config_overrides %}
---
apiVersion: "cilium.io/v2alpha1"
apiVersion: "cilium.io/v2"
kind: CiliumBGPNodeConfigOverride
metadata:
name: "{{ cilium_bgp_node_config_override.name }}"

View File

@@ -1,6 +1,6 @@
{% for cilium_bgp_peer_config in cilium_bgp_peer_configs %}
---
apiVersion: "cilium.io/v2alpha1"
apiVersion: "cilium.io/v2"
kind: CiliumBGPPeerConfig
metadata:
name: "{{ cilium_bgp_peer_config.name }}"

View File

@@ -1,6 +1,6 @@
{% for cilium_loadbalancer_ip_pool in cilium_loadbalancer_ip_pools %}
---
apiVersion: "cilium.io/v2alpha1"
apiVersion: "cilium.io/v2"
kind: CiliumLoadBalancerIPPool
metadata:
name: "{{ cilium_loadbalancer_ip_pool.name }}"

View File

@@ -62,8 +62,8 @@ cni:
autoDirectNodeRoutes: {{ cilium_auto_direct_node_routes | to_json }}
ipv4NativeRoutingCIDR: {{ cilium_native_routing_cidr }}
ipv6NativeRoutingCIDR: {{ cilium_native_routing_cidr_ipv6 }}
ipv4NativeRoutingCIDR: "{{ cilium_native_routing_cidr }}"
ipv6NativeRoutingCIDR: "{{ cilium_native_routing_cidr_ipv6 }}"
encryption:
enabled: {{ cilium_encryption_enabled | to_json }}
@@ -143,6 +143,14 @@ cgroup:
enabled: {{ cilium_cgroup_auto_mount | to_json }}
hostRoot: {{ cilium_cgroup_host_root }}
resources:
limits:
memory: "{{ cilium_memory_limit }}"
cpu: "{{ cilium_cpu_limit }}"
requests:
memory: "{{ cilium_memory_requests }}"
cpu: "{{ cilium_cpu_requests }}"
operator:
image:
repository: {{ cilium_operator_image_repo }}

View File

@@ -14,6 +14,7 @@ dependencies:
chart_ref: "{{ custom_cni_chart_ref }}"
chart_version: "{{ custom_cni_chart_version }}"
wait: true
create_namespace: true
values: "{{ custom_cni_chart_values }}"
repositories:
- name: "{{ custom_cni_chart_repository_name }}"

View File

@@ -17,6 +17,8 @@
--grace-period {{ drain_grace_period }}
--timeout {{ drain_timeout }}
--delete-emptydir-data {{ kube_override_hostname }}
async: "{{ (drain_timeout | regex_replace('s$', '') | int) + 120 }}"
poll: 15
when:
- groups['kube_control_plane'] | length > 0
# ignore servers that are not nodes

View File

@@ -59,6 +59,8 @@
--timeout {{ drain_timeout }}
--delete-emptydir-data {{ kube_override_hostname | default(inventory_hostname) }}
{% if drain_pod_selector %}--pod-selector '{{ drain_pod_selector }}'{% endif %}
async: "{{ (drain_timeout | regex_replace('s$', '') | int) + 120 }}"
poll: 15
when: drain_nodes
register: result
failed_when:
@@ -82,6 +84,8 @@
--delete-emptydir-data {{ kube_override_hostname | default(inventory_hostname) }}
{% if drain_pod_selector %}--pod-selector '{{ drain_pod_selector }}'{% endif %}
--disable-eviction
async: "{{ (drain_fallback_timeout | regex_replace('s$', '') | int) + 120 }}"
poll: 15
register: drain_fallback_result
until: drain_fallback_result.rc == 0
retries: "{{ drain_fallback_retries }}"
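The `async` budget in these drain tasks is derived from the drain timeout itself. A sketch of the arithmetic, assuming `drain_timeout` is either bare seconds or a value with an `s` suffix:

```python
def drain_async_budget(drain_timeout, margin=120):
    # Mirrors `(drain_timeout | regex_replace('s$', '') | int) + 120`:
    # strip a single trailing "s" from values like "360s", then add a
    # margin so the async wrapper outlives kubectl drain's own --timeout
    # instead of letting the SSH connection idle out.
    return int(drain_timeout.removesuffix("s")) + margin
```

With `poll: 15`, Ansible reconnects periodically to check on the drain rather than holding one long-lived SSH session open for the whole eviction.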

View File

@@ -49,7 +49,6 @@
assert:
that:
- download_run_once | type_debug == 'bool'
- deploy_netchecker | type_debug == 'bool'
- download_always_pull | type_debug == 'bool'
- helm_enabled | type_debug == 'bool'
- openstack_lbaas_enabled | type_debug == 'bool'
@@ -214,3 +213,13 @@
when:
- kube_external_ca_mode
- not ignore_assert_errors
- name: Download_file | Check if requested Kubernetes are supported
assert:
that:
- kube_version in kubeadm_checksums[image_arch]
- kube_version in kubelet_checksums[image_arch]
- kube_version in kubectl_checksums[image_arch]
msg: >-
Kubernetes v{{ kube_version }} is not supported for {{ image_arch }}.
Please check roles/kubespray_defaults/vars/main/checksums.yml for supported versions.

View File

@@ -6,7 +6,7 @@ build-backend = "setuptools.build_meta"
[project]
name = "kubespray_component_hash_update"
version = "1.0.0"
version = "1.0.1"
dependencies = [
"more_itertools",
"ruamel.yaml",

View File

@@ -126,15 +126,20 @@ def download_hash(downloads: {str: {str: Any}}) -> None:
releases, tags = map(
dict, partition(lambda r: r[1].get("tags", False), downloads.items())
)
repos = {
"with_releases": [r["graphql_id"] for r in releases.values()],
"with_tags": [t["graphql_id"] for t in tags.values()],
}
unique_release_ids = list(dict.fromkeys(
r["graphql_id"] for r in releases.values()
))
unique_tag_ids = list(dict.fromkeys(
t["graphql_id"] for t in tags.values()
))
response = s.post(
"https://api.github.com/graphql",
json={
"query": files(__package__).joinpath("list_releases.graphql").read_text(),
"variables": repos,
"variables": {
"with_releases": unique_release_ids,
"with_tags": unique_tag_ids,
},
},
headers={
"Authorization": f"Bearer {os.environ['API_KEY']}",
@@ -155,31 +160,30 @@ def download_hash(downloads: {str: {str: Any}}) -> None:
except InvalidVersion:
return None
repos = response.json()["data"]
github_versions = dict(
zip(
chain(releases.keys(), tags.keys()),
[
{
v
for r in repo["releases"]["nodes"]
if not r["isPrerelease"]
and (v := valid_version(r["tagName"])) is not None
}
for repo in repos["with_releases"]
]
+ [
{
v
for t in repo["refs"]["nodes"]
if (v := valid_version(t["name"].removeprefix("release-")))
is not None
}
for repo in repos["with_tags"]
],
strict=True,
)
)
resp_data = response.json()["data"]
release_versions_by_id = {
gql_id: {
v
for r in repo["releases"]["nodes"]
if not r["isPrerelease"]
and (v := valid_version(r["tagName"])) is not None
}
for gql_id, repo in zip(unique_release_ids, resp_data["with_releases"])
}
tag_versions_by_id = {
gql_id: {
v
for t in repo["refs"]["nodes"]
if (v := valid_version(t["name"].removeprefix("release-")))
is not None
}
for gql_id, repo in zip(unique_tag_ids, resp_data["with_tags"])
}
github_versions = {}
for name, info in releases.items():
github_versions[name] = release_versions_by_id[info["graphql_id"]]
for name, info in tags.items():
github_versions[name] = tag_versions_by_id[info["graphql_id"]]
components_supported_arch = {
component.removesuffix("_checksums"): [a for a in archs.keys()]

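The `list(dict.fromkeys(...))` idiom introduced above is the key to the fix: it deduplicates the GraphQL node IDs while preserving order, so the positional `zip` against the response stays aligned. A minimal sketch (the helper name is illustrative):

```python
def unique_graphql_ids(entries):
    # dict.fromkeys keeps first-occurrence order while dropping duplicates,
    # so each repository's graphql_id appears once in the GraphQL query
    # variables even when several components share the same repo.
    return list(dict.fromkeys(e["graphql_id"] for e in entries))
```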
View File

@@ -1,6 +1,5 @@
---
# Kubespray settings for tests
deploy_netchecker: true
dns_min_replicas: 1
unsafe_show_logs: true
@@ -29,9 +28,6 @@ crio_registries:
- location: mirror.gcr.io
insecure: false
netcheck_agent_image_repo: "{{ quay_image_repo }}/kubespray/k8s-netchecker-agent"
netcheck_server_image_repo: "{{ quay_image_repo }}/kubespray/k8s-netchecker-server"
nginx_image_repo: "{{ quay_image_repo }}/kubespray/nginx"
flannel_image_repo: "{{ quay_image_repo }}/kubespray/flannel"

View File

@@ -3,8 +3,11 @@
cloud_image: openeuler-2403
vm_memory: 3072
# Openeuler package mgmt is slow for some reason
pkg_install_timeout: "{{ 10 * 60 }}"
# Use metalink for faster package downloads (auto-selects closest mirror)
openeuler_metalink_enabled: true
# CI package installation takes ~7min; default 5min is too tight, use 15min for margin
pkg_install_timeout: "{{ 15 * 60 }}"
# Work around so the Kubernetes 1.35 tests can pass. We will discuss the openeuler support later.
kubeadm_ignore_preflight_errors:

View File

@@ -13,3 +13,21 @@ kube_owner: root
# Node Feature Discovery
node_feature_discovery_enabled: true
kube_asymmetric_encryption_algorithm: "ECDSA-P256"
# Testing no_proxy setup
# The proxy is not intended to be accessed at all, we're only testing
# the no_proxy construction
https_proxy: "http://some-proxy.invalid"
http_proxy: "http://some-proxy.invalid"
additional_no_proxy_list:
- github.com
- githubusercontent.com
- k8s.io
- rockylinux.org
- docker.io
- googleapis.com
- quay.io
- pkg.dev
- amazonaws.com
- cilium.io
skip_http_proxy_on_os_packages: true

View File

@@ -2,7 +2,7 @@
# Instance settings
cloud_image: ubuntu-2404
mode: all-in-one
vm_memory: 1800
vm_memory: 3072
# Kubespray settings
container_manager: crio

View File

@@ -1,4 +1,4 @@
-r ../requirements.txt
distlib==0.4.0 # required for building collections
molecule==25.12.0
molecule==26.3.0
pytest-testinfra==10.2.2

View File

@@ -13,88 +13,6 @@
- import_role: # noqa name[missing]
name: cluster-dump
- name: Wait for netchecker server
command: "{{ bin_dir }}/kubectl get pods --field-selector=status.phase==Running -o jsonpath-as-json={.items[*].metadata.name} --namespace {{ netcheck_namespace }}"
register: pods_json
until:
- pods_json.stdout | from_json | select('match', 'netchecker-server.*') | length == 1
- (pods_json.stdout | from_json | select('match', 'netchecker-agent.*') | length)
>= (groups['k8s_cluster'] | intersect(ansible_play_hosts) | length * 2)
retries: 3
delay: 10
when: inventory_hostname == groups['kube_control_plane'][0]
- name: Get netchecker pods
command: "{{ bin_dir }}/kubectl -n {{ netcheck_namespace }} describe pod -l app={{ item }}"
run_once: true
delegate_to: "{{ groups['kube_control_plane'][0] }}"
with_items:
- netchecker-agent
- netchecker-agent-hostnet
when: not pods_json is success
- name: Perform netchecker tests
run_once: true
delegate_to: "{{ groups['kube_control_plane'][0] }}"
block:
- name: Get netchecker agents
uri:
url: "http://{{ (ansible_default_ipv6.address if not (ipv4_stack | default(true)) else ansible_default_ipv4.address) | ansible.utils.ipwrap }}:{{ netchecker_port }}/api/v1/agents/"
return_content: true
headers:
Accept: application/json
register: agents
retries: 18
delay: "{{ agent_report_interval }}"
until:
- agents is success
- (agents.content | from_json | length) == (groups['k8s_cluster'] | length * 2)
- name: Check netchecker status
uri:
url: "http://{{ (ansible_default_ipv6.address if not (ipv4_stack | default(true)) else ansible_default_ipv4.address) | ansible.utils.ipwrap }}:{{ netchecker_port }}/api/v1/connectivity_check"
return_content: true
headers:
Accept: application/json
register: connectivity_check
retries: 3
delay: "{{ agent_report_interval }}"
until:
- connectivity_check is success
- connectivity_check.content | from_json
rescue:
- name: Get kube-proxy logs
command: "{{ bin_dir }}/kubectl -n kube-system logs -l k8s-app=kube-proxy"
- name: Get logs from other apps
command: "{{ bin_dir }}/kubectl -n kube-system logs -l k8s-app={{ item }} --all-containers"
with_items:
- kube-router
- flannel
- canal-node
- calico-node
- cilium
- name: Netchecker tests failed
fail:
msg: "netchecker tests failed"
- name: Check connectivity with all netchecker agents
vars:
connectivity_check_result: "{{ connectivity_check.content | from_json }}"
agents_check_result: "{{ agents.content | from_json }}"
assert:
that:
- agents_check_result is defined
- connectivity_check_result is defined
- agents_check_result.keys() | length > 0
- not connectivity_check_result.Absent
- not connectivity_check_result.Outdated
msg: "Connectivity check to netchecker agents failed"
delegate_to: "{{ groups['kube_control_plane'][0] }}"
run_once: true
- name: Create macvlan network conf
command:
cmd: "{{ bin_dir }}/kubectl create -f -"

View File

@@ -36,10 +36,6 @@
when:
- ('macvlan' not in testcase)
- ('hardening' not in testcase)
vars:
agent_report_interval: 10
netcheck_namespace: default
netchecker_port: 31081
- name: Testcases for kubernetes conformance
import_tasks: 100_check-k8s-conformance.yml
delegate_to: "{{ groups['kube_control_plane'][0] }}"