Compare commits


22 Commits

Author SHA1 Message Date
Max Gautier
3f6567bba0 Add patch version checksums (k8s binaries, runc, containerd) (#10876)
Make runc 1.1.12 and containerd 1.7.13 default
Make kubernetes 1.27.10 default
2024-02-05 07:46:01 -08:00
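A hedged sketch of what these new defaults correspond to in inventory group_vars; variable names follow kubespray's usual defaults, and pinning them explicitly is optional once they are the defaults:

```yaml
# group_vars/k8s_cluster/k8s-cluster.yml -- illustrative pin of the new defaults
kube_version: v1.27.10
runc_version: v1.1.12        # assumes kubespray's runc_version variable
containerd_version: 1.7.13   # assumes kubespray's containerd_version variable
```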
bo.jiang
e09a7c02a6 Fix hardcoded pod infra version
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2024-01-22 17:23:25 +01:00
Kay Yan
48b3d9c56d cleanup-for-2.23.2 (#10801) 2024-01-18 11:23:30 +01:00
Max Gautier
ca271b8a65 [2.23] Update k8s and etcd hashes + default to latest patch version (#10797)
* k8s: add hashes for 1.25.16, 1.26.12, 1.27.9

Make 1.27.9 default

* [etcd] add 3.5.10 hashes (#10566)

* Update etcd version for 1.26 and 1.27

---------

Co-authored-by: Mohamed Omar Zaian <mohamedzaian@gmail.com>
2024-01-16 15:55:38 +01:00
Max Gautier
c264ae3016 Fix download retry when get_url has no status_code. (#10613) (#10791)
* Fix download retry when get_url has no status_code.

* Fix until clause in download role.

Co-authored-by: Romain <58464216+RomainMou@users.noreply.github.com>
2024-01-15 09:22:47 +01:00
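A minimal sketch of the failure mode being fixed: an `until` clause that dereferences `status_code` breaks when `get_url` fails before any HTTP response exists, so the result has to be guarded. Task and variable names here are illustrative, not kubespray's exact code:

```yaml
- name: Download file with retries (illustrative)
  ansible.builtin.get_url:
    url: "{{ download_url }}"
    dest: "{{ download_dest }}"
  register: get_url_result
  # status_code is absent on connection errors, so default it before comparing
  until: get_url_result is succeeded or get_url_result.status_code | default(0) == 304
  retries: 4
  delay: 5
```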
Max Gautier
1bcd7395fa [2.23] Bump galaxy.yml to next expected version (#10728)
* Bump galaxy.yml to next expected version

* Refactor check_galaxy + fix version (#10729)

* Remove checks for docs using exact tags

Instead, use more generic documentation for installing kubespray as a
collection from git.

* Check that we upgraded galaxy.yml to next version

This is only intended to check for human error. The version in galaxy.yml
should be the next expected one (which differs depending on whether we're
on master or a release branch).

* Set collection version to KUBESPRAY_NEXT_VERSION
2024-01-12 10:42:48 +01:00
Max Gautier
3d76c30354 [2.23] Fix calico-node in etcd mode (#10768)
* CI: Document the 'all-in-one' layout + small refactoring (#10725)

* Rename aio to all-in-one and document it

ADTM.
Acronyms don't tell much.

* Refactor vm_count in tests provisioning

* Add test case for calico using etcd datastore (#10722)

* Add multinode ci layout

* Add test case for calico using etcd datastore

* Fix calico-node in etcd mode (#10438)

* Calico : add ETCD endpoints to install-cni container

* Calico : remove nodename from configmap in etcd mode

---------

Co-authored-by: Olivier Levitt <olivier.levitt@gmail.com>
2024-01-12 04:11:00 +01:00
Kay Yan
20a9e20c5a bump vagrant 2.3.7 (#10788) 2024-01-11 12:07:04 +01:00
Max Gautier
e4be213cf7 Disable podCIDR allocation from control-plane when using calico (#10639) (#10715)
* Disable control plane allocating podCIDR for nodes when using calico

Calico does not use the .spec.podCIDR field for its IP address
management.
Furthermore, it can cause false positives from the kube controller manager if
kube_network_node_prefix and calico_pool_blocksize are misaligned, which
is the case with the defaults shipped by kubespray.

If the subnets obtained from kube_network_node_prefix are bigger, this
would at some point result in the control plane thinking it has no
subnets left for a new node, while calico would keep working without
problems.

Explicitly set a default value of false for calico_ipam_host_local to
facilitate its use in templates.

* Don't default to kube_network_node_prefix for calico_pool_blocksize

They have different semantics: kube_network_node_prefix is intended to
be the size of the subnet for all pods on a node, while there can be
more than one calico block of the specified size (they are allocated on
demand).

Besides, this commit does not actually change anything, because the
current code is buggy: we don't ever default to
kube_network_node_prefix, since the variable is defined in the role
defaults.
2023-12-13 11:30:18 +01:00
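A sketch of the related inventory settings, assuming the variable names from the commit message (values shown are kubespray's usual defaults):

```yaml
# group_vars/k8s_cluster/k8s-net-calico.yml -- illustrative
calico_ipam_host_local: false  # now explicitly defaulted, per the commit above
calico_pool_blocksize: 26      # sized independently of kube_network_node_prefix
```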
Max Gautier
0107dbc29c [2.23] kubernetes: hashes for 1.27.8, 1.26.11, default to 1.27.8 (#10706)
* kubernetes: add hashes for 1.27.8, 1.26.11

Make 1.27.8 default.

* Convert exoscale tf provider to new version (#10646)

This is untested. It passes terraform validate, which unbreaks the CI.

* Update 0040-verify-settings.yml (#10699)

remove embedded template

---------

Co-authored-by: piwinkler <9642809+piwinkler@users.noreply.github.com>
2023-12-11 17:26:26 +01:00
Khanh Ngo Van Kim
72da838519 fix: invalid version check in containerd jinja-template config (#10620) 2023-11-17 14:34:00 +01:00
Romain
10679ebb5d [download] Don't fail on 304 Not Modified (#10452) (#10559)
i.e. when the file was not modified since the last download

Co-authored-by: Mathieu Parent <mathieu.parent@insee.fr>
2023-10-30 17:28:43 +01:00
Mohamed Omar Zaian
8775dcf92f [ingress-nginx] Fix nginx controller leader election RBAC permissions (#10569) 2023-10-30 04:24:52 +01:00
Mohamed Omar Zaian
bd382a9c39 Change default cri-o versions for Kubernetes 1.25, 1.26 (#10563) 2023-10-30 04:24:45 +01:00
Mohamed Omar Zaian
ffacfe3ede Add crictl 1.26.1 for Kubernetes v1.26 (#10562) 2023-10-30 04:20:44 +01:00
Unai Arríen
7dcc22fe8c Migrate node-role.kubernetes.io/master to node-role.kubernetes.io/control-plane (#10532)
* Migrate node-role.kubernetes.io/master to node-role.kubernetes.io/control-plane
2023-10-25 18:14:32 +02:00
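For workloads that must keep scheduling onto control-plane nodes across this migration, a toleration covering both taint keys is the usual bridge. A sketch; the paired entries mirror the manifests touched later in this diff:

```yaml
tolerations:
  - key: node-role.kubernetes.io/master        # legacy taint, pre-migration
    operator: Exists
    effect: NoSchedule
  - key: node-role.kubernetes.io/control-plane # replacement taint
    operator: Exists
    effect: NoSchedule
```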
Mohamed Omar Zaian
47ed2b115d [kubernetes] Add hashes for kubernetes 1.27.7, 1.26.10, 1.25.15 (#10543) 2023-10-19 14:29:50 +02:00
Feruzjon Muyassarov
b9fc4ec43e Refactor NRI activation for containerd and CRI-O (#10470)
Refactor NRI (Node Resource Interface) activation in CRI-O and
containerd. Introduce a shared variable, nri_enabled, to streamline
the process. Currently, enabling NRI requires a separate update of
defaults for each container runtime independently, without any
verification of NRI support for the specific version of containerd
or CRI-O in use.

With this commit, the previous approach is replaced: a single variable,
nri_enabled, now handles activation. This commit also moves the
responsibility of verifying NRI-supported versions of containerd and
CRI-O from cluster administrators to Ansible.

Signed-off-by: Feruzjon Muyassarov <feruzjon.muyassarov@intel.com>
(cherry picked from commit 1fd31ccc28)
2023-10-06 23:24:19 +02:00
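A minimal sketch of the shared switch this commit introduces, assuming only the variable name given in the commit message:

```yaml
# group_vars/k8s_cluster/k8s-cluster.yml -- illustrative
nri_enabled: true  # Ansible verifies the runtime version supports NRI
```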
Feruzjon Muyassarov
7bd757da5f Add configuration option for NRI in crio & containerd (#10454)
* [containerd] Add Configuration option for Node Resource Interface

Node Resource Interface (NRI) is a common framework for plugging
domain- or vendor-specific custom logic into container runtimes like
containerd. With this commit, we introduce the containerd_disable_nri
configuration flag, giving cluster administrators the flexibility to
opt in or out (defaulting to 'out') of this feature in containerd. In
line with containerd's default configuration, NRI is disabled by
default in this containerd role's defaults.

Signed-off-by: Feruzjon Muyassarov <feruzjon.muyassarov@intel.com>

* [cri-o] Add configuration option for Node Resource Interface

Node Resource Interface (NRI) is a common framework for plugging
domain- or vendor-specific custom logic into container runtimes like
containerd/crio. With this commit, we introduce the crio_enable_nri
configuration flag, giving cluster administrators the flexibility to
opt in or out (defaulting to 'out') of this feature in the cri-o
runtime. In line with crio's default configuration, NRI is disabled by
default in this cri-o role's defaults.

Signed-off-by: Feruzjon Muyassarov <feruzjon.muyassarov@intel.com>

---------

Signed-off-by: Feruzjon Muyassarov <feruzjon.muyassarov@intel.com>
(cherry picked from commit f964b3438d)
2023-10-06 23:24:19 +02:00
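The per-runtime flags this commit adds look roughly like this in inventory (names from the commit message; both runtimes ship with NRI disabled by default):

```yaml
containerd_disable_nri: false  # containerd: set to false to opt in to NRI
crio_enable_nri: true          # CRI-O: set to true to opt in to NRI
```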
Mohamed Omar Zaian
9dc2092042 [etcd] make etcd 3.5.9 default (#10483) 2023-09-29 00:22:45 -07:00
Hans Kristian Moen
c7cfd32c40 [cilium] fix: invalid hubble yaml if cilium_hubble_tls_generate is enabled (#10430) (#10476)
Co-authored-by: Toon Albers <45094749+toonalbers@users.noreply.github.com>
2023-09-26 05:01:28 -07:00
Boris Barnier
a4b0656d9b [2.23] Add hashes for kubernetes version 1.25.14, 1.27.6 & 1.26.9 (#10443)
* Add hashes for kubernetes version 1.27.6 & 1.26.9

Signed-off-by: Boris Barnier <bozzo@users.noreply.github.com>

* Add hashes for kubernetes version 1.25.14

Signed-off-by: Boris Barnier <bozzo@users.noreply.github.com>

---------

Signed-off-by: Boris Barnier <bozzo@users.noreply.github.com>
2023-09-18 07:18:32 -07:00
244 changed files with 2523 additions and 2400 deletions


@@ -5,4 +5,4 @@ roles/kubernetes/control-plane/defaults/main/main.yml jinja[spacing]
roles/kubernetes/kubeadm/defaults/main.yml jinja[spacing]
roles/kubernetes/node/defaults/main.yml jinja[spacing]
roles/kubernetes/preinstall/defaults/main.yml jinja[spacing]
roles/kubespray-defaults/defaults/main/main.yml jinja[spacing]
roles/kubespray-defaults/defaults/main.yaml jinja[spacing]

.github/ISSUE_TEMPLATE/bug-report.md (new file, 44 lines)

@@ -0,0 +1,44 @@
---
name: Bug Report
about: Report a bug encountered while operating Kubernetes
labels: kind/bug
---
<!--
Please, be ready for followup questions, and please respond in a timely
manner. If we can't reproduce a bug or think a feature already exists, we
might close your issue. If we're wrong, PLEASE feel free to reopen it and
explain why.
-->
**Environment**:
- **Cloud provider or hardware configuration:**
- **OS (`printf "$(uname -srm)\n$(cat /etc/os-release)\n"`):**
- **Version of Ansible** (`ansible --version`):
- **Version of Python** (`python --version`):
**Kubespray version (commit) (`git rev-parse --short HEAD`):**
**Network plugin used**:
**Full inventory with variables (`ansible -i inventory/sample/inventory.ini all -m debug -a "var=hostvars[inventory_hostname]"`):**
<!-- We recommend using snippets services like https://gist.github.com/ etc. -->
**Command used to invoke ansible**:
**Output of ansible run**:
<!-- We recommend using snippets services like https://gist.github.com/ etc. -->
**Anything else we need to know**:
<!-- By running scripts/collect-info.yaml you can get a lot of useful information.
The script can be started with:
ansible-playbook -i <inventory_file_path> -u <ssh_user> -e ansible_ssh_user=<ssh_user> -b --become-user=root -e dir=`pwd` scripts/collect-info.yaml
(If you are using CoreOS, remember to add '-e ansible_python_interpreter=/opt/bin/python').
After running this command you can find logs in `pwd`/logs.tar.gz. You can even upload the entire file somewhere and paste the link here.-->


@@ -1,117 +0,0 @@
---
name: Bug Report
description: Report a bug encountered while using Kubespray
labels: kind/bug
body:
- type: markdown
attributes:
value: |
Please, be ready for followup questions, and please respond in a timely
manner. If we can't reproduce a bug or think a feature already exists, we
might close your issue. If we're wrong, PLEASE feel free to reopen it and
explain why.
- type: textarea
id: problem
attributes:
label: What happened?
description: |
Please provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner.
validations:
required: true
- type: textarea
id: expected
attributes:
label: What did you expect to happen?
validations:
required: true
- type: textarea
id: repro
attributes:
label: How can we reproduce it (as minimally and precisely as possible)?
validations:
required: true
- type: markdown
attributes:
value: '### Environment'
- type: textarea
id: os
attributes:
label: OS
placeholder: 'printf "$(uname -srm)\n$(cat /etc/os-release)\n"'
validations:
required: true
- type: textarea
id: ansible_version
attributes:
label: Version of Ansible
placeholder: 'ansible --version'
validations:
required: true
- type: input
id: python_version
attributes:
label: Version of Python
placeholder: 'python --version'
validations:
required: true
- type: input
id: kubespray_version
attributes:
label: Version of Kubespray (commit)
placeholder: 'git rev-parse --short HEAD'
validations:
required: true
- type: dropdown
id: network_plugin
attributes:
label: Network plugin used
options:
- calico
- cilium
- cni
- custom_cni
- flannel
- kube-ovn
- kube-router
- macvlan
- meta
- multus
- ovn4nfv
- weave
validations:
required: true
- type: textarea
id: inventory
attributes:
label: Full inventory with variables
placeholder: 'ansible -i inventory/sample/inventory.ini all -m debug -a "var=hostvars[inventory_hostname]"'
- type: input
id: ansible_command
attributes:
label: Command used to invoke ansible
- type: textarea
id: ansible_output
attributes:
label: Output of ansible run
description: We recommend using snippets services like https://gist.github.com/ etc.
- type: textarea
id: anything_else
attributes:
label: Anything else we need to know
description: |
By running scripts/collect-info.yaml you can get a lot of useful information.
The script can be started with:
ansible-playbook -i <inventory_file_path> -u <ssh_user> -e ansible_ssh_user=<ssh_user> -b --become-user=root -e dir=`pwd` scripts/collect-info.yaml
(If you are using CoreOS, remember to add '-e ansible_python_interpreter=/opt/bin/python').
After running this command you can find logs in `pwd`/logs.tar.gz. You can even upload the entire file somewhere and paste the link here


@@ -1,5 +0,0 @@
---
contact_links:
- name: Support Request
url: https://kubernetes.slack.com/channels/kubespray
about: Support request or question relating to Kubernetes

.github/ISSUE_TEMPLATE/enhancement.md (new file, 11 lines)

@@ -0,0 +1,11 @@
---
name: Enhancement Request
about: Suggest an enhancement to the Kubespray project
labels: kind/feature
---
<!-- Please only use this template for submitting enhancement requests -->
**What would you like to be added**:
**Why is this needed**:


@@ -1,20 +0,0 @@
---
name: Enhancement Request
description: Suggest an enhancement to the Kubespray project
labels: kind/feature
body:
- type: markdown
attributes:
value: Please only use this template for submitting enhancement requests
- type: textarea
id: what
attributes:
label: What would you like to be added
validations:
required: true
- type: textarea
id: why
attributes:
label: Why is this needed
validations:
required: true

.github/ISSUE_TEMPLATE/failing-test.md (new file, 20 lines)

@@ -0,0 +1,20 @@
---
name: Failing Test
about: Report test failures in Kubespray CI jobs
labels: kind/failing-test
---
<!-- Please only use this template for submitting reports about failing tests in Kubespray CI jobs -->
**Which jobs are failing**:
**Which test(s) are failing**:
**Since when has it been failing**:
**Testgrid link**:
**Reason for failure**:
**Anything else we need to know**:


@@ -1,41 +0,0 @@
---
name: Failing Test
description: Report test failures in Kubespray CI jobs
labels: kind/failing-test
body:
- type: markdown
attributes:
value: Please only use this template for submitting reports about failing tests in Kubespray CI jobs
- type: textarea
id: failing_jobs
attributes:
label: Which jobs are failing?
validations:
required: true
- type: textarea
id: failing_tests
attributes:
label: Which tests are failing?
validations:
required: true
- type: input
id: since_when
attributes:
label: Since when has it been failing?
validations:
required: true
- type: textarea
id: failure_reason
attributes:
label: Reason for failure
description: If you don't know and have no guess, just put "Unknown"
validations:
required: true
- type: textarea
id: anything_else
attributes:
label: Anything else we need to know

.github/ISSUE_TEMPLATE/support.md (new file, 18 lines)

@@ -0,0 +1,18 @@
---
name: Support Request
about: Support request or question relating to Kubespray
labels: kind/support
---
<!--
STOP -- PLEASE READ!
GitHub is not the right place for support requests.
If you're looking for help, check [Stack Overflow](https://stackoverflow.com/questions/tagged/kubespray) and the [troubleshooting guide](https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/).
You can also post your question on the [Kubernetes Slack](http://slack.k8s.io/) or the [Discuss Kubernetes](https://discuss.kubernetes.io/) forum.
If the matter is security related, please disclose it privately via https://kubernetes.io/security/.
-->


@@ -9,7 +9,7 @@ stages:
- deploy-special
variables:
KUBESPRAY_VERSION: v2.23.2
KUBESPRAY_VERSION: v2.22.1
FAILFASTCI_NAMESPACE: 'kargo-ci'
GITLAB_REPOSITORY: 'kargo-ci/kubernetes-sigs-kubespray'
ANSIBLE_FORCE_COLOR: "true"


@@ -27,14 +27,6 @@ ansible-lint:
- ansible-lint -v
except: ['triggers', 'master']
jinja-syntax-check:
extends: .job
stage: unit-tests
tags: [light]
script:
- "find -name '*.j2' -exec tests/scripts/check-templates.py {} +"
except: ['triggers', 'master']
syntax-check:
extends: .job
stage: unit-tests


@@ -265,11 +265,6 @@ packet_debian11-kubelet-csr-approver:
extends: .packet_pr
when: manual
packet_debian12-custom-cni-helm:
stage: deploy-part2
extends: .packet_pr
when: manual
# ### PR JOBS PART3
# Long jobs (45min+)


@@ -69,12 +69,3 @@ repos:
entry: tests/scripts/md-table/test.sh
language: script
pass_filenames: false
- id: jinja-syntax-check
name: jinja-syntax-check
entry: tests/scripts/check-templates.py
language: python
types:
- jinja
additional_dependencies:
- Jinja2


@@ -1,8 +1,5 @@
# syntax=docker/dockerfile:1
# Use immutable image tags rather than mutable tags (like ubuntu:22.04)
FROM ubuntu:22.04@sha256:149d67e29f765f4db62aa52161009e99e389544e25a8f43c8c89d4a445a7ca37
FROM ubuntu:jammy-20230308
# Some tools like yamllint need this
# Pip needs this as well at the moment to install ansible
# (and potentially other packages)
@@ -10,37 +7,7 @@ FROM ubuntu:22.04@sha256:149d67e29f765f4db62aa52161009e99e389544e25a8f43c8c89d4a
ENV LANG=C.UTF-8 \
DEBIAN_FRONTEND=noninteractive \
PYTHONDONTWRITEBYTECODE=1
WORKDIR /kubespray
# hadolint ignore=DL3008
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
apt-get update -q \
&& apt-get install -yq --no-install-recommends \
curl \
python3 \
python3-pip \
sshpass \
vim \
rsync \
openssh-client \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /var/log/*
RUN --mount=type=bind,source=requirements.txt,target=requirements.txt \
--mount=type=cache,sharing=locked,id=pipcache,mode=0777,target=/root/.cache/pip \
pip install --no-compile --no-cache-dir -r requirements.txt \
&& find /usr -type d -name '*__pycache__' -prune -exec rm -rf {} \;
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
RUN --mount=type=bind,source=roles/kubespray-defaults/defaults/main/main.yml,target=roles/kubespray-defaults/defaults/main/main.yml \
KUBE_VERSION=$(sed -n 's/^kube_version: //p' roles/kubespray-defaults/defaults/main/main.yml) \
OS_ARCHITECTURE=$(dpkg --print-architecture) \
&& curl -L "https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/${OS_ARCHITECTURE}/kubectl" -o /usr/local/bin/kubectl \
&& echo "$(curl -L "https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/${OS_ARCHITECTURE}/kubectl.sha256")" /usr/local/bin/kubectl | sha256sum --check \
&& chmod a+x /usr/local/bin/kubectl
COPY *.yml ./
COPY *.cfg ./
COPY roles ./roles
@@ -50,3 +17,29 @@ COPY library ./library
COPY extra_playbooks ./extra_playbooks
COPY playbooks ./playbooks
COPY plugins ./plugins
RUN apt update -q \
&& apt install -yq --no-install-recommends \
curl \
python3 \
python3-pip \
sshpass \
vim \
rsync \
openssh-client \
&& pip install --no-compile --no-cache-dir \
ansible==7.6.0 \
ansible-core==2.14.6 \
cryptography==41.0.1 \
jinja2==3.1.2 \
netaddr==0.8.0 \
jmespath==1.0.1 \
MarkupSafe==2.1.3 \
ruamel.yaml==0.17.21 \
passlib==1.7.4 \
&& KUBE_VERSION=$(sed -n 's/^kube_version: //p' roles/kubespray-defaults/defaults/main.yaml) \
&& curl -L https://dl.k8s.io/release/$KUBE_VERSION/bin/linux/$(dpkg --print-architecture)/kubectl -o /usr/local/bin/kubectl \
&& echo $(curl -L https://dl.k8s.io/release/$KUBE_VERSION/bin/linux/$(dpkg --print-architecture)/kubectl.sha256) /usr/local/bin/kubectl | sha256sum --check \
&& chmod a+x /usr/local/bin/kubectl \
&& rm -rf /var/lib/apt/lists/* /var/log/* \
&& find /usr -type d -name '*__pycache__' -prune -exec rm -rf {} \;


@@ -24,7 +24,6 @@ aliases:
- mzaian
- mrfreezeex
- erikjiang
- vannten
kubespray-emeritus_approvers:
- riverzhang
- atoms


@@ -75,8 +75,8 @@ You will then need to use [bind mounts](https://docs.docker.com/storage/bind-mou
to access the inventory and SSH key in the container, like this:
```ShellSession
git checkout v2.24.1
docker pull quay.io/kubespray/kubespray:v2.24.1
git checkout v2.23.2
docker pull quay.io/kubespray/kubespray:v2.23.2
docker run --rm -it --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \
--mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
quay.io/kubespray/kubespray:v2.23.2 bash
@@ -161,28 +161,28 @@ Note: Upstart/SysV init based OS types are not supported.
## Supported Components
- Core
- [kubernetes](https://github.com/kubernetes/kubernetes) v1.28.10
- [kubernetes](https://github.com/kubernetes/kubernetes) v1.27.10
- [etcd](https://github.com/etcd-io/etcd) v3.5.10
- [docker](https://www.docker.com/) v20.10 (see note)
- [containerd](https://containerd.io/) v1.7.13
- [cri-o](http://cri-o.io/) v1.27 (experimental: see [CRI-O Note](docs/cri-o.md). Only on fedora, ubuntu and centos based OS)
- Network Plugin
- [cni-plugins](https://github.com/containernetworking/plugins) v1.2.0
- [calico](https://github.com/projectcalico/calico) v3.26.4
- [calico](https://github.com/projectcalico/calico) v3.25.2
- [cilium](https://github.com/cilium/cilium) v1.13.4
- [flannel](https://github.com/flannel-io/flannel) v0.22.0
- [kube-ovn](https://github.com/alauda/kube-ovn) v1.11.5
- [kube-router](https://github.com/cloudnativelabs/kube-router) v2.0.0
- [kube-router](https://github.com/cloudnativelabs/kube-router) v1.5.1
- [multus](https://github.com/k8snetworkplumbingwg/multus-cni) v3.8
- [weave](https://github.com/weaveworks/weave) v2.8.1
- [kube-vip](https://github.com/kube-vip/kube-vip) v0.5.12
- Application
- [cert-manager](https://github.com/jetstack/cert-manager) v1.13.2
- [cert-manager](https://github.com/jetstack/cert-manager) v1.11.1
- [coredns](https://github.com/coredns/coredns) v1.10.1
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v1.9.4
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v1.8.1
- [krew](https://github.com/kubernetes-sigs/krew) v0.4.4
- [argocd](https://argoproj.github.io/) v2.8.4
- [helm](https://helm.sh/) v3.13.1
- [argocd](https://argoproj.github.io/) v2.8.0
- [helm](https://helm.sh/) v3.12.3
- [metallb](https://metallb.universe.tf/) v0.13.9
- [registry](https://github.com/distribution/distribution) v2.8.1
- Storage Plugin
@@ -202,7 +202,7 @@ Note: Upstart/SysV init based OS types are not supported.
## Requirements
- **Minimum required version of Kubernetes is v1.26**
- **Minimum required version of Kubernetes is v1.25**
- **Ansible v2.14+, Jinja 2.11+ and python-netaddr is installed on the machine that will run Ansible commands**
- The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](docs/offline-environment.md))
- The target servers are configured to allow **IPv4 forwarding**.


@@ -5,9 +5,7 @@
Container image collecting script for offline deployment
This script has two features:
(1) Get container images from an environment which is deployed online.
(2) Deploy local container registry and register the container images to the registry.
Step (1) should be done at the online site as preparation, then we bring the gotten images
@@ -29,7 +27,7 @@ manage-offline-container-images.sh register
## generate_list.sh
This script generates the list of downloaded files and the list of container images from the `roles/kubespray-defaults/defaults/main/download.yml` file.
This script generates the list of downloaded files and the list of container images from the `roles/download/defaults/main/main.yml` file.
Running this script will execute the `generate_list.yml` playbook in the kubespray root directory and generate four files:
all downloaded file URLs in files.list, all container images in images.list, and jinja2 templates in *.template.


@@ -5,7 +5,7 @@ CURRENT_DIR=$(cd $(dirname $0); pwd)
TEMP_DIR="${CURRENT_DIR}/temp"
REPO_ROOT_DIR="${CURRENT_DIR%/contrib/offline}"
: ${DOWNLOAD_YML:="roles/kubespray-defaults/defaults/main/download.yml"}
: ${DOWNLOAD_YML:="roles/download/defaults/main/main.yml"}
mkdir -p ${TEMP_DIR}
@@ -19,7 +19,7 @@ sed -n '/^downloads:/,/download_defaults:/p' ${REPO_ROOT_DIR}/${DOWNLOAD_YML} \
| sed 'N;s#\n# #g' | tr ' ' ':' | sed 's/\"//g' > ${TEMP_DIR}/images.list.template
# add kube-* images to images list template
# Those container images are downloaded by kubeadm, then roles/kubespray-defaults/defaults/main/download.yml
# Those container images are downloaded by kubeadm, then roles/download/defaults/main/main.yml
# doesn't contain those images. That is reason why here needs to put those images into the
# list separately.
KUBE_IMAGES="kube-apiserver kube-controller-manager kube-scheduler kube-proxy"


@@ -23,8 +23,8 @@ function create_container_image_tar() {
mkdir ${IMAGE_DIR}
cd ${IMAGE_DIR}
sudo ${runtime} pull registry:latest
sudo ${runtime} save -o registry-latest.tar registry:latest
sudo docker pull registry:latest
sudo docker save -o registry-latest.tar registry:latest
for image in ${IMAGES}
do
@@ -32,7 +32,7 @@ function create_container_image_tar() {
set +e
for step in $(seq 1 ${RETRY_COUNT})
do
sudo ${runtime} pull ${image}
sudo docker pull ${image}
if [ $? -eq 0 ]; then
break
fi
@@ -42,7 +42,7 @@ function create_container_image_tar() {
fi
done
set -e
sudo ${runtime} save -o ${FILE_NAME} ${image}
sudo docker save -o ${FILE_NAME} ${image}
# NOTE: Here we remove the following repo parts from each image
# so that these parts will be replaced with Kubespray.
@@ -95,16 +95,16 @@ function register_container_images() {
sed -i s@"HOSTNAME"@"${LOCALHOST_NAME}"@ ${TEMP_DIR}/registries.conf
sudo cp ${TEMP_DIR}/registries.conf /etc/containers/registries.conf
else
echo "runtime package(docker-ce, podman, nerctl, etc.) should be installed"
echo "docker package(docker-ce, etc.) should be installed"
exit 1
fi
tar -zxvf ${IMAGE_TAR_FILE}
sudo ${runtime} load -i ${IMAGE_DIR}/registry-latest.tar
sudo docker load -i ${IMAGE_DIR}/registry-latest.tar
set +e
sudo ${runtime} container inspect registry >/dev/null 2>&1
sudo docker container inspect registry >/dev/null 2>&1
if [ $? -ne 0 ]; then
sudo ${runtime} run --restart=always -d -p 5000:5000 --name registry registry:latest
sudo docker run --restart=always -d -p 5000:5000 --name registry registry:latest
fi
set -e
@@ -112,8 +112,8 @@ function register_container_images() {
file_name=$(echo ${line} | awk '{print $1}')
raw_image=$(echo ${line} | awk '{print $2}')
new_image="${LOCALHOST_NAME}:5000/${raw_image}"
org_image=$(sudo ${runtime} load -i ${IMAGE_DIR}/${file_name} | head -n1 | awk '{print $3}')
image_id=$(sudo ${runtime} image inspect ${org_image} | grep "\"Id\":" | awk -F: '{print $3}'| sed s/'\",'//)
org_image=$(sudo docker load -i ${IMAGE_DIR}/${file_name} | head -n1 | awk '{print $3}')
image_id=$(sudo docker image inspect ${org_image} | grep "\"Id\":" | awk -F: '{print $3}'| sed s/'\",'//)
if [ -z "${file_name}" ]; then
echo "Failed to get file_name for line ${line}"
exit 1
@@ -130,9 +130,9 @@ function register_container_images() {
echo "Failed to get image_id for file ${file_name}"
exit 1
fi
sudo ${runtime} load -i ${IMAGE_DIR}/${file_name}
sudo ${runtime} tag ${image_id} ${new_image}
sudo ${runtime} push ${new_image}
sudo docker load -i ${IMAGE_DIR}/${file_name}
sudo docker tag ${image_id} ${new_image}
sudo docker push ${new_image}
done <<< "$(cat ${IMAGE_LIST})"
echo "Succeeded to register container images to local registry."
@@ -143,18 +143,6 @@ function register_container_images() {
echo "- quay_image_repo"
}
# get runtime command
if command -v nerdctl 1>/dev/null 2>&1; then
runtime="nerdctl"
elif command -v podman 1>/dev/null 2>&1; then
runtime="podman"
elif command -v docker 1>/dev/null 2>&1; then
runtime="docker"
else
echo "No supported container runtime found"
exit 1
fi
if [ "${OPTION}" == "create" ]; then
create_container_image_tar
elif [ "${OPTION}" == "register" ]; then


@@ -38,7 +38,7 @@ sudo "${runtime}" container inspect nginx >/dev/null 2>&1
if [ $? -ne 0 ]; then
sudo "${runtime}" run \
--restart=always -d -p ${NGINX_PORT}:80 \
--volume "${OFFLINE_FILES_DIR}":/usr/share/nginx/html/download \
--volume "${OFFLINE_FILES_DIR}:/usr/share/nginx/html/download" \
--volume "${CURRENT_DIR}"/nginx.conf:/etc/nginx/nginx.conf \
--name nginx nginx:alpine
fi


@@ -50,32 +50,70 @@ Example (this one assumes you are using Ubuntu)
ansible-playbook -i ./inventory/hosts ./cluster.yml -e ansible_user=ubuntu -b --become-user=root --flush-cache
```
## Using other distrib than Ubuntu
***Using other distrib than Ubuntu***
If you want to use another distribution than Ubuntu 18.04 (Bionic) LTS, you can modify the search filters of the 'data "aws_ami" "distro"' in variables.tf.
To leverage a Linux distribution other than Ubuntu 18.04 (Bionic) LTS for your Terraform configurations, you can adjust the AMI search filters within the 'data "aws_ami" "distro"' block by utilizing variables in your `terraform.tfvars` file. This approach ensures a flexible configuration that adapts to various Linux distributions without directly modifying the core Terraform files.
For example, to use:

- Debian Jessie, replace 'data "aws_ami" "distro"' in variables.tf with

```ini
data "aws_ami" "distro" {
  most_recent = true

  filter {
    name   = "name"
    values = ["debian-jessie-amd64-hvm-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["379101102735"]
}
```

- Ubuntu 16.04, replace 'data "aws_ami" "distro"' in variables.tf with

```ini
data "aws_ami" "distro" {
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["099720109477"]
}
```

- Centos 7, replace 'data "aws_ami" "distro"' in variables.tf with

```ini
data "aws_ami" "distro" {
  most_recent = true

  filter {
    name   = "name"
    values = ["dcos-centos7-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["688023202711"]
}
```

### Example Usages

- **Debian Jessie**: To configure the usage of Debian Jessie, insert the subsequent lines into your `terraform.tfvars`:

```hcl
ami_name_pattern = "debian-jessie-amd64-hvm-*"
ami_owners = ["379101102735"]
```

- **Ubuntu 16.04**: To utilize Ubuntu 16.04 instead, apply the following configuration in your `terraform.tfvars`:

```hcl
ami_name_pattern = "ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-*"
ami_owners = ["099720109477"]
```

- **Centos 7**: For employing Centos 7, incorporate these lines into your `terraform.tfvars`:

```hcl
ami_name_pattern = "dcos-centos7-*"
ami_owners = ["688023202711"]
```
## Connecting to Kubernetes


@@ -20,38 +20,20 @@ variable "aws_cluster_name" {
description = "Name of AWS Cluster"
}
variable "ami_name_pattern" {
description = "The name pattern to use for AMI lookup"
type = string
default = "debian-10-amd64-*"
}
variable "ami_virtualization_type" {
description = "The virtualization type to use for AMI lookup"
type = string
default = "hvm"
}
variable "ami_owners" {
description = "The owners to use for AMI lookup"
type = list(string)
default = ["136693071363"]
}
data "aws_ami" "distro" {
most_recent = true
filter {
name = "name"
values = [var.ami_name_pattern]
values = ["debian-10-amd64-*"]
}
filter {
name = "virtualization-type"
values = [var.ami_virtualization_type]
values = ["hvm"]
}
owners = var.ami_owners
owners = ["136693071363"] # Debian-10
}
//AWS VPC Variables


@@ -7,7 +7,7 @@ terraform {
required_providers {
equinix = {
source = "equinix/equinix"
version = "1.24.0"
version = "~> 1.14"
}
}
}


@@ -1,5 +1,5 @@
.terraform
*.tfvars
!sample-inventory/cluster.tfvars
!sample-inventory\/cluster.tfvars
*.tfstate
*.tfstate.backup


@@ -318,7 +318,6 @@ k8s_nodes:
mount_path: string # Path to where the partition should be mounted
partition_start: string # Where the partition should start (e.g. 10GB ). Note, if you set the partition_start to 0 there will be no space left for the root partition
partition_end: string # Where the partition should end (e.g. 10GB or -1 for end of volume)
netplan_critical_dhcp_interface: string # Name of interface to set the dhcp flag critical = true, to circumvent [this issue](https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1776013).
```
For example:


@@ -19,8 +19,8 @@ data "cloudinit_config" "cloudinit" {
part {
content_type = "text/cloud-config"
content = templatefile("${path.module}/templates/cloudinit.yaml.tmpl", {
extra_partitions = [],
netplan_critical_dhcp_interface = ""
# template_file doesn't support lists
extra_partitions = ""
})
}
}
@@ -821,8 +821,7 @@ resource "openstack_compute_instance_v2" "k8s_nodes" {
flavor_id = each.value.flavor
key_pair = openstack_compute_keypair_v2.k8s.name
user_data = each.value.cloudinit != null ? templatefile("${path.module}/templates/cloudinit.yaml.tmpl", {
extra_partitions = each.value.cloudinit.extra_partitions,
netplan_critical_dhcp_interface = each.value.cloudinit.netplan_critical_dhcp_interface,
extra_partitions = each.value.cloudinit.extra_partitions
}) : data.cloudinit_config.cloudinit.rendered
dynamic "block_device" {


@@ -1,4 +1,4 @@
%{~ if length(extra_partitions) > 0 || netplan_critical_dhcp_interface != "" }
%{~ if length(extra_partitions) > 0 }
#cloud-config
bootcmd:
%{~ for idx, partition in extra_partitions }
@@ -8,26 +8,11 @@ bootcmd:
%{~ endfor }
runcmd:
%{~ if netplan_critical_dhcp_interface != "" }
- netplan apply
%{~ endif }
%{~ for idx, partition in extra_partitions }
- mkdir -p ${partition.mount_path}
- chown nobody:nogroup ${partition.mount_path}
- mount ${partition.partition_path} ${partition.mount_path}
%{~ endfor ~}
%{~ if netplan_critical_dhcp_interface != "" }
write_files:
- path: /etc/netplan/90-critical-dhcp.yaml
content: |
network:
version: 2
ethernets:
${ netplan_critical_dhcp_interface }:
dhcp4: true
critical: true
%{~ endif }
%{~ endfor }
mounts:
%{~ for idx, partition in extra_partitions }


@@ -142,14 +142,13 @@ variable "k8s_nodes" {
additional_server_groups = optional(list(string))
server_group = optional(string)
cloudinit = optional(object({
extra_partitions = optional(list(object({
extra_partitions = list(object({
volume_path = string
partition_path = string
partition_start = string
partition_end = string
mount_path = string
})), [])
netplan_critical_dhcp_interface = optional(string, "")
}))
}))
}))
}


@@ -140,4 +140,4 @@ terraform destroy --var-file cluster-settings.tfvars \
* `backend_servers`: List of servers that traffic to the port should be forwarded to.
* `server_groups`: Group servers together
* `servers`: The servers that should be included in the group.
* `anti_affinity_policy`: Defines if a server group is an anti-affinity group. The value can be "strict", "yes" or "no". "strict" means the policy does not allow servers in the same server group to be placed on the same compute host; "yes" means a best-effort policy that tries to put servers on different hosts, but this is not guaranteed.
* `anti_affinity`: If anti-affinity should be enabled, try to spread the VMs out on separate nodes.


@@ -18,7 +18,7 @@ ssh_public_keys = [
# check list of available plan https://developers.upcloud.com/1.3/7-plans/
machines = {
"control-plane-0" : {
"master-0" : {
"node_type" : "master",
# plan to use instead of custom cpu/mem
"plan" : null,
@@ -133,9 +133,9 @@ loadbalancers = {
server_groups = {
# "control-plane" = {
# servers = [
# "control-plane-0"
# "master-0"
# ]
# anti_affinity_policy = "strict"
# anti_affinity = true
# },
# "workers" = {
# servers = [
@@ -143,6 +143,6 @@ server_groups = {
# "worker-1",
# "worker-2"
# ]
# anti_affinity_policy = "yes"
# anti_affinity = true
# }
}


@@ -22,7 +22,7 @@ locals {
# If prefix is set, all resources will be prefixed with "${var.prefix}-"
# Else don't prefix with anything
resource-prefix = "%{if var.prefix != ""}${var.prefix}-%{endif}"
resource-prefix = "%{ if var.prefix != ""}${var.prefix}-%{ endif }"
}
resource "upcloud_network" "private" {
@@ -38,7 +38,7 @@ resource "upcloud_network" "private" {
resource "upcloud_storage" "additional_disks" {
for_each = {
for disk in local.disks : "${disk.node_name}_${disk.disk_name}" => disk.disk
for disk in local.disks: "${disk.node_name}_${disk.disk_name}" => disk.disk
}
size = each.value.size
@@ -165,7 +165,7 @@ resource "upcloud_firewall_rules" "master" {
for_each = upcloud_server.master
server_id = each.value.id
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = var.master_allowed_remote_ips
content {
@@ -181,7 +181,7 @@ resource "upcloud_firewall_rules" "master" {
}
}
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = length(var.master_allowed_remote_ips) > 0 ? [1] : []
content {
@@ -197,7 +197,7 @@ resource "upcloud_firewall_rules" "master" {
}
}
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = var.k8s_allowed_remote_ips
content {
@@ -213,7 +213,7 @@ resource "upcloud_firewall_rules" "master" {
}
}
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = length(var.k8s_allowed_remote_ips) > 0 ? [1] : []
content {
@@ -229,7 +229,7 @@ resource "upcloud_firewall_rules" "master" {
}
}
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = var.master_allowed_ports
content {
@@ -245,7 +245,7 @@ resource "upcloud_firewall_rules" "master" {
}
}
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = var.firewall_default_deny_in ? ["tcp", "udp"] : []
content {
@@ -261,7 +261,7 @@ resource "upcloud_firewall_rules" "master" {
}
}
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = var.firewall_default_deny_in ? ["tcp", "udp"] : []
content {
@@ -277,7 +277,7 @@ resource "upcloud_firewall_rules" "master" {
}
}
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = var.firewall_default_deny_in ? ["tcp", "udp"] : []
content {
@@ -293,7 +293,7 @@ resource "upcloud_firewall_rules" "master" {
}
}
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = var.firewall_default_deny_in ? ["tcp", "udp"] : []
content {
@@ -309,7 +309,7 @@ resource "upcloud_firewall_rules" "master" {
}
}
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = var.firewall_default_deny_in ? ["udp"] : []
content {
@@ -325,7 +325,7 @@ resource "upcloud_firewall_rules" "master" {
}
}
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = var.firewall_default_deny_in ? ["udp"] : []
content {
@@ -354,7 +354,7 @@ resource "upcloud_firewall_rules" "k8s" {
for_each = upcloud_server.worker
server_id = each.value.id
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = var.k8s_allowed_remote_ips
content {
@@ -370,7 +370,7 @@ resource "upcloud_firewall_rules" "k8s" {
}
}
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = length(var.k8s_allowed_remote_ips) > 0 ? [1] : []
content {
@@ -386,7 +386,7 @@ resource "upcloud_firewall_rules" "k8s" {
}
}
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = var.worker_allowed_ports
content {
@@ -402,7 +402,7 @@ resource "upcloud_firewall_rules" "k8s" {
}
}
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = var.firewall_default_deny_in ? ["tcp", "udp"] : []
content {
@@ -418,7 +418,7 @@ resource "upcloud_firewall_rules" "k8s" {
}
}
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = var.firewall_default_deny_in ? ["tcp", "udp"] : []
content {
@@ -434,7 +434,7 @@ resource "upcloud_firewall_rules" "k8s" {
}
}
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = var.firewall_default_deny_in ? ["tcp", "udp"] : []
content {
@@ -450,7 +450,7 @@ resource "upcloud_firewall_rules" "k8s" {
}
}
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = var.firewall_default_deny_in ? ["tcp", "udp"] : []
content {
@@ -466,7 +466,7 @@ resource "upcloud_firewall_rules" "k8s" {
}
}
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = var.firewall_default_deny_in ? ["udp"] : []
content {
@@ -482,7 +482,7 @@ resource "upcloud_firewall_rules" "k8s" {
}
}
dynamic "firewall_rule" {
dynamic firewall_rule {
for_each = var.firewall_default_deny_in ? ["udp"] : []
content {
@@ -535,7 +535,7 @@ resource "upcloud_loadbalancer_frontend" "lb_frontend" {
resource "upcloud_loadbalancer_static_backend_member" "lb_backend_member" {
for_each = {
for be_server in local.lb_backend_servers :
for be_server in local.lb_backend_servers:
"${be_server.server_name}-lb-backend-${be_server.lb_name}" => be_server
if var.loadbalancer_enabled
}
@@ -552,7 +552,7 @@ resource "upcloud_loadbalancer_static_backend_member" "lb_backend_member" {
resource "upcloud_server_group" "server_groups" {
for_each = var.server_groups
title = each.key
anti_affinity_policy = each.value.anti_affinity_policy
anti_affinity = each.value.anti_affinity
labels = {}
members = [for server in each.value.servers : merge(upcloud_server.master, upcloud_server.worker)[server].id]
}


@@ -3,8 +3,8 @@ output "master_ip" {
value = {
for instance in upcloud_server.master :
instance.hostname => {
"public_ip" : instance.network_interface[0].ip_address
"private_ip" : instance.network_interface[1].ip_address
"public_ip": instance.network_interface[0].ip_address
"private_ip": instance.network_interface[1].ip_address
}
}
}
@@ -13,8 +13,8 @@ output "worker_ip" {
value = {
for instance in upcloud_server.worker :
instance.hostname => {
"public_ip" : instance.network_interface[0].ip_address
"private_ip" : instance.network_interface[1].ip_address
"public_ip": instance.network_interface[0].ip_address
"private_ip": instance.network_interface[1].ip_address
}
}
}


@@ -99,7 +99,7 @@ variable "server_groups" {
description = "Server groups"
type = map(object({
anti_affinity_policy = string
anti_affinity = bool
servers = list(string)
}))
}


@@ -3,7 +3,7 @@ terraform {
required_providers {
upcloud = {
source = "UpCloudLtd/upcloud"
version = "~>2.12.0"
version = "~>2.7.1"
}
}
required_version = ">= 0.13"


@@ -18,7 +18,7 @@ ssh_public_keys = [
# check list of available plan https://developers.upcloud.com/1.3/7-plans/
machines = {
"control-plane-0" : {
"master-0" : {
"node_type" : "master",
# plan to use instead of custom cpu/mem
"plan" : null,
@@ -28,7 +28,7 @@ machines = {
"mem" : "4096"
# The size of the storage in GB
"disk_size" : 250
"additional_disks" : {}
"additional_disks": {}
},
"worker-0" : {
"node_type" : "worker",
@@ -40,7 +40,7 @@ machines = {
"mem" : "4096"
# The size of the storage in GB
"disk_size" : 250
"additional_disks" : {
"additional_disks": {
# "some-disk-name-1": {
# "size": 100,
# "tier": "maxiops",
@@ -61,7 +61,7 @@ machines = {
"mem" : "4096"
# The size of the storage in GB
"disk_size" : 250
"additional_disks" : {
"additional_disks": {
# "some-disk-name-1": {
# "size": 100,
# "tier": "maxiops",
@@ -82,7 +82,7 @@ machines = {
"mem" : "4096"
# The size of the storage in GB
"disk_size" : 250
"additional_disks" : {
"additional_disks": {
# "some-disk-name-1": {
# "size": 100,
# "tier": "maxiops",
@@ -134,9 +134,9 @@ loadbalancers = {
server_groups = {
# "control-plane" = {
# servers = [
# "control-plane-0"
# "master-0"
# ]
# anti_affinity_policy = "strict"
# anti_affinity = true
# },
# "workers" = {
# servers = [
@@ -144,6 +144,6 @@ server_groups = {
# "worker-1",
# "worker-2"
# ]
# anti_affinity_policy = "yes"
# anti_affinity = true
# }
}


@@ -136,7 +136,7 @@ variable "server_groups" {
description = "Server groups"
type = map(object({
anti_affinity_policy = string
anti_affinity = bool
servers = list(string)
}))


@@ -3,7 +3,7 @@ terraform {
required_providers {
upcloud = {
source = "UpCloudLtd/upcloud"
version = "~>2.12.0"
version = "~>2.7.1"
}
}
required_version = ">= 0.13"


@@ -13,7 +13,6 @@
* CNI
* [Calico](docs/calico.md)
* [Flannel](docs/flannel.md)
* [Cilium](docs/cilium.md)
* [Kube Router](docs/kube-router.md)
* [Kube OVN](docs/kube-ovn.md)
* [Weave](docs/weave.md)
@@ -30,6 +29,7 @@
* [Equinix Metal](/docs/equinix-metal.md)
* [vSphere](/docs/vsphere.md)
* [Operating Systems](docs/bootstrap-os.md)
* [Debian](docs/debian.md)
* [Flatcar Container Linux](docs/flatcar.md)
* [Fedora CoreOS](docs/fcos.md)
* [OpenSUSE](docs/opensuse.md)


@@ -32,7 +32,7 @@ Based on the table below and the available python version for your ansible host
| Ansible Version | Python Version |
|-----------------|----------------|
| >= 2.15.5 | 3.9-3.11 |
| 2.14 | 3.9-3.11 |
## Inventory


@@ -7,6 +7,10 @@ Kubespray supports multiple ansible versions but only the default (5.x) gets wid
## CentOS 8
CentOS 8 / Oracle Linux 8,9 / AlmaLinux 8,9 / Rocky Linux 8,9 ship only with iptables-nft (i.e. without iptables-legacy, similar to RHEL8).
The only tested configuration for now is using the Calico CNI.
You need to add `calico_iptables_backend: "NFT"` to your configuration.
If you have containers that are using iptables in the host network namespace (`hostNetwork=true`),
you need to ensure they are using iptables-nft.
An example of how k8s does the autodetection can be found [in this PR](https://github.com/kubernetes/kubernetes/pull/82966)
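As a group_vars sketch of the requirement stated above (not part of the diff):

```yaml
# group_vars/k8s_cluster/k8s-net-calico.yml
calico_iptables_backend: "NFT"
```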


@@ -11,7 +11,7 @@ amazon | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
centos7 | :white_check_mark: | :x: | :x: | :white_check_mark: | :x: | :white_check_mark: | :x: | :white_check_mark: |
debian10 | :white_check_mark: | :white_check_mark: | :x: | :x: | :x: | :x: | :white_check_mark: | :x: |
debian11 | :white_check_mark: | :x: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: |
debian12 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: |
debian12 | :white_check_mark: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
fedora37 | :white_check_mark: | :x: | :x: | :x: | :x: | :white_check_mark: | :x: | :x: |
fedora38 | :x: | :x: | :x: | :x: | :white_check_mark: | :x: | :x: | :x: |
opensuse | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |


@@ -44,8 +44,6 @@ containerd_registries_mirrors:
image_command_tool: crictl
```
The `containerd_registries` and `containerd_insecure_registries` configs are deprecated.
### Containerd Runtimes
Containerd supports multiple runtime configurations that can be used with


@@ -42,22 +42,6 @@ crio_registries:
[CRI-O]: https://cri-o.io/
The following is a method to enable insecure registries.
```yaml
crio_insecure_registries:
- 10.0.0.2:5000
```
And you can configure authentication for these registries after `crio_insecure_registries`.
```yaml
crio_registry_auth:
- registry: 10.0.0.2:5000
username: user
password: pass
```
## Note about user namespaces
CRI-O has support for user namespaces. This feature is optional and can be enabled by setting the following two variables.

docs/debian.md (new file, 41 lines)

@@ -0,0 +1,41 @@
# Debian Jessie
Debian Jessie installation Notes:
- Add
```ini
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
```
to `/etc/default/grub`. Then update with
```ShellSession
sudo update-grub
sudo update-grub2
sudo reboot
```
- Add the [backports](https://backports.debian.org/Instructions/) which contain Systemd 2.30 and update Systemd.
```ShellSession
apt-get -t jessie-backports install systemd
```
(Necessary because the default Systemd version (2.15) does not support the "Delegate" directive in service files)
- Add the Ansible repository and install Ansible to get a proper version
```ShellSession
sudo add-apt-repository ppa:ansible/ansible
sudo apt-get update
sudo apt-get install ansible
```
- Install Jinja2 and Python-Netaddr
```ShellSession
sudo apt-get install python-jinja2=2.8-1~bpo8+1 python-netaddr
```
Now you can continue with [Preparing your deployment](getting-started.md#starting-custom-deployment)


@@ -97,9 +97,3 @@ Adding extra options to pass to the docker daemon:
## This string should be exactly as you wish it to appear.
docker_options: ""
```
For Debian based distributions, set the path to store the GPG key to avoid using the default one used in `apt_key` module (e.g. /etc/apt/trusted.gpg)
```yaml
docker_repo_key_keyring: /etc/apt/trusted.gpg.d/docker.gpg
```


@@ -54,11 +54,6 @@ kube_apiserver_enable_admission_plugins:
- PodNodeSelector
- PodSecurity
kube_apiserver_admission_control_config_file: true
# Creates config file for PodNodeSelector
# kube_apiserver_admission_plugins_needs_configuration: [PodNodeSelector]
# Define the default node selector, by default all the workloads will be scheduled on nodes
# with label network=srv1
# kube_apiserver_admission_plugins_podnodeselector_default_node_selector: "network=srv1"
# EventRateLimit plugin configuration
kube_apiserver_admission_event_rate_limits:
limit_1:
@@ -120,7 +115,7 @@ kube_pod_security_default_enforce: restricted
Let's take a deep look to the resultant **kubernetes** configuration:
* The `anonymous-auth` (on `kube-apiserver`) is set to `true` by default. This is fine, because it is considered safe if you enable `RBAC` for the `authorization-mode`.
* The `enable-admission-plugins` includes `PodSecurity` (for more details, please take a look here: <https://kubernetes.io/docs/concepts/security/pod-security-admission/>). Then, we set the `EventRateLimit` plugin, providing additional configuration files (that are automatically created under the hood and mounted inside the `kube-apiserver` container) to make it work.
* The `enable-admission-plugins` does not include the `PodSecurityPolicy` admission plugin, because it is definitely going to be removed from **kubernetes** `v1.25`. For this reason we decided to use the newer `PodSecurity` (for more details, please take a look here: <https://kubernetes.io/docs/concepts/security/pod-security-admission/>). Then, we set the `EventRateLimit` plugin, providing additional configuration files (that are automatically created under the hood and mounted inside the `kube-apiserver` container) to make it work.
* The `encryption-provider-config` provides encryption at rest. This means that the `kube-apiserver` encrypts data before it is stored in `etcd`, so the data is completely unreadable from `etcd` (in case an attacker is able to exploit it).
* The `rotateCertificates` in `KubeletConfiguration` is set to `true` along with `serverTLSBootstrap`. This could be used in alternative to `tlsCertFile` and `tlsPrivateKeyFile` parameters. Additionally it automatically generates certificates by itself. By default the CSRs are approved automatically via [kubelet-csr-approver](https://github.com/postfinance/kubelet-csr-approver). You can customize approval configuration by modifying Helm values via `kubelet_csr_approver_values`.
See <https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/> for more information on the subject.


@@ -1,6 +1,6 @@
# AWS ALB Ingress Controller
**NOTE:** The current image version is `v1.1.9`. Please file any issues you find and note the version used.
**NOTE:** The current image version is `v1.1.6`. Please file any issues you find and note the version used.
The AWS ALB Ingress Controller satisfies Kubernetes [ingress resources](https://kubernetes.io/docs/concepts/services-networking/ingress/) by provisioning [Application Load Balancers](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html).


@@ -70,9 +70,3 @@ If using [control plane load-balancing](https://kube-vip.io/docs/about/architect
```yaml
kube_vip_lb_enable: true
```
In addition, [load-balancing method](https://kube-vip.io/docs/installation/flags/#environment-variables) could be changed:
```yaml
kube_vip_lb_fwdmethod: masquerade
```


@@ -29,6 +29,10 @@ metallb_config:
nodeselector:
kubernetes.io/os: linux
tolerations:
- key: "node-role.kubernetes.io/master"
operator: "Equal"
value: ""
effect: "NoSchedule"
- key: "node-role.kubernetes.io/control-plane"
operator: "Equal"
value: ""
@@ -69,6 +73,7 @@ metallb_config:
primary:
ip_range:
- 192.0.1.0-192.0.1.254
auto_assign: true
pool1:
ip_range:
@@ -77,8 +82,8 @@ metallb_config:
pool2:
ip_range:
- 192.0.3.0/24
avoid_buggy_ips: true # When set to true, .0 and .255 addresses will be avoided.
- 192.0.2.2-192.0.2.2
auto_assign: false
```
## Layer2 Mode


@@ -95,7 +95,7 @@ If you use the settings like the one above, you'll need to define in your invent
* `registry_host`: Container image registry. If you _don't_ use the same repository path for the container images as the ones defined
in [kubesprays-defaults's role defaults](https://github.com/kubernetes-sigs/kubespray/blob/master/roles/kubespray-defaults/defaults/main/download.yml)
in [Download's role defaults](https://github.com/kubernetes-sigs/kubespray/blob/master/roles/download/defaults/main/main.yml)
, you need to override the `*_image_repo` for these container images. If you want to make your life easier, use the
same repository path; then you won't have to override anything else.
* `registry_addr`: Container image registry, but only have [domain or ip]:[port].


@@ -29,6 +29,10 @@ If the RHEL 7/8 hosts are already registered to a valid Red Hat support subscrip
## RHEL 8
RHEL 8 ships only with iptables-nft (i.e. without iptables-legacy).
The only tested configuration for now is using the Calico CNI.
You need to use K8S 1.17+ and add `calico_iptables_backend: "NFT"` to your configuration.
If you have containers that are using iptables in the host network namespace (`hostNetwork=true`),
you need to ensure they are using iptables-nft.
An example of how k8s does the autodetection can be found [in this PR](https://github.com/kubernetes/kubernetes/pull/82966)


@@ -1,6 +1,6 @@
# Node Layouts
There are six node layout types: `default`, `separate`, `ha`, `scale`, `all-in-one`, and `node-etcd-client`.
There are six node layout types: `default`, `separate`, `ha`, `scale`, `all-in-one`, and `multinode`.
`default` is a non-HA two nodes setup with one separate `kube_node`
and the `etcd` group merged with the `kube_control_plane`.
@@ -18,8 +18,7 @@ never actually deployed, but certificates are generated for them.
`all-in-one` layout uses a single node with `kube_control_plane`, `etcd` and `kube_node` merged.
`node-etcd-client` layout consists of a 4-node cluster, all of them in `kube_node`, the first 3 in `etcd`, and only one in `kube_control_plane`.
This is necessary to test setups requiring that nodes are etcd clients (use of cilium as `network_plugin`, for instance)
`multinode` layout consists of two separate `kube_node` and a merged single `etcd+kube_control_plane` node.
Note, the canal network plugin deploys flannel as well plus calico policy controller.


@@ -186,8 +186,6 @@ Stack](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.m
* *containerd_additional_runtimes* - Sets the additional Containerd runtimes used by the Kubernetes CRI plugin.
[Default config](https://github.com/kubernetes-sigs/kubespray/blob/master/roles/container-engine/containerd/defaults/main.yml) can be overridden in inventory vars.
* *crio_criu_support_enabled* - When set to `true`, enables the container checkpoint/restore in CRI-O. It's required to install [CRIU](https://criu.org/Installation) on the host when dumping/restoring checkpoints. And it's recommended to enable the feature gate `ContainerCheckpoint` so that the kubelet get a higher level API to simplify the operations (**Note**: It's still in experimental stage, just for container analytics so far). You can follow the [documentation](https://kubernetes.io/blog/2022/12/05/forensic-container-checkpointing-alpha/).
* *http_proxy/https_proxy/no_proxy/no_proxy_exclude_workers/additional_no_proxy* - Proxy variables for deploying behind a
proxy. Note that no_proxy defaults to all internal cluster IPs and hostnames
that correspond to each node.
@@ -254,6 +252,8 @@ node_taints:
- "node.example.com/external=true:NoSchedule"
```
* *podsecuritypolicy_enabled* - When set to `true`, enables the PodSecurityPolicy admission controller and defines two policies `privileged` (applying to all resources in `kube-system` namespace and kubelet) and `restricted` (applying all other namespaces).
Addons deployed in kube-system namespaces are handled.
* *kubernetes_audit* - When set to `true`, enables Auditing.
The auditing parameters can be tuned via the following variables (which default values are shown below):
* `audit_log_path`: /var/log/audit/kube-apiserver-audit.log
@@ -271,7 +271,6 @@ node_taints:
* `audit_webhook_mode`: batch
* `audit_webhook_batch_max_size`: 100
* `audit_webhook_batch_max_wait`: 1s
* *kubectl_alias* - Bash alias for kubectl, to make interacting with the Kubernetes cluster easier.
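Putting the auditing knobs together, a hedged group_vars sketch (the values are the defaults quoted above; the alias value is a hypothetical choice):

```yaml
kubernetes_audit: true
audit_log_path: /var/log/audit/kube-apiserver-audit.log
# webhook tuning knobs, defaults quoted above:
# audit_webhook_mode: batch
# audit_webhook_batch_max_size: 100
# audit_webhook_batch_max_wait: 1s
kubectl_alias: k   # hypothetical alias
```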
### Custom flags for Kube Components


@@ -12,7 +12,7 @@
hosts: kube_control_plane[0]
tasks:
- name: Include kubespray-default variables
include_vars: ../roles/kubespray-defaults/defaults/main/main.yml
include_vars: ../roles/kubespray-defaults/defaults/main.yaml
- name: Copy get_cinder_pvs.sh to master
copy:
src: get_cinder_pvs.sh


@@ -2,15 +2,13 @@
namespace: kubernetes_sigs
description: Deploy a production ready Kubernetes cluster
name: kubespray
version: 2.24.2
version: 2.23.3
readme: README.md
authors:
- luksi1
tags:
- infrastructure
repository: https://github.com/kubernetes-sigs/kubespray
dependencies:
ansible.utils: '>=2.5.0'
build_ignore:
- .github
- '*.tar.gz'


@@ -57,7 +57,7 @@ loadbalancer_apiserver_healthcheck_port: 8081
# https_proxy: ""
# https_proxy_cert_file: ""
## Refer to roles/kubespray-defaults/defaults/main/main.yml before modifying no_proxy
## Refer to roles/kubespray-defaults/defaults/main.yml before modifying no_proxy
# no_proxy: ""
## Some problems may occur when downloading files over https proxy due to ansible bug


@@ -1,8 +1,5 @@
# Registries defined within cri-o.
# crio_insecure_registries:
# - 10.0.0.2:5000
# Auth config for the registries
# crio_registry_auth:
# - registry: 10.0.0.2:5000
# username: user


@@ -24,12 +24,3 @@
### ETCD: disable peer client cert authentication.
# This affects ETCD_PEER_CLIENT_CERT_AUTH variable
# etcd_peer_client_auth: true
## Enable distributed tracing
## To enable this experimental feature, set the etcd_experimental_enable_distributed_tracing: true, along with the
## etcd_experimental_distributed_tracing_sample_rate to choose how many samples to collect per million spans,
## the default sampling rate is 0 https://etcd.io/docs/v3.5/op-guide/monitoring/#distributed-tracing
# etcd_experimental_enable_distributed_tracing: false
# etcd_experimental_distributed_tracing_sample_rate: 100
# etcd_experimental_distributed_tracing_address: "localhost:4317"
# etcd_experimental_distributed_tracing_service_name: etcd
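Uncommented, the tracing block sketched above would read as follows (endpoint and rate are the sample values from the comments, not recommendations):

```yaml
etcd_experimental_enable_distributed_tracing: true
etcd_experimental_distributed_tracing_sample_rate: 100      # samples per million spans
etcd_experimental_distributed_tracing_address: "localhost:4317"
etcd_experimental_distributed_tracing_service_name: etcd
```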


@@ -99,11 +99,14 @@ rbd_provisioner_enabled: false
# Nginx ingress controller deployment
ingress_nginx_enabled: false
# ingress_nginx_host_network: false
# ingress_nginx_service_type: LoadBalancer
ingress_publish_status_address: ""
# ingress_nginx_nodeselector:
# kubernetes.io/os: "linux"
# ingress_nginx_tolerations:
# - key: "node-role.kubernetes.io/master"
# operator: "Equal"
# value: ""
# effect: "NoSchedule"
# - key: "node-role.kubernetes.io/control-plane"
# operator: "Equal"
# value: ""
@@ -137,6 +140,8 @@ ingress_alb_enabled: false
cert_manager_enabled: false
# cert_manager_namespace: "cert-manager"
# cert_manager_tolerations:
# - key: node-role.kubernetes.io/master
# effect: NoSchedule
# - key: node-role.kubernetes.io/control-plane
# effect: NoSchedule
# cert_manager_affinity:
@@ -171,27 +176,33 @@ cert_manager_enabled: false
# MetalLB deployment
metallb_enabled: false
metallb_speaker_enabled: "{{ metallb_enabled }}"
# metallb_speaker_nodeselector:
# kubernetes.io/os: "linux"
# metallb_controller_nodeselector:
# kubernetes.io/os: "linux"
# metallb_speaker_tolerations:
# - key: "node-role.kubernetes.io/master"
# operator: "Equal"
# value: ""
# effect: "NoSchedule"
# - key: "node-role.kubernetes.io/control-plane"
# operator: "Equal"
# value: ""
# effect: "NoSchedule"
# metallb_controller_tolerations:
# - key: "node-role.kubernetes.io/master"
# operator: "Equal"
# value: ""
# effect: "NoSchedule"
# - key: "node-role.kubernetes.io/control-plane"
# operator: "Equal"
# value: ""
# effect: "NoSchedule"
# metallb_version: v0.13.9
# metallb_protocol: "layer2"
# metallb_port: "7472"
# metallb_memberlist_port: "7946"
# metallb_config:
# speaker:
# nodeselector:
# kubernetes.io/os: "linux"
# tolerations:
# - key: "node-role.kubernetes.io/control-plane"
# operator: "Equal"
# value: ""
# effect: "NoSchedule"
# controller:
# nodeselector:
# kubernetes.io/os: "linux"
# tolerations:
# - key: "node-role.kubernetes.io/control-plane"
# operator: "Equal"
# value: ""
# effect: "NoSchedule"
# address_pools:
# primary:
# ip_range:
@@ -233,7 +244,7 @@ metallb_speaker_enabled: "{{ metallb_enabled }}"
# - pool2
argocd_enabled: false
# argocd_version: v2.8.4
# argocd_version: v2.8.0
# argocd_namespace: argocd
# Default password:
# - https://argo-cd.readthedocs.io/en/stable/getting_started/#4-login-using-the-cli
@@ -248,14 +259,3 @@ argocd_enabled: false
# The plugin manager for kubectl
krew_enabled: false
krew_root_dir: "/usr/local/krew"
# Kube VIP
kube_vip_enabled: false
# kube_vip_arp_enabled: true
# kube_vip_controlplane_enabled: true
# kube_vip_address: 192.168.56.120
# loadbalancer_apiserver:
# address: "{{ kube_vip_address }}"
# port: 6443
# kube_vip_interface: eth0
# kube_vip_services_enabled: false
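For reference, an uncommented sketch of the kube-vip block above, using the sample address and interface from the comments:

```yaml
kube_vip_enabled: true
kube_vip_arp_enabled: true
kube_vip_controlplane_enabled: true
kube_vip_address: 192.168.56.120
kube_vip_interface: eth0
loadbalancer_apiserver:
  address: "{{ kube_vip_address }}"
  port: 6443
```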


@@ -17,7 +17,7 @@ kube_token_dir: "{{ kube_config_dir }}/tokens"
kube_api_anonymous_auth: true
## Change this to use another Kubernetes version, e.g. a current beta release
kube_version: v1.28.10
kube_version: v1.27.7
# Where the binaries will be downloaded.
# Note: ensure that you've enough disk space (about 1G)
@@ -243,6 +243,15 @@ kubernetes_audit: false
# kubelet_config_dir:
default_kubelet_config_dir: "{{ kube_config_dir }}/dynamic_kubelet_dir"
# pod security policy (RBAC must be enabled either by having 'RBAC' in authorization_modes or kubeadm enabled)
podsecuritypolicy_enabled: false
# Custom PodSecurityPolicySpec for restricted policy
# podsecuritypolicy_restricted_spec: {}
# Custom PodSecurityPolicySpec for privileged policy
# podsecuritypolicy_privileged_spec: {}
# Make a copy of kubeconfig on the host that runs Ansible in {{ inventory_dir }}/artifacts
# kubeconfig_localhost: false
# Use ansible_host as external api ip when copying over kubeconfig.


@@ -1,51 +0,0 @@
---
# custom_cni network plugin configuration
# There are two deployment options to choose from, select one
## OPTION 1 - Static manifest files
## With this option, referred manifest file will be deployed
## as if the `kubectl apply -f` method was used with it.
#
## List of Kubernetes resource manifest files
## See tests/files/custom_cni/README.md for example
# custom_cni_manifests: []
## OPTION 1 EXAMPLE - Cilium static manifests in Kubespray tree
# custom_cni_manifests:
# - "{{ playbook_dir }}/../tests/files/custom_cni/cilium.yaml"
## OPTION 2 - Helm chart application
## This allows the CNI backend to be deployed to Kubespray cluster
## as common Helm application.
#
## Helm release name - how the local instance of deployed chart will be named
# custom_cni_chart_release_name: ""
#
## Kubernetes namespace to deploy into
# custom_cni_chart_namespace: "kube-system"
#
## Helm repository name - how the local record of Helm repository will be named
# custom_cni_chart_repository_name: ""
#
## Helm repository URL
# custom_cni_chart_repository_url: ""
#
## Helm chart reference - path to the chart in the repository
# custom_cni_chart_ref: ""
#
## Helm chart version
# custom_cni_chart_version: ""
#
## Custom Helm values to be used for deployment
# custom_cni_chart_values: {}
## OPTION 2 EXAMPLE - Cilium deployed from official public Helm chart
# custom_cni_chart_namespace: kube-system
# custom_cni_chart_release_name: cilium
# custom_cni_chart_repository_name: cilium
# custom_cni_chart_repository_url: https://helm.cilium.io
# custom_cni_chart_ref: cilium/cilium
# custom_cni_chart_version: 1.14.3
# custom_cni_chart_values:
# cluster:
# name: "cilium-demo"


@@ -1,10 +1,4 @@
# See roles/network_plugin/kube-router/defaults/main.yml
# Kube router version
# Default to v2
# kube_router_version: "v2.0.0"
# Uncomment to use v1 (Deprecated)
# kube_router_version: "v1.6.0"
# See roles/network_plugin/kube-router//defaults/main.yml
# Enables Pod Networking -- Advertises and learns the routes to Pods via iBGP
# kube_router_run_router: true
@@ -25,9 +19,6 @@
# Add LoadBalancer IP of service status as set by the LB provider to the RIB so that it gets advertised to the BGP peers.
# kube_router_advertise_loadbalancer_ip: false
# Enables BGP graceful restarts
# kube_router_bgp_graceful_restart: true
# Adjust manifest of kube-router daemonset template with DSR needed changes
# kube_router_enable_dsr: false


@@ -1,2 +1,2 @@
---
requires_ansible: '>=2.15.5'
requires_ansible: '>=2.14.0'


@@ -30,7 +30,6 @@ RUN apt update -q \
software-properties-common \
unzip \
libvirt-clients \
qemu-utils \
&& curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - \
&& add-apt-repository "deb [arch=$(dpkg --print-architecture)] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" \
&& apt update -q \
@@ -41,11 +40,11 @@ WORKDIR /kubespray
RUN --mount=type=bind,target=./requirements.txt,src=./requirements.txt \
--mount=type=bind,target=./tests/requirements.txt,src=./tests/requirements.txt \
--mount=type=bind,target=./roles/kubespray-defaults/defaults/main/main.yml,src=./roles/kubespray-defaults/defaults/main/main.yml \
--mount=type=bind,target=./roles/kubespray-defaults/defaults/main.yaml,src=./roles/kubespray-defaults/defaults/main.yaml \
update-alternatives --install /usr/bin/python python /usr/bin/python3 1 \
&& pip install --no-compile --no-cache-dir pip -U \
&& pip install --no-compile --no-cache-dir -r tests/requirements.txt \
&& KUBE_VERSION=$(sed -n 's/^kube_version: //p' roles/kubespray-defaults/defaults/main/main.yml) \
&& KUBE_VERSION=$(sed -n 's/^kube_version: //p' roles/kubespray-defaults/defaults/main.yaml) \
&& curl -L https://dl.k8s.io/release/$KUBE_VERSION/bin/linux/$(dpkg --print-architecture)/kubectl -o /usr/local/bin/kubectl \
&& echo $(curl -L https://dl.k8s.io/release/$KUBE_VERSION/bin/linux/$(dpkg --print-architecture)/kubectl.sha256) /usr/local/bin/kubectl | sha256sum --check \
&& chmod a+x /usr/local/bin/kubectl \


@@ -4,8 +4,8 @@
gather_facts: false
become: no
vars:
minimal_ansible_version: 2.15.5 # 2.15 versions before 2.15.5 are known to be buggy for kubespray
maximal_ansible_version: 2.17.0
minimal_ansible_version: 2.14.0
maximal_ansible_version: 2.15.0
ansible_connection: local
tags: always
tasks:


@@ -1,8 +1,20 @@
---
- name: Common tasks for every playbooks
import_playbook: boilerplate.yml
- name: Check ansible version
import_playbook: ansible_version.yml
- name: Ensure compatibility with old groups
import_playbook: legacy_groups.yml
- name: Install bastion ssh config
hosts: bastion[0]
gather_facts: False
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: bastion-ssh-config, tags: ["localhost", "bastion"] }
- name: Gather facts
tags: always
import_playbook: facts.yml
- name: Prepare for etcd install
@@ -17,7 +29,35 @@
- { role: download, tags: download, when: "not skip_downloads" }
- name: Install etcd
import_playbook: install_etcd.yml
hosts: etcd:kube_control_plane
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- role: etcd
tags: etcd
vars:
etcd_cluster_setup: true
etcd_events_cluster_setup: "{{ etcd_events_cluster_enabled }}"
when: etcd_deployment_type != "kubeadm"
- name: Install etcd certs on nodes if required
hosts: k8s_cluster
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- role: etcd
tags: etcd
vars:
etcd_cluster_setup: false
etcd_events_cluster_setup: false
when:
- etcd_deployment_type != "kubeadm"
- kube_network_plugin in ["calico", "flannel", "canal", "cilium"] or cilium_deploy_additionally | default(false) | bool
- kube_network_plugin != "calico" or calico_datastore == "etcd"
- name: Install Kubernetes nodes
hosts: k8s_cluster


@@ -1,29 +0,0 @@
---
- name: Add worker nodes to the etcd play if needed
hosts: kube_node
roles:
- { role: kubespray-defaults }
tasks:
- name: Check if nodes needs etcd client certs (depends on network_plugin)
group_by:
key: "_kubespray_needs_etcd"
when:
- kube_network_plugin in ["flannel", "canal", "cilium"] or
(cilium_deploy_additionally | default(false)) or
(kube_network_plugin == "calico" and calico_datastore == "etcd")
- etcd_deployment_type != "kubeadm"
tags: etcd
- name: Install etcd
hosts: etcd:kube_control_plane:_kubespray_needs_etcd
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- role: etcd
tags: etcd
vars:
etcd_cluster_setup: true
etcd_events_cluster_setup: "{{ etcd_events_cluster_enabled }}"
when: etcd_deployment_type != "kubeadm"


@@ -1,8 +1,5 @@
---
- name: Check ansible version
import_playbook: ansible_version.yml
# These are inventory compatibility tasks to ensure we keep compatibility with old style group names
# This is an inventory compatibility playbook to ensure we keep compatibility with old style group names
- name: Add kube-master nodes to kube_control_plane
hosts: kube-master
@@ -48,11 +45,3 @@
- name: Add nodes to no-floating group
group_by:
key: 'no_floating'
- name: Install bastion ssh config
hosts: bastion[0]
gather_facts: False
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: bastion-ssh-config, tags: ["localhost", "bastion"] }


@@ -1,6 +1,17 @@
---
- name: Common tasks for every playbooks
import_playbook: boilerplate.yml
- name: Check ansible version
import_playbook: ansible_version.yml
- name: Ensure compatibility with old groups
import_playbook: legacy_groups.yml
- name: Install bastion ssh config
hosts: bastion[0]
gather_facts: False
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults}
- { role: bastion-ssh-config, tags: ["localhost", "bastion"]}
- name: Recover etcd
hosts: etcd[0]


@@ -1,6 +1,17 @@
---
- name: Common tasks for every playbooks
import_playbook: boilerplate.yml
- name: Check ansible version
import_playbook: ansible_version.yml
- name: Ensure compatibility with old groups
import_playbook: legacy_groups.yml
- name: Install bastion ssh config
hosts: bastion[0]
gather_facts: False
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: bastion-ssh-config, tags: ["localhost", "bastion"] }
- name: Confirm node removal
hosts: "{{ node | default('etcd:k8s_cluster:calico_rr') }}"


@@ -1,6 +1,17 @@
---
- name: Common tasks for every playbooks
import_playbook: boilerplate.yml
- name: Check ansible version
import_playbook: ansible_version.yml
- name: Ensure compatibility with old groups
import_playbook: legacy_groups.yml
- name: Install bastion ssh config
hosts: bastion[0]
gather_facts: False
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults}
- { role: bastion-ssh-config, tags: ["localhost", "bastion"]}
- name: Gather facts
import_playbook: facts.yml


@@ -1,8 +1,20 @@
---
- name: Common tasks for every playbooks
import_playbook: boilerplate.yml
- name: Check ansible version
import_playbook: ansible_version.yml
- name: Ensure compatibility with old groups
import_playbook: legacy_groups.yml
- name: Install bastion ssh config
hosts: bastion[0]
gather_facts: False
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: bastion-ssh-config, tags: ["localhost", "bastion"] }
- name: Gather facts
tags: always
import_playbook: facts.yml
- name: Generate the etcd certificates beforehand


@@ -1,8 +1,20 @@
---
- name: Common tasks for every playbooks
import_playbook: boilerplate.yml
- name: Check ansible version
import_playbook: ansible_version.yml
- name: Ensure compatibility with old groups
import_playbook: legacy_groups.yml
- name: Install bastion ssh config
hosts: bastion[0]
gather_facts: False
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: bastion-ssh-config, tags: ["localhost", "bastion"] }
- name: Gather facts
tags: always
import_playbook: facts.yml
- name: Download images to ansible host cache via first kube_control_plane node
@@ -36,7 +48,35 @@
- { role: container-engine, tags: "container-engine", when: deploy_container_engine }
- name: Install etcd
import_playbook: install_etcd.yml
hosts: etcd:kube_control_plane
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- role: etcd
tags: etcd
vars:
etcd_cluster_setup: true
etcd_events_cluster_setup: "{{ etcd_events_cluster_enabled }}"
when: etcd_deployment_type != "kubeadm"
- name: Install etcd certs on nodes if required
hosts: k8s_cluster
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- role: etcd
tags: etcd
vars:
etcd_cluster_setup: false
etcd_events_cluster_setup: false
when:
- etcd_deployment_type != "kubeadm"
- kube_network_plugin in ["calico", "flannel", "canal", "cilium"] or cilium_deploy_additionally | default(false) | bool
- kube_network_plugin != "calico" or calico_datastore == "etcd"
- name: Handle upgrades to master components first to maintain backwards compat.
gather_facts: False


@@ -1,9 +1,9 @@
ansible==8.5.0
cryptography==41.0.4
ansible==7.6.0
cryptography==41.0.1
jinja2==3.1.2
jmespath==1.0.1
MarkupSafe==2.1.3
netaddr==0.9.0
netaddr==0.8.0
pbr==5.11.1
ruamel.yaml==0.17.35
ruamel.yaml.clib==0.2.8
ruamel.yaml==0.17.31
ruamel.yaml.clib==0.2.7


@@ -0,0 +1,43 @@
import os
from pathlib import Path
import testinfra.utils.ansible_runner
import yaml
from ansible.cli.playbook import PlaybookCLI
from ansible.playbook import Playbook
testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner(
os.environ["MOLECULE_INVENTORY_FILE"]
).get_hosts("all")
def read_playbook(playbook):
cli_args = [os.path.realpath(playbook), testinfra_hosts]
cli = PlaybookCLI(cli_args)
cli.parse()
loader, inventory, variable_manager = cli._play_prereqs()
pb = Playbook.load(cli.args[0], variable_manager, loader)
for play in pb.get_plays():
yield variable_manager.get_vars(play)
def get_playbook():
playbooks_path = Path(__file__).parent.parent
with open(os.path.join(playbooks_path, "molecule.yml"), "r") as yamlfile:
data = yaml.load(yamlfile, Loader=yaml.FullLoader)
if "playbooks" in data["provisioner"].keys():
if "converge" in data["provisioner"]["playbooks"].keys():
return data["provisioner"]["playbooks"]["converge"]
else:
return os.path.join(playbooks_path, "converge.yml")
def test_user(host):
for vars in read_playbook(get_playbook()):
assert host.user(vars["user"]["name"]).exists
if "group" in vars["user"].keys():
assert host.group(vars["user"]["group"]).exists
else:
assert host.group(vars["user"]["name"]).exists


@@ -0,0 +1,40 @@
import os
from pathlib import Path
import testinfra.utils.ansible_runner
import yaml
from ansible.cli.playbook import PlaybookCLI
from ansible.playbook import Playbook
testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner(
os.environ["MOLECULE_INVENTORY_FILE"]
).get_hosts("all")
def read_playbook(playbook):
cli_args = [os.path.realpath(playbook), testinfra_hosts]
cli = PlaybookCLI(cli_args)
cli.parse()
loader, inventory, variable_manager = cli._play_prereqs()
pb = Playbook.load(cli.args[0], variable_manager, loader)
for play in pb.get_plays():
yield variable_manager.get_vars(play)
def get_playbook():
playbooks_path = Path(__file__).parent.parent
with open(os.path.join(playbooks_path, "molecule.yml"), "r") as yamlfile:
data = yaml.load(yamlfile, Loader=yaml.FullLoader)
if "playbooks" in data["provisioner"].keys():
if "converge" in data["provisioner"]["playbooks"].keys():
return data["provisioner"]["playbooks"]["converge"]
else:
return os.path.join(playbooks_path, "converge.yml")
def test_ssh_config(host):
for vars in read_playbook(get_playbook()):
assert host.file(vars["ssh_bastion_confing__name"]).exists
assert host.file(vars["ssh_bastion_confing__name"]).is_file


@@ -80,33 +80,13 @@
- { option: "name", value: "CentOS-{{ ansible_distribution_major_version }} - Extras" }
- { option: "enabled", value: "1" }
- { option: "gpgcheck", value: "0" }
- { option: "baseurl", value: "http://vault.centos.org/{{ 'altarch' if (ansible_distribution_major_version | int) <= 7 and ansible_architecture == 'aarch64' else 'centos' }}/{{ ansible_distribution_major_version }}/extras/$basearch/{% if ansible_distribution_major_version | int > 7 %}os/{% endif %}" }
- { option: "baseurl", value: "http://mirror.centos.org/{{ 'altarch' if (ansible_distribution_major_version | int) <= 7 and ansible_architecture == 'aarch64' else 'centos' }}/{{ ansible_distribution_major_version }}/extras/$basearch/{% if ansible_distribution_major_version | int > 7 %}os/{% endif %}" }
when:
- use_oracle_public_repo | default(true)
- '''ID="ol"'' in os_release.stdout_lines'
- (ansible_distribution_version | float) >= 7.6
- (ansible_distribution_version | float) < 9
# CentOS 7 EOL at July 1, 2024.
- name: Disable CentOS 7 mirrorlist in CentOS-Base.repo
replace:
path: /etc/yum.repos.d/CentOS-Base.repo
regexp: '^mirrorlist='
replace: '#mirrorlist='
become: true
when:
- ansible_distribution_major_version == "7"
# CentOS 7 EOL at July 1, 2024.
- name: Update CentOS 7 baseurl in CentOS-Base.repo
replace:
path: /etc/yum.repos.d/CentOS-Base.repo
regexp: '^#baseurl=http:\/\/mirror.centos.org'
replace: 'baseurl=http:\/\/vault.centos.org'
become: true
when:
- ansible_distribution_major_version == "7"
# CentOS ships with python installed
- name: Check presence of fastestmirror.conf


@@ -18,7 +18,6 @@ containerd_runc_runtime:
base_runtime_spec: cri-base.json
options:
systemdCgroup: "{{ containerd_use_systemd_cgroup | ternary('true', 'false') }}"
binaryName: "{{ bin_dir }}/runc"
containerd_additional_runtimes: []
# Example for Kata Containers as additional runtime:
@@ -48,6 +47,9 @@ containerd_metrics_address: ""
containerd_metrics_grpc_histogram: false
containerd_registries:
"docker.io": "https://registry-1.docker.io"
containerd_registries_mirrors:
- prefix: docker.io
mirrors:
@@ -102,6 +104,3 @@ containerd_supported_distributions:
- "UnionTech"
- "UniontechOS"
- "openEuler"
# Enable container device interface
enable_cdi: false
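To make the new `containerd_registries_mirrors` structure concrete, a hedged example (the mirror host is hypothetical); the `skip_verify` and `override_path` keys map onto the hosts.toml template shown later in this compare:

```yaml
containerd_registries_mirrors:
  - prefix: docker.io
    mirrors:
      - host: https://mirror.example.com   # hypothetical mirror endpoint
        capabilities: ["pull", "resolve"]
        skip_verify: false
```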


@@ -1,4 +1,10 @@
---
- name: Restart containerd
command: /bin/true
notify:
- Containerd | restart containerd
- Containerd | wait for containerd
- name: Containerd | restart containerd
systemd:
name: containerd
@@ -6,7 +12,6 @@
enabled: yes
daemon-reload: yes
masked: no
listen: Restart containerd
- name: Containerd | wait for containerd
command: "{{ containerd_bin_dir }}/ctr images ls -q"
@@ -14,4 +19,3 @@
retries: 8
delay: 4
until: containerd_ready.rc == 0
listen: Restart containerd


@@ -61,9 +61,6 @@
src: containerd.service.j2
dest: /etc/systemd/system/containerd.service
mode: 0644
validate: "sh -c '[ -f /usr/bin/systemd/system/factory-reset.target ] || exit 0 && systemd-analyze verify %s:containerd.service'"
# FIXME: check that systemd version >= 250 (factory-reset.target was introduced in that release)
# Remove once we drop support for systemd < 250
notify: Restart containerd
- name: Containerd | Ensure containerd directories exist


@@ -20,10 +20,6 @@ oom_score = {{ containerd_oom_score }}
max_container_log_line_size = {{ containerd_max_container_log_line_size }}
enable_unprivileged_ports = {{ containerd_enable_unprivileged_ports | default(false) | lower }}
enable_unprivileged_icmp = {{ containerd_enable_unprivileged_icmp | default(false) | lower }}
{% if enable_cdi %}
enable_cdi = true
cdi_spec_dirs = ["/etc/cdi", "/var/run/cdi"]
{% endif %}
[plugins."io.containerd.grpc.v1.cri".containerd]
default_runtime_name = "{{ containerd_default_runtime | default('runc') }}"
snapshotter = "{{ containerd_snapshotter | default('overlayfs') }}"
@@ -39,11 +35,7 @@ oom_score = {{ containerd_oom_score }}
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.{{ runtime.name }}.options]
{% for key, value in runtime.options.items() %}
{% if value | string != "true" and value | string != "false" %}
{{ key }} = "{{ value }}"
{% else %}
{{ key }} = {{ value }}
{% endif %}
{% endfor %}
{% endfor %}
{% if kata_containers_enabled %}


@@ -2,6 +2,7 @@ server = "https://{{ item.prefix }}"
{% for mirror in item.mirrors %}
[host."{{ mirror.host }}"]
capabilities = ["{{ ([ mirror.capabilities ] | flatten ) | join('","') }}"]
{% if mirror.skip_verify is defined %}
skip_verify = {{ mirror.skip_verify | default('false') | string | lower }}
override_path = {{ mirror.override_path | default('false') | string | lower }}
{% endif %}
{% endfor %}


@@ -1,31 +1,35 @@
---
- name: Restart and enable cri-dockerd
command: /bin/true
notify:
- Cri-dockerd | reload systemd
- Cri-dockerd | restart docker.service
- Cri-dockerd | reload cri-dockerd.socket
- Cri-dockerd | reload cri-dockerd.service
- Cri-dockerd | enable cri-dockerd service
- name: Cri-dockerd | reload systemd
systemd:
name: cri-dockerd
daemon_reload: true
masked: no
listen: Restart and enable cri-dockerd
- name: Cri-dockerd | restart docker.service
service:
name: docker.service
state: restarted
listen: Restart and enable cri-dockerd
- name: Cri-dockerd | reload cri-dockerd.socket
service:
name: cri-dockerd.socket
state: restarted
listen: Restart and enable cri-dockerd
- name: Cri-dockerd | reload cri-dockerd.service
service:
name: cri-dockerd.service
state: restarted
listen: Restart and enable cri-dockerd
- name: Cri-dockerd | enable cri-dockerd service
service:
name: cri-dockerd.service
enabled: yes
listen: Restart and enable cri-dockerd


@@ -18,9 +18,6 @@
src: "{{ item }}.j2"
dest: "/etc/systemd/system/{{ item }}"
mode: 0644
validate: "sh -c '[ -f /usr/bin/systemd/system/factory-reset.target ] || exit 0 && systemd-analyze verify %s:{{ item }}'"
# FIXME: check that systemd version >= 250 (factory-reset.target was introduced in that release)
# Remove once we drop support for systemd < 250
with_items:
- cri-dockerd.service
- cri-dockerd.socket


@@ -69,6 +69,10 @@ youki_runtime:
type: oci
root: /run/youki
# TODO(cristicalin): remove this after 2.21
crio_download_base: "download.opensuse.org/repositories/devel:kubic:libcontainers:stable"
crio_download_crio: "http://{{ crio_download_base }}:/cri-o:/"
# Reserve 16M uids and gids for user namespaces (256 pods * 65536 uids/gids)
# at the end of the uid/gid space
crio_remap_enable: false
@@ -93,6 +97,3 @@ crio_man_files:
8:
- crio
- crio-status
# If set to true, it will enable the CRIU support in cri-o
crio_criu_support_enabled: false


@@ -1,12 +1,16 @@
---
- name: Restart crio
command: /bin/true
notify:
- CRI-O | reload systemd
- CRI-O | reload crio
- name: CRI-O | reload systemd
systemd:
daemon_reload: true
listen: Restart crio
- name: CRI-O | reload crio
service:
name: crio
state: restarted
enabled: yes
listen: Restart crio


@@ -0,0 +1,120 @@
---
# TODO(cristicalin): drop this file after 2.21
- name: CRI-O kubic repo name for debian os family
set_fact:
crio_kubic_debian_repo_name: "{{ ((ansible_distribution == 'Ubuntu') | ternary('x', '')) ~ ansible_distribution ~ '_' ~ ansible_distribution_version }}"
when: ansible_os_family == "Debian"
- name: Remove legacy CRI-O kubic apt repo key
apt_key:
url: "https://{{ crio_download_base }}/{{ crio_kubic_debian_repo_name }}/Release.key"
state: absent
environment: "{{ proxy_env }}"
when: crio_kubic_debian_repo_name is defined
- name: Remove legacy CRI-O kubic apt repo
apt_repository:
repo: "deb http://{{ crio_download_base }}/{{ crio_kubic_debian_repo_name }}/ /"
state: absent
filename: devel-kubic-libcontainers-stable
when: crio_kubic_debian_repo_name is defined
- name: Remove legacy CRI-O kubic cri-o apt repo
apt_repository:
repo: "deb {{ crio_download_crio }}{{ crio_version }}/{{ crio_kubic_debian_repo_name }}/ /"
state: absent
filename: devel-kubic-libcontainers-stable-cri-o
when: crio_kubic_debian_repo_name is defined
- name: Remove legacy CRI-O kubic yum repo
yum_repository:
name: devel_kubic_libcontainers_stable
description: Stable Releases of Upstream github.com/containers packages (CentOS_$releasever)
baseurl: http://{{ crio_download_base }}/CentOS_{{ ansible_distribution_major_version }}/
state: absent
when:
- ansible_os_family == "RedHat"
- ansible_distribution not in ["Amazon", "Fedora"]
- name: Remove legacy CRI-O kubic yum repo
yum_repository:
name: "devel_kubic_libcontainers_stable_cri-o_{{ crio_version }}"
description: "CRI-O {{ crio_version }} (CentOS_$releasever)"
baseurl: "{{ crio_download_crio }}{{ crio_version }}/CentOS_{{ ansible_distribution_major_version }}/"
state: absent
when:
- ansible_os_family == "RedHat"
- ansible_distribution not in ["Amazon", "Fedora"]
- name: Remove legacy CRI-O kubic yum repo
yum_repository:
name: devel_kubic_libcontainers_stable
description: Stable Releases of Upstream github.com/containers packages
baseurl: http://{{ crio_download_base }}/Fedora_{{ ansible_distribution_major_version }}/
state: absent
when:
- ansible_distribution in ["Fedora"]
- not is_ostree
- name: Remove legacy CRI-O kubic yum repo
yum_repository:
name: "devel_kubic_libcontainers_stable_cri-o_{{ crio_version }}"
description: "CRI-O {{ crio_version }}"
baseurl: "{{ crio_download_crio }}{{ crio_version }}/Fedora_{{ ansible_distribution_major_version }}/"
state: absent
when:
- ansible_distribution in ["Fedora"]
- not is_ostree
- name: Remove legacy CRI-O kubic yum repo
yum_repository:
name: devel_kubic_libcontainers_stable
description: Stable Releases of Upstream github.com/containers packages
baseurl: http://{{ crio_download_base }}/CentOS_7/
state: absent
when: ansible_distribution in ["Amazon"]
- name: Remove legacy CRI-O kubic yum repo
yum_repository:
name: "devel_kubic_libcontainers_stable_cri-o_{{ crio_version }}"
description: "CRI-O {{ crio_version }}"
baseurl: "{{ crio_download_crio }}{{ crio_version }}/CentOS_7/"
state: absent
when: ansible_distribution in ["Amazon"]
- name: Disable modular repos for CRI-O
community.general.ini_file:
path: "/etc/yum.repos.d/{{ item.repo }}.repo"
section: "{{ item.section }}"
option: enabled
value: 0
mode: 0644
become: true
when: is_ostree
loop:
- repo: "fedora-updates-modular"
section: "updates-modular"
- repo: "fedora-modular"
section: "fedora-modular"
# Disable any older module version if we enabled them before
- name: Disable CRI-O ex module
command: "rpm-ostree ex module disable cri-o:{{ item }}"
become: true
when:
- is_ostree
- ostree_version is defined and ostree_version.stdout is version('2021.9', '>=')
with_items:
- 1.22
- 1.23
- 1.24
- name: Cri-o | remove installed packages
package:
name: "{{ item }}"
state: absent
when: not is_ostree
with_items:
- cri-o
- cri-o-runc
- oci-systemd-hook


@@ -27,6 +27,9 @@
import_tasks: "setup-amazon.yaml"
when: ansible_distribution in ["Amazon"]
- name: Cri-o | clean up legacy repos
import_tasks: "cleanup.yaml"
- name: Cri-o | build a list of crio runtimes with Katacontainers runtimes
set_fact:
crio_runtimes: "{{ crio_runtimes + kata_runtimes }}"


@@ -17,7 +17,7 @@
- name: CRI-O | Remove cri-o apt repo
apt_repository:
repo: "deb {{ crio_download_crio }}{{ crio_version }}/{{ crio_kubic_debian_repo_name }}/ /"
state: absent
state: present
filename: devel-kubic-libcontainers-stable-cri-o
when: crio_kubic_debian_repo_name is defined
tags:


@@ -273,11 +273,6 @@ pinns_path = ""
pinns_path = "{{ bin_dir }}/pinns"
{% endif %}
{% if crio_criu_support_enabled %}
# Enable CRIU integration, requires that the criu binary is available in $PATH.
enable_criu_support = true
{% endif %}
# The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
# The runtime to use is picked based on the runtime_handler provided by the CRI.
# If no runtime_handler is provided, the runtime will be picked based on the level
@@ -382,7 +377,7 @@ enable_metrics = {{ crio_enable_metrics | bool | lower }}
# The port on which the metrics server will listen.
metrics_port = {{ crio_metrics_port }}
{% if nri_enabled and crio_version is version('v1.26.0', operator='>=') %}
{% if nri_enabled and crio_version >= v1.26.0 %}
[crio.nri]
enable_nri=true


@@ -5,9 +5,6 @@ docker_cli_version: "{{ docker_version }}"
docker_package_info:
pkgs:
# Path where to store repo key
# docker_repo_key_keyring: /etc/apt/trusted.gpg.d/docker.gpg
docker_repo_key_info:
repo_keys:


@@ -1,25 +1,28 @@
---
- name: Restart docker
command: /bin/true
notify:
- Docker | reload systemd
- Docker | reload docker.socket
- Docker | reload docker
- Docker | wait for docker
- name: Docker | reload systemd
systemd:
name: docker
daemon_reload: true
masked: no
listen: Restart docker
- name: Docker | reload docker.socket
service:
name: docker.socket
state: restarted
when: ansible_os_family in ['Flatcar', 'Flatcar Container Linux by Kinvolk'] or is_fedora_coreos
listen: Restart docker
- name: Docker | reload docker
service:
name: docker
state: restarted
listen: Restart docker
- name: Docker | wait for docker
command: "{{ docker_bin_dir }}/docker images"
@@ -27,4 +30,3 @@
retries: 20
delay: 1
until: docker_ready.rc == 0
listen: Restart docker


@@ -57,7 +57,6 @@
apt_key:
id: "{{ item }}"
url: "{{ docker_repo_key_info.url }}"
keyring: "{{ docker_repo_key_keyring|default(omit) }}"
state: present
register: keyserver_task_result
until: keyserver_task_result is succeeded
@@ -67,17 +66,6 @@
environment: "{{ proxy_env }}"
when: ansible_pkg_mgr == 'apt'
# ref to https://github.com/kubernetes-sigs/kubespray/issues/11086
- name: Remove the archived debian apt repository
lineinfile:
path: /etc/apt/sources.list
regexp: 'buster-backports'
state: absent
backup: yes
when:
- ansible_os_family == 'Debian'
- ansible_distribution_release == "buster"
- name: Ensure docker-ce repository is enabled
apt_repository:
repo: "{{ item }}"


@@ -7,4 +7,3 @@ kata_containers_qemu_default_memory: "{{ ansible_memtotal_mb }}"
kata_containers_qemu_debug: 'false'
kata_containers_qemu_sandbox_cgroup_only: 'true'
kata_containers_qemu_enable_mem_prealloc: 'false'
kata_containers_virtio_fs_cache: 'always'


@@ -1,12 +1,11 @@
# Copyright (c) 2017-2019 Intel Corporation
# Copyright (c) 2021 Adobe Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
# XXX: WARNING: this file is auto-generated.
# XXX:
# XXX: Source file: "config/configuration-qemu.toml.in"
# XXX: Source file: "cli/config/configuration-qemu.toml.in"
# XXX: Project:
# XXX: Name: Kata Containers
# XXX: Type: kata
@@ -19,46 +18,20 @@ kernel = "/opt/kata/share/kata-containers/vmlinux.container"
kernel = "/opt/kata/share/kata-containers/vmlinuz.container"
{% endif %}
image = "/opt/kata/share/kata-containers/kata-containers.img"
# initrd = "/opt/kata/share/kata-containers/kata-containers-initrd.img"
machine_type = "q35"
# rootfs filesystem type:
# - ext4 (default)
# - xfs
# - erofs
rootfs_type="ext4"
# Enable confidential guest support.
# Toggling that setting may trigger different hardware features, ranging
# from memory encryption to both memory and CPU-state encryption and integrity.
# The Kata Containers runtime dynamically detects the available feature set and
# aims at enabling the largest possible one, returning an error if none is
# available, or none is supported by the hypervisor.
#
# Known limitations:
# * Does not work by design:
# - CPU Hotplug
# - Memory Hotplug
# - NVDIMM devices
#
# aims at enabling the largest possible one.
# Default false
# confidential_guest = true
# Choose AMD SEV-SNP confidential guests
# In case of using confidential guests on AMD hardware that supports both SEV
# and SEV-SNP, the following enables SEV-SNP guests. SEV guests are default.
# Default false
# sev_snp_guest = true
# Enable running QEMU VMM as a non-root user.
# By default QEMU VMM run as root. When this is set to true, QEMU VMM process runs as
# a non-root random user. See documentation for the limitations of this mode.
# rootless = true
# List of valid annotation names for the hypervisor
# Each member of the list is a regular expression, which is the base name
# of the annotation, e.g. "path" for io.katacontainers.config.hypervisor.path"
enable_annotations = ["enable_iommu"]
enable_annotations = []
# List of valid annotations values for the hypervisor
# Each member of the list is a path pattern as described by glob(3).
@@ -82,25 +55,11 @@ kernel_params = ""
# If you want that qemu uses the default firmware leave this option empty
firmware = ""
# Path to the firmware volume.
# firmware TDVF or OVMF can be split into FIRMWARE_VARS.fd (UEFI variables
# as configuration) and FIRMWARE_CODE.fd (UEFI program image). UEFI variables
# can be customized per each user while UEFI code is kept same.
firmware_volume = ""
# Machine accelerators
# comma-separated list of machine accelerators to pass to the hypervisor.
# For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"`
machine_accelerators=""
# Qemu seccomp sandbox feature
# comma-separated list of seccomp sandbox features to control the syscall access.
# For example, `seccompsandbox= "on,obsolete=deny,spawn=deny,resourcecontrol=deny"`
# Note: "elevateprivileges=deny" doesn't work with daemonize option, so it's removed from the seccomp sandbox
# Another note: enabling this feature may reduce performance, you may enable
# /proc/sys/net/core/bpf_jit_enable to reduce the impact. see https://man7.org/linux/man-pages/man8/bpfc.8.html
#seccompsandbox="on,obsolete=deny,spawn=deny,resourcecontrol=deny"
# CPU features
# comma-separated list of cpu features to pass to the cpu
# For example, `cpu_features = "pmu=off,vmx=off"
@@ -151,12 +110,6 @@ default_memory = {{ kata_containers_qemu_default_memory }}
# This will determine the times that memory will be hotadded to sandbox/VM.
#memory_slots = 10
# Default maximum memory in MiB per SB / VM
# unspecified or == 0 --> will be set to the actual amount of physical RAM
# > 0 <= amount of physical RAM --> will be set to the specified number
# > amount of physical RAM --> will be set to the actual amount of physical RAM
default_maxmemory = 0
# The size in MiB will be added to the max memory of the hypervisor.
# It is the memory address space for the NVDIMM device.
# If set block storage driver (block_device_driver) to "nvdimm",
@@ -175,13 +128,12 @@ default_maxmemory = 0
# root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons.
# This flag prevents the block device from being passed to the hypervisor,
# virtio-fs is used instead to pass the rootfs.
# 9pfs is used instead to pass the rootfs.
disable_block_device_use = false
# Shared file system type:
# - virtio-fs (default)
# - virtio-9p
# - virtio-fs-nydus
{% if kata_containers_version is version('2.2.0', '>=') %}
shared_fs = "virtio-fs"
{% else %}
@@ -189,39 +141,27 @@ shared_fs = "virtio-9p"
{% endif %}
# Path to vhost-user-fs daemon.
{% if kata_containers_version is version('2.5.0', '>=') %}
virtio_fs_daemon = "/opt/kata/libexec/virtiofsd"
{% else %}
virtio_fs_daemon = "/opt/kata/libexec/kata-qemu/virtiofsd"
{% endif %}
# List of valid annotations values for the virtiofs daemon
# The default if not set is empty (all annotations rejected.)
# Your distribution recommends: ["/opt/kata/libexec/virtiofsd"]
valid_virtio_fs_daemon_paths = [
"/opt/kata/libexec/virtiofsd",
"/opt/kata/libexec/kata-qemu/virtiofsd",
]
# Your distribution recommends: ["/opt/kata/libexec/kata-qemu/virtiofsd"]
valid_virtio_fs_daemon_paths = ["/opt/kata/libexec/kata-qemu/virtiofsd"]
# Default size of DAX cache in MiB
virtio_fs_cache_size = 0
# Default size of virtqueues
virtio_fs_queue_size = 1024
# Extra args for virtiofsd daemon
#
# Format example:
# ["--arg1=xxx", "--arg2=yyy"]
# Examples:
# Set virtiofsd log level to debug : ["--log-level=debug"]
# ["-o", "arg1=xxx,arg2", "-o", "hello world", "--arg3=yyy"]
#
# see `virtiofsd -h` for possible options.
virtio_fs_extra_args = ["--thread-pool-size=1", "--announce-submounts"]
virtio_fs_extra_args = ["--thread-pool-size=1"]
# Cache mode:
#
# - never
# - none
# Metadata, data, and pathname lookup are not cached in guest. They are
# always fetched from host and any changes are immediately pushed to host.
#
@@ -232,27 +172,13 @@ virtio_fs_extra_args = ["--thread-pool-size=1", "--announce-submounts"]
#
# - always
# Metadata, data, and pathname lookup are cached in guest and never expire.
virtio_fs_cache = "{{ kata_containers_virtio_fs_cache }}"
virtio_fs_cache = "always"
# Block storage driver to be used for the hypervisor in case the container
# rootfs is backed by a block device. This is virtio-scsi, virtio-blk
# or nvdimm.
block_device_driver = "virtio-scsi"
# aio is the I/O mechanism used by qemu
# Options:
#
# - threads
# Pthread based disk I/O.
#
# - native
# Native Linux I/O.
#
# - io_uring
# Linux io_uring API. This provides the fastest I/O operations on Linux, requires kernel>5.1 and
# qemu >=5.0.
block_device_aio = "io_uring"
# Specifies cache-related options will be set to block devices or not.
# Default false
#block_device_cache_set = true
@@ -316,11 +242,6 @@ vhost_user_store_path = "/var/run/kata-containers/vhost-user"
# Your distribution recommends: ["/var/run/kata-containers/vhost-user"]
valid_vhost_user_store_paths = ["/var/run/kata-containers/vhost-user"]
# The timeout for reconnecting on non-server spdk sockets when the remote end goes away.
# qemu will delay this many seconds and then attempt to reconnect.
# Zero disables reconnecting, and the default is zero.
vhost_user_reconnect_timeout_sec = 0
# Enable file based guest memory support. The default is an empty string which
# will disable this feature. In the case of virtio-fs, this is enabled
# automatically and '/dev/shm' is used as the backing folder.
@@ -332,12 +253,17 @@ vhost_user_reconnect_timeout_sec = 0
# Your distribution recommends: [""]
valid_file_mem_backends = [""]
# Enable swap of vm memory. Default false.
# The behaviour is undefined if mem_prealloc is also set to true
#enable_swap = true
# -pflash can add image file to VM. The arguments of it should be in format
# of ["/path/to/flash0.img", "/path/to/flash1.img"]
pflashes = []
# This option changes the default hypervisor and kernel parameters
# to enable debug output where available. And Debug also enables the hmp socket.
# to enable debug output where available. This extra output is added
# to the proxy logs, but only when proxy debug is also enabled.
#
# Default false
enable_debug = {{ kata_containers_qemu_debug }}
@@ -352,18 +278,21 @@ enable_debug = {{ kata_containers_qemu_debug }}
# used for 9p packet payload.
#msize_9p = 8192
# If true and vsocks are supported, use vsocks to communicate directly
# with the agent and no proxy is started, otherwise use unix
# sockets and start a proxy to communicate with the agent.
# Default false
#use_vsock = true
# If false and nvdimm is supported, use nvdimm device to plug guest image.
# Otherwise virtio-block device is used.
#
# nvdimm is not supported when `confidential_guest = true`.
#
# Default is false
#disable_image_nvdimm = true
# VFIO devices are hotplugged on a bridge by default.
# Enable hotplugging on root bus. This may be required for devices with
# a large PCI bar, as this is a current limitation with hotplugging on
# a bridge.
# a bridge. This value is valid for "pc" machine type.
# Default false
#hotplug_vfio_on_root_bus = true
@@ -400,15 +329,15 @@ valid_entropy_sources = ["/dev/urandom","/dev/random",""]
# the OCI spec passed to the runtime.
#
# You can create a rootfs with hooks by customizing the osbuilder scripts:
# https://github.com/kata-containers/kata-containers/tree/main/tools/osbuilder
# https://github.com/kata-containers/osbuilder
#
# Hooks must be stored in a subdirectory of guest_hook_path according to their
# hook type, i.e. "guest_hook_path/{prestart,poststart,poststop}".
# hook type, i.e. "guest_hook_path/{prestart,postart,poststop}".
# The agent will scan these directories for executable files and add them, in
# lexicographical order, to the lifecycle of the guest container.
# Hooks are executed in the runtime namespace of the guest. See the official documentation:
# https://github.com/opencontainers/runtime-spec/blob/v1.0.1/config.md#posix-platform-hooks
# Warnings will be logged if any error is encountered while scanning for hooks,
# Warnings will be logged if any error is encountered will scanning for hooks,
# but it will not abort container execution.
#guest_hook_path = "/usr/share/oci/hooks"
#
@@ -453,19 +382,6 @@ valid_entropy_sources = ["/dev/urandom","/dev/random",""]
# be default_memory.
#enable_guest_swap = true
# use legacy serial for guest console if available and implemented for architecture. Default false
#use_legacy_serial = true
# disable applying SELinux on the VMM process (default false)
disable_selinux=false
# disable applying SELinux on the container process
# If set to false, the type `container_t` is applied to the container process by default.
# Note: To enable guest SELinux, the guest rootfs must be CentOS that is created and built
# with `SELINUX=yes`.
# (default: true)
disable_guest_selinux=true
[factory]
# VM templating support. Once enabled, new VMs are created from template
# using vm cloning. They will share the same initial kernel, initramfs and
@@ -509,6 +425,31 @@ disable_guest_selinux=true
# Default /var/run/kata-containers/cache.sock
#vm_cache_endpoint = "/var/run/kata-containers/cache.sock"
[proxy.kata]
path = "/opt/kata/libexec/kata-containers/kata-proxy"
# If enabled, proxy messages will be sent to the system log
# (default: disabled)
enable_debug = {{ kata_containers_qemu_debug }}
[shim.kata]
path = "/opt/kata/libexec/kata-containers/kata-shim"
# If enabled, shim messages will be sent to the system log
# (default: disabled)
enable_debug = {{ kata_containers_qemu_debug }}
# If enabled, the shim will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
#
# Note: By default, the shim runs in a separate network namespace. Therefore,
# to allow it to send trace details to the Jaeger agent running on the host,
# it is necessary to set 'disable_new_netns=true' so that it runs in the host
# network namespace.
#
# (default: disabled)
#enable_tracing = true
[agent.kata]
# If enabled, make the agent display debug-level messages.
# (default: disabled)
@@ -516,17 +457,24 @@ enable_debug = {{ kata_containers_qemu_debug }}
# Enable agent tracing.
#
# If enabled, the agent will generate OpenTelemetry trace spans.
# If enabled, the default trace mode is "dynamic" and the
# default trace type is "isolated". The trace mode and type are set
# explicitly with the `trace_type=` and `trace_mode=` options.
#
# Notes:
#
# - If the runtime also has tracing enabled, the agent spans will be
# associated with the appropriate runtime parent span.
# - If enabled, the runtime will wait for the container to shutdown,
# increasing the container shutdown time slightly.
# - Tracing is ONLY enabled when `enable_tracing` is set: explicitly
# setting `trace_mode=` and/or `trace_type=` without setting `enable_tracing`
# will NOT activate agent tracing.
#
# - See https://github.com/kata-containers/agent/blob/master/TRACING.md for
# full details.
#
# (default: disabled)
#enable_tracing = true
#
#trace_mode = "dynamic"
#trace_type = "isolated"
# Comma separated list of kernel modules and their parameters.
# These modules will be loaded in the guest kernel using modprobe(8).
@@ -552,6 +500,21 @@ kernel_modules=[]
# (default: 30)
#dial_timeout = 30
[netmon]
# If enabled, the network monitoring process gets started when the
# sandbox is created. This allows for the detection of some additional
# network being added to the existing network namespace, after the
# sandbox has been created.
# (default: disabled)
#enable_netmon = true
# Specify the path to the netmon binary.
path = "/opt/kata/libexec/kata-containers/kata-netmon"
# If enabled, netmon messages will be sent to the system log
# (default: disabled)
enable_debug = {{ kata_containers_qemu_debug }}
[runtime]
# If enabled, the runtime will log additional debug messages to the
# system log
@@ -583,19 +546,6 @@ internetworking_model="tcfilter"
# (default: true)
disable_guest_seccomp=true
# vCPUs pinning settings
# if enabled, each vCPU thread will be scheduled to a fixed CPU
# qualified condition: num(vCPU threads) == num(CPUs in sandbox's CPUSet)
# enable_vcpus_pinning = false
# Apply a custom SELinux security policy to the container process inside the VM.
# This is used when you want to apply a type other than the default `container_t`,
# so general users should not uncomment and apply it.
# (format: "user:role:type")
# Note: You cannot specify MCS policy with the label because the sensitivity levels and
# categories are determined automatically by high-level container runtimes such as containerd.
#guest_selinux_label="system_u:system_r:container_t"
# If enabled, the runtime will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
# (default: disabled)
@@ -613,9 +563,11 @@ disable_guest_seccomp=true
# If enabled, the runtime will not create a network namespace for shim and hypervisor processes.
# This option may have some potential impacts to your host. It should only be used when you know what you're doing.
# `disable_new_netns` conflicts with `enable_netmon`
# `disable_new_netns` conflicts with `internetworking_model=tcfilter` and `internetworking_model=macvtap`. It works only
# with `internetworking_model=none`. The tap device will be in the host network namespace and can connect to a bridge
# (like OVS) directly.
# If you are using docker, `disable_new_netns` only works with `docker run --net=none`
# (default: false)
#disable_new_netns = true
@@ -624,49 +576,15 @@ disable_guest_seccomp=true
# The runtime caller is free to restrict or collect cgroup stats of the overall Kata sandbox.
# The sandbox cgroup path is the parent cgroup of a container with the PodSandbox annotation.
# The sandbox cgroup is constrained if there is no container type annotation.
# See: https://pkg.go.dev/github.com/kata-containers/kata-containers/src/runtime/virtcontainers#ContainerType
# See: https://godoc.org/github.com/kata-containers/runtime/virtcontainers#ContainerType
sandbox_cgroup_only={{ kata_containers_qemu_sandbox_cgroup_only }}
# If enabled, the runtime will attempt to determine appropriate sandbox size (memory, CPU) before booting the virtual machine. In
# this case, the runtime will not dynamically update the amount of memory and CPU in the virtual machine. This is generally helpful
# when a hardware architecture or hypervisor solutions is utilized which does not support CPU and/or memory hotplug.
# Compatibility for determining appropriate sandbox (VM) size:
# - When running with pods, sandbox sizing information will only be available if using Kubernetes >= 1.23 and containerd >= 1.6. CRI-O
# does not yet support sandbox sizing annotations.
# - When running single containers using a tool like ctr, container sizing information will be available.
static_sandbox_resource_mgmt=false
# If specified, sandbox_bind_mounts identifies host paths to be mounted (ro) into the sandboxes shared path.
# This is only valid if filesystem sharing is utilized. The provided path(s) will be bindmounted into the shared fs directory.
# If defaults are utilized, these mounts should be available in the guest at `/run/kata-containers/shared/containers/sandbox-mounts`
# These will not be exposed to the container workloads, and are only provided for potential guest services.
sandbox_bind_mounts=[]
# VFIO Mode
# Determines how VFIO devices should be presented to the container.
# Options:
#
# - vfio
# Matches behaviour of OCI runtimes (e.g. runc) as much as
# possible. VFIO devices will appear in the container as VFIO
# character devices under /dev/vfio. The exact names may differ
# from the host (they need to match the VM's IOMMU group numbers
# rather than the host's)
#
# - guest-kernel
# This is a Kata-specific behaviour that's useful in certain cases.
# The VFIO device is managed by whatever driver in the VM kernel
# claims it. This means it will appear as one or more device nodes
# or network interfaces depending on the nature of the device.
# Using this mode requires specially built workloads that know how
# to locate the relevant device interfaces within the VM.
#
vfio_mode="guest-kernel"
# If enabled, the runtime will not create Kubernetes emptyDir mounts on the guest filesystem. Instead, emptyDir mounts will
# be created on the host and shared via virtio-fs. This is potentially slower, but allows sharing of files from host to guest.
disable_guest_empty_dir=false
# Enabled experimental feature list, format: ["a", "b"].
# Experimental features are features not stable enough for production,
# they may break compatibility, and are prepared for a big version bump.


@@ -57,8 +57,7 @@ download_retries: 4
docker_image_pull_command: "{{ docker_bin_dir }}/docker pull"
docker_image_info_command: "{{ docker_bin_dir }}/docker images -q | xargs -i {{ '{{' }} docker_bin_dir }}/docker inspect -f {% raw %}'{{ '{{' }} if .RepoTags }}{{ '{{' }} join .RepoTags \",\" }}{{ '{{' }} end }}{{ '{{' }} if .RepoDigests }},{{ '{{' }} join .RepoDigests \",\" }}{{ '{{' }} end }}' {% endraw %} {} | tr '\n' ','"
nerdctl_image_info_command: "{{ bin_dir }}/nerdctl -n k8s.io images --format '{% raw %}{{ .Repository }}:{{ .Tag }}{% endraw %}' 2>/dev/null | grep -v ^:$ | tr '\n' ','"
# Using ctr instead of nerdctl to work around https://github.com/kubernetes-sigs/kubespray/issues/10670
nerdctl_image_pull_command: "{{ bin_dir }}/ctr -n k8s.io images pull{% if containerd_registries_mirrors is defined %} --hosts-dir {{ containerd_cfg_dir }}/certs.d{%- endif -%}"
nerdctl_image_pull_command: "{{ bin_dir }}/nerdctl -n k8s.io pull --quiet"
crictl_image_info_command: "{{ bin_dir }}/crictl images --verbose | awk -F ': ' '/RepoTags|RepoDigests/ {print $2}' | tr '\n' ','"
crictl_image_pull_command: "{{ bin_dir }}/crictl pull"
@@ -101,7 +100,7 @@ github_image_repo: "ghcr.io"
# TODO(mattymo): Move calico versions to roles/network_plugins/calico/defaults
# after migration to container download
calico_version: "v3.26.4"
calico_version: "v3.25.2"
calico_ctl_version: "{{ calico_version }}"
calico_cni_version: "{{ calico_version }}"
calico_flexvol_version: "{{ calico_version }}"
@@ -115,7 +114,6 @@ flannel_version: "v0.22.0"
flannel_cni_version: "v1.1.2"
cni_version: "v1.3.0"
weave_version: 2.8.1
pod_infra_version: "3.9"
cilium_version: "v1.13.4"
cilium_cli_version: "v0.15.0"
@@ -123,35 +121,41 @@ cilium_enable_hubble: false
kube_ovn_version: "v1.11.5"
kube_ovn_dpdk_version: "19.11-{{ kube_ovn_version }}"
kube_router_version: "v2.0.0"
kube_router_version: "v1.5.1"
multus_version: "v3.8"
helm_version: "v3.13.1"
nerdctl_version: "1.7.1"
helm_version: "v3.12.3"
nerdctl_version: "1.5.0"
krew_version: "v0.4.4"
skopeo_version: "v1.13.2"
# Get kubernetes major version (i.e. 1.17.4 => 1.17)
kube_major_version: "{{ kube_version | regex_replace('^v([0-9])+\\.([0-9]+)\\.[0-9]+', 'v\\1.\\2') }}"
pod_infra_supported_version:
v1.27: "3.9"
v1.26: "3.9"
v1.25: "3.8"
pod_infra_version: "{{ pod_infra_supported_version[kube_major_version] }}"
etcd_supported_versions:
v1.28: "v3.5.10"
v1.27: "v3.5.10"
v1.26: "v3.5.10"
v1.25: "v3.5.9"
etcd_version: "{{ etcd_supported_versions[kube_major_version] }}"
crictl_supported_versions:
v1.28: "v1.28.0"
v1.27: "v1.27.1"
v1.26: "v1.26.1"
v1.25: "v1.25.0"
crictl_version: "{{ crictl_supported_versions[kube_major_version] }}"
crio_supported_versions:
v1.28: v1.28.1
v1.27: v1.27.1
v1.26: v1.26.4
v1.25: v1.25.4
crio_version: "{{ crio_supported_versions[kube_major_version] }}"
yq_version: "v4.35.2"
yq_version: "v4.35.1"
# Download URLs
kubelet_download_url: "https://dl.k8s.io/release/{{ kube_version }}/bin/linux/{{ image_arch }}/kubelet"
@@ -281,7 +285,7 @@ coredns_image_is_namespaced: "{{ (coredns_version is version('v1.7.1', '>=')) }}
coredns_image_repo: "{{ kube_image_repo }}{{ '/coredns/coredns' if (coredns_image_is_namespaced | bool) else '/coredns' }}"
coredns_image_tag: "{{ coredns_version if (coredns_image_is_namespaced | bool) else (coredns_version | regex_replace('^v', '')) }}"
nodelocaldns_version: "1.22.28"
nodelocaldns_version: "1.22.20"
nodelocaldns_image_repo: "{{ kube_image_repo }}/dns/k8s-dns-node-cache"
nodelocaldns_image_tag: "{{ nodelocaldns_version }}"
@@ -307,14 +311,14 @@ rbd_provisioner_image_tag: "{{ rbd_provisioner_version }}"
local_path_provisioner_version: "v0.0.24"
local_path_provisioner_image_repo: "{{ docker_image_repo }}/rancher/local-path-provisioner"
local_path_provisioner_image_tag: "{{ local_path_provisioner_version }}"
ingress_nginx_version: "v1.9.4"
ingress_nginx_version: "v1.8.1"
ingress_nginx_controller_image_repo: "{{ kube_image_repo }}/ingress-nginx/controller"
ingress_nginx_controller_image_tag: "{{ ingress_nginx_version }}"
ingress_nginx_kube_webhook_certgen_image_repo: "{{ kube_image_repo }}/ingress-nginx/kube-webhook-certgen"
ingress_nginx_kube_webhook_certgen_image_tag: "v20231011-8b53cabe0"
ingress_nginx_kube_webhook_certgen_image_tag: "v20230407"
alb_ingress_image_repo: "{{ docker_image_repo }}/amazon/aws-alb-ingress-controller"
alb_ingress_image_tag: "v1.1.9"
cert_manager_version: "v1.13.2"
cert_manager_version: "v1.11.1"
cert_manager_controller_image_repo: "{{ quay_image_repo }}/jetstack/cert-manager-controller"
cert_manager_controller_image_tag: "{{ cert_manager_version }}"
cert_manager_cainjector_image_repo: "{{ quay_image_repo }}/jetstack/cert-manager-cainjector"
@@ -336,9 +340,9 @@ csi_livenessprobe_image_repo: "{{ kube_image_repo }}/sig-storage/livenessprobe"
csi_livenessprobe_image_tag: "v2.5.0"
snapshot_controller_supported_versions:
  v1.28: "v4.2.1"
  v1.27: "v4.2.1"
  v1.26: "v4.2.1"
  v1.25: "v4.2.1"
snapshot_controller_image_repo: "{{ kube_image_repo }}/sig-storage/snapshot-controller"
snapshot_controller_image_tag: "{{ snapshot_controller_supported_versions[kube_major_version] }}"
@@ -695,7 +699,7 @@ downloads:
enabled: "{{ kube_network_plugin == 'cilium' or cilium_deploy_additionally | default(false) | bool }}"
file: true
version: "{{ cilium_cli_version }}"
dest: "{{ local_release_dir }}/cilium-{{ cilium_cli_version }}-{{ image_arch }}.tar.gz"
dest: "{{ local_release_dir }}/cilium-{{ cilium_cli_version }}-{{ image_arch }}"
sha256: "{{ ciliumcli_binary_checksum }}"
url: "{{ ciliumcli_download_url }}"
unarchive: true
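Each file-type entry in the downloads dict follows the shape shown above. A minimal sketch with an illustrative key, placeholder URL, and placeholder checksum (real entries take these from the generated checksum and download-URL variables, and may carry further fields not shown here):

downloads:
  example_cli:                                      # illustrative key
    enabled: true                                   # only fetched when true
    file: true                                      # a file/archive, not a container image
    version: "v0.15.0"
    dest: "{{ local_release_dir }}/example-v0.15.0-amd64.tar.gz"
    sha256: "<expected sha256>"                     # placeholder
    url: "https://example.invalid/example.tar.gz"   # placeholder
    unarchive: true                                 # extract after download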


@@ -120,10 +120,3 @@ etcd_experimental_initial_corrupt_check: true
# may contain some private data, so it is recommended to set it to false
# in the production environment.
unsafe_show_logs: false
# Enable distributed tracing
# https://etcd.io/docs/v3.5/op-guide/monitoring/#distributed-tracing
etcd_experimental_enable_distributed_tracing: false
etcd_experimental_distributed_tracing_sample_rate: 100
etcd_experimental_distributed_tracing_address: "localhost:4317"
etcd_experimental_distributed_tracing_service_name: etcd
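These toggles correspond to etcd v3.5's experimental OpenTelemetry tracing options; localhost:4317 is the conventional OTLP/gRPC collector endpoint. A sketch of the flags they would enable, assuming the variable names map one-to-one onto etcd's flags (the template wiring itself is not part of this diff):

# --experimental-enable-distributed-tracing=true
# --experimental-distributed-tracing-address=localhost:4317
# --experimental-distributed-tracing-service-name=etcd
# --experimental-distributed-tracing-sampling-rate=100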


@@ -1,14 +1,22 @@
---
- name: Backup etcd data
  command: /bin/true
  notify:
    - Refresh Time Fact
    - Set Backup Directory
    - Create Backup Directory
    - Stat etcd v2 data directory
    - Backup etcd v2 data
    - Backup etcd v3 data
  when: etcd_cluster_is_healthy.rc == 0
- name: Refresh Time Fact
  setup:
    filter: ansible_date_time
  listen: Restart etcd
  when: etcd_cluster_is_healthy.rc == 0
- name: Set Backup Directory
  set_fact:
    etcd_backup_directory: "{{ etcd_backup_prefix }}/etcd-{{ ansible_date_time.date }}_{{ ansible_date_time.time }}"
  listen: Restart etcd
- name: Create Backup Directory
  file:
@@ -17,8 +25,6 @@
    owner: root
    group: root
    mode: 0600
  listen: Restart etcd
  when: etcd_cluster_is_healthy.rc == 0
- name: Stat etcd v2 data directory
  stat:
@@ -27,13 +33,9 @@
    get_checksum: no
    get_mime: no
  register: etcd_data_dir_member
  listen: Restart etcd
  when: etcd_cluster_is_healthy.rc == 0
- name: Backup etcd v2 data
  when:
    - etcd_data_dir_member.stat.exists
    - etcd_cluster_is_healthy.rc == 0
  when: etcd_data_dir_member.stat.exists
  command: >-
    {{ bin_dir }}/etcdctl backup
    --data-dir {{ etcd_data_dir }}
@@ -44,7 +46,6 @@
  register: backup_v2_command
  until: backup_v2_command.rc == 0
  delay: "{{ retry_stagger | random + 3 }}"
  listen: Restart etcd
- name: Backup etcd v3 data
  command: >-
@@ -60,5 +61,3 @@
  register: etcd_backup_v3_command
  until: etcd_backup_v3_command.rc == 0
  delay: "{{ retry_stagger | random + 3 }}"
  listen: Restart etcd
  when: etcd_cluster_is_healthy.rc == 0
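The file above relies on Ansible handler chaining: a no-op command: /bin/true handler fans out to the real steps via notify (or the steps subscribe with listen), so a single notification runs the whole ordered backup sequence. A minimal self-contained sketch of the pattern, with illustrative names and a placeholder in place of the real etcdctl call:

- name: Backup etcd data
  command: /bin/true
  notify:
    - Set Backup Directory
    - Backup etcd v3 data

- name: Set Backup Directory
  set_fact:
    backup_dir: "/tmp/etcd-{{ ansible_date_time.date }}"   # illustrative path

- name: Backup etcd v3 data
  command: "echo snapshot would be written to {{ backup_dir }}"   # placeholder for etcdctl snapshot save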


@@ -1,18 +1,12 @@
---
- name: Find old etcd backups
  ansible.builtin.find:
    file_type: directory
    recurse: false
    paths: "{{ etcd_backup_prefix }}"
    patterns: "etcd-*"
  register: _etcd_backups
  when: etcd_backup_retention_count >= 0
  listen: Restart etcd
- name: Cleanup etcd backups
  command: /bin/true
  notify:
    - Remove old etcd backups
- name: Remove old etcd backups
  ansible.builtin.file:
    state: absent
    path: "{{ item }}"
  loop: "{{ (_etcd_backups.files | sort(attribute='ctime', reverse=True))[etcd_backup_retention_count:] | map(attribute='path') }}"
  shell:
    chdir: "{{ etcd_backup_prefix }}"
    cmd: "set -o pipefail && find . -name 'etcd-*' -type d | sort -n | head -n -{{ etcd_backup_retention_count }} | xargs rm -rf"
    executable: /bin/bash
  when: etcd_backup_retention_count >= 0
  listen: Restart etcd
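In the shell-based variant above, the backup directory names embed a timestamp, so sorting lists them oldest first and head -n -N withholds the newest N from the deletion list. A worked example of the pipeline, assuming etcd_backup_retention_count=2 and four existing backups (illustrative names):

# input:        ./etcd-2024-01-01_... ./etcd-2024-01-02_... ./etcd-2024-01-03_... ./etcd-2024-01-04_...
# sort -n       -> oldest to newest
# head -n -2    -> prints all but the last two lines (the two newest are withheld)
# xargs rm -rf  -> deletes the two oldest; the two newest backups survive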

Some files were not shown because too many files have changed in this diff.