Compare commits

...

126 Commits

Author SHA1 Message Date
Max Gautier
f9ebd45c74 bootstrap-os: use import_tasks instead of symlinks (#11508)
Working symlinks are dependent on git configuration (when using the playbook as
a git repository, which is common), specifically `git config
core.symlinks`.

While this is enabled by default, some company policies will disable it.

Instead, use import_tasks which should avoid that class of bugs.
2024-09-05 08:24:49 +01:00
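The symlink-vs-import_tasks change above can be pictured with a small sketch (file and task names are illustrative, not the actual kubespray layout):

```yaml
# roles/bootstrap-os/tasks/main.yml (hypothetical layout)
# Before: the distro-specific task file was reached through a git symlink,
# which breaks when the repo is checked out with core.symlinks=false.
# After: import the real file directly; no symlink resolution involved.
- name: Bootstrap Debian-like distributions
  import_tasks: bootstrap-debian.yml
  when: ansible_os_family == "Debian"
```

import_tasks is resolved statically by Ansible at parse time, so it only depends on the file existing, not on how git materialized it.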
Max Gautier
7f527f6195 Drop support for RHEL 7 / CentOS 7 (#11246)
* Simplify docker systemd unit

systemd handles missing units by ignoring the dependency, so we don't need
to template them.

* Remove RHEL 7/CentOS 7 support

- remove ref in kubespray roles
- move CI from centos 7 to 8
- remove docs related to centos7

* Remove container-storage-setup

Only used for RHEL 7 and CentOS 7
2024-09-05 07:41:01 +01:00
刘旭
3da6c4fc18 Allow for configuring etcd progress notify interval and default set to 5s (#11499) 2024-09-05 06:29:05 +01:00
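etcd exposes this knob as a flag/environment variable; a hedged sketch of the resulting etcd environment entry (the kubespray variable plumbing is not shown, and the exact template may differ):

```yaml
# etcd env sketch: equivalent to passing
# --experimental-watch-progress-notify-interval=5s on the command line.
ETCD_EXPERIMENTAL_WATCH_PROGRESS_NOTIFY_INTERVAL: "5s"
```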
Max Gautier
e744a117d6 Remove systemd version + ostree check for docker TasksMax (#11493)
systemd ignores unknown keys (with a warning), so version checking is not
necessary.
There is no rationale for excluding it from ostree systems either.
2024-09-02 13:16:57 +01:00
Jongwoo Han
03372d883a upgrade nerdctl to v1.7.6 (#11492)
Signed-off-by: Jongwoo Han <jongwooo.han@gmail.com>
2024-09-01 11:20:44 +01:00
ChengHao Yang
8a961a60c2 Feat: Gateway API CRDs install support (#11376)
* Feat: add Gateway API CRDs installation

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

* Feat: add Gateway API CRDs variable in inventory

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

---------

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2024-08-31 08:24:45 +01:00
ERIK
db0138b2f9 fix: incorrect member matching when removing etcd nodes (#11488)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2024-08-31 08:20:44 +01:00
Max Gautier
b0be5f2dad Print the name of faulty jinja templates in pre-commit (#11484) 2024-08-30 06:43:30 +01:00
Kay Yan
27c7dc7008 upgrade helm to v3.15.4 (#11486) 2024-08-30 06:39:30 +01:00
Lihai Tu
acc5e579f6 Add conditional checking on ubuntu kernel unattended_upgrades disabling (#11479)
Signed-off-by: tu1h <lihai.tu@daocloud.io>
2024-08-29 15:47:39 +01:00
Takuya Murakami
60b323b17f [CI] Add a CI job to test cluster upgrading, and fix bug of testcases_run.sh (#11458)
* Fix: fix testcases_run.sh for upgrade tests

Need to git checkout ${CI_COMMIT_SHA} before running upgrade playbook (revert #11173 partially)

* feat: add CI job to test upgrade

Add a packet_ubuntu22-calico-all-in-one-upgrade job
2024-08-29 15:47:32 +01:00
Ehsan Golpayegani
924a979955 Calico v3.28.[0-1] checksums and change calico default version (#11234)
* make calico api server manifest backward compatible with versions older than 3.27.3

Add 3.28.1 checksums
Add 3.28.0 checksums
Change default version to 3.27.3

* change default calico version to 3.28.1

* Set mount type to DirectoryOrCreate for hostPath needed by Calico
2024-08-29 12:10:28 +01:00
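The hostPath change in the last bullet can be sketched as a manifest fragment (volume name and path are illustrative): `DirectoryOrCreate` makes the kubelet create the directory when it is missing instead of failing the mount:

```yaml
volumes:
  - name: calico-certs
    hostPath:
      path: /etc/calico/certs
      type: DirectoryOrCreate
```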
Max Gautier
5fe8714f05 Adding myself (VannTen) as approver (#11483) 2024-08-29 10:30:29 +01:00
Kay Yan
6acb44eeaf update containerd 1.7.21 (#11478) 2024-08-29 04:22:29 +01:00
Takuya Murakami
c89ea7e4c7 Fix: remove --config option from kubeadm upgrade (#11350) (#11352)
Some options can't be mixed with --config for kubeadm upgrade.
The --config option on upgrade is deprecated and should be removed.
2024-08-29 03:08:29 +01:00
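In practice that means driving the upgrade with explicit arguments rather than a config file; a hedged sketch of the resulting invocation (the version value is an example):

```
# Plan first, then apply with an explicit version instead of --config.
kubeadm upgrade plan
kubeadm upgrade apply v1.30.3
```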
Selçuk Arıbalı
3d9e4951ce fix static api server advertise address (#11457) 2024-08-28 15:20:56 +01:00
Max Gautier
776b40a329 Adjust task name since we allow empty kube_node (#11474) 2024-08-28 06:35:02 +01:00
Max Gautier
a3d0ba230d Remove kubeadm_version and use kube_version instead (#11473)
We explicitly check for equality, so customizing kubeadm_version does not
work at the moment.

Use only one variable instead.
2024-08-28 06:34:56 +01:00
Vlad Korolev
9a7b021eb8 Do not use ‘yes/no’ for boolean values (#11472)
Consistent boolean values in ansible playbooks
2024-08-28 06:30:56 +01:00
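The motivation: YAML 1.1 parsers already coerce unquoted yes/no to booleans, so spelling them as true/false says the same thing unambiguously. A before/after fragment (the key name is illustrative, not taken from the playbooks):

```yaml
# Before (YAML 1.1 reads this as a boolean anyway):
#   docker_orphan_clean_up: yes
# After (explicit, lint-friendly):
docker_orphan_clean_up: true
```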
R. P. Taylor
5c5421e453 fix double pop of access_ip (#11435) 2024-08-27 16:28:57 +01:00
dependabot[bot]
1798989f99 Bump molecule from 24.7.0 to 24.8.0 (#11460)
Bumps [molecule](https://github.com/ansible-community/molecule) from 24.7.0 to 24.8.0.
- [Release notes](https://github.com/ansible-community/molecule/releases)
- [Commits](https://github.com/ansible-community/molecule/compare/v24.7.0...v24.8.0)

---
updated-dependencies:
- dependency-name: molecule
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-08-27 14:32:56 +01:00
kyrie
961a6a8c9e fix reset network for tencent OS (#11459)
Signed-off-by: KubeKyrie <shaolong.qin@daocloud.io>
2024-08-26 15:32:08 +01:00
Lola Delannoy
2f84567a69 Add containerd config options (#11080)
* chore(containerd): add some config debug options

See: https://github.com/containerd/containerd/blob/v1.7.15/docs/man/containerd-config.toml.5.md

* chore(containerd): add CRI config options

See: https://github.com/containerd/containerd/blob/v1.7.15/docs/man/containerd-config.toml.5.md
See: https://github.com/containerd/containerd/blob/v1.7.15/docs/cri/config.md
2024-08-21 05:13:05 +01:00
dependabot[bot]
171b0e60aa Bump tox from 4.17.1 to 4.18.0 (#11461)
Bumps [tox](https://github.com/tox-dev/tox) from 4.17.1 to 4.18.0.
- [Release notes](https://github.com/tox-dev/tox/releases)
- [Changelog](https://github.com/tox-dev/tox/blob/main/docs/changelog.rst)
- [Commits](https://github.com/tox-dev/tox/compare/4.17.1...4.18.0)

---
updated-dependencies:
- dependency-name: tox
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-08-20 02:35:44 -07:00
Mohamed Omar Zaian
c4338687e1 [ingress-nginx] upgrade to 1.11.2 (#11463) 2024-08-19 06:10:27 -07:00
Mohamed Omar Zaian
ad1ce92b41 Update node-feature-discovery to v0.16.4 (#11250) 2024-08-19 05:59:30 -07:00
kokyhm
1093c76f9b bump k8s version (#11455) 2024-08-19 00:12:33 -07:00
ChengHao Yang
c7935e2988 Add tico88612 as reviewer (#11453)
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2024-08-16 07:06:39 -07:00
Ho Kim
0306771c29 fix: cleanup networkmanager dns conf on reset (#11440) 2024-08-15 06:43:19 -07:00
Mengxin Liu
390d74706c [kube-ovn] update version to 1.12.21 (#11445)
Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>
2024-08-15 06:39:18 -07:00
dependabot[bot]
ce9ba9a8bf Bump tox from 4.16.0 to 4.17.1 (#11442)
Bumps [tox](https://github.com/tox-dev/tox) from 4.16.0 to 4.17.1.
- [Release notes](https://github.com/tox-dev/tox/releases)
- [Changelog](https://github.com/tox-dev/tox/blob/main/docs/changelog.rst)
- [Commits](https://github.com/tox-dev/tox/compare/4.16.0...4.17.1)

---
updated-dependencies:
- dependency-name: tox
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-08-14 04:19:37 -07:00
Ho Kim
fe4cbbccd1 fix: correct resolvconf typo (#11439) 2024-08-14 02:07:55 -07:00
Selçuk Arıbalı
e43e08c7d1 fix: use super-admin.conf for kube-vip on first master when it exists (#11422)
* fix: use super-admin.conf for kube-vip when it exists

* Mathieu Parent add as co-author

Co-authored-by: Mathieu Parent <math.parent@gmail.com>

* template change for readability

* fix lint error

---------

Co-authored-by: Mathieu Parent <math.parent@gmail.com>
2024-08-10 21:35:58 -07:00
Cyclinder
28712045a5 bump cni version to v1.4.0 (#10698) 2024-08-10 05:25:58 -07:00
Not Darko
1968db9a52 fix: skip multus when not defined (#10934)
fix task failure:
TASK [kubernetes-apps/network_plugin/multus : Multus | Start resources] ************************************************
fatal: [hfal12k8n1 -> {{ groups['kube_control_plane'][0] }}]: FAILED! => {"msg": "Error in jmespath.search in json_query filter plugin:\n'ansible.vars.hostvars.HostVarsVars object' has no attribute 'multus_manifest_2'"}
2024-08-06 03:42:50 -07:00
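A hedged sketch of the guard this kind of fix typically adds (the variable name comes from the error message above; the real task and module differ):

```yaml
- name: Multus | Start resources
  debug:
    msg: "would apply {{ multus_manifest_2.dest }}"
  # Skip instead of failing the json_query when the second manifest
  # was never registered on the delegate host.
  when: multus_manifest_2 is defined
```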
Slavi Pantaleev
cc03ca62be Avoid empty "supersede domain-name-servers" directives for dhclient.conf (#10948)
Fixes https://github.com/kubernetes-sigs/kubespray/issues/10947

This patch aims to be minimal and intentionally:

- does not change the generation logic for `supersede_domain` and `supersede_search`
- does not change how `nameserverentries` (for NetworkManager) is built

It seems like `nameserverentries` in the "Generate nameservers for resolvconf, including cluster DNS"
task is built the same way as `dhclient_supersede_nameserver_entries_list`.
However, `nameserverentries` in the "Generate nameservers for resolvconf, not including cluster DNS"
task (below) is built differently for some reason. It includes `configured_nameservers` as well.
Due to these differences, I have refrained from reusing the same building logic
(`dhclient_supersede_nameserver_entries_list`) for both.

If the `configured_nameservers` addition can be removed or made to apply
to dhclient as well, we could potentially build a single list and then
generate the `nameserverentries` and `supersede_nameserver` strings from it.
2024-08-06 03:38:51 -07:00
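For context, the directive being guarded looks like this in dhclient.conf (addresses are examples); the bug was emitting it with an empty server list:

```
# Valid: at least one server follows the directive.
supersede domain-name-servers 10.233.0.3, 1.1.1.1;
# Invalid (what the fix avoids): "supersede domain-name-servers ;"
```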
Injun Baeg
5f18fe739e Restart kube-proxy pods only on configmap changes (#11401) 2024-08-06 00:50:50 -07:00
kyrie
343d680371 fix kylin OS choose NetworkManager (#11406)
Signed-off-by: KubeKyrie <shaolong.qin@daocloud.io>
2024-08-05 03:34:59 -07:00
Mohamed Omar Zaian
3d1653f950 [containerd] add hashes for versions '1.6.32-34', 'v1.7.17-20' and make v1.7.20 default (#11413) 2024-08-05 02:48:07 -07:00
Bas
dd51ef6f96 Bugfix/code inspection. (#11384)
- Make ansible-galaxy collection dependencies explicit
- Reorganized requirements.yml
- Adding required collections to galaxy.yml
- Ansible 9.6.0 was yanked on PyPI
- Sync pre-commit requirements with requirements.txt

Signed-off-by: Bas Meijer <bas.meijer@enexis.nl>
2024-08-02 03:43:54 -07:00
James
4e99b94dcc Add generic post upgrade hooks for node (#11368) 2024-07-31 21:58:48 -07:00
Sanyam Shah
54ac5a6de4 Update cni-kube-ovn.yml.j2 (#11357)
Corrected indentation at L658, which caused kubespray execution to fail during YAML to JSON conversion. #11356
2024-07-31 21:58:39 -07:00
David
2799f11475 Add support for LB in upcloud private zone (#11260) 2024-07-31 21:58:30 -07:00
Mohamed Omar Zaian
8d497b49a6 [kubernetes] Add hashes for kubernetes 1.29.7, 1.28.[11-12] (#11407) 2024-07-31 03:50:56 -07:00
Kay Yan
86f980393c Merge pull request #11402 from tu1h/fix_centos_baserepo
Check CentOS-Base.repo exists for CentOS 7
2024-07-30 11:08:22 +08:00
Erwan Miran
d469503e84 Make netchecker log levels configurable (#11334)
* Make netchecker log levels configurable

* use ETCD_LOG_LEVEL
2024-07-28 23:57:56 -07:00
tu1h
351832ba1d Check CentOS-Base.repo exists for CentOS 7
Signed-off-by: tu1h <lihai.tu@daocloud.io>
2024-07-29 13:49:14 +08:00
R. P. Taylor
468c5641b2 fix kube_reserved so it only controls kubeReservedCgroup (#11367) 2024-07-26 01:39:20 -07:00
Ugur Can Ozturk
2299e49e0e [containerd/tracing]: fix containerd tracing templating (#11372)
Signed-off-by: Ugur Ozturk <ugurozturk918@gmail.com>
2024-07-26 01:30:38 -07:00
Tom M.
c0fabccaf6 Add missing advertise-address flag to Kubeadm config, so it's passed to api-server (#11387) 2024-07-26 01:22:05 -07:00
Kay Yan
2ac5b37aa9 Merge pull request #11391 from tico88612/bump/k8s-1.30.3
Make kubernetes v1.30.3 default
2024-07-26 16:15:01 +08:00
Lihai Tu
8208a3f04f Rename systemd module to systemd_service (#11396)
Signed-off-by: tu1h <lihai.tu@daocloud.io>
2024-07-26 01:11:39 -07:00
Lihai Tu
2d194af85e Limit nodes in gather ansible_default_ipv4 (#11370)
Signed-off-by: tu1h <lihai.tu@daocloud.io>
2024-07-25 19:17:48 -07:00
dependabot[bot]
8022eddb55 Bump ansible-lint from 24.6.1 to 24.7.0 (#11380)
Bumps [ansible-lint](https://github.com/ansible/ansible-lint) from 24.6.1 to 24.7.0.
- [Release notes](https://github.com/ansible/ansible-lint/releases)
- [Commits](https://github.com/ansible/ansible-lint/compare/v24.6.1...v24.7.0)

---
updated-dependencies:
- dependency-name: ansible-lint
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-25 19:06:07 -07:00
Tom M.
242edd14ff Fix etcd certificate to access address as SAN (#11388) 2024-07-25 18:49:23 -07:00
Bas
8f5f75211f Improving yamllint configuration (#11389)
Signed-off-by: Bas Meijer <bas.meijer@enexis.nl>
2024-07-25 18:42:20 -07:00
dependabot[bot]
5394715d9b Bump jsonschema from 4.22.0 to 4.23.0 (#11381)
Bumps [jsonschema](https://github.com/python-jsonschema/jsonschema) from 4.22.0 to 4.23.0.
- [Release notes](https://github.com/python-jsonschema/jsonschema/releases)
- [Changelog](https://github.com/python-jsonschema/jsonschema/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/python-jsonschema/jsonschema/compare/v4.22.0...v4.23.0)

---
updated-dependencies:
- dependency-name: jsonschema
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-23 23:51:24 -07:00
ChengHao Yang
56e26d6061 Bump: CRI-O from v1.30.2 to v1.30.3
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2024-07-21 21:54:41 +08:00
ChengHao Yang
513e18cb90 Bump: Kubernetes from v1.30.2 to v1.30.3
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2024-07-21 21:54:16 +08:00
ChengHao Yang
5f35b66256 Bump: OpenStack Cloud Controller Manager to 1.30.0 (#11358)
Signed-off-by: tico88612 <17496418+tico88612@users.noreply.github.com>
2024-07-16 02:22:54 -07:00
ChengHao Yang
bab0398c1e Bump Cinder CSI Plugin to v1.30.0 (#11374)
* Chore: bump cinder-csi-plugin from v1.29.0 to v1.30.0

Signed-off-by: tico88612 <17496418+tico88612@users.noreply.github.com>

* Docs: update README.md cinder-csi-plugin version

Signed-off-by: tico88612 <17496418+tico88612@users.noreply.github.com>

---------

Signed-off-by: tico88612 <17496418+tico88612@users.noreply.github.com>
2024-07-13 02:01:08 -07:00
dependabot[bot]
d993b2b8cf Bump molecule from 24.2.1 to 24.7.0 (#11373)
Bumps [molecule](https://github.com/ansible-community/molecule) from 24.2.1 to 24.7.0.
- [Release notes](https://github.com/ansible-community/molecule/releases)
- [Commits](https://github.com/ansible-community/molecule/compare/v24.2.1...v24.7.0)

---
updated-dependencies:
- dependency-name: molecule
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-13 01:53:08 -07:00
dependabot[bot]
c89f901595 Bump tox from 4.15.0 to 4.16.0 (#11363)
Bumps [tox](https://github.com/tox-dev/tox) from 4.15.0 to 4.16.0.
- [Release notes](https://github.com/tox-dev/tox/releases)
- [Changelog](https://github.com/tox-dev/tox/blob/main/docs/changelog.rst)
- [Commits](https://github.com/tox-dev/tox/compare/4.15.0...4.16.0)

---
updated-dependencies:
- dependency-name: tox
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-11 05:25:24 -07:00
R. P. Taylor
2615805da2 Increase ansible timeout to 300 (#11354) 2024-07-10 19:19:24 -07:00
ChengHao Yang
464cc716d7 Feat: Update CentOS 7 EOL package to vault.centos.org (#11360)
Signed-off-by: tico88612 <17496418+tico88612@users.noreply.github.com>
2024-07-08 04:36:52 -07:00
ERIK
1ebd860c13 [kubernetes] Add hashes for kubernetes 1.29.6 (#11351)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2024-07-05 00:18:25 -07:00
ChengHao Yang
474b259cf8 CI: Remove Debian 10 support & macvlan test move to Debian 12 (#11347)
* CI: macvlan test switch to debian 11 & default job

Signed-off-by: tico88612 <17496418+tico88612@users.noreply.github.com>

* CI: cilium-svc-proxy test switch to debian 12

Signed-off-by: tico88612 <17496418+tico88612@users.noreply.github.com>

* CI: remove debian 10 test

Signed-off-by: tico88612 <17496418+tico88612@users.noreply.github.com>

* Docs: remove debian 10 support

Signed-off-by: tico88612 <17496418+tico88612@users.noreply.github.com>

---------

Signed-off-by: tico88612 <17496418+tico88612@users.noreply.github.com>
2024-07-03 09:13:59 -07:00
Takuya Murakami
a0d03d9fa6 [kubernetes] Support kubernetes 1.30.2 (#11343) 2024-07-03 00:06:20 -07:00
Erwan Miran
0bcedd4603 Make local_volume_provisioner log level configurable (#11336) 2024-07-02 07:14:06 -07:00
Erwan Miran
413572eced Make calico-kube-controllers log level configurable (#11335) 2024-07-02 07:13:59 -07:00
dependabot[bot]
0be525c76f Bump ansible-lint from 24.5.0 to 24.6.1 (#11320)
Bumps [ansible-lint](https://github.com/ansible/ansible-lint) from 24.5.0 to 24.6.1.
- [Release notes](https://github.com/ansible/ansible-lint/releases)
- [Commits](https://github.com/ansible/ansible-lint/compare/v24.5.0...v24.6.1)

---
updated-dependencies:
- dependency-name: ansible-lint
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-02 07:09:59 -07:00
Antoine Legrand
fe97b99984 CI: remove centos7 and weave jobs from test pipeline (#11344)
CentOS 7 reached EOL and the jobs are failing.
The Weave network plugin is an archived project.
2024-07-02 04:21:59 -07:00
ChengHao Yang
348335ece5 [cert-manager] upgrade to v1.14.7 (#11341)
* Feat: upgrade cert-manager crd to 1.14.7

Signed-off-by: tico88612 <17496418+tico88612@users.noreply.github.com>

* Feat: upgrade cert-manager download version to 1.14.7

Signed-off-by: tico88612 <17496418+tico88612@users.noreply.github.com>

---------

Signed-off-by: tico88612 <17496418+tico88612@users.noreply.github.com>
2024-07-02 00:19:58 -07:00
Takuya Murakami
ee3fef1051 [kubernetes] Add hashes for kubernetes 1.30 (#11109) (#11261)
Add hashes to crictl, crio, kubelet, kubectl and kubeadm
2024-07-02 00:15:59 -07:00
Antoine Legrand
a0587e0b8e CI: rework pipeline: short/extended based on labels (#11324)
* CI: reduce VM resources requests to improve scheduling

* CI: Reduce default jobs; add labels(ci-full/extended) to run more test

* CI: use jobs dependencies instead of stages

* precommit one-job

* CI: Use Kubevirt VM to run Molecule and Vagrant jobs
2024-07-01 03:25:36 -07:00
Keita Mochizuki
ff18f65a17 add ingress controller svc nodeport param (#11310) 2024-06-30 21:58:05 -07:00
opethema
35e904d7c3 fix image and kube-vip links (#11267) 2024-06-28 18:42:06 -07:00
dependabot[bot]
9a6922125c Bump netaddr from 1.2.1 to 1.3.0 (#11258)
Bumps [netaddr](https://github.com/netaddr/netaddr) from 1.2.1 to 1.3.0.
- [Release notes](https://github.com/netaddr/netaddr/releases)
- [Changelog](https://github.com/netaddr/netaddr/blob/master/CHANGELOG.rst)
- [Commits](https://github.com/netaddr/netaddr/compare/1.2.1...1.3.0)

---
updated-dependencies:
- dependency-name: netaddr
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-26 07:02:22 -07:00
Max Gautier
821dfbfdba Switch back pre-commit hook misspell to upstream (#11280)
The pull request adding the pre-commit hook config was merged.
2024-06-26 05:14:21 -07:00
ChengHao Yang
cce585066e Bump CNI weave 2.8.1 to 2.8.7 (community version) (#11228)
Signed-off-by: tico88612 <17496418+tico88612@users.noreply.github.com>
2024-06-26 02:40:27 -07:00
Alexander
619938da95 add the ability to configure extra args to the different cinder-csi-p… (#11169)
* add the ability to configure extra args to the different cinder-csi-plugin containers

* endfor block added to be syntactically correct jinja
2024-06-26 02:40:20 -07:00
Keita Mochizuki
88b502f29d add ingress controller admission svc (#11309) 2024-06-26 02:30:41 -07:00
Serge Hartmann
db316a566d dependencies for kubelet.service (#11297)
Signed-off-by: serge Hartmann <serge.hartmann@gmail.com>
2024-06-26 02:30:34 -07:00
Lihai Tu
817c61695d Support disabling unattended-upgrades for the Linux kernel and all packages starting with linux- on Ubuntu (#11296)
Signed-off-by: tu1h <lihai.tu@daocloud.io>
2024-06-26 02:30:27 -07:00
Lihai Tu
0c84175e3b Bump docker_containerd to 1.6.32 (#11293)
Signed-off-by: tu1h <lihai.tu@daocloud.io>
2024-06-26 02:30:21 -07:00
Elias-elastisys
cae266a045 Upgrade upcloud csi driver to v1.1.0 and add snapshot features (#11303) 2024-06-26 02:26:21 -07:00
Robin Wallace
15b62cc7ce upcloud: v5.6.0 and better server groups (#11311) 2024-06-26 01:42:21 -07:00
Daniil Muidinov
c352773737 fix task Set label to node (#11307) 2024-06-25 06:35:40 -07:00
Kay Yan
af0ac977a5 fix-ci-packet_centos7-calico-ha-once-localhost (#11315)
Signed-off-by: Kay Yan <kay.yan@daocloud.io>
2024-06-24 20:49:40 -07:00
Aleksander Salamatov
40f5b28302 Update Vagrantfile: fix path for vagrant.md (#11306) 2024-06-24 20:03:40 -07:00
qlijin
2d612cde4d Fix broken links in the cilium doc (#11318) 2024-06-24 19:45:42 -07:00
ERIK
27cb22cee4 update docker cli version for ubuntu (#11291)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2024-06-24 05:20:56 -07:00
Kay Yan
b7873a0891 add step for k8s upgrade on release process (#11321)
Signed-off-by: Kay Yan <kay.yan@daocloud.io>
2024-06-23 23:34:57 -07:00
peterw
edce2b528d add cilium_hubble_event_buffer_capacity & cilium_hubble_event_queue_size vars (#10943) 2024-06-23 20:14:56 -07:00
Kay Yan
647092b483 fix openstack cleanup (#11299)
Signed-off-by: Kay Yan <kay.yan@daocloud.io>
2024-06-21 10:30:55 -07:00
Lihai Tu
921b0c0bed Add options to control kubelet image pulling (#11094)
Signed-off-by: tu1h <lihai.tu@daocloud.io>
2024-06-21 07:54:54 -07:00
tico88612
24dc4cef56 Feat: upgrade cert-manager from 1.13.2 to 1.13.6 (#11279)
Signed-off-by: tico88612 <17496418+tico88612@users.noreply.github.com>
2024-06-18 00:45:31 -07:00
Antoine Legrand
3e72be2f72 CI: switch to unprivileged Kaniko to build pipeline images (#11292) 2024-06-11 06:19:02 -07:00
Antoine Legrand
f85e96904d Pipeline image: add qemu-utils (#11281) 2024-06-10 06:53:55 -07:00
Ehsan Golpayegani
0c8d29462d make sure peers is defined. (#11259)
* make sure peers is defined.

* Update peer_with_router.yml
2024-06-04 10:02:23 -07:00
itayporezky
351393e32a Removed unnecessary python modules (#11199) 2024-05-31 07:05:01 -07:00
Kay Yan
b70eaa0470 fix auto bump PR is blocked by label (#11256) 2024-05-31 06:28:48 -07:00
Antoine Legrand
ef6d24a49e CI require a 'lgtm' or 'ok-to-test' labels to pass (#11251)
- Require a 'lgtm' or 'ok-to-test' label for running CI after the
  moderator stage

Signed-off-by: ant31 <2t.antoine@gmail.com>
2024-05-31 03:42:49 -07:00
jmaccabee13
6cf11a9c72 fix Hetzner group names (#11232)
The inventory file generated by Terraform produces the following warnings:
```
[WARNING]:  * Failed to parse <PATH>/kubespray/contrib/terraform/hetzner/inventory.ini with ini plugin:
<PATH>/kubespray/contrib/terraform/hetzner/inventory.ini:21: Section [k8s_cluster:children] includes undefined group: kube-master
...
[WARNING]: Could not match supplied host pattern, ignoring: kube-master

PLAY [Add kube-master nodes to kube_control_plane] ********************************************************************************************************
skipping: no hosts matched
[WARNING]: Could not match supplied host pattern, ignoring: kube-node

PLAY [Add kube-node nodes to kube_node] *******************************************************************************************************************
skipping: no hosts matched
```
2024-05-31 01:29:55 -07:00
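A hedged sketch of the corrected inventory shape (host names are examples): the group names must use underscores so `[k8s_cluster:children]` resolves:

```ini
[kube_control_plane]
master1

[kube_node]
worker1

[k8s_cluster:children]
kube_control_plane
kube_node
```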
Kay Yan
aba79d1b3c Merge pull request #11235 from kubernetes-sigs/dependabot/pip/pytest-testinfra-10.1.1
Bump pytest-testinfra from 10.1.0 to 10.1.1
2024-05-31 10:13:05 +08:00
spnngl
4b82e90dcb fix(bootstrap-os): do not install pkgs requirements on flatcar (#11224)
Fix regression added in 663fcd104c for
flatcar nodes.

See: 663fcd104c
2024-05-30 06:34:25 -07:00
Hedayat Vatankhah (هدایت)
dedc00661a Add 'system-packages' tag to control installing packages from OS repositories (#10872) 2024-05-30 04:25:21 -07:00
Kubernetes Prow Robot
0624a3061a Merge pull request #11239 from VannTen/cleanup/collection-build-test
Cleanup galaxy.yml
2024-05-30 04:10:29 -07:00
Max Gautier
3082fa3d0f Allow empty kube_node group (#11248)
While uncommon, provisioning only a control plane is a valid use case,
so don't block it.
2024-05-30 03:01:38 -07:00
Antoine Legrand
d85b29aae1 owners: move ant31 from emeritus to approvers (#11247) 2024-05-30 02:32:28 -07:00
dependabot[bot]
eff4eec8de Bump pytest-testinfra from 10.1.0 to 10.1.1
Bumps [pytest-testinfra](https://github.com/pytest-dev/pytest-testinfra) from 10.1.0 to 10.1.1.
- [Release notes](https://github.com/pytest-dev/pytest-testinfra/releases)
- [Changelog](https://github.com/pytest-dev/pytest-testinfra/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pytest-dev/pytest-testinfra/compare/10.1.0...10.1.1)

---
updated-dependencies:
- dependency-name: pytest-testinfra
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-05-30 02:38:40 +00:00
Kubernetes Prow Robot
af593465b2 Merge pull request #11226 from VannTen/cleanup/pre-commit-hooks
pre-commit: make hooks self contained + ci config
2024-05-29 19:37:56 -07:00
Max Gautier
870049523f collection support: use manifest instead of excludes
Galaxy's `manifest` default works well enough for our case, and
this avoids maintaining a blacklist.
2024-05-29 13:57:33 +02:00
Kay Yan
184b1add54 Merge pull request #11236 from kubernetes-sigs/dependabot/pip/ansible-9.6.0
Bump ansible from 9.5.1 to 9.6.0
2024-05-29 17:17:20 +08:00
Max Gautier
37d824fd2d Update pre-commit hooks 2024-05-28 13:28:03 +02:00
Max Gautier
ff48144607 pre-commit: adjust markdownlint default, md fixes
Use a style file as recommended by upstream. This makes for only one
source of truth.
Preserve the previous upstream default for MD007 (the upstream default changed
here https://github.com/markdownlint/markdownlint/pull/373)
2024-05-28 13:26:49 +02:00
Max Gautier
0faa805525 Remove gitlab-ci job done in pre-commit 2024-05-28 13:26:47 +02:00
Max Gautier
bc21433a05 Run pre-commit hooks in dynamic pipeline
Use gitlab dynamic child pipelines feature to have one source of truth
for the pre-commit jobs, the pre-commit config file.

Use one cache per pre-commit hook. This should reduce the "fetching cache"
step time in gitlab-ci, since each job will have a separate cache with
only its hook installed.
2024-05-28 13:26:46 +02:00
Max Gautier
19851bb07c collection-build-install convert to pre-commit 2024-05-28 13:26:46 +02:00
Max Gautier
7f7b65d388 Convert check_typo to pre-commit + use maintained version
client9/misspell is unmaintained, and has been forked by the golangci
team, see https://github.com/client9/misspell/issues/197#issuecomment-1596318684.

They haven't yet added a pre-commit config, so use my fork with the
pre-commit hook config until the pull request is merged.
2024-05-28 13:26:45 +02:00
Max Gautier
d50f61eae5 pre-commit: apply autofixes hooks and fix the rest manually
- markdownlint (manual fix)
- end-of-file-fixer
- requirements-txt-fixer
- trailing-whitespace
2024-05-28 13:26:44 +02:00
Max Gautier
77bfb53455 Fix ci-matrix pre-commit hook
- Remove the dependency on pydblite, which fails to set up on recent Pythons
- Discard shell script and put everything into pre-commit
2024-05-28 13:26:44 +02:00
Max Gautier
0e449ca75e pre-commit: fix hooks dependencies
- ansible-syntax-check
- tox-inventory-builder
- jinja-syntax-check
2024-05-28 13:26:43 +02:00
Max Gautier
f6d9ff4196 Switch to upstream ansible-lint pre-commit hook
This way, the hook is self contained and does not depend on a previous
virtualenv installation.
2024-05-28 13:26:42 +02:00
Max Gautier
21aba10e08 Use alternate self-sufficient shellcheck pre-commit
This pre-commit hook does not require prerequisites on the host, making it
easier to run in CI workflows.
2024-05-28 13:26:42 +02:00
dependabot[bot]
bd9d90e00c Bump ansible from 9.5.1 to 9.6.0
Bumps [ansible](https://github.com/ansible-community/ansible-build-data) from 9.5.1 to 9.6.0.
- [Changelog](https://github.com/ansible-community/ansible-build-data/blob/main/docs/release-process.md)
- [Commits](https://github.com/ansible-community/ansible-build-data/compare/9.5.1...9.6.0)

---
updated-dependencies:
- dependency-name: ansible
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-05-27 03:45:28 +00:00
454 changed files with 36482 additions and 2289 deletions

View File

@@ -4,4 +4,6 @@ updates:
   directory: "/"
   schedule:
     interval: "weekly"
-  labels: [ "dependencies" ]
+  labels:
+    - dependencies
+    - release-note-none

View File

@@ -1,12 +1,9 @@
--- ---
stages: stages:
- build - build
- unit-tests - test
- deploy-part1 - deploy-part1
- moderator - deploy-extended
- deploy-part2
- deploy-part3
- deploy-special
variables: variables:
KUBESPRAY_VERSION: v2.25.0 KUBESPRAY_VERSION: v2.25.0
@@ -43,15 +40,26 @@ before_script:
 .job: &job
   tags:
-    - packet
+    - ffci
   image: $PIPELINE_IMAGE
   artifacts:
     when: always
     paths:
       - cluster-dump/
+  needs:
+    - pipeline-image
+
+.job-moderated:
+  extends: .job
+  needs:
+    - pipeline-image
+    - ci-not-authorized
+    - check-galaxy-version # lint
+    - pre-commit # lint
+    - vagrant-validate # lint
 .testcases: &testcases
-  <<: *job
+  extends: .job-moderated
   retry: 1
   interruptible: true
   before_script:
@@ -61,23 +69,38 @@ before_script:
   script:
     - ./tests/scripts/testcases_run.sh
   after_script:
-    - chronic ./tests/scripts/testcases_cleanup.sh
+    - ./tests/scripts/testcases_cleanup.sh
 # For failfast, at least 1 job must be defined in .gitlab-ci.yml
 # Premoderated with manual actions
-ci-authorized:
-  extends: .job
-  stage: moderator
+ci-not-authorized:
+  stage: build
+  before_script: []
+  after_script: []
+  rules:
+    # LGTM or ok-to-test labels
+    - if: $PR_LABELS =~ /.*,(lgtm|approved|ok-to-test).*|^(lgtm|approved|ok-to-test).*/i
+      variables:
+        CI_OK_TO_TEST: '0'
+      when: always
+    - if: $CI_PIPELINE_SOURCE == "schedule" || $CI_PIPELINE_SOURCE == "trigger"
+      variables:
+        CI_OK_TO_TEST: '0'
+    - if: $CI_COMMIT_BRANCH == "master"
+      variables:
+        CI_OK_TO_TEST: '0'
+    - when: always
+      variables:
+        CI_OK_TO_TEST: '1'
   script:
-    - /bin/sh scripts/premoderator.sh
-  except: ['triggers', 'master']
-  # Disable ci moderator
-  only: []
+    - exit $CI_OK_TO_TEST
+  tags:
+    - ffci
+  needs: []
 include:
   - .gitlab-ci/build.yml
   - .gitlab-ci/lint.yml
-  - .gitlab-ci/shellcheck.yml
   - .gitlab-ci/terraform.yml
   - .gitlab-ci/packet.yml
   - .gitlab-ci/vagrant.yml
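The label gate in the diff above rests on a single regex; it can be sanity-checked outside GitLab with a few lines of Python (the pattern is copied from the rule, the label strings are invented):

```python
import re

# Pattern from the ci-not-authorized rule: accept lgtm/approved/ok-to-test
# either at the start of the label list or after a comma (case-insensitive).
PATTERN = re.compile(
    r".*,(lgtm|approved|ok-to-test).*|^(lgtm|approved|ok-to-test).*",
    re.IGNORECASE,
)

def ok_to_test(labels: str) -> bool:
    """Return True when the comma-separated label string authorizes CI."""
    return PATTERN.search(labels) is not None

print(ok_to_test("lgtm"))             # True: anchored alternative
print(ok_to_test("docs,ok-to-test"))  # True: label appears after a comma
print(ok_to_test("not-lgtm"))         # False: no comma before the keyword
```

Note the rule inverts the result into `CI_OK_TO_TEST: '0'` (success) when a label matches, since `exit 0` is what lets the pipeline proceed.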

View File

@@ -1,40 +1,32 @@
 ---
-.build:
+.build-container:
+  cache:
+    key: $CI_COMMIT_REF_SLUG
+    paths:
+      - image-cache
+  tags:
+    - ffci
   stage: build
   image:
-    name: moby/buildkit:rootless
+    name: gcr.io/kaniko-project/executor:debug
     entrypoint: ['']
   variables:
-    BUILDKITD_FLAGS: --oci-worker-no-process-sandbox
+    TAG: $CI_COMMIT_SHORT_SHA
+    PROJECT_DIR: $CI_PROJECT_DIR
+    DOCKERFILE: Dockerfile
+    GODEBUG: "http2client=0"
   before_script:
-    - mkdir ~/.docker
-    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > ~/.docker/config.json
-
-pipeline image:
-  extends: .build
+    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"auth\":\"$(echo -n ${CI_REGISTRY_USER}:${CI_REGISTRY_PASSWORD} | base64)\"}}}" > /kaniko/.docker/config.json
   script:
-    - |
-      buildctl-daemonless.sh build \
-        --frontend=dockerfile.v0 \
-        --local context=. \
-        --local dockerfile=. \
-        --opt filename=./pipeline.Dockerfile \
-        --output type=image,name=$PIPELINE_IMAGE,push=true \
-        --import-cache type=registry,ref=$CI_REGISTRY_IMAGE/pipeline:cache
-  rules:
-    - if: '$CI_COMMIT_REF_NAME != $CI_DEFAULT_BRANCH'
+    - /kaniko/executor --cache=true
+      --cache-dir=image-cache
+      --context $PROJECT_DIR
+      --dockerfile $PROJECT_DIR/$DOCKERFILE
+      --label 'git-branch'=$CI_COMMIT_REF_SLUG
+      --label 'git-tag=$CI_COMMIT_TAG'
+      --destination $PIPELINE_IMAGE

-pipeline image and build cache:
-  extends: .build
-  script:
-    - |
-      buildctl-daemonless.sh build \
-        --frontend=dockerfile.v0 \
-        --local context=. \
-        --local dockerfile=. \
-        --opt filename=./pipeline.Dockerfile \
-        --output type=image,name=$PIPELINE_IMAGE,push=true \
-        --import-cache type=registry,ref=$CI_REGISTRY_IMAGE/pipeline:cache \
-        --export-cache type=registry,ref=$CI_REGISTRY_IMAGE/pipeline:cache,mode=max
-  rules:
-    - if: '$CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH'
+pipeline-image:
+  extends: .build-container
+  variables:
+    DOCKERFILE: pipeline.Dockerfile
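The new kaniko `before_script` writes a Docker `config.json` whose `auth` field is `base64("user:password")` rather than separate username/password keys. A minimal reproduction of that one line (all `CI_REGISTRY_*` values below are dummies, not real credentials):

```shell
# Dummy stand-ins for the GitLab CI registry variables.
CI_REGISTRY=registry.example.com
CI_REGISTRY_USER=ci
CI_REGISTRY_PASSWORD=secret

# base64 of "user:password", as expected by the "auth" key.
auth=$(printf '%s' "${CI_REGISTRY_USER}:${CI_REGISTRY_PASSWORD}" | base64)

# The same JSON shape the before_script writes to /kaniko/.docker/config.json.
printf '{"auths":{"%s":{"auth":"%s"}}}\n' "$CI_REGISTRY" "$auth" > config.json
cat config.json
```

kaniko reads this file to authenticate the `--destination` push, so no Docker daemon or `docker login` is needed in the job.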


@@ -1,126 +1,35 @@
 ---
-yamllint:
-  extends: .job
-  stage: unit-tests
-  tags: [light]
+pre-commit:
+  stage: test
+  tags:
+    - ffci
+  image: 'ghcr.io/pre-commit-ci/runner-image@sha256:aaf2c7b38b22286f2d381c11673bec571c28f61dd086d11b43a1c9444a813cef'
   variables:
-    LANG: C.UTF-8
+    PRE_COMMIT_HOME: /pre-commit-cache
   script:
-    - yamllint --strict .
-  except: ['triggers', 'master']
+    - pre-commit run --all-files
+  cache:
+    key: pre-commit-all
+    paths:
+      - /pre-commit-cache
+  needs: []

 vagrant-validate:
   extends: .job
-  stage: unit-tests
-  tags: [light]
+  stage: test
+  tags: [ffci]
   variables:
     VAGRANT_VERSION: 2.3.7
   script:
     - ./tests/scripts/vagrant-validate.sh
   except: ['triggers', 'master']

-ansible-lint:
-  extends: .job
-  stage: unit-tests
-  tags: [light]
-  script:
-    - ansible-lint -v
-  except: ['triggers', 'master']
-
-jinja-syntax-check:
-  extends: .job
-  stage: unit-tests
-  tags: [light]
-  script:
-    - "find -name '*.j2' -exec tests/scripts/check-templates.py {} +"
-  except: ['triggers', 'master']
-
-syntax-check:
-  extends: .job
-  stage: unit-tests
-  tags: [light]
-  variables:
-    ANSIBLE_INVENTORY: inventory/local-tests.cfg
-    ANSIBLE_REMOTE_USER: root
-    ANSIBLE_BECOME: "true"
-    ANSIBLE_BECOME_USER: root
-    ANSIBLE_VERBOSITY: "3"
-  script:
-    - ansible-playbook --syntax-check cluster.yml
-    - ansible-playbook --syntax-check playbooks/cluster.yml
-    - ansible-playbook --syntax-check upgrade-cluster.yml
-    - ansible-playbook --syntax-check playbooks/upgrade_cluster.yml
-    - ansible-playbook --syntax-check reset.yml
-    - ansible-playbook --syntax-check playbooks/reset.yml
-    - ansible-playbook --syntax-check extra_playbooks/upgrade-only-k8s.yml
-  except: ['triggers', 'master']
-
-collection-build-install-sanity-check:
-  extends: .job
-  stage: unit-tests
-  tags: [light]
-  variables:
-    ANSIBLE_COLLECTIONS_PATH: "./ansible_collections"
-  script:
-    - ansible-galaxy collection build
-    - ansible-galaxy collection install kubernetes_sigs-kubespray-$(grep "^version:" galaxy.yml | awk '{print $2}').tar.gz
-    - ansible-galaxy collection list $(egrep -i '(name:\s+|namespace:\s+)' galaxy.yml | awk '{print $2}' | tr '\n' '.' | sed 's|\.$||g') | grep "^kubernetes_sigs.kubespray"
-    - test -f ansible_collections/kubernetes_sigs/kubespray/playbooks/cluster.yml
-    - test -f ansible_collections/kubernetes_sigs/kubespray/playbooks/reset.yml
-  except: ['triggers', 'master']
-
-tox-inventory-builder:
-  stage: unit-tests
-  tags: [light]
-  extends: .job
-  before_script:
-    - ./tests/scripts/rebase.sh
-  script:
-    - pip3 install tox
-    - cd contrib/inventory_builder && tox
-  except: ['triggers', 'master']
-
-markdownlint:
-  stage: unit-tests
-  tags: [light]
-  image: node
-  before_script:
-    - npm install -g markdownlint-cli@0.22.0
-  script:
-    - markdownlint $(find . -name '*.md' | grep -vF './.git') --ignore docs/_sidebar.md --ignore contrib/dind/README.md
-
-generate-sidebar:
-  extends: .job
-  stage: unit-tests
-  tags: [light]
-  script:
-    - scripts/gen_docs_sidebar.sh
-    - git diff --exit-code
-
-check-readme-versions:
-  stage: unit-tests
-  tags: [light]
-  image: python:3
-  script:
-    - tests/scripts/check_readme_versions.sh
-
+# TODO: convert to pre-commit hook
 check-galaxy-version:
-  stage: unit-tests
-  tags: [light]
+  needs: []
+  stage: test
+  tags: [ffci]
   image: python:3
   script:
     - tests/scripts/check_galaxy_version.sh
-
-check-typo:
-  stage: unit-tests
-  tags: [light]
-  image: python:3
-  script:
-    - tests/scripts/check_typo.sh
-
-ci-matrix:
-  stage: unit-tests
-  tags: [light]
-  image: python:3
-  script:
-    - tests/scripts/md-table/test.sh


@@ -1,30 +1,42 @@
 ---
 .molecule:
-  tags: [c3.small.x86]
+  tags: [ffci-vm-med]
   only: [/^pr-.*$/]
   except: ['triggers']
-  image: $PIPELINE_IMAGE
+  image: quay.io/kubespray/vm-kubespray-ci:v6
   services: []
   stage: deploy-part1
+  needs: []
+  #    - ci-not-authorized
+  variables:
+    VAGRANT_DEFAULT_PROVIDER: "libvirt"
   before_script:
-    - tests/scripts/rebase.sh
+    - groups
+    - python3 -m venv citest
+    - source citest/bin/activate
+    - vagrant plugin expunge --reinstall --force --no-tty
+    - vagrant plugin install vagrant-libvirt
+    - pip install --no-compile --no-cache-dir pip -U
+    - pip install --no-compile --no-cache-dir -r $CI_PROJECT_DIR/requirements.txt
+    - pip install --no-compile --no-cache-dir -r $CI_PROJECT_DIR/tests/requirements.txt
+    - ./tests/scripts/rebase.sh
     - ./tests/scripts/vagrant_clean.sh
   script:
     - ./tests/scripts/molecule_run.sh
   after_script:
-    - chronic ./tests/scripts/molecule_logs.sh
+    - ./tests/scripts/molecule_logs.sh
   artifacts:
     when: always
     paths:
       - molecule_logs/

 # CI template for periodic CI jobs
 # Enabled when PERIODIC_CI_ENABLED var is set
 .molecule_periodic:
   only:
     variables:
       - $PERIODIC_CI_ENABLED
   allow_failure: true
   extends: .molecule
@@ -34,50 +46,50 @@ molecule_full:
 molecule_no_container_engines:
   extends: .molecule
   script:
     - ./tests/scripts/molecule_run.sh -e container-engine
   when: on_success

 molecule_docker:
   extends: .molecule
   script:
     - ./tests/scripts/molecule_run.sh -i container-engine/cri-dockerd
   when: on_success

 molecule_containerd:
   extends: .molecule
   script:
     - ./tests/scripts/molecule_run.sh -i container-engine/containerd
   when: on_success

 molecule_cri-o:
   extends: .molecule
-  stage: deploy-part2
+  stage: deploy-part1
   script:
     - ./tests/scripts/molecule_run.sh -i container-engine/cri-o
   allow_failure: true
   when: on_success

-# Stage 3 container engines don't get as much attention so allow them to fail
-molecule_kata:
-  extends: .molecule
-  stage: deploy-part3
-  script:
-    - ./tests/scripts/molecule_run.sh -i container-engine/kata-containers
-  when: manual
-  # FIXME: this test is broken (perma-failing)
+# # Stage 3 container engines don't get as much attention so allow them to fail
+# molecule_kata:
+#   extends: .molecule
+#   stage: deploy-extended
+#   script:
+#     - ./tests/scripts/molecule_run.sh -i container-engine/kata-containers
+#   when: manual
+#   # FIXME: this test is broken (perma-failing)

 molecule_gvisor:
   extends: .molecule
-  stage: deploy-part3
+  stage: deploy-extended
   script:
     - ./tests/scripts/molecule_run.sh -i container-engine/gvisor
   when: manual
   # FIXME: this test is broken (perma-failing)

 molecule_youki:
   extends: .molecule
-  stage: deploy-part3
+  stage: deploy-extended
   script:
     - ./tests/scripts/molecule_run.sh -i container-engine/youki
   when: manual
   # FIXME: this test is broken (perma-failing)


@@ -6,14 +6,56 @@
   CI_PLATFORM: packet
   SSH_USER: kubespray
   tags:
-    - packet
-  except: [triggers]
+    - ffci
+  needs:
+    - pipeline-image
+    - ci-not-authorized

 # CI template for PRs
 .packet_pr:
-  only: [/^pr-.*$/]
+  stage: deploy-part1
+  rules:
+    - if: $PR_LABELS =~ /.*ci-short.*/
+      when: manual
+      allow_failure: true
+    - if: $CI_COMMIT_BRANCH =~ /^pr-.*$/
+      when: on_success
+    - when: manual
+      allow_failure: true
   extends: .packet
+  ## Uncomment this to have multiple stages
+  # needs:
+  #   - packet_ubuntu20-calico-all-in-one
+
+.packet_pr_short:
+  stage: deploy-part1
+  extends: .packet
+  rules:
+    - if: $CI_COMMIT_BRANCH =~ /^pr-.*$/
+      when: on_success
+    - when: manual
+      allow_failure: true
+
+.packet_pr_manual:
+  extends: .packet_pr
+  stage: deploy-extended
+  rules:
+    - if: $PR_LABELS =~ /.*ci-full.*/
+      when: on_success
+    # Else run as manual
+    - when: manual
+      allow_failure: true
+
+.packet_pr_extended:
+  extends: .packet_pr
+  stage: deploy-extended
+  rules:
+    - if: $PR_LABELS =~ /.*(ci-extended|ci-full).*/
+      when: on_success
+    - when: manual
+      allow_failure: true

 # CI template for periodic CI jobs
 # Enabled when PERIODIC_CI_ENABLED var is set
 .packet_periodic:
@@ -34,314 +76,177 @@ packet_cleanup_old:
 # The ubuntu20-calico-all-in-one jobs are meant as early stages to prevent running the full CI if something is horribly broken
 packet_ubuntu20-calico-all-in-one:
   stage: deploy-part1
-  extends: .packet_pr
-  when: on_success
+  extends: .packet_pr_short
   variables:
     RESET_CHECK: "true"

 # ### PR JOBS PART2
-packet_ubuntu20-all-in-one-docker:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-packet_ubuntu20-calico-all-in-one-hardening:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-packet_ubuntu22-all-in-one-docker:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-packet_ubuntu22-calico-all-in-one:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-packet_ubuntu24-all-in-one-docker:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-packet_ubuntu24-calico-all-in-one:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-packet_ubuntu24-calico-etcd-datastore:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-packet_centos7-flannel-addons-ha:
-  extends: .packet_pr
-  stage: deploy-part2
-  when: on_success
-packet_almalinux8-crio:
-  extends: .packet_pr
-  stage: deploy-part2
-  when: on_success
-  allow_failure: true
-packet_ubuntu20-crio:
-  extends: .packet_pr
-  stage: deploy-part2
-  when: manual
-packet_fedora37-crio:
-  extends: .packet_pr
-  stage: deploy-part2
-  when: manual
-packet_ubuntu20-flannel-ha:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: manual
-packet_debian10-cilium-svc-proxy:
-  stage: deploy-part2
-  extends: .packet_periodic
-  when: on_success
-packet_debian10-calico:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-packet_debian10-docker:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-packet_debian11-calico:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-packet_debian11-docker:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-packet_debian12-calico:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-packet_debian12-docker:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-packet_debian12-cilium:
-  stage: deploy-part2
-  extends: .packet_periodic
-  when: on_success
-packet_centos7-calico-ha-once-localhost:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-  variables:
-    # This will instruct Docker not to start over TLS.
-    DOCKER_TLS_CERTDIR: ""
-  services:
-    - docker:19.03.9-dind
-packet_almalinux8-kube-ovn:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-packet_almalinux8-calico:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-packet_rockylinux8-calico:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-packet_rockylinux9-calico:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-packet_rockylinux9-cilium:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-  variables:
-    RESET_CHECK: "true"
-packet_almalinux8-docker:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-packet_amazon-linux-2-all-in-one:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-packet_fedora38-docker-weave:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-  allow_failure: true
-packet_opensuse-docker-cilium:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-# ### MANUAL JOBS
-packet_ubuntu20-docker-weave-sep:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: manual
-packet_ubuntu20-cilium-sep:
-  stage: deploy-special
-  extends: .packet_pr
-  when: manual
-packet_ubuntu20-flannel-ha-once:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: manual
-# Calico HA eBPF
-packet_almalinux8-calico-ha-ebpf:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: manual
-packet_debian10-macvlan:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: manual
-packet_centos7-calico-ha:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: manual
-packet_centos7-multus-calico:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: manual
-packet_fedora38-docker-calico:
-  stage: deploy-part2
-  extends: .packet_periodic
-  when: on_success
-  variables:
-    RESET_CHECK: "true"
-packet_fedora37-calico-selinux:
-  stage: deploy-part2
-  extends: .packet_periodic
-  when: on_success
-packet_fedora37-calico-swap-selinux:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: manual
-packet_almalinux8-calico-nodelocaldns-secondary:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: manual
-packet_fedora38-kube-ovn:
-  stage: deploy-part2
-  extends: .packet_periodic
-  when: on_success
-packet_debian11-custom-cni:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: manual
-packet_debian11-kubelet-csr-approver:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: manual
-packet_debian12-custom-cni-helm:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: manual
-# ### PR JOBS PART3
-# Long jobs (45min+)
-packet_centos7-weave-upgrade-ha:
-  stage: deploy-part3
-  extends: .packet_periodic
-  when: on_success
-  variables:
-    UPGRADE_TEST: basic
-packet_ubuntu20-calico-etcd-kubeadm-upgrade-ha:
-  stage: deploy-part3
-  extends: .packet_periodic
-  when: on_success
-  variables:
-    UPGRADE_TEST: basic
-# Calico HA Wireguard
-packet_ubuntu20-calico-ha-wireguard:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: manual
-packet_debian11-calico-upgrade:
-  stage: deploy-part3
-  extends: .packet_pr
-  when: on_success
-  variables:
-    UPGRADE_TEST: graceful
-packet_almalinux8-calico-remove-node:
-  stage: deploy-part3
-  extends: .packet_pr
-  when: on_success
-  variables:
-    REMOVE_NODE_CHECK: "true"
-    REMOVE_NODE_NAME: "instance-3"
-packet_ubuntu20-calico-etcd-kubeadm:
-  stage: deploy-part3
-  extends: .packet_pr
-  when: on_success
-packet_debian11-calico-upgrade-once:
-  stage: deploy-part3
-  extends: .packet_periodic
-  when: on_success
-  variables:
-    UPGRADE_TEST: graceful
-packet_ubuntu20-calico-ha-recover:
-  stage: deploy-part3
-  extends: .packet_periodic
-  when: on_success
-  variables:
-    RECOVER_CONTROL_PLANE_TEST: "true"
-    RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[2:]:kube_control_plane[1:]"
-packet_ubuntu20-calico-ha-recover-noquorum:
-  stage: deploy-part3
-  extends: .packet_periodic
-  when: on_success
-  variables:
-    RECOVER_CONTROL_PLANE_TEST: "true"
-    RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[1:]:kube_control_plane[1:]"
+packet_ubuntu20-crio:
+  extends: .packet_pr_manual
+packet_ubuntu22-calico-all-in-one:
+  extends: .packet_pr
+packet_ubuntu22-calico-all-in-one-upgrade:
+  extends: .packet_pr
+  variables:
+    UPGRADE_TEST: graceful
+packet_ubuntu24-calico-etcd-datastore:
+  extends: .packet_pr
+packet_almalinux8-crio:
+  extends: .packet_pr
+packet_almalinux8-kube-ovn:
+  extends: .packet_pr
+packet_debian11-calico:
+  extends: .packet_pr
+packet_debian11-macvlan:
+  extends: .packet_pr
+packet_debian12-cilium:
+  extends: .packet_pr
+packet_rockylinux8-calico:
+  extends: .packet_pr
+packet_rockylinux9-cilium:
+  extends: .packet_pr
+  variables:
+    RESET_CHECK: "true"
+packet_amazon-linux-2-all-in-one:
+  extends: .packet_pr
+packet_opensuse-docker-cilium:
+  extends: .packet_pr
+packet_ubuntu20-cilium-sep:
+  extends: .packet_pr
+## Extended
+packet_debian11-docker:
+  extends: .packet_pr_extended
+packet_debian12-docker:
+  extends: .packet_pr_extended
+packet_debian12-calico:
+  extends: .packet_pr_extended
+packet_almalinux8-calico-remove-node:
+  extends: .packet_pr_extended
+  variables:
+    REMOVE_NODE_CHECK: "true"
+    REMOVE_NODE_NAME: "instance-3"
+packet_rockylinux9-calico:
+  extends: .packet_pr_extended
+packet_almalinux8-calico:
+  extends: .packet_pr_extended
+packet_almalinux8-docker:
+  extends: .packet_pr_extended
+packet_ubuntu20-calico-all-in-one-hardening:
+  extends: .packet_pr_extended
+packet_ubuntu24-calico-all-in-one:
+  extends: .packet_pr_extended
+packet_ubuntu20-calico-etcd-kubeadm:
+  extends: .packet_pr_extended
+packet_ubuntu24-all-in-one-docker:
+  extends: .packet_pr_extended
+packet_ubuntu22-all-in-one-docker:
+  extends: .packet_pr_extended
+# ### MANUAL JOBS
+packet_fedora37-crio:
+  extends: .packet_pr_manual
+packet_ubuntu20-flannel-ha:
+  extends: .packet_pr_manual
+packet_ubuntu20-all-in-one-docker:
+  extends: .packet_pr_manual
+packet_ubuntu20-flannel-ha-once:
+  extends: .packet_pr_manual
+packet_fedora37-calico-swap-selinux:
+  extends: .packet_pr_manual
+packet_almalinux8-calico-ha-ebpf:
+  extends: .packet_pr_manual
+packet_almalinux8-calico-nodelocaldns-secondary:
+  extends: .packet_pr_manual
+packet_debian11-custom-cni:
+  extends: .packet_pr_manual
+packet_debian11-kubelet-csr-approver:
+  extends: .packet_pr_manual
+packet_debian12-custom-cni-helm:
+  extends: .packet_pr_manual
+packet_ubuntu20-calico-ha-wireguard:
+  extends: .packet_pr_manual
+# PERIODIC
+packet_fedora38-docker-calico:
+  stage: deploy-extended
+  extends: .packet_periodic
+  variables:
+    RESET_CHECK: "true"
+packet_fedora37-calico-selinux:
+  stage: deploy-extended
+  extends: .packet_periodic
+packet_ubuntu20-calico-etcd-kubeadm-upgrade-ha:
+  stage: deploy-extended
+  extends: .packet_periodic
+  variables:
+    UPGRADE_TEST: basic
+packet_debian11-calico-upgrade-once:
+  stage: deploy-extended
+  extends: .packet_periodic
+  variables:
+    UPGRADE_TEST: graceful
+packet_ubuntu20-calico-ha-recover:
+  stage: deploy-extended
+  extends: .packet_periodic
+  variables:
+    RECOVER_CONTROL_PLANE_TEST: "true"
+    RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[2:]:kube_control_plane[1:]"
+packet_ubuntu20-calico-ha-recover-noquorum:
+  stage: deploy-extended
+  extends: .packet_periodic
+  variables:
+    RECOVER_CONTROL_PLANE_TEST: "true"
+    RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[1:]:kube_control_plane[1:]"
+packet_debian11-calico-upgrade:
+  stage: deploy-extended
+  extends: .packet_periodic
+  variables:
+    UPGRADE_TEST: graceful
+packet_debian12-cilium-svc-proxy:
+  stage: deploy-extended
+  extends: .packet_periodic


@@ -0,0 +1,17 @@
---
# stub pipeline for dynamic generation
pre-commit:
  tags:
    - light
  image: 'ghcr.io/pre-commit-ci/runner-image@sha256:aaf2c7b38b22286f2d381c11673bec571c28f61dd086d11b43a1c9444a813cef'
  variables:
    PRE_COMMIT_HOME: /pre-commit-cache
  script:
    - pre-commit run --all-files
  cache:
    key: pre-commit-$HOOK_ID
    paths:
      - /pre-commit-cache
  parallel:
    matrix:
      - HOOK_ID:


@@ -1,16 +0,0 @@
---
shellcheck:
  extends: .job
  stage: unit-tests
  tags: [light]
  variables:
    SHELLCHECK_VERSION: v0.7.1
  before_script:
    - ./tests/scripts/rebase.sh
    - curl --silent --location "https://github.com/koalaman/shellcheck/releases/download/"${SHELLCHECK_VERSION}"/shellcheck-"${SHELLCHECK_VERSION}".linux.x86_64.tar.xz" | tar -xJv
    - cp shellcheck-"${SHELLCHECK_VERSION}"/shellcheck /usr/bin/
    - shellcheck --version
  script:
    # Run shellcheck for all *.sh
    - find . -name '*.sh' -not -path './.git/*' | xargs shellcheck --severity error
  except: ['triggers', 'master']


@@ -2,6 +2,10 @@
 # Tests for contrib/terraform/
 .terraform_install:
   extends: .job
+  needs:
+    - ci-not-authorized
+    - pipeline-image
+  stage: deploy-part1
   before_script:
     - update-alternatives --install /usr/bin/python python /usr/bin/python3 1
     - ./tests/scripts/rebase.sh
@@ -24,17 +28,19 @@
 .terraform_validate:
   extends: .terraform_install
-  stage: unit-tests
-  tags: [light]
+  tags: [ffci]
   only: ['master', /^pr-.*$/]
   script:
     - terraform -chdir="contrib/terraform/$PROVIDER" validate
     - terraform -chdir="contrib/terraform/$PROVIDER" fmt -check -diff
+  stage: test
+  needs:
+    - pipeline-image

 .terraform_apply:
   extends: .terraform_install
-  tags: [light]
-  stage: deploy-part3
+  tags: [ffci]
+  stage: deploy-extended
   when: manual
   only: [/^pr-.*$/]
   artifacts:
@@ -51,7 +57,7 @@
     - tests/scripts/testcases_run.sh
   after_script:
     # Cleanup regardless of exit code
-    - chronic ./tests/scripts/testcases_cleanup.sh
+    - ./tests/scripts/testcases_cleanup.sh

 tf-validate-openstack:
   extends: .terraform_validate
@@ -146,8 +152,7 @@ tf-validate-nifcloud:
   TF_VAR_router_id: "ab95917c-41fb-4881-b507-3a6dfe9403df"

 tf-elastx_cleanup:
-  stage: unit-tests
-  tags: [light]
+  tags: [ffci]
   image: python
   variables:
     <<: *elastx_variables
@@ -155,10 +160,11 @@ tf-elastx_cleanup:
     - pip install -r scripts/openstack-cleanup/requirements.txt
   script:
     - ./scripts/openstack-cleanup/main.py
+  allow_failure: true

 tf-elastx_ubuntu20-calico:
   extends: .terraform_apply
-  stage: deploy-part3
+  stage: deploy-part1
   when: on_success
   allow_failure: true
   variables:


@@ -1,64 +1,63 @@
 ---
 .vagrant:
   extends: .testcases
+  needs:
+    - ci-not-authorized
   variables:
     CI_PLATFORM: "vagrant"
     SSH_USER: "vagrant"
     VAGRANT_DEFAULT_PROVIDER: "libvirt"
     KUBESPRAY_VAGRANT_CONFIG: tests/files/${CI_JOB_NAME}.rb
-  tags: [c3.small.x86]
-  only: [/^pr-.*$/]
-  except: ['triggers']
-  image: $PIPELINE_IMAGE
+    DOCKER_NAME: vagrant
+    VAGRANT_ANSIBLE_TAGS: facts
+  tags: [ffci-vm-large]
+  # only: [/^pr-.*$/]
+  # except: ['triggers']
+  image: quay.io/kubespray/vm-kubespray-ci:v6
   services: []
   before_script:
+    - echo $USER
+    - python3 -m venv citest
+    - source citest/bin/activate
+    - vagrant plugin expunge --reinstall --force --no-tty
+    - vagrant plugin install vagrant-libvirt
+    - pip install --no-compile --no-cache-dir pip -U
+    - pip install --no-compile --no-cache-dir -r $CI_PROJECT_DIR/requirements.txt
+    - pip install --no-compile --no-cache-dir -r $CI_PROJECT_DIR/tests/requirements.txt
     - ./tests/scripts/vagrant_clean.sh
   script:
     - ./tests/scripts/testcases_run.sh
-  after_script:
-    - chronic ./tests/scripts/testcases_cleanup.sh

 vagrant_ubuntu20-calico-dual-stack:
-  stage: deploy-part2
+  stage: deploy-extended
   extends: .vagrant
   when: manual
   # FIXME: this test if broken (perma-failing)

-vagrant_ubuntu20-weave-medium:
-  stage: deploy-part2
-  extends: .vagrant
-  when: manual
-
 vagrant_ubuntu20-flannel:
-  stage: deploy-part2
+  stage: deploy-part1
   extends: .vagrant
   when: on_success
   allow_failure: false

 vagrant_ubuntu20-flannel-collection:
-  stage: deploy-part2
+  stage: deploy-extended
   extends: .vagrant
-  when: on_success
+  when: manual

 vagrant_ubuntu20-kube-router-sep:
-  stage: deploy-part2
+  stage: deploy-extended
   extends: .vagrant
   when: manual

 # Service proxy test fails connectivity testing
 vagrant_ubuntu20-kube-router-svc-proxy:
-  stage: deploy-part2
+  stage: deploy-extended
   extends: .vagrant
   when: manual

 vagrant_fedora37-kube-router:
-  stage: deploy-part2
+  stage: deploy-extended
   extends: .vagrant
   when: manual
   # FIXME: this test if broken (perma-failing)

-vagrant_centos7-kube-router:
-  stage: deploy-part2
-  extends: .vagrant
-  when: manual


@@ -1,3 +0,0 @@
---
MD013: false
MD029: false

.md_style.rb Normal file

@@ -0,0 +1,4 @@
all
exclude_rule 'MD013'
exclude_rule 'MD029'
rule 'MD007', :indent => 2

.mdlrc Normal file

@@ -0,0 +1 @@
style "#{File.dirname(__FILE__)}/.md_style.rb"


@@ -1,7 +1,7 @@
 ---
 repos:
   - repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: v3.4.0
+    rev: v4.6.0
     hooks:
       - id: check-added-large-files
       - id: check-case-conflict
@@ -15,47 +15,59 @@ repos:
       - id: trailing-whitespace

   - repo: https://github.com/adrienverge/yamllint.git
-    rev: v1.27.1
+    rev: v1.35.1
     hooks:
       - id: yamllint
         args: [--strict]

   - repo: https://github.com/markdownlint/markdownlint
-    rev: v0.11.0
+    rev: v0.12.0
     hooks:
       - id: markdownlint
-        args: [-r, "~MD013,~MD029"]
-        exclude: "^.git"
+        exclude: "^.github|(^docs/_sidebar\\.md$)"

-  - repo: https://github.com/jumanjihouse/pre-commit-hooks
-    rev: 3.0.0
+  - repo: https://github.com/shellcheck-py/shellcheck-py
+    rev: v0.10.0.1
     hooks:
       - id: shellcheck
-        args: [--severity, "error"]
+        args: ["--severity=error"]
         exclude: "^.git"
         files: "\\.sh$"

-  - repo: local
+  - repo: https://github.com/ansible/ansible-lint
+    rev: v24.5.0
     hooks:
       - id: ansible-lint
-        name: ansible-lint
-        entry: ansible-lint -v
-        language: python
-        pass_filenames: false
         additional_dependencies:
-          - .[community]
+          - ansible==9.8.0
+          - jsonschema==4.22.0
+          - jmespath==1.0.1
+          - netaddr==1.3.0
+          - distlib

+  - repo: https://github.com/golangci/misspell
+    rev: v0.6.0
+    hooks:
+      - id: misspell
+        exclude: "OWNERS_ALIASES$"
+
+  - repo: local
+    hooks:
       - id: ansible-syntax-check
         name: ansible-syntax-check
         entry: env ANSIBLE_INVENTORY=inventory/local-tests.cfg ANSIBLE_REMOTE_USER=root ANSIBLE_BECOME="true" ANSIBLE_BECOME_USER=root ANSIBLE_VERBOSITY="3" ansible-playbook --syntax-check
         language: python
         files: "^cluster.yml|^upgrade-cluster.yml|^reset.yml|^extra_playbooks/upgrade-only-k8s.yml"
+        additional_dependencies:
+          - ansible==9.5.1

       - id: tox-inventory-builder
         name: tox-inventory-builder
         entry: bash -c "cd contrib/inventory_builder && tox"
         language: python
         pass_filenames: false
+        additional_dependencies:
+          - tox==4.15.0

       - id: check-readme-versions
         name: check-readme-versions
@@ -63,6 +75,15 @@ repos:
         language: script
         pass_filenames: false

+      - id: collection-build-install
+        name: Build and install kubernetes-sigs.kubespray Ansible collection
+        language: python
+        additional_dependencies:
+          - ansible-core>=2.16.4
+          - distlib
+        entry: tests/scripts/collection-build-install.sh
+        pass_filenames: false
+
       - id: generate-docs-sidebar
         name: generate-docs-sidebar
         entry: scripts/gen_docs_sidebar.sh
@@ -71,9 +92,13 @@ repos:
       - id: ci-matrix
         name: ci-matrix
-        entry: tests/scripts/md-table/test.sh
-        language: script
+        entry: tests/scripts/md-table/main.py
+        language: python
         pass_filenames: false
+        additional_dependencies:
+          - jinja2
+          - pathlib
+          - pyaml

       - id: jinja-syntax-check
         name: jinja-syntax-check
@@ -82,4 +107,4 @@ repos:
         types:
           - jinja
         additional_dependencies:
-          - Jinja2
+          - jinja2


@@ -6,7 +6,7 @@ ignore: |
   .github/
   # Generated file
   tests/files/custom_cni/cilium.yaml

+# https://ansible.readthedocs.io/projects/lint/rules/yaml/
 rules:
   braces:
     min-spaces-inside: 0
@@ -14,9 +14,15 @@ rules:
   brackets:
     min-spaces-inside: 0
     max-spaces-inside: 1
+  comments:
+    min-spaces-from-content: 1
+  # https://github.com/adrienverge/yamllint/issues/384
+  comments-indentation: false
   indentation:
     spaces: 2
     indent-sequences: consistent
   line-length: disable
   new-line-at-end-of-file: disable
-  truthy: disable
+  octal-values:
+    forbid-implicit-octal: true # yamllint defaults to false
+    forbid-explicit-octal: true # yamllint defaults to false


@@ -6,15 +6,17 @@ aliases:
     - mzaian
     - oomichi
     - yankay
+    - ant31
+    - vannten
   kubespray-reviewers:
     - cyclinder
     - erikjiang
     - mrfreezeex
     - mzaian
+    - tico88612
     - vannten
     - yankay
   kubespray-emeritus_approvers:
-    - ant31
     - atoms
     - chadswen
     - luckysb


@@ -141,13 +141,13 @@ vagrant up
 ## Supported Linux Distributions
 - **Flatcar Container Linux by Kinvolk**
-- **Debian** Bookworm, Bullseye, Buster
+- **Debian** Bookworm, Bullseye
 - **Ubuntu** 20.04, 22.04, 24.04
-- **CentOS/RHEL** 7, [8, 9](docs/operating_systems/centos.md#centos-8)
+- **CentOS/RHEL** [8, 9](docs/operating_systems/centos.md#centos-8)
 - **Fedora** 37, 38
 - **Fedora CoreOS** (see [fcos Note](docs/operating_systems/fcos.md))
 - **openSUSE** Leap 15.x/Tumbleweed
-- **Oracle Linux** 7, [8, 9](docs/operating_systems/centos.md#centos-8)
+- **Oracle Linux** [8, 9](docs/operating_systems/centos.md#centos-8)
 - **Alma Linux** [8, 9](docs/operating_systems/centos.md#centos-8)
 - **Rocky Linux** [8, 9](docs/operating_systems/centos.md#centos-8)
 - **Kylin Linux Advanced Server V10** (experimental: see [kylin linux notes](docs/operating_systems/kylinlinux.md))
@@ -160,28 +160,28 @@ Note: Upstart/SysV init based OS types are not supported.
 ## Supported Components
 - Core
-  - [kubernetes](https://github.com/kubernetes/kubernetes) v1.29.5
+  - [kubernetes](https://github.com/kubernetes/kubernetes) v1.30.4
   - [etcd](https://github.com/etcd-io/etcd) v3.5.12
   - [docker](https://www.docker.com/) v26.1
-  - [containerd](https://containerd.io/) v1.7.16
+  - [containerd](https://containerd.io/) v1.7.21
-  - [cri-o](http://cri-o.io/) v1.29.1 (experimental: see [CRI-O Note](docs/CRI/cri-o.md). Only on fedora, ubuntu and centos based OS)
+  - [cri-o](http://cri-o.io/) v1.30.3 (experimental: see [CRI-O Note](docs/CRI/cri-o.md). Only on fedora, ubuntu and centos based OS)
 - Network Plugin
   - [cni-plugins](https://github.com/containernetworking/plugins) v1.2.0
-  - [calico](https://github.com/projectcalico/calico) v3.27.3
+  - [calico](https://github.com/projectcalico/calico) v3.28.1
   - [cilium](https://github.com/cilium/cilium) v1.15.4
   - [flannel](https://github.com/flannel-io/flannel) v0.22.0
-  - [kube-ovn](https://github.com/alauda/kube-ovn) v1.11.5
+  - [kube-ovn](https://github.com/alauda/kube-ovn) v1.12.21
   - [kube-router](https://github.com/cloudnativelabs/kube-router) v2.0.0
   - [multus](https://github.com/k8snetworkplumbingwg/multus-cni) v3.8
-  - [weave](https://github.com/weaveworks/weave) v2.8.1
+  - [weave](https://github.com/rajch/weave) v2.8.7
   - [kube-vip](https://github.com/kube-vip/kube-vip) v0.8.0
 - Application
-  - [cert-manager](https://github.com/jetstack/cert-manager) v1.13.2
+  - [cert-manager](https://github.com/jetstack/cert-manager) v1.14.7
   - [coredns](https://github.com/coredns/coredns) v1.11.1
-  - [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v1.10.1
+  - [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v1.11.2
   - [krew](https://github.com/kubernetes-sigs/krew) v0.4.4
   - [argocd](https://argoproj.github.io/) v2.11.0
-  - [helm](https://helm.sh/) v3.14.2
+  - [helm](https://helm.sh/) v3.15.4
   - [metallb](https://metallb.universe.tf/) v0.13.9
   - [registry](https://github.com/distribution/distribution) v2.8.1
 - Storage Plugin
@@ -189,11 +189,11 @@ Note: Upstart/SysV init based OS types are not supported.
   - [rbd-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.1-k8s1.11
   - [aws-ebs-csi-plugin](https://github.com/kubernetes-sigs/aws-ebs-csi-driver) v0.5.0
   - [azure-csi-plugin](https://github.com/kubernetes-sigs/azuredisk-csi-driver) v1.10.0
-  - [cinder-csi-plugin](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md) v1.29.0
+  - [cinder-csi-plugin](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md) v1.30.0
   - [gcp-pd-csi-plugin](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver) v1.9.2
   - [local-path-provisioner](https://github.com/rancher/local-path-provisioner) v0.0.24
   - [local-volume-provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) v2.5.0
-  - [node-feature-discovery](https://github.com/kubernetes-sigs/node-feature-discovery) v0.14.2
+  - [node-feature-discovery](https://github.com/kubernetes-sigs/node-feature-discovery) v0.16.4
 ## Container Runtime Notes

@@ -16,6 +16,7 @@ The Kubespray Project is released on an as-needed basis. The process is as follows:
 1. The release issue is closed
 1. An announcement email is sent to `dev@kubernetes.io` with the subject `[ANNOUNCE] Kubespray $VERSION is released`
 1. The topic of the #kubespray channel is updated with `vX.Y.Z is released! | ...`
+1. Create/update an issue for upgrading kubernetes and [k8s-conformance](https://github.com/cncf/k8s-conformance)
 ## Major/minor releases and milestones

Vagrantfile

@@ -1,7 +1,7 @@
 # -*- mode: ruby -*-
 # vi: set ft=ruby :
-# For help on using kubespray with vagrant, check out docs/vagrant.md
+# For help on using kubespray with vagrant, check out docs/developers/vagrant.md
 require 'fileutils'
@@ -22,8 +22,6 @@ SUPPORTED_OS = {
   "ubuntu2004" => {box: "generic/ubuntu2004", user: "vagrant"},
   "ubuntu2204" => {box: "generic/ubuntu2204", user: "vagrant"},
   "ubuntu2404" => {box: "bento/ubuntu-24.04", user: "vagrant"},
-  "centos" => {box: "centos/7", user: "vagrant"},
-  "centos-bento" => {box: "bento/centos-7.6", user: "vagrant"},
   "centos8" => {box: "centos/8", user: "vagrant"},
   "centos8-bento" => {box: "bento/centos-8", user: "vagrant"},
   "almalinux8" => {box: "almalinux/8", user: "vagrant"},
@@ -36,7 +34,6 @@ SUPPORTED_OS = {
   "opensuse-tumbleweed" => {box: "opensuse/Tumbleweed.x86_64", user: "vagrant"},
   "oraclelinux" => {box: "generic/oracle7", user: "vagrant"},
   "oraclelinux8" => {box: "generic/oracle8", user: "vagrant"},
-  "rhel7" => {box: "generic/rhel7", user: "vagrant"},
   "rhel8" => {box: "generic/rhel8", user: "vagrant"},
   "debian11" => {box: "debian/bullseye64", user: "vagrant"},
   "debian12" => {box: "debian/bookworm64", user: "vagrant"},
@@ -278,6 +275,7 @@ Vagrant.configure("2") do |config|
   "local_path_provisioner_enabled": "#{$local_path_provisioner_enabled}",
   "local_path_provisioner_claim_root": "#{$local_path_provisioner_claim_root}",
   "ansible_ssh_user": SUPPORTED_OS[$os][:user],
+  "ansible_ssh_private_key_file": File.join(Dir.home, ".vagrant.d", "insecure_private_key"),
   "unsafe_show_logs": "True"
 }

@@ -11,6 +11,7 @@ gathering = smart
 fact_caching = jsonfile
 fact_caching_connection = /tmp
 fact_caching_timeout = 86400
+timeout = 300
 stdout_callback = default
 display_skipped_hosts = no
 library = ./library

@@ -1,6 +1,6 @@
 ---
 - name: Generate Azure inventory
   hosts: localhost
-  gather_facts: False
+  gather_facts: false
   roles:
     - generate-inventory

@@ -1,6 +1,6 @@
 ---
 - name: Generate Azure inventory
   hosts: localhost
-  gather_facts: False
+  gather_facts: false
   roles:
     - generate-inventory_2

@@ -1,6 +1,6 @@
 ---
 - name: Generate Azure templates
   hosts: localhost
-  gather_facts: False
+  gather_facts: false
   roles:
     - generate-templates

@@ -12,4 +12,4 @@
   template:
     src: inventory.j2
     dest: "{{ playbook_dir }}/inventory"
-    mode: 0644
+    mode: "0644"

@@ -22,10 +22,10 @@
   template:
     src: inventory.j2
     dest: "{{ playbook_dir }}/inventory"
-    mode: 0644
+    mode: "0644"
 - name: Generate Load Balancer variables
   template:
     src: loadbalancer_vars.j2
     dest: "{{ playbook_dir }}/loadbalancer_vars.yml"
-    mode: 0644
+    mode: "0644"

@@ -8,13 +8,13 @@
     path: "{{ base_dir }}"
     state: directory
     recurse: true
-    mode: 0755
+    mode: "0755"
 - name: Store json files in base_dir
   template:
     src: "{{ item }}"
     dest: "{{ base_dir }}/{{ item }}"
-    mode: 0644
+    mode: "0644"
   with_items:
     - network.json
     - storage.json

@@ -1,7 +1,7 @@
 ---
 - name: Create nodes as docker containers
   hosts: localhost
-  gather_facts: False
+  gather_facts: false
   roles:
     - { role: dind-host }

@@ -18,7 +18,7 @@ distro_settings:
     init: |
       /sbin/init
   centos: &CENTOS
-    image: "centos:7"
+    image: "centos:8"
     user: "centos"
     pid1_exe: /usr/lib/systemd/systemd
     init: |

@@ -15,7 +15,7 @@ docker_storage_options: -s overlay2 --storage-opt overlay2.override_kernel_check
 dns_mode: coredns
-deploy_netchecker: True
+deploy_netchecker: true
 netcheck_agent_image_repo: quay.io/l23network/k8s-netchecker-agent
 netcheck_server_image_repo: quay.io/l23network/k8s-netchecker-server
 netcheck_agent_image_tag: v1.0

@@ -14,7 +14,7 @@
     src: "/bin/true"
     dest: "{{ item }}"
     state: link
-    force: yes
+    force: true
   with_items:
     # DIND box may have swap enable, don't bother
     - /sbin/swapoff
@@ -35,7 +35,7 @@
       path-exclude=/usr/share/doc/*
       path-include=/usr/share/doc/*/copyright
     dest: /etc/dpkg/dpkg.cfg.d/01_nodoc
-    mode: 0644
+    mode: "0644"
   when:
     - ansible_os_family == 'Debian'
@@ -58,13 +58,13 @@
     name: "{{ distro_user }}"
     uid: 1000
     # groups: sudo
-    append: yes
+    append: true
 - name: Allow password-less sudo to "{{ distro_user }}"
   copy:
     content: "{{ distro_user }} ALL=(ALL) NOPASSWD:ALL"
     dest: "/etc/sudoers.d/{{ distro_user }}"
-    mode: 0640
+    mode: "0640"
 - name: "Add my pubkey to {{ distro_user }} user authorized keys"
   ansible.posix.authorized_key:

@@ -19,7 +19,7 @@
     state: started
     hostname: "{{ item }}"
     command: "{{ distro_init }}"
-    # recreate: yes
+    # recreate: true
     privileged: true
     tmpfs:
       - /sys/module/nf_conntrack/parameters
@@ -42,7 +42,7 @@
   template:
     src: inventory_builder.sh.j2
     dest: /tmp/kubespray.dind.inventory_builder.sh
-    mode: 0755
+    mode: "0755"
   tags:
     - addresses

@@ -1,8 +1,8 @@
 ---
 - name: Prepare Hypervisor to later install kubespray VMs
   hosts: localhost
-  gather_facts: False
-  become: yes
+  gather_facts: false
+  become: true
   vars:
     bootstrap_os: none
   roles:

@@ -11,12 +11,12 @@
 - name: Install required packages
   apt:
-    upgrade: yes
-    update_cache: yes
+    upgrade: true
+    update_cache: true
     cache_valid_time: 3600
     name: "{{ item }}"
     state: present
-    install_recommends: no
+    install_recommends: false
   with_items:
     - dnsutils
     - ntp

@@ -20,7 +20,7 @@
       br-netfilter
     owner: root
     group: root
-    mode: 0644
+    mode: "0644"
   when: br_netfilter is defined
@@ -30,7 +30,7 @@
     value: 1
     sysctl_file: "{{ sysctl_file_path }}"
     state: present
-    reload: yes
+    reload: true
 - name: Set bridge-nf-call-{arptables,iptables} to 0
   ansible.posix.sysctl:
@@ -38,7 +38,7 @@
     state: present
     value: 0
     sysctl_file: "{{ sysctl_file_path }}"
-    reload: yes
+    reload: true
   with_items:
     - net.bridge.bridge-nf-call-arptables
     - net.bridge.bridge-nf-call-ip6tables

@@ -11,7 +11,7 @@
     state: directory
     owner: "{{ k8s_deployment_user }}"
     group: "{{ k8s_deployment_user }}"
-    mode: 0700
+    mode: "0700"
 - name: Configure sudo for deployment user
   copy:
@@ -20,13 +20,13 @@
     dest: "/etc/sudoers.d/55-k8s-deployment"
     owner: root
     group: root
-    mode: 0644
+    mode: "0644"
 - name: Write private SSH key
   copy:
     src: "{{ k8s_deployment_user_pkey_path }}"
     dest: "/home/{{ k8s_deployment_user }}/.ssh/id_rsa"
-    mode: 0400
+    mode: "0400"
     owner: "{{ k8s_deployment_user }}"
     group: "{{ k8s_deployment_user }}"
   when: k8s_deployment_user_pkey_path is defined
@@ -41,7 +41,7 @@
 - name: Fix ssh-pub-key permissions
   file:
     path: "/home/{{ k8s_deployment_user }}/.ssh/authorized_keys"
-    mode: 0600
+    mode: "0600"
     owner: "{{ k8s_deployment_user }}"
     group: "{{ k8s_deployment_user }}"
   when: k8s_deployment_user_pkey_path is defined

@@ -14,7 +14,7 @@
   file:
     path: "{{ item }}"
     state: directory
-    mode: 0755
+    mode: "0755"
   become: false
   loop:
     - "{{ playbook_dir }}/plugins/mitogen"
@@ -25,7 +25,7 @@
     url: "{{ mitogen_url }}"
     dest: "{{ playbook_dir }}/dist/mitogen_{{ mitogen_version }}.tar.gz"
     validate_certs: true
-    mode: 0644
+    mode: "0644"
 - name: Extract archive
   unarchive:
@@ -40,7 +40,7 @@
 - name: Add strategy to ansible.cfg
   community.general.ini_file:
     path: ansible.cfg
-    mode: 0644
+    mode: "0644"
     section: "{{ item.section | d('defaults') }}"
     option: "{{ item.option }}"
     value: "{{ item.value }}"

@@ -21,7 +21,7 @@ glusterfs_default_release: ""
 You can specify a `default_release` for apt on Debian/Ubuntu by overriding this variable. This is helpful if you need a different package or version for the main GlusterFS packages (e.g. GlusterFS 3.5.x instead of 3.2.x with the `wheezy-backports` default release on Debian Wheezy).
 ```yaml
-glusterfs_ppa_use: yes
+glusterfs_ppa_use: true
 glusterfs_ppa_version: "3.5"
 ```

@@ -1,7 +1,7 @@
 ---
 # For Ubuntu.
 glusterfs_default_release: ""
-glusterfs_ppa_use: yes
+glusterfs_ppa_use: true
 glusterfs_ppa_version: "4.1"
 # Gluster configuration.

@@ -15,7 +15,7 @@
   file:
     path: "{{ item }}"
     state: directory
-    mode: 0775
+    mode: "0775"
   with_items:
     - "{{ gluster_mount_dir }}"
   when: ansible_os_family in ["Debian","RedHat"] and groups['gfs-cluster'] is defined

@@ -3,7 +3,7 @@
   apt_repository:
     repo: 'ppa:gluster/glusterfs-{{ glusterfs_ppa_version }}'
     state: present
-    update_cache: yes
+    update_cache: true
   register: glusterfs_ppa_added
   when: glusterfs_ppa_use

@@ -1,7 +1,7 @@
 ---
 # For Ubuntu.
 glusterfs_default_release: ""
-glusterfs_ppa_use: yes
+glusterfs_ppa_use: true
 glusterfs_ppa_version: "3.12"
 # Gluster configuration.

@@ -43,13 +43,13 @@
   service:
     name: "{{ glusterfs_daemon }}"
     state: started
-    enabled: yes
+    enabled: true
 - name: Ensure Gluster brick and mount directories exist.
   file:
     path: "{{ item }}"
     state: directory
-    mode: 0775
+    mode: "0775"
   with_items:
     - "{{ gluster_brick_dir }}"
     - "{{ gluster_mount_dir }}"
@@ -62,7 +62,7 @@
     replicas: "{{ groups['gfs-cluster'] | length }}"
     cluster: "{% for item in groups['gfs-cluster'] -%}{{ hostvars[item]['ip'] | default(hostvars[item].ansible_default_ipv4['address']) }}{% if not loop.last %},{% endif %}{%- endfor %}"
     host: "{{ inventory_hostname }}"
-    force: yes
+    force: true
   run_once: true
   when: groups['gfs-cluster'] | length > 1
@@ -73,7 +73,7 @@
     brick: "{{ gluster_brick_dir }}"
     cluster: "{% for item in groups['gfs-cluster'] -%}{{ hostvars[item]['ip'] | default(hostvars[item].ansible_default_ipv4['address']) }}{% if not loop.last %},{% endif %}{%- endfor %}"
     host: "{{ inventory_hostname }}"
-    force: yes
+    force: true
   run_once: true
   when: groups['gfs-cluster'] | length <= 1
@@ -101,7 +101,7 @@
   template:
     dest: "{{ gluster_mount_dir }}/.test-file.txt"
     src: test-file.txt
-    mode: 0644
+    mode: "0644"
   when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]
 - name: Unmount glusterfs

@@ -3,7 +3,7 @@
   apt_repository:
     repo: 'ppa:gluster/glusterfs-{{ glusterfs_ppa_version }}'
     state: present
-    update_cache: yes
+    update_cache: true
   register: glusterfs_ppa_added
   when: glusterfs_ppa_use

@@ -3,7 +3,7 @@
   template:
     src: "{{ item.file }}"
    dest: "{{ kube_config_dir }}/{{ item.dest }}"
-    mode: 0644
+    mode: "0644"
   with_items:
     - { file: glusterfs-kubernetes-endpoint.json.j2, type: ep, dest: glusterfs-kubernetes-endpoint.json}
     - { file: glusterfs-kubernetes-pv.yml.j2, type: pv, dest: glusterfs-kubernetes-pv.yml}

@@ -6,6 +6,6 @@
 - name: Teardown disks in heketi
   hosts: heketi-node
-  become: yes
+  become: true
   roles:
     - { role: tear-down-disks }

@@ -4,7 +4,7 @@
   template:
     src: "heketi-bootstrap.json.j2"
     dest: "{{ kube_config_dir }}/heketi-bootstrap.json"
-    mode: 0640
+    mode: "0640"
   register: "rendering"
 - name: "Kubernetes Apps | Install and configure Heketi Bootstrap"
   kube:

@@ -10,7 +10,7 @@
   template:
     src: "topology.json.j2"
     dest: "{{ kube_config_dir }}/topology.json"
-    mode: 0644
+    mode: "0644"
 - name: "Copy topology configuration into container."
   changed_when: false
   command: "{{ bin_dir }}/kubectl cp {{ kube_config_dir }}/topology.json {{ initial_heketi_pod_name }}:/tmp/topology.json"

@@ -3,7 +3,7 @@
   template:
     src: "glusterfs-daemonset.json.j2"
     dest: "{{ kube_config_dir }}/glusterfs-daemonset.json"
-    mode: 0644
+    mode: "0644"
   become: true
   register: "rendering"
 - name: "Kubernetes Apps | Install and configure GlusterFS daemonset"
@@ -33,7 +33,7 @@
   template:
     src: "heketi-service-account.json.j2"
     dest: "{{ kube_config_dir }}/heketi-service-account.json"
-    mode: 0644
+    mode: "0644"
   become: true
   register: "rendering"
 - name: "Kubernetes Apps | Install and configure Heketi Service Account"

@@ -4,7 +4,7 @@
   template:
     src: "heketi-deployment.json.j2"
     dest: "{{ kube_config_dir }}/heketi-deployment.json"
-    mode: 0644
+    mode: "0644"
   register: "rendering"
 - name: "Kubernetes Apps | Install and configure Heketi"

@@ -28,7 +28,7 @@
   template:
     src: "heketi.json.j2"
     dest: "{{ kube_config_dir }}/heketi.json"
-    mode: 0644
+    mode: "0644"
 - name: "Deploy Heketi config secret"
   when: "secret_state.stdout | length == 0"

@@ -5,7 +5,7 @@
   template:
     src: "heketi-storage.json.j2"
     dest: "{{ kube_config_dir }}/heketi-storage.json"
-    mode: 0644
+    mode: "0644"
   register: "rendering"
 - name: "Kubernetes Apps | Install and configure Heketi Storage"
   kube:

@@ -16,7 +16,7 @@
   template:
     src: "storageclass.yml.j2"
     dest: "{{ kube_config_dir }}/storageclass.yml"
-    mode: 0644
+    mode: "0644"
   register: "rendering"
 - name: "Kubernetes Apps | Install and configure Storace Class"
   kube:

@@ -10,7 +10,7 @@
   template:
     src: "topology.json.j2"
     dest: "{{ kube_config_dir }}/topology.json"
-    mode: 0644
+    mode: "0644"
 - name: "Copy topology configuration into container."  # noqa no-handler
   when: "rendering.changed"
   command: "{{ bin_dir }}/kubectl cp {{ kube_config_dir }}/topology.json {{ heketi_pod_name }}:/tmp/topology.json"

@@ -1,7 +1,7 @@
 ---
 - name: Collect container images for offline deployment
   hosts: localhost
-  become: no
+  become: false
   roles:
     # Just load default variables from roles.
@@ -16,7 +16,7 @@
   template:
     src: ./contrib/offline/temp/{{ item }}.list.template
     dest: ./contrib/offline/temp/{{ item }}.list
-    mode: 0644
+    mode: "0644"
   with_items:
     - files
     - images

@@ -7,17 +7,17 @@
   service_facts:
 - name: Disable service firewalld
-  systemd:
+  systemd_service:
     name: firewalld
     state: stopped
-    enabled: no
+    enabled: false
   when:
     "'firewalld.service' in services and services['firewalld.service'].status != 'not-found'"
 - name: Disable service ufw
-  systemd:
+  systemd_service:
     name: ufw
     state: stopped
-    enabled: no
+    enabled: false
   when:
     "'ufw.service' in services and services['ufw.service'].status != 'not-found'"

@@ -12,8 +12,8 @@ ${list_master}
 ${list_worker}
 [k8s_cluster:children]
-kube-master
-kube-node
+kube_control_plane
+kube_node
 [k8s_cluster:vars]
 network_id=${network_id}
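With the template change above, a generated inventory uses the `kube_control_plane`/`kube_node` group names throughout instead of the deprecated `kube-master`/`kube-node`. A sketch of the resulting file (hostnames, addresses, and the `network_id` value are illustrative, not from the source):

```ini
[kube_control_plane]
master-0 ansible_host=10.0.0.10

[kube_node]
worker-0 ansible_host=10.0.0.20
worker-1 ansible_host=10.0.0.21

[k8s_cluster:children]
kube_control_plane
kube_node

[k8s_cluster:vars]
network_id=example-network-id
```

Note that Ansible group names with hyphens have long been deprecated, which is why the underscore form is used.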

@@ -72,6 +72,7 @@ The setup looks like following
 ```bash
 ./generate-inventory.sh > sample-inventory/inventory.ini
+```
 * Export Variables:

@@ -368,7 +368,7 @@ def iter_host_ips(hosts, ips):
             'ansible_host': ip,
         })
-        if 'use_access_ip' in host[1]['metadata'] and host[1]['metadata']['use_access_ip'] == "0":
+        if 'use_access_ip' in host[1]['metadata'] and host[1]['metadata']['use_access_ip'] == "0" and 'access_ip' in host[1]:
             host[1].pop('access_ip')
         yield host
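The added `'access_ip' in host[1]` guard matters because `dict.pop` with a single argument raises `KeyError` when the key is absent. A standalone sketch of the fixed logic (the function name and sample hosts are illustrative, not from the script):

```python
def strip_access_ip(host):
    """Drop 'access_ip' only when the host opts out of it AND the key exists,
    mirroring the guarded pop in the fixed iter_host_ips()."""
    name, attrs = host
    if attrs['metadata'].get('use_access_ip') == "0" and 'access_ip' in attrs:
        attrs.pop('access_ip')
    return host

# Opts out but never had an access_ip: previously a KeyError, now a no-op.
node1 = ("node1", {"metadata": {"use_access_ip": "0"}, "ansible_host": "10.0.0.5"})
strip_access_ip(node1)

# Opts out and has one: the key is removed.
node2 = ("node2", {"metadata": {"use_access_ip": "0"}, "access_ip": "1.2.3.4"})
strip_access_ip(node2)
print("access_ip" in node2[1])  # False
```

Using `attrs['metadata'].get('use_access_ip')` instead of the original two-step membership test is an equivalent shortening for the sketch; the behavior for the missing-key case is the same.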

@@ -1,5 +1,11 @@
 # See: https://developers.upcloud.com/1.3/5-zones/
 zone = "fi-hel1"
+private_cloud = false
+
+# Only used if private_cloud = true, public zone equivalent
+# For example use finnish public zone for finnish private zone
+public_zone = "fi-hel2"
 username = "ubuntu"
 # Prefix to use for all resources to separate them from other resources
@@ -146,4 +152,4 @@ server_groups = {
 #   ]
 #   anti_affinity_policy = "yes"
 # }
 }

@@ -11,8 +11,10 @@ provider "upcloud" {
 module "kubernetes" {
   source = "./modules/kubernetes-cluster"
   prefix = var.prefix
   zone   = var.zone
+  private_cloud = var.private_cloud
+  public_zone   = var.public_zone
   template_name = var.template_name
   username = var.username

@@ -54,11 +54,12 @@ resource "upcloud_server" "master" {
if machine.node_type == "master" if machine.node_type == "master"
} }
hostname = "${local.resource-prefix}${each.key}" hostname = "${local.resource-prefix}${each.key}"
plan = each.value.plan plan = each.value.plan
cpu = each.value.plan == null ? each.value.cpu : null cpu = each.value.plan == null ? null : each.value.cpu
mem = each.value.plan == null ? each.value.mem : null mem = each.value.plan == null ? null : each.value.mem
zone = var.zone zone = var.zone
server_group = each.value.server_group == null ? null : upcloud_server_group.server_groups[each.value.server_group].id
template { template {
storage = var.template_name storage = var.template_name
@@ -111,11 +112,13 @@ resource "upcloud_server" "worker" {
     if machine.node_type == "worker"
   }
   hostname = "${local.resource-prefix}${each.key}"
   plan     = each.value.plan
-  cpu      = each.value.plan == null ? each.value.cpu : null
-  mem      = each.value.plan == null ? each.value.mem : null
+  cpu      = each.value.plan == null ? null : each.value.cpu
+  mem      = each.value.plan == null ? null : each.value.mem
   zone     = var.zone
+  server_group = each.value.server_group == null ? null : upcloud_server_group.server_groups[each.value.server_group].id
   template {
     storage = var.template_name
@@ -512,8 +515,18 @@ resource "upcloud_loadbalancer" "lb" {
   configured_status = "started"
   name              = "${local.resource-prefix}lb"
   plan              = var.loadbalancer_plan
-  zone              = var.zone
-  network           = upcloud_network.private.id
+  zone              = var.private_cloud ? var.public_zone : var.zone
+  networks {
+    name    = "Private-Net"
+    type    = "private"
+    family  = "IPv4"
+    network = upcloud_network.private.id
+  }
+  networks {
+    name   = "Public-Net"
+    type   = "public"
+    family = "IPv4"
+  }
 }
 resource "upcloud_loadbalancer_backend" "lb_backend" {
@@ -534,6 +547,9 @@ resource "upcloud_loadbalancer_frontend" "lb_frontend" {
   mode = "tcp"
   port = each.value.port
   default_backend_name = upcloud_loadbalancer_backend.lb_backend[each.key].name
+  networks {
+    name = "Public-Net"
+  }
 }
 resource "upcloud_loadbalancer_static_backend_member" "lb_backend_member" {
@@ -557,5 +573,9 @@ resource "upcloud_server_group" "server_groups" {
   title                = each.key
   anti_affinity_policy = each.value.anti_affinity_policy
   labels               = {}
-  members = [for server in each.value.servers : merge(upcloud_server.master, upcloud_server.worker)[server].id]
+  # Managed upstream via upcloud_server resource
+  members = []
+  lifecycle {
+    ignore_changes = [members]
+  }
 }


@@ -6,6 +6,14 @@ variable "zone" {
   type = string
 }
+variable "private_cloud" {
+  type = bool
+}
+variable "public_zone" {
+  type = string
+}
 variable "template_name" {}
 variable "username" {}
@@ -20,6 +28,7 @@ variable "machines" {
     cpu       = string
     mem       = string
     disk_size = number
+    server_group : string
     additional_disks = map(object({
       size = number
       tier = string
@@ -104,6 +113,5 @@ variable "server_groups" {
   type = map(object({
     anti_affinity_policy = string
-    servers              = list(string)
   }))
 }


@@ -3,7 +3,7 @@ terraform {
   required_providers {
     upcloud = {
       source  = "UpCloudLtd/upcloud"
-      version = "~>2.12.0"
+      version = "~>5.6.0"
     }
   }
   required_version = ">= 0.13"


@@ -146,4 +146,4 @@ server_groups = {
 #   ]
 #   anti_affinity_policy = "yes"
 # }
 }


@@ -9,6 +9,15 @@ variable "zone" {
   description = "The zone where to run the cluster"
 }
+variable "private_cloud" {
+  description = "Whether the environment is in the private cloud region"
+  default     = false
+}
+variable "public_zone" {
+  description = "The public zone equivalent if the cluster is running in a private cloud zone"
+}
 variable "template_name" {
   description = "Block describing the preconfigured operating system"
 }
@@ -32,6 +41,7 @@ variable "machines" {
     cpu       = string
     mem       = string
     disk_size = number
+    server_group : string
     additional_disks = map(object({
       size = number
       tier = string
@@ -142,7 +152,6 @@ variable "server_groups" {
   type = map(object({
     anti_affinity_policy = string
-    servers              = list(string)
   }))
   default = {}


@@ -3,7 +3,7 @@ terraform {
   required_providers {
     upcloud = {
       source  = "UpCloudLtd/upcloud"
-      version = "~>2.12.0"
+      version = "~>5.6.0"
     }
   }
   required_version = ">= 0.13"


@@ -424,7 +424,7 @@ calico_wireguard_enabled: true
 The following OSes will require enabling the EPEL repo in order to bring in wireguard tools:
-* CentOS 7 & 8
+* CentOS 8
 * AlmaLinux 8
 * Rocky Linux 8
 * Amazon Linux 2


@@ -132,7 +132,7 @@ Wireguard option is only available in Cilium 1.10.0 and newer.
 ### IPsec Encryption
-For further information, make sure to check the official [Cilium documentation.](https://docs.cilium.io/en/stable/gettingstarted/encryption-ipsec/)
+For further information, make sure to check the official [Cilium documentation.](https://docs.cilium.io/en/stable/security/network/encryption-ipsec/)
 To enable IPsec encryption, you just need to set three variables.
@@ -157,7 +157,7 @@ echo "cilium_ipsec_key: "$(echo -n "3 rfc4106(gcm(aes)) $(echo $(dd if=/dev/uran
 ### Wireguard Encryption
-For further information, make sure to check the official [Cilium documentation.](https://docs.cilium.io/en/stable/gettingstarted/encryption-wireguard/)
+For further information, make sure to check the official [Cilium documentation.](https://docs.cilium.io/en/stable/security/network/encryption-wireguard/)
 To enable Wireguard encryption, you just need to set two variables.
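For reference, the two Wireguard variables mentioned in the doc text map roughly to the following group_vars entries. This is a sketch based on the Kubespray Cilium role's documented variable names; verify them against the role defaults in your Kubespray version:

```yaml
# Assumed variable names from the Kubespray Cilium role docs;
# confirm against roles/network_plugin/cilium defaults for your release.
cilium_encryption_enabled: true
cilium_encryption_type: "wireguard"
```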


@@ -16,14 +16,6 @@ Enabling the `overlay2` graph driver:
 docker_storage_options: -s overlay2
 ```
-Enabling `docker_container_storage_setup`, it will configure devicemapper driver on Centos7 or RedHat7.
-Deployers must be define a disk path for `docker_container_storage_setup_devs`, otherwise docker-storage-setup will be executed incorrectly.
-```yaml
-docker_container_storage_setup: true
-docker_container_storage_setup_devs: /dev/vdb
-```
 Changing the Docker cgroup driver (native.cgroupdriver); valid options are `systemd` or `cgroupfs`, default is `systemd`:
 ```yaml


@@ -231,6 +231,7 @@ The following tags are defined in playbooks:
 | services | Remove services (etcd, kubelet etc...) when resetting |
 | snapshot | Enabling csi snapshot |
 | snapshot-controller | Configuring csi snapshot controller |
+| system-packages | Install packages using OS package manager |
 | upgrade | Upgrading, f.e. container images/binaries |
 | upload | Distributing images/binaries across hosts |
 | vsphere-csi-driver | Configuring csi driver: vsphere |


@@ -216,6 +216,8 @@ Stack](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.m
 The percent is calculated by dividing this field value by 100, so the field value must be between 0 and 100, inclusive.
 When specified, the value must be less than imageGCHighThresholdPercent. Default: 80
+* *kubelet_max_parallel_image_pulls* - Sets the maximum number of image pulls in parallel. The value is `1` by default, which means serial image pulling; set it to an integer greater than `1` to enable parallel image pulls.
 * *kubelet_make_iptables_util_chains* - If `true`, causes the kubelet ensures a set of `iptables` rules are present on host.
 * *kubelet_cpu_manager_policy* - If set to `static`, allows pods with certain resource characteristics to be granted increased CPU affinity and exclusivity on the node. And it should be set with `kube_reserved` or `system-reserved`, enable this with the following guide: [Control CPU Management Policies on the Node](https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/)
@@ -243,6 +245,10 @@ kubelet_cpu_manager_policy_options:
 By default the `kubelet_secure_addresses` is set with the `10.0.0.110` the ansible control host uses `eth0` to connect to the machine. In case you want to use `eth1` as the outgoing interface on which `kube-apiserver` connects to the `kubelet`s, you should override the variable in this way: `kubelet_secure_addresses: "192.168.1.110"`.
+* *kubelet_systemd_wants_dependencies* - List of kubelet service dependencies, other than container runtime.
+  If you use nfs dynamically mounted volumes, sometimes rpc-statd does not start within the kubelet. You can fix it with this parameter: `kubelet_systemd_wants_dependencies: ["rpc-statd.service"]`. This will add `Wants=rpc-statd.service` in the `[Unit]` section of /etc/systemd/system/kubelet.service.
 * *node_labels* - Labels applied to nodes via `kubectl label node`.
 For example, labels can be set in the inventory as variables or more widely in group_vars.
 *node_labels* can only be defined as a dict:
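The dict form of *node_labels* referred to above can be sketched as follows; the label keys and values here are illustrative placeholders, not required names:

```yaml
# Example inventory/group_vars entry; label keys/values are placeholders.
node_labels:
  node-role.kubernetes.io/worker: ""
  disktype: "ssd"
```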


@@ -1,4 +1,3 @@
 # OpenStack
 ## Known compatible public clouds


@@ -5,8 +5,8 @@
 1. build: build a docker image to be used in the pipeline
 2. unit-tests: fast jobs for fast feedback (linting, etc...)
 3. deploy-part1: small number of jobs to test if the PR works with default settings
-4. deploy-part2: slow jobs testing different platforms, OS, settings, CNI, etc...
-5. deploy-part3: very slow jobs (upgrades, etc...)
+4. deploy-extended: slow jobs testing different platforms, OS, settings, CNI, etc...
+5. deploy-extended: very slow jobs (upgrades, etc...)
 ## Runners


@@ -8,9 +8,8 @@ To generate this Matrix run `./tests/scripts/md-table/main.py`
 |---| --- | --- | --- | --- | --- | --- | --- | --- |
 almalinux8 | :white_check_mark: | :x: | :x: | :x: | :white_check_mark: | :x: | :x: | :x: |
 amazon | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
-centos7 | :white_check_mark: | :x: | :x: | :white_check_mark: | :x: | :white_check_mark: | :x: | :white_check_mark: |
-debian10 | :white_check_mark: | :white_check_mark: | :x: | :x: | :x: | :x: | :white_check_mark: | :x: |
-debian11 | :white_check_mark: | :x: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: |
+centos8 | :white_check_mark: | :x: | :x: | :white_check_mark: | :x: | :white_check_mark: | :x: | :white_check_mark: |
+debian11 | :white_check_mark: | :x: | :white_check_mark: | :x: | :x: | :x: | :white_check_mark: | :x: |
 debian12 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: |
 fedora37 | :white_check_mark: | :x: | :x: | :x: | :x: | :white_check_mark: | :x: | :x: |
 fedora38 | :x: | :x: | :x: | :x: | :white_check_mark: | :x: | :x: | :x: |
@@ -27,8 +26,7 @@ ubuntu24 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
 |---| --- | --- | --- | --- | --- | --- | --- | --- |
 almalinux8 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
 amazon | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
-centos7 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
-debian10 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
+centos8 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
 debian11 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
 debian12 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
 fedora37 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
@@ -46,8 +44,7 @@ ubuntu24 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
 |---| --- | --- | --- | --- | --- | --- | --- | --- |
 almalinux8 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
 amazon | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
-centos7 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
-debian10 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
+centos8 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
 debian11 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
 debian12 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
 fedora37 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |


@@ -80,7 +80,7 @@ cat << EOF > vagrant/config.rb
 \$instance_name_prefix = "kub"
 \$vm_cpus = 1
 \$num_instances = 3
-\$os = "centos-bento"
+\$os = "centos8-bento"
 \$subnet = "10.0.20"
 \$network_plugin = "flannel"
 \$inventory = "$INV"


@@ -35,7 +35,7 @@ kubectl create clusterrolebinding cluster-admin-binding \
 The following **Mandatory Command** is required for all deployments except for AWS. See below for the AWS version.
 ```console
-kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.1/deploy/static/provider/cloud/deploy.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.2/deploy/static/provider/cloud/deploy.yaml
 ```
 ### Provider Specific Steps


@@ -1,10 +1,5 @@
 # CentOS and derivatives
-## CentOS 7
-The maximum python version officially supported in CentOS is 3.6. Ansible as of version 5 (ansible core 2.12.x) increased their python requirement to python 3.8 and above.
-Kubespray supports multiple ansible versions but only the default (5.x) gets wide testing coverage. If your deployment host is CentOS 7 it is recommended to use one of the earlier versions still supported.
 ## CentOS 8
 If you have containers that are using iptables in the host network namespace (`hostNetwork=true`),


@@ -1,6 +1,6 @@
 # cgroups
-To avoid the rivals for resources between containers or the impact on the host in Kubernetes, the kubelet components will rely on cgroups to limit the containers resources usage.
+To avoid resource contention between containers and host daemons in Kubernetes, the kubelet components can use cgroups to limit resource usage.
 ## Enforcing Node Allocatable
@@ -20,8 +20,9 @@ Here is an example:
 ```yaml
 kubelet_enforce_node_allocatable: "pods,kube-reserved,system-reserved"
-# Reserve this space for kube resources
-# Set to true to reserve resources for kube daemons
+# Set kube_reserved to true to run kubelet and container-engine daemons in a dedicated cgroup.
+# This is required if you want to enforce limits on the resource usage of these daemons.
+# It is not required if you just want to make resource reservations (kube_memory_reserved, kube_cpu_reserved, etc.)
 kube_reserved: true
 kube_reserved_cgroups_for_service_slice: kube.slice
 kube_reserved_cgroups: "/{{ kube_reserved_cgroups_for_service_slice }}"


@@ -30,12 +30,12 @@ loadbalancer. If you wish to control the name of the loadbalancer container,
 you can set the variable `loadbalancer_apiserver_pod_name`.
 If you choose to NOT use the local internal loadbalancer, you will need to
-use the [kube-vip](kube-vip.md) ansible role or configure your own loadbalancer to achieve HA. By default, it only configures a non-HA endpoint, which points to the
+use the [kube-vip](/docs/ingress/kube-vip.md) ansible role or configure your own loadbalancer to achieve HA. By default, it only configures a non-HA endpoint, which points to the
 `access_ip` or IP address of the first server node in the `kube_control_plane` group.
 It can also configure clients to use endpoints for a given loadbalancer type.
 The following diagram shows how traffic to the apiserver is directed.
-![Image](figures/loadbalancer_localhost.png?raw=true)
+![Image](/docs/figures/loadbalancer_localhost.png?raw=true)
 A user may opt to use an external loadbalancer (LB) instead. An external LB
 provides access for external clients, while the internal LB accepts client


@@ -103,7 +103,9 @@ If you use the settings like the one above, you'll need to define in your invent
 can store them anywhere as long as it's accessible by kubespray. It's recommended to use `*_version` in the path so
 that you don't need to modify this setting everytime kubespray upgrades one of these components.
 * `yum_repo`/`debian_repo`/`ubuntu_repo`: OS package repository depending on your OS, should point to your internal
-  repository. Adjust the path accordingly.
+  repository. Adjust the path accordingly. Used only for Docker/Containerd packages (if needed); other packages might
+  be installed from other repositories. You might disable installing packages from other repositories by skipping
+  the `system-packages` tag.
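As a hedged illustration of the repository variables described above, a group_vars override might look like this; the hostnames and paths are placeholders for your internal mirror, not real endpoints:

```yaml
# Internal mirror endpoints; adjust hosts and paths to your environment.
files_repo: "https://files.mirror.example.com/files"
yum_repo: "https://pkgs.mirror.example.com/rpms"
ubuntu_repo: "https://pkgs.mirror.example.com/apt"
```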
## Install Kubespray Python Packages ## Install Kubespray Python Packages


@@ -1,4 +1,3 @@
 # Recovering the control plane
 To recover from broken nodes in the control plane use the "recover\-control\-plane.yml" playbook.
@@ -8,7 +7,6 @@ Examples of what broken means in this context:
 * One or more bare metal node(s) suffer from unrecoverable hardware failure
 * One or more node(s) fail during patching or upgrading
 * Etcd database corruption
 * Other node related failures leaving your control plane degraded or nonfunctional
 __Note that you need at least one functional node to be able to recover using this method.__


@@ -12,7 +12,7 @@
 - name: Setup ssh config to use the bastion
   hosts: localhost
-  gather_facts: False
+  gather_facts: false
   roles:
     - { role: kubespray-defaults}
     - { role: bastion-ssh-config, tags: ["localhost", "bastion"]}


@@ -9,42 +9,16 @@ authors:
 tags:
   - infrastructure
 repository: https://github.com/kubernetes-sigs/kubespray
+issues: https://github.com/kubernetes-sigs/kubespray/issues
+documentation: https://kubespray.io
 license_file: LICENSE
 dependencies:
   ansible.utils: '>=2.5.0'
   community.general: '>=3.0.0'
-build_ignore:
-  - .github
-  - '*.tar.gz'
-  - extra_playbooks
-  - inventory
-  - scripts
-  - test-infra
-  - .ansible-lint
-  - .editorconfig
-  - .gitignore
-  - .gitlab-ci
-  - .gitlab-ci.yml
-  - .gitmodules
-  - .markdownlint.yaml
-  - .nojekyll
-  - .pre-commit-config.yaml
-  - .yamllint
-  - Dockerfile
-  - FILES.json
-  - MANIFEST.json
-  - Makefile
-  - Vagrantfile
-  - _config.yml
-  - ansible.cfg
-  - requirements*txt
-  - setup.cfg
-  - setup.py
-  - index.html
-  - reset.yml
-  - cluster.yml
-  - scale.yml
-  - recover-control-plane.yml
-  - remove-node.yml
-  - upgrade-cluster.yml
-  - library
+  ansible.netcommon: '>=5.3.0'
+  ansible.posix: '>=1.5.4'
+  community.docker: '>=3.11.0'
+  kubernetes.core: '>=2.4.2'
+manifest:
+  directives:
+    - recursive-exclude tests **

@@ -24,8 +24,21 @@
 # containerd_grpc_max_recv_message_size: 16777216
 # containerd_grpc_max_send_message_size: 16777216
+# Containerd debug socket location: unix or tcp format
+# containerd_debug_address: ""
+# Containerd log level
 # containerd_debug_level: "info"
+# Containerd logs format, supported values: text, json
+# containerd_debug_format: ""
+# Containerd debug socket UID
+# containerd_debug_uid: 0
+# Containerd debug socket GID
+# containerd_debug_gid: 0
 # containerd_metrics_address: ""
 # containerd_metrics_grpc_histogram: false


@@ -18,7 +18,7 @@
 # quay_image_repo: "{{ registry_host }}"
 ## Kubernetes components
-# kubeadm_download_url: "{{ files_repo }}/dl.k8s.io/release/{{ kubeadm_version }}/bin/linux/{{ image_arch }}/kubeadm"
+# kubeadm_download_url: "{{ files_repo }}/dl.k8s.io/release/{{ kube_version }}/bin/linux/{{ image_arch }}/kubeadm"
 # kubectl_download_url: "{{ files_repo }}/dl.k8s.io/release/{{ kube_version }}/bin/linux/{{ image_arch }}/kubectl"
 # kubelet_download_url: "{{ files_repo }}/dl.k8s.io/release/{{ kube_version }}/bin/linux/{{ image_arch }}/kubelet"
@@ -82,7 +82,7 @@
 # krew_download_url: "{{ files_repo }}/github.com/kubernetes-sigs/krew/releases/download/{{ krew_version }}/krew-{{ host_os }}_{{ image_arch }}.tar.gz"
 ## CentOS/Redhat/AlmaLinux
-### For EL7, base and extras repo must be available, for EL8, baseos and appstream
+### For EL8, baseos and appstream must be available,
 ### By default we enable those repo automatically
 # rhel_enable_repos: false
 ### Docker / Containerd


@@ -32,4 +32,7 @@
 # etcd_experimental_enable_distributed_tracing: false
 # etcd_experimental_distributed_tracing_sample_rate: 100
 # etcd_experimental_distributed_tracing_address: "localhost:4317"
 # etcd_experimental_distributed_tracing_service_name: etcd
+## The interval for etcd watch progress notify events
+# etcd_experimental_watch_progress_notify_interval: 5s


@@ -96,10 +96,16 @@ rbd_provisioner_enabled: false
 # rbd_provisioner_storage_class: rbd
 # rbd_provisioner_reclaim_policy: Delete
+# Gateway API CRDs
+gateway_api_enabled: false
+# gateway_api_experimental_channel: false
 # Nginx ingress controller deployment
 ingress_nginx_enabled: false
 # ingress_nginx_host_network: false
 # ingress_nginx_service_type: LoadBalancer
+# ingress_nginx_service_nodeport_http: 30080
+# ingress_nginx_service_nodeport_https: 30081
 ingress_publish_status_address: ""
 # ingress_nginx_nodeselector:
 #   kubernetes.io/os: "linux"


@@ -17,7 +17,7 @@ kube_token_dir: "{{ kube_config_dir }}/tokens"
 kube_api_anonymous_auth: true
 ## Change this to use another Kubernetes version, e.g. a current beta release
-kube_version: v1.29.5
+kube_version: v1.30.4
 # Where the binaries will be downloaded.
 # Note: ensure that you've enough disk space (about 1G)
@@ -262,7 +262,7 @@ default_kubelet_config_dir: "{{ kube_config_dir }}/dynamic_kubelet_dir"
 # kubelet_runtime_cgroups_cgroupfs: "/system.slice/{{ container_manager }}.service"
 # kubelet_kubelet_cgroups_cgroupfs: "/system.slice/kubelet.service"
-# Optionally reserve this space for kube daemons.
+# Whether to run kubelet and container-engine daemons in a dedicated cgroup.
 # kube_reserved: false
 ## Uncomment to override default values
 ## The following two items need to be set when kube_reserved is true


@@ -163,6 +163,13 @@ cilium_l2announcements: false
 ### Enable auto generate certs if cilium_hubble_install: true
 # cilium_hubble_tls_generate: false
+### Tune cilium_hubble_event_buffer_capacity & cilium_hubble_event_queue_size values to avoid dropping events when hubble is under heavy load
+### Capacity of Hubble events buffer. The provided value must be one less than an integer power of two and no larger than 65535
+### (ie: 1, 3, ..., 2047, 4095, ..., 65535) (default 4095)
+# cilium_hubble_event_buffer_capacity: 4095
+### Buffer size of the channel to receive monitor events.
+# cilium_hubble_event_queue_size: 50
 # IP address management mode for v1.9+.
 # https://docs.cilium.io/en/v1.9/concepts/networking/ipam/
 # cilium_ipam_mode: kubernetes


@@ -4,7 +4,7 @@ FROM ubuntu:jammy-20230308
 # Pip needs this as well at the moment to install ansible
 # (and potentially other packages)
 # See: https://github.com/pypa/pip/issues/10219
-ENV VAGRANT_VERSION=2.3.7 \
+ENV VAGRANT_VERSION=2.4.1 \
     VAGRANT_DEFAULT_PROVIDER=libvirt \
     VAGRANT_ANSIBLE_TAGS=facts \
     LANG=C.UTF-8 \
@@ -30,6 +30,9 @@ RUN apt update -q \
     software-properties-common \
     unzip \
     libvirt-clients \
+    qemu-utils \
+    qemu-kvm \
+    dnsmasq \
     && curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - \
     && add-apt-repository "deb [arch=$(dpkg --print-architecture)] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" \
     && apt update -q \
@@ -37,13 +40,15 @@ RUN apt update -q \
     && apt autoremove -yqq --purge && apt clean && rm -rf /var/lib/apt/lists/* /var/log/*
 WORKDIR /kubespray
-ADD ./requirements.txt /kubespray/requirements.txt
-ADD ./tests/requirements.txt /kubespray/tests/requirements.txt
-ADD ./roles/kubespray-defaults/defaults/main/main.yml /kubespray/roles/kubespray-defaults/defaults/main/main.yml
-RUN update-alternatives --install /usr/bin/python python /usr/bin/python3 1 \
+RUN --mount=type=bind,target=./requirements.txt,src=./requirements.txt \
+    --mount=type=bind,target=./tests/requirements.txt,src=./tests/requirements.txt \
+    --mount=type=bind,target=./roles/kubespray-defaults/defaults/main/main.yml,src=./roles/kubespray-defaults/defaults/main/main.yml \
+    update-alternatives --install /usr/bin/python python /usr/bin/python3 1 \
     && pip install --no-compile --no-cache-dir pip -U \
     && pip install --no-compile --no-cache-dir -r tests/requirements.txt \
-    && pip install --no-compile --no-cache-dir -r requirements.txt \
     && KUBE_VERSION=$(sed -n 's/^kube_version: //p' roles/kubespray-defaults/defaults/main/main.yml) \
     && curl -L https://dl.k8s.io/release/$KUBE_VERSION/bin/linux/$(dpkg --print-architecture)/kubectl -o /usr/local/bin/kubectl \
     && echo $(curl -L https://dl.k8s.io/release/$KUBE_VERSION/bin/linux/$(dpkg --print-architecture)/kubectl.sha256) /usr/local/bin/kubectl | sha256sum --check \


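The Dockerfile above derives the kubectl version to download from kubespray's defaults file with a one-line `sed`. The extraction can be sketched in isolation (the temporary path and sample version below are illustrative, standing in for `roles/kubespray-defaults/defaults/main/main.yml`):

```shell
# Stand-in for roles/kubespray-defaults/defaults/main/main.yml
printf 'kube_owner: kube\nkube_version: v1.30.4\n' > /tmp/main.yml
# Same sed invocation as the Dockerfile: print only the value after "kube_version: "
KUBE_VERSION=$(sed -n 's/^kube_version: //p' /tmp/main.yml)
echo "$KUBE_VERSION"   # prints v1.30.4
```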
@@ -2,7 +2,7 @@
 - name: Check Ansible version
   hosts: all
   gather_facts: false
-  become: no
+  become: false
   run_once: true
   vars:
     minimal_ansible_version: 2.16.4
@@ -25,7 +25,6 @@
       tags:
         - check
-    # CentOS 7 provides too old jinja version
     - name: "Check that jinja is not too old (install via pip)"
       assert:
         msg: "Your Jinja version is too old, install via pip"


@@ -51,7 +51,7 @@
 - name: Install bastion ssh config
   hosts: bastion[0]
-  gather_facts: False
+  gather_facts: false
   environment: "{{ proxy_disable_env }}"
   roles:
     - { role: kubespray-defaults }


@@ -7,7 +7,7 @@
 - name: Prepare for etcd install
   hosts: k8s_cluster:etcd
-  gather_facts: False
+  gather_facts: false
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   environment: "{{ proxy_disable_env }}"
   roles:
@@ -21,7 +21,7 @@
 - name: Install Kubernetes nodes
   hosts: k8s_cluster
-  gather_facts: False
+  gather_facts: false
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   environment: "{{ proxy_disable_env }}"
   roles:
@@ -30,7 +30,7 @@
 - name: Install the control plane
   hosts: kube_control_plane
-  gather_facts: False
+  gather_facts: false
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   environment: "{{ proxy_disable_env }}"
   roles:
@@ -41,7 +41,7 @@
 - name: Invoke kubeadm and install a CNI
   hosts: k8s_cluster
-  gather_facts: False
+  gather_facts: false
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   environment: "{{ proxy_disable_env }}"
   roles:
@@ -54,7 +54,7 @@
 - name: Install Calico Route Reflector
   hosts: calico_rr
-  gather_facts: False
+  gather_facts: false
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   environment: "{{ proxy_disable_env }}"
   roles:
@@ -63,7 +63,7 @@
 - name: Patch Kubernetes for Windows
   hosts: kube_control_plane[0]
-  gather_facts: False
+  gather_facts: false
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   environment: "{{ proxy_disable_env }}"
   roles:
@@ -72,7 +72,7 @@
 - name: Install Kubernetes apps
   hosts: kube_control_plane
-  gather_facts: False
+  gather_facts: false
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   environment: "{{ proxy_disable_env }}"
   roles:
@@ -86,7 +86,7 @@
 - name: Apply resolv.conf changes now that cluster DNS is up
   hosts: k8s_cluster
-  gather_facts: False
+  gather_facts: false
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   environment: "{{ proxy_disable_env }}"
   roles:


@@ -15,7 +15,7 @@
 - name: Gather facts
   hosts: k8s_cluster:etcd:calico_rr
-  gather_facts: False
+  gather_facts: false
   tags: always
   tasks:
     - name: Gather minimal facts


@@ -16,7 +16,7 @@
 - name: Install etcd
   hosts: etcd:kube_control_plane:_kubespray_needs_etcd
-  gather_facts: False
+  gather_facts: false
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   environment: "{{ proxy_disable_env }}"
   roles:


@@ -4,13 +4,13 @@
 - name: Confirm node removal
   hosts: "{{ node | default('etcd:k8s_cluster:calico_rr') }}"
-  gather_facts: no
+  gather_facts: false
   tasks:
     - name: Confirm Execution
       pause:
         prompt: "Are you sure you want to delete nodes state? Type 'yes' to delete nodes."
       register: pause_result
-      run_once: True
+      run_once: true
       when:
         - not (skip_confirmation | default(false) | bool)
@@ -25,7 +25,7 @@
 - name: Reset node
   hosts: "{{ node | default('kube_node') }}"
-  gather_facts: no
+  gather_facts: false
   environment: "{{ proxy_disable_env }}"
   roles:
     - { role: kubespray-defaults, when: reset_nodes | default(True) | bool }
@@ -36,7 +36,7 @@
 # Currently cannot remove first master or etcd
 - name: Post node removal
   hosts: "{{ node | default('kube_control_plane[1:]:etcd[1:]') }}"
-  gather_facts: no
+  gather_facts: false
   environment: "{{ proxy_disable_env }}"
   roles:
     - { role: kubespray-defaults, when: reset_nodes | default(True) | bool }


@@ -7,13 +7,13 @@
 - name: Reset cluster
   hosts: etcd:k8s_cluster:calico_rr
-  gather_facts: False
+  gather_facts: false
   pre_tasks:
     - name: Reset Confirmation
       pause:
         prompt: "Are you sure you want to reset cluster state? Type 'yes' to reset your cluster."
       register: reset_confirmation_prompt
-      run_once: True
+      run_once: true
       when:
         - not (skip_confirmation | default(false) | bool)
         - reset_confirmation is not defined


@@ -7,7 +7,7 @@
 - name: Generate the etcd certificates beforehand
   hosts: etcd:kube_control_plane
-  gather_facts: False
+  gather_facts: false
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   environment: "{{ proxy_disable_env }}"
   roles:
@@ -24,7 +24,7 @@
 - name: Download images to ansible host cache via first kube_control_plane node
   hosts: kube_control_plane[0]
-  gather_facts: False
+  gather_facts: false
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   environment: "{{ proxy_disable_env }}"
   roles:
@@ -34,7 +34,7 @@
 - name: Target only workers to get kubelet installed and checking in on any new nodes(engine)
   hosts: kube_node
-  gather_facts: False
+  gather_facts: false
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   environment: "{{ proxy_disable_env }}"
   roles:
@@ -53,7 +53,7 @@
 - name: Target only workers to get kubelet installed and checking in on any new nodes(node)
   hosts: kube_node
-  gather_facts: False
+  gather_facts: false
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   environment: "{{ proxy_disable_env }}"
   roles:
@@ -63,7 +63,7 @@
 - name: Upload control plane certs and retrieve encryption key
   hosts: kube_control_plane | first
   environment: "{{ proxy_disable_env }}"
-  gather_facts: False
+  gather_facts: false
   tags: kubeadm
   roles:
     - { role: kubespray-defaults }
@@ -84,7 +84,7 @@
 - name: Target only workers to get kubelet installed and checking in on any new nodes(network)
   hosts: kube_node
-  gather_facts: False
+  gather_facts: false
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   environment: "{{ proxy_disable_env }}"
   roles:
@@ -96,7 +96,7 @@
 - name: Apply resolv.conf changes now that cluster DNS is up
   hosts: k8s_cluster
-  gather_facts: False
+  gather_facts: false
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   environment: "{{ proxy_disable_env }}"
   roles:


@@ -7,7 +7,7 @@
 - name: Download images to ansible host cache via first kube_control_plane node
   hosts: kube_control_plane[0]
-  gather_facts: False
+  gather_facts: false
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   environment: "{{ proxy_disable_env }}"
   roles:
@@ -17,7 +17,7 @@
 - name: Prepare nodes for upgrade
   hosts: k8s_cluster:etcd:calico_rr
-  gather_facts: False
+  gather_facts: false
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   environment: "{{ proxy_disable_env }}"
   roles:
@@ -27,7 +27,7 @@
 - name: Upgrade container engine on non-cluster nodes
   hosts: etcd:calico_rr:!k8s_cluster
-  gather_facts: False
+  gather_facts: false
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   environment: "{{ proxy_disable_env }}"
   serial: "{{ serial | default('20%') }}"
@@ -39,7 +39,7 @@
   import_playbook: install_etcd.yml
 - name: Handle upgrades to master components first to maintain backwards compat.
-  gather_facts: False
+  gather_facts: false
   hosts: kube_control_plane
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   environment: "{{ proxy_disable_env }}"
@@ -62,7 +62,7 @@
 - name: Upgrade calico and external cloud provider on all masters, calico-rrs, and nodes
   hosts: kube_control_plane:calico_rr:kube_node
-  gather_facts: False
+  gather_facts: false
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   serial: "{{ serial | default('20%') }}"
   environment: "{{ proxy_disable_env }}"
@@ -75,7 +75,7 @@
 - name: Finally handle worker upgrades, based on given batch size
   hosts: kube_node:calico_rr:!kube_control_plane
-  gather_facts: False
+  gather_facts: false
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   environment: "{{ proxy_disable_env }}"
   serial: "{{ serial | default('20%') }}"
@@ -93,7 +93,7 @@
 - name: Patch Kubernetes for Windows
   hosts: kube_control_plane[0]
-  gather_facts: False
+  gather_facts: false
   any_errors_fatal: true
   environment: "{{ proxy_disable_env }}"
   roles:
@@ -102,7 +102,7 @@
 - name: Install Calico Route Reflector
   hosts: calico_rr
-  gather_facts: False
+  gather_facts: false
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   environment: "{{ proxy_disable_env }}"
   roles:
@@ -111,7 +111,7 @@
 - name: Install Kubernetes apps
   hosts: kube_control_plane
-  gather_facts: False
+  gather_facts: false
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   environment: "{{ proxy_disable_env }}"
   roles:
@@ -122,7 +122,7 @@
 - name: Apply resolv.conf changes now that cluster DNS is up
   hosts: k8s_cluster
-  gather_facts: False
+  gather_facts: false
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   environment: "{{ proxy_disable_env }}"
   roles:


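The playbook edits above all normalize YAML booleans (`no`, `True`, `False`) to the canonical lowercase `true`/`false` that ansible-lint's `truthy` rule expects. A grep like the following can flag any remaining non-canonical spellings; the sample file is illustrative:

```shell
# Illustrative playbook snippet written in the normalized style
cat > /tmp/play.yml <<'EOF'
- name: Example play
  hosts: all
  gather_facts: false
  become: false
EOF
# Flag lines still using True/False/yes/no as bare boolean values
grep -En ': (True|False|yes|no)$' /tmp/play.yml || echo "all booleans canonical"
```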
@@ -1,10 +1,7 @@
-ansible==9.5.1
-cryptography==42.0.7
-jinja2==3.1.4
+ansible==9.8.0
+# Needed for jinja2 json_query templating
 jmespath==1.0.1
-MarkupSafe==2.1.5
-netaddr==1.2.1
-pbr==6.0.0
-ruamel.yaml==0.18.6
-ruamel.yaml.clib==0.2.8
-jsonschema==4.22.0
+# Needed for ansible.utils.validate module
+jsonschema==4.23.0
+# Needed for ansible.utils.ipaddr
+netaddr==1.3.0

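The trimmed requirements.txt keeps only the direct dependencies, each annotated with a comment; transitive pins (cryptography, jinja2, MarkupSafe, pbr, ruamel.yaml) are dropped since ansible pulls them in. A sketch of extracting the installable pins from the new file (written to a temporary path for illustration):

```shell
# New requirements.txt contents, per the diff above
cat > /tmp/requirements.txt <<'EOF'
ansible==9.8.0
# Needed for jinja2 json_query templating
jmespath==1.0.1
# Needed for ansible.utils.validate module
jsonschema==4.23.0
# Needed for ansible.utils.ipaddr
netaddr==1.3.0
EOF
# Drop comment lines, leaving the four direct pins pip will install
grep -v '^#' /tmp/requirements.txt
```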
Some files were not shown because too many files have changed in this diff.