Compare commits


345 Commits

Author SHA1 Message Date
github-actions[bot]
ea9bbc716a Patch versions updates 2025-12-13 02:52:58 +00:00
ChengHao Yang
c4c3205a71 Releng: galaxy version to 2.29.2 (#12786)
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-12-11 19:47:30 -08:00
ChengHao Yang
0c6a29553f Patch versions updates (#12782)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-12-11 00:55:31 -08:00
Max Gautier
2375fae1c2 Patch versions updates (#12763)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-12-04 06:39:01 -08:00
Max Gautier
55f7b7f54c Patch versions updates (#12744)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-11-25 00:52:36 -08:00
k8s-infra-cherrypick-robot
dbca6a7757 [release-2.29] CI: enable unsafe_show_logs == true by default (#12728)
* CI: enable unsafe_show_logs == true by default

* Deduplicate defaults vars (unsafe_show_logs)

---------

Co-authored-by: Max Gautier <mg@max.gautier.name>
2025-11-19 23:32:02 -08:00
k8s-infra-cherrypick-robot
c5c43619a7 Upgrade cilium from 1.18.3 to 1.18.4 (#12725)
Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
Co-authored-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2025-11-18 20:09:59 -08:00
Max Gautier
584b0a4036 Patch versions updates (#12719)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-11-18 05:15:39 -08:00
k8s-infra-cherrypick-robot
084c2be8b9 CI: use a dedicated disk for releases (#12721)
This should make 'no space left on device' problems easier to handle

Use /tmp/releases as local_release_dir on CI-created machines, while keeping
the same folder on the runner (needed for gitlab-ci runner pods).

Co-authored-by: Max Gautier <mg@max.gautier.name>
2025-11-18 03:29:39 -08:00
k8s-infra-cherrypick-robot
932025fbd6 Let containerd create storage / state dir (#12722)
Containerd manages these directories by itself, so there is no need to override it and
change permissions.

Co-authored-by: Max Gautier <mg@max.gautier.name>
2025-11-18 03:09:38 -08:00
k8s-infra-cherrypick-robot
a04592de18 Adjust hubble export values for cilium 1.18 schema change (#12718)
Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
Co-authored-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2025-11-18 00:41:42 -08:00
k8s-infra-cherrypick-robot
d8b9288b27 [release-2.29] CI: Try a full ssh connection on hosts instead of only checking the port (#12711)
* CI: Try a full ssh connection on hosts instead of only checking the port

If we only try the port, we can try to connect in the playbook which is
executed next even though the managed node has not yet completed its
boot-up sequence ("System is booting up. Unprivileged users are not
permitted to log in yet. Please come back later. For technical details,
see pam_nologin(8).")

This does not account for python-less hosts, but we don't use those in
CI anyway (for now, at least).

* CI: Remove connection method override when creating VMs

This prevented wait_for_connection from working correctly by hijacking the
connection to localhost, thus bypassing the connection check.

---------

Co-authored-by: Max Gautier <mg@max.gautier.name>
2025-11-15 12:37:38 -08:00
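A minimal sketch of the kind of check this commit describes, assuming a generic play over the newly created VMs (host pattern, delay, and timeout are illustrative, not taken from the Kubespray CI):

```yml
# Illustrative only: wait until a real SSH login (not just an open port) succeeds.
- name: Wait for the managed nodes to finish booting
  hosts: all
  gather_facts: false
  tasks:
    - name: Try a full SSH connection instead of only probing the port
      ansible.builtin.wait_for_connection:
        delay: 5        # give the node a few seconds before the first attempt (assumed value)
        timeout: 600    # overall deadline in seconds (assumed value)
```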
Max Gautier
cbdd7cf3a7 update pre-commit hooks (#12706) 2025-11-14 22:41:40 -08:00
k8s-infra-cherrypick-robot
3c0cff983d fix(cilium):correct loadBalancer.mode rendering in values.yaml (#12705)
Co-authored-by: Anurag Ojha <aojharaj2004@gmail.com>
2025-11-14 07:01:40 -08:00
k8s-infra-cherrypick-robot
e5a1f68a2c Update Calico apiserver RBAC for Kubernetes 1.33+ (#12695)
Add missing RBAC permissions for Calico apiserver to function correctly
with Kubernetes 1.33+

Changes:

1. Add K8s 1.33 ValidatingAdmissionPolicy resources to calico-webhook-reader
   - validatingadmissionpolicies
   - validatingadmissionpolicybindings

Kubernetes 1.33 introduced ValidatingAdmissionPolicy resources (KEP-3488)
that require explicit RBAC permissions. Without these changes, Calico
apiserver on k8s 1.33+ will not work and needless errors are logged

Co-authored-by: rickerc <chris.ricker@gmail.com>
2025-11-14 04:49:38 -08:00
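As a hedged illustration of the RBAC addition described above (the ClusterRole name comes from the commit message; the verbs and exact rule layout used by Kubespray are assumptions):

```yml
# Sketch of the extra read permissions the Calico apiserver needs on Kubernetes 1.33+.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: calico-webhook-reader
rules:
  - apiGroups: ["admissionregistration.k8s.io"]
    resources:
      - validatingadmissionpolicies
      - validatingadmissionpolicybindings
    verbs: ["get", "list", "watch"]   # assumed read-only verbs
```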
k8s-infra-cherrypick-robot
fe566df651 Fix the (upgrade/remove_node) + collection test cases (#12687)
The 'old' playbook and the collection use '-' and '_' as separator,
which breaks the logic in scripts/testcases_run.sh.

Add aliases using the old schemes to make the test work and avoid
breaking anything.

Both '-' and '_' variants will be deleted once we switch to supporting
collection only.

Co-authored-by: Max Gautier <mg@max.gautier.name>
2025-11-10 06:46:57 -08:00
k8s-infra-cherrypick-robot
59b3c686a8 [release-2.29] Remove etcd member by peerURLs (#12685)
* Remove etcd member by peerURLs

The way to obtain the IP of a particular member is convoluted and depends
on multiple variables. The match is also textual and it's not clear
against what we're matching.

It's also broken for etcd members which are not also Kubernetes nodes,
because the "Lookup node IP in kubernetes" task will fail and abort the
play.

Instead, match against 'peerURLs', which does not need a new variable, and
use JSON output.

* Add testcase for etcd removal on external etcd

* do not merge

* fixup! Remove etcd member by peerURLs

* fixup! Remove etcd member by peerURLs

---------

Co-authored-by: Max Gautier <mg@max.gautier.name>
2025-11-10 05:48:56 -08:00
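A rough sketch of matching an etcd member by its peerURLs using etcdctl's JSON output, as the commit describes (task names, the `removed_node_peer_url` variable, and the omission of endpoint/TLS flags are assumptions here, not Kubespray's actual implementation):

```yml
# Illustrative tasks: find the member whose peerURLs contain this node's peer URL, then remove it.
- name: List etcd members as JSON
  ansible.builtin.command: etcdctl member list --write-out=json   # endpoint/TLS flags omitted for brevity
  register: etcd_members
  changed_when: false

- name: Remove the member matching this node's peer URL
  ansible.builtin.command: "etcdctl member remove {{ '%x' | format(item.ID) }}"   # member remove expects the hex ID
  loop: "{{ (etcd_members.stdout | from_json).members }}"
  when: removed_node_peer_url in item.peerURLs   # removed_node_peer_url is a hypothetical variable holding the full peer URL
```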
Ali Afsharzadeh
4b970baa5a [release-2.29] Upgrade cilium from 1.18.2 to 1.18.3 (#12679) 2025-11-09 06:00:52 -08:00
ChengHao Yang
a15fcb729b Patch versions updates (#12646)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-11-03 02:19:36 -08:00
k8s-infra-cherrypick-robot
9a9e33dc9f fix(calico): Add missed rbac verb for hostendpoints (#12644)
Signed-off-by: Meza <meza-xyz@proton.me>
Co-authored-by: Meza <meza-xyz@proton.me>
2025-10-24 01:05:34 -07:00
ChengHao Yang
d9f188c39c [release-2.29] Releng: galaxy version to 2.29.1 (#12645)
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-10-24 00:41:36 -07:00
ChengHao Yang
9991412b45 Docs: bump version to 2.29.0 (#12621)
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-10-14 01:29:36 -07:00
Mahendra Reddy
ee6a792ec0 feat: add support crio additional mounts (#12561)
Removed the default since it's already set in variables.

Fix pre-commit issue in the pipeline.
2025-10-13 18:15:32 -07:00
Max Gautier
fbf957ab5d Fix breakage when ignoring all kubeadm preflight errors (#12606)
kubeadm errors out if 'all' is specified with specific checks, so check
that case when we add hardcoded checks.

Add a test to catch regression.
2025-10-13 05:54:58 -07:00
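A hedged sketch of the rule the commit describes: when the user already ignores 'all', kubeadm rejects mixing it with named checks, so pass only 'all' (the variable names below are illustrative, not Kubespray's actual ones):

```yml
# If 'all' is requested, don't append hardcoded checks; kubeadm refuses 'all' mixed with named checks.
effective_ignore_preflight_errors: >-
  {{ ['all']
     if 'all' in user_ignore_preflight_errors
     else (user_ignore_preflight_errors + hardcoded_preflight_checks) | unique }}
```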
dependabot[bot]
202a0f3461 build(deps): bump redhat-plumbers-in-action/advanced-issue-labeler (#12600)
Bumps [redhat-plumbers-in-action/advanced-issue-labeler](https://github.com/redhat-plumbers-in-action/advanced-issue-labeler) from 3.2.2 to 3.2.3.
- [Release notes](https://github.com/redhat-plumbers-in-action/advanced-issue-labeler/releases)
- [Commits](0db433d412...e38e6809c5)

---
updated-dependencies:
- dependency-name: redhat-plumbers-in-action/advanced-issue-labeler
  dependency-version: 3.2.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-09 11:53:00 -07:00
Arthur Outhenin-Chalandre
8c16c0f2b9 owner: remove myself from reviewers (#12594)
Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>
2025-10-09 02:47:03 -07:00
Jan Breitkopf
deaabb694d fix missing directory when run with download_run_once (#12275) 2025-10-09 02:01:02 -07:00
Mahendra Reddy
e39e005306 bugfix: skip etcd cert extraction if cilium identity uses crd (#12565)
* bugfix: skip etcd cert extraction if cilium identity uses crd

* remove new line end of the file
2025-10-09 00:31:00 -07:00
Matthias Lohr
6d6633a905 show node name to be more clear which node is going to be upgraded (#12399)
* show node name to be more clear which node is going to be upgraded

* also show nodename when uncordoning
2025-10-09 00:19:07 -07:00
Mohamed Omar Zaian
fd7f39043b [ingress-nginx] upgrade to 1.13.3 (#12604) 2025-10-08 19:04:59 -07:00
Ali Afsharzadeh
f8e74aafb9 Fix cilium_policy_audit_mode variable (#12569)
Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2025-10-07 09:15:02 -07:00
ChengHao Yang
aa255f8831 Patch versions updates (#12602)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-10-07 07:25:02 -07:00
Bas
9ded45f703 Documentation - hardening.md - etcd_deployment_type: host (#12520)
* Fix for #12447

Signed-off-by: Bas Meijer <bas.meijer@enexis.nl>

* Update hardening.md

Co-authored-by: spatterlight <81454789+spatterIight@users.noreply.github.com>

---------

Signed-off-by: Bas Meijer <bas.meijer@enexis.nl>
Co-authored-by: spatterlight <81454789+spatterIight@users.noreply.github.com>
2025-10-06 02:07:00 -07:00
Mahendra Reddy
270ff65992 fix crio restart while switching runtime (#12008)
fixed kubelet condition

CRI-O: fix for handling of container runtime switching

refactored kubelet start condition

stop/start kubelet and crio only when default runtime is changed

fixed condition for runtime_matches fact variable

fixed set facts for existing container runtime

added crio runtime switch variable

changed condition to use runtime switch variable

added comment for not-found for readers
2025-10-06 01:58:59 -07:00
dependabot[bot]
324e7f50c9 build(deps): bump cryptography from 46.0.1 to 46.0.2 (#12599)
Bumps [cryptography](https://github.com/pyca/cryptography) from 46.0.1 to 46.0.2.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/46.0.1...46.0.2)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-version: 46.0.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-06 01:47:00 -07:00
R. P. Taylor
055274937b Fix variable typos (#12595) 2025-10-06 01:28:58 -07:00
philipp-check24
b98ed6ddf8 Remove update flag from pip install in ansible docs (#12590) 2025-10-03 06:56:58 -07:00
Meza
05c3e2c87c Fix typo in CONTRIBUTING.md (#12592)
Signed-off-by: Meza <meza-xyz@proton.me>
2025-10-03 04:30:57 -07:00
Alessio Greggi
b0571ccbf9 docs(hardening): fix broken link (#12577)
Signed-off-by: Alessio Greggi <ale_grey_91@hotmail.it>
2025-09-29 21:10:16 -07:00
Ali Afsharzadeh
8b62a71f31 Upgrade cilium related images (#12568)
Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2025-09-29 10:04:19 -07:00
JaeyungLee
411fdddaae fix(docs): update calico.md wrong image path (#12582) 2025-09-28 00:24:15 -07:00
Sassan torabkheslat
51a1f08624 reset: set v4/v6 default policies to ACCEPT and drop user chains (#12552) 2025-09-24 20:14:15 -07:00
dependabot[bot]
67632844cd build(deps): bump cryptography from 45.0.7 to 46.0.1 (#12567)
Bumps [cryptography](https://github.com/pyca/cryptography) from 45.0.7 to 46.0.1.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/45.0.7...46.0.1)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-version: 46.0.1
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-22 03:02:19 -07:00
Seena Fallah
13c70d3a58 coredns: set deploy replicas when dns autoscaler is disabled (#12387)
Allow setting deployment replicas through `coredns_replicas` when
`enable_dns_autoscaler` is set to false.

Signed-off-by: Seena Fallah <seenafallah@gmail.com>
2025-09-20 03:50:14 -07:00
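For example, a minimal inventory snippet using the two variables named in this commit might look like this (a sketch, not a full configuration):

```yml
# group_vars sketch: pin CoreDNS to a fixed replica count instead of autoscaling.
enable_dns_autoscaler: false
coredns_replicas: 3   # only honored when the autoscaler is disabled
```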
Ali Afsharzadeh
fae4e08f35 Upgrade cilium from 1.18.1 to 1.18.2 (#12559) 2025-09-18 23:56:12 -07:00
Takuya Murakami
1d91e47878 Fix: Fix calico_crds_archive checksum (#12564)
It looks like the checksum was changed due to GitHub's compression algorithm change.
See #12523 for details.
2025-09-18 23:14:11 -07:00
Ali Afsharzadeh
6b973d072c Upgrade haproxy load balancer from 3.1.7 to 3.2.4 (#12557)
Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2025-09-17 01:18:12 -07:00
ChengHao Yang
a36912e2c4 Patch versions updates (#12553)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-09-15 12:58:09 -07:00
Max Gautier
8d7d9907a1 Rough rework of the CI setup documentation (#12094) 2025-09-15 03:24:11 -07:00
Takuya Murakami
643087fea5 Bump cni-plugin 1.4.1 -> 1.8.0 (#12551)
- Add 1.5, 1.6, 1.7 and 1.8 hashes
- Drop <1.3.0

Signed-off-by: Takuya Murakami <murakami_da@nec.com>
2025-09-14 05:32:08 -07:00
Ali Afsharzadeh
2955dfe69f Upgrade flannel from 0.26.7 to 0.27.3 (#12543) 2025-09-11 00:22:07 -07:00
Ali Afsharzadeh
0a35c624ad Upgrade local-path-provisioner from 0.0.24 to 0.0.32 (#12545)
* Upgrade local-path-provisioner from 0.0.24 to 0.0.32

Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>

* Remove local_path_provisioner_image_tag variable

Co-authored-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

---------

Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
Co-authored-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-09-10 04:25:57 -07:00
Ali Afsharzadeh
456a3dda09 Upgrade cilium from 1.17.7 to 1.18.1 (#12542)
Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2025-09-09 19:47:59 -07:00
dependabot[bot]
efd30981f8 build(deps): bump actions/setup-python from 5 to 6 (#12539)
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 5 to 6.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](https://github.com/actions/setup-python/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-07 22:13:26 -07:00
dependabot[bot]
aabe063490 build(deps): bump cryptography from 45.0.6 to 45.0.7 (#12538)
Bumps [cryptography](https://github.com/pyca/cryptography) from 45.0.6 to 45.0.7.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/45.0.6...45.0.7)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-version: 45.0.7
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-07 21:21:27 -07:00
jaehanbyun
50c5f39a9d chore: add 'nftables' to kube_proxy_mode comment (#12522)
Signed-off-by: jaehanbyun <awbrg789@naver.com>
2025-09-02 00:57:15 -07:00
Takuya Murakami
8e401f94ea [calico] Add version 3.30.3 and make it default (#12523)
Signed-off-by: Takuya Murakami <murakami_da@nec.com>
2025-09-02 00:41:16 -07:00
Max Gautier
0b082ac2f4 Patch versions updates (#12518)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-09-01 20:05:12 -07:00
David Bidorff
fe7592dd0c fix: provide an option to ignore sysctl errors about unknown keys (#12514)
* fix: provide an option to ignore sysctl errors about unknown keys

* fix: rename sysctl_ignoreerrors and remove useless var definitions
2025-09-01 07:07:14 -07:00
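As a hedged illustration of the mechanism involved (the Kubespray variable name and wiring are not shown; `ignoreerrors` is the standard option of the `ansible.posix.sysctl` module and maps to `sysctl -e`):

```yml
# Illustrative task: apply a sysctl key but do not fail on keys unknown to this kernel.
- name: Set a sysctl value, tolerating unknown keys
  ansible.posix.sysctl:
    name: net.ipv4.ip_forward
    value: "1"
    ignoreerrors: true   # corresponds to `sysctl -e`
    state: present
```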
Kim Hyunyoung, Abel
eb26449e80 fix: typo (#12517) 2025-09-01 03:07:12 -07:00
ujstor
4ab213bc44 feat: add containerd_extra_runtime_args for CRI runtime configuration (#12247)
Add support for injecting additional configuration options into the
  containerd CRI runtime plugin section via containerd_extra_runtime_args.
2025-09-01 02:57:12 -07:00
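A sketch of how such a variable might be set in an inventory; only the variable name comes from the commit, and the keys under it are placeholders, not verified containerd options:

```yml
# group_vars sketch: extra options injected into the containerd CRI runtime plugin section.
containerd_extra_runtime_args:
  example_option: "example-value"   # placeholder key/value for illustration only
```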
Kim Hyunyoung, Abel
66cab15498 fix: redeploy coredns and nodelocaldns when its config changed (#12401) 2025-09-01 00:23:11 -07:00
Max Gautier
c03c68e8c7 Do not suppress output during cert generation (#12479)
Makes debugging easier.
2025-08-28 19:43:09 -07:00
ERIK
72c983c41e Fix(system_packages): Avoid version comparison error on non-numeric versions (#12512)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2025-08-28 00:19:10 -07:00
vdveldet
a01e96e21a Introduced internal_facts.yml and adapt playbooks to use this (#12492) 2025-08-28 00:11:10 -07:00
vdveldet
e52e262e78 Making 28.3 the new docker default (#12509) 2025-08-27 19:53:09 -07:00
Max Gautier
84504d156f Fold kubernetes-apps/network_plugin into network_plugin (#12506)
From what I can see, there is no reason for the split, and it makes
things confusing.
2025-08-27 18:43:10 -07:00
Hyeonki Hong
56c830713e Fix SAN address collection from ansible_default_ipv{4,6} (#12413)
Signed-off-by: Hyeonki Hong <hhk7734@gmail.com>
2025-08-26 02:40:11 -07:00
Max Gautier
acdc338fa4 Patch versions updates (#12503)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-26 02:36:11 -07:00
Mahdad Ghasemian
72877d68ec Fix: render tcp and udp service ports as integers in Ingress NGINX templates (#12442) 2025-08-26 02:32:11 -07:00
Qasim Mehmood
0f158e4e28 feat: Upgrade multus cni from 4.1.0 to 4.2.2 (#12495) 2025-08-26 02:28:10 -07:00
Ali Afsharzadeh
7d79f17b12 Fix duplicate dict key warning in bootstrap_os task includes (#12488)
Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2025-08-26 01:32:11 -07:00
wangsifei99
f973deb95f fix netcheck_etcd_image_tag (#12402)
Signed-off-by: wangsifei99 <wangsifei@kylinos.cn>
2025-08-25 22:49:06 -07:00
Ali Afsharzadeh
4a4201c84d Remove ara_default from callbacks_enabled (#12490)
The option ara_default was still present in ansible.cfg under callbacks_enabled.
This is a leftover from commit b9e9364 ("Remove ara support in CI") and should
have been removed together with the rest of the ara integration.

Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2025-08-24 22:49:06 -07:00
Mohamed Omar Zaian
80e0ad0fac [feat] Update metrics server to v0.8.0 (#12493) 2025-08-22 21:07:05 -07:00
Ali Afsharzadeh
303dd1cbc1 Enable reserved variable name checks and fix violations (#12463)
* Enable reserved variable name checks and fix violations

Updated .ansible-lint configuration to skip only var-naming[pattern]
and var-naming[no-role-prefix] instead of skipping the entire var-naming rule.
This enables the check for reserved variable names.

Renamed variables that used reserved names to avoid conflicts.
Updated all references in tasks, variables, and templates.

Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>

* Rename namespace variable inside tasks instead of deleting it

Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>

* Change hosts variable to vm_hosts

Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>

* Use k8s_namespace instead of dashboard_namespace in dashboard.yml.j2 template

Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>

---------

Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2025-08-21 00:47:07 -07:00
Kubernetes Prow Robot
eb4f6d73fb Merge pull request #12441 from tico88612/feat/crds-installation
Feat: add common_crds role and Prometheus Operator CRDs installation
2025-08-19 05:25:37 -07:00
ChengHao Yang
44f511814b Test: add prometheus operator crds install
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-08-19 18:45:33 +08:00
Alejandro Macedo
e2046749ac Fix: Change "empty" definition for PodSecurity Admission configuration (#12439)
Fixes a bug where `kube-apiserver` fails to start if the PodSecurity
configuration file doesn't have the `apiVersion` and `kind` keys.

Signed-off-by: Alejandro Macedo <alex.macedopereira@gmail.com>
2025-08-19 02:57:36 -07:00
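For reference, a non-empty PodSecurity admission configuration file carries the `apiVersion` and `kind` keys the commit mentions; a minimal example looks roughly like this (the defaults shown are illustrative):

```yml
# Minimal AdmissionConfiguration referencing the PodSecurity plugin.
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: PodSecurity
    configuration:
      apiVersion: pod-security.admission.config.k8s.io/v1
      kind: PodSecurityConfiguration
      defaults:
        enforce: "privileged"      # illustrative default level
        enforce-version: "latest"
      exemptions:
        usernames: []
        runtimeClasses: []
        namespaces: []
```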
Max Gautier
f832271f5c Directly list conntrack modules instead of using a variable (#12475)
The conntrack kernel modules have no reason to be anything other than
those two options, so there is no reason to have a variable.
2025-08-18 09:05:13 -07:00
Elias Probst
dc9d3bf39d Fix when expr of conntrack module loading (#12458)
Retrying to load conntrack modules was bound to fail due to the way the current `when` conditions were utilized.
It was based on the assumption that, in case of success, the registered variable would have an `rc` attribute with the value `0`.
Unfortunately, the `rc` attribute is only present in case of a failure, where its value is non-zero.

The result of `community.general.modprobe` in case of success looks like this:
```
{
    "changed": false,
    "msg": "All items completed",
    "results": [
        {
            "ansible_loop_var": "item",
            "changed": false,
            "failed": false,
            "invocation": {
                "module_args": {
                    "name": "nf_conntrack",
                    "params": "",
                    "persistent": "present",
                    "state": "present"
                }
            },
            "item": "nf_conntrack",
            "name": "nf_conntrack",
            "params": "",
            "state": "present"
        }
    ],
    "skipped": false
}
```

While it looks like this in case of a failure:
```
{
    "changed": false,
    "failed": true,
    "msg": "One or more items failed",
    "results": [
        {
            "ansible_loop_var": "item",
            "attempts": 3,
            "changed": false,
            "failed": true,
            "invocation": {
                "module_args": {
                    "name": "nf_conntrack_doesnotexist",
                    "params": "",
                    "persistent": "present",
                    "state": "present"
                }
            },
            "item": "nf_conntrack_doesnotexist",
            "msg": "modprobe: FATAL: Module nf_conntrack_doesnotexist not found in directory /lib/modules/5.14.0-570.32.1.el9_6.x86_64\n",
            "name": "nf_conntrack_doesnotexist",
            "params": "",
            "rc": 1,
            "state": "present",
            "stderr": "modprobe: FATAL: Module nf_conntrack_doesnotexist not found in directory /lib/modules/5.14.0-570.32.1.el9_6.x86_64\n",
            "stderr_lines": [
                "modprobe: FATAL: Module nf_conntrack_doesnotexist not found in directory /lib/modules/5.14.0-570.32.1.el9_6.x86_64"
            ],
            "stdout": "",
            "stdout_lines": []
        }
    ],
    "skipped": false
}
```

By evaluating `failed` instead, this issue can be prevented.
See also:
- https://github.com/kubernetes-sigs/kubespray/issues/11340

Co-authored-by: Max Gautier <mg@max.gautier.name>
2025-08-18 08:17:10 -07:00
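A condensed sketch of the corrected pattern, assuming a retry task similar to the one described (names and structure abbreviated; the real Kubespray tasks differ in detail):

```yml
# Illustrative: retry loading the module only when the first attempt actually failed.
- name: Ensure nf_conntrack is loaded
  community.general.modprobe:
    name: nf_conntrack
    state: present
  register: modprobe_conntrack
  ignore_errors: true

- name: Retry module loading
  community.general.modprobe:
    name: nf_conntrack
    state: present
  when: modprobe_conntrack is failed   # not `modprobe_conntrack.rc != 0`, since `rc` is absent on success
```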
Ali Afsharzadeh
7d3e0d4fe5 Simplify group_by logic by moving conditional to when clause (#12469)
Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2025-08-18 07:39:11 -07:00
ChengHao Yang
9dca520b33 Feat: add prometheus_operator_crds in common_crds
The Prometheus Operator CRDs are commonly used for monitoring and are
used by some CNIs (such as Cilium). Kubespray can be installed first,
and the subsequent installation of the operator can be handled by the
user (or later extensions).

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-08-18 22:13:15 +08:00
Ali Afsharzadeh
fa22f9e5ab Ensure apt cache is updated before dist-upgrade (#12465)
Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2025-08-18 07:13:12 -07:00
Shaleen Bathla
082507cff2 kubelet: conditionalize staticPodPath location (#12433)
Add variable to set kubelet staticPodPath location.
It can be set to empty so that we can choose to disable it for some nodes.
STIG recommendation is to disable it.

Signed-off-by: Shaleen Bathla <shaleenbathla@gmail.com>
Co-authored-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-08-18 06:51:11 -07:00
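As a hedged sketch of what this enables (the variable name below is hypothetical; the commit does not spell it out), an inventory could disable static pods on selected nodes by setting the path to an empty value:

```yml
# Inventory sketch: control the kubelet staticPodPath per node group (variable name is illustrative).
kubelet_static_pod_path: ""   # empty => staticPodPath disabled, per the STIG recommendation
```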
ChengHao Yang
1e327b4747 Feat: add prometheus_operator_crds download item
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-08-18 21:14:06 +08:00
ChengHao Yang
3ece592b51 Refactor: add common_crds role & migrate gateway_api
This role adds commonly used CRDs and can be expanded later.

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-08-18 19:27:25 +08:00
dependabot[bot]
bae7278fa8 build(deps): bump actions/checkout from 4.2.2 to 5.0.0 (#12472)
Bumps [actions/checkout](https://github.com/actions/checkout) from 4.2.2 to 5.0.0.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](11bd71901b...08c6903cd8)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: 5.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-18 01:41:10 -07:00
ChengHao Yang
cf2332c38f Patch versions updates (#12461)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-17 20:21:08 -07:00
Ali Afsharzadeh
51764b208b Upgrade cilium from 1.17.3 to 1.17.7 (#12470)
Signed-off-by: Ali Afsharzadeh <afsharzadeh8@gmail.com>
2025-08-16 10:53:07 -07:00
Andrew Peabody
936f9faeaf docs: update OS and firewall (#12464)
* docs: update OS and firewall

* Update setting-up-your-first-cluster.md
2025-08-15 17:17:06 -07:00
Ho Kim
707616178e feat: add support for custom kubeadm pull image repository (#12128)
Signed-off-by: Ho Kim <ho.kim@ulagbulag.io>
2025-08-13 18:03:06 -07:00
Kubernetes Prow Robot
155c1c1531 Merge pull request #12456 from tico88612/feat/debian13
Feat: Debian 13 Trixie support
2025-08-13 00:05:14 -07:00
ChengHao Yang
7f64758592 Fix: Debian 13 system_package not found software-properties-common
Debian Trixie recently removed the package `software-properties-common`,
so add a condition to skip it on Debian Trixie.

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-08-12 20:29:35 +08:00
ChengHao Yang
4e1205958f Docs: add Debian 13 in README.md
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-08-12 20:29:35 +08:00
ChengHao Yang
2081df24ec CI: add Debian 13 tests
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-08-12 20:29:35 +08:00
ChengHao Yang
7a72031d1e Add Debian 13 kubevirt image
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-08-12 20:29:35 +08:00
ChengHao Yang
622ed15532 Fix the typo in debian12-calico test
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-08-12 20:28:33 +08:00
Aman Shrivastava
b4d3be482f Make control plane health check retries configurable (#12452) 2025-08-11 02:23:07 -07:00
dependabot[bot]
92f57e0811 build(deps): bump cryptography from 45.0.5 to 45.0.6 (#12453)
Bumps [cryptography](https://github.com/pyca/cryptography) from 45.0.5 to 45.0.6.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/45.0.5...45.0.6)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-version: 45.0.6
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-10 22:57:04 -07:00
Clement Phu
6c147dfe3c Add cilium_extra_values to make use of any cilium values (#12375)
fix noqa
2025-08-08 20:29:43 -07:00
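A sketch of how the variable named in this commit might be used to pass arbitrary Helm values (the keys under it are ordinary Cilium chart values; whether they are merged exactly like this is an assumption):

```yml
# group_vars sketch: free-form Cilium Helm values merged into the generated values file.
cilium_extra_values:
  bpf:
    masquerade: true
  prometheus:
    enabled: true
```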
ChengHao Yang
502ba663c5 Fix: Convert -backports sources to archive.debian.org for bullseye and older (#12434)
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-08-06 20:26:45 -07:00
Aman Shrivastava
5e54fd4da3 add proxy_env to cilium install task for proxy support (#12417) 2025-07-30 00:34:27 -07:00
Ho Kim
f347c12145 feat: add support for coredns_affinity (#11994)
Signed-off-by: Ho Kim <ho.kim@ulagbulag.io>
Co-authored-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-07-27 20:18:27 -07:00
Xuhui Sun
95640819f5 🐛 fix missing cilium_enable_bgp_control_plane config (#12430) 2025-07-26 21:38:26 -07:00
Psycho Mantys
5b1334102b Remove --auth-anonymous if kube_api_anonymous_auth is undefined. (#12353)
Remove --auth-anonymous if kube_api_anonymous_auth is undefined, to avoid
compatibility errors with other arguments of the kube-apiserver, such as
--authentication-config when anonymous field is configured.
2025-07-26 20:20:27 -07:00
ak1ra
96c39ae7fd doc: modify section order, remove redundant section (#12423) 2025-07-25 01:52:38 -07:00
Romain Lalaut
d198b2ca53 playbooks/remove_node.yml: fixes localhost validation task (#12420)
- Add gather_facts: false (no system facts needed for validation)
- Add become: false (no privilege escalation needed on localhost)
2025-07-25 01:52:31 -07:00
dependabot[bot]
9e8bf18aa1 build(deps): bump distlib from 0.3.9 to 0.4.0 (#12415)
Bumps [distlib](https://github.com/pypa/distlib) from 0.3.9 to 0.4.0.
- [Release notes](https://github.com/pypa/distlib/releases)
- [Changelog](https://github.com/pypa/distlib/blob/master/CHANGES.rst)
- [Commits](https://github.com/pypa/distlib/compare/0.3.9...0.4.0)

---
updated-dependencies:
- dependency-name: distlib
  dependency-version: 0.4.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-23 04:48:26 -07:00
Kubernetes Prow Robot
fcaaee537e Merge pull request #12377 from yankay/bump-containerd
feature: support containerd static binary
2025-07-19 05:56:25 -07:00
Kay Yan
97946cfdb7 support containerd static binary
Co-authored-by: Max Gautier <mg@max.gautier.name>
Signed-off-by: Kay Yan <kay.yan@daocloud.io>
2025-07-18 04:20:58 +00:00
Kay Yan
72518b4497 bump containerd to 2.1.3, runc to 1.3.0, nerdctl to 2.1.2
Signed-off-by: Kay Yan <kay.yan@daocloud.io>
2025-07-18 04:19:35 +00:00
Chad Swenson
18d7a02280 Patch versions updates (#12410)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-07-17 20:28:24 -07:00
ChengHao Yang
8d275dcb4f Fix: nodelocaldns capabilities usage (#12398)
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-07-15 18:54:22 -07:00
vdveldet
ff2179985c Added new checksums for helm_archive_checksums (#12389)
* Added new checksums for helm_archive_checksums

* Changed default helm version
2025-07-15 01:00:23 -07:00
Romain Lalaut
b1cc016cc0 Add external_openstack_lbaas_member_subnet_id variable to external-openstack-cloud-config.j2 (#12267) 2025-07-13 23:58:24 -07:00
wangsifei99
263e8b24cf Fix #12385 cilium typo (#12393)
Signed-off-by: wangsifei99 <wangsifei@kylinos.cn>
2025-07-10 19:19:27 -07:00
mathgaming
ce2ba28dec Fixed syntax error in _bgp_config dict (#12258) 2025-07-10 18:43:27 -07:00
Takuya Murakami
784bf36c66 fix: Use crun in the cri-o distribution and don't use crun role from cri-o role anymore (#12289)
Signed-off-by: Takuya Murakami <murakami_da@nec.com>
2025-07-08 06:37:27 -07:00
Max Gautier
cbdfad8e80 CI: fix broken debugging (#12381) 2025-07-08 04:33:27 -07:00
pando85
d02910c675 Add header configuration in containerd hosts.toml (#12368)
* Add header configuration in containerd hosts.toml

Signed-off-by: Alexander Gil <pando855@gmail.com>

* Disable log output on containerd mirrors settings if required

Signed-off-by: Alexander Gil <pando855@gmail.com>

---------

Signed-off-by: Alexander Gil <pando855@gmail.com>
2025-07-07 23:41:27 -07:00
Chad Swenson
1e523a267c Fix kubeadm upgrade node skipPhases with multiple CP nodes (#12367)
Add 1.32 conditional defaults

Restore support for kubeadm upgrade node --skip-phases < 1.32, apply still needs to be restricted
2025-07-07 11:29:26 -07:00
Max Gautier
15c8a4768d Do not alter etc/hosts (#12382)
This is no longer needed, and likely has not been for a long time.
2025-07-07 04:53:26 -07:00
Elias Probst
6ca9f1f731 docs: Ansible Collection 404s (#12376)
* docs: remove obsolete reference to `gen_tags.sh`

`scripts/gen_tags.sh` was removed in 373b952a0c

* docs: fix 404 links

Merge the `Requirements` section with the `Usage` section and just
reference the inventory documentation, which then points to all further
information related to group vars etc.
2025-07-07 03:01:25 -07:00
wangsifei99
3311ceaa7b Fix kubespray reset shouldn't remove /etc/dnsmasq files (#12380)
Signed-off-by: wangsifei99 <2209856191@qq.com>
2025-07-07 00:25:25 -07:00
Max Gautier
6354aa686e Allow vagrant jobs to be triggered manually in Gitlab UI (#12349) 2025-07-06 23:57:25 -07:00
dependabot[bot]
90d5b34eca build(deps): bump cryptography from 45.0.4 to 45.0.5 (#12378)
Bumps [cryptography](https://github.com/pyca/cryptography) from 45.0.4 to 45.0.5.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/45.0.4...45.0.5)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-version: 45.0.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-06 23:49:26 -07:00
Kay Yan
7f6db0cbfa add rocky linux 10 image (#12379)
Signed-off-by: Kay Yan <kay.yan@daocloud.io>
2025-07-06 23:45:26 -07:00
vdveldet
8d7cbe732e Adding proper quotation (#12371)
* Adding proper quotation

* Update file with correct quotes
2025-07-06 02:33:24 -07:00
Dexter
1e5a203ddc Vagrant: Ensure IP Subnet not in use by localhost (#12332)
* feat(subnet): Ensure Vagrant subnet not in use by localhost

This commit ensures that the Vagrantfile-supplied $subnet is not in use by
the localhost. Previously, if the subnet was in use by localhost (i.e. a
bridge network), Vagrant VM boxes could not communicate.

* refactor(socket): Use ruby Socket library to find addrs

This commit reverts the usage of Ruby .scan(), which may result in
failure if a program is not provided. Instead, this commit refactors to
use the Socket library to determine interfaces in use, then proceeds to
compare them with the Vagrantfile-supplied subnets. Additionally, the commit
supports IPv6 comparisons.
2025-07-04 01:15:25 -07:00
Mustafa Mertcan Çam
cde6e815dd Cilium: Pass cluster DNS to hubble.peerService in values.yaml.j2 (#12346)
* cilium: pass cluster DNS to hubble.peerService in values.yaml.j2

* Add dedicated Hubble variable defaulting to inventory cluster domain
2025-07-03 09:37:25 -07:00
ERIK
c1c52002cf Remove unused Calico CNI pool variables (#12369)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2025-07-02 18:43:24 -07:00
Zied Kharrat
5cd3f40cbc Replace deprecated MAINTAINER with OCI-compliant LABEL (#12360) 2025-06-30 17:06:31 -07:00
Aman Shrivastava
f9385ec918 Add argocd_install component to hash update script with checksum entries (#12358) 2025-06-30 07:00:35 -07:00
ERIK
7ead3e2f11 fix(kubeadm): Conditionally add --skip-phases flag for v1.32.0+ (#12351)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2025-06-28 05:12:28 -07:00
Max Gautier
e0018268d6 CI: Add a test for scale.yml (#12285) 2025-06-28 01:16:29 -07:00
Kubernetes Prow Robot
d4cb5da017 Merge pull request #12295 from VannTen/ci/collection
CI: Simplify running playbooks as collection + various CI Fixes
2025-06-27 10:00:33 -07:00
刘旭
62f49822dd fix ETCD_INITIAL_CLUSTER config in etcd.env and etcd-events.env (#12342) 2025-06-27 07:44:29 -07:00
Romain Lalaut
878da9fb16 Argo CD : checksum support for the install url (#12266)
Fixes https://github.com/kubernetes-sigs/kubespray/issues/12223
2025-06-27 07:24:30 -07:00
Max Gautier
f55de03fa6 CI: update sonobuoy url (heptio is now part of VMware)
Suggested-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-06-27 14:32:21 +02:00
Max Gautier
7b6ff769f0 CI: 020_check_pods -> more readable output
Filter pod to describe / logs only the broken ones.
2025-06-27 14:32:20 +02:00
Max Gautier
e369ac2f24 CI: more readable loop
Avoids putting whole pod spec in loop label
2025-06-27 14:32:19 +02:00
Max Gautier
4a0a73b307 CI: fix check for kube_version 2025-06-27 14:15:11 +02:00
Max Gautier
253fc5ee59 CI: factorize tests into a single playbook
This allows using kubespray_defaults (once) instead of redefining
defaults in the tests.
The test files become imported tasks rather than standalone
playbooks.
2025-06-27 14:15:11 +02:00
Max Gautier
bf41d3bfea CI: Simplify running playbooks as collection 2025-06-27 14:15:09 +02:00
Chad Swenson
ede92b0654 Fix calico etcd mode networkpolicy RBAC (#12344) 2025-06-27 04:50:29 -07:00
Takuya Murakami
048967e3b0 feat: Add cilium_install_extra_flags (#12262)
Enable to use --chart-directory options etc for offline installation

Signed-off-by: Takuya Murakami <murakami_da@nec.com>
2025-06-25 05:58:29 -07:00
Kim Hyunyoung, Abel
8cc5897d5c fix: add cilium extraConfig values (#12335) 2025-06-23 23:36:29 -07:00
ChengHao Yang
479e239016 CI: replace kaniko with buildkit (#12305)
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-06-23 00:08:53 -07:00
Chad Swenson
39e0fc64ba Patch versions updates (#12322)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-06-19 23:10:51 -07:00
Kubernetes Prow Robot
5ed7042808 Merge pull request #11924 from tico88612/bump/ansible-10.7.0
Bump Ansible to 10.7.0 & Deprecate Pre-installed Python 3.7-OS tests
2025-06-19 23:06:51 -07:00
ChengHao Yang
48cc0e1cde CI: add pip install in upgrade job
This avoids the Ansible version check failing in the upgrade job.

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-06-20 12:18:10 +08:00
ChengHao Yang
854dbef25e Docs: remove unused CI tests information
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-06-20 12:18:10 +08:00
ChengHao Yang
95998e437b CI: remove OpenSUSE 15.6 tests
Because the pre-installed Python version is 3.6, which is deprecated by
Ansible 10.

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-06-20 12:18:09 +08:00
ChengHao Yang
fc0206e313 CI: remove RHEL8-related OS tests
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-06-20 12:18:09 +08:00
ChengHao Yang
26acce9cec Docs: update ansible-core version to 2.17.3
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-06-20 12:18:09 +08:00
ChengHao Yang
d3c3ccd168 Update ansible_version minimal and maximal version
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-06-20 12:18:09 +08:00
ChengHao Yang
58e302ec31 Bump ansible to 10.7.0
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-06-20 12:18:09 +08:00
ChengHao Yang
3cda93405a Cleanup: Ubuntu 20.04 tests (#12301)
* Test: molecule replace ubuntu2004 with ubuntu2204 ubuntu2404

cri-dockerd, adduser and bastion-ssh-config can't run on ubuntu2404; the login handling may need checking.

"System is booting up. Unprivileged users are not permitted to log in yet. Please come back later. For technical details, see pam_nologin(8)."

Signed-off-by: ChengHao Yang
<17496418+tico88612@users.noreply.github.com>

* Test: replace ubuntu-2004 with ubuntu-2404

All ubuntu-2004 tests are removed.

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

* Docs: update ci.md

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

* Docs: update README.md

Remove Ubuntu 20.04 support

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

---------

Signed-off-by: ChengHao Yang
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-06-19 18:54:51 -07:00
Chad Swenson
540cfd1087 Add version pinning for AWS tf provider to fix CI (#12323) 2025-06-19 18:38:51 -07:00
Kubernetes Prow Robot
f58315f69e Merge pull request #12254 from tico88612/fix/cilium-migration
Fix: the cluster is upgraded from 2.27 to 2.28 cilium will break
2025-06-19 18:34:51 -07:00
_xat_
dca2a5ecb3 Skip kube-proxy addon phase during kubeadm upgrade if disabled (#12306) 2025-06-18 03:48:52 -07:00
Max Gautier
85cf0014cd CI: Run vagrant validate on master as well (#12311)
There's not really a reason not to, and this actually breaks daily-ci because
some jobs depend on this one, so the whole pipeline is invalid if it's
not created.
2025-06-18 02:12:53 -07:00
Kubernetes Prow Robot
170b3dc55d Merge pull request #12302 from VannTen/ci/factorize_molecule_scenario
CI: cleanup and factorization of molecule tests
2025-06-17 10:23:00 -07:00
Max Gautier
50a32acf51 CI: use debug stdout callback everywhere (except pre-commit) 2025-06-17 14:56:16 +02:00
Max Gautier
b372a6f0f3 Fix alternatives runtimes CI
- youki and gvisor molecule tests are now passing
- kata-containers still broken
2025-06-17 14:56:15 +02:00
Max Gautier
5671037b0e Convert alternatives runtimes molecule to ansible verifier 2025-06-17 14:56:14 +02:00
Max Gautier
1ccb3a38a2 Convert cri-dockerd molecule to ansible verifier 2025-06-17 14:56:06 +02:00
Max Gautier
68c4ee23cb Convert CRI-O molecule to ansible verifier 2025-06-17 14:56:04 +02:00
Max Gautier
3f26203ed0 Convert containerd molecule to ansible verifier 2025-06-17 14:56:02 +02:00
Max Gautier
a5ede2a5c7 container-engine: factorize molecule testing infra 2025-06-17 14:56:00 +02:00
Max Gautier
69c4c90634 Factorize dynamic groups into a role 2025-06-17 14:55:59 +02:00
Max Gautier
06d8d48488 Add previous release to the auto update script. (#12312) 2025-06-16 22:53:00 -07:00
dependabot[bot]
9c621970ff build(deps): bump cryptography from 45.0.3 to 45.0.4 (#12317)
Bumps [cryptography](https://github.com/pyca/cryptography) from 45.0.3 to 45.0.4.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/45.0.3...45.0.4)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-version: 45.0.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-16 18:39:02 -07:00
dependabot[bot]
7bb9d57dc9 build(deps): bump redhat-plumbers-in-action/advanced-issue-labeler (#12318)
Bumps [redhat-plumbers-in-action/advanced-issue-labeler](https://github.com/redhat-plumbers-in-action/advanced-issue-labeler) from 3.2.1 to 3.2.2.
- [Release notes](https://github.com/redhat-plumbers-in-action/advanced-issue-labeler/releases)
- [Commits](39087a4b30...0db433d412)

---
updated-dependencies:
- dependency-name: redhat-plumbers-in-action/advanced-issue-labeler
  dependency-version: 3.2.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-15 20:38:58 -07:00
Max Gautier
f866fd76f8 Patch versions updates (#12313)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-06-15 20:20:57 -07:00
ChengHao Yang
fa880b6bcc Feat: add nftable mode in calico (#12255)
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-06-15 18:54:58 -07:00
Jay.H
6fc1abba2e fix offline prepare scripts (#11962)
fix offline prepare scripts
2025-06-15 07:28:58 -07:00
Jay.H
1abadd8caa fix manage-offline-container-images.sh get image_id (#11961) 2025-06-15 07:10:57 -07:00
Kubernetes Prow Robot
ad31de4220 Merge pull request #12132 from tico88612/fix/remove-anonymous-kubeadm-validation
Fix: kubeadm secondary use file discovery validation
2025-06-15 05:48:56 -07:00
Max Gautier
144742cbce Use last patch versions by default for etcd/crio/crictl (#12233)
This uses the same logic as the other versions, with simplifications for
crictl and crio, whose versioning scheme is tied to upstream kubernetes.

Also move some version variables into vars/ rather than defaults/, because
they are not used elsewhere and don't really make sense as modifiable by
the user.
2025-06-14 18:56:55 -07:00
ChengHao Yang
f77aea13e9 Cleanup: kubeadm-config v1beta4 extra args defined conditions (#12307)
* Cleanup: kubeadm-config v1beta4 extra args defined conditions

Some variables have already been defined, so there is no need to
use conditional statements to check whether they have been defined.

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

* Cleanup: cloud-provider extra args

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

---------

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-06-14 13:38:56 -07:00
ChengHao Yang
f810e80b6c Bump: external snapshot CRD to v0.15.0 (#12308)
Currently, there is no reliable way to obtain individual CRD files, so
the only solution is to update first.

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-06-14 13:34:56 -07:00
Chad Swenson
b04ceba89b Fix calico CNI timeouts in reset role (#12300)
* Fix an issue with CNI timeouts in reset role

* Consolidate secondary service removal tasks
2025-06-13 02:54:56 -07:00
Max Gautier
f6d29a27fc Remove stale TODOs (#12298)
Upstream considers it working as expected; won't fix.
https://github.com/ansible-collections/community.general/issues/7717#issuecomment-2061880929
2025-06-12 20:14:57 -07:00
Kubernetes Prow Robot
28d23ffc3b Merge pull request #12236 from VannTen/cleanup/bootstap+packages
Cleanup of bootstrap and package installation
2025-06-12 07:24:56 -07:00
ChengHao Yang
ac0b0e7d6e Fix: upgrade cluster discovery kubeconfig not found
When installing or upgrading in the past, there was no validation
config. Check if the file exists first to prevent subsequent validation
errors.

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-06-12 10:05:59 +08:00
ChengHao Yang
e618d71f2a Fix: kubeadm secondary use file discovery validation
The validation step is moved to the end to avoid the loss of files that
may lead to verification failure.

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-06-12 10:05:58 +08:00
Kay Yan
cd82ac552b Add CI images for Fedora 41 and Fedora 42 (#12286)
* add CI image fedora-41 and fedora-42

Signed-off-by: Kay Yan <kay.yan@daocloud.io>

* Apply suggestions from code review

Co-authored-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

---------

Signed-off-by: Kay Yan <kay.yan@daocloud.io>
Co-authored-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-06-10 21:14:54 -07:00
ChengHao Yang
b981e2f740 Replace terraform with opentf (#12291)
Terraform is no longer open source software and has been removed and
replaced with OpenTofu.

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-06-10 06:44:25 -07:00
Kubernetes Prow Robot
739e5e1c6b Merge pull request #12199 from tmurakam/feature/kubernetes-1.33
[kubernetes] Support kubernetes 1.33
2025-06-05 20:20:38 -07:00
ChengHao Yang
1f9020f0b4 Fix: if cilium release exist, the action will set upgrade
`cilium install` is equivalent to `helm install`; it will fail if a
cilium release already exists. `cilium version` can detect an existing release
without the helm binary.

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-06-05 21:14:49 +08:00
ChengHao Yang
7bb9552e94 Fix: add cilium remove old resources option
Give users two options: besides skipping Cilium, add
`cilium_remove_old_resources` (default `false`). When set to `true`,
it will remove the old version's resources, but this causes downtime,
so it needs to be used with care.

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-06-05 21:14:49 +08:00
Slavi Pantaleev
d1bd610049 Fix indentation issue in Cilium values file and ensure booleans are lowercase (#12280)
This patch fixes the indentation in the `encryption` section.
Previously configuration like this:

```yml
cilium_encryption_enabled: true
cilium_encryption_type: wireguard
```

Would template to a `values.yaml` file with indentation that looks like this:

```yml
encryption:
  enabled: True
    type: wireguard
    nodeEncryption: False
```

instead of this:

```yml
encryption:
  enabled: true
  type: wireguard
  nodeEncryption: false
```

This syntax issue causes an error during Cilium installation.

This patch also makes all boolean values in this template file go through the `to_json` filter.
Since values like `True` and `False` are not compliant with the YAML v1.2 spec,
avoiding them is preferable.

`to_json` may be used for all other values in this template to ensure we end up with
a valid YAML document in all cases (even when various strings include special characters),
but this was left for another (future) patch.
2025-06-05 05:48:39 -07:00
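A minimal template sketch of the fix described above, using the variable names from the commit message (the `nodeEncryption` variable name is assumed); indentation is corrected and booleans go through `to_json` so they render as lowercase `true`/`false`:

```yml
# values.yaml.j2 sketch (not the full Kubespray template)
encryption:
  enabled: {{ cilium_encryption_enabled | to_json }}
  type: {{ cilium_encryption_type }}
  nodeEncryption: {{ cilium_encryption_node_encryption | default(false) | to_json }}   # variable name assumed
```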
Max Gautier
5243b33bd7 Cleanup support for removed OS in bootstrap
- centos < 8
- debian 10
2025-06-05 11:16:25 +02:00
Max Gautier
d5b2a9b5ba opensuse: move package installation to system_packages
No reason to special case
2025-06-05 11:16:24 +02:00
Max Gautier
2152022926 debian-based distro: handle apt update cache when installing packages
The package module passes options to the underlying package manager
module if it supports them. No need to handle it in bootstrap.
2025-06-05 11:16:24 +02:00
Max Gautier
f13b80cac0 ClearLinux: remove special casing
- put package install in system_packages
- docker should be handled by the appropriate roles if used as container
  engine
2025-06-05 11:16:23 +02:00
Shuu
a87b86c6d3 Make main_ip cacheable in facts (#12243) 2025-06-05 01:58:38 -07:00
Kubernetes Prow Robot
d287420e8e Merge pull request #11868 from tico88612/test/flatcar-4081
Add Flatcar 4081 CI test
2025-06-05 01:08:43 -07:00
Peter Pan
85b0be144a Fix: check expiry before do breaking renew and container restart actions (#12194)
* Fix: check expiry before renew

Since certificate renewal and container restarts involve higher risks,
they should be executed with extra caution.

* squash to Fix: check expiry before renew

* squash to Fix: address more comments from VannTen

Signed-off-by: Peter Pan <Peter.Pan@daocloud.io>

---------

Signed-off-by: Peter Pan <Peter.Pan@daocloud.io>
2025-06-05 01:04:41 -07:00
ChengHao Yang
6f7822d25c [flannel] upgrade to 0.26.7 (#12260)
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-06-05 00:42:53 -07:00
ChengHao Yang
b1fc870750 Add tico88612 as approver (#12281)
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-06-04 22:56:42 -07:00
dependabot[bot]
d0e9088976 build(deps): bump cryptography from 45.0.2 to 45.0.3 (#12259)
Bumps [cryptography](https://github.com/pyca/cryptography) from 45.0.2 to 45.0.3.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/45.0.2...45.0.3)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-version: 45.0.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-02 22:50:45 -07:00
Imran Ahmed
ce26f17e9e fix unquoted san cert causing issues with ips (#12256) 2025-06-02 22:50:38 -07:00
Christos Papageorgiou
a9f600ffa2 Import centos bootstrap os task for Alma/Rocky Linux (#12264) 2025-06-02 22:42:38 -07:00
ERIK
3454cd2c69 feat: Support certificate validity period config in kubeadm v1beta4 (#12272)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2025-06-02 20:44:37 -07:00
ChengHao Yang
0d5e18053e Test: remove bin_dir from other tasks move to common_vars.yml
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-05-29 12:43:38 +08:00
Max Gautier
2fbbf2e1e4 CI/kubevirt: Configure ignition provisioning
Flatcar does not support cloud-init
2025-05-27 23:29:56 +08:00
ant31
3597b8d7fe Kubevirt: use Ignition cloud config 2025-05-27 23:29:55 +08:00
ChengHao Yang
68d8f14f0d Update CI.md document
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-05-27 23:29:55 +08:00
ChengHao Yang
32675695d7 Add flatcar 4081 CI packet test
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-05-27 23:29:55 +08:00
Kubernetes Prow Robot
c7c3d2ba95 Merge pull request #12163 from VannTen/cleanup/etcd_inv_sample
Move etcd inventory sample doc to role defaults
2025-05-26 03:16:16 -07:00
Ali Afsharzadeh
c89c34f4d6 Update load balancers versions to Nginx 1.28.0, Haproxy 3.1.7 (#12178) 2025-05-23 20:50:34 -07:00
Max Gautier
92e8ac9de2 Remove tag 'master' (#12228)
* Remove tag master

Following its deprecation in 4b324cb0f (Rename master to control plane
- non-breaking changes only (#11394), 2024-09-06)

* Add fail fast path when using removed tags

- Used for the master tag, but this could be used for other things in
  the future
2025-05-22 01:20:36 -07:00
Anshuman Agarwala
73b3e9b557 Removed weave support (#12230) 2025-05-22 01:10:36 -07:00
Max Gautier
b79f7d79f0 docs: remove obsolete cgroups variables (#12239)
Those variables have been removed since 1bc61c9f3 (Simplify kubelet-config
template, 2023-11-23), so remove them from the docs as well.
2025-05-21 22:40:35 -07:00
Max Gautier
490dece3bf Cleanup assert after 2.28 (#12245)
Users should have used 2.28 and adapted their inventories by now.
2025-05-21 20:28:35 -07:00
Takuya Murakami
c1e3f3120c CI: Use ubuntu-2204 for crio test 2025-05-22 08:59:52 +09:00
Takuya Murakami
16c05338d9 Update cri-o to 1.33.0 for kubernetes 1.33
Use ubuntu 22.04 for molecule test of cri-o,
because the crun included in cri-o does not work on
Ubuntu 20.04.
2025-05-22 08:43:03 +09:00
Takuya Murakami
8ad1253b4f [kubernetes] Support kubernetes 1.33.1
- Add checksum entries.
- Set min required version to Kubernetes 1.31.x
- Update supported versions
- Refactor coredns_version
2025-05-21 23:56:47 +09:00
Takuya Murakami
cee065920f fix: The 'AppArmor' feature gate is removed from kubernetes 1.33
Signed-off-by: Takuya Murakami <murakami_da@nec.com>
2025-05-21 23:56:47 +09:00
ChengHao Yang
871941f663 Chore: upgrade galaxy.yml version (#12241)
* Chore: upgrade galaxy.yml version

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

* Docs: upgrade version to v2.28.0

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

---------

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-05-21 07:46:35 -07:00
Anshuman Agarwala
63cdf87915 Removed equinix provider (#12229) 2025-05-20 03:53:15 -07:00
Max Gautier
175babc4df Move some approvers to emeritus (#12156)
Thanks for your work!
2025-05-20 03:11:17 -07:00
Ekko
6c5c45b328 Allow stopping ubuntu unattended-upgrades (#12174)
Signed-off-by: Ekko Tu <lihai.tu@daocloud.io>
2025-05-20 01:07:16 -07:00
Kubernetes Prow Robot
019cf2ab42 Merge pull request #12101 from tico88612/refactor/cilium-install
Refactor Cilium CNI installation
2025-05-20 01:01:15 -07:00
dependabot[bot]
571e747689 build(deps): bump cryptography from 44.0.3 to 45.0.2 (#12235)
Bumps [cryptography](https://github.com/pyca/cryptography) from 44.0.3 to 45.0.2.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/44.0.3...45.0.2)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-version: 45.0.2
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-19 07:21:15 -07:00
ChengHao Yang
1266527014 Add cilium cli binary hash before 0.18.3
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-05-19 08:48:15 +08:00
ChengHao Yang
5e2e63ebe3 Make cilium dnsProxy transparent mode configure
When Cilium is configured to replace kube-proxy, it automatically
enables dnsProxy, which can conflict with nodelocaldns.
2025-05-19 08:48:15 +08:00
ChengHao Yang
db290ca686 Add cilium gateway api support
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-05-19 08:48:15 +08:00
ChengHao Yang
6619d98682 Add cilium hubble export dynamic content
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-05-19 08:48:15 +08:00
ChengHao Yang
b771d73fe0 Add cilium hubble export file max backups & size mb
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-05-19 08:48:15 +08:00
ChengHao Yang
65751e8193 Add cilium operator tolerations default values
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-05-19 08:48:15 +08:00
ChengHao Yang
4c16fc155f Cilium values k8sServiceHost and k8sServicePort use auto
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-05-19 08:48:14 +08:00
ChengHao Yang
dcd3461bce Cilium values use image variables
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-05-19 08:48:14 +08:00
ChengHao Yang
48f75c2c2b Upgrade Cilium related images
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-05-19 08:48:14 +08:00
ChengHao Yang
a4b73c09a7 Upgrade cilium version to 1.17.3
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-05-19 08:48:14 +08:00
ChengHao Yang
af62570110 Change cilium_kube_proxy_replacement to true for CI tests
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-05-19 08:48:14 +08:00
ChengHao Yang
bebba47eb4 Change kube_owner to root for cilium CI test
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-05-19 08:48:14 +08:00
ChengHao Yang
86437730de Use cilium-cli to install Cilium
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-05-19 08:48:14 +08:00
ChengHao Yang
6fe64323db Remove old cilium templates install
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-05-19 08:48:13 +08:00
ChengHao Yang
1e471d5eeb Upgrade outdated cilium_min_version_required
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-05-19 08:48:11 +08:00
Max Gautier
3a2862ea19 Move checksums to kubespray_defaults/vars (#12234)
The checksums are not defaults and are not meant to be changed from
the inventories.

Furthermore, role defaults have a lower priority than host facts, which
technically means a rogue host could hijack the hashes via its
variables.
2025-05-18 16:13:14 -07:00
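A minimal sketch of why vars/ is safer than defaults/ here (directory layout and variable name are illustrative, not the project's actual checksum structure): role vars override host facts, while role defaults do not.

```yaml
# roles/kubespray_defaults/vars/main.yml -- illustrative layout only
# Ansible precedence (simplified): role defaults < host facts < role vars,
# so a managed node cannot shadow these values with its own facts.
example_binary_checksums:   # hypothetical name for the example
  amd64:
    "1.2.3": "sha256:0000000000000000000000000000000000000000000000000000000000000000"
```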
Jay.H
8a4f4d13f7 fix manage-offline-container-images.sh create_registry (#11964) 2025-05-17 07:25:13 -07:00
ErmolenkoMaxim
46a0dc9a51 Add support for hubble-export-file-max-backups and max-size-mb variables (#12072)
* feat(cilium): add configurable Hubble export log rotation parameters

- Adds support for `cilium_hubble_export_file_max_backups` and `cilium_hubble_export_file_max_size_mb`
- Applies values only if `cilium_hubble_export_file_path` is defined
- Default values are set in role defaults
- Cleans up template logic by removing unnecessary conditionals

* Fix indentation for hubble export settings

* Fix undefined variable issue with ipwrap in kubeconfig override that caused pre-commit errors

* Update main.yml

rollback
2025-05-17 00:35:13 -07:00
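For reference, a group_vars sketch using the variables named in the commit message; the values are examples, not the role's actual defaults:

```yaml
# Only applied when cilium_hubble_export_file_path is defined.
cilium_hubble_export_file_path: /var/run/cilium/hubble/events.log
cilium_hubble_export_file_max_backups: 5
cilium_hubble_export_file_max_size_mb: 10
```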
Max Gautier
faae36086c Patch versions updates (#12226)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-05-16 14:13:14 -07:00
Max Gautier
9c2bdeec63 Decouple etcd defaults in a separate role
This allows us to reuse the defaults in other places without putting
everything in kubespray-defaults.

In particular, for kubernetes/control-plane.
2025-05-16 14:51:29 +02:00
ERIK
e4c0c427a3 improve NTP package conflict handling (#12212)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2025-05-16 03:55:14 -07:00
Max Gautier
bca5a4ce3b CI: remove ci-not-authorized job (#12225)
This is now handled directly at the failfast-ci level (i.e. the
Github <-> Gitlab integration).
The whole pipeline will not be triggered unless:
- The author is a maintainer
- The PR has the /ok-to-test label
2025-05-16 03:27:13 -07:00
Antoine Legrand
5c07c6e6d3 Add option to [not] install coredns via Kubespray (#12218) 2025-05-16 03:23:13 -07:00
Takuya Murakami
c6dfe22a41 Improve logging of kubeadm init failure of first control plane node (#12216)
Split retry task of 'kubeadm init' to show the failure log of
the first execution.
2025-05-16 03:01:13 -07:00
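The pattern looks roughly like the sketch below (illustrative tasks, not the actual Kubespray ones): run `kubeadm init` once without hiding its output, print the failure, then retry.

```yaml
- name: Kubeadm | Initialize first control plane node (first attempt)
  command: kubeadm init --config /etc/kubernetes/kubeadm-config.yaml --upload-certs
  register: kubeadm_init_first
  failed_when: false

- name: Kubeadm | Show the log of the failed first attempt
  debug:
    var: kubeadm_init_first.stderr_lines
  when: kubeadm_init_first.rc != 0

- name: Kubeadm | Retry initialization
  command: kubeadm init --config /etc/kubernetes/kubeadm-config.yaml --upload-certs
  when: kubeadm_init_first.rc != 0
```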
Seena Fallah
ec85b7e2c9 download: respect enable_dns_autoscaler when enabling dnsautoscaler (#12217)
dnsautoscaler should only be enabled when enable_dns_autoscaler is
set to true. Without this, it could be enabled without any manifest
actually using it, which makes it a false signal.

Signed-off-by: Seena Fallah <seenafallah@gmail.com>
2025-05-15 12:45:13 -07:00
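In terms of the download definitions, the idea is a gating condition like the sketch below; the surrounding dict layout and image variables are assumed for illustration, only the `enabled` expression matters:

```yaml
downloads:
  dnsautoscaler:
    enabled: "{{ enable_dns_autoscaler | default(false) | bool }}"
    container: true
    repo: "{{ dnsautoscaler_image_repo }}"   # assumed variable names
    tag: "{{ dnsautoscaler_image_tag }}"
    groups:
      - k8s_cluster
```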
Kubernetes Prow Robot
acd6872c80 Merge pull request #12219 from VannTen/test/ha_etcd_separate
Fix broken workaround for separate etcd setup
2025-05-15 12:39:14 -07:00
Max Gautier
22d3cf9c2b Move 'pretend certificates' **after** cert distribution
The link target will only exist after we distribute the certs on each node.
2025-05-15 18:35:34 +02:00
Max Gautier
2d3bd8686f Add testcase separate ha-etcd
Also use a distinct node to test certificate distribution.
2025-05-15 18:20:13 +02:00
Hyeonki Hong
2c3b6c9199 feat: add trigger to restart kube-apiserver when config files change (#12172)
* feat: add trigger to restart kube-apiserver when config files change

* fix: remove not upgrade_cluster_setup condition

* refactor: streamline kube-apiserver restart notifications
2025-05-15 06:51:14 -07:00
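A sketch of the handler pattern being described (not the exact Kubespray tasks): templating an apiserver config file notifies a handler, which removes the static pod sandbox so the kubelet recreates it with the new configuration.

```yaml
# tasks -- illustrative
- name: Write kube-apiserver audit policy
  template:
    src: apiserver-audit-policy.yaml.j2
    dest: "{{ kube_config_dir }}/audit-policy/apiserver-audit-policy.yaml"
  notify: Restart kube-apiserver

# handlers -- illustrative
- name: Restart kube-apiserver
  shell: crictl pods --name kube-apiserver -q | xargs -r crictl rmp -f
  # The kubelet recreates the static pod from its manifest afterwards.
```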
Max Gautier
a55932e1de Patch versions updates (#12204)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-05-14 18:55:20 -07:00
Max Gautier
973bd2e520 Stop cleaning up containerd packages (#12213)
The switch away from system packages for containerd happened
multiple releases ago; there should not be any up-to-date installation
of kubespray needing that cleanup.

Remove those steps and variables only used by them.
2025-05-13 21:07:16 -07:00
Kubernetes Prow Robot
ea7331f5fc Merge pull request #12211 from VannTen/cleanup/rename_remove_node
rename-without-hyphens: remove-node/pre-remove
2025-05-13 17:13:16 -07:00
Kubernetes Prow Robot
df241800ce Merge pull request #12203 from VannTen/cleanup/rename_bootstrap_os
Rename bootstrap-os to bootstrap_os
2025-05-13 05:03:16 -07:00
Cyclinder
8cc5694580 calico: update calico-kube-controller manifest (#12169) 2025-05-13 01:43:17 -07:00
Max Gautier
1d15baf405 Add compat and deprecation warning for bootstrap-os 2025-05-13 09:39:59 +02:00
Max Gautier
47508d5c6e Rename bootstrap-os to bootstrap_os
Role names in ansible collections should not have hyphens.
2025-05-13 09:39:54 +02:00
Max Gautier
2a1ae14275 Compat layer remove-node/pre-remove 2025-05-12 22:22:20 +02:00
Max Gautier
e361def9cd Rename remove-node/pre-remove (no hyphens for roles in collections) 2025-05-12 22:19:50 +02:00
Max Gautier
fa6888df4c kubernetes_audit: Remove redundant defaults filter (#12208) 2025-05-12 07:23:14 -07:00
Max Gautier
373b952a0c Cleanup CI scripts (#12205)
* Delete unused scripts

- gen_tags.sh: not the right file, produces garbage even if the path is fixed
- premoderator.sh: not used since ef6d24a49 (CI require a 'lgtm' or
  'ok-to-test' labels to pass (#11251), 2024-05-31)
- gitlab-branch-cleanup: unused AFAICT

* CI: inline molecule logs

Single use site -> less indirection makes it easier to read.
2025-05-12 05:53:15 -07:00
felipe88alves
9bbd597e20 create cilium_operator_tolerations variable in group_var (#12200)
- This enables the override of the tolerations for the cilium-operator deployment
- default behaviour is to leave the tolerations as-is unless the var is set
2025-05-12 03:25:15 -07:00
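An example of setting the new variable in group_vars; the toleration entries follow the standard Kubernetes schema and are only illustrative:

```yaml
cilium_operator_tolerations:
  - key: node-role.kubernetes.io/control-plane
    operator: Exists
    effect: NoSchedule
  - key: node.kubernetes.io/not-ready
    operator: Exists
    effect: NoExecute
```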
Cheolhui Kim
fceb1516b8 Update: add Cilium LB IP Pool configuration to support ranges (#12140) 2025-05-12 01:39:18 -07:00
Kubernetes Prow Robot
43e19ab281 Merge pull request #12202 from VannTen/cleanup/rename_kubespray_defaults
Rename kubespray-defaults to kubespray_defaults
2025-05-12 01:21:14 -07:00
Max Gautier
4052cd5237 Add compat and deprecation warning for kubespray-defaults 2025-05-12 09:46:07 +02:00
Kim Hyunyoung, Abel
e1be469995 fix: do not mount hubble-ui tls volume when cilium_hubble_tls_generate is false (#12143) 2025-05-11 20:27:14 -07:00
Max Gautier
23d8c9a820 CI: enabled all jobs on daily CI (#12207) 2025-05-11 19:51:14 -07:00
Max Gautier
e618421697 Don't run upgrade-patch jobs on forks (#12206)
With the current github-workflow setup, workflows are triggered on every
forked repository (which is quite wasteful).

Add a condition to only run on the main repository.
2025-05-10 06:15:14 -07:00
Max Gautier
7db2aa1cba Rename kubespray-defaults to kubespray_defaults
Role names in ansible collections should not contain hyphens.
2025-05-10 10:04:37 +02:00
Kubernetes Prow Robot
0c8dfb8e43 Merge pull request #12185 from VannTen/cleanup/iproute_with_the_rest
Move package installation to bootstrap-os
2025-05-09 20:49:14 -07:00
Max Gautier
25e4fa17a8 Split kubespray-defaults (-> network_facts)
kubespray-defaults currently does two things:
- records a number of default variable values (in particular values used
  in several places)
- gather and compose some complex network facts (in particular,
  `fallback_ip` and `no_proxy`)

There is no actual reason to couple those two things, and it makes using
the defaults more difficult (because computing the network facts is somewhat
expensive, we don't want to do it willy-nilly).

Split the two and adjust import paths as needed.
2025-05-09 21:14:26 +02:00
Max Gautier
bb4b2af02e Drop install of python-libselinux for RHEL family below 8
Support for RHEL 7 and derivatives was removed some time ago; clean up
the leftovers.
2025-05-09 21:14:25 +02:00
ChengHao Yang
27e93ee9f6 Feat: Gateway API early installation (#12189)
The Gateway API needs to be installed first if you want to use Cilium's
Gateway API functionality. The Gateway API is just CRDs without any Pod,
Deployment, etc., so I think its installation can be brought forward to
before the CNI installation.

Signed-off-by: ChengHao Yang
2025-05-09 09:47:14 -07:00
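Conceptually the ordering change amounts to something like the sketch below (module and paths are illustrative, not the project's actual task): apply the Gateway API CRD manifests from a control plane node before any CNI role runs.

```yaml
- name: Gateway API | Apply CRD manifests before the CNI is installed
  kubernetes.core.k8s:
    state: present
    src: "{{ item }}"
  loop:
    - "{{ kube_config_dir }}/gateway-api/standard-install.yaml"
  run_once: true
  delegate_to: "{{ groups['kube_control_plane'] | first }}"
```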
dependabot[bot]
65bcddb9fd build(deps): bump cryptography from 44.0.2 to 44.0.3 (#12190)
Bumps [cryptography](https://github.com/pyca/cryptography) from 44.0.2 to 44.0.3.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/44.0.2...44.0.3)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-version: 44.0.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-09 01:53:14 -07:00
Chad Swenson
76707073c4 Fix indentation on AuthorizationConfiguration task (#12197) 2025-05-09 00:05:19 -07:00
Bas
a104fb6a00 kubedns_version no longer used (#12201)
This variable is documented, but not found in the rest of the sources.
2025-05-09 00:01:14 -07:00
ERIK
1c4b18b089 fix: arm64 checksums for youki and kata-containers (#12173)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2025-05-08 19:05:14 -07:00
Max Gautier
d6d87e9a83 Move cilium_deploy_additionnaly to kubespray-default (#12191)
Instead of using default(false) all over the place, use
kubespray-defaults
2025-05-07 05:05:17 -07:00
Max Gautier
985e4ebb23 Remove versions from inventory sample (#12164)
The recommended usage of kubespray is to use the default versions.
So putting them in inventory/sample is not very helpful, and
causes:
- churn (keeping the inventory/sample up to date)
- support issues (mismatch between defaults and sample inventory)

Remove all concrete versions from the inventory sample.
2025-05-06 08:43:14 -07:00
Max Gautier
fcc294600c Workaround missing etcd certs on control plane node (#12181) 2025-05-05 01:05:57 -07:00
Max Gautier
9631b5fd44 Move etcd inventory sample doc to role defaults 2025-05-04 21:24:26 +02:00
Max Gautier
a7d681abff Install iputils with other packages 2025-05-04 21:22:49 +02:00
Max Gautier
5867fa1b9f Move back iproute install to system_packages
Packages are now installed before network facts collection, so we can
install iproute with the rest.
2025-05-04 21:22:49 +02:00
Max Gautier
1e79c7b3cb Move package install to bootstrap-os 2025-05-04 21:22:48 +02:00
Max Gautier
34d64d4d04 Remove outdated comment
bootstrap-os does not do anything in sudoers since e2ad6aad5 (bootstrap:
rework role (#4045), 2019-02-11).

So working SSH pipelining is effectively a prerequisite anyway.
2025-05-04 21:22:48 +02:00
Max Gautier
87726faab4 Move check 'sorted pkgs list' to pre-commit
This is a lint check, which should not live in the playbook itself.
2025-05-04 21:22:47 +02:00
Max Gautier
1b9919547a Split 'offline' assert into their own role
The preinstall asserts cover a number of things, many of which depend
only on the inventory and can be run without any ansible_facts
collected.

Split them off to simplify re-ordering.
2025-05-04 21:22:46 +02:00
Kubernetes Prow Robot
84d96d5195 Merge pull request #12165 from tico88612/fix/failing-test-coredns-autoscaler
Feat: add `dns_autoscaler_affinity` and remove in-place values
2025-05-03 13:17:55 -07:00
ChengHao Yang
1374a97787 Test: ubuntu22-calico-all-in-one-upgrade disable dns autoscaler
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-05-04 00:23:05 +08:00
bin.pan
6f0fc020e8 update containerd.options key name (#12170) 2025-05-02 23:27:55 -07:00
Takuya Murakami
f58a6e2057 docs: Fix offline-environment.md to add 'v' prefix of some versions (#12166)
* docs: Fix offline-environment.md to add 'v' prefix of some versions

Now some version variables (kube_version, etcd_version, etc.) don't have the 'v' prefix,
so you need to add the 'v' prefix to download URLs.

* fix: Fix offline.yml to add 'v' prefix of some versions
2025-05-02 01:57:55 -07:00
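The gist of the fix, as a hedged offline.yml sketch (URL layouts abbreviated and paths illustrative; the point is the explicit `v` in front of the version variables):

```yaml
kubectl_download_url: "{{ files_repo }}/kubernetes/v{{ kube_version }}/kubectl"
kubeadm_download_url: "{{ files_repo }}/kubernetes/v{{ kube_version }}/kubeadm"
etcd_download_url: "{{ files_repo }}/etcd/v{{ etcd_version }}/etcd-v{{ etcd_version }}-linux-{{ image_arch }}.tar.gz"
```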
Ali Afsharzadeh
09fad4886a Fix path to facts.yml in node facts refresh section (#12177) 2025-05-02 00:39:56 -07:00
Ho Kim
c47711c2f2 fix: correct indent of cpuManagerPolicyOptions (#12123) 2025-05-02 00:27:56 -07:00
Karthik S
a3e6e66204 Etcd Certificates are not generated when adding nodes to an existing cluster with scale.yml (#12120)
* [Issue-12117]-Certificates for the new hosts are not generated during scale.yml

* [Issue-12117]-Certificates for the new hosts are not generated during scale.yml

* [Issue-12117]-Certificates for the new hosts are not generated during scale.yml
2025-05-02 00:03:56 -07:00
ChengHao Yang
2907936c85 Feat: add dns_autoscaler_affinity and remove in-place values
Upstream has removed the affinity; this also fixes the failing upgrade test.

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-04-28 19:18:19 +08:00
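A group_vars example for the new variable; the affinity block is ordinary Kubernetes scheduling configuration and the values are illustrative:

```yaml
dns_autoscaler_affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              k8s-app: dns-autoscaler
          topologyKey: kubernetes.io/hostname
```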
ChengHao Yang
71a323039f Fix: kubelet-csr-approver moves to regular application installation (#12141)
This commit fixes the process to ensure that the CCM is installed first, to
avoid the chicken-and-egg problem.

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-04-28 01:27:26 -07:00
ChengHao Yang
5e5e509698 Revert "Update cluster-proportional-autoscaler to v1.9.0 (#11982)" (#12168)
This reverts commit 16841a1fb0.
2025-04-28 01:23:32 -07:00
Takuya Murakami
4a598c1ef3 Make kubernetes 1.32.4 default (#12161) 2025-04-25 01:22:30 -07:00
Aviral Agarwal
1da9f0dec4 Fixed kube-vip to use kube-vip/kube-vip-iptables image instead of kube-vip/kube-vip when lb_fwdmethod or kube_vip_lb_fwdmethod is set to masquerade (#12145) 2025-04-24 15:54:30 -07:00
ShinyaIshitobi
629a690886 fix: Enable NRI for containerd and disable plugin when nri_enabled is false (#12152)
* fix(containerd): always render NRI plugin block with conditional disable flag

* feat: enable Node Resource Interface plugin when using containerd

* fix: remove the

* fix: fix for linter
2025-04-24 01:40:33 -07:00
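In group_vars terms the change boils down to the toggle below; when it is false, the rendered containerd config is expected to keep the NRI plugin section (`io.containerd.nri.v1.nri` in containerd's schema) but with its disable flag set. Variable name taken from the commit, behaviour summarised as a sketch:

```yaml
# Enable the Node Resource Interface plugin in containerd's config.toml.
nri_enabled: true
```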
Mathieu Parent
16841a1fb0 Update cluster-proportional-autoscaler to v1.9.0 (#11982) 2025-04-24 01:32:37 -07:00
ERIK
22c19a40fa feat: Update containerd and nerdctl checksums to latest versions (#12154)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2025-04-24 01:02:31 -07:00
ERIK
8f41a2886d Update version comparison syntax and optimize whitespace (#12146)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2025-04-24 00:56:31 -07:00
Max Gautier
38cea5b866 Patch versions updates (#12119)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-04-23 21:48:30 -07:00
Ekko
4177289ef6 Fix typo in .gitlab-ci/kubevirt.yml (#12134)
Signed-off-by: Ekko Tu <lihai.tu@daocloud.io>
2025-04-18 03:59:06 -07:00
Kubernetes Prow Robot
4ad9f9b535 Merge pull request #11763 from tico88612/feat/gateway-api-v1.2.1
Refactor Gateway API installation process and bump Gateway API v1.2.1
2025-04-11 08:38:42 -07:00
ChengHao Yang
6f58b33de0 Deprecate gateway_api_experimental_channel
Please use `gateway_api_channel` and set it to `experimental`.

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-04-11 23:04:01 +08:00
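Migration sketch for inventories (both variable names come from the commit message):

```yaml
# Before (deprecated):
# gateway_api_experimental_channel: true
# After:
gateway_api_channel: experimental
```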
ChengHao Yang
9456e792f1 Remove unused Gateway API template
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-04-11 22:57:00 +08:00
ChengHao Yang
7f60dda565 Refactor Gateway API manifests installation process
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-04-11 22:57:00 +08:00
ChengHao Yang
582fe2cbde Add Gateway API download information in kubespray-default
Remove old variables in kubernetes-apps/gateway_api

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-04-11 22:57:00 +08:00
Max Gautier
79fbfdf271 component_hash_update: support calico_crds (#12122)
- add support for "no_arch" downloads: arch-independent files such as
  YAML manifests, helm charts, etc.
- wire calico_crds with it.
2025-04-10 02:18:47 -07:00
ChengHao Yang
cfaf397d4a Bump: OpenStack Cloud Controller Manager upgrade to v1.32.0 (#12121)
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-04-10 01:44:41 -07:00
Kubernetes Prow Robot
2f404de77c Merge pull request #12037 from VannTen/ci/convert_vagrant_to_kubevirt_2
CI: convert remaining vagrant jobs (except IPv6) to kubevirt + cleanups
2025-04-09 01:16:42 -07:00
Mohammd Reza Mollasalehi
d304966d75 doc: fix a broken link in the Calico documentation (#12108) (#12109) 2025-04-08 06:32:46 -07:00
ChengHao Yang
4ce5510c1a [rbd-provisioner] deprecate outdated application and documentation (#12114)
* Cleanup: deprecate rbd-provisioner application

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

* Docs: remove rbd-provisioner application

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

---------

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-04-08 06:22:44 -07:00
ChengHao Yang
8032b8281d [cephfs-provisioner] deprecate outdated application and documentation (#12113)
* Cleanup: deprecated CephFS application

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

* Docs: Remove CephFS Application

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>

---------

Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
2025-04-08 03:08:39 -07:00
Farshad Asadpour
45ecceb3e1 docs(terraform): update command for destroying infrastructure in README (#12111) 2025-04-08 02:16:39 -07:00
Max Gautier
5a6ef1dafa Timeout on RHEL subscription check (#12115)
subscription-manager status can in some circumstances just never
terminate, with nothing in the Ansible playbook log indicating the
problem.
This makes it difficult to find the misbehaving hosts.

Add a timeout to the subscription checks (defaulting to 3 minutes). This
should be more than enough for normal circumstances while allowing
easier troubleshooting, as the hosts will be FAILED instead of the
playbook just waiting indefinitely.
2025-04-08 01:24:44 -07:00
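A sketch of the guarded check (illustrative task, not the exact Kubespray one); Ansible's `timeout` task keyword aborts the command instead of letting it hang:

```yaml
- name: Check RHEL subscription status
  command: subscription-manager status
  register: rh_subscription_status
  changed_when: false
  failed_when: false
  timeout: 180   # seconds; matches the 3-minute default mentioned above
```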
Max Gautier
0ae9ab36ce CI: Pin github actions for security (#12105)
Dependabot can still upgrade the action version.
2025-04-03 06:22:38 -07:00
Bas
cf48915657 Documenting offline installation with secure files repo and registry. (#11993)
* Add config for addon helm and local_path_provisioner

* Documenting offline installation with secure files_repo

* Documenting offline installation with secure registry
2025-04-03 02:06:37 -07:00
Fredrik Liv
6f74ef17f7 Upcloud: Add possibility to setup cluster using nodes with no public IPs (#11696)
* terraform upcloud: Added possibility to set up nodes with only private IPs

* terraform upcloud: add support for gateway in private zone

* terraform upcloud: split LB proxy protocol config per backend

* terraform upcloud: fix flexible plans

* terraform upcloud: Removed overview of cluster setup

---------

Co-authored-by: davidumea <david.andersson@elastisys.com>
2025-04-01 07:58:42 -07:00
Max Gautier
fe2ab898b8 component_hash_update: remove obsolete todos (#12098) 2025-03-31 15:18:35 -07:00
dependabot[bot]
c8b8567781 build(deps): bump actions/checkout from 3 to 4 (#12089)
Bumps [actions/checkout](https://github.com/actions/checkout) from 3 to 4.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-31 01:40:34 -07:00
dependabot[bot]
bf86c14d35 build(deps): bump redhat-plumbers-in-action/advanced-issue-labeler (#12090)
Bumps [redhat-plumbers-in-action/advanced-issue-labeler](https://github.com/redhat-plumbers-in-action/advanced-issue-labeler) from 2 to 3.
- [Release notes](https://github.com/redhat-plumbers-in-action/advanced-issue-labeler/releases)
- [Commits](https://github.com/redhat-plumbers-in-action/advanced-issue-labeler/compare/v2...v3)

---
updated-dependencies:
- dependency-name: redhat-plumbers-in-action/advanced-issue-labeler
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-31 01:14:35 -07:00
dependabot[bot]
e47eb4bc7f build(deps): bump pytest-testinfra from 10.1.1 to 10.2.2 (#12096)
Bumps [pytest-testinfra](https://github.com/pytest-dev/pytest-testinfra) from 10.1.1 to 10.2.2.
- [Release notes](https://github.com/pytest-dev/pytest-testinfra/releases)
- [Changelog](https://github.com/pytest-dev/pytest-testinfra/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pytest-dev/pytest-testinfra/compare/10.1.1...10.2.2)

---
updated-dependencies:
- dependency-name: pytest-testinfra
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-31 01:10:35 -07:00
Max Gautier
5222f48978 auto-update: use a branch prefix rather than suffix (#12097)
This is more in line with dependabot and similar auto-updaters.

Reduce CI coverage for github action updates (they do not change
kubespray code, so there is no need for testing).
2025-03-31 01:04:36 -07:00
Max Gautier
7b6b7318b2 Remove unused manifest (docs) (#12092)
This file is no longer referenced since e0d67367e (Update installation
doc with vagrant (#8406), 2022-01-11).
2025-03-29 11:26:34 -07:00
Kubernetes Prow Robot
f02d313fee Merge pull request #12093 from VannTen/cleanup/contrib
Cleanup old things in contrib/
2025-03-29 10:16:34 -07:00
Max Gautier
7c9870d15b Remove contrib/mitogen
- the playbook does not work
- the mitogen version is not up to date

This strongly suggests it is not used; let's drop it.
2025-03-28 09:49:28 +01:00
Max Gautier
c8ea1468d1 Remove unmaintained contrib: kvm-setup 2025-03-28 09:39:30 +01:00
Max Gautier
0fc56ed344 CI: fix terraform
- add default testcase
- fix ansible ssh connection
2025-03-26 20:05:26 +01:00
Max Gautier
5c4e597987 CI: workaround build: disable rebase 2025-03-26 20:05:25 +01:00
Max Gautier
ef133fd93d CI: clean up leftover things
include_vars is redundant as the file is already included by extra_vars
2025-03-26 20:05:25 +01:00
Max Gautier
f6ca3bf477 CI: simplify image build job 2025-03-26 20:05:24 +01:00
Max Gautier
b9e251ac7a CI: cleanup terraform + deduplicate and simplify 2025-03-26 20:05:23 +01:00
Max Gautier
43fceebdd3 CI: convert vagrant jobs to kubevirt
Vagrant jobs need a big cache, which makes them slow or sometimes stuck
completely. Using the kubevirt provisioning playbook is now
significantly faster, so do just that.

Having only one provisioner in CI will also allow us to remove some of
the custom runner executors we use for vagrant, and more generally
reduce the CI maintenance.

Our kubevirt CI platform does not support IPv6 yet, so we keep the
relevant jobs on vagrant, but we'll migrate them as well as soon as
possible.
2025-03-26 20:05:21 +01:00
Max Gautier
862aec4dc6 CI: remove 'packet' from jobs name + rename to kubevirt
This is more accurate, the name 'packet' being an artifact of history
(the Kubevirt jobs used to run on Packet, the previous name of Equinix).
2025-03-26 14:32:26 +01:00
Max Gautier
4f3b214ef5 CI: streamline packet jobs definition
- Take advantage of `parallel:matrix` to make the jobs definition shorter
  and more readable.
- Remove helper scripts which are no longer needed
- Remove redundant indirection in the gitlab-ci pipelines definitions
  (only one user)
2025-03-26 14:32:24 +01:00
493 changed files with 6628 additions and 41262 deletions

View File

@@ -12,10 +12,12 @@ skip_list:
# (Disabled in June 2021)
- 'role-name'
# [var-naming] "defaults/main.yml" File defines variable 'apiVersion' that violates variable naming standards
# [var-naming]
# In Kubespray we use variables that use camelCase to match their k8s counterparts
# (Disabled in June 2021)
- 'var-naming'
- 'var-naming[pattern]'
# Variables names from within roles in kubespray don't need role name as a prefix
- 'var-naming[no-role-prefix]'
# [fqcn-builtins]
# Roles in kubespray don't need fully qualified collection names
@@ -39,5 +41,7 @@ exclude_paths:
- .github
- .ansible
- .cache
- .gitlab-ci.yml
- .gitlab-ci
mock_modules:
- gluster.gluster.gluster_volume

View File

@@ -108,7 +108,6 @@ body:
- meta
- multus
- ovn4nfv
- weave
validations:
required: true

View File

@@ -16,5 +16,6 @@ updates:
directory: "/"
labels:
- release-note-none
- ci-short
schedule:
interval: "weekly"

View File

@@ -13,16 +13,16 @@ jobs:
issues: write
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- name: Parse issue form
uses: stefanbuck/github-issue-parser@v3
uses: stefanbuck/github-issue-parser@2ea9b35a8c584529ed00891a8f7e41dc46d0441e
id: issue-parser
with:
template-path: .github/ISSUE_TEMPLATE/bug-report.yaml
- name: Set labels based on OS field
uses: redhat-plumbers-in-action/advanced-issue-labeler@v2
uses: redhat-plumbers-in-action/advanced-issue-labeler@e38e6809c5420d038eed380d49ee9a6ca7c92dbf
with:
issue-form: ${{ steps.issue-parser.outputs.jsonString }}
section: os

View File

@@ -8,18 +8,19 @@ on:
permissions: {}
jobs:
get-releases-branches:
if: github.repository == 'kubernetes-sigs/kubespray'
runs-on: ubuntu-latest
outputs:
branches: ${{ steps.get-branches.outputs.data }}
steps:
- uses: octokit/graphql-action@v2.3.2
- uses: octokit/graphql-action@8ad880e4d437783ea2ab17010324de1075228110
id: get-branches
with:
query: |
query get_release_branches($owner:String!, $name:String!) {
repository(owner:$owner, name:$name) {
refs(refPrefix: "refs/heads/",
first: 0, # TODO increment once we have release branch with the new checksums format
first: 1, # TODO increment once we have release branch with the new checksums format
query: "release-",
orderBy: {
field: ALPHABETICAL,

View File

@@ -11,10 +11,10 @@ jobs:
update-patch-versions:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
with:
ref: ${{ inputs.branch }}
- uses: actions/setup-python@v5
- uses: actions/setup-python@v6
with:
python-version: '3.13'
cache: 'pip'
@@ -29,12 +29,12 @@ jobs:
~/.cache/pre-commit
- run: pre-commit run --all-files propagate-ansible-variables
continue-on-error: true
- uses: peter-evans/create-pull-request@v7
- uses: peter-evans/create-pull-request@271a8d0340265f705b14b6d32b9829c1cb33d45e
with:
commit-message: Patch versions updates
title: Patch versions updates - ${{ inputs.branch }}
labels: bot
branch: ${{ inputs.branch }}-patch-updates
branch: component_hash_update/${{ inputs.branch }}
sign-commits: true
body: |
/kind feature

View File

@@ -1,9 +1,9 @@
---
stages:
- build
- test
- deploy-part1
- deploy-extended
- build # build docker image used in most other jobs
- test # unit tests
- deploy-part1 # kubespray runs - common setup
- deploy-extended # kubespray runs - rarer or costlier (to test) setups
variables:
FAILFASTCI_NAMESPACE: 'kargo-ci'
@@ -24,6 +24,7 @@ variables:
ANSIBLE_REMOTE_USER: kubespray
ANSIBLE_PRIVATE_KEY_FILE: /tmp/id_rsa
ANSIBLE_INVENTORY: /tmp/inventory
ANSIBLE_STDOUT_CALLBACK: "default"
RESET_CHECK: "false"
REMOVE_NODE_CHECK: "false"
UPGRADE_TEST: "false"
@@ -31,12 +32,12 @@ variables:
ANSIBLE_VERBOSITY: 2
RECOVER_CONTROL_PLANE_TEST: "false"
RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[2:]:kube_control_plane[1:]"
TERRAFORM_VERSION: 1.3.7
OPENTOFU_VERSION: v1.9.1
PIPELINE_IMAGE: "$CI_REGISTRY_IMAGE/pipeline:${CI_PIPELINE_ID}-${CI_COMMIT_SHORT_SHA}"
before_script:
- ./tests/scripts/rebase.sh
- mkdir -p /.ssh
- mkdir -p cluster-dump $ANSIBLE_INVENTORY
.job: &job
tags:
@@ -48,60 +49,18 @@ before_script:
- cluster-dump/
needs:
- pipeline-image
variables:
ANSIBLE_STDOUT_CALLBACK: "debug"
.job-moderated:
extends: .job
needs:
- pipeline-image
- ci-not-authorized
- pre-commit # lint
- vagrant-validate # lint
.testcases: &testcases
extends: .job-moderated
interruptible: true
before_script:
- update-alternatives --install /usr/bin/python python /usr/bin/python3 1
- ./tests/scripts/rebase.sh
- ./tests/scripts/testcases_prepare.sh
script:
- ./tests/scripts/testcases_run.sh
after_script:
- ./tests/scripts/testcases_cleanup.sh
# For failfast, at least 1 job must be defined in .gitlab-ci.yml
# Premoderated with manual actions
ci-not-authorized:
stage: build
before_script: []
after_script: []
rules:
# LGTM or ok-to-test labels
- if: $PR_LABELS =~ /.*,(lgtm|approved|ok-to-test).*|^(lgtm|approved|ok-to-test).*/i
variables:
CI_OK_TO_TEST: '0'
when: always
- if: $CI_PIPELINE_SOURCE == "schedule" || $CI_PIPELINE_SOURCE == "trigger"
variables:
CI_OK_TO_TEST: '0'
- if: $CI_COMMIT_BRANCH == "master"
variables:
CI_OK_TO_TEST: '0'
- when: always
variables:
CI_OK_TO_TEST: '1'
script:
- exit $CI_OK_TO_TEST
tags:
- ffci
needs: []
include:
- .gitlab-ci/build.yml
- .gitlab-ci/lint.yml
- .gitlab-ci/terraform.yml
- .gitlab-ci/packet.yml
- .gitlab-ci/kubevirt.yml
- .gitlab-ci/vagrant.yml
- .gitlab-ci/molecule.yml

View File

@@ -1,5 +1,5 @@
---
.build-container:
pipeline-image:
cache:
key: $CI_COMMIT_REF_SLUG
paths:
@@ -7,27 +7,24 @@
tags:
- ffci
stage: build
image:
name: gcr.io/kaniko-project/executor:debug
entrypoint: ['']
image: moby/buildkit:rootless
variables:
TAG: $CI_COMMIT_SHORT_SHA
PROJECT_DIR: $CI_PROJECT_DIR
DOCKERFILE: Dockerfile
GODEBUG: "http2client=0"
BUILDKITD_FLAGS: --oci-worker-no-process-sandbox
CACHE_IMAGE: $CI_REGISTRY_IMAGE/pipeline:cache
# TODO: remove the override
# currently rebase.sh depends on bash (not available in the kaniko image)
# once we have a simpler rebase (which should be easy if the target branch ref is available as variable
# we'll be able to rebase here as well hopefully
before_script:
- echo "{\"auths\":{\"$CI_REGISTRY\":{\"auth\":\"$(echo -n ${CI_REGISTRY_USER}:${CI_REGISTRY_PASSWORD} | base64)\"}}}" > /kaniko/.docker/config.json
- mkdir -p ~/.docker
- echo "{\"auths\":{\"$CI_REGISTRY\":{\"auth\":\"$(echo -n ${CI_REGISTRY_USER}:${CI_REGISTRY_PASSWORD} | base64)\"}}}" > ~/.docker/config.json
script:
- /kaniko/executor --cache=true
--cache-dir=image-cache
--context $PROJECT_DIR
--dockerfile $PROJECT_DIR/$DOCKERFILE
--label 'git-branch'=$CI_COMMIT_REF_SLUG
--label 'git-tag=$CI_COMMIT_TAG'
--destination $PIPELINE_IMAGE
--log-timestamp=true
pipeline-image:
extends: .build-container
variables:
DOCKERFILE: pipeline.Dockerfile
- |
buildctl-daemonless.sh build \
--frontend dockerfile.v0 \
--local context=$CI_PROJECT_DIR \
--local dockerfile=$CI_PROJECT_DIR \
--opt filename=pipeline.Dockerfile \
--export-cache type=registry,ref=$CACHE_IMAGE \
--import-cache type=registry,ref=$CACHE_IMAGE \
--output type=image,name=$PIPELINE_IMAGE,push=true

.gitlab-ci/kubevirt.yml (new file, 153 lines)
View File

@@ -0,0 +1,153 @@
---
.kubevirt:
extends: .job-moderated
interruptible: true
script:
- ansible-playbook tests/cloud_playbooks/create-kubevirt.yml
-e @"tests/files/${TESTCASE}.yml"
- ./tests/scripts/testcases_run.sh
variables:
ANSIBLE_TIMEOUT: "120"
tags:
- ffci
needs:
- pipeline-image
# TODO: generate testcases matrixes from the files in tests/files/
# this is needed to avoid the need for PR rebasing when a job was added or removed in the target branch
# (currently, a removed job in the target branch breaks the tests, because the
# pipeline definition is parsed by gitlab before the rebase.sh script)
# CI template for PRs
pr:
stage: deploy-part1
rules:
- if: $PR_LABELS =~ /.*ci-short.*/
when: manual
allow_failure: true
- if: $CI_COMMIT_BRANCH =~ /^pr-.*$/
when: on_success
- if: $CI_PIPELINE_SOURCE == "schedule" && $CI_PIPELINE_SCHEDULE_DESCRIPTION == "daily-ci"
when: on_success
- when: manual
allow_failure: true
extends: .kubevirt
parallel:
matrix:
- TESTCASE:
- almalinux9-crio
- almalinux9-kube-ovn
- debian11-calico-collection
- debian11-macvlan
- debian12-cilium
- debian13-cilium
- fedora39-kube-router
- openeuler24-calico
- rockylinux9-cilium
- ubuntu22-calico-all-in-one
- ubuntu22-calico-all-in-one-upgrade
- ubuntu24-calico-etcd-datastore
- ubuntu24-calico-all-in-one-hardening
- ubuntu24-cilium-sep
- ubuntu24-flannel-collection
- ubuntu24-kube-router-sep
- ubuntu24-kube-router-svc-proxy
- ubuntu24-ha-separate-etcd
- flatcar4081-calico
- fedora40-flannel-crio-collection-scale
# The ubuntu24-calico-all-in-one jobs are meant as early stages to prevent running the full CI if something is horribly broken
ubuntu24-calico-all-in-one:
stage: deploy-part1
extends: .kubevirt
variables:
TESTCASE: ubuntu24-calico-all-in-one
rules:
- if: $CI_COMMIT_BRANCH =~ /^pr-.*$/
when: on_success
- if: $CI_PIPELINE_SOURCE == "schedule" && $CI_PIPELINE_SCHEDULE_DESCRIPTION == "daily-ci"
when: on_success
- when: manual
allow_failure: true
pr_full:
extends: .kubevirt
stage: deploy-extended
rules:
- if: $PR_LABELS =~ /.*ci-full.*/
when: on_success
- if: $CI_PIPELINE_SOURCE == "schedule" && $CI_PIPELINE_SCHEDULE_DESCRIPTION == "daily-ci"
when: on_success
# Else run as manual
- when: manual
allow_failure: true
parallel:
matrix:
- TESTCASE:
- almalinux9-calico-ha-ebpf
- almalinux9-calico-nodelocaldns-secondary
- debian11-custom-cni
- debian11-kubelet-csr-approver
- debian12-custom-cni-helm
- fedora39-calico-swap-selinux
- fedora39-crio
- ubuntu24-calico-ha-wireguard
- ubuntu24-flannel-ha
- ubuntu24-flannel-ha-once
# Need an update of the container image to use schema v2
# update: quay.io/kubespray/vm-amazon-linux-2:latest
manual:
extends: pr_full
parallel:
matrix:
- TESTCASE:
- amazon-linux-2-all-in-one
rules:
- when: manual
allow_failure: true
pr_extended:
extends: .kubevirt
stage: deploy-extended
rules:
- if: $PR_LABELS =~ /.*(ci-extended|ci-full).*/
when: on_success
- if: $CI_PIPELINE_SOURCE == "schedule" && $CI_PIPELINE_SCHEDULE_DESCRIPTION == "daily-ci"
when: on_success
- when: manual
allow_failure: true
parallel:
matrix:
- TESTCASE:
- almalinux9-calico
- almalinux9-calico-remove-node
- almalinux9-docker
- debian11-docker
- debian12-calico
- debian12-docker
- debian13-calico
- rockylinux9-calico
- ubuntu22-all-in-one-docker
- ubuntu24-all-in-one-docker
- ubuntu24-calico-all-in-one
- ubuntu24-calico-etcd-kubeadm
- ubuntu24-flannel
# TODO: migrate to pr-full, fix the broken ones
periodic:
allow_failure: true
extends: .kubevirt
rules:
- if: $CI_PIPELINE_SOURCE == "schedule" && $CI_PIPELINE_SCHEDULE_DESCRIPTION == "daily-ci"
when: on_success
parallel:
matrix:
- TESTCASE:
- debian11-calico-upgrade
- debian11-calico-upgrade-once
- debian12-cilium-svc-proxy
- fedora39-calico-selinux
- fedora40-docker-calico
- ubuntu24-calico-etcd-kubeadm-upgrade-ha
- ubuntu24-calico-ha-recover
- ubuntu24-calico-ha-recover-noquorum

View File

@@ -6,6 +6,7 @@ pre-commit:
image: 'ghcr.io/pre-commit-ci/runner-image@sha256:fe01a6ec51b298412990b88627c3973b1146c7304f930f469bafa29ba60bcde9'
variables:
PRE_COMMIT_HOME: ${CI_PROJECT_DIR}/.cache/pre-commit
ANSIBLE_STDOUT_CALLBACK: default
script:
- pre-commit run --all-files --show-diff-on-failure
cache:
@@ -23,4 +24,3 @@ vagrant-validate:
VAGRANT_VERSION: 2.3.7
script:
- ./tests/scripts/vagrant-validate.sh
except: ['triggers', 'master']

View File

@@ -1,19 +1,24 @@
---
.molecule:
tags: [ffci]
only: [/^pr-.*$/]
except: ['triggers']
rules: # run on ci-short as well
- if: $CI_COMMIT_BRANCH =~ /^pr-.*$/
when: on_success
- if: $CI_PIPELINE_SOURCE == "schedule" && $CI_PIPELINE_SCHEDULE_DESCRIPTION == "daily-ci"
when: on_success
- when: manual
allow_failure: true
stage: deploy-part1
image: $PIPELINE_IMAGE
needs:
- pipeline-image
# - ci-not-authorized
before_script:
- ./tests/scripts/rebase.sh
script:
- ./tests/scripts/molecule_run.sh
after_script:
- ./tests/scripts/molecule_logs.sh
- rm -fr molecule_logs
- mkdir -p molecule_logs
- find ~/.cache/molecule/ \( -name '*.out' -o -name '*.err' \) -type f | xargs tar -uf molecule_logs/molecule.tar
- gzip molecule_logs/molecule.tar
artifacts:
when: always
paths:
@@ -29,28 +34,22 @@ molecule:
- container-engine/cri-dockerd
- container-engine/containerd
- container-engine/cri-o
- container-engine/gvisor
- container-engine/youki
- adduser
- bastion-ssh-config
- bootstrap-os
- bootstrap_os
# CI template for periodic CI jobs
# Enabled when PERIODIC_CI_ENABLED var is set
molecule_full:
only:
variables:
- $PERIODIC_CI_ENABLED
allow_failure: true
rules:
- if: $CI_PIPELINE_SOURCE == "schedule" && $CI_PIPELINE_SCHEDULE_DESCRIPTION == "daily-ci"
when: on_success
- when: manual
allow_failure: true
extends: molecule
parallel:
matrix:
- ROLE:
- container-engine/cri-dockerd
- container-engine/containerd
- container-engine/cri-o
- adduser
- bastion-ssh-config
- bootstrap-os
# FIXME : tests below are perma-failing
- container-engine/kata-containers
- container-engine/gvisor
- container-engine/youki

View File

@@ -1,257 +0,0 @@
---
.packet:
extends: .testcases
variables:
ANSIBLE_TIMEOUT: "120"
CI_PLATFORM: packet
SSH_USER: kubespray
tags:
- ffci
needs:
- pipeline-image
- ci-not-authorized
# CI template for PRs
.packet_pr:
stage: deploy-part1
rules:
- if: $PR_LABELS =~ /.*ci-short.*/
when: manual
allow_failure: true
- if: $CI_COMMIT_BRANCH =~ /^pr-.*$/
when: on_success
- when: manual
allow_failure: true
extends: .packet
## Uncomment this to have multiple stages
# needs:
# - packet_ubuntu20-calico-all-in-one
.packet_pr_short:
stage: deploy-part1
extends: .packet
rules:
- if: $CI_COMMIT_BRANCH =~ /^pr-.*$/
when: on_success
- when: manual
allow_failure: true
.packet_pr_manual:
extends: .packet_pr
stage: deploy-extended
rules:
- if: $PR_LABELS =~ /.*ci-full.*/
when: on_success
# Else run as manual
- when: manual
allow_failure: true
.packet_pr_extended:
extends: .packet_pr
stage: deploy-extended
rules:
- if: $PR_LABELS =~ /.*(ci-extended|ci-full).*/
when: on_success
- when: manual
allow_failure: true
# CI template for periodic CI jobs
# Enabled when PERIODIC_CI_ENABLED var is set
.packet_periodic:
only:
variables:
- $PERIODIC_CI_ENABLED
allow_failure: true
extends: .packet
# The ubuntu20-calico-all-in-one jobs are meant as early stages to prevent running the full CI if something is horribly broken
packet_ubuntu20-calico-all-in-one:
stage: deploy-part1
extends: .packet_pr_short
variables:
RESET_CHECK: "true"
# ### PR JOBS PART2
packet_ubuntu20-crio:
extends: .packet_pr_manual
packet_ubuntu22-calico-all-in-one:
extends: .packet_pr
packet_ubuntu22-calico-all-in-one-upgrade:
extends: .packet_pr
variables:
UPGRADE_TEST: graceful
packet_ubuntu24-calico-etcd-datastore:
extends: .packet_pr
packet_almalinux9-crio:
extends: .packet_pr
packet_almalinux9-kube-ovn:
extends: .packet_pr
packet_debian11-calico-collection:
extends: .packet_pr
packet_debian11-macvlan:
extends: .packet_pr
packet_debian12-cilium:
extends: .packet_pr
packet_almalinux8-calico:
extends: .packet_pr
packet_rockylinux8-calico:
extends: .packet_pr
packet_rockylinux9-cilium:
extends: .packet_pr
variables:
RESET_CHECK: "true"
# Need an update of the container image to use schema v2
# update: quay.io/kubespray/vm-amazon-linux-2:latest
packet_amazon-linux-2-all-in-one:
extends: .packet_pr_manual
rules:
- when: manual
allow_failure: true
packet_opensuse15-6-calico:
extends: .packet_pr
packet_ubuntu20-cilium-sep:
extends: .packet_pr
packet_openeuler24-calico:
extends: .packet_pr
packet_ubuntu20-calico-all-in-one-hardening:
extends: .packet_pr
## Extended
packet_debian11-docker:
extends: .packet_pr_extended
packet_debian12-docker:
extends: .packet_pr_extended
packet_debian12-calico:
extends: .packet_pr_extended
packet_almalinux9-calico-remove-node:
extends: .packet_pr_extended
variables:
REMOVE_NODE_CHECK: "true"
REMOVE_NODE_NAME: "instance-3"
packet_rockylinux9-calico:
extends: .packet_pr_extended
packet_almalinux9-calico:
extends: .packet_pr_extended
packet_almalinux9-docker:
extends: .packet_pr_extended
packet_opensuse15-6-docker-cilium:
extends: .packet_pr_extended
packet_ubuntu24-calico-all-in-one:
extends: .packet_pr_extended
packet_ubuntu20-calico-etcd-kubeadm:
extends: .packet_pr_extended
packet_ubuntu24-all-in-one-docker:
extends: .packet_pr_extended
packet_ubuntu22-all-in-one-docker:
extends: .packet_pr_extended
# ### MANUAL JOBS
packet_fedora39-crio:
extends: .packet_pr_manual
packet_ubuntu20-flannel-ha:
extends: .packet_pr_manual
packet_ubuntu20-all-in-one-docker:
extends: .packet_pr_manual
packet_ubuntu20-flannel-ha-once:
extends: .packet_pr_manual
packet_fedora39-calico-swap-selinux:
extends: .packet_pr_manual
packet_almalinux9-calico-ha-ebpf:
extends: .packet_pr_manual
packet_almalinux9-calico-nodelocaldns-secondary:
extends: .packet_pr_manual
packet_debian11-custom-cni:
extends: .packet_pr_manual
packet_debian11-kubelet-csr-approver:
extends: .packet_pr_manual
packet_debian12-custom-cni-helm:
extends: .packet_pr_manual
packet_ubuntu20-calico-ha-wireguard:
extends: .packet_pr_manual
# PERIODIC
packet_fedora40-docker-calico:
stage: deploy-extended
extends: .packet_periodic
variables:
RESET_CHECK: "true"
packet_fedora39-calico-selinux:
stage: deploy-extended
extends: .packet_periodic
packet_ubuntu20-calico-etcd-kubeadm-upgrade-ha:
stage: deploy-extended
extends: .packet_periodic
variables:
UPGRADE_TEST: basic
packet_debian11-calico-upgrade-once:
stage: deploy-extended
extends: .packet_periodic
variables:
UPGRADE_TEST: graceful
packet_ubuntu20-calico-ha-recover:
stage: deploy-extended
extends: .packet_periodic
variables:
RECOVER_CONTROL_PLANE_TEST: "true"
RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[2:]:kube_control_plane[1:]"
packet_ubuntu20-calico-ha-recover-noquorum:
stage: deploy-extended
extends: .packet_periodic
variables:
RECOVER_CONTROL_PLANE_TEST: "true"
RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[1:]:kube_control_plane[1:]"
packet_debian11-calico-upgrade:
stage: deploy-extended
extends: .packet_periodic
variables:
UPGRADE_TEST: graceful
packet_debian12-cilium-svc-proxy:
stage: deploy-extended
extends: .packet_periodic

View File

@@ -3,39 +3,41 @@
.terraform_install:
extends: .job
needs:
- ci-not-authorized
- pipeline-image
variables:
TF_VAR_public_key_path: "${ANSIBLE_PRIVATE_KEY_FILE}.pub"
TF_VAR_ssh_private_key_path: $ANSIBLE_PRIVATE_KEY_FILE
CLUSTER: $CI_COMMIT_REF_NAME
TERRAFORM_STATE_ROOT: $CI_PROJECT_DIR
stage: deploy-part1
before_script:
- update-alternatives --install /usr/bin/python python /usr/bin/python3 1
- ./tests/scripts/rebase.sh
- ./tests/scripts/testcases_prepare.sh
- ./tests/scripts/terraform_install.sh
# Set Ansible config
- cp ansible.cfg ~/.ansible.cfg
# Prepare inventory
- mkdir -p cluster-dump $ANSIBLE_INVENTORY
- ./tests/scripts/opentofu_install.sh
- cp contrib/terraform/$PROVIDER/sample-inventory/cluster.tfvars .
- ln -s contrib/terraform/$PROVIDER/hosts
- terraform -chdir="contrib/terraform/$PROVIDER" init
# Copy SSH keypair
- mkdir -p ~/.ssh
- echo "$PACKET_PRIVATE_KEY" | base64 -d > ~/.ssh/id_rsa
- chmod 400 ~/.ssh/id_rsa
- echo "$PACKET_PUBLIC_KEY" | base64 -d > ~/.ssh/id_rsa.pub
- mkdir -p contrib/terraform/$PROVIDER/group_vars
# Random subnet to avoid routing conflicts
- export TF_VAR_subnet_cidr="10.$(( $RANDOM % 256 )).$(( $RANDOM % 256 )).0/24"
- ln -rs -t $ANSIBLE_INVENTORY contrib/terraform/$PROVIDER/hosts
- tofu -chdir="contrib/terraform/$PROVIDER" init
.terraform_validate:
terraform_validate:
extends: .terraform_install
tags: [ffci]
only: ['master', /^pr-.*$/]
script:
- terraform -chdir="contrib/terraform/$PROVIDER" validate
- terraform -chdir="contrib/terraform/$PROVIDER" fmt -check -diff
- tofu -chdir="contrib/terraform/$PROVIDER" validate
- tofu -chdir="contrib/terraform/$PROVIDER" fmt -check -diff
stage: test
needs:
- pipeline-image
parallel:
matrix:
- PROVIDER:
- openstack
- aws
- exoscale
- hetzner
- vsphere
- upcloud
- nifcloud
.terraform_apply:
extends: .terraform_install
@@ -43,99 +45,24 @@
stage: deploy-extended
when: manual
only: [/^pr-.*$/]
artifacts:
when: always
paths:
- cluster-dump/
variables:
ANSIBLE_INVENTORY_UNPARSED_FAILED: "true"
ANSIBLE_INVENTORY: hosts
CI_PLATFORM: tf
TF_VAR_ssh_user: $SSH_USER
ANSIBLE_REMOTE_USER: ubuntu # the openstack terraform module does not handle custom user correctly
ANSIBLE_SSH_RETRIES: 15
TF_VAR_ssh_user: $ANSIBLE_REMOTE_USER
TF_VAR_cluster_name: $CI_JOB_ID
script:
# Set Ansible config
- cp ansible.cfg ~/.ansible.cfg
- ssh-keygen -N '' -f $ANSIBLE_PRIVATE_KEY_FILE -t rsa
- mkdir -p contrib/terraform/$PROVIDER/group_vars
# Random subnet to avoid routing conflicts
- export TF_VAR_subnet_cidr="10.$(( $RANDOM % 256 )).$(( $RANDOM % 256 )).0/24"
- tofu -chdir="contrib/terraform/$PROVIDER" apply -auto-approve -parallelism=1
- tests/scripts/testcases_run.sh
after_script:
# Cleanup regardless of exit code
- ./tests/scripts/testcases_cleanup.sh
tf-validate-openstack:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_VERSION
PROVIDER: openstack
CLUSTER: $CI_COMMIT_REF_NAME
tf-validate-equinix:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_VERSION
PROVIDER: equinix
CLUSTER: $CI_COMMIT_REF_NAME
tf-validate-aws:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_VERSION
PROVIDER: aws
CLUSTER: $CI_COMMIT_REF_NAME
tf-validate-exoscale:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_VERSION
PROVIDER: exoscale
tf-validate-hetzner:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_VERSION
PROVIDER: hetzner
tf-validate-vsphere:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_VERSION
PROVIDER: vsphere
CLUSTER: $CI_COMMIT_REF_NAME
tf-validate-upcloud:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_VERSION
PROVIDER: upcloud
CLUSTER: $CI_COMMIT_REF_NAME
tf-validate-nifcloud:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_VERSION
PROVIDER: nifcloud
# tf-packet-ubuntu20-default:
# extends: .terraform_apply
# variables:
# TF_VERSION: $TERRAFORM_VERSION
# PROVIDER: packet
# CLUSTER: $CI_COMMIT_REF_NAME
# TF_VAR_number_of_k8s_masters: "1"
# TF_VAR_number_of_k8s_nodes: "1"
# TF_VAR_plan_k8s_masters: t1.small.x86
# TF_VAR_plan_k8s_nodes: t1.small.x86
# TF_VAR_metro: am
# TF_VAR_public_key_path: ""
# TF_VAR_operating_system: ubuntu_20_04
.ovh_variables: &ovh_variables
OS_AUTH_URL: https://auth.cloud.ovh.net/v3
OS_PROJECT_ID: 8d3cd5d737d74227ace462dee0b903fe
OS_PROJECT_NAME: "9361447987648822"
OS_USER_DOMAIN_NAME: Default
OS_PROJECT_DOMAIN_ID: default
OS_USERNAME: 8XuhBMfkKVrk
OS_REGION_NAME: UK1
OS_INTERFACE: public
OS_IDENTITY_API_VERSION: "3"
- tofu -chdir="contrib/terraform/$PROVIDER" destroy -auto-approve
# Elastx is generously donating resources for Kubespray on Openstack CI
# Contacts: @gix @bl0m1
@@ -169,11 +96,8 @@ tf-elastx_ubuntu20-calico:
allow_failure: true
variables:
<<: *elastx_variables
TF_VERSION: $TERRAFORM_VERSION
PROVIDER: openstack
CLUSTER: $CI_COMMIT_REF_NAME
ANSIBLE_TIMEOUT: "60"
SSH_USER: ubuntu
TF_VAR_number_of_k8s_masters: "1"
TF_VAR_number_of_k8s_masters_no_floating_ip: "0"
TF_VAR_number_of_k8s_masters_no_floating_ip_no_etcd: "0"
@@ -194,46 +118,3 @@ tf-elastx_ubuntu20-calico:
TF_VAR_flavor_k8s_node: 3f73fc93-ec61-4808-88df-2580d94c1a9b # v1-standard-2
TF_VAR_image: ubuntu-20.04-server-latest
TF_VAR_k8s_allowed_remote_ips: '["0.0.0.0/0"]'
# OVH voucher expired, commenting job until things are sorted out
# tf-ovh_cleanup:
# stage: unit-tests
# tags: [light]
# image: python
# environment: ovh
# variables:
# <<: *ovh_variables
# before_script:
# - pip install -r scripts/openstack-cleanup/requirements.txt
# script:
# - ./scripts/openstack-cleanup/main.py
# tf-ovh_ubuntu20-calico:
# extends: .terraform_apply
# when: on_success
# environment: ovh
# variables:
# <<: *ovh_variables
# TF_VERSION: $TERRAFORM_VERSION
# PROVIDER: openstack
# CLUSTER: $CI_COMMIT_REF_NAME
# ANSIBLE_TIMEOUT: "60"
# SSH_USER: ubuntu
# TF_VAR_number_of_k8s_masters: "0"
# TF_VAR_number_of_k8s_masters_no_floating_ip: "1"
# TF_VAR_number_of_k8s_masters_no_floating_ip_no_etcd: "0"
# TF_VAR_number_of_etcd: "0"
# TF_VAR_number_of_k8s_nodes: "0"
# TF_VAR_number_of_k8s_nodes_no_floating_ip: "1"
# TF_VAR_number_of_gfs_nodes_no_floating_ip: "0"
# TF_VAR_number_of_bastions: "0"
# TF_VAR_number_of_k8s_masters_no_etcd: "0"
# TF_VAR_use_neutron: "0"
# TF_VAR_floatingip_pool: "Ext-Net"
# TF_VAR_external_net: "6011fbc9-4cbf-46a4-8452-6890a340b60b"
# TF_VAR_network_name: "Ext-Net"
# TF_VAR_flavor_k8s_master: "defa64c3-bd46-43b4-858a-d93bbae0a229" # s1-8
# TF_VAR_flavor_k8s_node: "defa64c3-bd46-43b4-858a-d93bbae0a229" # s1-8
# TF_VAR_image: "Ubuntu 20.04"
# TF_VAR_k8s_allowed_remote_ips: '["0.0.0.0/0"]'

View File

@@ -1,20 +1,16 @@
---
.vagrant:
extends: .testcases
needs:
- ci-not-authorized
vagrant:
extends: .job-moderated
variables:
CI_PLATFORM: "vagrant"
SSH_USER: "vagrant"
VAGRANT_DEFAULT_PROVIDER: "libvirt"
KUBESPRAY_VAGRANT_CONFIG: tests/files/${CI_JOB_NAME}.rb
KUBESPRAY_VAGRANT_CONFIG: tests/files/${TESTCASE}.rb
DOCKER_NAME: vagrant
VAGRANT_ANSIBLE_TAGS: facts
VAGRANT_HOME: "$CI_PROJECT_DIR/.vagrant.d"
PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"
tags: [ffci-vm-large]
# only: [/^pr-.*$/]
# except: ['triggers']
image: quay.io/kubespray/vm-kubespray-ci:v13
services: []
before_script:
@@ -28,54 +24,26 @@
- pip install --no-compile --no-cache-dir -r $CI_PROJECT_DIR/tests/requirements.txt
- ./tests/scripts/vagrant_clean.sh
script:
- vagrant up
- ./tests/scripts/testcases_run.sh
after_script:
- vagrant destroy -f
cache:
key: $CI_JOB_NAME_SLUG
paths:
- .vagrant.d/boxes
- .cache/pip
policy: pull-push # TODO: change to "pull" when not on main
vagrant_ubuntu24-calico-dual-stack:
stage: deploy-extended
extends: .vagrant
rules:
- if: $PR_LABELS =~ /.*(ci-extended|ci-full).*/
when: on_success
allow_failure: false
vagrant_ubuntu24-calico-ipv6only-stack:
stage: deploy-extended
extends: .vagrant
rules:
- if: $PR_LABELS =~ /.*(ci-extended|ci-full).*/
- if: $CI_PIPELINE_SOURCE == "schedule" && $CI_PIPELINE_SCHEDULE_DESCRIPTION == "daily-ci"
when: on_success
allow_failure: false
vagrant_ubuntu20-flannel:
stage: deploy-part1
extends: .vagrant
when: on_success
allow_failure: false
vagrant_ubuntu20-flannel-collection:
stage: deploy-extended
extends: .vagrant
when: manual
vagrant_ubuntu20-kube-router-sep:
stage: deploy-extended
extends: .vagrant
when: manual
# Service proxy test fails connectivity testing
vagrant_ubuntu20-kube-router-svc-proxy:
stage: deploy-extended
extends: .vagrant
when: manual
vagrant_fedora39-kube-router:
stage: deploy-extended
extends: .vagrant
when: manual
# FIXME: this test if broken (perma-failing)
- when: manual
allow_failure: true
parallel:
matrix:
- TESTCASE:
- ubuntu24-calico-dual-stack
- ubuntu24-calico-ipv6only-stack

View File

@@ -1,7 +1,7 @@
---
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v5.0.0
rev: v6.0.0
hooks:
- id: check-added-large-files
- id: check-case-conflict
@@ -15,13 +15,13 @@ repos:
- id: trailing-whitespace
- repo: https://github.com/adrienverge/yamllint.git
rev: v1.35.1
rev: v1.37.1
hooks:
- id: yamllint
args: [--strict]
- repo: https://github.com/shellcheck-py/shellcheck-py
rev: v0.10.0.1
rev: v0.11.0.1
hooks:
- id: shellcheck
args: ["--severity=error"]
@@ -29,7 +29,7 @@ repos:
files: "\\.sh$"
- repo: https://github.com/ansible/ansible-lint
rev: v25.1.1
rev: v25.11.0
hooks:
- id: ansible-lint
additional_dependencies:
@@ -38,7 +38,7 @@ repos:
- distlib
- repo: https://github.com/golangci/misspell
rev: v0.6.0
rev: v0.7.0
hooks:
- id: misspell
exclude: "OWNERS_ALIASES$"

View File

@@ -40,7 +40,7 @@ Vagrant with VirtualBox or libvirt driver helps you to quickly spin test cluster
2. The [repo owners](OWNERS) will respond to your issue promptly.
3. Fork the desired repo, develop and test your code changes.
4. Install [pre-commit](https://pre-commit.com) and install it in your development repo.
5. Addess any pre-commit validation failures.
5. Address any pre-commit validation failures.
6. Sign the CNCF CLA (<https://git.k8s.io/community/CLA.md#the-contributor-license-agreement>)
7. Submit a pull request.
8. Work with the reviewers on their suggestions.

View File

@@ -1,6 +1,6 @@
# syntax=docker/dockerfile:1
# Use imutable image tags rather than mutable tags (like ubuntu:22.04)
# Use immutable image tags rather than mutable tags (like ubuntu:22.04)
FROM ubuntu:22.04@sha256:149d67e29f765f4db62aa52161009e99e389544e25a8f43c8c89d4a445a7ca37
# Some tools like yamllint need this
@@ -35,8 +35,8 @@ RUN --mount=type=bind,source=requirements.txt,target=requirements.txt \
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
RUN OS_ARCHITECTURE=$(dpkg --print-architecture) \
&& curl -L "https://dl.k8s.io/release/v1.32.3/bin/linux/${OS_ARCHITECTURE}/kubectl" -o /usr/local/bin/kubectl \
&& echo "$(curl -L "https://dl.k8s.io/release/v1.32.3/bin/linux/${OS_ARCHITECTURE}/kubectl.sha256")" /usr/local/bin/kubectl | sha256sum --check \
&& curl -L "https://dl.k8s.io/release/v1.33.7/bin/linux/${OS_ARCHITECTURE}/kubectl" -o /usr/local/bin/kubectl \
&& echo "$(curl -L "https://dl.k8s.io/release/v1.33.7/bin/linux/${OS_ARCHITECTURE}/kubectl.sha256")" /usr/local/bin/kubectl | sha256sum --check \
&& chmod a+x /usr/local/bin/kubectl
COPY *.yml ./

View File

@@ -1,17 +1,13 @@
aliases:
kubespray-approvers:
- cristicalin
- floryut
- liupeng0518
- mzaian
- oomichi
- yankay
- ant31
- mzaian
- tico88612
- vannten
- yankay
kubespray-reviewers:
- cyclinder
- erikjiang
- mrfreezeex
- mzaian
- tico88612
- vannten
@@ -19,8 +15,12 @@ aliases:
kubespray-emeritus_approvers:
- atoms
- chadswen
- cristicalin
- floryut
- liupeng0518
- luckysb
- mattymo
- miouge1
- oomichi
- riverzhang
- woopstar

View File

@@ -22,7 +22,7 @@ Ensure you have installed Docker then
```ShellSession
docker run --rm -it --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \
--mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
quay.io/kubespray/kubespray:v2.27.0 bash
quay.io/kubespray/kubespray:v2.29.0 bash
# Inside the container you may now run the kubespray playbooks:
ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa cluster.yml
```
@@ -87,8 +87,8 @@ vagrant up
## Supported Linux Distributions
- **Flatcar Container Linux by Kinvolk**
- **Debian** Bookworm, Bullseye
- **Ubuntu** 20.04, 22.04, 24.04
- **Debian** Bookworm, Bullseye, Trixie
- **Ubuntu** 22.04, 24.04
- **CentOS/RHEL** [8, 9](docs/operating_systems/rhel.md#rhel-8)
- **Fedora** 39, 40
- **Fedora CoreOS** (see [fcos Note](docs/operating_systems/fcos.md))
@@ -111,37 +111,34 @@ Note:
<!-- BEGIN ANSIBLE MANAGED BLOCK -->
- Core
- [kubernetes](https://github.com/kubernetes/kubernetes) 1.32.3
- [etcd](https://github.com/etcd-io/etcd) 3.5.16
- [docker](https://www.docker.com/) 28.0
- [containerd](https://containerd.io/) 2.0.3
- [cri-o](http://cri-o.io/) 1.32.0 (experimental: see [CRI-O Note](docs/CRI/cri-o.md). Only on fedora, ubuntu and centos based OS)
- [kubernetes](https://github.com/kubernetes/kubernetes) 1.33.7
- [etcd](https://github.com/etcd-io/etcd) 3.5.25
- [docker](https://www.docker.com/) 28.3
- [containerd](https://containerd.io/) 2.1.5
- [cri-o](http://cri-o.io/) 1.33.7 (experimental: see [CRI-O Note](docs/CRI/cri-o.md). Only on fedora, ubuntu and centos based OS)
- Network Plugin
- [cni-plugins](https://github.com/containernetworking/plugins) 1.4.1
- [calico](https://github.com/projectcalico/calico) 3.29.2
- [cilium](https://github.com/cilium/cilium) 1.15.9
- [flannel](https://github.com/flannel-io/flannel) 0.22.0
- [cni-plugins](https://github.com/containernetworking/plugins) 1.8.0
- [calico](https://github.com/projectcalico/calico) 3.30.5
- [cilium](https://github.com/cilium/cilium) 1.18.4
- [flannel](https://github.com/flannel-io/flannel) 0.27.3
- [kube-ovn](https://github.com/alauda/kube-ovn) 1.12.21
- [kube-router](https://github.com/cloudnativelabs/kube-router) 2.1.1
- [multus](https://github.com/k8snetworkplumbingwg/multus-cni) 4.1.0
- [weave](https://github.com/rajch/weave) 2.8.7
- [multus](https://github.com/k8snetworkplumbingwg/multus-cni) 4.2.2
- [kube-vip](https://github.com/kube-vip/kube-vip) 0.8.0
- Application
- [cert-manager](https://github.com/jetstack/cert-manager) 1.15.3
- [coredns](https://github.com/coredns/coredns) 1.11.3
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) 1.12.1
- [coredns](https://github.com/coredns/coredns) 1.12.0
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) 1.13.3
- [argocd](https://argoproj.github.io/) 2.14.5
- [helm](https://helm.sh/) 3.16.4
- [helm](https://helm.sh/) 3.18.4
- [metallb](https://metallb.universe.tf/) 0.13.9
- [registry](https://github.com/distribution/distribution) 2.8.1
- Storage Plugin
- [cephfs-provisioner](https://github.com/kubernetes-incubator/external-storage) 2.1.0-k8s1.11
- [rbd-provisioner](https://github.com/kubernetes-incubator/external-storage) 2.1.1-k8s1.11
- [aws-ebs-csi-plugin](https://github.com/kubernetes-sigs/aws-ebs-csi-driver) 0.5.0
- [azure-csi-plugin](https://github.com/kubernetes-sigs/azuredisk-csi-driver) 1.10.0
- [cinder-csi-plugin](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md) 1.30.0
- [gcp-pd-csi-plugin](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver) 1.9.2
- [local-path-provisioner](https://github.com/rancher/local-path-provisioner) 0.0.24
- [local-path-provisioner](https://github.com/rancher/local-path-provisioner) 0.0.32
- [local-volume-provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) 2.5.0
- [node-feature-discovery](https://github.com/kubernetes-sigs/node-feature-discovery) 0.16.4
@@ -185,9 +182,6 @@ You can choose among ten network plugins. (default: `calico`, except Vagrant use
- [cilium](http://docs.cilium.io/en/latest/): layer 3/4 networking (as well as layer 7 to protect and secure application protocols), supports dynamic insertion of BPF bytecode into the Linux kernel to implement security services, networking and visibility logic.
- [weave](docs/CNI/weave.md): Weave is a lightweight container overlay network that doesn't require an external K/V database cluster.
(Please refer to `weave` [troubleshooting documentation](https://www.weave.works/docs/net/latest/troubleshooting/)).
- [kube-ovn](docs/CNI/kube-ovn.md): Kube-OVN integrates the OVN-based Network Virtualization with Kubernetes. It offers an advanced Container Network Fabric for Enterprises.
- [kube-router](docs/CNI/kube-router.md): Kube-router is a L3 CNI for Kubernetes networking aiming to provide operational

29
Vagrantfile vendored
View File

@@ -4,6 +4,8 @@
# For help on using kubespray with vagrant, check out docs/developers/vagrant.md
require 'fileutils'
require 'ipaddr'
require 'socket'
Vagrant.require_version ">= 2.0.0"
@@ -99,6 +101,33 @@ $extra_vars ||= {}
host_vars = {}
def collect_networks(subnet, subnet_ipv6)
Socket.getifaddrs.filter_map do |iface|
next unless iface&.netmask&.ip_address && iface.addr
is_ipv6 = iface.addr.ipv6?
ip = IPAddr.new(iface.addr.ip_address.split('%').first)
ip_test = is_ipv6 ? IPAddr.new("#{subnet_ipv6}::0") : IPAddr.new("#{subnet}.0")
prefix = IPAddr.new(iface.netmask.ip_address).to_i.to_s(2).count('1')
network = ip.mask(prefix)
[IPAddr.new("#{network}/#{prefix}"), ip_test]
end
end
def subnet_in_use?(network_ips)
network_ips.any? { |net, test_ip| net.include?(test_ip) && test_ip != net }
end
network_ips = collect_networks($subnet, $subnet_ipv6)
if subnet_in_use?(network_ips)
puts "Invalid subnet provided, subnet is already in use: #{$subnet}.0"
puts "Subnets in use: #{network_ips.inspect}"
exit 1
end
# throw error if os is not supported
if ! SUPPORTED_OS.key?($os)
puts "Unsupported OS: #{$os}"

View File

@@ -15,7 +15,7 @@ timeout = 300
stdout_callback = default
display_skipped_hosts = no
library = ./library
callbacks_enabled = profile_tasks,ara_default
callbacks_enabled = profile_tasks
roles_path = roles:$VIRTUAL_ENV/usr/local/share/kubespray/roles:$VIRTUAL_ENV/usr/local/share/ansible/roles:/usr/share/kubespray/roles
deprecation_warnings=False
inventory_ignore_extensions = ~, .orig, .bak, .ini, .cfg, .retry, .pyc, .pyo, .creds, .gpg

View File

@@ -1,11 +0,0 @@
# Kubespray on KVM Virtual Machines hypervisor preparation
A simple playbook to ensure your system has the right settings to enable Kubespray
deployment on VMs.
This playbook does not create Virtual Machines, nor does it run Kubespray itself.
## User creation
If you want to create a user for running Kubespray deployment, you should specify
both `k8s_deployment_user` and `k8s_deployment_user_pkey_path`.

View File

@@ -1,2 +0,0 @@
#k8s_deployment_user: kubespray
#k8s_deployment_user_pkey_path: /tmp/ssh_rsa

View File

@@ -1,9 +0,0 @@
---
- name: Prepare Hypervisor to later install kubespray VMs
hosts: localhost
gather_facts: false
become: true
vars:
bootstrap_os: none
roles:
- { role: kvm-setup }

View File

@@ -1,30 +0,0 @@
---
- name: Install required packages
package:
name: "{{ item }}"
state: present
with_items:
- bind-utils
- ntp
when: ansible_os_family == "RedHat"
- name: Install required packages
apt:
upgrade: true
update_cache: true
cache_valid_time: 3600
name: "{{ item }}"
state: present
install_recommends: false
with_items:
- dnsutils
- ntp
when: ansible_os_family == "Debian"
- name: Create deployment user if required
include_tasks: user.yml
when: k8s_deployment_user is defined
- name: Set proper sysctl values
import_tasks: sysctl.yml

View File

@@ -1,46 +0,0 @@
---
- name: Load br_netfilter module
community.general.modprobe:
name: br_netfilter
state: present
register: br_netfilter
- name: Add br_netfilter into /etc/modules
lineinfile:
dest: /etc/modules
state: present
line: 'br_netfilter'
when: br_netfilter is defined and ansible_os_family == 'Debian'
- name: Add br_netfilter into /etc/modules-load.d/kubespray.conf
copy:
dest: /etc/modules-load.d/kubespray.conf
content: |-
### This file is managed by Ansible
br-netfilter
owner: root
group: root
mode: "0644"
when: br_netfilter is defined
- name: Enable net.ipv4.ip_forward in sysctl
ansible.posix.sysctl:
name: net.ipv4.ip_forward
value: 1
sysctl_file: "{{ sysctl_file_path }}"
state: present
reload: true
- name: Set bridge-nf-call-{arptables,iptables} to 0
ansible.posix.sysctl:
name: "{{ item }}"
state: present
value: 0
sysctl_file: "{{ sysctl_file_path }}"
reload: true
with_items:
- net.bridge.bridge-nf-call-arptables
- net.bridge.bridge-nf-call-ip6tables
- net.bridge.bridge-nf-call-iptables
when: br_netfilter is defined

View File

@@ -1,47 +0,0 @@
---
- name: Create user {{ k8s_deployment_user }}
user:
name: "{{ k8s_deployment_user }}"
groups: adm
shell: /bin/bash
- name: Ensure that .ssh exists
file:
path: "/home/{{ k8s_deployment_user }}/.ssh"
state: directory
owner: "{{ k8s_deployment_user }}"
group: "{{ k8s_deployment_user }}"
mode: "0700"
- name: Configure sudo for deployment user
copy:
content: |
%{{ k8s_deployment_user }} ALL=(ALL) NOPASSWD: ALL
dest: "/etc/sudoers.d/55-k8s-deployment"
owner: root
group: root
mode: "0644"
- name: Write private SSH key
copy:
src: "{{ k8s_deployment_user_pkey_path }}"
dest: "/home/{{ k8s_deployment_user }}/.ssh/id_rsa"
mode: "0400"
owner: "{{ k8s_deployment_user }}"
group: "{{ k8s_deployment_user }}"
when: k8s_deployment_user_pkey_path is defined
- name: Write public SSH key
shell: "ssh-keygen -y -f /home/{{ k8s_deployment_user }}/.ssh/id_rsa \
> /home/{{ k8s_deployment_user }}/.ssh/authorized_keys"
args:
creates: "/home/{{ k8s_deployment_user }}/.ssh/authorized_keys"
when: k8s_deployment_user_pkey_path is defined
- name: Fix ssh-pub-key permissions
file:
path: "/home/{{ k8s_deployment_user }}/.ssh/authorized_keys"
mode: "0600"
owner: "{{ k8s_deployment_user }}"
group: "{{ k8s_deployment_user }}"
when: k8s_deployment_user_pkey_path is defined

View File

@@ -1,15 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
labels:
k8s-app: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system

View File

@@ -1,51 +0,0 @@
---
- name: Check ansible version
import_playbook: kubernetes_sigs.kubespray.ansible_version
- name: Install mitogen
hosts: localhost
strategy: linear
vars:
mitogen_version: 0.3.2
mitogen_url: https://github.com/mitogen-hq/mitogen/archive/refs/tags/v{{ mitogen_version }}.tar.gz
ansible_connection: local
tasks:
- name: Create mitogen plugin dir
file:
path: "{{ item }}"
state: directory
mode: "0755"
become: false
loop:
- "{{ playbook_dir }}/plugins/mitogen"
- "{{ playbook_dir }}/dist"
- name: Download mitogen release
get_url:
url: "{{ mitogen_url }}"
dest: "{{ playbook_dir }}/dist/mitogen_{{ mitogen_version }}.tar.gz"
validate_certs: true
mode: "0644"
- name: Extract archive
unarchive:
src: "{{ playbook_dir }}/dist/mitogen_{{ mitogen_version }}.tar.gz"
dest: "{{ playbook_dir }}/dist/"
- name: Copy plugin
ansible.posix.synchronize:
src: "{{ playbook_dir }}/dist/mitogen-{{ mitogen_version }}/"
dest: "{{ playbook_dir }}/plugins/mitogen"
- name: Add strategy to ansible.cfg
community.general.ini_file:
path: ansible.cfg
mode: "0644"
section: "{{ item.section | d('defaults') }}"
option: "{{ item.option }}"
value: "{{ item.value }}"
with_items:
- option: strategy
value: mitogen_linear
- option: strategy_plugins
value: plugins/mitogen/ansible_mitogen/plugins/strategy

View File

@@ -31,7 +31,7 @@ manage-offline-container-images.sh register
## generate_list.sh
This script generates the list of downloaded files and the list of container images based on the `roles/kubespray-defaults/defaults/main/download.yml` file.
This script generates the list of downloaded files and the list of container images based on the `roles/kubespray_defaults/defaults/main/download.yml` file.
Running this script executes the `generate_list.yml` playbook in the kubespray root directory and generates four files:
all downloaded file URLs in files.list, all container images in images.list, and the jinja2 templates in *.template.
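A usage sketch (run from the kubespray checkout; the `temp/` output location matches the script's `TEMP_DIR` shown below):

```ShellSession
cd contrib/offline
bash generate_list.sh
cat temp/files.list temp/images.list
```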

View File

@@ -5,7 +5,7 @@ CURRENT_DIR=$(cd $(dirname $0); pwd)
TEMP_DIR="${CURRENT_DIR}/temp"
REPO_ROOT_DIR="${CURRENT_DIR%/contrib/offline}"
: ${DOWNLOAD_YML:="roles/kubespray-defaults/defaults/main/download.yml"}
: ${DOWNLOAD_YML:="roles/kubespray_defaults/defaults/main/download.yml"}
mkdir -p ${TEMP_DIR}
@@ -19,7 +19,7 @@ sed -n '/^downloads:/,/download_defaults:/p' ${REPO_ROOT_DIR}/${DOWNLOAD_YML} \
| sed 'N;s#\n# #g' | tr ' ' ':' | sed 's/\"//g' > ${TEMP_DIR}/images.list.template
# add kube-* images to images list template
# Those container images are downloaded by kubeadm, then roles/kubespray-defaults/defaults/main/download.yml
# Those container images are downloaded by kubeadm, then roles/kubespray_defaults/defaults/main/download.yml
# doesn't contain those images. That is the reason why those images need to be put into the
# list separately.
KUBE_IMAGES="kube-apiserver kube-controller-manager kube-scheduler kube-proxy"

View File

@@ -5,7 +5,7 @@
roles:
# Just load default variables from roles.
- role: kubespray-defaults
- role: kubespray_defaults
when: false
- role: download
when: false

View File

@@ -36,7 +36,7 @@ function create_container_image_tar() {
mkdir ${IMAGE_DIR}
cd ${IMAGE_DIR}
sudo ${runtime} pull registry:latest
sudo --preserve-env=http_proxy,https_proxy,no_proxy ${runtime} pull registry:latest
sudo ${runtime} save -o registry-latest.tar registry:latest
while read -r image
@@ -45,7 +45,7 @@ function create_container_image_tar() {
set +e
for step in $(seq 1 ${RETRY_COUNT})
do
sudo ${runtime} pull ${image}
sudo --preserve-env=http_proxy,https_proxy,no_proxy ${runtime} pull ${image}
if [ $? -eq 0 ]; then
break
fi
@@ -127,7 +127,7 @@ function register_container_images() {
tar -zxvf ${IMAGE_TAR_FILE}
if [ "${create_registry}" ]; then
if ${create_registry}; then
sudo ${runtime} load -i ${IMAGE_DIR}/registry-latest.tar
set +e
@@ -148,7 +148,7 @@ function register_container_images() {
if [ "${org_image}" == "ID:" ]; then
org_image=$(echo "${load_image}" | awk '{print $4}')
fi
image_id=$(sudo ${runtime} image inspect ${org_image} | grep "\"Id\":" | awk -F: '{print $3}'| sed s/'\",'//)
image_id=$(sudo ${runtime} image inspect --format "{{.Id}}" "${org_image}")
if [ -z "${file_name}" ]; then
echo "Failed to get file_name for line ${line}"
exit 1

View File

@@ -41,7 +41,7 @@ fi
sudo "${runtime}" container inspect nginx >/dev/null 2>&1
if [ $? -ne 0 ]; then
sudo "${runtime}" run \
sudo --preserve-env=http_proxy,https_proxy,no_proxy "${runtime}" run \
--restart=always -d -p ${NGINX_PORT}:80 \
--volume "${OFFLINE_FILES_DIR}":/usr/share/nginx/html/download \
--volume "${CURRENT_DIR}"/nginx.conf:/etc/nginx/nginx.conf \

View File

@@ -1,5 +1,11 @@
terraform {
required_version = ">= 0.12.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
}
provider "aws" {

View File

@@ -1,246 +0,0 @@
# Kubernetes on Equinix Metal with Terraform
Provision a Kubernetes cluster with [Terraform](https://www.terraform.io) on
[Equinix Metal](https://metal.equinix.com) ([formerly Packet](https://blog.equinix.com/blog/2020/10/06/equinix-metal-metal-and-more/)).
## Status
This will install a Kubernetes cluster on Equinix Metal. It should work in all locations and on most server types.
## Approach
The terraform configuration inspects variables found in
[variables.tf](variables.tf) to create resources in your Equinix Metal project.
There is a [python script](../terraform.py) that reads the generated `.tfstate`
file to generate a dynamic inventory that is consumed by [cluster.yml](../../../cluster.yml)
to actually install Kubernetes with Kubespray.
### Kubernetes Nodes
You can create many different kubernetes topologies by setting the number of
different classes of hosts.
- Master nodes with etcd: `number_of_k8s_masters` variable
- Master nodes without etcd: `number_of_k8s_masters_no_etcd` variable
- Standalone etcd hosts: `number_of_etcd` variable
- Kubernetes worker nodes: `number_of_k8s_nodes` variable
Note that the Ansible script will report an invalid configuration if you wind up
with an *even number* of etcd instances since that is not a valid configuration. This
restriction includes standalone etcd nodes that are deployed in a cluster along with
master nodes with etcd replicas. As an example, if you have three master nodes with
etcd replicas and three standalone etcd nodes, the script will fail since there are
now six total etcd replicas.
## Requirements
- [Install Terraform](https://www.terraform.io/intro/getting-started/install.html)
- [Install Ansible dependencies](/docs/ansible/ansible.md#installing-ansible)
- Account with Equinix Metal
- An SSH key pair
## SSH Key Setup
An SSH key pair is required so Ansible can access the newly provisioned nodes (Equinix Metal hosts). By default, the public SSH key defined in cluster.tfvars (~/.ssh/id_rsa.pub) will be installed in authorized_keys on the newly provisioned nodes. Terraform will upload this public key and then distribute it to all the nodes. If you have already set this public key in Equinix Metal (i.e. via the portal), then set the public keyfile name in cluster.tfvars to blank to prevent the duplicate key from being uploaded, which would cause an error.
If you don't already have a keypair generated (~/.ssh/id_rsa and ~/.ssh/id_rsa.pub), then a new keypair can be generated with the command:
```ShellSession
ssh-keygen -f ~/.ssh/id_rsa
```
## Terraform
Terraform will be used to provision all of the Equinix Metal resources with base software as appropriate.
### Configuration
#### Inventory files
Create an inventory directory for your cluster by copying the existing sample and linking the `hosts` script (used to build the inventory based on Terraform state):
```ShellSession
cp -LRp contrib/terraform/equinix/sample-inventory inventory/$CLUSTER
cd inventory/$CLUSTER
ln -s ../../contrib/terraform/equinix/hosts
```
This will be the base for subsequent Terraform commands.
#### Equinix Metal API access
Your Equinix Metal API key must be available in the `METAL_AUTH_TOKEN` environment variable.
This key is typically stored outside of the code repo since it is considered secret.
If someone gets this key, they can startup/shutdown hosts in your project!
For more information on how to generate an API key or find your project ID, please see
[Accounts Index](https://metal.equinix.com/developers/docs/accounts/).
The Equinix Metal Project ID associated with the key will be set later in `cluster.tfvars`.
For more information about the API, please see [Equinix Metal API](https://metal.equinix.com/developers/api/).
For more information about terraform provider authentication, please see [the equinix provider documentation](https://registry.terraform.io/providers/equinix/equinix/latest/docs).
Example:
```ShellSession
export METAL_AUTH_TOKEN="Example-API-Token"
```
Note that to deploy several clusters within the same project you need to use [terraform workspace](https://www.terraform.io/docs/state/workspaces.html#using-workspaces).
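For example (a sketch, assuming you run it from your cluster's inventory directory as in the rest of this guide):

```ShellSession
terraform -chdir=../../contrib/terraform/equinix workspace new $CLUSTER
terraform -chdir=../../contrib/terraform/equinix workspace select $CLUSTER
```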
#### Cluster variables
The construction of the cluster is driven by values found in
[variables.tf](variables.tf).
For your cluster, edit `inventory/$CLUSTER/cluster.tfvars`.
The `cluster_name` is used to set a tag on each server deployed as part of this cluster.
This helps when identifying which hosts are associated with each cluster.
While the defaults in variables.tf will successfully deploy a cluster, it is recommended to set the following values:
- cluster_name = the name of the inventory directory created above as $CLUSTER
- equinix_metal_project_id = the Equinix Metal Project ID associated with the Equinix Metal API token above
#### Enable localhost access
Kubespray will pull down a Kubernetes configuration file to access this cluster by enabling the
`kubeconfig_localhost: true` in the Kubespray configuration.
Edit `inventory/$CLUSTER/group_vars/k8s_cluster/k8s_cluster.yml` and comment back in the following line and change from `false` to `true`:
`# kubeconfig_localhost: false`
becomes:
`kubeconfig_localhost: true`
Once the Kubespray playbooks are run, a Kubernetes configuration file will be written to the local host at `inventory/$CLUSTER/artifacts/admin.conf`
#### Terraform state files
In the cluster's inventory folder, the following files might be created (either by Terraform
or manually); to prevent you from pushing them accidentally, they are listed in a
`.gitignore` file in the `contrib/terraform/equinix` directory:
- `.terraform`
- `.tfvars`
- `.tfstate`
- `.tfstate.backup`
- `.lock.hcl`
You can still add them manually if you want to.
### Initialization
Before Terraform can operate on your cluster you need to install the required
plugins. This is accomplished as follows:
```ShellSession
cd inventory/$CLUSTER
terraform -chdir=../../contrib/terraform/metal init -var-file=cluster.tfvars
```
This should finish fairly quickly telling you Terraform has successfully initialized and loaded necessary modules.
### Provisioning cluster
You can apply the Terraform configuration to your cluster with the following command
issued from your cluster's inventory directory (`inventory/$CLUSTER`):
```ShellSession
terraform -chdir=../../contrib/terraform/equinix apply -var-file=cluster.tfvars
export ANSIBLE_HOST_KEY_CHECKING=False
ansible-playbook -i hosts ../../cluster.yml
```
### Destroying cluster
You can destroy your new cluster with the following command issued from the cluster's inventory directory:
```ShellSession
terraform -chdir=../../contrib/terraform/equinix destroy -var-file=cluster.tfvars
```
If you've started the Ansible run, it may also be a good idea to do some manual cleanup:
- Remove SSH keys from the destroyed cluster from your `~/.ssh/known_hosts` file
- Clean up any temporary cache files: `rm /tmp/$CLUSTER-*`
### Debugging
You can enable debugging output from Terraform by setting `TF_LOG` to `DEBUG` before running the Terraform command.
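For example:

```ShellSession
export TF_LOG=DEBUG
terraform -chdir=../../contrib/terraform/equinix apply -var-file=cluster.tfvars
unset TF_LOG
```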
## Ansible
### Node access
#### SSH
Ensure your local ssh-agent is running and your ssh key has been added. This
step is required by the terraform provisioner:
```ShellSession
eval $(ssh-agent -s)
ssh-add ~/.ssh/id_rsa
```
If you have deployed and destroyed a previous iteration of your cluster, you will need to clear out any stale keys from your SSH "known hosts" file ( `~/.ssh/known_hosts`).
#### Test access
Make sure you can connect to the hosts. Note that Flatcar Container Linux by Kinvolk will have a state `FAILED` due to Python not being present. This is okay, because Python will be installed during bootstrapping, so long as the hosts are not `UNREACHABLE`.
```ShellSession
$ ansible -i inventory/$CLUSTER/hosts -m ping all
example-k8s_node-1 | SUCCESS => {
"changed": false,
"ping": "pong"
}
example-etcd-1 | SUCCESS => {
"changed": false,
"ping": "pong"
}
example-k8s-master-1 | SUCCESS => {
"changed": false,
"ping": "pong"
}
```
If it fails try to connect manually via SSH. It could be something as simple as a stale host key.
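For instance, to drop a stale key and retry by hand (the address is a placeholder):

```ShellSession
ssh-keygen -R 203.0.113.10
ssh -i ~/.ssh/id_rsa root@203.0.113.10
```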
### Deploy Kubernetes
```ShellSession
ansible-playbook --become -i inventory/$CLUSTER/hosts cluster.yml
```
This will take some time as there are many tasks to run.
## Kubernetes
### Set up kubectl
- [Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on the localhost.
- Verify that Kubectl runs correctly
```ShellSession
kubectl version
```
- Verify that the Kubernetes configuration file has been copied over
```ShellSession
cat inventory/$CLUSTER/artifacts/admin.conf
```
- Verify that all the nodes are running correctly.
```ShellSession
kubectl version
kubectl --kubeconfig=inventory/$CLUSTER/artifacts/admin.conf get nodes
```
## What's next
Try out your new Kubernetes cluster with the [Hello Kubernetes service](https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/).

View File

@@ -1 +0,0 @@
../terraform.py

View File

@@ -1,57 +0,0 @@
resource "equinix_metal_ssh_key" "k8s" {
count = var.public_key_path != "" ? 1 : 0
name = "kubernetes-${var.cluster_name}"
public_key = chomp(file(var.public_key_path))
}
resource "equinix_metal_device" "k8s_master" {
depends_on = [equinix_metal_ssh_key.k8s]
count = var.number_of_k8s_masters
hostname = "${var.cluster_name}-k8s-master-${count.index + 1}"
plan = var.plan_k8s_masters
metro = var.metro
operating_system = var.operating_system
billing_cycle = var.billing_cycle
project_id = var.equinix_metal_project_id
tags = ["cluster-${var.cluster_name}", "k8s_cluster", "kube_control_plane", "etcd", "kube_node"]
}
resource "equinix_metal_device" "k8s_master_no_etcd" {
depends_on = [equinix_metal_ssh_key.k8s]
count = var.number_of_k8s_masters_no_etcd
hostname = "${var.cluster_name}-k8s-master-${count.index + 1}"
plan = var.plan_k8s_masters_no_etcd
metro = var.metro
operating_system = var.operating_system
billing_cycle = var.billing_cycle
project_id = var.equinix_metal_project_id
tags = ["cluster-${var.cluster_name}", "k8s_cluster", "kube_control_plane"]
}
resource "equinix_metal_device" "k8s_etcd" {
depends_on = [equinix_metal_ssh_key.k8s]
count = var.number_of_etcd
hostname = "${var.cluster_name}-etcd-${count.index + 1}"
plan = var.plan_etcd
metro = var.metro
operating_system = var.operating_system
billing_cycle = var.billing_cycle
project_id = var.equinix_metal_project_id
tags = ["cluster-${var.cluster_name}", "etcd"]
}
resource "equinix_metal_device" "k8s_node" {
depends_on = [equinix_metal_ssh_key.k8s]
count = var.number_of_k8s_nodes
hostname = "${var.cluster_name}-k8s-node-${count.index + 1}"
plan = var.plan_k8s_nodes
metro = var.metro
operating_system = var.operating_system
billing_cycle = var.billing_cycle
project_id = var.equinix_metal_project_id
tags = ["cluster-${var.cluster_name}", "k8s_cluster", "kube_node"]
}

View File

@@ -1,15 +0,0 @@
output "k8s_masters" {
value = equinix_metal_device.k8s_master.*.access_public_ipv4
}
output "k8s_masters_no_etc" {
value = equinix_metal_device.k8s_master_no_etcd.*.access_public_ipv4
}
output "k8s_etcds" {
value = equinix_metal_device.k8s_etcd.*.access_public_ipv4
}
output "k8s_nodes" {
value = equinix_metal_device.k8s_node.*.access_public_ipv4
}

View File

@@ -1,17 +0,0 @@
terraform {
required_version = ">= 1.0.0"
provider_meta "equinix" {
module_name = "kubespray"
}
required_providers {
equinix = {
source = "equinix/equinix"
version = "1.24.0"
}
}
}
# Configure the Equinix Metal Provider
provider "equinix" {
}

View File

@@ -1,35 +0,0 @@
# your Kubernetes cluster name here
cluster_name = "mycluster"
# Your Equinix Metal project ID. See https://metal.equinix.com/developers/docs/accounts/
equinix_metal_project_id = "Example-Project-Id"
# The public SSH key to be uploaded into authorized_keys in bare metal Equinix Metal nodes provisioned
# leave this value blank if the public key is already setup in the Equinix Metal project
# Terraform will complain if the public key is setup in Equinix Metal
public_key_path = "~/.ssh/id_rsa.pub"
# Equinix interconnected bare metal across our global metros.
metro = "da"
# operating_system
operating_system = "ubuntu_22_04"
# standalone etcds
number_of_etcd = 0
plan_etcd = "t1.small.x86"
# masters
number_of_k8s_masters = 1
number_of_k8s_masters_no_etcd = 0
plan_k8s_masters = "t1.small.x86"
plan_k8s_masters_no_etcd = "t1.small.x86"
# nodes
number_of_k8s_nodes = 2
plan_k8s_nodes = "t1.small.x86"

View File

@@ -1 +0,0 @@
../../../../inventory/sample/group_vars

View File

@@ -1,56 +0,0 @@
variable "cluster_name" {
default = "kubespray"
}
variable "equinix_metal_project_id" {
description = "Your Equinix Metal project ID. See https://metal.equinix.com/developers/docs/accounts/"
}
variable "operating_system" {
default = "ubuntu_22_04"
}
variable "public_key_path" {
description = "The path of the ssh pub key"
default = "~/.ssh/id_rsa.pub"
}
variable "billing_cycle" {
default = "hourly"
}
variable "metro" {
default = "da"
}
variable "plan_k8s_masters" {
default = "c3.small.x86"
}
variable "plan_k8s_masters_no_etcd" {
default = "c3.small.x86"
}
variable "plan_etcd" {
default = "c3.small.x86"
}
variable "plan_k8s_nodes" {
default = "c3.medium.x86"
}
variable "number_of_k8s_masters" {
default = 1
}
variable "number_of_k8s_masters_no_etcd" {
default = 0
}
variable "number_of_etcd" {
default = 0
}
variable "number_of_k8s_nodes" {
default = 1
}

View File

@@ -102,7 +102,8 @@ Please read the instructions in both repos on how to install it.
You can teardown your infrastructure using the following Terraform command:
```bash
terraform destroy --var-file default.tfvars ../../contrib/terraform/hetzner
cd ./kubespray
terraform -chdir=./contrib/terraform/hetzner/ destroy --var-file=../../../inventory/$CLUSTER/default.tfvars
```
## Variables

View File

@@ -624,7 +624,7 @@ Edit `inventory/$CLUSTER/group_vars/k8s_cluster/k8s_cluster.yml`:
- **calico** requires [configuring OpenStack Neutron ports](/docs/cloud_controllers/openstack.md) to allow service and pod subnets
```yml
# Choose network plugin (calico, weave or flannel)
# Choose network plugin (calico or flannel)
# Can also be set to 'cloud', which lets the cloud provider setup appropriate routing
kube_network_plugin: flannel
```

View File

@@ -2,35 +2,6 @@
Provision a Kubernetes cluster on [UpCloud](https://upcloud.com/) using Terraform and Kubespray
## Overview
The setup looks like the following
```text
Kubernetes cluster
+--------------------------+
| +--------------+ |
| | +--------------+ |
| --> | | | |
| | | Master/etcd | |
| | | node(s) | |
| +-+ | |
| +--------------+ |
| ^ |
| | |
| v |
| +--------------+ |
| | +--------------+ |
| --> | | | |
| | | Worker | |
| | | node(s) | |
| +-+ | |
| +--------------+ |
+--------------------------+
```
The nodes use a private network for node-to-node communication and a public interface for all external communication.
## Requirements
* Terraform 0.13.0 or newer
@@ -100,6 +71,8 @@ terraform destroy --var-file cluster-settings.tfvars \
* `template_name`: The name or UUID of a base image
* `username`: a user to access the nodes, defaults to "ubuntu"
* `private_network_cidr`: CIDR to use for the private network, defaults to "172.16.0.0/24"
* `dns_servers`: DNS servers that will be used by the nodes. Until [this is solved](https://github.com/UpCloudLtd/terraform-provider-upcloud/issues/562) this is done using user_data to reconfigure resolved. Defaults to `[]`
* `use_public_ips`: If a NIC connected to the Public network should be attached to all nodes by default. Can be overridden by `force_public_ip` if this is set to `false`. Defaults to `true`
* `ssh_public_keys`: List of public SSH keys to install on all machines
* `zone`: The zone where to run the cluster
* `machines`: Machines to provision. Key of this object will be used as the name of the machine
@@ -108,6 +81,8 @@ terraform destroy --var-file cluster-settings.tfvars \
* `cpu`: number of cpu cores
* `mem`: memory size in MB
* `disk_size`: The size of the storage in GB
* `force_public_ip`: If `use_public_ips` is set to `false`, this forces a public NIC onto the machine anyway when set to `true`. Useful if you're migrating from public nodes to only private. Defaults to `false`
* `dns_servers`: This works the same way as the global `dns_servers` but only applies to a single node. If set to `[]` while the global `dns_servers` is set to something else, then it will not add the user_data and thus will not be recreated. Useful if you're migrating from public nodes to only private. Defaults to `null`
* `additional_disks`: Additional disks to attach to the node.
* `size`: The size of the additional disk in GB
* `tier`: The tier of disk to use (`maxiops` is the only one you can choose atm)
@@ -139,6 +114,7 @@ terraform destroy --var-file cluster-settings.tfvars \
* `port`: Port to load balance.
* `target_port`: Port to the backend servers.
* `backend_servers`: List of servers that traffic to the port should be forwarded to.
* `proxy_protocol`: If the loadbalancer should set up the backend using proxy protocol.
* `router_enable`: If a router should be connected to the private network or not
* `gateways`: Gateways that should be connected to the router, requires router_enable is set to true
* `features`: List of features for the gateway
@@ -171,3 +147,27 @@ terraform destroy --var-file cluster-settings.tfvars \
* `server_groups`: Group servers together
* `servers`: The servers that should be included in the group.
* `anti_affinity_policy`: Defines if a server group is an anti-affinity group. Setting this to "strict" or "yes" will result in all servers in the group being placed on separate compute hosts. The value can be "strict", "yes" or "no". "strict" is a hard policy that does not allow servers in the same server group to be on the same host; "yes" is a best-effort policy that tries to put servers on different hosts, but this is not guaranteed.
## Migration
When `null_resource.inventories` and `data.template_file.inventory` were changed to `local_file.inventory`, the old state file needs to be cleaned of the old state.
The error messages you'll see if you encounter this are:
```text
Error: failed to read schema for null_resource.inventories in registry.terraform.io/hashicorp/null: failed to instantiate provider "registry.terraform.io/hashicorp/null" to obtain schema: unavailable provider "registry.terraform.io/hashicorp/null"
Error: failed to read schema for data.template_file.inventory in registry.terraform.io/hashicorp/template: failed to instantiate provider "registry.terraform.io/hashicorp/template" to obtain schema: unavailable provider "registry.terraform.io/hashicorp/template"
```
This can be fixed with the following lines
```bash
terraform state rm -state=terraform.tfstate null_resource.inventories
terraform state rm -state=terraform.tfstate data.template_file.inventory
```
### Public to Private only migration
Since there's no way to remove the public NIC on a machine without recreating its private NIC, it's not possible to change a cluster in place so that it only uses private IPs.
The way to migrate is to first set `use_public_ips` to `false` and `dns_servers` to some DNS servers, then update all existing servers to have `force_public_ip` set to `true` and `dns_servers` set to `[]`.
After that you can add new nodes without `force_public_ip` and `dns_servers` set and create them.
Add the new nodes into the cluster and when all of them are added, remove the old nodes.
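A rough sketch of that sequence, assuming the `cluster-settings.tfvars` file and module path used elsewhere in this guide (adjust to your layout):

```ShellSession
# 1. In cluster-settings.tfvars: set use_public_ips = false, pick dns_servers,
#    and give every existing machine force_public_ip = true and dns_servers = [].
terraform -chdir=../../contrib/terraform/upcloud apply --var-file=cluster-settings.tfvars
# 2. Add the new private-only machines (no force_public_ip / dns_servers) and apply again,
#    then join them with Kubespray before removing the old nodes.
terraform -chdir=../../contrib/terraform/upcloud apply --var-file=cluster-settings.tfvars
```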

View File

@@ -122,11 +122,11 @@ k8s_allowed_remote_ips = [
master_allowed_ports = []
worker_allowed_ports = []
loadbalancer_enabled = false
loadbalancer_plan = "development"
loadbalancer_proxy_protocol = false
loadbalancer_enabled = false
loadbalancer_plan = "development"
loadbalancers = {
# "http" : {
# "proxy_protocol" : false
# "port" : 80,
# "target_port" : 80,
# "backend_servers" : [

View File

@@ -20,24 +20,26 @@ module "kubernetes" {
username = var.username
private_network_cidr = var.private_network_cidr
dns_servers = var.dns_servers
use_public_ips = var.use_public_ips
machines = var.machines
ssh_public_keys = var.ssh_public_keys
firewall_enabled = var.firewall_enabled
firewall_default_deny_in = var.firewall_default_deny_in
firewall_default_deny_out = var.firewall_default_deny_out
master_allowed_remote_ips = var.master_allowed_remote_ips
k8s_allowed_remote_ips = var.k8s_allowed_remote_ips
master_allowed_ports = var.master_allowed_ports
worker_allowed_ports = var.worker_allowed_ports
firewall_enabled = var.firewall_enabled
firewall_default_deny_in = var.firewall_default_deny_in
firewall_default_deny_out = var.firewall_default_deny_out
master_allowed_remote_ips = var.master_allowed_remote_ips
k8s_allowed_remote_ips = var.k8s_allowed_remote_ips
bastion_allowed_remote_ips = var.bastion_allowed_remote_ips
master_allowed_ports = var.master_allowed_ports
worker_allowed_ports = var.worker_allowed_ports
loadbalancer_enabled = var.loadbalancer_enabled
loadbalancer_plan = var.loadbalancer_plan
loadbalancer_outbound_proxy_protocol = var.loadbalancer_proxy_protocol ? "v2" : ""
loadbalancer_legacy_network = var.loadbalancer_legacy_network
loadbalancers = var.loadbalancers
loadbalancer_enabled = var.loadbalancer_enabled
loadbalancer_plan = var.loadbalancer_plan
loadbalancer_legacy_network = var.loadbalancer_legacy_network
loadbalancers = var.loadbalancers
router_enable = var.router_enable
gateways = var.gateways
@@ -52,32 +54,12 @@ module "kubernetes" {
# Generate ansible inventory
#
data "template_file" "inventory" {
template = file("${path.module}/templates/inventory.tpl")
vars = {
connection_strings_master = join("\n", formatlist("%s ansible_user=ubuntu ansible_host=%s ip=%s etcd_member_name=etcd%d",
keys(module.kubernetes.master_ip),
values(module.kubernetes.master_ip).*.public_ip,
values(module.kubernetes.master_ip).*.private_ip,
range(1, length(module.kubernetes.master_ip) + 1)))
connection_strings_worker = join("\n", formatlist("%s ansible_user=ubuntu ansible_host=%s ip=%s",
keys(module.kubernetes.worker_ip),
values(module.kubernetes.worker_ip).*.public_ip,
values(module.kubernetes.worker_ip).*.private_ip))
list_master = join("\n", formatlist("%s",
keys(module.kubernetes.master_ip)))
list_worker = join("\n", formatlist("%s",
keys(module.kubernetes.worker_ip)))
}
}
resource "null_resource" "inventories" {
provisioner "local-exec" {
command = "echo '${data.template_file.inventory.rendered}' > ${var.inventory_file}"
}
triggers = {
template = data.template_file.inventory.rendered
}
resource "local_file" "inventory" {
content = templatefile("${path.module}/templates/inventory.tpl", {
master_ip = module.kubernetes.master_ip
worker_ip = module.kubernetes.worker_ip
bastion_ip = module.kubernetes.bastion_ip
username = var.username
})
filename = var.inventory_file
}

View File

@@ -53,6 +53,44 @@ locals {
# If prefix is set, all resources will be prefixed with "${var.prefix}-"
# Else don't prefix with anything
resource-prefix = "%{if var.prefix != ""}${var.prefix}-%{endif}"
master_ip = {
for instance in upcloud_server.master :
instance.hostname => {
for nic in instance.network_interface :
nic.type => nic.ip_address
if nic.ip_address != null
}
}
worker_ip = {
for instance in upcloud_server.worker :
instance.hostname => {
for nic in instance.network_interface :
nic.type => nic.ip_address
if nic.ip_address != null
}
}
bastion_ip = {
for instance in upcloud_server.bastion :
instance.hostname => {
for nic in instance.network_interface :
nic.type => nic.ip_address
if nic.ip_address != null
}
}
node_user_data = {
for name, machine in var.machines :
name => <<EOF
%{ if ( length(machine.dns_servers != null ? machine.dns_servers : [] ) > 0 ) || ( length(var.dns_servers) > 0 && machine.dns_servers == null ) ~}
#!/bin/bash
echo -e "[Resolve]\nDNS=${ join(" ", length(machine.dns_servers != null ? machine.dns_servers : []) > 0 ? machine.dns_servers : var.dns_servers) }" > /etc/systemd/resolved.conf
systemctl restart systemd-resolved
%{ endif ~}
EOF
}
}
resource "upcloud_network" "private" {
@@ -62,6 +100,9 @@ resource "upcloud_network" "private" {
ip_network {
address = var.private_network_cidr
dhcp_default_route = var.router_enable
# TODO: When support for dhcp_dns for private networks are in, remove the user_data and enable it here.
# See more here https://github.com/UpCloudLtd/terraform-provider-upcloud/issues/562
# dhcp_dns = length(var.private_network_dns) > 0 ? var.private_network_dns : null
dhcp = true
family = "IPv4"
}
@@ -89,8 +130,8 @@ resource "upcloud_server" "master" {
hostname = "${local.resource-prefix}${each.key}"
plan = each.value.plan
cpu = each.value.plan == null ? null : each.value.cpu
mem = each.value.plan == null ? null : each.value.mem
cpu = each.value.cpu
mem = each.value.mem
zone = var.zone
server_group = each.value.server_group == null ? null : upcloud_server_group.server_groups[each.value.server_group].id
@@ -99,9 +140,12 @@ resource "upcloud_server" "master" {
size = each.value.disk_size
}
# Public network interface
network_interface {
type = "public"
dynamic "network_interface" {
for_each = each.value.force_public_ip || var.use_public_ips ? [1] : []
content {
type = "public"
}
}
# Private network interface
@@ -136,6 +180,9 @@ resource "upcloud_server" "master" {
keys = var.ssh_public_keys
create_password = false
}
metadata = local.node_user_data[each.key] != "" ? true : null
user_data = local.node_user_data[each.key] != "" ? local.node_user_data[each.key] : null
}
resource "upcloud_server" "worker" {
@@ -147,8 +194,8 @@ resource "upcloud_server" "worker" {
hostname = "${local.resource-prefix}${each.key}"
plan = each.value.plan
cpu = each.value.plan == null ? null : each.value.cpu
mem = each.value.plan == null ? null : each.value.mem
cpu = each.value.cpu
mem = each.value.mem
zone = var.zone
server_group = each.value.server_group == null ? null : upcloud_server_group.server_groups[each.value.server_group].id
@@ -158,9 +205,12 @@ resource "upcloud_server" "worker" {
size = each.value.disk_size
}
# Public network interface
network_interface {
type = "public"
dynamic "network_interface" {
for_each = each.value.force_public_ip || var.use_public_ips ? [1] : []
content {
type = "public"
}
}
# Private network interface
@@ -195,6 +245,63 @@ resource "upcloud_server" "worker" {
keys = var.ssh_public_keys
create_password = false
}
metadata = local.node_user_data[each.key] != "" ? true : null
user_data = local.node_user_data[each.key] != "" ? local.node_user_data[each.key] : null
}
resource "upcloud_server" "bastion" {
for_each = {
for name, machine in var.machines :
name => machine
if machine.node_type == "bastion"
}
hostname = "${local.resource-prefix}${each.key}"
plan = each.value.plan
cpu = each.value.cpu
mem = each.value.mem
zone = var.zone
server_group = each.value.server_group == null ? null : upcloud_server_group.server_groups[each.value.server_group].id
template {
storage = var.template_name
size = each.value.disk_size
}
# Private network interface
network_interface {
type = "private"
network = upcloud_network.private.id
}
# Private network interface
network_interface {
type = "public"
}
firewall = var.firewall_enabled
dynamic "storage_devices" {
for_each = {
for disk_key_name, disk in upcloud_storage.additional_disks :
disk_key_name => disk
# Only add the disk if it matches the node name in the start of its name
if length(regexall("^${each.key}_.+", disk_key_name)) > 0
}
content {
storage = storage_devices.value.id
}
}
# Include at least one public SSH key
login {
user = var.username
keys = var.ssh_public_keys
create_password = false
}
}
resource "upcloud_firewall_rules" "master" {
@@ -543,6 +650,53 @@ resource "upcloud_firewall_rules" "k8s" {
}
}
resource "upcloud_firewall_rules" "bastion" {
for_each = upcloud_server.bastion
server_id = each.value.id
dynamic "firewall_rule" {
for_each = var.bastion_allowed_remote_ips
content {
action = "accept"
comment = "Allow bastion SSH access from this network"
destination_port_end = "22"
destination_port_start = "22"
direction = "in"
family = "IPv4"
protocol = "tcp"
source_address_end = firewall_rule.value.end_address
source_address_start = firewall_rule.value.start_address
}
}
dynamic "firewall_rule" {
for_each = length(var.bastion_allowed_remote_ips) > 0 ? [1] : []
content {
action = "drop"
comment = "Drop bastion SSH access from other networks"
destination_port_end = "22"
destination_port_start = "22"
direction = "in"
family = "IPv4"
protocol = "tcp"
source_address_end = "255.255.255.255"
source_address_start = "0.0.0.0"
}
}
firewall_rule {
action = var.firewall_default_deny_in ? "drop" : "accept"
direction = "in"
}
firewall_rule {
action = var.firewall_default_deny_out ? "drop" : "accept"
direction = "out"
}
}
resource "upcloud_loadbalancer" "lb" {
count = var.loadbalancer_enabled ? 1 : 0
configured_status = "started"
@@ -583,7 +737,7 @@ resource "upcloud_loadbalancer_backend" "lb_backend" {
loadbalancer = upcloud_loadbalancer.lb[0].id
name = "lb-backend-${each.key}"
properties {
outbound_proxy_protocol = var.loadbalancer_outbound_proxy_protocol
outbound_proxy_protocol = each.value.proxy_protocol ? "v2" : ""
}
}
@@ -622,7 +776,7 @@ resource "upcloud_loadbalancer_static_backend_member" "lb_backend_member" {
backend = upcloud_loadbalancer_backend.lb_backend[each.value.lb_name].id
name = "${local.resource-prefix}${each.key}"
ip = merge(upcloud_server.master, upcloud_server.worker)[each.value.server_name].network_interface[1].ip_address
ip = merge(local.master_ip, local.worker_ip)["${local.resource-prefix}${each.value.server_name}"].private
port = each.value.port
weight = 100
max_sessions = var.loadbalancer_plan == "production-small" ? 50000 : 1000
@@ -662,7 +816,7 @@ resource "upcloud_router" "router" {
resource "upcloud_gateway" "gateway" {
for_each = var.router_enable ? var.gateways : {}
name = "${local.resource-prefix}${each.key}-gateway"
zone = var.zone
zone = var.private_cloud ? var.public_zone : var.zone
features = each.value.features
plan = each.value.plan

View File

@@ -1,22 +1,13 @@
output "master_ip" {
value = {
for instance in upcloud_server.master :
instance.hostname => {
"public_ip" : instance.network_interface[0].ip_address
"private_ip" : instance.network_interface[1].ip_address
}
}
value = local.master_ip
}
output "worker_ip" {
value = {
for instance in upcloud_server.worker :
instance.hostname => {
"public_ip" : instance.network_interface[0].ip_address
"private_ip" : instance.network_interface[1].ip_address
}
}
value = local.worker_ip
}
output "bastion_ip" {
value = local.bastion_ip
}
output "loadbalancer_domain" {

View File

@@ -20,15 +20,21 @@ variable "username" {}
variable "private_network_cidr" {}
variable "dns_servers" {}
variable "use_public_ips" {}
variable "machines" {
description = "Cluster machines"
type = map(object({
node_type = string
plan = string
cpu = string
mem = string
cpu = optional(number)
mem = optional(number)
disk_size = number
server_group : string
force_public_ip : optional(bool, false)
dns_servers : optional(set(string))
additional_disks = map(object({
size = number
tier = string
@@ -58,6 +64,13 @@ variable "k8s_allowed_remote_ips" {
}))
}
variable "bastion_allowed_remote_ips" {
type = list(object({
start_address = string
end_address = string
}))
}
variable "master_allowed_ports" {
type = list(object({
protocol = string
@@ -94,10 +107,6 @@ variable "loadbalancer_plan" {
type = string
}
variable "loadbalancer_outbound_proxy_protocol" {
type = string
}
variable "loadbalancer_legacy_network" {
type = bool
default = false
@@ -107,6 +116,7 @@ variable "loadbalancers" {
description = "Load balancers"
type = map(object({
proxy_protocol = bool
port = number
target_port = number
allow_internal_frontend = optional(bool)

View File

@@ -7,6 +7,10 @@ output "worker_ip" {
value = module.kubernetes.worker_ip
}
output "bastion_ip" {
value = module.kubernetes.bastion_ip
}
output "loadbalancer_domain" {
value = module.kubernetes.loadbalancer_domain
}

View File

@@ -1,17 +1,33 @@
[all]
${connection_strings_master}
${connection_strings_worker}
%{ for name, ips in master_ip ~}
${name} ansible_user=${username} ansible_host=${lookup(ips, "public", ips.private)} ip=${ips.private}
%{ endfor ~}
%{ for name, ips in worker_ip ~}
${name} ansible_user=${username} ansible_host=${lookup(ips, "public", ips.private)} ip=${ips.private}
%{ endfor ~}
[kube_control_plane]
${list_master}
%{ for name, ips in master_ip ~}
${name}
%{ endfor ~}
[etcd]
${list_master}
%{ for name, ips in master_ip ~}
${name}
%{ endfor ~}
[kube_node]
${list_worker}
%{ for name, ips in worker_ip ~}
${name}
%{ endfor ~}
[k8s_cluster:children]
kube_control_plane
kube_node
%{ if length(bastion_ip) > 0 ~}
[bastion]
%{ for name, ips in bastion_ip ~}
bastion ansible_user=${username} ansible_host=${ips.public}
%{ endfor ~}
%{ endif ~}

View File

@@ -32,16 +32,31 @@ variable "private_network_cidr" {
default = "172.16.0.0/24"
}
variable "dns_servers" {
description = "DNS servers that will be used by the nodes. Until [this is solved](https://github.com/UpCloudLtd/terraform-provider-upcloud/issues/562) this is done using user_data to reconfigure resolved"
type = set(string)
default = []
}
variable "use_public_ips" {
description = "If all nodes should get a public IP"
type = bool
default = true
}
variable "machines" {
description = "Cluster machines"
type = map(object({
node_type = string
plan = string
cpu = string
mem = string
cpu = optional(number)
mem = optional(number)
disk_size = number
server_group : string
force_public_ip : optional(bool, false)
dns_servers : optional(set(string))
additional_disks = map(object({
size = number
tier = string
@@ -89,6 +104,15 @@ variable "k8s_allowed_remote_ips" {
default = []
}
variable "bastion_allowed_remote_ips" {
description = "List of IP start/end addresses allowed to SSH to bastion"
type = list(object({
start_address = string
end_address = string
}))
default = []
}
variable "master_allowed_ports" {
description = "List of ports to allow on masters"
type = list(object({
@@ -131,11 +155,6 @@ variable "loadbalancer_plan" {
default = "development"
}
variable "loadbalancer_proxy_protocol" {
type = bool
default = false
}
variable "loadbalancer_legacy_network" {
description = "If the loadbalancer should use the deprecated network field instead of networks blocks. You probably want to have this set to false"
@@ -147,6 +166,7 @@ variable "loadbalancers" {
description = "Load balancers"
type = map(object({
proxy_protocol = bool
port = number
target_port = number
allow_internal_frontend = optional(bool, false)

View File

@@ -180,7 +180,7 @@ calico_group_id=rr1
The inventory above will deploy the following topology assuming that calico's
`global_as_num` is set to `65400`:
![Image](figures/kubespray-calico-rr.png?raw=true)
![Image](../figures/kubespray-calico-rr.png?raw=true)
### Optional : Define default endpoint to host action
@@ -377,7 +377,7 @@ To clean up any ipvs leftovers:
### Calico access to the kube-api
Calico node, typha and kube-controllers need to be able to talk to the kubernetes API. Please reference the [Enabling eBPF Calico Docs](https://docs.projectcalico.org/maintenance/ebpf/enabling-bpf) for guidelines on how to do this.
Calico node, typha and kube-controllers need to be able to talk to the kubernetes API. Please reference the [Enabling eBPF Calico Docs](https://docs.tigera.io/calico/latest/operations/ebpf/enabling-ebpf) for guidelines on how to do this.
Kubespray sets up the `kubernetes-services-endpoint` configmap based on the contents of the `loadbalancer_apiserver` inventory variable documented in [HA Mode](/docs/operations/ha-mode.md).

View File

@@ -54,6 +54,10 @@ cilium_loadbalancer_ip_pools:
- name: "blue-pool"
cidrs:
- "10.0.10.0/24"
ranges:
- start: "20.0.20.100"
stop: "20.0.20.200"
- start: "1.2.3.4"
```
For further information, check [LB IPAM documentation](https://docs.cilium.io/en/stable/network/lb-ipam/)
@@ -233,7 +237,7 @@ cilium_operator_extra_volume_mounts:
## Choose Cilium version
```yml
cilium_version: "1.15.9"
cilium_version: "1.18.4"
```
## Add variable to config

View File

@@ -1,79 +0,0 @@
# Weave
Weave 2.0.1 is supported by kubespray
Weave uses [**consensus**](https://www.weave.works/docs/net/latest/ipam/##consensus) mode (default mode) and [**seed**](https://www.weave.works/docs/net/latest/ipam/#seed) mode.
`Consensus` mode is best suited to static-size clusters and `seed` mode is best suited to dynamic-size clusters
Weave encryption is supported for all communication
* To use Weave encryption, specify a strong password (if no password, no encryption)
```ShellSession
# In file ./inventory/sample/group_vars/k8s_cluster.yml
weave_password: EnterPasswordHere
```
This password is used to set an environment variable inside weave container.
Weave is deployed by kubespray using a daemonSet
* Check the status of Weave containers
```ShellSession
# From client
kubectl -n kube-system get pods | grep weave
# output
weave-net-50wd2 2/2 Running 0 2m
weave-net-js9rb 2/2 Running 0 2m
```
There must be as many pods as nodes (here the Kubernetes cluster has 2 nodes, so there are 2 weave pods).
* Check status of weave (connection,encryption ...) for each node
```ShellSession
# On nodes
curl http://127.0.0.1:6784/status
# output on node1
Version: 2.0.1 (up to date; next check at 2017/08/01 13:51:34)
Service: router
Protocol: weave 1..2
Name: fa:16:3e:b3:d6:b2(node1)
Encryption: enabled
PeerDiscovery: enabled
Targets: 2
Connections: 2 (1 established, 1 failed)
Peers: 2 (with 2 established connections)
TrustedSubnets: none
Service: ipam
Status: ready
Range: 10.233.64.0/18
DefaultSubnet: 10.233.64.0/18
```
* Check parameters of weave for each node
```ShellSession
# On nodes
ps -aux | grep weaver
# output on node1 (here its use seed mode)
root 8559 0.2 3.0 365280 62700 ? Sl 08:25 0:00 /home/weave/weaver --name=fa:16:3e:b3:d6:b2 --port=6783 --datapath=datapath --host-root=/host --http-addr=127.0.0.1:6784 --status-addr=0.0.0.0:6782 --docker-api= --no-dns --db-prefix=/weavedb/weave-net --ipalloc-range=10.233.64.0/18 --nickname=node1 --ipalloc-init seed=fa:16:3e:b3:d6:b2,fa:16:3e:f0:50:53 --conn-limit=30 --expect-npc 192.168.208.28 192.168.208.19
```
## Consensus mode (default mode)
This mode is best suited to static-size clusters
### Seed mode
This mode is best suited to dynamic-size clusters
The seed mode also allows multi-cloud and hybrid on-premise/cloud cluster deployments.
* Switch from consensus mode to seed/Observation mode
See [weave ipam documentation](https://www.weave.works/docs/net/latest/tasks/ipam/ipam/) and use `weave_extra_args` to enable.

View File

@@ -68,8 +68,8 @@ containerd_runc_runtime:
engine: ""
root: ""
options:
systemdCgroup: "false"
binaryName: /usr/local/bin/my-runc
SystemdCgroup: "false"
BinaryName: /usr/local/bin/my-runc
base_runtime_spec: cri-base.json
```
@@ -149,3 +149,11 @@ following configuration:
```yaml
nri_enabled: true
```
### Optional : Static Binary
To ensure compatibility with older distributions (such as Debian 11), you can use a static containerd binary. By default, the static binary is used if the system's glibc version is less than 2.34; otherwise, the regular dynamically linked binary is used.
```yaml
containerd_static_binary: true
```
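To see which case applies on a host, a quick check of the glibc version (assuming a glibc-based distribution):

```ShellSession
ldd --version | head -n1   # Debian 11 reports glibc 2.31, which is below 2.34
```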

4
docs/_sidebar.md generated
View File

@@ -23,7 +23,6 @@
* [Aws](/docs/cloud_providers/aws.md)
* [Azure](/docs/cloud_providers/azure.md)
* [Cloud](/docs/cloud_providers/cloud.md)
* [Equinix-metal](/docs/cloud_providers/equinix-metal.md)
* CNI
* [Calico](/docs/CNI/calico.md)
* [Cilium](/docs/CNI/cilium.md)
@@ -33,7 +32,6 @@
* [Kube-router](/docs/CNI/kube-router.md)
* [Macvlan](/docs/CNI/macvlan.md)
* [Multus](/docs/CNI/multus.md)
* [Weave](/docs/CNI/weave.md)
* CRI
* [Containerd](/docs/CRI/containerd.md)
* [Cri-o](/docs/CRI/cri-o.md)
@@ -52,9 +50,7 @@
* [Test Cases](/docs/developers/test_cases.md)
* [Vagrant](/docs/developers/vagrant.md)
* External Storage Provisioners
* [Cephfs Provisioner](/docs/external_storage_provisioners/cephfs_provisioner.md)
* [Local Volume Provisioner](/docs/external_storage_provisioners/local_volume_provisioner.md)
* [Rbd Provisioner](/docs/external_storage_provisioners/rbd_provisioner.md)
* [Scheduler Plugins](/docs/external_storage_provisioners/scheduler_plugins.md)
* Getting Started
* [Comparisons](/docs/getting_started/comparisons.md)

View File

@@ -9,7 +9,6 @@ The following table shows the impact of the CPU architecture on compatible featu
| kube_network_plugin | amd64 | arm64 | amd64 + arm64 |
|---------------------|-------|-------|---------------|
| Calico | Y | Y | Y |
| Weave | Y | Y | Y |
| Flannel | Y | N | N |
| Canal | Y | N | N |
| Cilium | Y | Y | N |

View File

@@ -1,6 +1,6 @@
# Setting up Environment Proxy
If you set an http and https proxy, all nodes and the loadbalancer will be excluded from the proxy by generating the no_proxy variable in `roles/kubespray-defaults/tasks/no_proxy.yml`; if you have additional resources to exclude, add them to the `additional_no_proxy` variable. If you want to fully override your `no_proxy` setting, fill in just `no_proxy` and no node or loadbalancer addresses will be added to no_proxy.
If you set an http and https proxy, all nodes and the loadbalancer will be excluded from the proxy by generating the no_proxy variable in `roles/kubespray_defaults/tasks/no_proxy.yml`; if you have additional resources to exclude, add them to the `additional_no_proxy` variable. If you want to fully override your `no_proxy` setting, fill in just `no_proxy` and no node or loadbalancer addresses will be added to no_proxy.
## Set proxy for http and https

View File

@@ -13,7 +13,7 @@ KUBESPRAYDIR=kubespray
python3 -m venv $VENVDIR
source $VENVDIR/bin/activate
cd $KUBESPRAYDIR
pip install -U -r requirements.txt
pip install -r requirements.txt
```
In case you have a similar message when installing the requirements:
@@ -32,7 +32,7 @@ Based on the table below and the available python version for your ansible host
| Ansible Version | Python Version |
|-----------------|----------------|
| >= 2.16.4 | 3.10-3.12 |
| >= 2.17.3 | 3.10-3.12 |
## Customize Ansible vars
@@ -62,10 +62,9 @@ The following tags are defined in playbooks:
| aws-ebs-csi-driver | Configuring csi driver: aws-ebs |
| azure-csi-driver | Configuring csi driver: azure |
| bastion | Setup ssh config for bastion |
| bootstrap-os | Anything related to host OS configuration |
| bootstrap_os | Anything related to host OS configuration |
| calico | Network plugin Calico |
| calico_rr | Configuring Calico route reflector |
| cephfs-provisioner | Configuring CephFS |
| cert-manager | Configuring certificate manager for K8s |
| cilium | Network plugin Cilium |
| cinder-csi-driver | Configuring csi driver: cinder |
@@ -119,7 +118,6 @@ The following tags are defined in playbooks:
| local-path-provisioner | Configure External provisioner: local-path |
| local-volume-provisioner | Configure External provisioner: local-volume |
| macvlan | Network plugin macvlan |
| master (DEPRECATED) | Deprecated - see `control-plane` |
| metallb | Installing and configuring metallb |
| metrics_server | Configuring metrics_server |
| netchecker | Installing netchecker K8s app |
@@ -147,7 +145,6 @@ The following tags are defined in playbooks:
| registry | Configuring local docker registry |
| reset | Tasks run during the node reset |
| resolvconf | Configuring /etc/resolv.conf for hosts/apps |
| rbd-provisioner | Configure External provisioner: rdb |
| services | Remove services (etcd, kubelet etc...) when resetting |
| snapshot | Enabling csi snapshot |
| snapshot-controller | Configuring csi snapshot controller |
@@ -155,21 +152,16 @@ The following tags are defined in playbooks:
| upgrade | Upgrading, f.e. container images/binaries |
| upload | Distributing images/binaries across hosts |
| vsphere-csi-driver | Configuring csi driver: vsphere |
| weave | Network plugin Weave |
| win_nodes | Running windows specific tasks |
| youki | Configuring youki runtime |
Note: Use the ``bash scripts/gen_tags.sh`` command to generate a list of all
tags found in the codebase. New tags will be listed with an empty "Used for"
field.
## Example commands
Example command to filter and apply only DNS configuration tasks and skip
everything else related to host OS configuration and downloading images of containers:
```ShellSession
ansible-playbook -i inventory/sample/hosts.ini cluster.yml --tags preinstall,facts --skip-tags=download,bootstrap-os
ansible-playbook -i inventory/sample/hosts.ini cluster.yml --tags preinstall,facts --skip-tags=download,bootstrap_os
```
And this play only removes the K8s cluster DNS resolver IP from hosts' /etc/resolv.conf files:
@@ -208,11 +200,11 @@ You will then need to use [bind mounts](https://docs.docker.com/storage/bind-mou
to access the inventory and SSH key in the container, like this:
```ShellSession
git checkout v2.27.0
docker pull quay.io/kubespray/kubespray:v2.27.0
git checkout v2.29.0
docker pull quay.io/kubespray/kubespray:v2.29.0
docker run --rm -it --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \
--mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
quay.io/kubespray/kubespray:v2.27.0 bash
quay.io/kubespray/kubespray:v2.29.0 bash
# Inside the container you may now run the kubespray playbooks:
ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa cluster.yml
```

View File

@@ -2,14 +2,13 @@
Kubespray can be installed as an [Ansible collection](https://docs.ansible.com/ansible/latest/user_guide/collections_using.html).
## Requirements
- An inventory file with the appropriate host groups. See the [README](../README.md#usage).
- A `group_vars` directory. These group variables **need** to match the appropriate variable names under `inventory/local/group_vars`. See the [README](../README.md#usage).
## Usage
1. Add Kubespray to your requirements.yml file
1. Set up an inventory with the appropriate host groups and required group vars.
See also the documentation on [kubespray inventories](./inventory.md) and the
general ["Getting started" documentation](../getting_started/getting-started.md#building-your-own-inventory).
2. Add Kubespray to your requirements.yml file
```yaml
collections:
@@ -18,20 +17,20 @@ Kubespray can be installed as an [Ansible collection](https://docs.ansible.com/a
version: master # use the appropriate tag or branch for the version you need
```
2. Install your collection
3. Install your collection
```ShellSession
ansible-galaxy install -r requirements.yml
```
3. Create a playbook to install your Kubernetes cluster
4. Create a playbook to install your Kubernetes cluster
```yaml
- name: Install Kubernetes
ansible.builtin.import_playbook: kubernetes_sigs.kubespray.cluster
```
4. Update INVENTORY and PLAYBOOK so that they point to your inventory file and the playbook you created above, and then install Kubespray
5. Update INVENTORY and PLAYBOOK so that they point to your inventory file and the playbook you created above, and then install Kubespray
```ShellSession
ansible-playbook -i INVENTORY --become --become-user=root PLAYBOOK

View File

@@ -103,13 +103,13 @@ following default cluster parameters:
* *kube_service_addresses_ipv6* - Subnet for cluster IPv6 IPs (default is ``fd85:ee78:d8a6:8607::1000/116``). Must not overlap with ``kube_pods_subnet_ipv6``.
* *kube_service_subnets* - All service subnets separated by commas (default is a mix of ``kube_service_addresses`` and ``kube_service_addresses_ipv6`` depending on ``ipv4_stack`` and ``ipv6_stacke`` options),
* *kube_service_subnets* - All service subnets separated by commas (default is a mix of ``kube_service_addresses`` and ``kube_service_addresses_ipv6`` depending on ``ipv4_stack`` and ``ipv6_stack`` options),
for example ``10.233.0.0/18,fd85:ee78:d8a6:8607::1000/116`` for dual stack (ipv4_stack/ipv6_stack set to `true`).
It is not recommended to change this variable directly.
* *kube_pods_subnet_ipv6* - Subnet for Pod IPv6 IPs (default is ``fd85:ee78:d8a6:8607::1:0000/112``). Must not overlap with ``kube_service_addresses_ipv6``.
* *kube_pods_subnets* - All pods subnets separated by commas (default is a mix of ``kube_pods_subnet`` and ``kube_pod_subnet_ipv6`` depending on ``ipv4_stack`` and ``ipv6_stacke`` options),
* *kube_pods_subnets* - All pods subnets separated by commas (default is a mix of ``kube_pods_subnet`` and ``kube_pod_subnet_ipv6`` depending on ``ipv4_stack`` and ``ipv6_stack`` options),
for example ``10.233.64.0/18,fd85:ee78:d8a6:8607::1:0000/112`` for dual stack (ipv4_stack/ipv6_stack set to `true`).
It is not recommended to change this variable directly.
@@ -180,7 +180,7 @@ and ``kube_pods_subnet``, for example from the ``172.18.0.0/16``.
The IPv4 stack is enabled via *ipv4_stack*, which is set to ``true`` by default.
The IPv6 stack is enabled via *ipv6_stack*, which is set to ``false`` by default.
This will use the default IPv4 and IPv6 subnets specified in the defaults file in the ``kubespray-defaults`` role, unless overridden of course. The default config will give you room for up to 256 nodes with 126 pods per node, and up to 4096 services.
This will use the default IPv4 and IPv6 subnets specified in the defaults file in the ``kubespray_defaults`` role, unless overridden of course. The default config will give you room for up to 256 nodes with 126 pods per node, and up to 4096 services.
Set both variables to ``true`` for Dual Stack mode.
IPv4 has higher priority in Dual Stack mode(e.g. in variables `main_ip`, `main_access_ip` and other).
You can also make IPv6 only clusters with ``false`` in *ipv4_stack*.
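For reference, a minimal dual-stack sketch using the default subnets mentioned above (override only if you need different ranges):

```yaml
ipv4_stack: true
ipv6_stack: true
kube_service_addresses: 10.233.0.0/18
kube_service_addresses_ipv6: fd85:ee78:d8a6:8607::1000/116
kube_pods_subnet: 10.233.64.0/18
kube_pods_subnet_ipv6: fd85:ee78:d8a6:8607::1:0000/112
```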

View File

@@ -73,6 +73,7 @@ The cloud provider is configured to have Octavia by default in Kubespray.
external_openstack_lbaas_method: ROUND_ROBIN
external_openstack_lbaas_provider: amphora
external_openstack_lbaas_subnet_id: "Neutron subnet ID to create LBaaS VIP"
external_openstack_lbaas_member_subnet_id: "Neutron subnet ID on which to create the members of the load balancer"
external_openstack_lbaas_network_id: "Neutron network ID to create LBaaS VIP"
external_openstack_lbaas_manage_security_groups: false
external_openstack_lbaas_create_monitor: false

View File

@@ -1,100 +0,0 @@
# Equinix Metal
Kubespray provides support for bare metal deployments using the [Equinix Metal](http://metal.equinix.com).
Deploying on bare metal allows Kubernetes to run at locations where an existing public or private cloud might not exist, such
as cell tower or edge colocated installations. The deployment mechanism used by Kubespray for Equinix Metal is similar to that used for
AWS and OpenStack clouds (notably using Terraform to deploy the infrastructure). Terraform uses the Equinix Metal provider plugin
to provision and configure hosts which are then used by the Kubespray Ansible playbooks. The Ansible inventory is generated
dynamically from the Terraform state file.
## Local Host Configuration
To perform this installation, you will need a localhost to run Terraform/Ansible (laptop, VM, etc) and an account with Equinix Metal.
In this example, we are provisioning an m1.large CentOS 7 OpenStack VM as the localhost for the Kubernetes installation.
You'll need Ansible, Git, and pip.
```bash
sudo yum install epel-release
sudo yum install ansible
sudo yum install git
sudo yum install python-pip
```
## Playbook SSH Key
An SSH key is needed by Kubespray/Ansible to run the playbooks.
This key is installed into the bare metal hosts during the Terraform deployment.
You can generate a new key or use an existing one.
```bash
ssh-keygen -f ~/.ssh/id_rsa
```
## Install Terraform
Terraform is required to deploy the bare metal infrastructure. The steps below are for installing on CentOS 7.
[More terraform installation options are available.](https://learn.hashicorp.com/terraform/getting-started/install.html)
Grab the latest version of Terraform and install it.
```bash
echo "https://releases.hashicorp.com/terraform/$(curl -s https://checkpoint-api.hashicorp.com/v1/check/terraform | jq -r -M '.current_version')/terraform_$(curl -s https://checkpoint-api.hashicorp.com/v1/check/terraform | jq -r -M '.current_version')_linux_amd64.zip"
sudo yum install unzip
sudo unzip terraform_0.14.10_linux_amd64.zip -d /usr/local/bin/
```
## Download Kubespray
Pull down Kubespray and set up any required libraries.
```bash
git clone https://github.com/kubernetes-sigs/kubespray
cd kubespray
```
## Install Ansible
Install Ansible according to [Ansible installation guide](/docs/ansible/ansible.md#installing-ansible)
## Cluster Definition
In this example, a new cluster called "alpha" will be created.
```bash
cp -LRp contrib/terraform/packet/sample-inventory inventory/alpha
cd inventory/alpha/
ln -s ../../contrib/terraform/packet/hosts
```
Details about the cluster, such as the name, as well as the authentication tokens and project ID
for Equinix Metal need to be defined. To find these values see [Equinix Metal API Accounts](https://metal.equinix.com/developers/docs/accounts/).
```bash
vi cluster.tfvars
```
* cluster_name = alpha
* packet_project_id = ABCDEFGHIJKLMNOPQRSTUVWXYZ123456
* public_key_path = 12345678-90AB-CDEF-GHIJ-KLMNOPQRSTUV
## Deploy Bare Metal Hosts
Initializing Terraform will pull down any necessary plugins/providers.
```bash
terraform init ../../contrib/terraform/packet/
```
Run Terraform to deploy the hardware.
```bash
terraform apply -var-file=cluster.tfvars ../../contrib/terraform/packet
```
## Run Kubespray Playbooks
With the bare metal infrastructure deployed, Kubespray can now install Kubernetes and setup the cluster.
```bash
ansible-playbook --become -i inventory/alpha/hosts cluster.yml
```

View File

@@ -2,19 +2,14 @@
## Pipeline
1. build: build a docker image to be used in the pipeline
2. unit-tests: fast jobs for fast feedback (linting, etc...)
3. deploy-part1: small number of jobs to test if the PR works with default settings
4. deploy-extended: slow jobs testing different platforms, OS, settings, CNI, etc...
5. deploy-extended: very slow jobs (upgrades, etc...)
See [.gitlab-ci.yml](/.gitlab-ci.yml) and the included files for an overview.
## Runners
Kubespray has 3 types of GitLab runners:
Kubespray has 2 types of GitLab runners, both deployed on the Kubespray CI cluster (hosted on Oracle Cloud Infrastructure):
- packet runners: used for E2E jobs (usually long), running on Equinix Metal (ex-packet), on kubevirt managed VMs
- light runners: used for short lived jobs, running on Equinix Metal (ex-packet), as managed pods
- auto scaling runners (managed via docker-machine on Equinix Metal): used for on-demand resources, see [GitLab docs](https://docs.gitlab.com/runner/configuration/autoscale.html) for more info
- pods: use the [gitlab-ci kubernetes executor](https://docs.gitlab.com/runner/executors/kubernetes/)
- vagrant: custom executor running in pods with access to the libvirt socket on the nodes
## Vagrant
@@ -22,18 +17,17 @@ Vagrant jobs are using the [quay.io/kubespray/vagrant](/test-infra/vagrant-docke
## CI Variables
In CI we have a set of overrides we use to ensure greater success of our CI jobs and avoid throttling by various APIs we depend on. See:
In CI we have a [set of extra vars](/test/common_vars.yml) we use to ensure greater success of our CI jobs and avoid throttling by various APIs we depend on.
- [Docker mirrors](/tests/common/_docker_hub_registry_mirror.yml)
- [Test settings](/tests/common/_kubespray_test_settings.yml)
## CI clusters
## CI Environment
DISCLAIMER: The following information is not fully up to date; in particular, the CI cluster is now on Oracle Cloud Infrastructure, not Equinix.
The CI packet and light runners are deployed on a kubernetes cluster on Equinix Metal. The cluster is deployed with kubespray itself and maintained by the kubespray maintainers.
The cluster is deployed with kubespray itself and maintained by the kubespray maintainers.
The following files are used for that inventory:
### cluster.tfvars
### cluster.tfvars (OBSOLETE: this section is no longer accurate)
```ini
# your Kubernetes cluster name here
@@ -166,18 +160,6 @@ kube_feature_gates:
This section documents additional files used to complete a deployment of the kubespray CI; these files sit on the control-plane node and assume a working kubernetes cluster.
### /root/nscleanup.sh
```bash
#!/bin/bash
kubectl=/usr/local/bin/kubectl
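# Clean up CI namespaces: delete those whose age is reported in days, then those several hours old,
# and force-delete leftover kubevirt VM instances in namespaces stuck in Terminating.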
$kubectl get ns | grep -P "(\d.+-\d.+)" | awk 'match($3,/[0-9]+d/) {print $1}' | xargs -r $kubectl delete ns
$kubectl get ns | grep -P "(\d.+-\d.+)" | awk 'match($3,/[3-9]+h/) {print $1}' | xargs -r $kubectl delete ns
$kubectl get ns | grep Terminating | awk '{print $1}' | xargs -i $kubectl delete vmi/instance-1 vmi/instance-0 vmi/instance-2 -n {} --force --grace-period=0 &
```
### /root/path-calico.sh
```bash

View File

@@ -6,55 +6,52 @@ To generate this Matrix run `./tests/scripts/md-table/main.py`
| OS / CNI | calico | cilium | custom_cni | flannel | kube-ovn | kube-router | macvlan |
|---| --- | --- | --- | --- | --- | --- | --- |
almalinux8 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
almalinux9 | :white_check_mark: | :x: | :x: | :x: | :white_check_mark: | :x: | :x: |
amazon | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
debian11 | :white_check_mark: | :x: | :white_check_mark: | :x: | :x: | :x: | :white_check_mark: |
debian12 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :x: | :x: | :x: | :x: |
debian13 | :white_check_mark: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: |
fedora39 | :white_check_mark: | :x: | :x: | :x: | :x: | :white_check_mark: | :x: |
fedora40 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
flatcar4081 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
openeuler24 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
opensuse15 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
rockylinux8 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
rockylinux9 | :white_check_mark: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: |
ubuntu20 | :white_check_mark: | :white_check_mark: | :x: | :white_check_mark: | :x: | :white_check_mark: | :x: |
ubuntu20 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu22 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu24 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu24 | :white_check_mark: | :white_check_mark: | :x: | :white_check_mark: | :x: | :white_check_mark: | :x: |
## crio
| OS / CNI | calico | cilium | custom_cni | flannel | kube-ovn | kube-router | macvlan |
|---| --- | --- | --- | --- | --- | --- | --- |
almalinux8 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
almalinux9 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
amazon | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
debian11 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
debian12 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
debian13 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
fedora39 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
fedora40 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
fedora40 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
flatcar4081 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
openeuler24 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
opensuse15 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
rockylinux8 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
rockylinux9 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu20 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu22 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu20 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu22 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu24 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
## docker
| OS / CNI | calico | cilium | custom_cni | flannel | kube-ovn | kube-router | macvlan |
|---| --- | --- | --- | --- | --- | --- | --- |
almalinux8 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
almalinux9 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
amazon | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
debian11 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
debian12 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
debian13 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
fedora39 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
fedora40 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
flatcar4081 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
openeuler24 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
opensuse15 | :x: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: |
rockylinux8 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
rockylinux9 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu20 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu20 | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu22 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu24 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: |

View File

@@ -1,73 +0,0 @@
# CephFS Volume Provisioner for Kubernetes 1.5+
[![Docker Repository on Quay](https://quay.io/repository/external_storage/cephfs-provisioner/status "Docker Repository on Quay")](https://quay.io/repository/external_storage/cephfs-provisioner)
Using Ceph volume client
## Development
Compile the provisioner
``` console
make
```
Make the container image and push to the registry
``` console
make push
```
## Test instruction
- Start Kubernetes local cluster
See [Kubernetes](https://kubernetes.io/)
- Create a Ceph admin secret
``` bash
ceph auth get client.admin 2>&1 |grep "key = " |awk '{print $3}' |xargs echo -n > /tmp/secret
kubectl create ns cephfs
kubectl create secret generic ceph-secret-admin --from-file=/tmp/secret --namespace=cephfs
```
- Start CephFS provisioner
The following example uses `cephfs-provisioner-1` as the identity for the instance and assumes kubeconfig is at `/root/.kube`. The identity should remain the same if the provisioner restarts. If there are multiple provisioners, each should have a different identity.
``` bash
docker run -ti -v /root/.kube:/kube -v /var/run/kubernetes:/var/run/kubernetes --privileged --net=host cephfs-provisioner /usr/local/bin/cephfs-provisioner -master=http://127.0.0.1:8080 -kubeconfig=/kube/config -id=cephfs-provisioner-1
```
Alternatively, deploy it in kubernetes, see [deployment](deploy/README.md).
- Create a CephFS Storage Class
Replace Ceph monitor's IP in [example class](example/class.yaml) with your own and create storage class:
``` bash
kubectl create -f example/class.yaml
```
- Create a claim
``` bash
kubectl create -f example/claim.yaml
```
- Create a Pod using the claim
``` bash
kubectl create -f example/test-pod.yaml
```
## Known limitations
- Kernel CephFS doesn't work with SELinux; setting an SELinux label in the Pod's securityContext will not work.
- Kernel CephFS doesn't support quota or capacity; capacity requested by a PVC is not enforced or validated.
- Currently each Ceph user created by the provisioner has `allow r` MDS cap to permit CephFS mount.
## Acknowledgement
Inspired by CephFS Manila provisioner and conversation with John Spray

View File

@@ -1,79 +0,0 @@
# RBD Volume Provisioner for Kubernetes 1.5+
`rbd-provisioner` is an out-of-tree dynamic provisioner for Kubernetes 1.5+.
You can use it to quickly and easily deploy Ceph RBD storage that works almost
anywhere.
It works just like the in-tree dynamic provisioner. For more information on how
dynamic provisioning works, see [the docs](https://kubernetes.io/docs/concepts/storage/persistent-volumes/)
or [this blog post](http://blog.kubernetes.io/2016/10/dynamic-provisioning-and-storage-in-kubernetes.html).
## Development
Compile the provisioner
```console
make
```
Make the container image and push to the registry
```console
make push
```
## Test instruction
* Start Kubernetes local cluster
See [Kubernetes](https://kubernetes.io/).
* Create a Ceph admin secret
```bash
ceph auth get client.admin 2>&1 |grep "key = " |awk '{print $3}' |xargs echo -n > /tmp/secret
kubectl create secret generic ceph-admin-secret --from-file=/tmp/secret --namespace=kube-system
```
* Create a Ceph pool and a user secret
```bash
ceph osd pool create kube 8 8
ceph auth add client.kube mon 'allow r' osd 'allow rwx pool=kube'
ceph auth get-key client.kube > /tmp/secret
kubectl create secret generic ceph-secret --from-file=/tmp/secret --namespace=kube-system
```
* Start RBD provisioner
The following example uses `rbd-provisioner-1` as the identity for the instance and assumes kubeconfig is at `/root/.kube`. The identity should remain the same if the provisioner restarts. If there are multiple provisioners, each should have a different identity.
```bash
docker run -ti -v /root/.kube:/kube -v /var/run/kubernetes:/var/run/kubernetes --privileged --net=host quay.io/external_storage/rbd-provisioner /usr/local/bin/rbd-provisioner -master=http://127.0.0.1:8080 -kubeconfig=/kube/config -id=rbd-provisioner-1
```
Alternatively, deploy it in kubernetes, see [deployment](deploy/README.md).
* Create a RBD Storage Class
Replace Ceph monitor's IP in [examples/class.yaml](examples/class.yaml) with your own and create storage class:
```bash
kubectl create -f examples/class.yaml
```
* Create a claim
```bash
kubectl create -f examples/claim.yaml
```
* Create a Pod using the claim
```bash
kubectl create -f examples/test-pod.yaml
```
## Acknowledgements
* This provisioner is extracted from [Kubernetes core](https://github.com/kubernetes/kubernetes) with some modifications for this project.

View File

@@ -7,7 +7,7 @@ The kube-scheduler binary includes a list of plugins:
- [CapacityScheduling](https://github.com/kubernetes-sigs/scheduler-plugins/tree/master/pkg/capacityscheduling) [Beta]
- [CoScheduling](https://github.com/kubernetes-sigs/scheduler-plugins/tree/master/pkg/coscheduling) [Beta]
- [NodeResources](https://github.com/kubernetes-sigs/scheduler-plugins/tree/master/pkg/noderesources) [Beta]
- [NodeResouceTopology](https://github.com/kubernetes-sigs/scheduler-plugins/blob/master/pkg/noderesourcetopology/README.md) [Beta]
- [NodeResourceTopology](https://github.com/kubernetes-sigs/scheduler-plugins/blob/master/pkg/noderesourcetopology/README.md) [Beta]
- [PreemptionToleration](https://github.com/kubernetes-sigs/scheduler-plugins/blob/master/pkg/preemptiontoleration/README.md) [Alpha]
- [Trimaran](https://github.com/kubernetes-sigs/scheduler-plugins/blob/master/pkg/trimaran/README.md) [Alpha]
- [NetworkAware](https://github.com/kubernetes-sigs/scheduler-plugins/blob/master/pkg/networkaware/README.md) [Sample]

View File

@@ -61,12 +61,12 @@ gcloud compute networks subnets create kubernetes \
#### Firewall Rules
Create a firewall rule that allows internal communication across all protocols.
It is important to note that the vxlan protocol has to be allowed in order for
It is important to note that the vxlan (udp) protocol has to be allowed in order for
the calico (see later) networking plugin to work.
```ShellSession
gcloud compute firewall-rules create kubernetes-the-kubespray-way-allow-internal \
--allow tcp,udp,icmp,vxlan \
--allow tcp,udp,icmp \
--network kubernetes-the-kubespray-way \
--source-ranges 10.240.0.0/24
```
@@ -88,7 +88,7 @@ cluster.
### Compute Instances
The compute instances in this lab will be provisioned using [Ubuntu Server](https://www.ubuntu.com/server) 18.04.
The compute instances in this lab will be provisioned using [Ubuntu Server](https://www.ubuntu.com/server) 24.04.
Each compute instance will be provisioned with a fixed private IP address and
a public IP address (that can be fixed - see [guide](https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address)).
Using fixed public IP addresses has the advantage that our cluster node
@@ -103,7 +103,7 @@ for i in 0 1 2; do
--async \
--boot-disk-size 200GB \
--can-ip-forward \
--image-family ubuntu-1804-lts \
--image-family ubuntu-2404-lts-amd64 \
--image-project ubuntu-os-cloud \
--machine-type e2-standard-2 \
--private-network-ip 10.240.0.1${i} \
@@ -124,7 +124,7 @@ for i in 0 1 2; do
--async \
--boot-disk-size 200GB \
--can-ip-forward \
--image-family ubuntu-1804-lts \
--image-family ubuntu-2404-lts-amd64 \
--image-project ubuntu-os-cloud \
--machine-type e2-standard-2 \
--private-network-ip 10.240.0.2${i} \

View File

@@ -35,7 +35,7 @@ kubectl create clusterrolebinding cluster-admin-binding \
The following **Mandatory Command** is required for all deployments except for AWS. See below for the AWS version.
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0/deploy/static/provider/cloud/deploy.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.13.3/deploy/static/provider/cloud/deploy.yaml
```
### Provider Specific Steps

View File

@@ -1,4 +1,4 @@
# bootstrap-os
# bootstrap_os
Bootstrap an Ansible host to be able to run Ansible modules.
@@ -48,8 +48,8 @@ Remember to disable fact gathering since Python might not be present on hosts.
- hosts: all
gather_facts: false # not all hosts might be able to run modules yet
roles:
- kubespray-defaults
- bootstrap-os
- kubespray_defaults
- bootstrap_os
```
## License

View File

@@ -30,11 +30,6 @@ kube_memory_reserved: 256Mi
kube_cpu_reserved: 100m
# kube_ephemeral_storage_reserved: 2Gi
# kube_pid_reserved: "1000"
# Reservation for master hosts
kube_master_memory_reserved: 512Mi
kube_master_cpu_reserved: 200m
# kube_master_ephemeral_storage_reserved: 2Gi
# kube_master_pid_reserved: "1000"
# Set to true to reserve resources for system daemons
system_reserved: true
@@ -44,11 +39,6 @@ system_memory_reserved: 512Mi
system_cpu_reserved: 500m
# system_ephemeral_storage_reserved: 2Gi
# system_pid_reserved: "1000"
# Reservation for master hosts
system_master_memory_reserved: 256Mi
system_master_cpu_reserved: 250m
# system_master_ephemeral_storage_reserved: 2Gi
# system_master_pid_reserved: "1000"
```
After the setup, the cgroups hierarchy is as follows:

View File

@@ -18,8 +18,6 @@ The **kubernetes** version should be at least `v1.23.6` to have all the most rec
## kube-apiserver
authorization_modes: ['Node', 'RBAC']
# AppArmor-based OS
# kube_apiserver_feature_gates: ['AppArmor=true']
kube_apiserver_request_timeout: 120s
kube_apiserver_service_account_lookup: true
@@ -77,17 +75,17 @@ remove_anonymous_access: true
## kube-controller-manager
kube_controller_manager_bind_address: 127.0.0.1
kube_controller_terminated_pod_gc_threshold: 50
# AppArmor-based OS
# kube_controller_feature_gates: ["RotateKubeletServerCertificate=true", "AppArmor=true"]
kube_controller_feature_gates: ["RotateKubeletServerCertificate=true"]
## kube-scheduler
kube_scheduler_bind_address: 127.0.0.1
# AppArmor-based OS
# kube_scheduler_feature_gates: ["AppArmor=true"]
## etcd
etcd_deployment_type: kubeadm
# Running etcd (on dedicated hosts) outside the Kubernetes cluster is the most secure deployment option,
# as it isolates etcd from the cluster's CNI network and removes direct pod-level attack vectors.
# This approach prevents RBAC misconfigurations that potentially compromise etcd,
# creating an additional security boundary that protects the cluster's critical state store.
etcd_deployment_type: host
## kubelet
kubelet_authorization_mode_webhook: true
@@ -102,6 +100,8 @@ kubelet_make_iptables_util_chains: true
kubelet_feature_gates: ["RotateKubeletServerCertificate=true"]
kubelet_seccomp_default: true
kubelet_systemd_hardening: true
# To disable kubelet's staticPodPath (for nodes that don't use static pods like worker nodes)
kubelet_static_pod_path: ""
# In case you have multiple interfaces in your
# control plane nodes and you want to specify the right
# IP addresses, kubelet_secure_addresses allows you
@@ -126,9 +126,8 @@ Let's take a deep look to the resultant **kubernetes** configuration:
* The `encryption-provider-config` provides encryption at rest. This means that the `kube-apiserver` encrypts data before it is stored in `etcd`, so the data is completely unreadable from `etcd` (in case an attacker is able to exploit it).
* The `rotateCertificates` option in `KubeletConfiguration` is set to `true` along with `serverTLSBootstrap`. This can be used as an alternative to the `tlsCertFile` and `tlsPrivateKeyFile` parameters, and certificates are then generated automatically. By default the CSRs are approved automatically via [kubelet-csr-approver](https://github.com/postfinance/kubelet-csr-approver). You can customize the approval configuration by modifying the Helm values via `kubelet_csr_approver_values`.
See <https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/> for more information on the subject.
* If you are installing **kubernetes** in an AppArmor-based OS (eg. Debian/Ubuntu) you can enable the `AppArmor` feature gate uncommenting the lines with the comment `# AppArmor-based OS` on top.
* The `kubelet_systemd_hardening` option, together with `kubelet_secure_addresses`, sets up a minimal firewall on the system. To better understand how these variables work, here's an explanatory image:
![kubelet hardening](img/kubelet-hardening.png)
![kubelet hardening](../img/kubelet-hardening.png)
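As an illustration, a minimal sketch of these two variables; the pod subnet and control-plane IPs below are placeholders for your own values:

```yaml
kubelet_systemd_hardening: true
# allow localhost, link-local, the pod subnet and the control plane node IPs
kubelet_secure_addresses: "localhost link-local 10.233.64.0/18 192.168.10.10 192.168.10.11 192.168.10.12"
```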
Once you have the file properly filled, you can run the **Ansible** command to start the installation:

View File

@@ -2,58 +2,6 @@
Modified from [comments in #3471](https://github.com/kubernetes-sigs/kubespray/issues/3471#issuecomment-530036084)
## Limitation: Removal of first kube_control_plane and etcd-master
Currently you can't remove the first node in your kube_control_plane and etcd-master list. If you still want to remove this node you have to:
### 1) Change order of current control planes
Modify the order of your control plane list by pushing your first entry to any other position. E.g. if you want to remove `node-1` of the following example:
```yaml
children:
kube_control_plane:
hosts:
node-1:
node-2:
node-3:
kube_node:
hosts:
node-1:
node-2:
node-3:
etcd:
hosts:
node-1:
node-2:
node-3:
```
change your inventory to:
```yaml
children:
kube_control_plane:
hosts:
node-2:
node-3:
node-1:
kube_node:
hosts:
node-2:
node-3:
node-1:
etcd:
hosts:
node-2:
node-3:
node-1:
```
## 2) Upgrade the cluster
Run `upgrade-cluster.yml` or `cluster.yml`. Now you can proceed with the removal.
## Adding/replacing a worker node
This should be the easiest.
@@ -100,40 +48,74 @@ crictl ps | grep nginx-proxy | awk '{print $1}' | xargs crictl stop
With the old node still in the inventory, run `remove-node.yml`. You need to pass `-e node=NODE_NAME` to the playbook to limit the execution to the node being removed.
If the node you want to remove is not online, you should add `reset_nodes=false` and `allow_ungraceful_removal=true` to your extra-vars.
## Replacing a first control plane node
## Adding/Removal of first `kube_control_plane` and etcd-master
### 1) Change control plane nodes order in inventory
Currently you can't remove the first node in your `kube_control_plane` and etcd-master list. If you still want to remove this node you have to:
from
### 1) Change order of current control planes
```ini
[kube_control_plane]
node-1
node-2
node-3
Modify the order of your control plane list by pushing your first entry to any other position. E.g. if you want to remove `node-1` of the following example:
```yaml
all:
hosts:
children:
kube_control_plane:
hosts:
node-1:
node-2:
node-3:
kube_node:
hosts:
node-1:
node-2:
node-3:
etcd:
hosts:
node-1:
node-2:
node-3:
```
to
change your inventory to:
```ini
[kube_control_plane]
node-2
node-3
node-1
```yaml
all:
hosts:
children:
kube_control_plane:
hosts:
node-2:
node-3:
node-1:
kube_node:
hosts:
node-2:
node-3:
node-1:
etcd:
hosts:
node-2:
node-3:
node-1:
```
### 2) Remove old first control plane node from cluster
### 2) Upgrade the cluster
Run `upgrade-cluster.yml` or `cluster.yml`. Now you can proceed with the removal.
### 3) Remove old first control plane node from cluster
With the old node still in the inventory, run `remove-node.yml`. You need to pass `-e node=node-1` to the playbook to limit the execution to the node being removed.
If the node you want to remove is not online, you should add `reset_nodes=false` and `allow_ungraceful_removal=true` to your extra-vars.
### 3) Edit cluster-info configmap in kube-public namespace
### 4) Edit cluster-info configmap in kube-public namespace
`kubectl edit cm -n kube-public cluster-info`
Replace the IP of the old kube_control_plane node with the IP of a live kube_control_plane node (`server` field). Also update the `certificate-authority-data` field if you changed the certs.
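For orientation, the ConfigMap holds a kubeconfig; the `server` field is the one to update (values below are placeholders):

```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <base64 CA bundle>   # update if certs changed
    server: https://192.168.10.11:6443               # IP of a live control plane node
  name: ""
```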
### 4) Add new control plane node
### 5) Add new control plane node
Update inventory (if needed)

View File

@@ -22,6 +22,45 @@ Then you need to setup the following services on your offline environment:
You can get artifact lists with [generate_list.sh](/contrib/offline/generate_list.sh) script.
In addition, you can find some tools for offline deployment under [contrib/offline](/contrib/offline/README.md).
## Access Control
### Note: access controlled files_repo
To specify a username and password for "{{ files_repo }}", used to download the binaries, you can use url-encoding. Be aware that the Boolean `unsafe_show_logs` will show these credentials when `roles/download/tasks/download_file.yml` runs the task "Download_file | Show url of file to download". You can disable that Boolean in a job-template when running AWX/AAP/Semaphore.
```yaml
files_repo_host: example.com
files_repo_path: /repo
files_repo_user: download
files_repo_pass: !vault |
$ANSIBLE_VAULT;1.1;AES256
61663232643236353864663038616361373739613338623338656434386662363539613462626661
6435333438313034346164313631303534346564316361370a306661393232626364376436386439
64653965663965356137333436616536643132336630313235333232336661373761643766356366
6232353233386534380a373262313634613833623537626132633033373064336261383166323230
3164
files_repo: "https://{{ files_repo_user ~ ':' ~ files_repo_pass ~ '@' ~ files_repo_host ~ files_repo_path }}"
```
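If you prefer that the credential-bearing URL is never shown at all, you can also turn the flag off with a one-line override:

```yaml
unsafe_show_logs: false
```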
### Note: access controlled registry
To specify a username and password for "{{ registry_host }}", used to download the container images, you can use url-encoding too.
```yaml
registry_pass: !vault |
$ANSIBLE_VAULT;1.1;AES256
61663232643236353864663038616361373739613338623338656434386662363539613462626661
6435333438313034346164313631303534346564316361370a306661393232626364376436386439
64653965663965356137333436616536643132336630313235333232336661373761643766356366
6232353233386534380a373262313634613833623537626132633033373064336261383166323230
3164
containerd_registry_auth:
- registry: "{{ registry_host }}"
username: "{{ registry_user }}"
password: "{{ registry_pass }}"
```
## Configure Inventory
Once all artifacts are accessible from your internal network, **adjust** the following variables
@@ -35,21 +74,23 @@ docker_image_repo: "{{ registry_host }}"
quay_image_repo: "{{ registry_host }}"
github_image_repo: "{{ registry_host }}"
kubeadm_download_url: "{{ files_repo }}/kubernetes/{{ kube_version }}/kubeadm"
kubectl_download_url: "{{ files_repo }}/kubernetes/{{ kube_version }}/kubectl"
kubelet_download_url: "{{ files_repo }}/kubernetes/{{ kube_version }}/kubelet"
local_path_provisioner_helper_image_repo: "{{ registry_host }}/busybox"
kubeadm_download_url: "{{ files_repo }}/kubernetes/v{{ kube_version }}/kubeadm"
kubectl_download_url: "{{ files_repo }}/kubernetes/v{{ kube_version }}/kubectl"
kubelet_download_url: "{{ files_repo }}/kubernetes/v{{ kube_version }}/kubelet"
# etcd is optional if you **DON'T** use etcd_deployment=host
etcd_download_url: "{{ files_repo }}/kubernetes/etcd/etcd-{{ etcd_version }}-linux-{{ image_arch }}.tar.gz"
cni_download_url: "{{ files_repo }}/kubernetes/cni/cni-plugins-linux-{{ image_arch }}-{{ cni_version }}.tgz"
crictl_download_url: "{{ files_repo }}/kubernetes/cri-tools/crictl-{{ crictl_version }}-{{ ansible_system | lower }}-{{ image_arch }}.tar.gz"
etcd_download_url: "{{ files_repo }}/kubernetes/etcd/etcd-v{{ etcd_version }}-linux-{{ image_arch }}.tar.gz"
cni_download_url: "{{ files_repo }}/kubernetes/cni/cni-plugins-linux-{{ image_arch }}-v{{ cni_version }}.tgz"
crictl_download_url: "{{ files_repo }}/kubernetes/cri-tools/crictl-v{{ crictl_version }}-{{ ansible_system | lower }}-{{ image_arch }}.tar.gz"
# If using Calico
calicoctl_download_url: "{{ files_repo }}/kubernetes/calico/{{ calico_ctl_version }}/calicoctl-linux-{{ image_arch }}"
calicoctl_download_url: "{{ files_repo }}/kubernetes/calico/v{{ calico_ctl_version }}/calicoctl-linux-{{ image_arch }}"
# If using Calico with kdd
calico_crds_download_url: "{{ files_repo }}/kubernetes/calico/{{ calico_version }}.tar.gz"
calico_crds_download_url: "{{ files_repo }}/kubernetes/calico/v{{ calico_version }}.tar.gz"
# Containerd
containerd_download_url: "{{ files_repo }}/containerd-{{ containerd_version }}-linux-{{ image_arch }}.tar.gz"
runc_download_url: "{{ files_repo }}/runc.{{ image_arch }}"
nerdctl_download_url: "{{ files_repo }}/nerdctl-{{ nerdctl_version }}-{{ ansible_system | lower }}-{{ image_arch }}.tar.gz"
get_helm_url: "{{ files_repo }}/get.helm.sh"
# Insecure registries for containerd
containerd_registries_mirrors:
- prefix: "{{ registry_addr }}"
@@ -95,7 +136,7 @@ If you use the settings like the one above, you'll need to define in your invent
* `registry_host`: Container image registry. If you _don't_ use the same repository path for the container images as
the ones defined
in [kubesprays-defaults's role defaults](https://github.com/kubernetes-sigs/kubespray/blob/master/roles/kubespray-defaults/defaults/main/download.yml)
in [kubesprays-defaults's role defaults](https://github.com/kubernetes-sigs/kubespray/blob/master/roles/kubespray_defaults/defaults/main/download.yml)
, you need to override the `*_image_repo` for these container images. If you want to make your life easier, use the
same repository path; then you won't have to override anything else.
* `registry_addr`: Container image registry, but containing only [domain or ip]:[port].
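As an illustration, these inventory variables with placeholder values for an internal mirror:

```yaml
registry_host: "registry.example.internal:5000"
registry_addr: "registry.example.internal:5000"
files_repo: "https://files.example.internal/repo"
```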

View File

@@ -13,9 +13,7 @@ versions. Here are all version vars for each component:
* etcd_version
* calico_version
* calico_cni_version
* weave_version
* flannel_version
* kubedns_version
> **Warning**
> [Attempting to upgrade from an older release straight to the latest release is unsupported and likely to break something](https://github.com/kubernetes-sigs/kubespray/issues/3849#issuecomment-451386515)
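For example, a single component version can be pinned from your inventory group vars; the path and value below are placeholders and must match a version supported by your Kubespray release:

```yaml
# e.g. inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
etcd_version: "3.5.16"
```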
@@ -84,7 +82,7 @@ If you don't want to upgrade all nodes in one run, you can use `--limit` [patter
Before using `--limit` run playbook `facts.yml` without the limit to refresh facts cache for all nodes:
```ShellSession
ansible-playbook facts.yml -b -i inventory/sample/hosts.ini
ansible-playbook playbooks/facts.yml -b -i inventory/sample/hosts.ini
```
After this upgrade control plane and etcd groups [#5147](https://github.com/kubernetes-sigs/kubespray/issues/5147):
@@ -357,7 +355,7 @@ follows:
* Containerd
* etcd
* kubelet and kube-proxy
* network_plugin (such as Calico or Weave)
* network_plugin (such as Calico)
* kube-apiserver, kube-scheduler, and kube-controller-manager
* Add-ons (such as KubeDNS)

View File

@@ -12,7 +12,7 @@
hosts: kube_control_plane[0]
tasks:
- name: Include kubespray-default variables
include_vars: ../roles/kubespray-defaults/defaults/main/main.yml
include_vars: ../roles/kubespray_defaults/defaults/main/main.yml
- name: Copy get_cinder_pvs.sh to first control plane node
copy:
src: get_cinder_pvs.sh

View File

@@ -14,7 +14,7 @@
hosts: localhost
gather_facts: false
roles:
- { role: kubespray-defaults}
- { role: kubespray_defaults}
- { role: bastion-ssh-config, tags: ["localhost", "bastion"]}
- name: Bootstrap hosts OS for Ansible
@@ -22,18 +22,18 @@
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
gather_facts: false
vars:
# Need to disable pipelining for bootstrap-os as some systems have requiretty in sudoers set, which makes pipelining
# fail. bootstrap-os fixes this on these systems, so in later plays it can be enabled.
# Need to disable pipelining for bootstrap_os as some systems have requiretty in sudoers set, which makes pipelining
# fail. bootstrap_os fixes this on these systems, so in later plays it can be enabled.
ansible_ssh_pipelining: false
roles:
- { role: kubespray-defaults}
- { role: bootstrap-os, tags: bootstrap-os}
- { role: kubespray_defaults}
- { role: bootstrap_os, tags: bootstrap_os}
- name: Preinstall
hosts: k8s_cluster:etcd:calico_rr
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubespray_defaults}
- { role: kubernetes/preinstall, tags: preinstall }
- name: Handle upgrades to control plane components first to maintain backwards compat.
@@ -41,7 +41,7 @@
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
serial: 1
roles:
- { role: kubespray-defaults}
- { role: kubespray_defaults}
- { role: upgrade/pre-upgrade, tags: pre-upgrade }
- { role: kubernetes/node, tags: node }
- { role: kubernetes/control-plane, tags: master, upgrade_cluster_setup: true }
@@ -54,8 +54,8 @@
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
serial: "{{ serial | default('20%') }}"
roles:
- { role: kubespray-defaults}
- { role: kubespray_defaults}
- { role: upgrade/pre-upgrade, tags: pre-upgrade }
- { role: kubernetes/node, tags: node }
- { role: upgrade/post-upgrade, tags: post-upgrade }
- { role: kubespray-defaults}
- { role: kubespray_defaults}

View File

@@ -2,7 +2,7 @@
namespace: kubernetes_sigs
description: Deploy a production ready Kubernetes cluster
name: kubespray
version: 2.28.0
version: 2.29.2
readme: README.md
authors:
- The Kubespray maintainers (https://kubernetes.slack.com/channels/kubespray)

View File

@@ -57,7 +57,7 @@ loadbalancer_apiserver_healthcheck_port: 8081
# https_proxy: ""
# https_proxy_cert_file: ""
## Refer to roles/kubespray-defaults/defaults/main/main.yml before modifying no_proxy
## Refer to roles/kubespray_defaults/defaults/main/main.yml before modifying no_proxy
# no_proxy: ""
## Some problems may occur when downloading files over https proxy due to ansible bug
@@ -115,6 +115,9 @@ no_proxy_exclude_workers: false
# sysctl_file_path to add sysctl conf to
# sysctl_file_path: "/etc/sysctl.d/99-sysctl.conf"
# ignore sysctl errors about unknown keys
# sysctl_ignore_unknown_keys: false
## Variables for webhook token auth https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication
kube_webhook_token_auth: false
kube_webhook_token_auth_url_skip_tls_verify: false

View File

@@ -50,6 +50,8 @@
# - host: https://registry-1.docker.io
# capabilities: ["pull", "resolve"]
# skip_verify: false
# header:
# Authorization: "Basic XXX"
# containerd_max_container_log_line_size: 16384
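# Illustrative sketch of a mirror entry using the auth header above (all values are placeholders):
# containerd_registries_mirrors:
#   - prefix: docker.io
#     mirrors:
#       - host: https://registry-1.docker.io
#         capabilities: ["pull", "resolve"]
#         skip_verify: false
#         header:
#           Authorization: "Basic <base64-encoded user:password>"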

View File

@@ -43,7 +43,6 @@
# ocid1.subnet.oc1.phx.aaaaaaaahuxrgvs65iwdz7ekwgg3l5gyah7ww5klkwjcso74u3e4i64hvtvq: ocid1.securitylist.oc1.iad.aaaaaaaaqti5jsfvyw6ejahh7r4okb2xbtuiuguswhs746mtahn72r7adt7q
## If oci_use_instance_principals is true, you do not need to set the region, tenancy, user, key, passphrase, or fingerprint
# oci_use_instance_principals: false
# oci_cloud_controller_version: 0.6.0
## If you would like to control OCI query rate limits for the controller
# oci_rate_limit:
# rate_limit_qps_read:

View File

@@ -18,9 +18,9 @@
# quay_image_repo: "{{ registry_host }}"
## Kubernetes components
# kubeadm_download_url: "{{ files_repo }}/dl.k8s.io/release/{{ kube_version }}/bin/linux/{{ image_arch }}/kubeadm"
# kubectl_download_url: "{{ files_repo }}/dl.k8s.io/release/{{ kube_version }}/bin/linux/{{ image_arch }}/kubectl"
# kubelet_download_url: "{{ files_repo }}/dl.k8s.io/release/{{ kube_version }}/bin/linux/{{ image_arch }}/kubelet"
# kubeadm_download_url: "{{ files_repo }}/dl.k8s.io/release/v{{ kube_version }}/bin/linux/{{ image_arch }}/kubeadm"
# kubectl_download_url: "{{ files_repo }}/dl.k8s.io/release/v{{ kube_version }}/bin/linux/{{ image_arch }}/kubectl"
# kubelet_download_url: "{{ files_repo }}/dl.k8s.io/release/v{{ kube_version }}/bin/linux/{{ image_arch }}/kubelet"
## Two options - Override entire repository or override only a single binary.
@@ -33,24 +33,24 @@
## [Optional] 2 - Override a specific binary
## CNI Plugins
# cni_download_url: "{{ files_repo }}/github.com/containernetworking/plugins/releases/download/{{ cni_version }}/cni-plugins-linux-{{ image_arch }}-{{ cni_version }}.tgz"
# cni_download_url: "{{ files_repo }}/github.com/containernetworking/plugins/releases/download/v{{ cni_version }}/cni-plugins-linux-{{ image_arch }}-v{{ cni_version }}.tgz"
## cri-tools
# crictl_download_url: "{{ files_repo }}/github.com/kubernetes-sigs/cri-tools/releases/download/{{ crictl_version }}/crictl-{{ crictl_version }}-{{ ansible_system | lower }}-{{ image_arch }}.tar.gz"
# crictl_download_url: "{{ files_repo }}/github.com/kubernetes-sigs/cri-tools/releases/download/v{{ crictl_version }}/crictl-v{{ crictl_version }}-{{ ansible_system | lower }}-{{ image_arch }}.tar.gz"
## [Optional] etcd: only if you use etcd_deployment=host
# etcd_download_url: "{{ files_repo }}/github.com/etcd-io/etcd/releases/download/{{ etcd_version }}/etcd-{{ etcd_version }}-linux-{{ image_arch }}.tar.gz"
# etcd_download_url: "{{ files_repo }}/github.com/etcd-io/etcd/releases/download/v{{ etcd_version }}/etcd-v{{ etcd_version }}-linux-{{ image_arch }}.tar.gz"
# [Optional] Calico: If using Calico network plugin
# calicoctl_download_url: "{{ files_repo }}/github.com/projectcalico/calico/releases/download/{{ calico_ctl_version }}/calicoctl-linux-{{ image_arch }}"
# calicoctl_download_url: "{{ files_repo }}/github.com/projectcalico/calico/releases/download/v{{ calico_ctl_version }}/calicoctl-linux-{{ image_arch }}"
# [Optional] Calico with kdd: If using Calico network plugin with kdd datastore
# calico_crds_download_url: "{{ files_repo }}/github.com/projectcalico/calico/archive/{{ calico_version }}.tar.gz"
# calico_crds_download_url: "{{ files_repo }}/github.com/projectcalico/calico/archive/v{{ calico_version }}.tar.gz"
# [Optional] Cilium: If using Cilium network plugin
# ciliumcli_download_url: "{{ files_repo }}/github.com/cilium/cilium-cli/releases/download/{{ cilium_cli_version }}/cilium-linux-{{ image_arch }}.tar.gz"
# ciliumcli_download_url: "{{ files_repo }}/github.com/cilium/cilium-cli/releases/download/v{{ cilium_cli_version }}/cilium-linux-{{ image_arch }}.tar.gz"
# [Optional] helm: only if you set helm_enabled: true
# helm_download_url: "{{ files_repo }}/get.helm.sh/helm-{{ helm_version }}-linux-{{ image_arch }}.tar.gz"
# helm_download_url: "{{ files_repo }}/get.helm.sh/helm-v{{ helm_version }}-linux-{{ image_arch }}.tar.gz"
# [Optional] crun: only if you set crun_enabled: true
# crun_download_url: "{{ files_repo }}/github.com/containers/crun/releases/download/{{ crun_version }}/crun-{{ crun_version }}-linux-{{ image_arch }}"
@@ -62,13 +62,13 @@
# cri_dockerd_download_url: "{{ files_repo }}/github.com/Mirantis/cri-dockerd/releases/download/v{{ cri_dockerd_version }}/cri-dockerd-{{ cri_dockerd_version }}.{{ image_arch }}.tgz"
# [Optional] runc: if you set container_manager to containerd or crio
# runc_download_url: "{{ files_repo }}/github.com/opencontainers/runc/releases/download/{{ runc_version }}/runc.{{ image_arch }}"
# runc_download_url: "{{ files_repo }}/github.com/opencontainers/runc/releases/download/v{{ runc_version }}/runc.{{ image_arch }}"
# [Optional] cri-o: only if you set container_manager: crio
# crio_download_base: "download.opensuse.org/repositories/devel:kubic:libcontainers:stable"
# crio_download_crio: "http://{{ crio_download_base }}:/cri-o:/"
# crio_download_url: "{{ files_repo }}/storage.googleapis.com/cri-o/artifacts/cri-o.{{ image_arch }}.{{ crio_version }}.tar.gz"
# skopeo_download_url: "{{ files_repo }}/github.com/lework/skopeo-binary/releases/download/{{ skopeo_version }}/skopeo-linux-{{ image_arch }}"
# crio_download_url: "{{ files_repo }}/storage.googleapis.com/cri-o/artifacts/cri-o.{{ image_arch }}.v{{ crio_version }}.tar.gz"
# skopeo_download_url: "{{ files_repo }}/github.com/lework/skopeo-binary/releases/download/v{{ skopeo_version }}/skopeo-linux-{{ image_arch }}"
# [Optional] containerd: only if you set container_runtime: containerd
# containerd_download_url: "{{ files_repo }}/github.com/containerd/containerd/releases/download/v{{ containerd_version }}/containerd-{{ containerd_version }}-linux-{{ image_arch }}.tar.gz"

View File

@@ -1,5 +1,4 @@
## When OpenStack is used, Cinder version can be explicitly specified if autodetection fails (Fixed in 1.9: https://github.com/kubernetes/kubernetes/issues/50461)
# openstack_blockstorage_version: "v1/v2/auto (default)"
# openstack_blockstorage_ignore_volume_az: yes
## When OpenStack is used, if LBaaSv2 is available you can enable it with the following 2 variables.
# openstack_lbaas_enabled: True

View File

@@ -7,26 +7,6 @@
# external_vsphere_datacenter: "DATACENTER_name"
# external_vsphere_kubernetes_cluster_id: "kubernetes-cluster-id"
## Vsphere version where located VMs
# external_vsphere_version: "6.7u3"
## Tags for the external vSphere Cloud Provider images
## registry.k8s.io/cloud-pv-vsphere/cloud-provider-vsphere
# external_vsphere_cloud_controller_image_tag: "v1.31.0"
## registry.k8s.io/csi-vsphere/syncer
# vsphere_syncer_image_tag: "v3.3.1"
## registry.k8s.io/sig-storage/csi-attacher
# vsphere_csi_attacher_image_tag: "v3.4.0"
## registry.k8s.io/csi-vsphere/driver
# vsphere_csi_controller: "v3.3.1"
## registry.k8s.io/sig-storage/livenessprobe
# vsphere_csi_liveness_probe_image_tag: "v2.6.0"
## registry.k8s.io/sig-storage/csi-provisioner
# vsphere_csi_provisioner_image_tag: "v3.1.0"
## registry.k8s.io/sig-storage/csi-resizer
## makes sense only for vSphere version >=7.0
# vsphere_csi_resizer_tag: "v1.3.0"
## To use vSphere CSI plugin to provision volumes set this value to true
# vsphere_csi_enabled: true
# vsphere_csi_controller_replicas: 1

View File

@@ -1,38 +0,0 @@
---
## Etcd auto compaction retention for mvcc key value store in hour
# etcd_compaction_retention: 0
## Set level of detail for etcd exported metrics, specify 'extensive' to include histogram metrics.
# etcd_metrics: basic
## Etcd is restricted by default to 512M on systems under 4GB RAM, 512MB is not enough for much more than testing.
## Set this if your etcd nodes have less than 4GB but you want more RAM for etcd. Set to 0 for unrestricted RAM.
## This value is only relevant when deploying etcd with `etcd_deployment_type: docker`
# etcd_memory_limit: "512M"
## Etcd has a default of 2G for its space quota. If you put a value in etcd_memory_limit which is less than
## etcd_quota_backend_bytes, you may encounter out of memory terminations of the etcd cluster. Please check
## etcd documentation for more information.
# 8G is a suggested maximum size for normal environments and etcd warns at startup if the configured value exceeds it.
# etcd_quota_backend_bytes: "2147483648"
# Maximum client request size in bytes the server will accept.
# etcd is designed to handle small key value pairs typical for metadata.
# Larger requests will work, but may increase the latency of other requests
# etcd_max_request_bytes: "1572864"
### ETCD: disable peer client cert authentication.
# This affects ETCD_PEER_CLIENT_CERT_AUTH variable
# etcd_peer_client_auth: true
## Enable distributed tracing
## To enable this experimental feature, set the etcd_experimental_enable_distributed_tracing: true, along with the
## etcd_experimental_distributed_tracing_sample_rate to choose how many samples to collect per million spans,
## the default sampling rate is 0 https://etcd.io/docs/v3.5/op-guide/monitoring/#distributed-tracing
# etcd_experimental_enable_distributed_tracing: false
# etcd_experimental_distributed_tracing_sample_rate: 100
# etcd_experimental_distributed_tracing_address: "localhost:4317"
# etcd_experimental_distributed_tracing_service_name: etcd
## The interval for etcd watch progress notify events
# etcd_experimental_watch_progress_notify_interval: 5s

View File

@@ -29,7 +29,6 @@ local_path_provisioner_enabled: false
# local_path_provisioner_claim_root: /opt/local-path-provisioner/
# local_path_provisioner_debug: false
# local_path_provisioner_image_repo: "{{ docker_image_repo }}/rancher/local-path-provisioner"
# local_path_provisioner_image_tag: "v0.0.24"
# local_path_provisioner_helper_image_repo: "busybox"
# local_path_provisioner_helper_image_tag: "latest"
@@ -65,40 +64,8 @@ local_volume_provisioner_enabled: false
# csi snapshot namespace
# snapshot_controller_namespace: kube-system
# CephFS provisioner deployment
cephfs_provisioner_enabled: false
# cephfs_provisioner_namespace: "cephfs-provisioner"
# cephfs_provisioner_cluster: ceph
# cephfs_provisioner_monitors: "172.24.0.1:6789,172.24.0.2:6789,172.24.0.3:6789"
# cephfs_provisioner_admin_id: admin
# cephfs_provisioner_secret: secret
# cephfs_provisioner_storage_class: cephfs
# cephfs_provisioner_reclaim_policy: Delete
# cephfs_provisioner_claim_root: /volumes
# cephfs_provisioner_deterministic_names: true
# RBD provisioner deployment
rbd_provisioner_enabled: false
# rbd_provisioner_namespace: rbd-provisioner
# rbd_provisioner_replicas: 2
# rbd_provisioner_monitors: "172.24.0.1:6789,172.24.0.2:6789,172.24.0.3:6789"
# rbd_provisioner_pool: kube
# rbd_provisioner_admin_id: admin
# rbd_provisioner_secret_name: ceph-secret-admin
# rbd_provisioner_secret: ceph-key-admin
# rbd_provisioner_user_id: kube
# rbd_provisioner_user_secret_name: ceph-secret-user
# rbd_provisioner_user_secret: ceph-key-user
# rbd_provisioner_user_secret_namespace: rbd-provisioner
# rbd_provisioner_fs_type: ext4
# rbd_provisioner_image_format: "2"
# rbd_provisioner_image_features: layering
# rbd_provisioner_storage_class: rbd
# rbd_provisioner_reclaim_policy: Delete
# Gateway API CRDs
gateway_api_enabled: false
# gateway_api_experimental_channel: false
# Nginx ingress controller deployment
ingress_nginx_enabled: false
@@ -180,7 +147,6 @@ cert_manager_enabled: false
metallb_enabled: false
metallb_speaker_enabled: "{{ metallb_enabled }}"
metallb_namespace: "metallb-system"
# metallb_version: 0.13.9
# metallb_protocol: "layer2"
# metallb_port: "7472"
# metallb_memberlist_port: "7946"
@@ -242,7 +208,6 @@ metallb_namespace: "metallb-system"
# - pool2
argocd_enabled: false
# argocd_version: 2.14.5
# argocd_namespace: argocd
# Default password:
# - https://argo-cd.readthedocs.io/en/stable/getting_started/#4-login-using-the-cli
@@ -270,6 +235,7 @@ kube_vip_enabled: false
# kube_vip_cp_detect: false
# kube_vip_leasename: plndr-cp-lock
# kube_vip_enable_node_labeling: false
# kube_vip_lb_fwdmethod: local
# Node Feature Discovery
node_feature_discovery_enabled: false

View File

@@ -16,9 +16,6 @@ kube_token_dir: "{{ kube_config_dir }}/tokens"
kube_api_anonymous_auth: true
## Change this to use another Kubernetes version, e.g. a current beta release
kube_version: 1.32.2
# Where the binaries will be downloaded.
# Note: ensure that you've enough disk space (about 1G)
local_release_dir: "/tmp/releases"
@@ -65,7 +62,7 @@ credentials_dir: "{{ inventory_dir }}/credentials"
# kube_webhook_authorization_url: https://...
# kube_webhook_authorization_url_skip_tls_verify: false
# Choose network plugin (cilium, calico, kube-ovn, weave or flannel. Use cni for generic cni plugin)
# Choose network plugin (cilium, calico, kube-ovn or flannel. Use cni for generic cni plugin)
# Can also be set to 'cloud', which lets the cloud provider setup appropriate routing
kube_network_plugin: calico
@@ -349,7 +346,7 @@ event_ttl_duration: "1h0m0s"
## Automatically renew K8S control plane certificates on first Monday of each month
auto_renew_certificates: false
# First Monday of each month
# auto_renew_certificates_systemd_calendar: "Mon *-*-1,2,3,4,5,6,7 03:{{ groups['kube_control_plane'].index(inventory_hostname) }}0:00"
# auto_renew_certificates_systemd_calendar: "Mon *-*-1,2,3,4,5,6,7 03:00:00"
kubeadm_patches_dir: "{{ kube_config_dir }}/patches"
kubeadm_patches: []

View File

@@ -25,15 +25,9 @@ calico_pool_blocksize: 26
# add default ippool CIDR (must be inside kube_pods_subnet, defaults to kube_pods_subnet otherwise)
# calico_pool_cidr: 1.2.3.4/5
# add default ippool CIDR to CNI config
# calico_cni_pool: true
# Add default IPV6 IPPool CIDR. Must be inside kube_pods_subnet_ipv6. Defaults to kube_pods_subnet_ipv6 if not set.
# calico_pool_cidr_ipv6: fd85:ee78:d8a6:8607::1:0000/112
# Add default IPV6 IPPool CIDR to CNI config
# calico_cni_pool_ipv6: true
# Global as_num (/calico/bgp/v1/global/as_num)
# global_as_num: "64512"

View File

@@ -1,6 +1,4 @@
---
# cilium_version: "1.15.9"
# Log-level
# cilium_debug: false
@@ -177,6 +175,10 @@ cilium_l2announcements: false
### Buffer size of the channel to receive monitor events.
# cilium_hubble_event_queue_size: 50
# Override the DNS suffix that Hubble-Relay uses to resolve its peer service.
# It defaults to the inventory's `dns_domain`.
# cilium_hubble_peer_service_cluster_domain: "{{ dns_domain }}"
# IP address management mode for v1.9+.
# https://docs.cilium.io/en/v1.9/concepts/networking/ipam/
# cilium_ipam_mode: kubernetes
@@ -255,6 +257,10 @@ cilium_l2announcements: false
# - name: "blue-pool"
# cidrs:
# - "10.0.10.0/24"
# ranges:
# - start: "20.0.20.100"
# stop: "20.0.20.200"
# - start: "1.2.3.4"
# -- Configure BGP Instances (New bgpv2 API v1.16+)
# cilium_bgp_cluster_configs:
@@ -378,3 +384,7 @@ cilium_l2announcements: false
# resourceNames:
# - toto
# cilium_clusterrole_rules_operator_extra_vars: []
# Cilium extra values, use any values from cilium Helm Chart
# ref: https://docs.cilium.io/en/stable/helm-reference/
# cilium_extra_values: {}
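
cilium_extra_values forwards arbitrary Helm values to the cilium chart. A hedged sketch, assuming operator.replicas and hubble.metrics.enabled are valid keys in the chart version being deployed (check the Helm reference linked above before relying on them):

cilium_extra_values:
  operator:
    replicas: 2
  hubble:
    metrics:
      enabled:
        - dns
        - drop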

View File

@@ -45,7 +45,7 @@
# custom_cni_chart_repository_name: cilium
# custom_cni_chart_repository_url: https://helm.cilium.io
# custom_cni_chart_ref: cilium/cilium
# custom_cni_chart_version: 1.14.3
# custom_cni_chart_version: <chart version> (e.g.: 1.14.3)
# custom_cni_chart_values:
# cluster:
# name: "cilium-demo"
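
Filled in with the cilium repository already used as the example above, and a hypothetical pinned chart version (the sample deliberately leaves the version placeholder unresolved):

custom_cni_chart_repository_name: cilium
custom_cni_chart_repository_url: https://helm.cilium.io
custom_cni_chart_ref: cilium/cilium
custom_cni_chart_version: "1.16.0"  # hypothetical pin; replace with the chart version you intend to deploy
custom_cni_chart_values:
  cluster:
    name: "cilium-demo"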

View File

@@ -1,11 +1,5 @@
# See roles/network_plugin/kube-router/defaults/main.yml
# Kube router version
# Default to v2
# kube_router_version: "2.0.0"
# Uncomment to use v1 (Deprecated)
# kube_router_version: "1.6.0"
# Enables Pod Networking -- Advertises and learns the routes to Pods via iBGP
# kube_router_run_router: true

View File

@@ -1,64 +0,0 @@
# see roles/network_plugin/weave/defaults/main.yml
# Weave's network password for encryption, if null then no network encryption.
# weave_password: ~
# If set to 1, disable checking for new Weave Net versions (default is blank,
# i.e. check is enabled)
# weave_checkpoint_disable: false
# Soft limit on the number of connections between peers. Defaults to 100.
# weave_conn_limit: 100
# Weave Net defaults to enabling hairpin on the bridge side of the veth pair
# for containers attached. If you need to disable hairpin, e.g. your kernel is
# one of those that can panic if hairpin is enabled, then you can disable it by
# setting `HAIRPIN_MODE=false`.
# weave_hairpin_mode: true
# The range of IP addresses used by Weave Net and the subnet they are placed in
# (CIDR format; default 10.32.0.0/12)
# weave_ipalloc_range: "{{ kube_pods_subnet }}"
# Set to 0 to disable Network Policy Controller (default is on)
# weave_expect_npc: "{{ enable_network_policy }}"
# List of addresses of peers in the Kubernetes cluster (default is to fetch the
# list from the api-server)
# weave_kube_peers: ~
# Set the initialization mode of the IP Address Manager (defaults to consensus
# amongst the KUBE_PEERS)
# weave_ipalloc_init: ~
# Set the IP address used as a gateway from the Weave network to the host
# network - this is useful if you are configuring the addon as a static pod.
# weave_expose_ip: ~
# Address and port that the Weave Net daemon will serve Prometheus-style
# metrics on (defaults to 0.0.0.0:6782)
# weave_metrics_addr: ~
# Address and port that the Weave Net daemon will serve status requests on
# (defaults to disabled)
# weave_status_addr: ~
# Weave Net defaults to 1376 bytes, but you can set a smaller size if your
# underlying network has a tighter limit, or set a larger size for better
# performance if your network supports jumbo frames (e.g. 8916)
# weave_mtu: 1376
# Set to 1 to preserve the client source IP address when accessing Service
# annotated with `service.spec.externalTrafficPolicy=Local`. The feature works
# only with Weave IPAM (default).
# weave_no_masq_local: true
# set to nft to use nftables backend for iptables (default is iptables)
# weave_iptables_backend: iptables
# Extra variables passed to launch.sh, useful for enabling seed mode, see
# https://www.weave.works/docs/net/latest/tasks/ipam/ipam/
# weave_extra_args: ~
# Extra variables for weave_npc passed to launch.sh, useful for changing the log level, e.g. --log-level=error
# weave_npc_extra_args: ~

View File

@@ -1,2 +1,2 @@
---
requires_ansible: '>=2.16.4'
requires_ansible: ">=2.17.3"

View File

@@ -1,4 +1,4 @@
# Use imutable image tags rather than mutable tags (like ubuntu:22.04)
# Use immutable image tags rather than mutable tags (like ubuntu:22.04)
FROM ubuntu:jammy-20230308
# Some tools like yamllint need this
# Pip needs this as well at the moment to install ansible
@@ -47,8 +47,8 @@ RUN update-alternatives --install /usr/bin/python python /usr/bin/python3 1 \
&& pip install --no-compile --no-cache-dir pip -U \
&& pip install --no-compile --no-cache-dir -r tests/requirements.txt \
&& pip install --no-compile --no-cache-dir -r requirements.txt \
&& curl -L https://dl.k8s.io/release/v1.32.3/bin/linux/$(dpkg --print-architecture)/kubectl -o /usr/local/bin/kubectl \
&& echo $(curl -L https://dl.k8s.io/release/v1.32.3/bin/linux/$(dpkg --print-architecture)/kubectl.sha256) /usr/local/bin/kubectl | sha256sum --check \
&& curl -L https://dl.k8s.io/release/v1.33.7/bin/linux/$(dpkg --print-architecture)/kubectl -o /usr/local/bin/kubectl \
&& echo $(curl -L https://dl.k8s.io/release/v1.33.7/bin/linux/$(dpkg --print-architecture)/kubectl.sha256) /usr/local/bin/kubectl | sha256sum --check \
&& chmod a+x /usr/local/bin/kubectl \
# Install Vagrant
&& curl -LO https://releases.hashicorp.com/vagrant/${VAGRANT_VERSION}/vagrant_${VAGRANT_VERSION}-1_$(dpkg --print-architecture).deb \

View File

@@ -5,8 +5,8 @@
become: false
run_once: true
vars:
minimal_ansible_version: 2.16.4
maximal_ansible_version: 2.17.0
minimal_ansible_version: 2.17.3
maximal_ansible_version: 2.18.0
tags: always
tasks:
- name: "Check {{ minimal_ansible_version }} <= Ansible version < {{ maximal_ansible_version }}"

View File

@@ -6,34 +6,18 @@
# - to ensure we keep compatibility with old style group names
# - to reduce inventory boilerplate (defining parent groups / empty groups)
- name: Define groups for legacy less structured inventories
- name: Inventory setup and validation
hosts: all
gather_facts: false
tags: always
tasks:
- name: Match needed groups by their old names or definition
vars:
group_mappings:
kube_control_plane:
- kube-master
kube_node:
- kube-node
calico_rr:
- calico-rr
no_floating:
- no-floating
k8s_cluster:
- kube_node
- kube_control_plane
- calico_rr
group_by:
key: "{{ (group_names | intersect(item.value) | length > 0) | ternary(item.key, '_all') }}"
loop: "{{ group_mappings | dict2items }}"
roles:
- dynamic_groups
- validate_inventory
- name: Install bastion ssh config
hosts: bastion[0]
gather_facts: false
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: kubespray_defaults }
- { role: bastion-ssh-config, tags: ["localhost", "bastion"] }
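
The inline group_by mapping in the first play above is replaced by the dynamic_groups and validate_inventory roles, but the intent stays the same: inventories still using the old hyphenated group names keep working. An illustrative legacy-style YAML inventory that such a mapping translates (host names hypothetical):

all:
  children:
    kube-master:   # legacy name, mapped to kube_control_plane
      hosts:
        node1:
    kube-node:     # legacy name, mapped to kube_node
      hosts:
        node1:
        node2:
    calico-rr:     # legacy name, mapped to calico_rr
      hosts: {}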

Some files were not shown because too many files have changed in this diff.