Compare commits

...

353 Commits

Author SHA1 Message Date
Kay Yan
c33e4d7bb7 fix-resolv.conf-nameserver-inline-comments (#10415) 2023-09-07 05:34:59 -07:00
Mohamed Omar Zaian
24b82917d1 [calico] add v3.25.2 and make it default (#10414) 2023-09-06 19:50:56 -07:00
Florian Ruynat
9696936b59 Fixup recover control plane playbook + add debian12/cilium test (#10411)
* Add debian12 cilium testing

* Fixup recover control plane playbook
2023-09-05 10:42:52 -07:00
Mohamed Omar Zaian
aeca9304f4 Update etcd version on README (#10410) 2023-09-04 03:11:49 -07:00
NierYYDS
8fef156e8f fix: specify owner to kube_owner in task of copy cni plugins (#10407)
If the owner is not set to kube_owner in the unarchive module, the owner of /opt/cni/bin is changed to root, which is inconsistent with the previous task.
2023-09-04 02:29:49 -07:00
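A minimal sketch of the kind of task this refers to; the unarchive parameters are standard Ansible options, while the archive path and variable defaults are assumptions for illustration:

```yaml
# Illustrative sketch only: keep /opt/cni/bin owned by kube_owner when extracting.
- name: Copy CNI plugins into place
  ansible.builtin.unarchive:
    src: "{{ local_release_dir | default('/tmp/releases') }}/cni-plugins.tgz"
    dest: /opt/cni/bin
    owner: "{{ kube_owner }}"   # without this, the extracted files end up owned by root
    mode: "0755"
    remote_src: true
```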
Kay Yan
8497528240 update-load-balancers-versions (#10409) 2023-09-03 23:57:49 -07:00
蔣 航
ebd71f6ad7 Fix Typo kubelet_topology_manager_policy (#10384)
Signed-off-by: hang.jiang <hang.jiang@daocloud.io>
2023-09-03 23:39:48 -07:00
Nicolas Goudry
c677438189 docs: add command to restart nginx-proxy container when adding node (#10406) 2023-09-01 09:24:32 -07:00
Mohamed Omar Zaian
d646053c0e [feat] Update metrics server to v0.6.4 (#10400) 2023-08-30 00:44:47 -07:00
Uri Zafrir
c9a7ae1cae Update README.md (#10398) 2023-08-29 02:33:22 -07:00
Mohamed Omar Zaian
e84c1004df [containerd] add hashes for 1.7.4-5 (#10397) 2023-08-28 19:29:20 -07:00
Kay Yan
b19b727fe7 change maximal_ansible_version to 2.15 (#10395) 2023-08-28 04:35:45 -07:00
Louis Tu
0932318b85 fix not-found service error (#10391)
Signed-off-by: tu1h <lihai.tu@daocloud.io>
2023-08-24 19:05:17 -07:00
Daniel Strufe
e573a2f6d4 Add huawei cloud controller (#10198)
* Add huaweicloud as external cloud controller

* Add huaweicloud example config

* Rename AK,SK to ACCESS_KEY and SECRET_KEY

* Add reference to huaweicloud

* Fix variable naming

* Fix env var name

* Update example

* Fix variable naming

* Fix cloud_config path

* Add namespace for leader election

* Revert reviewers

* Delete OWNERS

Delete owners who are not responsible here.

* Fix build validation
2023-08-24 18:55:17 -07:00
Mohamed Omar Zaian
52c1826423 [kubernetes] Make 1.27.5 default (#10392)
* Add hashes for 1.27.5 1.26.8, 1.25.13
* Address CVE-2023-3955 , CVE-2023-3676
* Make kubernetes v1.27.5 default
2023-08-24 18:51:17 -07:00
Samuel Liu
e1881fae02 Install etcdutl file by default (#10385) 2023-08-23 07:04:22 -07:00
Victor Morales
5ed85094c2 Update checksum values (#10369)
The following binaries have been updated:

* crio
* krew
* runc
* crun
* gvisor
* nerdctl
* skopeo
* yq

Signed-off-by: Victor Morales <chipahuac@hotmail.com>
2023-08-18 09:46:29 -07:00
tenni
bf29ea55cf fix: flatcar bootstrap (#10363) 2023-08-18 08:14:29 -07:00
Louis Tu
cafe4f1352 Add kubelet topology manager policy on the node (#10370)
Signed-off-by: tu1h <lihai.tu@daocloud.io>
2023-08-18 01:26:28 -07:00
cortex3
a9ee1c4167 fix argocd install not working using the kubespray docker image (#10371) 2023-08-17 18:30:28 -07:00
Florian Ruynat
a8c1bccdd5 Move runroot from crio.conf to storage.conf (#10372) 2023-08-17 10:17:22 -07:00
Mohamed Omar Zaian
71cf553aa8 [containerd] add hashes for 1.7.3 , 1.6.22 , 1.6.23 (#10368) 2023-08-17 05:05:24 -07:00
Mohamed Omar Zaian
a894a5e29b [argocd] update argocd to v2.8.0 (#10364) 2023-08-16 21:38:20 -07:00
Mohamed Omar Zaian
9bc7492ff2 [kubernetes] Make 1.27.4 default (#10359) 2023-08-16 21:12:19 -07:00
yun
77bda0df1c Fix containerd config_path mirrors and remove nerdctl insecure_registry (#10196)
* Fix containerd_registries in config_path for mirrors and remove nerdctl global insecure_registry setting

* Make containerd hosts.toml mode 0640

* Add containerd_registries_mirrors and keep containerd_registries to pass packet_debian11-calico-upgrade
2023-08-16 05:18:27 -07:00
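A hedged sketch of what the mirror-style registry configuration can look like in group_vars; the exact schema is assumed here rather than quoted from the PR:

```yaml
# Assumed shape for illustration: containerd generates one hosts.toml per registry
# prefix under config_path, replacing nerdctl's global insecure_registry switch.
containerd_registries_mirrors:
  - prefix: docker.io
    mirrors:
      - host: https://registry-1.docker.io
        capabilities: ["pull", "resolve"]
        skip_verify: false
```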
cortex3
4c37399c75 fix hcloud-cloud-controller-manager not working in certain setups (#10297) 2023-08-16 05:14:27 -07:00
Mohamed Omar Zaian
cd69283184 [helm] upgrade to 3.12.3 (#10365) 2023-08-16 05:10:29 -07:00
R. P. Taylor
cf3b3ca6fd clean up /etc/hosts file if populate_inventory_to_hosts_file is false (#10144)
* de-populate hosts file if populate_inventory_to_hosts_file is false

keep newline

* fix when condition
2023-08-15 20:22:28 -07:00
Luke Simmons
1955943d4a Removes Ansible reinstall from pipeline (#10032) 2023-08-14 05:11:21 -07:00
charlychiu
3b68d63643 fix: do not mount tls when disabled (#10357) 2023-08-11 09:01:27 -07:00
Arthur Outhenin-Chalandre
d21bfb84ad project: resolve ansible-lint key-order rule (#10314)
Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>
2023-08-10 00:57:27 -07:00
Nicolas Goudry
2a7c9d27b2 fix(multus): loop_control template error when item is None (#10347) 2023-08-09 20:51:26 -07:00
ERIK
9c610ee11d not requiring 'v' in youki version (#10346)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2023-08-08 19:33:51 -07:00
Francisco Orselli
7295d13d60 [EOS-11830] Use ETCD port 2381 for metrics (#10332) 2023-08-08 11:06:16 -07:00
ERIK
2fbbb70baa Fix youki binary download url (#10337)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2023-08-08 06:12:15 -07:00
Nico
b5ce69cf3c Set owner/group to root/root when unarchiving kata-containers (#10338)
Set owner/group to root/root when unarchiving the kata-containers binary, to prevent kata-containers binaries/directories (and especially /) from being chowned to 1001:123, the file owner specified in the kata-containers archive.
2023-08-08 05:06:15 -07:00
Arthur Outhenin-Chalandre
1c5f657f97 tests/packet-ci: sanitize branch name for kubernetes labels (#10315)
'/' doesn't work in a kubernetes label, so we replace it.

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>
2023-08-08 01:54:15 -07:00
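For context, a '/' (as in a branch name like fix/foo) is not a valid character in a Kubernetes label value; a hypothetical illustration of the substitution, with made-up variable names:

```yaml
# Hypothetical variable names; regex_replace is the standard Ansible/Jinja filter.
- name: Compute a label-safe branch name
  ansible.builtin.set_fact:
    test_name: "{{ ci_commit_ref_name | regex_replace('/', '-') }}"
```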
Arthur Outhenin-Chalandre
9613ed8782 Use supported version of fedora in CI (#10108)
* tests: replace fedora35 with fedora37

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* tests: replace fedora36 with fedora38

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* docs: update fedora version in docs

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* molecule: upgrade fedora version

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* tests: upgrade fedora images for vagrant and kubevirt

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* vagrant: workaround to fix private network ip address in fedora

Fedora stopped supporting the sysconfig network scripts, so we added the workaround from
https://github.com/hashicorp/vagrant/issues/12762#issuecomment-1535957837
to fix it.

* networkmanager: do not configure dns if using systemd-resolved

We should not configure DNS in NetworkManager when it points to systemd-resolved.
systemd-resolved uses NetworkManager to infer the upstream DNS server, so setting
NetworkManager's DNS to 127.0.0.53 would prevent systemd-resolved from getting the
correct network DNS server.

Thus, in this case we simply don't set this setting (sketched after this commit).

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* image-builder: update centos7 image

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* gitlab-ci: mark fedora packet jobs as allow failure

Fedora networking is still broken on Packet, let's mark it as allow
failure for now.

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

---------

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>
2023-08-08 00:50:12 -07:00
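A hedged sketch of the systemd-resolved conditional described in the networkmanager bullet above; the stub-file check and the drop-in path are assumptions for illustration:

```yaml
# Illustration only: leave NetworkManager's DNS alone when systemd-resolved is in use,
# so resolved can still learn the real upstream DNS servers from NetworkManager.
- name: Check whether systemd-resolved manages resolv.conf
  ansible.builtin.stat:
    path: /run/systemd/resolve/stub-resolv.conf
  register: resolved_stub

- name: Point NetworkManager at the local DNS only when resolved is not in use
  ansible.builtin.copy:
    dest: /etc/NetworkManager/conf.d/dns.conf
    content: |
      [global-dns-domain-*]
      servers=127.0.0.53
    mode: "0644"
  when: not resolved_stub.stat.exists
```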
bo.jiang
b142995808 Add ErikJiang as reviewer
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2023-08-08 09:46:11 +02:00
Arthur Outhenin-Chalandre
36e5d742dc Resolve ansible-lint name errors (#10253)
* project: fix ansible-lint name

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* project: ignore jinja template error in names

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* project: capitalize ansible name

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* project: update notify after name capitalization

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

---------

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>
2023-07-26 07:36:22 -07:00
Kay Yan
b9e3861385 add-cpuManagerPolicy (#10309) 2023-07-25 13:12:20 -07:00
Louis Tu
f2bb3aba1e Update README (#10308)
update minimal ansible version to v2.14+

update supported list of docker versions

Signed-off-by: tu1h <lihai.tu@daocloud.io>
2023-07-24 21:08:04 -07:00
Mikhail Vintcukevich
4243003c94 fix: define variable for reset confirmation (#10303) 2023-07-23 23:58:14 -07:00
satandyh
050bd0527f enhance security with CIS Kubernetes V1.23 (#10304)
Benchmark item number 4.1.9
2023-07-23 19:24:11 -07:00
Mohamed Omar Zaian
fe32de94b9 [kubernetes] Add hashes for kubernetes 1.27.4, 1.26.7, 1.25.12 (#10300) 2023-07-23 19:20:10 -07:00
Louis Tu
d2383d27a9 Bump versions (#10295)
The following applications have been upgraded:

* helm
* skopeo
* yq

Signed-off-by: tu1h <lihai.tu@daocloud.io>
2023-07-19 00:26:03 -07:00
somewho
788190beca reset_confirmation in reset.yml (#10288)
* Update reset.yml

reset confirmation user input fix

* Update reset.yml

added default for non-interactive run in ci/cd

* fix reset_confirmation in reset.yml

* skip reset_confirmation prompt when reset_confirmation is defined via extra-vars option (for tests)
* check both string type and object type with user_input for reset_confirmation var

* reset_confirmation_prompt in conjunction with reset_confirmation

improvement inspired by:
https://github.com/kubernetes-sigs/kubespray/pull/10288#issuecomment-1637056880
2023-07-18 05:45:10 -07:00
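A hedged sketch of the confirmation pattern those bullets describe; the play layout and the expression are assumptions, not the PR's code:

```yaml
# Illustration only: accept either an extra-vars value or the interactive prompt answer.
- name: Confirm cluster reset
  hosts: localhost
  gather_facts: false
  vars_prompt:
    - name: reset_confirmation_prompt
      prompt: "Are you sure you want to reset the cluster? Type 'yes' to continue"
      private: false
  tasks:
    - name: Abort unless the reset was confirmed
      ansible.builtin.assert:
        that:
          - reset_confirmation | default(reset_confirmation_prompt, true) == 'yes'
        fail_msg: "Reset aborted: confirmation not given"
```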
yangsenzk
13aa32278a bugfix: fix grep command without -w option causing prefix matches while adding an etcd member (#10291) 2023-07-13 21:43:29 -07:00
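For context, -w matters because a plain grep for a member name also matches longer names sharing the same prefix; a hypothetical check illustrating the difference:

```yaml
# Hypothetical task: without -w, searching for "etcd1" would also match "etcd10".
- name: Check if this etcd member is already registered
  ansible.builtin.shell: "etcdctl member list | grep -q -w {{ etcd_member_name }}"
  register: etcd_member_exists
  failed_when: false
  changed_when: false
```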
Mohamed Omar Zaian
38ce02c610 [ingress-nginx] upgrade to 1.8.1 (#10281) 2023-07-10 21:05:12 -07:00
Arthur Outhenin-Chalandre
9312ae7c6e project: fix galaxy ansible-lint rule (#10277)
Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>
2023-07-07 00:01:04 -07:00
yun
1d86919883 Clean up calicoctl_alternate_download_url (#10271) 2023-07-05 08:16:57 -07:00
Victor Morales
78c1775661 Upgrade versions (#9798)
The following applications have been upgraded:

* Cilium
* Helm
* crun
* Katacontainers
* youki
* gvisor
* skopeo
* yq

Signed-off-by: Victor Morales <chipahuac@hotmail.com>
2023-07-05 03:32:58 -07:00
Arthur Outhenin-Chalandre
5d00b851ce project: fix var-spacing ansible rule (#10266)
* project: fix var-spacing ansible rule

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* project: fix spacing on the beginning/end of jinja template

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* project: fix spacing of default filter

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* project: fix spacing between filter arguments

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* project: fix double space at beginning/end of jinja

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* project: fix remaining jinja[spacing] ansible-lint warning

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

---------

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>
2023-07-04 20:36:54 -07:00
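A tiny before/after illustration of the jinja[spacing] style these commits enforce; the variable names are made up:

```yaml
# before (flagged by ansible-lint jinja[spacing]):
#   image: "{{registry}}/{{ image_name|default('pause')}}"
# after:
image: "{{ registry }}/{{ image_name | default('pause') }}"
```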
Kundan Kumar
f8b93fa88a link for aws_alb_ingress_controller (#10264) 2023-07-03 03:44:51 -07:00
jeremy-thuon
0405af1107 [cilium] add custom vars for clusterrole cilium operator (#10267) 2023-07-03 02:20:51 -07:00
Wendy
872e173887 update cilium version to 1.13.4 (#10269)
Signed-off-by: yulng <wei.yang@daocloud.io>
2023-07-03 00:02:51 -07:00
yun
b42757d330 Fix RHEL subscription activation key by removing auto_attach and syspurpose (#10258) 2023-06-30 03:21:45 -07:00
Florian Berchtold
a4d8d15a0e Add github container registry (#10265) 2023-06-30 03:17:45 -07:00
Arthur Outhenin-Chalandre
f8f197e26b Fix outdated tag and experimental ansible-lint rules (#10254)
* project: fix outdated tag and experimental

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* project: remove no longer useful noqa 301

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* project: replace unnamed-task by name[missing]

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* project: fix daemon-reload -> daemon_reload

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

---------

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>
2023-06-30 02:51:57 -07:00
Cyclinder
4f85b75087 using configmap to configure calico cni config (#10177)
Signed-off-by: cyclinder qifeng.guo@daocloud.io
2023-06-30 02:51:45 -07:00
Arthur Outhenin-Chalandre
8895e38060 Update doc after ansible-core upgrade to 2.14 (#10261)
* docs/ansible: update ansible venv install method and ansible version

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* docs/ansible: add a disclaimer about using version below python 3.9

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

---------

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>
2023-06-28 06:28:32 -07:00
yun
9a896957d9 Dockerfile after ansible upgrade (#10259) 2023-06-28 03:54:32 -07:00
Arthur Outhenin-Chalandre
37e004164b metallb: increase wait timeout from 30s to 2m (#10260)
Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>
2023-06-27 20:54:32 -07:00
Mathieu Parent
77069354cf Add system-upgrade to upgrade-cluster playbook (#10184) 2023-06-26 18:24:30 -07:00
ERIK
2aafab6c19 fix etcdctl copy operation in crio (#10242)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2023-06-26 18:08:30 -07:00
nltimv
35aaf97216 Stop firewalld for rockylinux8 on Vagrant (#10252) 2023-06-26 18:02:30 -07:00
Arthur Outhenin-Chalandre
25cb90bc2d Upgrade ansible (#10190)
* project: update all dependencies including ansible

Upgrade to ansible 7.x and ansible-core 2.14.x. There seem to be issues
with ansible 8 / ansible-core 2.15, so we remain on those versions for now.
It's quite a big bump already anyway.

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* tests: install aws galaxy collection

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* ansible-lint: disable various rules after ansible upgrade

Temporarily disable a bunch of linting action following ansible upgrade.
Those should be taken care of separately.

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* project: resolve deprecated-module ansible-lint error

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* project: resolve no-free-form ansible-lint error

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* project: resolve schema[meta] ansible-lint error

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* project: resolve schema[playbook] ansible-lint error

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* project: resolve schema[tasks] ansible-lint error

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* project: resolve risky-file-permissions ansible-lint error

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* project: resolve risky-shell-pipe ansible-lint error

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* project: remove deprecated warn args

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* project: use fqcn for non builtin tasks

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* project: resolve syntax-check[missing-file] for contrib playbook

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* project: use arithmetic inside jinja to fix ansible 6 upgrade

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

---------

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>
2023-06-26 03:15:45 -07:00
Arthur Outhenin-Chalandre
3311e0a296 tests: cleanup stale packet namespace automatically (#10245)
* tests: cleanup stale packet namespace automatically

Cancelled jobs on Gitlab can produce stale VMs, as the delete playbook
will never be executed. This commit allows removing old VMs by getting
all the namespaces created from the same branch with an older pipeline
id.

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* tests: cleanup stale packet namespace after 2 hours

This ensures that we don't have any packet namespace remaining for more
than 2 hours. All the jobs usually complete within 30 min to 1 hour, so 2
hours is enough to detect a stale namespace.

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* tests: ignore vm cleanup failure

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* tests: use pipeline_id var instead of fetching namespace for cleanup packet vm

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

---------

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>
2023-06-26 00:57:08 -07:00
Tiago Epifânio
eb31653d66 Disable fapolicyd service (#10081) 2023-06-23 20:49:06 -07:00
Vyacheslav Vershinin
180df831ba feat: add option to use custom CA for https_proxy (#10215) 2023-06-23 09:59:24 -07:00
Pat Riehecky
2fa64f9fd6 Add flag to prevent running helm update (#10169)
Signed-off-by: Pat Riehecky <riehecky@fnal.gov>
2023-06-23 06:03:23 -07:00
Vaibhav Goel
a1521dc16e Updates the broken links in ingress-controller and kubernetes-apps under kubespray docs (#10239) 2023-06-22 02:29:39 -07:00
Victor Morales
bf31a3a872 Split defaults main file (#10121) 2023-06-22 02:19:40 -07:00
peterw
4a8fd94a5f add growpart azure enabled (#10241) 2023-06-21 06:23:40 -07:00
Louis Tu
e214bd0e1b clean up outdated os files (#10236)
Signed-off-by: tu1h <lihai.tu@daocloud.io>
2023-06-21 02:45:39 -07:00
Arthur Outhenin-Chalandre
4ad89ef8f1 local_path_provisioner: fix invalid podhelper yaml (#10237)
A new line was not inserted between image and imagePullPolicy for some
reason with the jinja template. Simplifying this altogether should fix it.

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>
2023-06-20 20:10:21 -07:00
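A hedged sketch of the intended rendering, i.e. image and imagePullPolicy each on their own line in the helper pod spec; the image value is illustrative:

```yaml
helperPod.yaml: |
  apiVersion: v1
  kind: Pod
  metadata:
    name: helper-pod
  spec:
    containers:
      - name: helper-pod
        image: busybox:latest          # rendered on its own line ...
        imagePullPolicy: IfNotPresent  # ... followed by the pull policy
```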
Emin AKTAS
7a66be8254 bump flannel version to v0.22.0 and flannel-cni-plugin version to v1.1.2 (#10205)
This also changes flannel repository from flannelcni to flannel

Signed-off-by: Emin Aktaş <eminaktas34@gmail.com>
2023-06-19 16:52:24 -07:00
Samuel Liu
db696785d5 update local path provisioner version and remove psp (#10054)
* update local_path_provisioner_version

* remove psp and update cm
2023-06-19 11:44:21 -07:00
Mohamed Omar Zaian
dfec133273 [calico] add hashes for v3.26.1 (#10235) 2023-06-19 10:40:23 -07:00
Xieql
41605b4135 Fix broken calico link in README (#10232)
Signed-off-by: Xieql <xieqianglong@huawei.com>
2023-06-19 09:58:21 -07:00
Arthur Outhenin-Chalandre
475abcc3a8 project: drop Kubernetes 1.24 support (#10234)
* project: drop Kubernetes 1.24 support

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* readme: bump crio version to 1.27 in the readme

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

---------

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>
2023-06-19 08:42:21 -07:00
Mohamed Omar Zaian
3a7d84e014 [feature] Correct CoreDNS versions for kubernetes releases (#10233) 2023-06-19 07:34:22 -07:00
Mohamed Omar Zaian
ad3f84df98 [argocd] update argocd to v2.7.4 (#10226) 2023-06-19 07:20:22 -07:00
Emin AKTAS
79e742c03b bump coredns version to 1.10.1 (#10199)
Signed-off-by: Emin Aktaş <eminaktas34@gmail.com>
2023-06-19 04:06:21 -07:00
Victor Morales
d79ada931d Update download hash bash script (#10120) 2023-06-19 02:52:22 -07:00
Takuya Murakami
b2f6abe4ab fix parsing of RHSM proxy configuration (#10060) (#10228)
Remove URL scheme part from http_proxy for server.proxy_hostname
2023-06-19 02:24:21 -07:00
Louis Tu
c5dac1cdf6 Add Debian 12(bookworm) support and CI (#10221)
Signed-off-by: tu1h <lihai.tu@daocloud.io>
2023-06-19 02:20:21 -07:00
Yoshitaka Fujii
89a0f515c7 Added terraform support for NIFCLOUD (#10227)
* Add NIFCLOUD

* Add tf-validate-nifcloud in gitlab-ci
2023-06-19 02:02:22 -07:00
Samuel Liu
d296adcd65 allow change argocd url (#10176) 2023-06-18 19:18:20 -07:00
Mohamed Omar Zaian
141064c443 [helm] upgrade to 3.12.1 (#10225) 2023-06-18 17:04:20 -07:00
ERIK
54859cb814 Fix etcdctl copy operation (#10230)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2023-06-16 04:18:19 -07:00
Mohamed Omar Zaian
0f0991b145 [ingress-nginx] upgrade to 1.8.0 (#10223) 2023-06-15 19:48:25 -07:00
Mohamed Omar Zaian
658d62be16 [kubernetes] upgrade versions to address CVE-2023-2728 (#10220)
* [kubernetes] Add hashes for 1.27.3, 1.26.6, 1.25.11
* [kubernetes] make 1.26.6 default
2023-06-15 19:48:18 -07:00
Mohamed Omar Zaian
0139bfdb71 [calico] add hashes for v3.26.0 (#10224) 2023-06-15 19:44:18 -07:00
Kay Yan
efeac70e40 add-ci-for-debian12 (#10222) 2023-06-15 08:34:18 -07:00
Furkan Türkal
b4db077e6a containerd: bump to 1.7.2 (#10219)
Signed-off-by: Furkan <furkan.turkal@trendyol.com>
2023-06-15 03:22:18 -07:00
R. P. Taylor
280e4e3b57 exclude terraform.tfstate backups in .gitignore (#10216)
Newer versions of Terraform use timestamps in the backup name, e.g. `terraform.tfstate.1614728479.backup`
2023-06-14 19:20:17 -07:00
Ugur Can Ozturk
a962fa2357 [podSecurityConfiguration]: fix apiVersion and change default policy versions (#10210)
Signed-off-by: Ugur <ugurozturk918@gmail.com>
2023-06-12 17:55:57 -07:00
palme
775851b00c [flatcar] add python dependency check for helm-apps (#10192)
* add pyyaml install via task instead of package

* Change condition for better consistency in the codebase
2023-06-12 17:51:58 -07:00
Arthur Outhenin-Chalandre
f8fadf53cd helm: fix pyyaml package on RH distros (#10204)
Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>
2023-06-12 17:39:57 -07:00
ERIK
ce13699dfa Use a uniform way to get the local path of the binaries (#10211)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2023-06-12 00:39:48 -07:00
Ashish Singh Dev
fc5937e948 fix gce-pd-csi driver (#10208)
* fix gce-pd-csi driver

* Fixes: 1. read the replicas value from defaults.yml, and 2. correct the gcp-pd-csi driver version in README.md
2023-06-11 20:45:47 -07:00
Kay Yan
729e2c565b cleanup-for-2.22.1 (#10201) 2023-06-08 07:36:15 -07:00
James
26ed50f04a Enable interruptible jobs' pipelines (#10167) 2023-06-08 03:12:13 -07:00
Emin AKTAS
2b80d053f3 bump nodelocaldns version to 1.22.20 (#10200)
Signed-off-by: Emin Aktaş <eminaktas34@gmail.com>
2023-06-08 03:08:14 -07:00
Pat Riehecky
f5ee8b71ff Permit custom names for API server lb/proxy containers. (#10166)
Signed-off-by: Pat Riehecky <riehecky@fnal.gov>
2023-06-08 02:54:13 -07:00
James
4c76feb574 Kubelet csr approver fixes (#10165)
* Fix upgrade-path for kubelet-csr-approver

Fixes an error when you enable kubelet-csr-approver when upgrading.
It hangs waiting for the certificate to be approved since the
kubelet-csr-approver is not installed yet.

* Add missing package when using helm role
2023-06-06 02:27:00 -07:00
Mateusz Mojsiejuk
18d84db41c Don't search filesystem mounts in docker build step (#10131)
Limit find cmd to /usr/ where __pycache__ files are located
2023-06-06 01:13:01 -07:00
Kenichi Omichi
08a571b4a1 Remove flannel_cni_download_url (#10188)
Since the commit 937e64d296 the variable
has not been used at all.
This removes it from offline.yml which was the remaining part.
2023-06-05 05:57:25 -07:00
yun
5ebd305d17 remove cri-o using crio_bin_files (#10182) 2023-06-04 20:02:42 -07:00
Arthur Outhenin-Chalandre
edc73bc3c8 project: upgrade test dependencies and drop ansible-core 2.11 (#10034)
Molecule 5.0 requires ansible-core 2.12.10, so this commit updates
ansible-core from 2.12.5 to 2.12.10. We also drop support for two
ansible-core versions and now use the "oldest" still-supported
ansible-core version, as 2.11 is both EOL and not supported by molecule.

tests/molecule: remove linting in molecule to support molecule 5

tests/molecule: remove role name check for molecule 5 support

Kubespray doesn't use ansible galaxy style naming, so we have to disable
that check.

contrib/inventory_builder: fix tox.ini for tox4

tests/molecule: fix get_playbook in testinfra tests

tests: upgrade most tests requirements

Exclude ansible-lint for now, I will do that in a separate PR.

tests/molecule: force kvm driver option

If we don't do this it falls back to qemu emulation on our CI for some
reason.

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>
2023-06-02 20:40:40 -07:00
Mohamed Omar Zaian
b7fa2d7b87 Fix metrics-server for k8s 1.26 (#10183) 2023-06-02 18:16:40 -07:00
Samuel Liu
7771ac6074 add krew_no_upgrade_check (#10175) 2023-06-02 18:12:40 -07:00
Michael Stötzer
f25b6fce1c Add node_taints to aws_inventory script (#10168) (#10170) 2023-06-01 22:12:52 -07:00
Samuel Liu
d7b79395c7 Add labels to kube-vip static pods (#10139) 2023-06-01 16:45:46 -07:00
Richard Fairthorne
ce18b0f22d fix missing newline in template (#10174) 2023-05-31 23:27:45 -07:00
Aleksandr Karabanov
2d8f60000c Solves #2933: Allow http_proxy, https_proxy and no_proxy environment variables in cert-manager playbook (#10162) 2023-05-31 20:23:45 -07:00
yjqg6666
0b102287d1 [#10148] The download.timeout can be changed by variable download.timeout (#10149)
Reference:
  https://docs.ansible.com/ansible/latest/collections/ansible/builtin/get_url_module.html#parameter-timeout
2023-05-31 18:15:45 -07:00
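The referenced parameter is ansible.builtin.get_url's timeout (seconds, default 10); a minimal example of raising it for one download, with a placeholder URL and value:

```yaml
- name: Download a release artifact with a longer timeout
  ansible.builtin.get_url:
    url: https://example.com/some-binary
    dest: /tmp/some-binary
    mode: "0755"
    timeout: 60   # seconds; get_url defaults to 10
```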
Pat Riehecky
d325fd6af7 Don't create calico CNI dir when not using calico (#10156)
Signed-off-by: Pat Riehecky <riehecky@fnal.gov>
2023-05-31 08:35:48 -07:00
Pat Riehecky
e949b8a1e8 Update cilium to latest (1.13.3) (#10158)
Signed-off-by: Pat Riehecky <riehecky@fnal.gov>
2023-05-31 03:23:46 -07:00
Pat Riehecky
ab6e284180 Locate mount names isn't a change to the system (#10161)
Signed-off-by: Pat Riehecky <riehecky@fnal.gov>
2023-05-31 01:33:46 -07:00
Pat Riehecky
7421b6e180 Running ping doesn't change state (#10160)
Signed-off-by: Pat Riehecky <riehecky@fnal.gov>
2023-05-31 01:29:46 -07:00
Vaibhav Goel
a2f03c559a Fixed the incorrect links in kubespray/docs (#10159) 2023-05-30 19:35:47 -07:00
Pat Riehecky
3ced391fab Print the found version when it is incorrect (#10109)
Signed-off-by: Pat Riehecky <riehecky@fnal.gov>
2023-05-30 11:43:49 -07:00
Jeroen Rijken
ea7dcd46d7 Update MetalLB deployment, wait for resource. (#9995)
* Update MetalLB deployment, wait for resource.

Signed-off-by: Jeroen Rijken <jeroen.rijken@xs4all.nl>

* yml to yaml, add basic test for metallb

Signed-off-by: Jeroen Rijken <jeroen.rijken@xs4all.nl>

---------

Signed-off-by: Jeroen Rijken <jeroen.rijken@xs4all.nl>
2023-05-30 11:37:49 -07:00
Mohammad KhoshZaban
94e33bdbbf fix manage-offline-files script - wrong path (#9886) 2023-05-28 21:27:42 -07:00
Maxime Leroy
29f833e9a4 fix(ssl-ca): mount ssl ca directories (#9794)
Signed-off-by: Maxime Leroy <19607336+maxime1907@users.noreply.github.com>
2023-05-28 19:43:42 -07:00
qlijin
8c32be5feb Add insecure_registry config to crio.conf (#10142) 2023-05-28 19:03:41 -07:00
Victor Login
0ba2e655f4 Fix problem migration to k8s 1.27 (#10136)
* Fix `The task includes an option with an undefined variable` for 1.27

* delete old flag --container-runtime

Signed-off-by: Victor Login <batazor@evrone.com>

---------

Signed-off-by: Victor Login <batazor@evrone.com>
2023-05-28 17:09:42 -07:00
Daniel Strufe
78189186e5 Rebase changes from upstream (#10128) 2023-05-26 00:28:52 -07:00
Andrei Costescu
96e875cd50 Add systemd_resolved_disable_stub_listener (#9875) 2023-05-25 10:04:51 -07:00
Kay Yan
808524bed6 fix-ci-tf-elastx_cleanup (#10133) 2023-05-25 08:38:52 -07:00
ERIK
75e00420ec Add arch and version to the downloaded binary name (#10122)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2023-05-24 22:30:50 -07:00
Mohamed Omar Zaian
8be5604da4 [kubernetes] support 1.27.2 (#9976) 2023-05-24 20:00:50 -07:00
Arthur Outhenin-Chalandre
02624554ae Remove end of life ubuntu versions in CI (#10107)
* tests: replace ubuntu16 with ubuntu20

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* tests: replace ubuntu18 with ubuntu20

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* docs: update docs to remove support for ubuntu 16 and 18

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* molecule: upgrade ubuntu versions

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* vagrant: upgrade ubuntu versions

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* tests: cleanup ubuntu{16,18}

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* tests: increase ubuntu22 ram to allow molecule creation

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

---------

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>
2023-05-24 19:56:50 -07:00
Kay Yan
9d1e9a6a78 kube_ovn_cni_config_priority (#10125) 2023-05-24 18:34:51 -07:00
Kay Yan
861d5b763d fix-dockerfile (#10127) 2023-05-24 17:02:50 -07:00
Kay Yan
4013c48acb cleanup-for-2.22.0 (#10126) 2023-05-24 08:56:50 -07:00
Rob Tongue
f264426646 cert-manager controller args: (#10049)
- Adding in the ability to feed extra-args to cert-manager-controller.
2023-05-24 08:12:53 -07:00
Mathias Petermann
862fd2c5c4 feature(ingress_nginx) Add ingressclass for ingress_nginx (#10091)
Add option to configure class as the default class
Add option to disable watching for ingresses without class (sketched after this commit)

Remove redundant if that always evaluates to true

Fix default value missing for ingress_nginx_default
2023-05-24 04:12:50 -07:00
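A hedged sketch of what these options can look like in group_vars; ingress_nginx_default is named in the commit, the other keys are assumptions for illustration:

```yaml
ingress_nginx_class: nginx          # IngressClass created for the controller (assumed name)
ingress_nginx_default: false        # set true to mark it as the cluster default class
ingress_nginx_without_class: true   # whether to also watch Ingresses with no class set (assumed name)
```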
darkobas2
4014a1cccb fix multus include (#10105)
``
"msg": "Failed to template loop_control.label: 'ansible.utils.unsafe_proxy.AnsibleUnsafeText object' has no attribute 'item'. 'ansible.utils.unsafe_proxy.AnsibleUnsafeText object' has no attribute 'item'", "skip_reason": "Conditional result was False"}
``
fixes case when multus should NOT be included.
2023-05-23 01:12:27 -07:00
Arthur Outhenin-Chalandre
c55844b80e playbooks: bootstrap in facts playbook (#10069)
Calling bootstrap in facts.yaml so that we can always collect facts even on
new nodes. This is useful when you want to add nodes to an inventory
beforehand and then collect facts and scale the cluster with the scale
playbook and --limits. With dynamic inventory sometimes it might be more
difficult to add the nodes after running the facts playbook in this
specific situation.

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>
2023-05-23 00:18:28 -07:00
Ricky Sadowski
a4fa9aed75 fix: use dl.k8s.io, not kubernetes-release bucket (#10118)
Signed-off-by: Ricky Sadowski <richard.j.sadowski@gmail.com>
2023-05-22 17:50:21 -07:00
Mohamed Omar Zaian
659001c9d7 [nerdctl] upgrade to version 1.4.0 (#10119) 2023-05-22 17:44:20 -07:00
Mohamed Omar Zaian
07647fb720 Fix broken CI tests link in README (#10114) 2023-05-22 16:58:20 -07:00
James
161bd55ab2 Remove deprecated crio_pids_limits (#10056)
As per https://github.com/cri-o/cri-o/pull/5831, option is now
deprecated.
2023-05-22 08:49:03 -07:00
Mohamed Omar Zaian
4b67c7d6a6 [calico] add hashes for v3.24.6 (#10113) 2023-05-22 07:50:35 -07:00
James
e26921e3e1 Fix search path for custom-cni (#10088) 2023-05-22 05:22:30 -07:00
Mohamed Omar Zaian
f80a5755c3 [feat] Update pause image version to v3.9 (#10112) 2023-05-22 03:42:31 -07:00
Vasubabu
feeea7e512 Enabled module_name in provider meta for Equinix (#10044) 2023-05-21 17:32:19 -07:00
Arthur Outhenin-Chalandre
09ea2ca688 project: fix arithmetic outside of jinja (#10106)
This feature no longer works on Ansible 6 / ansible-core 2.13. We do not
support these versions officially yet, but this will help with the future
upgrade and may help some people running them inadvertently.

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>
2023-05-21 17:28:21 -07:00
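A small before/after illustration of the change, with a made-up variable: on newer ansible-core, arithmetic left outside the jinja braces is no longer evaluated.

```yaml
# before (the "+ 1" is treated as literal text on ansible-core 2.13+):
#   etcd_member_index: "{{ groups['etcd'].index(inventory_hostname) }} + 1"
# after: keep the arithmetic inside the braces
etcd_member_index: "{{ groups['etcd'].index(inventory_hostname) + 1 }}"
```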
Mohamed Omar Zaian
b7a8d7a4d5 [helm] upgrade to 3.12.0 (#10085) 2023-05-19 06:16:30 -07:00
Mohamed Omar Zaian
9405eb821b [feature] Support enabling cpu limit in coredns deployment (#10103) 2023-05-19 03:38:29 -07:00
Mohamed Omar Zaian
708677caf1 [argocd] update argocd to v2.7.2 (#10086) 2023-05-19 02:18:29 -07:00
Mohamed Omar Zaian
d5cdae1f16 [kubernetes] Add hashes for 1.26.4-5, 1.25.9-10, 1.24.13-14 (#9983) 2023-05-18 20:06:28 -07:00
qlijin
b7a9217d77 Some updates for the deploy on fedora coreos: (#10030)
- Test with new version: 37.20230322.3.0. Both containerd and
  cri-o are tested
- bugfix: when we use crio and the var bin_dir is changed,
  there were errors about the new bin dir.
2023-05-18 15:46:33 -07:00
Kay Yan
82633c6f61 Remove the support of Debian 9 because Debian 9 is EOL (#10097)
* remove-debian9-support

* Add six module into openstack-cleanup/requirements.txt (#10099)

To fix tf-elastx_cleanup job which was failed with the following error:

   File "/usr/local/lib/python3.11/site-packages/keystoneauth1/identity/generic/password.py", line 16, in <module>
     from keystoneauth1.identity import v3
   File "/usr/local/lib/python3.11/site-packages/keystoneauth1/identity/v3/__init__.py", line 27, in <module>
     from keystoneauth1.identity.v3.oauth2_mtls_client_credential import *  # noqa
     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   File "/usr/local/lib/python3.11/site-packages/keystoneauth1/identity/v3/oauth2_mtls_client_credential.py", line 17, in <module>
     import six
 ModuleNotFoundError: No module named 'six'

---------

Co-authored-by: Kenichi Omichi <ken1ohmichi@gmail.com>
2023-05-18 15:42:33 -07:00
Kenichi Omichi
7afbdb3e1e Drop canal network_plugin (#10100)
According to the canal github[1], the repo has not been maintained for over 5 years.
In addition, the README says
```
  Originally, we thought we might more deeply integrate the two projects
  (possibly even going as far as a rebranding!). However, over time it
  became clear that that wasn't really necessary to fulfil our goal of
  making them work well together. Ultimately, we decided to focus on
  adding features to both projects rather than doing work just to
  combine them.
```
So it is difficult for Kubespray to support canal in this situation.

[1]: https://github.com/projectcalico/canal
2023-05-18 03:40:33 -07:00
Kenichi Omichi
c14d9c5c97 Add six module into openstack-cleanup/requirements.txt (#10099)
To fix tf-elastx_cleanup job which was failed with the following error:

   File "/usr/local/lib/python3.11/site-packages/keystoneauth1/identity/generic/password.py", line 16, in <module>
     from keystoneauth1.identity import v3
   File "/usr/local/lib/python3.11/site-packages/keystoneauth1/identity/v3/__init__.py", line 27, in <module>
     from keystoneauth1.identity.v3.oauth2_mtls_client_credential import *  # noqa
     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   File "/usr/local/lib/python3.11/site-packages/keystoneauth1/identity/v3/oauth2_mtls_client_credential.py", line 17, in <module>
     import six
 ModuleNotFoundError: No module named 'six'
2023-05-17 20:22:33 -07:00
Kenichi Omichi
48035e3a7e Drop CI jobs related to canal (#10092)
* Drop CI jobs related to canal

According to the canal github[1], the repo has not been maintained for over 5 years.
In addition, the README says

  Originally, we thought we might more deeply integrate the two projects
  (possibly even going as far as a rebranding!). However, over time it
  became clear that that wasn't really necessary to fulfil our goal of
  making them work well together. Ultimately, we decided to focus on
  adding features to both projects rather than doing work just to
  combine them.

So we don't need to run CI jobs related to canal in this situation.

[1]: https://github.com/projectcalico/canal

* Update ci.md
2023-05-17 04:42:33 -07:00
Cyclinder
a257e61f60 bump cni version to v1.3.0 (#10058)
Signed-off-by: cyclinder qifeng.guo@daocloud.io
2023-05-17 01:42:33 -07:00
Kulwant Singh
9948863d3a use dl.k8s.io not gs://kubernetes-release (#10066) 2023-05-16 21:02:33 -07:00
Mikhail Gorozhin
3a3addb91e Ignore errors in check mode performing "Disable swapOnZram for Fedora" (#10077) 2023-05-16 16:38:33 -07:00
Samuel Liu
72b8830f62 fix custom cni task name (#10087) 2023-05-16 05:03:36 -07:00
Kay Yan
e6ba73349e fix-ci-broken-by-docker-limit (#10083) 2023-05-16 01:15:36 -07:00
Louis Tu
55e581be3b Clear http scheme on containerd insecure-registry tls config (#10084)
Signed-off-by: tu1h <lihai.tu@daocloud.io>
2023-05-16 00:47:36 -07:00
蒋 航
9cd7d66332 Fix Calico Installation (#10068)
Signed-off-by: hang.jiang <hang.jiang@daocloud.io>
2023-05-15 21:21:36 -07:00
Mohamed Omar Zaian
6ea7abf443 [ingress-nginx] upgrade to 1.7.1 (#10052) 2023-05-15 14:23:35 -07:00
Arthur Outhenin-Chalandre
3254080a1c cri-o: fix crio restart on config change (#10057)
Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>
2023-05-14 19:27:28 -07:00
Maxime Leroy
4ffe138dfa feat(coredns): coredns_rewrite_block to perform internal message rewriting (#10045)
Signed-off-by: Maxime Leroy <19607336+maxime1907@users.noreply.github.com>
2023-05-12 14:32:46 -07:00
Pat Riehecky
86b81a855a fix: typo in comment (#10064)
Signed-off-by: Pat Riehecky <riehecky@fnal.gov>
2023-05-12 05:59:01 -07:00
Mohamed Omar Zaian
bde261bd06 [containerd] add hashes for version 1.7.1, 1.6.21 (#10061) 2023-05-12 02:42:47 -07:00
Manuelraa
2b75552d1c Replace swap vars with single kubelet_fail_swap_on (#10036) 2023-05-11 10:53:04 -07:00
Florian Ruynat
951face343 Migrate CI_BUILD_ID to CI_JOB_ID and CI_BUILD_REF to CI_COMMIT_SHA (#10063) 2023-05-11 04:21:17 -07:00
James
07d45e6b62 Kubelet csr approver (#9877)
* chore(helm-apps): fix README example

README shows a non-working example according to the specs for this role.

* Add support for kubelet-csr-approver

Co-Authored-By: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* Add tests for kubelet-csr-approver

Co-Authored-By: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* Add Documentation for Kubelet CSR Approver

Co-Authored-By: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

---------

Co-authored-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>
2023-05-10 17:49:09 -07:00
John Adams
9a72de54de Cleanup of external openstack cloud config (#9899)
* reorder options and remove use-octavia

* lowercase true/false
2023-05-10 03:41:02 -07:00
Navid Nabavi
4313c13656 [feature] Add coredns_additional_configs to handle any extra configurations for coredns deployment (#10023) (#10025) 2023-05-09 06:45:58 -07:00
Eugene Marchanka
c880b24a80 [MetalLB] Remove unused resources (#10004)
* Fix MetalLB deploy

This will fix MetalLB deploy

* Remove `metallb_ip_range` check

* Remove missing `metallb-config.yml`

* fix template name

* make deployment of layer3 conditional

* revert

* revert
2023-05-08 17:20:52 -07:00
Denis
29827711f1 fix: missed double quotes in cri-o config (#10040) 2023-05-07 17:27:16 -07:00
Qasim Mehmood
ab6d204641 Remove deprecated provider, fix flatcar configs, enable CI tests and refactor hetzner terraform (#10002)
* Remove deprecated provider and fix flatcar configs

* Refactor for DRYness

* Add missing line endings

* Enable tests for hetzner terraform in CI

* Add missing inventory for CI tests
2023-05-07 17:15:16 -07:00
ERIK
426b8913c0 Update flannel image repo (#10041)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2023-05-07 16:57:17 -07:00
Patrick
970ecbb008 Add runc v1.1.7 checksums (#10039)
* Add runc v1.1.7 checksums

* Add runc v1.1.6 and v1.1.5 checksums
2023-05-05 18:55:15 -07:00
Louis Tu
eb951f1c2a update rhsm repo trigger (#10001)
Signed-off-by: tu1h <lihai.tu@daocloud.io>
2023-05-02 12:00:16 -07:00
Luke Simmons
3378c9f385 Use caching to speed up docker build (#10008) 2023-05-02 11:56:15 -07:00
Aleksey Karpov
4c820b853b dockerfile ubuntu update to 22.04 (#10033)
dockerfile ubuntu update to 22.04

Update Dockerfile
2023-05-02 00:56:13 -07:00
Mohamed Omar Zaian
a505a4c71f [feat] Update metrics server to v0.6.3 (#10026) 2023-04-26 04:10:16 -07:00
pli
8727f88e41 metrics_server: add extras nodeselector, affinity, tolerations (#9972)
* metrics_server: add extras nodeselector, affinity, tolerations

* fix tolerations invalid YAML if undefined
2023-04-26 00:30:16 -07:00
Mohamed Omar Zaian
c2a8d543fb [flannel] update to v0.21.4 (#10027) 2023-04-25 13:08:16 -07:00
蒋航
4ddbd2bd2d Add Retry for restart kube-controller-manager (#10013)
Signed-off-by: hang.jiang <hang.jiang@daocloud.io>
2023-04-25 13:04:16 -07:00
Denis Kasanic
f9f5143c93 [cri-o] Bump versions to 1.26.3, 1.25.3, 1.24.5 (#9999)
Signed-off-by: Kasanic, Denis <denisx.kasanic@intel.com>
2023-04-24 17:13:02 -07:00
Mohamed Omar Zaian
fccd99c96c [nerdctl] upgrade to version 1.3.1 (#10024) 2023-04-24 11:13:01 -07:00
Mohamed Omar Zaian
dc7cf7ecd8 [helm] upgrade to 3.11.3 (#10022) 2023-04-24 08:41:02 -07:00
Denis Kasanic
169eb34a59 Fix playbook names for galaxy (#10021)
Signed-off-by: Kasanic, Denis <denisx.kasanic@intel.com>
2023-04-24 07:09:02 -07:00
Mohamed Omar Zaian
4deeaba335 [feature] Update dns-autoscaler (#9996) 2023-04-24 02:47:01 -07:00
蒋航
a59e27cb6b Update kube-vip to v0.5.12 (#10005)
Signed-off-by: hang.jiang <hang.jiang@daocloud.io>
2023-04-22 19:01:12 -07:00
Luke Simmons
617af4beda Updates requirements to latest available versions (#9938) 2023-04-20 22:43:11 -07:00
Samuel Liu
b3ed25ee35 use string for ipv6 forward conf (#9992) 2023-04-19 03:21:12 -07:00
Louis Tu
c7072b48dc add calico kubeconfig wait timeout (#9994)
Signed-off-by: tu1h <lihai.tu@daocloud.io>
2023-04-18 06:58:58 -07:00
Ho Kim
02dc9fbd3e Suggest to run reset script for first-time users (#9865) 2023-04-17 22:10:57 -07:00
Kay Yan
c98e1d1b5b add-kube-profile-to-scheduler (#9993) 2023-04-17 18:54:58 -07:00
pli
e907d55621 fix calico checksums mismatch (#9990) 2023-04-16 19:44:43 -07:00
lijin-union
cb318931aa * corrected a link (#9988)
* remove a useless parenthesis in the _sidebar file
2023-04-16 18:28:43 -07:00
Jeroen Rijken
709ae1d244 Update MetalLB and switch to CRD notation. (#9120)
Signed-off-by: Jeroen Rijken <jeroen.rijken@xs4all.nl>
2023-04-14 01:14:41 -07:00
Samuel Liu
73ce6aef97 kube.py support kubeconfig (#9982) 2023-04-14 00:14:40 -07:00
ERIK
6682a843b4 Support multi-arch using the same image name (#9978)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2023-04-13 00:36:36 -07:00
Kei Kori
dc33a1971d [etcd] fix make-ssl-etcd.sh.j2; move pem files only if any new certs exist (#9974) 2023-04-12 21:52:35 -07:00
Mohamed Omar Zaian
ed6f8df784 [feature] Update CoreDNS manifests (#9977) 2023-04-12 21:38:35 -07:00
Louis Tu
43216436ab disable rhsm repo when rhel_enable_repos is false (#9973)
Signed-off-by: tu1h <lihai.tu@daocloud.io>
2023-04-12 20:04:35 -07:00
pingrulkin
cdc25523bf Change nerdctl snapshotter to overlayfs by default (#9979) 2023-04-12 14:58:32 -07:00
Aleksey Karpov
b77780ebf7 Adding checksum verification kubectl (#9971) 2023-04-12 02:04:32 -07:00
Kay Yan
f27bea574e Add-Port-Requirements (#9969) 2023-04-12 00:04:36 -07:00
Xingjian Zhang
c38cf5dd5c Fix confusing instance sizing (etcd, kube_master) in Vagrantfile (#9966) 2023-04-11 16:40:31 -07:00
Louis Tu
2985b129fc remove invalid character (#9970)
Signed-off-by: tu1h <lihai.tu@daocloud.io>
2023-04-11 04:27:19 -07:00
Aleksey Karpov
107cb7f549 Adding checksum verification kubectl (#9963)
* Adding checksum verification kubectl

Added a checksum check of the binary file, added the PYTHONDONTWRITEBYTECODE variable to improve the stability of pip after installing packages and deleting the cache, and added the --no-compile switch to pip package installation to improve performance after deleting the cache.

* Update Dockerfile
2023-04-11 02:47:18 -07:00
Xingjian Zhang
6c30b3f263 Add throwing error when specifying unsupported os in Vagrant (#9965) 2023-04-10 23:43:18 -07:00
Samuel Liu
0104396c50 use var: kube_apiserver_address (#9967) 2023-04-10 15:01:17 -07:00
Eugene Marchanka
eecaec2919 [vSphere-csi-driver] Custom namespace fails playbook (#9946)
* Fix: vSphere Error: `Apply a CSI secret manifest`

This PR fixes an issue that you will see on the 2nd deploy when deploying external vSphere.
How to reproduce:
1. Set custom `vsphere_csi_namespace: "vmware-system-csi"`
2. Deploy as usual
3. Observe no errors
4. Deploy 2nd time without `reset`
5. Playbook fails with:
```
TASK [kubernetes-apps/csi_driver/vsphere : vSphere CSI Driver | Apply a CSI secret manifest]
fatal: [node-00]: FAILED! => changed=true                                                                                                                                                 
  censored: 'the output has been hidden due to the fact that ''no_log: true'' was specified for this result'
```

* create namespace if does not exist

* lint fix

* try to fix lint errors

* fix `too few spaces before comment`

* change the order of applied manifests

* typo
2023-04-09 22:13:15 -07:00
jeremy-thuon
4a03d13d08 [cilium] fix rbac and upgrade hubble v0.11.0 (#3) (#9959)
* [cilium] fix rbac and upgrade hubble v0.11.0 (#3)

* [cilium] fix rbac for LB bgp ipam

* [cilium] Upgrade Hubble to v0.11.0 and add mTLS between Hubble UI and Hubble Relay

* fix dns domain hubble for tls

---------

Co-authored-by: Thuon Jeremy <d107869@olinfra1.infra.bdm.outscale.c1.dav.fr>

* Fix blank line

---------

Co-authored-by: Thuon Jeremy <d107869@olinfra1.infra.bdm.outscale.c1.dav.fr>
2023-04-09 22:07:15 -07:00
rtsp
fcb5e77338 [cert-manager] Upgrade to v1.11.1 (#9964) 2023-04-09 21:37:15 -07:00
Samuel Liu
ece174da7c fix restart k8s components (#9962) 2023-04-09 19:51:15 -07:00
Mohamed Omar Zaian
a94b893e2c [containerd] add hashes for 1.6.20 (#9954) 2023-04-04 16:01:39 -07:00
Dominykas Norkus
5e2cb4d244 Add bind address variable to OCCM (#9958) 2023-04-04 15:57:40 -07:00
Mohamed Omar Zaian
dff58023d9 [argocd] update argocd to v2.6.7 (#9953) 2023-04-04 12:01:43 -07:00
Processus42
9a8f95e73d Documentation: Fix collection URL (#9949) 2023-04-03 18:29:51 -07:00
Mohamed Omar Zaian
766d3696c9 [calico] add v3.25.1 and make it default (#9950) 2023-04-03 18:21:51 -07:00
Mohamed Omar Zaian
b88229a662 [ingress-nginx] upgrade to 1.7.0 (#9952) 2023-04-03 17:51:51 -07:00
Mohamed Omar Zaian
c00cea7b17 [helm] upgrade to 3.11.2 (#9951) 2023-04-03 17:47:51 -07:00
ERIK
0c4f57a093 Support extended settings for the Debian os family (#9943)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2023-03-30 18:53:49 -07:00
Kundan Kumar
3a6069916d updated link for baremetal consideration (#9944) 2023-03-30 08:23:48 -07:00
Kundan Kumar
e6eda9d811 corrected reference link to valid one (#9940)
* corrected reference link to valid one

* Update calico.md

incorporated review comments
2023-03-29 16:57:48 -07:00
Kay Yan
e8f0fb82fe fix-kube-bench-1.2.20 (#9939) 2023-03-29 09:35:49 -07:00
Kay Yan
19856cf692 fix-kube-bench-1.1.19 (#9937) 2023-03-28 21:01:24 -07:00
Mathias Petermann
3450865d3f docs(argocd): ArgoCD no longer uses the pod name as initial password 2023-03-28 09:47:45 +02:00
Kay Yan
deb532ce27 fix-kube-bench-4.1.1 (#9934) 2023-03-27 21:48:22 -07:00
Anton
1bb4f88af1 cilium: Additional fix the configuration of tls for hubble #9880 (#9932) 2023-03-27 08:48:27 -07:00
Mathias Petermann
dcc04e54f3 fix(cert manager): Fix manifest if cert_manager_trusted_internal_ca is provided (#9922) 2023-03-27 08:12:28 -07:00
xiuguang.huang
4020a93d7e delete the probe option of cilium_kube_proxy_replacement (#9929) 2023-03-27 08:08:28 -07:00
R. P. Taylor
a676c106d3 change bash for loop for SAN check (#9060)
fix merge conflict
2023-03-27 06:36:30 -07:00
Luke Simmons
acbf44a1b4 Adds support for Ansible collections (#9582) 2023-03-27 02:25:55 -07:00
HirazawaUi
baed5f0b32 Remove deprecated udpIdleTimeout field in KubeProxyConfiguration (#9925) 2023-03-27 02:05:55 -07:00
Toru Komatsu
8afd74ce1f cilium: Fix the configuration of tls for hubble (#9880)
Signed-off-by: utam0k <k0ma@utam0k.jp>
2023-03-24 01:10:31 -07:00
Maxime Picaud
f6e4a231cb fix(download): validate mirrors on localhost (#9669) 2023-03-23 08:04:32 -07:00
Toru Komatsu
3a5f5692ca Cilium v1.13.0 (#9879)
Signed-off-by: utam0k <k0ma@utam0k.jp>
2023-03-23 01:20:23 -07:00
Jiri Fiala
9b37699d0d Cilium Operator replicas configuration (#9894)
Signed-off-by: Fiala, JiriX <jirix.fiala@intel.com>
2023-03-22 08:28:38 -07:00
Kay Yan
cc382f2412 haproxy-proxy-ipv6 (#9674) 2023-03-22 05:58:36 -07:00
Maxime Leroy
9a8bf0e38a fix(contrib/terraform): do not add access_ip when not wanted (#9869) 2023-03-21 20:56:36 -07:00
Will Hegedus
97dfdcd8fe feat: support cilium 1.13.1 (#9914)
Cilium 1.13.1 changed how the cilium-cni binary gets placed in /opt/cni/bin,
so that it takes place in an init container rather than in the main agent.
2023-03-21 12:56:12 -07:00
prashantchitta
a9f52060c9 Fix cilium's hubble relay configuration (#9876)
* Fix cilium's hubble relay configuration

* Fixed the tls from code review

* Updated to use dns_domain instead of hardcoding
2023-03-21 12:50:12 -07:00
tu1h
8cf5fefe84 Add download retries option (#9911)
Signed-off-by: tu1h <lihai.tu@daocloud.io>
2023-03-21 09:38:12 -07:00
Arthur Outhenin-Chalandre
f73b941d8a Add MrFreezeex as reviewer (#9906)
Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>
2023-03-21 01:35:17 -07:00
ERIK
fb8631cdf6 fix allow unsupported distribution (#9904)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2023-03-21 01:35:09 -07:00
Mohamed Omar Zaian
7859aee735 [kubernetes] Add hashes for 1.26.3, 1.25.8, 1.24.12 (#9900) 2023-03-21 01:31:08 -07:00
蒋航
83c3ce7f8f Add Retry for Checking calico exists (#9883)
Signed-off-by: hang.jiang <hang.jiang@daocloud.io>
2023-03-20 21:51:06 -07:00
Kay Yan
309aaee427 fix-cilium-error (#9902) 2023-03-20 02:41:17 -07:00
Mohamed Omar Zaian
349c8901f8 [containerd] add hashes for 1.7.0 (#9892) 2023-03-14 21:48:14 -07:00
Samuel Liu
df9aba6298 fix typo word 2023-03-14 15:49:22 +01:00
James
8f0bd36155 README: add mention to custom_cni (#9878) 2023-03-14 07:38:17 -07:00
biqiang Wu
2ae3ea9ee3 Modified the default value of cilium IPAM and added the support for related parameters (#9443)
Signed-off-by: dcwbq <biqiang.wu@daocloud.io>
2023-03-13 17:45:10 -07:00
蒋航
99115ad04b Fix Get current calico version (#9873)
Signed-off-by: hang.jiang <hang.jiang@daocloud.io>
2023-03-10 05:48:40 -08:00
ERIK
7747ff2572 Fix uniontech os installation failure (#9862)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2023-03-09 22:00:39 -08:00
Victor Morales
fff400513b Improve method to get binary checksums (#9782) 2023-03-09 13:56:30 -08:00
Marijn van der Giesen
eb4bd36f73 fix(kubernetes): Also apply kubeadm patches during upgrade (#9781) 2023-03-09 13:50:30 -08:00
panguicai
2d20f0c024 fix cri-o arm64 v1.26.0 wrong archive checksum (#9872)
Signed-off-by: panguicai008 <guicai.pan@daocloud.io>
2023-03-09 13:32:31 -08:00
Cyclinder
b0793df293 bump calico to v3.25.0 (#9860)
Signed-off-by: cyclinder qifeng.guo@daocloud.io
2023-03-09 00:02:02 -08:00
Brendan McShane
ab213a7db0 spelling 2023-03-09 08:58:08 +01:00
Brendan McShane
9fb1814784 Fix warning/info markdown 2023-03-09 08:58:08 +01:00
Jack
1ca50f3eea Update check calico version command (#9861) 2023-03-08 00:31:12 -08:00
Arthur Outhenin-Chalandre
82f68ca395 calico: cilium: use localhost lb by default on kube-proxy replacement (#9718)
This commit removes the variable `use_localhost_as_kubeapi_loadbalancer`
and rather detects that we are in a situation where we can use the
localhost apiserver loadbalancer (meaning that we use the localhost load
balancer and that the same ports are used for both the load balancer and
the kube-apiserver).

This also cleans up the calico code to use `kube_apiserver_global_endpoint`
rather than implementing the same logic all over again.

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>
2023-03-07 04:28:36 -08:00
panguicai
3a675393dc upgrade rancher local-path-provisioner to v0.0.23 (#9855)
Signed-off-by: panguicai008 <1121906548@qq.com>
2023-03-06 16:54:17 -08:00
Jack
9c41769dab Update nodes in etc hosts after cluster scale (#9837) 2023-03-06 16:18:18 -08:00
Mohamed Zaian
dba29db58d [helm] upgrade to 3.11.1 (#9849) 2023-03-06 15:56:17 -08:00
panguicai
e175ccdde0 the url of multus has been moved (#9850)
Signed-off-by: panguicai008 <1121906548@qq.com>
2023-03-05 18:52:57 -08:00
Arthur Outhenin-Chalandre
9e2104c7d3 node: fix default kubelet/runtime cgroups when kube_reserved is false (#9834)
* node: fix default kubelet/runtime cgroups when kube_reserved is false (default)

Commit 1c4db6132d introduced the notion of kube_reserved. This
introduced a breaking change, defaulting to kube.slice for the
container_manager and the kubelet as if kube_reserved were always
enabled, whereas it is disabled by default.

This commit fixes this by bringing back system.slice whenever
kube_reserved is disabled.

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* inventory/sample: change false for kube_reserved as it's the default

Changing the commented value in sample inventory to the actual default
value.

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

---------

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>
2023-03-05 18:48:58 -08:00
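A hedged sketch of the inventory setting this affects; the variable and slice names come from the commit message, the comments are an interpretation of it:

```yaml
# Default: kube_reserved disabled, kubelet and the container runtime stay in system.slice.
kube_reserved: false
# Opt-in: reserve resources and run kubelet/runtime under kube.slice instead.
# kube_reserved: true
```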
DRAGON2002
1d9502e01d update args (#9856)
Signed-off-by: Anant Vijay <anantvijay3@gmail.com>
2023-03-05 18:38:57 -08:00
panguicai
c710c93c02 upgrade kubevip to v0.5.11 (#9852)
Signed-off-by: panguicai008 <1121906548@qq.com>
2023-03-05 17:54:57 -08:00
DRAGON2002
13c793fd0d add flag (#9827)
Signed-off-by: Anant Vijay <anantvijay3@gmail.com>
2023-03-05 17:50:57 -08:00
panguicai
1555d78155 upgrade argocd to v2.6.3 (#9848)
Signed-off-by: panguicai008 <1121906548@qq.com>
2023-03-03 06:44:58 -08:00
Maxime Leroy
fd8260b930 fix(upgrade-cluster): retry other masters upgrade (#9768)
Signed-off-by: Maxime Leroy <19607336+maxime1907@users.noreply.github.com>
2023-03-03 05:44:58 -08:00
Arthur Outhenin-Chalandre
6769bb32b1 Network plugin custom (#9819)
* network_plugin/custom_cni: add CNI to apply provided manifests

Add a new simple custom_cni to install provided Kubernetes manifests.
This could be useful to apply manifests directly provided by a CNI when
it is not supported by Kubespray (i.e. a helm chart or any other manifest
generation method).

Co-authored-by: James Landrein <james.landrein@proton.ch>
Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

* network_plugin/custom_cni: add test with cilium

Co-authored-by: James Landrein <james.landrein@proton.ch>
Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

---------

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>
Co-authored-by: James Landrein <james.landrein@proton.ch>
2023-03-03 00:23:08 -08:00
Victor Morales
677b7ecd89 Drop crun_bin_dir unused var (#9845)
crun_bin_dir was used to specify the destination of the crun binary during the
download process. This path must match the value provided in the CRI-O
configuration file, so changing it to bin_dir helps avoid mismatch errors.

Signed-off-by: Victor Morales <chipahuac@hotmail.com>
2023-03-02 18:30:57 -08:00
Maxime Leroy
659fa0eddc feat(contrib/terraform): support custom ssh port (#9836) 2023-03-02 18:24:58 -08:00
Jiffs Maverick
501deecdd0 Downgrade version of coredns to 1.8.6 for compatibility with 1.23-1.24 (#9846) 2023-03-02 17:56:57 -08:00
Kenichi Omichi
7fec254f62 Drop part for supporting ansible 2.9 and 2.10 (#9842)
requirements-$ANSIBLE_VERSION.yml doesn't exist in Kubespray repo.
That was for supporting ansible 2.10-, and now Kubespray supports
2.11+. So this drops the part to avoid confusion.
2023-03-02 01:54:58 -08:00
Maxime Leroy
835811ec84 fix(contrib/terraform): do not set ssh port (#9828)
Signed-off-by: Maxime Leroy <19607336+maxime1907@users.noreply.github.com>
2023-03-01 18:50:55 -08:00
Maxime Leroy
b7fe368469 feat(Dockerfile): openssh-client support (#9835) 2023-03-01 18:40:55 -08:00
Mohamed Zaian
8b3f3c04cc [kubernetes] Add hashes for 1.26.2, 1.25.7, 1.24.11 (#9829) 2023-03-01 15:31:17 -08:00
Mohamed Zaian
ecd649846a [containerd] add hashes for 1.6.19 (#9838) 2023-02-28 15:35:18 -08:00
Mykola Ulianytskyi (Nikolay Ulyanitsky)
27c2d7e9e2 Replace semicolons by commas in options (#9840) 2023-02-28 07:33:16 -08:00
Jack
f366863a99 Add rsync in Dockerfile (#9839) 2023-02-28 07:29:27 -08:00
Robin Wallace
5bb54ef6a2 upcloud: add server groups and target port for lb (#9831) 2023-02-27 17:21:15 -08:00
Mohamed Zaian
f7dade867a [feature] add mzaian to approvers (#9767) 2023-02-27 15:53:16 -08:00
Eugene Artemenko
5cbcec8968 Add resources section to all containers related to vSphere CSI driver (#9687) 2023-02-27 02:36:20 -08:00
Jack
62f34c6085 add image garbage collection (#9832) 2023-02-27 00:26:19 -08:00
Aleksey Karpov
d908e86590 Reducing the number of layers and commands (#9822) 2023-02-27 00:18:19 -08:00
Samuel Liu
f9ce176211 dont use var etcd_kubeadm_enabled (#9823) 2023-02-26 15:58:18 -08:00
Daniel VG
1dab5b5d9c docs: small vsphere docs fixes (#9796)
* docs: fix storageClassName in PersistentVolume

* docs: minor typo fix and formatting

* docs: fix proper STORAGECLASS in example prompt
2023-02-24 00:43:34 -08:00
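For readers following the docs fix above, a minimal PersistentVolume sketch showing where storageClassName sits; the values are placeholders, not the exact example from the vSphere CSI documentation:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-static-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: example-vsphere-sc   # must match the PVC's storageClassName
  csi:
    driver: csi.vsphere.vmware.com
    volumeHandle: example-volume-id      # placeholder volume ID
```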
Aleksey Karpov
739608454d Dockerfile optimization (#9821)
Reduces the number of layers and increases readability; it should also reduce the image size (by how much I can't verify, since I'm unable to build the image due to the unavailability of the vagrant repository)
2023-02-23 01:39:34 -08:00
Mohamed Zaian
260dad8f10 [ingress-nginx] upgrade to 1.6.4 (#9818) 2023-02-23 01:35:34 -08:00
Mohamed Zaian
c950bfface [containerd] add hashes for 1.5.17, 1.5.18, 1.6.17, 1.6.8 (#9814) 2023-02-22 19:13:06 -08:00
Aleksey Karpov
75b07ad40c Reducing the image size (#9810) 2023-02-21 22:27:56 -08:00
jianse
bd84353fc9 add krew_download_url to offline.yml (#9788) 2023-02-20 16:23:48 -08:00
Kay Yan
9ee2fbc51c add-ci-for-insecure_registries (#9797) 2023-02-20 16:19:48 -08:00
DRAGON2002
fa92d9c0e9 feature: add vim to kubespray docker image (#9805)
* install nano/vi/vim

Signed-off-by: Anant Vijay <anantvijay3@gmail.com>

* update Dockerfile

Signed-off-by: Anant Vijay <anantvijay3@gmail.com>

---------

Signed-off-by: Anant Vijay <anantvijay3@gmail.com>
2023-02-20 04:25:49 -08:00
JaneLiuL
4aacec4542 fix calico rbac issue (#9806) 2023-02-20 01:43:40 -08:00
Karl Fischer
6278b12af6 fixed clinet to client 2023-02-20 10:09:03 +01:00
Maxime Leroy
64e4de371e fix(kubelet): no cloud config for external cloud provider (#9793) 2023-02-20 01:07:40 -08:00
Marijn van der Giesen
ad4958249f fix(crio): First runc then crictl (#9780) 2023-02-19 22:27:38 -08:00
Alexander
29f01d3e5b update docker image tag to v2.21.0 in README.md (#9802) 2023-02-19 22:23:49 -08:00
Mathieu Parent
3fd7d91452 Update nodelocaldns to 1.22.18 (#9800)
Cf. ceb37c3a5c
2023-02-19 22:23:38 -08:00
pli
4ba1df5237 Fix kubernetes-app/argocd: download related things with the download role (#9786)
* Fix yq install in argocd role: use download_file instead of get_url

* Fix use download_file instead of get_url to download argocd-install manifest in argocd role

* Fix order and add arm64 checksum

* Fix: Failed to template loop_control.label: 'None'
2023-02-19 16:11:37 -08:00
rongfu.leng
145c80e9ab Fix containerd config_path error when containerd_registries is configured (#9770)
Signed-off-by: rongfu.leng <rongfu.leng@daocloud.io>
2023-02-16 20:57:39 -08:00
王煎饼
ab0e06eae6 Fix CentOS Extras repo url for Oracle Linux 7 aarch64 (#9791) 2023-02-15 17:43:38 -08:00
ERIK
786ce8ddd7 Update the description of runc in offline.yml (#9783)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2023-02-13 18:41:30 -08:00
JaneLiuL
f06de0735f fix ingress url not found issue (#9789) 2023-02-13 18:37:30 -08:00
ERIK
6ff845a199 Enable control plane load balancing for kube-vip (#9785)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2023-02-12 19:25:28 -08:00
tu1h
fe9e11b501 Fix cni documentation (#9778)
Signed-off-by: tu1h <lihai.tu@daocloud.io>
2023-02-12 16:05:31 -08:00
Kenichi Omichi
3c2eb52828 Copy contrib/ to Dockerfile (#9774)
Since Kubespray v2.21.0, commit a98ab40434 accidentally removed copying
contrib/. The contrib/ directory contains useful tools such as the offline
tooling, so this adds contrib/ back to the Dockerfile.
2023-02-09 19:01:31 -08:00
Samuel BECK
2838a7c304 add proxy_env variable to apt_key cleanup task (#9766) 2023-02-09 06:38:22 -08:00
Ho Kim
2788a02096 Fix a bug in removing kubelet data dir (#9764) 2023-02-08 19:04:36 -08:00
Sean Knight
8a2e1189fb correct typo hhttps -> https (#9763) 2023-02-07 17:55:10 -08:00
Bas
bdd1c7bcb5 Catch ShellCheck errors in pre-commit using same command as CI. (#9752) 2023-02-06 19:08:57 -08:00
Denis Kasanic
d81978625c Update cri-o archive checksum (#9761)
Signed-off-by: Kasanic, Denis <denisx.kasanic@intel.com>
2023-02-06 06:25:01 -08:00
Bas
2c93c997cf pre-commit autocorrected files (#9750) 2023-02-06 01:35:16 -08:00
Haitian Chen
10337f2fcb skip ensuring ntp packages in coreos (#9742)
Check OS when ensuring NTP package and tzdata package.
2023-02-06 01:35:04 -08:00
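The shape of the guard the commit above describes, as an illustrative Ansible task rather than the exact Kubespray one; the fact/variable name in the condition is an assumption:

```yaml
# Illustrative only: skip plain package installs on Fedora CoreOS,
# where packages are managed via rpm-ostree instead.
- name: Ensure NTP package
  package:
    name: "{{ ntp_package | default('chrony') }}"
    state: present
  when: not (is_fedora_coreos | default(false))   # assumed variable name
```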
manzsolutions-lpr
6c41191646 Add support for PodSecurityStandards (#9713) 2023-02-06 01:27:01 -08:00
Chauncey
7730cfd619 fix: add ipamconfigs resource for calico (#9755)
Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>
2023-02-05 15:50:30 -08:00
Kevin Huang
1853085ffe feat(cinder-csi): Allow deletionPolicy to be configurable (#9736) 2023-02-02 15:46:28 -08:00
stelucz
9247137e60 Replace label k8s-app: nodelocaldns in DaemonSet template by k8s-app: node-local-dns (#9745) 2023-02-02 15:42:28 -08:00
杨刚 (成都)
e8f048c71d [argocd] update argocd to v2.5.10 (#9753)
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-02-02 15:38:29 -08:00
Fish-pro
6cb027dfab Optimize the document for readability (#9730)
Signed-off-by: Fish-pro <zechun.chen@daocloud.io>
2023-02-01 00:01:06 -08:00
David Moreau Simard
edde594bbe tests: Update ara 1.5.7 to 1.6.1 (#9737)
1.5.7 was released Aug 2, 2021 and 1.6.1 came out on Dec 13, 2022.

There have been a good number of new features, improvements and fixes since
1.5.7, and the changelogs for each version are available in the docs:
https://ara.readthedocs.io/en/latest/changelog-release-notes.html
2023-01-31 19:29:06 -08:00
rongfu.leng
0707c8ea6f fix: with_item to with_dict (#9729)
Signed-off-by: rongfu.leng <rongfu.leng@daocloud.io>
2023-01-31 03:18:50 -08:00
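A generic illustration of the loop change above (not the actual Kubespray task): iterating a dictionary needs with_dict, which yields item.key and item.value, whereas with_items would treat the whole mapping as a single element:

```yaml
- name: Print key/value pairs             # illustrative only
  debug:
    msg: "{{ item.key }} -> {{ item.value }}"
  with_dict: "{{ sample_dict | default({'a': 1, 'b': 2}) }}"
```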
Fish-pro
c0c2cd6e03 Adjust the table style to make it easier to read (#9731)
Signed-off-by: Fish-pro <zechun.chen@daocloud.io>
2023-01-31 00:56:48 -08:00
James
36c6de9abd Fix cilium's hubble ui configuration (#9735)
This fixes the CrashLoopBackOff error that appears because the envoy
configuration has changed a lot and upstream removed the envoy proxy in
favour of nginx only. Those changes are based on the upstream cilium Helm chart.
2023-01-31 00:28:48 -08:00
蒋航
c5debf013c Update kubevip to v0.5.8 (#9734)
Signed-off-by: hang.jiang <hang.jiang@daocloud.io>
2023-01-31 00:24:55 -08:00
Kay Yan
f9cc8ae10c [kubernetes] Make kubernetes v1.26 default (#9732)
* make-kube-1.26-default

* fix-bugs
2023-01-31 00:24:48 -08:00
杨刚 (成都)
94dd02121b Update containerd version: containerd 1.6.16. (#9727)
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-01-31 00:16:48 -08:00
杨刚 (成都)
c360501854 fix typo in doc. (#9728)
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-01-30 16:58:49 -08:00
杨刚 (成都)
8523f525aa fix docs for cert_manager.md (#9724)
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-01-28 19:14:40 -08:00
杨刚 (成都)
b9a34b83d4 [argocd] update argocd to v2.5.9 (#9723)
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-01-28 19:14:33 -08:00
杨刚 (成都)
2a24c2e359 fix moved url in multus.md (#9722)
Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-01-28 19:10:33 -08:00
杨刚
8d6cfd6e53 [argocd] update argocd to v2.5.8 (#9708)
Signed-off-by: yanggang <gang.yang@daocloud.io>

Signed-off-by: yanggang <gang.yang@daocloud.io>
2023-01-27 00:14:25 -08:00
Florian Ruynat
1f36df666d Update fedora35 vagrant box url (#9699)
* Update fedora35 vagrant box url

* Update Terraform to 1.3.7

* Update Vagrant to 2.3.4
2023-01-26 21:28:25 -08:00
Cristian Calin
64dbf2e429 update equinix terraform code to fix kubespray CI (#9702)
* add terraform lock files to ignore list

* move contrib/terraform/metal to contrib/terraform/equinix to reflect upstream change
2023-01-26 21:24:25 -08:00
Florian Ruynat
6881398941 Add ruamel.yaml to docker image (#9707) 2023-01-26 18:26:25 -08:00
Cristian Calin
57638124c5 document the CI environment (#9714) 2023-01-26 05:02:26 -08:00
ERIK
ee2193d4cf Add dns configuration for cert manager (#9673)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>

Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2023-01-23 17:42:15 -08:00
Florian Ruynat
eb56130433 Add jmespath back to Dockerfile image (#9697) 2023-01-23 16:24:17 -08:00
Tristan
5fbbcedebc 9693: Fix comma-separated-list splitting of kubelet_enforce_node_allocatable (#9694)
See https://github.com/kubernetes-sigs/kubespray/issues/9693
2023-01-23 16:20:17 -08:00
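For context on the fix above, a hedged inventory sketch showing the comma-separated form that now gets split correctly when templated into the kubelet configuration (value shown is an example, not a recommended default):

```yaml
kubelet_enforce_node_allocatable: "pods,kube-reserved,system-reserved"
# rendered (sketch) into the kubelet's enforceNodeAllocatable setting
```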
Florian Ruynat
18f2abad2f Cleanup v1.23.x missing references/conditions/hashes (#9698) 2023-01-23 16:16:16 -08:00
Mohamed Zaian
391dd97f95 [kubernetes] support 1.26.x (#9570) 2023-01-23 00:10:11 -08:00
Tom Janson
44243eada9 reword confusing etcd download url comment (#9686)
It is quite confusing that there's an all-caps, bolded comment that seems to imply that `etcd_download_url` is relevant only when not using host-based deployment. The opposite is true: of course the artifact download URL is relevant and required for host-based etcd.

Perhaps the entire comment can be read in a different way and should be reworded entirely, cf. 374438a3d6/docs/offline-environment.md (L38)

Removing the "**DON'T**" matches the way the other comments in this file are written and matches my personal interpretation.
2023-01-22 01:14:03 -08:00
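For reference, the shape of the variable discussed above as it appears in offline.yml; the exact path template is an assumption, so check the sample file:

```yaml
# inventory/sample/group_vars/all/offline.yml (sketch)
etcd_download_url: "{{ files_repo }}/github.com/etcd-io/etcd/releases/download/{{ etcd_version }}/etcd-{{ etcd_version }}-linux-{{ image_arch }}.tar.gz"
```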
Florian Ruynat
34d0451585 Update KUBESPRAY_VERSION and kube_version_min_required (with hashes cleanup) (#9691) 2023-01-20 14:11:54 -08:00
735 changed files with 14980 additions and 12654 deletions

View File

@@ -7,18 +7,6 @@ skip_list:
# These rules are intentionally skipped:
#
# [E204]: "Lines should be no longer than 160 chars"
# This could be re-enabled with a major rewrite in the future.
# For now, there's not enough value gain from strictly limiting line length.
# (Disabled in May 2019)
- '204'
# [E701]: "meta/main.yml should contain relevant info"
# Roles in Kubespray are not intended to be used/imported by Ansible Galaxy.
# While it can be useful to have these metadata available, they are also available in the existing documentation.
# (Disabled in May 2019)
- '701'
# [role-name] "meta/main.yml" Role name role-name does not match ``^+$`` pattern
# Meta roles in Kubespray don't need proper names
# (Disabled in June 2021)
@@ -28,3 +16,23 @@ skip_list:
# In Kubespray we use variables that use camelCase to match their k8s counterparts
# (Disabled in June 2021)
- 'var-naming'
# [fqcn-builtins]
# Roles in kubespray don't need fully qualified collection names
# (Disabled in Feb 2023)
- 'fqcn-builtins'
# We use template in names
- 'name[template]'
# No changed-when on commands
# (Disabled in June 2023 after ansible upgrade; FIXME)
- 'no-changed-when'
# Disable run-once check with free strategy
# (Disabled in June 2023 after ansible upgrade; FIXME)
- 'run-once[task]'
exclude_paths:
# Generated files
- tests/files/custom_cni/cilium.yaml
- venv

.ansible-lint-ignore Normal file
View File

@@ -0,0 +1,8 @@
# This file contains ignores rule violations for ansible-lint
inventory/sample/group_vars/k8s_cluster/k8s-cluster.yml jinja[spacing]
roles/kubernetes/control-plane/defaults/main/kube-proxy.yml jinja[spacing]
roles/kubernetes/control-plane/defaults/main/main.yml jinja[spacing]
roles/kubernetes/kubeadm/defaults/main.yml jinja[spacing]
roles/kubernetes/node/defaults/main.yml jinja[spacing]
roles/kubernetes/preinstall/defaults/main.yml jinja[spacing]
roles/kubespray-defaults/defaults/main.yaml jinja[spacing]

.gitignore vendored
View File

@@ -11,7 +11,8 @@ contrib/offline/offline-files.tar.gz
.cache
*.bak
*.tfstate
*.tfstate.backup
*.tfstate*backup
*.lock.hcl
.terraform/
contrib/terraform/aws/credentials.tfvars
.terraform.lock.hcl
@@ -113,3 +114,7 @@ roles/**/molecule/**/__pycache__/
# Temp location used by our scripts
scripts/tmp/
tmp.md
# Ansible collection files
kubernetes_sigs-kubespray*tar.gz
ansible_collections

View File

@@ -9,12 +9,12 @@ stages:
- deploy-special
variables:
KUBESPRAY_VERSION: v2.20.0
KUBESPRAY_VERSION: v2.22.1
FAILFASTCI_NAMESPACE: 'kargo-ci'
GITLAB_REPOSITORY: 'kargo-ci/kubernetes-sigs-kubespray'
ANSIBLE_FORCE_COLOR: "true"
MAGIC: "ci check this"
TEST_ID: "$CI_PIPELINE_ID-$CI_BUILD_ID"
TEST_ID: "$CI_PIPELINE_ID-$CI_JOB_ID"
CI_TEST_VARS: "./tests/files/${CI_JOB_NAME}.yml"
CI_TEST_REGISTRY_MIRROR: "./tests/common/_docker_hub_registry_mirror.yml"
CI_TEST_SETTING: "./tests/common/_kubespray_test_settings.yml"
@@ -33,16 +33,12 @@ variables:
MITOGEN_ENABLE: "false"
ANSIBLE_LOG_LEVEL: "-vv"
RECOVER_CONTROL_PLANE_TEST: "false"
RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[2:],kube_control_plane[1:]"
TERRAFORM_VERSION: 1.0.8
ANSIBLE_MAJOR_VERSION: "2.11"
RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[2:]:kube_control_plane[1:]"
TERRAFORM_VERSION: 1.3.7
PIPELINE_IMAGE: "$CI_REGISTRY_IMAGE/pipeline:${CI_PIPELINE_ID}-${CI_COMMIT_SHORT_SHA}"
before_script:
- ./tests/scripts/rebase.sh
- update-alternatives --install /usr/bin/python python /usr/bin/python3 1
- python -m pip uninstall -y ansible ansible-base ansible-core
- python -m pip install -r tests/requirements-${ANSIBLE_MAJOR_VERSION}.txt
- mkdir -p /.ssh
.job: &job
@@ -57,6 +53,7 @@ before_script:
.testcases: &testcases
<<: *job
retry: 1
interruptible: true
before_script:
- update-alternatives --install /usr/bin/python python /usr/bin/python3 1
- ./tests/scripts/rebase.sh

View File

@@ -1,16 +1,40 @@
---
pipeline image:
.build:
stage: build
image: docker:20.10.22-cli
image:
name: moby/buildkit:rootless
entrypoint: [""]
variables:
DOCKER_TLS_CERTDIR: ""
services:
- name: docker:20.10.22-dind
# See https://gitlab.com/gitlab-org/gitlab-runner/-/issues/27300 for why this is required
command: ["--tls=false"]
BUILDKITD_FLAGS: --oci-worker-no-process-sandbox
before_script:
- echo $CI_REGISTRY_PASSWORD | docker login -u $CI_REGISTRY_USER --password-stdin $CI_REGISTRY
- mkdir ~/.docker
- echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > ~/.docker/config.json
pipeline image:
extends: .build
script:
# DOCKER_HOST is overwritten if we set it as a GitLab variable
- DOCKER_HOST=tcp://docker:2375; docker build --network host --file pipeline.Dockerfile --tag $PIPELINE_IMAGE .
- docker push $PIPELINE_IMAGE
- |
buildctl-daemonless.sh build \
--frontend=dockerfile.v0 \
--local context=. \
--local dockerfile=. \
--opt filename=./pipeline.Dockerfile \
--output type=image,name=$PIPELINE_IMAGE,push=true \
--import-cache type=registry,ref=$CI_REGISTRY_IMAGE/pipeline:cache
rules:
- if: '$CI_COMMIT_REF_NAME != $CI_DEFAULT_BRANCH'
pipeline image and build cache:
extends: .build
script:
- |
buildctl-daemonless.sh build \
--frontend=dockerfile.v0 \
--local context=. \
--local dockerfile=. \
--opt filename=./pipeline.Dockerfile \
--output type=image,name=$PIPELINE_IMAGE,push=true \
--import-cache type=registry,ref=$CI_REGISTRY_IMAGE/pipeline:cache \
--export-cache type=registry,ref=$CI_REGISTRY_IMAGE/pipeline:cache,mode=max
rules:
- if: '$CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH'

View File

@@ -14,7 +14,7 @@ vagrant-validate:
stage: unit-tests
tags: [light]
variables:
VAGRANT_VERSION: 2.2.19
VAGRANT_VERSION: 2.3.4
script:
- ./tests/scripts/vagrant-validate.sh
except: ['triggers', 'master']
@@ -39,21 +39,34 @@ syntax-check:
ANSIBLE_VERBOSITY: "3"
script:
- ansible-playbook --syntax-check cluster.yml
- ansible-playbook --syntax-check playbooks/cluster.yml
- ansible-playbook --syntax-check upgrade-cluster.yml
- ansible-playbook --syntax-check playbooks/upgrade_cluster.yml
- ansible-playbook --syntax-check reset.yml
- ansible-playbook --syntax-check playbooks/reset.yml
- ansible-playbook --syntax-check extra_playbooks/upgrade-only-k8s.yml
except: ['triggers', 'master']
collection-build-install-sanity-check:
extends: .job
stage: unit-tests
tags: [light]
variables:
ANSIBLE_COLLECTIONS_PATH: "./ansible_collections"
script:
- ansible-galaxy collection build
- ansible-galaxy collection install kubernetes_sigs-kubespray-$(grep "^version:" galaxy.yml | awk '{print $2}').tar.gz
- ansible-galaxy collection list $(egrep -i '(name:\s+|namespace:\s+)' galaxy.yml | awk '{print $2}' | tr '\n' '.' | sed 's|\.$||g') | grep "^kubernetes_sigs.kubespray"
- test -f ansible_collections/kubernetes_sigs/kubespray/playbooks/cluster.yml
- test -f ansible_collections/kubernetes_sigs/kubespray/playbooks/reset.yml
except: ['triggers', 'master']
tox-inventory-builder:
stage: unit-tests
tags: [light]
extends: .job
before_script:
- ./tests/scripts/rebase.sh
- apt-get update && apt-get install -y python3-pip
- update-alternatives --install /usr/bin/python python /usr/bin/python3 10
- python -m pip uninstall -y ansible ansible-base ansible-core
- python -m pip install -r tests/requirements.txt
script:
- pip3 install tox
- cd contrib/inventory_builder && tox
@@ -75,6 +88,13 @@ check-readme-versions:
script:
- tests/scripts/check_readme_versions.sh
check-galaxy-version:
stage: unit-tests
tags: [light]
image: python:3
script:
- tests/scripts/check_galaxy_version.sh
check-typo:
stage: unit-tests
tags: [light]

View File

@@ -9,10 +9,6 @@
stage: deploy-part1
before_script:
- tests/scripts/rebase.sh
- apt-get update && apt-get install -y python3-pip
- update-alternatives --install /usr/bin/python python /usr/bin/python3 10
- python -m pip uninstall -y ansible ansible-base ansible-core
- python -m pip install -r tests/requirements.txt
- ./tests/scripts/vagrant_clean.sh
script:
- ./tests/scripts/molecule_run.sh
@@ -58,6 +54,7 @@ molecule_cri-o:
stage: deploy-part2
script:
- ./tests/scripts/molecule_run.sh -i container-engine/cri-o
allow_failure: true
when: on_success
# Stage 3 container engines don't get as much attention so allow them to fail

View File

@@ -23,6 +23,14 @@
allow_failure: true
extends: .packet
packet_cleanup_old:
stage: deploy-part1
extends: .packet_periodic
script:
- cd tests
- make cleanup-packet
after_script: []
# The ubuntu20-calico-aio jobs are meant as early stages to prevent running the full CI if something is horribly broken
packet_ubuntu20-calico-aio:
stage: deploy-part1
@@ -31,21 +39,8 @@ packet_ubuntu20-calico-aio:
variables:
RESET_CHECK: "true"
packet_ubuntu20-calico-aio-ansible-2_11:
stage: deploy-part1
extends: .packet_periodic
when: on_success
variables:
ANSIBLE_MAJOR_VERSION: "2.11"
RESET_CHECK: "true"
# ### PR JOBS PART2
packet_ubuntu18-aio-docker:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_ubuntu20-aio-docker:
stage: deploy-part2
extends: .packet_pr
@@ -56,11 +51,6 @@ packet_ubuntu20-calico-aio-hardening:
extends: .packet_pr
when: on_success
packet_ubuntu18-calico-aio:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_ubuntu22-aio-docker:
stage: deploy-part2
extends: .packet_pr
@@ -80,28 +70,19 @@ packet_almalinux8-crio:
extends: .packet_pr
stage: deploy-part2
when: on_success
allow_failure: true
packet_ubuntu18-crio:
packet_ubuntu20-crio:
extends: .packet_pr
stage: deploy-part2
when: manual
packet_fedora35-crio:
packet_fedora37-crio:
extends: .packet_pr
stage: deploy-part2
when: manual
packet_ubuntu16-canal-ha:
stage: deploy-part2
extends: .packet_periodic
when: on_success
packet_ubuntu16-canal-sep:
stage: deploy-special
extends: .packet_pr
when: manual
packet_ubuntu16-flannel-ha:
packet_ubuntu20-flannel-ha:
stage: deploy-part2
extends: .packet_pr
when: manual
@@ -131,6 +112,21 @@ packet_debian11-docker:
extends: .packet_pr
when: on_success
packet_debian12-calico:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_debian12-docker:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_debian12-cilium:
stage: deploy-part2
extends: .packet_periodic
when: on_success
packet_centos7-calico-ha-once-localhost:
stage: deploy-part2
extends: .packet_pr
@@ -143,7 +139,7 @@ packet_centos7-calico-ha-once-localhost:
packet_almalinux8-kube-ovn:
stage: deploy-part2
extends: .packet_periodic
extends: .packet_pr
when: on_success
packet_almalinux8-calico:
@@ -173,15 +169,11 @@ packet_almalinux8-docker:
extends: .packet_pr
when: on_success
packet_fedora36-docker-weave:
packet_fedora38-docker-weave:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_opensuse-canal:
stage: deploy-part2
extends: .packet_periodic
when: on_success
allow_failure: true
packet_opensuse-docker-cilium:
stage: deploy-part2
@@ -190,22 +182,17 @@ packet_opensuse-docker-cilium:
# ### MANUAL JOBS
packet_ubuntu16-docker-weave-sep:
packet_ubuntu20-docker-weave-sep:
stage: deploy-part2
extends: .packet_pr
when: manual
packet_ubuntu18-cilium-sep:
packet_ubuntu20-cilium-sep:
stage: deploy-special
extends: .packet_pr
when: manual
packet_ubuntu18-flannel-ha:
stage: deploy-part2
extends: .packet_pr
when: manual
packet_ubuntu18-flannel-ha-once:
packet_ubuntu20-flannel-ha-once:
stage: deploy-part2
extends: .packet_pr
when: manual
@@ -216,7 +203,7 @@ packet_almalinux8-calico-ha-ebpf:
extends: .packet_pr
when: manual
packet_debian9-macvlan:
packet_debian10-macvlan:
stage: deploy-part2
extends: .packet_pr
when: manual
@@ -231,24 +218,19 @@ packet_centos7-multus-calico:
extends: .packet_pr
when: manual
packet_centos7-canal-ha:
stage: deploy-part2
extends: .packet_pr
when: manual
packet_fedora36-docker-calico:
packet_fedora38-docker-calico:
stage: deploy-part2
extends: .packet_periodic
when: on_success
variables:
RESET_CHECK: "true"
packet_fedora35-calico-selinux:
packet_fedora37-calico-selinux:
stage: deploy-part2
extends: .packet_periodic
when: on_success
packet_fedora35-calico-swap-selinux:
packet_fedora37-calico-swap-selinux:
stage: deploy-part2
extends: .packet_pr
when: manual
@@ -263,11 +245,21 @@ packet_almalinux8-calico-nodelocaldns-secondary:
extends: .packet_pr
when: manual
packet_fedora36-kube-ovn:
packet_fedora38-kube-ovn:
stage: deploy-part2
extends: .packet_periodic
when: on_success
packet_debian11-custom-cni:
stage: deploy-part2
extends: .packet_pr
when: manual
packet_debian11-kubelet-csr-approver:
stage: deploy-part2
extends: .packet_pr
when: manual
# ### PR JOBS PART3
# Long jobs (45min+)
@@ -318,18 +310,18 @@ packet_debian11-calico-upgrade-once:
variables:
UPGRADE_TEST: graceful
packet_ubuntu18-calico-ha-recover:
packet_ubuntu20-calico-ha-recover:
stage: deploy-part3
extends: .packet_periodic
when: on_success
variables:
RECOVER_CONTROL_PLANE_TEST: "true"
RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[2:],kube_control_plane[1:]"
RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[2:]:kube_control_plane[1:]"
packet_ubuntu18-calico-ha-recover-noquorum:
packet_ubuntu20-calico-ha-recover-noquorum:
stage: deploy-part3
extends: .packet_periodic
when: on_success
variables:
RECOVER_CONTROL_PLANE_TEST: "true"
RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[1:],kube_control_plane[1:]"
RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[1:]:kube_control_plane[1:]"

View File

@@ -60,11 +60,11 @@ tf-validate-openstack:
PROVIDER: openstack
CLUSTER: $CI_COMMIT_REF_NAME
tf-validate-metal:
tf-validate-equinix:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_VERSION
PROVIDER: metal
PROVIDER: equinix
CLUSTER: $CI_COMMIT_REF_NAME
tf-validate-aws:
@@ -80,6 +80,12 @@ tf-validate-exoscale:
TF_VERSION: $TERRAFORM_VERSION
PROVIDER: exoscale
tf-validate-hetzner:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_VERSION
PROVIDER: hetzner
tf-validate-vsphere:
extends: .terraform_validate
variables:
@@ -94,7 +100,13 @@ tf-validate-upcloud:
PROVIDER: upcloud
CLUSTER: $CI_COMMIT_REF_NAME
# tf-packet-ubuntu16-default:
tf-validate-nifcloud:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_VERSION
PROVIDER: nifcloud
# tf-packet-ubuntu20-default:
# extends: .terraform_apply
# variables:
# TF_VERSION: $TERRAFORM_VERSION
@@ -104,23 +116,9 @@ tf-validate-upcloud:
# TF_VAR_number_of_k8s_nodes: "1"
# TF_VAR_plan_k8s_masters: t1.small.x86
# TF_VAR_plan_k8s_nodes: t1.small.x86
# TF_VAR_facility: ewr1
# TF_VAR_metro: am
# TF_VAR_public_key_path: ""
# TF_VAR_operating_system: ubuntu_16_04
#
# tf-packet-ubuntu18-default:
# extends: .terraform_apply
# variables:
# TF_VERSION: $TERRAFORM_VERSION
# PROVIDER: packet
# CLUSTER: $CI_COMMIT_REF_NAME
# TF_VAR_number_of_k8s_masters: "1"
# TF_VAR_number_of_k8s_nodes: "1"
# TF_VAR_plan_k8s_masters: t1.small.x86
# TF_VAR_plan_k8s_nodes: t1.small.x86
# TF_VAR_facility: ams1
# TF_VAR_public_key_path: ""
# TF_VAR_operating_system: ubuntu_18_04
# TF_VAR_operating_system: ubuntu_20_04
.ovh_variables: &ovh_variables
OS_AUTH_URL: https://auth.cloud.ovh.net/v3
@@ -158,7 +156,7 @@ tf-elastx_cleanup:
script:
- ./scripts/openstack-cleanup/main.py
tf-elastx_ubuntu18-calico:
tf-elastx_ubuntu20-calico:
extends: .terraform_apply
stage: deploy-part3
when: on_success
@@ -188,7 +186,7 @@ tf-elastx_ubuntu18-calico:
TF_VAR_az_list_node: '["sto1"]'
TF_VAR_flavor_k8s_master: 3f73fc93-ec61-4808-88df-2580d94c1a9b # v1-standard-2
TF_VAR_flavor_k8s_node: 3f73fc93-ec61-4808-88df-2580d94c1a9b # v1-standard-2
TF_VAR_image: ubuntu-18.04-server-latest
TF_VAR_image: ubuntu-20.04-server-latest
TF_VAR_k8s_allowed_remote_ips: '["0.0.0.0/0"]'
# OVH voucher expired, commenting job until things are sorted out
@@ -205,7 +203,7 @@ tf-elastx_ubuntu18-calico:
# script:
# - ./scripts/openstack-cleanup/main.py
# tf-ovh_ubuntu18-calico:
# tf-ovh_ubuntu20-calico:
# extends: .terraform_apply
# when: on_success
# environment: ovh
@@ -231,5 +229,5 @@ tf-elastx_ubuntu18-calico:
# TF_VAR_network_name: "Ext-Net"
# TF_VAR_flavor_k8s_master: "defa64c3-bd46-43b4-858a-d93bbae0a229" # s1-8
# TF_VAR_flavor_k8s_node: "defa64c3-bd46-43b4-858a-d93bbae0a229" # s1-8
# TF_VAR_image: "Ubuntu 18.04"
# TF_VAR_image: "Ubuntu 20.04"
# TF_VAR_k8s_allowed_remote_ips: '["0.0.0.0/0"]'

View File

@@ -13,10 +13,6 @@
image: $PIPELINE_IMAGE
services: []
before_script:
- apt-get update && apt-get install -y python3-pip
- update-alternatives --install /usr/bin/python python /usr/bin/python3 10
- python -m pip uninstall -y ansible ansible-base ansible-core
- python -m pip install -r tests/requirements.txt
- ./tests/scripts/vagrant_clean.sh
script:
- ./tests/scripts/testcases_run.sh
@@ -24,17 +20,12 @@
- chronic ./tests/scripts/testcases_cleanup.sh
allow_failure: true
vagrant_ubuntu18-calico-dual-stack:
vagrant_ubuntu20-calico-dual-stack:
stage: deploy-part2
extends: .vagrant
when: on_success
vagrant_ubuntu18-flannel:
stage: deploy-part2
extends: .vagrant
when: on_success
vagrant_ubuntu18-weave-medium:
vagrant_ubuntu20-weave-medium:
stage: deploy-part2
extends: .vagrant
when: manual
@@ -45,18 +36,23 @@ vagrant_ubuntu20-flannel:
when: on_success
allow_failure: false
vagrant_ubuntu16-kube-router-sep:
vagrant_ubuntu20-flannel-collection:
stage: deploy-part2
extends: .vagrant
when: on_success
vagrant_ubuntu20-kube-router-sep:
stage: deploy-part2
extends: .vagrant
when: manual
# Service proxy test fails connectivity testing
vagrant_ubuntu16-kube-router-svc-proxy:
vagrant_ubuntu20-kube-router-svc-proxy:
stage: deploy-part2
extends: .vagrant
when: manual
vagrant_fedora35-kube-router:
vagrant_fedora37-kube-router:
stage: deploy-part2
extends: .vagrant
when: on_success

View File

@@ -1,5 +1,20 @@
---
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v3.4.0
hooks:
- id: check-added-large-files
- id: check-case-conflict
- id: check-executables-have-shebangs
- id: check-xml
- id: check-merge-conflict
- id: detect-private-key
- id: end-of-file-fixer
- id: forbid-new-submodules
- id: requirements-txt-fixer
- id: trailing-whitespace
- repo: https://github.com/adrienverge/yamllint.git
rev: v1.27.1
hooks:
@@ -13,6 +28,14 @@ repos:
args: [ -r, "~MD013,~MD029" ]
exclude: "^.git"
- repo: https://github.com/jumanjihouse/pre-commit-hooks
rev: 3.0.0
hooks:
- id: shellcheck
args: [ --severity, "error" ]
exclude: "^.git"
files: "\\.sh$"
- repo: local
hooks:
- id: ansible-lint

View File

@@ -3,6 +3,8 @@ extends: default
ignore: |
.git/
# Generated file
tests/files/custom_cni/cilium.yaml
rules:
braces:

CHANGELOG.md Normal file
View File

@@ -0,0 +1 @@
# See our release notes on [GitHub](https://github.com/kubernetes-sigs/kubespray/releases)

CNAME
View File

@@ -1 +1 @@
kubespray.io
kubespray.io

View File

@@ -12,6 +12,7 @@ To install development dependencies you can set up a python virtual env with the
virtualenv venv
source venv/bin/activate
pip install -r tests/requirements.txt
ansible-galaxy install -r tests/requirements.yml
```
#### Linting

View File

@@ -1,36 +1,45 @@
# Use imutable image tags rather than mutable tags (like ubuntu:20.04)
FROM ubuntu:focal-20220531
ARG ARCH=amd64
ARG TZ=Etc/UTC
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt update -y \
&& apt install -y \
curl python3 python3-pip sshpass \
&& rm -rf /var/lib/apt/lists/*
# Use imutable image tags rather than mutable tags (like ubuntu:22.04)
FROM ubuntu:jammy-20230308
# Some tools like yamllint need this
# Pip needs this as well at the moment to install ansible
# (and potentially other packages)
# See: https://github.com/pypa/pip/issues/10219
ENV LANG=C.UTF-8
ENV LANG=C.UTF-8 \
DEBIAN_FRONTEND=noninteractive \
PYTHONDONTWRITEBYTECODE=1
WORKDIR /kubespray
COPY *yml /kubespray/
COPY roles /kubespray/roles
COPY inventory /kubespray/inventory
COPY library /kubespray/library
COPY extra_playbooks /kubespray/extra_playbooks
COPY *.yml ./
COPY *.cfg ./
COPY roles ./roles
COPY contrib ./contrib
COPY inventory ./inventory
COPY library ./library
COPY extra_playbooks ./extra_playbooks
COPY playbooks ./playbooks
COPY plugins ./plugins
RUN python3 -m pip install --no-cache-dir \
ansible==5.7.1 \
ansible-core==2.12.5 \
cryptography==3.4.8 \
jinja2==2.11.3 \
netaddr==0.7.19 \
MarkupSafe==1.1.1 \
RUN apt update -q \
&& apt install -yq --no-install-recommends \
curl \
python3 \
python3-pip \
sshpass \
vim \
rsync \
openssh-client \
&& pip install --no-compile --no-cache-dir \
ansible==7.6.0 \
ansible-core==2.14.6 \
cryptography==41.0.1 \
jinja2==3.1.2 \
netaddr==0.8.0 \
jmespath==1.0.1 \
MarkupSafe==2.1.3 \
ruamel.yaml==0.17.21 \
passlib==1.7.4 \
&& KUBE_VERSION=$(sed -n 's/^kube_version: //p' roles/kubespray-defaults/defaults/main.yaml) \
&& curl -LO https://storage.googleapis.com/kubernetes-release/release/$KUBE_VERSION/bin/linux/$ARCH/kubectl \
&& chmod a+x kubectl \
&& mv kubectl /usr/local/bin/kubectl
&& curl -L https://dl.k8s.io/release/$KUBE_VERSION/bin/linux/$(dpkg --print-architecture)/kubectl -o /usr/local/bin/kubectl \
&& echo $(curl -L https://dl.k8s.io/release/$KUBE_VERSION/bin/linux/$(dpkg --print-architecture)/kubectl.sha256) /usr/local/bin/kubectl | sha256sum --check \
&& chmod a+x /usr/local/bin/kubectl \
&& rm -rf /var/lib/apt/lists/* /var/log/* \
&& find /usr -type d -name '*__pycache__' -prune -exec rm -rf {} \;

View File

@@ -187,7 +187,7 @@
identification within third-party archives.
Copyright 2016 Kubespray
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

OWNERS
View File

@@ -5,4 +5,4 @@ approvers:
reviewers:
- kubespray-reviewers
emeritus_approvers:
- kubespray-emeritus_approvers
- kubespray-emeritus_approvers

View File

@@ -10,6 +10,7 @@ aliases:
- cristicalin
- liupeng0518
- yankay
- mzaian
kubespray-reviewers:
- holmsten
- bozzo
@@ -21,6 +22,8 @@ aliases:
- yankay
- cyclinder
- mzaian
- mrfreezeex
- erikjiang
kubespray-emeritus_approvers:
- riverzhang
- atoms

View File

@@ -34,6 +34,13 @@ CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inv
cat inventory/mycluster/group_vars/all/all.yml
cat inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
# Clean up old Kubernetes cluster with Ansible Playbook - run the playbook as root
# The option `--become` is required, as for example cleaning up SSL keys in /etc/,
# uninstalling old packages and interacting with various systemd daemons.
# Without --become the playbook will fail to run!
# And be mind it will remove the current kubernetes cluster (if it's running)!
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root reset.yml
# Deploy Kubespray with Ansible Playbook - run the playbook as root
# The option `--become` is required, as for example writing SSL keys in /etc/,
# installing packages and interacting with various systemd daemons.
@@ -45,7 +52,7 @@ Note: When Ansible is already installed via system packages on the control node,
Python packages installed via `sudo pip install -r requirements.txt` will go to
a different directory tree (e.g. `/usr/local/lib/python2.7/dist-packages` on
Ubuntu) from Ansible's (e.g. `/usr/lib/python2.7/dist-packages/ansible` still on
buntu). As a consequence, the `ansible-playbook` command will fail with:
Ubuntu). As a consequence, the `ansible-playbook` command will fail with:
```raw
ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
@@ -68,15 +75,19 @@ You will then need to use [bind mounts](https://docs.docker.com/storage/bind-mou
to access the inventory and SSH key in the container, like this:
```ShellSession
git checkout v2.20.0
docker pull quay.io/kubespray/kubespray:v2.20.0
git checkout v2.22.1
docker pull quay.io/kubespray/kubespray:v2.22.1
docker run --rm -it --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \
--mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
quay.io/kubespray/kubespray:v2.20.0 bash
quay.io/kubespray/kubespray:v2.22.1 bash
# Inside the container you may now run the kubespray playbooks:
ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa cluster.yml
```
#### Collection
See [here](docs/ansible_collection.md) if you wish to use this repository as an Ansible collection
### Vagrant
For Vagrant we need to install Python dependencies for provisioning tasks.
@@ -131,10 +142,10 @@ vagrant up
## Supported Linux Distributions
- **Flatcar Container Linux by Kinvolk**
- **Debian** Bullseye, Buster, Jessie, Stretch
- **Ubuntu** 16.04, 18.04, 20.04, 22.04
- **Debian** Bookworm, Bullseye, Buster
- **Ubuntu** 20.04, 22.04
- **CentOS/RHEL** 7, [8, 9](docs/centos.md#centos-8)
- **Fedora** 35, 36
- **Fedora** 37, 38
- **Fedora CoreOS** (see [fcos Note](docs/fcos.md))
- **openSUSE** Leap 15.x/Tumbleweed
- **Oracle Linux** 7, [8, 9](docs/centos.md#centos-8)
@@ -150,30 +161,29 @@ Note: Upstart/SysV init based OS types are not supported.
## Supported Components
- Core
- [kubernetes](https://github.com/kubernetes/kubernetes) v1.25.6
- [etcd](https://github.com/etcd-io/etcd) v3.5.6
- [kubernetes](https://github.com/kubernetes/kubernetes) v1.27.5
- [etcd](https://github.com/etcd-io/etcd) v3.5.7
- [docker](https://www.docker.com/) v20.10 (see note)
- [containerd](https://containerd.io/) v1.6.15
- [cri-o](http://cri-o.io/) v1.24 (experimental: see [CRI-O Note](docs/cri-o.md). Only on fedora, ubuntu and centos based OS)
- [containerd](https://containerd.io/) v1.7.5
- [cri-o](http://cri-o.io/) v1.27 (experimental: see [CRI-O Note](docs/cri-o.md). Only on fedora, ubuntu and centos based OS)
- Network Plugin
- [cni-plugins](https://github.com/containernetworking/plugins) v1.2.0
- [calico](https://github.com/projectcalico/calico) v3.24.5
- [canal](https://github.com/projectcalico/canal) (given calico/flannel versions)
- [cilium](https://github.com/cilium/cilium) v1.12.1
- [flannel](https://github.com/flannel-io/flannel) v0.20.2
- [kube-ovn](https://github.com/alauda/kube-ovn) v1.10.7
- [calico](https://github.com/projectcalico/calico) v3.25.2
- [cilium](https://github.com/cilium/cilium) v1.13.4
- [flannel](https://github.com/flannel-io/flannel) v0.22.0
- [kube-ovn](https://github.com/alauda/kube-ovn) v1.11.5
- [kube-router](https://github.com/cloudnativelabs/kube-router) v1.5.1
- [multus](https://github.com/intel/multus-cni) v3.8
- [multus](https://github.com/k8snetworkplumbingwg/multus-cni) v3.8
- [weave](https://github.com/weaveworks/weave) v2.8.1
- [kube-vip](https://github.com/kube-vip/kube-vip) v0.5.5
- [kube-vip](https://github.com/kube-vip/kube-vip) v0.5.12
- Application
- [cert-manager](https://github.com/jetstack/cert-manager) v1.11.0
- [coredns](https://github.com/coredns/coredns) v1.9.3
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v1.5.1
- [krew](https://github.com/kubernetes-sigs/krew) v0.4.3
- [argocd](https://argoproj.github.io/) v2.5.7
- [helm](https://helm.sh/) v3.10.3
- [metallb](https://metallb.universe.tf/) v0.12.1
- [cert-manager](https://github.com/jetstack/cert-manager) v1.11.1
- [coredns](https://github.com/coredns/coredns) v1.10.1
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v1.8.1
- [krew](https://github.com/kubernetes-sigs/krew) v0.4.4
- [argocd](https://argoproj.github.io/) v2.8.0
- [helm](https://helm.sh/) v3.12.3
- [metallb](https://metallb.universe.tf/) v0.13.9
- [registry](https://github.com/distribution/distribution) v2.8.1
- Storage Plugin
- [cephfs-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.0-k8s1.11
@@ -181,25 +191,25 @@ Note: Upstart/SysV init based OS types are not supported.
- [aws-ebs-csi-plugin](https://github.com/kubernetes-sigs/aws-ebs-csi-driver) v0.5.0
- [azure-csi-plugin](https://github.com/kubernetes-sigs/azuredisk-csi-driver) v1.10.0
- [cinder-csi-plugin](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md) v1.22.0
- [gcp-pd-csi-plugin](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver) v1.4.0
- [local-path-provisioner](https://github.com/rancher/local-path-provisioner) v0.0.22
- [gcp-pd-csi-plugin](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver) v1.9.2
- [local-path-provisioner](https://github.com/rancher/local-path-provisioner) v0.0.24
- [local-volume-provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) v2.5.0
## Container Runtime Notes
- Supported Docker versions are 18.09, 19.03 and 20.10. The *recommended* Docker version is 20.10. `Kubelet` might break on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster look into e.g. the YUM ``versionlock`` plugin or ``apt pin``).
- Supported Docker versions are 18.09, 19.03, 20.10, 23.0 and 24.0. The *recommended* Docker version is 20.10 (except on Debian bookworm which without supporting for 20.10 and below any more). `Kubelet` might break on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster look into e.g. the YUM ``versionlock`` plugin or ``apt pin``).
- The cri-o version should be aligned with the respective kubernetes version (i.e. kube_version=1.20.x, crio_version=1.20)
## Requirements
- **Minimum required version of Kubernetes is v1.23**
- **Ansible v2.11+, Jinja 2.11+ and python-netaddr is installed on the machine that will run Ansible commands**
- **Minimum required version of Kubernetes is v1.25**
- **Ansible v2.14+, Jinja 2.11+ and python-netaddr is installed on the machine that will run Ansible commands**
- The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](docs/offline-environment.md))
- The target servers are configured to allow **IPv4 forwarding**.
- If using IPv6 for pods and services, the target servers are configured to allow **IPv6 forwarding**.
- The **firewalls are not managed**, you'll need to implement your own rules the way you used to.
in order to avoid any issue during deployment you should disable your firewall.
- If kubespray is ran from non-root user account, correct privilege escalation method
- If kubespray is run from non-root user account, correct privilege escalation method
should be configured in the target servers. Then the `ansible_become` flag
or command parameters `--become or -b` should be specified.
@@ -217,13 +227,11 @@ You can choose among ten network plugins. (default: `calico`, except Vagrant use
- [flannel](docs/flannel.md): gre/vxlan (layer 2) networking.
- [Calico](https://docs.projectcalico.org/latest/introduction/) is a networking and network policy provider. Calico supports a flexible set of networking options
- [Calico](https://docs.tigera.io/calico/latest/about/) is a networking and network policy provider. Calico supports a flexible set of networking options
designed to give you the most efficient networking across a range of situations, including non-overlay
and overlay networks, with or without BGP. Calico uses the same engine to enforce network policy for hosts,
pods, and (if using Istio and Envoy) applications at the service mesh layer.
- [canal](https://github.com/projectcalico/canal): a composition of calico and flannel plugins.
- [cilium](http://docs.cilium.io/en/latest/): layer 3/4 networking (as well as layer 7 to protect and secure application protocols), supports dynamic insertion of BPF bytecode into the Linux kernel to implement security services, networking and visibility logic.
- [weave](docs/weave.md): Weave is a lightweight container overlay network that doesn't require an external K/V database cluster.
@@ -240,6 +248,9 @@ You can choose among ten network plugins. (default: `calico`, except Vagrant use
- [multus](docs/multus.md): Multus is a meta CNI plugin that provides multiple network interface support to pods. For each interface Multus delegates CNI calls to secondary CNI plugins such as Calico, macvlan, etc.
- [custom_cni](roles/network-plugin/custom_cni/) : You can specify some manifests that will be applied to the clusters to bring you own CNI and use non-supported ones by Kubespray.
See `tests/files/custom_cni/README.md` and `tests/files/custom_cni/values.yaml`for an example with a CNI provided by a Helm Chart.
The network plugin to use is defined by the variable `kube_network_plugin`. There is also an
option to leverage built-in cloud provider networking instead.
See also [Network checker](docs/netcheck.md).
@@ -265,7 +276,7 @@ See also [Network checker](docs/netcheck.md).
## CI Tests
[![Build graphs](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/badges/master/pipeline.svg)](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/pipelines)
[![Build graphs](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/badges/master/pipeline.svg)](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/-/pipelines)
CI/end-to-end tests sponsored by: [CNCF](https://cncf.io), [Equinix Metal](https://metal.equinix.com/), [OVHcloud](https://www.ovhcloud.com/), [ELASTX](https://elastx.se/).

View File

@@ -60,7 +60,7 @@ release-notes --start-sha <The start commit-id> --end-sha <The end commit-id> --
```
If the release note file(/tmp/kubespray-release-note) contains "### Uncategorized" pull requests, those pull requests don't have a valid kind label(`kind/feature`, etc.).
It is necessary to put a valid label on each pull request and run the above release-notes command again to get a better release note)
It is necessary to put a valid label on each pull request and run the above release-notes command again to get a better release note
## Container image creation

Vagrantfile vendored
View File

@@ -19,9 +19,8 @@ SUPPORTED_OS = {
"flatcar-beta" => {box: "flatcar-beta", user: "core", box_url: FLATCAR_URL_TEMPLATE % ["beta"]},
"flatcar-alpha" => {box: "flatcar-alpha", user: "core", box_url: FLATCAR_URL_TEMPLATE % ["alpha"]},
"flatcar-edge" => {box: "flatcar-edge", user: "core", box_url: FLATCAR_URL_TEMPLATE % ["edge"]},
"ubuntu1604" => {box: "generic/ubuntu1604", user: "vagrant"},
"ubuntu1804" => {box: "generic/ubuntu1804", user: "vagrant"},
"ubuntu2004" => {box: "generic/ubuntu2004", user: "vagrant"},
"ubuntu2204" => {box: "generic/ubuntu2204", user: "vagrant"},
"centos" => {box: "centos/7", user: "vagrant"},
"centos-bento" => {box: "bento/centos-7.6", user: "vagrant"},
"centos8" => {box: "centos/8", user: "vagrant"},
@@ -29,8 +28,8 @@ SUPPORTED_OS = {
"almalinux8" => {box: "almalinux/8", user: "vagrant"},
"almalinux8-bento" => {box: "bento/almalinux-8", user: "vagrant"},
"rockylinux8" => {box: "generic/rocky8", user: "vagrant"},
"fedora35" => {box: "fedora/35-cloud-base", user: "vagrant"},
"fedora36" => {box: "fedora/36-cloud-base", user: "vagrant"},
"fedora37" => {box: "fedora/37-cloud-base", user: "vagrant"},
"fedora38" => {box: "fedora/38-cloud-base", user: "vagrant"},
"opensuse" => {box: "opensuse/Leap-15.4.x86_64", user: "vagrant"},
"opensuse-tumbleweed" => {box: "opensuse/Tumbleweed.x86_64", user: "vagrant"},
"oraclelinux" => {box: "generic/oracle7", user: "vagrant"},
@@ -53,16 +52,16 @@ $shared_folders ||= {}
$forwarded_ports ||= {}
$subnet ||= "172.18.8"
$subnet_ipv6 ||= "fd3c:b398:0698:0756"
$os ||= "ubuntu1804"
$os ||= "ubuntu2004"
$network_plugin ||= "flannel"
# Setting multi_networking to true will install Multus: https://github.com/intel/multus-cni
# Setting multi_networking to true will install Multus: https://github.com/k8snetworkplumbingwg/multus-cni
$multi_networking ||= "False"
$download_run_once ||= "True"
$download_force_cache ||= "False"
# The first three nodes are etcd servers
$etcd_instances ||= $num_instances
$etcd_instances ||= [$num_instances, 3].min
# The first two nodes are kube masters
$kube_master_instances ||= $num_instances == 1 ? $num_instances : ($num_instances - 1)
$kube_master_instances ||= [$num_instances, 2].min
# All nodes are kube nodes
$kube_node_instances ||= $num_instances
# The following only works when using the libvirt provider
@@ -82,6 +81,13 @@ $playbook ||= "cluster.yml"
host_vars = {}
# throw error if os is not supported
if ! SUPPORTED_OS.key?($os)
puts "Unsupported OS: #{$os}"
puts "Supported OS are: #{SUPPORTED_OS.keys.join(', ')}"
exit 1
end
$box = SUPPORTED_OS[$os][:box]
# if $inventory is not set, try to use example
$inventory = "inventory/sample" if ! $inventory
@@ -201,7 +207,8 @@ Vagrant.configure("2") do |config|
end
ip = "#{$subnet}.#{i+100}"
node.vm.network :private_network, ip: ip,
node.vm.network :private_network,
:ip => ip,
:libvirt__guest_ipv6 => 'yes',
:libvirt__ipv6_address => "#{$subnet_ipv6}::#{i+100}",
:libvirt__ipv6_prefix => "64",
@@ -211,14 +218,22 @@ Vagrant.configure("2") do |config|
# Disable swap for each vm
node.vm.provision "shell", inline: "swapoff -a"
# ubuntu1804 and ubuntu2004 have IPv6 explicitly disabled. This undoes that.
if ["ubuntu1804", "ubuntu2004"].include? $os
# ubuntu2004 and ubuntu2204 have IPv6 explicitly disabled. This undoes that.
if ["ubuntu2004", "ubuntu2204"].include? $os
node.vm.provision "shell", inline: "rm -f /etc/modprobe.d/local.conf"
node.vm.provision "shell", inline: "sed -i '/net.ipv6.conf.all.disable_ipv6/d' /etc/sysctl.d/99-sysctl.conf /etc/sysctl.conf"
end
# Hack for fedora37/38 to get the IP address of the second interface
if ["fedora37", "fedora38"].include? $os
config.vm.provision "shell", inline: <<-SHELL
nmcli conn modify 'Wired connection 2' ipv4.addresses $(cat /etc/sysconfig/network-scripts/ifcfg-eth1 | grep IPADDR | cut -d "=" -f2)
nmcli conn modify 'Wired connection 2' ipv4.method manual
service NetworkManager restart
SHELL
end
# Disable firewalld on oraclelinux/redhat vms
if ["oraclelinux","oraclelinux8","rhel7","rhel8"].include? $os
if ["oraclelinux","oraclelinux8","rhel7","rhel8","rockylinux8"].include? $os
node.vm.provision "shell", inline: "systemctl stop firewalld; systemctl disable firewalld"
end

View File

@@ -1,131 +1,3 @@
---
- name: Check ansible version
import_playbook: ansible_version.yml
- name: Ensure compatibility with old groups
import_playbook: legacy_groups.yml
- hosts: bastion[0]
gather_facts: False
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: bastion-ssh-config, tags: ["localhost", "bastion"] }
- hosts: k8s_cluster:etcd
strategy: linear
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
gather_facts: false
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: bootstrap-os, tags: bootstrap-os}
- name: Gather facts
tags: always
import_playbook: facts.yml
- hosts: k8s_cluster:etcd
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: kubernetes/preinstall, tags: preinstall }
- { role: "container-engine", tags: "container-engine", when: deploy_container_engine }
- { role: download, tags: download, when: "not skip_downloads" }
- hosts: etcd:kube_control_plane
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- role: etcd
tags: etcd
vars:
etcd_cluster_setup: true
etcd_events_cluster_setup: "{{ etcd_events_cluster_enabled }}"
when: etcd_deployment_type != "kubeadm"
- hosts: k8s_cluster
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- role: etcd
tags: etcd
vars:
etcd_cluster_setup: false
etcd_events_cluster_setup: false
when:
- etcd_deployment_type != "kubeadm"
- kube_network_plugin in ["calico", "flannel", "canal", "cilium"] or cilium_deploy_additionally | default(false) | bool
- kube_network_plugin != "calico" or calico_datastore == "etcd"
- hosts: k8s_cluster
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: kubernetes/node, tags: node }
- hosts: kube_control_plane
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: kubernetes/control-plane, tags: master }
- { role: kubernetes/client, tags: client }
- { role: kubernetes-apps/cluster_roles, tags: cluster-roles }
- hosts: k8s_cluster
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: kubernetes/kubeadm, tags: kubeadm}
- { role: kubernetes/node-label, tags: node-label }
- { role: network_plugin, tags: network }
- hosts: calico_rr
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: network_plugin/calico/rr, tags: ['network', 'calico_rr'] }
- hosts: kube_control_plane[0]
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: win_nodes/kubernetes_patch, tags: ["master", "win_nodes"] }
- hosts: kube_control_plane
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: kubernetes-apps/external_cloud_controller, tags: external-cloud-controller }
- { role: kubernetes-apps/network_plugin, tags: network }
- { role: kubernetes-apps/policy_controller, tags: policy-controller }
- { role: kubernetes-apps/ingress_controller, tags: ingress-controller }
- { role: kubernetes-apps/external_provisioner, tags: external-provisioner }
- { role: kubernetes-apps, tags: apps }
- name: Apply resolv.conf changes now that cluster DNS is up
hosts: k8s_cluster
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: kubernetes/preinstall, when: "dns_mode != 'none' and resolvconf_mode == 'host_resolvconf'", tags: resolvconf, dns_late: true }
- name: Install Kubernetes
ansible.builtin.import_playbook: playbooks/cluster.yml

View File

@@ -39,7 +39,7 @@ class SearchEC2Tags(object):
hosts[group] = []
tag_key = "kubespray-role"
tag_value = ["*"+group+"*"]
region = os.environ['REGION']
region = os.environ['AWS_REGION']
ec2 = boto3.resource('ec2', region)
filters = [{'Name': 'tag:'+tag_key, 'Values': tag_value}, {'Name': 'instance-state-name', 'Values': ['running']}]
@@ -67,6 +67,11 @@ class SearchEC2Tags(object):
if node_labels_tag:
ansible_host['node_labels'] = dict([ label.strip().split('=') for label in node_labels_tag[0]['Value'].split(',') ])
##Set when instance actually has node_taints
node_taints_tag = list(filter(lambda t: t['Key'] == 'kubespray-node-taints', instance.tags))
if node_taints_tag:
ansible_host['node_taints'] = list([ taint.strip() for taint in node_taints_tag[0]['Value'].split(',') ])
hosts[group].append(dns_name)
hosts['_meta']['hostvars'][dns_name] = ansible_host

View File

@@ -1 +1 @@
boto3 # Apache-2.0
boto3 # Apache-2.0

View File

@@ -1,2 +1,2 @@
.generated
/inventory
/inventory

View File

@@ -1,5 +1,6 @@
---
- hosts: localhost
- name: Generate Azure inventory
hosts: localhost
gather_facts: False
roles:
- generate-inventory

View File

@@ -1,5 +1,6 @@
---
- hosts: localhost
- name: Generate Azure inventory
hosts: localhost
gather_facts: False
roles:
- generate-inventory_2

View File

@@ -1,5 +1,6 @@
---
- hosts: localhost
- name: Generate Azure templates
hosts: localhost
gather_facts: False
roles:
- generate-templates

View File

@@ -1,6 +1,6 @@
---
- name: Query Azure VMs # noqa 301
- name: Query Azure VMs
command: azure vm list-ip-address --json {{ azure_resource_group }}
register: vm_list_cmd

View File

@@ -1,14 +1,14 @@
---
- name: Query Azure VMs IPs # noqa 301
- name: Query Azure VMs IPs
command: az vm list-ip-addresses -o json --resource-group {{ azure_resource_group }}
register: vm_ip_list_cmd
- name: Query Azure VMs Roles # noqa 301
- name: Query Azure VMs Roles
command: az vm list -o json --resource-group {{ azure_resource_group }}
register: vm_list_cmd
- name: Query Azure Load Balancer Public IP # noqa 301
- name: Query Azure Load Balancer Public IP
command: az network public-ip show -o json -g {{ azure_resource_group }} -n kubernetes-api-pubip
register: lb_pubip_cmd

View File

@@ -31,4 +31,3 @@
[k8s_cluster:children]
kube_node
kube_control_plane

View File

@@ -24,14 +24,14 @@ bastionIPAddressName: bastion-pubip
disablePasswordAuthentication: true
sshKeyPath: "/home/{{admin_username}}/.ssh/authorized_keys"
sshKeyPath: "/home/{{ admin_username }}/.ssh/authorized_keys"
imageReference:
publisher: "OpenLogic"
offer: "CentOS"
sku: "7.5"
version: "latest"
imageReferenceJson: "{{imageReference|to_json}}"
imageReferenceJson: "{{ imageReference | to_json }}"
storageAccountName: "sa{{nameSuffix | replace('-', '')}}"
storageAccountName: "sa{{ nameSuffix | replace('-', '') }}"
storageAccountType: "{{ azure_storage_account_type | default('Standard_LRS') }}"

View File

@@ -103,4 +103,4 @@
}
{% endif %}
]
}
}

View File

@@ -5,4 +5,4 @@
"variables": {},
"resources": [],
"outputs": {}
}
}

View File

@@ -16,4 +16,4 @@
}
}
]
}
}

View File

@@ -1,9 +1,11 @@
---
- hosts: localhost
- name: Create nodes as docker containers
hosts: localhost
gather_facts: False
roles:
- { role: dind-host }
- hosts: containers
- name: Customize each node containers
hosts: containers
roles:
- { role: dind-cluster }

View File

@@ -1,9 +1,9 @@
---
- name: set_fact distro_setup
- name: Set_fact distro_setup
set_fact:
distro_setup: "{{ distro_settings[node_distro] }}"
- name: set_fact other distro settings
- name: Set_fact other distro settings
set_fact:
distro_user: "{{ distro_setup['user'] }}"
distro_ssh_service: "{{ distro_setup['ssh_service'] }}"
@@ -43,7 +43,7 @@
package:
name: "{{ item }}"
state: present
with_items: "{{ distro_extra_packages }} + [ 'rsyslog', 'openssh-server' ]"
with_items: "{{ distro_extra_packages + ['rsyslog', 'openssh-server'] }}"
- name: Start needed services
service:
@@ -66,8 +66,8 @@
dest: "/etc/sudoers.d/{{ distro_user }}"
mode: 0640
- name: Add my pubkey to "{{ distro_user }}" user authorized keys
authorized_key:
- name: "Add my pubkey to {{ distro_user }} user authorized keys"
ansible.posix.authorized_key:
user: "{{ distro_user }}"
state: present
key: "{{ lookup('file', lookup('env','HOME') + '/.ssh/id_rsa.pub') }}"
key: "{{ lookup('file', lookup('env', 'HOME') + '/.ssh/id_rsa.pub') }}"

View File

@@ -1,9 +1,9 @@
---
- name: set_fact distro_setup
- name: Set_fact distro_setup
set_fact:
distro_setup: "{{ distro_settings[node_distro] }}"
- name: set_fact other distro settings
- name: Set_fact other distro settings
set_fact:
distro_image: "{{ distro_setup['image'] }}"
distro_init: "{{ distro_setup['init'] }}"
@@ -13,7 +13,7 @@
distro_agetty_svc: "{{ distro_setup['agetty_svc'] }}"
- name: Create dind node containers from "containers" inventory section
docker_container:
community.docker.docker_container:
image: "{{ distro_image }}"
name: "{{ item }}"
state: started
@@ -53,7 +53,7 @@
{{ distro_raw_setup_done }} && echo SKIPPED && exit 0
until [ "$(readlink /proc/1/exe)" = "{{ distro_pid1_exe }}" ] ; do sleep 1; done
{{ distro_raw_setup }}
delegate_to: "{{ item._ansible_item_label|default(item.item) }}"
delegate_to: "{{ item._ansible_item_label | default(item.item) }}"
with_items: "{{ containers.results }}"
register: result
changed_when: result.stdout.find("SKIPPED") < 0
@@ -63,26 +63,25 @@
until test -S /var/run/dbus/system_bus_socket; do sleep 1; done
systemctl disable {{ distro_agetty_svc }}
systemctl stop {{ distro_agetty_svc }}
delegate_to: "{{ item._ansible_item_label|default(item.item) }}"
delegate_to: "{{ item._ansible_item_label | default(item.item) }}"
with_items: "{{ containers.results }}"
changed_when: false
# Running systemd-machine-id-setup doesn't create a unique id for each node container on Debian,
# handle manually
- name: Re-create unique machine-id (as we may just get what comes in the docker image), needed by some CNIs for mac address seeding (notably weave) # noqa 301
- name: Re-create unique machine-id (as we may just get what comes in the docker image), needed by some CNIs for mac address seeding (notably weave)
raw: |
echo {{ item | hash('sha1') }} > /etc/machine-id.new
mv -b /etc/machine-id.new /etc/machine-id
cmp /etc/machine-id /etc/machine-id~ || true
systemctl daemon-reload
delegate_to: "{{ item._ansible_item_label|default(item.item) }}"
delegate_to: "{{ item._ansible_item_label | default(item.item) }}"
with_items: "{{ containers.results }}"
- name: Early hack image install to adapt for DIND
# noqa 302 - this task uses the raw module intentionally
raw: |
rm -fv /usr/bin/udevadm /usr/sbin/udevadm
delegate_to: "{{ item._ansible_item_label|default(item.item) }}"
delegate_to: "{{ item._ansible_item_label | default(item.item) }}"
with_items: "{{ containers.results }}"
register: result
changed_when: result.stdout.find("removed") >= 0

View File

@@ -1,3 +1,3 @@
configparser>=3.3.0
ruamel.yaml>=0.15.88
ipaddress
ruamel.yaml>=0.15.88

View File

@@ -1,3 +1,3 @@
hacking>=0.10.2
pytest>=2.8.0
mock>=1.3.0
pytest>=2.8.0

View File

@@ -1,21 +1,27 @@
[tox]
minversion = 1.6
skipsdist = True
envlist = pep8, py33
envlist = pep8
[testenv]
whitelist_externals = py.test
allowlist_externals = py.test
usedevelop = True
deps =
-r{toxinidir}/requirements.txt
-r{toxinidir}/test-requirements.txt
setenv = VIRTUAL_ENV={envdir}
passenv = http_proxy HTTP_PROXY https_proxy HTTPS_PROXY no_proxy NO_PROXY
passenv =
http_proxy
HTTP_PROXY
https_proxy
HTTPS_PROXY
no_proxy
NO_PROXY
commands = pytest -vv #{posargs:./tests}
[testenv:pep8]
usedevelop = False
whitelist_externals = bash
allowlist_externals = bash
commands =
bash -c "find {toxinidir}/* -type f -name '*.py' -print0 | xargs -0 flake8"

View File

@@ -1,3 +1,2 @@
#k8s_deployment_user: kubespray
#k8s_deployment_user_pkey_path: /tmp/ssh_rsa

View File

@@ -1,8 +1,9 @@
---
- hosts: localhost
- name: Prepare Hypervisor to later install kubespray VMs
hosts: localhost
gather_facts: False
become: yes
vars:
- bootstrap_os: none
bootstrap_os: none
roles:
- kvm-setup
- { role: kvm-setup }

View File

@@ -22,9 +22,9 @@
- ntp
when: ansible_os_family == "Debian"
# Create deployment user if required
- include: user.yml
- name: Create deployment user if required
include_tasks: user.yml
when: k8s_deployment_user is defined
# Set proper sysctl values
- include: sysctl.yml
- name: Set proper sysctl values
import_tasks: sysctl.yml

View File

@@ -1,6 +1,6 @@
---
- name: Load br_netfilter module
modprobe:
community.general.modprobe:
name: br_netfilter
state: present
register: br_netfilter
@@ -25,7 +25,7 @@
- name: Enable net.ipv4.ip_forward in sysctl
sysctl:
ansible.posix.sysctl:
name: net.ipv4.ip_forward
value: 1
sysctl_file: "{{ sysctl_file_path }}"
@@ -33,7 +33,7 @@
reload: yes
- name: Set bridge-nf-call-{arptables,iptables} to 0
sysctl:
ansible.posix.sysctl:
name: "{{ item }}"
state: present
value: 0

View File

@@ -1,8 +1,9 @@
---
- name: Check ansible version
import_playbook: ansible_version.yml
import_playbook: kubernetes_sigs.kubespray.ansible_version
- hosts: localhost
- name: Install mitogen
hosts: localhost
strategy: linear
vars:
mitogen_version: 0.3.2
@@ -19,24 +20,25 @@
- "{{ playbook_dir }}/plugins/mitogen"
- "{{ playbook_dir }}/dist"
- name: download mitogen release
- name: Download mitogen release
get_url:
url: "{{ mitogen_url }}"
dest: "{{ playbook_dir }}/dist/mitogen_{{ mitogen_version }}.tar.gz"
validate_certs: true
mode: 0644
- name: extract archive
- name: Extract archive
unarchive:
src: "{{ playbook_dir }}/dist/mitogen_{{ mitogen_version }}.tar.gz"
dest: "{{ playbook_dir }}/dist/"
- name: copy plugin
synchronize:
- name: Copy plugin
ansible.posix.synchronize:
src: "{{ playbook_dir }}/dist/mitogen-{{ mitogen_version }}/"
dest: "{{ playbook_dir }}/plugins/mitogen"
- name: add strategy to ansible.cfg
ini_file:
- name: Add strategy to ansible.cfg
community.general.ini_file:
path: ansible.cfg
mode: 0644
section: "{{ item.section | d('defaults') }}"

View File

@@ -1,24 +1,29 @@
---
- hosts: gfs-cluster
- name: Bootstrap hosts
hosts: gfs-cluster
gather_facts: false
vars:
ansible_ssh_pipelining: false
roles:
- { role: bootstrap-os, tags: bootstrap-os}
- hosts: all
- name: Gather facts
hosts: all
gather_facts: true
- hosts: gfs-cluster
- name: Install glusterfs server
hosts: gfs-cluster
vars:
ansible_ssh_pipelining: true
roles:
- { role: glusterfs/server }
- hosts: k8s_cluster
- name: Install glusterfs servers
hosts: k8s_cluster
roles:
- { role: glusterfs/client }
- hosts: kube_control_plane[0]
- name: Configure Kubernetes to use glusterfs
hosts: kube_control_plane[0]
roles:
- { role: kubernetes-pv }

View File

@@ -41,4 +41,3 @@
# [network-storage:children]
# gfs-cluster

View File

@@ -6,12 +6,12 @@ galaxy_info:
description: GlusterFS installation for Linux.
company: "Midwestern Mac, LLC"
license: "license (BSD, MIT)"
min_ansible_version: 2.0
min_ansible_version: "2.0"
platforms:
- name: EL
versions:
- 6
- 7
- "6"
- "7"
- name: Ubuntu
versions:
- precise

View File

@@ -3,14 +3,19 @@
# hyperkube and needs to be installed as part of the system.
# Setup/install tasks.
- include: setup-RedHat.yml
- name: Setup RedHat distros for glusterfs
include_tasks: setup-RedHat.yml
when: ansible_os_family == 'RedHat' and groups['gfs-cluster'] is defined
- include: setup-Debian.yml
- name: Setup Debian distros for glusterfs
include_tasks: setup-Debian.yml
when: ansible_os_family == 'Debian' and groups['gfs-cluster'] is defined
- name: Ensure Gluster mount directories exist.
file: "path={{ item }} state=directory mode=0775"
file:
path: "{{ item }}"
state: directory
mode: 0775
with_items:
- "{{ gluster_mount_dir }}"
when: ansible_os_family in ["Debian","RedHat"] and groups['gfs-cluster'] is defined

View File

@@ -7,7 +7,7 @@
register: glusterfs_ppa_added
when: glusterfs_ppa_use
- name: Ensure GlusterFS client will reinstall if the PPA was just added. # noqa 503
- name: Ensure GlusterFS client will reinstall if the PPA was just added. # noqa no-handler
apt:
name: "{{ item }}"
state: absent

View File

@@ -1,10 +1,14 @@
---
- name: Install Prerequisites
package: name={{ item }} state=present
package:
name: "{{ item }}"
state: present
with_items:
- "centos-release-gluster{{ glusterfs_default_release }}"
- name: Install Packages
package: name={{ item }} state=present
package:
name: "{{ item }}"
state: present
with_items:
- glusterfs-client

View File

@@ -6,12 +6,12 @@ galaxy_info:
description: GlusterFS installation for Linux.
company: "Midwestern Mac, LLC"
license: "license (BSD, MIT)"
min_ansible_version: 2.0
min_ansible_version: "2.0"
platforms:
- name: EL
versions:
- 6
- 7
- "6"
- "7"
- name: Ubuntu
versions:
- precise

View File

@@ -4,78 +4,97 @@
include_vars: "{{ ansible_os_family }}.yml"
# Install xfs package
- name: install xfs Debian
apt: name=xfsprogs state=present
- name: Install xfs Debian
apt:
name: xfsprogs
state: present
when: ansible_os_family == "Debian"
- name: install xfs RedHat
package: name=xfsprogs state=present
- name: Install xfs RedHat
package:
name: xfsprogs
state: present
when: ansible_os_family == "RedHat"
# Format external volumes in xfs
- name: Format volumes in xfs
filesystem: "fstype=xfs dev={{ disk_volume_device_1 }}"
community.general.filesystem:
fstype: xfs
dev: "{{ disk_volume_device_1 }}"
# Mount external volumes
- name: mounting new xfs filesystem
mount: "name={{ gluster_volume_node_mount_dir }} src={{ disk_volume_device_1 }} fstype=xfs state=mounted"
- name: Mounting new xfs filesystem
ansible.posix.mount:
name: "{{ gluster_volume_node_mount_dir }}"
src: "{{ disk_volume_device_1 }}"
fstype: xfs
state: mounted
# Setup/install tasks.
- include: setup-RedHat.yml
- name: Setup RedHat distros for glusterfs
include_tasks: setup-RedHat.yml
when: ansible_os_family == 'RedHat'
- include: setup-Debian.yml
- name: Setup Debian distros for glusterfs
include_tasks: setup-Debian.yml
when: ansible_os_family == 'Debian'
- name: Ensure GlusterFS is started and enabled at boot.
service: "name={{ glusterfs_daemon }} state=started enabled=yes"
service:
name: "{{ glusterfs_daemon }}"
state: started
enabled: yes
- name: Ensure Gluster brick and mount directories exist.
file: "path={{ item }} state=directory mode=0775"
file:
path: "{{ item }}"
state: directory
mode: 0775
with_items:
- "{{ gluster_brick_dir }}"
- "{{ gluster_mount_dir }}"
- name: Configure Gluster volume with replicas
gluster_volume:
gluster.gluster.gluster_volume:
state: present
name: "{{ gluster_brick_name }}"
brick: "{{ gluster_brick_dir }}"
replicas: "{{ groups['gfs-cluster'] | length }}"
cluster: "{% for item in groups['gfs-cluster'] -%}{{ hostvars[item]['ip']|default(hostvars[item].ansible_default_ipv4['address']) }}{% if not loop.last %},{% endif %}{%- endfor %}"
cluster: "{% for item in groups['gfs-cluster'] -%}{{ hostvars[item]['ip'] | default(hostvars[item].ansible_default_ipv4['address']) }}{% if not loop.last %},{% endif %}{%- endfor %}"
host: "{{ inventory_hostname }}"
force: yes
run_once: true
when: groups['gfs-cluster']|length > 1
when: groups['gfs-cluster'] | length > 1
- name: Configure Gluster volume without replicas
gluster_volume:
gluster.gluster.gluster_volume:
state: present
name: "{{ gluster_brick_name }}"
brick: "{{ gluster_brick_dir }}"
cluster: "{% for item in groups['gfs-cluster'] -%}{{ hostvars[item]['ip']|default(hostvars[item].ansible_default_ipv4['address']) }}{% if not loop.last %},{% endif %}{%- endfor %}"
cluster: "{% for item in groups['gfs-cluster'] -%}{{ hostvars[item]['ip'] | default(hostvars[item].ansible_default_ipv4['address']) }}{% if not loop.last %},{% endif %}{%- endfor %}"
host: "{{ inventory_hostname }}"
force: yes
run_once: true
when: groups['gfs-cluster']|length <= 1
when: groups['gfs-cluster'] | length <= 1
- name: Mount glusterfs to retrieve disk size
mount:
ansible.posix.mount:
name: "{{ gluster_mount_dir }}"
src: "{{ ip|default(ansible_default_ipv4['address']) }}:/gluster"
src: "{{ ip | default(ansible_default_ipv4['address']) }}:/gluster"
fstype: glusterfs
opts: "defaults,_netdev"
state: mounted
when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]
- name: Get Gluster disk size
setup: filter=ansible_mounts
setup:
filter: ansible_mounts
register: mounts_data
when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]
- name: Set Gluster disk size to variable
set_fact:
gluster_disk_size_gb: "{{ (mounts_data.ansible_facts.ansible_mounts | selectattr('mount', 'equalto', gluster_mount_dir) | map(attribute='size_total') | first | int / (1024*1024*1024)) | int }}"
gluster_disk_size_gb: "{{ (mounts_data.ansible_facts.ansible_mounts | selectattr('mount', 'equalto', gluster_mount_dir) | map(attribute='size_total') | first | int / (1024 * 1024 * 1024)) | int }}"
when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]
- name: Create file on GlusterFS
@@ -86,9 +105,9 @@
when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]
- name: Unmount glusterfs
mount:
ansible.posix.mount:
name: "{{ gluster_mount_dir }}"
fstype: glusterfs
src: "{{ ip|default(ansible_default_ipv4['address']) }}:/gluster"
src: "{{ ip | default(ansible_default_ipv4['address']) }}:/gluster"
state: unmounted
when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]

View File

@@ -7,7 +7,7 @@
register: glusterfs_ppa_added
when: glusterfs_ppa_use
- name: Ensure GlusterFS will reinstall if the PPA was just added. # noqa 503
- name: Ensure GlusterFS will reinstall if the PPA was just added. # noqa no-handler
apt:
name: "{{ item }}"
state: absent

View File

@@ -1,11 +1,15 @@
---
- name: Install Prerequisites
package: name={{ item }} state=present
package:
name: "{{ item }}"
state: present
with_items:
- "centos-release-gluster{{ glusterfs_default_release }}"
- name: Install Packages
package: name={{ item }} state=present
package:
name: "{{ item }}"
state: present
with_items:
- glusterfs-server
- glusterfs-client

View File

@@ -18,6 +18,6 @@
kubectl: "{{ bin_dir }}/kubectl"
resource: "{{ item.item.type }}"
filename: "{{ kube_config_dir }}/{{ item.item.dest }}"
state: "{{ item.changed | ternary('latest','present') }}"
state: "{{ item.changed | ternary('latest', 'present') }}"
with_items: "{{ gluster_pv.results }}"
when: inventory_hostname == groups['kube_control_plane'][0] and groups['gfs-cluster'] is defined

View File

@@ -1,9 +1,11 @@
---
- hosts: kube_control_plane[0]
- name: Tear down heketi
hosts: kube_control_plane[0]
roles:
- { role: tear-down }
- hosts: heketi-node
- name: Teardown disks in heketi
hosts: heketi-node
become: yes
roles:
- { role: tear-down-disks }

View File

@@ -1,9 +1,11 @@
---
- hosts: heketi-node
- name: Prepare heketi install
hosts: heketi-node
roles:
- { role: prepare }
- hosts: kube_control_plane[0]
- name: Provision heketi
hosts: kube_control_plane[0]
tags:
- "provision"
roles:

View File

@@ -5,7 +5,7 @@
- "dm_snapshot"
- "dm_mirror"
- "dm_thin_pool"
modprobe:
community.general.modprobe:
name: "{{ item }}"
state: "present"

View File

@@ -1,3 +1,3 @@
---
- name: "stop port forwarding"
- name: "Stop port forwarding"
command: "killall "

View File

@@ -7,9 +7,9 @@
- name: "Bootstrap heketi."
when:
- "(initial_heketi_state.stdout|from_json|json_query(\"items[?kind=='Service']\"))|length == 0"
- "(initial_heketi_state.stdout|from_json|json_query(\"items[?kind=='Deployment']\"))|length == 0"
- "(initial_heketi_state.stdout|from_json|json_query(\"items[?kind=='Pod']\"))|length == 0"
- "(initial_heketi_state.stdout | from_json | json_query(\"items[?kind=='Service']\")) | length == 0"
- "(initial_heketi_state.stdout | from_json | json_query(\"items[?kind=='Deployment']\")) | length == 0"
- "(initial_heketi_state.stdout | from_json | json_query(\"items[?kind=='Pod']\")) | length == 0"
include_tasks: "bootstrap/deploy.yml"
# Prepare heketi topology
@@ -20,11 +20,11 @@
- name: "Ensure heketi bootstrap pod is up."
assert:
that: "(initial_heketi_pod.stdout|from_json|json_query('items[*]'))|length == 1"
that: "(initial_heketi_pod.stdout | from_json | json_query('items[*]')) | length == 1"
- name: Store the initial heketi pod name
set_fact:
initial_heketi_pod_name: "{{ initial_heketi_pod.stdout|from_json|json_query(\"items[*].metadata.name|[0]\") }}"
initial_heketi_pod_name: "{{ initial_heketi_pod.stdout | from_json | json_query(\"items[*].metadata.name | [0]\") }}"
- name: "Test heketi topology."
changed_when: false
@@ -32,7 +32,7 @@
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology info --json"
- name: "Load heketi topology."
when: "heketi_topology.stdout|from_json|json_query(\"clusters[*].nodes[*]\")|flatten|length == 0"
when: "heketi_topology.stdout | from_json | json_query(\"clusters[*].nodes[*]\") | flatten | length == 0"
include_tasks: "bootstrap/topology.yml"
# Provision heketi database volume
@@ -58,7 +58,7 @@
service_query: "items[?metadata.name=='heketi-storage-endpoints' && kind=='Service']"
job_query: "items[?metadata.name=='heketi-storage-copy-job' && kind=='Job']"
when:
- "heketi_storage_state.stdout|from_json|json_query(secret_query)|length == 0"
- "heketi_storage_state.stdout|from_json|json_query(endpoints_query)|length == 0"
- "heketi_storage_state.stdout|from_json|json_query(service_query)|length == 0"
- "heketi_storage_state.stdout|from_json|json_query(job_query)|length == 0"
- "heketi_storage_state.stdout | from_json | json_query(secret_query) | length == 0"
- "heketi_storage_state.stdout | from_json | json_query(endpoints_query) | length == 0"
- "heketi_storage_state.stdout | from_json | json_query(service_query) | length == 0"
- "heketi_storage_state.stdout | from_json | json_query(job_query) | length == 0"

View File

@@ -17,11 +17,11 @@
register: "initial_heketi_state"
vars:
initial_heketi_state: { stdout: "{}" }
pods_query: "items[?kind=='Pod'].status.conditions|[0][?type=='Ready'].status|[0]"
deployments_query: "items[?kind=='Deployment'].status.conditions|[0][?type=='Available'].status|[0]"
pods_query: "items[?kind=='Pod'].status.conditions | [0][?type=='Ready'].status | [0]"
deployments_query: "items[?kind=='Deployment'].status.conditions | [0][?type=='Available'].status | [0]"
command: "{{ bin_dir }}/kubectl get services,deployments,pods --selector=deploy-heketi --output=json"
until:
- "initial_heketi_state.stdout|from_json|json_query(pods_query) == 'True'"
- "initial_heketi_state.stdout|from_json|json_query(deployments_query) == 'True'"
- "initial_heketi_state.stdout | from_json | json_query(pods_query) == 'True'"
- "initial_heketi_state.stdout | from_json | json_query(deployments_query) == 'True'"
retries: 60
delay: 5

View File

@@ -15,10 +15,10 @@
service_query: "items[?metadata.name=='heketi-storage-endpoints' && kind=='Service']"
job_query: "items[?metadata.name=='heketi-storage-copy-job' && kind=='Job']"
when:
- "heketi_storage_state.stdout|from_json|json_query(secret_query)|length == 0"
- "heketi_storage_state.stdout|from_json|json_query(endpoints_query)|length == 0"
- "heketi_storage_state.stdout|from_json|json_query(service_query)|length == 0"
- "heketi_storage_state.stdout|from_json|json_query(job_query)|length == 0"
- "heketi_storage_state.stdout | from_json | json_query(secret_query) | length == 0"
- "heketi_storage_state.stdout | from_json | json_query(endpoints_query) | length == 0"
- "heketi_storage_state.stdout | from_json | json_query(service_query) | length == 0"
- "heketi_storage_state.stdout | from_json | json_query(job_query) | length == 0"
register: "heketi_storage_result"
- name: "Get state of heketi database copy job."
command: "{{ bin_dir }}/kubectl get jobs --output=json"
@@ -28,6 +28,6 @@
heketi_storage_state: { stdout: "{}" }
job_query: "items[?metadata.name=='heketi-storage-copy-job' && kind=='Job' && status.succeeded==1]"
until:
- "heketi_storage_state.stdout|from_json|json_query(job_query)|length == 1"
- "heketi_storage_state.stdout | from_json | json_query(job_query) | length == 1"
retries: 60
delay: 5

View File

@@ -5,10 +5,10 @@
changed_when: false
- name: "Delete bootstrap Heketi."
command: "{{ bin_dir }}/kubectl delete all,service,jobs,deployment,secret --selector=\"deploy-heketi\""
when: "heketi_resources.stdout|from_json|json_query('items[*]')|length > 0"
- name: "Ensure there is nothing left over." # noqa 301
when: "heketi_resources.stdout | from_json | json_query('items[*]') | length > 0"
- name: "Ensure there is nothing left over."
command: "{{ bin_dir }}/kubectl get all,service,jobs,deployment,secret --selector=\"deploy-heketi\" -o=json"
register: "heketi_result"
until: "heketi_result.stdout|from_json|json_query('items[*]')|length == 0"
until: "heketi_result.stdout | from_json | json_query('items[*]') | length == 0"
retries: 60
delay: 5

View File

@@ -14,7 +14,7 @@
- name: "Copy topology configuration into container."
changed_when: false
command: "{{ bin_dir }}/kubectl cp {{ kube_config_dir }}/topology.json {{ initial_heketi_pod_name }}:/tmp/topology.json"
- name: "Load heketi topology." # noqa 503
- name: "Load heketi topology." # noqa no-handler
when: "render.changed"
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology load --json=/tmp/topology.json"
register: "load_heketi"
@@ -22,6 +22,6 @@
changed_when: false
register: "heketi_topology"
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology info --json"
until: "heketi_topology.stdout|from_json|json_query(\"clusters[*].nodes[*].devices[?state=='online'].id\")|flatten|length == groups['heketi-node']|length"
until: "heketi_topology.stdout | from_json | json_query(\"clusters[*].nodes[*].devices[?state=='online'].id\") | flatten | length == groups['heketi-node'] | length"
retries: 60
delay: 5

View File

@@ -6,19 +6,19 @@
- name: "Get heketi volumes."
changed_when: false
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} volume info {{ volume_id }} --json"
with_items: "{{ heketi_volumes.stdout|from_json|json_query(\"volumes[*]\") }}"
with_items: "{{ heketi_volumes.stdout | from_json | json_query(\"volumes[*]\") }}"
loop_control: { loop_var: "volume_id" }
register: "volumes_information"
- name: "Test heketi database volume."
set_fact: { heketi_database_volume_exists: true }
with_items: "{{ volumes_information.results }}"
loop_control: { loop_var: "volume_information" }
vars: { volume: "{{ volume_information.stdout|from_json }}" }
vars: { volume: "{{ volume_information.stdout | from_json }}" }
when: "volume.name == 'heketidbstorage'"
- name: "Provision database volume."
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} setup-openshift-heketi-storage"
when: "heketi_database_volume_exists is undefined"
- name: "Copy configuration from pod." # noqa 301
- name: "Copy configuration from pod."
become: true
command: "{{ bin_dir }}/kubectl cp {{ initial_heketi_pod_name }}:/heketi-storage.json {{ kube_config_dir }}/heketi-storage-bootstrap.json"
- name: "Get heketi volume ids."
@@ -28,14 +28,14 @@
- name: "Get heketi volumes."
changed_when: false
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} volume info {{ volume_id }} --json"
with_items: "{{ heketi_volumes.stdout|from_json|json_query(\"volumes[*]\") }}"
with_items: "{{ heketi_volumes.stdout | from_json | json_query(\"volumes[*]\") }}"
loop_control: { loop_var: "volume_id" }
register: "volumes_information"
- name: "Test heketi database volume."
set_fact: { heketi_database_volume_created: true }
with_items: "{{ volumes_information.results }}"
loop_control: { loop_var: "volume_information" }
vars: { volume: "{{ volume_information.stdout|from_json }}" }
vars: { volume: "{{ volume_information.stdout | from_json }}" }
when: "volume.name == 'heketidbstorage'"
- name: "Ensure heketi database volume exists."
assert: { that: "heketi_database_volume_created is defined", msg: "Heketi database volume does not exist." }

View File

@@ -23,8 +23,8 @@
changed_when: false
vars:
daemonset_state: { stdout: "{}" }
ready: "{{ daemonset_state.stdout|from_json|json_query(\"status.numberReady\") }}"
desired: "{{ daemonset_state.stdout|from_json|json_query(\"status.desiredNumberScheduled\") }}"
ready: "{{ daemonset_state.stdout | from_json | json_query(\"status.numberReady\") }}"
desired: "{{ daemonset_state.stdout | from_json | json_query(\"status.desiredNumberScheduled\") }}"
until: "ready | int >= 3"
retries: 60
delay: 5

View File

@@ -5,7 +5,7 @@
changed_when: false
- name: "Assign storage label"
when: "label_present.stdout_lines|length == 0"
when: "label_present.stdout_lines | length == 0"
command: "{{ bin_dir }}/kubectl label node {{ node }} storagenode=glusterfs"
- name: Get storage nodes again
@@ -15,5 +15,5 @@
- name: Ensure the label has been set
assert:
that: "label_present|length > 0"
that: "label_present | length > 0"
msg: "Node {{ node }} has not been assigned with label storagenode=glusterfs."

View File

@@ -24,11 +24,11 @@
deployments_query: "items[?kind=='Deployment'].status.conditions|[0][?type=='Available'].status|[0]"
command: "{{ bin_dir }}/kubectl get deployments,pods --selector=glusterfs --output=json"
until:
- "heketi_state.stdout|from_json|json_query(pods_query) == 'True'"
- "heketi_state.stdout|from_json|json_query(deployments_query) == 'True'"
- "heketi_state.stdout | from_json | json_query(pods_query) == 'True'"
- "heketi_state.stdout | from_json | json_query(deployments_query) == 'True'"
retries: 60
delay: 5
- name: Set the Heketi pod name
set_fact:
heketi_pod_name: "{{ heketi_state.stdout|from_json|json_query(\"items[?kind=='Pod'].metadata.name|[0]\") }}"
heketi_pod_name: "{{ heketi_state.stdout | from_json | json_query(\"items[?kind=='Pod'].metadata.name|[0]\") }}"

View File

@@ -12,7 +12,7 @@
- name: "Render storage class configuration."
become: true
vars:
endpoint_address: "{{ (heketi_service.stdout|from_json).spec.clusterIP }}"
endpoint_address: "{{ (heketi_service.stdout | from_json).spec.clusterIP }}"
template:
src: "storageclass.yml.j2"
dest: "{{ kube_config_dir }}/storageclass.yml"

View File

@@ -11,16 +11,16 @@
src: "topology.json.j2"
dest: "{{ kube_config_dir }}/topology.json"
mode: 0644
- name: "Copy topology configuration into container." # noqa 503
- name: "Copy topology configuration into container." # noqa no-handler
when: "rendering.changed"
command: "{{ bin_dir }}/kubectl cp {{ kube_config_dir }}/topology.json {{ heketi_pod_name }}:/tmp/topology.json"
- name: "Load heketi topology." # noqa 503
- name: "Load heketi topology." # noqa no-handler
when: "rendering.changed"
command: "{{ bin_dir }}/kubectl exec {{ heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology load --json=/tmp/topology.json"
- name: "Get heketi topology."
register: "heketi_topology"
changed_when: false
command: "{{ bin_dir }}/kubectl exec {{ heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology info --json"
until: "heketi_topology.stdout|from_json|json_query(\"clusters[*].nodes[*].devices[?state=='online'].id\")|flatten|length == groups['heketi-node']|length"
until: "heketi_topology.stdout | from_json | json_query(\"clusters[*].nodes[*].devices[?state=='online'].id\") | flatten | length == groups['heketi-node'] | length"
retries: 60
delay: 5

View File

@@ -22,7 +22,7 @@
ignore_errors: true # noqa ignore-errors
changed_when: false
- name: "Remove volume groups." # noqa 301
- name: "Remove volume groups."
environment:
PATH: "{{ ansible_env.PATH }}:/sbin" # Make sure we can workaround RH / CentOS conservative path management
become: true
@@ -30,7 +30,7 @@
with_items: "{{ volume_groups.stdout_lines }}"
loop_control: { loop_var: "volume_group" }
- name: "Remove physical volume from cluster disks." # noqa 301
- name: "Remove physical volume from cluster disks."
environment:
PATH: "{{ ansible_env.PATH }}:/sbin" # Make sure we can workaround RH / CentOS conservative path management
become: true

View File

@@ -1,43 +1,43 @@
---
- name: Remove storage class. # noqa 301
- name: Remove storage class.
command: "{{ bin_dir }}/kubectl delete storageclass gluster"
ignore_errors: true # noqa ignore-errors
- name: Tear down heketi. # noqa 301
- name: Tear down heketi.
command: "{{ bin_dir }}/kubectl delete all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-pod\""
ignore_errors: true # noqa ignore-errors
- name: Tear down heketi. # noqa 301
- name: Tear down heketi.
command: "{{ bin_dir }}/kubectl delete all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-deployment\""
ignore_errors: true # noqa ignore-errors
- name: Tear down bootstrap.
include_tasks: "../../provision/tasks/bootstrap/tear-down.yml"
- name: Ensure there is nothing left over. # noqa 301
- name: Ensure there is nothing left over.
command: "{{ bin_dir }}/kubectl get all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-pod\" -o=json"
register: "heketi_result"
until: "heketi_result.stdout|from_json|json_query('items[*]')|length == 0"
until: "heketi_result.stdout | from_json | json_query('items[*]') | length == 0"
retries: 60
delay: 5
- name: Ensure there is nothing left over. # noqa 301
- name: Ensure there is nothing left over.
command: "{{ bin_dir }}/kubectl get all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-deployment\" -o=json"
register: "heketi_result"
until: "heketi_result.stdout|from_json|json_query('items[*]')|length == 0"
until: "heketi_result.stdout | from_json | json_query('items[*]') | length == 0"
retries: 60
delay: 5
- name: Tear down glusterfs. # noqa 301
- name: Tear down glusterfs.
command: "{{ bin_dir }}/kubectl delete daemonset.extensions/glusterfs"
ignore_errors: true # noqa ignore-errors
- name: Remove heketi storage service. # noqa 301
- name: Remove heketi storage service.
command: "{{ bin_dir }}/kubectl delete service heketi-storage-endpoints"
ignore_errors: true # noqa ignore-errors
- name: Remove heketi gluster role binding # noqa 301
- name: Remove heketi gluster role binding
command: "{{ bin_dir }}/kubectl delete clusterrolebinding heketi-gluster-admin"
ignore_errors: true # noqa ignore-errors
- name: Remove heketi config secret # noqa 301
- name: Remove heketi config secret
command: "{{ bin_dir }}/kubectl delete secret heketi-config-secret"
ignore_errors: true # noqa ignore-errors
- name: Remove heketi db backup # noqa 301
- name: Remove heketi db backup
command: "{{ bin_dir }}/kubectl delete secret heketi-db-backup"
ignore_errors: true # noqa ignore-errors
- name: Remove heketi service account # noqa 301
- name: Remove heketi service account
command: "{{ bin_dir }}/kubectl delete serviceaccount heketi-service-account"
ignore_errors: true # noqa ignore-errors
- name: Get secrets
@@ -46,6 +46,6 @@
changed_when: false
- name: Remove heketi storage secret
vars: { storage_query: "items[?metadata.annotations.\"kubernetes.io/service-account.name\"=='heketi-service-account'].metadata.name|[0]" }
command: "{{ bin_dir }}/kubectl delete secret {{ secrets.stdout|from_json|json_query(storage_query) }}"
command: "{{ bin_dir }}/kubectl delete secret {{ secrets.stdout | from_json | json_query(storage_query) }}"
when: "storage_query is defined"
ignore_errors: true # noqa ignore-errors

View File

@@ -27,7 +27,7 @@ manage-offline-container-images.sh register
## generate_list.sh
This script generates the list of downloaded files and the list of container images by `roles/download/defaults/main.yml` file.
This script generates the list of downloaded files and the list of container images by `roles/download/defaults/main/main.yml` file.
Run this script will execute `generate_list.yml` playbook in kubespray root directory and generate four files,
all downloaded files url in files.list, all container images in images.list, jinja2 templates in *.template.
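As a quick orientation, here is a hypothetical run of the script and a peek at its outputs; the `contrib/offline/generate_list.sh` path and the `temp/` output directory are assumptions based on the surrounding scripts:

```ShellSession
cd kubespray                                   # repository root, where generate_list.yml is executed
bash contrib/offline/generate_list.sh
head contrib/offline/temp/files.list           # URLs of every file to mirror for offline use
head contrib/offline/temp/images.list          # container image references, including the kube-* images
```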

View File

@@ -5,7 +5,7 @@ CURRENT_DIR=$(cd $(dirname $0); pwd)
TEMP_DIR="${CURRENT_DIR}/temp"
REPO_ROOT_DIR="${CURRENT_DIR%/contrib/offline}"
: ${DOWNLOAD_YML:="roles/download/defaults/main.yml"}
: ${DOWNLOAD_YML:="roles/download/defaults/main/main.yml"}
mkdir -p ${TEMP_DIR}
@@ -19,7 +19,7 @@ sed -n '/^downloads:/,/download_defaults:/p' ${REPO_ROOT_DIR}/${DOWNLOAD_YML} \
| sed 'N;s#\n# #g' | tr ' ' ':' | sed 's/\"//g' > ${TEMP_DIR}/images.list.template
# add kube-* images to images list template
# Those container images are downloaded by kubeadm, then roles/download/defaults/main.yml
# Those container images are downloaded by kubeadm, then roles/download/defaults/main/main.yml
# doesn't contain those images. That is reason why here needs to put those images into the
# list separately.
KUBE_IMAGES="kube-apiserver kube-controller-manager kube-scheduler kube-proxy"

View File

@@ -1,5 +1,6 @@
---
- hosts: localhost
- name: Collect container images for offline deployment
hosts: localhost
become: no
roles:
@@ -11,9 +12,11 @@
tasks:
# Generate files.list and images.list files from templates.
- template:
- name: Collect container images for offline deployment
template:
src: ./contrib/offline/temp/{{ item }}.list.template
dest: ./contrib/offline/temp/{{ item }}.list
mode: 0644
with_items:
- files
- images

View File

@@ -39,6 +39,6 @@ if [ $? -ne 0 ]; then
sudo "${runtime}" run \
--restart=always -d -p ${NGINX_PORT}:80 \
--volume "${OFFLINE_FILES_DIR}:/usr/share/nginx/html/download" \
--volume "$(pwd)"/nginx.conf:/etc/nginx/nginx.conf \
--volume "${CURRENT_DIR}"/nginx.conf:/etc/nginx/nginx.conf \
--name nginx nginx:alpine
fi
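The volume fix above mounts `nginx.conf` relative to the script's own directory (`CURRENT_DIR`) instead of the caller's working directory, so the helper no longer has to be launched from `contrib/offline`. A hypothetical invocation follows; the script name and default port are assumptions:

```ShellSession
cd /tmp                                           # any working directory now works
bash ~/kubespray/contrib/offline/manage-offline-files.sh
curl -I http://localhost:8080/download/          # assumed default NGINX_PORT; adjust to your value
```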

View File

@@ -1,4 +1,5 @@
---
- hosts: all
- name: Disable firewalld/ufw
hosts: all
roles:
- { role: prepare }

View File

@@ -1,5 +1,8 @@
---
- block:
- name: Disable firewalld and ufw
when:
- disable_service_firewall is defined and disable_service_firewall
block:
- name: List services
service_facts:
@@ -9,7 +12,7 @@
state: stopped
enabled: no
when:
"'firewalld.service' in services"
"'firewalld.service' in services and services['firewalld.service'].status != 'not-found'"
- name: Disable service ufw
systemd:
@@ -17,7 +20,4 @@
state: stopped
enabled: no
when:
"'ufw.service' in services"
when:
- disable_service_firewall is defined and disable_service_firewall
"'ufw.service' in services and services['ufw.service'].status != 'not-found'"

View File

@@ -12,7 +12,7 @@ This will install a Kubernetes cluster on Equinix Metal. It should work in all l
The terraform configuration inspects variables found in
[variables.tf](variables.tf) to create resources in your Equinix Metal project.
There is a [python script](../terraform.py) that reads the generated`.tfstate`
file to generate a dynamic inventory that is consumed by [cluster.yml](../../..//cluster.yml)
file to generate a dynamic inventory that is consumed by [cluster.yml](../../../cluster.yml)
to actually install Kubernetes with Kubespray.
### Kubernetes Nodes
@@ -60,16 +60,16 @@ Terraform will be used to provision all of the Equinix Metal resources with base
Create an inventory directory for your cluster by copying the existing sample and linking the `hosts` script (used to build the inventory based on Terraform state):
```ShellSession
cp -LRp contrib/terraform/metal/sample-inventory inventory/$CLUSTER
cp -LRp contrib/terraform/equinix/sample-inventory inventory/$CLUSTER
cd inventory/$CLUSTER
ln -s ../../contrib/terraform/metal/hosts
ln -s ../../contrib/terraform/equinix/hosts
```
This will be the base for subsequent Terraform commands.
#### Equinix Metal API access
Your Equinix Metal API key must be available in the `PACKET_AUTH_TOKEN` environment variable.
Your Equinix Metal API key must be available in the `METAL_AUTH_TOKEN` environment variable.
This key is typically stored outside of the code repo since it is considered secret.
If someone gets this key, they can startup/shutdown hosts in your project!
@@ -80,10 +80,12 @@ The Equinix Metal Project ID associated with the key will be set later in `clust
For more information about the API, please see [Equinix Metal API](https://metal.equinix.com/developers/api/).
For more information about terraform provider authentication, please see [the equinix provider documentation](https://registry.terraform.io/providers/equinix/equinix/latest/docs).
Example:
```ShellSession
export PACKET_AUTH_TOKEN="Example-API-Token"
export METAL_AUTH_TOKEN="Example-API-Token"
```
Note that to deploy several clusters within the same project you need to use [terraform workspace](https://www.terraform.io/docs/state/workspaces.html#using-workspaces).
@@ -101,7 +103,7 @@ This helps when identifying which hosts are associated with each cluster.
While the defaults in variables.tf will successfully deploy a cluster, it is recommended to set the following values:
- cluster_name = the name of the inventory directory created above as $CLUSTER
- metal_project_id = the Equinix Metal Project ID associated with the Equinix Metal API token above
- equinix_metal_project_id = the Equinix Metal Project ID associated with the Equinix Metal API token above
#### Enable localhost access
@@ -119,12 +121,13 @@ Once the Kubespray playbooks are run, a Kubernetes configuration file will be wr
In the cluster's inventory folder, the following files might be created (either by Terraform
or manually), to prevent you from pushing them accidentally they are in a
`.gitignore` file in the `terraform/metal` directory :
`.gitignore` file in the `contrib/terraform/equinix` directory :
- `.terraform`
- `.tfvars`
- `.tfstate`
- `.tfstate.backup`
- `.lock.hcl`
You can still add them manually if you want to.
@@ -135,7 +138,7 @@ plugins. This is accomplished as follows:
```ShellSession
cd inventory/$CLUSTER
terraform init ../../contrib/terraform/metal
terraform -chdir=../../contrib/terraform/metal init -var-file=cluster.tfvars
```
This should finish fairly quickly telling you Terraform has successfully initialized and loaded necessary modules.
@@ -146,7 +149,7 @@ You can apply the Terraform configuration to your cluster with the following com
issued from your cluster's inventory directory (`inventory/$CLUSTER`):
```ShellSession
terraform apply -var-file=cluster.tfvars ../../contrib/terraform/metal
terraform -chdir=../../contrib/terraform/equinix apply -var-file=cluster.tfvars
export ANSIBLE_HOST_KEY_CHECKING=False
ansible-playbook -i hosts ../../cluster.yml
```
@@ -156,7 +159,7 @@ ansible-playbook -i hosts ../../cluster.yml
You can destroy your new cluster with the following command issued from the cluster's inventory directory:
```ShellSession
terraform destroy -var-file=cluster.tfvars ../../contrib/terraform/metal
terraform -chdir=../../contrib/terraform/equinix destroy -var-file=cluster.tfvars
```
If you've started the Ansible run, it may also be a good idea to do some manual cleanup:

View File
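Taken together, the documentation changes above move the Equinix Metal workflow from `PACKET_AUTH_TOKEN` and facility-based deployments to `METAL_AUTH_TOKEN`, metros, and `terraform -chdir` invocations against the renamed `contrib/terraform/equinix` directory. A consolidated sketch of the updated flow, assuming the `equinix` path is used for `init` as well and substituting your own `$CLUSTER` name:

```ShellSession
export METAL_AUTH_TOKEN="Example-API-Token"
cp -LRp contrib/terraform/equinix/sample-inventory inventory/$CLUSTER
cd inventory/$CLUSTER
ln -s ../../contrib/terraform/equinix/hosts
terraform -chdir=../../contrib/terraform/equinix init
terraform -chdir=../../contrib/terraform/equinix apply -var-file=cluster.tfvars
export ANSIBLE_HOST_KEY_CHECKING=False
ansible-playbook -i hosts ../../cluster.yml
```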

@@ -1,62 +1,57 @@
# Configure the Equinix Metal Provider
provider "metal" {
}
resource "metal_ssh_key" "k8s" {
resource "equinix_metal_ssh_key" "k8s" {
count = var.public_key_path != "" ? 1 : 0
name = "kubernetes-${var.cluster_name}"
public_key = chomp(file(var.public_key_path))
}
resource "metal_device" "k8s_master" {
depends_on = [metal_ssh_key.k8s]
resource "equinix_metal_device" "k8s_master" {
depends_on = [equinix_metal_ssh_key.k8s]
count = var.number_of_k8s_masters
hostname = "${var.cluster_name}-k8s-master-${count.index + 1}"
plan = var.plan_k8s_masters
facilities = [var.facility]
metro = var.metro
operating_system = var.operating_system
billing_cycle = var.billing_cycle
project_id = var.metal_project_id
project_id = var.equinix_metal_project_id
tags = ["cluster-${var.cluster_name}", "k8s_cluster", "kube_control_plane", "etcd", "kube_node"]
}
resource "metal_device" "k8s_master_no_etcd" {
depends_on = [metal_ssh_key.k8s]
resource "equinix_metal_device" "k8s_master_no_etcd" {
depends_on = [equinix_metal_ssh_key.k8s]
count = var.number_of_k8s_masters_no_etcd
hostname = "${var.cluster_name}-k8s-master-${count.index + 1}"
plan = var.plan_k8s_masters_no_etcd
facilities = [var.facility]
metro = var.metro
operating_system = var.operating_system
billing_cycle = var.billing_cycle
project_id = var.metal_project_id
project_id = var.equinix_metal_project_id
tags = ["cluster-${var.cluster_name}", "k8s_cluster", "kube_control_plane"]
}
resource "metal_device" "k8s_etcd" {
depends_on = [metal_ssh_key.k8s]
resource "equinix_metal_device" "k8s_etcd" {
depends_on = [equinix_metal_ssh_key.k8s]
count = var.number_of_etcd
hostname = "${var.cluster_name}-etcd-${count.index + 1}"
plan = var.plan_etcd
facilities = [var.facility]
metro = var.metro
operating_system = var.operating_system
billing_cycle = var.billing_cycle
project_id = var.metal_project_id
project_id = var.equinix_metal_project_id
tags = ["cluster-${var.cluster_name}", "etcd"]
}
resource "metal_device" "k8s_node" {
depends_on = [metal_ssh_key.k8s]
resource "equinix_metal_device" "k8s_node" {
depends_on = [equinix_metal_ssh_key.k8s]
count = var.number_of_k8s_nodes
hostname = "${var.cluster_name}-k8s-node-${count.index + 1}"
plan = var.plan_k8s_nodes
facilities = [var.facility]
metro = var.metro
operating_system = var.operating_system
billing_cycle = var.billing_cycle
project_id = var.metal_project_id
project_id = var.equinix_metal_project_id
tags = ["cluster-${var.cluster_name}", "k8s_cluster", "kube_node"]
}

View File

@@ -0,0 +1,15 @@
output "k8s_masters" {
value = equinix_metal_device.k8s_master.*.access_public_ipv4
}
output "k8s_masters_no_etc" {
value = equinix_metal_device.k8s_master_no_etcd.*.access_public_ipv4
}
output "k8s_etcds" {
value = equinix_metal_device.k8s_etcd.*.access_public_ipv4
}
output "k8s_nodes" {
value = equinix_metal_device.k8s_node.*.access_public_ipv4
}

View File

@@ -0,0 +1,17 @@
terraform {
required_version = ">= 1.0.0"
provider_meta "equinix" {
module_name = "kubespray"
}
required_providers {
equinix = {
source = "equinix/equinix"
version = "~> 1.14"
}
}
}
# Configure the Equinix Metal Provider
provider "equinix" {
}

View File

@@ -1,16 +1,19 @@
# your Kubernetes cluster name here
cluster_name = "mycluster"
# Your Equinix Metal project ID. See hhttps://metal.equinix.com/developers/docs/accounts/
metal_project_id = "Example-API-Token"
# Your Equinix Metal project ID. See https://metal.equinix.com/developers/docs/accounts/
equinix_metal_project_id = "Example-Project-Id"
# The public SSH key to be uploaded into authorized_keys in bare metal Equinix Metal nodes provisioned
# leave this value blank if the public key is already setup in the Equinix Metal project
# Terraform will complain if the public key is setup in Equinix Metal
public_key_path = "~/.ssh/id_rsa.pub"
# cluster location
facility = "ewr1"
# Equinix interconnected bare metal across our global metros.
metro = "da"
# operating_system
operating_system = "ubuntu_22_04"
# standalone etcds
number_of_etcd = 0

View File

@@ -2,12 +2,12 @@ variable "cluster_name" {
default = "kubespray"
}
variable "metal_project_id" {
variable "equinix_metal_project_id" {
description = "Your Equinix Metal project ID. See https://metal.equinix.com/developers/docs/accounts/"
}
variable "operating_system" {
default = "ubuntu_20_04"
default = "ubuntu_22_04"
}
variable "public_key_path" {
@@ -19,8 +19,8 @@ variable "billing_cycle" {
default = "hourly"
}
variable "facility" {
default = "dfw2"
variable "metro" {
default = "da"
}
variable "plan_k8s_masters" {
@@ -54,4 +54,3 @@ variable "number_of_etcd" {
variable "number_of_k8s_nodes" {
default = 1
}

View File

@@ -1,6 +1,6 @@
prefix = "default"
zone = "hel1"
network_zone = "eu-central"
prefix = "default"
zone = "hel1"
network_zone = "eu-central"
inventory_file = "inventory.ini"
ssh_public_keys = [
@@ -15,17 +15,17 @@ machines = {
"master-0" : {
"node_type" : "master",
"size" : "cx21",
"image" : "ubuntu-20.04",
"image" : "ubuntu-22.04",
},
"worker-0" : {
"node_type" : "worker",
"size" : "cx21",
"image" : "ubuntu-20.04",
"image" : "ubuntu-22.04",
},
"worker-1" : {
"node_type" : "worker",
"size" : "cx21",
"image" : "ubuntu-20.04",
"image" : "ubuntu-22.04",
}
}
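The tfvars update above bumps the node images from `ubuntu-20.04` to `ubuntu-22.04` and realigns the assignments. A hypothetical apply against it, assuming this file lives in `contrib/terraform/hetzner`:

```ShellSession
cd contrib/terraform/hetzner             # assumed location of default.tfvars
terraform init
terraform apply -var-file=default.tfvars
cat inventory.ini                        # generated Ansible inventory, per inventory_file above
```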

View File

@@ -2,7 +2,7 @@ provider "hcloud" {}
module "kubernetes" {
source = "./modules/kubernetes-cluster"
#source = "./modules/kubernetes-cluster-flatcar"
# source = "./modules/kubernetes-cluster-flatcar"
prefix = var.prefix
@@ -14,7 +14,7 @@ module "kubernetes" {
#ssh_private_key_path = var.ssh_private_key_path
ssh_public_keys = var.ssh_public_keys
network_zone = var.network_zone
network_zone = var.network_zone
ssh_whitelist = var.ssh_whitelist
api_server_whitelist = var.api_server_whitelist
@@ -26,31 +26,32 @@ module "kubernetes" {
# Generate ansible inventory
#
data "template_file" "inventory" {
template = file("${path.module}/templates/inventory.tpl")
vars = {
connection_strings_master = join("\n", formatlist("%s ansible_user=ubuntu ansible_host=%s ip=%s etcd_member_name=etcd%d",
keys(module.kubernetes.master_ip_addresses),
values(module.kubernetes.master_ip_addresses).*.public_ip,
values(module.kubernetes.master_ip_addresses).*.private_ip,
range(1, length(module.kubernetes.master_ip_addresses) + 1)))
connection_strings_worker = join("\n", formatlist("%s ansible_user=ubuntu ansible_host=%s ip=%s",
keys(module.kubernetes.worker_ip_addresses),
values(module.kubernetes.worker_ip_addresses).*.public_ip,
values(module.kubernetes.worker_ip_addresses).*.private_ip))
list_master = join("\n", keys(module.kubernetes.master_ip_addresses))
list_worker = join("\n", keys(module.kubernetes.worker_ip_addresses))
network_id = module.kubernetes.network_id
}
locals {
inventory = templatefile(
"${path.module}/templates/inventory.tpl",
{
connection_strings_master = join("\n", formatlist("%s ansible_user=ubuntu ansible_host=%s ip=%s etcd_member_name=etcd%d",
keys(module.kubernetes.master_ip_addresses),
values(module.kubernetes.master_ip_addresses).*.public_ip,
values(module.kubernetes.master_ip_addresses).*.private_ip,
range(1, length(module.kubernetes.master_ip_addresses) + 1)))
connection_strings_worker = join("\n", formatlist("%s ansible_user=ubuntu ansible_host=%s ip=%s",
keys(module.kubernetes.worker_ip_addresses),
values(module.kubernetes.worker_ip_addresses).*.public_ip,
values(module.kubernetes.worker_ip_addresses).*.private_ip))
list_master = join("\n", keys(module.kubernetes.master_ip_addresses))
list_worker = join("\n", keys(module.kubernetes.worker_ip_addresses))
network_id = module.kubernetes.network_id
}
)
}
resource "null_resource" "inventories" {
provisioner "local-exec" {
command = "echo '${data.template_file.inventory.rendered}' > ${var.inventory_file}"
command = "echo '${local.inventory}' > ${var.inventory_file}"
}
triggers = {
template = data.template_file.inventory.rendered
template = local.inventory
}
}
}

View File

@@ -15,12 +15,12 @@ resource "hcloud_ssh_key" "first" {
public_key = var.ssh_public_keys.0
}
resource "hcloud_server" "master" {
resource "hcloud_server" "machine" {
for_each = {
for name, machine in var.machines :
name => machine
if machine.node_type == "master"
}
name = "${var.prefix}-${each.key}"
ssh_keys = [hcloud_ssh_key.first.id]
# boot into rescue OS
@@ -30,11 +30,11 @@ resource "hcloud_server" "master" {
server_type = each.value.size
location = var.zone
connection {
host = self.ipv4_address
timeout = "5m"
host = self.ipv4_address
timeout = "5m"
private_key = file(var.ssh_private_key_path)
}
firewall_ids = [hcloud_firewall.machine.id]
firewall_ids = each.value.node_type == "master" ? [hcloud_firewall.master.id] : [hcloud_firewall.worker.id]
provisioner "file" {
content = data.ct_config.machine-ignitions[each.key].rendered
destination = "/root/ignition.json"
@@ -45,9 +45,9 @@ resource "hcloud_server" "master" {
"set -ex",
"apt update",
"apt install -y gawk",
"curl -fsSLO --retry-delay 1 --retry 60 --retry-connrefused --retry-max-time 60 --connect-timeout 20 https://raw.githubusercontent.com/kinvolk/init/flatcar-master/bin/flatcar-install",
"curl -fsSLO --retry-delay 1 --retry 60 --retry-connrefused --retry-max-time 60 --connect-timeout 20 https://raw.githubusercontent.com/flatcar/init/flatcar-master/bin/flatcar-install",
"chmod +x flatcar-install",
"./flatcar-install -s -i /root/ignition.json",
"./flatcar-install -s -i /root/ignition.json -C stable",
"shutdown -r +1",
]
}
@@ -55,9 +55,10 @@ resource "hcloud_server" "master" {
# optional:
provisioner "remote-exec" {
connection {
host = self.ipv4_address
timeout = "3m"
user = var.user_flatcar
host = self.ipv4_address
private_key = file(var.ssh_private_key_path)
timeout = "3m"
user = var.user_flatcar
}
inline = [
@@ -66,65 +67,11 @@ resource "hcloud_server" "master" {
}
}
resource "hcloud_server_network" "master" {
for_each = hcloud_server.master
server_id = each.value.id
subnet_id = hcloud_network_subnet.kubernetes.id
}
resource "hcloud_server" "worker" {
resource "hcloud_server_network" "machine" {
for_each = {
for name, machine in var.machines :
name => machine
if machine.node_type == "worker"
name => hcloud_server.machine[name]
}
name = "${var.prefix}-${each.key}"
ssh_keys = [hcloud_ssh_key.first.id]
# boot into rescue OS
rescue = "linux64"
# dummy value for the OS because Flatcar is not available
image = each.value.image
server_type = each.value.size
location = var.zone
connection {
host = self.ipv4_address
timeout = "5m"
private_key = file(var.ssh_private_key_path)
}
firewall_ids = [hcloud_firewall.machine.id]
provisioner "file" {
content = data.ct_config.machine-ignitions[each.key].rendered
destination = "/root/ignition.json"
}
provisioner "remote-exec" {
inline = [
"set -ex",
"apt update",
"apt install -y gawk",
"curl -fsSLO --retry-delay 1 --retry 60 --retry-connrefused --retry-max-time 60 --connect-timeout 20 https://raw.githubusercontent.com/kinvolk/init/flatcar-master/bin/flatcar-install",
"chmod +x flatcar-install",
"./flatcar-install -s -i /root/ignition.json",
"shutdown -r +1",
]
}
# optional:
provisioner "remote-exec" {
connection {
host = self.ipv4_address
timeout = "3m"
user = var.user_flatcar
}
inline = [
"sudo hostnamectl set-hostname ${self.name}",
]
}
}
resource "hcloud_server_network" "worker" {
for_each = hcloud_server.worker
server_id = each.value.id
subnet_id = hcloud_network_subnet.kubernetes.id
}
@@ -134,38 +81,33 @@ data "ct_config" "machine-ignitions" {
for name, machine in var.machines :
name => machine
}
content = data.template_file.machine-configs[each.key].rendered
strict = false
content = templatefile(
"${path.module}/templates/machine.yaml.tmpl",
{
ssh_keys = jsonencode(var.ssh_public_keys)
user_flatcar = var.user_flatcar
name = each.key
}
)
}
data "template_file" "machine-configs" {
for_each = {
for name, machine in var.machines :
name => machine
}
template = file("${path.module}/templates/machine.yaml.tmpl")
vars = {
ssh_keys = jsonencode(var.ssh_public_keys)
user_flatcar = jsonencode(var.user_flatcar)
name = each.key
}
}
resource "hcloud_firewall" "machine" {
name = "${var.prefix}-machine-firewall"
resource "hcloud_firewall" "master" {
name = "${var.prefix}-master-firewall"
rule {
direction = "in"
protocol = "tcp"
port = "22"
source_ips = var.ssh_whitelist
direction = "in"
protocol = "tcp"
port = "22"
source_ips = var.ssh_whitelist
}
rule {
direction = "in"
protocol = "tcp"
port = "6443"
source_ips = var.api_server_whitelist
direction = "in"
protocol = "tcp"
port = "6443"
source_ips = var.api_server_whitelist
}
}
@@ -173,30 +115,30 @@ resource "hcloud_firewall" "worker" {
name = "${var.prefix}-worker-firewall"
rule {
direction = "in"
protocol = "tcp"
port = "22"
source_ips = var.ssh_whitelist
direction = "in"
protocol = "tcp"
port = "22"
source_ips = var.ssh_whitelist
}
rule {
direction = "in"
protocol = "tcp"
port = "80"
source_ips = var.ingress_whitelist
direction = "in"
protocol = "tcp"
port = "80"
source_ips = var.ingress_whitelist
}
rule {
direction = "in"
protocol = "tcp"
port = "443"
source_ips = var.ingress_whitelist
direction = "in"
protocol = "tcp"
port = "443"
source_ips = var.ingress_whitelist
}
rule {
direction = "in"
protocol = "tcp"
port = "30000-32767"
source_ips = var.nodeport_whitelist
direction = "in"
protocol = "tcp"
port = "30000-32767"
source_ips = var.nodeport_whitelist
}
}

View File

@@ -1,20 +1,22 @@
output "master_ip_addresses" {
value = {
for key, instance in hcloud_server.master :
instance.name => {
"private_ip" = hcloud_server_network.master[key].ip
"public_ip" = hcloud_server.master[key].ipv4_address
}
for name, machine in var.machines :
name => {
"private_ip" = hcloud_server_network.machine[name].ip
"public_ip" = hcloud_server.machine[name].ipv4_address
}
if machine.node_type == "master"
}
}
output "worker_ip_addresses" {
value = {
for key, instance in hcloud_server.worker :
instance.name => {
"private_ip" = hcloud_server_network.worker[key].ip
"public_ip" = hcloud_server.worker[key].ipv4_address
}
for name, machine in var.machines :
name => {
"private_ip" = hcloud_server_network.machine[name].ip
"public_ip" = hcloud_server.machine[name].ipv4_address
}
if machine.node_type == "worker"
}
}
@@ -24,4 +26,4 @@ output "cluster_private_network_cidr" {
output "network_id" {
value = hcloud_network.kubernetes.id
}
}

View File

@@ -1,8 +1,11 @@
---
variant: flatcar
version: 1.0.0
passwd:
users:
- name: ${user_flatcar}
ssh_authorized_keys: ${ssh_keys}
storage:
files:
- path: /home/core/works
@@ -13,4 +16,4 @@ storage:
#!/bin/bash
set -euo pipefail
hostname="$(hostname)"
echo My name is ${name} and the hostname is $${hostname}
echo My name is ${name} and the hostname is $${hostname}

View File

@@ -1,6 +1,6 @@
variable "zone" {
type = string
type = string
default = "fsn1"
}
@@ -9,7 +9,7 @@ variable "prefix" {
}
variable "user_flatcar" {
type = string
type = string
default = "core"
}

View File

@@ -1,13 +1,14 @@
terraform {
required_providers {
hcloud = {
source = "hetznercloud/hcloud"
source = "hetznercloud/hcloud"
}
ct = {
source = "poseidon/ct"
version = "0.11.0"
}
null = {
source = "hashicorp/null"
source = "hashicorp/null"
}
}
}
}

View File

@@ -75,17 +75,17 @@ resource "hcloud_firewall" "master" {
name = "${var.prefix}-master-firewall"
rule {
direction = "in"
protocol = "tcp"
port = "22"
source_ips = var.ssh_whitelist
direction = "in"
protocol = "tcp"
port = "22"
source_ips = var.ssh_whitelist
}
rule {
direction = "in"
protocol = "tcp"
port = "6443"
source_ips = var.api_server_whitelist
direction = "in"
protocol = "tcp"
port = "6443"
source_ips = var.api_server_whitelist
}
}
@@ -93,30 +93,30 @@ resource "hcloud_firewall" "worker" {
name = "${var.prefix}-worker-firewall"
rule {
direction = "in"
protocol = "tcp"
port = "22"
source_ips = var.ssh_whitelist
direction = "in"
protocol = "tcp"
port = "22"
source_ips = var.ssh_whitelist
}
rule {
direction = "in"
protocol = "tcp"
port = "80"
source_ips = var.ingress_whitelist
direction = "in"
protocol = "tcp"
port = "80"
source_ips = var.ingress_whitelist
}
rule {
direction = "in"
protocol = "tcp"
port = "443"
source_ips = var.ingress_whitelist
direction = "in"
protocol = "tcp"
port = "443"
source_ips = var.ingress_whitelist
}
rule {
direction = "in"
protocol = "tcp"
port = "30000-32767"
source_ips = var.nodeport_whitelist
direction = "in"
protocol = "tcp"
port = "30000-32767"
source_ips = var.nodeport_whitelist
}
}

View File

@@ -24,4 +24,4 @@ output "cluster_private_network_cidr" {
output "network_id" {
value = hcloud_network.kubernetes.id
}
}

Some files were not shown because too many files have changed in this diff.