Compare commits


299 Commits

Author SHA1 Message Date
rtsp
1f65e6d3b5 [ingress-nginx] upgrade to 1.2.1 (#8904) 2022-06-01 00:23:10 -07:00
Kenichi Omichi
9bf7aaf6cd Update RELEASE.md (#8884)
This updates the RELEASE.md file so the release process is easier to
understand, based on hands-on experience.
2022-06-01 00:23:03 -07:00
Max Gautier
5512465b34 Revert "Set exact user for Kubelet services" (#8872)
This reverts commit e375678674.

The workaround of explicitly specifying root for the kubelet unit was
for pulling images from a private registry. Kubernetes now has a
dedicated mechanism for that with imagePullSecrets.
2022-06-01 00:19:02 -07:00
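The dedicated mechanism referred to above is the Kubernetes imagePullSecrets field. A minimal sketch of pulling from a private registry with it (Secret and image names are hypothetical):

  apiVersion: v1
  kind: Pod
  metadata:
    name: private-registry-pod
  spec:
    imagePullSecrets:
      - name: regcred                          # docker-registry Secret created beforehand
    containers:
      - name: app
        image: registry.example.com/team/app:1.0
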
Chris Ricker
2f30ab558a Add 1.24 mappings for etcd and snapshot_controller (#8903)
Map appropriate versions of etcd and snapshot_controller containers with
k8s 1.24
2022-06-01 00:09:02 -07:00
Daniil Muidinov
5c136ae3af [calico] add 3.22.3 and 3.23.1 (#8897)
* [calico]
* add 3.22.3 and 3.23.1
* set 3.22.3 default
* fix download crd for calico 3.22.3 and above

* update calico README.md
2022-05-31 13:27:23 -07:00
mahjonp
c927da00e0 Support cilium ip-masq-agent configuration (#8893)
* fix: deploying Cilium with eBPF-based Masquerading failed

Signed-off-by: mahjonp <junpeng.man@gmail.com>

* forgot to add the enable-ip-masq-agent flag

Signed-off-by: mahjonp <junpeng.man@gmail.com>
2022-05-31 09:26:53 -07:00
Samuel Liu
1600fd9082 clean up tags (#8880) 2022-05-31 07:52:53 -07:00
Samuel Liu
14acd124bc fix containerd images download bugs (#8894) 2022-05-31 00:22:53 -07:00
rtsp
e3cbbfb9ed [kubernetes] make 1.23.7 the new default (#8888) 2022-05-29 17:08:51 -07:00
rtsp
5f21e0b58b Update components version in README.md (#8886) 2022-05-29 14:10:51 -07:00
Alessio Greggi
d22204a59f docs: add hardening guide (#8868) 2022-05-29 12:36:50 -07:00
ERIK
90289b8502 add arch var in dockerfile (#8875)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2022-05-29 12:32:51 -07:00
Mohamed Zaian
78aacee21b [kubernetes] add hashes for 1.24.1 and other versions. (#8876)
* [kubernetes] add hashes for 1.24.1 and other versions.
versions: v1.21.13, v1.22.10, v1.23.7 & v1.24.1

* [kubernetes] make v1.23.7 the default
2022-05-27 12:00:42 -07:00
Gleb Galkin
f47aca3558 Added |bool for rhel_enable_repos (#8871) 2022-05-26 18:51:55 -07:00
Kenichi Omichi
73fc70dbe8 Delete kube_version v1.20- related code (#8869)
Kubespray currently supports Kubernetes version 1.21 or later with
`kube_version_min_required: v1.21.0`

so the kube_version v1.20 (and earlier) related code is not used at all.
This deletes that code for cleanup.
2022-05-25 21:31:22 -07:00
Kenichi Omichi
dc2a18e436 Merge pull request #8815 from simplekube-ro/dont_clobber_calico
[calico] don't clobber calico options set by the user
2022-05-24 10:25:48 -07:00
Thearas
82590eb087 fix remove docker-ce.repo failed (#8856) 2022-05-24 05:44:06 -07:00
Ross Kusler
4c97ce747c Adding support for the kube-router --cluster-asn flag (#8837) 2022-05-23 16:39:10 -07:00
Samuel Liu
ebbc5ed0ce add liupeng0518 to reviewers (#8853) 2022-05-23 21:42:14 +03:00
Necatican Yıldırım
dc1af5a9c5 [etcd] Add support for setting the request size limit (#8849)
* [etcd] Add extra documentation for `etcd_memory_limit` and `etcd_quota_backend_bytes`

Signed-off-by: necatican <necaticanyildirim@gmail.com>

* [etcd] Add support for setting ETCD_MAX_REQUEST_BYTES

Signed-off-by: necatican <necaticanyildirim@gmail.com>
2022-05-23 09:36:03 -07:00
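A sketch of the related inventory variables: etcd_memory_limit and etcd_quota_backend_bytes are existing Kubespray variables, while the exact name of the new max-request-bytes variable is assumed from the commit description:

  etcd_memory_limit: "2048M"
  etcd_quota_backend_bytes: "2147483648"   # 2 GiB backend quota
  etcd_max_request_bytes: "1572864"        # assumed variable name; sets ETCD_MAX_REQUEST_BYTES (1.5 MiB)
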
irizzant
85bd1eea27 fix(calico): add missing "get" verb (#8847)
Signed-off-by: irizzant <i.rizzante@gmail.com>
2022-05-21 01:20:00 -07:00
Necatican Yıldırım
2b151c6aa2 cni-plugins: upgrade to 1.1.1 (#8852)
Signed-off-by: necatican <necaticanyildirim@gmail.com>
2022-05-21 11:14:16 +03:00
David Louks
93fe3e06ef Add support for including annotations on aws-ebs-csi-controller (#8779)
* Add support for including annotations on aws-ebs-csi-controller

* update comment to specify role arn
2022-05-20 15:00:00 -07:00
Tamas Pasztor
9d3a894991 Possible remove ippools from cni config (#8845)
* Possible remove ippools from cni config

* Typo

* Update roles/network_plugin/calico/templates/cni-calico.conflist.j2

Co-authored-by: Kenichi Omichi <ken1ohmichi@gmail.com>

* Update cni-calico.conflist.j2

Incorrectly deleted calico forwarding content.

* Update roles/network_plugin/calico/templates/cni-calico.conflist.j2

Co-authored-by: Kenichi Omichi <ken1ohmichi@gmail.com>

Co-authored-by: Kenichi Omichi <ken1ohmichi@gmail.com>
2022-05-19 23:45:13 -07:00
Kenichi Omichi
0e6b727e53 Update docs for using venv (#8842)
Because of the many different Linux distributions, it is difficult to install
ansible dependencies system-wide in a stable way.
A part of the Kubespray doc[1] recommends using venv to avoid such issues,
and this applies venv usage to the other parts of the doc.

[1]: https://github.com/kubernetes-sigs/kubespray/blob/master/docs/setting-up-your-first-cluster.md#set-up-kubespray
2022-05-19 23:39:12 -07:00
Andrey
e42a01f203 Fixed systemd-networkd restart for ubuntu 22.04, when using reset.yml (#8841)
* Fixed systemd-networkd restart  for ubuntu 22.04

* fixed systemd-networkd restart for all Ubuntu
2022-05-20 09:34:53 +03:00
Samuel Liu
a28b58dbd0 [calico]use ipamconfig instead of calico ipam command (#8839)
* use ipamconfig instead of calico ipam command

* fix ansible lint
2022-05-19 11:13:20 -07:00
orange-llajeanne
a26a9ee14f set apparmor_enabled in netchecker task (#8844) 2022-05-19 10:49:21 -07:00
Kenichi Omichi
c09fcd4f92 Skip gathering facts when reset_nodes is false (#8843)
The doc[1] explains we need to specify

  "-e reset_nodes=false -e allow_ungraceful_removal=true"

to delete an offline node. However, the task "Gather facts"
tried to gather facts of the offline node as well and the task
failed.
This adds a condition to skip gathering facts when reset_nodes
is false in remove-node.yml.

[1]: https://github.com/kubernetes-sigs/kubespray/blob/master/docs/nodes.md#3-remove-an-old-node-with-remove-nodeyml
2022-05-19 01:04:07 -07:00
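A minimal sketch of the kind of condition described, not the exact task from remove-node.yml:

  - name: Gather facts
    setup:
    when: reset_nodes | default(true) | bool
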
Samuel Liu
593359ec77 fix kube-ovn image (#8838) 2022-05-18 08:36:53 -07:00
Maxime Guyot
34ec4d5d40 Move woopstar to emeritus approver (#8809) 2022-05-18 02:36:53 -07:00
Kay Yan
3d8f3bc0b7 Fix the invalid kube vip manifest (#8831)
* add Feature synchronized time checking

* fix-invalid-kube-vip-manifest
2022-05-17 23:48:55 -07:00
Samuel Liu
eea7bb7692 only need run this once (#8833)
calicoctl ipam xx
calicoctl apply xx
2022-05-17 09:52:27 -07:00
Cristian Calin
3a89e31dee [ansible] update ansible and cryptography requirements to work on ubuntu 22.04 (#8826) 2022-05-16 11:14:17 -07:00
Cristian Calin
0c504e4984 [docs] document support for ansible versions (#8827)
drop note about not supporting ansible 2.9 since we still cover it in
nightly CI
2022-05-16 00:50:17 -07:00
Kenichi Omichi
0bf070c33b doc: write how to use kata-container for pods (#8817)
kata-containers is not used by default even if kata_containers_enabled is enabled.
This updates the doc to describe how to do that.
2022-05-13 23:15:18 -07:00
Cyclinder
dc8ad78206 fix: incorrect condition type (#8822)
Signed-off-by: cyclinder qifeng.guo@daocloud.io
2022-05-13 14:09:56 -07:00
ERIK
48e938660d Allow replacement of address prefixes for all images (#8764)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2022-05-13 09:23:14 +03:00
Mohamed Zaian
632d457f78 [ingress-nginx] upgrade to 1.2.0 (#8814) 2022-05-12 09:07:14 -07:00
Calin Cristian Andrei
569a319ff5 [calico] don't clobber user set bgp configuration options that are not managed by kubespray 2022-05-12 15:50:38 +00:00
Calin Cristian Andrei
47812ec002 [calico] don't clobber user set ippool options that are not managed by kubespray 2022-05-12 15:50:05 +00:00
Calin Cristian Andrei
c27dee57ea [calico] don't clobber user set felixconfig options that are not managed by kubespray 2022-05-12 15:49:24 +00:00
weizhoublue
b289f533b3 get wrong server name of coredns (#8811)
Signed-off-by: weizhou.lan@daocloud.io <weizhou.lan@daocloud.io>
2022-05-12 08:33:14 -07:00
Cyclinder
3eb0a4071a set default value of name to "k8s-pod-network" (#8813)
Signed-off-by: cyclinder qifeng.guo@daocloud.io
2022-05-12 08:29:14 -07:00
Oogy
5684610a55 Support metallb peer password (#8792)
* support metallb peer password

* add MetalLB BGP password example
2022-05-11 21:39:15 -07:00
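A sketch of a BGP peer entry carrying the new password; addresses are examples and the exact key layout of metallb_peers is assumed rather than copied from the sample inventory:

  metallb_protocol: "bgp"
  metallb_peers:
    - peer_address: 192.0.2.1
      peer_asn: 64512
      my_asn: 4200000001
      password: "changeme"   # per-peer BGP password added by this PR
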
Samuel Liu
f26f544ff6 [kube-ovn]: update kube-ovn version and sync some feature (#8790)
* [kube-ovn]: some feature

kube-ovn vlan mode
ipv6/ipv4 dual stack
...

* remove unused env

* fix readinessprobe
2022-05-11 21:35:15 -07:00
Ajarmar
b9e5b0cb53 UpCloud server plan, firewall, load balancer integration (#8758)
* [upcloud] add option to use preconfigured cpu/mem plan

* [upcloud] add option to use firewall rules for API server/SSH access

* [upcloud] add option to use managed load balancer
2022-05-11 10:15:03 -07:00
Necatican Yıldırım
13443b05a6 Overhaul Cilium manifests to match the newer versions (#8717)
* [cilium] Separate templates for cilium, cilium-operator, and hubble installations

Signed-off-by: necatican <necaticanyildirim@gmail.com>

* [cilium] Update cilium-operator templates

Signed-off-by: necatican <necaticanyildirim@gmail.com>

* [cilium] Allow using custom args and mounting extra volumes for the Cilium Operator

Signed-off-by: necatican <necaticanyildirim@gmail.com>

* [cilium] Update the cilium configmap to filter out the deprecated variables, and add the new variables

Signed-off-by: necatican <necaticanyildirim@gmail.com>

* [cilium] Add an option to use Wireguard encryption on Cilium 1.10 and up

Signed-off-by: necatican <necaticanyildirim@gmail.com>

* [cilium] Update cilium-agent templates

Signed-off-by: necatican <necaticanyildirim@gmail.com>

* [cilium] Bump Cilium version to 1.11.3

Signed-off-by: necatican <necaticanyildirim@gmail.com>
2022-05-11 06:23:04 -07:00
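A sketch of enabling the new Wireguard option; the variable names are assumed from the commit messages rather than taken from the final templates:

  cilium_version: "v1.11.3"
  cilium_encryption_enabled: true        # assumed variable name
  cilium_encryption_type: "wireguard"    # requires Cilium 1.10 or newer
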
Andrew Zagorodnuk
e70c00a0fe fix: Waiting until Volumes will be detached from the node on graceful node removal (#8739) 2022-05-10 09:57:43 -07:00
spaced
bb67b654c5 local volume provisioner should not run on control plane nodes by default (#8805) 2022-05-10 19:04:24 +03:00
Kenichi Omichi
aef25819bc nit: Add offline note for kube-* images (#8718) 2022-05-10 06:41:44 -07:00
weizhoublue
1d96f465f4 arm64 support of cilium (#8803)
since cilium v1.10, arm64 is supported
https://cilium.io/blog/2021/05/20/cilium-110

Signed-off-by: weizhou.lan@daocloud.io <weizhou.lan@daocloud.io>
2022-05-10 02:55:43 -07:00
emiran-orange
8f618ab408 Fix condition on kata_containers_version/kube_version when kata_containers_enabled is false (#8804) 2022-05-09 14:56:32 -07:00
Hugo Blom
5296d7ef9c Added playbook to wait for cloud-init to finish (#8799) 2022-05-09 10:49:19 -07:00
Robin Wallace
b715500b48 csi: bump upcloud csi driver (#8784) 2022-05-09 10:43:19 -07:00
Alessio Greggi
37a5271f5a feat: add variables to manage makeIPTablesUtilChains and streamingConnectionIdleTimeout kubelet parameters (#8796) 2022-05-09 09:25:19 -07:00
Robin Wallace
42fc71fafa [PodSecurityPolicy] Move the install of psp (#8744) 2022-05-09 09:21:19 -07:00
Victor Morales
02b6e4833a Update Kata Containers runtime (#8797)
* Update Kata containers binary to 2.4.1 version

* Update overhead kata runtime values

* Fix kata-qemu default values in CRI-O
2022-05-08 17:01:18 -07:00
Andy
323a111362 [kubelet] set correct resolv.conf for Ubuntu 22.04 (#8795) 2022-05-06 16:31:04 -07:00
Alessio Greggi
e7df4d3dd9 add support for service-account-lookup parameter (#8781)
* feat: add variable to manage service-account-lookup on kube-apiserver

* docs: add documentation about service-account-lookup variable
2022-05-06 00:39:07 -07:00
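A sketch of the new toggle; the variable name is assumed from the commit description:

  kube_apiserver_service_account_lookup: true   # rendered as --service-account-lookup=true
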
David Louks
3e52a0db95 Add optional setting for ca data in auth webhook (#8777)
* Add optional setting for ca data in auth webhook

* add webhook token auth variables to sample inventory
2022-05-05 14:52:43 -07:00
Cristian Calin
94484873d1 [containerd] add 1.6.4 which is needed for kubernetes 1.24.0 and make it the default (#8791) 2022-05-05 14:10:43 -07:00
Elif Akyıldırım
0d6ea85167 Assert that IP range is enough for the nodes (#8720)
* Assert that IP range is enough for the nodes 

Co-authored-by: Necatican Yıldırım <necaticanyildirim@gmail.com>

* Fixed whitespace

* Fixed errors

* Fixed errors

Co-authored-by: Necatican Yıldırım <necaticanyildirim@gmail.com>
2022-05-05 08:48:20 -07:00
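The assertion checks the kind of capacity math below, illustrated with the standard Kubespray defaults:

  kube_pods_subnet: 10.233.64.0/18   # pod CIDR for the whole cluster
  kube_network_node_prefix: 24       # per-node pod CIDR size
  # Max nodes = 2^(24 - 18) = 64, so an inventory with more than
  # 64 nodes should fail the assertion with these defaults.
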
Florian Ruynat
674ec92224 Add crictl 1.24 for new k8s version (#8787) 2022-05-05 08:40:22 -07:00
Victor Morales
e7e5037a86 Add a container_manager validation (#8785) 2022-05-04 23:58:19 -07:00
Kenichi Omichi
fbcf426240 Drop containerd 1.4 support (#8780)
The version 1.4 of containerd has been End of Life since March 3, 2022
as https://containerd.io/releases/#support-horizon
It is nice to drop the support from Kubespray also to follow containerd.
2022-05-04 23:02:20 -07:00
Mohamed Zaian
2301554e98 [kubernetes] add hashes for 1.24.0 (#8783) 2022-05-04 22:58:21 -07:00
Calin Cristian Andrei
5bc35002ba [remove-etcd-node] fix json path query 2022-05-04 06:35:51 -07:00
Calin Cristian Andrei
9143810a4d [CI] add remove node job 2022-05-04 06:35:51 -07:00
Calin Cristian Andrei
8f118fb619 [reset] fix task inclusion logic for network plugin 2022-05-04 06:35:51 -07:00
Calin Cristian Andrei
1113460b68 [cri-o] molecule switch from ubuntu 18 to ubuntu 20 2022-05-04 14:46:17 +02:00
Florian Ruynat
74c7e009b7 Move flannel to kubespray/quay for CI (#8774) 2022-05-04 00:11:30 -07:00
Lubos Mercl
c20ab7d987 add fix for GCP CSI driver (#8616)
Signed-off-by: Lubos Mercl <lubos.mercl@gmail.com>
2022-05-03 08:55:56 -07:00
Robin Wallace
fe66121287 [Openstack] master foreach and fixes (#8709)
* [openstack] fix for new network modules

* [openstack] for-each master nodes
2022-05-03 08:51:56 -07:00
Cristian Calin
9605bbaa67 [nerdctl] upgrade to 0.19.0 (#8772) 2022-05-03 05:39:56 -07:00
Cristian Calin
b7ce6a9f79 [ansible] upgrade to 5.7 (#8771) 2022-05-03 01:29:55 -07:00
Kenichi Omichi
c04a73c11a Update containerd version to 1.6.3 (#8770)
containerd version 1.6.3 has been released as [1]
This adds the checksums and makes Kubespray use it.

[1]: https://github.com/containerd/containerd/releases/tag/v1.6.3
2022-05-02 22:43:55 -07:00
Kenichi Omichi
f184725c5f Use ansible 2.12 for testcases_prepare (#8763)
tests/requirements.txt links to tests/requirements-2.12.txt, so
Kubespray uses ansible 2.12 by default for testing. However we
forgot to update testcases_prepare.sh to use ansible 2.12.
This updates testcases_prepare to use ansible 2.12.
2022-05-02 11:34:31 -07:00
bilalcaliskan
26a0b0f1e8 chore(flannel): change flannel repository and upgrade image version (#8740)
* chore: change flannel repository and upgrade image version

* docs: upgrade flanneld version
2022-05-02 11:29:14 -07:00
Alessio Greggi
fa1d222eee add support for EventRateLimit plugin configuration (#8711)
* feat: add support for EventRateLimit admission plugin

* docs: add documentation about admission_control_config_file and EventRateLimit configuration
2022-05-02 11:03:15 -07:00
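For reference, the upstream admission configuration that such variables ultimately render has roughly this shape (file names hypothetical):

  # file passed to kube-apiserver via --admission-control-config-file
  apiVersion: apiserver.config.k8s.io/v1
  kind: AdmissionConfiguration
  plugins:
    - name: EventRateLimit
      path: eventratelimit.yaml
  ---
  # eventratelimit.yaml
  apiVersion: eventratelimit.admission.k8s.io/v1alpha1
  kind: Configuration
  limits:
    - type: Namespace
      qps: 50
      burst: 100
      cacheSize: 2000
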
Cristian Calin
56cf163a23 [kubernetes] actually make 1.23.6 the default (#8767) 2022-05-02 00:43:14 -07:00
Mohamed Zaian
afcedf6d77 Pull master, Rebase, add changes again (#8745) 2022-05-02 00:39:14 -07:00
Chris Ricker
21fc197ee0 Ensure containerd service unmasking (#8726)
* Force containerd service unmasking

Force systemd to unmask and start service when adding containerd service

* Eliminate restart and move unmasking step

Switch to start instead of restart
Move unmasking to restart handler

* Add unmasking to similar container runtimes

* Add missing service names
2022-04-29 08:39:14 -07:00
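A minimal sketch of the unmask-and-start pattern with Ansible's systemd module, not the exact Kubespray task:

  - name: containerd | Ensure service is unmasked, enabled and started
    systemd:
      name: containerd
      masked: no
      enabled: yes
      state: started
      daemon_reload: yes
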
Calin Cristian Andrei
fcb4c8fb61 [kubernetes] make 1.23.6 the new default 2022-04-29 07:57:13 -07:00
Calin Cristian Andrei
b6e2c56ae6 [kubernetes] add hashes for 1.21.12 2022-04-29 07:57:13 -07:00
Calin Cristian Andrei
b005985d4e [kubernetes] add hashes for 1.23.6 2022-04-29 07:57:13 -07:00
Samuel Liu
1294fd5730 check calico ipv6 (#8738)
* check calico ipv6

* just check ipip mode for ipv6
2022-04-29 00:35:13 -07:00
Cristian Calin
835fd86a08 [CI] split molecule testes to run in parallel (#8756)
* add parametrization to molecule_run.sh

* [CI] split molecule tests to allow parallelization of work
2022-04-29 00:09:12 -07:00
Mohamed Zaian
b7004d72c5 [kubernetes] add hashes for 1.22.9 (#8746)
* [kubernetes] add hashes for 1.22.9
2022-04-28 16:10:50 +03:00
Kenichi Omichi
eb566ca626 Remove aufs-tools from Ubuntu requirement (#8754)
aufs-tools was originally required for the docker.io package,
but Kubespray installs the docker-ce package instead today.
In addition, Ubuntu 20.04 doesn't provide aufs-tools, as noted in [1].
This removes aufs-tools from the Ubuntu requirements.

[1]: https://bugs.launchpad.net/ubuntu/+source/aufs-tools/+bug/1947004
2022-04-27 23:04:55 -07:00
Cristian Calin
aa12f1c56b [CI] fix packet_ubuntu20-calico-etcd-kubeadm-upgrade-ha job (#8752) 2022-04-27 12:39:36 -07:00
Cristian Calin
6cc5b38a2e [terraform] use modern day equinix metal provider (#8748)
* [terraform] use modern day equinix metal provider

* [CI] ensure packet job tests metal
2022-04-27 10:34:13 -07:00
Mathieu Parent
e6c4330e4e calico: vxlan is the default for calico_network_backend (#8750)
Since https://github.com/kubernetes-sigs/kubespray/pull/8434
2022-04-27 02:24:11 -07:00
Kenichi Omichi
1e827f9807 Update kata-containers.md (#8747)
* kata container related options exist in k8s-cluster.yml,
  not k8s_cluster.yml

* https://github.com/kata-containers/runtime has been archived and
  https://github.com/kata-containers/kata-containers is used today.
2022-04-26 07:06:53 -07:00
Olle Larsson
a4f26dc8f3 [terraform/openstack] add safespring to provider list (#8735) 2022-04-25 04:43:39 -07:00
Mulugeta Ayalew Tamiru
3f065918d9 Update verbs for volumeattachments resource (#8731)
* Update verbs for volumeattachments resource

Update verbs for volumeattachments resource so that the kubelet can create volumeattachments and mount volumes when deploying Kubernetes on VMware vSphere.

* Update verbs for volumeattachments resource

Update verbs for volumeattachments resource to match upstream

* Update vsphere-csi-controller-rbac.yml.j2
2022-04-22 00:04:13 -07:00
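The change concerns a ClusterRole rule of roughly this shape; the exact verb list in the upstream vSphere CSI RBAC may differ:

  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "patch", "update"]
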
Cristian Calin
2c2d4513ac [helm] upgrade to 3.8.2 (#8723) 2022-04-18 12:51:50 -07:00
zhengtianbao
937e64d296 Update flannel use install-cni-plugin to fit upstream (#8714)
* Update flannel use install-cni-plugin to fit upstream

* Replace flannel cni repo

* Remove download flannel binary
2022-04-18 09:44:41 -07:00
Cristian Calin
3261d26181 [etcd] ensure etcd is properly upgraded when managed by kubeadm (#8722)
* [etcd] ensure etcd is properly upgraded when managed by kubeadm

* [CI] add periodic job to test upgrade of etcd managed by kubeadm
2022-04-17 10:32:41 -07:00
Mathieu Parent
c98a0a448f metallb: Add images to downloads (#8715)
For offline mode
2022-04-14 10:06:46 -07:00
Mohamed Zaian
7e7218f5ce etcd: add etcd v3.5.3 for kubernetes 1.21+ (#8712)
* As per https://github.com/kubernetes-sigs/kubespray/pull/8664, I propose making etcd v3.5.3 the default for any kubernetes version which uses 3.5.x, since 3.5.[0-2] is not recommended for production.
2022-04-14 05:48:46 -07:00
Cristian Calin
45262da726 [calico] call calico checks early on to prevent altering the cluster with bad configuration (#8707) 2022-04-14 01:08:46 -07:00
Florian Ruynat
aef5f1e139 Add tz to kubespray image 2022-04-13 08:22:45 +02:00
SOPHAL HONG
3d4baea01c Add tag to AWS VPC subnets for automatic subnet discovery by load balancers or ingress controllers (#8705) 2022-04-12 10:05:23 -07:00
Julien Le Fur
30306d6ec7 Enable external CA mode for control-plane deployment (#8620) 2022-04-12 05:47:23 -07:00
Robin Wallace
d7254eead6 UpCloud integration (#8653)
* [upcloud] add upcloud csi-driver

* Option to use ansible_host as api ip for kubeconfig
2022-04-11 15:13:23 -07:00
Anthony Bible
9dced7133c Fixes for Hetzner terraform and Hetzner Cloud (#8702)
* - add ability to specify the network_zone in hetzner terraform
- Export the network id from hetzner terraform to the generated inventory.ini

* - Add with_networks variable to allow different deployments of hcloud controller manager

- Add network id to hcloud controller secret (added via the inventory)

- Don't include extra_args if it's not set
2022-04-11 10:26:06 -07:00
Kenichi Omichi
c2fb1a0747 Add VAGRANT_ANSIBLE_TAGS for normal deployment (#8697)
The current ansible.tags 'facts' is for skipping the actual Kubespray deployment
in vagrant CI because the deployment takes a long time. However, the static
'facts' also skips the deployment for normal usage of vagrant.
That causes confusion.

This adds VAGRANT_ANSIBLE_TAGS to skip the deployment for vagrant CI.
2022-04-08 23:58:04 -07:00
Thomas Eberle
00a4d2d3c4 Removed quotation of nerdctl_extra_flags. (#8695)
The quotations in the variable nerdctl_extra_flags are not required for the `nerdctl_image_pull_command` and throw the following error when executing the cluster-playbook with `container_insecure_registries` set:
        unknown flag: --insecure-registry\\\"
This happens because the complete nerdctl_image_pull_command string variable gets split into a string array for the cmd task. The escaped quotation doesn't get escaped properly and is added to the cmd-string array as part of the command. This leads to a wrongly written insecure-registry flag, which throws this error.
2022-04-08 08:02:43 -07:00
Samuel Liu
424ef3b3f9 [calico] add calico apiserver (#8690)
* [calico] add calico apiserver

* fix yamllint

* remove addext argument

* Configure API server with the CA bundle

* add check kdd
2022-04-08 00:02:42 -07:00
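Enabling the new component is presumably a single inventory toggle; the variable name below is an assumption, not confirmed by the commit:

  calico_apiserver_enabled: true
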
Mathieu Parent
996ef98b87 Add support for kube-vip (#8669)
Signed-off-by: Mathieu Parent <math.parent@gmail.com>
2022-04-07 10:37:57 -07:00
Unai Arríen
19d5a1c7c3 Ensure all Kubelet required kernel values are configured when enabling protectKernelDefaults (#8692) 2022-04-07 08:33:59 -07:00
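The kernel values kubelet expects when protectKernelDefaults is enabled are well known; a sketch of ensuring them with the ansible.posix.sysctl module (not the actual Kubespray task):

  - name: Ensure kernel values required by kubelet protectKernelDefaults
    ansible.posix.sysctl:
      name: "{{ item.name }}"
      value: "{{ item.value }}"
      state: present
    loop:
      - { name: vm.overcommit_memory, value: "1" }
      - { name: vm.panic_on_oom, value: "0" }
      - { name: kernel.panic, value: "10" }
      - { name: kernel.panic_on_oops, value: "1" }
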
rtsp
0481dd946f [cert-manager] Upgrade to v1.8.0 (#8688) 2022-04-06 00:52:57 -07:00
cyril-corbon
29109575f5 fix: reset docker was not removing docker properly (#8680)
Signed-off-by: Cyril Corbon <corboncyril@gmail.com>
2022-04-05 21:36:55 -07:00
emiran-orange
3782573ede Single quotes are missing in jsonpath argument of kubectl get node (#8683) 2022-04-05 09:45:38 -07:00
Alessio Greggi
bba91a7524 split kube_feature_gates variable for different kubernetes components (#8677)
* feat: split kube_feature_gates variable for different kubernetes components

* docs: add kube_feaute_gates componet variables
2022-04-05 05:39:37 -07:00
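A sketch of the split variables; the names follow the pattern described in the commit and may not match the final implementation exactly:

  kube_apiserver_feature_gates: ["TTLAfterFinished=true"]
  kube_controller_manager_feature_gates: []
  kube_scheduler_feature_gates: []
  kube_proxy_feature_gates: []
  kubelet_feature_gates: []
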
Cristian Calin
b67cadf743 [crun] upgrade to 1.4.4 (#8675) 2022-04-04 23:57:36 -07:00
cyril-corbon
56dda4392c [validate-container-engine] check if kubelet is present was not working (#8679)
Signed-off-by: Cyril Corbon <corboncyril@gmail.com>
2022-04-04 09:34:12 -07:00
Cristian Calin
34fec09ff1 [containerd] upgrade versions to address CVE-2022-24769 (#8671)
* [containerd] add hashes for 1.5.11

* [containerd] add hashes for 1.6.2

* [containerd] make 1.6.2 the new default
2022-04-04 05:30:11 -07:00
Cristian Calin
cefd1339fc [vsphere_csi] update to 2.5.1 and make external_vsphere_version 7.0u1 by default (#8676) 2022-04-04 01:08:11 -07:00
Cristian Calin
b915376194 [runc] upgrade to 1.1.1 (#8674) 2022-04-04 00:42:23 -07:00
Cristian Calin
455cc6ff75 [nerdctl] upgrade to 0.18.0 (#8672) 2022-04-04 00:42:11 -07:00
Cristian Calin
cc9c376d0f [validate-container-engine] add facts tag to tasks needed for vagrant jobs (#8678) 2022-04-04 00:32:11 -07:00
Kenichi Omichi
018611f829 Fix quotation of nerdctl_extra_flags (#8668)
Due to missing quotation of nerdctl_extra_flags, ansible-playbook was failed:

  Using module file /usr/local/lib/python3.6/dist-packages/ansible/modules/command.py
  Pipelining is enabled.
    [..]
    File "/usr/lib/python3.8/shlex.py", line 191, in read_token
      raise ValueError("No closing quotation")

This fixes the issue.

T-Eberle investigated the issue and found the solution.
Thank you T-Eberle!
2022-04-02 10:56:09 -07:00
cyril-corbon
1781eab21f fix: uninstall contailer engine if service is running (#8662) 2022-04-01 09:20:46 -07:00
190ikp
78b05d0ffc fix disk controller type in Vagrantfile (#8656) 2022-03-31 10:51:01 -07:00
Florian Ruynat
1c0df78278 Add ETCD_EXPERIMENTAL_INITIAL_CORRUPT_CHECK flag to etcd config (#8664) 2022-03-31 08:17:01 -07:00
Kenichi Omichi
6cc9da6b0a Update vagrant.md (#8663)
To read it easily, this puts new lines.
2022-03-31 00:07:00 -07:00
Florian Ruynat
6af9cae0a5 Add missing 2.10 ansible test (#8665) 2022-03-30 08:12:27 -07:00
Cristian Calin
ef29455652 [ansible] make ansible 5.x the new default version (#8660)
* [ansible] make ansible 5.x the new default version and move different versions tested to nightly jobs

* [CI] jobs were missing proper ansible cleanup
2022-03-29 15:36:11 -07:00
Kenichi Omichi
503ab0f722 Run 0100-dhclient-hooks if dhcpclient is enabled (#8658)
When running Kubespray on static IP environments, a task failed like:

  TASK [kubernetes/preinstall : Configure dhclient hooks for resolv.conf (RH-only)]
  fatal: [ak8s2]: FAILED! => {
    "changed": false, "checksum": "..",
    "msg": "Destination directory /etc/dhcp/dhclient.d does not exist"}

This adds a check on dhclientconffile so that the 0100-dhclient-hooks task
runs only if dhcpclient is enabled.
2022-03-29 00:11:11 -07:00
Christian Rohmann
90883e76af terrform/openstack: Fix templating of ansible_ssh_common_args in no_floating.yml if used as TF module (#8646)
* terraform/openstack: Use path.module for ansible_bastion_template.txt

This extends on #7643 by not using path.root, but switching to path.module
to allow use of the terraform code as a module itself. This change then keeps
all calls to the template file stable even for that use-case.

* terraform/openstack: Make sed calls fail on errors

By using a single call with two replacements to use of sed will create proper exit codes
and allowing for errors to be recognized by terraform.
2022-03-29 00:07:11 -07:00
Cristian Calin
113de8381c [ansible] add support for ansible 5 (ansible-core 2.12) (#8512) 2022-03-28 08:49:22 -07:00
Calin Cristian Andrei
652f2edbe1 [etcd] add 0 hash for arm v3.5.2 to prevent deployment failures 2022-03-28 08:40:30 +02:00
rtsp
a67e36703f Update cert-manager to v1.7.2 (#8648) 2022-03-26 04:53:22 -07:00
Samuel Liu
73c6943402 fix vagrant parameter (#8650) 2022-03-25 18:57:58 -07:00
Florian Ruynat
d46817d690 Remove centos7 molecule while opensuse mirror is flaky 2022-03-25 16:57:58 -07:00
Florian Ruynat
97cb64c62d Remove k8s module for ns creation 2022-03-25 16:57:58 -07:00
Florian Ruynat
3f70241fb7 Update kubernetes image to 2.18.1 2022-03-25 16:57:58 -07:00
Maciej Wereski
21b71b38a3 Vagrantfile: add var to set ansible verbosity level (#8639)
Signed-off-by: Maciej Wereski <m.wereski@partner.samsung.com>
2022-03-22 06:11:44 -07:00
Erwan Miran
b2f9442aba Have ingress_controller and external_provisioner in upgrade-cluster.yml (#8640) 2022-03-22 05:43:43 -07:00
Cristian Calin
fa9f85c7e9 [sysctl] set fs.may_detach_mounts=1 even when CRIs don't set it themselves (#8635) 2022-03-21 17:36:13 -07:00
Fredrik Liv
ffa285c2e7 Fixed cluster roles for openstack cloud controller (#8638) 2022-03-21 06:19:21 -07:00
Kenichi Omichi
7b1dc600d5 Fix the condition of drain on pre-remove task (#8634)
When running cluster.yml for new machines where containerd is already
installed but a Kubernetes cluster was not installed before, the task
"remove-node | List nodes" failed like

  "changed": false,
  "cmd": [
    "/usr/local/bin/kubectl", "--kubeconfig",
    "/etc/kubernetes/admin.conf", "get", "nodes", "-o",
    "go-template={{ range .items }}{{ .metadata.name }}
    {{ "\n" }}{{ end }}"
   ],
   ..
   "stderr": "error: stat /etc/kubernetes/admin.conf: no such file or directory",

That was due to a missing check of whether an existing Kubernetes cluster
exists before running the "kubectl drain" command.
This adds the check to avoid the issue.
2022-03-21 01:39:10 -07:00
Cristian Calin
5e67ebeb9e [container image] use focal (ubuntu 20.04) base image for our docker builds (#8631) 2022-03-18 09:58:41 -07:00
Fredrik Liv
af7066d33c Updated openstack cloud controller version to v1.22.0 (#8629)
* Updated openstack cloud controller version to match kubernetes version

* Rolled back file structure change
2022-03-18 01:47:16 -07:00
Cristian Calin
dd2d95ecdf [calico] don't enable ipip encapsulation by default and use vxlan in CI (#8434)
* [calico] make vxlan encapsulation the default

* don't enable ipip encapsulation by default
* set calico_network_backend by default to vxlan
* update sample inventory and documentation

* [CI] pin default calico parameters for upgrade tests to ensure proper upgrade

* [CI] improve netchecker connectivity testing

* [CI] show logs for tests

* [calico] tweak task name

* [CI] Don't run the provisioner from vagrant since we run it in testcases_run.sh

* [CI] move kube-router tests to vagrant to avoid network connectivity issues during netchecker check

* service proxy mode still fails connectivity tests so keeping it manual mode

* [kube-router] account for containerd use-case
2022-03-17 18:05:39 -07:00
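The new defaults described above correspond to inventory settings along these lines:

  calico_network_backend: vxlan
  calico_vxlan_mode: Always
  calico_ipip_mode: Never    # ipip encapsulation no longer enabled by default
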
Sergey
a86d9bd8e8 do not remove package in validate container engine role when Fedora CoreOS distr (#8626) 2022-03-17 06:49:20 -07:00
Calin Cristian Andrei
21b1516d80 [kubernetes] add hashes for 1.21.11 2022-03-17 05:03:20 -07:00
Calin Cristian Andrei
4c15038194 [kubernetes] add hashes for 1.22.8 2022-03-17 05:03:20 -07:00
Calin Cristian Andrei
538f9df5cc [kubernetes] make 1.23.5 the default 2022-03-17 05:03:20 -07:00
Calin Cristian Andrei
efb0412b63 [kubernetes] add hashes for 1.23.5 2022-03-17 05:03:20 -07:00
Qasim Mehmood
5a486a5cca Calico: Fix Wireguard support for CentOS Stream 9/RHEL 9 Beta (#8625) 2022-03-17 04:11:20 -07:00
Cristian Calin
394857b5ce [docker] add support for cri-dockerd as a replacement for dockershim (#8623) 2022-03-16 16:28:11 -07:00
Cristian Calin
5043517cfb [containerd] avoid cleanup of /usr/bin on ostree distributions (#8624) 2022-03-15 13:47:48 -07:00
Max Gautier
307d122a84 Helm-apps role for installing helm charts (#8347)
* Sketch of helm-apps role interface

* helm-apps: Early implementation and settings

* helm-apps: Fix README.md example playbook

* fixup! Sketch of helm-apps role interface

* Make the argument specs more explicit

* Remove exposed options from hardcoded default

* Simplify example playbook in README.md

- Define directly the roles parameters
- Add an example of option override for one chart only

* Use release instead of charts

Make explicit that the role is managing releases, not charts.
Simplify parameters naming
2022-03-14 08:29:58 -07:00
onock
d444a2fb83 [systemd-resolved] Fix DNS configuration according to docs/dns-stack.md and during reset of cluster (#8560) (#8561) 2022-03-14 02:08:22 -07:00
Kenichi Omichi
fb7c56e3d3 Add unit test for print_hostnames of inventory.py (#8558)
This adds a unit test for the function.
2022-03-12 23:40:23 -08:00
spaced
2b79be68e7 fix typo and duplicated declaration of ingressclasses (#8591) 2022-03-12 23:36:23 -08:00
Mac Chaffee
512d5e3348 Restart etcd if the etcd version changes (#8556)
Signed-off-by: Mac Chaffee <me@macchaffee.com>
2022-03-11 18:08:23 -08:00
Unai Arríen
4b6892ece9 Add epoch to docker-ce and docker-ce-cli packages to ensure docker up… (#8618)
* Add epoch to docker-ce and docker-ce-cli packages to ensure docker upgrade

* Split container-engine redhat vars to support legacy RHEL 7 version management

* Support ansible_distribution_major_version when discovering vars with ansible_os_family
2022-03-11 02:45:07 -08:00
Toni Tauro
5a49ac52f9 feat(calico): add configurable ipam strictaffinity (#8581)
Signed-off-by: Toni Tauro <toni.tauro@adfinis.com>
2022-03-07 22:58:33 -08:00
Cristian Calin
db1e30e4fc [calico] add 3.22.1 (#8612) 2022-03-07 22:54:34 -08:00
Cristian Calin
b4a61370c8 [cri-o] add cri-o 1.23.x (#8599) 2022-03-07 05:39:07 -08:00
kakkotetsu
58b2f39ce5 add IPv6 listen directive to nginx if enable_dual_stack_networks (#8596) 2022-03-07 05:39:00 -08:00
Tom Janson
56d882abed Clarify confirmation prompt (#8589)
Entering any value causes the play to proceed, e.g., entering "no<Enter>". (This is simply how Ansible's pause module behaves.)
2022-03-07 05:38:54 -08:00
Takuya Murakami
39acb2b84d Update ansible-lint to 5.4.0 (#8607) (#8608)
* Update ansible-lint to 5.4.0 (#8607)

It seems that Rich version 11.0.0 has a breaking change,
so we need to update ansible-lint to 5.3.2 or later.

* Fix for ansible-lint no-changed-when rule (#8607)
2022-03-07 05:35:55 -08:00
Branko Mijuskovic
3ccba08983 Fix crio_packages for Rocky8 (#8594) 2022-03-07 05:29:05 -08:00
Mohamed Zaian
632aa764e6 etcd: add etcd v3.5.1 for kubernetes 1.22+ (#8588)
* There is an issue with etcd v3.5.0 where it resurrects ancient members see: https://github.com/etcd-io/etcd/issues/13196
This issue is clearly fixed in etcd v3.5.2

* Just keep the checksums
2022-03-07 05:28:54 -08:00
Cristian Calin
f6342b6cf4 [crun] upgrade to 1.4.3 (#8598) 2022-03-04 08:22:52 -08:00
Cristian Calin
471585dcd5 [containerd]: upgrade versions to fix CVE-2022-23648 (#8597)
* [containerd] add hashes for 1.6.1

* [containerd] make 1.6.1 the default

* [containerd] add hashes for 1.5.10

* [containerd] add hashes for 1.4.13

* [nerdctl] bump to 0.17.1
2022-03-03 14:51:16 -08:00
Maciej Wereski
51821a811f MetalLB: update to v0.12.1 (#8593)
Signed-off-by: Maciej Wereski <m.wereski@partner.samsung.com>
2022-03-03 08:49:48 -08:00
Mathieu Parent
299a9ae7ba terraform/gcp: Add ingress_whitelist (#8590)
Also, do not create unneeded resources (target pools are charged and should
only be created when needed).
2022-03-02 16:52:46 -08:00
Cristian Calin
bf7a506f79 [containerd] Upgrade containerd to 1.6.0 and re-enable arm64 architecture with default options (#8555)
* [containerd] add checksums for 1.6.0

* [containerd] promote 1.6.0 as the new default

* [runc] promote 1.1.0 as the new default to allow arm deployments out of the box

* [nerdctl] bump to 0.17.0 to align with containerd 1.6.0

* [reset] allow crictl stopp and rmp commands to fail
2022-03-02 15:27:13 -08:00
Tom Janson
2e925f82ef Revert "Fix: typos in docs and comments (#7805)" (#8592)
This reverts commit 417180246c.
2022-03-02 11:57:13 -08:00
Tom Janson
ddef7e1139 missing "check_mode: no"s for several read-only tasks (#8584)
this is not complete -- there are almost certainly more instances of
this issue
2022-03-02 09:29:14 -08:00
cyril-corbon
672e47a7eb feat: check & uninstall container engine (#8439)
Signed-off-by: Cyril Corbon <corboncyril@gmail.com>
2022-02-28 10:59:46 -08:00
Tom Janson
3e8e64a3e5 fix typo / error regarding etcd and k8s_cluster groups (#8580)
As far as I can tell this is simply a typo that has existed from the beginning. Having it this way around (`etcd` group as a child and thus subset of `k8s_cluster`) mirrors what is written in the preceding sentence.
2022-02-28 02:54:58 -08:00
Mac Chaffee
b554246502 Fix host DNS config 1) being edited too soon and 2) not working with NM (#8575)
Signed-off-by: Mac Chaffee <me@macchaffee.com>
2022-02-26 10:29:23 -08:00
SOPHAL HONG
6d683c98a3 [Terraform-AWS] Replace CLB with NLB (#8578) 2022-02-24 23:53:54 -08:00
Nicolas Goudry
ee079f4740 fix(coredns): make sure to keep coredns repository namespace (#8572)
fix: regex

fix: wrong regex_replace usage
2022-02-24 01:01:33 -08:00
Cristian Calin
a090038d02 [CI] add ara to collect CI job logs (#8545) 2022-02-23 07:36:19 -08:00
Florian Ruynat
4f1499bd23 Fixup remaining etcd_kubeadm_enabled variables (#8576) 2022-02-23 06:46:18 -08:00
Alex
36393d77d3 Encrypting Secret Data at Rest (#8574)
* change default value for Encrypting Secret Data at Rest to secretbox, remove experimental flag and add documentation

* fix MD012/no-multiple-blanks
2022-02-23 03:04:18 -08:00
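With secretbox as the provider, the EncryptionConfiguration consumed by kube-apiserver has roughly this shape (the key material is generated, not the placeholder below):

  apiVersion: apiserver.config.k8s.io/v1
  kind: EncryptionConfiguration
  resources:
    - resources: ["secrets"]
      providers:
        - secretbox:
            keys:
              - name: key1
                secret: <base64-encoded 32-byte key>
        - identity: {}
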
Ilya Margolin
e053ee4272 Check all places with check_mode: no for side effects (#8573)
and fix the one with side effect.

Also removes `notify` from this task as the task has `changed_when: false`
and notify is not going to fire.
2022-02-23 01:20:18 -08:00
jayonlau
1d46c07307 Cleanup crictl configuration file (#8569) 2022-02-23 00:58:19 -08:00
Ilya Margolin
f9b5e448c1 Prevent removing etcd member when running in check mode (#8570) 2022-02-22 23:34:18 -08:00
kakkotetsu
3effb008c9 improve validation conditions for MetalLB BGP Peers (#8568) 2022-02-22 23:12:18 -08:00
cyril-corbon
a088f492f4 chore: remove addon-resizer (#8566)
Signed-off-by: Cyril Corbon <corboncyril@gmail.com>
2022-02-22 09:51:16 -08:00
Necatican Yıldırım
e9c8913248 Add kubeadm option to etcd_deployment_type to replace the etcd_kubeadm_enabled variable (#8317)
* Add kubeadm option to etcd_deployment_type to replace the etcd_kubeadm_enabled variable

Signed-off-by: necatican <necaticanyildirim@gmail.com>

* Add etcd kubeadm deployment documentation

Signed-off-by: necatican <necaticanyildirim@gmail.com>

* Refactor warning for the deprecated 'etcd_kubeadm_enabled' variable

Signed-off-by: necatican <necaticanyildirim@gmail.com>
2022-02-22 08:53:16 -08:00
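The replacement is a value of the existing deployment-type variable rather than a separate boolean:

  # deprecated:
  etcd_kubeadm_enabled: true
  # replacement:
  etcd_deployment_type: kubeadm   # other values include host and docker
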
Florian Ruynat
b9a27c91da Update kubernetes dashboard to 2.5.0 2022-02-21 03:54:11 -08:00
Florian Ruynat
d4f654275b Set default kubernetes version to 1.23.4 2022-02-21 03:54:11 -08:00
Florian Ruynat
f6eb4c749d Add kubernetes hashes for 1.23.4/1.22.7/1.21.10 2022-02-21 03:54:11 -08:00
cyril-corbon
418fc00718 fix: kube-dns service deletion (#8565)
Signed-off-by: Cyril Corbon <corboncyril@gmail.com>
2022-02-21 02:48:11 -08:00
Florian Ruynat
2537177929 Fix amazon docker version (#8564) 2022-02-18 23:50:11 -08:00
Sander Klein
9af719bf99 This fixes the etcd node removal. (#8526)
Since we are already on an etcd node while executing the commands, there 
is no need to find out an etcd IP because it is on localhost.
2022-02-18 07:20:23 -08:00
Vitaliy D
9e020b252e Configure Etcd container_manager explicitly (#8521)
* Configure Etcd container_manager explicitly

* Add explanation for the Etcd container_manager variable

* Remove redundant space in etcd vars
2022-02-18 00:50:23 -08:00
Kenichi Omichi
cc45e365ae Fix print_hostnames of inventory.py (#8554)
When trying to run print_hostnames of inventory.py, it outputs the following
error:

 $ CONFIG_FILE=./test-hosts.yaml python3 ./inventory.py print_hostnames
 Traceback (most recent call last):
   File "./inventory.py", line 472, in <module>
     sys.exit(main())
   File "./inventory.py", line 467, in main
     KubesprayInventory(argv, CONFIG_FILE)
   File "./inventory.py", line 92, in __init__
     self.parse_command(changed_hosts[0], changed_hosts[1:])
   File "./inventory.py", line 415, in parse_command
     self.print_hostnames()
   File "./inventory.py", line 455, in print_hostnames
     print(' '.join(self.yaml_config['all']['hosts'].keys()))
 KeyError: 'all'

because it failed to load a hosts config file before printing hostnames.
This fixes the issue.
2022-02-17 13:57:03 -08:00
Mac Chaffee
97c667f67c Fix etcd_events not getting upgraded in upgrade-cluster.yml (#8550)
Signed-off-by: Mac Chaffee <me@macchaffee.com>
2022-02-17 08:03:38 -08:00
Cristian Calin
063fc525b1 nerdctl: upgrade to 0.16.1 (#8539) 2022-02-16 02:04:37 -08:00
Mac Chaffee
0f73d87509 Allow pausing after upgrade but before uncordon (#8530)
* Allow pausing after upgrade but before uncordon

* Expand docs for upgrade pausing vars

Signed-off-by: Mac Chaffee <me@macchaffee.com>
2022-02-15 16:39:02 -08:00
Cristian Calin
402e85ad6e [calico] upgrade release checksums (#8544)
* [calico] upgrade 3.19.x to 3.19.4

* [calico] upgrade 3.20.x to 3.20.4

* [calico] upgrade 3.21.x to 3.21.4 and make it the default

* [calico] add 3.22.0 checksums

* [calico] account for path changes in calico 3.21.4 crd archive and above
2022-02-15 16:35:02 -08:00
Tony Fouchard
1d635e04e4 Allow to specify a source address for metallb peerings, and target only some nodes using node selectors (#8534) 2022-02-15 13:57:19 -08:00
kakkotetsu
98d5d0cdd5 add support for Dual Stack node InternalIP (#8542) 2022-02-15 00:28:02 -08:00
Mathieu Parent
31d4a38f09 terraform/gcp: Allow to change extra disk types (#8524) 2022-02-15 00:22:02 -08:00
kakkotetsu
1ebe456f2d add support for Calico IP6_AUTODETECTION_METHOD (#8541) 2022-02-14 17:26:14 -08:00
Cristian Calin
c6e5314fab implement download mirrors support (#8474)
* [download] add mechanism to support mirrors

* [calico] support alternate download url
2022-02-14 13:19:32 -08:00
SOPHAL HONG
a6a79883b7 Fix: Error when creating subnets more than AZ (#8516) 2022-02-14 13:12:30 -08:00
Takuya Murakami
b02e68222f feat(offline): Improve generate_list.sh to generate offline file list using ansible (#8537) (#8538)
Use jinja2 template and ansible to expand variables.
2022-02-13 23:19:28 -08:00
Takuya Murakami
da8522af64 docs: Update offline-environment.md for containerd (#8520) (#8523)
* Add containerd/runc/nerdctl download url
* Add insecure registries configuration for containerd
2022-02-09 08:08:18 -08:00
Tom Stian Berget
84b93090a8 Change Cilium setting identity_allocation_mode to cilium_identity_allocation_mode (#8519)
* Change Cilium identity_allocation_mode to cilium_identity_allocation_mode

* Change inventory sample
2022-02-08 14:04:35 -08:00
Byeonggon Lee
5695c892d0 Fix wrong port name in metallb.yml.j2 (#8510) 2022-02-07 09:43:45 -08:00
DenisKa
696101a910 Fixed mitogen.yml (#8508)
Fixed the problem when calling ansible-playbook contrib/mitogen/mitogen.yml:
"The error was: 'dict object' has no attribute 'section'"

2022-02-07 01:39:43 -08:00
Sander Klein
54dfe73d24 Add bastion support to remove-node.yml (#8504)
Somehow bastion support for remove-node.yml was missing.

This commit adds it.
2022-02-04 23:50:50 -08:00
Krystian Młynek
87928baa31 CRI-O: fix unqualified-search registries (#8496) 2022-02-04 23:46:50 -08:00
mgiessing
6a4fd33a03 Added ppc64le support (#8505)
* Added ppc64le support

* Fixed linting errors
2022-02-04 00:14:00 -08:00
cyril-corbon
790448f48b feat: update cert-manager to 1.7.0 (#8491)
Signed-off-by: Cyril Corbon <corboncyril@gmail.com>
2022-02-03 17:24:00 -08:00
Cristian Calin
7759494c85 [terraform][openstack] allow disabling port_security at port level (#8455)
Use openstack_networking_port_v2 and openstack_networking_floatingip_associate_v2
to attach floating ips. This gives us more flexibility on disabling port security
when binding instances directly on provider networks in private cloud scenario.
2022-02-02 08:50:22 -08:00
Ilya Margolin
aed187e56c Fix kubelet_kubelet_cgroups_cgroupfs (#8500)
If kubelet is run with systemd (as it always is when using kubespray),
it starts in systemd's /system.slice/kubelet.service cgroup.

This commit prevents a creation and usage of a second unrelated cgroup.
2022-02-02 00:50:22 -08:00
Julio H Morimoto
eac799f589 Amend documentation for docker to containerd migration (#8477)
* Amend PR https://github.com/kubernetes-sigs/kubespray/pull/8471 with missing inventory configuration.

Signed-off-by: Julio Morimoto <julio@morimoto.net.br>

* Amend PR https://github.com/kubernetes-sigs/kubespray/pull/8471 with missing inventory configuration.

Signed-off-by: Julio Morimoto <julio@morimoto.net.br>
2022-02-02 00:46:22 -08:00
Cristian Calin
5ecb07b59a [nerdctl] upgrade to 0.16.0 (#8484)
* [nerdctl] upgrade nerdctl to 0.16.0

* [nerdctl] add configuration file
2022-02-01 15:11:48 -08:00
Cristian Calin
ff621fb7f1 [ingress-nginx] upgrade to 1.1.1 (#8490) 2022-02-01 09:50:11 -08:00
Mathieu Parent
958bca8800 terraform/gcp: Do not create unused subnetworks and Upgrade to latest google provider (#8497)
* terraform/gcp: Do not create unused subnetworks

By default terraform creates a subnetwork in each of the 39 regions

* terraform/gcp: Upgrade to latest google provider

... where "one of source_tags, source_ranges, or source_service_accounts must be defined"
2022-02-01 09:14:11 -08:00
Michael Schmitz
eacd55fbca Use sysctl_file_path variable for all sysctl_file locations (#8395)
* Use sysctl_file_path variable for all sysctl_file locations

* Add sysctl_file_path variable to kubespay-defaults

* Remove previously used sysctl file locations if present

* Use explicit filename in roles/kubernetes/node/defaults/main.yml

* Defaults: use explicit value
2022-02-01 08:12:10 -08:00
Cristian Calin
0e2ab5c273 [misc] add cristicalin to approvers list (#8494) 2022-02-01 08:08:11 -08:00
Cristian Calin
c47634290e [helm] upgrade to 3.8.0 (#8489) 2022-02-01 06:34:12 -08:00
Tristan
92d612c3e0 8487: Allow override of default CoreDNS zone cache (#8488)
Using the coredns_cluster_zone_cache_block variable
2022-02-01 00:48:18 -08:00
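A sketch of overriding the default zone cache with the variable named in the commit; the stanza follows normal Corefile cache-plugin syntax:

  coredns_cluster_zone_cache_block: |
    cache 30
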
Ilya Margolin
2bbe5732b7 Add node label to etcd metrics (#8475)
targetRef on endpoints surfaces as
__meta_kubernetes_endpoint_address_target_kind/__meta_kubernetes_endpoint_address_target_name
in prometheus and gets converted to the label `node` by
prometheus-operator
2022-01-31 06:08:23 -08:00
Samuel Liu
e6e7fbc25f fix reset containerd_storage_dir undefined (#8478)
* fix reset containerd_storage_dir

* add env to kubespray-defaults
2022-01-31 05:46:23 -08:00
Ilya Margolin
7d4d554436 Document host_resolvconf as default value for resolvconf_mode (#8493)
refs #8247
2022-01-31 03:12:24 -08:00
cyril-corbon
d31db847b7 feat: update local path to v0.0.21 (#8492) 2022-01-31 01:08:24 -08:00
Mathieu Parent
3562d3378b terraform/gcp: Allow to use preemptible VM instances (#8480) 2022-01-31 00:30:24 -08:00
Calin Cristian Andrei
ababcd5481 [kube] make 1.23.3 the new default 2022-01-31 00:22:24 -08:00
Calin Cristian Andrei
7caffde0b6 [kube] add 1.23.3 hashes 2022-01-31 00:22:24 -08:00
Cristian Calin
c40b43de01 [mitogent] update to 0.3.2 (#8470) 2022-01-27 08:36:59 -08:00
Julio H Morimoto
b0eb5650da Provide initial guidelines for a container engine migration (docker-2-containerd), with special emphasis on the fact that the procedure is still not officially supported. (#8471)
Follow up from https://github.com/kubernetes-sigs/kubespray/issues/8431.

Signed-off-by: Julio Morimoto <julio@morimoto.net.br>
2022-01-27 01:40:10 -08:00
华忠啊
52f221f976 Adaptive Kube-ovn (#8454) 2022-01-27 01:08:10 -08:00
Cristian Calin
26a5948d2a [reset] remove containerd storage during reset (#8469) 2022-01-26 05:10:01 -08:00
ceesios
d86a3b962c Proposing fixes for contrib/terraform/vsphere/ #8436 (#8441)
* fixes issues in vSphere Terraform contrib. #8436

* fix formatting

* add variables to the main module and document changes

* add missing newline
2022-01-25 05:24:30 -08:00
Mathieu Parent
d64b341b38 Update terraform GCP to Ubuntu 20.04 (latest LTS) (#8463)
* Fix terraform Warning

Version constraints inside provider configuration blocks are deprecated

Terraform 0.13 and earlier allowed provider version constraints inside the
provider configuration block, but that is now deprecated and will be removed
in a future version of Terraform. To silence this warning, move the provider
version constraint into the required_providers block.

* Fix terraform Warning: Quoted references are deprecated

* terraform: Update GCP Ubuntu to latest LTS
2022-01-25 01:22:30 -08:00
Florian Ruynat
d580014c66 Fix CI for Fedora (followup) + OpenSUSE Leap (update to 15.3) (#8407)
* Fix fedora jobs - followup

* Update OpenSUSE Leap to 15.3

* Fix cilium version in README + update minor 1.11.1
2022-01-24 23:24:30 -08:00
Calin Cristian Andrei
be9a1f80c1 [kube] make 1.23.2 the default version 2022-01-24 11:59:33 -08:00
Calin Cristian Andrei
73ff3b0d3b [kubernetes] add hashes for 1.23.2, 1.22.6 and 1.21.9 2022-01-24 11:59:33 -08:00
cyril-corbon
9fce9ca42a feat: upgrade azuredisk csi to v1.10.0 (#8432)
Signed-off-by: Cyril Corbon <corboncyril@gmail.com>
2022-01-24 00:41:56 -08:00
Cristian Calin
f1adb734e3 [cri-tools] add hashes for 1.23.0 (#8442) 2022-01-24 00:21:56 -08:00
cyril-corbon
575e0ca457 feat: add eviction hard to kubelet config (#8421)
Signed-off-by: Cyril Corbon <corboncyril@gmail.com>
2022-01-24 00:13:57 -08:00
Alex
69f088bb82 add hash-values for runc v1.1.0 - first upstream runc version for multi-arch (#8447) 2022-01-23 23:51:57 -08:00
Cristian Calin
ef34f5fe7d [calico] switch default iptables backend detection to Auto (#8429) 2022-01-23 23:47:57 -08:00
Victor Morales
e88aa7c96b Add youki runtime support (#8411) 2022-01-21 14:01:07 -08:00
Johann Schley
38d129a0b6 add external hcloud cloud controller manager (#8440) 2022-01-20 12:31:09 -08:00
onock
392815d97c [cert-manager] Fix missing RBAC rules for ClusterRole cert-manager-cainjector kubernetes-sigs#8104. (#8444) 2022-01-20 12:17:09 -08:00
Pav K
6e2e61012a Docs - Removed incorrect info on calico_rr. (#8437) 2022-01-17 02:55:30 -08:00
rtsp
e791089466 cert-manager: Fix incorrect leader election namespace lead to insufficient permission (#8433) 2022-01-17 02:37:29 -08:00
Cristian Calin
418f12f62a [calico] drop 3.18.x and make 3.21.x the new default (#8426) 2022-01-17 02:29:29 -08:00
Necatican Yıldırım
caff539ccd Add identity_allocation_mode support for Cilium (#8430)
Co-authored-by: Emin Aktaş <eminaktas34@gmail.com>
Co-authored-by: Yasin Taha Erol <yasintahaerol@gmail.com>
Signed-off-by: necatican <necaticanyildirim@gmail.com>

Co-authored-by: Emin Aktaş <eminaktas34@gmail.com>
Co-authored-by: Yasin Taha Erol <yasintahaerol@gmail.com>
2022-01-16 09:29:28 -08:00
Kenichi Omichi
c0d1bb1a5c Remove subnet from router on tf-elastx_cleanup (#8425)
The tf-elastx_cleanup test job failed with the error message:

Port xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx cannot be deleted
directly via the port API: has device owner network:router_interface.

That means it is necessary to remove a subnet from the router before
deleting the port.
This adds a method to remove a subnet from the router automatically.
2022-01-15 00:50:15 -08:00
Cristian Calin
ea44d64511 [contrib] terraform openstack: allow disabling port security (#8410) 2022-01-14 12:58:32 -08:00
Samuel Liu
1a69f8c3ad parameterized snapshot controller namespaces (#8305)
* Parameterized snapshot controller namespaces

* add ns yml

* add docs

* namespace
2022-01-14 12:58:26 -08:00
rtsp
ccd3180a69 cert-manager: Allow to change leader election namespace for GKE Autopilot support (#8424)
More information:

- kubernetes-sigs/kubespray#8393
- jetstack/cert-manager#4102
- jetstack/cert-manager#3717
2022-01-14 12:54:26 -08:00
cyril-corbon
01dcbc18ac feat: upgrade metallb to v0.11.0 (#8420)
Signed-off-by: Cyril Corbon <corboncyril@gmail.com>
2022-01-14 05:22:28 -08:00
Florian Ruynat
7c67ec4976 Fix kubectl call before installing it (#8412) 2022-01-12 23:12:29 -08:00
Mathieu Parent
43d128362f Document image_command_tool and image_command_tool_on_localhost (#8409)
Signed-off-by: Mathieu Parent <mathieu.parent@insee.fr>
2022-01-11 15:35:24 -08:00
Cristian Calin
1337c9c244 [csi-snapshotter] upgrade to 5.0 (#8403) 2022-01-11 09:14:33 -08:00
cyril-corbon
86953b2ac4 fix: add tolerations / affinity to cert-manager (#8389)
Signed-off-by: Cyril Corbon <corboncyril@gmail.com>
2022-01-11 09:14:26 -08:00
moss2k13
135c9b29a7 contrib: add cloud-init support for terraform vms (#8394)
* contrib: add cloud-init support for terraform vms

This change enables instance customization via cloud-init,
for example: additional CA certs, custom SSH access etc.

* contrib: update docs for terraform cloud-init

* contrib: disable yamllint in cloud-init

require-starting-space rule breaks cloud-init header

* contrib: documenation formatting

* yamllint: disable comments related checks

* docs: markdown formatting
2022-01-11 05:23:16 -08:00
Tovin Seven
e0d67367ed Update installation doc with vagrant (#8406) 2022-01-11 05:19:17 -08:00
Florian Ruynat
d007132655 Fix Fedora CI following ipset version in kube-proxy for k8s 1.23 (#8397) 2022-01-11 05:01:17 -08:00
Mathieu Parent
cfd9873bbc Allow to choose container manager commands (#8380)
This allows working around #8375 by using image_command_tool=crictl
when containerd_registries is used for containerd.

Also changes image_info_command_on_localhost for docker to return digests.
2022-01-11 01:13:16 -08:00
Samuel Liu
b2b95cc8f9 fix 0090-etchosts (#7634) 2022-01-11 01:03:16 -08:00
Kenichi Omichi
73c889eb10 Fix failures of ansible-lint (#8401)
This fixes the following types of failures:
- empty-string-compare
- literal-compare
- risky-file-permissions
- risky-shell-pipe
- var-spacing

In addition, this changes .gitlab-ci/lint.yml to block the same issue
by using the same method at Kubespray CI.
2022-01-11 00:45:16 -08:00
Victor Morales
642725efe7 Bump containerd version to 1.5.9 (#8402) 2022-01-11 00:05:16 -08:00
Cristian Calin
29aafff2ce etcd: add 3.5.1 for kubernetes 1.23+ (#8320) 2022-01-10 22:45:15 -08:00
forselli-stratio
df425ac143 Fix etcd certificates reference to support etcd_kubeadm_enabled:true (#7766)
* Fix etcd certificates reference to support etcd_kubeadm_enabled:true

* Add retries to ETCD Join Member task

* Fix etcd certificates reference when etcd_kubeadm_enabled:true

* Fix conflicts
2022-01-10 15:24:25 -08:00
Unai Arríen
57a1d18db3 Improve first_kube_control_plane variable management to avoid installation failures due to variable overlapping (#8388) 2022-01-10 01:35:19 -08:00
rtsp
aa4a3d7afd Fix container engine still installed on dedicated etcd node even if etcd_deployment_type: host (#8386) 2022-01-10 01:35:12 -08:00
Alex
06ad5525b8 replace runc 1.0.3 arm64 hash with 0 (#8391) 2022-01-10 01:31:13 -08:00
Kenichi Omichi
f80fd24a55 Fix risky-file-permissions (#8370)
When running ansible-lint directly, we can see a lot of warning
message like

  risky-file-permissions File permissions unset or incorrect

This fixes the warning messages.
2022-01-09 01:51:12 -08:00
Kenichi Omichi
51bd9bee0d Move containerd_version to defaults/main.yml (#8379)
All container image versions were defined in download/defaults/main.yml
except containerd.
The inconsistency meant the offline script (generate_list.sh) could not
output the URL of the containerd image.
This moves the definition into a valid file.
In addition, this adds host_os to generate_list.sh for downloading
krew from a valid URL.
2022-01-09 01:47:12 -08:00
Victor Morales
52266406f8 Bump cert-manager version to v1.6.1 (#8377) 2022-01-07 16:45:34 -08:00
cyril-corbon
cd601c77c7 feat: upgrade metrics server to v0.5.2 (#8338)
Signed-off-by: Cyril Corbon <corboncyril@gmail.com>
2022-01-07 08:18:33 -08:00
Florian Ruynat
6abae713f7 Update helm / kube-router and coredns (#8382)
* Update kube-router to 1.4.0

* Update Helm to 3.7.2

* Up coredns to 1.8.6 when k8s is 1.23.x
2022-01-06 12:14:27 -08:00
Alex
1312f92a8d adding 0 checksum for kata_containers_version on arm(64) (#8383) 2022-01-06 12:08:27 -08:00
Unai Arríen
92abf26d29 Ensure taint configuration for secondary control-plane nodes (#8363) 2022-01-05 23:56:28 -08:00
Mathieu Parent
c11e4ba9a7 Add missing example offline nerdctl_download_url (#8373) 2022-01-05 10:23:48 -08:00
Mathieu Parent
7ae00947f5 Avoid yanked ruamel.yaml.clib version (#8372)
See https://pypi.org/project/ruamel.yaml.clib/#history

Signed-off-by: Mathieu Parent <math.parent@gmail.com>
2022-01-05 08:06:41 -08:00
Bart Sloeserwij
59f62473c9 Update configuration of registries in cri-o (#7852)
* Update configuration of registries in cri-o

* Update docs to match new registry configuration
2022-01-05 07:36:40 -08:00
Unai Arríen
8fbd08d027 Fix DNS configuration when using resolvconf_mode='host_resolvconf' during scale (#23) (#8361) 2022-01-05 03:06:33 -08:00
Choi Yongbeom
dda557ed23 Update config.toml.j2 (#8340)
* Update config.toml.j2

I think this commit's code does not work completely.

Example registry address: a.com:5000

The insecure registry must be configured as http://a.com:5000,

but this code adds the insecure registry as a.com:5000 (without http://).

If there is no http, containerd accesses it with https even if insecure_skip_verify = true.

The solution is a code edit.

* Update config.toml.j2

* Update containerd.yml

* Update containerd.yml

* Update containerd.yml

* Update config.toml.j2
2022-01-05 02:56:33 -08:00
Max Gautier
cb54eb40ce Use a variable for standardizing kubectl invocation (#8329)
* Add kubectl variable

* Replace kubectl usage by kubectl variable in roles

* Remove redundant --kubeconfig on kubectl usage

* Replace unecessary shell usage with command
2022-01-05 02:26:32 -08:00
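A sketch of the pattern: define the invocation once and reuse it in tasks. The exact definition in kubespray-defaults may differ:

  kubectl: "{{ bin_dir }}/kubectl --kubeconfig /etc/kubernetes/admin.conf"

  - name: Example task using the shared kubectl variable
    command: "{{ kubectl }} get nodes"
    changed_when: false
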
Cristian Calin
3eab1129b9 CI: Replace CentOS 8 with AlmaLinux 8 before CentOS 8 EOL end of 2021 (#8297) 2022-01-05 02:20:33 -08:00
Choi Yongbeom
24f1402a14 nerdctl insecure registry config (#8339)
* Update prep_download.yml

nerdctl insecure registry config

* Update prep_download.yml

* Update prep_download.yml

apply conversations advice

* Update prep_download.yml

* Update prep_download.yml

* Update prep_download.yml

* Update prep_download.yml

* Update prep_download.yml

* Update prep_download.yml

* Update main.yml

* Update main.yml

* Update prep_download.yml

* Update prep_download.yml
2022-01-05 01:14:33 -08:00
Necatican Yıldırım
bf00550388 Upgrade Cilium to 1.11.0 (#8354)
* Remove kvstore args from Cilium DaemonSet

Co-authored-by: Emin Aktaş <eminaktas34@gmail.com>
Co-authored-by: Yasin Taha Erol <yasintahaerol@gmail.com>
Signed-off-by: necatican <necaticanyildirim@gmail.com>

* Bump Cilium to 1.11.0

Co-authored-by: Emin Aktaş <eminaktas34@gmail.com>
Co-authored-by: Yasin Taha Erol <yasintahaerol@gmail.com>
Signed-off-by: necatican <necaticanyildirim@gmail.com>

Co-authored-by: Emin Aktaş <eminaktas34@gmail.com>
Co-authored-by: Yasin Taha Erol <yasintahaerol@gmail.com>
2022-01-05 00:36:32 -08:00
Kenichi Omichi
78c83a8f26 Update containerd doc (#8369)
This is a follow-up change for https://github.com/kubernetes-sigs/kubespray/pull/7911
2022-01-05 00:32:33 -08:00
Nguyễn Trung
e72f8e0412 Update node about container_manager variable (#7911)
I deployed my cluster with a separate etcd cluster that does not intersect with kube_control_plane or kube_node, and I wanted to run the etcd cluster in docker while still using containerd as the container runtime on all the other nodes. Therefore, I added a note about this to the doc for everyone.

Thanks!
2022-01-04 14:29:20 -08:00
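For the layout described above, the split is expressed with two inventory variables; a hedged group_vars sketch (the values mirror the note and are not quoted from the doc itself; see the containerd documentation for the supported combinations):

```yaml
# Hypothetical group_vars excerpt
container_manager: containerd    # runtime on kube_control_plane and kube_node hosts
etcd_deployment_type: docker     # run etcd in docker on the dedicated etcd hosts
```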
Florian Ruynat
6136fa7c49 Update Kubernetes version to 1.23.1 2022-01-04 10:25:00 -08:00
Florian Ruynat
8d2b4ed4a9 Move min k8s version to 1.21 2022-01-04 10:25:00 -08:00
Florian Ruynat
9e9b177674 Update kubespray_version following release 2022-01-04 10:25:00 -08:00
Cristian Calin
4c4c83f0a1 crun update to 1.4 (#8330)
* [crun] update crun to 1.4

* [crun] drop pre-1.x versions
2022-01-04 08:30:53 -08:00
Unai Arríen
0e98814732 Configure PriorityClassName for MetalLB deployment (#8362) 2022-01-04 08:20:52 -08:00
502 changed files with 8913 additions and 14369 deletions

1
.gitignore vendored
View File

@@ -102,7 +102,6 @@ ENV/
# molecule
roles/**/molecule/**/__pycache__/
roles/**/molecule/**/*.conf
# macOS
.DS_Store

View File

@@ -8,7 +8,7 @@ stages:
- deploy-special
variables:
KUBESPRAY_VERSION: v2.17.1
KUBESPRAY_VERSION: v2.18.1
FAILFASTCI_NAMESPACE: 'kargo-ci'
GITLAB_REPOSITORY: 'kargo-ci/kubernetes-sigs-kubespray'
ANSIBLE_FORCE_COLOR: "true"
@@ -27,6 +27,7 @@ variables:
ANSIBLE_INVENTORY: ./inventory/sample/${CI_JOB_NAME}-${BUILD_NUMBER}.ini
IDEMPOT_CHECK: "false"
RESET_CHECK: "false"
REMOVE_NODE_CHECK: "false"
UPGRADE_TEST: "false"
MITOGEN_ENABLE: "false"
ANSIBLE_LOG_LEVEL: "-vv"
@@ -80,3 +81,4 @@ include:
- .gitlab-ci/terraform.yml
- .gitlab-ci/packet.yml
- .gitlab-ci/vagrant.yml
- .gitlab-ci/molecule.yml

View File

@@ -23,9 +23,8 @@ ansible-lint:
extends: .job
stage: unit-tests
tags: [light]
# lint every yml/yaml file that looks like it contains Ansible plays
script: |-
grep -Rl '^- hosts: \|^ hosts: ' --include \*.yml --include \*.yaml . | xargs -P 4 -n 25 ansible-lint -v
script:
- ansible-lint -v
except: ['triggers', 'master']
syntax-check:
@@ -53,7 +52,7 @@ tox-inventory-builder:
- ./tests/scripts/rebase.sh
- apt-get update && apt-get install -y python3-pip
- update-alternatives --install /usr/bin/python python /usr/bin/python3 10
- python -m pip uninstall -y ansible
- python -m pip uninstall -y ansible ansible-base ansible-core
- python -m pip install -r tests/requirements.txt
script:
- pip3 install tox

93
.gitlab-ci/molecule.yml Normal file
View File

@@ -0,0 +1,93 @@
---
.molecule:
tags: [c3.small.x86]
only: [/^pr-.*$/]
except: ['triggers']
image: quay.io/kubespray/vagrant:$KUBESPRAY_VERSION
services: []
stage: deploy-part1
before_script:
- tests/scripts/rebase.sh
- apt-get update && apt-get install -y python3-pip
- update-alternatives --install /usr/bin/python python /usr/bin/python3 10
- python -m pip uninstall -y ansible ansible-base ansible-core
- python -m pip install -r tests/requirements.txt
- ./tests/scripts/vagrant_clean.sh
script:
- ./tests/scripts/molecule_run.sh
after_script:
- chronic ./tests/scripts/molecule_logs.sh
artifacts:
when: always
paths:
- molecule_logs/
# CI template for periodic CI jobs
# Enabled when PERIODIC_CI_ENABLED var is set
.molecule_periodic:
only:
variables:
- $PERIODIC_CI_ENABLED
allow_failure: true
extends: .molecule
molecule_full:
extends: .molecule_periodic
molecule_no_container_engines:
extends: .molecule
script:
- ./tests/scripts/molecule_run.sh -e container-engine
when: on_success
molecule_docker:
extends: .molecule
script:
- ./tests/scripts/molecule_run.sh -i container-engine/docker
when: on_success
molecule_containerd:
extends: .molecule
script:
- ./tests/scripts/molecule_run.sh -i container-engine/containerd
when: on_success
molecule_cri-o:
extends: .molecule
stage: deploy-part2
script:
- ./tests/scripts/molecule_run.sh -i container-engine/cri-o
when: on_success
molecule_cri-dockerd:
extends: .molecule
stage: deploy-part2
script:
- ./tests/scripts/molecule_run.sh -i container-engine/cri-dockerd
when: on_success
# Stage 3 container engines don't get as much attention so allow them to fail
molecule_kata:
extends: .molecule
stage: deploy-part3
allow_failure: true
script:
- ./tests/scripts/molecule_run.sh -i container-engine/kata-containers
when: on_success
molecule_gvisor:
extends: .molecule
stage: deploy-part3
allow_failure: true
script:
- ./tests/scripts/molecule_run.sh -i container-engine/gvisor
when: on_success
molecule_youki:
extends: .molecule
stage: deploy-part3
allow_failure: true
script:
- ./tests/scripts/molecule_run.sh -i container-engine/youki
when: on_success

View File

@@ -31,18 +31,26 @@ packet_ubuntu20-calico-aio:
variables:
RESET_CHECK: "true"
# Exericse ansible variants
# Exericse ansible variants during the nightly jobs
packet_ubuntu20-calico-aio-ansible-2_9:
stage: deploy-part1
extends: .packet_pr
extends: .packet_periodic
when: on_success
variables:
ANSIBLE_MAJOR_VERSION: "2.9"
RESET_CHECK: "true"
packet_ubuntu20-calico-aio-ansible-2_10:
stage: deploy-part1
extends: .packet_periodic
when: on_success
variables:
ANSIBLE_MAJOR_VERSION: "2.10"
RESET_CHECK: "true"
packet_ubuntu20-calico-aio-ansible-2_11:
stage: deploy-part1
extends: .packet_pr
extends: .packet_periodic
when: on_success
variables:
ANSIBLE_MAJOR_VERSION: "2.11"
@@ -70,7 +78,7 @@ packet_centos7-flannel-addons-ha:
stage: deploy-part2
when: on_success
packet_centos8-crio:
packet_almalinux8-crio:
extends: .packet_pr
stage: deploy-part2
when: on_success
@@ -100,16 +108,6 @@ packet_ubuntu16-flannel-ha:
extends: .packet_pr
when: manual
packet_ubuntu16-kube-router-sep:
stage: deploy-part2
extends: .packet_pr
when: manual
packet_ubuntu16-kube-router-svc-proxy:
stage: deploy-part2
extends: .packet_pr
when: manual
packet_debian10-cilium-svc-proxy:
stage: deploy-part2
extends: .packet_periodic
@@ -145,17 +143,17 @@ packet_centos7-calico-ha-once-localhost:
services:
- docker:19.03.9-dind
packet_centos8-kube-ovn:
packet_almalinux8-kube-ovn:
stage: deploy-part2
extends: .packet_periodic
when: on_success
packet_centos8-calico:
packet_almalinux8-calico:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_centos8-docker:
packet_almalinux8-docker:
stage: deploy-part2
extends: .packet_pr
when: on_success
@@ -165,11 +163,6 @@ packet_fedora34-docker-weave:
extends: .packet_pr
when: on_success
packet_fedora35-kube-router:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_opensuse-canal:
stage: deploy-part2
extends: .packet_periodic
@@ -203,7 +196,7 @@ packet_ubuntu18-flannel-ha-once:
when: manual
# Calico HA eBPF
packet_centos8-calico-ha-ebpf:
packet_almalinux8-calico-ha-ebpf:
stage: deploy-part2
extends: .packet_pr
when: manual
@@ -218,11 +211,6 @@ packet_centos7-calico-ha:
extends: .packet_pr
when: manual
packet_centos7-kube-router:
stage: deploy-part2
extends: .packet_pr
when: manual
packet_centos7-multus-calico:
stage: deploy-part2
extends: .packet_pr
@@ -255,7 +243,7 @@ packet_amazon-linux-2-aio:
extends: .packet_pr
when: manual
packet_centos8-calico-nodelocaldns-secondary:
packet_almalinux8-calico-nodelocaldns-secondary:
stage: deploy-part2
extends: .packet_pr
when: manual
@@ -275,6 +263,13 @@ packet_centos7-docker-weave-upgrade-ha:
variables:
UPGRADE_TEST: basic
packet_ubuntu20-calico-etcd-kubeadm-upgrade-ha:
stage: deploy-part3
extends: .packet_periodic
when: on_success
variables:
UPGRADE_TEST: basic
# Calico HA Wireguard
packet_ubuntu20-calico-ha-wireguard:
stage: deploy-part2
@@ -288,6 +283,14 @@ packet_debian10-calico-upgrade:
variables:
UPGRADE_TEST: graceful
packet_almalinux8-calico-remove-node:
stage: deploy-part3
extends: .packet_pr
when: on_success
variables:
REMOVE_NODE_CHECK: "true"
REMOVE_NODE_NAME: "instance-3"
packet_debian10-calico-upgrade-once:
stage: deploy-part3
extends: .packet_periodic

View File

@@ -60,11 +60,11 @@ tf-validate-openstack:
PROVIDER: openstack
CLUSTER: $CI_COMMIT_REF_NAME
tf-validate-packet:
tf-validate-metal:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_VERSION
PROVIDER: packet
PROVIDER: metal
CLUSTER: $CI_COMMIT_REF_NAME
tf-validate-aws:

View File

@@ -1,28 +1,5 @@
---
molecule_tests:
tags: [c3.small.x86]
only: [/^pr-.*$/]
except: ['triggers']
image: quay.io/kubespray/vagrant:$KUBESPRAY_VERSION
services: []
stage: deploy-part1
before_script:
- tests/scripts/rebase.sh
- apt-get update && apt-get install -y python3-pip
- update-alternatives --install /usr/bin/python python /usr/bin/python3 10
- python -m pip uninstall -y ansible
- python -m pip install -r tests/requirements.txt
- ./tests/scripts/vagrant_clean.sh
script:
- ./tests/scripts/molecule_run.sh
after_script:
- chronic ./tests/scripts/molecule_logs.sh
artifacts:
when: always
paths:
- molecule_logs/
.vagrant:
extends: .testcases
variables:
@@ -38,7 +15,7 @@ molecule_tests:
before_script:
- apt-get update && apt-get install -y python3-pip
- update-alternatives --install /usr/bin/python python /usr/bin/python3 10
- python -m pip uninstall -y ansible
- python -m pip uninstall -y ansible ansible-base ansible-core
- python -m pip install -r tests/requirements.txt
- ./tests/scripts/vagrant_clean.sh
script:
@@ -66,3 +43,24 @@ vagrant_ubuntu20-flannel:
stage: deploy-part2
extends: .vagrant
when: on_success
vagrant_ubuntu16-kube-router-sep:
stage: deploy-part2
extends: .vagrant
when: manual
# Service proxy test fails connectivity testing
vagrant_ubuntu16-kube-router-svc-proxy:
stage: deploy-part2
extends: .vagrant
when: manual
vagrant_fedora35-kube-router:
stage: deploy-part2
extends: .vagrant
when: on_success
vagrant_centos7-kube-router:
stage: deploy-part2
extends: .vagrant
when: manual

View File

@@ -1,5 +1,9 @@
# Use imutable image tags rather than mutable tags (like ubuntu:18.04)
FROM ubuntu:bionic-20200807
# Use imutable image tags rather than mutable tags (like ubuntu:20.04)
FROM ubuntu:focal-20220316
ARG ARCH=amd64
ARG TZ=Etc/UTC
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt update -y \
&& apt install -y \
@@ -8,7 +12,7 @@ RUN apt update -y \
&& rm -rf /var/lib/apt/lists/*
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - \
&& add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
"deb [arch=$ARCH] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable" \
&& apt update -y && apt-get install --no-install-recommends -y docker-ce \
@@ -28,6 +32,6 @@ RUN /usr/bin/python3 -m pip install --no-cache-dir pip -U \
&& update-alternatives --install /usr/bin/python python /usr/bin/python3 1
RUN KUBE_VERSION=$(sed -n 's/^kube_version: //p' roles/kubespray-defaults/defaults/main.yaml) \
&& curl -LO https://storage.googleapis.com/kubernetes-release/release/$KUBE_VERSION/bin/linux/amd64/kubectl \
&& curl -LO https://storage.googleapis.com/kubernetes-release/release/$KUBE_VERSION/bin/linux/$ARCH/kubectl \
&& chmod a+x kubectl \
&& mv kubectl /usr/local/bin/kubectl

View File

@@ -4,10 +4,10 @@ aliases:
- chadswen
- mirwan
- miouge1
- woopstar
- luckysb
- floryut
- oomichi
- cristicalin
kubespray-reviewers:
- holmsten
- bozzo
@@ -15,7 +15,9 @@ aliases:
- oomichi
- jayonlau
- cristicalin
- liupeng0518
kubespray-emeritus_approvers:
- riverzhang
- atoms
- ant31
- woopstar

View File

@@ -19,10 +19,10 @@ To deploy the cluster you can use :
#### Usage
```ShellSession
# Install dependencies from ``requirements.txt``
sudo pip3 install -r requirements.txt
Install Ansible according to [Ansible installation guide](/docs/ansible.md#installing-ansible)
then run the following steps:
```ShellSession
# Copy ``inventory/sample`` as ``inventory/mycluster``
cp -rfp inventory/sample inventory/mycluster
@@ -57,10 +57,10 @@ A simple way to ensure you get all the correct version of Ansible is to use the
You will then need to use [bind mounts](https://docs.docker.com/storage/bind-mounts/) to get the inventory and ssh key into the container, like this:
```ShellSession
docker pull quay.io/kubespray/kubespray:v2.17.1
docker pull quay.io/kubespray/kubespray:v2.18.1
docker run --rm -it --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \
--mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
quay.io/kubespray/kubespray:v2.17.1 bash
quay.io/kubespray/kubespray:v2.18.1 bash
# Inside the container you may now run the kubespray playbooks:
ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa cluster.yml
```
@@ -75,10 +75,11 @@ python -V && pip -V
```
If this returns the version of the software, you're good to go. If not, download and install Python from here <https://www.python.org/downloads/source/>
Install the necessary requirements
Install Ansible according to [Ansible installation guide](/docs/ansible.md#installing-ansible)
then run the following step:
```ShellSession
sudo pip install -r requirements.txt
vagrant up
```
@@ -110,6 +111,7 @@ vagrant up
- [Adding/replacing a node](docs/nodes.md)
- [Upgrades basics](docs/upgrades.md)
- [Air-Gap installation](docs/offline-environment.md)
- [Hardening](docs/hardening.md)
- [Roadmap](docs/roadmap.md)
## Supported Linux Distributions
@@ -131,27 +133,27 @@ Note: Upstart/SysV init based OS types are not supported.
## Supported Components
- Core
- [kubernetes](https://github.com/kubernetes/kubernetes) v1.22.5
- [etcd](https://github.com/coreos/etcd) v3.5.0
- [kubernetes](https://github.com/kubernetes/kubernetes) v1.23.7
- [etcd](https://github.com/etcd-io/etcd) v3.5.3
- [docker](https://www.docker.com/) v20.10 (see note)
- [containerd](https://containerd.io/) v1.5.8
- [containerd](https://containerd.io/) v1.6.4
- [cri-o](http://cri-o.io/) v1.22 (experimental: see [CRI-O Note](docs/cri-o.md). Only on fedora, ubuntu and centos based OS)
- Network Plugin
- [cni-plugins](https://github.com/containernetworking/plugins) v1.0.1
- [calico](https://github.com/projectcalico/calico) v3.20.3
- [cni-plugins](https://github.com/containernetworking/plugins) v1.1.1
- [calico](https://github.com/projectcalico/calico) v3.22.3
- [canal](https://github.com/projectcalico/canal) (given calico/flannel versions)
- [cilium](https://github.com/cilium/cilium) v1.9.11
- [flanneld](https://github.com/flannel-io/flannel) v0.15.1
- [kube-ovn](https://github.com/alauda/kube-ovn) v1.8.1
- [kube-router](https://github.com/cloudnativelabs/kube-router) v1.3.2
- [cilium](https://github.com/cilium/cilium) v1.11.3
- [flanneld](https://github.com/flannel-io/flannel) v0.17.0
- [kube-ovn](https://github.com/alauda/kube-ovn) v1.9.2
- [kube-router](https://github.com/cloudnativelabs/kube-router) v1.4.0
- [multus](https://github.com/intel/multus-cni) v3.8
- [weave](https://github.com/weaveworks/weave) v2.8.1
- Application
- [cephfs-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.0-k8s1.11
- [rbd-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.1-k8s1.11
- [cert-manager](https://github.com/jetstack/cert-manager) v1.5.4
- [coredns](https://github.com/coredns/coredns) v1.8.0
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v1.0.4
- [cert-manager](https://github.com/jetstack/cert-manager) v1.8.0
- [coredns](https://github.com/coredns/coredns) v1.8.6
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v1.2.1
## Container Runtime Notes
@@ -160,8 +162,8 @@ Note: Upstart/SysV init based OS types are not supported.
## Requirements
- **Minimum required version of Kubernetes is v1.20**
- **Ansible v2.9.x, Jinja 2.11+ and python-netaddr is installed on the machine that will run Ansible commands, Ansible 2.10.x is experimentally supported for now**
- **Minimum required version of Kubernetes is v1.21**
- **Ansible v2.9.x, Jinja 2.11+ and python-netaddr is installed on the machine that will run Ansible commands**
- The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](docs/offline-environment.md))
- The target servers are configured to allow **IPv4 forwarding**.
- If using IPv6 for pods and services, the target servers are configured to allow **IPv6 forwarding**.

View File

@@ -2,17 +2,18 @@
The Kubespray Project is released on an as-needed basis. The process is as follows:
1. An issue is proposing a new release with a changelog since the last release
1. An issue is proposing a new release with a changelog since the last release. Please see [a good sample issue](https://github.com/kubernetes-sigs/kubespray/issues/8325)
2. At least one of the [approvers](OWNERS_ALIASES) must approve this release
3. The `kube_version_min_required` variable is set to `n-1`
4. Remove hashes for [EOL versions](https://github.com/kubernetes/sig-release/blob/master/releases/patch-releases.md) of kubernetes from `*_checksums` variables.
5. An approver creates [new release in GitHub](https://github.com/kubernetes-sigs/kubespray/releases/new) using a version and tag name like `vX.Y.Z` and attaching the release notes
6. An approver creates a release branch in the form `release-X.Y`
7. The corresponding version of [quay.io/kubespray/kubespray:vX.Y.Z](https://quay.io/repository/kubespray/kubespray) and [quay.io/kubespray/vagrant:vX.Y.Z](https://quay.io/repository/kubespray/vagrant) docker images are built and tagged
8. The `KUBESPRAY_VERSION` variable is updated in `.gitlab-ci.yml`
9. The release issue is closed
10. An announcement email is sent to `kubernetes-dev@googlegroups.com` with the subject `[ANNOUNCE] Kubespray $VERSION is released`
11. The topic of the #kubespray channel is updated with `vX.Y.Z is released! | ...`
4. Remove hashes for [EOL versions](https://github.com/kubernetes/website/blob/main/content/en/releases/patch-releases.md) of kubernetes from `*_checksums` variables.
5. Create the release note with [Kubernetes Release Notes Generator](https://github.com/kubernetes/release/blob/master/cmd/release-notes/README.md). See the following `Release note creation` section for the details.
6. An approver creates [new release in GitHub](https://github.com/kubernetes-sigs/kubespray/releases/new) using a version and tag name like `vX.Y.Z` and attaching the release notes
7. An approver creates a release branch in the form `release-X.Y`
8. The corresponding version of [quay.io/kubespray/kubespray:vX.Y.Z](https://quay.io/repository/kubespray/kubespray) and [quay.io/kubespray/vagrant:vX.Y.Z](https://quay.io/repository/kubespray/vagrant) docker images are built and tagged
9. The `KUBESPRAY_VERSION` variable is updated in `.gitlab-ci.yml`
10. The release issue is closed
11. An announcement email is sent to `kubernetes-dev@googlegroups.com` with the subject `[ANNOUNCE] Kubespray $VERSION is released`
12. The topic of the #kubespray channel is updated with `vX.Y.Z is released! | ...`
## Major/minor releases and milestones
@@ -46,3 +47,17 @@ The Kubespray Project is released on an as-needed basis. The process is as follo
then Kubespray v2.1.0 may be bound to only minor changes to `kube_version`, like v1.5.1
and *any* changes to other components, like etcd v4, or calico 1.2.3.
And Kubespray v3.x.x shall be bound to `kube_version: 2.x.x` respectively.
## Release note creation
You can create a release note with:
```shell
export GITHUB_TOKEN=<your-github-token>
export ORG=kubernetes-sigs
export REPO=kubespray
release-notes --start-sha <The start commit-id> --end-sha <The end commit-id> --dependencies=false --output=/tmp/kubespray-release-note --required-author=""
```
If the release note file(/tmp/kubespray-release-note) contains "### Uncategorized" pull requests, those pull requests don't have a valid kind label(`kind/feature`, etc.).
It is necessary to put a valid label on each pull request and run the above release-notes command again to get a better release note)

19
Vagrantfile vendored
View File

@@ -26,9 +26,11 @@ SUPPORTED_OS = {
"centos-bento" => {box: "bento/centos-7.6", user: "vagrant"},
"centos8" => {box: "centos/8", user: "vagrant"},
"centos8-bento" => {box: "bento/centos-8", user: "vagrant"},
"almalinux8" => {box: "almalinux/8", user: "vagrant"},
"almalinux8-bento" => {box: "bento/almalinux-8", user: "vagrant"},
"fedora34" => {box: "fedora/34-cloud-base", user: "vagrant"},
"fedora35" => {box: "fedora/35-cloud-base", user: "vagrant"},
"opensuse" => {box: "bento/opensuse-leap-15.2", user: "vagrant"},
"opensuse" => {box: "opensuse/Leap-15.3.x86_64", user: "vagrant"},
"opensuse-tumbleweed" => {box: "opensuse/Tumbleweed.x86_64", user: "vagrant"},
"oraclelinux" => {box: "generic/oracle7", user: "vagrant"},
"oraclelinux8" => {box: "generic/oracle8", user: "vagrant"},
@@ -53,7 +55,7 @@ $subnet_ipv6 ||= "fd3c:b398:0698:0756"
$os ||= "ubuntu1804"
$network_plugin ||= "flannel"
# Setting multi_networking to true will install Multus: https://github.com/intel/multus-cni
$multi_networking ||= false
$multi_networking ||= "False"
$download_run_once ||= "True"
$download_force_cache ||= "False"
# The first three nodes are etcd servers
@@ -68,9 +70,12 @@ $kube_node_instances_with_disks_size ||= "20G"
$kube_node_instances_with_disks_number ||= 2
$override_disk_size ||= false
$disk_size ||= "20GB"
$local_path_provisioner_enabled ||= false
$local_path_provisioner_enabled ||= "False"
$local_path_provisioner_claim_root ||= "/opt/local-path-provisioner/"
$libvirt_nested ||= false
# boolean or string (e.g. "-vvv")
$ansible_verbosity ||= false
$ansible_tags ||= ENV['VAGRANT_ANSIBLE_TAGS'] || ""
$playbook ||= "cluster.yml"
@@ -167,7 +172,7 @@ Vagrant.configure("2") do |config|
# always make /dev/sd{a/b/c} so that CI can ensure that
# virtualbox and libvirt will have the same devices to use for OSDs
(1..$kube_node_instances_with_disks_number).each do |d|
lv.storage :file, :device => "hd#{driverletters[d]}", :path => "disk-#{i}-#{d}-#{DISK_UUID}.disk", :size => $kube_node_instances_with_disks_size, :bus => "ide"
lv.storage :file, :device => "hd#{driverletters[d]}", :path => "disk-#{i}-#{d}-#{DISK_UUID}.disk", :size => $kube_node_instances_with_disks_size, :bus => "scsi"
end
end
end
@@ -238,9 +243,11 @@ Vagrant.configure("2") do |config|
}
# Only execute the Ansible provisioner once, when all the machines are up and ready.
# And limit the action to gathering facts, the full playbook is going to be ran by testcases_run.sh
if i == $num_instances
node.vm.provision "ansible" do |ansible|
ansible.playbook = $playbook
ansible.verbose = $ansible_verbosity
$ansible_inventory_path = File.join( $inventory, "hosts.ini")
if File.exist?($ansible_inventory_path)
ansible.inventory_path = $ansible_inventory_path
@@ -250,7 +257,9 @@ Vagrant.configure("2") do |config|
ansible.host_key_checking = false
ansible.raw_arguments = ["--forks=#{$num_instances}", "--flush-cache", "-e ansible_become_pass=vagrant"]
ansible.host_vars = host_vars
#ansible.tags = ['download']
if $ansible_tags != ""
ansible.tags = [$ansible_tags]
end
ansible.groups = {
"etcd" => ["#{$instance_name_prefix}-[1:#{$etcd_instances}]"],
"kube_control_plane" => ["#{$instance_name_prefix}-[1:#{$kube_master_instances}]"],

View File

@@ -14,7 +14,7 @@ fact_caching_timeout = 7200
stdout_callback = default
display_skipped_hosts = no
library = ./library
callback_whitelist = profile_tasks
callback_whitelist = profile_tasks,ara_default
roles_path = roles:$VIRTUAL_ENV/usr/local/share/kubespray/roles:$VIRTUAL_ENV/usr/local/share/ansible/roles:/usr/share/kubespray/roles
deprecation_warnings=False
inventory_ignore_extensions = ~, .orig, .bak, .ini, .cfg, .retry, .pyc, .pyo, .creds, .gpg

View File

@@ -5,7 +5,7 @@
vars:
minimal_ansible_version: 2.9.0
minimal_ansible_version_2_10: 2.10.11
maximal_ansible_version: 2.12.0
maximal_ansible_version: 2.13.0
ansible_connection: local
tags: always
tasks:

View File

@@ -46,7 +46,7 @@
vars:
etcd_cluster_setup: true
etcd_events_cluster_setup: "{{ etcd_events_cluster_enabled }}"
when: not etcd_kubeadm_enabled| default(false)
when: etcd_deployment_type != "kubeadm"
- hosts: k8s_cluster
gather_facts: False
@@ -59,7 +59,7 @@
vars:
etcd_cluster_setup: false
etcd_events_cluster_setup: false
when: not etcd_kubeadm_enabled| default(false)
when: etcd_deployment_type != "kubeadm"
- hosts: k8s_cluster
gather_facts: False
@@ -118,7 +118,8 @@
- { role: kubernetes-apps/external_provisioner, tags: external-provisioner }
- { role: kubernetes-apps, tags: apps }
- hosts: k8s_cluster
- name: Apply resolv.conf changes now that cluster DNS is up
hosts: k8s_cluster
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"

View File

@@ -47,6 +47,10 @@ If you need to delete all resources from a resource group, simply call:
**WARNING** this really deletes everything from your resource group, including everything that was later created by you!
## Installing Ansible and the dependencies
Install Ansible according to [Ansible installation guide](/docs/ansible.md#installing-ansible)
## Generating an inventory for kubespray
After you have applied the templates, you can generate an inventory with this call:
@@ -59,6 +63,5 @@ It will create the file ./inventory which can then be used with kubespray, e.g.:
```shell
cd kubespray-root-dir
sudo pip3 install -r requirements.txt
ansible-playbook -i contrib/azurerm/inventory -u devops --become -e "@inventory/sample/group_vars/all/all.yml" cluster.yml
```

View File

@@ -83,11 +83,15 @@ class KubesprayInventory(object):
self.config_file = config_file
self.yaml_config = {}
loadPreviousConfig = False
printHostnames = False
# See whether there are any commands to process
if changed_hosts and changed_hosts[0] in AVAILABLE_COMMANDS:
if changed_hosts[0] == "add":
loadPreviousConfig = True
changed_hosts = changed_hosts[1:]
elif changed_hosts[0] == "print_hostnames":
loadPreviousConfig = True
printHostnames = True
else:
self.parse_command(changed_hosts[0], changed_hosts[1:])
sys.exit(0)
@@ -105,6 +109,10 @@ class KubesprayInventory(object):
print(e)
sys.exit(1)
if printHostnames:
self.print_hostnames()
sys.exit(0)
self.ensure_required_groups(ROLES)
if changed_hosts:

View File

@@ -13,6 +13,7 @@
# under the License.
import inventory
from test import support
import unittest
from unittest import mock
@@ -26,6 +27,28 @@ if path not in sys.path:
import inventory # noqa
class TestInventoryPrintHostnames(unittest.TestCase):
@mock.patch('ruamel.yaml.YAML.load')
def test_print_hostnames(self, load_mock):
mock_io = mock.mock_open(read_data='')
load_mock.return_value = OrderedDict({'all': {'hosts': {
'node1': {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'},
'node2': {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'}}}})
with mock.patch('builtins.open', mock_io):
with self.assertRaises(SystemExit) as cm:
with support.captured_stdout() as stdout:
inventory.KubesprayInventory(
changed_hosts=["print_hostnames"],
config_file="file")
self.assertEqual("node1 node2\n", stdout.getvalue())
self.assertEqual(cm.exception.code, 0)
class TestInventory(unittest.TestCase):
@mock.patch('inventory.sys')
def setUp(self, sys_mock):

View File

@@ -28,7 +28,7 @@
sysctl:
name: net.ipv4.ip_forward
value: 1
sysctl_file: /etc/sysctl.d/ipv4-ip_forward.conf
sysctl_file: "{{ sysctl_file_path }}"
state: present
reload: yes
@@ -37,7 +37,7 @@
name: "{{ item }}"
state: present
value: 0
sysctl_file: /etc/sysctl.d/bridge-nf-call.conf
sysctl_file: "{{ sysctl_file_path }}"
reload: yes
with_items:
- net.bridge.bridge-nf-call-arptables

View File

@@ -5,8 +5,8 @@
- hosts: localhost
strategy: linear
vars:
mitogen_version: 0.3.0rc1
mitogen_url: https://github.com/dw/mitogen/archive/v{{ mitogen_version }}.tar.gz
mitogen_version: 0.3.2
mitogen_url: https://github.com/mitogen-hq/mitogen/archive/refs/tags/v{{ mitogen_version }}.tar.gz
ansible_connection: local
tasks:
- name: Create mitogen plugin dir
@@ -38,7 +38,12 @@
- name: add strategy to ansible.cfg
ini_file:
path: ansible.cfg
section: defaults
option: strategy
value: mitogen_linear
mode: 0644
section: "{{ item.section | d('defaults') }}"
option: "{{ item.option }}"
value: "{{ item.value }}"
with_items:
- option: strategy
value: mitogen_linear
- option: strategy_plugins
value: plugins/mitogen/ansible_mitogen/plugins/strategy

View File

@@ -3,6 +3,7 @@
template:
src: "{{ item.file }}"
dest: "{{ kube_config_dir }}/{{ item.dest }}"
mode: 0644
with_items:
- { file: glusterfs-kubernetes-endpoint.json.j2, type: ep, dest: glusterfs-kubernetes-endpoint.json}
- { file: glusterfs-kubernetes-pv.yml.j2, type: pv, dest: glusterfs-kubernetes-pv.yml}

View File

@@ -9,7 +9,8 @@ This script has two features:
(2) Deploy local container registry and register the container images to the registry.
Step(1) should be done online site as a preparation, then we bring the gotten images
to the target offline environment.
to the target offline environment. if images are from a private registry,
you need to set `PRIVATE_REGISTRY` environment variable.
Then we will run step(2) for registering the images to local registry.
Step(1) can be operated with:
@@ -28,16 +29,19 @@ manage-offline-container-images.sh register
This script generates the list of downloaded files and the list of container images by `roles/download/defaults/main.yml` file.
Run this script will generates three files, all downloaded files url in files.list, all container images in images.list, all component version in generate.sh.
Run this script will execute `generate_list.yml` playbook in kubespray root directory and generate four files,
all downloaded files url in files.list, all container images in images.list, jinja2 templates in *.template.
```shell
bash generate_list.sh
./generate_list.sh
tree temp
temp
├── files.list
├── generate.sh
└── images.list
0 directories, 3 files
├── files.list.template
├── images.list
└── images.list.template
0 directories, 5 files
```
In some cases you may want to update some component version, you can edit `generate.sh` file, then run `bash generate.sh | grep 'https' > files.list` to update file.list or run `bash generate.sh | grep -v 'https'> images.list` to update images.list.
In some cases you may want to update some component version, you can declare version variables in ansible inventory file or group_vars,
then run `./generate_list.sh -i [inventory_file]` to update file.list and images.list.
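The version override mentioned above can live in an ordinary group_vars file; a minimal sketch (`kube_version` and `helm_version` are existing Kubespray variables, the pinned values are only examples):

```yaml
# Hypothetical group_vars/all/offline-overrides.yml: only the versions you want
# to diverge from roles/download/defaults/main.yml.
kube_version: v1.23.1
helm_version: v3.7.2
```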

56
contrib/offline/generate_list.sh Normal file → Executable file
View File

@@ -5,53 +5,29 @@ CURRENT_DIR=$(cd $(dirname $0); pwd)
TEMP_DIR="${CURRENT_DIR}/temp"
REPO_ROOT_DIR="${CURRENT_DIR%/contrib/offline}"
: ${IMAGE_ARCH:="amd64"}
: ${ANSIBLE_SYSTEM:="linux"}
: ${ANSIBLE_ARCHITECTURE:="x86_64"}
: ${DOWNLOAD_YML:="roles/download/defaults/main.yml"}
: ${KUBE_VERSION_YAML:="roles/kubespray-defaults/defaults/main.yaml"}
mkdir -p ${TEMP_DIR}
# ARCH used in convert {%- if image_arch != 'amd64' -%}-{{ image_arch }}{%- endif -%} to {{arch}}
if [ "${IMAGE_ARCH}" != "amd64" ]; then ARCH="${IMAGE_ARCH}"; fi
cat > ${TEMP_DIR}/generate.sh << EOF
arch=${ARCH}
image_arch=${IMAGE_ARCH}
ansible_system=${ANSIBLE_SYSTEM}
ansible_architecture=${ANSIBLE_ARCHITECTURE}
EOF
# generate all component version by $DOWNLOAD_YML
grep 'kube_version:' ${REPO_ROOT_DIR}/${KUBE_VERSION_YAML} \
| sed 's/: /=/g' >> ${TEMP_DIR}/generate.sh
grep '_version:' ${REPO_ROOT_DIR}/${DOWNLOAD_YML} \
| sed 's/: /=/g;s/{{/${/g;s/}}/}/g' | tr -d ' ' >> ${TEMP_DIR}/generate.sh
sed -i 's/kube_major_version=.*/kube_major_version=${kube_version%.*}/g' ${TEMP_DIR}/generate.sh
sed -i 's/crictl_version=.*/crictl_version=${kube_version%.*}.0/g' ${TEMP_DIR}/generate.sh
# generate all download files url
# generate all download files url template
grep 'download_url:' ${REPO_ROOT_DIR}/${DOWNLOAD_YML} \
| sed 's/: /=/g;s/ //g;s/{{/${/g;s/}}/}/g;s/|lower//g;s/^.*_url=/echo /g' >> ${TEMP_DIR}/generate.sh
| sed 's/^.*_url: //g;s/\"//g' > ${TEMP_DIR}/files.list.template
# generate all images list
grep -E '_repo:|_tag:' ${REPO_ROOT_DIR}/${DOWNLOAD_YML} \
| sed "s#{%- if image_arch != 'amd64' -%}-{{ image_arch }}{%- endif -%}#{{arch}}#g" \
| sed 's/: /=/g;s/{{/${/g;s/}}/}/g' | tr -d ' ' >> ${TEMP_DIR}/generate.sh
# generate all images list template
sed -n '/^downloads:/,/download_defaults:/p' ${REPO_ROOT_DIR}/${DOWNLOAD_YML} \
| sed -n "s/repo: //p;s/tag: //p" | tr -d ' ' | sed 's/{{/${/g;s/}}/}/g' \
| sed 'N;s#\n# #g' | tr ' ' ':' | sed 's/^/echo /g' >> ${TEMP_DIR}/generate.sh
| sed -n "s/repo: //p;s/tag: //p" | tr -d ' ' \
| sed 'N;s#\n# #g' | tr ' ' ':' | sed 's/\"//g' > ${TEMP_DIR}/images.list.template
# special handling for https://github.com/kubernetes-sigs/kubespray/pull/7570
sed -i 's#^coredns_image_repo=.*#coredns_image_repo=${kube_image_repo}$(if printf "%s\\n%s\\n" v1.21 ${kube_version%.*} | sort --check=quiet --version-sort; then echo -n /coredns/coredns;else echo -n /coredns; fi)#' ${TEMP_DIR}/generate.sh
sed -i 's#^coredns_image_tag=.*#coredns_image_tag=$(if printf "%s\\n%s\\n" v1.21 ${kube_version%.*} | sort --check=quiet --version-sort; then echo -n ${coredns_version};else echo -n ${coredns_version/v/}; fi)#' ${TEMP_DIR}/generate.sh
# add kube-* images to images list
# add kube-* images to images list template
# Those container images are downloaded by kubeadm, then roles/download/defaults/main.yml
# doesn't contain those images. That is reason why here needs to put those images into the
# list separately.
KUBE_IMAGES="kube-apiserver kube-controller-manager kube-scheduler kube-proxy"
echo "${KUBE_IMAGES}" | tr ' ' '\n' | xargs -L1 -I {} \
echo 'echo ${kube_image_repo}/{}:${kube_version}' >> ${TEMP_DIR}/generate.sh
for i in $KUBE_IMAGES; do
echo "{{ kube_image_repo }}/$i:{{ kube_version }}" >> ${TEMP_DIR}/images.list.template
done
# print files.list and images.list
bash ${TEMP_DIR}/generate.sh | grep 'https' | sort > ${TEMP_DIR}/files.list
bash ${TEMP_DIR}/generate.sh | grep -v 'https' | sort > ${TEMP_DIR}/images.list
# run ansible to expand templates
/bin/cp ${CURRENT_DIR}/generate_list.yml ${REPO_ROOT_DIR}
(cd ${REPO_ROOT_DIR} && ansible-playbook $* generate_list.yml && /bin/rm generate_list.yml) || exit 1

View File

@@ -0,0 +1,19 @@
---
- hosts: localhost
become: no
roles:
# Just load default variables from roles.
- role: kubespray-defaults
when: false
- role: download
when: false
tasks:
# Generate files.list and images.list files from templates.
- template:
src: ./contrib/offline/temp/{{ item }}.list.template
dest: ./contrib/offline/temp/{{ item }}.list
with_items:
- files
- images

View File

@@ -54,7 +54,8 @@ function create_container_image_tar() {
if [ "${FIRST_PART}" = "k8s.gcr.io" ] ||
[ "${FIRST_PART}" = "gcr.io" ] ||
[ "${FIRST_PART}" = "docker.io" ] ||
[ "${FIRST_PART}" = "quay.io" ]; then
[ "${FIRST_PART}" = "quay.io" ] ||
[ "${FIRST_PART}" = "${PRIVATE_REGISTRY}" ]; then
image=$(echo ${image} | sed s@"${FIRST_PART}/"@@)
fi
echo "${FILE_NAME} ${image}" >> ${IMAGE_LIST}
@@ -152,7 +153,8 @@ else
echo "(2) Deploy local container registry and register the container images to the registry."
echo ""
echo "Step(1) should be done online site as a preparation, then we bring"
echo "the gotten images to the target offline environment."
echo "the gotten images to the target offline environment. if images are from"
echo "a private registry, you need to set PRIVATE_REGISTRY environment variable."
echo "Then we will run step(2) for registering the images to local registry."
echo ""
echo "${IMAGE_TAR_FILE} is created to contain your container images."

View File

@@ -20,20 +20,20 @@ module "aws-vpc" {
aws_cluster_name = var.aws_cluster_name
aws_vpc_cidr_block = var.aws_vpc_cidr_block
aws_avail_zones = slice(data.aws_availability_zones.available.names, 0, length(var.aws_cidr_subnets_public) <= length(data.aws_availability_zones.available.names) ? length(var.aws_cidr_subnets_public) : length(data.aws_availability_zones.available.names))
aws_avail_zones = data.aws_availability_zones.available.names
aws_cidr_subnets_private = var.aws_cidr_subnets_private
aws_cidr_subnets_public = var.aws_cidr_subnets_public
default_tags = var.default_tags
}
module "aws-elb" {
source = "./modules/elb"
module "aws-nlb" {
source = "./modules/nlb"
aws_cluster_name = var.aws_cluster_name
aws_vpc_id = module.aws-vpc.aws_vpc_id
aws_avail_zones = slice(data.aws_availability_zones.available.names, 0, length(var.aws_cidr_subnets_public) <= length(data.aws_availability_zones.available.names) ? length(var.aws_cidr_subnets_public) : length(data.aws_availability_zones.available.names))
aws_avail_zones = data.aws_availability_zones.available.names
aws_subnet_ids_public = module.aws-vpc.aws_subnet_ids_public
aws_elb_api_port = var.aws_elb_api_port
aws_nlb_api_port = var.aws_nlb_api_port
k8s_secure_api_port = var.k8s_secure_api_port
default_tags = var.default_tags
}
@@ -54,7 +54,6 @@ resource "aws_instance" "bastion-server" {
instance_type = var.aws_bastion_size
count = var.aws_bastion_num
associate_public_ip_address = true
availability_zone = element(slice(data.aws_availability_zones.available.names, 0, length(var.aws_cidr_subnets_public) <= length(data.aws_availability_zones.available.names) ? length(var.aws_cidr_subnets_public) : length(data.aws_availability_zones.available.names)), count.index)
subnet_id = element(module.aws-vpc.aws_subnet_ids_public, count.index)
vpc_security_group_ids = module.aws-vpc.aws_security_group
@@ -79,8 +78,7 @@ resource "aws_instance" "k8s-master" {
count = var.aws_kube_master_num
availability_zone = element(slice(data.aws_availability_zones.available.names, 0, length(var.aws_cidr_subnets_public) <= length(data.aws_availability_zones.available.names) ? length(var.aws_cidr_subnets_public) : length(data.aws_availability_zones.available.names)), count.index)
subnet_id = element(module.aws-vpc.aws_subnet_ids_private, count.index)
subnet_id = element(module.aws-vpc.aws_subnet_ids_private, count.index)
vpc_security_group_ids = module.aws-vpc.aws_security_group
@@ -98,10 +96,10 @@ resource "aws_instance" "k8s-master" {
}))
}
resource "aws_elb_attachment" "attach_master_nodes" {
count = var.aws_kube_master_num
elb = module.aws-elb.aws_elb_api_id
instance = element(aws_instance.k8s-master.*.id, count.index)
resource "aws_lb_target_group_attachment" "tg-attach_master_nodes" {
count = var.aws_kube_master_num
target_group_arn = module.aws-nlb.aws_nlb_api_tg_arn
target_id = element(aws_instance.k8s-master.*.private_ip, count.index)
}
resource "aws_instance" "k8s-etcd" {
@@ -110,8 +108,7 @@ resource "aws_instance" "k8s-etcd" {
count = var.aws_etcd_num
availability_zone = element(slice(data.aws_availability_zones.available.names, 0, length(var.aws_cidr_subnets_public) <= length(data.aws_availability_zones.available.names) ? length(var.aws_cidr_subnets_public) : length(data.aws_availability_zones.available.names)), count.index)
subnet_id = element(module.aws-vpc.aws_subnet_ids_private, count.index)
subnet_id = element(module.aws-vpc.aws_subnet_ids_private, count.index)
vpc_security_group_ids = module.aws-vpc.aws_security_group
@@ -134,8 +131,7 @@ resource "aws_instance" "k8s-worker" {
count = var.aws_kube_worker_num
availability_zone = element(slice(data.aws_availability_zones.available.names, 0, length(var.aws_cidr_subnets_public) <= length(data.aws_availability_zones.available.names) ? length(var.aws_cidr_subnets_public) : length(data.aws_availability_zones.available.names)), count.index)
subnet_id = element(module.aws-vpc.aws_subnet_ids_private, count.index)
subnet_id = element(module.aws-vpc.aws_subnet_ids_private, count.index)
vpc_security_group_ids = module.aws-vpc.aws_security_group
@@ -168,7 +164,7 @@ data "template_file" "inventory" {
list_node = join("\n", aws_instance.k8s-worker.*.private_dns)
connection_strings_etcd = join("\n", formatlist("%s ansible_host=%s", aws_instance.k8s-etcd.*.private_dns, aws_instance.k8s-etcd.*.private_ip))
list_etcd = join("\n", ((var.aws_etcd_num > 0) ? (aws_instance.k8s-etcd.*.private_dns) : (aws_instance.k8s-master.*.private_dns)))
elb_api_fqdn = "apiserver_loadbalancer_domain_name=\"${module.aws-elb.aws_elb_api_fqdn}\""
nlb_api_fqdn = "apiserver_loadbalancer_domain_name=\"${module.aws-nlb.aws_nlb_api_fqdn}\""
}
}

View File

@@ -1,57 +0,0 @@
resource "aws_security_group" "aws-elb" {
name = "kubernetes-${var.aws_cluster_name}-securitygroup-elb"
vpc_id = var.aws_vpc_id
tags = merge(var.default_tags, tomap({
Name = "kubernetes-${var.aws_cluster_name}-securitygroup-elb"
}))
}
resource "aws_security_group_rule" "aws-allow-api-access" {
type = "ingress"
from_port = var.aws_elb_api_port
to_port = var.k8s_secure_api_port
protocol = "TCP"
cidr_blocks = ["0.0.0.0/0"]
security_group_id = aws_security_group.aws-elb.id
}
resource "aws_security_group_rule" "aws-allow-api-egress" {
type = "egress"
from_port = 0
to_port = 65535
protocol = "TCP"
cidr_blocks = ["0.0.0.0/0"]
security_group_id = aws_security_group.aws-elb.id
}
# Create a new AWS ELB for K8S API
resource "aws_elb" "aws-elb-api" {
name = "kubernetes-elb-${var.aws_cluster_name}"
subnets = var.aws_subnet_ids_public
security_groups = [aws_security_group.aws-elb.id]
listener {
instance_port = var.k8s_secure_api_port
instance_protocol = "tcp"
lb_port = var.aws_elb_api_port
lb_protocol = "tcp"
}
health_check {
healthy_threshold = 2
unhealthy_threshold = 2
timeout = 3
target = "HTTPS:${var.k8s_secure_api_port}/healthz"
interval = 30
}
cross_zone_load_balancing = true
idle_timeout = 400
connection_draining = true
connection_draining_timeout = 400
tags = merge(var.default_tags, tomap({
Name = "kubernetes-${var.aws_cluster_name}-elb-api"
}))
}

View File

@@ -1,7 +0,0 @@
output "aws_elb_api_id" {
value = aws_elb.aws-elb-api.id
}
output "aws_elb_api_fqdn" {
value = aws_elb.aws-elb-api.dns_name
}

View File

@@ -0,0 +1,41 @@
# Create a new AWS NLB for K8S API
resource "aws_lb" "aws-nlb-api" {
name = "kubernetes-nlb-${var.aws_cluster_name}"
load_balancer_type = "network"
subnets = length(var.aws_subnet_ids_public) <= length(var.aws_avail_zones) ? var.aws_subnet_ids_public : slice(var.aws_subnet_ids_public, 0, length(var.aws_avail_zones))
idle_timeout = 400
enable_cross_zone_load_balancing = true
tags = merge(var.default_tags, tomap({
Name = "kubernetes-${var.aws_cluster_name}-nlb-api"
}))
}
# Create a new AWS NLB Instance Target Group
resource "aws_lb_target_group" "aws-nlb-api-tg" {
name = "kubernetes-nlb-tg-${var.aws_cluster_name}"
port = var.k8s_secure_api_port
protocol = "TCP"
target_type = "ip"
vpc_id = var.aws_vpc_id
health_check {
healthy_threshold = 2
unhealthy_threshold = 2
interval = 30
protocol = "HTTPS"
path = "/healthz"
}
}
# Create a new AWS NLB Listener listen to target group
resource "aws_lb_listener" "aws-nlb-api-listener" {
load_balancer_arn = aws_lb.aws-nlb-api.arn
port = var.aws_nlb_api_port
protocol = "TCP"
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.aws-nlb-api-tg.arn
}
}

View File

@@ -0,0 +1,11 @@
output "aws_nlb_api_id" {
value = aws_lb.aws-nlb-api.id
}
output "aws_nlb_api_fqdn" {
value = aws_lb.aws-nlb-api.dns_name
}
output "aws_nlb_api_tg_arn" {
value = aws_lb_target_group.aws-nlb-api-tg.arn
}

View File

@@ -6,8 +6,8 @@ variable "aws_vpc_id" {
description = "AWS VPC ID"
}
variable "aws_elb_api_port" {
description = "Port for AWS ELB"
variable "aws_nlb_api_port" {
description = "Port for AWS NLB"
}
variable "k8s_secure_api_port" {

View File

@@ -25,13 +25,14 @@ resource "aws_internet_gateway" "cluster-vpc-internetgw" {
resource "aws_subnet" "cluster-vpc-subnets-public" {
vpc_id = aws_vpc.cluster-vpc.id
count = length(var.aws_avail_zones)
availability_zone = element(var.aws_avail_zones, count.index)
count = length(var.aws_cidr_subnets_public)
availability_zone = element(var.aws_avail_zones, count.index % length(var.aws_avail_zones))
cidr_block = element(var.aws_cidr_subnets_public, count.index)
tags = merge(var.default_tags, tomap({
Name = "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-public"
"kubernetes.io/cluster/${var.aws_cluster_name}" = "member"
"kubernetes.io/cluster/${var.aws_cluster_name}" = "shared"
"kubernetes.io/role/elb" = "1"
}))
}
@@ -43,12 +44,14 @@ resource "aws_nat_gateway" "cluster-nat-gateway" {
resource "aws_subnet" "cluster-vpc-subnets-private" {
vpc_id = aws_vpc.cluster-vpc.id
count = length(var.aws_avail_zones)
availability_zone = element(var.aws_avail_zones, count.index)
count = length(var.aws_cidr_subnets_private)
availability_zone = element(var.aws_avail_zones, count.index % length(var.aws_avail_zones))
cidr_block = element(var.aws_cidr_subnets_private, count.index)
tags = merge(var.default_tags, tomap({
Name = "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-private"
"kubernetes.io/cluster/${var.aws_cluster_name}" = "shared"
"kubernetes.io/role/internal-elb" = "1"
}))
}

View File

@@ -14,8 +14,8 @@ output "etcd" {
value = join("\n", ((var.aws_etcd_num > 0) ? (aws_instance.k8s-etcd.*.private_ip) : (aws_instance.k8s-master.*.private_ip)))
}
output "aws_elb_api_fqdn" {
value = "${module.aws-elb.aws_elb_api_fqdn}:${var.aws_elb_api_port}"
output "aws_nlb_api_fqdn" {
value = "${module.aws-nlb.aws_nlb_api_fqdn}:${var.aws_nlb_api_port}"
}
output "inventory" {

View File

@@ -33,9 +33,9 @@ aws_kube_worker_size = "t2.medium"
aws_kube_worker_disk_size = 50
#Settings AWS ELB
#Settings AWS NLB
aws_elb_api_port = 6443
aws_nlb_api_port = 6443
k8s_secure_api_port = 6443

View File

@@ -24,4 +24,4 @@ kube_control_plane
calico_rr
[k8s_cluster:vars]
${elb_api_fqdn}
${nlb_api_fqdn}

View File

@@ -32,7 +32,7 @@ aws_kube_worker_size = "t3.medium"
aws_kube_worker_disk_size = 50
#Settings AWS ELB
aws_elb_api_port = 6443
aws_nlb_api_port = 6443
k8s_secure_api_port = 6443
default_tags = {

View File

@@ -25,7 +25,7 @@ aws_kube_worker_size = "t3.medium"
aws_kube_worker_disk_size = 50
#Settings AWS ELB
aws_elb_api_port = 6443
aws_nlb_api_port = 6443
k8s_secure_api_port = 6443
default_tags = { }

View File

@@ -104,11 +104,11 @@ variable "aws_kube_worker_size" {
}
/*
* AWS ELB Settings
* AWS NLB Settings
*
*/
variable "aws_elb_api_port" {
description = "Port for AWS ELB"
variable "aws_nlb_api_port" {
description = "Port for AWS NLB"
}
variable "k8s_secure_api_port" {

View File

@@ -74,14 +74,23 @@ ansible-playbook -i contrib/terraform/gcs/inventory.ini cluster.yml -b -v
* `ssh_whitelist`: List of IP ranges (CIDR) that will be allowed to ssh to the nodes
* `api_server_whitelist`: List of IP ranges (CIDR) that will be allowed to connect to the API server
* `nodeport_whitelist`: List of IP ranges (CIDR) that will be allowed to connect to the kubernetes nodes on port 30000-32767 (kubernetes nodeports)
* `ingress_whitelist`: List of IP ranges (CIDR) that will be allowed to connect to ingress on ports 80 and 443
### Optional
* `prefix`: Prefix to use for all resources, required to be unique for all clusters in the same project *(Defaults to `default`)*
* `master_sa_email`: Service account email to use for the master nodes *(Defaults to `""`, auto generate one)*
* `master_sa_scopes`: Service account email to use for the master nodes *(Defaults to `["https://www.googleapis.com/auth/cloud-platform"]`)*
* `master_sa_email`: Service account email to use for the control plane nodes *(Defaults to `""`, auto generate one)*
* `master_sa_scopes`: Service account email to use for the control plane nodes *(Defaults to `["https://www.googleapis.com/auth/cloud-platform"]`)*
* `master_preemptible`: Enable [preemptible](https://cloud.google.com/compute/docs/instances/preemptible)
for the control plane nodes *(Defaults to `false`)*
* `master_additional_disk_type`: [Disk type](https://cloud.google.com/compute/docs/disks/#disk-types)
for extra disks added on the control plane nodes *(Defaults to `"pd-ssd"`)*
* `worker_sa_email`: Service account email to use for the worker nodes *(Defaults to `""`, auto generate one)*
* `worker_sa_scopes`: Service account email to use for the worker nodes *(Defaults to `["https://www.googleapis.com/auth/cloud-platform"]`)*
* `worker_preemptible`: Enable [preemptible](https://cloud.google.com/compute/docs/instances/preemptible)
for the worker nodes *(Defaults to `false`)*
* `worker_additional_disk_type`: [Disk type](https://cloud.google.com/compute/docs/disks/#disk-types)
for extra disks added on the worker nodes *(Defaults to `"pd-ssd"`)*
An example variables file can be found `tfvars.json`

View File

@@ -1,8 +1,16 @@
terraform {
required_providers {
google = {
source = "hashicorp/google"
version = "~> 4.0"
}
}
}
provider "google" {
credentials = file(var.keyfile_location)
region = var.region
project = var.gcp_project_id
version = "~> 3.48"
}
module "kubernetes" {
@@ -13,12 +21,17 @@ module "kubernetes" {
machines = var.machines
ssh_pub_key = var.ssh_pub_key
master_sa_email = var.master_sa_email
master_sa_scopes = var.master_sa_scopes
worker_sa_email = var.worker_sa_email
worker_sa_scopes = var.worker_sa_scopes
master_sa_email = var.master_sa_email
master_sa_scopes = var.master_sa_scopes
master_preemptible = var.master_preemptible
master_additional_disk_type = var.master_additional_disk_type
worker_sa_email = var.worker_sa_email
worker_sa_scopes = var.worker_sa_scopes
worker_preemptible = var.worker_preemptible
worker_additional_disk_type = var.worker_additional_disk_type
ssh_whitelist = var.ssh_whitelist
api_server_whitelist = var.api_server_whitelist
nodeport_whitelist = var.nodeport_whitelist
ingress_whitelist = var.ingress_whitelist
}

View File

@@ -5,6 +5,8 @@
resource "google_compute_network" "main" {
name = "${var.prefix}-network"
auto_create_subnetworks = false
}
resource "google_compute_subnetwork" "main" {
@@ -20,6 +22,8 @@ resource "google_compute_firewall" "deny_all" {
priority = 1000
source_ranges = ["0.0.0.0/0"]
deny {
protocol = "all"
}
@@ -39,6 +43,8 @@ resource "google_compute_firewall" "allow_internal" {
}
resource "google_compute_firewall" "ssh" {
count = length(var.ssh_whitelist) > 0 ? 1 : 0
name = "${var.prefix}-ssh-firewall"
network = google_compute_network.main.name
@@ -53,6 +59,8 @@ resource "google_compute_firewall" "ssh" {
}
resource "google_compute_firewall" "api_server" {
count = length(var.api_server_whitelist) > 0 ? 1 : 0
name = "${var.prefix}-api-server-firewall"
network = google_compute_network.main.name
@@ -67,6 +75,8 @@ resource "google_compute_firewall" "api_server" {
}
resource "google_compute_firewall" "nodeport" {
count = length(var.nodeport_whitelist) > 0 ? 1 : 0
name = "${var.prefix}-nodeport-firewall"
network = google_compute_network.main.name
@@ -81,11 +91,15 @@ resource "google_compute_firewall" "nodeport" {
}
resource "google_compute_firewall" "ingress_http" {
count = length(var.ingress_whitelist) > 0 ? 1 : 0
name = "${var.prefix}-http-ingress-firewall"
network = google_compute_network.main.name
priority = 100
source_ranges = var.ingress_whitelist
allow {
protocol = "tcp"
ports = ["80"]
@@ -93,11 +107,15 @@ resource "google_compute_firewall" "ingress_http" {
}
resource "google_compute_firewall" "ingress_https" {
count = length(var.ingress_whitelist) > 0 ? 1 : 0
name = "${var.prefix}-https-ingress-firewall"
network = google_compute_network.main.name
priority = 100
source_ranges = var.ingress_whitelist
allow {
protocol = "tcp"
ports = ["443"]
@@ -173,7 +191,7 @@ resource "google_compute_disk" "master" {
}
name = "${var.prefix}-${each.key}"
type = "pd-ssd"
type = var.master_additional_disk_type
zone = each.value.machine.zone
size = each.value.disk_size
@@ -229,19 +247,28 @@ resource "google_compute_instance" "master" {
# Since we use google_compute_attached_disk we need to ignore this
lifecycle {
ignore_changes = ["attached_disk"]
ignore_changes = [attached_disk]
}
scheduling {
preemptible = var.master_preemptible
automatic_restart = !var.master_preemptible
}
}
resource "google_compute_forwarding_rule" "master_lb" {
count = length(var.api_server_whitelist) > 0 ? 1 : 0
name = "${var.prefix}-master-lb-forward-rule"
port_range = "6443"
target = google_compute_target_pool.master_lb.id
target = google_compute_target_pool.master_lb[count.index].id
}
resource "google_compute_target_pool" "master_lb" {
count = length(var.api_server_whitelist) > 0 ? 1 : 0
name = "${var.prefix}-master-lb-pool"
instances = local.master_target_list
}
@@ -258,7 +285,7 @@ resource "google_compute_disk" "worker" {
}
name = "${var.prefix}-${each.key}"
type = "pd-ssd"
type = var.worker_additional_disk_type
zone = each.value.machine.zone
size = each.value.disk_size
@@ -326,35 +353,48 @@ resource "google_compute_instance" "worker" {
# Since we use google_compute_attached_disk we need to ignore this
lifecycle {
ignore_changes = ["attached_disk"]
ignore_changes = [attached_disk]
}
scheduling {
preemptible = var.worker_preemptible
automatic_restart = !var.worker_preemptible
}
}
resource "google_compute_address" "worker_lb" {
count = length(var.ingress_whitelist) > 0 ? 1 : 0
name = "${var.prefix}-worker-lb-address"
address_type = "EXTERNAL"
region = var.region
}
resource "google_compute_forwarding_rule" "worker_http_lb" {
count = length(var.ingress_whitelist) > 0 ? 1 : 0
name = "${var.prefix}-worker-http-lb-forward-rule"
ip_address = google_compute_address.worker_lb.address
ip_address = google_compute_address.worker_lb[count.index].address
port_range = "80"
target = google_compute_target_pool.worker_lb.id
target = google_compute_target_pool.worker_lb[count.index].id
}
resource "google_compute_forwarding_rule" "worker_https_lb" {
count = length(var.ingress_whitelist) > 0 ? 1 : 0
name = "${var.prefix}-worker-https-lb-forward-rule"
ip_address = google_compute_address.worker_lb.address
ip_address = google_compute_address.worker_lb[count.index].address
port_range = "443"
target = google_compute_target_pool.worker_lb.id
target = google_compute_target_pool.worker_lb[count.index].id
}
resource "google_compute_target_pool" "worker_lb" {
count = length(var.ingress_whitelist) > 0 ? 1 : 0
name = "${var.prefix}-worker-lb-pool"
instances = local.worker_target_list
}

View File

@@ -19,9 +19,9 @@ output "worker_ip_addresses" {
}
output "ingress_controller_lb_ip_address" {
value = google_compute_address.worker_lb.address
value = length(var.ingress_whitelist) > 0 ? google_compute_address.worker_lb.0.address : ""
}
output "control_plane_lb_ip_address" {
value = google_compute_forwarding_rule.master_lb.ip_address
value = length(var.api_server_whitelist) > 0 ? google_compute_forwarding_rule.master_lb.0.ip_address : ""
}

View File

@@ -27,6 +27,14 @@ variable "master_sa_scopes" {
type = list(string)
}
variable "master_preemptible" {
type = bool
}
variable "master_additional_disk_type" {
type = string
}
variable "worker_sa_email" {
type = string
}
@@ -35,6 +43,14 @@ variable "worker_sa_scopes" {
type = list(string)
}
variable "worker_preemptible" {
type = bool
}
variable "worker_additional_disk_type" {
type = string
}
variable "ssh_pub_key" {}
variable "ssh_whitelist" {
@@ -49,6 +65,11 @@ variable "nodeport_whitelist" {
type = list(string)
}
variable "ingress_whitelist" {
type = list(string)
default = ["0.0.0.0/0"]
}
variable "private_network_cidr" {
default = "10.0.10.0/24"
}

View File

@@ -16,6 +16,9 @@
"nodeport_whitelist": [
"1.2.3.4/32"
],
"ingress_whitelist": [
"0.0.0.0/0"
],
"machines": {
"master-0": {
@@ -24,7 +27,7 @@
"zone": "us-central1-a",
"additional_disks": {},
"boot_disk": {
"image_name": "ubuntu-os-cloud/ubuntu-1804-bionic-v20201116",
"image_name": "ubuntu-os-cloud/ubuntu-2004-focal-v20220118",
"size": 50
}
},
@@ -38,7 +41,7 @@
}
},
"boot_disk": {
"image_name": "ubuntu-os-cloud/ubuntu-1804-bionic-v20201116",
"image_name": "ubuntu-os-cloud/ubuntu-2004-focal-v20220118",
"size": 50
}
},
@@ -52,7 +55,7 @@
}
},
"boot_disk": {
"image_name": "ubuntu-os-cloud/ubuntu-1804-bionic-v20201116",
"image_name": "ubuntu-os-cloud/ubuntu-2004-focal-v20220118",
"size": 50
}
}

View File

@@ -44,6 +44,16 @@ variable "master_sa_scopes" {
default = ["https://www.googleapis.com/auth/cloud-platform"]
}
variable "master_preemptible" {
type = bool
default = false
}
variable "master_additional_disk_type" {
type = string
default = "pd-ssd"
}
variable "worker_sa_email" {
type = string
default = ""
@@ -54,6 +64,16 @@ variable "worker_sa_scopes" {
default = ["https://www.googleapis.com/auth/cloud-platform"]
}
variable "worker_preemptible" {
type = bool
default = false
}
variable "worker_additional_disk_type" {
type = string
default = "pd-ssd"
}
variable ssh_pub_key {
description = "Path to public SSH key file which is injected into the VMs."
type = string
@@ -70,3 +90,8 @@ variable api_server_whitelist {
variable nodeport_whitelist {
type = list(string)
}
variable "ingress_whitelist" {
type = list(string)
default = ["0.0.0.0/0"]
}

View File

@@ -97,6 +97,7 @@ terraform destroy --var-file default.tfvars ../../contrib/terraform/hetzner
* `prefix`: Prefix to add to all resources; if set to "", no prefix is set
* `ssh_public_keys`: List of public SSH keys to install on all machines
* `zone`: The zone where to run the cluster
* `network_zone`: The network zone where the cluster is running
* `machines`: Machines to provision. Key of this object will be used as the name of the machine
* `node_type`: The role of this node *(master|worker)*
* `size`: Size of the VM

View File

@@ -1,6 +1,6 @@
prefix = "default"
zone = "hel1"
network_zone = "eu-central"
inventory_file = "inventory.ini"
ssh_public_keys = [

View File

@@ -10,6 +10,7 @@ module "kubernetes" {
machines = var.machines
ssh_public_keys = var.ssh_public_keys
network_zone = var.network_zone
ssh_whitelist = var.ssh_whitelist
api_server_whitelist = var.api_server_whitelist
@@ -34,9 +35,9 @@ data "template_file" "inventory" {
keys(module.kubernetes.worker_ip_addresses),
values(module.kubernetes.worker_ip_addresses).*.public_ip,
values(module.kubernetes.worker_ip_addresses).*.private_ip))
list_master = join("\n", keys(module.kubernetes.master_ip_addresses))
list_worker = join("\n", keys(module.kubernetes.worker_ip_addresses))
network_id = module.kubernetes.network_id
}
}

View File

@@ -6,7 +6,7 @@ resource "hcloud_network" "kubernetes" {
resource "hcloud_network_subnet" "kubernetes" {
type = "cloud"
network_id = hcloud_network.kubernetes.id
network_zone = "eu-central"
network_zone = var.network_zone
ip_range = var.private_subnet_cidr
}

View File

@@ -21,3 +21,7 @@ output "worker_ip_addresses" {
output "cluster_private_network_cidr" {
value = var.private_subnet_cidr
}
output "network_id" {
value = hcloud_network.kubernetes.id
}

View File

@@ -39,3 +39,6 @@ variable "private_network_cidr" {
variable "private_subnet_cidr" {
default = "10.0.10.0/24"
}
variable "network_zone" {
default = "eu-central"
}

View File

@@ -14,3 +14,6 @@ ${list_worker}
[k8s-cluster:children]
kube-master
kube-node
[k8s-cluster:vars]
network_id=${network_id}

View File

@@ -1,6 +1,10 @@
variable "zone" {
description = "The zone where to run the cluster"
}
variable "network_zone" {
description = "The network zone where the cluster is running"
default = "eu-central"
}
variable "prefix" {
description = "Prefix for resource names"

View File

@@ -35,7 +35,7 @@ now six total etcd replicas.
## Requirements
- [Install Terraform](https://www.terraform.io/intro/getting-started/install.html)
- Install dependencies: `sudo pip install -r requirements.txt`
- [Install Ansible dependencies](/docs/ansible.md#installing-ansible)
- Account with Equinix Metal
- An SSH key pair
@@ -60,9 +60,9 @@ Terraform will be used to provision all of the Equinix Metal resources with base
Create an inventory directory for your cluster by copying the existing sample and linking the `hosts` script (used to build the inventory based on Terraform state):
```ShellSession
cp -LRp contrib/terraform/packet/sample-inventory inventory/$CLUSTER
cp -LRp contrib/terraform/metal/sample-inventory inventory/$CLUSTER
cd inventory/$CLUSTER
ln -s ../../contrib/terraform/packet/hosts
ln -s ../../contrib/terraform/metal/hosts
```
This will be the base for subsequent Terraform commands.
@@ -101,7 +101,7 @@ This helps when identifying which hosts are associated with each cluster.
While the defaults in variables.tf will successfully deploy a cluster, it is recommended to set the following values:
- cluster_name = the name of the inventory directory created above as $CLUSTER
- packet_project_id = the Equinix Metal Project ID associated with the Equinix Metal API token above
- metal_project_id = the Equinix Metal Project ID associated with the Equinix Metal API token above
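For reference, a minimal `cluster.tfvars` carrying just these two values might look like the sketch below (the project ID is a placeholder, not a real value):

```hcl
# Name of the inventory directory created above ($CLUSTER)
cluster_name     = "mycluster"

# Equinix Metal Project ID tied to the API token (placeholder value)
metal_project_id = "00000000-0000-0000-0000-000000000000"
```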
#### Enable localhost access
@@ -119,7 +119,7 @@ Once the Kubespray playbooks are run, a Kubernetes configuration file will be wr
In the cluster's inventory folder, the following files might be created (either by Terraform
or manually); to prevent you from pushing them accidentally, they are listed in a
`.gitignore` file in the `terraform/packet` directory:
`.gitignore` file in the `terraform/metal` directory:
- `.terraform`
- `.tfvars`
@@ -135,7 +135,7 @@ plugins. This is accomplished as follows:
```ShellSession
cd inventory/$CLUSTER
terraform init ../../contrib/terraform/packet
terraform init ../../contrib/terraform/metal
```
This should finish fairly quickly, telling you Terraform has successfully initialized and loaded the necessary modules.
@@ -146,7 +146,7 @@ You can apply the Terraform configuration to your cluster with the following com
issued from your cluster's inventory directory (`inventory/$CLUSTER`):
```ShellSession
terraform apply -var-file=cluster.tfvars ../../contrib/terraform/packet
terraform apply -var-file=cluster.tfvars ../../contrib/terraform/metal
export ANSIBLE_HOST_KEY_CHECKING=False
ansible-playbook -i hosts ../../cluster.yml
```
@@ -156,7 +156,7 @@ ansible-playbook -i hosts ../../cluster.yml
You can destroy your new cluster with the following command issued from the cluster's inventory directory:
```ShellSession
terraform destroy -var-file=cluster.tfvars ../../contrib/terraform/packet
terraform destroy -var-file=cluster.tfvars ../../contrib/terraform/metal
```
If you've started the Ansible run, it may also be a good idea to do some manual cleanup:

View File

@@ -1,16 +1,15 @@
# Configure the Equinix Metal Provider
provider "packet" {
version = "~> 2.0"
provider "metal" {
}
resource "packet_ssh_key" "k8s" {
resource "metal_ssh_key" "k8s" {
count = var.public_key_path != "" ? 1 : 0
name = "kubernetes-${var.cluster_name}"
public_key = chomp(file(var.public_key_path))
}
resource "packet_device" "k8s_master" {
depends_on = [packet_ssh_key.k8s]
resource "metal_device" "k8s_master" {
depends_on = [metal_ssh_key.k8s]
count = var.number_of_k8s_masters
hostname = "${var.cluster_name}-k8s-master-${count.index + 1}"
@@ -18,12 +17,12 @@ resource "packet_device" "k8s_master" {
facilities = [var.facility]
operating_system = var.operating_system
billing_cycle = var.billing_cycle
project_id = var.packet_project_id
project_id = var.metal_project_id
tags = ["cluster-${var.cluster_name}", "k8s_cluster", "kube_control_plane", "etcd", "kube_node"]
}
resource "packet_device" "k8s_master_no_etcd" {
depends_on = [packet_ssh_key.k8s]
resource "metal_device" "k8s_master_no_etcd" {
depends_on = [metal_ssh_key.k8s]
count = var.number_of_k8s_masters_no_etcd
hostname = "${var.cluster_name}-k8s-master-${count.index + 1}"
@@ -31,12 +30,12 @@ resource "packet_device" "k8s_master_no_etcd" {
facilities = [var.facility]
operating_system = var.operating_system
billing_cycle = var.billing_cycle
project_id = var.packet_project_id
project_id = var.metal_project_id
tags = ["cluster-${var.cluster_name}", "k8s_cluster", "kube_control_plane"]
}
resource "packet_device" "k8s_etcd" {
depends_on = [packet_ssh_key.k8s]
resource "metal_device" "k8s_etcd" {
depends_on = [metal_ssh_key.k8s]
count = var.number_of_etcd
hostname = "${var.cluster_name}-etcd-${count.index + 1}"
@@ -44,12 +43,12 @@ resource "packet_device" "k8s_etcd" {
facilities = [var.facility]
operating_system = var.operating_system
billing_cycle = var.billing_cycle
project_id = var.packet_project_id
project_id = var.metal_project_id
tags = ["cluster-${var.cluster_name}", "etcd"]
}
resource "packet_device" "k8s_node" {
depends_on = [packet_ssh_key.k8s]
resource "metal_device" "k8s_node" {
depends_on = [metal_ssh_key.k8s]
count = var.number_of_k8s_nodes
hostname = "${var.cluster_name}-k8s-node-${count.index + 1}"
@@ -57,7 +56,7 @@ resource "packet_device" "k8s_node" {
facilities = [var.facility]
operating_system = var.operating_system
billing_cycle = var.billing_cycle
project_id = var.packet_project_id
project_id = var.metal_project_id
tags = ["cluster-${var.cluster_name}", "k8s_cluster", "kube_node"]
}

View File

@@ -0,0 +1,16 @@
output "k8s_masters" {
value = metal_device.k8s_master.*.access_public_ipv4
}
output "k8s_masters_no_etc" {
value = metal_device.k8s_master_no_etcd.*.access_public_ipv4
}
output "k8s_etcds" {
value = metal_device.k8s_etcd.*.access_public_ipv4
}
output "k8s_nodes" {
value = metal_device.k8s_node.*.access_public_ipv4
}

View File

@@ -2,7 +2,7 @@
cluster_name = "mycluster"
# Your Equinix Metal project ID. See https://metal.equinix.com/developers/docs/accounts/
packet_project_id = "Example-API-Token"
metal_project_id = "Example-API-Token"
# The public SSH key to be uploaded into authorized_keys in bare metal Equinix Metal nodes provisioned
# leave this value blank if the public key is already setup in the Equinix Metal project

View File

@@ -2,12 +2,12 @@ variable "cluster_name" {
default = "kubespray"
}
variable "packet_project_id" {
variable "metal_project_id" {
description = "Your Equinix Metal project ID. See https://metal.equinix.com/developers/docs/accounts/"
}
variable "operating_system" {
default = "ubuntu_16_04"
default = "ubuntu_20_04"
}
variable "public_key_path" {
@@ -24,23 +24,23 @@ variable "facility" {
}
variable "plan_k8s_masters" {
default = "c2.medium.x86"
default = "c3.small.x86"
}
variable "plan_k8s_masters_no_etcd" {
default = "c2.medium.x86"
default = "c3.small.x86"
}
variable "plan_etcd" {
default = "c2.medium.x86"
default = "c3.small.x86"
}
variable "plan_k8s_nodes" {
default = "c2.medium.x86"
default = "c3.medium.x86"
}
variable "number_of_k8s_masters" {
default = 0
default = 1
}
variable "number_of_k8s_masters_no_etcd" {
@@ -52,6 +52,6 @@ variable "number_of_etcd" {
}
variable "number_of_k8s_nodes" {
default = 0
default = 1
}

View File

@@ -2,8 +2,8 @@
terraform {
required_version = ">= 0.12"
required_providers {
packet = {
source = "terraform-providers/packet"
metal = {
source = "equinix/metal"
}
}
}

View File

@@ -17,9 +17,10 @@ most modern installs of OpenStack that support the basic services.
- [ELASTX](https://elastx.se/)
- [EnterCloudSuite](https://www.entercloudsuite.com/)
- [FugaCloud](https://fuga.cloud/)
- [Open Telekom Cloud](https://cloud.telekom.de/) : requires to set the variable `wait_for_floatingip = "true"` in your cluster.tfvars
- [Open Telekom Cloud](https://cloud.telekom.de/)
- [OVH](https://www.ovh.com/)
- [Rackspace](https://www.rackspace.com/)
- [Safespring](https://www.safespring.com)
- [Ultimum](https://ultimum.io/)
- [VexxHost](https://vexxhost.com/)
- [Zetta](https://www.zetta.io/)
@@ -247,6 +248,7 @@ For your cluster, edit `inventory/$CLUSTER/cluster.tfvars`.
|`cluster_name` | All OpenStack resources will use the Terraform variable `cluster_name` (default `example`) in their name to make it easier to track. For example, the first compute resource will be named `example-kubernetes-1`. |
|`az_list` | List of Availability Zones available in your OpenStack cluster. |
|`network_name` | The name to be given to the internal network that will be generated |
|`use_existing_network`| Use an existing network with the name of `network_name`. `false` by default |
|`network_dns_domain` | (Optional) The dns_domain for the internal network that will be generated |
|`dns_nameservers`| An array of DNS name server names to be used by hosts in the internal subnet. |
|`floatingip_pool` | Name of the pool from which floating IPs will be allocated |
@@ -271,7 +273,6 @@ For your cluster, edit `inventory/$CLUSTER/cluster.tfvars`.
|`k8s_allowed_remote_ips` | List of CIDR allowed to initiate an SSH connection, empty by default |
|`worker_allowed_ports` | List of ports to open on worker nodes, `[{ "protocol" = "tcp", "port_range_min" = 30000, "port_range_max" = 32767, "remote_ip_prefix" = "0.0.0.0/0"}]` by default |
|`master_allowed_ports` | List of ports to open on master nodes, expected format is `[{ "protocol" = "tcp", "port_range_min" = 443, "port_range_max" = 443, "remote_ip_prefix" = "0.0.0.0/0"}]`, empty by default |
|`wait_for_floatingip` | Let Terraform poll the instance until the floating IP has been associated, `false` by default. |
|`node_root_volume_size_in_gb` | Size of the root volume for nodes, 0 to use ephemeral storage |
|`master_root_volume_size_in_gb` | Size of the root volume for masters, 0 to use ephemeral storage |
|`master_volume_type` | Volume type of the root volume for control_plane, 'Default' by default |
@@ -283,7 +284,10 @@ For your cluster, edit `inventory/$CLUSTER/cluster.tfvars`.
|`node_server_group_policy` | Enable and use openstack nova servergroups for nodes with set policy, default: "" (disabled) |
|`etcd_server_group_policy` | Enable and use openstack nova servergroups for etcd with set policy, default: "" (disabled) |
|`use_access_ip` | If 1, nodes with floating IPs will transmit internal cluster traffic via floating IPs; if 0, private IPs will be used instead. Default value is 1. |
|`port_security_enabled` | Allows disabling port security by setting this to `false`. `true` by default |
|`force_null_port_security` | Set `null` instead of `true` or `false` for `port_security`. `false` by default |
|`k8s_nodes` | Map containing worker node definition, see explanation below |
|`k8s_masters` | Map containing master node definition, see explanation for k8s_nodes and `sample-inventory/cluster.tfvars` |
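As an illustration only, the new existing-network and port-security options could be combined in `cluster.tfvars` roughly as in the sketch below; all values are placeholders and should be adapted to your cloud:

```hcl
# Reuse an already existing Neutron network instead of creating a new one
network_name         = "my-existing-network"   # placeholder
use_existing_network = true

# Disable port security, or force it to null on clouds that reject the flag
port_security_enabled    = false
force_null_port_security = false
```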
##### k8s_nodes
@@ -411,18 +415,39 @@ plugins. This is accomplished as follows:
```ShellSession
cd inventory/$CLUSTER
terraform init ../../contrib/terraform/openstack
terraform -chdir="../../contrib/terraform/openstack" init
```
This should finish fairly quickly, telling you Terraform has successfully initialized and loaded the necessary modules.
### Customizing with cloud-init
You can apply cloud-init-based customization to the OpenStack instances before provisioning your cluster.
One common template is used for all instances. Adjust the file shown below:
`contrib/terraform/openstack/modules/compute/templates/cloudinit.yaml`
For example, to enable OpenStack noVNC console access and `ansible_user=root` SSH access:
```yaml
#cloud-config
## in some cases novnc console access is required
## it requires ssh password to be set
ssh_pwauth: yes
chpasswd:
  list: |
    root:secret
  expire: False

## in some cases direct root ssh access via ssh key is required
disable_root: false
```
### Provisioning cluster
You can apply the Terraform configuration to your cluster with the following command
issued from your cluster's inventory directory (`inventory/$CLUSTER`):
```ShellSession
terraform apply -var-file=cluster.tfvars ../../contrib/terraform/openstack
terraform -chdir="../../contrib/terraform/openstack" apply -var-file=cluster.tfvars
```
if you chose to create a bastion host, this script will create
@@ -437,7 +462,7 @@ pick it up automatically.
You can destroy your new cluster with the following command issued from the cluster's inventory directory:
```ShellSession
terraform destroy -var-file=cluster.tfvars ../../contrib/terraform/openstack
terraform -chdir="../../contrib/terraform/openstack" destroy -var-file=cluster.tfvars
```
If you've started the Ansible run, it may also be a good idea to do some manual cleanup:

View File

@@ -1,14 +1,15 @@
module "network" {
source = "./modules/network"
external_net = var.external_net
network_name = var.network_name
subnet_cidr = var.subnet_cidr
cluster_name = var.cluster_name
dns_nameservers = var.dns_nameservers
network_dns_domain = var.network_dns_domain
use_neutron = var.use_neutron
router_id = var.router_id
external_net = var.external_net
network_name = var.network_name
subnet_cidr = var.subnet_cidr
cluster_name = var.cluster_name
dns_nameservers = var.dns_nameservers
network_dns_domain = var.network_dns_domain
use_neutron = var.use_neutron
port_security_enabled = var.port_security_enabled
router_id = var.router_id
}
module "ips" {
@@ -23,6 +24,7 @@ module "ips" {
network_name = var.network_name
router_id = module.network.router_id
k8s_nodes = var.k8s_nodes
k8s_masters = var.k8s_masters
k8s_master_fips = var.k8s_master_fips
bastion_fips = var.bastion_fips
router_internal_port_id = module.network.router_internal_port_id
@@ -43,6 +45,7 @@ module "compute" {
number_of_bastions = var.number_of_bastions
number_of_k8s_nodes_no_floating_ip = var.number_of_k8s_nodes_no_floating_ip
number_of_gfs_nodes_no_floating_ip = var.number_of_gfs_nodes_no_floating_ip
k8s_masters = var.k8s_masters
k8s_nodes = var.k8s_nodes
bastion_root_volume_size_in_gb = var.bastion_root_volume_size_in_gb
etcd_root_volume_size_in_gb = var.etcd_root_volume_size_in_gb
@@ -69,6 +72,7 @@ module "compute" {
flavor_bastion = var.flavor_bastion
k8s_master_fips = module.ips.k8s_master_fips
k8s_master_no_etcd_fips = module.ips.k8s_master_no_etcd_fips
k8s_masters_fips = module.ips.k8s_masters_fips
k8s_node_fips = module.ips.k8s_node_fips
k8s_nodes_fips = module.ips.k8s_nodes_fips
bastion_fips = module.ips.bastion_fips
@@ -80,7 +84,6 @@ module "compute" {
supplementary_node_groups = var.supplementary_node_groups
master_allowed_ports = var.master_allowed_ports
worker_allowed_ports = var.worker_allowed_ports
wait_for_floatingip = var.wait_for_floatingip
use_access_ip = var.use_access_ip
master_server_group_policy = var.master_server_group_policy
node_server_group_policy = var.node_server_group_policy
@@ -88,8 +91,11 @@ module "compute" {
extra_sec_groups = var.extra_sec_groups
extra_sec_groups_name = var.extra_sec_groups_name
group_vars_path = var.group_vars_path
network_id = module.network.router_id
port_security_enabled = var.port_security_enabled
force_null_port_security = var.force_null_port_security
network_router_id = module.network.router_id
network_id = module.network.network_id
use_existing_network = var.use_existing_network
}
output "private_subnet_id" {

View File

@@ -15,6 +15,15 @@ data "openstack_images_image_v2" "image_master" {
name = var.image_master == "" ? var.image : var.image_master
}
data "template_file" "cloudinit" {
template = file("${path.module}/templates/cloudinit.yaml")
}
data "openstack_networking_network_v2" "k8s_network" {
count = var.use_existing_network ? 1 : 0
name = var.network_name
}
resource "openstack_compute_keypair_v2" "k8s" {
name = "kubernetes-${var.cluster_name}"
public_key = chomp(file(var.public_key_path))
@@ -150,16 +159,25 @@ resource "openstack_compute_servergroup_v2" "k8s_etcd" {
locals {
# master groups
master_sec_groups = compact([
openstack_networking_secgroup_v2.k8s_master.name,
openstack_networking_secgroup_v2.k8s.name,
var.extra_sec_groups ? openstack_networking_secgroup_v2.k8s_master_extra[0].name : "",
openstack_networking_secgroup_v2.k8s_master.id,
openstack_networking_secgroup_v2.k8s.id,
var.extra_sec_groups ? openstack_networking_secgroup_v2.k8s_master_extra[0].id : "",
])
# worker groups
worker_sec_groups = compact([
openstack_networking_secgroup_v2.k8s.name,
openstack_networking_secgroup_v2.worker.name,
var.extra_sec_groups ? openstack_networking_secgroup_v2.worker_extra[0].name : "",
openstack_networking_secgroup_v2.k8s.id,
openstack_networking_secgroup_v2.worker.id,
var.extra_sec_groups ? openstack_networking_secgroup_v2.worker_extra[0].id : "",
])
# bastion groups
bastion_sec_groups = compact(concat([
openstack_networking_secgroup_v2.k8s.id,
openstack_networking_secgroup_v2.bastion[0].id,
]))
# etcd groups
etcd_sec_groups = compact([openstack_networking_secgroup_v2.k8s.id])
# glusterfs groups
gfs_sec_groups = compact([openstack_networking_secgroup_v2.k8s.id])
# Image uuid
image_to_use_node = var.image_uuid != "" ? var.image_uuid : data.openstack_images_image_v2.vm_image[0].id
@@ -169,12 +187,27 @@ locals {
image_to_use_master = var.image_master_uuid != "" ? var.image_master_uuid : var.image_uuid != "" ? var.image_uuid : data.openstack_images_image_v2.image_master[0].id
}
resource "openstack_networking_port_v2" "bastion_port" {
count = var.number_of_bastions
name = "${var.cluster_name}-bastion-${count.index + 1}"
network_id = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
admin_state_up = "true"
port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
security_group_ids = var.port_security_enabled ? local.bastion_sec_groups : null
no_security_groups = var.port_security_enabled ? null : false
depends_on = [
var.network_router_id
]
}
resource "openstack_compute_instance_v2" "bastion" {
name = "${var.cluster_name}-bastion-${count.index + 1}"
count = var.number_of_bastions
image_id = var.bastion_root_volume_size_in_gb == 0 ? local.image_to_use_node : null
flavor_id = var.flavor_bastion
key_pair = openstack_compute_keypair_v2.k8s.name
user_data = data.template_file.cloudinit.rendered
dynamic "block_device" {
for_each = var.bastion_root_volume_size_in_gb > 0 ? [local.image_to_use_node] : []
@@ -189,25 +222,35 @@ resource "openstack_compute_instance_v2" "bastion" {
}
network {
name = var.network_name
port = element(openstack_networking_port_v2.bastion_port.*.id, count.index)
}
security_groups = [openstack_networking_secgroup_v2.k8s.name,
element(openstack_networking_secgroup_v2.bastion.*.name, count.index),
]
metadata = {
ssh_user = var.ssh_user
kubespray_groups = "bastion"
depends_on = var.network_id
depends_on = var.network_router_id
use_access_ip = var.use_access_ip
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ ${path.root}/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${var.bastion_fips[0]}/ > ${var.group_vars_path}/no_floating.yml"
command = "sed -e s/USER/${var.ssh_user}/ -e s/BASTION_ADDRESS/${var.bastion_fips[0]}/ ${path.module}/ansible_bastion_template.txt > ${var.group_vars_path}/no_floating.yml"
}
}
resource "openstack_networking_port_v2" "k8s_master_port" {
count = var.number_of_k8s_masters
name = "${var.cluster_name}-k8s-master-${count.index + 1}"
network_id = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
admin_state_up = "true"
port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
security_group_ids = var.port_security_enabled ? local.master_sec_groups : null
no_security_groups = var.port_security_enabled ? null : false
depends_on = [
var.network_router_id
]
}
resource "openstack_compute_instance_v2" "k8s_master" {
name = "${var.cluster_name}-k8s-master-${count.index + 1}"
count = var.number_of_k8s_masters
@@ -215,6 +258,7 @@ resource "openstack_compute_instance_v2" "k8s_master" {
image_id = var.master_root_volume_size_in_gb == 0 ? local.image_to_use_master : null
flavor_id = var.flavor_k8s_master
key_pair = openstack_compute_keypair_v2.k8s.name
user_data = data.template_file.cloudinit.rendered
dynamic "block_device" {
@@ -231,11 +275,9 @@ resource "openstack_compute_instance_v2" "k8s_master" {
}
network {
name = var.network_name
port = element(openstack_networking_port_v2.k8s_master_port.*.id, count.index)
}
security_groups = local.master_sec_groups
dynamic "scheduler_hints" {
for_each = var.master_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
content {
@@ -246,15 +288,87 @@ resource "openstack_compute_instance_v2" "k8s_master" {
metadata = {
ssh_user = var.ssh_user
kubespray_groups = "etcd,kube_control_plane,${var.supplementary_master_groups},k8s_cluster"
depends_on = var.network_id
depends_on = var.network_router_id
use_access_ip = var.use_access_ip
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ ${path.root}/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(concat(var.bastion_fips, var.k8s_master_fips), 0)}/ > ${var.group_vars_path}/no_floating.yml"
command = "sed -e s/USER/${var.ssh_user}/ -e s/BASTION_ADDRESS/${element(concat(var.bastion_fips, var.k8s_master_fips), 0)}/ ${path.module}/ansible_bastion_template.txt > ${var.group_vars_path}/no_floating.yml"
}
}
resource "openstack_networking_port_v2" "k8s_masters_port" {
for_each = var.number_of_k8s_masters == 0 && var.number_of_k8s_masters_no_etcd == 0 && var.number_of_k8s_masters_no_floating_ip == 0 && var.number_of_k8s_masters_no_floating_ip_no_etcd == 0 ? var.k8s_masters : {}
name = "${var.cluster_name}-k8s-${each.key}"
network_id = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
admin_state_up = "true"
port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
security_group_ids = var.port_security_enabled ? local.master_sec_groups : null
no_security_groups = var.port_security_enabled ? null : false
depends_on = [
var.network_router_id
]
}
resource "openstack_compute_instance_v2" "k8s_masters" {
for_each = var.number_of_k8s_masters == 0 && var.number_of_k8s_masters_no_etcd == 0 && var.number_of_k8s_masters_no_floating_ip == 0 && var.number_of_k8s_masters_no_floating_ip_no_etcd == 0 ? var.k8s_masters : {}
name = "${var.cluster_name}-k8s-${each.key}"
availability_zone = each.value.az
image_id = var.master_root_volume_size_in_gb == 0 ? local.image_to_use_master : null
flavor_id = each.value.flavor
key_pair = openstack_compute_keypair_v2.k8s.name
dynamic "block_device" {
for_each = var.master_root_volume_size_in_gb > 0 ? [local.image_to_use_master] : []
content {
uuid = local.image_to_use_master
source_type = "image"
volume_size = var.master_root_volume_size_in_gb
volume_type = var.master_volume_type
boot_index = 0
destination_type = "volume"
delete_on_termination = true
}
}
network {
port = openstack_networking_port_v2.k8s_masters_port[each.key].id
}
dynamic "scheduler_hints" {
for_each = var.master_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
content {
group = openstack_compute_servergroup_v2.k8s_master[0].id
}
}
metadata = {
ssh_user = var.ssh_user
kubespray_groups = "%{if each.value.etcd == true}etcd,%{endif}kube_control_plane,${var.supplementary_master_groups},k8s_cluster%{if each.value.floating_ip == false},no_floating%{endif}"
depends_on = var.network_router_id
use_access_ip = var.use_access_ip
}
provisioner "local-exec" {
command = "%{if each.value.floating_ip}sed s/USER/${var.ssh_user}/ ${path.root}/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(concat(var.bastion_fips, [for key, value in var.k8s_masters_fips : value.address]), 0)}/ > ${var.group_vars_path}/no_floating.yml%{else}true%{endif}"
}
}
resource "openstack_networking_port_v2" "k8s_master_no_etcd_port" {
count = var.number_of_k8s_masters_no_etcd
name = "${var.cluster_name}-k8s-master-ne-${count.index + 1}"
network_id = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
admin_state_up = "true"
port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
security_group_ids = var.port_security_enabled ? local.master_sec_groups : null
no_security_groups = var.port_security_enabled ? null : false
depends_on = [
var.network_router_id
]
}
resource "openstack_compute_instance_v2" "k8s_master_no_etcd" {
name = "${var.cluster_name}-k8s-master-ne-${count.index + 1}"
count = var.number_of_k8s_masters_no_etcd
@@ -262,6 +376,7 @@ resource "openstack_compute_instance_v2" "k8s_master_no_etcd" {
image_id = var.master_root_volume_size_in_gb == 0 ? local.image_to_use_master : null
flavor_id = var.flavor_k8s_master
key_pair = openstack_compute_keypair_v2.k8s.name
user_data = data.template_file.cloudinit.rendered
dynamic "block_device" {
@@ -278,11 +393,9 @@ resource "openstack_compute_instance_v2" "k8s_master_no_etcd" {
}
network {
name = var.network_name
port = element(openstack_networking_port_v2.k8s_master_no_etcd_port.*.id, count.index)
}
security_groups = local.master_sec_groups
dynamic "scheduler_hints" {
for_each = var.master_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
content {
@@ -293,15 +406,29 @@ resource "openstack_compute_instance_v2" "k8s_master_no_etcd" {
metadata = {
ssh_user = var.ssh_user
kubespray_groups = "kube_control_plane,${var.supplementary_master_groups},k8s_cluster"
depends_on = var.network_id
depends_on = var.network_router_id
use_access_ip = var.use_access_ip
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ ${path.root}/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(concat(var.bastion_fips, var.k8s_master_fips), 0)}/ > ${var.group_vars_path}/no_floating.yml"
command = "sed -e s/USER/${var.ssh_user}/ -e s/BASTION_ADDRESS/${element(concat(var.bastion_fips, var.k8s_master_fips), 0)}/ ${path.module}/ansible_bastion_template.txt > ${var.group_vars_path}/no_floating.yml"
}
}
resource "openstack_networking_port_v2" "etcd_port" {
count = var.number_of_etcd
name = "${var.cluster_name}-etcd-${count.index + 1}"
network_id = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
admin_state_up = "true"
port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
security_group_ids = var.port_security_enabled ? local.etcd_sec_groups : null
no_security_groups = var.port_security_enabled ? null : false
depends_on = [
var.network_router_id
]
}
resource "openstack_compute_instance_v2" "etcd" {
name = "${var.cluster_name}-etcd-${count.index + 1}"
count = var.number_of_etcd
@@ -309,6 +436,7 @@ resource "openstack_compute_instance_v2" "etcd" {
image_id = var.etcd_root_volume_size_in_gb == 0 ? local.image_to_use_master : null
flavor_id = var.flavor_etcd
key_pair = openstack_compute_keypair_v2.k8s.name
user_data = data.template_file.cloudinit.rendered
dynamic "block_device" {
for_each = var.etcd_root_volume_size_in_gb > 0 ? [local.image_to_use_master] : []
@@ -323,13 +451,11 @@ resource "openstack_compute_instance_v2" "etcd" {
}
network {
name = var.network_name
port = element(openstack_networking_port_v2.etcd_port.*.id, count.index)
}
security_groups = [openstack_networking_secgroup_v2.k8s.name]
dynamic "scheduler_hints" {
for_each = var.etcd_server_group_policy ? [openstack_compute_servergroup_v2.k8s_etcd[0]] : []
for_each = var.etcd_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_etcd[0]] : []
content {
group = openstack_compute_servergroup_v2.k8s_etcd[0].id
}
@@ -338,11 +464,25 @@ resource "openstack_compute_instance_v2" "etcd" {
metadata = {
ssh_user = var.ssh_user
kubespray_groups = "etcd,no_floating"
depends_on = var.network_id
depends_on = var.network_router_id
use_access_ip = var.use_access_ip
}
}
resource "openstack_networking_port_v2" "k8s_master_no_floating_ip_port" {
count = var.number_of_k8s_masters_no_floating_ip
name = "${var.cluster_name}-k8s-master-nf-${count.index + 1}"
network_id = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
admin_state_up = "true"
port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
security_group_ids = var.port_security_enabled ? local.master_sec_groups : null
no_security_groups = var.port_security_enabled ? null : false
depends_on = [
var.network_router_id
]
}
resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
name = "${var.cluster_name}-k8s-master-nf-${count.index + 1}"
count = var.number_of_k8s_masters_no_floating_ip
@@ -365,11 +505,9 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
}
network {
name = var.network_name
port = element(openstack_networking_port_v2.k8s_master_no_floating_ip_port.*.id, count.index)
}
security_groups = local.master_sec_groups
dynamic "scheduler_hints" {
for_each = var.master_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
content {
@@ -380,11 +518,25 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
metadata = {
ssh_user = var.ssh_user
kubespray_groups = "etcd,kube_control_plane,${var.supplementary_master_groups},k8s_cluster,no_floating"
depends_on = var.network_id
depends_on = var.network_router_id
use_access_ip = var.use_access_ip
}
}
resource "openstack_networking_port_v2" "k8s_master_no_floating_ip_no_etcd_port" {
count = var.number_of_k8s_masters_no_floating_ip_no_etcd
name = "${var.cluster_name}-k8s-master-ne-nf-${count.index + 1}"
network_id = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
admin_state_up = "true"
port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
security_group_ids = var.port_security_enabled ? local.master_sec_groups : null
no_security_groups = var.port_security_enabled ? null : false
depends_on = [
var.network_router_id
]
}
resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" {
name = "${var.cluster_name}-k8s-master-ne-nf-${count.index + 1}"
count = var.number_of_k8s_masters_no_floating_ip_no_etcd
@@ -392,6 +544,7 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" {
image_id = var.master_root_volume_size_in_gb == 0 ? local.image_to_use_master : null
flavor_id = var.flavor_k8s_master
key_pair = openstack_compute_keypair_v2.k8s.name
user_data = data.template_file.cloudinit.rendered
dynamic "block_device" {
for_each = var.master_root_volume_size_in_gb > 0 ? [local.image_to_use_master] : []
@@ -407,11 +560,9 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" {
}
network {
name = var.network_name
port = element(openstack_networking_port_v2.k8s_master_no_floating_ip_no_etcd_port.*.id, count.index)
}
security_groups = local.master_sec_groups
dynamic "scheduler_hints" {
for_each = var.master_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
content {
@@ -422,11 +573,25 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" {
metadata = {
ssh_user = var.ssh_user
kubespray_groups = "kube_control_plane,${var.supplementary_master_groups},k8s_cluster,no_floating"
depends_on = var.network_id
depends_on = var.network_router_id
use_access_ip = var.use_access_ip
}
}
resource "openstack_networking_port_v2" "k8s_node_port" {
count = var.number_of_k8s_nodes
name = "${var.cluster_name}-k8s-node-${count.index + 1}"
network_id = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
admin_state_up = "true"
port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
security_group_ids = var.port_security_enabled ? local.worker_sec_groups : null
no_security_groups = var.port_security_enabled ? null : false
depends_on = [
var.network_router_id
]
}
resource "openstack_compute_instance_v2" "k8s_node" {
name = "${var.cluster_name}-k8s-node-${count.index + 1}"
count = var.number_of_k8s_nodes
@@ -434,6 +599,7 @@ resource "openstack_compute_instance_v2" "k8s_node" {
image_id = var.node_root_volume_size_in_gb == 0 ? local.image_to_use_node : null
flavor_id = var.flavor_k8s_node
key_pair = openstack_compute_keypair_v2.k8s.name
user_data = data.template_file.cloudinit.rendered
dynamic "block_device" {
for_each = var.node_root_volume_size_in_gb > 0 ? [local.image_to_use_node] : []
@@ -449,10 +615,9 @@ resource "openstack_compute_instance_v2" "k8s_node" {
}
network {
name = var.network_name
port = element(openstack_networking_port_v2.k8s_node_port.*.id, count.index)
}
security_groups = local.worker_sec_groups
dynamic "scheduler_hints" {
for_each = var.node_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_node[0]] : []
@@ -464,15 +629,29 @@ resource "openstack_compute_instance_v2" "k8s_node" {
metadata = {
ssh_user = var.ssh_user
kubespray_groups = "kube_node,k8s_cluster,${var.supplementary_node_groups}"
depends_on = var.network_id
depends_on = var.network_router_id
use_access_ip = var.use_access_ip
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ ${path.root}/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(concat(var.bastion_fips, var.k8s_node_fips), 0)}/ > ${var.group_vars_path}/no_floating.yml"
command = "sed -e s/USER/${var.ssh_user}/ -e s/BASTION_ADDRESS/${element(concat(var.bastion_fips, var.k8s_node_fips), 0)}/ ${path.module}/ansible_bastion_template.txt > ${var.group_vars_path}/no_floating.yml"
}
}
resource "openstack_networking_port_v2" "k8s_node_no_floating_ip_port" {
count = var.number_of_k8s_nodes_no_floating_ip
name = "${var.cluster_name}-k8s-node-nf-${count.index + 1}"
network_id = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
admin_state_up = "true"
port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
security_group_ids = var.port_security_enabled ? local.worker_sec_groups : null
no_security_groups = var.port_security_enabled ? null : false
depends_on = [
var.network_router_id
]
}
resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
name = "${var.cluster_name}-k8s-node-nf-${count.index + 1}"
count = var.number_of_k8s_nodes_no_floating_ip
@@ -480,6 +659,7 @@ resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
image_id = var.node_root_volume_size_in_gb == 0 ? local.image_to_use_node : null
flavor_id = var.flavor_k8s_node
key_pair = openstack_compute_keypair_v2.k8s.name
user_data = data.template_file.cloudinit.rendered
dynamic "block_device" {
for_each = var.node_root_volume_size_in_gb > 0 ? [local.image_to_use_node] : []
@@ -495,11 +675,9 @@ resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
}
network {
name = var.network_name
port = element(openstack_networking_port_v2.k8s_node_no_floating_ip_port.*.id, count.index)
}
security_groups = local.worker_sec_groups
dynamic "scheduler_hints" {
for_each = var.node_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_node[0]] : []
content {
@@ -510,11 +688,25 @@ resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
metadata = {
ssh_user = var.ssh_user
kubespray_groups = "kube_node,k8s_cluster,no_floating,${var.supplementary_node_groups}"
depends_on = var.network_id
depends_on = var.network_router_id
use_access_ip = var.use_access_ip
}
}
resource "openstack_networking_port_v2" "k8s_nodes_port" {
for_each = var.number_of_k8s_nodes == 0 && var.number_of_k8s_nodes_no_floating_ip == 0 ? var.k8s_nodes : {}
name = "${var.cluster_name}-k8s-node-${each.key}"
network_id = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
admin_state_up = "true"
port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
security_group_ids = var.port_security_enabled ? local.worker_sec_groups : null
no_security_groups = var.port_security_enabled ? null : false
depends_on = [
var.network_router_id
]
}
resource "openstack_compute_instance_v2" "k8s_nodes" {
for_each = var.number_of_k8s_nodes == 0 && var.number_of_k8s_nodes_no_floating_ip == 0 ? var.k8s_nodes : {}
name = "${var.cluster_name}-k8s-node-${each.key}"
@@ -522,6 +714,7 @@ resource "openstack_compute_instance_v2" "k8s_nodes" {
image_id = var.node_root_volume_size_in_gb == 0 ? local.image_to_use_node : null
flavor_id = each.value.flavor
key_pair = openstack_compute_keypair_v2.k8s.name
user_data = data.template_file.cloudinit.rendered
dynamic "block_device" {
for_each = var.node_root_volume_size_in_gb > 0 ? [local.image_to_use_node] : []
@@ -537,11 +730,9 @@ resource "openstack_compute_instance_v2" "k8s_nodes" {
}
network {
name = var.network_name
port = openstack_networking_port_v2.k8s_nodes_port[each.key].id
}
security_groups = local.worker_sec_groups
dynamic "scheduler_hints" {
for_each = var.node_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_node[0]] : []
content {
@@ -552,15 +743,29 @@ resource "openstack_compute_instance_v2" "k8s_nodes" {
metadata = {
ssh_user = var.ssh_user
kubespray_groups = "kube_node,k8s_cluster,%{if each.value.floating_ip == false}no_floating,%{endif}${var.supplementary_node_groups}"
depends_on = var.network_id
depends_on = var.network_router_id
use_access_ip = var.use_access_ip
}
provisioner "local-exec" {
command = "%{if each.value.floating_ip}sed s/USER/${var.ssh_user}/ ${path.root}/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(concat(var.bastion_fips, [for key, value in var.k8s_nodes_fips : value.address]), 0)}/ > ${var.group_vars_path}/no_floating.yml%{else}true%{endif}"
command = "%{if each.value.floating_ip}sed -e s/USER/${var.ssh_user}/ -e s/BASTION_ADDRESS/${element(concat(var.bastion_fips, [for key, value in var.k8s_nodes_fips : value.address]), 0)}/ ${path.module}/ansible_bastion_template.txt > ${var.group_vars_path}/no_floating.yml%{else}true%{endif}"
}
}
resource "openstack_networking_port_v2" "glusterfs_node_no_floating_ip_port" {
count = var.number_of_gfs_nodes_no_floating_ip
name = "${var.cluster_name}-gfs-node-nf-${count.index + 1}"
network_id = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
admin_state_up = "true"
port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
security_group_ids = var.port_security_enabled ? local.gfs_sec_groups : null
no_security_groups = var.port_security_enabled ? null : false
depends_on = [
var.network_router_id
]
}
resource "openstack_compute_instance_v2" "glusterfs_node_no_floating_ip" {
name = "${var.cluster_name}-gfs-node-nf-${count.index + 1}"
count = var.number_of_gfs_nodes_no_floating_ip
@@ -582,11 +787,9 @@ resource "openstack_compute_instance_v2" "glusterfs_node_no_floating_ip" {
}
network {
name = var.network_name
port = element(openstack_networking_port_v2.glusterfs_node_no_floating_ip_port.*.id, count.index)
}
security_groups = [openstack_networking_secgroup_v2.k8s.name]
dynamic "scheduler_hints" {
for_each = var.node_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_node[0]] : []
content {
@@ -597,44 +800,46 @@ resource "openstack_compute_instance_v2" "glusterfs_node_no_floating_ip" {
metadata = {
ssh_user = var.ssh_user_gfs
kubespray_groups = "gfs-cluster,network-storage,no_floating"
depends_on = var.network_id
depends_on = var.network_router_id
use_access_ip = var.use_access_ip
}
}
resource "openstack_compute_floatingip_associate_v2" "bastion" {
resource "openstack_networking_floatingip_associate_v2" "bastion" {
count = var.number_of_bastions
floating_ip = var.bastion_fips[count.index]
instance_id = element(openstack_compute_instance_v2.bastion.*.id, count.index)
wait_until_associated = var.wait_for_floatingip
port_id = element(openstack_networking_port_v2.bastion_port.*.id, count.index)
}
resource "openstack_compute_floatingip_associate_v2" "k8s_master" {
resource "openstack_networking_floatingip_associate_v2" "k8s_master" {
count = var.number_of_k8s_masters
instance_id = element(openstack_compute_instance_v2.k8s_master.*.id, count.index)
floating_ip = var.k8s_master_fips[count.index]
wait_until_associated = var.wait_for_floatingip
port_id = element(openstack_networking_port_v2.k8s_master_port.*.id, count.index)
}
resource "openstack_compute_floatingip_associate_v2" "k8s_master_no_etcd" {
count = var.master_root_volume_size_in_gb == 0 ? var.number_of_k8s_masters_no_etcd : 0
instance_id = element(openstack_compute_instance_v2.k8s_master_no_etcd.*.id, count.index)
floating_ip = var.k8s_master_no_etcd_fips[count.index]
resource "openstack_networking_floatingip_associate_v2" "k8s_masters" {
for_each = var.number_of_k8s_masters == 0 && var.number_of_k8s_masters_no_etcd == 0 && var.number_of_k8s_masters_no_floating_ip == 0 && var.number_of_k8s_masters_no_floating_ip_no_etcd == 0 ? { for key, value in var.k8s_masters : key => value if value.floating_ip } : {}
floating_ip = var.k8s_masters_fips[each.key].address
port_id = openstack_networking_port_v2.k8s_masters_port[each.key].id
}
resource "openstack_compute_floatingip_associate_v2" "k8s_node" {
resource "openstack_networking_floatingip_associate_v2" "k8s_master_no_etcd" {
count = var.master_root_volume_size_in_gb == 0 ? var.number_of_k8s_masters_no_etcd : 0
floating_ip = var.k8s_master_no_etcd_fips[count.index]
port_id = element(openstack_networking_port_v2.k8s_master_no_etcd_port.*.id, count.index)
}
resource "openstack_networking_floatingip_associate_v2" "k8s_node" {
count = var.node_root_volume_size_in_gb == 0 ? var.number_of_k8s_nodes : 0
floating_ip = var.k8s_node_fips[count.index]
instance_id = element(openstack_compute_instance_v2.k8s_node[*].id, count.index)
wait_until_associated = var.wait_for_floatingip
port_id = element(openstack_networking_port_v2.k8s_node_port.*.id, count.index)
}
resource "openstack_compute_floatingip_associate_v2" "k8s_nodes" {
resource "openstack_networking_floatingip_associate_v2" "k8s_nodes" {
for_each = var.number_of_k8s_nodes == 0 && var.number_of_k8s_nodes_no_floating_ip == 0 ? { for key, value in var.k8s_nodes : key => value if value.floating_ip } : {}
floating_ip = var.k8s_nodes_fips[each.key].address
instance_id = openstack_compute_instance_v2.k8s_nodes[each.key].id
wait_until_associated = var.wait_for_floatingip
port_id = openstack_networking_port_v2.k8s_nodes_port[each.key].id
}
resource "openstack_blockstorage_volume_v2" "glusterfs_volume" {

View File

@@ -0,0 +1,17 @@
# yamllint disable rule:comments
#cloud-config
## in some cases novnc console access is required
## it requires ssh password to be set
#ssh_pwauth: yes
#chpasswd:
# list: |
# root:secret
# expire: False
## in some cases direct root ssh access via ssh key is required
#disable_root: false
## in some cases additional CA certs are required
#ca-certs:
# trusted: |
# -----BEGIN CERTIFICATE-----

View File

@@ -68,6 +68,14 @@ variable "network_id" {
default = ""
}
variable "use_existing_network" {
type = bool
}
variable "network_router_id" {
default = ""
}
variable "k8s_master_fips" {
type = list
}
@@ -80,6 +88,10 @@ variable "k8s_node_fips" {
type = list
}
variable "k8s_masters_fips" {
type = map
}
variable "k8s_nodes_fips" {
type = map
}
@@ -104,9 +116,9 @@ variable "k8s_allowed_egress_ips" {
type = list
}
variable "k8s_nodes" {}
variable "k8s_masters" {}
variable "wait_for_floatingip" {}
variable "k8s_nodes" {}
variable "supplementary_master_groups" {
default = ""
@@ -165,3 +177,11 @@ variable "image_master_uuid" {
variable "group_vars_path" {
type = string
}
variable "port_security_enabled" {
type = bool
}
variable "force_null_port_security" {
type = bool
}

View File

@@ -14,6 +14,12 @@ resource "openstack_networking_floatingip_v2" "k8s_master" {
depends_on = [null_resource.dummy_dependency]
}
resource "openstack_networking_floatingip_v2" "k8s_masters" {
for_each = var.number_of_k8s_masters == 0 && var.number_of_k8s_masters_no_etcd == 0 ? { for key, value in var.k8s_masters : key => value if value.floating_ip } : {}
pool = var.floatingip_pool
depends_on = [null_resource.dummy_dependency]
}
# If user specifies pre-existing IPs to use in k8s_master_fips, do not create new ones.
resource "openstack_networking_floatingip_v2" "k8s_master_no_etcd" {
count = length(var.k8s_master_fips) > 0 ? 0 : var.number_of_k8s_masters_no_etcd

View File

@@ -3,6 +3,10 @@ output "k8s_master_fips" {
value = length(var.k8s_master_fips) > 0 ? var.k8s_master_fips : openstack_networking_floatingip_v2.k8s_master[*].address
}
output "k8s_masters_fips" {
value = openstack_networking_floatingip_v2.k8s_masters
}
# If k8s_master_fips is already defined as input, keep the same value since new FIPs have not been created.
output "k8s_master_no_etcd_fips" {
value = length(var.k8s_master_fips) > 0 ? var.k8s_master_fips : openstack_networking_floatingip_v2.k8s_master_no_etcd[*].address

View File

@@ -16,6 +16,8 @@ variable "router_id" {
default = ""
}
variable "k8s_masters" {}
variable "k8s_nodes" {}
variable "k8s_master_fips" {}

View File

@@ -11,10 +11,11 @@ data "openstack_networking_router_v2" "k8s" {
}
resource "openstack_networking_network_v2" "k8s" {
name = var.network_name
count = var.use_neutron
dns_domain = var.network_dns_domain != null ? var.network_dns_domain : null
admin_state_up = "true"
name = var.network_name
count = var.use_neutron
dns_domain = var.network_dns_domain != null ? var.network_dns_domain : null
admin_state_up = "true"
port_security_enabled = var.port_security_enabled
}
resource "openstack_networking_subnet_v2" "k8s" {

View File

@@ -2,6 +2,10 @@ output "router_id" {
value = "%{if var.use_neutron == 1} ${var.router_id == null ? element(concat(openstack_networking_router_v2.k8s.*.id, [""]), 0) : var.router_id} %{else} %{endif}"
}
output "network_id" {
value = element(concat(openstack_networking_network_v2.k8s.*.id, [""]), 0)
}
output "router_internal_port_id" {
value = element(concat(openstack_networking_router_interface_v2.k8s.*.id, [""]), 0)
}

View File

@@ -10,6 +10,10 @@ variable "dns_nameservers" {
type = list
}
variable "port_security_enabled" {
type = bool
}
variable "subnet_cidr" {}
variable "use_neutron" {}

View File

@@ -32,6 +32,28 @@ number_of_k8s_masters_no_floating_ip_no_etcd = 0
flavor_k8s_master = "<UUID>"
k8s_masters = {
# "master-1" = {
# "az" = "nova"
# "flavor" = "<UUID>"
# "floating_ip" = true
# "etcd" = true
# },
# "master-2" = {
# "az" = "nova"
# "flavor" = "<UUID>"
# "floating_ip" = false
# "etcd" = true
# },
# "master-3" = {
# "az" = "nova"
# "flavor" = "<UUID>"
# "floating_ip" = true
# "etcd" = true
# },
}
# nodes
number_of_k8s_nodes = 2
@@ -52,6 +74,9 @@ number_of_k8s_nodes_no_floating_ip = 4
# networking
network_name = "<network>"
# Use an existing network with the name of network_name. Set to false to create a network with the name of network_name.
# use_existing_network = true
external_net = "<UUID>"
subnet_cidr = "<cidr>"
@@ -59,3 +84,6 @@ subnet_cidr = "<cidr>"
floatingip_pool = "<pool>"
bastion_allowed_remote_ips = ["0.0.0.0/0"]
# Force port security to be null. Some cloud providers do not allow setting port security.
# force_null_port_security = false

View File

@@ -137,6 +137,12 @@ variable "network_name" {
default = "internal"
}
variable "use_existing_network" {
description = "Use an existing network"
type = bool
default = "false"
}
variable "network_dns_domain" {
description = "dns_domain for the internal network"
type = string
@@ -148,6 +154,18 @@ variable "use_neutron" {
default = 1
}
variable "port_security_enabled" {
description = "Enable port security on the internal network"
type = bool
default = "true"
}
variable "force_null_port_security" {
description = "Force port security to be null. Some providers does not allow setting port security"
type = bool
default = "false"
}
variable "subnet_cidr" {
description = "Subnet CIDR block."
type = string
@@ -268,6 +286,10 @@ variable "router_internal_port_id" {
default = null
}
variable "k8s_masters" {
default = {}
}
variable "k8s_nodes" {
default = {}
}

View File

@@ -1,16 +0,0 @@
output "k8s_masters" {
value = packet_device.k8s_master.*.access_public_ipv4
}
output "k8s_masters_no_etc" {
value = packet_device.k8s_master_no_etcd.*.access_public_ipv4
}
output "k8s_etcds" {
value = packet_device.k8s_etcd.*.access_public_ipv4
}
output "k8s_nodes" {
value = packet_device.k8s_node.*.access_public_ipv4
}

View File

@@ -114,10 +114,10 @@ def iterhosts(resources):
def iterips(resources):
'''yield ip tuples of (instance_id, ip)'''
'''yield ip tuples of (port_id, ip)'''
for module_name, key, resource in resources:
resource_type, name = key.split('.', 1)
if resource_type == 'openstack_compute_floatingip_associate_v2':
if resource_type == 'openstack_networking_floatingip_associate_v2':
yield openstack_floating_ips(resource)
@@ -195,8 +195,8 @@ def parse_bool(string_form):
raise ValueError('could not convert %r to a bool' % string_form)
@parses('packet_device')
def packet_device(resource, tfvars=None):
@parses('metal_device')
def metal_device(resource, tfvars=None):
raw_attrs = resource['primary']['attributes']
name = raw_attrs['hostname']
groups = []
@@ -213,14 +213,14 @@ def packet_device(resource, tfvars=None):
'state': raw_attrs['state'],
# ansible
'ansible_ssh_host': raw_attrs['network.0.address'],
'ansible_ssh_user': 'root', # Use root by default in packet
'ansible_ssh_user': 'root', # Use root by default in metal
# generic
'ipv4_address': raw_attrs['network.0.address'],
'public_ipv4': raw_attrs['network.0.address'],
'ipv6_address': raw_attrs['network.1.address'],
'public_ipv6': raw_attrs['network.1.address'],
'private_ipv4': raw_attrs['network.2.address'],
'provider': 'packet',
'provider': 'metal',
}
if raw_attrs['operating_system'] == 'flatcar_stable':
@@ -228,10 +228,10 @@ def packet_device(resource, tfvars=None):
attrs.update({'ansible_ssh_user': 'core'})
# add groups based on attrs
groups.append('packet_operating_system=' + attrs['operating_system'])
groups.append('packet_locked=%s' % attrs['locked'])
groups.append('packet_state=' + attrs['state'])
groups.append('packet_plan=' + attrs['plan'])
groups.append('metal_operating_system=' + attrs['operating_system'])
groups.append('metal_locked=%s' % attrs['locked'])
groups.append('metal_state=' + attrs['state'])
groups.append('metal_plan=' + attrs['plan'])
# groups specific to kubespray
groups = groups + attrs['tags']
@@ -243,13 +243,13 @@ def openstack_floating_ips(resource):
raw_attrs = resource['primary']['attributes']
attrs = {
'ip': raw_attrs['floating_ip'],
'instance_id': raw_attrs['instance_id'],
'port_id': raw_attrs['port_id'],
}
return attrs
def openstack_floating_ips(resource):
raw_attrs = resource['primary']['attributes']
return raw_attrs['instance_id'], raw_attrs['floating_ip']
return raw_attrs['port_id'], raw_attrs['floating_ip']
@parses('openstack_compute_instance_v2')
@calculate_mantl_vars
@@ -282,6 +282,7 @@ def openstack_host(resource, module_name):
# generic
'public_ipv4': raw_attrs['access_ip_v4'],
'private_ipv4': raw_attrs['access_ip_v4'],
'port_id' : raw_attrs['network.0.port'],
'provider': 'openstack',
}
@@ -339,10 +340,10 @@ def openstack_host(resource, module_name):
def iter_host_ips(hosts, ips):
'''Update hosts that have an entry in the floating IP list'''
for host in hosts:
host_id = host[1]['id']
port_id = host[1]['port_id']
if host_id in ips:
ip = ips[host_id]
if port_id in ips:
ip = ips[port_id]
host[1].update({
'access_ip_v4': ip,

View File

@@ -104,9 +104,22 @@ terraform destroy --var-file cluster-settings.tfvars \
* `zone`: The zone where to run the cluster
* `machines`: Machines to provision. Key of this object will be used as the name of the machine
* `node_type`: The role of this node *(master|worker)*
* `plan`: Preconfigured cpu/mem plan to use (disables `cpu` and `mem` attributes below)
* `cpu`: number of cpu cores
* `mem`: memory size in MB
* `disk_size`: The size of the storage in GB
* `additional_disks`: Additional disks to attach to the node.
* `size`: The size of the additional disk in GB
* `tier`: The tier of disk to use (`maxiops` is the only tier you can choose at the moment)
* `firewall_enabled`: Enable firewall rules
* `master_allowed_remote_ips`: List of IP ranges that should be allowed to access API of masters
* `start_address`: Start of address range to allow
* `end_address`: End of address range to allow
* `k8s_allowed_remote_ips`: List of IP ranges that should be allowed SSH access to all nodes
* `start_address`: Start of address range to allow
* `end_address`: End of address range to allow
* `loadbalancer_enabled`: Enable managed load balancer
* `loadbalancer_plan`: Plan to use for load balancer *(development|production-small)*
* `loadbalancers`: Ports to load balance and which machines to forward to. Key of this object will be used as the name of the load balancer frontends/backends
* `port`: Port to load balance.
* `backend_servers`: List of servers that traffic to the port should be forwarded to.
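
For illustration, a managed load balancer forwarding HTTP traffic to two workers could be declared roughly as below; this is a sketch mirroring the commented example in the sample inventory, and the worker names are assumptions:

```hcl
loadbalancer_enabled = true
loadbalancer_plan    = "development"

loadbalancers = {
  # forward port 80 to the listed workers (names must match keys in `machines`)
  "http" : {
    "port" : 80,
    "backend_servers" : [
      "worker-0",
      "worker-1"
    ]
  }
}
```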

View File

@@ -20,6 +20,8 @@ ssh_public_keys = [
machines = {
"master-0" : {
"node_type" : "master",
# plan to use instead of custom cpu/mem
"plan" : null,
#number of cpu cores
"cpu" : "2",
#memory size in MB
@@ -30,6 +32,8 @@ machines = {
},
"worker-0" : {
"node_type" : "worker",
# plan to use instead of custom cpu/mem
"plan" : null,
#number of cpu cores
"cpu" : "2",
#memory size in MB
@@ -49,6 +53,8 @@ machines = {
},
"worker-1" : {
"node_type" : "worker",
# plan to use instead of custom cpu/mem
"plan" : null,
#number of cpu cores
"cpu" : "2",
#memory size in MB
@@ -68,6 +74,8 @@ machines = {
},
"worker-2" : {
"node_type" : "worker",
# plan to use instead of custom cpu/mem
"plan" : null,
#number of cpu cores
"cpu" : "2",
#memory size in MB
@@ -86,3 +94,32 @@ machines = {
}
}
}
firewall_enabled = false
master_allowed_remote_ips = [
{
"start_address" : "0.0.0.0"
"end_address" : "255.255.255.255"
}
]
k8s_allowed_remote_ips = [
{
"start_address" : "0.0.0.0"
"end_address" : "255.255.255.255"
}
]
loadbalancer_enabled = false
loadbalancer_plan = "development"
loadbalancers = {
# "http" : {
# "port" : 80,
# "backend_servers" : [
# "worker-0",
# "worker-1",
# "worker-2"
# ]
# }
}

View File

@@ -22,6 +22,14 @@ module "kubernetes" {
machines = var.machines
ssh_public_keys = var.ssh_public_keys
firewall_enabled = var.firewall_enabled
master_allowed_remote_ips = var.master_allowed_remote_ips
k8s_allowed_remote_ips = var.k8s_allowed_remote_ips
loadbalancer_enabled = var.loadbalancer_enabled
loadbalancer_plan = var.loadbalancer_plan
loadbalancers = var.loadbalancers
}
#

View File

@@ -10,6 +10,16 @@ locals {
]
])
lb_backend_servers = flatten([
for lb_name, loadbalancer in var.loadbalancers : [
for backend_server in loadbalancer.backend_servers : {
port = loadbalancer.port
lb_name = lb_name
server_name = backend_server
}
]
])
# If prefix is set, all resources will be prefixed with "${var.prefix}-"
# Else don't prefix with anything
resource-prefix = "%{ if var.prefix != ""}${var.prefix}-%{ endif }"
@@ -45,8 +55,9 @@ resource "upcloud_server" "master" {
}
hostname = "${local.resource-prefix}${each.key}"
cpu = each.value.cpu
mem = each.value.mem
plan = each.value.plan
cpu = each.value.plan == null ? each.value.cpu : null
mem = each.value.plan == null ? each.value.mem : null
zone = var.zone
template {
@@ -65,6 +76,13 @@ resource "upcloud_server" "master" {
network = upcloud_network.private.id
}
# Ignore volumes created by csi-driver
lifecycle {
ignore_changes = [storage_devices]
}
firewall = var.firewall_enabled
dynamic "storage_devices" {
for_each = {
for disk_key_name, disk in upcloud_storage.additional_disks :
@@ -94,8 +112,9 @@ resource "upcloud_server" "worker" {
}
hostname = "${local.resource-prefix}${each.key}"
cpu = each.value.cpu
mem = each.value.mem
plan = each.value.plan
cpu = each.value.plan == null ? each.value.cpu : null
mem = each.value.plan == null ? each.value.mem : null
zone = var.zone
template {
@@ -114,6 +133,13 @@ resource "upcloud_server" "worker" {
network = upcloud_network.private.id
}
# Ignore volumes created by csi-driver
lifecycle {
ignore_changes = [storage_devices]
}
firewall = var.firewall_enabled
dynamic "storage_devices" {
for_each = {
for disk_key_name, disk in upcloud_storage.additional_disks :
@@ -134,3 +160,151 @@ resource "upcloud_server" "worker" {
create_password = false
}
}
resource "upcloud_firewall_rules" "master" {
for_each = upcloud_server.master
server_id = each.value.id
dynamic firewall_rule {
for_each = var.master_allowed_remote_ips
content {
action = "accept"
comment = "Allow master API access from this network"
destination_port_end = "6443"
destination_port_start = "6443"
direction = "in"
family = "IPv4"
protocol = "tcp"
source_address_end = firewall_rule.value.end_address
source_address_start = firewall_rule.value.start_address
}
}
dynamic firewall_rule {
for_each = length(var.master_allowed_remote_ips) > 0 ? [1] : []
content {
action = "drop"
comment = "Deny master API access from other networks"
destination_port_end = "6443"
destination_port_start = "6443"
direction = "in"
family = "IPv4"
protocol = "tcp"
source_address_end = "255.255.255.255"
source_address_start = "0.0.0.0"
}
}
dynamic firewall_rule {
for_each = var.k8s_allowed_remote_ips
content {
action = "accept"
comment = "Allow SSH from this network"
destination_port_end = "22"
destination_port_start = "22"
direction = "in"
family = "IPv4"
protocol = "tcp"
source_address_end = firewall_rule.value.end_address
source_address_start = firewall_rule.value.start_address
}
}
dynamic firewall_rule {
for_each = length(var.k8s_allowed_remote_ips) > 0 ? [1] : []
content {
action = "drop"
comment = "Deny SSH from other networks"
destination_port_end = "22"
destination_port_start = "22"
direction = "in"
family = "IPv4"
protocol = "tcp"
source_address_end = "255.255.255.255"
source_address_start = "0.0.0.0"
}
}
}
resource "upcloud_firewall_rules" "k8s" {
for_each = upcloud_server.worker
server_id = each.value.id
dynamic firewall_rule {
for_each = var.k8s_allowed_remote_ips
content {
action = "accept"
comment = "Allow SSH from this network"
destination_port_end = "22"
destination_port_start = "22"
direction = "in"
family = "IPv4"
protocol = "tcp"
source_address_end = firewall_rule.value.end_address
source_address_start = firewall_rule.value.start_address
}
}
dynamic firewall_rule {
for_each = length(var.k8s_allowed_remote_ips) > 0 ? [1] : []
content {
action = "drop"
comment = "Deny SSH from other networks"
destination_port_end = "22"
destination_port_start = "22"
direction = "in"
family = "IPv4"
protocol = "tcp"
source_address_end = "255.255.255.255"
source_address_start = "0.0.0.0"
}
}
}
resource "upcloud_loadbalancer" "lb" {
count = var.loadbalancer_enabled ? 1 : 0
configured_status = "started"
name = "${local.resource-prefix}lb"
plan = var.loadbalancer_plan
zone = var.zone
network = upcloud_network.private.id
}
resource "upcloud_loadbalancer_backend" "lb_backend" {
for_each = var.loadbalancer_enabled ? var.loadbalancers : {}
loadbalancer = upcloud_loadbalancer.lb[0].id
name = "lb-backend-${each.key}"
}
resource "upcloud_loadbalancer_frontend" "lb_frontend" {
for_each = var.loadbalancer_enabled ? var.loadbalancers : {}
loadbalancer = upcloud_loadbalancer.lb[0].id
name = "lb-frontend-${each.key}"
mode = "tcp"
port = each.value.port
default_backend_name = upcloud_loadbalancer_backend.lb_backend[each.key].name
}
resource "upcloud_loadbalancer_static_backend_member" "lb_backend_member" {
for_each = {
for be_server in local.lb_backend_servers:
"${be_server.server_name}-lb-backend-${be_server.lb_name}" => be_server
if var.loadbalancer_enabled
}
backend = upcloud_loadbalancer_backend.lb_backend[each.value.lb_name].id
name = "${local.resource-prefix}${each.key}"
ip = merge(upcloud_server.master, upcloud_server.worker)[each.value.server_name].network_interface[1].ip_address
port = each.value.port
weight = 100
max_sessions = var.loadbalancer_plan == "production-small" ? 50000 : 1000
enabled = true
}

View File

@@ -18,3 +18,7 @@ output "worker_ip" {
}
}
}
output "loadbalancer_domain" {
value = var.loadbalancer_enabled ? upcloud_loadbalancer.lb[0].dns_name : null
}

View File

@@ -16,6 +16,7 @@ variable "machines" {
description = "Cluster machines"
type = map(object({
node_type = string
plan = string
cpu = string
mem = string
disk_size = number
@@ -29,3 +30,38 @@ variable "machines" {
variable "ssh_public_keys" {
type = list(string)
}
variable "firewall_enabled" {
type = bool
}
variable "master_allowed_remote_ips" {
type = list(object({
start_address = string
end_address = string
}))
}
variable "k8s_allowed_remote_ips" {
type = list(object({
start_address = string
end_address = string
}))
}
variable "loadbalancer_enabled" {
type = bool
}
variable "loadbalancer_plan" {
type = string
}
variable "loadbalancers" {
description = "Load balancers"
type = map(object({
port = number
backend_servers = list(string)
}))
}

View File

@@ -3,7 +3,7 @@ terraform {
required_providers {
upcloud = {
source = "UpCloudLtd/upcloud"
version = "~>2.0.0"
version = "~>2.4.0"
}
}
required_version = ">= 0.13"

View File

@@ -6,3 +6,7 @@ output "master_ip" {
output "worker_ip" {
value = module.kubernetes.worker_ip
}
output "loadbalancer_domain" {
value = module.kubernetes.loadbalancer_domain
}

View File

@@ -20,6 +20,8 @@ ssh_public_keys = [
machines = {
"master-0" : {
"node_type" : "master",
# plan to use instead of custom cpu/mem
"plan" : null,
#number of cpu cores
"cpu" : "2",
#memory size in MB
@@ -30,6 +32,8 @@ machines = {
},
"worker-0" : {
"node_type" : "worker",
# plan to use instead of custom cpu/mem
"plan" : null,
#number of cpu cores
"cpu" : "2",
#memory size in MB
@@ -49,6 +53,8 @@ machines = {
},
"worker-1" : {
"node_type" : "worker",
# plan to use instead of custom cpu/mem
"plan" : null,
#number of cpu cores
"cpu" : "2",
#memory size in MB
@@ -68,6 +74,8 @@ machines = {
},
"worker-2" : {
"node_type" : "worker",
# plan to use instead of custom cpu/mem
"plan" : null,
#number of cpu cores
"cpu" : "2",
#memory size in MB
@@ -86,3 +94,32 @@ machines = {
}
}
}
firewall_enabled = false
master_allowed_remote_ips = [
{
"start_address" : "0.0.0.0"
"end_address" : "255.255.255.255"
}
]
k8s_allowed_remote_ips = [
{
"start_address" : "0.0.0.0"
"end_address" : "255.255.255.255"
}
]
loadbalancer_enabled = false
loadbalancer_plan = "development"
loadbalancers = {
# "http" : {
# "port" : 80,
# "backend_servers" : [
# "worker-0",
# "worker-1",
# "worker-2"
# ]
# }
}

View File

@@ -28,6 +28,7 @@ variable "machines" {
type = map(object({
node_type = string
plan = string
cpu = string
mem = string
disk_size = number
@@ -54,3 +55,46 @@ variable "UPCLOUD_USERNAME" {
variable "UPCLOUD_PASSWORD" {
description = "Password for UpCloud API user"
}
variable "firewall_enabled" {
description = "Enable firewall rules"
default = false
}
variable "master_allowed_remote_ips" {
description = "List of IP start/end addresses allowed to access API of masters"
type = list(object({
start_address = string
end_address = string
}))
default = []
}
variable "k8s_allowed_remote_ips" {
description = "List of IP start/end addresses allowed to SSH to hosts"
type = list(object({
start_address = string
end_address = string
}))
default = []
}
variable "loadbalancer_enabled" {
description = "Enable load balancer"
default = false
}
variable "loadbalancer_plan" {
description = "Load balancer plan (development/production-small)"
default = "development"
}
variable "loadbalancers" {
description = "Load balancers"
type = map(object({
port = number
backend_servers = list(string)
}))
default = {}
}

View File

@@ -3,7 +3,7 @@ terraform {
required_providers {
upcloud = {
source = "UpCloudLtd/upcloud"
version = "~>2.0.0"
version = "~>2.4.0"
}
}
required_version = ">= 0.13"

View File

@@ -105,8 +105,7 @@ ansible-playbook -i inventory.ini ../../cluster.yml -b -v
* `vsphere_datacenter`: The identifier of vSphere data center
* `vsphere_compute_cluster`: The identifier of vSphere compute cluster
* `vsphere_datastore`: The identifier of vSphere data store
* `vsphere_server`: The address of vSphere server
* `vsphere_hostname`: The IP address of vSphere hostname
* `vsphere_server`: This is the vCenter server name or address for vSphere API operations.
* `ssh_public_keys`: List of public SSH keys to install on all machines
* `template_name`: The name of a base image (the OVF template must be defined in vSphere beforehand)
@@ -125,5 +124,7 @@ ansible-playbook -i inventory.ini ../../cluster.yml -b -v
* `worker_cores`: The number of CPU cores for the worker nodes (default: 16)
* `worker_memory`: The amount of RAM for the worker nodes in MB (default: 8192)
* `worker_disk_size`: The amount of disk space for the worker nodes in GB (default: 100)
* `vapp`: Boolean to set the template type to vapp. (Default: false)
* `interface_name`: Name of the interface to configure. (Default: ens192)
An example variables file can be found in `default.tfvars`
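
As a rough sketch (values here are placeholders, not taken from this diff), the new optional settings can be appended to a Terraform variables file alongside the existing entries:

```hcl
vsphere_server = "vsphere.server.com"   # vCenter name or address used for vSphere API operations

# Optional settings introduced above (placeholders shown; defaults are false and "ens192")
vapp           = true
interface_name = "ens224"
```

When `vapp = true`, the `vapp { properties { ... } }` block is rendered for the VMs; the `guestinfo.*` keys in `extra_config` carry the cloud-init user-data and network metadata in either case.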

View File

@@ -34,6 +34,5 @@ vsphere_datacenter = "i-did-not-read-the-docs"
vsphere_compute_cluster = "i-did-not-read-the-docs" # e.g. Cluster
vsphere_datastore = "i-did-not-read-the-docs" # e.g. ssd-000000
vsphere_server = "i-did-not-read-the-docs" # e.g. vsphere.server.com
vsphere_hostname = "i-did-not-read-the-docs" # e.g. 192.168.0.2
template_name = "i-did-not-read-the-docs" # e.g. ubuntu-bionic-18.04-cloudimg

View File

@@ -23,11 +23,6 @@ data "vsphere_network" "network" {
datacenter_id = data.vsphere_datacenter.dc.id
}
data "vsphere_host" "host" {
name = var.vsphere_hostname
datacenter_id = data.vsphere_datacenter.dc.id
}
data "vsphere_virtual_machine" "template" {
name = var.template_name
datacenter_id = data.vsphere_datacenter.dc.id
@@ -40,7 +35,7 @@ data "vsphere_compute_cluster" "compute_cluster" {
resource "vsphere_resource_pool" "pool" {
name = "${var.prefix}-cluster-pool"
parent_resource_pool_id = data.vsphere_host.host.resource_pool_id
parent_resource_pool_id = data.vsphere_compute_cluster.compute_cluster.resource_pool_id
}
module "kubernetes" {
@@ -74,11 +69,13 @@ module "kubernetes" {
scsi_type = data.vsphere_virtual_machine.template.scsi_type
network_id = data.vsphere_network.network.id
adapter_type = data.vsphere_virtual_machine.template.network_interface_types[0]
interface_name = var.interface_name
firmware = var.firmware
hardware_version = var.hardware_version
disk_thin_provisioned = data.vsphere_virtual_machine.template.disks.0.thin_provisioned
template_id = data.vsphere_virtual_machine.template.id
vapp = var.vapp
ssh_public_keys = var.ssh_public_keys
}
@@ -87,30 +84,17 @@ module "kubernetes" {
# Generate ansible inventory
#
data "template_file" "inventory" {
template = file("${path.module}/templates/inventory.tpl")
vars = {
resource "local_file" "inventory" {
content = templatefile("${path.module}/templates/inventory.tpl", {
connection_strings_master = join("\n", formatlist("%s ansible_user=ubuntu ansible_host=%s etcd_member_name=etcd%d",
keys(module.kubernetes.master_ip),
values(module.kubernetes.master_ip),
range(1, length(module.kubernetes.master_ip) + 1)))
range(1, length(module.kubernetes.master_ip) + 1))),
connection_strings_worker = join("\n", formatlist("%s ansible_user=ubuntu ansible_host=%s",
keys(module.kubernetes.worker_ip),
values(module.kubernetes.worker_ip)))
list_master = join("\n", formatlist("%s",
keys(module.kubernetes.master_ip)))
list_worker = join("\n", formatlist("%s",
keys(module.kubernetes.worker_ip)))
}
}
resource "null_resource" "inventories" {
provisioner "local-exec" {
command = "echo '${data.template_file.inventory.rendered}' > ${var.inventory_file}"
}
triggers = {
template = data.template_file.inventory.rendered
}
values(module.kubernetes.worker_ip))),
list_master = join("\n", formatlist("%s", keys(module.kubernetes.master_ip))),
list_worker = join("\n", formatlist("%s", keys(module.kubernetes.worker_ip)))
})
filename = var.inventory_file
}

View File

@@ -46,15 +46,31 @@ resource "vsphere_virtual_machine" "worker" {
client_device = true
}
vapp {
properties = {
"user-data" = base64encode(templatefile("${path.module}/templates/cloud-init.tmpl", { ip = each.value.ip,
netmask = each.value.netmask,
gw = var.gateway,
dns = var.dns_primary,
ssh_public_keys = var.ssh_public_keys}))
dynamic "vapp" {
for_each = var.vapp ? [1] : []
content {
properties = {
"user-data" = base64encode(templatefile("${path.module}/templates/vapp-cloud-init.tpl", { ssh_public_keys = var.ssh_public_keys }))
}
}
}
extra_config = {
"isolation.tools.copy.disable" = "FALSE"
"isolation.tools.paste.disable" = "FALSE"
"isolation.tools.setGUIOptions.enable" = "TRUE"
"guestinfo.userdata" = base64encode(templatefile("${path.module}/templates/cloud-init.tpl", { ssh_public_keys = var.ssh_public_keys }))
"guestinfo.userdata.encoding" = "base64"
"guestinfo.metadata" = base64encode(templatefile("${path.module}/templates/metadata.tpl", { hostname = "${var.prefix}-${each.key}",
interface_name = var.interface_name
ip = each.value.ip,
netmask = each.value.netmask,
gw = var.gateway,
dns = var.dns_primary,
ssh_public_keys = var.ssh_public_keys }))
"guestinfo.metadata.encoding" = "base64"
}
}
resource "vsphere_virtual_machine" "master" {
@@ -105,13 +121,29 @@ resource "vsphere_virtual_machine" "master" {
client_device = true
}
vapp {
properties = {
"user-data" = base64encode(templatefile("${path.module}/templates/cloud-init.tmpl", { ip = each.value.ip,
netmask = each.value.netmask,
gw = var.gateway,
dns = var.dns_primary,
ssh_public_keys = var.ssh_public_keys}))
dynamic "vapp" {
for_each = var.vapp ? [1] : []
content {
properties = {
"user-data" = base64encode(templatefile("${path.module}/templates/vapp-cloud-init.tpl", { ssh_public_keys = var.ssh_public_keys }))
}
}
}
extra_config = {
"isolation.tools.copy.disable" = "FALSE"
"isolation.tools.paste.disable" = "FALSE"
"isolation.tools.setGUIOptions.enable" = "TRUE"
"guestinfo.userdata" = base64encode(templatefile("${path.module}/templates/cloud-init.tpl", { ssh_public_keys = var.ssh_public_keys }))
"guestinfo.userdata.encoding" = "base64"
"guestinfo.metadata" = base64encode(templatefile("${path.module}/templates/metadata.tpl", { hostname = "${var.prefix}-${each.key}",
interface_name = var.interface_name
ip = each.value.ip,
netmask = each.value.netmask,
gw = var.gateway,
dns = var.dns_primary,
ssh_public_keys = var.ssh_public_keys }))
"guestinfo.metadata.encoding" = "base64"
}
}

View File

@@ -1,7 +1,7 @@
output "master_ip" {
value = {
for name, machine in var.machines :
name => machine.ip
"${var.prefix}-${name}" => machine.ip
if machine.node_type == "master"
}
}
@@ -9,8 +9,7 @@ output "master_ip" {
output "worker_ip" {
value = {
for name, machine in var.machines :
name => machine.ip
"${var.prefix}-${name}" => machine.ip
if machine.node_type == "worker"
}
}

View File

@@ -0,0 +1,6 @@
#cloud-config
ssh_authorized_keys:
%{ for ssh_public_key in ssh_public_keys ~}
- ${ssh_public_key}
%{ endfor ~}

View File

@@ -0,0 +1,14 @@
instance-id: ${hostname}
local-hostname: ${hostname}
network:
version: 2
ethernets:
${interface_name}:
match:
name: ${interface_name}
dhcp4: false
addresses:
- ${ip}/${netmask}
gateway4: ${gw}
nameservers:
addresses: [${dns}]

View File

@@ -6,23 +6,12 @@ ssh_authorized_keys:
%{ endfor ~}
write_files:
- path: /etc/netplan/20-internal-network.yaml
content: |
network:
version: 2
ethernets:
"lo:0":
match:
name: lo
dhcp4: false
addresses:
- 172.17.0.100/32
- path: /etc/netplan/10-user-network.yaml
content: |
content: |.
network:
version: 2
ethernets:
ens192:
${interface_name}:
dhcp4: false #true to use dhcp
addresses:
- ${ip}/${netmask}

View File

@@ -18,9 +18,13 @@ variable "datastore_id" {}
variable "guest_id" {}
variable "scsi_type" {}
variable "network_id" {}
variable "interface_name" {}
variable "adapter_type" {}
variable "disk_thin_provisioned" {}
variable "template_id" {}
variable "vapp" {
type = bool
}
variable "firmware" {}
variable "folder" {}
variable "ssh_public_keys" {

View File

@@ -29,6 +29,5 @@ vsphere_datacenter = "i-did-not-read-the-docs"
vsphere_compute_cluster = "i-did-not-read-the-docs" # e.g. Cluster
vsphere_datastore = "i-did-not-read-the-docs" # e.g. ssd-000000
vsphere_server = "i-did-not-read-the-docs" # e.g. vsphere.server.com
vsphere_hostname = "i-did-not-read-the-docs" # e.g. 192.168.0.2
template_name = "i-did-not-read-the-docs" # e.g. ubuntu-bionic-18.04-cloudimg

View File

@@ -27,8 +27,6 @@ variable "vsphere_password" {}
variable "vsphere_server" {}
variable "vsphere_hostname" {}
variable "ssh_public_keys" {
description = "List of public SSH keys which are injected into the VMs."
type = list(string)
@@ -37,6 +35,13 @@ variable "ssh_public_keys" {
variable "template_name" {}
# Optional variables (ones where reasonable defaults exist)
variable "vapp" {
default = false
}
variable "interface_name" {
default = "ens192"
}
variable "folder" {
default = ""

View File

@@ -18,6 +18,7 @@
* [Weave](docs/weave.md)
* [Multus](docs/multus.md)
* Ingress
* [kube-vip](docs/kube-vip.md)
* [ALB Ingress](docs/ingress_controller/alb_ingress_controller.md)
* [MetalLB](docs/metallb.md)
* [Nginx Ingress](docs/ingress_controller/ingress_nginx.md)
@@ -33,10 +34,11 @@
* [Fedora CoreOS](docs/fcos.md)
* [OpenSUSE](docs/opensuse.md)
* [RedHat Enterprise Linux](docs/rhel.md)
* [CentOS/OracleLinux/AlmaLinux/Rocky Linux](docs/centos8.md)
* [CentOS/OracleLinux/AlmaLinux/Rocky Linux](docs/centos.md)
* [Amazon Linux 2](docs/amazonlinux.md)
* CRI
* [Containerd](docs/containerd.md)
* [Docker](docs/docker.md)
* [CRI-O](docs/cri-o.md)
* [Kata Containers](docs/kata-containers.md)
* [gVisor](docs/gvisor.md)

Some files were not shown because too many files have changed in this diff.