Compare commits


735 Commits

Author SHA1 Message Date
lukasz bielinski
b39a196cfb properly generate extravolumes in kubeadmconfig for centos (#6707) 2020-09-23 01:42:08 -07:00
Florent Monbillard
9fc14b3e6c Make sure node_ip is set if node is in etcd group (#6720) 2020-09-23 00:46:08 -07:00
Florian Ruynat
f9a7dce7ca Add Kubernetes hashes 1.19.2/1.18.9/1.17.12 and set default (#6699) 2020-09-18 14:44:28 -07:00
Florian Ruynat
fbbbd90732 fix kubelet_flexvolumes_plugins_dir undefined (#6645) (#6670)
Co-authored-by: w33dw0r7d <w33dw0r7d@gmail.com>
2020-09-18 02:14:46 -07:00
Florian Ruynat
9869b46432 Move from widehat.opensuse to download.opensuse for crio centos (#6682) (#6704) 2020-09-18 02:04:45 -07:00
spaced
6cd33700f5 NetworkManager lists must be separated by , (#6649) 2020-09-11 00:30:15 -07:00
Maxime Guyot
a1f04e9869 Cleanup v1.16 hashes (#6635) 2020-09-08 01:51:43 -07:00
Maxime Guyot
961149b865 Update kube_version_min_required for 2.14 release (#6634) 2020-09-07 23:59:43 -07:00
Barry Melbourne
597c810ef0 Resolve Vagrant etcd unhealthy cluster error (#6630) 2020-09-07 12:09:41 -07:00
spaced
2de6a5676d Fedora coreos networkmanager global dns and bootstrapping fix (#6577)
* remove podman cni plugin

* configure networkmanager global dns

* allow installation of python3-libselinux by disabling update repo temporarily

* remove ipv4 section because it is not a valid configuration
2020-09-07 02:27:41 -07:00
Florian Ruynat
050578da94 Update Cilium to 1.8.3 (#6629) 2020-09-07 02:11:49 -07:00
Florent Monbillard
5a437add01 Fix upgrade playbook name (#6625)
* Fix upgrade playbook name

* Fix my fix :)
2020-09-07 02:11:42 -07:00
Florian Ruynat
6fc73e3038 Add Kubernetes 1.16.15 hashes (#6624) 2020-09-07 01:23:41 -07:00
Florian Ruynat
d97e9b9e50 Fix oracle linux repo (#6627) 2020-09-07 01:15:41 -07:00
Florian Ruynat
fa0eb11bf4 Update kubernetes dashboard (#6623) 2020-09-04 05:29:41 -07:00
Julien Pervillé
f660c29348 Declare port 10254 in nginx ingress pod template (#6609) 2020-09-04 04:54:11 -07:00
Hans Feldt
6613895de0 remove kubelet startup warnings for non docker container runtime (#6605)
Removes these startup warnings:

Warning: For remote container runtime, --pod-infra-container-image is ignored in kubelet, which should be set in that remote runtime instead
Using "/var/run/crio/crio.sock" as endpoint is deprecated, please consider using full url format "unix:///var/run/crio/crio.sock".
2020-09-04 04:54:04 -07:00
Hans Feldt
803d52ffce kubernetes: remove unused variables (#6601) 2020-09-04 04:53:56 -07:00
tasekida
fc61f8d52e Update cert manager to 0.16.1 (#6600)
* Update cert manager to 0.16.1

* Update cert manager to 0.16.1

Co-authored-by: Barry Melbourne <9964974+bmelbourne@users.noreply.github.com>
2020-09-04 04:53:48 -07:00
Maxim Pogozhiy
0553814b4f Add selectable dns policy for kube-router (#6586) 2020-09-04 04:53:41 -07:00
Florian Ruynat
f1566cb8c2 Add protectKernelDefaults option (default true) to kubelet config file (#6611) 2020-09-03 07:41:41 -07:00
Lovro Seder
c1ba8e1b3a Rotate kubelet server certificate. (#6453)
* Rotate kubelet server certificate.

* CI test kubelet server cert rotation

* Approve kubelet serving certificates in tests.
2020-09-03 07:25:41 -07:00
Hugo Blom
2ff7ab8d40 Add snapshot-controller for CSI drivers and snapshot CRDs, add a default volumesnapshotclass when running cinder CSI (#6537)
* add snapshot-controller and v1beta1 snapshot api

* fix typo

* update manifest to v1beta1

* update

* update manifests

* fix spelling

* wait until crd is applied

* fix missing info in kube module

* revert snapshotclass

* add snapshot crds before applying the csi driver

* add crds, missed them in last commit

* use pull policy from kubespray
2020-09-03 04:01:43 -07:00
Hans Feldt
93698a8f73 Calico: update crds to v1 and cr (#6360)
* Update CustomResourceDefinition for kubecontrollersconfigurations.crd.projectcalico.org to v1
* Align ClusterRole for kube-controllers with upstream (calico)
2020-09-03 00:51:40 -07:00
Maxime Guyot
6245587dc8 Fix E306 in roles/network_plugin (#6516)
Signed-off-by: Miouge1 <maxime@root314.com>
2020-09-02 23:55:40 -07:00
Florian Ruynat
2faf53b039 Check node_ip is defined when removing etcd node (#6603) 2020-09-01 01:05:58 -07:00
Florian Ruynat
e0b1787740 Use crictl 1.19.0 for k8s 1.19.x (#6598) 2020-09-01 01:05:50 -07:00
Florian Ruynat
9849dba5d3 Update cni plugins with minor fix (#6592) 2020-08-31 05:16:21 -07:00
Barry Melbourne
03c9c091f2 Docker: Set Cgroup driver by default to systemd (#6563)
* Set Docker Cgroup driver to systemd

* Add docker_cgroup_driver in Docker defaults
2020-08-31 04:56:20 -07:00
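For illustration, the cgroup driver change above maps to an inventory override like the following (docker_cgroup_driver is named in the commit; the value choice follows the new default):

  docker_cgroup_driver: systemd   # new default; set to cgroupfs to keep the previous behaviour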
Marc-Antoine
5a8b68a429 Add support for openstack application credentials (#6534)
* Add support for openstack application credentials

* Add some lines for readability

* Update external_openstack_tenant_id check

Do not check external_openstack_tenant_id when application credentials are defined

* Add check for external_openstack_domain_id

* Fix typo
2020-08-31 03:30:28 -07:00
Maxime Guyot
34d88ea6d9 Fix Ansible-lint E303 (#6409) 2020-08-31 03:30:20 -07:00
Florian Ruynat
0665b45e61 Update nginx ingress to 0.35.0 (#6599) 2020-08-31 03:24:21 -07:00
Maxime Guyot
648fcf3a2e Fix E306 in roles/etcd (#6515) 2020-08-31 03:20:20 -07:00
Barry Melbourne
058438a25d Remove support for CoreOS Container Linux (#6576) 2020-08-28 02:28:53 -07:00
Maxime Guyot
6e938a3106 Fix E306 in other roles (#6517) 2020-08-28 01:20:53 -07:00
Florian Ruynat
2f93d62aa5 Update nginx ingress to 0.34.1 (#6571) 2020-08-27 10:15:53 -07:00
Florian Ruynat
8ba3d7ec75 Add Kubernetes 1.19 hashes (#6593) 2020-08-27 09:45:53 -07:00
Hans Feldt
9e2d282709 cri-o: add variable to configure insecure pull (#6568)
By default do not allow "unqualified" (without a registry) images
because it is considered insecure and subject to MITM attacks.

To enable insecure pull configure for example:

crio_registries:
  - "docker.io"
  - "quay.io"
2020-08-27 09:09:53 -07:00
Florian Ruynat
706c7cb4f1 etcd should not fail when adding an already existing member (#6587) 2020-08-27 02:33:01 -07:00
Florian Ruynat
5884eeb606 Remove ethtool workaround, issue is now fixed (#6579) 2020-08-27 02:29:01 -07:00
Florian Ruynat
e7ee19bd66 Update bunch of dependencies with minor fixes (#6570) 2020-08-27 02:25:01 -07:00
Hugo Blom
2f8fc92182 make it possible to open additional ports on master nodes (#6547) 2020-08-27 02:07:13 -07:00
nic0las
f59d3fc4a3 Deviceroutesourceaddress (#6508)
* add FELIX_DEVICEROUTESOURCEADDRESS calico option

* add calico_use_default_route_src_ipaddr option 

add calico_use_default_route_src_ipaddr option to use FELIX_DEVICEROUTESOURCEADDRESS calico option

* Update k8s-net-calico.yml
2020-08-27 02:07:01 -07:00
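A minimal sketch of enabling the option above in group_vars (the variable name comes from the commit body; the default is assumed to be false):

  calico_use_default_route_src_ipaddr: true   # sets FELIX_DEVICEROUTESOURCEADDRESS on calico-node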
Barry Melbourne
8e2bae0f2a Fix Ansible Lint warnings (No such file or directory) (#6581) 2020-08-26 23:19:10 -07:00
Arthur Outhenin-Chalandre
e6dae03a0d Add cilium hubble server in config (#6575)
Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>
2020-08-26 23:19:02 -07:00
Arthur Outhenin-Chalandre
2f2ed116f7 Improve metallb template for bgp peers (#6574)
Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>
2020-08-26 23:15:03 -07:00
Kuralamudhan Ramakrishnan
e91c6a7bd1 update the ovn4nfv-k8s-plugin image version to v1.1.0 (#6531)
Signed-off-by: Kuralamudhan Ramakrishnan <kuralamudhan.ramakrishnan@intel.com>
2020-08-26 23:11:03 -07:00
Florian Ruynat
1ff95e85f4 Rollback coredns, should not have been updated before 1.19 (#6573) 2020-08-26 03:30:03 -07:00
Sulochan Acharya
36924b63dc Allow webhook authorization (#6502) 2020-08-24 06:29:41 -07:00
Florian Ruynat
0c80d3d9fa Add proxy_env calculation to reset.yml (#6558) 2020-08-21 02:03:46 -07:00
jeanfabrice
411510cbe6 Use proper openssl command to differentiate between host and ip in API certificate check (#6392)
* Use proper openssl command to differentiate between host and ip in current certificate check

* fixup! Use proper openssl command to differentiate between host and ip in current certificate check
2020-08-21 02:03:39 -07:00
Florian Ruynat
6e2b8a5750 Add timeout to Get current version of calico cluster version, again (#6493) 2020-08-21 00:13:51 -07:00
Lars
ca66a96d0a make pre-remove node draining a failable task (#6442)
and add configuration to allow ungraceful removal
2020-08-21 00:13:39 -07:00
Marc-Antoine
0c09ec5d13 Bump Openstack cloud controller image verison to 1.18.2 (#6562) 2020-08-21 00:10:03 -07:00
*=0=1=4=*
a8e2110b2d #6552 Update extras_rh_repo_base_url (#6556) 2020-08-21 00:09:55 -07:00
Christian Strack
250541d29d Use proper pypy download url in bootstrap script (#6555)
The bootstrap-os role uses a bootstrap script to provision a
python interpreter on flatcar and container os hosts. As the
pypy project switched to another hosting provider, the download URL changed.

If applied, this will use the new proper pypy download URL in the bootstrap script
2020-08-21 00:09:47 -07:00
Florian Ruynat
142b9e1eff Update k8s hashes and set default version to 1.18.8 (#6532) 2020-08-21 00:09:39 -07:00
Svendegroote91
f204212963 Add docs for 'setting up your first cluster' (#6544) 2020-08-21 00:05:40 -07:00
Michal Petko
91ae87fa60 Fix setting node label if kube_override_hostname is defined (#6557) 2020-08-20 06:23:30 -07:00
Maxime Guyot
85646c96ad Add docs about CI setup (#6397) 2020-08-20 04:37:23 -07:00
tasekida
d6456d13c2 Update coredns to 1.7.0 (#6538) 2020-08-20 04:33:44 -07:00
Florian Ruynat
98f7485303 Update weave to 2.7.0 + minor update to Cilium (#6501) 2020-08-20 04:33:36 -07:00
Samuel Liu
a42d811420 fix scale playbook (#6482) 2020-08-20 04:33:23 -07:00
Barry Melbourne
bf6fdce339 Fix cert-manager E305 ansible-lint error (#6549) 2020-08-20 04:25:45 -07:00
Bernard Landon
fa378f09c3 Edited pre-upgrade task to uncordon a node failing to drain (#6546) 2020-08-20 04:25:36 -07:00
Florian Ruynat
d9d11e2291 Update sonobuoy dependency (#6536) 2020-08-20 04:25:23 -07:00
Florian Ruynat
73b2683697 Allow hosts with hyphen in name (#6529) 2020-08-18 00:53:30 -07:00
holmesb
d8a749fd27 Update apiserver-audit-policy.yaml.j2 (#6526) 2020-08-18 00:49:37 -07:00
rptaylor
f2d2d080f6 add master_volume_type variable (#6524) 2020-08-18 00:49:29 -07:00
Florian Ruynat
78ceef6b15 Remove unused variable (#6522) 2020-08-18 00:45:29 -07:00
Arthur Outhenin-Chalandre
ca8e59fa85 Add new cilium options for native routing (#6519)
Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>
2020-08-18 00:39:42 -07:00
Bernard Landon
b0210567aa Fixed Kubespray container-engine/docker role to populate docker.service (#6518) 2020-08-18 00:39:30 -07:00
Arthur Outhenin-Chalandre
33ec13293b Fix cilium_deploy_additionally with kubeadm etcd (#6514)
Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>
2020-08-18 00:35:36 -07:00
Arthur Outhenin-Chalandre
bedb411d06 improve Cilium metrics support (#6513)
Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>
2020-08-18 00:35:29 -07:00
Erwan Miran
ef3e98807e tlsminversion and tlsciphersuites kubelet (#6490) 2020-08-13 02:48:13 -07:00
Alvaro
49158dbe40 Minor Ambassador docs updates (#6503)
Signed-off-by: Alvaro Saurin <alvaro.saurin@gmail.com>
2020-08-06 08:37:42 -07:00
Arthur Outhenin-Chalandre
35682b5228 Fix cilium strict kube proxy replacement in HA (#6473)
* Update the cilium svc proxy test to HA mode

Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>

* Fix cilium strict kube-proxy in HA

Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>

* Add a single global endpoint variable

Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>

* Add cilium docs about kube-proxy replacement

Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>

* Fix issues in docs

Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>
2020-08-06 00:14:55 -07:00
Barry Melbourne
9cc70e9e70 Upgrade JetStack Cert-Manager to v0.15.2 (#6414)
* Upgrade JetStack Cert-Manager to v0.15.2

* Add README.md table of contents
2020-08-05 23:26:55 -07:00
Maxime Guyot
50598d9d47 Fix E306 in tests/ (#6495) 2020-08-05 13:22:55 -07:00
Maxime Guyot
fc23f37af7 Fix E306 in roles/kubernetes (#6500) 2020-08-05 07:56:28 -07:00
Sulochan Acharya
bfe143808f Allows tls verify skip on webhook auth url (#6472) 2020-08-05 05:02:29 -07:00
Maxime Guyot
91742055e0 Fix E306 in scripts/ (#6496) 2020-08-05 01:56:28 -07:00
נυαη נυαηѕση
6c41f64a98 Correct sample inventory to pass yamllint (#6499)
Nit alert.  Sample inventory throws an error when processed
by yamllint.  The default line is currently commented out.
However, when uncommenting it our linters fail.
2020-08-05 01:52:48 -07:00
Mike Williams
e72dbf3dfc Option for MetalLB to talk BGP (#6383)
* Option for MetalLB to talk BGP

* Check for BGP peers when metallb_protocol is bgp

* README clarification

* Commented values as documentation only in the sample inventory

* layer 2 or BGP, not both
2020-08-05 01:52:40 -07:00
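A hedged sketch of the inventory values involved in the MetalLB BGP option above — metallb_protocol is named in the commit; the other keys follow the MetalLB addon sample and should be verified there, and BGP peers must also be declared when bgp is selected (peer keys omitted here):

  metallb_protocol: "bgp"            # "layer2" (default) or "bgp", not both
  metallb_ip_range:
    - "192.0.2.10-192.0.2.50"        # illustrative address pool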
Kevin Klopfenstein
c3b78c3255 bootstrap-os for remove-node (#6154) 2020-08-05 01:52:28 -07:00
Maxime Guyot
fb666c44b3 Quoted type constraints are deprecated (#6497) 2020-08-05 01:32:28 -07:00
Maxime Guyot
58b5bf7886 Update base image to v2.13.3 (#6494) 2020-08-05 01:28:29 -07:00
bozzo
cc70200a07 Fix Flexvolume mount in Openstack Controller (#6480) 2020-08-04 05:28:35 -07:00
Florent Monbillard
ffbd98fec6 Remove hvac dependency (#6476) 2020-08-04 05:28:28 -07:00
Steven Reitsma
f3c17361da Create a PodDisruptionBudget for the Cinder CSI controllerplugin (#6385) 2020-08-04 05:28:19 -07:00
Victor Morales
bdf0238328 Upgrade molecule to v3 (#6468)
Signed-off-by: Victor Morales <v.morales@samsung.com>
2020-08-04 05:24:19 -07:00
Florent Monbillard
39b907cdfb Remove workaround for kubeadm upgrade (#6478)
https://github.com/kubernetes/kubeadm/issues/1498 was closed
2020-08-03 01:17:40 -07:00
Florian Ruynat
24a7878e7c Update kube-router to 1.0.1 and kube-ovn to 1.3.0 (#6479) 2020-08-01 00:34:04 -07:00
Konstantin Lebedev
2364a84579 fix src for audit webhook config yaml (#6470) 2020-08-01 00:33:56 -07:00
Hans Feldt
c6e5be91e9 crio: align template crio.conf with upstream (#6432)
* log level by default increased to 'info'
* cgroup manager by default set to 'systemd'
* stream port (used by kubelet) bound to 127.0.0.1 for security reasons
* metrics can be enabled and port specified
2020-08-01 00:33:48 -07:00
fulii
ce22c0e6a4 Add option to configure IPVS timeouts in kube-proxy configuration manifest. (#6396) 2020-08-01 00:33:40 -07:00
Maxime Lavandier
bd60df97aa Fix download calico policy condition (#6474) 2020-08-01 00:29:48 -07:00
Cristian Chiru
94df580674 Moved docker_dns_options to defaults so it can be overridden (#6394)
* Moved docker_dns_options to defaults so it can be overridden

* Fixed yaml indentation and markdown

* Moved docker_dns_search_domains to defaults
2020-08-01 00:29:41 -07:00
Kuralamudhan Ramakrishnan
90e5f8ffe1 adding ovn4nfv in kubespray (#6381)
Signed-off-by: Kuralamudhan Ramakrishnan <kuralamudhan.ramakrishnan@intel.com>
2020-07-31 07:33:08 -07:00
Florian Ruynat
bf6168fca8 Move fedora30 jobs to fedora32 (#6426) 2020-07-30 23:31:07 -07:00
Florian Ruynat
a78e861a89 Fix test if openstack_cacert is a base64 string (#6421) 2020-07-30 13:15:17 -07:00
Arthur Outhenin-Chalandre
3550e3c145 Adding kube-proxy-replacement support in cilium (#6334)
Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>
2020-07-30 02:46:31 -07:00
Vladimir Masarik
8425c2363b Replaced a broken link (#6467) 2020-07-30 00:58:31 -07:00
Samuel Liu
15ec44901d azure csi typo (#6469) 2020-07-30 00:52:31 -07:00
Florent Monbillard
924cc11af6 Upgrade to kubernetes 1.18.6 (#6405)
- Add 1.17.9 and 1.16.13 SHAs
2020-07-29 14:54:09 -07:00
Alvaro
0fa5a252b9 Documentation for Ingress (#6378)
Signed-off-by: Alvaro Saurin <alvaro.saurin@gmail.com>
2020-07-29 06:55:47 -07:00
Maxime Guyot
fe46349786 Fix ansible-lint E301 for commands fetching data (#6465) 2020-07-28 08:39:47 -07:00
Lovro Seder
96a2b386f2 Fix shellcheck url (#6462) 2020-07-28 05:57:08 -07:00
Maxime Guyot
214e08f8c9 Fix ansible-lint E305 (#6459) 2020-07-28 01:39:08 -07:00
Maxime Guyot
8bd3b50e31 Fix ansible-lint E404 (#6417) 2020-07-28 01:21:08 -07:00
Maxime Guyot
b8c4bd200e Update README.md and openstack.md (#6455) 2020-07-27 07:44:17 -07:00
Maxime Guyot
e70f27dd79 Add noqa and disable .ansible-lint global exclusions (#6410) 2020-07-27 06:24:17 -07:00
Florian Ruynat
b680cdd0e4 Move healthz check to secure ports (#6446) 2020-07-27 00:26:17 -07:00
Florian Ruynat
c9f63e5016 Update multus version & crio conf (#6444) 2020-07-26 23:36:16 -07:00
Florian Ruynat
d8a197ca51 Fix remove etcd broken with etcdctl_api 3 (#6448) 2020-07-26 23:32:29 -07:00
Hugo Blom
1f9841f609 update cinder csi manifests (#6434) 2020-07-26 23:32:17 -07:00
Florian Ruynat
aa21edeb53 Update docker package to 19.03.12 (#6439) 2020-07-22 09:26:06 -07:00
nniehoff
eb69f126de * add proxy_env definition to remove_node.yml resolving #6430 (#6431) 2020-07-22 00:28:05 -07:00
Michal Skalski
70edccf7e0 Newer version of Local Path Provisioner in samples (#6437)
To make it less confusing for users who uncommented the whole block of
the local path provisioner [1], the samples should point at least to
version 0.0.3, which supports the helper image [2] configured by the
local_path_provisioner_helper_image_repo variable. As 0.0.3 is a bit old,
the samples could point to the current newest release, 0.0.14.

[1] 45a177e2a0 (commitcomment-38625688)
[2] 315d67fa8c
2020-07-22 00:08:11 -07:00
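A sketch of the sample-inventory values the commit above refers to; local_path_provisioner_helper_image_repo comes from the commit text, the other key names are assumptions to be checked against inventory/sample:

  local_path_provisioner_enabled: true
  local_path_provisioner_version: "v0.0.14"             # assumed key name for the release mentioned above
  local_path_provisioner_helper_image_repo: "busybox"   # helper image supported since 0.0.3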
Konstantin Lebedev
4b80a7f6fe Felix configuration via extraenvs of calico node (#6433) 2020-07-22 00:08:04 -07:00
Michael Sheinberg
e06e6895da Remove dbus-tools from coreos bootstrap (#6428)
Trying to layer this package on Fedora 32 causes the install to crash
and furthermore it looks like the original bug linked to in the comment
has been resolved for Fedora 31
2020-07-22 00:04:04 -07:00
Florian Ruynat
50fc82acdc Minor update to Cilium and Calico (#6438) 2020-07-21 23:58:33 -07:00
Igor Vuk
ea67bb6e41 Fix typo: Modprode -> Modprobe (#6429) 2020-07-21 23:58:25 -07:00
Minjong Kim
b19f2e2d3d Update the calico_veth_mtu setting to affect IP-in-IP users (#6419)
* Update calico_veth_mtu to FELIX_IPINIP variable

calico_veth_mtu is specified in the configuration, but since it only works for wireguard, modify it to work for IP-in-IP users.

* Update template with a cleaner expression
2020-07-21 23:58:18 -07:00
chenguoquan1024
9c48f666ec change /etc/ssl/etcd to etcd_config_dir param (#6408)
* change /etc/ssl/etcd to etcd_config_dir param

* add use etcd_events_data_dir param
2020-07-21 23:58:05 -07:00
Kenichi Omichi
4990eec4a2 Replace Openstack with OpenStack (#6413)
The official word is OpenStack, not Openstack, as shown in [1].
This replaces it with OpenStack in the docs.

[1]: https://www.openstack.org/
2020-07-21 23:54:05 -07:00
Florent Monbillard
bf8c8976dd Upgrade etcd to 3.4.3 (#5998) 2020-07-20 07:26:51 -07:00
Konstantin Lebedev
a7ec0ed587 add audit webhook support (#6317)
* add audit webhook support

* use generic name auditsink
2020-07-20 01:32:54 -07:00
Arthur Outhenin-Chalandre
1a1fe99669 Add a way to deploy cilium alongside another CNI (#6373)
Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>
2020-07-17 05:57:01 -07:00
Maxime Guyot
8818073ff3 Cleanup old build-cephfs-provisioner.yml playbook (#6418) 2020-07-17 04:15:00 -07:00
Maxime Guyot
b35e6558bc Always enable GitLab CI artifacts for cluster-dump (#6412) 2020-07-16 13:45:00 -07:00
Florian Ruynat
5e22574402 Remove allow-release-candidate-upgrades already include in experimental-upgrades flag (#6349) 2020-07-15 00:26:37 -07:00
chenguoquan1024
e1873ab872 add calico-node selinux (#6359) 2020-07-15 00:22:38 -07:00
Kenichi Omichi
29312a3ec0 Add oomichi to reviewers of MetalLB addon (#6393)
I'd like to review PRs related to the MetalLB addon as much as possible to make
it better, and it would be easier to track related PRs after becoming the
reviewer.
2020-07-14 20:44:37 -07:00
Qasim Sarfraz
feeb701c13 Respect kube_override_hostname during removal/upgrade (#6347)
* respect kube_override_hostname during removal/upgrade

* Use hostvars in loop
2020-07-13 07:18:40 -07:00
Daniel Schade
b347aefd61 Fixed fedora modular repos activation for fcos (#6300)
* Enable fedora modular repos for fcos #6299

* Fixed fedora modular repos activation for fcos #6300
2020-07-13 07:18:32 -07:00
Arthur Outhenin-Chalandre
abfa1636e4 Fix kube-proxy post deployment removal (#5554)
* Fix kube-proxy removal

* Fix unwanted skipped task for kube-proxy
* Fix kube_proxy_remove default

Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>

* Add test for kube-router svc proxy

Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>
2020-07-13 07:12:33 -07:00
Steven Reitsma
deca5ec903 Remove old csi-attacher flag and fix RBAC for Cinder CSI (#6358)
Add proper RBAC for new csi-attacher version
2020-07-13 04:48:32 -07:00
Arthur Outhenin-Chalandre
05b9f14b76 Update cilium minimum kernel preinstall check (#6376)
Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>
2020-07-13 04:44:32 -07:00
petruha
4cb576da19 Add readiness probe to dns-autoscaler (#6382) 2020-07-13 02:50:34 -07:00
bozzo
8cb644fbec Add Fedora CoreOS kubevirt image for tests (#6337) 2020-07-10 01:07:48 -07:00
Hans Feldt
22996babcf allow kubeadm to upgrade etcd (#6345)
Co-authored-by: Hans Feldt <hafe@users.noreply.github.com>
2020-07-07 12:36:00 -07:00
Hans Feldt
75ad868cbd crio: harden downloads with retry (#6374)
CI job 624031102 failed with:

fatal: [ubuntu1804]: FAILED! => {"changed": false, "msg": "Failed to download key at https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/xUbuntu_18.04/Release.key: Request failed: <urlopen error [Errno -3] Temporary failure in name resolution>"}

Assuming it's a temporary problem, it should get more robust with a
couple of retries, like in other roles.

Co-authored-by: Hans Feldt <hafe@users.noreply.github.com>
2020-07-07 12:32:01 -07:00
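A minimal sketch of the retry pattern described above, reusing the repository key URL from the failed CI job (task name, retry count and delay are illustrative, not the exact change):

  - name: Add kubic CRI-O repository key
    apt_key:
      url: "https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/xUbuntu_18.04/Release.key"
      state: present
    register: crio_repo_key
    until: crio_repo_key is succeeded
    retries: 4
    delay: 3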
spaced
9433fe46c8 Add workaround with include_task for mitogen (#6312) 2020-07-07 08:09:59 -07:00
Maxime Guyot
935c5093e2 Enable OVH CI (#6365) 2020-07-06 01:56:51 -07:00
Sam Lin
6bb47d8adb Fix can't remove etcd node (#6363)
* add remove_node_ip

* move remove_node_ip to remove etcd part

* fix: remove tail space

* fix: handle ubuntu: focal
2020-07-04 02:02:48 -07:00
Maxime Guyot
57eefdd458 Fix azure-cloud-config.j2 JSON syntax (#6364) 2020-07-02 23:38:47 -07:00
Kenichi Omichi
060d25fc79 Update MetalLB README.md (#6350)
MetalLB recently became one of the addons, with its options renamed accordingly.
This updates the MetalLB README.md for this change.
2020-07-02 07:12:54 -07:00
Pasquale Toscano
4ce970c0b2 Cilium: overwrite auto-detected MTU of underlying network (#6329) 2020-07-02 07:12:47 -07:00
nurekage
017df7113d Patch Calico for V3.14.0 missing CR and CRD (#6276) 2020-07-01 08:44:16 -07:00
Maxime Guyot
00fe3d5094 Explicitly set ETCDCTL_API and use ETCDCTL_ENDPOINTS (#6327) 2020-07-01 04:56:16 -07:00
Paul Rey
bcac3c62a2 Add additional metadata configuration options to external Openstack CCM (kubernetes-sigs#6338) (#6339)
* Add additional metadata configuration option to external Openstack CCM (kubernetes-sigs#6338)

* Set the variable external_openstack_metadata_search_order undefined by default
2020-07-01 04:52:17 -07:00
Florian Ruynat
2a82dff3ae Remove runtime-config from kubeadm if empty (#6311) 2020-06-30 11:22:05 -07:00
Florian Ruynat
16ec5939c2 Update deprecated api (#6245) 2020-06-30 09:00:07 -07:00
Florian Ruynat
b064274e27 Update kube-router to 1.0.0 (#6211) 2020-06-30 08:54:06 -07:00
Hans Feldt
ae003af262 Fix kubelet cgroup driver detection for crio (#6331)
* Fix kubelet cgroup driver detection for crio

Remove fact standalone_kubelet since it is not used

* Fix yamllint complaints of roles/kubernetes/node/tasks/facts.yml

Co-authored-by: Hans Feldt <hafe@users.noreply.github.com>
2020-06-30 02:32:05 -07:00
Florian Ruynat
f515898cb5 Update hashes and set default version to 1.18.5 (#6335) 2020-06-30 02:00:05 -07:00
Kenichi Omichi
25bab0e976 Change MetalLB to one of addons (#6238)
This changes the MetalLB contrib playbook into one of the addons, so MetalLB
can be deployed as part of the Kubernetes cluster deployment. By default,
Kubespray doesn't deploy the MetalLB addon.
2020-06-29 15:11:59 -07:00
Florian Ruynat
8213b1802b Update calico to 1.15.0 + minor update to kube-ovn/weave (#6306) 2020-06-29 14:39:58 -07:00
Joel Seguillon
4c1e0b188d Add .editorconfig file (#6307) 2020-06-29 12:39:59 -07:00
bozzo
09b23f96d7 Use NetworkManager to manage resolv.conf in FedoraCoreOS (#6291) 2020-06-29 00:26:17 -07:00
Kenichi Omichi
56f389a9f3 Add USE_REAL_HOSTNAME to inventory.py (#6293)
inventory_builder creates the hosts.yaml file with hostnames like "node1",
"node2", etc. Even when specifying override_system_hostname=false, the
output of "kubectl get nodes" shows those hostnames ("node1", etc.)
instead of the actual hostnames.
To solve this issue, this adds an option USE_REAL_HOSTNAME to use the
actual hostnames when creating the hosts.yaml file instead of "node1", etc.
2020-06-26 00:03:47 -07:00
Maxime Guyot
45e12df8a3 Cleanup OpenStack network things (#6283) 2020-06-26 00:03:39 -07:00
Mateus Caruccio
1892cd65f6 Add support for dns_etchosts (#6236) 2020-06-26 00:03:31 -07:00
Erwan Miran
d3ca9d1db9 kube_encryption_resources must be output as yaml (#6309) 2020-06-25 23:59:31 -07:00
Qasim Sarfraz
16ad344c41 Gather ansible_default_ipv4 for specific groups (#6318) 2020-06-25 23:55:31 -07:00
Mike Dziedziela
8ca2a9a7d5 added azure_cloud parameter to Azure's cloud_config (#6321) 2020-06-25 14:35:30 -07:00
Maxime Guyot
93cbcb61b8 Fix some doc links (#6328) 2020-06-25 11:56:37 -07:00
bozzo
276c450759 Use connection: local when delegate_to: localhost (#6322)
This will avoid SSH connection on the local host
2020-06-25 08:14:38 -07:00
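An illustrative task shape for the change above — pairing delegate_to: localhost with connection: local so Ansible uses the local connection plugin instead of opening an SSH session (the task content itself is a placeholder):

  - name: Write an artifact on the Ansible controller
    copy:
      dest: /tmp/example-artifact        # placeholder path
      content: "generated on the controller"
    delegate_to: localhost
    connection: local                    # no SSH connection to the local host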
irizzant
a6a6e843af Add /dev volume (#6319) 2020-06-25 06:22:38 -07:00
Florian Ruynat
f54f63ec3f Update cilium to 1.8.0 (#6314) 2020-06-25 06:16:38 -07:00
Hans Feldt
93951f2ed5 fix use of ansible tags (#6316)
Tags are not inherited for include_role, therefore the change
from include to import

Co-authored-by: Hans Feldt <hafe@users.noreply.github.com>
2020-06-25 03:00:37 -07:00
Samuel Liu
c29b21717d Add event-ttl duration (#6310)
* Add event-ttl duration

* Fix wrong location
2020-06-24 08:15:17 -07:00
Alvaro
80d16e6c91 Support for Ambassador OSS as an Ingress (#6135)
Support for Ambassador OSS as an Ingress Controller when
setting `ingress_ambassador_enabled: true`.

Signed-off-by: Alvaro Saurin <alvaro.saurin@gmail.com>
2020-06-24 07:39:17 -07:00
Kenichi Omichi
68cfb9a053 Update OpenStack doc for external cloud provider (#6252)
The in-tree cloud provider is now deprecated and the external cloud
provider is recommended for OpenStack instead.
The doc described how to upgrade from the in-tree cloud provider, but
for the current situation it is better to describe how to deploy the
external cloud provider from scratch instead.
This updates the OpenStack doc for this use case.
2020-06-22 04:48:39 -07:00
Joel Seguillon
d50fe9550c bump dashboard to 2.0.2 (#6303) 2020-06-22 01:14:40 -07:00
Pasquale Toscano
8f5c4dcd2e Add support for Kata Containers (#6256)
* Install Kata Containers as additional container runtime

* Create RuntimeClasses for Kata Containers

* Updated Vagrant to optionally run without Docker as container manager

* Updated Vagrant to optionally use Libvirt nested virtualization

* Add Kata Containers documentation

* Fix lint errors

* Add kata_containers_enabled to kubespray-defaults

* Fixed typo error

* Fixed typo error
2020-06-22 00:28:39 -07:00
Maxime Guyot
1a802726d2 Update base image to v2.13.2 (#6296) 2020-06-19 06:47:58 -07:00
Florian Ruynat
90c867b424 Update loadbalancers versions (haproxy&nginx) (#6278) 2020-06-18 07:48:19 -07:00
Florian Ruynat
eeb77369cb Update hashes and set default to 1.18.4 (#6285) 2020-06-18 06:30:19 -07:00
Maxime Guyot
69a48cbdd7 Add Vagrant CI for Ubuntu 20.04 (#6279) 2020-06-18 01:18:05 -07:00
Florian Ruynat
33b8ad0d89 Update test-cases documentation (#6264) 2020-06-17 23:40:05 -07:00
Maxime Guyot
605cfeb3e4 Test bootstrap-os on more platforms (#6277) 2020-06-17 04:52:39 -07:00
Maxime Guyot
c6588856c7 Add Ubuntu 20.04 support and use Python 3 (#6157) 2020-06-16 13:04:05 -07:00
Samuel Liu
dba645421f ADD tls cipher suites support (#6024)
* ADD tls cipher suites support

yaml lint

yamllint

* update test case

* update test case
2020-06-16 04:10:05 -07:00
Florian Ruynat
f437ac0b27 Fix nologin wrong path (#6272) 2020-06-16 02:30:04 -07:00
Unai Arríen
8ec6729cae Add disable_ipv6_dns: true in E2E tests (#6266) 2020-06-16 01:12:03 -07:00
Florian Ruynat
19d4b5dd04 Update various dependencies (#6265) 2020-06-16 01:08:03 -07:00
Kenichi Omichi
78251b0304 Fix check external_openstack_tenant_name value (#6270)
We need to specify either external_openstack_tenant_name or
external_openstack_tenant_id. Those values were checked separately for
being defined and for having actual values.
However those values are always defined because of the following code
in openstack/defaults/main.yml:

external_openstack_tenant_id: "{{ lookup('env','OS_TENANT_ID')| default(lookup('env','OS_PROJECT_ID'),true) }}"
external_openstack_tenant_name: "{{ lookup('env','OS_TENANT_NAME')| default(lookup('env','OS_PROJECT_NAME'),true) }}"

So even if neither value is specified, those checks could not detect
the misconfiguration. This fixes the checks to detect it.
2020-06-16 01:02:03 -07:00
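A hedged sketch of the kind of check the fix above implies — asserting on non-empty values instead of merely defined variables (not the literal task from the PR):

  - name: Check external OpenStack tenant settings
    assert:
      that:
        - (external_openstack_tenant_id | length > 0) or (external_openstack_tenant_name | length > 0)
      msg: "Either external_openstack_tenant_id or external_openstack_tenant_name must be set"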
mohsen
10e54eca26 make better condition for applying nf_conntrack kernel tweak (#6267)
* MINOR: Check kernel version before enabling the nf_conntrack modprobe tweak

* CLEANUP: no longer need to ignore errors on this task

* MINOR: Fix yaml and ansible lint errors - remove trailing space
2020-06-16 00:34:06 -07:00
Hans Feldt
a8740c6e13 fix a few tasks falsely reporting "changed" (#6269)
Co-authored-by: Hans Feldt <hafe@users.noreply.github.com>
2020-06-16 00:24:03 -07:00
Y0UZ45
06391b6dd9 Fix kubectl.sh parameter quoting (#6239)
If the special parameter "$@" is not quoted, the following command will not work:

./kubectl.sh patch storageclass my-storage-class -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
2020-06-14 13:57:57 -07:00
marcosfsch
8dc01df60b Oracle Linux 8 support and fixes (#6198)
* Add oraclelinux8 and disable firewalld

Add oraclelinux8 image and disable firewalld on oraclelinux VMs

* Fix Oracle Linux repositories

As documented in http://yum.oracle.com/getting-started.html#installing-software-from-oracle-linux-yum-server,
public-yum-ol7.repo was deprecated in release 7.6. Some repos were integrated into oracle-linux-ol7.repo (e.g. ol7_latest, ol7_addons) and others are available as packages (epel). This also adds support for oraclelinux8

* Fix to use ansible_distribution_version

Instead of ansible_distribution_major_version

* Update README.md
2020-06-12 01:59:56 -07:00
Florian Ruynat
a9de6dde33 Cleanup unneeded elif in kubelet env file (#6261) 2020-06-12 01:27:55 -07:00
Alexander Petermann
75571ed303 manual intervention on etcd member removal aren't required anymore (#6248) 2020-06-12 01:13:54 -07:00
Unai Arríen
1912df7e3e Create /etc/gai.conf if not exists when disable_ipv6_dns is 'true' (#6258) 2020-06-12 00:55:55 -07:00
petruha
bacbb2a0ca Add custom dashboard namespace test (#6249)
Add custom dashboard namespace test
2020-06-12 00:52:03 -07:00
Hugo Blom
e1ba25a4fb Bump CSI containers to latest version (#6221)
* bump csi containers

* bump snapshoter to 2.1.1
2020-06-12 00:51:55 -07:00
Kenichi Omichi
10a17cfe54 Look up OS_PROJECT_NAME for OpenStack project name (#6262)
Historically, OpenStack used the term "tenant" for a separated namespace,
but it now uses "project" instead.
"tenant" has therefore been replaced with "project", and all "TENANT"
variables were also renamed to "PROJECT".
This makes Kubespray also search the "PROJECT" variables for newer OpenStack
clouds.
2020-06-12 00:47:56 -07:00
Alexander Evseev
5a311236c4 Enable portmap CNI plugin with kube-router (#6204)
... to have working `hostPort` for containers.

See: https://www.kube-router.io/docs/user-guide/#hostport-support
2020-06-10 10:08:52 -07:00
Yousong Zhou
a7b8708dfc calico: use absolute path to docker, crictl binary (#6253)
To avoid the following error (ignored when pipefail is off)

  RUNNING HANDLER [network_plugin/calico : containerd | delete calico-node containers] *******************************************************************************
  changed: [node1] => {"attempts": 1, "changed": true, "cmd": "crictl pods --name calico-node-* -q | xargs -I% --no-run-if-empty bash -c \"crictl stopp % && crictl rmp %\"", "delta": "0:00:00.004240", "end": "2020-06-10 03:32:41.316955", "rc": 0, "start": "2020-06-10 03:32:41.312715", "stderr": "/bin/sh: crictl: command not found", "stderr_lines": ["/bin/sh: crictl: command not found"], "stdout": "", "stdout_lines": []}
2020-06-10 03:22:08 -07:00
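An illustrative version of that handler with absolute paths, as the commit describes (bin_dir is Kubespray's binary directory variable; the exact handler may differ):

  - name: containerd | delete calico-node containers
    shell: >-
      {{ bin_dir }}/crictl pods --name calico-node-* -q
      | xargs -I% --no-run-if-empty bash -c "{{ bin_dir }}/crictl stopp % && {{ bin_dir }}/crictl rmp %"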
Florent Monbillard
8964dc53df Add Offline docs to docs website's sidebar (#6251)
Fix the offline docs URL in README
2020-06-09 12:17:01 -07:00
Florian Ruynat
ecc3a0aec5 Update kube-ovn to 1.2.0 - also update minor version for multus and weave (#6223) 2020-06-09 12:09:01 -07:00
Craig Rodrigues
144743e818 Fix indentation in a few places so file can be round-tripped more easily (#6178)
with the Python ruamel.yml library

- Change True/False to true/false in a few places so file can
  be more easily round-tripped with the Python ruamel.yml library
2020-06-09 06:39:20 -07:00
Alexander Petermann
7712bd0c76 remove ectd node in pre step, instead of post step (#6099) 2020-06-09 05:37:17 -07:00
Florian Ruynat
101686c665 Remove outdated CriticalAddonsOnly toleration and critical-pod annotation (#6202) 2020-06-09 05:23:30 -07:00
Florian Ruynat
f2ca929a4a Move nodes readiness test before pods readiness (#6089) 2020-06-09 05:23:18 -07:00
Florent Monbillard
13f2b3d134 Improve air-gap installation instructions (#6234) 2020-06-09 03:25:17 -07:00
Danilo Riecken P. de Morais
50204d9551 Add rpm-ostree cleanup task (#5986) 2020-06-09 02:49:17 -07:00
Florian Ruynat
6852f821a5 Update nginx ingress to 0.32.0 (#6063) 2020-06-09 02:45:18 -07:00
Florian Ruynat
953bc8dee2 Update docker & docker-cli to 19.03.11 (#6225) 2020-06-07 23:55:46 -07:00
Maxime Guyot
9afd3f0c32 Use a random subnet for elastx CI (#6232) 2020-06-06 12:11:45 -07:00
Hugo Blom
3f443f3878 set allowVolumeExpansion in cinder csi (#6220) 2020-06-05 08:27:43 -07:00
Lovro Seder
5dd85197af Manage containerd.io package with docker CRI. (#6218)
* Manage containerd.io package with docker CRI.

* Refactor common containerd stuff to separate role

* Fix check mode and unnecessary shell.
2020-06-05 05:55:44 -07:00
Florian Ruynat
764a851189 Terraform quoted references are now deprecated (#6203) 2020-06-05 00:05:43 -07:00
Maxime Guyot
b98cb74f5e Use 19.03.9 in localhost CI (#6201) 2020-06-04 08:59:14 -07:00
spaced
750db9139a fix CRI-O repos for centos distributions (#6224)
* fix CRI-O repos for centos distributions

* fix CRI-O repos for centos distributions
- revert workarounds

* fix CRI-O repos for centos distributions
- use https for centos repos

* avoid 302 redirects for centos repos
2020-06-04 01:08:44 -07:00
Hugo Blom
f2c8b393e1 Upgrade calico to 3.14.1 (#6219)
* upgrade calico to 3.14.1

* add checksums for calico 3.14.1 and update readme
2020-06-03 00:38:17 -07:00
Maxime Guyot
fd59556222 Add Elastx CI (#6127) 2020-06-03 00:00:17 -07:00
Wang Zhen
0b54e8e04c fix documentation example (#6216)
Signed-off-by: Wang Zhen <lazybetrayer@gmail.com>
2020-06-02 05:42:23 -07:00
Aleksandr Loktionov
85b3526617 Fix vSphere CPI configMap and vSphere CSI secret re-deploy (#6209) (#6210) 2020-06-02 05:42:15 -07:00
Flavien
7ff8fc259b Support all taints in network plugins manifests (#6208)
flannel, ovn and multus network plugins did not support all taint keys. This
update changes the tolerations to support them all.

According to the documentation:

```
There are two special cases: An empty key with operator Exists matches all keys,
values and effects which means this will tolerate everything. An empty effect matches
all effects with key key.
```

Usage of the empty `key` and `effect` ensures the network plugin daemonset will
be deployed on every node (e.g. in case of custom taints, or a NoExecute effect)
2020-06-02 05:38:15 -07:00
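For reference, a toleration of the shape the commit above describes — an empty key with operator Exists and no effect, which tolerates every taint on every node:

  tolerations:
    - operator: "Exists"   # empty key matches all keys; omitted effect matches all effects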
Sergey
cc507d7ace disable bird-check flag for probes of calico-node pods when calico_network_backend is not 'bird'. (#6217) 2020-06-01 12:44:14 -07:00
xgdgsc
7c0fbe2959 dead link (#6181)
* dead link

* triggger ci
2020-06-01 09:33:56 -07:00
Florian Ruynat
6bc60e021e Update minor version for dependencies (#6206) 2020-05-29 05:11:24 -07:00
petruha
54816f1217 Update containerd package to 1.2.13-3.2.el7 (#6162)
* Update containerd package to 1.2.13-3.2.el7

* Update Fedora containerd package versions.

* Update Redhat containerd stable and edge packages.
2020-05-29 05:11:16 -07:00
jeanfabrice
be3283c9ba Fix conflicting clusterIP fact between coredns and nodelocaldns (#6195) 2020-05-29 04:27:15 -07:00
Kenichi Omichi
249b0a2a80 Allow metallb:speaker to create events (#6147)
Since MetalLB v0.8 [1], metallb:speaker has started publishing a nodeAssigned
event on the k8s resource.
To support MetalLB v0.8+, this allows metallb:speaker to create events.

[1]: 5cc6e23776 (diff-60053ad6fecb5a3cfabb6f3d9e720899R246)
2020-05-29 04:17:16 -07:00
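A hedged sketch of the RBAC rule this change implies for the metallb:speaker role (the exact verbs in the PR may differ):

  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "patch"]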
Florian Ruynat
71d476b121 Auto detect github target branch in rebase script (#6187) 2020-05-28 12:37:15 -07:00
Florian Ruynat
45d8797dce Fix download boolean for local_path_provisioner (#6177) 2020-05-28 06:56:02 -07:00
Cody Seavey
b6e21a18cc Modify the populate no_proxy task to use a combine rather than relying on the hash_behaviour setting to be set to merge rather than replace (#6112) 2020-05-28 06:42:03 -07:00
petruha
f959cc296f Fix metrics-server rules (#6165) 2020-05-28 03:18:02 -07:00
Flavien
ab44beba17 weave: support any taint effect in daemonset tolerations (#6159)
Since weave 2.5.1, the `NoExecute` taint effect is no longer supported,
so this changes the daemonset tolerations accordingly.

Also remove the toleration key `CriticalAddonsOnly`, which is not required anymore.
2020-05-28 01:10:02 -07:00
Florian Ruynat
b2a0b649fd Add new Kubernetes version hashes and set default to 1.18.3 (#6173) 2020-05-28 01:02:03 -07:00
Florian Ruynat
6179405e84 Update docker default to 19.03 - cleanup docker docs & refs (#6153) 2020-05-28 00:52:02 -07:00
Maxime Guyot
83d945127f Make vagrant CI normal (#6074) 2020-05-28 00:46:02 -07:00
spaced
1be15a0864 Enable crio 1.18 (#6197) 2020-05-28 00:42:15 -07:00
Etienne Champetier
41b44739b1 Bump CNI plugins to 0.8.6 (#6196)
https://github.com/containernetworking/plugins/releases/tag/v0.8.6

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
2020-05-28 00:42:03 -07:00
Samuel Liu
38ca58ae8d update pause images version: 3.2 (#6190) 2020-05-28 00:38:02 -07:00
Kenichi Omichi
fd7829d468 Update MetalLB version (#6139)
If running MetalLB v0.7.3 on k8s v1.18.2, metallb pods output the
following parsing error of v1.ServiceList:

  $ kubectl logs controller-dbb46cf84-fw8h8 -n metallb-system
  {
    "caller":"reflector.go:205",
    "level":"error",
    "msg":"go.universe.tf/metallb/internal/k8s/k8s.go:231:
      Failed to list *v1.Service: v1.ServiceList:
        Items: []v1.Service: v1.Service: ObjectMeta:
        v1.ObjectMeta: readObjectFieldAsBytes:
        expect : after object field, parsing 1605

Then an external IP address is never allocated to the Service of
LoadBalancer type.
By updating MetalLB version to the latest v0.9[1] today, this issue
can be solved.

[1]: https://hub.docker.com/r/metallb/controller/tags
2020-05-27 14:10:03 -07:00
Wang Zhen
d62836f2ab Replace seccomp profile docker/default with runtime/default (#6170)
Signed-off-by: Wang Zhen <lazybetrayer@gmail.com>
2020-05-27 14:02:02 -07:00
Craig Rodrigues
4fd03b93f7 Rewrite download_hash in Python (#5995)
- Directly update the main.yml file with the new hashes.
2020-05-27 06:52:40 -07:00
Maxime Guyot
1617a6ea8e CI upgrade from v2.13.1 (#6188) 2020-05-27 05:22:40 -07:00
Florian Ruynat
e9ce7243b8 Match docker-cli version with docker-engine version (when available) (#6163) 2020-05-25 05:37:11 -07:00
404notfoundhard
d036a04d4d restart kubelet service when kube-config.yml is changed (#5402)
* fix(kubelet): exec notify restart kubelet service when kube-config.yml changed

* Revert "refactor(kubelet handler): change task name("reload kubelet") this is misleading"

This reverts commit 8f5d29560802c7c997293adb1ce9f84d3b20b6cb.

* fix(handlers,kubelet): setting right notify task name
2020-05-19 10:13:37 -07:00
Maxime Guyot
35ad57674e Update containerd to 1.2.13-2 (#6156) 2020-05-18 07:57:36 -07:00
qvicksilver
437189c213 Fix missing permissions for OpenStack cloud-controller-manager preventing metrics scraping (#6124) 2020-05-18 02:35:45 -07:00
Alexander Petermann
0f5fd1edc0 update documentation to add and remove nodes (#6095)
* update documentation to add and remove nodes

* add information about parameters to change when adding multiple etcd nodes

* add information about reset_nodes

* add documentation about adding existing nodes to ectd masters.
2020-05-18 02:35:37 -07:00
Paul Rey
b5aaaf864d Add additional network configuration options to external Openstack CCM (#6083) (#6085)
* Add additional network configuration options to external Openstack CCM (#6083)

* Change the default version of external openstack cloud controller image to v1.18.1 since there was an issue in v1.18.0 where some IPs of the private network were ignored

* Change Network section in external-openstack-cloud-config.j2 to Networking

* Add networking customization information in the openstack documentation
2020-05-18 02:31:36 -07:00
bozzo
d948839320 Fix resolv.conf configuration for Fedora CoreOS. (#6138) 2020-05-18 02:27:36 -07:00
Mateus Caruccio
a5af58c05a Fix apiserver port when upgrading (#6136) 2020-05-18 01:21:36 -07:00
Kenichi Omichi
d8a61b94a9 Update MetalLB README (#6140)
This updates MetalLB README as following
- Remove unnecessary markdown to read it easily on github
- Make words consistency (kubernetes, loadbalancer)
- Add change-required option
2020-05-18 01:17:36 -07:00
Matthew Mosesohn
fda05df5f1 Only fix kube-proxy address on evaluating kube_master hosts (#6152)
Change-Id: I83a7101a6cd99eb531d8385de5c31aee4f474469
2020-05-17 13:05:36 -07:00
jeanfabrice
3997aa9a0f Use OS packaging default value for apparmor_profile in crio.conf (#6125) 2020-05-14 21:47:00 -07:00
tasekida
81292f9cf3 Fix apt update don't access Docker’s official repository for Ubuntu (#6106) 2020-05-13 07:06:26 -07:00
Florian Ruynat
167e293594 Fix erroneous variable name (docker_keepcache) (#6129) 2020-05-13 06:26:27 -07:00
Florian Ruynat
1f9ccfe54d Rollback metrics-server version and enable in one CI test (#6130) 2020-05-13 06:20:26 -07:00
Hector S
d3d0360526 Changed state to present instead of installed in glusterfs role for Debian (#6096) 2020-05-12 13:50:30 -07:00
Kenichi Omichi
826b0f384d Add installation of requirements for Azure (#6076)
Because the Azure README lacks a requirements installation step, the
following error can happen:

 "The ipaddr filter requires python's netaddr be installed on the
  ansible controller"

It is nice to add the installation step for Azure users.
2020-05-12 13:50:23 -07:00
Hector S
a3131e271a Removed env vars DOCKER_NETWORK_OPTIONS and INSECURE_REGISTRY from docker.service.j2 (#6126) 2020-05-12 13:46:21 -07:00
Anton Kulikov
ed12936be2 Add missing RBAC rule #6116 (#6121) 2020-05-11 04:25:51 -07:00
Florian Ruynat
7c00ce5f30 Update metrics-server tag and template (#6090) 2020-05-11 03:55:50 -07:00
Florian Ruynat
c87bd53352 Update calico to 3.14.0 (#6120) 2020-05-11 03:51:51 -07:00
Andrew DeMaria
af1c93cdfc Add option to expose metrics on separate port (#6092) 2020-05-10 12:21:51 -07:00
petruha
9ce7fc9b2c Create namespace when dashboard deployment uses customized namespace. (#6107)
* Create namespace when dashboard deployment uses customized namespace.

* Fix syntax.
2020-05-10 11:38:02 -07:00
Florian Ruynat
b6243bfc1c Fix ImagePullPolicy missing variable usage (#6091) 2020-05-10 11:37:50 -07:00
Maxime Guyot
21ea079896 Disable OVH CI (#6114) 2020-05-09 15:19:50 -07:00
Florian Ruynat
93579773d6 Cleanup kubernetes 1.15.x hashes (and references) as it has now reached EOL (#5876) 2020-05-09 12:19:50 -07:00
Florian Ruynat
0bd23f720d Fix docker fedora packages (#6097) 2020-05-08 15:39:51 -07:00
Florent Monbillard
dca3bf0e80 Fix first etcd member exclusion in host group pattern (#6109) 2020-05-08 15:31:51 -07:00
Florian Ruynat
c605a05c6b Update coredns to 1.6.7 (#6086) 2020-05-08 12:07:51 -07:00
Florian Ruynat
c44f13114f Allow containerd runtime with fedora os (30/31) - add CI test (#6094) 2020-05-08 07:55:43 -07:00
lukasz bielinski
ef7076e36f fix expected str instance, float found #6078 (#6103) 2020-05-08 05:57:42 -07:00
Florent Monbillard
324106e91e Remove Kubernetes <1.16 conditionals (#6088) 2020-05-08 00:45:43 -07:00
Florent Monbillard
218b2a5992 Workaround about inconsistent CRI-O YUM repo path on Kubic repos (#6101) 2020-05-07 12:59:42 -07:00
Florian Ruynat
61e7afa9f0 Fix some typos and outdated docs (#6071) 2020-05-06 11:17:25 -07:00
Victor Morales
367566adaa Fix kubernetes-dashboard template indentation (#6066)
The 98e7a07fba commit updates the
dashboard version to 2.0.0, but the enable-skip-login flag wasn't
updated. This change fixes its indentation to avoid issues when
dashboard_skip_login is enabled.
2020-05-06 11:17:17 -07:00
Florian Ruynat
c06f482901 Update default kubernetes version to 1.18.2 (#6064) 2020-05-06 11:17:09 -07:00
Florian Ruynat
965fe1db94 Update cni spec to 0.4.0 for network plugin allowing it (#6053) 2020-05-06 11:13:09 -07:00
Florian Ruynat
f6be326feb Update kube-ovn to 1.1.1 (#6060) 2020-05-06 11:05:09 -07:00
Michael Sheinberg
c58e5e80ce Bump pypy to 7.3.1, verify hash (#6070)
As of pypy 7.3.0, we can utilize the official pypy project as opposed to
the previously used "portable-pypy" distribution.
2020-05-06 04:49:08 -07:00
Maxime Guyot
641a2a8bb4 Skip molecule tests for Ubuntu 18.04 (#6077) 2020-05-05 07:17:09 -07:00
Florian Ruynat
7d497e46c5 Update calico to 3.13.3 (#6061) 2020-05-04 08:56:26 -07:00
Kenichi Omichi
d414588a47 Azure: Rename apply-rg_2.sh to apply-rg.sh (#6049)
apply-rg.sh was for Azure CLI version 1 (the "azure" command); that
command is old, and version 2 (the "az" command) is officially used today.
apply-rg_2.sh was for version 2. In addition, the README [1] says
we need to run apply-rg.sh to apply the templates.

This renames apply-rg_2.sh to apply-rg.sh for common usage of the
version 2 CLI.

[1]: https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/azurerm#generating-and-applying
2020-05-03 12:42:26 -07:00
Florian Ruynat
79de8ff169 Replace "replicas" option (CI tests) removed in latest k8s versions (#6068) 2020-05-03 12:36:34 -07:00
Florian Ruynat
38daee41d5 Reorder tests in packet file (#6067) 2020-05-03 12:36:26 -07:00
Florian Ruynat
f8f55bc413 Update cilium to 1.7.3 (#6069) 2020-05-03 12:32:26 -07:00
Maxime Guyot
7457ce7f2d Update Kubespray CI image to v2.13.0 (#6062) 2020-05-02 00:56:26 -07:00
Maxime Guyot
01dbc909be Make Vagrant CI use unsafe I/O (#6058) 2020-05-01 07:30:29 -07:00
Kenichi Omichi
0512c22607 Update contrib/azurerm/README.md (#6057)
ansible-playbook needs to log in to Azure virtual machines over SSH with
an SSH keypair, and users need to set ssh_public_keys to their own
SSH public key. Changing ssh_public_keys is mandatory.
So this updates contrib/azurerm/README.md to explain that.
In addition, the path of all.yml was wrong. That is also updated with
this.
2020-04-30 23:46:12 -07:00
Kenichi Omichi
f0d5a96464 Update deprecated command in azure script (#6056)
apply-rg_2.sh uses 'az group deployment' command but the command is
deprecated like the following warning message:

"This command is implicitly deprecated because command group
 'group deployment' is deprecated and will be removed in a future release.
 Use 'deployment group' instead."

This updates these deprecated commands.

FYI: The command has been deprecated since [1] on azure-cli side.
[1]: 991cb7cc7c (diff-2057bbb8441166e4910b34b09d22b58cR222)
2020-04-30 23:46:06 -07:00
Florian Ruynat
361645e8b6 Fix multus missing cni and erroneous CI tests (#6051) 2020-04-30 23:38:05 -07:00
Maxime Guyot
353d44a4a6 Add CI var for http_proxy (#6039) 2020-04-30 05:44:17 -07:00
qvicksilver
680aa60429 Specify tag for OpenStack Cloud Controller image (#6048) 2020-04-30 02:02:17 -07:00
Kenichi Omichi
6b3cf8c4b8 Update azure with az command (#6042)
As shown on the download page [1], the command name is "az", not "azure". This
replaces the "azure" command with the "az" command to fix it.
In addition, "az account list-locations" is the correct command line to
list the available locations, as in [2].

[1]: https://docs.microsoft.com/en-gb/azure/xplat-cli-install
[2]: https://docs.microsoft.com/en-us/cli/azure/account?view=azure-cli-latest#az-account-list-locations
2020-04-30 00:00:26 -07:00
qvicksilver
e41766fd58 Fix broken Octavia integration in OpenStack External Cloud Provider (#6046) 2020-04-29 11:30:25 -07:00
Maxime Guyot
e4c820c35e Add molecule tests to containerd role (#6037) 2020-04-29 09:08:25 -07:00
Joel Seguillon
db5f83f8c9 update dashboard access doc for 2.0.x (#6036)
* update dashboard access doc for 2.0.x

* make metrics scrapper system-cluster-critical
2020-04-29 07:20:25 -07:00
Maxime Guyot
412d560bcf Add CI for 16x ubuntu servers (#6040) 2020-04-29 07:14:24 -07:00
Florian Ruynat
a468954519 Fix default value for standalone tests (#6043) 2020-04-29 06:34:24 -07:00
Lee Spottiswood
a3d3f27aaa allow dns autoscaler limits to be specified via variables (#6020) 2020-04-28 23:34:25 -07:00
Florian Ruynat
72b68c7f82 Update spray version in ci/dockerfile (#6041) 2020-04-28 23:26:25 -07:00
Maxime Guyot
28333d4513 Fix crio runc path on Ubuntu (#6035) 2020-04-28 05:28:06 -07:00
Florent Monbillard
ed8c0ee95a Add EppO to the reviewers group (#6034) 2020-04-28 11:21:09 +03:00
Hugo Blom
724a316204 Cinder-CSI default storageclass and volumeBindingMode (#6026)
* Set volumeBindingMode in cinder CSI template (#22)

* make sure true/false is lowercase in cinder-csi storageclass
2020-04-28 00:12:04 -07:00
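An illustrative StorageClass matching the change above — marked as default and with an explicit volumeBindingMode; the name and the chosen mode are assumptions, since Kubespray templates the real values from inventory variables:

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: cinder-csi                                       # placeholder name
    annotations:
      storageclass.kubernetes.io/is-default-class: "true"
  provisioner: cinder.csi.openstack.org
  volumeBindingMode: WaitForFirstConsumer                  # or Immediate, per the new template variable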
marcosfsch
d70cafc1e4 vagrant: Add Flatcar images (#6029) 2020-04-28 00:08:05 -07:00
Hugo Blom
18c8e0a14a rename mitogen playbook inside makefile (#6025) 2020-04-27 01:13:29 -07:00
Florian Ruynat
3ff6a2e7ff Update default (erroneous) backend value for calico (#6031) 2020-04-27 00:03:39 -07:00
Florian Ruynat
1ee3ff738e Add option to enable usage reports to calico servers (#6030) 2020-04-27 00:03:30 -07:00
Qasim Sarfraz
52edd4c9bc Fix liveness probe for cilium operator (#6016) 2020-04-26 23:59:29 -07:00
Samuel Liu
d8345c5eae MetalLB IP address range extension (#6023)
* MetalLB IP address range extension

* MetalLB IP address range extension
2020-04-26 23:55:28 -07:00
Joel Seguillon
98e7a07fba bump to dashboard 2.0.0 with metrics scrapper support (#5821)
* bump to dashboard 2.0 rc6 with metrics scrapper

* fix missing yaml separator making Replicaset complain about a missing ServiceAccount

* remove an unwanted legacy gross hack that was forgotten before

* no need for namespace on CrBinding

* bump to 2.0.0 release

* remove dashboard_metrics_scrapper_enabled
2020-04-25 03:55:28 -07:00
Pasquale Toscano
3d5988577a Support Cilium from version 1.5 (#6006) 2020-04-24 06:00:10 -07:00
Sergey
69603aed34 add strategy mitogen_linear when installed mitogen (#5985)
* add strategy mitogen_linear when installed mitogen

* add small docs

Rename playbook file

The raw action executes as a regular Mitogen connection, which requires Python on the target, so add strategy: linear to bootstrap-os role playbook.

* add mitogen to  CI test
fix typo

* enable mitogen test on deploy-part1 tests
change version from master to release
download tar.gz archive

* run all CI tests with mitogen

* disable mitogen with upgrade CI tests

* enable mitogen on CI tests via env vars

* disable mitogen on CI test by default, enable on some different OS

* disable mitogen CI test on centos8
(get error  /usr/bin/python: No such file or directory)
2020-04-24 05:20:07 -07:00
Florian Ruynat
299e35ebe4 Cleanup unused/erroneous variables (#6003) 2020-04-24 01:54:07 -07:00
Maxime Guyot
6674be2572 Cleanup Vagrant VMs before molecule and vagrant CI (#6009) 2020-04-24 01:30:07 -07:00
spaced
cf1566e8ed Centos, debian and fedora CRI-O repo (#6008)
* replace removed repo with kubic repository for centos 7

* add crio configuration for centos8

* add crio configurations for debian

* use correct crio version for fedora

* simplify calculation of required crio version
- gives the possibility to override it

* change default path for runc

* change default for seccomp path

* change default for conmon
2020-04-24 01:18:07 -07:00
Maxime Guyot
c6d91b89d7 Update CONTRIBUTING.md (#6012) 2020-04-23 14:36:06 -07:00
Maxime Guyot
b44f7957d5 Update CI matrix (#6010) 2020-04-23 09:51:11 -07:00
Sergey
aead0e3a69 bump minimal ansible version to 2.8.0 (#5984)
* bump minimal ansible version to 2.8.0

* check ansible version in separate playbook
2020-04-22 13:33:44 -07:00
spaced
b0484fe3e5 Ubuntu crio repo (#5994)
* declare kubic repo for ubuntu

* do not install crictl twice

* move fedora repo modular tasks to crio_repo file

* move centos repo tasks to crio_repo

* declare crio version matrix for ubuntu

* update documentation crio support for ubuntu
2020-04-22 13:29:45 -07:00
Florian Ruynat
b8cd9403df Fix nginx template missing latest changes (#6000) 2020-04-22 08:41:52 -07:00
Florent Monbillard
d7df577898 k8s-dns-node-cache 1.15.12 was released (#5999) 2020-04-22 07:43:53 -07:00
Maxime Guyot
09bccc97ba Add CRI-O CI (#5460) 2020-04-22 06:09:52 -07:00
Florian Ruynat
1c187e9729 Downgrade coredns to 1.6.5 due to upgrade errors while migrating coredns configmap (Corefile) (#5960) 2020-04-22 05:27:52 -07:00
Maxime Guyot
8939196f0d Verify apiserver version in CI (#5918) 2020-04-21 12:31:53 -07:00
Kenichi Omichi
15be42abfd Update path of all.yml on Azure README (#5993)
The cloud_provider option exists in ./inventory/sample/group_vars/all/all.yml.
In addition, the quick start shows creating the configuration by copying
./inventory/sample. So this updates the path of all.yml to fit the above.
2020-04-21 07:21:04 -07:00
Florian Ruynat
ca45d5ffbe Fix retries keyword missing until instruction (#5989) 2020-04-21 07:20:56 -07:00
Victor Morales
2bec26dba5 Add proxy support to CRI-O service (#4607)
* Add proxy support to CRI-O service

The crio.service requires proxy environment variables when it's
deployed behind a corporate network. This change creates a systemd
configuration file when the proxy variables are defined.

* Remove unnecessary crio tasks
2020-04-21 04:12:55 -07:00
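A minimal sketch of the systemd drop-in approach described above, written as an Ansible task (the drop-in path, handler name and use of the standard proxy variables are assumptions in the spirit of the commit):

  - name: Configure proxy environment for crio.service
    copy:
      dest: /etc/systemd/system/crio.service.d/http-proxy.conf
      content: |
        [Service]
        Environment="HTTP_PROXY={{ http_proxy | default('') }}"
        Environment="HTTPS_PROXY={{ https_proxy | default('') }}"
        Environment="NO_PROXY={{ no_proxy | default('') }}"
    when: http_proxy is defined or https_proxy is defined
    notify: restart crio                 # assumed handler name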
Pierre Lebrun
03c8d0113c Add vSphere external cloud provider (#5959) 2020-04-20 08:47:39 -07:00
Lovro Seder
536606c2ed Fix kube-proxy ds win nodeselector check for 1.17 (#5982)
* Fix kube-proxy ds nodeselector for older versions

* Fix for ansible-lint
2020-04-20 08:43:39 -07:00
Sergey
6e29a47784 generate flannel manifest only on first master (#5983) 2020-04-20 01:33:38 -07:00
Florian Ruynat
826a440fa6 Add floryut to reviewers (#5979) 2020-04-19 22:53:38 -07:00
Sergey
baff4e61cf remove image flannel cni (#5980) 2020-04-19 06:13:37 -07:00
Maxime Guyot
4d7eca7d2e Add Dockerfile for vagrant image (#5977) 2020-04-18 13:53:36 -07:00
Florian Ruynat
32fec3bb74 Update minor version for tools (helm, busybox, registry etc...) (#5961) 2020-04-18 07:59:36 -07:00
Maxime Guyot
3134dd4c0d Drop support for Fedora 28 and add Fedora 30 and 31 (#5969) 2020-04-18 06:35:36 -07:00
Maxime Guyot
56a9c7a802 Add Vagrant CI (#5487) 2020-04-18 06:09:35 -07:00
Maxime Guyot
bfa468c771 Ensure upgrade CI jobs are named correctly (#5909) 2020-04-18 06:05:36 -07:00
Sergey
6318bb9f96 Return the ability to start control plain from the hyperkube image (#5422) 2020-04-18 05:59:36 -07:00
Florian Ruynat
8618a3119b Fix selector check for windows (#5974) 2020-04-18 00:41:35 -07:00
Lovro Seder
27a268df33 Gather just the necessary facts (#5955)
* Gather just the necessary facts

* Move fact gathering to separate playbook.
2020-04-17 16:23:36 -07:00
Victor Morales
7930f6fa0a Ensure /etc/sysconfig/proxy for openSUSE bootstrap (#5445)
The playbook that bootstraps openSUSE servers assumes that the
/etc/sysconfig/proxy file exists, but the execution fails when
this file is not present. This change guarantees its existence.
2020-04-17 14:23:35 -07:00
Florian Ruynat
49bd208026 Update hashes (1.18.2/1.17.5/1.16.9) and set default to 1.17.5 (#5967) 2020-04-17 06:55:07 -07:00
Florian Ruynat
83fe607f62 Cleanup deprecated labels beta.kubernetes.io/arch and beta.kubernetes.io/os (#5964) 2020-04-17 05:51:06 -07:00
Kenichi Omichi
ea8b799ff0 Update link to deprecated repository (#5965)
https://github.com/colemickens/azure-kubernetes-status is deprecated
and will be removed soon, as its README says.
So this updates the link to point to a new repository.
2020-04-17 04:07:07 -07:00
Rishi
e2d6f8d897 Update packet.md (#5963)
The Terraform installation part states that it is for CentOS 7, but the echo command refers to the OS X binary. Updated the echo command to use the Linux version.
2020-04-16 13:07:07 -07:00
Maxime Guyot
0924c2510c Use role to copy CNI bin (#5953) 2020-04-16 10:06:45 -07:00
qvicksilver
065292f8a4 Terraform/OpenStack: Allow free form worker node definition (#5952)
* Terraform/OpenStack: Allow free form worker node definition

* fixup! Terraform/OpenStack: Allow free form worker node definition
2020-04-16 07:52:45 -07:00
Sergey
35f248dff0 assembly fallback_ips and no_proxy var only one time on localhost and… (#5957)
* assemble fallback_ips and no_proxy vars only once on localhost and populate the result on all hosts

* add tag always, fix ansible lint errors

* workaround to mitogen issue dw/mitogen#663

* do not gather fact before install python on coreos like distros

* try to pass docker molecule test
2020-04-16 07:22:47 -07:00
Lovro Seder
b09fe64ff1 Calculate inventory list only once (#5956) 2020-04-16 06:12:45 -07:00
Florent Monbillard
54debdbda2 Generate unique username per cluster in client kubeconfig (#5943)
* Generate unique username per cluster

* rename admin kubeconfig shell output to raw_admin_kubeconfig

* Make the linter happy

* Fix lint errors

* Cleaning up tasks
2020-04-16 05:32:45 -07:00
aharrisson
b6341287bb Add Molecule to Docker role (#5129)
* Add Molecule for container-engine/docker

* Add bootstrap-os to Molecule prepare stage
2020-04-15 23:28:45 -07:00
Florian Ruynat
6a92e34994 Update tests names (#5904) 2020-04-15 09:24:03 -07:00
Pasquale Toscano
00efc63f74 Customize PodSecurityPolicies from inventory (#5920)
* Customize PodSecurityPolicies from inventory

* Fixed yaml indentation
2020-04-15 03:18:02 -07:00
Ryler Hockenbury
b061cce913 Allow configureable vni and port for flannel overlay (#5939) 2020-04-15 03:14:02 -07:00
Florian Ruynat
c929b5e82e Upgrade kube-ovn to v1.1.0 and move test from centos7 to centos8 (#5852) 2020-04-15 03:10:03 -07:00
Florian Ruynat
58f48500b1 Update Flannel manifests, install script and version (0.12) + fix tests scripts (#5937)
* Add CI_TEST_VARS to tests

* Update flannel to 0.12.0 (with new manifests) and disable tx/rx
offloading in networking test
2020-04-14 23:48:02 -07:00
Florian Ruynat
b5125e59ab update rbac.authorization.k8s.io to non deprecated api-groups (#5517) 2020-04-14 13:14:04 -07:00
Christopher Randles
d316b02d28 else condition required otherwise AnsibleUndefinedVariable is triggered (#5722) 2020-04-14 07:06:12 -07:00
MikeG
7910198b93 fix error in templating in local-path-provisioner (#5950) 2020-04-14 06:52:12 -07:00
Carlos Tolon
7b2f35c7d4 Update vars.md (#5947)
Add the `container_manager` variable as a Cluster Variable in the global Docs
2020-04-13 23:11:10 -07:00
Florian Ruynat
45874a23bb Remove 1.16.x flag for packet_centos7-weave-kubeadm-sep (#5907) 2020-04-11 00:15:48 -07:00
spaced
9c3b573f8e Cleanup fedora coreos with crio container (#5887)
* fix upgrade of crio on fcos
- update documents

* install conntrack required by kube-proxy
- like commit 48c41bcbe7

* enable fedora modular repo for crio

* allow to override crio configuration
- set cgroup manager same to kubelet_cgroup_driver if defined
- path of seccomp_profile depends on distribution

* allow to override crio configuration
- fix path for ubuntu

* allow to override crio configuration
- fix cni path for fcos
2020-04-10 23:51:47 -07:00
Pasquale Toscano
7d6ef61491 Fix metallb speaker when podsecuritypolicy_enabled=true (#5932) (#5933) 2020-04-10 23:48:03 -07:00
Florian Ruynat
6a7c3c6e3f Upgrade terraform version to 0.12.24 (#5928) 2020-04-10 23:47:56 -07:00
Chris
883194afec Fix Cilium permissions (#5923)
* added required permissions for querying endpointslice resources

* copy-pasted role permissions from cilium install manifests

* bumped cilium version to v1.7.2
2020-04-10 23:47:48 -07:00
Sergey
3a63aa6b1e downgrade nodelocaldns version due to a bug flooding the error log (#5931)
https://github.com/kubernetes/kubernetes/issues/90043
2020-04-10 23:41:55 -07:00
Sergey
337499d772 Remove hashes only for EOL version in RELEASE cycle. (#5924) 2020-04-10 23:41:47 -07:00
Florian Ruynat
82123f3c4e Upgrade azure csi and fix aws csi tag (#5938) 2020-04-10 17:53:47 -07:00
Sergey
8f3d820664 always download docker image on download_host when download_run_once=true (#5921) 2020-04-10 01:59:47 -07:00
Maxime Guyot
7d812f8112 Set LANG in Dockerfile (#5929) 2020-04-10 01:25:46 -07:00
Florian Ruynat
473a8beff0 Remove hard-coded dependance to docker.service in kubelet.service file (#5917) 2020-04-09 08:43:46 -07:00
Alexander Kross
0d675cdd1a Update Calico to v3.13.2, Multus to v3.4.1. Add ConfigMap get permission to allow calico-node access to kubeadm config. (#5912) 2020-04-09 07:27:43 -07:00
aharrisson
9cce46ea8c Fix idempotence issue in bootstrap-os (#5916) 2020-04-09 03:31:44 -07:00
qvicksilver
2e67289473 Terraform/OpenStack: Fix idempotency bug in module.network.openstack_networking_router_interface_v2.k8s[0] (#5914) 2020-04-09 02:27:44 -07:00
Florian Ruynat
980aeafebe Add kubernetes 1.18.1 hashes (#5915) 2020-04-09 01:53:43 -07:00
Denis Kadyshev
7d1ab3374e Proxy fixes (#5869)
* Fix proxy and module_hotfixes

On CentOS 8 with a proxy configured, Ansible rendered the `proxy` and `module_hotfixes` options inline.

For example:

`proxy=http://127.0.0.1:3128module_hotfixes=True`

But expected result:

```
proxy=http://127.0.0.1:3128
module_hotfixes=True
```

* Use ini_file module for work with ini files

* Prevent duplicates proxy= option in /etc/yum.conf

The `lineinfile` module is weak here; use the more powerful `ini_file` module and add or remove `proxy=` depending on whether `http_proxy` is defined (see the sketch below).
2020-04-09 01:25:44 -07:00
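As a rough illustration of the `ini_file` approach described in the commit above, a minimal task could look like the sketch below; the `http_proxy` variable name and the `/etc/yum.conf` layout are assumptions for illustration, not Kubespray's exact implementation.

```yaml
# Sketch only: keep a single proxy= entry in /etc/yum.conf via ini_file.
# http_proxy is an assumed variable name; adjust to the inventory's actual proxy settings.
- name: Add or remove proxy= in /etc/yum.conf
  ini_file:
    path: /etc/yum.conf
    section: main
    option: proxy
    value: "{{ http_proxy | default(omit) }}"
    state: "{{ 'present' if http_proxy is defined else 'absent' }}"
    no_extra_spaces: true
```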
Florian Ruynat
01b9b263ed Remove 1.16.x flag for tf-ovh_coreos-calico (now 1.17 ready) (#5853) 2020-04-08 10:57:44 -07:00
Alexander Kross
c33a049292 Update docker RHEL/CentOS versions to the latest patch versions available. (#5872) 2020-04-08 10:09:45 -07:00
Maxime Guyot
7eaa7c957a Fix conntrack for opensuse and docker support (#5880) 2020-04-08 07:37:44 -07:00
Florian Ruynat
f055ba7965 Add crictl 1.18.0 hashes for k8s 1.18 (#5877) 2020-04-08 02:19:43 -07:00
spaced
157c247563 fix readonly flexvolume in fcos and coreos (#5885) 2020-04-08 01:41:43 -07:00
Etienne Champetier
a35b6dc1af Fix scaling (#5889)
* etcd: etcd-events doesn't depend on etcd_cluster_setup

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

* etcd: remove condition already present on include_tasks

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

* etcd: fix scaling up

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

* etcd: use *access_addresses, do not delegate to etcd[0]

We want to wait for the full cluster to be healthy,
so use all the cluster addresses
Also we should be able to run the playbook when etcd[0] is down
(not tested), so do not delegate to etcd[0]

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

* etcd: use failed_when for health check

unhealthy cluster is expected on first run, so use failed_when
instead of ignore_errors to remove scary red messages

Also use run_once

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

* kubernetes/preinstall: ensure ansible_fqdn is up to date after changing /etc/hosts

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

* kubernetes/master: regenerate apiserver cert if needed

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
2020-04-08 01:27:43 -07:00
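A minimal sketch of the `failed_when`-based health check described in the commit above might look like this; the exact `etcdctl` invocation and the `bin_dir` / `etcd_access_addresses` variable names are assumptions for illustration, not the task as merged.

```yaml
# Sketch only: tolerate an unhealthy cluster on the first run without ignore_errors noise.
# bin_dir and etcd_access_addresses are assumed variable names.
- name: Check etcd cluster health once, against all members
  command: "{{ bin_dir }}/etcdctl --endpoints={{ etcd_access_addresses }} endpoint health"
  register: etcd_cluster_health
  run_once: true
  changed_when: false
  failed_when: false   # an unhealthy cluster is expected on the very first run
  environment:
    ETCDCTL_API: "3"
```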
Alexander Kross
910a821d0b Fix chicken and egg problem with proxy_env not defined on the first … (#5896)
* Fix chicken and egg problem with proxy_env not defined on the first environment usage.

* Disable fact gathering for the first proxy_env evaluation.

* Move proxy_env var set up from the role defaults to the root playbooks as fact.
2020-04-08 00:53:43 -07:00
Joel Seguillon
2c21e7bd3a make explicit that doc is at kubespray.io (#5878) 2020-04-08 00:19:43 -07:00
MikeG
45a177e2a0 add local-path-provosioner helper image def (#5817) 2020-04-07 23:51:43 -07:00
spaced
0c51352a74 remove unused kubelet options (#5903) 2020-04-07 11:51:44 -07:00
Florian Ruynat
9b1980cfff Change docker.io repo to variable and upgrade alb image (#5898) 2020-04-07 08:07:42 -07:00
Florian Ruynat
ae29296e20 Replace latest tags for csi drivers (#5899) 2020-04-07 06:55:44 -07:00
Etienne Champetier
75e743bfae CentOS 8 CI (#5842)
* requirements.txt: Bump versions

Ansible 2.8+ allow ansible_python_interpreter autodetection

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

* tests: do not force ansible_python_interpreter

we do not expect people to set ansible_python_interpreter, so we should not set it in the CI

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

* Add CentOS 8 Calico to CI

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
2020-04-07 05:49:43 -07:00
Etienne Champetier
2f19d964f6 Bump requirements.txt versions / remove ansible_python_interpreter hack (#5847)
* requirements.txt: Bump versions

Ansible 2.8+ allow ansible_python_interpreter autodetection

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

* tests: do not force ansible_python_interpreter

we do not expect people to set ansible_python_interpreter, so we should not set it in the CI

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
2020-04-07 01:47:44 -07:00
qvicksilver
0d2990510e Terraform/OpenStack: Enable usage of an existing router (#5890) 2020-04-06 02:41:46 -07:00
Maxime Guyot
e732df56a1 Move packet_centos7-calico-ha-once-localhost to the appropriate CI stage (#5881) 2020-04-02 02:02:24 -07:00
Florian Ruynat
2f92d6bca3 Upgrade coredns to 1.6.9 (#5871) 2020-04-01 10:58:23 -07:00
Maxime Guyot
c72903e8d6 Update release policy (#5749) 2020-04-01 07:29:28 -07:00
Maxime Guyot
ded58d3b66 Add molecule test for bootstrap-os (#5845) 2020-04-01 07:25:28 -07:00
Maxime Guyot
be9414fabe Add cluster dump artifact in CI jobs (#5796) 2020-04-01 07:23:29 -07:00
Maxime Guyot
033afe1574 Fix Docker in Docker CI jobs (#5867) 2020-04-01 07:19:28 -07:00
Craig Rodrigues
c35461a005 Add checksums for v1.18.0 (#5843)
* Add checksums for v1.18.0

* Add crictl version for k8s 1.18
2020-04-01 06:41:28 -07:00
Florian Ruynat
a93421019b Upgrade ingress-nginx to 0.30.0 (#5870) 2020-04-01 03:57:28 -07:00
Maxime Guyot
4fd3e2ece7 Fix download_run_once in packet_ubuntu18-flannel-containerd-once (#5864) 2020-04-01 03:15:28 -07:00
Ali Sanhaji
937adec515 Azure Disk CSI deployment (#5833)
* Azure Disk CSI deployment

* Mention Azure CSI support

* Fix: remove unnecessary file

* Typo in documentation

* Add newline to end of file
2020-04-01 00:53:27 -07:00
matjazp
bce3f282f1 fix systemd cgroup driver for containerd (#5220) 2020-04-01 00:43:26 -07:00
Vinayaka V Ladwa
f8ad44a99f Azure vmss - kubelet: failed to get instance ID from cloud provider: instance not found #5824 (#5855)
* kubernetes-sigs-kubespray #5824

Added support for nodes which are part of Virtual Machine Scale Sets (VMSS)

* kubernetes-sigs-kubespray #5824

* kubernetes-sigs-kubespray #5824

Added comments and updated azure docs.

* kubernetes-sigs-kubespray #5824

Added supported values comments for "azure_vmtype" in azure.yml
2020-03-31 10:12:40 -07:00
Maxime Guyot
7ee2f0d918 Hide after_script output if return code is zero (#5862) 2020-03-31 05:28:40 -07:00
Maxime Guyot
9cbb373ae2 Update base CI image to v2.12.5 (#5858) 2020-03-31 01:28:40 -07:00
Ali Sanhaji
484df62c5a GCP Persistent Disk CSI Driver deployment (#5857)
* GCP Persistent Disk CSI Driver deployment

* Fix MD lint

* Fix Yaml lint
2020-03-31 00:06:40 -07:00
Anshul Sharma
79a6b72a13 Removed deprecated label kubernetes.io/cluster-service (#5372) 2020-03-30 01:19:53 -07:00
Christopher Randles
d439564a7e disable gpgcheck if gpgkey is empty (#5621)
Signed-off-by: Chris Randles <randles.chris@gmail.com>
2020-03-30 01:13:53 -07:00
Martin Zobel-Helas
b0a5f265e3 Honor bastion host config from inventory (#5522)
Before this commit, the bastion entry in the inventory was not honored,
so machines behind firewalls or with unrouted addresses were not
reachable for Ansible.
2020-03-30 01:11:53 -07:00
Mateus Caruccio
8800eb3492 Remove unicode chars from coredns template (#5848) 2020-03-27 11:39:54 -07:00
Florian Ruynat
09308d6125 Upgrade to Kubernetes 1.17.4 (#5628)
* Upgrade to Kubernetes 1.17.4 - change defaults

* Update ci jobs to previous k8s release (will fix them afterward)
2020-03-27 07:40:23 -07:00
Pierre Gaxatte
a8822e24b0 Fix terraform formatting (#5823) 2020-03-27 05:46:24 -07:00
Maxime Guyot
a60e4c0a3f Remove unused kubeadm_enabled variable (#5838) 2020-03-27 04:58:23 -07:00
Maxime Guyot
b2d740dd1f Add Ubuntu 20.04 RC image and test job (#5836) 2020-03-27 02:14:23 -07:00
Mateus Caruccio
3237b2702f Add config coredns_external_zones (#5280)
Allows adding custom zone resolving servers.
2020-03-26 23:34:23 -07:00
Craig Rodrigues
e8c49b0090 Improve curl invocation (#5844)
- make it follow redirects
- error out if an HTTP error is encountered
2020-03-26 23:12:23 -07:00
Maxime Guyot
3dd51cd648 Add moreutils in Dockerfile (#5839) 2020-03-26 13:58:23 -07:00
Maxime Guyot
e03aa795fa Move long running jobs into separate CI stage (#5837) 2020-03-26 13:56:24 -07:00
Ali Sanhaji
a8a05a21a4 AWS EBS CSI implementation (#5549)
* AWS EBS CSI implementation

* Fixing image repos

* Add OWNERS file

* Fix expressions

* Add csi-driver tag

* Add AWS EBS prefix to variables

* Add AWS EBS CSI Driver documentation
2020-03-25 13:10:25 -07:00
Xiaodu
63fa406c3c Move host_architecture to kubespray-defaults (#5811)
The variable is defined in the `kubernetes/preinstall` role and used in several roles. Since `kubernetes/preinstall` is not always included when `ansible-playbook` is run with tag selectors (see #5734 for the reason), those roles will fail, or each role must copy the same fact definitions (as in #3846). Moving the definition to the always-included `kubespray-defaults` role will resolve the dependency problem.
2020-03-25 12:58:25 -07:00
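For context, defining the fact in the always-included defaults role could look roughly like the snippet below; the architecture mapping shown is an illustrative assumption, not the exact definition that was moved.

```yaml
# Sketch only: roles/kubespray-defaults/defaults/main.yml
# Map ansible_architecture to the arch string used elsewhere; values are illustrative.
host_architecture_groups:
  x86_64: amd64
  aarch64: arm64
  armv7l: arm
host_architecture: "{{ host_architecture_groups[ansible_architecture] | default(ansible_architecture) }}"
```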
Etienne Champetier
6ad6609872 Fix certificates checking when adding etcd node to existing k8s node (#5807)
Co-authored-by: alexkomrakov <alexkomrakov@gmail.com>
2020-03-25 12:46:25 -07:00
Petr Enkov
474fbf09c4 fix wrong cilium_operator repo variable (#5819) 2020-03-25 02:17:03 -07:00
Etienne Champetier
47849b8ff7 docker: Fix docker install on CentOS/RHEL 8 (#5820)
we can't set module_hotfixes=True using yum_repository ansible module
Fixes 38688a4486
(keep docker-ce.repo name)

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
2020-03-25 01:03:03 -07:00
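Since `yum_repository` could not set `module_hotfixes` at the time, a workaround in the spirit of this fix is sketched below; the repo URL, section name, and task layout are assumptions for illustration.

```yaml
# Sketch only: create docker-ce.repo, then append the module_hotfixes flag that
# yum_repository does not support; URLs and names are illustrative.
- name: Configure docker-ce repository (keep the docker-ce.repo file name)
  yum_repository:
    name: docker-ce
    file: docker-ce
    description: Docker CE Stable
    baseurl: https://download.docker.com/linux/centos/7/$basearch/stable
    gpgcheck: yes
    gpgkey: https://download.docker.com/linux/centos/gpg

- name: Work around DNF modularity filtering on CentOS/RHEL 8
  ini_file:
    path: /etc/yum.repos.d/docker-ce.repo
    section: docker-ce
    option: module_hotfixes
    value: "True"
```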
Stephen Schmidt
0379a52f03 Fix etcd install with docker and etcd_kubeadm_enabled (#5777)
- This solves issue #5721 & #5713 (dupes)
  - Provide a cleaner default usage pattern for the download role
    around etcd that supports 'host' and 'docker' properly
  - Extract the 'etcdctl' as a separate task install piece and reuse it where
    appropriate
  - Update the kubeadm-etcd task to reflect the above change
2020-03-24 08:12:47 -07:00
Petr Enkov
bc2eeb0560 use variables for cilium-operator instead of hardcoded value (#5802) 2020-03-24 07:40:47 -07:00
Mateus Caruccio
81f07c3783 Disable IPv6 support for canal's calico-node (#5684)
This implements the same behavior as a15a0b5eb9/roles/network_plugin/calico/templates/calico-node.yml.j2

More info: https://github.com/projectcalico/felix/issues/1447
2020-03-24 07:10:49 -07:00
Pierre Gaxatte
f90926389a Fix wrong Docker ubuntu repo URL (#5815) 2020-03-24 04:36:46 -07:00
Pierre Gaxatte
dcb97e775e Fix broken internal links (#5799) 2020-03-20 15:40:44 -07:00
Etienne Champetier
096de82fd9 Fixup recover_control_plane with Ansible 2.9 (#5806)
Tests as filters support is removed as of Ansible 2.9
https://docs.ansible.com/ansible/latest/porting_guides/porting_guide_2.5.html#jinja-tests-used-as-filters

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
2020-03-20 14:22:06 -07:00
eddebc
db693d46df Fixed an issue where without a default GW, ansible_default_ipv4 would return an empty dict which passed as a valid fallback_ip dict item (#5394) 2020-03-20 14:06:09 -07:00
Sergey
b8d628c5f3 rename handler to fix ansible 2.8 issue (#5801) 2020-03-20 13:54:08 -07:00
Etienne Champetier
0aa22998e2 Bump node local dns version to 1.15.11 (#5805)
k8s-dns-node-cache now uses debian-iptables base images
to automatically use either iptables-legacy or iptables-nft
https://github.com/kubernetes/dns/pull/355
https://github.com/kubernetes/kubernetes/pull/82966

This adds support for CentOS 8

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
2020-03-20 13:44:09 -07:00
Maxime Guyot
afe047a77f Add documentation for scripts/openstack-cleanup (#5803) 2020-03-20 13:18:06 -07:00
Maxime Guyot
1ae794e5e4 Add script to cleanup old gitlab branches (#5795) 2020-03-20 13:16:06 -07:00
Maxime Guyot
a7a204ebca Add kube_encryption_resources variable to configure which resources are encrypted at rest (#5797) 2020-03-20 04:14:36 -07:00
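An illustrative group_vars usage of the new variable could look like the snippet below; only `kube_encryption_resources` comes from the commit title, while the companion toggle and the resource list are assumptions.

```yaml
# Sketch only: group_vars snippet; the resource list is illustrative.
kube_encrypt_secret_data: true          # assumed companion toggle for encryption at rest
kube_encryption_resources:
  - secrets
  - configmaps
```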
Maxime Guyot
8774d7e4d5 Fix ERROR! the playbook: tests/testcases/020_check-nodes-ready.yml could not be found (#5798) 2020-03-20 01:14:35 -07:00
Maxime Guyot
34e51ac1cb Add a test to check that nodes are Ready (#5793) 2020-03-19 04:09:14 -07:00
nmr
d152dc2e6a Update prep_download.yml (#5791)
Fix check if user can use docker without sudo.
2020-03-18 13:30:44 -07:00
spaced
8ce5a9dd19 remove atomic support because reached end of live (#5783) 2020-03-17 14:31:27 -07:00
Bjoern Teipel
820d8e6ce6 Adding new registry_port option (#5779)
A new override is added to allow installation of the registry
on ports other than ``5000``. The default port is unchanged
from previous versions.
2020-03-17 05:52:22 -07:00
bozzo
3cefd60c37 Add OWNERS file for kube-router (#5782)
I also offer my help as a reviewer
2020-03-17 04:14:22 -07:00
spaced
876d4de6be Fedora CoreOS support (#5657)
* fedora coreos support
- bootstrap and new fact for

* fedora coreos support
- fix bootstrap condition

* fedora coreos support
- allow customize packages for fedora coreos bootstrap

* fedora coreos support
- prevent installing python3 and epel via dnf for fedora coreos

* fedora coreos support
- handle all ostree like os in same way

* fedora coreos support
- handle all ostree like os in same way for crio

* fedora coreos support
- add fcos documentations
2020-03-17 03:12:21 -07:00
bozzo
974902af31 Update Kube-router version to v0.4.0 (#5756) 2020-03-17 02:40:21 -07:00
MengZeLee
45626a05dc fix pip requirements version (#5174)
Because using the Python program to create the inventory raises an error, this changes the pip package versions used to install the Kubespray requirements.
2020-03-16 05:10:35 -07:00
Pasquale Toscano
4b5299bb7a Add variables to configure Containerd default runtime, untrusted runt… (#5497)
* Add variables to configure Containerd default runtime, untrusted runtime and additional runtimes

* Add containerd settings to sample inventory

* Empty commit
2020-03-16 03:48:36 -07:00
Yujun Zhang
ceab27c97a Add OWNERS file for recover_control_plane (#5505)
Related to #5432
2020-03-16 03:46:35 -07:00
Sergey
03d1b56a8f fix check exists download cache (#5776) 2020-03-16 03:34:35 -07:00
keyboardfann
64190dfc73 Fix deploy heketi show selector missing error. (#5738) 2020-03-16 03:32:36 -07:00
Michael Shnit
29128eb316 Add AWS ALB Ingress Controller (#5489)
* Add AWS ALB Ingress Controller Ansible role

* remove trailing spaces

* update owners

* ALB ingress: update rbac clusterrole and remove role

* Move alb-ingress role to roles/kubernetes-apps/ingress_controller folder
2020-03-16 02:58:35 -07:00
Yujun Zhang
ea9f8b4258 Add document about adding/replacing a node (#5570)
* Add document about adding/replacing a node

* Update nodes.md

Amend for comments
2020-03-15 03:32:34 -07:00
Sergey
1cb03a184b kubernetes 1.15.11 (#5775) 2020-03-14 07:16:34 -07:00
hfinucane
158d998ec4 Support configuring the Calico iptables insert mode (#5473)
* Support configuring the insert mode

Defaults to the upstream default https://docs.projectcalico.org/v3.9/reference/felix/configuration

so nothing should change for existing deployments.

This allows coexistence with other firewall management technologies.

* Add a note to the sample config
2020-03-14 06:36:35 -07:00
Cédric de Saint Martin
168241df4f Python bootstrap: upgrade pypy to 3.6-7.2.0. (#5511)
Solves problem with mitogen about 'Compress object has no attribute copy' in zlib module.
2020-03-14 06:32:35 -07:00
Sander Cornelissen
f5417032bf Merge OracleLinux in RedHat bootstrap-os (#5575)
* Merge OracleLinux in RedHat bootstrap-os

* Set default for use_oracle_public_repo in main.yaml
2020-03-14 06:28:34 -07:00
bozzo
d69db3469e Add external zones in nodelocaldns configuration (#5591)
Allows configuring additional zones for domains not resolved by `upstream_dns_servers`.
2020-03-14 06:26:34 -07:00
Xiaodu
980a4fa401 Add docker-ce 19.03 packages for Debian & Ubuntu (#5729)
* Add docker-ce 19.03 packages for Debian & Ubuntu

K8s has updated the recommended Docker version to 19.03. More
specifically it should be 19.03.4, but since we used 18.06.7 instead of
.2, I'm assuming the latest patch version should be used here as well.

* Add docker 19.03 for redhat
2020-03-14 06:24:35 -07:00
Florent Monbillard
027e2e8a11 Update CoreDNS to 1.6.7 (#5761) 2020-03-14 04:20:34 -07:00
Maxime Guyot
dcfda9d9d2 Change python crypto module from pycrypto to cryptography (#5769) 2020-03-14 03:30:34 -07:00
Florent Monbillard
ca73e29ec5 Use k8s.gcr.io for kubernetes related images (#5764)
* Use k8s.gcr.io for kubernetes related images

* Use k8s.gcr.io in inventory sample
2020-03-13 14:41:48 -07:00
Florent Monbillard
0330442c63 Kubernetes 1.16.8 (#5770)
* Kubernetes 1.16.8

* Use 1.16.8 in sample inventory and kubespray-defaults
2020-03-13 13:41:47 -07:00
Maxime Guyot
221c6a8eef Use a separate runner for light CI jobs (#5771) 2020-03-13 20:29:22 +03:00
Florent Monbillard
25a1e5f952 Include etcd image repository when using kubeadm etcd deployment mode (#5725) 2020-03-13 10:28:39 -07:00
Maxime Guyot
38df80046e CI inventory should start at 1 instead of 0 (#5763) 2020-03-13 10:22:39 -07:00
Nakahara, Kohei
57bb7aa5f6 Fix delete nodes task (#5747) 2020-03-13 08:36:40 -07:00
Florian Ruynat
86996704ce remove unused crictl hashes (#5754) 2020-03-13 06:56:40 -07:00
Joel Seguillon
f53ac2a5a0 Update metrics addon for 1.16 (#5706)
* upgrade metrics server and resizer images version

* scope "apps" api group for addon resizer
2020-03-13 06:46:40 -07:00
Hugo Blom
d0af5979c8 install csi-driver not just cinder (#5766) 2020-03-13 05:34:39 -07:00
Qingkun Li
43020bd064 Fix the command for kube-proxy cleanup (#5671) 2020-03-13 05:32:39 -07:00
Danilo Riecken P. de Morais
dc00b96f47 Add missing Coreos OS family string (#5759) 2020-03-13 04:24:39 -07:00
Christopher Randles
71c856878c update multus to 3.4 and add crio support (#5701)
Signed-off-by: Chris Randles <randles.chris@gmail.com>
2020-03-13 04:22:39 -07:00
Maxime Guyot
19865e81db Add OWNERS file for OpenStack CSI driver and cloud controller (#5753) 2020-03-13 02:52:39 -07:00
Maxime Guyot
a4258b1244 Add automatic cleanup of OpenStack CI VMs (#5760) 2020-03-12 15:12:39 -07:00
dymq
e0b76b185a Failover for adding proxy when line exists in file (#5751)
The 'regexp' parameter matches the last occurrence of a line starting with 'proxy=' and replaces it with the one defined in the 'line' parameter; if there is no match, it works the same way as before (see the sketch below). This fixes resuming cluster deployments that failed after that task, e.g. if they were initiated with Terraform, provided there is no more than one line starting with 'proxy' in the yum.conf file (a condition that should also be reassured by the change introduced here).
2020-03-12 15:08:39 -07:00
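A minimal sketch of the `lineinfile` behaviour this commit relies on, with assumed variable names: the last line matching the regexp is replaced, and the line is only appended when nothing matches.

```yaml
# Sketch only: ensure a single proxy= line in /etc/yum.conf.
# lineinfile replaces the last match of regexp, or appends the line if there is no match.
- name: Set proxy in /etc/yum.conf
  lineinfile:
    path: /etc/yum.conf
    regexp: '^proxy='
    line: "proxy={{ http_proxy }}"   # http_proxy is an assumed variable name
  when: http_proxy is defined
```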
Xiaodu
c47f441b13 fix kube-proxy server address when local apiserver lb is disabled (#5730)
refs #5277

As the issue describes, when no external or local load balancer is used,
kube-proxy won't be able to contact the apiserver at 127.0.0.1, so the
config map should be left as is.
2020-03-12 10:40:39 -07:00
Maxime Guyot
7c854a18bb Enable retries on SSH error during CI (#5755) 2020-03-12 10:10:39 -07:00
Florent Monbillard
8df2c0a7c6 Upgrade CNI plugins to 0.8.5 (#5717) 2020-03-12 07:22:38 -07:00
Sergey
e60b9f796e add calico VXLAN mode, update docs and vars in sample inventory (#5731)
* calico VXLAN mode

* check vars if calico backend defined
2020-03-12 01:20:37 -07:00
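Selecting the VXLAN backend from the inventory would plausibly look like the snippet below; the variable and value names are shown as assumptions, and the sample inventory remains the authoritative reference for the actual options.

```yaml
# Sketch only: group_vars snippet enabling the Calico VXLAN backend (names assumed).
calico_network_backend: vxlan
```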
Florent Monbillard
2c8bcc6722 Upgrade etcd to 3.3.12 (#5718)
* Upgrade etcd to 3.3.18

* Try with etcd 3.3.15 (kubeadm 1.16.7 default)

* Back to square one

* Try with 3.3.11

* Upgrade etcd to 3.3.18 (take 2)

* Try with 3.3.12
2020-03-11 08:25:38 -07:00
Fredrik Lönnegren
e257d92f41 Cilium updates (#5438)
* Add resources needed to deploy 1.6.4

* Use cilium v1.6.4

* Change deprecated option name

* Add update crd to clusterrole cilium

* Cilium 1.6.4 -> 1.6.5

* Make monitor-aggregation config configurable as a variable

* Change monitor-aggregation default none->medium

* Cilium 1.6.5 -> 1.6.6

* Update to 1.7.0

* v1.7.0->v1.7.1
2020-03-11 08:15:36 -07:00
Hugo Blom
f697338eec [Openstack] Install Cinder-CSI before first node is schedulable again (#5735)
* install cinder-csi before upgrading nodes

* Only run the Cinder CSI when enabled
2020-03-11 06:31:36 -07:00
Etienne Champetier
e2ec7c76a4 containerd: bump to 1.2.13 (#5727)
https://github.com/containerd/containerd/releases/tag/v1.2.11
CVE-2019-16884 / CVE-2019-17596

https://github.com/containerd/containerd/releases/tag/v1.2.12
CVE-2019-19921 / CVE-2019-16884 / CVE-2019-11253

https://github.com/containerd/containerd/releases/tag/v1.2.13

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
2020-03-11 05:39:36 -07:00
Lovro Seder
058d101bf9 Escape dots in jsonpath keys. (#5600)
+ use more secure `command` instead of `shell`
+ read-only command doesn't change state - make idempotent
+ multi-line long string
2020-03-11 05:17:36 -07:00
Hugo Blom
833794feef [Openstack] Cleanup the old in-tree openstack cloud provider (#5742)
* Added playbook to migrate openstack cloud provider

* remove old cloud provider config

* Rewrite provisioned-by annotation on Cinder PVs

* update indents

Co-authored-by: Jonathan Süssemilch Poulain <jonathan@sofiero.net>
2020-03-11 05:13:36 -07:00
Hugo Blom
a3dedb68d1 [Openstack] Make it possible to apply the new cloud provider during upgrade (#5707)
* run external cloud provider setup during upgrade

* change name of taskt to better reflect what it does

* fix typo
2020-03-11 05:11:36 -07:00
Hugo Blom
4a463567ac [Openstack] A guide on how to replace the in-tree cloudprovider with the external one (#5741)
* add documentation for how to upgrade to the new external cloud provider

* add migrate_openstack_provider playbook

* fix codeblock syntax highligth

* make docs for migrating cloud provider better

* update grammar

* fix typo

* Make sure the code is correct markdown

* remove Fenced code blocks

* fix markdown syntax

* remove extra lines and fix trailing spaces
2020-03-11 05:09:35 -07:00
Sergey
9f3ed7d855 change ignore_errors: to when: in assert tasks (#5716) 2020-03-10 08:09:36 -07:00
Sergey
221b429c24 move var preinstall_selinux_state: to roles/kubespray-defaults/defaults/main.yaml (#5715) 2020-03-10 07:45:35 -07:00
dependabot[bot]
b937d1cd9a Bump ansible from 2.7.12 to 2.7.16 (#5739)
Bumps [ansible](https://github.com/ansible/ansible) from 2.7.12 to 2.7.16.
- [Release notes](https://github.com/ansible/ansible/releases)
- [Commits](https://github.com/ansible/ansible/compare/v2.7.12...v2.7.16)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-03-09 06:31:34 -07:00
Vicenç Juan Tomàs Montserrat
986c46c2b6 Check ansible version >=2.7.8 in recover-control-plane.yml playbook (#5724) 2020-03-07 10:17:34 -08:00
Maxime Guyot
e029216566 Update security contacts (#5719) 2020-03-06 10:47:24 -08:00
roc
2d4595887d Fix youtube url (#5582) 2020-03-06 02:13:22 -08:00
Jakub Husák
2beffe688a Make etcdctl connect to localhost out of the box (#5643)
* Make etcdctl connect to localhost out of the box

* etcdctl envs: use admin-.pem instead of member-.pem
2020-03-06 02:05:23 -08:00
Kubernetes Prow Robot
66408a87ee Refactor download role (#5697)
* download file

* download containers

* fix push image to nodes

* pull if none image on host

* fix

* improve docker image tag checks.
do not pull already cached images

* rebase fix merge conflict

* add support download_run_once when upgrade and scale cluster
add some test with download_run_once

* set default values to temp flag for every download cycle

* add save/load ability for containerd and crio when download_run_once=true

* return redefine image save/load command to  set_docker_image_facts.yml

* move set command to set_container_facts

* ctr in containerd_bin_dir

* fix order of ctr image export arguments

* temporarily disable download_run_once for containerd and crio
due to https://github.com/containerd/containerd/issues/4075

* remove unused files

* fix strict yaml linter warning and errors

* refactor logical conditions to pull and cache container images

* remove comment due lint check

* document role

* remove image_load_on_localhost, because cached images are always loaded to docker on remote sites

* remove XXX from debug output
2020-03-05 07:31:39 -08:00
Kubernetes Prow Robot
62b418cd16 Use 'k8s.gcr.io' instead of 'gcr.io/google-containers' (#5709)
Ref: kubernetes/kubeadm/issues/2051

See: https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/kubernetes-sig-release/ew-k9PEBckQ/T7dFepHdCAAJ

Signed-off-by: Nguyen Hai Truong <truongnh@vn.fujitsu.com>
2020-03-05 05:44:37 -08:00
Kubernetes Prow Robot
5361cc075d Use the v2.12.3 docker image for CI (#5712) 2020-03-05 05:40:37 -08:00
Kubernetes Prow Robot
be12164290 Add option and defaults to configure metrics exporting in containerd (#5466)
* Add metrics exporting in containerd config

* Add containerd.yml with containerd configuration example to the sample group_vars
2020-03-04 14:46:38 -08:00
Arthur Outhenin-Chalandre
588896712e Fix kube-router config generation (#5531)
Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>
2020-03-04 02:11:47 -08:00
Steven Reitsma
6221b94fdf Fix variable naming bug in OpenStack CCM (#5702) 2020-03-03 06:45:38 -08:00
Steven Reitsma
efef80f67b Add support for HA deployment of OpenStack Cinder CSI plugin (#5691) 2020-03-03 06:33:38 -08:00
Hugo Blom
0c1a0ab966 implement max-volumes for cinder csi (#5666) 2020-03-02 03:30:43 -08:00
Sergey
678ed5ced5 fix upgrade procedure when in playbook (#5695)
only the role kubernetes/preinstall exists and the role container-engine does not;
otherwise the run fails with: error 'yum_repo_dir' is undefined
2020-02-28 01:56:38 -08:00
Lovro Seder
7f87ce0362 Upgrade container-engine after draining (#5601)
* Run 'container-engine' after drain.

Move possibly disruptive role 'container-engine' to run after the node
is drained.

As that role have to be run on non-cluster nodes as well (etcd and
calico-rr), and those nodes are not drained, add play for that case.

* Check if api is up before upgrade.

If container engine is restarted in previous role, api controller can
take some time to start. This check ensures api is up before upgrade.
2020-02-27 11:47:28 -08:00
Steven Reitsma
d1acf7f192 Add additional configuration options to external Openstack CCM (#5661)
- Add support for manage-security-groups flag
- Add support for internal-lb flag
2020-02-26 13:03:19 -08:00
Hugo Blom
171d2ce59c Implement topology support for Cinder CSI (#5667)
* make cinder csi topology aware

* change feature description do better reflect whats being done

* remove sameas true since it isn't required
2020-02-26 05:12:25 -08:00
Nguyen Hai Truong
c6170eb79d docs: fix some typos (#5618)
Although these are only spelling mistakes, they can affect readability.

Signed-off-by: Nguyen Hai Truong <truongnh@vn.fujitsu.com>
2020-02-26 04:46:28 -08:00
Moritz Graf
d2c44dd4df Modifying Ansible filter 'failed' according to Ansible 2.5 porting guide (#5678) 2020-02-26 00:18:26 -08:00
Qingkun Li
9b7090ca1d add mangle table in the iptable flush task (#5672)
When kube-router is used as cni, rules might be added to the mangle table
to support external IPs. Therefore, mangle table should be flushed during
reset as well.
2020-02-26 00:04:26 -08:00
Sergey
ee8e88b111 Require a complete inventory with all variables in the bugreport (#5655) 2020-02-25 23:58:25 -08:00
Hugo Blom
a901b1f0d7 convert volumes to dynamic blocks, openstack (#5673) 2020-02-24 01:20:49 -08:00
Victor Morales
82efd95901 Remove dockerproject_.+_repo_.+ variables (#5662)
This 38688a4486 change replaces the
values of the dockerproject_.+_repo_.+ docker variables, but their new
values were previously defined in other variables. This change removes
the dockerproject_.+_repo_.+ docker variables in favor of the older
ones.
2020-02-22 13:28:47 -08:00
Hoat Le
4c803d579b @ #5008 | Local path provisioner boolean annotation is rendered incorrectly and not applied (#5669) 2020-02-22 07:08:47 -08:00
keyboardfann
b34ec6c46b Enhance ha document (#5664)
* Fix HAproxy config to avoid client offered only unsupported versions error

* Add HAproxy SSL check interval

* Fix ha mode document markdown
2020-02-22 07:04:47 -08:00
Javeria Khan
6368c626c5 Ignore assertion comparison for kube_network_node_prefix when using calico (#5632)
* Fix incorrect assertion comparison for kube_network_node_prefix

* Ignore assertion comparison for kube_network_node_prefix when using calico

* Adding more var docs description for kube_network_node_prefix

* Fixing trailing whitespaces
2020-02-20 00:39:03 -08:00
Erwan Miran
a5445d9c5c Add stable repo on all masters with helm 3.x.x (#5659) 2020-02-19 14:05:46 -08:00
Adrien Gooris
da86457cda remove unused var 'kube_apiserver_admission_control' (#5648) (#5651) 2020-02-19 05:08:25 -08:00
Lovro Seder
eb00693325 Do not display skipped hosts/tasks. (#5620)
Replace deprecated callback plugin `skippy` with `default`, which
also supports ignoring skipped hosts.
2020-02-19 02:38:25 -08:00
Chad Swenson
a15a0b5eb9 Make calico iptables lock timeout configurable (#5658)
Adds `calico_iptables_lock_timeout_secs` variable to calico DS yaml.
2020-02-19 02:28:25 -08:00
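The new knob would plausibly be surfaced in group_vars and templated into the calico-node DaemonSet as a Felix environment variable, roughly as sketched below; the default value shown and the exact template placement are assumptions.

```yaml
# Sketch only: group_vars knob plus the Felix env entry it would template into calico-node.
calico_iptables_lock_timeout_secs: 10   # assumed default; tune for contended iptables locks
# In the calico-node DaemonSet template this would surface as:
#   - name: FELIX_IPTABLESLOCKTIMEOUTSECS
#     value: "{{ calico_iptables_lock_timeout_secs }}"
```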
Ali Sanhaji
646fd5f47b External OpenStack Cloud Controller Manager implementation (#5491)
* External OpenStack Cloud Controller Manager implementation

* Adding controller image tag

* Minor fixes

* Restructuring the external cloud controller to work with KubeADM
2020-02-18 04:47:28 -08:00
rptaylor
277b347604 add az_list_node variable to specify different AZs for kubelets (#5413)
* rebase and add az_list_node variable to specify different AZs for kubelets

* fix missing variable name change
2020-02-18 04:29:27 -08:00
Sergey
12bc634ec3 helm default version 3.1.0 (#5634)
* helm default version 3.1.0

* fix newline
try to retest2
2020-02-18 03:21:29 -08:00
Jin Hase
769e54d8f5 Fix a typo in integration.md (#5616) 2020-02-18 02:29:29 -08:00
MarkusTeufelberger
ad50bc4ccb Cache facts for 2 hours (#5633)
Sets a 2 hour timeout value for facts caching.
2020-02-18 01:31:28 -08:00
Sylvain Chateau
0ca7aa126b added "Flatcar", "Flatcar Container Linux by Kinvolk" for all coreOS role (#5607) 2020-02-18 00:15:29 -08:00
Woohyung Han
d0d9967457 Fix to Vagrant config.rb apply correctly (#5525) 2020-02-18 00:13:28 -08:00
Manuel Cintron
b51b52ac0e Fixing an issue where, if the pids in the orphan list no longer exist, all systemd child processes would be killed. (#5636) 2020-02-17 09:33:29 -08:00
Sergey
36c1f32ef9 remove legacy docker repo in kubernetes/preinstall before any packages installed (#5640) 2020-02-17 08:59:28 -08:00
Steven Reitsma
fa245ffdd5 Fix some minor issues with the Cinder CSI plugin (#5561)
Add Cinder images to download role
2020-02-17 03:47:28 -08:00
Erwan Miran
f7c5f45833 Ability to define plugins.cri.containerd params (#5624)
* Ability to define plugins.cri.containerd params

* addition of containerd field commented as an example

* documentation of containerd_config
2020-02-17 02:15:29 -08:00
lcooper40
579976260f Added in code to allow control over pull policy for local path provis… (#5334)
* Added in code to allow control over pull policy for local path provisioner

* change to imagePullPolicy to use globally used variable k8s_image_pull_policy

* removed unusued variable from defaults

* updated contiv-etcd and cinder-csi-controllerplugin to use k8s_image_pull_policy variable
2020-02-17 02:13:30 -08:00
Ali Sanhaji
d56e9f6b80 Fix Cinder CSI bugs (#5492) 2020-02-17 01:49:28 -08:00
Brendan Creane
57b0b6a9b1 update Calico CNI description (#5523) 2020-02-17 01:47:28 -08:00
Flowkap
640190217d Update docs to match the actual required settings to perform an unsafe upgrade using the cluster.yml playbook. Relates to https://github.com/kubernetes-sigs/kubespray/issues/4736 and https://github.com/kubernetes-sigs/kubespray/issues/4139 (#5609) 2020-02-17 01:45:29 -08:00
Thomas Ziegler
a08f485d76 updated links in the PR template (#5614) 2020-02-17 12:16:35 +03:00
Quan Hoang
f6b66839bd Use 'private_dns' as hostname in inventory file (#5463) 2020-02-17 00:59:28 -08:00
Erwan Miran
26700e7882 kubelet_config_extra_args and kubelet_node_config_extra_args (#5623)
* Introduce kubelet_config_extra_args and kubelet_node_config_extra_args to pass params to kubelet via YAML config

* kubelet_config_extra_args is not the alternative
2020-02-14 16:05:28 -08:00
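An illustrative inventory snippet using the new variables follows; the KubeletConfiguration keys shown are examples, not defaults shipped by the commit.

```yaml
# Sketch only: extra KubeletConfiguration fields passed through the new variables.
kubelet_config_extra_args:
  maxPods: 150
  serializeImagePulls: false
kubelet_node_config_extra_args: {}   # per-node overrides, assumed empty by default
```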
Florian Ruynat
d86229dc2b Upgrade cri-tools (crictl) to 1.17.0 (#5629) 2020-02-14 02:50:17 -08:00
Florian Ruynat
f56171b513 Remove old features gates (#5608) 2020-02-14 02:24:17 -08:00
Nguyen Hai Truong
516e9a4de6 Securing http link to https link (#5617)
Fix http link to https link for security

Signed-off-by: Nguyen Hai Truong <truongnh@vn.fujitsu.com>
2020-02-13 14:46:17 -08:00
Thomas Ziegler
765d907ea1 added reference to calico_ip_auto_method in sample inventory group vars (#5612) 2020-02-13 13:18:36 -08:00
Bort Verwilst
287421e21e Set helm 3.0 as default (#5503)
* set helm 3.0 as default

* remove trainling space in vars.yml

* switched to helm 3.0.3
2020-02-13 02:18:35 -08:00
fktkrt
2761fda2c9 Update bug-report.md (#5585) 2020-02-13 01:34:35 -08:00
Erwan Miran
339e36fbe6 Files to archive can be passed directly (#5571) 2020-02-12 07:50:51 -08:00
Arthur Outhenin-Chalandre
5e648b96e8 Fix default value of kube_api_server_endpoint (#5529)
Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>
2020-02-11 01:40:01 -08:00
qvicksilver
ac2135e450 Fix recover-control-plane to work with etcd 3.3.x and add CI (#5500)
* Fix recover-control-plane to work with etcd 3.3.x and add CI

* Set default values for testcase

* Add actual test jobs

* Attempt to satisty gitlab ci linter

* Fix ansible targets

* Set etcd_member_name as stated in the docs...

* Recovering from 0 masters is not supported yet

* Add other master to broken_kube-master group as well

* Increase number of retries to see if etcd needs more time to heal

* Make number of retries for ETCD loops configurable, increase it for recovery CI and document it
2020-02-11 01:38:01 -08:00
rptaylor
68c8c05775 improve documentation about user account and connecting to API (#5415)
* improve documentation about user acct and connecting to API

* fix lint
2020-02-11 01:36:00 -08:00
Sergey
14b1cab5d2 force rotate control plane certifcate on master node when upgrade cluster (#5596) 2020-02-10 06:09:54 -08:00
Florian Ruynat
e570e2e736 Remove last rkt references (#5606) 2020-02-07 02:19:43 -08:00
Fabiano Tessarolo
16fd2e5d68 Fix etcd deployment type variable location (#5587)
On deployment types where the etcd server is split from the Kube Master, the deployment fails since it cannot find the variable.
2020-02-07 02:17:43 -08:00
Preslav Draganov
422b25ab1f Bind Docker service to containerd.service on versions >=18.09.1 (#5477) 2020-02-07 02:15:44 -08:00
rptaylor
b7527399b5 fully clean docker_options from sample inventory (#5414)
* comment out docker_options

* fix yamllint
2020-02-07 02:13:43 -08:00
wwgfhf
89bad11ad8 Update PULL_REQUEST_TEMPLATE.md (#5597) 2020-02-07 02:11:44 -08:00
aca
9d32e2c3b0 fix duplicates when scheduler_extra_volumes defined (#5566) 2020-02-07 02:09:44 -08:00
Florian Ruynat
099341582a Update nginx image to latest (#5590) 2020-02-07 02:07:44 -08:00
Matthew Mosesohn
942c98003f Add LuckySB as an approver (#5584)
Change-Id: I830d5bff9fa3c50b83a9eb1fd6dff521f8e55dc1
2020-02-05 11:21:55 -08:00
Maxime Guyot
cad3bf3e8c Add CentOS 8 image for testing (#5589) 2020-01-29 02:06:16 -08:00
andreyshestakov
2ab5cc73cd Fix typo in Multus plugin. (#5568) 2020-01-29 01:28:13 -08:00
Etienne Champetier
9f2dd09628 Add proxy support to containerd, improves no_proxy (#5583)
* containerd: add proxy support

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

* kubespray-defaults: add kube_service_addresses / kube_pods_subnet to no_proxy

CIDR notation in no_proxy is supported by a lot of programs/languages,
including go: https://github.com/golang/go/issues/16704
Without that, containerd cannot talk to the API server (kube_apiserver_ip),
but it should not go through an external proxy for the nodes/pods/services.

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
2020-01-29 01:24:14 -08:00
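A rough sketch of what containerd proxy support plus CIDR-aware no_proxy can look like; the drop-in path, handler name, and variable names are assumptions for illustration, not the merged implementation.

```yaml
# Sketch only: give containerd proxy settings and keep cluster CIDRs out of the proxy path.
- name: Configure proxy environment for containerd
  copy:
    dest: /etc/systemd/system/containerd.service.d/http-proxy.conf
    content: |
      [Service]
      Environment="HTTP_PROXY={{ http_proxy }}"
      Environment="HTTPS_PROXY={{ https_proxy | default(http_proxy) }}"
      Environment="NO_PROXY={{ no_proxy }},{{ kube_service_addresses }},{{ kube_pods_subnet }}"
  when: http_proxy is defined
  notify: restart containerd   # assumed handler name
```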
Sergey
2798adc837 Remove stale legacy yum docker repo /etc/yum.repos.d/docker.repo (#5569)
* Remove stale legacy yum docker repo /etc/yum.repos.d/docker.repo

* move task 'Remove legacy docker repo file' to pre-upgrade.yml
2020-01-28 02:31:40 -08:00
Florian Ruynat
54d9404c0e Fix hashes... kubernetes 1.17.2 (#5581) 2020-01-24 06:44:31 -08:00
Florian Ruynat
f1025dce4e Update to hashes and default version (1.15.8 / 1.16.5 / 1.17.1) (#5564) 2020-01-23 03:54:49 -08:00
jlacoline
538f4dad9d tag role kubernetes/node-label in playbooks (#5560) 2020-01-20 11:43:36 -08:00
gatolynx
5323e232b2 recreate in another branch due to rebase problem (#5557) 2020-01-18 02:23:35 -08:00
Maxime Guyot
5d9986ab5f Fix temp filename for debian-10 image (#5540) 2020-01-17 02:08:56 -08:00
Matthew Mosesohn
38688a4486 Remove dockerproject org (#5548)
* Change dockerproject.org to download.docker.com

dockerproject.org was deprecated in 2017 and has gone down.

* Restore yum repo for containerd

Change-Id: I883bb512a2164a85865b1bd4fb569af0358c8c2b

Co-authored-by: Craig Rodrigues <rodrigc@crodrigues.org>
2020-01-17 00:38:55 -08:00
Florian Ruynat
d640a57f9b update api-version for PriorityClass following removal in 1.17 (#5450) 2020-01-16 01:52:22 -08:00
Etienne Champetier
5e9479cded Ensure we always fixup kube-proxy kubeconfig (#5524)
When running with serial != 100%, like upgrade_cluster.yml, we need to apply this fixup each time
Problem was introduced in 05dc2b3a09

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
2020-01-14 02:45:09 -08:00
Matthew Mosesohn
06ffe44f1f Remove downloading deprecated calico-rr image (#5528)
Change-Id: I7354d33c7db513e0ee27c9a4cc40e8501c0e1061
2020-01-14 02:41:08 -08:00
Matthew Mosesohn
b35b816287 Raise typha max connections to 300 (#5527)
Raises limit from 100 to 300 because the default is far too low
and the pod can handle 300 with the given resources.

Change-Id: Ib1eec10da3d09d198933fcfe87291587e58d7cdb
2020-01-10 00:24:33 -08:00
Florian Ruynat
bf15d06568 Update to Kubernetes 1.15.7 (#5518) 2020-01-08 17:35:40 -08:00
Etienne Champetier
2c2ffa846c Calico: update to 3.11.1, allow to configure calico_iptables_backend (#5514)
I've tested this update by deploying a containerd / etcd cluster on top CentOS7,
MetalLB + NGINX Ingress. Upgrade using upgrade-cluster.yml

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
2020-01-08 02:27:40 -08:00
Damon Wang
48c41bcbe7 kube-proxy need conntrack (#5478) 2020-01-06 02:31:35 -08:00
zhanwang
beb47e1c63 update ingress_nginx install guide (#5502) 2020-01-06 02:27:35 -08:00
Erwan Miran
303c3654a1 Set pipefail in case tar fails (#5506) 2020-01-06 02:25:34 -08:00
Matthew Mosesohn
5fab610fab Clean kubectl cache after upgrade on first master (#5479)
Resolves issue where kubectl cache of <v1.16 api schema
interferes with interacting with daemonsets and deployments.

Change-Id: I63b7046958f2008eb144b6da0004c598f945e0ae
2020-01-06 02:23:35 -08:00
Kessler
3c3ebc05cc Fix invalid count index (#5469) 2020-01-02 01:57:39 -08:00
Kessler
94956ebde9 Fix invalid variable in host inventory script (#5481) 2019-12-20 05:01:33 -08:00
Alex Newman
e716bed11b A fix of install instructions (#5483)
* Update from https://github.com/kubernetes-sigs/kubespray/issues/4318#issuecomment-470161397

* Woops I missed a spot
2019-12-20 04:39:32 -08:00
Fredrik Lönnegren
ccbcad9741 Ubuntu CRI-O (#5426)
* Fix crictl

* Reload systemd daemon before enabling service

* Typo

* Add crictl template

* Remove seccomp.json for ubuntu

* Set runtime path of runc for ubuntu

* Change path to conmon
2019-12-19 04:37:57 -08:00
wwgfhf
15a8c34717 Update PULL_REQUEST_TEMPLATE.md (#5476) 2019-12-19 04:21:57 -08:00
Matthew Mosesohn
b815f48803 Add script for generating binary hashes (#5470)
Change-Id: I4498d1c0585ee98c23856208d660caadf67cab34
2019-12-18 00:29:57 -08:00
Maxime Guyot
95c97332bf Bump yamllint and ansible-lint versions (#5421) 2019-12-17 07:13:59 -08:00
Maxime Guyot
9bdf6b00cc Remove inline shell in YAML for vagrant-validate (#5386) 2019-12-17 07:11:59 -08:00
Maxime Guyot
91b23caa19 Remove GCE tests files (#5459) 2019-12-17 07:09:59 -08:00
Maxime Guyot
5df48ef8fd [docs] Add CI matrix and script (#5461)
* Rename CI jobs from ubuntu to ubuntu16

* Add CI matrix and script
2019-12-17 07:07:59 -08:00
Florent Monbillard
109078c5e0 Update CNI plugins to v0.8.3 (#5453) 2019-12-16 04:53:36 -08:00
bozzo
c0b262a22a Add kube-router configuration to enable metrics exposure (#5416) 2019-12-16 04:35:36 -08:00
Florian Ruynat
8bb1af9926 fix typo (#5452) 2019-12-16 02:55:36 -08:00
Douglas Schilling Landgraf
538f1f1a68 cri-o: redhat.yml - remove package cri-tools (#5444)
There is no cri-tools package in CentOS/EPEL/Red Hat.
Additionally, cri-tools is provided into the installation via
roles/download/defaults/main.yml:104:crictl_download_url.
2019-12-16 02:53:36 -08:00
Maxime Guyot
b60ab3ae44 Update CI to use v2.12.0 image and update release process (#5448) 2019-12-13 05:42:54 -08:00
Andreas Krüger
370a0635fa Bump nodelocaldns version to 1.15.8 (#5447)
* Bump nodelocaldns version

* Add missing upstreamsvc
2019-12-13 02:22:55 -08:00
Bort Verwilst
db2ca014cb Add Helm 3.x support (#5441)
* Add Helm 3.x support

* tiller enabled when helm < 3.0.0
2019-12-12 09:24:32 -08:00
bfraz
f0f8379e1b Update aws tf (#5435)
* update aws tf to function as expected

* update tf version

* update syntax for tf v0.12

* update tf version in readme

* update per tf for v0.12
2019-12-12 03:42:33 -08:00
Maxime Guyot
815eebf1d7 Add wait for kubectl get ds after upgrades (#5433) 2019-12-11 11:23:55 -08:00
Maxime Guyot
95cf18ff00 Re introduce CI for upgrades (#5427) 2019-12-11 04:48:06 -08:00
Matthew Mosesohn
696fcaf391 Ensure 0644 mode for ca.crt on nodes (#5428)
Change-Id: I5e018dfaeffe314300b373aeb7ed5f59929cf4f9
2019-12-11 00:54:04 -08:00
Maxime Guyot
6ff5ccc938 Use kubespray/kubespray:v2.11.0 for CI (#5363) 2019-12-11 00:10:05 -08:00
Maxime Guyot
f8a18fcaca Update the release process doc (#5419) 2019-12-10 04:41:29 -08:00
Maxime Guyot
961c1be53e Remove Digital Ocean CI (#5418) 2019-12-10 04:39:29 -08:00
Maxime Guyot
eda1dcb7f6 Fix TF inventory script (#5424) 2019-12-10 03:41:29 -08:00
Etienne Champetier
5e0140d62c Add k8s 1.15.6 hashes (#5342)
Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
2019-12-10 00:45:30 -08:00
Craig Rodrigues
717fe3cf3a Add checksums for v1.17.0 (#5423) 2019-12-09 21:15:28 -08:00
Yujun Zhang
32d80ca438 Add default value for bin_dir in recover control plane (#5396) 2019-12-09 02:54:02 -08:00
ooneko
2a9aead50e Set kube_image_repo use {{ gcr_image_repo }} (#5314)
To avoid repeating "gcr.io" again.
2019-12-09 02:52:02 -08:00
Sergey
9fda84b1c9 set node label via kubectl label command (#5257)
* set various node labels via the kubectl label command, not kubelet options

* remove node_labels from KUBELET_ARGS
2019-12-09 01:43:09 -08:00
Etienne Champetier
42702dc1a3 Fixes for CentOS 8 (#5213)
* Fix python3-libselinux installation for RHEL/CentOS 8

In bootstrap-centos.yml we haven't gathered the facts,
so #5127 couldn't work

Minimum ansible version to run kubespray is 2.7.8,
so ansible_distribution_major_version is defined and there is no need to default it

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

* Restart NetworkManager for RHEL/CentOS 8

network.service doesn't exist anymore
 # systemctl status network
 Unit network.service could not be found.

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

* Add module_hotfixes=True to docker / containerd yum repo config

https://bugzilla.redhat.com/show_bug.cgi?id=1734081
https://bugzilla.redhat.com/show_bug.cgi?id=1756473
Without this setting you end up with the following error:
 # yum install docker-ce
 Failed to set locale, defaulting to C
 Last metadata expiration check: 0:03:21 ago on Thu Sep 26 22:00:05 2019.
 Error:
  Problem: package docker-ce-3:19.03.2-3.el7.x86_64 requires containerd.io >= 1.2.2-3, but none of the providers can be installed
   - cannot install the best candidate for the job
   - package containerd.io-1.2.2-3.3.el7.x86_64 is excluded
   - package containerd.io-1.2.2-3.el7.x86_64 is excluded
   - package containerd.io-1.2.4-3.1.el7.x86_64 is excluded
   - package containerd.io-1.2.5-3.1.el7.x86_64 is excluded
   - package containerd.io-1.2.6-3.3.el7.x86_64 is excluded
 (try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
2019-12-09 01:37:10 -08:00
Hugo Blom
40e35b3fa6 Support Openstack servergroups (#5412)
* add support for nova servergroups

* Add documentation for openstack nova servergroups

* uppdate to TF 0.12.12 format and fix etcd

* revert for_each change

* fix variables and formatting in main.tf

* try to avoid errors

* update variable

* Update main.tf

* Update main.tf

* update all other instance resources
2019-12-09 01:15:10 -08:00
Maxime Guyot
b15d41a96a Add support to Ansible 2.9 (#5361) 2019-12-05 07:24:32 -08:00
Matthew Mosesohn
7da2083986 Add toleration for calico-typha on master (#5405)
Change-Id: Iea9a366cf6ccc4d491bfc49c5d2dba6d98f81b69
2019-12-05 06:24:32 -08:00
Maxime Guyot
37df9a10ff Add CI for Amazon Linux 2 (#5410) 2019-12-05 05:44:32 -08:00
Maxime Guyot
0f845fb350 Add support for Debian 10 (#5408) 2019-12-05 05:42:32 -08:00
Maxime Guyot
23b8998701 Add OIDC to CI (#5407) 2019-12-05 05:40:32 -08:00
Maxime Guyot
401d441c10 Fix Python code style for inventory_builder (#5362) 2019-12-05 01:48:32 -08:00
Hugo Blom
f7aea8ed89 update oidc to contain quotes (#5406) 2019-12-05 00:24:32 -08:00
Maxime Guyot
a9b67d586b Add markdown CI (#5380) 2019-12-04 07:22:57 -08:00
Maxime Guyot
b1fbead531 Update to TF v0.12.12 (#5267) 2019-12-04 07:20:58 -08:00
Maxime Guyot
b06826e88a Fix OpenSUSE support (#5370) 2019-12-04 05:16:57 -08:00
Matthew Mosesohn
57fef8f75e Allow customizing kubelet healthz port and bind addr (#5403)
Change-Id: I1634ba2d2d3337243ffcdea86750003a559f2576
2019-12-03 11:56:58 -08:00
Matthew Mosesohn
f599a4a859 force other resolvers to be secondary when using systemd-resolved (#5391)
Change-Id: I33d46c7e0c5374467e22c5a652b282d1703dea85
2019-12-02 08:41:04 -08:00
Matthew Mosesohn
e44b0727d5 Allow inventory_builder to add nodes with hostname (#5398)
Change-Id: Ifd7dd7ce8778f4f1be2016cae8d74452173b5312
2019-12-02 08:13:04 -08:00
Matthew Mosesohn
18cee65c4b Add support for k8s v1.17.0-rc.1, remove hyperkube (#5378)
Change-Id: I3fff04f0211cd9c2e8235acaf51c3aa98abc8bb7
2019-11-28 05:41:03 -08:00
zhanwang
f779cb93d6 update URL for Gluster Getting Started Guide (#5390)
update URL for Gluster Getting Started Guide
2019-11-28 00:45:03 -08:00
Yujun Zhang
aec5080a47 kubernetes/masters: fix task name in kubeadm setup (#5377) 2019-11-27 06:05:20 -08:00
Anton Fayzrahmanov
80418a44d5 CoreDNS deployment extra tolerations (#5364)
* Add extra tolerations for coredns

* dns_extra_tolerations option

* dns_extra_tolerations

* missing starting space in comment
2019-11-27 05:49:21 -08:00
Florian Ruynat
257c20f39e add 1.16.3 checksums and set new version as default (#5384) 2019-11-27 01:29:20 -08:00
xieyanker
050de3ab7b update fedora atomic download url (#5357) 2019-11-26 07:53:10 -08:00
Aaron Crickenberger
f1498d4b53 fix OWNERS file (#5359)
Initially this was to fix a mis-indented approvers key. However, it turns
out that 'oilbeater' is not a member of kubernetes-sigs nor
kubernetes-incubator (the org this repo was migrated from). Thus this
OWNERS file is failing prow's validation check.

As a workaround I've opted to move them to emeritus_approver, which
isn't validated and can be used as a hint for other approvers in this
repo
2019-11-25 17:59:11 -08:00
Etienne Champetier
18d19d9ed4 containerd: update to 1.2.10 (#5341)
Lots of bug and security fixes:
https://github.com/containerd/containerd/releases/tag/v1.2.10
CVE-2019-16884 / CVE-2019-16276
https://github.com/containerd/containerd/releases/tag/v1.2.9
CVE-2019-9512 / CVE-2019-9514 / CVE-2019-9515
https://github.com/containerd/containerd/releases/tag/v1.2.8
CVE-2019-9512 / CVE-2019-9514
https://github.com/containerd/containerd/releases/tag/v1.2.7

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
2019-11-22 00:09:29 -08:00
Matthew Mosesohn
61d7d1459a Purge inactive reviewers. Promote miouge1 to approver (#5365)
Change-Id: I92b94ed9be85d47717e24a6099b4b22066717d02
2019-11-21 11:03:29 -08:00
Michael Shen
6924c6e5a3 [FIX] fix match because trim removes leading/trailing whitespace (#5356) 2019-11-19 22:35:18 -08:00
Matthew Mosesohn
85c851f519 scale down coredns on each master during graceful upgrade (#5344)
This fixes the scenario where masters are upgraded one at a time
and coredns gets improperly scaled back up to 2 replicas.

Change-Id: I7cc9283f40efcfd61b5813c89a5805c95d901567
2019-11-18 00:13:41 -08:00
Yumo Yang
5cd7d1a3c9 modify host.yml in README.md (#5338) 2019-11-17 18:15:40 -08:00
Matthew Mosesohn
8b67159239 Do not run kubeadm upgrade on first deploy (#5339)
Change-Id: I68a962a9dd28c83ef07eaeaf53eb98287f38bca9
2019-11-14 02:05:34 -08:00
LuciferInLove
4f70da2731 Added Amazon Linux 2 support for deploying with docker (#5301) 2019-11-11 07:05:41 -08:00
Matthew Mosesohn
db5040e6ea Set certs and files with kubeadm token to mode 0640 (#5325)
Change-Id: I298496e55a6889c158b2085fcadeda5e679a873e
2019-11-11 05:41:41 -08:00
Jacopo Secchiero
97764921ed Fix calico name resolution (#5291) 2019-11-11 04:01:41 -08:00
Michée lengronne
a6853cb79d library files added to setup.cfg (#5274)
It hopefully ensures the usability of Kubespray as a pip package.
2019-11-11 03:59:41 -08:00
Bjoern Teipel
8c15db53b2 Fix helm for Kubernetes 1.16.2 (#5332)
Since upgrading k8s beyond version 1.16.0, helm init no
longer works with helm < 2.16.0 due to
https://github.com/helm/helm/issues/6374

This PR closes issue #5331
2019-11-11 03:53:41 -08:00
Julien Pervillé
0200138a5d Pass ingress_nginx_extra_args when deploying the nginx-ingress addon (#5321) 2019-11-11 03:51:40 -08:00
Florent Monbillard
14af98ebdc Respect cri-tool supported version matrix (#5241)
| Kubernetes Version | cri-tools Version |
|--------------------|-------------------|
| 1.16.x             | v1.16.0           |
| 1.15.X             | v1.15.0           |
| 1.14.X             | v1.14.0           |
| 1.13.X             | v1.13.0           |
| 1.12.X             | v1.12.0           |
| 1.11.X             | v1.11.1           |

- Upgrade to cri-tools 1.16.1
- Add checksums for cri-tools 1.16.1
2019-11-11 03:45:42 -08:00
YichenWong
8a5434419b fix useradd etcd (#5281) 2019-11-11 03:27:41 -08:00
Quentin Gliech
8a406be48a Fix indentation in cilium-ds.yml template (#5305) 2019-11-11 03:25:41 -08:00
Johannes Scheuermann
feac802456 Remove default docker_options from sample (#5287) 2019-11-11 03:23:40 -08:00
Junho Suh
076f254a67 Add cilium_tunnel_mode variable to the cilium config (#5295) 2019-11-11 03:19:42 -08:00
holmesb
bc3a8a0039 Fixes issue #5299 (#5300) 2019-11-11 03:13:41 -08:00
Dmitry Chusovitin
45d151a69d containerd installation on Debian (#5326) 2019-11-11 02:41:41 -08:00
Matthew Mosesohn
bd014c409b Skip coredns image when evaluating kubeadm images (#5327)
It will be enabled correctly in downloads

Change-Id: Ief0b7aa2a8ee2ba6a6849820802f8542584b2c04
Related-story: PRODX-1171
2019-11-09 00:51:39 -08:00
Michael Shen
08421aad3d [FIX] fix incorrect link to downloads documentation (#5319) 2019-11-07 03:50:42 -08:00
Matthew Mosesohn
1c25ed669c Remove unnecessary and risky reload network for resolvconf propagation (#5322)
Change-Id: I54d706f7941b4b86c4c6cd45340295577155b884
2019-11-06 10:11:52 -08:00
Matthew Mosesohn
a005d19f6f Enable systemd-resolved DNS resolution mode (#5318)
Change-Id: If3e253a40782e03cde7fc4a91493517ae31fda17
2019-11-06 03:33:52 -08:00
Matthew Mosesohn
471589f1f4 Scale down coredns created by kubeadm upgrade to 0 replicas (#5308)
Change-Id: I128b0f9c1acbb956d9a6c4e5510b45a36e296af7
2019-11-05 03:34:38 -08:00
Ali Sanhaji
b0ee1f6cc6 Deploy Cinder CSI driver to provision volumes over OpenStack (#5184)
* Deploy Cinder CSI driver to provision volumes over OpenStack

* Deploy Cinder CSI StorageClass

* Cinder CSI doc
2019-11-01 00:59:24 -07:00
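A StorageClass backed by the Cinder CSI driver might look roughly like the sketch below; the class name and parameters are illustrative assumptions, not the exact manifest added by the commit.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cinder-csi                      # illustrative name
provisioner: cinder.csi.openstack.org   # Cinder CSI driver
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
parameters:
  availability: nova                    # assumed OpenStack availability zone
```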
Pierre Ozoux
79128dcd6b Removes repetition. (#5310) 2019-10-30 06:12:53 -07:00
Samina Fu
dd7e1469e9 Fix typo of docs/dns-stack (#5307) 2019-10-30 02:00:55 -07:00
Matthew Mosesohn
186ec13579 Fix incorrect suggestion to enable old k8s apis (#5292)
Change-Id: If965cc6aa0daaca232dcf2ca0efd649aa097497f
2019-10-30 01:58:53 -07:00
Matthew Mosesohn
2c4e6b65d7 Raise delay and retry for rotate tokens (#5304)
Change-Id: I87844b43b9a18064e7a99567ce57c1ca1ffcc4a8
2019-10-30 01:56:52 -07:00
Eric Lake
108a6297e9 Terraform dynamic inventory 0.12.12 (#5298)
* Update parsing of terraform state file for 0.12.12

* Resource does not seem to have a module element but instead has a
provider
* Return the boolean the right way if it is already a bool, since a bool does
not have a lower method

* Remove the setting of ansible_ssh_user to root for all Packet

Not all servers in packet are accessed as root by default. CoreOS
systems use the `core` user. Removing this allows the user to specify
the remote user with an extra_var or in an ansible.cfg file.

* Default to root user for packet devices except on CoreOS

* Update TF_VERSION for packet in tf-validate-packet

Update TF_VERSION to 0.12.12 for gitlab-ci tf-validate-packet tests

* convert packet terraform files to TV_VERSION 4

* initialize terraform before copying the variable file to the top level dir
2019-10-29 00:02:42 -07:00
Matthew Mosesohn
94d4ce5a6f Retry cleaning up calico-node container (#5302)
Change-Id: Iad27b107860213759c7ae51f0891d7e5e7c6d96b
2019-10-28 05:11:25 -07:00
Matthew Mosesohn
81da231b1e Set cluster DNS in kubeadm config for kubelet dynamic config (#5293)
Change-Id: I23116efefe8626d361d1904fc6fb8448f66cf3c5
2019-10-25 02:23:40 -07:00
Ludovic Muller
1a87dcd9b9 readme: update url to the Kubernetes documentation page for Kubespray (#5294) 2019-10-24 22:39:39 -07:00
Matthew Mosesohn
a1fff30bd9 Generate TLS certs for calico typha (#5258)
* Generate TLS certs for calico typha

Change-Id: I3883f49c124c52d0fc5b900ca2b44e4e2ed0d707

* Add group vars note

Change-Id: I63550dfef616e884efdbd42010a90b2c04c5eb69
2019-10-17 07:02:38 -07:00
Sergey
81d57fe658 set calico_datastore default value in role kubespray-default (#5259) 2019-10-17 05:58:38 -07:00
Sergey
3118437e10 check on all cluster node - kubelet_max_pods <= (2 ** (32 - kube_network_node_prefix | int)) - 2 (#5279) 2019-10-17 05:48:38 -07:00
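To make the inequality concrete: with the default kube_network_node_prefix of 24, each node receives a /24 pod CIDR, i.e. 2^(32-24) - 2 = 254 usable addresses, so kubelet_max_pods must not exceed 254. A minimal sketch of such a check as an Ansible assertion (wording assumed, not the exact task from the commit):

```yaml
- name: Verify kubelet_max_pods fits into the per-node pod CIDR
  assert:
    that:
      - kubelet_max_pods | int <= (2 ** (32 - kube_network_node_prefix | int)) - 2
    msg: >-
      kubelet_max_pods ({{ kubelet_max_pods }}) exceeds the usable addresses
      in a /{{ kube_network_node_prefix }} per-node pod CIDR
```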
Sergey
65e461a7c0 download container always been on download_delegate host (#5177)
* always download containers on the download_delegate host

* also fix the pull-required check
2019-10-17 05:38:38 -07:00
Michael Oglesby
c672681ce5 Revert Pull Request #5084 (#5120)
Kubespray Pull Request #5084 (https://github.com/kubernetes-sigs/kubespray/pull/5084) caused more problems than it solved due to limitations with the synchronize module. See comments on Kubespray Issues #5059 (https://github.com/kubernetes-sigs/kubespray/issues/5059) and #5116 (https://github.com/kubernetes-sigs/kubespray/issues/5116). Details from Ansible documentation: "Currently, synchronize is limited to elevating permissions via passwordless sudo. This is because rsync itself is connecting to the remote machine and rsync doesn’t give us a way to pass sudo credentials in. ... Currently there are only a few connection types which support synchronize (ssh, paramiko, local, and docker) because a sync strategy has been determined for those connection types. Note that the connection for these must not need a password as rsync itself is making the connection and rsync does not provide us a way to pass a password to the connection. ..." Thus, reverting Pull Request #5084.
2019-10-17 05:26:37 -07:00
yelhouti
d332a254ee install python3 instead of python2 for fedora >= 30 fixes 5056, fixes 4802 (#5111) 2019-10-17 05:04:38 -07:00
Sean Sube
f3c072f6b6 ignore gpg files in inventory (#5209) 2019-10-16 20:22:39 -07:00
Matthew Rapa
3debb8aab5 add KUBELET_VOLUME_PLUGIN to kubelet.env (#5128) 2019-10-16 20:08:38 -07:00
YichenWong
aada6e7e40 Add etcd_data_dir variable to the kubeadm config (#5263) 2019-10-16 19:50:39 -07:00
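In a kubeadm ClusterConfiguration the local etcd data directory lives under etcd.local.dataDir, so the rendered fragment presumably looks similar to the sketch below (surrounding keys assumed; only the variable name comes from the commit title).

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
etcd:
  local:
    dataDir: "{{ etcd_data_dir }}"   # e.g. /var/lib/etcd
```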
Matthew Mosesohn
ac60786c6f Add support for restart handlers for control plane on crio/containerd (#5250)
* Add support for restart handlers for control plane on crio/containerd

Change-Id: I8343cc4e9df7f55b732628ed01cc6e7ea5dcee85

* Update main.yml
2019-10-16 18:58:39 -07:00
Hugo Blom
db33dc6938 Add support for Kubernetes 1.16.2 (#5272)
* Add support for Kubernetes 1.16.1

* Defaults to 1.16.1

* add 1.16.2 checksums and set new version as default

* correct 1.16.2 checksums and add 1.15.5 checksums
2019-10-16 18:34:38 -07:00
Hugo Blom
9dfb25cafd fix typo (#5275) 2019-10-16 18:26:38 -07:00
Maxime Guyot
df8d2285b6 Update ingress-nginx to v0.26.1 (#5268) 2019-10-16 18:22:39 -07:00
Matthew Mosesohn
af6456d1ea Fix selector for calico-typha deployment (#5253)
Change-Id: I79f43379cbe1c495cb416f0572e65f695d5ec2b8
2019-10-16 07:53:42 -07:00
Maxime Guyot
6f57f7dd2f Update nginx image to latest (#5270) 2019-10-16 04:37:42 -07:00
Dennis Field
fd2ff675c3 Clarify process for upgrading more than one version (#5264)
Since skipping upgrades is unsupported, I've detailed the steps for upgrading one release at a time and removed some language that suggested skipping should work.
2019-10-16 04:35:41 -07:00
Xiaodu
bec23c8a41 Add k8s v1.15.4 hashes (#5235) 2019-10-16 04:33:41 -07:00
Robin Elfrink
faaff8bd72 Add RotateCertificates to kubelet config if kubelet_rotate_certificates is set. (#5152)
Signed-off-by: Robin Elfrink <robin.elfrink@eu.equinix.com>
2019-10-16 04:31:41 -07:00
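For reference, the toggle would typically be set in the inventory and rendered into the kubelet config file; a minimal sketch, assuming the variable name from the commit title and the upstream KubeletConfiguration field it maps to:

```yaml
# inventory group_vars (assumed location, e.g. k8s-cluster.yml)
kubelet_rotate_certificates: true
# renders into the KubeletConfiguration as:
#   rotateCertificates: true
```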
andreyshestakov
8031c6c1e7 Update template for dashboard to support v2.x (#5187)
Secrets and ConfigMap should be created before the dashboard pod runs.
2019-10-16 04:29:41 -07:00
Erwan Miran
9d8fc8caad Fix getting nameserver and search for /etc/resolv.conf with comments (#5197) 2019-10-16 04:27:40 -07:00
Julien Pervillé
d1b1add176 contrib/heketi: use inventory node ip in topology instead of guessing it (#5233) 2019-10-16 04:25:42 -07:00
Qingkun Li
a51b729817 add ignore_errors to the kube-proxy deletion task (#5236)
When using cluster.yml or scale.yml to add/scale nodes in the existing
k8s cluster, the `kubeadm init` wouldn't run. As a result, kube-proxy
wouldn't be created, and therefore the kube-proxy deletion task would
fail, e.g. in the case where kube-router is used and "kube_proxy_remove"
is set to true. As a workaround, add ignore_errors to the kube-proxy
deletion task.
2019-10-16 04:23:40 -07:00
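A minimal sketch of that workaround (task name and kubectl invocation are assumptions; the relevant part is the added ignore_errors):

```yaml
- name: Delete kube-proxy daemonset when kube_proxy_remove is enabled
  command: >-
    {{ bin_dir }}/kubectl --kubeconfig /etc/kubernetes/admin.conf
    -n kube-system delete daemonset kube-proxy
  when: kube_proxy_remove | default(false)
  ignore_errors: true   # kube-proxy may not exist if kubeadm init never ran (scale.yml)
```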
Maxime Guyot
19bc79b1a6 Update cert-manager to v0.11.0 (#5269) 2019-10-16 04:21:40 -07:00
Seref Acet
d2fa3c7796 Minimum required version of Kubernetes is v1.15 (#5266) 2019-10-15 06:59:53 -07:00
Mateus Caruccio
03cac2109c No need to gather facts on localhost (#5251)
It's unnecessary and breaks when running from within a docker container:
```
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TimeoutError: Timer expired after 10 seconds
fatal: [localhost]: FAILED! => {"changed": false, "cmd": "/usr/sbin/udevadm info --query property --name /dev/mapper/vg00-root", "msg": "Timer expired after 10 seconds", "rc": 257}
```
2019-10-15 06:57:53 -07:00
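The fix amounts to turning off fact gathering for the localhost play, roughly as in the sketch below (play layout assumed):

```yaml
- hosts: localhost
  gather_facts: false   # avoids udevadm/hardware probing when running inside a container
  tasks:
    - debug:
        msg: "local bootstrap tasks run here without gathered facts"
```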
Sergey
932935ecc7 fix wrong path in include install_host.yml in etcd role (#5256) 2019-10-13 18:16:34 -07:00
BenoitBOULANGER
e01118d36d Fix issue in remove-node/post-remove task (#5185) (#5186) 2019-10-10 05:17:43 -07:00
Matthew Mosesohn
dea9304968 Enable openstack_cacert to be either file or base64 string (#5243) 2019-10-09 02:19:49 -07:00
Matthew Mosesohn
2864e13ff9 Reset between kubeadm secondary control plane join attempts (#5240)
Change-Id: Ic9425bf90552d7e3d42b02409af9773d99376384
2019-10-08 00:15:12 -07:00
Hugo Blom
a8c5a0afdc Make it possible to disable access_ip (openstack provider) (#5239)
* Add a variable to disable access_ip

* Document the use of use_access_ip
2019-10-07 04:09:09 -07:00
Erwan Miran
0ba336b04e install helm client separately (#5212) 2019-10-04 05:14:02 -07:00
Matthew Mosesohn
89f1223f64 Fix selector workaround for helm install (#5237)
Change-Id: I826337b59814674c3feb4cd6a4904d9d53e01652
2019-10-03 23:41:56 -07:00
陈谭军
8bc0710073 clean up document (#5214) 2019-10-02 04:41:07 -07:00
Matthew Mosesohn
fb591bf232 Apply workaround for NetworkManager and calico (#5230)
Change-Id: I5cb2bdf1a57707c1b8da3e5ac0c80e5c353480a4
2019-10-02 04:37:07 -07:00
Matthew Mosesohn
a43e0d3f95 Switch to Kubernetes v1.16.0 (#5189)
* Switch to Kubernetes v1.16.0

Change-Id: I5d6a9528b2d443750fc5e031aff15ad3ffead158

* Fix download localhost cached file path

Change-Id: I65e79b70e3d1b37265ebc60f41b460cf4b0a0d47

* fix kubeadm etcd for v1.16

Change-Id: I6888a00fd48b530a38b0b31c4095492476af42d2

* disable tf packet jobs

Change-Id: I075c4666547fdea4c50ec04864f38e2cfaa79154

* Disable contiv packet jobs. Fix kube-router

Change-Id: I3170e8789e60711d4cee8faf65f2094480b79b8d

* bump sonobuoy version

Change-Id: Ib946905629c7c53ed88f08fb2f41c454457a0097
2019-10-02 02:21:07 -07:00
Besmir Zanaj
f16cc1f08d fix digital rebar url (#5223) 2019-09-30 01:05:38 -07:00
Maxime Guyot
8712bddcbe Add docs for TF vars introduced PR 4239 (#5201) 2019-09-26 04:31:07 -07:00
陈谭军
99dbc6d780 clean-up doc,spelling mistakes (#5206) 2019-09-26 04:25:08 -07:00
Richard Scott
75e4cc2fd9 Updated kubectl.sh (#5156)
The script is not usable unless you are in the '.vagrant/provisioners/ansible/inventory/artifacts' folder.
This update makes it usable from anywhere.
2019-09-26 04:23:07 -07:00
Etienne Champetier
81cb302399 MetalLB: fail if kube_proxy_strict_arp is false (#5180)
When using IPVS, kube_proxy_strict_arp = true is required
https://github.com/danderson/metallb/issues/153#issuecomment-518651132

Add kube_proxy_strict_arp to inventory/sample
2019-09-26 04:21:06 -07:00
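As a usage sketch, the inventory setting referenced above would look like this (variable name from the commit; the file path is the conventional group_vars location):

```yaml
# inventory/sample/group_vars/k8s-cluster/k8s-cluster.yml
kube_proxy_mode: ipvs
kube_proxy_strict_arp: true   # required by MetalLB when kube-proxy uses IPVS
```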
陈谭军
3bcdf46937 fix-up some spelling mistakes (#5202) 2019-09-25 23:27:08 -07:00
Sergey
1cf6a99df4 generate kubeadm download image list with options useHyperKubeImage (#5203) 2019-09-25 18:03:06 -07:00
Robert Neumann
a5d165dc85 Customize host root volume size by Terraform provisioning (#4239)
* print hostnames (#5110)

Terraform - customize hosts root volume size

disable block_device by default value

Terraform formatting fix

Fixed typos

* fix resources after rebase

* Fix glusterfs image issue
2019-09-25 05:17:59 -07:00
Erwan Miran
f18e77f1db Blocksize for calico default pool should be configurable (#5198) 2019-09-25 04:44:00 -07:00
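A hedged example of what such an override could look like in group_vars (the variable name calico_pool_blocksize is an assumption based on how the setting is commonly exposed):

```yaml
calico_pool_blocksize: 26   # /26 blocks, i.e. 64 pod IPs allocated to a node at a time
```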
陈谭军
2fc02ed456 fix-typo (#5199) 2019-09-25 04:04:00 -07:00
pando85
9db61c45ed Upgrade nodelocaldns to 1.15.5 (#5191) 2019-09-22 20:13:22 -07:00
Sergey
8cb54cd74d fix broken scale procedure: (#5193)
- do not run etcd role when etcd_kubeadm_enabled == true
- remove default value 'systemd' for cgroup driver in containerd role.
  this value overrides the autodetected value in kubelet_cgroup_driver_detected from docker info
2019-09-22 01:07:22 -07:00
Florent Monbillard
a3f1ce25f8 Add support for k8s v1.14.6 (#5182) 2019-09-18 02:53:30 -07:00
Qingkun Li
3c7f682e90 Parameterize gcr, quay, and docker image repo defines (#5146)
This allows easily overriding the gcr, quay, and docker repos with the
mirror repos in countries like China, where the default registries are
blocked or unstable.
2019-09-18 02:49:30 -07:00
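As a hedged usage example, a mirror setup could then override the registry variables in group_vars (variable names follow the gcr/quay/docker wording of the commit; the mirror hostname is a placeholder):

```yaml
gcr_image_repo: "mirror.example.com/gcr.io"
quay_image_repo: "mirror.example.com/quay.io"
docker_image_repo: "mirror.example.com/docker.io"
```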
Sergey
8984096f35 use hyperkubeimage to run controlplane containers (#5178) 2019-09-17 18:33:28 -07:00
Mario
1ce7831f6d Update main.yml (#5166) 2019-09-17 05:36:24 -07:00
holmesb
46f6c09d21 Fixes issue #5160 (#5171) 2019-09-16 23:42:22 -07:00
Matthew Mosesohn
6fe2248314 Use more native way to update kubeconfigs using kubeadm (#5165)
Change-Id: I1076b418f85a26d9896be69910052128afc51cee
2019-09-13 03:40:29 -07:00
andreyshestakov
cb4f797d32 Fix macro on local_volume_provisioner (#5168)
mydict.keys() should be converted to list,
otherwise it causes errors in loop iteration.

Remove extra space after class name, which broke configmap.

Also allow setting the reclaimPolicy property.
2019-09-13 00:50:33 -07:00
Matthew Mosesohn
eb40ac163f Move cri_socket var to kubespray-defaults (#5149) 2019-09-10 12:30:55 -07:00
Matthew Mosesohn
27ec548b88 Add support for k8s v1.16.0-beta.2 (#5148)
Cleaned up deprecated APIs:
apps/v1beta1
apps/v1beta2
extensions/v1beta1 for ds,deploy,rs

Add workaround for deploying helm using incompatible
deployment manifest.
Change-Id: I78b36741348f47a999df3841ee63cf4e6f377830
2019-09-10 12:06:54 -07:00
Florent Monbillard
637f09f140 Fix ansible task titles (#5154)
* Fix ansible task titles for CRI connection tasks

* Fix Azure subscription ID check task title
2019-09-10 01:34:54 -07:00
Matthew Mosesohn
9b0f57a0a6 Adjust endpoints for kube-proxy,controller,scheduler to proper ip (#5150)
Change-Id: I5aa009358bee7035922b5a10327997e47c9ba434
2019-09-09 10:33:20 -07:00
leonmbecker
5f02068f90 Documenting Terraform variable az_list explicitly (#5132)
* added az_list to README section

* added az_list to cluster.tfvars
2019-09-09 07:41:19 -07:00
Matthew Mosesohn
7f74906d33 Make haproxy/nginx client timeout configurable (#5140)
Change-Id: I61319a06eb33d9fc868e19941924f387088b856b
2019-09-05 00:32:51 -07:00
Csergő Bálint
56523812d3 print hostnames (#5110) 2019-08-29 05:07:57 -07:00
Guangming Wang
602b2d198a Cleanup: fix typo in doc (#5105)
Signed-off-by: Guangming Wang <guangming.wang@daocloud.io>
2019-08-28 02:35:15 -07:00
Richard Arends
4d95bb1421 Use python3-libselinux on RHEL8/Centos8 (#5127)
* Use python3-libselinux on RHEL8/Centos8

* The fact ansible_facts.distribution_major_version is not present on older Ansible versions.
Default it to 0 when not present and use libselinux-python as the package to keep the current
default behaviour.
2019-08-28 02:33:15 -07:00
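A compact sketch of that selection logic (task structure assumed; the point is defaulting the missing fact to 0 so older setups fall back to libselinux-python):

```yaml
- name: Install SELinux python bindings
  package:
    name: "{{ 'python3-libselinux' if (ansible_facts.distribution_major_version | default(0) | int >= 8) else 'libselinux-python' }}"
    state: present
```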
Matthew Mosesohn
184ac6a4e6 Parse calico nodes as json (#5114) 2019-08-27 10:16:42 -07:00
rptaylor
10e0fe86fb remove unimplemented custom_flags vars, document the extra_args vars (issue 4352) (#5108) 2019-08-23 01:21:18 -07:00
Matthew Mosesohn
7e1645845f Allow calico settings to be modified (#5101)
Previous logic used calicoctl.sh create --skip-exists, which
allowed setting initial values but did not permit changes.
2019-08-23 00:01:19 -07:00
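The behavioural difference is apply (idempotent and able to update an existing resource) versus create --skip-exists (which silently keeps the old values); a hedged task sketch of the new approach, with paths and the resource file as placeholders:

```yaml
- name: Configure calico ippool
  command: "{{ bin_dir }}/calicoctl.sh apply -f /etc/calico/ippool.yml"
  # 'apply' lets later runs change the pool settings, unlike 'create --skip-exists'
```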
Neven Miculinic
f255ce3f02 Added CRI-O support for ubuntu (#4629)
* Added CRI-O support for ubuntu

* implemented feedback

* set crictl to fixed version

* Fix errors during rebasing

* Fix linting errors
2019-08-22 03:54:31 -07:00
Michael Oglesby
07ecef86e3 Replace fetch with synchronize due to memory error (#5084)
Fix for Kubespray Issue #5059 (https://github.com/kubernetes-sigs/kubespray/issues/5059). There is a known issue with the 'fetch' module that will sometimes lead to it failing with a memory error. See ansible/ansible#11702 (https://github.com/ansible/ansible/issues/11702). I encountered this issue with the "Copy kubectl binary to ansible host" task in kubespray/roles/kubernetes/client/tasks/main.yml, and it caused my entire deployment to error out (see "Output of ansible run" above). Replacing 'fetch' with 'synchronize' fixes this issue.
2019-08-22 02:40:32 -07:00
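For illustration, the swap amounts to replacing a fetch task with synchronize in pull mode, roughly as below (paths are placeholders, not the exact ones in roles/kubernetes/client):

```yaml
- name: Copy kubectl binary to ansible host
  synchronize:
    src: "{{ bin_dir }}/kubectl"
    dest: "{{ artifacts_dir }}/kubectl"
    mode: pull   # copy from the managed node back to the Ansible control host
```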
ewtang
3bc4b4c174 Use raw module for bootstrap-debian.yml (#5061)
Updated Openstack to terraform 0.12 (#5062)

* update openstack to terraform 0.12(.5)

* replace cluster.tf with cluster.tfvars

* update README.md to terraform 0.12

* update Openstack CI tests to use terraform 0.12

* specify terraform version in openstack README

* gitlab CI to copy cluster.tfvars in case of openstack provider

* The terraform/openstack dynamic inventory can read
tfstate v4 (generated by terraform 0.12) and convert it internally
to v3 (as generated by terraform 0.11.x).

Additionally the script has been updated to Python 3.
2019-08-22 01:46:31 -07:00
Victor Morales
da089b5fca Update CRI-O in CentOS (#4582)
According to their compatibility matrix[1] the 1.11.5 version seems to
be deprecated. This change updates the CentOS repository reference.

[1] https://github.com/cri-o/cri-o#compatibility-matrix-cri-o---kubernetes-clusters
2019-08-22 01:16:32 -07:00
ewtang
d4f094cc11 Add localhost to ansible.limit. (#5037)
Upgrade to Kubernetes 1.15.3 (#5091)
2019-08-22 01:14:32 -07:00
Sergey
494a6512b8 fix bug: run Copy image to ansible host cache on download_delegate host (#5094)
* run 'task download_container | Copy image to ansible host cache' with synchronize on download_delegate host

* try to run the 'copy file to ansible host' task on all inventory hosts, not only on the first random host
2019-08-21 23:38:30 -07:00
mcayland
3732c3a9b1 terraform/openstack: add network_dns_domain variable (#5093)
This allows the user to optionally specify the dns_domain attribute on the
generated internal kubernetes network.
2019-08-21 05:09:15 -07:00
Tony Fouchard
f6a63d88a7 Allow to configure strict ARP on kube-proxy (#5092) 2019-08-20 18:21:17 -07:00
848 changed files with 43114 additions and 6843 deletions


@@ -2,13 +2,8 @@
parseable: true
skip_list:
# see https://docs.ansible.com/ansible-lint/rules/default_rules.html for a list of all default rules
# The following rules throw errors.
# These either still need to be corrected in the repository and the rules re-enabled or documented why they are skipped on purpose.
- '301'
- '305'
- '306'
- '404'
- '503'
# DO NOT add any other rules to this skip_list, instead use local `# noqa` with a comment explaining WHY it is necessary
# These rules are intentionally skipped:
#

15
.editorconfig Normal file

@@ -0,0 +1,15 @@
root = true
[*.{yaml,yml,yml.j2,yaml.j2}]
indent_style = space
indent_size = 2
trim_trailing_whitespace = true
insert_final_newline = true
charset = utf-8
[{Dockerfile}]
indent_style = space
indent_size = 2
trim_trailing_whitespace = true
insert_final_newline = true
charset = utf-8


@@ -18,6 +18,8 @@ explain why.
- **Version of Ansible** (`ansible --version`):
- **Version of Python** (`python --version`):
**Kubespray version (commit) (`git rev-parse --short HEAD`):**
@@ -25,8 +27,8 @@ explain why.
**Network plugin used**:
**Copy of your inventory file:**
**Full inventory with variables (`ansible -i inventory/sample/inventory.ini all -m debug -a "var=hostvars[inventory_hostname]"`):**
<!-- We recommend using snippets services like https://gist.github.com/ etc. -->
**Command used to invoke ansible**:


@@ -1,9 +1,9 @@
<!-- Thanks for sending a pull request! Here are some tips for you:
1. If this is your first time, please read our contributor guidelines: https://git.k8s.io/community/contributors/guide#your-first-contribution and developer guide https://git.k8s.io/community/contributors/devel/development.md#development-guide
1. If this is your first time, please read our contributor guidelines: https://git.k8s.io/community/contributors/guide/first-contribution.md and developer guide https://git.k8s.io/community/contributors/devel/development.md
2. Please label this pull request according to what type of issue you are addressing, especially if this is a release targeted pull request. For reference on required PR/issue labels, read here:
https://git.k8s.io/community/contributors/devel/release.md#issue-kind-label
3. Ensure you have added or ran the appropriate tests for your PR: https://git.k8s.io/community/contributors/devel/testing.md
https://git.k8s.io/community/contributors/devel/sig-release/release.md#issuepr-kind-label
3. Ensure you have added or ran the appropriate tests for your PR: https://git.k8s.io/community/contributors/devel/sig-testing/testing.md
4. If you want *faster* PR reviews, read how: https://git.k8s.io/community/contributors/guide/pull-requests.md#best-practices-for-faster-reviews
5. Follow the instructions for writing a release note: https://git.k8s.io/community/contributors/guide/release-notes.md
6. If the PR is unfinished, see how to mark it: https://git.k8s.io/community/contributors/guide/pull-requests.md#marking-unfinished-pull-requests

1
.gitignore vendored

@@ -1,6 +1,7 @@
.vagrant
*.retry
**/vagrant_ansible_inventory
*.iml
temp
.idea
.tox


@@ -4,13 +4,13 @@ stages:
- deploy-part1
- moderator
- deploy-part2
- deploy-gce
- deploy-part3
- deploy-special
variables:
KUBESPRAY_VERSION: v2.13.3
FAILFASTCI_NAMESPACE: 'kargo-ci'
GITLAB_REPOSITORY: 'kargo-ci/kubernetes-sigs-kubespray'
# DOCKER_HOST: tcp://localhost:2375
ANSIBLE_FORCE_COLOR: "true"
MAGIC: "ci check this"
TEST_ID: "$CI_PIPELINE_ID-$CI_BUILD_ID"
@@ -26,31 +26,36 @@ variables:
IDEMPOT_CHECK: "false"
RESET_CHECK: "false"
UPGRADE_TEST: "false"
LOG_LEVEL: "-vv"
MITOGEN_ENABLE: "false"
ANSIBLE_LOG_LEVEL: "-vv"
RECOVER_CONTROL_PLANE_TEST: "false"
RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[2:],kube-master[1:]"
before_script:
- ./tests/scripts/rebase.sh
- /usr/bin/python -m pip install -r tests/requirements.txt
- update-alternatives --install /usr/bin/python python /usr/bin/python3 1
- python -m pip install -r tests/requirements.txt
- mkdir -p /.ssh
.job: &job
tags:
- packet
variables:
KUBESPRAY_VERSION: v2.10.0
image: quay.io/kubespray/kubespray:$KUBESPRAY_VERSION
artifacts:
when: always
paths:
- cluster-dump/
.testcases: &testcases
<<: *job
services:
- docker:dind
before_script:
- update-alternatives --install /usr/bin/python python /usr/bin/python3 1
- ./tests/scripts/rebase.sh
- ./tests/scripts/testcases_prepare.sh
script:
- ./tests/scripts/testcases_run.sh
after_script:
- ./tests/scripts/testcases_cleanup.sh
- chronic ./tests/scripts/testcases_cleanup.sh
# For failfast, at least 1 job must be defined in .gitlab-ci.yml
# Premoderated with manual actions
@@ -66,6 +71,6 @@ ci-authorized:
include:
- .gitlab-ci/lint.yml
- .gitlab-ci/shellcheck.yml
- .gitlab-ci/digital-ocean.yml
- .gitlab-ci/terraform.yml
- .gitlab-ci/packet.yml
- .gitlab-ci/vagrant.yml


@@ -1,19 +0,0 @@
---
.do_variables: &do_variables
PRIVATE_KEY: $DO_PRIVATE_KEY
CI_PLATFORM: "do"
SSH_USER: root
.do: &do
extends: .testcases
tags:
- do
do_ubuntu-canal-ha:
stage: deploy-part2
extends: .do
variables:
<<: *do_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]


@@ -1,247 +0,0 @@
---
.gce_variables: &gce_variables
GCE_USER: travis
SSH_USER: $GCE_USER
CLOUD_MACHINE_TYPE: "g1-small"
CI_PLATFORM: "gce"
PRIVATE_KEY: $GCE_PRIVATE_KEY
.cache: &cache
cache:
key: "$CI_BUILD_REF_NAME"
paths:
- downloads/
- $HOME/.cache
.gce: &gce
extends: .testcases
<<: *cache
variables:
<<: *gce_variables
tags:
- gce
except: ['triggers']
only: [/^pr-.*$/]
.centos_weave_kubeadm_variables: &centos_weave_kubeadm_variables
# stage: deploy-part1
UPGRADE_TEST: "graceful"
.centos7_multus_calico_variables: &centos7_multus_calico_variables
# stage: deploy-gce
UPGRADE_TEST: "graceful"
# Builds for PRs only (premoderated by unit-tests step) and triggers (auto)
### PR JOBS PART1
gce_ubuntu18-flannel-aio:
stage: deploy-part1
<<: *gce
when: manual
### PR JOBS PART2
gce_coreos-calico-aio:
stage: deploy-gce
<<: *gce
when: on_success
gce_centos7-flannel-addons:
stage: deploy-gce
<<: *gce
when: manual
### MANUAL JOBS
gce_centos-weave-kubeadm-sep:
stage: deploy-gce
extends: .gce
variables:
<<: *centos_weave_kubeadm_variables
when: on_success
only: ['triggers']
except: []
gce_ubuntu-weave-sep:
stage: deploy-gce
<<: *gce
when: manual
only: ['triggers']
except: []
gce_coreos-calico-sep-triggers:
stage: deploy-gce
<<: *gce
when: on_success
only: ['triggers']
except: []
gce_ubuntu-canal-ha-triggers:
stage: deploy-special
<<: *gce
when: on_success
only: ['triggers']
except: []
gce_centos7-flannel-addons-triggers:
stage: deploy-gce
<<: *gce
when: on_success
only: ['triggers']
except: []
gce_ubuntu-weave-sep-triggers:
stage: deploy-gce
<<: *gce
when: on_success
only: ['triggers']
except: []
# More builds for PRs/merges (manual) and triggers (auto)
gce_ubuntu-canal-ha:
stage: deploy-special
<<: *gce
when: manual
gce_ubuntu-canal-kubeadm:
stage: deploy-gce
<<: *gce
when: manual
gce_ubuntu-canal-kubeadm-triggers:
stage: deploy-gce
<<: *gce
when: on_success
only: ['triggers']
except: []
gce_ubuntu-flannel-ha:
stage: deploy-gce
<<: *gce
when: manual
gce_centos-weave-kubeadm-triggers:
stage: deploy-gce
extends: .gce
variables:
<<: *centos_weave_kubeadm_variables
when: on_success
only: ['triggers']
except: []
gce_ubuntu-contiv-sep:
stage: deploy-special
<<: *gce
when: manual
gce_coreos-cilium:
stage: deploy-special
<<: *gce
when: manual
gce_ubuntu18-cilium-sep:
stage: deploy-special
<<: *gce
when: manual
gce_rhel7-weave:
stage: deploy-gce
<<: *gce
when: manual
gce_rhel7-weave-triggers:
stage: deploy-gce
<<: *gce
when: on_success
only: ['triggers']
except: []
gce_debian9-calico-upgrade:
stage: deploy-gce
<<: *gce
when: manual
gce_debian9-calico-triggers:
stage: deploy-gce
<<: *gce
when: on_success
only: ['triggers']
except: []
gce_coreos-canal:
stage: deploy-gce
<<: *gce
when: manual
gce_coreos-canal-triggers:
stage: deploy-gce
<<: *gce
when: on_success
only: ['triggers']
except: []
gce_rhel7-canal-sep:
stage: deploy-special
<<: *gce
when: manual
gce_rhel7-canal-sep-triggers:
stage: deploy-gce
<<: *gce
when: on_success
only: ['triggers']
except: []
gce_centos7-calico-ha:
stage: deploy-special
<<: *gce
when: manual
gce_centos7-calico-ha-triggers:
stage: deploy-gce
<<: *gce
when: on_success
only: ['triggers']
except: []
gce_centos7-kube-router:
stage: deploy-special
<<: *gce
when: manual
gce_centos7-multus-calico:
stage: deploy-gce
extends: .gce
variables:
<<: *centos7_multus_calico_variables
when: manual
gce_oracle-canal:
stage: deploy-gce
<<: *gce
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_opensuse-canal:
stage: deploy-gce
<<: *gce
when: manual
# no triggers yet https://github.com/kubernetes-incubator/kargo/issues/613
gce_coreos-alpha-weave-ha:
stage: deploy-special
<<: *gce
when: manual
gce_coreos-kube-router:
stage: deploy-special
<<: *gce
when: manual
gce_ubuntu-kube-router-sep:
stage: deploy-special
<<: *gce
when: manual


@@ -2,6 +2,9 @@
yamllint:
extends: .job
stage: unit-tests
tags: [light]
variables:
LANG: C.UTF-8
script:
- yamllint --strict .
except: ['triggers', 'master']
@@ -9,15 +12,17 @@ yamllint:
vagrant-validate:
extends: .job
stage: unit-tests
tags: [light]
variables:
VAGRANT_VERSION: 2.2.4
script:
- curl -sL https://releases.hashicorp.com/vagrant/2.2.4/vagrant_2.2.4_x86_64.deb -o /tmp/vagrant_2.2.4_x86_64.deb
- dpkg -i /tmp/vagrant_2.2.4_x86_64.deb
- vagrant validate --ignore-provider
- ./tests/scripts/vagrant-validate.sh
except: ['triggers', 'master']
ansible-lint:
extends: .job
stage: unit-tests
tags: [light]
# lint every yml/yaml file that looks like it contains Ansible plays
script: |-
grep -Rl '^- hosts: \|^ hosts: ' --include \*.yml --include \*.yaml . | xargs -P 4 -n 25 ansible-lint -v
@@ -26,6 +31,7 @@ ansible-lint:
syntax-check:
extends: .job
stage: unit-tests
tags: [light]
variables:
ANSIBLE_INVENTORY: inventory/local-tests.cfg
ANSIBLE_REMOTE_USER: root
@@ -41,9 +47,30 @@ syntax-check:
tox-inventory-builder:
stage: unit-tests
tags: [light]
extends: .job
before_script:
- ./tests/scripts/rebase.sh
- apt-get update && apt-get install -y python3-pip
- update-alternatives --install /usr/bin/python python /usr/bin/python3 10
- python -m pip install -r tests/requirements.txt
script:
- pip install tox
- pip3 install tox
- cd contrib/inventory_builder && tox
when: manual
except: ['triggers', 'master']
markdownlint:
stage: unit-tests
tags: [light]
image: node
before_script:
- npm install -g markdownlint-cli
script:
- markdownlint README.md docs --ignore docs/_sidebar.md
ci-matrix:
stage: unit-tests
tags: [light]
image: python:3
script:
- tests/scripts/md-table/test.sh


@@ -1,122 +1,229 @@
---
.packet_variables: &packet_variables
CI_PLATFORM: "packet"
SSH_USER: "kubespray"
.packet: &packet
extends: .testcases
variables:
<<: *packet_variables
CI_PLATFORM: "packet"
SSH_USER: "kubespray"
tags:
- packet
only: [/^pr-.*$/]
except: ['triggers']
.test-upgrade: &test-upgrade
variables:
UPGRADE_TEST: "graceful"
packet_ubuntu18-calico-aio:
stage: deploy-part1
<<: *packet
extends: .packet
when: on_success
# Future AIO job
packet_ubuntu20-calico-aio:
stage: deploy-part1
extends: .packet
when: on_success
# ### PR JOBS PART2
packet_centos7-flannel-addons:
packet_centos7-flannel-containerd-addons-ha:
extends: .packet
stage: deploy-part2
<<: *packet
when: on_success
variables:
MITOGEN_ENABLE: "true"
# ### MANUAL JOBS
packet_centos-weave-kubeadm-sep:
packet_centos7-crio:
extends: .packet
stage: deploy-part2
<<: *packet
when: on_success
only: ['triggers']
except: []
variables:
MITOGEN_ENABLE: "true"
packet_ubuntu-weave-sep:
packet_ubuntu18-crio:
extends: .packet
stage: deploy-part2
<<: *packet
when: manual
only: ['triggers']
except: []
variables:
MITOGEN_ENABLE: "true"
# # More builds for PRs/merges (manual) and triggers (auto)
packet_ubuntu16-canal-kubeadm-ha:
stage: deploy-part2
extends: .packet
when: on_success
packet_ubuntu-canal-ha:
packet_ubuntu16-canal-sep:
stage: deploy-special
<<: *packet
extends: .packet
when: manual
packet_ubuntu-canal-kubeadm:
packet_ubuntu16-flannel-ha:
stage: deploy-part2
<<: *packet
when: on_success
packet_ubuntu-flannel-ha:
stage: deploy-part2
<<: *packet
extends: .packet
when: manual
packet_ubuntu-contiv-sep:
packet_ubuntu16-kube-router-sep:
stage: deploy-part2
<<: *packet
when: on_success
packet_ubuntu18-cilium-sep:
stage: deploy-special
<<: *packet
extends: .packet
when: manual
packet_ubuntu18-flannel-containerd:
packet_ubuntu16-kube-router-svc-proxy:
stage: deploy-part2
<<: *packet
extends: .packet
when: manual
packet_debian9-macvlan-sep:
packet_debian10-cilium-svc-proxy:
stage: deploy-part2
<<: *packet
when: on_success
packet_debian9-calico-upgrade:
stage: deploy-part2
<<: *packet
when: on_success
packet_centos7-calico-ha:
stage: deploy-part2
<<: *packet
extends: .packet
when: manual
packet_centos7-kube-ovn:
packet_debian10-containerd:
stage: deploy-part2
<<: *packet
extends: .packet
when: on_success
variables:
MITOGEN_ENABLE: "true"
packet_centos7-calico-ha-once-localhost:
stage: deploy-part2
extends: .packet
when: on_success
variables:
# This will instruct Docker not to start over TLS.
DOCKER_TLS_CERTDIR: ""
services:
- docker:19.03.9-dind
packet_centos8-kube-ovn:
stage: deploy-part2
extends: .packet
when: on_success
packet_centos7-kube-router:
packet_centos8-calico:
stage: deploy-part2
<<: *packet
extends: .packet
when: on_success
packet_centos7-multus-calico:
packet_fedora32-weave:
stage: deploy-part2
<<: *packet
when: manual
extends: .packet
when: on_success
packet_opensuse-canal:
stage: deploy-part2
<<: *packet
extends: .packet
when: on_success
packet_ubuntu18-ovn4nfv:
stage: deploy-part2
extends: .packet
when: on_success
# Contiv does not work in k8s v1.16
# packet_ubuntu16-contiv-sep:
# stage: deploy-part2
# extends: .packet
# when: on_success
# ### MANUAL JOBS
packet_ubuntu16-weave-sep:
stage: deploy-part2
extends: .packet
when: manual
packet_oracle-7-canal:
stage: deploy-part2
<<: *packet
packet_ubuntu18-cilium-sep:
stage: deploy-special
extends: .packet
when: manual
packet_ubuntu-kube-router-sep:
packet_ubuntu18-flannel-containerd-ha:
stage: deploy-part2
<<: *packet
extends: .packet
when: manual
packet_ubuntu18-flannel-containerd-ha-once:
stage: deploy-part2
extends: .packet
when: manual
packet_debian9-macvlan:
stage: deploy-part2
extends: .packet
when: manual
packet_centos7-calico-ha:
stage: deploy-part2
extends: .packet
when: manual
packet_centos7-kube-router:
stage: deploy-part2
extends: .packet
when: manual
packet_centos7-multus-calico:
stage: deploy-part2
extends: .packet
when: manual
packet_oracle7-canal-ha:
stage: deploy-part2
extends: .packet
when: manual
packet_fedora31-flannel:
stage: deploy-part2
extends: .packet
when: on_success
variables:
MITOGEN_ENABLE: "true"
packet_amazon-linux-2-aio:
stage: deploy-part2
extends: .packet
when: manual
packet_fedora32-kube-ovn-containerd:
stage: deploy-part2
extends: .packet
when: on_success
# ### PR JOBS PART3
# Long jobs (45min+)
packet_centos7-weave-upgrade-ha:
stage: deploy-part3
extends: .packet
when: on_success
variables:
UPGRADE_TEST: basic
MITOGEN_ENABLE: "false"
packet_debian9-calico-upgrade:
stage: deploy-part3
extends: .packet
when: on_success
variables:
UPGRADE_TEST: graceful
MITOGEN_ENABLE: "false"
packet_debian9-calico-upgrade-once:
stage: deploy-part3
extends: .packet
when: on_success
variables:
UPGRADE_TEST: graceful
MITOGEN_ENABLE: "false"
packet_ubuntu18-calico-ha-recover:
stage: deploy-part3
extends: .packet
when: on_success
variables:
RECOVER_CONTROL_PLANE_TEST: "true"
RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[2:],kube-master[1:]"
packet_ubuntu18-calico-ha-recover-noquorum:
stage: deploy-part3
extends: .packet
when: on_success
variables:
RECOVER_CONTROL_PLANE_TEST: "true"
RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[1:],kube-master[1:]"


@@ -2,11 +2,12 @@
shellcheck:
extends: .job
stage: unit-tests
tags: [light]
variables:
SHELLCHECK_VERSION: v0.6.0
before_script:
- ./tests/scripts/rebase.sh
- curl --silent "https://storage.googleapis.com/shellcheck/shellcheck-"${SHELLCHECK_VERSION}".linux.x86_64.tar.xz" | tar -xJv
- curl --silent --location "https://github.com/koalaman/shellcheck/releases/download/"${SHELLCHECK_VERSION}"/shellcheck-"${SHELLCHECK_VERSION}".linux.x86_64.tar.xz" | tar -xJv
- cp shellcheck-"${SHELLCHECK_VERSION}"/shellcheck /usr/bin/
- shellcheck --version
script:


@@ -3,14 +3,14 @@
.terraform_install:
extends: .job
before_script:
- update-alternatives --install /usr/bin/python python /usr/bin/python3 1
- ./tests/scripts/rebase.sh
- ./tests/scripts/testcases_prepare.sh
- ./tests/scripts/terraform_install.sh
# Set Ansible config
- cp ansible.cfg ~/.ansible.cfg
# Prepare inventory
- if [ "$PROVIDER" == "openstack" ]; then VARIABLEFILE="cluster.tfvars"; else VARIABLEFILE="cluster.tf"; fi
- cp contrib/terraform/$PROVIDER/sample-inventory/$VARIABLEFILE .
- cp contrib/terraform/$PROVIDER/sample-inventory/cluster.tfvars .
- ln -s contrib/terraform/$PROVIDER/hosts
- terraform init contrib/terraform/$PROVIDER
# Copy SSH keypair
@@ -18,21 +18,29 @@
- echo "$PACKET_PRIVATE_KEY" | base64 -d > ~/.ssh/id_rsa
- chmod 400 ~/.ssh/id_rsa
- echo "$PACKET_PUBLIC_KEY" | base64 -d > ~/.ssh/id_rsa.pub
- mkdir -p group_vars
# Random subnet to avoid routing conflicts
- export TF_VAR_subnet_cidr="10.$(( $RANDOM % 256 )).$(( $RANDOM % 256 )).0/24"
.terraform_validate:
extends: .terraform_install
stage: unit-tests
tags: [light]
only: ['master', /^pr-.*$/]
script:
- if [ "$PROVIDER" == "openstack" ]; then VARIABLEFILE="cluster.tfvars"; else VARIABLEFILE="cluster.tf"; fi
- terraform validate -var-file=$VARIABLEFILE contrib/terraform/$PROVIDER
- terraform validate -var-file=cluster.tfvars contrib/terraform/$PROVIDER
- terraform fmt -check -diff contrib/terraform/$PROVIDER
.terraform_apply:
extends: .terraform_install
stage: deploy-part2
tags: [light]
stage: deploy-part3
when: manual
only: [/^pr-.*$/]
artifacts:
when: always
paths:
- cluster-dump/
variables:
ANSIBLE_INVENTORY_UNPARSED_FAILED: "true"
ANSIBLE_INVENTORY: hosts
@@ -43,56 +51,56 @@
- tests/scripts/testcases_run.sh
after_script:
# Cleanup regardless of exit code
- ./tests/scripts/testcases_cleanup.sh
- chronic ./tests/scripts/testcases_cleanup.sh
tf-validate-openstack:
extends: .terraform_validate
variables:
TF_VERSION: 0.12.6
TF_VERSION: 0.12.24
PROVIDER: openstack
CLUSTER: $CI_COMMIT_REF_NAME
tf-validate-packet:
extends: .terraform_validate
variables:
TF_VERSION: 0.11.11
TF_VERSION: 0.12.24
PROVIDER: packet
CLUSTER: $CI_COMMIT_REF_NAME
tf-validate-aws:
extends: .terraform_validate
variables:
TF_VERSION: 0.11.11
TF_VERSION: 0.12.24
PROVIDER: aws
CLUSTER: $CI_COMMIT_REF_NAME
tf-packet-ubuntu16-default:
extends: .terraform_apply
variables:
TF_VERSION: 0.11.11
PROVIDER: packet
CLUSTER: $CI_COMMIT_REF_NAME
TF_VAR_number_of_k8s_masters: "1"
TF_VAR_number_of_k8s_nodes: "1"
TF_VAR_plan_k8s_masters: t1.small.x86
TF_VAR_plan_k8s_nodes: t1.small.x86
TF_VAR_facility: ewr1
TF_VAR_public_key_path: ""
TF_VAR_operating_system: ubuntu_16_04
tf-packet-ubuntu18-default:
extends: .terraform_apply
variables:
TF_VERSION: 0.11.11
PROVIDER: packet
CLUSTER: $CI_COMMIT_REF_NAME
TF_VAR_number_of_k8s_masters: "1"
TF_VAR_number_of_k8s_nodes: "1"
TF_VAR_plan_k8s_masters: t1.small.x86
TF_VAR_plan_k8s_nodes: t1.small.x86
TF_VAR_facility: ams1
TF_VAR_public_key_path: ""
TF_VAR_operating_system: ubuntu_18_04
# tf-packet-ubuntu16-default:
# extends: .terraform_apply
# variables:
# TF_VERSION: 0.12.24
# PROVIDER: packet
# CLUSTER: $CI_COMMIT_REF_NAME
# TF_VAR_number_of_k8s_masters: "1"
# TF_VAR_number_of_k8s_nodes: "1"
# TF_VAR_plan_k8s_masters: t1.small.x86
# TF_VAR_plan_k8s_nodes: t1.small.x86
# TF_VAR_facility: ewr1
# TF_VAR_public_key_path: ""
# TF_VAR_operating_system: ubuntu_16_04
#
# tf-packet-ubuntu18-default:
# extends: .terraform_apply
# variables:
# TF_VERSION: 0.12.24
# PROVIDER: packet
# CLUSTER: $CI_COMMIT_REF_NAME
# TF_VAR_number_of_k8s_masters: "1"
# TF_VAR_number_of_k8s_nodes: "1"
# TF_VAR_plan_k8s_masters: t1.small.x86
# TF_VAR_plan_k8s_nodes: t1.small.x86
# TF_VAR_facility: ams1
# TF_VAR_public_key_path: ""
# TF_VAR_operating_system: ubuntu_18_04
.ovh_variables: &ovh_variables
OS_AUTH_URL: https://auth.cloud.ovh.net/v3
@@ -105,12 +113,87 @@ tf-packet-ubuntu18-default:
OS_INTERFACE: public
OS_IDENTITY_API_VERSION: "3"
# Elastx is generously donating resources for Kubespray on Openstack CI
# Contacts: @gix @bl0m1
.elastx_variables: &elastx_variables
OS_AUTH_URL: https://ops.elastx.cloud:5000
OS_PROJECT_ID: 564c6b461c6b44b1bb19cdb9c2d928e4
OS_PROJECT_NAME: kubespray_ci
OS_USER_DOMAIN_NAME: Default
OS_PROJECT_DOMAIN_ID: default
OS_USERNAME: kubespray@root314.com
OS_REGION_NAME: se-sto
OS_INTERFACE: public
OS_IDENTITY_API_VERSION: "3"
TF_VAR_router_id: "ab95917c-41fb-4881-b507-3a6dfe9403df"
# Since ELASTX is in Stockholm, Mitogen helps with latency
MITOGEN_ENABLE: "false"
# Mitogen doesn't support interpreter discovery yet
ANSIBLE_PYTHON_INTERPRETER: "/usr/bin/python3"
tf-elastx_cleanup:
stage: unit-tests
tags: [light]
image: python
variables:
<<: *elastx_variables
before_script:
- pip install -r scripts/openstack-cleanup/requirements.txt
script:
- ./scripts/openstack-cleanup/main.py
tf-elastx_ubuntu18-calico:
extends: .terraform_apply
stage: deploy-part3
when: on_success
variables:
<<: *elastx_variables
TF_VERSION: 0.12.24
PROVIDER: openstack
CLUSTER: $CI_COMMIT_REF_NAME
ANSIBLE_TIMEOUT: "60"
SSH_USER: ubuntu
TF_VAR_number_of_k8s_masters: "1"
TF_VAR_number_of_k8s_masters_no_floating_ip: "0"
TF_VAR_number_of_k8s_masters_no_floating_ip_no_etcd: "0"
TF_VAR_number_of_etcd: "0"
TF_VAR_number_of_k8s_nodes: "1"
TF_VAR_number_of_k8s_nodes_no_floating_ip: "0"
TF_VAR_number_of_gfs_nodes_no_floating_ip: "0"
TF_VAR_number_of_bastions: "0"
TF_VAR_number_of_k8s_masters_no_etcd: "0"
TF_VAR_floatingip_pool: "elx-public1"
TF_VAR_dns_nameservers: '["1.1.1.1", "8.8.8.8", "8.8.4.4"]'
TF_VAR_use_access_ip: "0"
TF_VAR_external_net: "600b8501-78cb-4155-9c9f-23dfcba88828"
TF_VAR_network_name: "ci-$CI_JOB_ID"
TF_VAR_az_list: '["sto1"]'
TF_VAR_az_list_node: '["sto1"]'
TF_VAR_flavor_k8s_master: 3f73fc93-ec61-4808-88df-2580d94c1a9b # v1-standard-2
TF_VAR_flavor_k8s_node: 3f73fc93-ec61-4808-88df-2580d94c1a9b # v1-standard-2
TF_VAR_image: ubuntu-18.04-server-latest
TF_VAR_k8s_allowed_remote_ips: '["0.0.0.0/0"]'
tf-ovh_cleanup:
stage: unit-tests
tags: [light]
image: python
environment: ovh
variables:
<<: *ovh_variables
before_script:
- pip install -r scripts/openstack-cleanup/requirements.txt
script:
- ./scripts/openstack-cleanup/main.py
tf-ovh_ubuntu18-calico:
extends: .terraform_apply
when: on_success
environment: ovh
variables:
<<: *ovh_variables
TF_VERSION: 0.12.6
TF_VERSION: 0.12.24
PROVIDER: openstack
CLUSTER: $CI_COMMIT_REF_NAME
ANSIBLE_TIMEOUT: "60"
@@ -132,31 +215,3 @@ tf-ovh_ubuntu18-calico:
TF_VAR_flavor_k8s_node: "defa64c3-bd46-43b4-858a-d93bbae0a229" # s1-8
TF_VAR_image: "Ubuntu 18.04"
TF_VAR_k8s_allowed_remote_ips: '["0.0.0.0/0"]'
tf-ovh_coreos-calico:
extends: .terraform_apply
when: on_success
variables:
<<: *ovh_variables
TF_VERSION: 0.12.6
PROVIDER: openstack
CLUSTER: $CI_COMMIT_REF_NAME
ANSIBLE_TIMEOUT: "60"
SSH_USER: core
TF_VAR_number_of_k8s_masters: "0"
TF_VAR_number_of_k8s_masters_no_floating_ip: "1"
TF_VAR_number_of_k8s_masters_no_floating_ip_no_etcd: "0"
TF_VAR_number_of_etcd: "0"
TF_VAR_number_of_k8s_nodes: "0"
TF_VAR_number_of_k8s_nodes_no_floating_ip: "1"
TF_VAR_number_of_gfs_nodes_no_floating_ip: "0"
TF_VAR_number_of_bastions: "0"
TF_VAR_number_of_k8s_masters_no_etcd: "0"
TF_VAR_use_neutron: "0"
TF_VAR_floatingip_pool: "Ext-Net"
TF_VAR_external_net: "6011fbc9-4cbf-46a4-8452-6890a340b60b"
TF_VAR_network_name: "Ext-Net"
TF_VAR_flavor_k8s_master: "4d4fd037-9493-4f2b-9afe-b542b5248eac" # b2-7
TF_VAR_flavor_k8s_node: "4d4fd037-9493-4f2b-9afe-b542b5248eac" # b2-7
TF_VAR_image: "CoreOS Stable"
TF_VAR_k8s_allowed_remote_ips: '["0.0.0.0/0"]'

54
.gitlab-ci/vagrant.yml Normal file

@@ -0,0 +1,54 @@
---
molecule_tests:
tags: [c3.small.x86]
only: [/^pr-.*$/]
except: ['triggers']
image: quay.io/kubespray/vagrant:$KUBESPRAY_VERSION
services: []
stage: deploy-part1
before_script:
- tests/scripts/rebase.sh
- apt-get update && apt-get install -y python3-pip
- update-alternatives --install /usr/bin/python python /usr/bin/python3 10
- python -m pip install -r tests/requirements.txt
- ./tests/scripts/vagrant_clean.sh
script:
- ./tests/scripts/molecule_run.sh
.vagrant:
extends: .testcases
variables:
CI_PLATFORM: "vagrant"
SSH_USER: "vagrant"
VAGRANT_DEFAULT_PROVIDER: "libvirt"
KUBESPRAY_VAGRANT_CONFIG: tests/files/${CI_JOB_NAME}.rb
tags: [c3.small.x86]
only: [/^pr-.*$/]
except: ['triggers']
image: quay.io/kubespray/vagrant:$KUBESPRAY_VERSION
services: []
before_script:
- apt-get update && apt-get install -y python3-pip
- update-alternatives --install /usr/bin/python python /usr/bin/python3 10
- python -m pip install -r tests/requirements.txt
- ./tests/scripts/vagrant_clean.sh
script:
- ./tests/scripts/testcases_run.sh
after_script:
- chronic ./tests/scripts/testcases_cleanup.sh
vagrant_ubuntu18-flannel:
stage: deploy-part2
extends: .vagrant
when: on_success
vagrant_ubuntu18-weave-medium:
stage: deploy-part2
extends: .vagrant
when: manual
vagrant_ubuntu20-flannel:
stage: deploy-part2
extends: .vagrant
when: on_success

2
.markdownlint.yaml Normal file

@@ -0,0 +1,2 @@
---
MD013: false


@@ -2,10 +2,30 @@
## How to become a contributor and submit your own code
### Environment setup
It is recommended to use filter to manage the GitHub email notification, see [examples for setting filters to Kubernetes Github notifications](https://github.com/kubernetes/community/blob/master/communication/best-practices.md#examples-for-setting-filters-to-kubernetes-github-notifications)
To install development dependencies you can use `pip install -r tests/requirements.txt`
#### Linting
Kubespray uses `yamllint` and `ansible-lint`. To run them locally use `yamllint .` and `./tests/scripts/ansible-lint.sh`
#### Molecule
[molecule](https://github.com/ansible-community/molecule) is designed to help the development and testing of Ansible roles. In Kubespray you can run it all for all roles with `./tests/scripts/molecule_run.sh` or for a specific role (that you are working with) with `molecule test` from the role directory (`cd roles/my-role`).
When developing or debugging a role it can be useful to run `molecule create` and `molecule converge` separately. Then you can use `molecule login` to SSH into the test environment.
#### Vagrant
Vagrant with VirtualBox or libvirt driver helps you to quickly spin test clusters to test things end to end. See [README.md#vagrant](README.md)
### Contributing A Patch
1. Submit an issue describing your proposed change to the repo in question.
2. The [repo owners](OWNERS) will respond to your issue promptly.
3. Fork the desired repo, develop and test your code changes.
4. Sign the CNCF CLA (https://git.k8s.io/community/CLA.md#the-contributor-license-agreement)
4. Sign the CNCF CLA (<https://git.k8s.io/community/CLA.md#the-contributor-license-agreement>)
5. Submit a pull request.


@@ -4,15 +4,18 @@ RUN mkdir /kubespray
WORKDIR /kubespray
RUN apt update -y && \
apt install -y \
libssl-dev python3-dev sshpass apt-transport-https jq \
libssl-dev python3-dev sshpass apt-transport-https jq moreutils \
ca-certificates curl gnupg2 software-properties-common python3-pip rsync
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable" \
&& apt update -y && apt-get install docker-ce -y
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable" \
&& apt update -y && apt-get install docker-ce -y
COPY . .
RUN /usr/bin/python3 -m pip install pip -U && /usr/bin/python3 -m pip install -r tests/requirements.txt && python3 -m pip install -r requirements.txt
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.14.4/bin/linux/amd64/kubectl \
RUN /usr/bin/python3 -m pip install pip -U && /usr/bin/python3 -m pip install -r tests/requirements.txt && python3 -m pip install -r requirements.txt && update-alternatives --install /usr/bin/python python /usr/bin/python3 1
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.17.5/bin/linux/amd64/kubectl \
&& chmod a+x kubectl && cp kubectl /usr/local/bin/kubectl
# Some tools like yamllint need this
ENV LANG=C.UTF-8


@@ -1,5 +1,5 @@
mitogen:
ansible-playbook -c local mitogen.yaml -vv
ansible-playbook -c local mitogen.yml -vv
clean:
rm -rf dist/
rm *.retry


@@ -4,18 +4,16 @@ aliases:
- mattymo
- atoms
- chadswen
- rsmitty
- bogdando
- bradbeam
- woopstar
- mirwan
- miouge1
- riverzhang
- holser
- smana
- verwilst
- woopstar
- luckysb
kubespray-reviewers:
- jjungnickel
- archifleks
- chapsuk
- mirwan
- miouge1
- holmsten
- bozzo
- floryut
- eppo

277
README.md

@@ -1,19 +1,17 @@
# Deploy a Production Ready Kubernetes Cluster
![Kubernetes Logo](https://raw.githubusercontent.com/kubernetes-sigs/kubespray/master/docs/img/kubernetes-logo.png)
Deploy a Production Ready Kubernetes Cluster
============================================
If you have questions, check the [documentation](https://kubespray.io) and join us on the [kubernetes slack](https://kubernetes.slack.com), channel **\#kubespray**.
If you have questions, check the documentation at [kubespray.io](https://kubespray.io) and join us on the [kubernetes slack](https://kubernetes.slack.com), channel **\#kubespray**.
You can get your invite [here](http://slack.k8s.io/)
- Can be deployed on **AWS, GCE, Azure, OpenStack, vSphere, Packet (bare metal), Oracle Cloud Infrastructure (Experimental), or Baremetal**
- **Highly available** cluster
- **Composable** (Choice of the network plugin for instance)
- Supports most popular **Linux distributions**
- **Continuous integration tests**
- Can be deployed on **[AWS](docs/aws.md), GCE, [Azure](docs/azure.md), [OpenStack](docs/openstack.md), [vSphere](docs/vsphere.md), [Packet](docs/packet.md) (bare metal), Oracle Cloud Infrastructure (Experimental), or Baremetal**
- **Highly available** cluster
- **Composable** (Choice of the network plugin for instance)
- Supports most popular **Linux distributions**
- **Continuous integration tests**
Quick Start
-----------
## Quick Start
To deploy the cluster you can use :
@@ -21,31 +19,35 @@ To deploy the cluster you can use :
#### Usage
# Install dependencies from ``requirements.txt``
sudo pip install -r requirements.txt
```ShellSession
# Install dependencies from ``requirements.txt``
sudo pip3 install -r requirements.txt
# Copy ``inventory/sample`` as ``inventory/mycluster``
cp -rfp inventory/sample inventory/mycluster
# Copy ``inventory/sample`` as ``inventory/mycluster``
cp -rfp inventory/sample inventory/mycluster
# Update Ansible inventory file with inventory builder
declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
CONFIG_FILE=inventory/mycluster/hosts.yml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
# Update Ansible inventory file with inventory builder
declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
# Review and change parameters under ``inventory/mycluster/group_vars``
cat inventory/mycluster/group_vars/all/all.yml
cat inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
# Review and change parameters under ``inventory/mycluster/group_vars``
cat inventory/mycluster/group_vars/all/all.yml
cat inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
# Deploy Kubespray with Ansible Playbook - run the playbook as root
# The option `--become` is required, as for example writing SSL keys in /etc/,
# installing packages and interacting with various systemd daemons.
# Without --become the playbook will fail to run!
ansible-playbook -i inventory/mycluster/hosts.yml --become --become-user=root cluster.yml
# Deploy Kubespray with Ansible Playbook - run the playbook as root
# The option `--become` is required, as for example writing SSL keys in /etc/,
# installing packages and interacting with various systemd daemons.
# Without --become the playbook will fail to run!
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
```
Note: When Ansible is already installed via system packages on the control machine, other python packages installed via `sudo pip install -r requirements.txt` will go to a different directory tree (e.g. `/usr/local/lib/python2.7/dist-packages` on Ubuntu) from Ansible's (e.g. `/usr/lib/python2.7/dist-packages/ansible` still on Ubuntu).
As a consequence, `ansible-playbook` command will fail with:
```
```raw
ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
```
probably pointing on a task depending on a module present in requirements.txt (i.e. "unseal vault").
One way of solving this would be to uninstall the Ansible package and then, to install it via pip but it is not always possible.
@@ -56,157 +58,172 @@ A workaround consists of setting `ANSIBLE_LIBRARY` and `ANSIBLE_MODULE_UTILS` en
For Vagrant we need to install python dependencies for provisioning tasks.
Check if Python and pip are installed:
python -V && pip -V
```ShellSession
python -V && pip -V
```
If this returns the version of the software, you're good to go. If not, download and install Python from here <https://www.python.org/downloads/source/>
Install the necessary requirements
sudo pip install -r requirements.txt
vagrant up
```ShellSession
sudo pip install -r requirements.txt
vagrant up
```
Documents
---------
## Documents
- [Requirements](#requirements)
- [Kubespray vs ...](docs/comparisons.md)
- [Getting started](docs/getting-started.md)
- [Ansible inventory and tags](docs/ansible.md)
- [Integration with existing ansible repo](docs/integration.md)
- [Deployment data variables](docs/vars.md)
- [DNS stack](docs/dns-stack.md)
- [HA mode](docs/ha-mode.md)
- [Network plugins](#network-plugins)
- [Vagrant install](docs/vagrant.md)
- [CoreOS bootstrap](docs/coreos.md)
- [Debian Jessie setup](docs/debian.md)
- [openSUSE setup](docs/opensuse.md)
- [Downloaded artifacts](docs/downloads.md)
- [Cloud providers](docs/cloud.md)
- [OpenStack](docs/openstack.md)
- [AWS](docs/aws.md)
- [Azure](docs/azure.md)
- [vSphere](docs/vsphere.md)
- [Packet Host](docs/packet.md)
- [Large deployments](docs/large-deployments.md)
- [Upgrades basics](docs/upgrades.md)
- [Roadmap](docs/roadmap.md)
- [Requirements](#requirements)
- [Kubespray vs ...](docs/comparisons.md)
- [Getting started](docs/getting-started.md)
- [Setting up your first cluster](docs/setting-up-your-first-cluster.md)
- [Ansible inventory and tags](docs/ansible.md)
- [Integration with existing ansible repo](docs/integration.md)
- [Deployment data variables](docs/vars.md)
- [DNS stack](docs/dns-stack.md)
- [HA mode](docs/ha-mode.md)
- [Network plugins](#network-plugins)
- [Vagrant install](docs/vagrant.md)
- [Flatcar Container Linux bootstrap](docs/flatcar.md)
- [Fedora CoreOS bootstrap](docs/fcos.md)
- [Debian Jessie setup](docs/debian.md)
- [openSUSE setup](docs/opensuse.md)
- [Downloaded artifacts](docs/downloads.md)
- [Cloud providers](docs/cloud.md)
- [OpenStack](docs/openstack.md)
- [AWS](docs/aws.md)
- [Azure](docs/azure.md)
- [vSphere](docs/vsphere.md)
- [Packet Host](docs/packet.md)
- [Large deployments](docs/large-deployments.md)
- [Adding/replacing a node](docs/nodes.md)
- [Upgrades basics](docs/upgrades.md)
- [Air-Gap installation](docs/offline-environment.md)
- [Roadmap](docs/roadmap.md)
Supported Linux Distributions
-----------------------------
## Supported Linux Distributions
- **Container Linux by CoreOS**
- **Debian** Buster, Jessie, Stretch, Wheezy
- **Ubuntu** 16.04, 18.04
- **CentOS/RHEL** 7
- **Fedora** 28
- **Fedora/CentOS** Atomic
- **openSUSE** Leap 42.3/Tumbleweed
- **Oracle Linux** 7
- **Flatcar Container Linux by Kinvolk**
- **Debian** Buster, Jessie, Stretch, Wheezy
- **Ubuntu** 16.04, 18.04, 20.04
- **CentOS/RHEL** 7, 8 (experimental: see [centos 8 notes](docs/centos8.md))
- **Fedora** 31, 32
- **Fedora CoreOS** (experimental: see [fcos Note](docs/fcos.md))
- **openSUSE** Leap 42.3/Tumbleweed
- **Oracle Linux** 7, 8 (experimental: [centos 8 notes](docs/centos8.md) apply)
Note: Upstart/SysV init based OS types are not supported.
Supported Components
--------------------
## Supported Components
- Core
- [kubernetes](https://github.com/kubernetes/kubernetes) v1.15.3
- [etcd](https://github.com/coreos/etcd) v3.3.10
- [docker](https://www.docker.com/) v18.06 (see note)
- [cri-o](http://cri-o.io/) v1.11.5 (experimental: see [CRI-O Note](docs/cri-o.md). Only on centos based OS)
- Network Plugin
- [cni-plugins](https://github.com/containernetworking/plugins) v0.8.1
- [calico](https://github.com/projectcalico/calico) v3.7.3
- [canal](https://github.com/projectcalico/canal) (given calico/flannel versions)
- [cilium](https://github.com/cilium/cilium) v1.5.5
- [contiv](https://github.com/contiv/install) v1.2.1
- [flanneld](https://github.com/coreos/flannel) v0.11.0
- [kube-router](https://github.com/cloudnativelabs/kube-router) v0.2.5
- [multus](https://github.com/intel/multus-cni) v3.2.1
- [weave](https://github.com/weaveworks/weave) v2.5.2
- Application
- [cephfs-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.0-k8s1.11
- [rbd-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.1-k8s1.11
- [cert-manager](https://github.com/jetstack/cert-manager) v0.5.2
- [coredns](https://github.com/coredns/coredns) v1.6.0
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v0.25.1
- Core
- [kubernetes](https://github.com/kubernetes/kubernetes) v1.18.9
- [etcd](https://github.com/coreos/etcd) v3.4.3
- [docker](https://www.docker.com/) v19.03 (see note)
- [containerd](https://containerd.io/) v1.2.13
- [cri-o](http://cri-o.io/) v1.17 (experimental: see [CRI-O Note](docs/cri-o.md). Only on fedora, ubuntu and centos based OS)
- Network Plugin
- [cni-plugins](https://github.com/containernetworking/plugins) v0.8.6
- [calico](https://github.com/projectcalico/calico) v3.15.2
- [canal](https://github.com/projectcalico/canal) (given calico/flannel versions)
- [cilium](https://github.com/cilium/cilium) v1.8.3
- [contiv](https://github.com/contiv/install) v1.2.1
- [flanneld](https://github.com/coreos/flannel) v0.12.0
- [kube-ovn](https://github.com/alauda/kube-ovn) v1.3.0
- [kube-router](https://github.com/cloudnativelabs/kube-router) v1.0.1
- [multus](https://github.com/intel/multus-cni) v3.6.0
- [ovn4nfv](https://github.com/opnfv/ovn4nfv-k8s-plugin) v1.1.0
- [weave](https://github.com/weaveworks/weave) v2.7.0
- Application
- [ambassador](https://github.com/datawire/ambassador): v1.5
- [cephfs-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.0-k8s1.11
- [rbd-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.1-k8s1.11
- [cert-manager](https://github.com/jetstack/cert-manager) v0.16.1
- [coredns](https://github.com/coredns/coredns) v1.6.7
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v0.35.0
Note: The list of validated [docker versions](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md) was updated to 1.11.1, 1.12.1, 1.13.1, 17.03, 17.06, 17.09, 18.06. kubeadm now properly recognizes Docker 18.09.0 and newer, but still treats 18.06 as the default supported version. The kubelet might break on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster, look into e.g. the yum versionlock plugin or apt pinning.
Note: The list of validated [docker versions](https://kubernetes.io/docs/setup/production-environment/container-runtimes/#docker) is 1.13.1, 17.03, 17.06, 17.09, 18.06, 18.09 and 19.03. The recommended docker version is 19.03. The kubelet might break on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster, look into e.g. the yum versionlock plugin or apt pinning.
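If you need to pin the container runtime yourself, a minimal sketch (package names such as `docker-ce` and `containerd.io` are assumptions that depend on how Docker was installed) looks like:

```shell
# Debian/Ubuntu: hold the Docker packages at their current version
sudo apt-mark hold docker-ce docker-ce-cli containerd.io

# CentOS/RHEL: lock the versions with the yum versionlock plugin
sudo yum install -y yum-plugin-versionlock
sudo yum versionlock add docker-ce docker-ce-cli containerd.io
```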
Requirements
------------
- **Minimum required version of Kubernetes is v1.14**
- **Ansible v2.7.8 (or newer, but [not 2.8.x](https://github.com/kubernetes-sigs/kubespray/issues/4778)) and python-netaddr are installed on the machine
that will run Ansible commands**
- **Jinja 2.9 (or newer) is required to run the Ansible Playbooks**
- The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/downloads.md#offline-environment))
- The target servers are configured to allow **IPv4 forwarding**.
- **Your ssh key must be copied** to all the servers that are part of your inventory.
- The **firewalls are not managed**; you'll need to implement your own rules the way you used to.
## Requirements
- **Minimum required version of Kubernetes is v1.17**
- **Ansible v2.9+, Jinja 2.11+ and python-netaddr are installed on the machine that will run Ansible commands**
- The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](docs/offline-environment.md))
- The target servers are configured to allow **IPv4 forwarding**.
- **Your ssh key must be copied** to all the servers that are part of your inventory.
- The **firewalls are not managed**; you'll need to implement your own rules the way you used to.
In order to avoid any issues during deployment, you should disable your firewall.
- If kubespray is run from a non-root user account, a correct privilege escalation method
should be configured on the target servers. Then the `ansible_become` flag
or command parameters `--become` or `-b` should be specified.
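As a rough sketch (the inventory path `inventory/mycluster/hosts.yaml` is only an example), installing the Python dependencies and running the playbook with privilege escalation from a non-root account could look like:

```shell
# Install Ansible, Jinja and python-netaddr on the machine running the playbooks
sudo pip3 install -r requirements.txt

# Deploy the cluster, escalating to root on the target servers
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
```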
Hardware:
These limits are safeguarded by Kubespray. Actual requirements for your workload can differ. For a sizing guide go to the [Building Large Clusters](https://kubernetes.io/docs/setup/cluster-large/#size-of-master-and-master-components) guide.
- Master
- Memory: 1500 MB
- Node
- Memory: 1024 MB
Network Plugins
---------------
## Network Plugins
You can choose between 10 network plugins. (default: `calico`, except Vagrant uses `flannel`)
- [flannel](docs/flannel.md): gre/vxlan (layer 2) networking.
- [calico](docs/calico.md): bgp (layer 3) networking.
- [Calico](https://docs.projectcalico.org/latest/introduction/) is a networking and network policy provider. Calico supports a flexible set of networking options
designed to give you the most efficient networking across a range of situations, including non-overlay
and overlay networks, with or without BGP. Calico uses the same engine to enforce network policy for hosts,
pods, and (if using Istio and Envoy) applications at the service mesh layer.
- [canal](https://github.com/projectcalico/canal): a composition of calico and flannel plugins.
- [cilium](http://docs.cilium.io/en/latest/): layer 3/4 networking (as well as layer 7 to protect and secure application protocols), supports dynamic insertion of BPF bytecode into the Linux kernel to implement security services, networking and visibility logic.
- [contiv](docs/contiv.md): supports vlan, vxlan, bgp and Cisco SDN networking. This plugin is able to
apply firewall policies, segregate containers in multiple network and bridging pods onto physical networks.
- [ovn4nfv](docs/ovn4nfv.md): [ovn4nfv-k8s-plugins](https://github.com/opnfv/ovn4nfv-k8s-plugin) is the network controller, OVS agent and CNI server to offer basic SFC and OVN overlay networking.
- [weave](docs/weave.md): Weave is a lightweight container overlay network that doesn't require an external K/V database cluster.
(Please refer to `weave` [troubleshooting documentation](https://www.weave.works/docs/net/latest/troubleshooting/)).
- [kube-ovn](docs/kube-ovn.md): Kube-OVN integrates the OVN-based Network Virtualization with Kubernetes. It offers an advanced Container Network Fabric for Enterprises.
- [kube-router](docs/kube-router.md): Kube-router is a L3 CNI for Kubernetes networking aiming to provide operational
simplicity and high performance: it uses IPVS to provide Kube Services Proxy (if setup to replace kube-proxy),
iptables for network policies, and BGP for pods L3 networking (with optional BGP peering with out-of-cluster BGP peers).
It can also optionally advertise routes to Kubernetes cluster Pods CIDRs, ClusterIPs, ExternalIPs and LoadBalancerIPs.
- [macvlan](docs/macvlan.md): Macvlan is a Linux network driver. Pods have their own unique MAC and IP address, connected directly to the physical (layer 2) network.
- [multus](docs/multus.md): Multus is a meta CNI plugin that provides multiple network interface support to pods. For each interface Multus delegates CNI calls to secondary CNI plugins such as Calico, macvlan, etc.
The choice is defined with the variable `kube_network_plugin`. There is also an
option to leverage built-in cloud provider networking instead.
See also [Network checker](docs/netcheck.md).
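For example, a minimal sketch of selecting a plugin at deploy time (the inventory path is an example; the variable can equally be set in your inventory `group_vars`):

```shell
# Deploy with Cilium instead of the default Calico
ansible-playbook -i inventory/mycluster/hosts.yaml cluster.yml \
  --become --become-user=root \
  -e kube_network_plugin=cilium
```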
Community docs and resources
----------------------------
## Ingress Plugins
- [kubernetes.io/docs/getting-started-guides/kubespray/](https://kubernetes.io/docs/getting-started-guides/kubespray/)
- [kubespray, monitoring and logging](https://github.com/gregbkr/kubernetes-kargo-logging-monitoring) by @gregbkr
- [Deploy Kubernetes w/ Ansible & Terraform](https://rsmitty.github.io/Terraform-Ansible-Kubernetes/) by @rsmitty
- [Deploy a Kubernetes Cluster with Kubespray (video)](https://www.youtube.com/watch?v=N9q51JgbWu8)
- [ambassador](docs/ambassador.md): the Ambassador Ingress Controller and API gateway.
Tools and projects on top of Kubespray
--------------------------------------
- [nginx](https://kubernetes.github.io/ingress-nginx): the NGINX Ingress Controller.
- [Digital Rebar Provision](https://github.com/digitalrebar/provision/blob/master/doc/integrations/ansible.rst)
- [Terraform Contrib](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform)
## Community docs and resources
CI Tests
--------
- [kubernetes.io/docs/setup/production-environment/tools/kubespray/](https://kubernetes.io/docs/setup/production-environment/tools/kubespray/)
- [kubespray, monitoring and logging](https://github.com/gregbkr/kubernetes-kargo-logging-monitoring) by @gregbkr
- [Deploy Kubernetes w/ Ansible & Terraform](https://rsmitty.github.io/Terraform-Ansible-Kubernetes/) by @rsmitty
- [Deploy a Kubernetes Cluster with Kubespray (video)](https://www.youtube.com/watch?v=CJ5G4GpqDy0)
[![Build graphs](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/badges/master/build.svg)](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/pipelines)
## Tools and projects on top of Kubespray
- [Digital Rebar Provision](https://github.com/digitalrebar/provision/blob/v4/doc/integrations/ansible.rst)
- [Terraform Contrib](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform)
## CI Tests
[![Build graphs](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/badges/master/pipeline.svg)](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/pipelines)
CI/end-to-end tests sponsored by: [CNCF](https://cncf.io), [Packet](https://www.packet.com/), [OVHcloud](https://www.ovhcloud.com/), [ELASTX](https://elastx.se/).
CI/end-to-end tests sponsored by Google (GCE)
See the [test matrix](docs/test_cases.md) for details.

View File

@@ -3,38 +3,46 @@
The Kubespray Project is released on an as-needed basis. The process is as follows:
1. An issue is proposing a new release with a changelog since the last release
2. At least one of the [OWNERS](OWNERS) must LGTM this release
3. An OWNER runs `git tag -s $VERSION` and inserts the changelog and pushes the tag with `git push $VERSION`
4. The release issue is closed
5. An announcement email is sent to `kubernetes-dev@googlegroups.com` with the subject `[ANNOUNCE] Kubespray $VERSION is released`
2. At least one of the [approvers](OWNERS_ALIASES) must approve this release
3. The `kube_version_min_required` variable is set to `n-1`
4. Remove hashes for [EOL versions](https://github.com/kubernetes/sig-release/blob/master/releases/patch-releases.md) of kubernetes from `*_checksums` variables.
5. An approver creates [new release in GitHub](https://github.com/kubernetes-sigs/kubespray/releases/new) using a version and tag name like `vX.Y.Z` and attaching the release notes
6. An approver creates a release branch in the form `release-X.Y`
7. The corresponding version of [quay.io/kubespray/kubespray:vX.Y.Z](https://quay.io/repository/kubespray/kubespray) and [quay.io/kubespray/vagrant:vX.Y.Z](https://quay.io/repository/kubespray/vagrant) docker images are built and tagged
8. The `KUBESPRAY_VERSION` variable is updated in `.gitlab-ci.yml`
9. The release issue is closed
10. An announcement email is sent to `kubernetes-dev@googlegroups.com` with the subject `[ANNOUNCE] Kubespray $VERSION is released`
11. The topic of the #kubespray channel is updated with `vX.Y.Z is released! | ...`
## Major/minor releases, merge freezes and milestones
## Major/minor releases and milestones
* Kubespray does not maintain stable branches for releases. Releases are tags, not
branches, and there are no backports. Therefore, there is no need for merge
freezes either.
* For major releases (vX.Y) Kubespray maintains one branch (`release-X.Y`). Minor releases (vX.Y.Z) are available only as tags.
* Fixes for major releases (vX.x.0) and minor releases (vX.Y.x) are delivered
* Security patches and bugs might be backported.
* Fixes for major releases (vX.Y) and minor releases (vX.Y.Z) are delivered
via maintenance releases (vX.Y.Z) and assigned to the corresponding open
milestone (vX.Y). That milestone remains open for the major/minor releases
support lifetime, which ends once the milestone closed. Then only a next major
or minor release can be done.
[GitHub milestone](https://github.com/kubernetes-sigs/kubespray/milestones).
That milestone remains open for the major/minor releases support lifetime,
which ends once the milestone is closed. Then only a next major or minor release
can be done.
* Kubespray major and minor releases are bound to the given ``kube_version`` major/minor
* Kubespray major and minor releases are bound to the given `kube_version` major/minor
version numbers and other components' arbitrary versions, like etcd or network plugins.
Older or newer versions are not supported and not tested for the given release.
Older or newer component versions are not supported and not tested for the given
release (even if included in the checksum variables, like `kubeadm_checksums`).
* There are no unstable releases and no APIs, thus Kubespray doesn't follow
[semver](http://semver.org/). Every version describes only a stable release.
[semver](https://semver.org/). Every version describes only a stable release.
Breaking changes, if any introduced by changed defaults or non-contrib ansible roles'
playbooks, shall be described in the release notes. Other breaking changes, if any in
the contributed addons or bound versions of Kubernetes and other components, are
considered out of Kubespray scope and are up to the components' teams to deal with and
document.
* Minor releases can change components' versions, but not the major ``kube_version``.
Greater ``kube_version`` requires a new major or minor release. For example, if Kubespray v2.0.0
is bound to ``kube_version: 1.4.x``, ``calico_version: 0.22.0``, ``etcd_version: v3.0.6``,
then Kubespray v2.1.0 may be bound to only minor changes to ``kube_version``, like v1.5.1
* Minor releases can change components' versions, but not the major `kube_version`.
Greater `kube_version` requires a new major or minor release. For example, if Kubespray v2.0.0
is bound to `kube_version: 1.4.x`, `calico_version: 0.22.0`, `etcd_version: v3.0.6`,
then Kubespray v2.1.0 may be bound to only minor changes to `kube_version`, like v1.5.1
and *any* changes to other components, like etcd v4, or calico 1.2.3.
And Kubespray v3.x.x shall be bound to ``kube_version: 2.x.x`` respectively.
And Kubespray v3.x.x shall be bound to `kube_version: 2.x.x` respectively.

View File

@@ -1,13 +1,13 @@
# Defined below are the security contacts for this repo.
#
# They are the contact point for the Product Security Team to reach out
# They are the contact point for the Product Security Committee to reach out
# to for triaging and handling of incoming issues.
#
# The below names agree to abide by the
# [Embargo Policy](https://github.com/kubernetes/sig-release/blob/master/security-release-process-documentation/security-release-process.md#embargo-policy)
# [Embargo Policy](https://git.k8s.io/security/private-distributors-list.md#embargo-policy)
# and will be removed and replaced if they violate that agreement.
#
# DO NOT REPORT SECURITY VULNERABILITIES DIRECTLY TO THESE NAMES, FOLLOW THE
# INSTRUCTIONS AT https://kubernetes.io/security/
atoms
mattymo

Vagrantfile vendored
View File

@@ -7,63 +7,72 @@ require 'fileutils'
Vagrant.require_version ">= 2.0.0"
CONFIG = File.join(File.dirname(__FILE__), "vagrant/config.rb")
CONFIG = File.join(File.dirname(__FILE__), ENV['KUBESPRAY_VAGRANT_CONFIG'] || 'vagrant/config.rb')
COREOS_URL_TEMPLATE = "https://storage.googleapis.com/%s.release.core-os.net/amd64-usr/current/coreos_production_vagrant.json"
FLATCAR_URL_TEMPLATE = "https://%s.release.flatcar-linux.net/amd64-usr/current/flatcar_production_vagrant.json"
# Uniq disk UUID for libvirt
DISK_UUID = Time.now.utc.to_i
SUPPORTED_OS = {
"coreos-stable" => {box: "coreos-stable", user: "core", box_url: COREOS_URL_TEMPLATE % ["stable"]},
"coreos-alpha" => {box: "coreos-alpha", user: "core", box_url: COREOS_URL_TEMPLATE % ["alpha"]},
"coreos-beta" => {box: "coreos-beta", user: "core", box_url: COREOS_URL_TEMPLATE % ["beta"]},
"ubuntu1604" => {box: "generic/ubuntu1604", user: "vagrant"},
"ubuntu1804" => {box: "generic/ubuntu1804", user: "vagrant"},
"centos" => {box: "centos/7", user: "vagrant"},
"centos-bento" => {box: "bento/centos-7.6", user: "vagrant"},
"fedora" => {box: "fedora/28-cloud-base", user: "vagrant"},
"opensuse" => {box: "opensuse/openSUSE-15.0-x86_64", user: "vagrant"},
"opensuse-tumbleweed" => {box: "opensuse/openSUSE-Tumbleweed-x86_64", user: "vagrant"},
"oraclelinux" => {box: "generic/oracle7", user: "vagrant"},
"flatcar-stable" => {box: "flatcar-stable", user: "core", box_url: FLATCAR_URL_TEMPLATE % ["stable"]},
"flatcar-beta" => {box: "flatcar-beta", user: "core", box_url: FLATCAR_URL_TEMPLATE % ["beta"]},
"flatcar-alpha" => {box: "flatcar-alpha", user: "core", box_url: FLATCAR_URL_TEMPLATE % ["alpha"]},
"flatcar-edge" => {box: "flatcar-edge", user: "core", box_url: FLATCAR_URL_TEMPLATE % ["edge"]},
"ubuntu1604" => {box: "generic/ubuntu1604", user: "vagrant"},
"ubuntu1804" => {box: "generic/ubuntu1804", user: "vagrant"},
"ubuntu2004" => {box: "generic/ubuntu2004", user: "vagrant"},
"centos" => {box: "centos/7", user: "vagrant"},
"centos-bento" => {box: "bento/centos-7.6", user: "vagrant"},
"centos8" => {box: "centos/8", user: "vagrant"},
"centos8-bento" => {box: "bento/centos-8", user: "vagrant"},
"fedora31" => {box: "fedora/31-cloud-base", user: "vagrant"},
"fedora32" => {box: "fedora/32-cloud-base", user: "vagrant"},
"opensuse" => {box: "bento/opensuse-leap-15.1", user: "vagrant"},
"opensuse-tumbleweed" => {box: "opensuse/Tumbleweed.x86_64", user: "vagrant"},
"oraclelinux" => {box: "generic/oracle7", user: "vagrant"},
"oraclelinux8" => {box: "generic/oracle8", user: "vagrant"},
}
# Defaults for config options defined in CONFIG
$num_instances = 3
$instance_name_prefix = "k8s"
$vm_gui = false
$vm_memory = 2048
$vm_cpus = 1
$shared_folders = {}
$forwarded_ports = {}
$subnet = "172.17.8"
$os = "ubuntu1804"
$network_plugin = "flannel"
# Setting multi_networking to true will install Multus: https://github.com/intel/multus-cni
$multi_networking = false
# The first three nodes are etcd servers
$etcd_instances = $num_instances
# The first two nodes are kube masters
$kube_master_instances = $num_instances == 1 ? $num_instances : ($num_instances - 1)
# All nodes are kube nodes
$kube_node_instances = $num_instances
# The following only works when using the libvirt provider
$kube_node_instances_with_disks = false
$kube_node_instances_with_disks_size = "20G"
$kube_node_instances_with_disks_number = 2
$override_disk_size = false
$disk_size = "20GB"
$local_path_provisioner_enabled = false
$local_path_provisioner_claim_root = "/opt/local-path-provisioner/"
$playbook = "cluster.yml"
host_vars = {}
if File.exist?(CONFIG)
require CONFIG
end
# Defaults for config options defined in CONFIG
$num_instances ||= 3
$instance_name_prefix ||= "k8s"
$vm_gui ||= false
$vm_memory ||= 2048
$vm_cpus ||= 2
$shared_folders ||= {}
$forwarded_ports ||= {}
$subnet ||= "172.18.8"
$os ||= "ubuntu1804"
$network_plugin ||= "flannel"
# Setting multi_networking to true will install Multus: https://github.com/intel/multus-cni
$multi_networking ||= false
$download_run_once ||= "True"
$download_force_cache ||= "True"
# The first three nodes are etcd servers
$etcd_instances ||= $num_instances
# The first two nodes are kube masters
$kube_master_instances ||= $num_instances == 1 ? $num_instances : ($num_instances - 1)
# All nodes are kube nodes
$kube_node_instances ||= $num_instances
# The following only works when using the libvirt provider
$kube_node_instances_with_disks ||= false
$kube_node_instances_with_disks_size ||= "20G"
$kube_node_instances_with_disks_number ||= 2
$override_disk_size ||= false
$disk_size ||= "20GB"
$local_path_provisioner_enabled ||= false
$local_path_provisioner_claim_root ||= "/opt/local-path-provisioner/"
$libvirt_nested ||= false
$playbook ||= "cluster.yml"
host_vars = {}
$box = SUPPORTED_OS[$os][:box]
# if $inventory is not set, try to use example
$inventory = "inventory/sample" if ! $inventory
@@ -136,6 +145,8 @@ Vagrant.configure("2") do |config|
end
node.vm.provider :libvirt do |lv|
lv.nested = $libvirt_nested
lv.cpu_mode = "host-model"
lv.memory = $vm_memory
lv.cpus = $vm_cpus
lv.default_prefix = 'kubespray'
@@ -176,19 +187,24 @@ Vagrant.configure("2") do |config|
# Disable swap for each vm
node.vm.provision "shell", inline: "swapoff -a"
# Disable firewalld on oraclelinux vms
if ["oraclelinux","oraclelinux8"].include? $os
node.vm.provision "shell", inline: "systemctl stop firewalld; systemctl disable firewalld"
end
host_vars[vm_name] = {
"ip": ip,
"flannel_interface": "eth1",
"kube_network_plugin": $network_plugin,
"kube_network_plugin_multus": $multi_networking,
"download_run_once": "True",
"download_run_once": $download_run_once,
"download_localhost": "False",
"download_cache_dir": ENV['HOME'] + "/kubespray_cache",
# Make kubespray cache even when download_run_once is false
"download_force_cache": "True",
"download_force_cache": $download_force_cache,
# Keeping the cache on the nodes can improve provisioning speed while debugging kubespray
"download_keep_remote_cache": "False",
"docker_keepcache": "1",
"docker_rpm_keepcache": "1",
# These two settings will put kubectl and admin.config in $inventory/artifacts
"kubeconfig_localhost": "True",
"kubectl_localhost": "True",
@@ -206,7 +222,7 @@ Vagrant.configure("2") do |config|
ansible.inventory_path = $ansible_inventory_path
end
ansible.become = true
ansible.limit = "all"
ansible.limit = "all,localhost"
ansible.host_key_checking = false
ansible.raw_arguments = ["--forks=#{$num_instances}", "--flush-cache", "-e ansible_become_pass=vagrant"]
ansible.host_vars = host_vars

View File

@@ -1,2 +1,2 @@
---
theme: jekyll-theme-slate

View File

@@ -11,11 +11,13 @@ host_key_checking=False
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp
stdout_callback = skippy
fact_caching_timeout = 7200
stdout_callback = default
display_skipped_hosts = no
library = ./library
callback_whitelist = profile_tasks
roles_path = roles:$VIRTUAL_ENV/usr/local/share/kubespray/roles:$VIRTUAL_ENV/usr/local/share/ansible/roles:/usr/share/kubespray/roles
deprecation_warnings=False
inventory_ignore_extensions = ~, .orig, .bak, .ini, .cfg, .retry, .pyc, .pyo, .creds
inventory_ignore_extensions = ~, .orig, .bak, .ini, .cfg, .retry, .pyc, .pyo, .creds, .gpg
[inventory]
ignore_patterns = artifacts, credentials

ansible_version.yml Normal file
View File

@@ -0,0 +1,15 @@
---
- hosts: localhost
gather_facts: false
become: no
vars:
minimal_ansible_version: 2.8.0
ansible_connection: local
tasks:
- name: "Check ansible version >={{ minimal_ansible_version }}"
assert:
msg: "Ansible must be {{ minimal_ansible_version }} or higher"
that:
- ansible_version.string is version(minimal_ansible_version, ">=")
tags:
- check

View File

@@ -1,44 +1,55 @@
---
- hosts: localhost
- name: Check ansible version
import_playbook: ansible_version.yml
- hosts: all
gather_facts: false
become: no
tags: always
tasks:
- name: "Check ansible version >=2.7.8"
assert:
msg: "Ansible must be v2.7.8 or higher"
that:
- ansible_version.string is version("2.7.8", ">=")
tags:
- check
vars:
ansible_connection: local
- name: "Set up proxy environment"
set_fact:
proxy_env:
http_proxy: "{{ http_proxy | default ('') }}"
HTTP_PROXY: "{{ http_proxy | default ('') }}"
https_proxy: "{{ https_proxy | default ('') }}"
HTTPS_PROXY: "{{ https_proxy | default ('') }}"
no_proxy: "{{ no_proxy | default ('') }}"
NO_PROXY: "{{ no_proxy | default ('') }}"
no_log: true
- hosts: bastion[0]
gather_facts: False
roles:
- { role: kubespray-defaults}
- { role: bastion-ssh-config, tags: ["localhost", "bastion"]}
- { role: kubespray-defaults }
- { role: bastion-ssh-config, tags: ["localhost", "bastion"] }
- hosts: k8s-cluster:etcd
strategy: linear
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
gather_facts: false
roles:
- { role: kubespray-defaults}
- { role: kubespray-defaults }
- { role: bootstrap-os, tags: bootstrap-os}
- name: Gather facts
tags: always
import_playbook: facts.yml
- hosts: k8s-cluster:etcd
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubespray-defaults }
- { role: kubernetes/preinstall, tags: preinstall }
- { role: "container-engine", tags: "container-engine", when: deploy_container_engine|default(true) }
- { role: download, tags: download, when: "not skip_downloads" }
environment: "{{ proxy_env }}"
- hosts: etcd
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubespray-defaults }
- role: etcd
tags: etcd
vars:
@@ -47,9 +58,10 @@
when: not etcd_kubeadm_enabled| default(false)
- hosts: k8s-cluster
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubespray-defaults }
- role: etcd
tags: etcd
vars:
@@ -58,58 +70,68 @@
when: not etcd_kubeadm_enabled| default(false)
- hosts: k8s-cluster
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubespray-defaults }
- { role: kubernetes/node, tags: node }
environment: "{{ proxy_env }}"
- hosts: kube-master
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubespray-defaults }
- { role: kubernetes/master, tags: master }
- { role: kubernetes/client, tags: client }
- { role: kubernetes-apps/cluster_roles, tags: cluster-roles }
- hosts: k8s-cluster
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubespray-defaults }
- { role: kubernetes/kubeadm, tags: kubeadm}
- { role: network_plugin, tags: network }
- { role: kubernetes/node-label, tags: node-label }
- hosts: calico-rr
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: network_plugin/calico/rr, tags: ['network', 'calico_rr']}
- { role: kubespray-defaults }
- { role: network_plugin/calico/rr, tags: ['network', 'calico_rr'] }
- hosts: kube-master[0]
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubespray-defaults }
- { role: kubernetes-apps/rotate_tokens, tags: rotate_tokens, when: "secret_changed|default(false)" }
- { role: win_nodes/kubernetes_patch, tags: ["master", "win_nodes"]}
- { role: win_nodes/kubernetes_patch, tags: ["master", "win_nodes"] }
- hosts: kube-master
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubespray-defaults }
- { role: kubernetes-apps/external_cloud_controller, tags: external-cloud-controller }
- { role: kubernetes-apps/network_plugin, tags: network }
- { role: kubernetes-apps/policy_controller, tags: policy-controller }
- { role: kubernetes-apps/ingress_controller, tags: ingress-controller }
- { role: kubernetes-apps/external_provisioner, tags: external-provisioner }
- hosts: kube-master
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubespray-defaults }
- { role: kubernetes-apps, tags: apps }
environment: "{{ proxy_env }}"
- hosts: k8s-cluster
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubespray-defaults }
- { role: kubernetes/preinstall, when: "dns_mode != 'none' and resolvconf_mode == 'host_resolvconf'", tags: resolvconf, dns_late: true }

View File

@@ -15,8 +15,9 @@ Resource Group. It will not install Kubernetes itself, this has to be done in a
## Configuration through group_vars/all
You have to modify at least one variable in group_vars/all, which is the **cluster_name** variable. It must be globally
unique due to some restrictions in Azure. Most other variables should be self explanatory if you have some basic Kubernetes
You have to modify at least two variables in group_vars/all. One is the **cluster_name** variable; it must be globally
unique due to some restrictions in Azure. The other is the **ssh_public_keys** variable; it must be your ssh public
key to access your Azure virtual machines. Most other variables should be self explanatory if you have some basic Kubernetes
experience.
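For illustration only (the exact layout of the group_vars files may differ between Kubespray versions), the two variables could be set like this:

```shell
# Append the two required settings to the sample inventory's group_vars
# (check the sample file for the exact expected format of ssh_public_keys)
cat >> inventory/sample/group_vars/all/all.yml <<'EOF'
cluster_name: my-globally-unique-cluster
ssh_public_keys: "ssh-rsa AAAA... you@example.com"
EOF
```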
## Bastion host
@@ -59,6 +60,7 @@ It will create the file ./inventory which can then be used with kubespray, e.g.:
```shell
$ cd kubespray-root-dir
$ ansible-playbook -i contrib/azurerm/inventory -u devops --become -e "@inventory/sample/group_vars/all.yml" cluster.yml
$ sudo pip3 install -r requirements.txt
$ ansible-playbook -i contrib/azurerm/inventory -u devops --become -e "@inventory/sample/group_vars/all/all.yml" cluster.yml
```

View File

@@ -9,18 +9,11 @@ if [ "$AZURE_RESOURCE_GROUP" == "" ]; then
exit 1
fi
if az &>/dev/null; then
echo "azure cli 2.0 found, using it instead of 1.0"
./apply-rg_2.sh "$AZURE_RESOURCE_GROUP"
elif azure &>/dev/null; then
ansible-playbook generate-templates.yml
azure group deployment create -f ./.generated/network.json -g $AZURE_RESOURCE_GROUP
azure group deployment create -f ./.generated/storage.json -g $AZURE_RESOURCE_GROUP
azure group deployment create -f ./.generated/availability-sets.json -g $AZURE_RESOURCE_GROUP
azure group deployment create -f ./.generated/bastion.json -g $AZURE_RESOURCE_GROUP
azure group deployment create -f ./.generated/masters.json -g $AZURE_RESOURCE_GROUP
azure group deployment create -f ./.generated/minions.json -g $AZURE_RESOURCE_GROUP
else
echo "Azure cli not found"
fi
ansible-playbook generate-templates.yml
az deployment group create --template-file ./.generated/network.json -g $AZURE_RESOURCE_GROUP
az deployment group create --template-file ./.generated/storage.json -g $AZURE_RESOURCE_GROUP
az deployment group create --template-file ./.generated/availability-sets.json -g $AZURE_RESOURCE_GROUP
az deployment group create --template-file ./.generated/bastion.json -g $AZURE_RESOURCE_GROUP
az deployment group create --template-file ./.generated/masters.json -g $AZURE_RESOURCE_GROUP
az deployment group create --template-file ./.generated/minions.json -g $AZURE_RESOURCE_GROUP

View File

@@ -1,19 +0,0 @@
#!/usr/bin/env bash
set -e
AZURE_RESOURCE_GROUP="$1"
if [ "$AZURE_RESOURCE_GROUP" == "" ]; then
echo "AZURE_RESOURCE_GROUP is missing"
exit 1
fi
ansible-playbook generate-templates.yml
az group deployment create --template-file ./.generated/network.json -g $AZURE_RESOURCE_GROUP
az group deployment create --template-file ./.generated/storage.json -g $AZURE_RESOURCE_GROUP
az group deployment create --template-file ./.generated/availability-sets.json -g $AZURE_RESOURCE_GROUP
az group deployment create --template-file ./.generated/bastion.json -g $AZURE_RESOURCE_GROUP
az group deployment create --template-file ./.generated/masters.json -g $AZURE_RESOURCE_GROUP
az group deployment create --template-file ./.generated/minions.json -g $AZURE_RESOURCE_GROUP

View File

@@ -9,10 +9,6 @@ if [ "$AZURE_RESOURCE_GROUP" == "" ]; then
exit 1
fi
if az &>/dev/null; then
echo "azure cli 2.0 found, using it instead of 1.0"
./clear-rg_2.sh "$AZURE_RESOURCE_GROUP"
else
ansible-playbook generate-templates.yml
azure group deployment create -g "$AZURE_RESOURCE_GROUP" -f ./.generated/clear-rg.json -m Complete
fi
ansible-playbook generate-templates.yml
az group deployment create -g "$AZURE_RESOURCE_GROUP" --template-file ./.generated/clear-rg.json --mode Complete

View File

@@ -1,14 +0,0 @@
#!/usr/bin/env bash
set -e
AZURE_RESOURCE_GROUP="$1"
if [ "$AZURE_RESOURCE_GROUP" == "" ]; then
echo "AZURE_RESOURCE_GROUP is missing"
exit 1
fi
ansible-playbook generate-templates.yml
az group deployment create -g "$AZURE_RESOURCE_GROUP" --template-file ./.generated/clear-rg.json --mode Complete

View File

@@ -7,7 +7,7 @@ cluster_name: example
# node that can be used to access the masters and minions
use_bastion: false
# Set this to a prefered name that will be used as the first part of the dns name for your bastotion host. For example: k8s-bastion.<azureregion>.cloudapp.azure.com.
# Set this to a preferred name that will be used as the first part of the dns name for your bastion host. For example: k8s-bastion.<azureregion>.cloudapp.azure.com.
# This is convenient when exceptions have to be configured on a firewall to allow ssh to the given bastion host.
# bastion_domain_prefix: k8s-bastion

View File

@@ -1,6 +1,6 @@
---
- name: Query Azure VMs
- name: Query Azure VMs # noqa 301
command: azure vm list-ip-address --json {{ azure_resource_group }}
register: vm_list_cmd

View File

@@ -1,14 +1,14 @@
---
- name: Query Azure VMs IPs
- name: Query Azure VMs IPs # noqa 301
command: az vm list-ip-addresses -o json --resource-group {{ azure_resource_group }}
register: vm_ip_list_cmd
- name: Query Azure VMs Roles
- name: Query Azure VMs Roles # noqa 301
command: az vm list -o json --resource-group {{ azure_resource_group }}
register: vm_list_cmd
- name: Query Azure Load Balancer Public IP
- name: Query Azure Load Balancer Public IP # noqa 301
command: az network public-ip show -o json -g {{ azure_resource_group }} -n kubernetes-api-pubip
register: lb_pubip_cmd

View File

@@ -69,7 +69,7 @@
# Running systemd-machine-id-setup doesn't create a unique id for each node container on Debian,
# handle manually
- name: Re-create unique machine-id (as we may just get what comes in the docker image), needed by some CNIs for mac address seeding (notably weave)
- name: Re-create unique machine-id (as we may just get what comes in the docker image), needed by some CNIs for mac address seeding (notably weave) # noqa 301
raw: |
echo {{ item | hash('sha1') }} > /etc/machine-id.new
mv -b /etc/machine-id.new /etc/machine-id

View File

@@ -20,6 +20,8 @@
# Add range of hosts: inventory.py 10.10.1.3-10.10.1.5
# Add hosts with different ip and access ip:
# inventory.py 10.0.0.1,192.168.10.1 10.0.0.2,192.168.10.2 10.0.0.3,192.168.1.3
# Add hosts with a specific hostname, ip, and optional access ip:
# inventory.py first,10.0.0.1,192.168.10.1 second,10.0.0.2 last,10.0.0.3
# Delete a host: inventory.py -10.10.1.3
# Delete a host by id: inventory.py -node1
#
@@ -39,12 +41,14 @@ from ruamel.yaml import YAML
import os
import re
import subprocess
import sys
ROLES = ['all', 'kube-master', 'kube-node', 'etcd', 'k8s-cluster',
'calico-rr']
PROTECTED_NAMES = ROLES
AVAILABLE_COMMANDS = ['help', 'print_cfg', 'print_ips', 'load']
AVAILABLE_COMMANDS = ['help', 'print_cfg', 'print_ips', 'print_hostnames',
'load']
_boolean_states = {'1': True, 'yes': True, 'true': True, 'on': True,
'0': False, 'no': False, 'false': False, 'off': False}
yaml = YAML()
@@ -66,6 +70,7 @@ MASSIVE_SCALE_THRESHOLD = int(os.environ.get("SCALE_THRESHOLD", 200))
DEBUG = get_var_as_bool("DEBUG", True)
HOST_PREFIX = os.environ.get("HOST_PREFIX", "node")
USE_REAL_HOSTNAME = get_var_as_bool("USE_REAL_HOSTNAME", False)
# Configurable as shell vars end
@@ -78,8 +83,8 @@ class KubesprayInventory(object):
if self.config_file:
try:
self.hosts_file = open(config_file, 'r')
self.yaml_config = yaml.load(self.hosts_file)
except FileNotFoundError:
self.yaml_config = yaml.load_all(self.hosts_file)
except OSError:
pass
if changed_hosts and changed_hosts[0] in AVAILABLE_COMMANDS:
@@ -164,6 +169,7 @@ class KubesprayInventory(object):
# FIXME(mattymo): Fix condition where delete then add reuses highest id
next_host_id = highest_host_id + 1
next_host = ""
all_hosts = existing_hosts.copy()
for host in changed_hosts:
@@ -188,14 +194,33 @@ class KubesprayInventory(object):
self.debug("Skipping existing host {0}.".format(ip))
continue
next_host = "{0}{1}".format(HOST_PREFIX, next_host_id)
next_host_id += 1
if USE_REAL_HOSTNAME:
cmd = ("ssh -oStrictHostKeyChecking=no "
+ access_ip + " 'hostname -s'")
next_host = subprocess.check_output(cmd, shell=True)
next_host = next_host.strip().decode('ascii')
else:
next_host = "{0}{1}".format(HOST_PREFIX, next_host_id)
next_host_id += 1
all_hosts[next_host] = {'ansible_host': access_ip,
'ip': ip,
'access_ip': access_ip}
elif host[0].isalpha():
raise Exception("Adding hosts by hostname is not supported.")
if ',' in host:
try:
hostname, ip, access_ip = host.split(',')
except Exception:
hostname, ip = host.split(',')
access_ip = ip
if self.exists_hostname(all_hosts, host):
self.debug("Skipping existing host {0}.".format(host))
continue
elif self.exists_ip(all_hosts, ip):
self.debug("Skipping existing host {0}.".format(ip))
continue
all_hosts[hostname] = {'ansible_host': access_ip,
'ip': ip,
'access_ip': access_ip}
return all_hosts
def range2ips(self, hosts):
@@ -206,14 +231,14 @@ class KubesprayInventory(object):
# Python 3.x
start = int(ip_address(start_address))
end = int(ip_address(end_address))
except:
except Exception:
# Python 2.7
start = int(ip_address(unicode(start_address)))
end = int(ip_address(unicode(end_address)))
start = int(ip_address(str(start_address)))
end = int(ip_address(str(end_address)))
return [ip_address(ip).exploded for ip in range(start, end + 1)]
for host in hosts:
if '-' in host and not host.startswith('-'):
if '-' in host and not (host.startswith('-') or host[0].isalpha()):
start, end = host.strip().split('-')
try:
reworked_hosts.extend(ips(start, end))
@@ -348,6 +373,8 @@ class KubesprayInventory(object):
self.print_config()
elif command == 'print_ips':
self.print_ips()
elif command == 'print_hostnames':
self.print_hostnames()
elif command == 'load':
self.load_file(args)
else:
@@ -361,11 +388,13 @@ Available commands:
help - Display this message
print_cfg - Write inventory file to stdout
print_ips - Write a space-delimited list of IPs from "all" group
print_hostnames - Write a space-delimited list of Hostnames from "all" group
Advanced usage:
Add another host after initial creation: inventory.py 10.10.1.5
Add range of hosts: inventory.py 10.10.1.3-10.10.1.5
Add hosts with different ip and access ip: inventory.py 10.0.0.1,192.168.10.1 10.0.0.2,192.168.10.2 10.0.0.3,192.168.10.3
Add hosts with a specific hostname, ip, and optional access ip: first,10.0.0.1,192.168.10.1 second,10.0.0.2 last,10.0.0.3
Delete a host: inventory.py -10.10.1.3
Delete a host by id: inventory.py -node1
@@ -381,6 +410,9 @@ MASSIVE_SCALE_THRESHOLD Separate K8s master and ETCD if # of nodes >= 200
def print_config(self):
yaml.dump(self.yaml_config, sys.stdout)
def print_hostnames(self):
print(' '.join(self.yaml_config['all']['hosts'].keys()))
def print_ips(self):
ips = []
for host, opts in self.yaml_config['all']['hosts'].items():

View File

@@ -12,6 +12,7 @@
# License for the specific language governing permissions and limitations
# under the License.
import inventory
import mock
import unittest
@@ -22,7 +23,7 @@ path = "./contrib/inventory_builder/"
if path not in sys.path:
sys.path.append(path)
import inventory
import inventory # noqa
class TestInventory(unittest.TestCase):
@@ -43,8 +44,8 @@ class TestInventory(unittest.TestCase):
def test_get_ip_from_opts_invalid(self):
optstring = "notanaddr=value something random!chars:D"
self.assertRaisesRegexp(ValueError, "IP parameter not found",
self.inv.get_ip_from_opts, optstring)
self.assertRaisesRegex(ValueError, "IP parameter not found",
self.inv.get_ip_from_opts, optstring)
def test_ensure_required_groups(self):
groups = ['group1', 'group2']
@@ -63,8 +64,8 @@ class TestInventory(unittest.TestCase):
def test_get_host_id_invalid(self):
bad_hostnames = ['node', 'no99de', '01node', 'node.111111']
for hostname in bad_hostnames:
self.assertRaisesRegexp(ValueError, "Host name must end in an",
self.inv.get_host_id, hostname)
self.assertRaisesRegex(ValueError, "Host name must end in an",
self.inv.get_host_id, hostname)
def test_build_hostnames_add_one(self):
changed_hosts = ['10.90.0.2']
@@ -192,8 +193,8 @@ class TestInventory(unittest.TestCase):
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'})])
self.assertRaisesRegexp(ValueError, "Unable to find host",
self.inv.delete_host_by_ip, existing_hosts, ip)
self.assertRaisesRegex(ValueError, "Unable to find host",
self.inv.delete_host_by_ip, existing_hosts, ip)
def test_purge_invalid_hosts(self):
proper_hostnames = ['node1', 'node2']
@@ -309,8 +310,8 @@ class TestInventory(unittest.TestCase):
def test_range2ips_incorrect_range(self):
host_range = ['10.90.0.4-a.9b.c.e']
self.assertRaisesRegexp(Exception, "Range of ip_addresses isn't valid",
self.inv.range2ips, host_range)
self.assertRaisesRegex(Exception, "Range of ip_addresses isn't valid",
self.inv.range2ips, host_range)
def test_build_hostnames_different_ips_add_one(self):
changed_hosts = ['10.90.0.2,192.168.0.2']

View File

@@ -1,7 +1,7 @@
[tox]
minversion = 1.6
skipsdist = True
envlist = pep8, py27
envlist = pep8, py33
[testenv]
whitelist_externals = py.test

View File

@@ -1,12 +0,0 @@
# Deploy MetalLB into Kubespray/Kubernetes
```
MetalLB hooks into your Kubernetes cluster, and provides a network load-balancer implementation. In short, it allows you to create Kubernetes services of type “LoadBalancer” in clusters that don't run on a cloud provider, and thus cannot simply hook into paid products to provide load-balancers.
```
This playbook aims to automate [this](https://metallb.universe.tf/concepts/layer2/). It deploys MetalLB into kubernetes and sets up a layer 2 loadbalancer.
## Install
```
Defaults can be found in contrib/metallb/roles/provision/defaults/main.yml. You can override the defaults by copying the contents of this file to somewhere in inventory/mycluster/group_vars such as inventory/mycluster/groups_vars/k8s-cluster/addons.yml and making any adjustments as required.
ansible-playbook --ask-become -i inventory/sample/hosts.ini contrib/metallb/metallb.yml
```

View File

@@ -1 +0,0 @@
../../library

View File

@@ -1,6 +0,0 @@
---
- hosts: kube-master[0]
tags:
- "provision"
roles:
- { role: provision }

View File

@@ -1,14 +0,0 @@
---
metallb:
ip_range: "10.5.0.50-10.5.0.99"
protocol: "layer2"
# additional_address_pools:
# kube_service_pool:
# ip_range: "10.5.1.50-10.5.1.99"
# protocol: "layer2"
# auto_assign: false
limits:
cpu: "100m"
memory: "100Mi"
port: "7472"
version: v0.7.3

View File

@@ -1,18 +0,0 @@
---
- name: "Kubernetes Apps | Lay Down MetalLB"
become: true
template: { src: "{{ item }}.j2", dest: "{{ kube_config_dir }}/{{ item }}" }
with_items: ["metallb.yml", "metallb-config.yml"]
register: "rendering"
when:
- "inventory_hostname == groups['kube-master'][0]"
- name: "Kubernetes Apps | Install and configure MetalLB"
kube:
name: "MetalLB"
kubectl: "{{ bin_dir }}/kubectl"
filename: "{{ kube_config_dir }}/{{ item.item }}"
state: "{{ item.changed | ternary('latest','present') }}"
become: true
with_items: "{{ rendering.results }}"
when:
- "inventory_hostname == groups['kube-master'][0]"

View File

@@ -1,21 +0,0 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
namespace: metallb-system
name: config
data:
config: |
address-pools:
- name: loadbalanced
protocol: {{ metallb.protocol }}
addresses:
- {{ metallb.ip_range }}
{% if metallb.additional_address_pools is defined %}{% for pool in metallb.additional_address_pools %}
- name: {{ pool }}
protocol: {{ metallb.additional_address_pools[pool].protocol }}
addresses:
- {{ metallb.additional_address_pools[pool].ip_range }}
auto-assign: {{ metallb.additional_address_pools[pool].auto_assign }}
{% endfor %}
{% endif %}

View File

@@ -1,221 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
name: metallb-system
labels:
app: metallb
---
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: metallb-system
name: controller
labels:
app: metallb
---
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: metallb-system
name: speaker
labels:
app: metallb
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: metallb-system:controller
labels:
app: metallb
rules:
- apiGroups: [""]
resources: ["services"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: [""]
resources: ["services/status"]
verbs: ["update"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: metallb-system:speaker
labels:
app: metallb
rules:
- apiGroups: [""]
resources: ["services", "endpoints", "nodes"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: metallb-system
name: config-watcher
labels:
app: metallb
rules:
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create"]
---
## Role bindings
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: metallb-system:controller
labels:
app: metallb
subjects:
- kind: ServiceAccount
name: controller
namespace: metallb-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: metallb-system:controller
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: metallb-system:speaker
labels:
app: metallb
subjects:
- kind: ServiceAccount
name: speaker
namespace: metallb-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: metallb-system:speaker
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
namespace: metallb-system
name: config-watcher
labels:
app: metallb
subjects:
- kind: ServiceAccount
name: controller
- kind: ServiceAccount
name: speaker
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: config-watcher
---
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
namespace: metallb-system
name: speaker
labels:
app: metallb
component: speaker
spec:
selector:
matchLabels:
app: metallb
component: speaker
template:
metadata:
labels:
app: metallb
component: speaker
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "{{ metallb.port }}"
spec:
serviceAccountName: speaker
terminationGracePeriodSeconds: 0
hostNetwork: true
containers:
- name: speaker
image: metallb/speaker:{{ metallb.version }}
imagePullPolicy: IfNotPresent
args:
- --port={{ metallb.port }}
- --config=config
env:
- name: METALLB_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
ports:
- name: monitoring
containerPort: {{ metallb.port }}
resources:
limits:
cpu: {{ metallb.limits.cpu }}
memory: {{ metallb.limits.memory }}
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
capabilities:
drop:
- all
add:
- net_raw
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
namespace: metallb-system
name: controller
labels:
app: metallb
component: controller
spec:
revisionHistoryLimit: 3
selector:
matchLabels:
app: metallb
component: controller
template:
metadata:
labels:
app: metallb
component: controller
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "{{ metallb.port }}"
spec:
serviceAccountName: controller
terminationGracePeriodSeconds: 0
securityContext:
runAsNonRoot: true
runAsUser: 65534 # nobody
containers:
- name: controller
image: metallb/controller:{{ metallb.version }}
imagePullPolicy: IfNotPresent
args:
- --port={{ metallb.port }}
- --config=config
ports:
- name: monitoring
containerPort: {{ metallb.port }}
resources:
limits:
cpu: {{ metallb.limits.cpu }}
memory: {{ metallb.limits.memory }}
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- all
readOnlyRootFilesystem: true
---

View File

@@ -1,5 +1,5 @@
---
apiVersion: rbac.authorization.k8s.io/v1beta1
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard

View File

@@ -21,7 +21,7 @@ You can specify a `default_release` for apt on Debian/Ubuntu by overriding this
glusterfs_ppa_use: yes
glusterfs_ppa_version: "3.5"
For Ubuntu, specify whether to use the official Gluster PPA, and which version of the PPA to use. See Gluster's [Getting Started Guide](http://www.gluster.org/community/documentation/index.php/Getting_started_install) for more info.
For Ubuntu, specify whether to use the official Gluster PPA, and which version of the PPA to use. See Gluster's [Getting Started Guide](https://docs.gluster.org/en/latest/Quick-Start-Guide/Quickstart/) for more info.
## Dependencies

View File

@@ -7,7 +7,7 @@
register: glusterfs_ppa_added
when: glusterfs_ppa_use
- name: Ensure GlusterFS client will reinstall if the PPA was just added.
- name: Ensure GlusterFS client will reinstall if the PPA was just added. # noqa 503
apt:
name: "{{ item }}"
state: absent
@@ -18,7 +18,7 @@
- name: Ensure GlusterFS client is installed.
apt:
name: "{{ item }}"
state: installed
state: present
default_release: "{{ glusterfs_default_release }}"
with_items:
- glusterfs-client

View File

@@ -3,7 +3,7 @@
- name: Include OS-specific variables.
include_vars: "{{ ansible_os_family }}.yml"
# Instal xfs package
# Install xfs package
- name: install xfs Debian
apt: name=xfsprogs state=present
when: ansible_os_family == "Debian"
@@ -36,7 +36,7 @@
- "{{ gluster_brick_dir }}"
- "{{ gluster_mount_dir }}"
- name: Configure Gluster volume.
- name: Configure Gluster volume with replicas
gluster_volume:
state: present
name: "{{ gluster_brick_name }}"
@@ -46,6 +46,18 @@
host: "{{ inventory_hostname }}"
force: yes
run_once: true
when: groups['gfs-cluster']|length > 1
- name: Configure Gluster volume without replicas
gluster_volume:
state: present
name: "{{ gluster_brick_name }}"
brick: "{{ gluster_brick_dir }}"
cluster: "{% for item in groups['gfs-cluster'] -%}{{ hostvars[item]['ip']|default(hostvars[item].ansible_default_ipv4['address']) }}{% if not loop.last %},{% endif %}{%- endfor %}"
host: "{{ inventory_hostname }}"
force: yes
run_once: true
when: groups['gfs-cluster']|length <= 1
- name: Mount glusterfs to retrieve disk size
mount:

View File

@@ -7,7 +7,7 @@
register: glusterfs_ppa_added
when: glusterfs_ppa_use
- name: Ensure GlusterFS will reinstall if the PPA was just added.
- name: Ensure GlusterFS will reinstall if the PPA was just added. # noqa 503
apt:
name: "{{ item }}"
state: absent
@@ -19,7 +19,7 @@
- name: Ensure GlusterFS is installed.
apt:
name: "{{ item }}"
state: installed
state: present
default_release: "{{ glusterfs_default_release }}"
with_items:
- glusterfs-server

View File

@@ -8,7 +8,7 @@
{% for host in groups['gfs-cluster'] %}
{
"addresses": [
{
{
"ip": "{{hostvars[host]['ip']|default(hostvars[host].ansible_default_ipv4['address'])}}"
}
],

View File

@@ -1,7 +1,7 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: glusterfs
spec:
capacity:
storage: "{{ hostvars[groups['gfs-cluster'][0]].gluster_disk_size_gb }}Gi"

View File

@@ -6,7 +6,7 @@
- name: "Delete bootstrap Heketi."
command: "{{ bin_dir }}/kubectl delete all,service,jobs,deployment,secret --selector=\"deploy-heketi\""
when: "heketi_resources.stdout|from_json|json_query('items[*]')|length > 0"
- name: "Ensure there is nothing left over."
- name: "Ensure there is nothing left over." # noqa 301
command: "{{ bin_dir }}/kubectl get all,service,jobs,deployment,secret --selector=\"deploy-heketi\" -o=json"
register: "heketi_result"
until: "heketi_result.stdout|from_json|json_query('items[*]')|length == 0"

View File

@@ -13,7 +13,7 @@
- name: "Copy topology configuration into container."
changed_when: false
command: "{{ bin_dir }}/kubectl cp {{ kube_config_dir }}/topology.json {{ initial_heketi_pod_name }}:/tmp/topology.json"
- name: "Load heketi topology."
- name: "Load heketi topology." # noqa 503
when: "render.changed"
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology load --json=/tmp/topology.json"
register: "load_heketi"

View File

@@ -18,7 +18,7 @@
- name: "Provision database volume."
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} setup-openshift-heketi-storage"
when: "heketi_database_volume_exists is undefined"
- name: "Copy configuration from pod."
- name: "Copy configuration from pod." # noqa 301
become: true
command: "{{ bin_dir }}/kubectl cp {{ initial_heketi_pod_name }}:/heketi-storage.json {{ kube_config_dir }}/heketi-storage-bootstrap.json"
- name: "Get heketi volume ids."

View File

@@ -10,10 +10,10 @@
template:
src: "topology.json.j2"
dest: "{{ kube_config_dir }}/topology.json"
- name: "Copy topology configuration into container."
- name: "Copy topology configuration into container." # noqa 503
when: "rendering.changed"
command: "{{ bin_dir }}/kubectl cp {{ kube_config_dir }}/topology.json {{ heketi_pod_name }}:/tmp/topology.json"
- name: "Load heketi topology."
- name: "Load heketi topology." # noqa 503
when: "rendering.changed"
command: "{{ bin_dir }}/kubectl exec {{ heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology load --json=/tmp/topology.json"
- name: "Get heketi topology."

View File

@@ -1,6 +1,6 @@
{
"kind": "DaemonSet",
"apiVersion": "extensions/v1beta1",
"apiVersion": "apps/v1",
"metadata": {
"name": "glusterfs",
"labels": {
@@ -12,6 +12,11 @@
}
},
"spec": {
"selector": {
"matchLabels": {
"glusterfs-node": "daemonset"
}
},
"template": {
"metadata": {
"name": "glusterfs",

View File

@@ -30,7 +30,7 @@
},
{
"kind": "Deployment",
"apiVersion": "extensions/v1beta1",
"apiVersion": "apps/v1",
"metadata": {
"name": "deploy-heketi",
"labels": {
@@ -42,6 +42,11 @@
}
},
"spec": {
"selector": {
"matchLabels": {
"name": "deploy-heketi"
}
},
"replicas": 1,
"template": {
"metadata": {

View File

@@ -44,7 +44,7 @@
},
{
"kind": "Deployment",
"apiVersion": "extensions/v1beta1",
"apiVersion": "apps/v1",
"metadata": {
"name": "heketi",
"labels": {
@@ -55,6 +55,11 @@
}
},
"spec": {
"selector": {
"matchLabels": {
"name": "heketi"
}
},
"replicas": 1,
"template": {
"metadata": {

View File

@@ -16,7 +16,7 @@
{
"addresses": [
{
"ip": "{{ hostvars[node]['ansible_facts']['default_ipv4']['address'] }}"
"ip": "{{ hostvars[node].ip }}"
}
],
"ports": [

View File

@@ -12,7 +12,7 @@
"{{ node }}"
],
"storage": [
"{{ hostvars[node]['ansible_facts']['default_ipv4']['address'] }}"
"{{ hostvars[node].ip }}"
]
},
"zone": 1

View File

@@ -22,7 +22,7 @@
ignore_errors: true
changed_when: false
- name: "Remove volume groups."
- name: "Remove volume groups." # noqa 301
environment:
PATH: "{{ ansible_env.PATH }}:/sbin" # Make sure we can workaround RH / CentOS conservative path management
become: true
@@ -30,7 +30,7 @@
with_items: "{{ volume_groups.stdout_lines }}"
loop_control: { loop_var: "volume_group" }
- name: "Remove physical volume from cluster disks."
- name: "Remove physical volume from cluster disks." # noqa 301
environment:
PATH: "{{ ansible_env.PATH }}:/sbin" # Make sure we can workaround RH / CentOS conservative path management
become: true

View File

@@ -1,43 +1,43 @@
---
- name: "Remove storage class."
- name: "Remove storage class." # noqa 301
command: "{{ bin_dir }}/kubectl delete storageclass gluster"
ignore_errors: true
- name: "Tear down heketi."
- name: "Tear down heketi." # noqa 301
command: "{{ bin_dir }}/kubectl delete all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-pod\""
ignore_errors: true
- name: "Tear down heketi."
- name: "Tear down heketi." # noqa 301
command: "{{ bin_dir }}/kubectl delete all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-deployment\""
ignore_errors: true
- name: "Tear down bootstrap."
include_tasks: "../provision/tasks/bootstrap/tear-down.yml"
- name: "Ensure there is nothing left over."
include_tasks: "../../provision/tasks/bootstrap/tear-down.yml"
- name: "Ensure there is nothing left over." # noqa 301
command: "{{ bin_dir }}/kubectl get all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-pod\" -o=json"
register: "heketi_result"
until: "heketi_result.stdout|from_json|json_query('items[*]')|length == 0"
retries: 60
delay: 5
- name: "Ensure there is nothing left over."
- name: "Ensure there is nothing left over." # noqa 301
command: "{{ bin_dir }}/kubectl get all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-deployment\" -o=json"
register: "heketi_result"
until: "heketi_result.stdout|from_json|json_query('items[*]')|length == 0"
retries: 60
delay: 5
- name: "Tear down glusterfs."
- name: "Tear down glusterfs." # noqa 301
command: "{{ bin_dir }}/kubectl delete daemonset.extensions/glusterfs"
ignore_errors: true
- name: "Remove heketi storage service."
- name: "Remove heketi storage service." # noqa 301
command: "{{ bin_dir }}/kubectl delete service heketi-storage-endpoints"
ignore_errors: true
- name: "Remove heketi gluster role binding"
- name: "Remove heketi gluster role binding" # noqa 301
command: "{{ bin_dir }}/kubectl delete clusterrolebinding heketi-gluster-admin"
ignore_errors: true
- name: "Remove heketi config secret"
- name: "Remove heketi config secret" # noqa 301
command: "{{ bin_dir }}/kubectl delete secret heketi-config-secret"
ignore_errors: true
- name: "Remove heketi db backup"
- name: "Remove heketi db backup" # noqa 301
command: "{{ bin_dir }}/kubectl delete secret heketi-db-backup"
ignore_errors: true
- name: "Remove heketi service account"
- name: "Remove heketi service account" # noqa 301
command: "{{ bin_dir }}/kubectl delete serviceaccount heketi-service-account"
ignore_errors: true
- name: "Get secrets"

View File

@@ -10,7 +10,7 @@ This project will create:
* AWS ELB in the Public Subnet for accessing the Kubernetes API from the internet
**Requirements**
- Terraform 0.8.7 or newer
- Terraform 0.12.0 or newer
**How to Use:**
@@ -22,7 +22,7 @@ export TF_VAR_AWS_SECRET_ACCESS_KEY ="xxx"
export TF_VAR_AWS_SSH_KEY_NAME="yyy"
export TF_VAR_AWS_DEFAULT_REGION="zzz"
```
- Update `contrib/terraform/aws/terraform.tfvars` with your data. By default, the Terraform scripts use CoreOS as base image. If you want to change this behaviour, see note "Using other distrib than CoreOs" below.
- Update `contrib/terraform/aws/terraform.tfvars` with your data. By default, the Terraform scripts use Ubuntu 18.04 LTS (Bionic) as base image. If you want to change this behaviour, see note "Using other distrib than Ubuntu" below.
- Create an AWS EC2 SSH Key
- Run with `terraform apply --var-file="credentials.tfvars"` or `terraform apply` depending if you exported your AWS credentials
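For reference, a minimal `credentials.tfvars` could look like the sketch below. The variable names match those declared in the Terraform configuration shown later in this diff (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SSH_KEY_NAME`, `AWS_DEFAULT_REGION`); all values are placeholders, not real credentials.
```
# Hypothetical credentials.tfvars sketch - variable names match variables.tf,
# values are placeholders only.
AWS_ACCESS_KEY_ID     = "<your-access-key-id>"
AWS_SECRET_ACCESS_KEY = "<your-secret-access-key>"
AWS_SSH_KEY_NAME      = "<your-ec2-keypair-name>"
AWS_DEFAULT_REGION    = "eu-central-1"
```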
@@ -41,12 +41,12 @@ ssh -F ./ssh-bastion.conf user@$ip
- Once the infrastructure is created, you can run the kubespray playbooks and supply inventory/hosts with the `-i` flag.
Example (this one assumes you are using CoreOS)
Example (this one assumes you are using Ubuntu)
```commandline
ansible-playbook -i ./inventory/hosts ./cluster.yml -e ansible_user=core -b --become-user=root --flush-cache
ansible-playbook -i ./inventory/hosts ./cluster.yml -e ansible_user=ubuntu -b --become-user=root --flush-cache
```
***Using other distrib than CoreOs***
If you want to use another distribution than CoreOS, you can modify the search filters of the 'data "aws_ami" "distro"' in variables.tf.
***Using other distrib than Ubuntu***
If you want to use another distribution than Ubuntu 18.04 (Bionic) LTS, you can modify the search filters of the 'data "aws_ami" "distro"' in variables.tf.
For example, to use:
- Debian Jessie, replace 'data "aws_ami" "distro"' in variables.tf with
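(The replacement block itself falls outside this hunk; the following is only a sketch of what such a filter could look like, mirroring the structure of the existing `data "aws_ami" "distro"` block. The AMI name pattern and owner ID are assumptions to verify against Debian's published AMIs, not values taken from this diff.)
```
data "aws_ami" "distro" {
  # Assumed name pattern for Debian Jessie AMIs - verify before use.
  filter {
    name   = "name"
    values = ["debian-jessie-amd64-hvm-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  # Assumed publisher account (credativ, which published Debian Jessie AMIs);
  # confirm the owner ID for your region.
  owners = ["379101102735"]
}
```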

View File

@@ -1,11 +1,11 @@
terraform {
required_version = ">= 0.8.7"
required_version = ">= 0.12.0"
}
provider "aws" {
access_key = "${var.AWS_ACCESS_KEY_ID}"
secret_key = "${var.AWS_SECRET_ACCESS_KEY}"
region = "${var.AWS_DEFAULT_REGION}"
access_key = var.AWS_ACCESS_KEY_ID
secret_key = var.AWS_SECRET_ACCESS_KEY
region = var.AWS_DEFAULT_REGION
}
data "aws_availability_zones" "available" {}
@@ -16,32 +16,32 @@ data "aws_availability_zones" "available" {}
*/
module "aws-vpc" {
source = "modules/vpc"
source = "./modules/vpc"
aws_cluster_name = "${var.aws_cluster_name}"
aws_vpc_cidr_block = "${var.aws_vpc_cidr_block}"
aws_avail_zones = "${slice(data.aws_availability_zones.available.names,0,2)}"
aws_cidr_subnets_private = "${var.aws_cidr_subnets_private}"
aws_cidr_subnets_public = "${var.aws_cidr_subnets_public}"
default_tags = "${var.default_tags}"
aws_cluster_name = var.aws_cluster_name
aws_vpc_cidr_block = var.aws_vpc_cidr_block
aws_avail_zones = slice(data.aws_availability_zones.available.names, 0, 2)
aws_cidr_subnets_private = var.aws_cidr_subnets_private
aws_cidr_subnets_public = var.aws_cidr_subnets_public
default_tags = var.default_tags
}
module "aws-elb" {
source = "modules/elb"
source = "./modules/elb"
aws_cluster_name = "${var.aws_cluster_name}"
aws_vpc_id = "${module.aws-vpc.aws_vpc_id}"
aws_avail_zones = "${slice(data.aws_availability_zones.available.names,0,2)}"
aws_subnet_ids_public = "${module.aws-vpc.aws_subnet_ids_public}"
aws_elb_api_port = "${var.aws_elb_api_port}"
k8s_secure_api_port = "${var.k8s_secure_api_port}"
default_tags = "${var.default_tags}"
aws_cluster_name = var.aws_cluster_name
aws_vpc_id = module.aws-vpc.aws_vpc_id
aws_avail_zones = slice(data.aws_availability_zones.available.names, 0, 2)
aws_subnet_ids_public = module.aws-vpc.aws_subnet_ids_public
aws_elb_api_port = var.aws_elb_api_port
k8s_secure_api_port = var.k8s_secure_api_port
default_tags = var.default_tags
}
module "aws-iam" {
source = "modules/iam"
source = "./modules/iam"
aws_cluster_name = "${var.aws_cluster_name}"
aws_cluster_name = var.aws_cluster_name
}
/*
@@ -50,22 +50,22 @@ module "aws-iam" {
*/
resource "aws_instance" "bastion-server" {
ami = "${data.aws_ami.distro.id}"
instance_type = "${var.aws_bastion_size}"
count = "${length(var.aws_cidr_subnets_public)}"
ami = data.aws_ami.distro.id
instance_type = var.aws_bastion_size
count = length(var.aws_cidr_subnets_public)
associate_public_ip_address = true
availability_zone = "${element(slice(data.aws_availability_zones.available.names,0,2),count.index)}"
subnet_id = "${element(module.aws-vpc.aws_subnet_ids_public,count.index)}"
availability_zone = element(slice(data.aws_availability_zones.available.names, 0, 2), count.index)
subnet_id = element(module.aws-vpc.aws_subnet_ids_public, count.index)
vpc_security_group_ids = ["${module.aws-vpc.aws_security_group}"]
vpc_security_group_ids = module.aws-vpc.aws_security_group
key_name = "${var.AWS_SSH_KEY_NAME}"
key_name = var.AWS_SSH_KEY_NAME
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-bastion-${count.index}",
"Cluster", "${var.aws_cluster_name}",
"Role", "bastion-${var.aws_cluster_name}-${count.index}"
))}"
tags = merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-bastion-${count.index}",
"Cluster", "${var.aws_cluster_name}",
"Role", "bastion-${var.aws_cluster_name}-${count.index}"
))
}
/*
@@ -74,71 +74,71 @@ resource "aws_instance" "bastion-server" {
*/
resource "aws_instance" "k8s-master" {
ami = "${data.aws_ami.distro.id}"
instance_type = "${var.aws_kube_master_size}"
ami = data.aws_ami.distro.id
instance_type = var.aws_kube_master_size
count = "${var.aws_kube_master_num}"
count = var.aws_kube_master_num
availability_zone = "${element(slice(data.aws_availability_zones.available.names,0,2),count.index)}"
subnet_id = "${element(module.aws-vpc.aws_subnet_ids_private,count.index)}"
availability_zone = element(slice(data.aws_availability_zones.available.names, 0, 2), count.index)
subnet_id = element(module.aws-vpc.aws_subnet_ids_private, count.index)
vpc_security_group_ids = ["${module.aws-vpc.aws_security_group}"]
vpc_security_group_ids = module.aws-vpc.aws_security_group
iam_instance_profile = "${module.aws-iam.kube-master-profile}"
key_name = "${var.AWS_SSH_KEY_NAME}"
iam_instance_profile = module.aws-iam.kube-master-profile
key_name = var.AWS_SSH_KEY_NAME
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-master${count.index}",
"kubernetes.io/cluster/${var.aws_cluster_name}", "member",
"Role", "master"
))}"
tags = merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-master${count.index}",
"kubernetes.io/cluster/${var.aws_cluster_name}", "member",
"Role", "master"
))
}
resource "aws_elb_attachment" "attach_master_nodes" {
count = "${var.aws_kube_master_num}"
elb = "${module.aws-elb.aws_elb_api_id}"
instance = "${element(aws_instance.k8s-master.*.id,count.index)}"
count = var.aws_kube_master_num
elb = module.aws-elb.aws_elb_api_id
instance = element(aws_instance.k8s-master.*.id, count.index)
}
resource "aws_instance" "k8s-etcd" {
ami = "${data.aws_ami.distro.id}"
instance_type = "${var.aws_etcd_size}"
ami = data.aws_ami.distro.id
instance_type = var.aws_etcd_size
count = "${var.aws_etcd_num}"
count = var.aws_etcd_num
availability_zone = "${element(slice(data.aws_availability_zones.available.names,0,2),count.index)}"
subnet_id = "${element(module.aws-vpc.aws_subnet_ids_private,count.index)}"
availability_zone = element(slice(data.aws_availability_zones.available.names, 0, 2), count.index)
subnet_id = element(module.aws-vpc.aws_subnet_ids_private, count.index)
vpc_security_group_ids = ["${module.aws-vpc.aws_security_group}"]
vpc_security_group_ids = module.aws-vpc.aws_security_group
key_name = "${var.AWS_SSH_KEY_NAME}"
key_name = var.AWS_SSH_KEY_NAME
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-etcd${count.index}",
"kubernetes.io/cluster/${var.aws_cluster_name}", "member",
"Role", "etcd"
))}"
tags = merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-etcd${count.index}",
"kubernetes.io/cluster/${var.aws_cluster_name}", "member",
"Role", "etcd"
))
}
resource "aws_instance" "k8s-worker" {
ami = "${data.aws_ami.distro.id}"
instance_type = "${var.aws_kube_worker_size}"
ami = data.aws_ami.distro.id
instance_type = var.aws_kube_worker_size
count = "${var.aws_kube_worker_num}"
count = var.aws_kube_worker_num
availability_zone = "${element(slice(data.aws_availability_zones.available.names,0,2),count.index)}"
subnet_id = "${element(module.aws-vpc.aws_subnet_ids_private,count.index)}"
availability_zone = element(slice(data.aws_availability_zones.available.names, 0, 2), count.index)
subnet_id = element(module.aws-vpc.aws_subnet_ids_private, count.index)
vpc_security_group_ids = ["${module.aws-vpc.aws_security_group}"]
vpc_security_group_ids = module.aws-vpc.aws_security_group
iam_instance_profile = "${module.aws-iam.kube-worker-profile}"
key_name = "${var.AWS_SSH_KEY_NAME}"
iam_instance_profile = module.aws-iam.kube-worker-profile
key_name = var.AWS_SSH_KEY_NAME
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-worker${count.index}",
"kubernetes.io/cluster/${var.aws_cluster_name}", "member",
"Role", "worker"
))}"
tags = merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-worker${count.index}",
"kubernetes.io/cluster/${var.aws_cluster_name}", "member",
"Role", "worker"
))
}
/*
@@ -146,16 +146,16 @@ resource "aws_instance" "k8s-worker" {
*
*/
data "template_file" "inventory" {
template = "${file("${path.module}/templates/inventory.tpl")}"
template = file("${path.module}/templates/inventory.tpl")
vars {
public_ip_address_bastion = "${join("\n",formatlist("bastion ansible_host=%s" , aws_instance.bastion-server.*.public_ip))}"
connection_strings_master = "${join("\n",formatlist("%s ansible_host=%s",aws_instance.k8s-master.*.tags.Name, aws_instance.k8s-master.*.private_ip))}"
connection_strings_node = "${join("\n", formatlist("%s ansible_host=%s", aws_instance.k8s-worker.*.tags.Name, aws_instance.k8s-worker.*.private_ip))}"
connection_strings_etcd = "${join("\n",formatlist("%s ansible_host=%s", aws_instance.k8s-etcd.*.tags.Name, aws_instance.k8s-etcd.*.private_ip))}"
list_master = "${join("\n",aws_instance.k8s-master.*.tags.Name)}"
list_node = "${join("\n",aws_instance.k8s-worker.*.tags.Name)}"
list_etcd = "${join("\n",aws_instance.k8s-etcd.*.tags.Name)}"
vars = {
public_ip_address_bastion = join("\n", formatlist("bastion ansible_host=%s", aws_instance.bastion-server.*.public_ip))
connection_strings_master = join("\n", formatlist("%s ansible_host=%s", aws_instance.k8s-master.*.private_dns, aws_instance.k8s-master.*.private_ip))
connection_strings_node = join("\n", formatlist("%s ansible_host=%s", aws_instance.k8s-worker.*.private_dns, aws_instance.k8s-worker.*.private_ip))
connection_strings_etcd = join("\n", formatlist("%s ansible_host=%s", aws_instance.k8s-etcd.*.private_dns, aws_instance.k8s-etcd.*.private_ip))
list_master = join("\n", aws_instance.k8s-master.*.private_dns)
list_node = join("\n", aws_instance.k8s-worker.*.private_dns)
list_etcd = join("\n", aws_instance.k8s-etcd.*.private_dns)
elb_api_fqdn = "apiserver_loadbalancer_domain_name=\"${module.aws-elb.aws_elb_api_fqdn}\""
}
}
@@ -165,7 +165,7 @@ resource "null_resource" "inventories" {
command = "echo '${data.template_file.inventory.rendered}' > ${var.inventory_file}"
}
triggers {
template = "${data.template_file.inventory.rendered}"
triggers = {
template = data.template_file.inventory.rendered
}
}

View File

@@ -1,19 +1,19 @@
resource "aws_security_group" "aws-elb" {
name = "kubernetes-${var.aws_cluster_name}-securitygroup-elb"
vpc_id = "${var.aws_vpc_id}"
vpc_id = var.aws_vpc_id
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-securitygroup-elb"
))}"
tags = merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-securitygroup-elb"
))
}
resource "aws_security_group_rule" "aws-allow-api-access" {
type = "ingress"
from_port = "${var.aws_elb_api_port}"
to_port = "${var.k8s_secure_api_port}"
from_port = var.aws_elb_api_port
to_port = var.k8s_secure_api_port
protocol = "TCP"
cidr_blocks = ["0.0.0.0/0"]
security_group_id = "${aws_security_group.aws-elb.id}"
security_group_id = aws_security_group.aws-elb.id
}
resource "aws_security_group_rule" "aws-allow-api-egress" {
@@ -22,19 +22,19 @@ resource "aws_security_group_rule" "aws-allow-api-egress" {
to_port = 65535
protocol = "TCP"
cidr_blocks = ["0.0.0.0/0"]
security_group_id = "${aws_security_group.aws-elb.id}"
security_group_id = aws_security_group.aws-elb.id
}
# Create a new AWS ELB for K8S API
resource "aws_elb" "aws-elb-api" {
name = "kubernetes-elb-${var.aws_cluster_name}"
subnets = ["${var.aws_subnet_ids_public}"]
security_groups = ["${aws_security_group.aws-elb.id}"]
subnets = var.aws_subnet_ids_public
security_groups = [aws_security_group.aws-elb.id]
listener {
instance_port = "${var.k8s_secure_api_port}"
instance_port = var.k8s_secure_api_port
instance_protocol = "tcp"
lb_port = "${var.aws_elb_api_port}"
lb_port = var.aws_elb_api_port
lb_protocol = "tcp"
}
@@ -51,7 +51,7 @@ resource "aws_elb" "aws-elb-api" {
connection_draining = true
connection_draining_timeout = 400
tags = "${merge(var.default_tags, map(
tags = merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-elb-api"
))}"
))
}

View File

@@ -1,7 +1,7 @@
output "aws_elb_api_id" {
value = "${aws_elb.aws-elb-api.id}"
value = aws_elb.aws-elb-api.id
}
output "aws_elb_api_fqdn" {
value = "${aws_elb.aws-elb-api.dns_name}"
value = aws_elb.aws-elb-api.dns_name
}

View File

@@ -42,7 +42,7 @@ EOF
resource "aws_iam_role_policy" "kube-master" {
name = "kubernetes-${var.aws_cluster_name}-master"
role = "${aws_iam_role.kube-master.id}"
role = aws_iam_role.kube-master.id
policy = <<EOF
{
@@ -77,7 +77,7 @@ EOF
resource "aws_iam_role_policy" "kube-worker" {
name = "kubernetes-${var.aws_cluster_name}-node"
role = "${aws_iam_role.kube-worker.id}"
role = aws_iam_role.kube-worker.id
policy = <<EOF
{
@@ -132,10 +132,10 @@ EOF
resource "aws_iam_instance_profile" "kube-master" {
name = "kube_${var.aws_cluster_name}_master_profile"
role = "${aws_iam_role.kube-master.name}"
role = aws_iam_role.kube-master.name
}
resource "aws_iam_instance_profile" "kube-worker" {
name = "kube_${var.aws_cluster_name}_node_profile"
role = "${aws_iam_role.kube-worker.name}"
role = aws_iam_role.kube-worker.name
}

View File

@@ -1,7 +1,7 @@
output "kube-master-profile" {
value = "${aws_iam_instance_profile.kube-master.name }"
value = aws_iam_instance_profile.kube-master.name
}
output "kube-worker-profile" {
value = "${aws_iam_instance_profile.kube-worker.name }"
value = aws_iam_instance_profile.kube-worker.name
}

View File

@@ -1,55 +1,55 @@
resource "aws_vpc" "cluster-vpc" {
cidr_block = "${var.aws_vpc_cidr_block}"
cidr_block = var.aws_vpc_cidr_block
#DNS Related Entries
enable_dns_support = true
enable_dns_hostnames = true
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-vpc"
))}"
tags = merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-vpc"
))
}
resource "aws_eip" "cluster-nat-eip" {
count = "${length(var.aws_cidr_subnets_public)}"
count = length(var.aws_cidr_subnets_public)
vpc = true
}
resource "aws_internet_gateway" "cluster-vpc-internetgw" {
vpc_id = "${aws_vpc.cluster-vpc.id}"
vpc_id = aws_vpc.cluster-vpc.id
tags = "${merge(var.default_tags, map(
tags = merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-internetgw"
))}"
))
}
resource "aws_subnet" "cluster-vpc-subnets-public" {
vpc_id = "${aws_vpc.cluster-vpc.id}"
count = "${length(var.aws_avail_zones)}"
availability_zone = "${element(var.aws_avail_zones, count.index)}"
cidr_block = "${element(var.aws_cidr_subnets_public, count.index)}"
vpc_id = aws_vpc.cluster-vpc.id
count = length(var.aws_avail_zones)
availability_zone = element(var.aws_avail_zones, count.index)
cidr_block = element(var.aws_cidr_subnets_public, count.index)
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-public",
"kubernetes.io/cluster/${var.aws_cluster_name}", "member"
))}"
tags = merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-public",
"kubernetes.io/cluster/${var.aws_cluster_name}", "member"
))
}
resource "aws_nat_gateway" "cluster-nat-gateway" {
count = "${length(var.aws_cidr_subnets_public)}"
allocation_id = "${element(aws_eip.cluster-nat-eip.*.id, count.index)}"
subnet_id = "${element(aws_subnet.cluster-vpc-subnets-public.*.id, count.index)}"
count = length(var.aws_cidr_subnets_public)
allocation_id = element(aws_eip.cluster-nat-eip.*.id, count.index)
subnet_id = element(aws_subnet.cluster-vpc-subnets-public.*.id, count.index)
}
resource "aws_subnet" "cluster-vpc-subnets-private" {
vpc_id = "${aws_vpc.cluster-vpc.id}"
count = "${length(var.aws_avail_zones)}"
availability_zone = "${element(var.aws_avail_zones, count.index)}"
cidr_block = "${element(var.aws_cidr_subnets_private, count.index)}"
vpc_id = aws_vpc.cluster-vpc.id
count = length(var.aws_avail_zones)
availability_zone = element(var.aws_avail_zones, count.index)
cidr_block = element(var.aws_cidr_subnets_private, count.index)
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-private"
))}"
tags = merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-private"
))
}
#Routing in VPC
@@ -57,53 +57,53 @@ resource "aws_subnet" "cluster-vpc-subnets-private" {
#TODO: Do we need two routing tables for each subnet for redundancy or is one enough?
resource "aws_route_table" "kubernetes-public" {
vpc_id = "${aws_vpc.cluster-vpc.id}"
vpc_id = aws_vpc.cluster-vpc.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = "${aws_internet_gateway.cluster-vpc-internetgw.id}"
gateway_id = aws_internet_gateway.cluster-vpc-internetgw.id
}
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-routetable-public"
))}"
tags = merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-routetable-public"
))
}
resource "aws_route_table" "kubernetes-private" {
count = "${length(var.aws_cidr_subnets_private)}"
vpc_id = "${aws_vpc.cluster-vpc.id}"
count = length(var.aws_cidr_subnets_private)
vpc_id = aws_vpc.cluster-vpc.id
route {
cidr_block = "0.0.0.0/0"
nat_gateway_id = "${element(aws_nat_gateway.cluster-nat-gateway.*.id, count.index)}"
nat_gateway_id = element(aws_nat_gateway.cluster-nat-gateway.*.id, count.index)
}
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-routetable-private-${count.index}"
))}"
tags = merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-routetable-private-${count.index}"
))
}
resource "aws_route_table_association" "kubernetes-public" {
count = "${length(var.aws_cidr_subnets_public)}"
subnet_id = "${element(aws_subnet.cluster-vpc-subnets-public.*.id,count.index)}"
route_table_id = "${aws_route_table.kubernetes-public.id}"
count = length(var.aws_cidr_subnets_public)
subnet_id = element(aws_subnet.cluster-vpc-subnets-public.*.id, count.index)
route_table_id = aws_route_table.kubernetes-public.id
}
resource "aws_route_table_association" "kubernetes-private" {
count = "${length(var.aws_cidr_subnets_private)}"
subnet_id = "${element(aws_subnet.cluster-vpc-subnets-private.*.id,count.index)}"
route_table_id = "${element(aws_route_table.kubernetes-private.*.id,count.index)}"
count = length(var.aws_cidr_subnets_private)
subnet_id = element(aws_subnet.cluster-vpc-subnets-private.*.id, count.index)
route_table_id = element(aws_route_table.kubernetes-private.*.id, count.index)
}
#Kubernetes Security Groups
resource "aws_security_group" "kubernetes" {
name = "kubernetes-${var.aws_cluster_name}-securitygroup"
vpc_id = "${aws_vpc.cluster-vpc.id}"
vpc_id = aws_vpc.cluster-vpc.id
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-securitygroup"
))}"
tags = merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-securitygroup"
))
}
resource "aws_security_group_rule" "allow-all-ingress" {
@@ -111,8 +111,8 @@ resource "aws_security_group_rule" "allow-all-ingress" {
from_port = 0
to_port = 65535
protocol = "-1"
cidr_blocks = ["${var.aws_vpc_cidr_block}"]
security_group_id = "${aws_security_group.kubernetes.id}"
cidr_blocks = [var.aws_vpc_cidr_block]
security_group_id = aws_security_group.kubernetes.id
}
resource "aws_security_group_rule" "allow-all-egress" {
@@ -121,7 +121,7 @@ resource "aws_security_group_rule" "allow-all-egress" {
to_port = 65535
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
security_group_id = "${aws_security_group.kubernetes.id}"
security_group_id = aws_security_group.kubernetes.id
}
resource "aws_security_group_rule" "allow-ssh-connections" {
@@ -130,5 +130,5 @@ resource "aws_security_group_rule" "allow-ssh-connections" {
to_port = 22
protocol = "TCP"
cidr_blocks = ["0.0.0.0/0"]
security_group_id = "${aws_security_group.kubernetes.id}"
security_group_id = aws_security_group.kubernetes.id
}

View File

@@ -1,19 +1,19 @@
output "aws_vpc_id" {
value = "${aws_vpc.cluster-vpc.id}"
value = aws_vpc.cluster-vpc.id
}
output "aws_subnet_ids_private" {
value = ["${aws_subnet.cluster-vpc-subnets-private.*.id}"]
value = aws_subnet.cluster-vpc-subnets-private.*.id
}
output "aws_subnet_ids_public" {
value = ["${aws_subnet.cluster-vpc-subnets-public.*.id}"]
value = aws_subnet.cluster-vpc-subnets-public.*.id
}
output "aws_security_group" {
value = ["${aws_security_group.kubernetes.*.id}"]
value = aws_security_group.kubernetes.*.id
}
output "default_tags" {
value = "${var.default_tags}"
value = var.default_tags
}

View File

@@ -1,17 +1,17 @@
output "bastion_ip" {
value = "${join("\n", aws_instance.bastion-server.*.public_ip)}"
value = join("\n", aws_instance.bastion-server.*.public_ip)
}
output "masters" {
value = "${join("\n", aws_instance.k8s-master.*.private_ip)}"
value = join("\n", aws_instance.k8s-master.*.private_ip)
}
output "workers" {
value = "${join("\n", aws_instance.k8s-worker.*.private_ip)}"
value = join("\n", aws_instance.k8s-worker.*.private_ip)
}
output "etcd" {
value = "${join("\n", aws_instance.k8s-etcd.*.private_ip)}"
value = join("\n", aws_instance.k8s-etcd.*.private_ip)
}
output "aws_elb_api_fqdn" {
@@ -19,9 +19,9 @@ output "aws_elb_api_fqdn" {
}
output "inventory" {
value = "${data.template_file.inventory.rendered}"
value = data.template_file.inventory.rendered
}
output "default_tags" {
value = "${var.default_tags}"
value = var.default_tags
}

View File

@@ -2,9 +2,9 @@
aws_cluster_name = "devtest"
#VPC Vars
aws_vpc_cidr_block = "10.250.192.0/18"
aws_cidr_subnets_private = ["10.250.192.0/20","10.250.208.0/20"]
aws_cidr_subnets_public = ["10.250.224.0/20","10.250.240.0/20"]
aws_vpc_cidr_block = "10.250.192.0/18"
aws_cidr_subnets_private = ["10.250.192.0/20", "10.250.208.0/20"]
aws_cidr_subnets_public = ["10.250.224.0/20", "10.250.240.0/20"]
#Bastion Host
aws_bastion_size = "t2.medium"
@@ -12,24 +12,24 @@ aws_bastion_size = "t2.medium"
#Kubernetes Cluster
aws_kube_master_num = 3
aws_kube_master_num = 3
aws_kube_master_size = "t2.medium"
aws_etcd_num = 3
aws_etcd_num = 3
aws_etcd_size = "t2.medium"
aws_kube_worker_num = 4
aws_kube_worker_num = 4
aws_kube_worker_size = "t2.medium"
#Settings AWS ELB
aws_elb_api_port = 6443
k8s_secure_api_port = 6443
aws_elb_api_port = 6443
k8s_secure_api_port = 6443
kube_insecure_apiserver_address = "0.0.0.0"
default_tags = {
# Env = "devtest"
# Product = "kubernetes"
# Env = "devtest"
# Product = "kubernetes"
}
inventory_file = "../../../inventory/hosts"

View File

@@ -25,7 +25,7 @@ data "aws_ami" "distro" {
filter {
name = "name"
values = ["CoreOS-stable-*"]
values = ["ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*"]
}
filter {
@@ -33,7 +33,7 @@ data "aws_ami" "distro" {
values = ["hvm"]
}
owners = ["595879546273"] #CoreOS
owners = ["099720109477"] # Canonical
}
//AWS VPC Variables

View File

@@ -1,11 +1,11 @@
# Kubernetes on Openstack with Terraform
# Kubernetes on OpenStack with Terraform
Provision a Kubernetes cluster with [Terraform](https://www.terraform.io) on
Openstack.
OpenStack.
## Status
This will install a Kubernetes cluster on an Openstack Cloud. It should work on
This will install a Kubernetes cluster on an OpenStack Cloud. It should work on
most modern installs of OpenStack that support the basic services.
### Known compatible public clouds
@@ -38,6 +38,16 @@ hosts where that makes sense. You have the option of creating bastion hosts
inside the private subnet to access the nodes there. Alternatively, a node with
a floating IP can be used as a jump host to nodes without.
#### Using an existing router
It is possible to use an existing router instead of creating one. To use an
existing router, set the router\_id variable to the UUID of the router you
wish to use.
For example:
```
router_id = "00c542e7-6f46-4535-ae95-984c7f0391a3"
```
### Kubernetes Nodes
You can create many different kubernetes topologies by setting the number of
different classes of hosts. For each class there are options for allocating
@@ -62,9 +72,9 @@ specify:
- Size of the non-ephemeral volumes to be attached to store the GlusterFS bricks
- Other properties related to provisioning the hosts
Even if you are using Container Linux by CoreOS for your cluster, you will still
Even if you are using Flatcar Container Linux by Kinvolk for your cluster, you will still
need the GlusterFS VMs to be based on either Debian or RedHat based images.
Container Linux by CoreOS cannot serve GlusterFS, but can connect to it through
Flatcar Container Linux by Kinvolk cannot serve GlusterFS, but can connect to it through
binaries available on hyperkube v1.4.3_coreos.0 or higher.
## Requirements
@@ -224,7 +234,9 @@ For your cluster, edit `inventory/$CLUSTER/cluster.tfvars`.
|Variable | Description |
|---------|-------------|
|`cluster_name` | All OpenStack resources will use the Terraform variable`cluster_name` (default`example`) in their name to make it easier to track. For example the first compute resource will be named`example-kubernetes-1`. |
|`az_list` | List of Availability Zones available in your OpenStack cluster. |
|`network_name` | The name to be given to the internal network that will be generated |
|`network_dns_domain` | (Optional) The dns_domain for the internal network that will be generated |
|`dns_nameservers`| An array of DNS name server names to be used by hosts in the internal subnet. |
|`floatingip_pool` | Name of the pool from which floating IPs will be allocated |
|`external_net` | UUID of the external network that will be routed to |
@@ -246,6 +258,113 @@ For your cluster, edit `inventory/$CLUSTER/cluster.tfvars`.
|`k8s_allowed_remote_ips` | List of CIDRs allowed to initiate an SSH connection, empty by default |
|`worker_allowed_ports` | List of ports to open on worker nodes, `[{ "protocol" = "tcp", "port_range_min" = 30000, "port_range_max" = 32767, "remote_ip_prefix" = "0.0.0.0/0"}]` by default |
|`wait_for_floatingip` | Let Terraform poll the instance until the floating IP has been associated, `false` by default. |
|`node_root_volume_size_in_gb` | Size of the root volume for nodes, 0 to use ephemeral storage |
|`master_root_volume_size_in_gb` | Size of the root volume for masters, 0 to use ephemeral storage |
|`gfs_root_volume_size_in_gb` | Size of the root volume for gluster, 0 to use ephemeral storage |
|`etcd_root_volume_size_in_gb` | Size of the root volume for etcd nodes, 0 to use ephemeral storage |
|`bastion_root_volume_size_in_gb` | Size of the root volume for bastions, 0 to use ephemeral storage |
|`use_server_group` | Create and use openstack nova servergroups, default: false |
|`k8s_nodes` | Map containing worker node definition, see explanation below |
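As a hedged illustration of the new variables documented above, they might be set in `inventory/$CLUSTER/cluster.tfvars` like so (the sizes are examples, not project defaults):
```
# Illustrative cluster.tfvars excerpt - sizes are examples, not defaults.
master_root_volume_size_in_gb  = 50
node_root_volume_size_in_gb    = 100
etcd_root_volume_size_in_gb    = 20
bastion_root_volume_size_in_gb = 10
gfs_root_volume_size_in_gb     = 50
use_server_group               = true   # flag name as documented in the table above
```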
##### k8s_nodes
Allows a custom definition of worker nodes giving the operator full control over individual node flavor and
availability zone placement. To enable the use of this mode set the `number_of_k8s_nodes` and
`number_of_k8s_nodes_no_floating_ip` variables to 0. Then define your desired worker node configuration
using the `k8s_nodes` variable.
For example:
```
k8s_nodes = {
"1" = {
"az" = "sto1"
"flavor" = "83d8b44a-26a0-4f02-a981-079446926445"
"floating_ip" = true
},
"2" = {
"az" = "sto2"
"flavor" = "83d8b44a-26a0-4f02-a981-079446926445"
"floating_ip" = true
},
"3" = {
"az" = "sto3"
"flavor" = "83d8b44a-26a0-4f02-a981-079446926445"
"floating_ip" = true
}
}
```
Would result in the same configuration as:
```
number_of_k8s_nodes = 3
flavor_k8s_node = "83d8b44a-26a0-4f02-a981-079446926445"
az_list = ["sto1", "sto2", "sto3"]
```
And:
```
k8s_nodes = {
"ing-1" = {
"az" = "sto1"
"flavor" = "83d8b44a-26a0-4f02-a981-079446926445"
"floating_ip" = true
},
"ing-2" = {
"az" = "sto2"
"flavor" = "83d8b44a-26a0-4f02-a981-079446926445"
"floating_ip" = true
},
"ing-3" = {
"az" = "sto3"
"flavor" = "83d8b44a-26a0-4f02-a981-079446926445"
"floating_ip" = true
},
"big-1" = {
"az" = "sto1"
"flavor" = "3f73fc93-ec61-4808-88df-2580d94c1a9b"
"floating_ip" = false
},
"big-2" = {
"az" = "sto2"
"flavor" = "3f73fc93-ec61-4808-88df-2580d94c1a9b"
"floating_ip" = false
},
"big-3" = {
"az" = "sto3"
"flavor" = "3f73fc93-ec61-4808-88df-2580d94c1a9b"
"floating_ip" = false
},
"small-1" = {
"az" = "sto1"
"flavor" = "7a6a998f-ac7f-4fb8-a534-2175b254f75e"
"floating_ip" = false
},
"small-2" = {
"az" = "sto2"
"flavor" = "7a6a998f-ac7f-4fb8-a534-2175b254f75e"
"floating_ip" = false
},
"small-3" = {
"az" = "sto3"
"flavor" = "7a6a998f-ac7f-4fb8-a534-2175b254f75e"
"floating_ip" = false
}
}
```
Would result in three nodes in each availability zone, each with its own separate naming,
flavor and floating IP configuration.
The "schema":
```
k8s_nodes = {
"key | node name suffix, must be unique" = {
"az" = string
"flavor" = string
"floating_ip" = bool
},
}
```
All values are required.
#### Terraform state files
@@ -363,7 +482,7 @@ So, either a bastion host, or at least master/node with a floating IP are requir
#### Test access
Make sure you can connect to the hosts. Note that Container Linux by CoreOS will have a state `FAILED` due to Python not being present. This is okay, because Python will be installed during bootstrapping, so long as the hosts are not `UNREACHABLE`.
Make sure you can connect to the hosts. Note that Flatcar Container Linux by Kinvolk will have a state `FAILED` due to Python not being present. This is okay, because Python will be installed during bootstrapping, so long as the hosts are not `UNREACHABLE`.
```
$ ansible -i inventory/$CLUSTER/hosts -m ping all
@@ -391,7 +510,7 @@ Edit `inventory/$CLUSTER/group_vars/all/all.yml`:
# Directory where the binaries will be installed
# Default:
# bin_dir: /usr/local/bin
# For Container Linux by CoreOS:
# For Flatcar Container Linux by Kinvolk:
bin_dir: /opt/bin
```
- and **cloud_provider**:
@@ -412,14 +531,17 @@ kube_network_plugin: flannel
# Can be docker_dns, host_resolvconf or none
# Default:
# resolvconf_mode: docker_dns
# For Container Linux by CoreOS:
# For Flatcar Container Linux by Kinvolk:
resolvconf_mode: host_resolvconf
```
- Set the maximum number of attached Cinder volumes per host (default 256)
```
node_volume_attach_limit: 26
```
- Disable access_ip. This will make all internal cluster traffic be sent over the local network when a floating IP is attached (by default this value is set to 1)
```
use_access_ip: 0
```
### Deploy Kubernetes
@@ -483,3 +605,81 @@ $ ansible-playbook --become -i inventory/$CLUSTER/hosts ./contrib/network-storag
## What's next
Try out your new Kubernetes cluster with the [Hello Kubernetes service](https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/).
## Appendix
### Migration from `number_of_k8s_nodes*` to `k8s_nodes`
If you currently have a cluster defined using the `number_of_k8s_nodes*` variables and wish
to migrate to the `k8s_nodes` style you can do it like so:
```ShellSession
$ terraform state list
module.compute.data.openstack_images_image_v2.gfs_image
module.compute.data.openstack_images_image_v2.vm_image
module.compute.openstack_compute_floatingip_associate_v2.k8s_master[0]
module.compute.openstack_compute_floatingip_associate_v2.k8s_node[0]
module.compute.openstack_compute_floatingip_associate_v2.k8s_node[1]
module.compute.openstack_compute_floatingip_associate_v2.k8s_node[2]
module.compute.openstack_compute_instance_v2.k8s_master[0]
module.compute.openstack_compute_instance_v2.k8s_node[0]
module.compute.openstack_compute_instance_v2.k8s_node[1]
module.compute.openstack_compute_instance_v2.k8s_node[2]
module.compute.openstack_compute_keypair_v2.k8s
module.compute.openstack_compute_servergroup_v2.k8s_etcd[0]
module.compute.openstack_compute_servergroup_v2.k8s_master[0]
module.compute.openstack_compute_servergroup_v2.k8s_node[0]
module.compute.openstack_networking_secgroup_rule_v2.bastion[0]
module.compute.openstack_networking_secgroup_rule_v2.egress[0]
module.compute.openstack_networking_secgroup_rule_v2.k8s
module.compute.openstack_networking_secgroup_rule_v2.k8s_allowed_remote_ips[0]
module.compute.openstack_networking_secgroup_rule_v2.k8s_allowed_remote_ips[1]
module.compute.openstack_networking_secgroup_rule_v2.k8s_allowed_remote_ips[2]
module.compute.openstack_networking_secgroup_rule_v2.k8s_master[0]
module.compute.openstack_networking_secgroup_rule_v2.worker[0]
module.compute.openstack_networking_secgroup_rule_v2.worker[1]
module.compute.openstack_networking_secgroup_rule_v2.worker[2]
module.compute.openstack_networking_secgroup_rule_v2.worker[3]
module.compute.openstack_networking_secgroup_rule_v2.worker[4]
module.compute.openstack_networking_secgroup_v2.bastion[0]
module.compute.openstack_networking_secgroup_v2.k8s
module.compute.openstack_networking_secgroup_v2.k8s_master
module.compute.openstack_networking_secgroup_v2.worker
module.ips.null_resource.dummy_dependency
module.ips.openstack_networking_floatingip_v2.k8s_master[0]
module.ips.openstack_networking_floatingip_v2.k8s_node[0]
module.ips.openstack_networking_floatingip_v2.k8s_node[1]
module.ips.openstack_networking_floatingip_v2.k8s_node[2]
module.network.openstack_networking_network_v2.k8s[0]
module.network.openstack_networking_router_interface_v2.k8s[0]
module.network.openstack_networking_router_v2.k8s[0]
module.network.openstack_networking_subnet_v2.k8s[0]
$ terraform state mv 'module.compute.openstack_compute_floatingip_associate_v2.k8s_node[0]' 'module.compute.openstack_compute_floatingip_associate_v2.k8s_nodes["1"]'
Move "module.compute.openstack_compute_floatingip_associate_v2.k8s_node[0]" to "module.compute.openstack_compute_floatingip_associate_v2.k8s_nodes[\"1\"]"
Successfully moved 1 object(s).
$ terraform state mv 'module.compute.openstack_compute_floatingip_associate_v2.k8s_node[1]' 'module.compute.openstack_compute_floatingip_associate_v2.k8s_nodes["2"]'
Move "module.compute.openstack_compute_floatingip_associate_v2.k8s_node[1]" to "module.compute.openstack_compute_floatingip_associate_v2.k8s_nodes[\"2\"]"
Successfully moved 1 object(s).
$ terraform state mv 'module.compute.openstack_compute_floatingip_associate_v2.k8s_node[2]' 'module.compute.openstack_compute_floatingip_associate_v2.k8s_nodes["3"]'
Move "module.compute.openstack_compute_floatingip_associate_v2.k8s_node[2]" to "module.compute.openstack_compute_floatingip_associate_v2.k8s_nodes[\"3\"]"
Successfully moved 1 object(s).
$ terraform state mv 'module.compute.openstack_compute_instance_v2.k8s_node[0]' 'module.compute.openstack_compute_instance_v2.k8s_node["1"]'
Move "module.compute.openstack_compute_instance_v2.k8s_node[0]" to "module.compute.openstack_compute_instance_v2.k8s_node[\"1\"]"
Successfully moved 1 object(s).
$ terraform state mv 'module.compute.openstack_compute_instance_v2.k8s_node[1]' 'module.compute.openstack_compute_instance_v2.k8s_node["2"]'
Move "module.compute.openstack_compute_instance_v2.k8s_node[1]" to "module.compute.openstack_compute_instance_v2.k8s_node[\"2\"]"
Successfully moved 1 object(s).
$ terraform state mv 'module.compute.openstack_compute_instance_v2.k8s_node[2]' 'module.compute.openstack_compute_instance_v2.k8s_node["3"]'
Move "module.compute.openstack_compute_instance_v2.k8s_node[2]" to "module.compute.openstack_compute_instance_v2.k8s_node[\"3\"]"
Successfully moved 1 object(s).
$ terraform state mv 'module.ips.openstack_networking_floatingip_v2.k8s_node[0]' 'module.ips.openstack_networking_floatingip_v2.k8s_node["1"]'
Move "module.ips.openstack_networking_floatingip_v2.k8s_node[0]" to "module.ips.openstack_networking_floatingip_v2.k8s_node[\"1\"]"
Successfully moved 1 object(s).
$ terraform state mv 'module.ips.openstack_networking_floatingip_v2.k8s_node[1]' 'module.ips.openstack_networking_floatingip_v2.k8s_node["2"]'
Move "module.ips.openstack_networking_floatingip_v2.k8s_node[1]" to "module.ips.openstack_networking_floatingip_v2.k8s_node[\"2\"]"
Successfully moved 1 object(s).
$ terraform state mv 'module.ips.openstack_networking_floatingip_v2.k8s_node[2]' 'module.ips.openstack_networking_floatingip_v2.k8s_node["3"]'
Move "module.ips.openstack_networking_floatingip_v2.k8s_node[2]" to "module.ips.openstack_networking_floatingip_v2.k8s_node[\"3\"]"
Successfully moved 1 object(s).
```
Of course, for nodes without floating IPs these steps can be omitted.

View File

@@ -5,89 +5,104 @@ provider "openstack" {
module "network" {
source = "./modules/network"
external_net = "${var.external_net}"
network_name = "${var.network_name}"
subnet_cidr = "${var.subnet_cidr}"
cluster_name = "${var.cluster_name}"
dns_nameservers = "${var.dns_nameservers}"
use_neutron = "${var.use_neutron}"
external_net = var.external_net
network_name = var.network_name
subnet_cidr = var.subnet_cidr
cluster_name = var.cluster_name
dns_nameservers = var.dns_nameservers
network_dns_domain = var.network_dns_domain
use_neutron = var.use_neutron
router_id = var.router_id
}
module "ips" {
source = "./modules/ips"
number_of_k8s_masters = "${var.number_of_k8s_masters}"
number_of_k8s_masters_no_etcd = "${var.number_of_k8s_masters_no_etcd}"
number_of_k8s_nodes = "${var.number_of_k8s_nodes}"
floatingip_pool = "${var.floatingip_pool}"
number_of_bastions = "${var.number_of_bastions}"
external_net = "${var.external_net}"
network_name = "${var.network_name}"
router_id = "${module.network.router_id}"
number_of_k8s_masters = var.number_of_k8s_masters
number_of_k8s_masters_no_etcd = var.number_of_k8s_masters_no_etcd
number_of_k8s_nodes = var.number_of_k8s_nodes
floatingip_pool = var.floatingip_pool
number_of_bastions = var.number_of_bastions
external_net = var.external_net
network_name = var.network_name
router_id = module.network.router_id
k8s_nodes = var.k8s_nodes
}
module "compute" {
source = "./modules/compute"
cluster_name = "${var.cluster_name}"
az_list = "${var.az_list}"
number_of_k8s_masters = "${var.number_of_k8s_masters}"
number_of_k8s_masters_no_etcd = "${var.number_of_k8s_masters_no_etcd}"
number_of_etcd = "${var.number_of_etcd}"
number_of_k8s_masters_no_floating_ip = "${var.number_of_k8s_masters_no_floating_ip}"
number_of_k8s_masters_no_floating_ip_no_etcd = "${var.number_of_k8s_masters_no_floating_ip_no_etcd}"
number_of_k8s_nodes = "${var.number_of_k8s_nodes}"
number_of_bastions = "${var.number_of_bastions}"
number_of_k8s_nodes_no_floating_ip = "${var.number_of_k8s_nodes_no_floating_ip}"
number_of_gfs_nodes_no_floating_ip = "${var.number_of_gfs_nodes_no_floating_ip}"
gfs_volume_size_in_gb = "${var.gfs_volume_size_in_gb}"
public_key_path = "${var.public_key_path}"
image = "${var.image}"
image_gfs = "${var.image_gfs}"
ssh_user = "${var.ssh_user}"
ssh_user_gfs = "${var.ssh_user_gfs}"
flavor_k8s_master = "${var.flavor_k8s_master}"
flavor_k8s_node = "${var.flavor_k8s_node}"
flavor_etcd = "${var.flavor_etcd}"
flavor_gfs_node = "${var.flavor_gfs_node}"
network_name = "${var.network_name}"
flavor_bastion = "${var.flavor_bastion}"
k8s_master_fips = "${module.ips.k8s_master_fips}"
k8s_master_no_etcd_fips = "${module.ips.k8s_master_no_etcd_fips}"
k8s_node_fips = "${module.ips.k8s_node_fips}"
bastion_fips = "${module.ips.bastion_fips}"
bastion_allowed_remote_ips = "${var.bastion_allowed_remote_ips}"
master_allowed_remote_ips = "${var.master_allowed_remote_ips}"
k8s_allowed_remote_ips = "${var.k8s_allowed_remote_ips}"
k8s_allowed_egress_ips = "${var.k8s_allowed_egress_ips}"
supplementary_master_groups = "${var.supplementary_master_groups}"
supplementary_node_groups = "${var.supplementary_node_groups}"
worker_allowed_ports = "${var.worker_allowed_ports}"
wait_for_floatingip = "${var.wait_for_floatingip}"
cluster_name = var.cluster_name
az_list = var.az_list
az_list_node = var.az_list_node
number_of_k8s_masters = var.number_of_k8s_masters
number_of_k8s_masters_no_etcd = var.number_of_k8s_masters_no_etcd
number_of_etcd = var.number_of_etcd
number_of_k8s_masters_no_floating_ip = var.number_of_k8s_masters_no_floating_ip
number_of_k8s_masters_no_floating_ip_no_etcd = var.number_of_k8s_masters_no_floating_ip_no_etcd
number_of_k8s_nodes = var.number_of_k8s_nodes
number_of_bastions = var.number_of_bastions
number_of_k8s_nodes_no_floating_ip = var.number_of_k8s_nodes_no_floating_ip
number_of_gfs_nodes_no_floating_ip = var.number_of_gfs_nodes_no_floating_ip
k8s_nodes = var.k8s_nodes
bastion_root_volume_size_in_gb = var.bastion_root_volume_size_in_gb
etcd_root_volume_size_in_gb = var.etcd_root_volume_size_in_gb
master_root_volume_size_in_gb = var.master_root_volume_size_in_gb
node_root_volume_size_in_gb = var.node_root_volume_size_in_gb
gfs_root_volume_size_in_gb = var.gfs_root_volume_size_in_gb
gfs_volume_size_in_gb = var.gfs_volume_size_in_gb
master_volume_type = var.master_volume_type
public_key_path = var.public_key_path
image = var.image
image_gfs = var.image_gfs
ssh_user = var.ssh_user
ssh_user_gfs = var.ssh_user_gfs
flavor_k8s_master = var.flavor_k8s_master
flavor_k8s_node = var.flavor_k8s_node
flavor_etcd = var.flavor_etcd
flavor_gfs_node = var.flavor_gfs_node
network_name = var.network_name
flavor_bastion = var.flavor_bastion
k8s_master_fips = module.ips.k8s_master_fips
k8s_master_no_etcd_fips = module.ips.k8s_master_no_etcd_fips
k8s_node_fips = module.ips.k8s_node_fips
k8s_nodes_fips = module.ips.k8s_nodes_fips
bastion_fips = module.ips.bastion_fips
bastion_allowed_remote_ips = var.bastion_allowed_remote_ips
master_allowed_remote_ips = var.master_allowed_remote_ips
k8s_allowed_remote_ips = var.k8s_allowed_remote_ips
k8s_allowed_egress_ips = var.k8s_allowed_egress_ips
supplementary_master_groups = var.supplementary_master_groups
supplementary_node_groups = var.supplementary_node_groups
master_allowed_ports = var.master_allowed_ports
worker_allowed_ports = var.worker_allowed_ports
wait_for_floatingip = var.wait_for_floatingip
use_access_ip = var.use_access_ip
use_server_groups = var.use_server_groups
network_id = "${module.network.router_id}"
network_id = module.network.router_id
}
output "private_subnet_id" {
value = "${module.network.subnet_id}"
value = module.network.subnet_id
}
output "floating_network_id" {
value = "${var.external_net}"
value = var.external_net
}
output "router_id" {
value = "${module.network.router_id}"
value = module.network.router_id
}
output "k8s_master_fips" {
value = "${concat(module.ips.k8s_master_fips, module.ips.k8s_master_no_etcd_fips)}"
value = concat(module.ips.k8s_master_fips, module.ips.k8s_master_no_etcd_fips)
}
output "k8s_node_fips" {
value = "${module.ips.k8s_node_fips}"
value = var.number_of_k8s_nodes > 0 ? module.ips.k8s_node_fips : [for key, value in module.ips.k8s_nodes_fips : value.address]
}
output "bastion_fips" {
value = "${module.ips.bastion_fips}"
value = module.ips.bastion_fips
}

View File

@@ -1,6 +1,14 @@
data "openstack_images_image_v2" "vm_image" {
name = var.image
}
data "openstack_images_image_v2" "gfs_image" {
name = var.image_gfs == "" ? var.image : var.image_gfs
}
resource "openstack_compute_keypair_v2" "k8s" {
name = "kubernetes-${var.cluster_name}"
public_key = "${chomp(file(var.public_key_path))}"
public_key = chomp(file(var.public_key_path))
}
resource "openstack_networking_secgroup_v2" "k8s_master" {
@@ -10,32 +18,43 @@ resource "openstack_networking_secgroup_v2" "k8s_master" {
}
resource "openstack_networking_secgroup_rule_v2" "k8s_master" {
count = "${length(var.master_allowed_remote_ips)}"
count = length(var.master_allowed_remote_ips)
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "6443"
port_range_max = "6443"
remote_ip_prefix = "${var.master_allowed_remote_ips[count.index]}"
security_group_id = "${openstack_networking_secgroup_v2.k8s_master.id}"
remote_ip_prefix = var.master_allowed_remote_ips[count.index]
security_group_id = openstack_networking_secgroup_v2.k8s_master.id
}
resource "openstack_networking_secgroup_rule_v2" "k8s_master_ports" {
count = length(var.master_allowed_ports)
direction = "ingress"
ethertype = "IPv4"
protocol = lookup(var.master_allowed_ports[count.index], "protocol", "tcp")
port_range_min = lookup(var.master_allowed_ports[count.index], "port_range_min")
port_range_max = lookup(var.master_allowed_ports[count.index], "port_range_max")
remote_ip_prefix = lookup(var.master_allowed_ports[count.index], "remote_ip_prefix", "0.0.0.0/0")
security_group_id = openstack_networking_secgroup_v2.k8s_master.id
}
resource "openstack_networking_secgroup_v2" "bastion" {
name = "${var.cluster_name}-bastion"
count = "${var.number_of_bastions != "" ? 1 : 0}"
count = var.number_of_bastions != "" ? 1 : 0
description = "${var.cluster_name} - Bastion Server"
delete_default_rules = true
}
resource "openstack_networking_secgroup_rule_v2" "bastion" {
count = "${var.number_of_bastions != "" ? length(var.bastion_allowed_remote_ips) : 0}"
count = var.number_of_bastions != "" ? length(var.bastion_allowed_remote_ips) : 0
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "22"
port_range_max = "22"
remote_ip_prefix = "${var.bastion_allowed_remote_ips[count.index]}"
security_group_id = "${openstack_networking_secgroup_v2.bastion[count.index].id}"
remote_ip_prefix = var.bastion_allowed_remote_ips[count.index]
security_group_id = openstack_networking_secgroup_v2.bastion[0].id
}
resource "openstack_networking_secgroup_v2" "k8s" {
@@ -47,27 +66,27 @@ resource "openstack_networking_secgroup_v2" "k8s" {
resource "openstack_networking_secgroup_rule_v2" "k8s" {
direction = "ingress"
ethertype = "IPv4"
remote_group_id = "${openstack_networking_secgroup_v2.k8s.id}"
security_group_id = "${openstack_networking_secgroup_v2.k8s.id}"
remote_group_id = openstack_networking_secgroup_v2.k8s.id
security_group_id = openstack_networking_secgroup_v2.k8s.id
}
resource "openstack_networking_secgroup_rule_v2" "k8s_allowed_remote_ips" {
count = "${length(var.k8s_allowed_remote_ips)}"
count = length(var.k8s_allowed_remote_ips)
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "22"
port_range_max = "22"
remote_ip_prefix = "${var.k8s_allowed_remote_ips[count.index]}"
security_group_id = "${openstack_networking_secgroup_v2.k8s.id}"
remote_ip_prefix = var.k8s_allowed_remote_ips[count.index]
security_group_id = openstack_networking_secgroup_v2.k8s.id
}
resource "openstack_networking_secgroup_rule_v2" "egress" {
count = "${length(var.k8s_allowed_egress_ips)}"
count = length(var.k8s_allowed_egress_ips)
direction = "egress"
ethertype = "IPv4"
remote_ip_prefix = "${var.k8s_allowed_egress_ips[count.index]}"
security_group_id = "${openstack_networking_secgroup_v2.k8s.id}"
remote_ip_prefix = var.k8s_allowed_egress_ips[count.index]
security_group_id = openstack_networking_secgroup_v2.k8s.id
}
resource "openstack_networking_secgroup_v2" "worker" {
@@ -77,35 +96,66 @@ resource "openstack_networking_secgroup_v2" "worker" {
}
resource "openstack_networking_secgroup_rule_v2" "worker" {
count = "${length(var.worker_allowed_ports)}"
count = length(var.worker_allowed_ports)
direction = "ingress"
ethertype = "IPv4"
protocol = "${lookup(var.worker_allowed_ports[count.index], "protocol", "tcp")}"
port_range_min = "${lookup(var.worker_allowed_ports[count.index], "port_range_min")}"
port_range_max = "${lookup(var.worker_allowed_ports[count.index], "port_range_max")}"
remote_ip_prefix = "${lookup(var.worker_allowed_ports[count.index], "remote_ip_prefix", "0.0.0.0/0")}"
security_group_id = "${openstack_networking_secgroup_v2.worker.id}"
protocol = lookup(var.worker_allowed_ports[count.index], "protocol", "tcp")
port_range_min = lookup(var.worker_allowed_ports[count.index], "port_range_min")
port_range_max = lookup(var.worker_allowed_ports[count.index], "port_range_max")
remote_ip_prefix = lookup(var.worker_allowed_ports[count.index], "remote_ip_prefix", "0.0.0.0/0")
security_group_id = openstack_networking_secgroup_v2.worker.id
}
resource "openstack_compute_servergroup_v2" "k8s_master" {
count = "%{if var.use_server_groups}1%{else}0%{endif}"
name = "k8s-master-srvgrp"
policies = ["anti-affinity"]
}
resource "openstack_compute_servergroup_v2" "k8s_node" {
count = "%{if var.use_server_groups}1%{else}0%{endif}"
name = "k8s-node-srvgrp"
policies = ["anti-affinity"]
}
resource "openstack_compute_servergroup_v2" "k8s_etcd" {
count = "%{if var.use_server_groups}1%{else}0%{endif}"
name = "k8s-etcd-srvgrp"
policies = ["anti-affinity"]
}
resource "openstack_compute_instance_v2" "bastion" {
name = "${var.cluster_name}-bastion-${count.index+1}"
count = "${var.number_of_bastions}"
image_name = "${var.image}"
flavor_id = "${var.flavor_bastion}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
name = "${var.cluster_name}-bastion-${count.index + 1}"
count = var.number_of_bastions
image_name = var.image
flavor_id = var.flavor_bastion
key_pair = openstack_compute_keypair_v2.k8s.name
network {
name = "${var.network_name}"
dynamic "block_device" {
for_each = var.bastion_root_volume_size_in_gb > 0 ? [var.image] : []
content {
uuid = data.openstack_images_image_v2.vm_image.id
source_type = "image"
volume_size = var.bastion_root_volume_size_in_gb
boot_index = 0
destination_type = "volume"
delete_on_termination = true
}
}
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}",
"${element(openstack_networking_secgroup_v2.bastion.*.name, count.index)}",
network {
name = var.network_name
}
security_groups = [openstack_networking_secgroup_v2.k8s.name,
element(openstack_networking_secgroup_v2.bastion.*.name, count.index),
]
metadata = {
ssh_user = "${var.ssh_user}"
ssh_user = var.ssh_user
kubespray_groups = "bastion"
depends_on = "${var.network_id}"
depends_on = var.network_id
use_access_ip = var.use_access_ip
}
provisioner "local-exec" {
@@ -114,233 +164,454 @@ resource "openstack_compute_instance_v2" "bastion" {
}
resource "openstack_compute_instance_v2" "k8s_master" {
name = "${var.cluster_name}-k8s-master-${count.index+1}"
count = "${var.number_of_k8s_masters}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
name = "${var.cluster_name}-k8s-master-${count.index + 1}"
count = var.number_of_k8s_masters
availability_zone = element(var.az_list, count.index)
image_name = var.image
flavor_id = var.flavor_k8s_master
key_pair = openstack_compute_keypair_v2.k8s.name
network {
name = "${var.network_name}"
dynamic "block_device" {
for_each = var.master_root_volume_size_in_gb > 0 ? [var.image] : []
content {
uuid = data.openstack_images_image_v2.vm_image.id
source_type = "image"
volume_size = var.master_root_volume_size_in_gb
volume_type = var.master_volume_type
boot_index = 0
destination_type = "volume"
delete_on_termination = true
}
}
security_groups = ["${openstack_networking_secgroup_v2.k8s_master.name}",
"${openstack_networking_secgroup_v2.k8s.name}",
network {
name = var.network_name
}
security_groups = [openstack_networking_secgroup_v2.k8s_master.name,
openstack_networking_secgroup_v2.k8s.name,
]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
content {
group = openstack_compute_servergroup_v2.k8s_master[0].id
}
}
metadata = {
ssh_user = "${var.ssh_user}"
ssh_user = var.ssh_user
kubespray_groups = "etcd,kube-master,${var.supplementary_master_groups},k8s-cluster,vault"
depends_on = "${var.network_id}"
depends_on = var.network_id
use_access_ip = var.use_access_ip
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ ../../contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element( concat(var.bastion_fips, var.k8s_master_fips), 0)}/ > group_vars/no-floating.yml"
command = "sed s/USER/${var.ssh_user}/ ../../contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(concat(var.bastion_fips, var.k8s_master_fips), 0)}/ > group_vars/no-floating.yml"
}
}
resource "openstack_compute_instance_v2" "k8s_master_no_etcd" {
name = "${var.cluster_name}-k8s-master-ne-${count.index+1}"
count = "${var.number_of_k8s_masters_no_etcd}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
name = "${var.cluster_name}-k8s-master-ne-${count.index + 1}"
count = var.number_of_k8s_masters_no_etcd
availability_zone = element(var.az_list, count.index)
image_name = var.image
flavor_id = var.flavor_k8s_master
key_pair = openstack_compute_keypair_v2.k8s.name
network {
name = "${var.network_name}"
dynamic "block_device" {
for_each = var.master_root_volume_size_in_gb > 0 ? [var.image] : []
content {
uuid = data.openstack_images_image_v2.vm_image.id
source_type = "image"
volume_size = var.master_root_volume_size_in_gb
volume_type = var.master_volume_type
boot_index = 0
destination_type = "volume"
delete_on_termination = true
}
}
security_groups = ["${openstack_networking_secgroup_v2.k8s_master.name}",
"${openstack_networking_secgroup_v2.k8s.name}",
network {
name = var.network_name
}
security_groups = [openstack_networking_secgroup_v2.k8s_master.name,
openstack_networking_secgroup_v2.k8s.name,
]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
content {
group = openstack_compute_servergroup_v2.k8s_master[0].id
}
}
metadata = {
ssh_user = "${var.ssh_user}"
ssh_user = var.ssh_user
kubespray_groups = "kube-master,${var.supplementary_master_groups},k8s-cluster,vault"
depends_on = "${var.network_id}"
depends_on = var.network_id
use_access_ip = var.use_access_ip
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ ../../contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element( concat(var.bastion_fips, var.k8s_master_fips), 0)}/ > group_vars/no-floating.yml"
command = "sed s/USER/${var.ssh_user}/ ../../contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(concat(var.bastion_fips, var.k8s_master_fips), 0)}/ > group_vars/no-floating.yml"
}
}
resource "openstack_compute_instance_v2" "etcd" {
name = "${var.cluster_name}-etcd-${count.index+1}"
count = "${var.number_of_etcd}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_etcd}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
name = "${var.cluster_name}-etcd-${count.index + 1}"
count = var.number_of_etcd
availability_zone = element(var.az_list, count.index)
image_name = var.image
flavor_id = var.flavor_etcd
key_pair = openstack_compute_keypair_v2.k8s.name
network {
name = "${var.network_name}"
dynamic "block_device" {
for_each = var.etcd_root_volume_size_in_gb > 0 ? [var.image] : []
content {
uuid = data.openstack_images_image_v2.vm_image.id
source_type = "image"
volume_size = var.etcd_root_volume_size_in_gb
boot_index = 0
destination_type = "volume"
delete_on_termination = true
}
}
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}"]
network {
name = var.network_name
}
security_groups = [openstack_networking_secgroup_v2.k8s.name]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_etcd[0]] : []
content {
group = openstack_compute_servergroup_v2.k8s_etcd[0].id
}
}
metadata = {
ssh_user = "${var.ssh_user}"
ssh_user = var.ssh_user
kubespray_groups = "etcd,vault,no-floating"
depends_on = "${var.network_id}"
depends_on = var.network_id
use_access_ip = var.use_access_ip
}
}
resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
name = "${var.cluster_name}-k8s-master-nf-${count.index+1}"
count = "${var.number_of_k8s_masters_no_floating_ip}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
name = "${var.cluster_name}-k8s-master-nf-${count.index + 1}"
count = var.number_of_k8s_masters_no_floating_ip
availability_zone = element(var.az_list, count.index)
image_name = var.image
flavor_id = var.flavor_k8s_master
key_pair = openstack_compute_keypair_v2.k8s.name
network {
name = "${var.network_name}"
dynamic "block_device" {
for_each = var.master_root_volume_size_in_gb > 0 ? [var.image] : []
content {
uuid = data.openstack_images_image_v2.vm_image.id
source_type = "image"
volume_size = var.master_root_volume_size_in_gb
volume_type = var.master_volume_type
boot_index = 0
destination_type = "volume"
delete_on_termination = true
}
}
security_groups = ["${openstack_networking_secgroup_v2.k8s_master.name}",
"${openstack_networking_secgroup_v2.k8s.name}",
network {
name = var.network_name
}
security_groups = [openstack_networking_secgroup_v2.k8s_master.name,
openstack_networking_secgroup_v2.k8s.name,
]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
content {
group = openstack_compute_servergroup_v2.k8s_master[0].id
}
}
metadata = {
ssh_user = "${var.ssh_user}"
ssh_user = var.ssh_user
kubespray_groups = "etcd,kube-master,${var.supplementary_master_groups},k8s-cluster,vault,no-floating"
depends_on = "${var.network_id}"
depends_on = var.network_id
use_access_ip = var.use_access_ip
}
}
resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" {
name = "${var.cluster_name}-k8s-master-ne-nf-${count.index+1}"
count = "${var.number_of_k8s_masters_no_floating_ip_no_etcd}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
name = "${var.cluster_name}-k8s-master-ne-nf-${count.index + 1}"
count = var.number_of_k8s_masters_no_floating_ip_no_etcd
availability_zone = element(var.az_list, count.index)
image_name = var.image
flavor_id = var.flavor_k8s_master
key_pair = openstack_compute_keypair_v2.k8s.name
network {
name = "${var.network_name}"
dynamic "block_device" {
for_each = var.master_root_volume_size_in_gb > 0 ? [var.image] : []
content {
uuid = data.openstack_images_image_v2.vm_image.id
source_type = "image"
volume_size = var.master_root_volume_size_in_gb
volume_type = var.master_volume_type
boot_index = 0
destination_type = "volume"
delete_on_termination = true
}
}
security_groups = ["${openstack_networking_secgroup_v2.k8s_master.name}",
"${openstack_networking_secgroup_v2.k8s.name}",
network {
name = var.network_name
}
security_groups = [openstack_networking_secgroup_v2.k8s_master.name,
openstack_networking_secgroup_v2.k8s.name,
]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
content {
group = openstack_compute_servergroup_v2.k8s_master[0].id
}
}
metadata = {
ssh_user = "${var.ssh_user}"
ssh_user = var.ssh_user
kubespray_groups = "kube-master,${var.supplementary_master_groups},k8s-cluster,vault,no-floating"
depends_on = "${var.network_id}"
depends_on = var.network_id
use_access_ip = var.use_access_ip
}
}
resource "openstack_compute_instance_v2" "k8s_node" {
name = "${var.cluster_name}-k8s-node-${count.index+1}"
count = "${var.number_of_k8s_nodes}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_node}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
name = "${var.cluster_name}-k8s-node-${count.index + 1}"
count = var.number_of_k8s_nodes
availability_zone = element(var.az_list_node, count.index)
image_name = var.image
flavor_id = var.flavor_k8s_node
key_pair = openstack_compute_keypair_v2.k8s.name
network {
name = "${var.network_name}"
dynamic "block_device" {
for_each = var.node_root_volume_size_in_gb > 0 ? [var.image] : []
content {
uuid = data.openstack_images_image_v2.vm_image.id
source_type = "image"
volume_size = var.node_root_volume_size_in_gb
boot_index = 0
destination_type = "volume"
delete_on_termination = true
}
}
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}",
"${openstack_networking_secgroup_v2.worker.name}",
network {
name = var.network_name
}
security_groups = [openstack_networking_secgroup_v2.k8s.name,
openstack_networking_secgroup_v2.worker.name,
]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_node[0]] : []
content {
group = openstack_compute_servergroup_v2.k8s_node[0].id
}
}
metadata = {
ssh_user = "${var.ssh_user}"
ssh_user = var.ssh_user
kubespray_groups = "kube-node,k8s-cluster,${var.supplementary_node_groups}"
depends_on = "${var.network_id}"
depends_on = var.network_id
use_access_ip = var.use_access_ip
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ ../../contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element( concat(var.bastion_fips, var.k8s_node_fips), 0)}/ > group_vars/no-floating.yml"
command = "sed s/USER/${var.ssh_user}/ ../../contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(concat(var.bastion_fips, var.k8s_node_fips), 0)}/ > group_vars/no-floating.yml"
}
}
resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
name = "${var.cluster_name}-k8s-node-nf-${count.index+1}"
count = "${var.number_of_k8s_nodes_no_floating_ip}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_node}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
name = "${var.cluster_name}-k8s-node-nf-${count.index + 1}"
count = var.number_of_k8s_nodes_no_floating_ip
availability_zone = element(var.az_list_node, count.index)
image_name = var.image
flavor_id = var.flavor_k8s_node
key_pair = openstack_compute_keypair_v2.k8s.name
network {
name = "${var.network_name}"
dynamic "block_device" {
for_each = var.node_root_volume_size_in_gb > 0 ? [var.image] : []
content {
uuid = data.openstack_images_image_v2.vm_image.id
source_type = "image"
volume_size = var.node_root_volume_size_in_gb
boot_index = 0
destination_type = "volume"
delete_on_termination = true
}
}
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}",
"${openstack_networking_secgroup_v2.worker.name}",
network {
name = var.network_name
}
security_groups = [openstack_networking_secgroup_v2.k8s.name,
openstack_networking_secgroup_v2.worker.name,
]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_node[0]] : []
content {
group = openstack_compute_servergroup_v2.k8s_node[0].id
}
}
metadata = {
ssh_user = "${var.ssh_user}"
ssh_user = var.ssh_user
kubespray_groups = "kube-node,k8s-cluster,no-floating,${var.supplementary_node_groups}"
depends_on = "${var.network_id}"
depends_on = var.network_id
use_access_ip = var.use_access_ip
}
}
resource "openstack_compute_instance_v2" "k8s_nodes" {
for_each = var.number_of_k8s_nodes == 0 && var.number_of_k8s_nodes_no_floating_ip == 0 ? var.k8s_nodes : {}
name = "${var.cluster_name}-k8s-node-${each.key}"
availability_zone = each.value.az
image_name = var.image
flavor_id = each.value.flavor
key_pair = openstack_compute_keypair_v2.k8s.name
dynamic "block_device" {
for_each = var.node_root_volume_size_in_gb > 0 ? [var.image] : []
content {
uuid = data.openstack_images_image_v2.vm_image.id
source_type = "image"
volume_size = var.node_root_volume_size_in_gb
boot_index = 0
destination_type = "volume"
delete_on_termination = true
}
}
network {
name = var.network_name
}
security_groups = [openstack_networking_secgroup_v2.k8s.name,
openstack_networking_secgroup_v2.worker.name,
]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_node[0]] : []
content {
group = openstack_compute_servergroup_v2.k8s_node[0].id
}
}
metadata = {
ssh_user = var.ssh_user
kubespray_groups = "kube-node,k8s-cluster,%{if each.value.floating_ip == false}no-floating,%{endif}${var.supplementary_node_groups}"
depends_on = var.network_id
use_access_ip = var.use_access_ip
}
provisioner "local-exec" {
command = "%{if each.value.floating_ip}sed s/USER/${var.ssh_user}/ ../../contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(concat(var.bastion_fips, [for key, value in var.k8s_nodes_fips : value.address]), 0)}/ > group_vars/no-floating.yml%{else}true%{endif}"
}
}
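# A minimal sketch of how the k8s_nodes map could be defined in cluster.tfvars to drive the
# resource above; the attribute names (az, flavor, floating_ip) follow the each.value
# references used here, while the node keys and concrete values are purely illustrative.
#
# k8s_nodes = {
#   "1" = {
#     "az"          = "nova"
#     "flavor"      = "<flavor-id>"
#     "floating_ip" = true
#   }
#   "2" = {
#     "az"          = "nova"
#     "flavor"      = "<flavor-id>"
#     "floating_ip" = false
#   }
# }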
resource "openstack_compute_instance_v2" "glusterfs_node_no_floating_ip" {
name = "${var.cluster_name}-gfs-node-nf-${count.index + 1}"
count = var.number_of_gfs_nodes_no_floating_ip
availability_zone = element(var.az_list, count.index)
image_name = var.image_gfs
flavor_id = var.flavor_gfs_node
key_pair = openstack_compute_keypair_v2.k8s.name
dynamic "block_device" {
for_each = var.gfs_root_volume_size_in_gb > 0 ? [var.image] : []
content {
uuid = data.openstack_images_image_v2.vm_image.id
source_type = "image"
volume_size = var.gfs_root_volume_size_in_gb
boot_index = 0
destination_type = "volume"
delete_on_termination = true
}
}
network {
name = var.network_name
}
security_groups = [openstack_networking_secgroup_v2.k8s.name]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_node[0]] : []
content {
group = openstack_compute_servergroup_v2.k8s_node[0].id
}
}
metadata = {
ssh_user = var.ssh_user_gfs
kubespray_groups = "gfs-cluster,network-storage,no-floating"
depends_on = var.network_id
use_access_ip = var.use_access_ip
}
}
resource "openstack_compute_floatingip_associate_v2" "bastion" {
count = "${var.number_of_bastions}"
floating_ip = "${var.bastion_fips[count.index]}"
instance_id = "${element(openstack_compute_instance_v2.bastion.*.id, count.index)}"
wait_until_associated = "${var.wait_for_floatingip}"
count = var.number_of_bastions
floating_ip = var.bastion_fips[count.index]
instance_id = element(openstack_compute_instance_v2.bastion.*.id, count.index)
wait_until_associated = var.wait_for_floatingip
}
resource "openstack_compute_floatingip_associate_v2" "k8s_master" {
count = "${var.number_of_k8s_masters}"
instance_id = "${element(openstack_compute_instance_v2.k8s_master.*.id, count.index)}"
floating_ip = "${var.k8s_master_fips[count.index]}"
wait_until_associated = "${var.wait_for_floatingip}"
count = var.number_of_k8s_masters
instance_id = element(openstack_compute_instance_v2.k8s_master.*.id, count.index)
floating_ip = var.k8s_master_fips[count.index]
wait_until_associated = var.wait_for_floatingip
}
resource "openstack_compute_floatingip_associate_v2" "k8s_master_no_etcd" {
count = "${var.number_of_k8s_masters_no_etcd}"
instance_id = "${element(openstack_compute_instance_v2.k8s_master_no_etcd.*.id, count.index)}"
floating_ip = "${var.k8s_master_no_etcd_fips[count.index]}"
count = var.master_root_volume_size_in_gb == 0 ? var.number_of_k8s_masters_no_etcd : 0
instance_id = element(openstack_compute_instance_v2.k8s_master_no_etcd.*.id, count.index)
floating_ip = var.k8s_master_no_etcd_fips[count.index]
}
resource "openstack_compute_floatingip_associate_v2" "k8s_node" {
count = "${var.number_of_k8s_nodes}"
floating_ip = "${var.k8s_node_fips[count.index]}"
instance_id = "${element(openstack_compute_instance_v2.k8s_node.*.id, count.index)}"
wait_until_associated = "${var.wait_for_floatingip}"
count = var.node_root_volume_size_in_gb == 0 ? var.number_of_k8s_nodes : 0
floating_ip = var.k8s_node_fips[count.index]
instance_id = element(openstack_compute_instance_v2.k8s_node[*].id, count.index)
wait_until_associated = var.wait_for_floatingip
}
resource "openstack_compute_floatingip_associate_v2" "k8s_nodes" {
for_each = var.number_of_k8s_nodes == 0 && var.number_of_k8s_nodes_no_floating_ip == 0 ? { for key, value in var.k8s_nodes : key => value if value.floating_ip } : {}
floating_ip = var.k8s_nodes_fips[each.key].address
instance_id = openstack_compute_instance_v2.k8s_nodes[each.key].id
wait_until_associated = var.wait_for_floatingip
}
resource "openstack_blockstorage_volume_v2" "glusterfs_volume" {
name = "${var.cluster_name}-glusterfs_volume-${count.index+1}"
count = "${var.number_of_gfs_nodes_no_floating_ip}"
name = "${var.cluster_name}-glusterfs_volume-${count.index + 1}"
count = var.gfs_root_volume_size_in_gb == 0 ? var.number_of_gfs_nodes_no_floating_ip : 0
description = "Non-ephemeral volume for GlusterFS"
size = "${var.gfs_volume_size_in_gb}"
}
resource "openstack_compute_instance_v2" "glusterfs_node_no_floating_ip" {
name = "${var.cluster_name}-gfs-node-nf-${count.index+1}"
count = "${var.number_of_gfs_nodes_no_floating_ip}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image_gfs}"
flavor_id = "${var.flavor_gfs_node}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}"]
metadata = {
ssh_user = "${var.ssh_user_gfs}"
kubespray_groups = "gfs-cluster,network-storage,no-floating"
depends_on = "${var.network_id}"
}
size = var.gfs_volume_size_in_gb
}
resource "openstack_compute_volume_attach_v2" "glusterfs_volume" {
count = "${var.number_of_gfs_nodes_no_floating_ip}"
instance_id = "${element(openstack_compute_instance_v2.glusterfs_node_no_floating_ip.*.id, count.index)}"
volume_id = "${element(openstack_blockstorage_volume_v2.glusterfs_volume.*.id, count.index)}"
count = var.gfs_root_volume_size_in_gb == 0 ? var.number_of_gfs_nodes_no_floating_ip : 0
instance_id = element(openstack_compute_instance_v2.glusterfs_node_no_floating_ip.*.id, count.index)
volume_id = element(openstack_blockstorage_volume_v2.glusterfs_volume.*.id, count.index)
}


@@ -1,7 +1,11 @@
variable "cluster_name" {}
variable "az_list" {
type = "list"
type = list(string)
}
variable "az_list_node" {
type = list(string)
}
variable "number_of_k8s_masters" {}
@@ -22,8 +26,20 @@ variable "number_of_bastions" {}
variable "number_of_gfs_nodes_no_floating_ip" {}
variable "bastion_root_volume_size_in_gb" {}
variable "etcd_root_volume_size_in_gb" {}
variable "master_root_volume_size_in_gb" {}
variable "node_root_volume_size_in_gb" {}
variable "gfs_root_volume_size_in_gb" {}
variable "gfs_volume_size_in_gb" {}
variable "master_volume_type" {}
variable "public_key_path" {}
variable "image" {}
@@ -51,37 +67,43 @@ variable "network_id" {
}
variable "k8s_master_fips" {
type = "list"
type = list
}
variable "k8s_master_no_etcd_fips" {
type = "list"
type = list
}
variable "k8s_node_fips" {
type = "list"
type = list
}
variable "k8s_nodes_fips" {
type = map
}
variable "bastion_fips" {
type = "list"
type = list
}
variable "bastion_allowed_remote_ips" {
type = "list"
type = list
}
variable "master_allowed_remote_ips" {
type = "list"
type = list
}
variable "k8s_allowed_remote_ips" {
type = "list"
type = list
}
variable "k8s_allowed_egress_ips" {
type = "list"
type = list
}
variable "k8s_nodes" {}
variable "wait_for_floatingip" {}
variable "supplementary_master_groups" {
@@ -92,6 +114,16 @@ variable "supplementary_node_groups" {
default = ""
}
variable "worker_allowed_ports" {
type = "list"
variable "master_allowed_ports" {
type = list
}
variable "worker_allowed_ports" {
type = list
}
variable "use_access_ip" {}
variable "use_server_groups" {
type = bool
}


@@ -1,29 +1,36 @@
resource "null_resource" "dummy_dependency" {
triggers = {
dependency_id = "${var.router_id}"
dependency_id = var.router_id
}
}
resource "openstack_networking_floatingip_v2" "k8s_master" {
count = "${var.number_of_k8s_masters}"
pool = "${var.floatingip_pool}"
depends_on = ["null_resource.dummy_dependency"]
count = var.number_of_k8s_masters
pool = var.floatingip_pool
depends_on = [null_resource.dummy_dependency]
}
resource "openstack_networking_floatingip_v2" "k8s_master_no_etcd" {
count = "${var.number_of_k8s_masters_no_etcd}"
pool = "${var.floatingip_pool}"
depends_on = ["null_resource.dummy_dependency"]
count = var.number_of_k8s_masters_no_etcd
pool = var.floatingip_pool
depends_on = [null_resource.dummy_dependency]
}
resource "openstack_networking_floatingip_v2" "k8s_node" {
count = "${var.number_of_k8s_nodes}"
pool = "${var.floatingip_pool}"
depends_on = ["null_resource.dummy_dependency"]
count = var.number_of_k8s_nodes
pool = var.floatingip_pool
depends_on = [null_resource.dummy_dependency]
}
resource "openstack_networking_floatingip_v2" "bastion" {
count = "${var.number_of_bastions}"
pool = "${var.floatingip_pool}"
depends_on = ["null_resource.dummy_dependency"]
count = var.number_of_bastions
pool = var.floatingip_pool
depends_on = [null_resource.dummy_dependency]
}
resource "openstack_networking_floatingip_v2" "k8s_nodes" {
for_each = var.number_of_k8s_nodes == 0 ? { for key, value in var.k8s_nodes : key => value if value.floating_ip } : {}
pool = var.floatingip_pool
depends_on = [null_resource.dummy_dependency]
}


@@ -1,15 +1,19 @@
output "k8s_master_fips" {
value = "${openstack_networking_floatingip_v2.k8s_master[*].address}"
value = openstack_networking_floatingip_v2.k8s_master[*].address
}
output "k8s_master_no_etcd_fips" {
value = "${openstack_networking_floatingip_v2.k8s_master_no_etcd[*].address}"
value = openstack_networking_floatingip_v2.k8s_master_no_etcd[*].address
}
output "k8s_node_fips" {
value = "${openstack_networking_floatingip_v2.k8s_node[*].address}"
value = openstack_networking_floatingip_v2.k8s_node[*].address
}
output "k8s_nodes_fips" {
value = openstack_networking_floatingip_v2.k8s_nodes
}
output "bastion_fips" {
value = "${openstack_networking_floatingip_v2.bastion[*].address}"
value = openstack_networking_floatingip_v2.bastion[*].address
}


@@ -15,3 +15,5 @@ variable "network_name" {}
variable "router_id" {
default = ""
}
variable "k8s_nodes" {}


@@ -1,27 +1,33 @@
resource "openstack_networking_router_v2" "k8s" {
name = "${var.cluster_name}-router"
count = "${var.use_neutron}"
count = var.use_neutron == 1 && var.router_id == null ? 1 : 0
admin_state_up = "true"
external_network_id = "${var.external_net}"
external_network_id = var.external_net
}
data "openstack_networking_router_v2" "k8s" {
router_id = var.router_id
count = var.use_neutron == 1 && var.router_id != null ? 1 : 0
}
resource "openstack_networking_network_v2" "k8s" {
name = "${var.network_name}"
count = "${var.use_neutron}"
name = var.network_name
count = var.use_neutron
dns_domain = var.network_dns_domain != null ? var.network_dns_domain : null
admin_state_up = "true"
}
resource "openstack_networking_subnet_v2" "k8s" {
name = "${var.cluster_name}-internal-network"
count = "${var.use_neutron}"
network_id = "${openstack_networking_network_v2.k8s[count.index].id}"
cidr = "${var.subnet_cidr}"
count = var.use_neutron
network_id = openstack_networking_network_v2.k8s[count.index].id
cidr = var.subnet_cidr
ip_version = 4
dns_nameservers = "${var.dns_nameservers}"
dns_nameservers = var.dns_nameservers
}
resource "openstack_networking_router_interface_v2" "k8s" {
count = "${var.use_neutron}"
router_id = "${openstack_networking_router_v2.k8s[count.index].id}"
subnet_id = "${openstack_networking_subnet_v2.k8s[count.index].id}"
count = var.use_neutron
router_id = "%{if openstack_networking_router_v2.k8s != []}${openstack_networking_router_v2.k8s[count.index].id}%{else}${var.router_id}%{endif}"
subnet_id = openstack_networking_subnet_v2.k8s[count.index].id
}


@@ -1,11 +1,11 @@
output "router_id" {
value = "${element(concat(openstack_networking_router_v2.k8s.*.id, list("")), 0)}"
value = "%{if var.use_neutron == 1} ${var.router_id == null ? element(concat(openstack_networking_router_v2.k8s.*.id, [""]), 0) : var.router_id} %{else} %{endif}"
}
output "router_internal_port_id" {
value = "${element(concat(openstack_networking_router_interface_v2.k8s.*.id, list("")), 0)}"
value = element(concat(openstack_networking_router_interface_v2.k8s.*.id, [""]), 0)
}
output "subnet_id" {
value = "${element(concat(openstack_networking_subnet_v2.k8s.*.id, list("")), 0)}"
value = element(concat(openstack_networking_subnet_v2.k8s.*.id, [""]), 0)
}


@@ -2,12 +2,16 @@ variable "external_net" {}
variable "network_name" {}
variable "network_dns_domain" {}
variable "cluster_name" {}
variable "dns_nameservers" {
type = "list"
type = list
}
variable "subnet_cidr" {}
variable "use_neutron" {}
variable "router_id" {}


@@ -1,6 +1,9 @@
# your Kubernetes cluster name here
cluster_name = "i-didnt-read-the-docs"
# list of availability zones available in your OpenStack cluster
#az_list = ["nova"]
# SSH key to use for access to nodes
public_key_path = "~/.ssh/id_rsa.pub"


@@ -3,8 +3,14 @@ variable "cluster_name" {
}
variable "az_list" {
description = "List of Availability Zones available in your OpenStack cluster"
type = "list"
description = "List of Availability Zones to use for masters in your OpenStack cluster"
type = list(string)
default = ["nova"]
}
variable "az_list_node" {
description = "List of Availability Zones to use for nodes in your OpenStack cluster"
type = list(string)
default = ["nova"]
}
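# With the masters/nodes split above, the two groups can be pinned to different zones from
# cluster.tfvars, for example (zone names are illustrative placeholders):
#
# az_list      = ["AZ1", "AZ2"]
# az_list_node = ["AZ3"]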
@@ -44,10 +50,34 @@ variable "number_of_gfs_nodes_no_floating_ip" {
default = 0
}
variable "bastion_root_volume_size_in_gb" {
default = 0
}
variable "etcd_root_volume_size_in_gb" {
default = 0
}
variable "master_root_volume_size_in_gb" {
default = 0
}
variable "node_root_volume_size_in_gb" {
default = 0
}
variable "gfs_root_volume_size_in_gb" {
default = 0
}
variable "gfs_volume_size_in_gb" {
default = 75
}
variable "master_volume_type" {
default = "Default"
}
variable "public_key_path" {
description = "The path of the ssh pub key"
default = "~/.ssh/id_rsa.pub"
@@ -55,12 +85,12 @@ variable "public_key_path" {
variable "image" {
description = "the image to use"
default = "ubuntu-14.04"
default = ""
}
variable "image_gfs" {
description = "Glance image to use for GlusterFS"
default = "ubuntu-16.04"
default = ""
}
variable "ssh_user" {
@@ -103,6 +133,12 @@ variable "network_name" {
default = "internal"
}
variable "network_dns_domain" {
description = "dns_domain for the internal network"
type = string
default = null
}
variable "use_neutron" {
description = "Use neutron"
default = 1
@@ -110,13 +146,13 @@ variable "use_neutron" {
variable "subnet_cidr" {
description = "Subnet CIDR block."
type = "string"
type = string
default = "10.0.0.0/24"
}
variable "dns_nameservers" {
description = "An array of DNS name server names used by hosts in this subnet."
type = "list"
type = list
default = []
}
@@ -146,30 +182,36 @@ variable "supplementary_node_groups" {
variable "bastion_allowed_remote_ips" {
description = "An array of CIDRs allowed to SSH to hosts"
type = "list"
type = list(string)
default = ["0.0.0.0/0"]
}
variable "master_allowed_remote_ips" {
description = "An array of CIDRs allowed to access API of masters"
type = "list"
type = list(string)
default = ["0.0.0.0/0"]
}
variable "k8s_allowed_remote_ips" {
description = "An array of CIDRs allowed to SSH to hosts"
type = "list"
type = list(string)
default = []
}
variable "k8s_allowed_egress_ips" {
description = "An array of CIDRs allowed for egress traffic"
type = "list"
type = list(string)
default = ["0.0.0.0/0"]
}
variable "master_allowed_ports" {
type = list
default = []
}
variable "worker_allowed_ports" {
type = "list"
type = list
default = [
{
@@ -180,3 +222,21 @@ variable "worker_allowed_ports" {
},
]
}
variable "use_access_ip" {
default = 1
}
variable "use_server_groups" {
default = false
}
variable "router_id" {
description = "uuid of an externally defined router to use"
default = null
}
variable "k8s_nodes" {
default = {}
}


@@ -38,7 +38,7 @@ now six total etcd replicas.
## SSH Key Setup
An SSH keypair is required so Ansible can access the newly provisioned nodes (bare metal Packet hosts). By default, the public SSH key defined in cluster.tf will be installed in authorized_key on the newly provisioned nodes (~/.ssh/id_rsa.pub). Terraform will upload this public key and then it will be distributed out to all the nodes. If you have already set this public key in Packet (i.e. via the portal), then set the public keyfile name in cluster.tf to blank to prevent the duplicate key from being uploaded which will cause an error.
An SSH keypair is required so Ansible can access the newly provisioned nodes (bare metal Packet hosts). By default, the public SSH key defined in cluster.tfvars (~/.ssh/id_rsa.pub) will be installed in authorized_keys on the newly provisioned nodes. Terraform will upload this public key and then distribute it to all the nodes. If you have already set this public key in Packet (i.e. via the portal), set the public keyfile name in cluster.tfvars to blank to prevent the duplicate key from being uploaded, which would cause an error.
If you don't already have a keypair generated (~/.ssh/id_rsa and ~/.ssh/id_rsa.pub), then a new keypair can be generated with the command:
@@ -72,7 +72,7 @@ If someone gets this key, they can startup/shutdown hosts in your project!
For more information on how to generate an API key or find your project ID, please see:
https://support.packet.com/kb/articles/api-integrations
The Packet Project ID associated with the key will be set later in cluster.tf.
The Packet Project ID associated with the key will be set later in cluster.tfvars.
For more information about the API, please see:
https://www.packet.com/developers/api/
@@ -88,7 +88,7 @@ Note that to deploy several clusters within the same project you need to use [te
The construction of the cluster is driven by values found in
[variables.tf](variables.tf).
For your cluster, edit `inventory/$CLUSTER/cluster.tf`.
For your cluster, edit `inventory/$CLUSTER/cluster.tfvars`.
The `cluster_name` is used to set a tag on each server deployed as part of this cluster.
This helps when identifying which hosts are associated with each cluster.
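For reference, a minimal `cluster.tfvars` might look something like the following sketch; the variable names follow `variables.tf`, while every value shown here is a placeholder to adapt to your own project:

```
cluster_name      = "mycluster"
packet_project_id = "<your-packet-project-id>"
public_key_path   = "~/.ssh/id_rsa.pub"

facility         = "<facility-code>"
operating_system = "flatcar_stable"
billing_cycle    = "hourly"

number_of_k8s_masters = 1
plan_k8s_masters      = "<plan-slug>"
number_of_k8s_nodes   = 2
plan_k8s_nodes        = "<plan-slug>"
```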
@@ -138,7 +138,7 @@ This should finish fairly quickly telling you Terraform has successfully initial
You can apply the Terraform configuration to your cluster with the following command
issued from your cluster's inventory directory (`inventory/$CLUSTER`):
```ShellSession
$ terraform apply -var-file=cluster.tf ../../contrib/terraform/packet
$ terraform apply -var-file=cluster.tfvars ../../contrib/terraform/packet
$ export ANSIBLE_HOST_KEY_CHECKING=False
$ ansible-playbook -i hosts ../../cluster.yml
```
@@ -147,7 +147,7 @@ $ ansible-playbook -i hosts ../../cluster.yml
You can destroy your new cluster with the following command issued from the cluster's inventory directory:
```ShellSession
$ terraform destroy -var-file=cluster.tf ../../contrib/terraform/packet
$ terraform destroy -var-file=cluster.tfvars ../../contrib/terraform/packet
```
If you've started the Ansible run, it may also be a good idea to do some manual cleanup:
@@ -176,7 +176,7 @@ If you have deployed and destroyed a previous iteration of your cluster, you wil
#### Test access
Make sure you can connect to the hosts. Note that Container Linux by CoreOS will have a state `FAILED` due to Python not being present. This is okay, because Python will be installed during bootstrapping, so long as the hosts are not `UNREACHABLE`.
Make sure you can connect to the hosts. Note that Flatcar Container Linux by Kinvolk will have a state `FAILED` due to Python not being present. This is okay, because Python will be installed during bootstrapping, so long as the hosts are not `UNREACHABLE`.
```
$ ansible -i inventory/$CLUSTER/hosts -m ping all


@@ -4,59 +4,60 @@ provider "packet" {
}
resource "packet_ssh_key" "k8s" {
count = "${var.public_key_path != "" ? 1 : 0}"
count = var.public_key_path != "" ? 1 : 0
name = "kubernetes-${var.cluster_name}"
public_key = "${chomp(file(var.public_key_path))}"
public_key = chomp(file(var.public_key_path))
}
resource "packet_device" "k8s_master" {
depends_on = ["packet_ssh_key.k8s"]
depends_on = [packet_ssh_key.k8s]
count = "${var.number_of_k8s_masters}"
hostname = "${var.cluster_name}-k8s-master-${count.index+1}"
plan = "${var.plan_k8s_masters}"
facilities = ["${var.facility}"]
operating_system = "${var.operating_system}"
billing_cycle = "${var.billing_cycle}"
project_id = "${var.packet_project_id}"
count = var.number_of_k8s_masters
hostname = "${var.cluster_name}-k8s-master-${count.index + 1}"
plan = var.plan_k8s_masters
facilities = [var.facility]
operating_system = var.operating_system
billing_cycle = var.billing_cycle
project_id = var.packet_project_id
tags = ["cluster-${var.cluster_name}", "k8s-cluster", "kube-master", "etcd", "kube-node"]
}
resource "packet_device" "k8s_master_no_etcd" {
depends_on = ["packet_ssh_key.k8s"]
depends_on = [packet_ssh_key.k8s]
count = "${var.number_of_k8s_masters_no_etcd}"
hostname = "${var.cluster_name}-k8s-master-${count.index+1}"
plan = "${var.plan_k8s_masters_no_etcd}"
facilities = ["${var.facility}"]
operating_system = "${var.operating_system}"
billing_cycle = "${var.billing_cycle}"
project_id = "${var.packet_project_id}"
count = var.number_of_k8s_masters_no_etcd
hostname = "${var.cluster_name}-k8s-master-${count.index + 1}"
plan = var.plan_k8s_masters_no_etcd
facilities = [var.facility]
operating_system = var.operating_system
billing_cycle = var.billing_cycle
project_id = var.packet_project_id
tags = ["cluster-${var.cluster_name}", "k8s-cluster", "kube-master"]
}
resource "packet_device" "k8s_etcd" {
depends_on = ["packet_ssh_key.k8s"]
depends_on = [packet_ssh_key.k8s]
count = "${var.number_of_etcd}"
hostname = "${var.cluster_name}-etcd-${count.index+1}"
plan = "${var.plan_etcd}"
facilities = ["${var.facility}"]
operating_system = "${var.operating_system}"
billing_cycle = "${var.billing_cycle}"
project_id = "${var.packet_project_id}"
count = var.number_of_etcd
hostname = "${var.cluster_name}-etcd-${count.index + 1}"
plan = var.plan_etcd
facilities = [var.facility]
operating_system = var.operating_system
billing_cycle = var.billing_cycle
project_id = var.packet_project_id
tags = ["cluster-${var.cluster_name}", "etcd"]
}
resource "packet_device" "k8s_node" {
depends_on = ["packet_ssh_key.k8s"]
depends_on = [packet_ssh_key.k8s]
count = "${var.number_of_k8s_nodes}"
hostname = "${var.cluster_name}-k8s-node-${count.index+1}"
plan = "${var.plan_k8s_nodes}"
facilities = ["${var.facility}"]
operating_system = "${var.operating_system}"
billing_cycle = "${var.billing_cycle}"
project_id = "${var.packet_project_id}"
count = var.number_of_k8s_nodes
hostname = "${var.cluster_name}-k8s-node-${count.index + 1}"
plan = var.plan_k8s_nodes
facilities = [var.facility]
operating_system = var.operating_system
billing_cycle = var.billing_cycle
project_id = var.packet_project_id
tags = ["cluster-${var.cluster_name}", "k8s-cluster", "kube-node"]
}


@@ -1,15 +1,16 @@
output "k8s_masters" {
value = "${packet_device.k8s_master.*.access_public_ipv4}"
value = packet_device.k8s_master.*.access_public_ipv4
}
output "k8s_masters_no_etc" {
value = "${packet_device.k8s_master_no_etcd.*.access_public_ipv4}"
value = packet_device.k8s_master_no_etcd.*.access_public_ipv4
}
output "k8s_etcds" {
value = "${packet_device.k8s_etcd.*.access_public_ipv4}"
value = packet_device.k8s_etcd.*.access_public_ipv4
}
output "k8s_nodes" {
value = "${packet_device.k8s_node.*.access_public_ipv4}"
value = packet_device.k8s_node.*.access_public_ipv4
}


@@ -54,3 +54,4 @@ variable "number_of_etcd" {
variable "number_of_k8s_nodes" {
default = 0
}


@@ -0,0 +1,4 @@
terraform {
required_version = ">= 0.12"
}


@@ -73,7 +73,7 @@ def iterresources(filenames):
# In version 4 the structure changes so we need to iterate
# each instance inside the resource branch.
for resource in state['resources']:
name = resource['module'].split('.')[-1]
name = resource['provider'].split('.')[-1]
for instance in resource['instances']:
key = "{}.{}".format(resource['type'], resource['name'])
if 'index_key' in instance:
@@ -182,6 +182,9 @@ def parse_list(source, prefix, sep='.'):
def parse_bool(string_form):
if type(string_form) is bool:
return string_form
token = string_form.lower()[0]
if token == 't':
@@ -210,7 +213,7 @@ def packet_device(resource, tfvars=None):
'state': raw_attrs['state'],
# ansible
'ansible_ssh_host': raw_attrs['network.0.address'],
'ansible_ssh_user': 'root', # it's always "root" on Packet
'ansible_ssh_user': 'root', # Use root by default in packet
# generic
'ipv4_address': raw_attrs['network.0.address'],
'public_ipv4': raw_attrs['network.0.address'],
@@ -220,6 +223,10 @@ def packet_device(resource, tfvars=None):
'provider': 'packet',
}
if raw_attrs['operating_system'] == 'flatcar_stable':
# For Flatcar set the ssh_user to core
attrs.update({'ansible_ssh_user': 'core'})
# add groups based on attrs
groups.append('packet_operating_system=' + attrs['operating_system'])
groups.append('packet_locked=%s' % attrs['locked'])
@@ -312,9 +319,7 @@ def openstack_host(resource, module_name):
# attrs specific to Mantl
attrs.update({
'consul_dc': _clean_dc(attrs['metadata'].get('dc', module_name)),
'role': attrs['metadata'].get('role', 'none'),
'ansible_python_interpreter': attrs['metadata'].get('python_bin','python')
'role': attrs['metadata'].get('role', 'none')
})
# add groups based on attrs
@@ -324,10 +329,6 @@ def openstack_host(resource, module_name):
for item in list(attrs['metadata'].items()))
groups.append('os_region=' + attrs['region'])
# groups specific to Mantl
groups.append('role=' + attrs['metadata'].get('role', 'none'))
groups.append('dc=' + attrs['consul_dc'])
# groups specific to kubespray
for group in attrs['metadata'].get('kubespray_groups', "").split(","):
groups.append(group)
@@ -339,14 +340,20 @@ def iter_host_ips(hosts, ips):
'''Update hosts that have an entry in the floating IP list'''
for host in hosts:
host_id = host[1]['id']
if host_id in ips:
ip = ips[host_id]
host[1].update({
'access_ip_v4': ip,
'access_ip': ip,
'public_ipv4': ip,
'ansible_ssh_host': ip,
})
if 'use_access_ip' in host[1]['metadata'] and host[1]['metadata']['use_access_ip'] == "0":
host[1].pop('access_ip')
yield host


@@ -13,7 +13,7 @@
/usr/local/share/ca-certificates/vault-ca.crt
{%- elif ansible_os_family == "RedHat" -%}
/etc/pki/ca-trust/source/anchors/vault-ca.crt
{%- elif ansible_os_family in ["CoreOS", "Container Linux by CoreOS"] -%}
{%- elif ansible_os_family in ["Flatcar Container Linux by Kinvolk"] -%}
/etc/ssl/certs/vault-ca.pem
{%- endif %}
@@ -23,9 +23,9 @@
dest: "{{ ca_cert_path }}"
register: vault_ca_cert
- name: bootstrap/ca_trust | update ca-certificates (Debian/Ubuntu/CoreOS)
- name: bootstrap/ca_trust | update ca-certificates (Debian/Ubuntu/Flatcar)
command: update-ca-certificates
when: vault_ca_cert.changed and ansible_os_family in ["Debian", "CoreOS", "Container Linux by CoreOS"]
when: vault_ca_cert.changed and ansible_os_family in ["Debian", "Flatcar Container Linux by Kinvolk"]
- name: bootstrap/ca_trust | update ca-certificates (RedHat)
command: update-ca-trust extract


@@ -6,11 +6,11 @@
- name: bootstrap/start_vault_temp | Start single node Vault with file backend
command: >
docker run -d --cap-add=IPC_LOCK --name {{ vault_temp_container_name }}
-p {{ vault_port }}:{{ vault_port }}
-e 'VAULT_LOCAL_CONFIG={{ vault_temp_config|to_json }}'
-v /etc/vault:/etc/vault
{{ vault_image_repo }}:{{ vault_version }} server
docker run -d --cap-add=IPC_LOCK --name {{ vault_temp_container_name }}
-p {{ vault_port }}:{{ vault_port }}
-e 'VAULT_LOCAL_CONFIG={{ vault_temp_config|to_json }}'
-v /etc/vault:/etc/vault
{{ vault_image_repo }}:{{ vault_version }} server
- name: bootstrap/start_vault_temp | Start again single node Vault with file backend
command: docker start {{ vault_temp_container_name }}


@@ -21,9 +21,9 @@
- name: bootstrap/sync_secrets | Print out warning message if secrets are not available and vault is initialized
pause:
prompt: >
Vault orchestration may not be able to proceed. The Vault cluster is initialzed, but
'root_token' or 'unseal_keys' were not found in {{ vault_secrets_dir }}. These are
needed for many vault orchestration steps.
Vault orchestration may not be able to proceed. The Vault cluster is initialized, but
'root_token' or 'unseal_keys' were not found in {{ vault_secrets_dir }}. These are
needed for many vault orchestration steps.
when: vault_cluster_is_initialized and not vault_secrets_available
- name: bootstrap/sync_secrets | Cat root_token from a vault host


@@ -25,6 +25,6 @@
- name: check_etcd | Fail if etcd is not available and needed
fail:
msg: >
Unable to start Vault cluster! Etcd is not available at
{{ vault_etcd_url.split(',') | first }} however it is needed by Vault as a backend.
Unable to start Vault cluster! Etcd is not available at
{{ vault_etcd_url.split(',') | first }} however it is needed by Vault as a backend.
when: vault_etcd_needed|d() and not vault_etcd_available


@@ -36,6 +36,7 @@
{{ etcd_access_addresses.split(',') | first }}/v3alpha/kv/range
register: vault_etcd_exists
retries: 4
until: vault_etcd_exists.status == 200
delay: "{{ retry_stagger | random + 3 }}"
run_once: true
when: not vault_is_running and vault_etcd_available
@@ -45,7 +46,7 @@
set_fact:
vault_cluster_is_initialized: >-
{{ vault_is_initialized or
hostvars[item]['vault_is_initialized'] or
('value' in vault_etcd_exists.stdout|default('')) }}
hostvars[item]['vault_is_initialized'] or
('value' in vault_etcd_exists.stdout|default('')) }}
with_items: "{{ groups.vault }}"
run_once: true


@@ -6,9 +6,9 @@
ca_cert: "{{ vault_cert_dir }}/ca.pem"
name: "{{ create_role_name }}"
rules: >-
{%- if create_role_policy_rules|d("default") == "default" -%}
{{
{ 'path': {
{%- if create_role_policy_rules|d("default") == "default" -%}
{{
{ 'path': {
create_role_mount_path + '/issue/' + create_role_name: {'policy': 'write'},
create_role_mount_path + '/roles/' + create_role_name: {'policy': 'read'}
}} | to_json + '\n'
@@ -24,13 +24,13 @@
ca_cert: "{{ vault_cert_dir }}/ca.pem"
secret: "{{ create_role_mount_path }}/roles/{{ create_role_name }}"
data: |
{%- if create_role_options|d("default") == "default" -%}
{
allow_any_name: true
}
{%- else -%}
{{ create_role_options | to_json }}
{%- endif -%}
{%- if create_role_options|d("default") == "default" -%}
{
allow_any_name: true
}
{%- else -%}
{{ create_role_options | to_json }}
{%- endif -%}
## Userpass based auth method
