Compare commits

...

301 Commits

Author SHA1 Message Date
Xavi
fc1edbe79d Fix calico v3.4.0 scale and upgrade (#4531)
* reset calico after install

* remove extra line
2019-04-21 08:07:45 -07:00
Maxime Guyot
a4e65c7ceb Upgrade to Ansible >2.7.0 (#4471) 2019-04-09 04:21:07 -07:00
Karen Almog
20ebb49568 Don't create security groups for a bastion host on openstack, if it doesn't exist (#4291) 2019-04-09 04:01:09 -07:00
Andreas Krüger
aa162b0d5d Update kube-router to 0.2.5 (#4469) 2019-04-09 03:37:04 -07:00
Maxime Guyot
b15f3e182d add default routing to canal and disable bird checks (#4468)
Co-Author: Paweł Skrzyński
2019-04-09 02:45:07 -07:00
Andreas Krüger
4d39c1856e Fix jinja filters (#4470) 2019-04-09 02:19:06 -07:00
Maxime Guyot
b2fa84af61 Vagrant fix password prompt (#4457) 2019-04-09 00:59:05 -07:00
Maxime Guyot
913fed0089 kubeadm init: add 'until' to make 'retries' effective (#4464)
an 'until' clause is required or 'retries' is ignored

(see note @ https://docs.ansible.com/ansible/latest/user_guide/playbooks_loops.html#do-until-loops)
2019-04-09 00:21:04 -07:00
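A minimal sketch of the pattern from #4464, with an illustrative task (not the exact one in the playbook):

```yaml
- name: kubeadm | Initialize cluster
  command: kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml
  register: kubeadm_init
  retries: 3
  delay: 5
  until: kubeadm_init.rc == 0  # without 'until', Ansible silently ignores 'retries'
```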
Maxime Guyot
80ea18bd28 Disable download_once in Vagrant to workaround rsync error (#4448) 2019-04-09 00:19:05 -07:00
Markos Chandras
12c6b5c3eb openSUSE: Use Leap 15.0 instead of 42.3 (#4442)
* Vagrantfile: Bump openSUSE to Leap 15.0

* roles: container-engine: Add 'containerd' package for openSUSE

The 'containerd' package contains the docker-containerd and
docker-containerd-shim binaries. We also need to ensure that the latest
version is installed since an older version may already be present (eg GCE
images)

* Remove docker log-opts for opensuse

* roles: bootstrap-os: Use lowercase 'o' for openSUSE

OpenSUSE is not a valid family name. The correct one is openSUSE

* roles: bootstrap-os: Update zypper cache before first installation

The zypper cache may be outdated so ensure that it's fully updated
before we try and install the bootstrap packages.
2019-04-09 00:17:05 -07:00
Maxime Guyot
35c0010876 Rename inventory/sample/hosts.ini to fix vagrant up (#4447) 2019-04-09 00:15:06 -07:00
rptaylor
f52584a715 robust handling of API server SANs (#4435)
* robust handling of API server SANs

* use apiserver_loadbalancer_domain_name if it is defined, according to PR 3977
2019-04-08 08:10:35 -07:00
Erwan Miran
09bbdadcee remove nodelocaldns iface on reset (#4460) 2019-04-08 02:26:25 -07:00
Xinghong Fang
d711a0c83f [nodelocaldns] expand tolerations on the daemonset (#4451) 2019-04-08 02:24:26 -07:00
Andreas Holmsten
01cf11b961 Run terraform fmt and add step to CI (#4405)
* Run terraform fmt

* Add terraform fmt to .terraform-validate CI step

* Add tf-validate-aws CI step

* Revert "Add tf-validate-aws CI step"

This reverts commit e007225fac.
2019-04-08 02:22:24 -07:00
Eric Ross
29825e6873 Missing ruamel.yaml from requirements.txt (#4446) 2019-04-08 02:20:27 -07:00
Andreas Krüger
d18ad63e49 Update nginx to 1.15. Update manifest and performance optimize (#4458) 2019-04-08 02:02:29 -07:00
Andreas Holmsten
3da392d1cf Add OWNERS to contrib/terraform (#4441) 2019-04-08 00:36:24 -07:00
Maxime Guyot
8947614d97 Upgrade to etcd v3.2.26 (#4444) 2019-04-08 00:34:25 -07:00
Victor Morales
7e4f4a96fc Replace iteritems() to items() in Jinja2 templates (#4437)
The iteritems() dictionary's method has been removed in Python3. Using
this method in Jinja2 templates limits the execution to Python2 which
will be deprecated in 2020[1]. This change replaces that method for
the items() method as it's suggested in the official website[2].

[1] https://pythonclock.org/
[2] https://docs.ansible.com/ansible/latest/user_guide/playbooks_python_version.html#dict-iteritems
2019-04-08 00:32:26 -07:00
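For illustration, the Python 2-only idiom and its portable replacement (`my_dict` is a hypothetical variable):

```
{# Python 2 only: #}
{% for key, value in my_dict.iteritems() %}{{ key }}: {{ value }}{% endfor %}

{# Python 2 and 3: #}
{% for key, value in my_dict.items() %}{{ key }}: {{ value }}{% endfor %}
```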
MarkusTeufelberger
301a371efe Update pypy3 on CoreOS to 7.0.0 (#4456) 2019-04-08 00:28:24 -07:00
Maxime Guyot
1a6df84c7a Upgrade to Helm 2.13.1 (#4445) 2019-04-07 07:04:25 -07:00
Andreas Krüger
2d38c1e20c Update premoderator to fix Github API throttle (#4424)
* Update premoderator to fix Github API throttle

* Update premoderator script

Add exit codes and document the exit code.

* Fix indentation
2019-04-06 12:12:26 -07:00
Maxime Guyot
9155339cf0 Fix pep8 warnings (#4368) 2019-04-05 12:51:22 -07:00
rptaylor
d8a023a92c Tell git to ignore .terraform directory (#4428)
The .terraform directory is populated when modules are downloaded:
https://www.terraform.io/docs/commands/get.html
"The modules are downloaded into a local .terraform folder. This folder should not be committed to version control."
2019-04-05 01:27:18 -07:00
Maxime Guyot
8ad74404c9 Remove bash-completion (#4431) 2019-04-05 01:23:22 -07:00
Maxime Guyot
1ce2f04f47 allow Suse OS family (#4430) 2019-04-04 03:02:51 -07:00
Xavi
20b12751af add Cinder allowVolumeExpansion option (#4415) 2019-04-04 02:36:50 -07:00
Maxime Guyot
e485fab7eb Add CI for contrib/terraform/ (#4133) 2019-04-04 01:42:52 -07:00
Maxime Guyot
adca353fe9 Use docker.io for calico (#4253) 2019-04-04 01:20:49 -07:00
Andreas Krüger
7a72e567d5 Update CoreDNS to 1.4.0 (#4422)
* Update CoreDNS to 1.4.0

* Update readme to reflect CoreDNS update
2019-04-04 00:40:50 -07:00
Andreas Krüger
3c050be0b0 Update nodelocaldns cache settings (#4423) 2019-04-04 00:38:51 -07:00
Andreas Krüger
41e684eb5a Update DNS Autoscaler to 1.4.0 (#4425)
* Update DNS Autoscaler

* Update downloads too

* Fix yamllint

* Fix yamllint
2019-04-04 00:36:51 -07:00
Erwan Miran
2067417ad4 jmespath is required when re-running cluster.yml (#4426) 2019-04-04 00:34:49 -07:00
Sergey
55890e1b82 keep compatibility as it was before (#4268) 2019-04-03 01:39:42 -07:00
Sergey
1e524c68d5 remove our config if docker start failed (#4260) 2019-04-03 01:37:44 -07:00
Sergey
740d8b0a26 enable kubelet client certificate rotation (#4081)
* enable kubelet client certificate rotation

* change to variable kubelet_rotate_certificates
2019-04-03 01:35:44 -07:00
Gautam Divgi
a8dd69cf17 Fixed cleanup-docker-orphans.sh to use docker-containerd-shim and containerd-shim (#4418) 2019-04-02 09:11:21 -07:00
Matthew Mosesohn
4fe2aa6bf7 Use install_cni init container for cni copy for calico/canal (#4416) 2019-04-02 03:32:36 -07:00
Chad Swenson
5d5c9cab19 Speed up old docker package removal (#4408)
Both the `yum` and `apt` modules support a list as input; this allows us to avoid the slower `with_items` approach, which can take a long time with a large number of cluster nodes.
2019-04-01 15:08:35 -07:00
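A sketch of the two forms, with illustrative package names:

```yaml
# Slow: one package-manager transaction per item
- apt:
    name: "{{ item }}"
    state: absent
  with_items:
    - docker-engine
    - docker.io

# Fast: a single transaction for the whole list
- apt:
    name:
      - docker-engine
      - docker.io
    state: absent
```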
Matthew Mosesohn
5f12b7aedf Remove kubedns and dnsmasq. Move dns_late phase after apps (#4406)
Both kubedns and dnsmasq modes are long not maintained.
We should run dns_late steps at the end because sshd
makes DNS lookups during Ansible run and has 2s timeouts
for each failed lookup trying to connect to coredns before
it is ready.
2019-04-01 12:32:34 -07:00
Bort Verwilst
d71590bbd0 add 1.14.0 checksum, remove 1.11.* checksums (#4401) 2019-04-01 07:16:33 -07:00
MarkusTeufelberger
9ffc65f8f3 Yamllint fixes (#4410)
* Lint everything in the repository with yamllint

* yamllint fixes: syntax fixes only

* yamllint fixes: move comments to play names

* yamllint fixes: indent comments in .gitlab-ci.yml file
2019-04-01 02:38:33 -07:00
ml
483f1d2ca0 Calico felix - Fix jinja2 boolean condition (#4348)
* Fix jinja2 boolean condition

* Convert all felix variable to booleans instead.
2019-03-29 16:07:09 -07:00
tikitavi
1babba753d adapt inventory script to python 2.7 version (#4407) 2019-03-29 06:08:13 -07:00
johnstudarus
ed18a10571 Corrected cloud name (#4316)
The correct name is Packet, not Packet Host.
2019-03-29 00:28:13 -07:00
Dmitry Chepurovskiy
0440e45d65 Fix supplementary_addresses rendering error (#4403) 2019-03-29 00:26:13 -07:00
Stefan Prietl
2fb27c8521 Use static files in KubeDNS templating task (#4379)
This commit adapts the "Lay Down KubeDNS Template" task to use the static
files moved by pull request [1]

[1] https://github.com/kubernetes-sigs/kubespray/pull/4341
2019-03-28 06:26:43 -07:00
Qasim Sarfraz
f17f4ff963 Fix bootstrap-os role, failing to create remote_tmp (#4384)
* Fix bootstrap-os role, failing to create remote_tmp

* use ansible_remote_tmp hostvar
2019-03-28 06:24:43 -07:00
Sergey
e9c34fe038 Default values for the variables dns_servers and dns_domain are set in two files: (#3999)
values from inventory in roles/kubespray-defaults/defaults/main.yml
hardcoded values in roles/container-engine/defaults/main.yml

dns_servers is set empty in roles/container-engine/defaults/main.yml and skydns_server is not set in the docker_dns_servers variables;
also set a default value for manual_dns_server

other variables in roles/container-engine/defaults do not need to be set
2019-03-28 06:22:44 -07:00
Dmitry Chepurovskiy
669ab10c17 Added livenessProbe for local nginx apiserver proxy liveness probe (#4222)
* Added configurable local apiserver proxy liveness probe

* Enable API LB healthcheck by default

* Fix template spacing and moved healthz location to nginx http section

* Fix healthcheck listen address to allow kubelet request healthcheck
2019-03-28 06:20:46 -07:00
Qasim Sarfraz
0a3cf1a087 Fix CA cert environment variable for etcd v3 (#4381) 2019-03-28 00:18:43 -07:00
Maxime Guyot
3511b55cf5 Increase CPU flavor for CI (#4389) 2019-03-27 16:26:48 -07:00
Chad Swenson
1f01b6546c Merge pull request #4396 from verwilst/feature/k8s-1.13.5
Upgrade to k8s 1.13.5
2019-03-27 13:47:39 -05:00
Bart Verwilst
0efa3e6392 Upgrade to k8s 1.13.5 2019-03-27 11:16:21 +01:00
Matthew Mosesohn
6d7f3c4405 Reduce jinja2 filters in coredns templates (#4390) 2019-03-26 11:09:17 -07:00
Michael Vorburger ⛑️
85e0fb32e6 clarify that kubespray now supports kubeadm (fixes #4089) (#4366) 2019-03-26 03:51:19 -07:00
Etienne
d0ae316934 Use proxy_env with kubeadm phase commands (#4325) 2019-03-26 03:03:19 -07:00
Dmitry Chepurovskiy
f6d280452f Added support of bastion host for reset.yaml (#4359)
* Added support of bastion host for reset.yaml

* Empty commit to triger CI
2019-03-26 02:59:16 -07:00
Maxime Guyot
7fb5fbac37 Use wide for netchecker debug output (#4383) 2019-03-22 19:41:06 -07:00
Matthew Mosesohn
b7fd462944 Fix support for ansible 2.7.9 (#4375) 2019-03-20 11:29:42 -07:00
Matthew Mosesohn
ec08303f82 Revert "Fix #4237: update kube cert path (#4354)" (#4369)
This reverts commit ea7a6f1cf1.

This change modified the certs dir for Kubernetes, but did not move the directories for existing clusters.
2019-03-20 05:56:57 -07:00
Maxime Guyot
e640233947 Use sample inventory file in doc (#4052) 2019-03-18 01:43:15 -07:00
Dmitry Chepurovskiy
ea7a6f1cf1 Fix #4237: update kube cert path (#4354) 2019-03-17 23:55:11 -07:00
Peter Metz
38009a215a fix(contrib/metallb): adds missing become: true in role (#4356)
On CoreOS, without this, it fails to kubectl apply MetalLB due to lack of privileges.
2019-03-17 18:15:09 -07:00
Matthew Mosesohn
150a969cf4 Forcefully delete pods when necessary (#4328)
Pods on down/unresponsive nodes can't be deleted without
--force --grace-period=0.

Fixes #4314
2019-03-14 07:45:46 -07:00
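The equivalent manual command, for reference:

```
kubectl delete pod <pod-name> --namespace <ns> --force --grace-period=0
```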
Manuel Cintron
3c4cbf133e Adding ability to override dashboard replica count (#4344) 2019-03-13 13:58:25 -07:00
Matthew Mosesohn
fd2c47b56a Move most coredns templates to static files (#4341)
* Move most coredns templates to static files

This should speed up the task slightly

* yaml lint fixes
2019-03-12 21:17:31 -07:00
tikitavi
2560c4dda3 fixing dump of ordered dictionaries in inventory script (#4343) 2019-03-13 02:57:34 +03:00
tikitavi
254a0ab69d fix inventory script (#4342)
hosts are an ordered dictionary
remove ansible_user from the inventory file
2019-03-13 01:46:46 +03:00
tikitavi
7b3e59ed0a fix inventory script (#4339)
- fix order of entries when the new yaml file is created
- fix group in case there are no hosts in it
2019-03-12 11:02:44 -07:00
tikitavi
44de04be89 update inventory builder for public and private IP per node (#4323) 2019-03-07 18:30:12 +03:00
Bort Verwilst
33024731e4 Upgrade to k8s 1.13.4 (#4319) 2019-03-06 23:16:56 -08:00
chadswilson
d469282f1c add blockSize to IPPool spec for Calico >= v3.3.0 (#4224)
* add blockSize to IPPool spec for Calico >= v3.3.0

* fix "cidr" spec in Calico IPPool resource for my PR
2019-03-06 12:42:48 -08:00
Matthew Mosesohn
acbf3db233 Remove hard dependence on facts for all nodes (#4304)
* Remove hard dependence on facts for all nodes

* Update main.yaml

* Update main.yaml
2019-03-05 03:04:39 -08:00
Matthew Mosesohn
adf6a7121f Reenable set_facts task for dns_late (#4312) 2019-03-01 05:39:30 -08:00
tikitavi
b73f009c07 rewrite inventory script to create inventory file in YAML format (#4303)
* rewrite inventory script to create inventory file in YAML format

* minor fixes to inventory script

* change requirements for the inventory script
2019-02-28 17:28:27 +03:00
Bort Verwilst
bbfd2dc2bd Add 1.12.6, sort arm64 descending (#4308)
* Add 1.12.6, sort arm64 descending

* remove 1.10.x checksums (EOL anyways)
2019-02-28 05:55:19 -08:00
Matthew Mosesohn
4fe61968cf Set default value for local_path_provisioner_enabled in role (#4309) 2019-02-28 05:36:08 -08:00
Anupam Basak
9e8e069b23 remove kube bridge on reset (#4250) 2019-02-26 00:32:00 -08:00
Peter Metz
26ca58419f feat(external-provisioner): adds support for local-path-provisioner (#4232)
* feat(external-provisioner/local-path-provisioner): adds support for local path provisioner

Helpful for local development but also in production workloads (once the
permission model is worked out) where you have redundancy built into the
software that uses the PVCs (e.g. database cluster with synchronous
replication)

* feat(local-path-provisioner): adds debug flag, image tag group var

* fix(local-path-provisioner): moves image repo/tag to download role

* test(gce_centos7-flannel): enables local-path-provisioner in test case

* fix(addons): add image repo/tag to commented default values

* fix(local-path-provisioner): typo in jinja template for local path provisioner

* style(local-path-provisioner): debug flag condition re-formatted

* fix(local-path-provisioner): adds missing default value for debug flag

* fix(local-path-provisioner): syntax fix for debug if condition end

* fix(local-path-provisioner): jinja template syntax: if condition white space
2019-02-25 22:45:30 -08:00
etharendil
063faaae1c recursive option for kube ansible module (#4273)
kube ansible module can be used with recursive: true
which will process the directory used in -f, --filename recursively
2019-02-25 22:17:23 -08:00
Maxime Guyot
131c3d4d5b Add link to Kubespray.io (#4240) 2019-02-25 21:20:14 -08:00
Christian Berendt
44ee4b507c terraform: use openstackclient instead of novaclient (#4280)
The openstackclient is the preferred CLI for OpenStack
environments and should be used instead of novaclient.
2019-02-25 20:13:16 -08:00
Maxime Guyot
c36a0226d0 Add more links to the docs (#4204) 2019-02-25 20:11:23 -08:00
hikoz
67832aada9 changed_when:false (#4189) 2019-02-25 20:09:30 -08:00
johnstudarus
74727b085b Packet docs (#4160)
* Create packet.md

* Update README.md

* Update README.md

* Update packet.md

download the latest version

* Update packet.md
2019-02-25 20:07:38 -08:00
Maxime Guyot
bb495006c8 Update MetalLB to v0.7.3 (#4194) 2019-02-25 20:05:45 -08:00
hikoz
3d25b4dfc1 30MiB for gpu-device-plugin (#4227)
* 30MiB for gpu-device-plugin

* use vars for easier configuration
2019-02-25 20:03:53 -08:00
Wong Hoi Sing Edison
1c12c19150 weave: Upgrade to 2.5.1 (#4248)
Upstream Changes:

  - weave 2.5.1 (https://github.com/weaveworks/weave/releases/tag/v2.5.1)

Our Changes:

  - Sync templates with upstream changes
2019-02-25 20:02:00 -08:00
Sebastian Poxhofer
58dc641001 added hardware requirements in README.md (#4233)
* added hardware requirements in README.md

* added hardware requirements in README.md
2019-02-25 20:00:08 -08:00
Ryler Hockenbury
88249308a0 Add labels to vsphere cloud config (#4275) 2019-02-25 19:58:15 -08:00
Gabor Lekeny
b4aaa7b908 Speed up tasks (#4278)
* fact gathering should run only once per node
* eliminate ansible version check, it is at the beginning of each
  playbook
2019-02-25 19:56:23 -08:00
Christian Berendt
c386172be7 terraform: correct the spelling of Betacloud (#4282) 2019-02-25 19:38:32 -08:00
Andrey Zhelnin
c66e9a6d62 Disable become for localhost (#4287) 2019-02-25 19:36:44 -08:00
Vasilis Remmas
81801ce23b Add master toleration flag in dashboard deployment (#4290) 2019-02-25 19:34:47 -08:00
Etienne
7dfa39483f Make container storage repository configurable (#4284) 2019-02-25 19:29:32 -08:00
Matthew Mosesohn
b07641c3f3 Move kube_proxy_remove out of set_facts and set default (#4180) 2019-02-25 00:08:06 -08:00
Matthew Mosesohn
4638acfe81 Retry applying podsecurity policies (#4279) 2019-02-24 22:50:55 -08:00
Kaoet
aadef80404 Upgrade to latest version of ubuntu-nvidia-driver-installer. (#4296)
The latest version of ubuntu-nvidia-driver-installer contains a fix for
https://github.com/GoogleCloudPlatform/container-engine-accelerators/issues/90
which causes the installer pod to crash when the driver is already loaded.
2019-02-24 22:22:48 -08:00
Frank Ritchie
9805fb7a34 Add flexvolume plugin dir to kubeadm kubelet (#4168)
This was already approved in #4106 but there are CI issues
with that PR due to references to kubernetes incubator.

After upgrading to Kubespray 2.8.1 with Kubeadm enabled, Rook
Ceph volume provisioning failed because the flexvolume plugin dir was
not correct. Adding the var fixed the issue.
2019-02-20 15:02:02 -08:00
Christian Berendt
7d2ba49969 Add CNCF CLA to the contributing document (#4281) 2019-02-20 06:47:17 -08:00
Peter Metz
f81bafa07b feat(vagrant/virtualbox): adds parameter to resize vbox disks (#4231)
Useful if the default 20GB is not enough in cases where you are using
the local path provisioner of rancher for example
2019-02-20 06:37:18 -08:00
Peter Metz
94892ab3a4 fix(vagrant): sets video RAM to 8 MB, avoids large default (256) (#4230) 2019-02-20 06:35:21 -08:00
Maxime Guyot
323d788f48 Add support for --enable-skip-login in Dashboard (#4265) 2019-02-19 23:24:29 -08:00
Abdulaziz AlMalki
eafab9636f fix wrong indent of oidc-username-prefix and oidc-groups-prefix in kubeadm config template (#4263) 2019-02-19 23:22:32 -08:00
Seungkyu Ahn
107bfb259a This PR fixes the bug where workers can't join the cluster (#4276)
because /etc/kubernetes/manifests is not empty.
2019-02-19 22:13:59 -08:00
Rong Zhang
d4a36aa55b Merge pull request #4027 from riverzhang/kube-proxy
Add update server field in kube-proxy kubeconfig
2019-02-20 13:41:06 +08:00
Manuel Cintron
07b2894080 Adding ability to maintain existing Encryption Secrets at Rest. (#4255)
* Adding ability to maintain existing Encryption Secrets at Rest.

If secrets_encryption.yaml is present it will not be overwritten with a new kube_encrypt_token.

This should allow for it to be set ahead of a playbook running, or maintained if cluster.yml is run on the same cluster and the ansible host does not have access to the secrets.

* Setting existing kube_encrypt_token across all master nodes in case it was missing in one or more nodes.
2019-02-19 07:31:45 -08:00
Florent Monbillard
802ac377b8 Fix typo in task description (#4243) 2019-02-19 06:06:29 -08:00
Roy Lenferink
738ab4239a Updated OWNERS file pointing to docs (#4184) 2019-02-18 05:49:36 -08:00
Ted Wexler
b5a895d1ec Run 'terraform fmt' in contrib/terraform/openstack (#4242) 2019-02-17 21:04:41 -08:00
Kaoet
23685b4537 Add image tag in "pause" container of nvidia driver installer. (#4247) 2019-02-17 21:02:30 -08:00
Chad Swenson
e552be76ce Docker apt repo name fix (again) (#4246)
For some reason 18.09 packages are now prefixed with `5:` in the download.docker.com apt repos
Followup to #4236
2019-02-14 10:19:19 -08:00
Ryler Hockenbury
eea22dfd40 Fix typo with docker-ce package versions (#4236) 2019-02-14 07:32:12 -08:00
Maxime Guyot
0a722942cc Use git tag when checking out for test upgrade (#4209) 2019-02-14 05:09:56 -08:00
Kaoet
192f4c4e96 Allow customizing container image path used in NVIDIA GPU addon. (#4229) 2019-02-14 03:51:38 -08:00
hikoz
e03588f431 use swapon -s (#4216) 2019-02-14 02:35:17 -08:00
Chad Swenson
8872b2e0c6 Fix calico when kube_override_hostname is set (#4235)
This fixes an issue where the `nodename` in calico's cni config json can fall out of sync with the k8s node name used by the calico pod if `kube_override_hostname` is set
2019-02-13 16:02:48 -08:00
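For context, a hedged sketch of the relevant field in calico's CNI conflist template (the exact expression in the role may differ):

```json
{
  "type": "calico",
  "nodename": "{{ kube_override_hostname | default(inventory_hostname) }}"
}
```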
Florent Monbillard
061f5a313b Explicitely set etcd endpoint in kubeadm-images.yaml (#4063)
Currently, the task `container_download | download images for kubeadm config images` fetches etcd image even though it's not required (etcd is bootstrapped by kubespray, not kubeadm).

`kubeadm-images.yaml` is only a subset of `kubeadm-config.yaml`, therefore `kubeadm config images pull` will try to fetch this whole list (including etcd)

```
# kubeadm config images list --config /etc/kubernetes/kubeadm-images.yaml
k8s.gcr.io/kube-apiserver:v1.13.2
k8s.gcr.io/kube-controller-manager:v1.13.2
k8s.gcr.io/kube-scheduler:v1.13.2
k8s.gcr.io/kube-proxy:v1.13.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.6
```

When using the `kubeadm-config.yaml` though, it doesn't list etcd image:

```
# kubeadm config images list --config /etc/kubernetes/kubeadm-config.yaml
k8s.gcr.io/kube-apiserver:v1.13.2
k8s.gcr.io/kube-controller-manager:v1.13.2
k8s.gcr.io/kube-scheduler:v1.13.2
k8s.gcr.io/kube-proxy:v1.13.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/coredns:1.2.6
```

This change just adds the etcd endpoints to `kubeadm-images.yaml` to give kubeadm a hint that it doesn't need the etcd image for its bootstrapping, as etcd is "external".
I confess it is an ugly hack; a better way would be to use a single `kubeadm-config.yaml` for both tasks, but they are triggered by different roles (`kubeadm-images.yaml` is used by download, `kubeadm-config.yaml` by kubernetes/master) at different steps, and I didn't want to refactor too many things and risk breakage.

This is especially useful for offline installation where a whitelist of container images is mirrored on a local private container registry. `k8s.gcr.io/etcd` and `quay.io/coreos/etcd` are two different repositories hosting the same images but using *different tags*!
* coreos/etcd:v3.2.24   
* k8s.gcr.io/etcd:3.2.24 (note the missing 'v' in the tag name)
2019-02-13 12:44:12 -08:00
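A sketch of the hint: marking etcd as external in the kubeadm config makes `kubeadm config images list/pull` skip the etcd image (endpoint illustrative):

```yaml
etcd:
  external:
    endpoints:
      - https://10.0.0.1:2379
```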
Chad Swenson
2e2ed3bd35 [SECURITY] Docker patches for CVE-2019-5736 (#4223)
This updates docker 18.06 and 18.09 with the two patches released
yesterday to address the new runc exploit. Details here:
https://kubernetes.io/blog/2019/02/11/runc-and-cve-2019-5736/
2019-02-13 01:50:53 -08:00
Manuel Cintron
7697baf0da Omit does not work in the context of yum_repository proxy. The ansible documentation specifies to use _none_ to disable the global proxy setting. (#4225) 2019-02-12 16:46:32 -08:00
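A sketch of the difference (repository details illustrative):

```yaml
- yum_repository:
    name: docker-ce
    description: Docker CE repo
    baseurl: https://download.docker.com/linux/centos/7/$basearch/stable
    proxy: _none_  # 'omit' would leave the global yum proxy in effect
```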
Sorin Sbarnea
22a5a00c49 Improve kubeadm join tasks (#4206)
Fix issue where `kubeadm join` could wait forever for joining.

Fix issue where the output of `kubeadm join` was not reaching the user,
making it impossible to find the cause of the failure.

The new behaviour is to first attempt to join without bypassing the
verification checks and to display them if needed.

If this fails, it still attempts to join by ignoring the checks in
order to match the previous behavior.

A timeout of 60 seconds is allocated for joining.

Related-bug: #3973
2019-02-12 13:42:56 -08:00
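A shell sketch of the described flow, not the exact Ansible task:

```
# First attempt with preflight checks enabled, bounded by a timeout...
timeout 60 kubeadm join --config /etc/kubernetes/kubeadm-client.conf \
  || timeout 60 kubeadm join --config /etc/kubernetes/kubeadm-client.conf \
       --ignore-preflight-errors=all  # ...then retry, ignoring the checks
```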
Robert Neumann
8b289ad9e1 Fix the file path for all.yml and k8s-cluster.yml (#4210) 2019-02-11 14:55:41 -08:00
Maxime Guyot
6a33411d65 Add an option for helm init --wait (#4202) 2019-02-11 14:32:26 -08:00
hikoz
9a91ef8628 change permission after unarchive (#4191) 2019-02-11 14:21:38 -08:00
Sergey
fbce6349c4 check kube_pods_subnet and kube_service_addresses to valid ip network range, not single ip address (#4188) 2019-02-11 14:12:06 -08:00
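A sketch of such a check using Ansible's `ipaddr` filter, which returns false for a plain host address (task name illustrative):

```yaml
- name: Verify kube_pods_subnet is a network range
  assert:
    that:
      - kube_pods_subnet | ipaddr('net')
```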
Maxime Guyot
954676b3d8 Update the admin cert paths (#4135) 2019-02-11 14:10:10 -08:00
MarkusTeufelberger
e2ad6aad5a bootstrap: rework role (#4045)
* bootstrap: rework role

* support being called from a non-root user
* run some commands in check mode
* unify spelling/task names

* bootstrap: fix wording of comments for check_mode: false

* bootstrap: remove setup-pipelining task
2019-02-11 14:04:27 -08:00
Chad Swenson
038a2eb862 Merge pull request #3949 from trogeat/patch-fix-missing-ca-cert-apiserver
kubespray: fix missing ca-certificate path in apiserver
2019-02-11 15:40:04 -06:00
Manuel Cintron
5d146e52fe If a CentOS or RHEL node is not configured with the extras repo, installation of required packages (python-httplib2 in particular) will fail later on. (#4213) 2019-02-11 13:27:02 -08:00
Jeff Bornemann
c41c1e771f OCI Cloud Provider Update (#4186)
* OCI subnet AD 2 is not required for CCM >= 0.7.0

Reorganize OCI provider to generate configuration, rather than pull

Add pull secret option to OCI cloud provider

* Updated oci example to document new parameters
2019-02-11 12:08:53 -08:00
tikitavi
befa8a6cbd fix error with delete host in inventory.py script (#4203)
* fix error with delete host in inventory.py script

* minor fix
2019-02-11 15:57:51 +03:00
Karl
85b77f7c22 Remove Ubuntu Bionic specific vars file - breaks multi-arch (#3974) 2019-02-11 00:04:27 -08:00
Maxime Guyot
6b3f7306a4 Add support for arm64 images for hyperkube, kubeadm and cni_binary (#4176) 2019-02-09 02:08:57 -08:00
Earl C. Ruby III
ba5c0fa364 Tell Git to ignore the inventory/mycluster directory (#3900)
The inventory/mycluster directory gets created when someone follows
the instructions in README.md, but it should never be committed to
the kubespray repo. Ignore it.
2019-02-07 23:30:28 -08:00
Maxime Guyot
2a92fd2f14 Update docs/roadmap.md (#4198) 2019-02-07 07:43:35 -08:00
Maxime Guyot
7e974f1401 Fix MetaLB library (#4195) 2019-02-07 17:31:53 +03:00
Matthew Mosesohn
8373fa393a Update CNAME 2019-02-07 16:30:25 +03:00
Matthew Mosesohn
613841381d Create CNAME 2019-02-07 16:28:44 +03:00
Maxime Guyot
9e76aafc1c Publish docs with docsify (#4193)
* Add docsify website

* Add website CI
2019-02-07 04:52:08 -08:00
Matthew Mosesohn
9b5096ab10 Set theme jekyll-theme-slate 2019-02-07 15:47:50 +03:00
joakimr-axis
01d70f2c7c Update flannel version to v0.11.0 (#4190)
Change-Id: I27d670803bea82a68d5eb0e49d4677f4afdce55f
2019-02-07 04:33:01 -08:00
Chad Swenson
6878c2af4e Fix kube_hostname_override inconsistencies (#4185) 2019-02-06 22:20:11 -08:00
Bort Verwilst
db2b76a22a update k8s to 1.13.3 (#4192)
* update k8s to 1.13.3

* update README as well
2019-02-06 10:48:05 -08:00
tikitavi
263c8731f2 add to inventory.py script ability to indicate ip ranges (#4182)
* add to inventory.py script ability to indicate ip ranges

* add test for range2ip function for inventory.py script

some fixes

* add negative test for range2ip function for inventory.py script
2019-02-06 18:22:13 +03:00
peerapach
69e5deeccc Fix newline issue of priorityClassName when enable tolerations (#4164) 2019-02-04 12:59:01 -08:00
Matthew Mosesohn
2e1e27219e Refactor collect-info.yaml playbook (#4157)
Run only commands that apply to the current deployed cluster (only get
calico info and skip weave/flannel when deploying calico, for example).

Add helm release info if helm is deployed
2019-02-04 12:46:48 -08:00
Danny Kulchinsky
226d5ed7de [Calico] Define FELIX_KUBENODEPORTRANGES when kube-proxy is in ipvs mode (#4173)
* Define FELIX_KUBENODEPORTRANGES when kube-proxy is in ipvs mode

* ensure kube_apiserver_node_port_range is defined
2019-02-04 12:42:40 -08:00
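Roughly, the added env var on the calico-node container (the range shown is the Kubernetes default):

```yaml
- name: FELIX_KUBENODEPORTRANGES
  value: "30000:32767"
```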
Earl C. Ruby III
52e0aa7a80 Install the latest filesystem creation packages (#3904)
This PR ensures that the e2fsprogs and xfsprogs packages are
installed on all Kubernetes nodes and that the packages are
the latest versions. It also ensures that the nodes can
create XFS filesystems when necessary, since not all distros
install xfsprogs by default.

e2fsprogs - ext2/ext3/ext4 file system utilities
xfsprogs - Utilities for managing the XFS filesystem
2019-02-04 12:23:33 -08:00
peerapach
bd9474bafd fix kubeadm-setup when enable access_ip (#4145) 2019-02-01 20:10:34 -08:00
Sorin Sbarnea
316b73178d Add timeout to Get current version of calico cluster version (#4149)
Avoid waiting forever for this task that should be very quick.

Fixes: #4148
2019-02-01 20:09:04 -08:00
Samina Fu
58c71d8ea6 Add Setting Multi on group_vars (#4054) 2019-01-31 23:48:13 -08:00
Peter Metz
e245e935aa fix(vagrant): sets ansible.inventory_path to file not dir (#4153)
This fixes the issue where if there was a hosts.ini file present in the
inventory directory, then Vagrant would set an incorrect path as
ansible.inventory_path
2019-01-31 23:46:52 -08:00
Manuel Cintron
143e2272ff Fixing an issue where installing docker-ce-18.09 on RHEL 7 nodes (or potentially CentOS 7) without an enabled extras repo fails because container-selinux >= 2.9 is required. Checking for container-selinux upfront should obviate the need for adding an extras repo if the node is able to find it from another source. (#4161) 2019-01-31 16:19:48 -08:00
Vasilis Remmas
cd7924f8c9 Add oidc prefixes to kubeadm templates (#4159) 2019-01-31 15:31:43 -08:00
Erwan Miran
7f93a5a0f5 Fix deprecation warnings (#4130)
* use the non-deprecated ansible_play_hosts variable

* Using tests as filters is deprecated

* Fix deprecation warning about pkg list
2019-01-31 14:57:22 -08:00
Danny Kulchinsky
1abd3cf3d7 Update calico version in README (#4143) 2019-01-31 14:52:43 -08:00
Petr Ruzicka
91e2d61cf2 Adding link to ../../contrib in README (#4097) 2019-01-31 14:44:06 -08:00
Erwan Miran
f6d60a7e89 Calico: Ability to define the default IPPool CIDR (instead of kube_pods_subnet) (#4131)
* Calico: Ability to define the default IPPool CIDR (instead of kube_pods_subnet)

* Documentation for calico_pool_cidr (and calico_advertise_cluster_ips which has been forgotten...)
2019-01-31 13:39:13 -08:00
Maxime Guyot
40f1c51ec3 Add support for Packet with Terraform (#4043)
* Add support for Packet with Terraform

Co-Author: johnstudarus <john@jhlconsulting.com>

* removed advanced features to streamline

* clarifying usage

* Update README.md

provide a better test to validate things are working OK

* Update README.md

clarifying what to set

* minor wordsmithing

* Fix admin cert path

* clarifying how to configure keys

* enabling kubeconfig_localhost

pull over the configuration file via playbooks rather than the key files individually

* Create output.tf

* Add support for node specific plans
2019-01-31 07:24:36 -08:00
Thomas Nys
68fd7e39da Set cluster DNS correctly in case of nodelocal dns cache (#3879)
* Set cluster DNS correctly in case of nodelocal dns cache

* Pass in cluster_ip based on dns mode

* Disable nodelocaldns by default

* Fix syntax error

* Fix syntax issue

* Add nodelocadns ip to vars of node installation

* Change location of nodelocaldns_ip

* Try to remove newlines from jinja template

* Add debug for config file

* Move parameter logic outside of template

* Adapt templates after feedback

* Remove debugging
2019-01-28 23:39:27 -08:00
wangxf
a096761306 [PR-Calico] Support calico 3.4.0 (#4102)
* Support calico 3.4.0

Signed-off-by: wangxf1987 <xiaofeix.wang@gmail.com>

* Remove symlink + cni conflist template when 3.3.0+, handle Canal, addition of install-cni: sidecar (3.3.0) or initContainer (3.4.0), KUBECONFIG_FILEPATH, calico_cert_dir, advertise cluster ips

* scheduler.alpha.kubernetes.io/critical-pod deprecated since 1.12
2019-01-28 11:03:49 -08:00
Erwan Miran
d790ec96d8 Fixup 4125: Debug agents when requests time out (#4132) 2019-01-28 10:22:43 -08:00
Erwan Miran
5e260fe23a Fixup 4094: Debug agents when nothing is return (#4125) 2019-01-28 03:33:18 -08:00
Florent Monbillard
2054a98cf7 Run kubeadm and hyperkube outside of local_release_dir (#4098)
Addressing the discussion started in #4064, this PR moves kubeadm and
hyperkube binaries to /usr/local/bin before running them on the master
nodes.

It is to address the case where local_release_dir points to /tmp
(kubespray default) and /tmp is mounted with noexec mode, preventing
any binaries from being run in that partition.

In role "node", we still move kubeadm to bin_dir only on the worker
nodes.
2019-01-28 02:00:49 -08:00
Sergey
ce8ba1f170 create artifacts_dir (#4079) 2019-01-28 01:59:15 -08:00
Danny Kulchinsky
595d6427ac [Nodelocal DNS cache] Mount host /run/xtables.lock in nodelocaldns container (#4074)
* Mount host /run/xtables.lock in nodelocaldns container

* fix typo in nodelocaldns daemonset manifest yml

* Add prometheus scrape annotation, updateStrategy and reduce termination grace period

* fix indentation

* actually fix it..

* Bump k8s-dns-node-cache tag to 1.15.1 (fixes https://github.com/kubernetes/dns/issues/282)
2019-01-28 01:57:40 -08:00
Aivars Sterns
39dc61b948 add miouge1 to reviewers (slack - maxguy) (#4108) 2019-01-28 00:42:22 -08:00
Danny Kulchinsky
96688269f8 Support both --address and --bind-address for scheduler and controller-manager (#4112) 2019-01-27 23:43:34 -08:00
Rong Zhang
55aa58ee2e Merge pull request #4025 from riverzhang/download-images
Fix kubeadm config images pull
2019-01-28 15:41:15 +08:00
Erwan Miran
556a8d68bc Set IP env var to autodetect when calico_ip_auto_method is defined (#4105) 2019-01-27 23:09:18 -08:00
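Roughly, in the calico-node daemonset (a hedged sketch of the env vars involved):

```yaml
- name: IP
  value: autodetect
- name: IP_AUTODETECTION_METHOD
  value: "{{ calico_ip_auto_method }}"
```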
rongzhang
3ed5f89cf5 Add update server field in kube-proxy kubeconfig
I know this is a bit of a hack.
If you use a cloud LB, you can use kubeadm's controlPlaneEndpoint to configure kube-proxy's server field.
But nginx-proxy hasn't started yet when kubeadm init runs.
2019-01-28 14:45:43 +08:00
rongzhang
8d0158ceeb Fix kubeadm config images pull
Supported by kubeadm v1.11
2019-01-28 14:42:55 +08:00
Peter Metz
fcd895d032 fix(vagrant): forces flannel interface as eth1 (#4070)
Without this pods cannot communicate with each other by default (broken
networking)

Closes #2114
2019-01-26 13:38:37 -08:00
Erwan Miran
61d88b8db2 Fix random failure in debug: var=result.content|from_json (#4094)
* Fix random failure in debug: var=result.content|from_json

* netchecker agents are deployed on all k8s-cluster group members

* reducing limits/requests is not enough, switching to n1-standard-2

* gce_centos7 need more cpu
2019-01-25 08:14:22 -08:00
Chad Swenson
3e52f1a4e9 Merge pull request #4091 from doughgle/master
Introduce `calico_upgrade_url` var for Calico upgrade tool.
2019-01-23 17:39:59 -06:00
Douglas Hellinger
4479cc48fe Introduce calico_upgrade_url var for Calico upgrade tool.
So that the binary can be sourced from anywhere, not only GitHub.
2019-01-23 16:19:27 +08:00
Chad Swenson
5708914699 Merge pull request #4088 from chadswen/bootstrap-rhel-epel-fixes
Fix epel_enabled and RHEL support in bootstrap-os
2019-01-22 17:13:10 -06:00
Chad Swenson
881be9b741 Fix epel_enabled and RHEL support in bootstrap-os
Looks like `epel_enabled` was not configured for the epel install in `bootstrap-centos.yml`. Also, there were no conditionals that would trigger bootstrap for RHEL.
2019-01-22 16:40:02 -06:00
Chad Swenson
e6f1c4df7f Merge pull request #4085 from chadswen/docker-systemd-after-containerd
Fix docker 18.09.1 systemd service
2019-01-22 13:33:34 -06:00
Chad Swenson
e2592f1ce2 Fix docker 18.09.1 systemd service
The `docker-ce` 18.09.1 packaging missed an `After` dependency on containerd in the systemd service. Upstream PR: https://github.com/docker/docker-ce-packaging/pull/290
2019-01-22 11:19:54 -06:00
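The missing ordering dependency, sketched as a systemd drop-in (path illustrative):

```
# /etc/systemd/system/docker.service.d/containerd.conf
[Unit]
After=containerd.service
```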
Matthew Mosesohn
77d31e679a fixup external kube-apiserver port (#4075) 2019-01-21 14:43:27 +03:00
Florent Monbillard
decbcdc423 Use external LB IP for external api endpoint (#4060)
* Use external LB IP for external api endpoint

Use loadbalancer_apiserver.address instead of apiserver_loadbalancer_domain_name for the kubeadm init --apiserver-advertise-address argument

https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#options states apiserver-advertise-address needs to be a IPv4 or IPv6 address

* only use loadbalancer IP if it is defined
2019-01-21 12:27:42 +03:00
Chad Swenson
e3ffa21303 Merge pull request #4019 from chadswen/kubeadm-env
Fix PATH for kubeadm init
2019-01-18 11:27:57 -06:00
Chad Swenson
f2ecda6f0f Merge pull request #4059 from chadswen/helm-version-bump
Update helm version for security and stability fixes
2019-01-18 11:25:42 -06:00
Chad Swenson
26f6f1f62e Merge pull request #4050 from chadswen/docker-18.09.1
Bump docker 18.09 to the latest patch
2019-01-18 11:23:44 -06:00
Matthew Mosesohn
28aee0fc34 Update OWNERS_ALIASES (#4068) 2019-01-18 18:58:04 +03:00
Bort Verwilst
f97cb4e761 Add 1.12.5 checksums (#4067) 2019-01-18 07:16:43 -08:00
Chad Swenson
405198acd0 Update helm version for security and stability fixes
Helm v2.12.2 has fixes for a security vulnerability, and there have been several improvements since our last update.
2019-01-16 11:03:23 -06:00
Matthew Mosesohn
eecaba6b84 Generate external admin.conf with kubeadm (#4056)
* Generate external admin.conf with kubeadm

* Fix apiserver sans
2019-01-16 16:30:50 +03:00
Thomas Rogeat
83e11f9ef7 kubespray: fix missing ca-certificate path in apiserver 2019-01-16 11:48:24 +01:00
Chad Swenson
5a7ac7e5c1 Merge pull request #3984 from dannyk81/calico_xtables_lock
[calico/canal] mount host's xtables lock and enable calico locking for <v3.2.1
2019-01-15 23:13:02 -06:00
Chad Swenson
c15c933ce8 Bump docker 18.09 to the latest patch
Docker 18.09.1 is out and it includes some fixes that are quite critical for RHEL distros, details here: https://docs.docker.com/engine/release-notes/#18091
2019-01-15 13:54:58 -06:00
Chad Swenson
0697ab4b4f Merge pull request #4048 from chadswen/readonly-writable-fix
Fix kubeadm config extra volumes
2019-01-15 13:02:04 -06:00
Chad Swenson
13e3e867ac Fix kubeadm config extra volumes
I found a potential use case where `writable` could be null and therefore
not treated like a boolean, so this adds an extra default statement to
avoid negating a non-boolean as a boolean, which would lead to undefined. refs #4020
2019-01-15 12:35:22 -06:00
Chad Swenson
cc30220f01 Merge pull request #4044 from chadswen/lvp-cm-fix
Fix local-volume-provisioner configmap template
2019-01-15 09:08:08 -06:00
Danny Kulchinsky
257019d424 Mount host's xtables lock and enable calico locking for <v3.2.1 2019-01-14 17:16:29 -05:00
Chad Swenson
4959bfc1b3 Merge pull request #3950 from elementyang/pr-registry
fix registry_storage_class equals empty string
2019-01-14 15:45:09 -06:00
Chad Swenson
301671ae19 Merge pull request #4026 from riverzhang/bind-address
Use --bind-address instead of --address
2019-01-14 15:35:00 -06:00
Chad Swenson
1e09fd8e0f Merge pull request #3970 from woopstar/image_builder_1
Add image builder to create Docker vm's for kube-virt
2019-01-14 15:21:58 -06:00
Chad Swenson
f10f7d0e84 Merge pull request #3975 from kskewes/arm64-urls
Update kubectl and etcd download urls for multi-arch
2019-01-14 15:04:29 -06:00
Chad Swenson
3ee5aa0d6b Fix local-volume-provisioner configmap template
Looks like the template is removing the trailing space between storage
class entries, and since CI only has one storage class we never hit this
issue. This change will prevent the yaml from printing on a single line
when multiple storage classes are defined.
2019-01-14 14:28:00 -06:00
Chad Swenson
fce8712bff Merge pull request #4033 from MarkusTeufelberger/pypy_portable
Use Pypy portable on coreos
2019-01-14 12:30:47 -06:00
Chad Swenson
2051bf2b67 Merge pull request #4028 from riverzhang/v1.13.2
Upgrade kubernetes to v1.13.2
2019-01-14 10:00:15 -06:00
Markus Teufelberger
87c9a871b9 bootstrap-os: use the systemd module to stop and mask locksmithd 2019-01-12 15:06:01 +01:00
Markus Teufelberger
5e2c14e916 bootstrap-os: simplify pip3 installation on coreos 2019-01-12 15:05:33 +01:00
Markus Teufelberger
5b5546adf1 bootstrap-os: Install pypy3 portable 2019-01-12 15:04:33 +01:00
rongzhang
0b09c8154a Upgrade kubernetes to v1.13.2 2019-01-11 14:32:42 +08:00
rongzhang
bab2e5ed0d Use --bind-address instead of --address
--address is deprecated
2019-01-11 12:22:47 +08:00
Chad Swenson
7c620ade85 Merge pull request #4020 from chadswen/kubeadm-config-field-updates
Fix readOnly flag in kubeadm-config.v1beta1.yaml.j2
2019-01-10 16:30:56 -06:00
Chad Swenson
1d9c0c7d17 Fix readOnly flag in kubeadm-config.v1beta1.yaml.j2
In v1beta1 of `ClusterConfiguration` the extraVolumes `writable` field was changed to `readOnly` and its boolean value must be negated.

Also, the json field for `useHyperKubeImage` was incorrectly capitalized.
2019-01-09 20:43:35 -06:00
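A hedged Jinja2 sketch of the negation (variable names illustrative, not the exact template):

```
extraVolumes:
{% for volume in apiserver_extra_volumes %}
- name: {{ volume.name }}
  hostPath: {{ volume.hostPath }}
  mountPath: {{ volume.mountPath }}
  readOnly: {{ not (volume.writable | default(false)) }}
{% endfor %}
```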
Chad Swenson
aa1d5b8970 Fix PATH for kubeadm init
Right now we're consistently getting warnings about kubelet not found in
path during `kubeadm init`. We fixed this for `kubeadm join` in #3342, and this brings the change to init
as well.
2019-01-09 18:38:02 -06:00
Sascha Marcel Schmidt
435993891b fix assertions, use msg instead of message (#3913) 2019-01-09 11:01:47 -08:00
Chad Swenson
1d5a9464e2 Merge pull request #4009 from chadswen/lvp-fixup
Bugfixes for Local Volume Provisioner
2019-01-09 11:22:28 -06:00
Chad Swenson
e88b8f247a Merge pull request #3996 from Bobonium/issue_3586_kube_router_with_external_loadbalancer_not_working
use api server loadbalancer ip if external loadbalancer is used (fixes kube-router deployment)
2019-01-09 11:20:38 -06:00
Chad Swenson
880c9c6b48 Merge pull request #4016 from mcntrn/download_file_basic_auth
Added optional basic auth parameters
2019-01-09 11:15:05 -06:00
Manuel Cintron
7633e6d582 Added pass through parameters to enable basic auth for downloads 2019-01-08 19:36:13 -06:00
Chad Swenson
72802e4d8d Bugfixes for Local Volume Provisioner
- Fixed an issue where storage class host directories were looped
through on too many target hosts
- Fixes examples in the LVP `README.md` to use nested dicts instead of a
list of dicts
2019-01-08 17:45:20 -06:00
Wilmar den Ouden
4fb8adb9e4 More dynamic local-storage-provisioner approach (#3472)
* Makes local volume provisioner more dynamic

* Correct variable name in local storage provisioner defaults

* Updates external-provisioner readme

* Updates variable naming to be more clear, more documentation, fixes sample inventory

* Variable refactor, untangled some jinja2 loops

* Corrects variable name

* No variable substitution in dict keys, replaced with anchor

* Fixes default storage_classes dict, inline docs

* Fixes spelling in inline docs

* Addresses comments in review

* Updates all the defaults

* Fix failing CI task

* Fixes external provisioner daemonset
2019-01-08 12:36:44 -08:00
Chad Swenson
5c52a830d2 Update kubernetes dashboard to latest patch (#3995) 2019-01-08 09:46:20 -08:00
Julien C
2c8d75afb7 Remove --limit option to select node to delete (#4001)
--limit doesn't work when using remove-node.yml as there is a group listing with "hosts: kube-master" in the playbook. Thus, remove-node/pre-remove/post-remove tasks are skipped as they are filtered by the group "hosts: kube-master"
2019-01-08 12:09:18 +01:00
Andreas Holmsten
4d5b41b8db Allow override of bind addr for controller-manager and scheduler (#3968)
* allows overriding the bind addresses for controller-manager and scheduler

Useful for Prometheus metrics monitoring

* Add bind addr override support in kubeadm/v1beta1

Adds support for override of bind addresses for controller-manager
and scheduler in kubeadm/v1beta1

* Move location of bind address vars

* Remove double declaration of schedulerExtraArgs
2019-01-07 20:41:54 -08:00
Bobonium
11d9c2e2c3 use api server loadbalancer ip if external loadbalancer is used - this fixes the broken kube-router deployment 2019-01-07 23:06:52 +01:00
Andreas Kruger
352fbd71e7 Merge branch 'master' of github.com:kubernetes-sigs/kubespray into image_builder_1 2019-01-04 10:18:23 +01:00
Andreas Kruger
2706633f81 Update OWNERS file 2019-01-04 10:18:05 +01:00
Andreas Kruger
50af3cf6c1 Added owners file 2019-01-04 10:16:07 +01:00
Aivars Sterns
39d7503069 Merge pull request #3959 from elementyang/pr-ingress
fix ingress nodeSelector label
2019-01-04 08:58:16 +00:00
Karl Skewes
41434ce080 Update kubectl and etcd download urls for multi-arch 2019-01-04 21:44:57 +13:00
MarkusTeufelberger
f72ed13f3c remove os_family variable from bootstrap-os (#3962)
* remove os_family variable from bootstrap-os

* quote the conditions another time to fix the syntax error
2019-01-03 11:28:03 -08:00
Andreas Kruger
0fec370dcd Minor changes 2019-01-03 15:41:31 +01:00
Andreas Kruger
bf63569184 Add image builder to create Docker vm's for kube-virt 2019-01-03 15:34:37 +01:00
okamototk
8216e821d3 Fix kubeadm v1beta1 configuration taint (#3928)
* Use the same master node taint as kubeadm configuration v1alpha3 and before.
2019-01-03 03:42:23 -08:00
Andreas Krüger
13efa95ef7 Run less CI jobs on each PR (#3967) 2019-01-03 01:26:38 -08:00
Anton Patsev
e25237455c Fix mixup http/https in bootstrap-debian.yml (#3963)
* Fix mixup http/https in bootstrap-debian.yml

* Update bootstrap-debian.yml
2019-01-03 00:18:09 -08:00
Andreas Krüger
b38ed2c959 Update to Dockerfile used for releasing 2.8 and 2.8.1 (#3966) 2019-01-03 00:16:35 -08:00
Andreas Holmsten
a34139e19e (Re)add line break for supplementary addr in SANs (#3952)
The change implemented in #3908 removed line breaks for supplementary
addresses in kubeadm SANs, causing errors in the config file and
failure to bring the cluster up. This commit reimplements line breaks in
between supplementary addresses.
2019-01-03 00:12:00 -08:00
Chad Swenson
80379f6cab Fix kube-proxy configuration for kubeadm (#3958)
- Creates and defaults an ansible variable for every configuration option in the `kubeproxy.config.k8s.io/v1alpha1` type spec
  - Fixes vars that were orphaned by the removal of non-kubeadm mode
  - Fixes previously hardcoded kubeadm values
- Introduces a `main` directory for role default files per component (requires ansible 2.6.0+)
  - Split out just `kube-proxy.yml` in this first effort
- Removes the kube-proxy server field patch task

We should continue to pull out other components from `main.yml` into their own defaults files as I did here for `defaults/main/kube-proxy.yml`. I hope for and will need others to join me in this refactoring across the project until each component config template has a matching role defaults file, with shared defaults in `kubespray-defaults` or `downloads`
2019-01-03 00:04:26 -08:00
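A sketch of the resulting pattern, with illustrative variable names: every field in the spec maps to an Ansible variable:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
bindAddress: {{ kube_proxy_bind_address }}
clusterCIDR: {{ kube_proxy_cluster_cidr }}
mode: {{ kube_proxy_mode }}
```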
MarkusTeufelberger
d58b338bd8 Update the version of pypy used on CoreOS bootstrap-os (#3922)
* Update the version of pypy used on CoreOS bootstrap-os

* update the pip installation process on CoreOS
2019-01-02 06:17:20 -08:00
elementyang
e1e13b68b3 fix ingress nodeSelector label 2018-12-29 14:41:23 +08:00
elementyang
90ee5df413 fix registry_storage_class equals empty string 2018-12-29 14:31:47 +08:00
Rong Zhang
5834e609a6 Add scale master features (#3946)
* Add scale master features

* Add certificate management with kubeadm

* Add kubeadm kubeconfig

* Fix yaml roles error

* fix failed cluster upgrade

* force update cert and keys when you reconfigure cluster
2018-12-27 23:27:27 -08:00
elementyang
532e97c542 fix registry_storage_class equals empty string 2018-12-28 14:23:19 +08:00
Markos Chandras
d156449819 roles: docker: Update docker service for SUSE distributions (#3924)
The containerd service and socket files have been dropped from the
openSUSE docker package so we should not require them in the docker
service anymore. This makes the docker service file look similar to
the one shipped by the openSUSE package.

Signed-off-by: Markos Chandras <mchandras@suse.de>
2018-12-27 07:26:02 -08:00
Anton Patsev
d4bd08f82e Install python-pip from local yum repository (#3940)
Add support for installing python-pip from a local yum repository if one exists.
2018-12-27 06:30:59 -08:00
Earl C. Ruby III
3ce033995f Documented docker_version acceptable values (#3901)
Added a line documenting where to find acceptable values for the
`docker_version` setting. If you use a value that is not used as
a key value by `docker_versioned_pkg` the container-engine/docker
playbook will throw a "Unexpected templating type error". (e.g.
If you use '18.06.1' or '18.06.1-ce', neither of which is used
as a key value of `docker_versioned_pkg`, rather than '18.06',
you'll get an error when installing on Ubuntu 18.04.)
2018-12-27 16:32:16 +03:00
Gautam Divgi
320f4d4d7f Added filters for integer conversion of kubelet_max_pods and kube_network_node_prefix (#3857) 2018-12-26 13:58:53 -08:00
Seongjin Cho
16715adfa0 Adds support for webhook token auth. (#3939)
Webhook token auth:
https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication

Fixes #3063.
2018-12-26 01:52:53 -08:00
Lucas Melchior
100d972cea Updated cri-o documentation (#3878) 2018-12-25 22:55:17 -08:00
Rong Zhang
ce63597e4a Merge pull request #3941 from riverzhang/gpu
Fix GPU node Scheduling
2018-12-26 13:39:10 +08:00
Anton Patsev
5f117fb65e Add support http/https proxy for bootstrap-debian (#3932) 2018-12-25 10:46:53 -08:00
WillPlatnick
72fee60c8f Update nodelocal to be in its own section (#3931) 2018-12-25 07:10:08 -08:00
rongzhang
1bb1ba2274 Fix GPU node Scheduling 2018-12-25 21:37:10 +08:00
Zefool
6ebcaab2bb controlPlaneEndpoint set up through load balancer should be possible … (#3888)
* controlPlaneEndpoint set up through load balancer should be possible  even in single master setups

Enable load balancer for single-master setups
Fixes an issue where single-master setups are not reachable using the usual admin.conf from outside the cluster. 

controlPlaneEndpoint set up through load balancer should be possible  even in single master setups

* add fix to other api versions

* remove obsolete check completely

* remove check, pass 2

* removes checks in client configuration

* delete 'and'
2018-12-25 00:03:32 -08:00
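The kubeadm config field in question (variable names illustrative):

```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
controlPlaneEndpoint: "{{ loadbalancer_apiserver.address }}:{{ loadbalancer_apiserver.port }}"
```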
Rong Zhang
cd42e649a7 Fix reconfigure and upgrade cluster (#3938) 2018-12-24 23:06:27 -08:00
Rong Zhang
8167e5b690 Fix kubeadm images templates (#3936)
downloading v1.12.3 kubernetes images failed
2018-12-23 06:35:06 -08:00
Bort Verwilst
de014422bf Add k8s 1.12.4 checksums (#3929) 2018-12-23 01:09:09 -08:00
Rong Zhang
2f5c0d10bb Merge pull request #3934 from riverzhang/delete-kubeamd-client
Delete unused controlPlane for join node
2018-12-23 12:07:26 +08:00
Rong Zhang
48b5ee5cd5 Merge pull request #3933 from riverzhang/fix-kubeadm-images
Fix image download failures when installing with CRI-O
2018-12-23 11:10:08 +08:00
rongzhang
dd4159fe65 Delete unused controlPlane for join node
it is only used when joining a master or when using the --experimental-control-plane argument
2018-12-23 00:31:01 +08:00
rongzhang
62a8961d8f Fix image download failures when installing with CRI-O 2018-12-23 00:20:39 +08:00
Seongjin Cho
e7b835eb4c Fix duplicate storage-backend (#3906) 2018-12-20 01:01:39 -08:00
Hedayat Vatankhah (هدایت)
fbe9e0ac1a Fix docker_options definition when docker_version is 'latest' rather than a number (#3919)
- NOTE: it assumes that the 'latest' version is newer than 17.05
2018-12-20 00:58:21 -08:00
Rong Zhang
40feb120e4 Merge pull request #3895 from riverzhang/v1.13.1
Upgrade kubernetes to v1.13.1
2018-12-20 16:53:31 +08:00
Rong Zhang
6362211860 Add images downloader to download roles (#3914)
* Add images downloader to download roles

* Use single jinja2 templates

* add kube_version to templates
2018-12-19 05:17:58 -08:00
Rong Zhang
925a820b56 Fix skip upgrade first master (#3915) 2018-12-19 05:16:14 -08:00
Matthew Mosesohn
50b884a32d Fixup line breaks for kubeadm SANs (#3908) 2018-12-19 02:47:31 -08:00
rongzhang
890878f5db disable ubuntu18-flannel test 2018-12-19 15:14:04 +08:00
rongzhang
435ef14379 Upgrade kubernetes to v1.13.1 2018-12-19 15:13:43 +08:00
Matthew Mosesohn
3c44ffcf80 set kubespray-defaults kube_api_anonymous_auth to true (#3909) 2018-12-18 06:53:58 -08:00
Ganesh Maharaj Mahalingam
73aee004ac Enable ClearLinux as a distro in kubespray (#3855)
Signed-off-by: Ganesh Maharaj Mahalingam <ganesh.mahalingam@intel.com>
2018-12-18 01:39:25 -08:00
ihard
30a9149b52 add vars for cilium init container (#3893)
* add vars for cilium init container

* make yamllint happy

* add var cilium_init in downloads
2018-12-18 00:34:19 -08:00
Ryler Hockenbury
4a7f829ecf Reapply win_node patches (#3868) 2018-12-13 06:17:46 -08:00
Egor
dc8a8011be Load nf_conntrack module if nf_conntrack_ipv4 failed (#3764) 2018-12-12 05:33:54 -08:00
Maxim Snezhkov
5e84dabb46 Fix assertion for alone etcd nodes (#3847) 2018-12-12 05:21:54 -08:00
Ryler Hockenbury
3e8f4c1545 Use recommended defaults for dns autoscale (#3884) 2018-12-12 05:05:46 -08:00
Ganesh Maharaj Mahalingam
1a50a1a733 cri-o reset all containers and pods (#3856)
Signed-off-by: Ganesh Maharaj Mahalingam <ganesh.mahalingam@intel.com>
2018-12-12 01:59:55 -08:00
Florent Monbillard
e50647d252 dns_mode defaults to coredns (#3882)
since bad886ca9b, dns_mode is set to coredns by default instead of kubedns
2018-12-12 01:45:00 -08:00
Maxim Snezhkov
951e4675c6 Fix error with ipvs on cluster reset task (#3848) 2018-12-12 01:43:16 -08:00
Ryler Hockenbury
c04e8b57b9 Metrics server resizer addon needs to target metrics server deployment (#3867)
* Metrics server resizer addon should target metrics server deployment

* Target metrics server deployment without version
2018-12-12 00:09:09 -08:00
gdoucet
32d47c836d Adding is_atomic in centos bootstrap-os (#3873)
Adding fact is_atomic in bootstrap-centos.yml.

Fix issue: #3538
2018-12-11 02:43:21 -08:00
Maxim Snezhkov
90a7941d56 Fix disabling swap on ubuntu systems (#3864) 2018-12-11 02:42:00 -08:00
Thomas Nys
3e3ee0aeb1 Add support for running a nodelocal dns cache (#3861)
* Add support for running a nodelocal dns cache

After encountering dns issues in a cluster I was recently working on, I
noticed Kubernetes 1.13 introduced support for running a nodelocal dns
cache.

I believe this can be useful for more people.

73b548db06
https://github.com/kubernetes/enhancements/blob/master/keps/sig-network/0030-nodelocal-dns-cache.md

* Add requested changes

* Add additional requested changes + documentation

* Add requested changes after review

* Replace incorrect variable
2018-12-10 17:28:03 -08:00
Anton Patsev
7b674e0607 Add proxy to /etc/apt/apt.conf for ubuntu (#3869) 2018-12-10 02:33:45 -08:00
Julien C
593a9a262d Add metrics service to kube-dns (#3852)
Metrics port is exposed through a service for CoreDNS but not for kube-dns.
2018-12-10 01:45:00 -08:00
Zohar Mamedov
456596710e kube-router manifest DSR adjustments (#3828) 2018-12-10 00:40:39 -08:00
Đào Hoàng Sơn
01cd4cf1c6 Remove vault role from inventory_builder. (#3863)
Related to https://github.com/kubernetes-sigs/kubespray/pull/3684
2018-12-09 18:13:42 +01:00
Andrey Zhelnin
1712314fab Setting host_architecture var (#3846)
Setting host_architecture to allow the etcd upgrade to work through: ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=etcd (otherwise host_architecture is missing)
2018-12-07 05:41:30 -08:00
Egor
7da9880ff7 Move node-cidr-mask-size to ControllerManagerextraArgs (#3845) 2018-12-07 04:23:17 -08:00
Bjorn Skovlund Ryden
d42b37b77d Added RBAC rights for metrics_server. Fixes #3829 (#3843) 2018-12-07 03:11:35 -08:00
Maxime Guyot
c3e83f464f Remove mention of fuel in README (#3826) 2018-12-07 11:12:54 +01:00
Rong Zhang
1550c05a7a Add docker 18.09 support (#3844) 2018-12-07 02:02:39 -08:00
pasqualet
ea833a4cd7 Fix apiServerCertSANs in kubeadm config file (#3839) 2018-12-07 00:11:08 -08:00
Tagir
2d8e04dca7 Added v1.10.11 v1.11.5 support (#3837) 2018-12-07 00:09:51 -08:00
Andreas Krüger
d5ce5874e8 Streamline path to certs dir (#3836)
* Streamline path to certs dir

* More fixes

* Set path to etcd certs in kubernetes defaults instead
2018-12-06 23:11:53 -08:00
Rong Zhang
225f765b56 Upgrade kubernetes to v1.13.0 (#3810)
* Upgrade kubernetes to v1.13.0

* Remove all presence of scheduler.alpha.kubernetes.io/critical-pod in templates

* Fix cert dir

* Use kubespray v2.8 as baseline for gitlab
2018-12-06 12:11:48 -08:00
Andreas Krüger
ddffdb63bf Remove non-kubeadm deployment (#3811)
* Remove non-kubeadm deployment

* More cleanup

* More cleanup

* More cleanup

* More cleanup

* Fix gitlab

* Try stop gce first before absent to make the delete process work

* More cleanup

* Fix bug with checking if kubeadm has already run

* Fix bug with checking if kubeadm has already run

* More fixes

* Fix test

* fix

* Fix gitlab checkout untill kubespray 2.8 is on quay

* Fixed

* Add upgrade path from non-kubeadm to kubeadm. Revert ssl path

* Readd secret checking

* Do gitlab checks from v2.7.0 test upgrade path to 2.8.0

* fix typo

* Fix CI jobs to kubeadm again. Fix broken hyperkube path

* Fix gitlab

* Fix rotate tokens

* More fixes

* More fixes

* Fix tokens
2018-12-06 02:33:38 -08:00
Erwan Miran
0d1be39a97 Reset: Check for kube-ipvs0 presence before remove it (#3816) 2018-12-04 19:18:50 -08:00
Erwan Miran
2c1dd69891 Reset tasks specific to Calico (#3813) 2018-12-04 11:37:45 -08:00
Chad Swenson
145687a48e Reduce log spam of verbose tasks (#3806)
Added a loop_control label to a few tasks that flood our logs.
2018-12-04 10:35:44 -08:00
Rong Zhang
9051aa5296 Fix ubuntu-contiv test failed (#3808)
netchecker agent status is pending
2018-12-03 23:01:32 -08:00
380 changed files with 5273 additions and 23840 deletions

.gitignore

@@ -1,9 +1,6 @@
.vagrant
*.retry
**/vagrant_ansible_inventory
inventory/credentials/
inventory/group_vars/fake_hosts.yml
inventory/host_vars/
temp
.idea
.tox
@@ -11,12 +8,19 @@ temp
*.bak
*.tfstate
*.tfstate.backup
.terraform/
contrib/terraform/aws/credentials.tfvars
/ssh-bastion.conf
**/*.sw[pon]
*~
vagrant/
# Ansible inventory
inventory/*
!inventory/local
!inventory/sample
inventory/*/artifacts/
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
@@ -24,7 +28,6 @@ __pycache__/
# Distribution / packaging
.Python
inventory/*/artifacts/
env/
build/
credentials/


@@ -1,3 +1,4 @@
---
stages:
- unit-tests
- moderator
@@ -8,7 +9,7 @@ stages:
variables:
FAILFASTCI_NAMESPACE: 'kargo-ci'
GITLAB_REPOSITORY: 'kargo-ci/kubernetes-sigs-kubespray'
# DOCKER_HOST: tcp://localhost:2375
# DOCKER_HOST: tcp://localhost:2375
ANSIBLE_FORCE_COLOR: "true"
MAGIC: "ci check this"
TEST_ID: "$CI_PIPELINE_ID-$CI_BUILD_ID"
@@ -24,7 +25,6 @@ variables:
IDEMPOT_CHECK: "false"
RESET_CHECK: "false"
UPGRADE_TEST: "false"
KUBEADM_ENABLED: "false"
LOG_LEVEL: "-vv"
# asia-east1-a
@@ -35,18 +35,18 @@ variables:
# us-west1-a
before_script:
- /usr/bin/python -m pip install -r tests/requirements.txt
- mkdir -p /.ssh
- /usr/bin/python -m pip install -r tests/requirements.txt
- mkdir -p /.ssh
.job: &job
tags:
- kubernetes
- docker
image: quay.io/kubespray/kubespray:v2.7
image: quay.io/kubespray/kubespray:v2.8
.docker_service: &docker_service
services:
- docker:dind
- docker:dind
.create_cluster: &create_cluster
<<: *job
@@ -91,9 +91,7 @@ before_script:
- cd tests && make create-${CI_PLATFORM} -s ; cd -
# Check out latest tag if testing upgrade
# Uncomment when gitlab kubespray repo has tags
#- test "${UPGRADE_TEST}" != "false" && git fetch --all && git checkout $(git describe --tags $(git rev-list --tags --max-count=1))
- test "${UPGRADE_TEST}" != "false" && git checkout 53d87e53c5899d4ea2904ab7e3883708dd6363d3
- test "${UPGRADE_TEST}" != "false" && git fetch --all && git checkout $(git describe --tags $(git rev-list --tags --max-count=1))
# Checkout the CI vars file so it is available
- test "${UPGRADE_TEST}" != "false" && git checkout "${CI_BUILD_REF}" tests/files/${CI_JOB_NAME}.yml
# Workaround https://github.com/kubernetes-sigs/kubespray/issues/2021
@@ -137,9 +135,7 @@ before_script:
# Tests Cases
## Test Master API
- >
ansible-playbook -i ${ANSIBLE_INVENTORY} -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root --limit "all:!fake_hosts" tests/testcases/010_check-apiserver.yml $LOG_LEVEL
-e "{kubeadm_enabled: ${KUBEADM_ENABLED}}"
- ansible-playbook -i ${ANSIBLE_INVENTORY} -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root --limit "all:!fake_hosts" tests/testcases/010_check-apiserver.yml $LOG_LEVEL
## Ping the between 2 pod
- ansible-playbook -i ${ANSIBLE_INVENTORY} -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root --limit "all:!fake_hosts" tests/testcases/030_check-network.yml $LOG_LEVEL
@@ -237,95 +233,95 @@ before_script:
# Test matrix. Leave the comments for markup scripts.
.coreos_calico_aio_variables: &coreos_calico_aio_variables
# stage: deploy-part1
# stage: deploy-part1
MOVED_TO_GROUP_VARS: "true"
.ubuntu18_flannel_aio_variables: &ubuntu18_flannel_aio_variables
# stage: deploy-part1
# stage: deploy-part1
MOVED_TO_GROUP_VARS: "true"
.centos_weave_kubeadm_variables: &centos_weave_kubeadm_variables
# stage: deploy-part1
# stage: deploy-part1
UPGRADE_TEST: "graceful"
.ubuntu_canal_kubeadm_variables: &ubuntu_canal_kubeadm_variables
# stage: deploy-part1
# stage: deploy-part1
MOVED_TO_GROUP_VARS: "true"
.ubuntu_canal_ha_variables: &ubuntu_canal_ha_variables
# stage: deploy-special
# stage: deploy-special
MOVED_TO_GROUP_VARS: "true"
.ubuntu_contiv_sep_variables: &ubuntu_contiv_sep_variables
# stage: deploy-special
# stage: deploy-special
MOVED_TO_GROUP_VARS: "true"
.coreos_cilium_variables: &coreos_cilium_variables
# stage: deploy-special
# stage: deploy-special
MOVED_TO_GROUP_VARS: "true"
.ubuntu_cilium_sep_variables: &ubuntu_cilium_sep_variables
# stage: deploy-special
# stage: deploy-special
MOVED_TO_GROUP_VARS: "true"
.rhel7_weave_variables: &rhel7_weave_variables
# stage: deploy-part1
# stage: deploy-part1
MOVED_TO_GROUP_VARS: "true"
.centos7_flannel_addons_variables: &centos7_flannel_addons_variables
# stage: deploy-part2
# stage: deploy-part2
MOVED_TO_GROUP_VARS: "true"
.debian9_calico_variables: &debian9_calico_variables
# stage: deploy-part2
# stage: deploy-part2
MOVED_TO_GROUP_VARS: "true"
.coreos_canal_variables: &coreos_canal_variables
# stage: deploy-part2
# stage: deploy-part2
MOVED_TO_GROUP_VARS: "true"
.rhel7_canal_sep_variables: &rhel7_canal_sep_variables
# stage: deploy-special
# stage: deploy-special
MOVED_TO_GROUP_VARS: "true"
.ubuntu_weave_sep_variables: &ubuntu_weave_sep_variables
# stage: deploy-special
# stage: deploy-special
MOVED_TO_GROUP_VARS: "true"
.centos7_calico_ha_variables: &centos7_calico_ha_variables
# stage: deploy-special
# stage: deploy-special
MOVED_TO_GROUP_VARS: "true"
.centos7_kube_router_variables: &centos7_kube_router_variables
# stage: deploy-special
# stage: deploy-special
MOVED_TO_GROUP_VARS: "true"
.centos7_multus_calico_variables: &centos7_multus_calico_variables
# stage: deploy-part2
# stage: deploy-part2
UPGRADE_TEST: "graceful"
.coreos_alpha_weave_ha_variables: &coreos_alpha_weave_ha_variables
# stage: deploy-special
# stage: deploy-special
MOVED_TO_GROUP_VARS: "true"
.coreos_kube_router_variables: &coreos_kube_router_variables
# stage: deploy-special
# stage: deploy-special
MOVED_TO_GROUP_VARS: "true"
.ubuntu_rkt_sep_variables: &ubuntu_rkt_sep_variables
# stage: deploy-part1
# stage: deploy-part1
MOVED_TO_GROUP_VARS: "true"
.ubuntu_flannel_variables: &ubuntu_flannel_variables
# stage: deploy-part2
# stage: deploy-part2
MOVED_TO_GROUP_VARS: "true"
.ubuntu_kube_router_variables: &ubuntu_kube_router_variables
# stage: deploy-special
# stage: deploy-special
MOVED_TO_GROUP_VARS: "true"
.opensuse_canal_variables: &opensuse_canal_variables
# stage: deploy-part2
# stage: deploy-part2
MOVED_TO_GROUP_VARS: "true"
@@ -367,6 +363,8 @@ gce_centos7-flannel-addons:
except: ['triggers']
only: [/^pr-.*$/]
### MANUAL JOBS
gce_centos-weave-kubeadm-sep:
stage: deploy-part2
<<: *job
@@ -375,21 +373,7 @@ gce_centos-weave-kubeadm-sep:
<<: *gce_variables
<<: *centos_weave_kubeadm_variables
when: on_success
except: ['triggers']
only: [/^pr-.*$/]
gce_ubuntu-flannel-ha:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_flannel_variables
when: on_success
except: ['triggers']
only: [/^pr-.*$/]
### MANUAL JOBS
only: ['triggers']
gce_ubuntu-weave-sep:
stage: deploy-part2
@@ -399,8 +383,7 @@ gce_ubuntu-weave-sep:
<<: *gce_variables
<<: *ubuntu_weave_sep_variables
when: manual
except: ['triggers']
only: [/^pr-.*$/]
only: ['triggers']
gce_coreos-calico-sep-triggers:
stage: deploy-part2
@@ -432,7 +415,6 @@ gce_centos7-flannel-addons-triggers:
when: on_success
only: ['triggers']
gce_ubuntu-weave-sep-triggers:
stage: deploy-part2
<<: *job
@@ -486,6 +468,16 @@ gce_ubuntu-canal-kubeadm-triggers:
when: on_success
only: ['triggers']
gce_ubuntu-flannel-ha:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_flannel_variables
when: manual
except: ['triggers']
gce_centos-weave-kubeadm-triggers:
stage: deploy-part2
<<: *job
@@ -736,7 +728,7 @@ yamllint:
<<: *job
stage: unit-tests
script:
- yamllint roles
- yamllint .
except: ['triggers', 'master']
tox-inventory-builder:
@@ -747,3 +739,73 @@ tox-inventory-builder:
- cd contrib/inventory_builder && tox
when: manual
except: ['triggers', 'master']
# Tests for contrib/terraform/
.terraform_install: &terraform_install
<<: *job
before_script:
# Set Ansible config
- cp ansible.cfg ~/.ansible.cfg
# Install Terraform
- apt-get install -y unzip
- curl https://releases.hashicorp.com/terraform/${TF_VERSION}/terraform_${TF_VERSION}_linux_amd64.zip > /tmp/terraform.zip
- unzip /tmp/terraform.zip && mv ./terraform /usr/local/bin/ && terraform --version
# Prepare inventory
- cp -LRp contrib/terraform/$PROVIDER/sample-inventory inventory/$CLUSTER
- cd inventory/$CLUSTER
- ln -s ../../contrib/terraform/$PROVIDER/hosts
- terraform init ../../contrib/terraform/$PROVIDER
# Copy SSH keypair
- mkdir -p ~/.ssh
- echo "$PACKET_PRIVATE_KEY" | base64 -d > ~/.ssh/id_rsa
- chmod 400 ~/.ssh/id_rsa
- echo "$PACKET_PUBLIC_KEY" | base64 -d > ~/.ssh/id_rsa.pub
- export TF_VAR_public_key_path=""
only: ['master', /^pr-.*$/]
.terraform_validate: &terraform_validate
<<: *terraform_install
stage: unit-tests
script:
- terraform validate -var-file=cluster.tf ../../contrib/terraform/$PROVIDER
- terraform fmt -check -diff ../../contrib/terraform/$PROVIDER
.terraform_apply: &terraform_apply
<<: *terraform_install
stage: deploy-part2
when: manual
script:
- terraform apply -auto-approve ../../contrib/terraform/$PROVIDER
- ansible-playbook -i hosts ../../cluster.yml
after_script:
# Cleanup regardless of exit code
- cd inventory/$CLUSTER
- terraform destroy -auto-approve ../../contrib/terraform/$PROVIDER
tf-validate-openstack:
<<: *terraform_validate
variables:
TF_VERSION: 0.11.11
PROVIDER: openstack
CLUSTER: $CI_COMMIT_REF_NAME
tf-validate-packet:
<<: *terraform_validate
variables:
TF_VERSION: 0.11.11
PROVIDER: packet
CLUSTER: $CI_COMMIT_REF_NAME
tf-apply-packet:
<<: *terraform_apply
variables:
TF_VERSION: 0.11.11
PROVIDER: packet
CLUSTER: $CI_COMMIT_REF_NAME
TF_VAR_cluster_name: $CI_COMMIT_REF_NAME
TF_VAR_number_of_k8s_masters: "1"
TF_VAR_number_of_k8s_nodes: "1"
TF_VAR_plan_k8s_masters: t1.small.x86
TF_VAR_plan_k8s_nodes: t1.small.x86
TF_VAR_facility: "ewr1"

CNAME (new file)

@@ -0,0 +1 @@
kubespray.io


@@ -7,4 +7,5 @@
1. Submit an issue describing your proposed change to the repo in question.
2. The [repo owners](OWNERS) will respond to your issue promptly.
3. Fork the desired repo, develop and test your code changes.
4. Submit a pull request.
4. Sign the CNCF CLA (https://git.k8s.io/community/CLA.md#the-contributor-license-agreement)
5. Submit a pull request.


@@ -4,13 +4,16 @@ RUN mkdir /kubespray
WORKDIR /kubespray
RUN apt update -y && \
apt install -y \
libssl-dev python-dev sshpass apt-transport-https \
libssl-dev python-dev sshpass apt-transport-https jq \
ca-certificates curl gnupg2 software-properties-common python-pip
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable" \
&& apt update -y && apt-get install docker-ce -y
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable" \
&& apt update -y && apt-get install docker-ce -y
COPY . .
RUN /usr/bin/python -m pip install pip -U && /usr/bin/python -m pip install -r tests/requirements.txt && python -m pip install -r requirements.txt
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.11.3/bin/linux/amd64/kubectl \
&& chmod a+x kubectl && cp kubectl /usr/local/bin/kubectl

OWNERS

@@ -1,5 +1,4 @@
# See the OWNERS file documentation:
# https://github.com/kubernetes/community/blob/master/contributors/guide/owners.md
# See the OWNERS docs at https://go.k8s.io/owners
approvers:
- kubespray-approvers


@@ -11,8 +11,10 @@ aliases:
- riverzhang
- holser
- smana
- verwilst
kubespray-reviewers:
- jjungnickel
- archifleks
- chapsuk
- mirwan
- miouge1


@@ -3,10 +3,10 @@
Deploy a Production Ready Kubernetes Cluster
============================================
If you have questions, join us on the [kubernetes slack](https://kubernetes.slack.com), channel **\#kubespray**.
If you have questions, check the [documentation](https://kubespray.io) and join us on the [kubernetes slack](https://kubernetes.slack.com), channel **\#kubespray**.
You can get your invite [here](http://slack.k8s.io/)
- Can be deployed on **AWS, GCE, Azure, OpenStack, vSphere, Oracle Cloud Infrastructure (Experimental), or Baremetal**
- Can be deployed on **AWS, GCE, Azure, OpenStack, vSphere, Packet (bare metal), Oracle Cloud Infrastructure (Experimental), or Baremetal**
- **Highly available** cluster
- **Composable** (Choice of the network plugin for instance)
- Supports most popular **Linux distributions**
@@ -19,10 +19,6 @@ To deploy the cluster you can use :
### Ansible
#### Ansible version
Ansible v2.7.0 is failing and/or produce unexpected results due to [ansible/ansible/issues/46600](https://github.com/ansible/ansible/issues/46600)
#### Usage
# Install dependencies from ``requirements.txt``
@@ -90,6 +86,7 @@ Documents
- [AWS](docs/aws.md)
- [Azure](docs/azure.md)
- [vSphere](docs/vsphere.md)
- [Packet Host](docs/packet.md)
- [Large deployments](docs/large-deployments.md)
- [Upgrades basics](docs/upgrades.md)
- [Roadmap](docs/roadmap.md)
@@ -111,27 +108,27 @@ Supported Components
--------------------
- Core
- [kubernetes](https://github.com/kubernetes/kubernetes) v1.12.3
- [etcd](https://github.com/coreos/etcd) v3.2.24
- [kubernetes](https://github.com/kubernetes/kubernetes) v1.13.5
- [etcd](https://github.com/coreos/etcd) v3.2.26
- [docker](https://www.docker.com/) v18.06 (see note)
- [rkt](https://github.com/rkt/rkt) v1.21.0 (see Note 2)
- [cri-o](http://cri-o.io/) v1.11.5 (experimental: see [CRI-O Note](docs/cri-o.md). Only on centos based OS)
- Network Plugin
- [calico](https://github.com/projectcalico/calico) v3.1.3
- [calico](https://github.com/projectcalico/calico) v3.4.0
- [canal](https://github.com/projectcalico/canal) (given calico/flannel versions)
- [cilium](https://github.com/cilium/cilium) v1.3.0
- [contiv](https://github.com/contiv/install) v1.2.1
- [flanneld](https://github.com/coreos/flannel) v0.10.0
- [kube-router](https://github.com/cloudnativelabs/kube-router) v0.2.1
- [flanneld](https://github.com/coreos/flannel) v0.11.0
- [kube-router](https://github.com/cloudnativelabs/kube-router) v0.2.5
- [multus](https://github.com/intel/multus-cni) v3.1.autoconf
- [weave](https://github.com/weaveworks/weave) v2.5.0
- [weave](https://github.com/weaveworks/weave) v2.5.1
- Application
- [cephfs-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.0-k8s1.11
- [cert-manager](https://github.com/jetstack/cert-manager) v0.5.2
- [coredns](https://github.com/coredns/coredns) v1.2.6
- [coredns](https://github.com/coredns/coredns) v1.4.0
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v0.21.0
Note: The list of validated [docker versions](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md) was updated to 1.11.1, 1.12.1, 1.13.1, 17.03, 17.06, 17.09, 18.06. The kubelet might break on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster look into e.g. yum versionlock plugin or apt pin).
Note: The list of validated [docker versions](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md) was updated to 1.11.1, 1.12.1, 1.13.1, 17.03, 17.06, 17.09, 18.06. kubeadm now properly recognizes Docker 18.09.0 and newer, but still treats 18.06 as the default supported version. The kubelet might break on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster look into e.g. yum versionlock plugin or apt pin).
Note 2: rkt support as docker alternative is limited to control plane (etcd and
kubelet). Docker is still used for Kubernetes cluster workloads and network
@@ -141,7 +138,7 @@ plugins can be deployed for a given single cluster.
Requirements
------------
- **Ansible v2.5 (or newer) and python-netaddr is installed on the machine
- **Ansible v2.7.6 (or newer) and python-netaddr is installed on the machine
that will run Ansible commands**
- **Jinja 2.9 (or newer) is required to run the Ansible Playbooks**
- The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/downloads.md#offline-environment))
@@ -153,6 +150,14 @@ Requirements
should be configured in the target servers. Then the `ansible_become` flag
or command parameters `--become or -b` should be specified.
Hardware:
These limits are safe guarded by Kubespray. Actual requirements for your workload can differ. For a sizing guide go to the [Building Large Clusters](https://kubernetes.io/docs/setup/cluster-large/#size-of-master-and-master-components) guide.
- Master
- Memory: 1500 MB
- Node
- Memory: 1024 MB
Network Plugins
---------------
@@ -195,7 +200,6 @@ Tools and projects on top of Kubespray
--------------------------------------
- [Digital Rebar Provision](https://github.com/digitalrebar/provision/blob/master/doc/integrations/ansible.rst)
- [Fuel-ccp-installer](https://github.com/openstack/fuel-ccp-installer)
- [Terraform Contrib](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform)
CI Tests

Vagrantfile

@@ -23,7 +23,7 @@ SUPPORTED_OS = {
"centos" => {box: "centos/7", user: "vagrant"},
"centos-bento" => {box: "bento/centos-7.5", user: "vagrant"},
"fedora" => {box: "fedora/28-cloud-base", user: "vagrant"},
"opensuse" => {box: "opensuse/openSUSE-42.3-x86_64", user: "vagrant"},
"opensuse" => {box: "opensuse/openSUSE-15.0-x86_64", user: "vagrant"},
"opensuse-tumbleweed" => {box: "opensuse/openSUSE-Tumbleweed-x86_64", user: "vagrant"},
}
@@ -50,6 +50,10 @@ $kube_node_instances = $num_instances
$kube_node_instances_with_disks = false
$kube_node_instances_with_disks_size = "20G"
$kube_node_instances_with_disks_number = 2
$override_disk_size = false
$disk_size = "20GB"
$local_path_provisioner_enabled = false
$local_path_provisioner_claim_root = "/opt/local-path-provisioner/"
$playbook = "cluster.yml"
@@ -97,6 +101,13 @@ Vagrant.configure("2") do |config|
# always use Vagrants insecure key
config.ssh.insert_key = false
if ($override_disk_size)
unless Vagrant.has_plugin?("vagrant-disksize")
system "vagrant plugin install vagrant-disksize"
end
config.disksize.size = $disk_size
end
(1..$num_instances).each do |i|
config.vm.define vm_name = "%s-%01d" % [$instance_name_prefix, i] do |node|
@@ -120,6 +131,7 @@ Vagrant.configure("2") do |config|
vb.cpus = $vm_cpus
vb.gui = $vm_gui
vb.linked_clone = true
vb.customize ["modifyvm", :id, "--vram", "8"] # ubuntu defaults to 256 MB which is a waste of precious RAM
end
node.vm.provider :libvirt do |lv|
@@ -165,24 +177,28 @@ Vagrant.configure("2") do |config|
host_vars[vm_name] = {
"ip": ip,
"flannel_interface": "eth1",
"kube_network_plugin": $network_plugin,
"kube_network_plugin_multus": $multi_networking,
"docker_keepcache": "1",
"download_run_once": "True",
"download_localhost": "False"
"download_run_once": "False",
"download_localhost": "False",
"local_path_provisioner_enabled": "#{$local_path_provisioner_enabled}",
"local_path_provisioner_claim_root": "#{$local_path_provisioner_claim_root}"
}
# Only execute the Ansible provisioner once, when all the machines are up and ready.
if i == $num_instances
node.vm.provision "ansible" do |ansible|
ansible.playbook = $playbook
if File.exist?(File.join( $inventory, "hosts.ini"))
ansible.inventory_path = $inventory
$ansible_inventory_path = File.join( $inventory, "hosts.ini")
if File.exist?($ansible_inventory_path)
ansible.inventory_path = $ansible_inventory_path
end
ansible.become = true
ansible.limit = "all"
ansible.host_key_checking = false
ansible.raw_arguments = ["--forks=#{$num_instances}", "--flush-cache", "--ask-become-pass"]
ansible.raw_arguments = ["--forks=#{$num_instances}", "--flush-cache", "-e ansible_become_pass=vagrant"]
ansible.host_vars = host_vars
#ansible.tags = ['download']
ansible.groups = {

_config.yml (new file)

@@ -0,0 +1,2 @@
---
theme: jekyll-theme-slate


@@ -1,32 +1,18 @@
---
- hosts: localhost
gather_facts: false
become: no
tasks:
- name: "Check ansible version !=2.7.0"
- name: "Check ansible version >=2.7.6"
assert:
msg: "Ansible V2.7.0 can't be used until: https://github.com/ansible/ansible/issues/46600 is fixed"
msg: "Ansible must be v2.7.6 or higher"
that:
- ansible_version.string is version("2.7.0", "!=")
- ansible_version.string is version("2.5.0", ">=")
- ansible_version.string is version("2.7.6", ">=")
tags:
- check
vars:
ansible_connection: local
- hosts: localhost
gather_facts: false
tasks:
- name: deploy warning for non kubeadm
debug:
msg: "DEPRECATION: non-kubeadm deployment is deprecated from v2.9. Will be removed in next release."
when: not kubeadm_enabled and not skip_non_kubeadm_warning
- name: deploy cluster for non kubeadm
pause:
prompt: "Are you sure you want to deploy cluster using the deprecated non-kubeadm mode."
echo: no
when: not kubeadm_enabled and not skip_non_kubeadm_warning
- hosts: bastion[0]
gather_facts: False
roles:
@@ -48,13 +34,14 @@
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
vars:
ansible_ssh_pipelining: true
gather_facts: true
gather_facts: false
pre_tasks:
- name: gather facts from all instances
setup:
delegate_to: "{{item}}"
delegate_facts: True
delegate_facts: true
with_items: "{{ groups['k8s-cluster'] + groups['etcd'] + groups['calico-rr']|default([]) }}"
run_once: true
- hosts: k8s-cluster:etcd:calico-rr
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
@@ -96,7 +83,7 @@
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubernetes/kubeadm, tags: kubeadm, when: "kubeadm_enabled" }
- { role: kubernetes/kubeadm, tags: kubeadm}
- { role: network_plugin, tags: network }
- hosts: kube-master[0]
@@ -104,7 +91,7 @@
roles:
- { role: kubespray-defaults}
- { role: kubernetes-apps/rotate_tokens, tags: rotate_tokens, when: "secret_changed|default(false)" }
- { role: win_nodes/kubernetes_patch, tags: ["master", "win_nodes"], when: "kubeadm_enabled" }
- { role: win_nodes/kubernetes_patch, tags: ["master", "win_nodes"]}
- hosts: kube-master
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
@@ -121,16 +108,10 @@
- { role: kubespray-defaults}
- { role: network_plugin/calico/rr, tags: network }
- hosts: k8s-cluster
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: dnsmasq, when: "dns_mode == 'dnsmasq_kubedns'", tags: dnsmasq }
- { role: kubernetes/preinstall, when: "dns_mode != 'none' and resolvconf_mode == 'host_resolvconf'", tags: resolvconf, dns_late: true }
environment: "{{proxy_env}}"
- hosts: kube-master
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubernetes-apps, tags: apps }
- { role: kubernetes/preinstall, when: "dns_mode != 'none' and resolvconf_mode == 'host_resolvconf'", tags: resolvconf, dns_late: true }
environment: "{{proxy_env}}"


@@ -1,3 +1,4 @@
---
apiVersion: "2015-06-15"
virtualNetworkName: "{{ azure_virtual_network_name | default('KubeVNET') }}"
@@ -34,4 +35,3 @@ imageReferenceJson: "{{imageReference|to_json}}"
storageAccountName: "sa{{nameSuffix | replace('-', '')}}"
storageAccountType: "{{ azure_storage_account_type | default('Standard_LRS') }}"


@@ -1,3 +1,4 @@
---
- set_fact:
base_dir: "{{playbook_dir}}/.generated/"


@@ -1,2 +1,3 @@
---
# See distro.yaml for supported node_distro images
node_distro: debian


@@ -1,3 +1,4 @@
---
distro_settings:
debian: &DEBIAN
image: "debian:9.5"


@@ -1,7 +1,7 @@
---
# kubespray-dind.yaml: minimal kubespray ansible playbook usable for DIND
# See contrib/dind/README.md
kube_api_anonymous_auth: true
kubeadm_enabled: true
kubelet_fail_swap_on: false


@@ -1,3 +1,4 @@
---
- name: set_fact distro_setup
set_fact:
distro_setup: "{{ distro_settings[node_distro] }}"
@@ -33,7 +34,7 @@
# Delete docs
path-exclude=/usr/share/doc/*
path-include=/usr/share/doc/*/copyright
dest: /etc/dpkg/dpkg.cfg.d/01_nodoc
dest: /etc/dpkg/dpkg.cfg.d/01_nodoc
when:
- ansible_os_family == 'Debian'
@@ -55,7 +56,7 @@
user:
name: "{{ distro_user }}"
uid: 1000
#groups: sudo
# groups: sudo
append: yes
- name: Allow password-less sudo to "{{ distro_user }}"


@@ -1,3 +1,4 @@
---
- name: set_fact distro_setup
set_fact:
distro_setup: "{{ distro_settings[node_distro] }}"
@@ -18,7 +19,7 @@
state: started
hostname: "{{ item }}"
command: "{{ distro_init }}"
#recreate: yes
# recreate: yes
privileged: true
tmpfs:
- /sys/module/nf_conntrack/parameters


@@ -1,8 +1,6 @@
DISTROS=(debian centos)
NETCHECKER_HOST=${NODES[0]}
EXTRAS=(
'kube_network_plugin=kube-router {"kubeadm_enabled":true,"kube_router_run_service_proxy":false}'
'kube_network_plugin=kube-router {"kubeadm_enabled":true,"kube_router_run_service_proxy":true}'
'kube_network_plugin=kube-router {"kubeadm_enabled":false,"kube_router_run_service_proxy":false}'
'kube_network_plugin=kube-router {"kubeadm_enabled":false,"kube_router_run_service_proxy":true}'
'kube_network_plugin=kube-router {"kube_router_run_service_proxy":false}'
'kube_network_plugin=kube-router {"kube_router_run_service_proxy":true}'
)


@@ -1,8 +1,8 @@
DISTROS=(debian centos)
EXTRAS=(
'kube_network_plugin=calico {"kubeadm_enabled":true}'
'kube_network_plugin=canal {"kubeadm_enabled":true}'
'kube_network_plugin=cilium {"kubeadm_enabled":true}'
'kube_network_plugin=flannel {"kubeadm_enabled":true}'
'kube_network_plugin=weave {"kubeadm_enabled":true}'
'kube_network_plugin=calico {}'
'kube_network_plugin=canal {}'
'kube_network_plugin=cilium {}'
'kube_network_plugin=flannel {}'
'kube_network_plugin=weave {}'
)


@@ -17,6 +17,9 @@
#
# Advanced usage:
# Add another host after initial creation: inventory.py 10.10.1.5
# Add range of hosts: inventory.py 10.10.1.3-10.10.1.5
# Add hosts with different ip and access ip:
# inventory.py 10.0.0.1,192.168.10.1 10.0.0.2,192.168.10.2 10.0.0.3,192.168.1.3
# Delete a host: inventory.py -10.10.1.3
# Delete a host by id: inventory.py -node1
#
@@ -31,21 +34,21 @@
# ip: X.X.X.X
from collections import OrderedDict
try:
import configparser
except ImportError:
import ConfigParser as configparser
from ipaddress import ip_address
from ruamel.yaml import YAML
import os
import re
import sys
ROLES = ['all', 'kube-master', 'kube-node', 'etcd', 'k8s-cluster:children',
'calico-rr', 'vault']
ROLES = ['all', 'kube-master', 'kube-node', 'etcd', 'k8s-cluster',
'calico-rr']
PROTECTED_NAMES = ROLES
AVAILABLE_COMMANDS = ['help', 'print_cfg', 'print_ips', 'load']
_boolean_states = {'1': True, 'yes': True, 'true': True, 'on': True,
'0': False, 'no': False, 'false': False, 'off': False}
yaml = YAML()
yaml.Representer.add_representer(OrderedDict, yaml.Representer.represent_dict)
def get_var_as_bool(name, default):
@@ -54,7 +57,8 @@ def get_var_as_bool(name, default):
# Configurable as shell vars start
CONFIG_FILE = os.environ.get("CONFIG_FILE", "./inventory/sample/hosts.ini")
CONFIG_FILE = os.environ.get("CONFIG_FILE", "./inventory/sample/hosts.yaml")
# Reconfigures cluster distribution at scale
SCALE_THRESHOLD = int(os.environ.get("SCALE_THRESHOLD", 50))
MASSIVE_SCALE_THRESHOLD = int(os.environ.get("SCALE_THRESHOLD", 200))
@@ -68,11 +72,14 @@ HOST_PREFIX = os.environ.get("HOST_PREFIX", "node")
class KubesprayInventory(object):
def __init__(self, changed_hosts=None, config_file=None):
self.config = configparser.ConfigParser(allow_no_value=True,
delimiters=('\t', ' '))
self.config_file = config_file
self.yaml_config = {}
if self.config_file:
self.config.read(self.config_file)
try:
self.hosts_file = open(config_file, 'r')
self.yaml_config = yaml.load(self.hosts_file)
except FileNotFoundError:
pass
if changed_hosts and changed_hosts[0] in AVAILABLE_COMMANDS:
self.parse_command(changed_hosts[0], changed_hosts[1:])
@@ -81,6 +88,7 @@ class KubesprayInventory(object):
self.ensure_required_groups(ROLES)
if changed_hosts:
changed_hosts = self.range2ips(changed_hosts)
self.hosts = self.build_hostnames(changed_hosts)
self.purge_invalid_hosts(self.hosts.keys(), PROTECTED_NAMES)
self.set_all(self.hosts)
@@ -101,8 +109,9 @@ class KubesprayInventory(object):
def write_config(self, config_file):
if config_file:
with open(config_file, 'w') as f:
self.config.write(f)
with open(self.config_file, 'w') as f:
yaml.dump(self.yaml_config, f)
else:
print("WARNING: Unable to save config. Make sure you set "
"CONFIG_FILE env var.")
@@ -112,28 +121,29 @@ class KubesprayInventory(object):
print("DEBUG: {0}".format(msg))
def get_ip_from_opts(self, optstring):
opts = optstring.split(' ')
for opt in opts:
if '=' not in opt:
continue
k, v = opt.split('=')
if k == "ip":
return v
raise ValueError("IP parameter not found in options")
if 'ip' in optstring:
return optstring['ip']
else:
raise ValueError("IP parameter not found in options")
def ensure_required_groups(self, groups):
for group in groups:
try:
if group == 'all':
self.debug("Adding group {0}".format(group))
self.config.add_section(group)
except configparser.DuplicateSectionError:
pass
if group not in self.yaml_config:
all_dict = OrderedDict([('hosts', OrderedDict({})),
('children', OrderedDict({}))])
self.yaml_config = {'all': all_dict}
else:
self.debug("Adding group {0}".format(group))
if group not in self.yaml_config['all']['children']:
self.yaml_config['all']['children'][group] = {'hosts': {}}
def get_host_id(self, host):
'''Returns integer host ID (without padding) from a given hostname.'''
try:
short_hostname = host.split('.')[0]
return int(re.findall("\d+$", short_hostname)[-1])
return int(re.findall("\\d+$", short_hostname)[-1])
except IndexError:
raise ValueError("Host name must end in an integer")
@@ -141,12 +151,12 @@ class KubesprayInventory(object):
existing_hosts = OrderedDict()
highest_host_id = 0
try:
for host, opts in self.config.items('all'):
existing_hosts[host] = opts
for host in self.yaml_config['all']['hosts']:
existing_hosts[host] = self.yaml_config['all']['hosts'][host]
host_id = self.get_host_id(host)
if host_id > highest_host_id:
highest_host_id = host_id
except configparser.NoSectionError:
except Exception:
pass
# FIXME(mattymo): Fix condition where delete then add reuses highest id
@@ -163,22 +173,53 @@ class KubesprayInventory(object):
self.debug("Marked {0} for deletion.".format(realhost))
self.delete_host_by_ip(all_hosts, realhost)
elif host[0].isdigit():
if ',' in host:
ip, access_ip = host.split(',')
else:
ip = host
access_ip = host
if self.exists_hostname(all_hosts, host):
self.debug("Skipping existing host {0}.".format(host))
continue
elif self.exists_ip(all_hosts, host):
self.debug("Skipping existing host {0}.".format(host))
elif self.exists_ip(all_hosts, ip):
self.debug("Skipping existing host {0}.".format(ip))
continue
next_host = "{0}{1}".format(HOST_PREFIX, next_host_id)
next_host_id += 1
all_hosts[next_host] = "ansible_host={0} ip={1}".format(
host, host)
all_hosts[next_host] = {'ansible_host': access_ip,
'ip': ip,
'access_ip': access_ip}
elif host[0].isalpha():
raise Exception("Adding hosts by hostname is not supported.")
return all_hosts
def range2ips(self, hosts):
reworked_hosts = []
def ips(start_address, end_address):
try:
# Python 3.x
start = int(ip_address(start_address))
end = int(ip_address(end_address))
except:
# Python 2.7
start = int(ip_address(unicode(start_address)))
end = int(ip_address(unicode(end_address)))
return [ip_address(ip).exploded for ip in range(start, end + 1)]
for host in hosts:
if '-' in host and not host.startswith('-'):
start, end = host.strip().split('-')
try:
reworked_hosts.extend(ips(start, end))
except ValueError:
raise Exception("Range of ip_addresses isn't valid")
else:
reworked_hosts.append(host)
return reworked_hosts
def exists_hostname(self, existing_hosts, hostname):
return hostname in existing_hosts.keys()
@@ -196,16 +237,34 @@ class KubesprayInventory(object):
raise ValueError("Unable to find host by IP: {0}".format(ip))
def purge_invalid_hosts(self, hostnames, protected_names=[]):
for role in self.config.sections():
for host, _ in self.config.items(role):
for role in self.yaml_config['all']['children']:
if role != 'k8s-cluster' and self.yaml_config['all']['children'][role]['hosts']: # noqa
all_hosts = self.yaml_config['all']['children'][role]['hosts'].copy() # noqa
for host in all_hosts.keys():
if host not in hostnames and host not in protected_names:
self.debug(
"Host {0} removed from role {1}".format(host, role)) # noqa
del self.yaml_config['all']['children'][role]['hosts'][host] # noqa
# purge from all
if self.yaml_config['all']['hosts']:
all_hosts = self.yaml_config['all']['hosts'].copy()
for host in all_hosts.keys():
if host not in hostnames and host not in protected_names:
self.debug("Host {0} removed from role {1}".format(host,
role))
self.config.remove_option(role, host)
self.debug("Host {0} removed from role all".format(host))
del self.yaml_config['all']['hosts'][host]
def add_host_to_group(self, group, host, opts=""):
self.debug("adding host {0} to group {1}".format(host, group))
self.config.set(group, host, opts)
if group == 'all':
if self.yaml_config['all']['hosts'] is None:
self.yaml_config['all']['hosts'] = {host: None}
self.yaml_config['all']['hosts'][host] = opts
elif group != 'k8s-cluster:children':
if self.yaml_config['all']['children'][group]['hosts'] is None:
self.yaml_config['all']['children'][group]['hosts'] = {
host: None}
else:
self.yaml_config['all']['children'][group]['hosts'][host] = None # noqa
def set_kube_master(self, hosts):
for host in hosts:
@@ -216,31 +275,31 @@ class KubesprayInventory(object):
self.add_host_to_group('all', host, opts)
def set_k8s_cluster(self):
self.add_host_to_group('k8s-cluster:children', 'kube-node')
self.add_host_to_group('k8s-cluster:children', 'kube-master')
k8s_cluster = {'children': {'kube-master': None, 'kube-node': None}}
self.yaml_config['all']['children']['k8s-cluster'] = k8s_cluster
def set_calico_rr(self, hosts):
for host in hosts:
if host in self.config.items('kube-master'):
self.debug("Not adding {0} to calico-rr group because it "
"conflicts with kube-master group".format(host))
continue
if host in self.config.items('kube-node'):
self.debug("Not adding {0} to calico-rr group because it "
"conflicts with kube-node group".format(host))
continue
if host in self.yaml_config['all']['children']['kube-master']:
self.debug("Not adding {0} to calico-rr group because it "
"conflicts with kube-master group".format(host))
continue
if host in self.yaml_config['all']['children']['kube-node']:
self.debug("Not adding {0} to calico-rr group because it "
"conflicts with kube-node group".format(host))
continue
self.add_host_to_group('calico-rr', host)
def set_kube_node(self, hosts):
for host in hosts:
if len(self.config['all']) >= SCALE_THRESHOLD:
if self.config.has_option('etcd', host):
if len(self.yaml_config['all']['hosts']) >= SCALE_THRESHOLD:
if host in self.yaml_config['all']['children']['etcd']['hosts']: # noqa
self.debug("Not adding {0} to kube-node group because of "
"scale deployment and host is in etcd "
"group.".format(host))
continue
if len(self.config['all']) >= MASSIVE_SCALE_THRESHOLD:
if self.config.has_option('kube-master', host):
if len(self.yaml_config['all']['hosts']) >= MASSIVE_SCALE_THRESHOLD: # noqa
if host in self.yaml_config['all']['children']['kube-master']['hosts']: # noqa
self.debug("Not adding {0} to kube-node group because of "
"scale deployment and host is in kube-master "
"group.".format(host))
@@ -250,42 +309,31 @@ class KubesprayInventory(object):
def set_etcd(self, hosts):
for host in hosts:
self.add_host_to_group('etcd', host)
self.add_host_to_group('vault', host)
def load_file(self, files=None):
'''Directly loads JSON, or YAML file to inventory.'''
'''Directly loads JSON to inventory.'''
if not files:
raise Exception("No input file specified.")
import json
import yaml
for filename in list(files):
# Try JSON, then YAML
# Try JSON
try:
with open(filename, 'r') as f:
data = json.load(f)
except ValueError:
try:
with open(filename, 'r') as f:
data = yaml.load(f)
print("yaml")
except ValueError:
raise Exception("Cannot read %s as JSON, YAML, or CSV",
filename)
raise Exception("Cannot read %s as JSON, or CSV", filename)
self.ensure_required_groups(ROLES)
self.set_k8s_cluster()
for group, hosts in data.items():
self.ensure_required_groups([group])
for host, opts in hosts.items():
optstring = "ansible_host={0} ip={0}".format(opts['ip'])
for key, val in opts.items():
if key == "ip":
continue
optstring += " {0}={1}".format(key, val)
optstring = {'ansible_host': opts['ip'],
'ip': opts['ip'],
'access_ip': opts['ip']}
self.add_host_to_group('all', host, optstring)
self.add_host_to_group(group, host)
self.write_config(self.config_file)
@@ -313,24 +361,26 @@ print_ips - Write a space-delimited list of IPs from "all" group
Advanced usage:
Add another host after initial creation: inventory.py 10.10.1.5
Add range of hosts: inventory.py 10.10.1.3-10.10.1.5
Add hosts with different ip and access ip: inventory.py 10.0.0.1,192.168.10.1 10.0.0.2,192.168.10.2 10.0.0.3,192.168.10.3
Delete a host: inventory.py -10.10.1.3
Delete a host by id: inventory.py -node1
Configurable env vars:
DEBUG Enable debug printing. Default: True
CONFIG_FILE File to write config to Default: ./inventory/sample/hosts.ini
CONFIG_FILE File to write config to Default: ./inventory/sample/hosts.yaml
HOST_PREFIX Host prefix for generated hosts. Default: node
SCALE_THRESHOLD Separate ETCD role if # of nodes >= 50
MASSIVE_SCALE_THRESHOLD Separate K8s master and ETCD if # of nodes >= 200
'''
''' # noqa
print(help_text)
def print_config(self):
self.config.write(sys.stdout)
yaml.dump(self.yaml_config, sys.stdout)
def print_ips(self):
ips = []
for host, opts in self.config.items('all'):
for host, opts in self.yaml_config['all']['hosts'].items():
ips.append(self.get_ip_from_opts(opts))
print(' '.join(ips))
@@ -340,5 +390,6 @@ def main(argv=None):
argv = sys.argv[1:]
KubesprayInventory(argv, CONFIG_FILE)
if __name__ == "__main__":
sys.exit(main())


@@ -1 +1,3 @@
configparser>=3.3.0
ruamel.yaml>=0.15.88
ipaddress


@@ -34,7 +34,9 @@ class TestInventory(unittest.TestCase):
self.inv = inventory.KubesprayInventory()
def test_get_ip_from_opts(self):
optstring = "ansible_host=10.90.3.2 ip=10.90.3.2"
optstring = {'ansible_host': '10.90.3.2',
'ip': '10.90.3.2',
'access_ip': '10.90.3.2'}
expected = "10.90.3.2"
result = self.inv.get_ip_from_opts(optstring)
self.assertEqual(expected, result)
@@ -48,7 +50,7 @@ class TestInventory(unittest.TestCase):
groups = ['group1', 'group2']
self.inv.ensure_required_groups(groups)
for group in groups:
self.assertTrue(group in self.inv.config.sections())
self.assertTrue(group in self.inv.yaml_config['all']['children'])
def test_get_host_id(self):
hostnames = ['node99', 'no99de01', '01node01', 'node1.domain',
@@ -67,35 +69,49 @@ class TestInventory(unittest.TestCase):
def test_build_hostnames_add_one(self):
changed_hosts = ['10.90.0.2']
expected = OrderedDict([('node1',
'ansible_host=10.90.0.2 ip=10.90.0.2')])
{'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'})])
result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)
def test_build_hostnames_add_duplicate(self):
changed_hosts = ['10.90.0.2']
expected = OrderedDict([('node1',
'ansible_host=10.90.0.2 ip=10.90.0.2')])
self.inv.config['all'] = expected
{'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'})])
self.inv.yaml_config['all']['hosts'] = expected
result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)
def test_build_hostnames_add_two(self):
changed_hosts = ['10.90.0.2', '10.90.0.3']
expected = OrderedDict([
('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
self.inv.config['all'] = OrderedDict()
('node1', {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'}),
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'})])
self.inv.yaml_config['all']['hosts'] = OrderedDict()
result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)
def test_build_hostnames_delete_first(self):
changed_hosts = ['-10.90.0.2']
existing_hosts = OrderedDict([
('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
self.inv.config['all'] = existing_hosts
('node1', {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'}),
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'})])
self.inv.yaml_config['all']['hosts'] = existing_hosts
expected = OrderedDict([
('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'})])
result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)
@@ -103,8 +119,12 @@ class TestInventory(unittest.TestCase):
hostname = 'node1'
expected = True
existing_hosts = OrderedDict([
('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
('node1', {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'}),
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'})])
result = self.inv.exists_hostname(existing_hosts, hostname)
self.assertEqual(expected, result)
@@ -112,8 +132,12 @@ class TestInventory(unittest.TestCase):
hostname = 'node99'
expected = False
existing_hosts = OrderedDict([
('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
('node1', {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'}),
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'})])
result = self.inv.exists_hostname(existing_hosts, hostname)
self.assertEqual(expected, result)
@@ -121,8 +145,12 @@ class TestInventory(unittest.TestCase):
ip = '10.90.0.2'
expected = True
existing_hosts = OrderedDict([
('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
('node1', {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'}),
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'})])
result = self.inv.exists_ip(existing_hosts, ip)
self.assertEqual(expected, result)
@@ -130,26 +158,40 @@ class TestInventory(unittest.TestCase):
ip = '10.90.0.200'
expected = False
existing_hosts = OrderedDict([
('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
('node1', {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'}),
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'})])
result = self.inv.exists_ip(existing_hosts, ip)
self.assertEqual(expected, result)
def test_delete_host_by_ip_positive(self):
ip = '10.90.0.2'
expected = OrderedDict([
('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'})])
existing_hosts = OrderedDict([
('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
('node1', {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'}),
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'})])
self.inv.delete_host_by_ip(existing_hosts, ip)
self.assertEqual(expected, existing_hosts)
def test_delete_host_by_ip_negative(self):
ip = '10.90.0.200'
existing_hosts = OrderedDict([
('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
('node1', {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'}),
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'})])
self.assertRaisesRegexp(ValueError, "Unable to find host",
self.inv.delete_host_by_ip, existing_hosts, ip)
@@ -157,59 +199,71 @@ class TestInventory(unittest.TestCase):
proper_hostnames = ['node1', 'node2']
bad_host = 'doesnotbelong2'
existing_hosts = OrderedDict([
('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3'),
('doesnotbelong2', 'whateveropts=ilike')])
self.inv.config['all'] = existing_hosts
('node1', {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'}),
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'}),
('doesnotbelong2', {'whateveropts=ilike'})])
self.inv.yaml_config['all']['hosts'] = existing_hosts
self.inv.purge_invalid_hosts(proper_hostnames)
self.assertTrue(bad_host not in self.inv.config['all'].keys())
self.assertTrue(
bad_host not in self.inv.yaml_config['all']['hosts'].keys())
def test_add_host_to_group(self):
group = 'etcd'
host = 'node1'
opts = 'ip=10.90.0.2'
opts = {'ip': '10.90.0.2'}
self.inv.add_host_to_group(group, host, opts)
self.assertEqual(self.inv.config[group].get(host), opts)
self.assertEqual(
self.inv.yaml_config['all']['children'][group]['hosts'].get(host),
None)
def test_set_kube_master(self):
group = 'kube-master'
host = 'node1'
self.inv.set_kube_master([host])
self.assertTrue(host in self.inv.config[group])
self.assertTrue(
host in self.inv.yaml_config['all']['children'][group]['hosts'])
def test_set_all(self):
group = 'all'
hosts = OrderedDict([
('node1', 'opt1'),
('node2', 'opt2')])
self.inv.set_all(hosts)
for host, opt in hosts.items():
self.assertEqual(self.inv.config[group].get(host), opt)
self.assertEqual(
self.inv.yaml_config['all']['hosts'].get(host), opt)
def test_set_k8s_cluster(self):
group = 'k8s-cluster:children'
group = 'k8s-cluster'
expected_hosts = ['kube-node', 'kube-master']
self.inv.set_k8s_cluster()
for host in expected_hosts:
self.assertTrue(host in self.inv.config[group])
self.assertTrue(
host in
self.inv.yaml_config['all']['children'][group]['children'])
def test_set_kube_node(self):
group = 'kube-node'
host = 'node1'
self.inv.set_kube_node([host])
self.assertTrue(host in self.inv.config[group])
self.assertTrue(
host in self.inv.yaml_config['all']['children'][group]['hosts'])
def test_set_etcd(self):
group = 'etcd'
host = 'node1'
self.inv.set_etcd([host])
self.assertTrue(host in self.inv.config[group])
self.assertTrue(
host in self.inv.yaml_config['all']['children'][group]['hosts'])
def test_scale_scenario_one(self):
num_nodes = 50
@@ -219,11 +273,13 @@ class TestInventory(unittest.TestCase):
hosts["node" + str(hostid)] = ""
self.inv.set_all(hosts)
self.inv.set_etcd(hosts.keys()[0:3])
self.inv.set_kube_master(hosts.keys()[0:2])
self.inv.set_etcd(list(hosts.keys())[0:3])
self.inv.set_kube_master(list(hosts.keys())[0:2])
self.inv.set_kube_node(hosts.keys())
for h in range(3):
self.assertFalse(hosts.keys()[h] in self.inv.config['kube-node'])
self.assertFalse(
list(hosts.keys())[h] in
self.inv.yaml_config['all']['children']['kube-node']['hosts'])
def test_scale_scenario_two(self):
num_nodes = 500
@@ -233,8 +289,57 @@ class TestInventory(unittest.TestCase):
hosts["node" + str(hostid)] = ""
self.inv.set_all(hosts)
self.inv.set_etcd(hosts.keys()[0:3])
self.inv.set_kube_master(hosts.keys()[3:5])
self.inv.set_etcd(list(hosts.keys())[0:3])
self.inv.set_kube_master(list(hosts.keys())[3:5])
self.inv.set_kube_node(hosts.keys())
for h in range(5):
self.assertFalse(hosts.keys()[h] in self.inv.config['kube-node'])
self.assertFalse(
list(hosts.keys())[h] in
self.inv.yaml_config['all']['children']['kube-node']['hosts'])
def test_range2ips_range(self):
changed_hosts = ['10.90.0.2', '10.90.0.4-10.90.0.6', '10.90.0.8']
expected = ['10.90.0.2',
'10.90.0.4',
'10.90.0.5',
'10.90.0.6',
'10.90.0.8']
result = self.inv.range2ips(changed_hosts)
self.assertEqual(expected, result)
def test_range2ips_incorrect_range(self):
host_range = ['10.90.0.4-a.9b.c.e']
self.assertRaisesRegexp(Exception, "Range of ip_addresses isn't valid",
self.inv.range2ips, host_range)
def test_build_hostnames_different_ips_add_one(self):
changed_hosts = ['10.90.0.2,192.168.0.2']
expected = OrderedDict([('node1',
{'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'})])
result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)
def test_build_hostnames_different_ips_add_duplicate(self):
changed_hosts = ['10.90.0.2,192.168.0.2']
expected = OrderedDict([('node1',
{'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'})])
self.inv.yaml_config['all']['hosts'] = expected
result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)
def test_build_hostnames_different_ips_add_two(self):
changed_hosts = ['10.90.0.2,192.168.0.2', '10.90.0.3,192.168.0.3']
expected = OrderedDict([
('node1', {'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'}),
('node2', {'ansible_host': '192.168.0.3',
'ip': '10.90.0.3',
'access_ip': '192.168.0.3'})])
self.inv.yaml_config['all']['hosts'] = OrderedDict()
result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)


@@ -2,8 +2,8 @@
- name: Upgrade all packages to the latest version (yum)
yum:
name: '*'
state: latest
name: '*'
state: latest
when: ansible_os_family == "RedHat"
- name: Install required packages


@@ -6,5 +6,5 @@ This playbook aims to automate [this](https://metallb.universe.tf/tutorial/layer
## Install
```
ansible-playbook --ask-become -i inventory/sample/k8s_heketi_inventory.yml contrib/metallb/metallb.yml
ansible-playbook --ask-become -i inventory/sample/hosts.ini contrib/metallb/metallb.yml
```

contrib/metallb/library (symbolic link)

@@ -0,0 +1 @@
../../library


@@ -5,3 +5,4 @@ metallb:
cpu: "100m"
memory: "100Mi"
port: "7472"
version: v0.7.3


@@ -12,6 +12,7 @@
kubectl: "{{bin_dir}}/kubectl"
filename: "{{ kube_config_dir }}/{{ item.item }}"
state: "{{ item.changed | ternary('latest','present') }}"
become: true
with_items: "{{ rendering.results }}"
when:
- "inventory_hostname == groups['kube-master'][0]"


@@ -53,22 +53,6 @@ rules:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: metallb-system
name: leader-election
labels:
app: metallb
rules:
- apiGroups: [""]
resources: ["endpoints"]
resourceNames: ["metallb-speaker"]
verbs: ["get", "update"]
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: metallb-system
name: config-watcher
@@ -131,21 +115,6 @@ roleRef:
kind: Role
name: config-watcher
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
namespace: metallb-system
name: leader-election
labels:
app: metallb
subjects:
- kind: ServiceAccount
name: speaker
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: leader-election
---
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
@@ -173,7 +142,7 @@ spec:
hostNetwork: true
containers:
- name: speaker
image: metallb/speaker:v0.6.2
image: metallb/speaker:{{ metallb.version }}
imagePullPolicy: IfNotPresent
args:
- --port={{ metallb.port }}
@@ -230,7 +199,7 @@ spec:
runAsUser: 65534 # nobody
containers:
- name: controller
image: metallb/controller:v0.6.2
image: metallb/controller:{{ metallb.version }}
imagePullPolicy: IfNotPresent
args:
- --port={{ metallb.port }}
@@ -250,5 +219,3 @@ spec:
readOnlyRootFilesystem: true
---


@@ -4,7 +4,7 @@
vars:
ansible_ssh_pipelining: false
roles:
- { role: bootstrap-os, tags: bootstrap-os}
- { role: bootstrap-os, tags: bootstrap-os}
- hosts: all
gather_facts: true
@@ -22,4 +22,3 @@
- hosts: kube-master[0]
roles:
- { role: kubernetes-pv }


@@ -22,9 +22,9 @@ galaxy_info:
- wheezy
- jessie
galaxy_tags:
- system
- networking
- cloud
- clustering
- files
- sharing
- system
- networking
- cloud
- clustering
- files
- sharing


@@ -12,5 +12,5 @@
- name: Ensure Gluster mount directories exist.
file: "path={{ item }} state=directory mode=0775"
with_items:
- "{{ gluster_mount_dir }}"
- "{{ gluster_mount_dir }}"
when: ansible_os_family in ["Debian","RedHat"] and groups['gfs-cluster'] is defined


@@ -22,9 +22,9 @@ galaxy_info:
- wheezy
- jessie
galaxy_tags:
- system
- networking
- cloud
- clustering
- files
- sharing
- system
- networking
- cloud
- clustering
- files
- sharing


@@ -33,24 +33,24 @@
- name: Ensure Gluster brick and mount directories exist.
file: "path={{ item }} state=directory mode=0775"
with_items:
- "{{ gluster_brick_dir }}"
- "{{ gluster_mount_dir }}"
- "{{ gluster_brick_dir }}"
- "{{ gluster_mount_dir }}"
- name: Configure Gluster volume.
gluster_volume:
state: present
name: "{{ gluster_brick_name }}"
brick: "{{ gluster_brick_dir }}"
replicas: "{{ groups['gfs-cluster'] | length }}"
cluster: "{% for item in groups['gfs-cluster'] -%}{{ hostvars[item]['ip']|default(hostvars[item].ansible_default_ipv4['address']) }}{% if not loop.last %},{% endif %}{%- endfor %}"
host: "{{ inventory_hostname }}"
force: yes
state: present
name: "{{ gluster_brick_name }}"
brick: "{{ gluster_brick_dir }}"
replicas: "{{ groups['gfs-cluster'] | length }}"
cluster: "{% for item in groups['gfs-cluster'] -%}{{ hostvars[item]['ip']|default(hostvars[item].ansible_default_ipv4['address']) }}{% if not loop.last %},{% endif %}{%- endfor %}"
host: "{{ inventory_hostname }}"
force: yes
run_once: true
- name: Mount glusterfs to retrieve disk size
mount:
name: "{{ gluster_mount_dir }}"
src: "{{ ip|default(ansible_default_ipv4['address']) }}:/gluster"
src: "{{ ip|default(ansible_default_ipv4['address']) }}:/gluster"
fstype: glusterfs
opts: "defaults,_netdev"
state: mounted
@@ -63,13 +63,13 @@
- name: Set Gluster disk size to variable
set_fact:
gluster_disk_size_gb: "{{ (mounts_data.ansible_facts.ansible_mounts | selectattr('mount', 'equalto', gluster_mount_dir) | map(attribute='size_total') | first | int / (1024*1024*1024)) | int }}"
gluster_disk_size_gb: "{{ (mounts_data.ansible_facts.ansible_mounts | selectattr('mount', 'equalto', gluster_mount_dir) | map(attribute='size_total') | first | int / (1024*1024*1024)) | int }}"
when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]
- name: Create file on GlusterFS
template:
dest: "{{ gluster_mount_dir }}/.test-file.txt"
src: test-file.txt
dest: "{{ gluster_mount_dir }}/.test-file.txt"
src: test-file.txt
when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]
- name: Unmount glusterfs
@@ -79,4 +79,3 @@
src: "{{ ip|default(ansible_default_ipv4['address']) }}:/gluster"
state: unmounted
when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]


@@ -2,9 +2,9 @@
- name: Kubernetes Apps | Lay Down k8s GlusterFS Endpoint and PV
template: src={{item.file}} dest={{kube_config_dir}}/{{item.dest}}
with_items:
- { file: glusterfs-kubernetes-endpoint.json.j2, type: ep, dest: glusterfs-kubernetes-endpoint.json}
- { file: glusterfs-kubernetes-pv.yml.j2, type: pv, dest: glusterfs-kubernetes-pv.yml}
- { file: glusterfs-kubernetes-endpoint-svc.json.j2, type: svc, dest: glusterfs-kubernetes-endpoint-svc.json}
- { file: glusterfs-kubernetes-endpoint.json.j2, type: ep, dest: glusterfs-kubernetes-endpoint.json}
- { file: glusterfs-kubernetes-pv.yml.j2, type: pv, dest: glusterfs-kubernetes-pv.yml}
- { file: glusterfs-kubernetes-endpoint-svc.json.j2, type: svc, dest: glusterfs-kubernetes-endpoint-svc.json}
register: gluster_pv
when: inventory_hostname == groups['kube-master'][0] and groups['gfs-cluster'] is defined and hostvars[groups['gfs-cluster'][0]].gluster_disk_size_gb is defined


@@ -1,2 +1,3 @@
---
dependencies:
- {role: kubernetes-pv/ansible, tags: apps}


@@ -2,23 +2,23 @@
- name: "Load lvm kernel modules"
become: true
with_items:
- "dm_snapshot"
- "dm_mirror"
- "dm_thin_pool"
- "dm_snapshot"
- "dm_mirror"
- "dm_thin_pool"
modprobe:
name: "{{ item }}"
state: "present"
name: "{{ item }}"
state: "present"
- name: "Install glusterfs mount utils (RedHat)"
become: true
yum:
name: "glusterfs-fuse"
state: "present"
name: "glusterfs-fuse"
state: "present"
when: "ansible_os_family == 'RedHat'"
- name: "Install glusterfs mount utils (Debian)"
become: true
apt:
name: "glusterfs-client"
state: "present"
name: "glusterfs-client"
state: "present"
when: "ansible_os_family == 'Debian'"


@@ -1,3 +1,4 @@
---
# Bootstrap heketi
- name: "Get state of heketi service, deployment and pods."
register: "initial_heketi_state"


@@ -18,7 +18,7 @@
deployments_query: "items[?kind=='Deployment'].status.conditions|[0][?type=='Available'].status|[0]"
command: "{{ bin_dir }}/kubectl get services,deployments,pods --selector=deploy-heketi --output=json"
until:
- "initial_heketi_state.stdout|from_json|json_query(pods_query) == 'True'"
- "initial_heketi_state.stdout|from_json|json_query(deployments_query) == 'True'"
- "initial_heketi_state.stdout|from_json|json_query(pods_query) == 'True'"
- "initial_heketi_state.stdout|from_json|json_query(deployments_query) == 'True'"
retries: 60
delay: 5


@@ -38,4 +38,4 @@
vars: { volume: "{{ volume_information.stdout|from_json }}" }
when: "volume.name == 'heketidbstorage'"
- name: "Ensure heketi database volume exists."
assert: { that: "heketi_database_volume_created is defined" , msg: "Heketi database volume does not exist." }
assert: { that: "heketi_database_volume_created is defined", msg: "Heketi database volume does not exist." }


@@ -18,8 +18,8 @@
deployments_query: "items[?kind=='Deployment'].status.conditions|[0][?type=='Available'].status|[0]"
command: "{{ bin_dir }}/kubectl get deployments,pods --selector=glusterfs --output=json"
until:
- "heketi_state.stdout|from_json|json_query(pods_query) == 'True'"
- "heketi_state.stdout|from_json|json_query(deployments_query) == 'True'"
- "heketi_state.stdout|from_json|json_query(pods_query) == 'True'"
- "heketi_state.stdout|from_json|json_query(deployments_query) == 'True'"
retries: 60
delay: 5
- set_fact:


@@ -8,7 +8,9 @@
- register: "clusterrolebinding_state"
command: "{{bin_dir}}/kubectl get clusterrolebinding heketi-gluster-admin -o=name --ignore-not-found=true"
changed_when: false
- assert: { that: "clusterrolebinding_state.stdout != \"\"", message: "Cluster role binding is not present." }
- assert:
that: "clusterrolebinding_state.stdout != \"\""
msg: "Cluster role binding is not present."
- register: "secret_state"
command: "{{bin_dir}}/kubectl get secret heketi-config-secret -o=name --ignore-not-found=true"
@@ -24,4 +26,6 @@
- register: "secret_state"
command: "{{bin_dir}}/kubectl get secret heketi-config-secret -o=name --ignore-not-found=true"
changed_when: false
- assert: { that: "secret_state.stdout != \"\"", message: "Heketi config secret is not present." }
- assert:
that: "secret_state.stdout != \"\""
msg: "Heketi config secret is not present."


@@ -8,7 +8,7 @@
register: "heketi_service"
changed_when: false
- name: "Ensure heketi service is available."
assert: { that: "heketi_service.stdout != \"\"" }
assert: { that: "heketi_service.stdout != \"\"" }
- name: "Render storage class configuration."
become: true
vars:


@@ -2,15 +2,15 @@
- name: "Install lvm utils (RedHat)"
become: true
yum:
name: "lvm2"
state: "present"
name: "lvm2"
state: "present"
when: "ansible_os_family == 'RedHat'"
- name: "Install lvm utils (Debian)"
become: true
apt:
name: "lvm2"
state: "present"
name: "lvm2"
state: "present"
when: "ansible_os_family == 'Debian'"
- name: "Get volume group information."
@@ -34,13 +34,13 @@
- name: "Remove lvm utils (RedHat)"
become: true
yum:
name: "lvm2"
state: "absent"
name: "lvm2"
state: "absent"
when: "ansible_os_family == 'RedHat'"
- name: "Remove lvm utils (Debian)"
become: true
apt:
name: "lvm2"
state: "absent"
name: "lvm2"
state: "absent"
when: "ansible_os_family == 'Debian'"

contrib/terraform/OWNERS

@@ -0,0 +1,5 @@
# See the OWNERS docs at https://go.k8s.io/owners
approvers:
- holmsten
- miouge1


@@ -1,11 +1,11 @@
terraform {
required_version = ">= 0.8.7"
}
provider "aws" {
access_key = "${var.AWS_ACCESS_KEY_ID}"
secret_key = "${var.AWS_SECRET_ACCESS_KEY}"
region = "${var.AWS_DEFAULT_REGION}"
}
data "aws_availability_zones" "available" {}
@@ -18,33 +18,30 @@ data "aws_availability_zones" "available" {}
module "aws-vpc" {
source = "modules/vpc"
aws_cluster_name = "${var.aws_cluster_name}"
aws_vpc_cidr_block = "${var.aws_vpc_cidr_block}"
aws_avail_zones = "${slice(data.aws_availability_zones.available.names,0,2)}"
aws_cidr_subnets_private = "${var.aws_cidr_subnets_private}"
aws_cidr_subnets_public = "${var.aws_cidr_subnets_public}"
default_tags = "${var.default_tags}"
}
module "aws-elb" {
source = "modules/elb"
aws_cluster_name="${var.aws_cluster_name}"
aws_vpc_id="${module.aws-vpc.aws_vpc_id}"
aws_avail_zones="${slice(data.aws_availability_zones.available.names,0,2)}"
aws_subnet_ids_public="${module.aws-vpc.aws_subnet_ids_public}"
aws_elb_api_port = "${var.aws_elb_api_port}"
k8s_secure_api_port = "${var.k8s_secure_api_port}"
default_tags="${var.default_tags}"
aws_cluster_name = "${var.aws_cluster_name}"
aws_vpc_id = "${module.aws-vpc.aws_vpc_id}"
aws_avail_zones = "${slice(data.aws_availability_zones.available.names,0,2)}"
aws_subnet_ids_public = "${module.aws-vpc.aws_subnet_ids_public}"
aws_elb_api_port = "${var.aws_elb_api_port}"
k8s_secure_api_port = "${var.k8s_secure_api_port}"
default_tags = "${var.default_tags}"
}
module "aws-iam" {
source = "modules/iam"
aws_cluster_name="${var.aws_cluster_name}"
aws_cluster_name = "${var.aws_cluster_name}"
}
/*
@@ -53,50 +50,44 @@ module "aws-iam" {
*/
resource "aws_instance" "bastion-server" {
ami = "${data.aws_ami.distro.id}"
instance_type = "${var.aws_bastion_size}"
count = "${length(var.aws_cidr_subnets_public)}"
associate_public_ip_address = true
availability_zone = "${element(slice(data.aws_availability_zones.available.names,0,2),count.index)}"
subnet_id = "${element(module.aws-vpc.aws_subnet_ids_public,count.index)}"
ami = "${data.aws_ami.distro.id}"
instance_type = "${var.aws_bastion_size}"
count = "${length(var.aws_cidr_subnets_public)}"
associate_public_ip_address = true
availability_zone = "${element(slice(data.aws_availability_zones.available.names,0,2),count.index)}"
subnet_id = "${element(module.aws-vpc.aws_subnet_ids_public,count.index)}"
vpc_security_group_ids = ["${module.aws-vpc.aws_security_group}"]
vpc_security_group_ids = [ "${module.aws-vpc.aws_security_group}" ]
key_name = "${var.AWS_SSH_KEY_NAME}"
key_name = "${var.AWS_SSH_KEY_NAME}"
tags = "${merge(var.default_tags, map(
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-bastion-${count.index}",
"Cluster", "${var.aws_cluster_name}",
"Role", "bastion-${var.aws_cluster_name}-${count.index}"
))}"
}
/*
* Create K8s Master and worker nodes and etcd instances
*
*/
resource "aws_instance" "k8s-master" {
ami = "${data.aws_ami.distro.id}"
instance_type = "${var.aws_kube_master_size}"
ami = "${data.aws_ami.distro.id}"
instance_type = "${var.aws_kube_master_size}"
count = "${var.aws_kube_master_num}"
count = "${var.aws_kube_master_num}"
availability_zone = "${element(slice(data.aws_availability_zones.available.names,0,2),count.index)}"
subnet_id = "${element(module.aws-vpc.aws_subnet_ids_private,count.index)}"
availability_zone = "${element(slice(data.aws_availability_zones.available.names,0,2),count.index)}"
subnet_id = "${element(module.aws-vpc.aws_subnet_ids_private,count.index)}"
vpc_security_group_ids = ["${module.aws-vpc.aws_security_group}"]
iam_instance_profile = "${module.aws-iam.kube-master-profile}"
key_name = "${var.AWS_SSH_KEY_NAME}"
vpc_security_group_ids = [ "${module.aws-vpc.aws_security_group}" ]
iam_instance_profile = "${module.aws-iam.kube-master-profile}"
key_name = "${var.AWS_SSH_KEY_NAME}"
tags = "${merge(var.default_tags, map(
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-master${count.index}",
"kubernetes.io/cluster/${var.aws_cluster_name}", "member",
"Role", "master"
@@ -104,88 +95,77 @@ resource "aws_instance" "k8s-master" {
}
resource "aws_elb_attachment" "attach_master_nodes" {
count = "${var.aws_kube_master_num}"
count = "${var.aws_kube_master_num}"
elb = "${module.aws-elb.aws_elb_api_id}"
instance = "${element(aws_instance.k8s-master.*.id,count.index)}"
}
resource "aws_instance" "k8s-etcd" {
ami = "${data.aws_ami.distro.id}"
instance_type = "${var.aws_etcd_size}"
ami = "${data.aws_ami.distro.id}"
instance_type = "${var.aws_etcd_size}"
count = "${var.aws_etcd_num}"
count = "${var.aws_etcd_num}"
availability_zone = "${element(slice(data.aws_availability_zones.available.names,0,2),count.index)}"
subnet_id = "${element(module.aws-vpc.aws_subnet_ids_private,count.index)}"
availability_zone = "${element(slice(data.aws_availability_zones.available.names,0,2),count.index)}"
subnet_id = "${element(module.aws-vpc.aws_subnet_ids_private,count.index)}"
vpc_security_group_ids = ["${module.aws-vpc.aws_security_group}"]
key_name = "${var.AWS_SSH_KEY_NAME}"
vpc_security_group_ids = [ "${module.aws-vpc.aws_security_group}" ]
key_name = "${var.AWS_SSH_KEY_NAME}"
tags = "${merge(var.default_tags, map(
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-etcd${count.index}",
"kubernetes.io/cluster/${var.aws_cluster_name}", "member",
"Role", "etcd"
))}"
}
resource "aws_instance" "k8s-worker" {
ami = "${data.aws_ami.distro.id}"
instance_type = "${var.aws_kube_worker_size}"
ami = "${data.aws_ami.distro.id}"
instance_type = "${var.aws_kube_worker_size}"
count = "${var.aws_kube_worker_num}"
count = "${var.aws_kube_worker_num}"
availability_zone = "${element(slice(data.aws_availability_zones.available.names,0,2),count.index)}"
subnet_id = "${element(module.aws-vpc.aws_subnet_ids_private,count.index)}"
availability_zone = "${element(slice(data.aws_availability_zones.available.names,0,2),count.index)}"
subnet_id = "${element(module.aws-vpc.aws_subnet_ids_private,count.index)}"
vpc_security_group_ids = [ "${module.aws-vpc.aws_security_group}" ]
vpc_security_group_ids = ["${module.aws-vpc.aws_security_group}"]
iam_instance_profile = "${module.aws-iam.kube-worker-profile}"
key_name = "${var.AWS_SSH_KEY_NAME}"
iam_instance_profile = "${module.aws-iam.kube-worker-profile}"
key_name = "${var.AWS_SSH_KEY_NAME}"
tags = "${merge(var.default_tags, map(
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-worker${count.index}",
"kubernetes.io/cluster/${var.aws_cluster_name}", "member",
"Role", "worker"
))}"
}
/*
* Create Kubespray Inventory File
*
*/
data "template_file" "inventory" {
template = "${file("${path.module}/templates/inventory.tpl")}"
vars {
public_ip_address_bastion = "${join("\n",formatlist("bastion ansible_host=%s" , aws_instance.bastion-server.*.public_ip))}"
connection_strings_master = "${join("\n",formatlist("%s ansible_host=%s",aws_instance.k8s-master.*.tags.Name, aws_instance.k8s-master.*.private_ip))}"
connection_strings_node = "${join("\n", formatlist("%s ansible_host=%s", aws_instance.k8s-worker.*.tags.Name, aws_instance.k8s-worker.*.private_ip))}"
connection_strings_etcd = "${join("\n",formatlist("%s ansible_host=%s", aws_instance.k8s-etcd.*.tags.Name, aws_instance.k8s-etcd.*.private_ip))}"
list_master = "${join("\n",aws_instance.k8s-master.*.tags.Name)}"
list_node = "${join("\n",aws_instance.k8s-worker.*.tags.Name)}"
list_etcd = "${join("\n",aws_instance.k8s-etcd.*.tags.Name)}"
elb_api_fqdn = "apiserver_loadbalancer_domain_name=\"${module.aws-elb.aws_elb_api_fqdn}\""
}
template = "${file("${path.module}/templates/inventory.tpl")}"
vars {
public_ip_address_bastion = "${join("\n",formatlist("bastion ansible_host=%s" , aws_instance.bastion-server.*.public_ip))}"
connection_strings_master = "${join("\n",formatlist("%s ansible_host=%s",aws_instance.k8s-master.*.tags.Name, aws_instance.k8s-master.*.private_ip))}"
connection_strings_node = "${join("\n", formatlist("%s ansible_host=%s", aws_instance.k8s-worker.*.tags.Name, aws_instance.k8s-worker.*.private_ip))}"
connection_strings_etcd = "${join("\n",formatlist("%s ansible_host=%s", aws_instance.k8s-etcd.*.tags.Name, aws_instance.k8s-etcd.*.private_ip))}"
list_master = "${join("\n",aws_instance.k8s-master.*.tags.Name)}"
list_node = "${join("\n",aws_instance.k8s-worker.*.tags.Name)}"
list_etcd = "${join("\n",aws_instance.k8s-etcd.*.tags.Name)}"
elb_api_fqdn = "apiserver_loadbalancer_domain_name=\"${module.aws-elb.aws_elb_api_fqdn}\""
}
}
resource "null_resource" "inventories" {
provisioner "local-exec" {
command = "echo '${data.template_file.inventory.rendered}' > ${var.inventory_file}"
command = "echo '${data.template_file.inventory.rendered}' > ${var.inventory_file}"
}
triggers {
template = "${data.template_file.inventory.rendered}"
template = "${data.template_file.inventory.rendered}"
}
}


@@ -1,55 +1,54 @@
resource "aws_security_group" "aws-elb" {
name = "kubernetes-${var.aws_cluster_name}-securitygroup-elb"
vpc_id = "${var.aws_vpc_id}"
name = "kubernetes-${var.aws_cluster_name}-securitygroup-elb"
vpc_id = "${var.aws_vpc_id}"
tags = "${merge(var.default_tags, map(
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-securitygroup-elb"
))}"
}
resource "aws_security_group_rule" "aws-allow-api-access" {
type = "ingress"
from_port = "${var.aws_elb_api_port}"
to_port = "${var.k8s_secure_api_port}"
protocol = "TCP"
cidr_blocks = ["0.0.0.0/0"]
security_group_id = "${aws_security_group.aws-elb.id}"
type = "ingress"
from_port = "${var.aws_elb_api_port}"
to_port = "${var.k8s_secure_api_port}"
protocol = "TCP"
cidr_blocks = ["0.0.0.0/0"]
security_group_id = "${aws_security_group.aws-elb.id}"
}
resource "aws_security_group_rule" "aws-allow-api-egress" {
type = "egress"
from_port = 0
to_port = 65535
protocol = "TCP"
cidr_blocks = ["0.0.0.0/0"]
security_group_id = "${aws_security_group.aws-elb.id}"
type = "egress"
from_port = 0
to_port = 65535
protocol = "TCP"
cidr_blocks = ["0.0.0.0/0"]
security_group_id = "${aws_security_group.aws-elb.id}"
}
# Create a new AWS ELB for K8S API
resource "aws_elb" "aws-elb-api" {
name = "kubernetes-elb-${var.aws_cluster_name}"
subnets = ["${var.aws_subnet_ids_public}"]
name = "kubernetes-elb-${var.aws_cluster_name}"
subnets = ["${var.aws_subnet_ids_public}"]
security_groups = ["${aws_security_group.aws-elb.id}"]
listener {
instance_port = "${var.k8s_secure_api_port}"
instance_protocol = "tcp"
lb_port = "${var.aws_elb_api_port}"
lb_protocol = "tcp"
}
health_check {
healthy_threshold = 2
unhealthy_threshold = 2
timeout = 3
target = "TCP:${var.k8s_secure_api_port}"
interval = 30
}
cross_zone_load_balancing = true
idle_timeout = 400
connection_draining = true
connection_draining_timeout = 400
tags = "${merge(var.default_tags, map(


@@ -1,7 +1,7 @@
output "aws_elb_api_id" {
value = "${aws_elb.aws-elb-api.id}"
value = "${aws_elb.aws-elb-api.id}"
}
output "aws_elb_api_fqdn" {
value = "${aws_elb.aws-elb-api.dns_name}"
value = "${aws_elb.aws-elb-api.dns_name}"
}


@@ -1,33 +1,30 @@
variable "aws_cluster_name" {
description = "Name of Cluster"
description = "Name of Cluster"
}
variable "aws_vpc_id" {
description = "AWS VPC ID"
description = "AWS VPC ID"
}
variable "aws_elb_api_port" {
description = "Port for AWS ELB"
description = "Port for AWS ELB"
}
variable "k8s_secure_api_port" {
description = "Secure Port of K8S API Server"
description = "Secure Port of K8S API Server"
}
variable "aws_avail_zones" {
description = "Availability Zones Used"
type = "list"
description = "Availability Zones Used"
type = "list"
}
variable "aws_subnet_ids_public" {
description = "IDs of Public Subnets"
type = "list"
description = "IDs of Public Subnets"
type = "list"
}
variable "default_tags" {
description = "Tags for all resources"
type = "map"
description = "Tags for all resources"
type = "map"
}


@@ -1,8 +1,9 @@
#Add AWS Roles for Kubernetes
resource "aws_iam_role" "kube-master" {
name = "kubernetes-${var.aws_cluster_name}-master"
assume_role_policy = <<EOF
name = "kubernetes-${var.aws_cluster_name}-master"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
@@ -19,8 +20,9 @@ EOF
}
resource "aws_iam_role" "kube-worker" {
name = "kubernetes-${var.aws_cluster_name}-node"
assume_role_policy = <<EOF
name = "kubernetes-${var.aws_cluster_name}-node"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
@@ -39,9 +41,10 @@ EOF
#Add AWS Policies for Kubernetes
resource "aws_iam_role_policy" "kube-master" {
name = "kubernetes-${var.aws_cluster_name}-master"
role = "${aws_iam_role.kube-master.id}"
policy = <<EOF
name = "kubernetes-${var.aws_cluster_name}-master"
role = "${aws_iam_role.kube-master.id}"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
@@ -73,9 +76,10 @@ EOF
}
resource "aws_iam_role_policy" "kube-worker" {
name = "kubernetes-${var.aws_cluster_name}-node"
role = "${aws_iam_role.kube-worker.id}"
policy = <<EOF
name = "kubernetes-${var.aws_cluster_name}-node"
role = "${aws_iam_role.kube-worker.id}"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
@@ -124,15 +128,14 @@ resource "aws_iam_role_policy" "kube-worker" {
EOF
}
#Create AWS Instance Profiles
resource "aws_iam_instance_profile" "kube-master" {
name = "kube_${var.aws_cluster_name}_master_profile"
role = "${aws_iam_role.kube-master.name}"
name = "kube_${var.aws_cluster_name}_master_profile"
role = "${aws_iam_role.kube-master.name}"
}
resource "aws_iam_instance_profile" "kube-worker" {
name = "kube_${var.aws_cluster_name}_node_profile"
role = "${aws_iam_role.kube-worker.name}"
name = "kube_${var.aws_cluster_name}_node_profile"
role = "${aws_iam_role.kube-worker.name}"
}


@@ -1,7 +1,7 @@
output "kube-master-profile" {
value = "${aws_iam_instance_profile.kube-master.name }"
value = "${aws_iam_instance_profile.kube-master.name }"
}
output "kube-worker-profile" {
value = "${aws_iam_instance_profile.kube-worker.name }"
value = "${aws_iam_instance_profile.kube-worker.name }"
}


@@ -1,3 +1,3 @@
variable "aws_cluster_name" {
description = "Name of Cluster"
description = "Name of Cluster"
}


@@ -1,58 +1,53 @@
resource "aws_vpc" "cluster-vpc" {
cidr_block = "${var.aws_vpc_cidr_block}"
cidr_block = "${var.aws_vpc_cidr_block}"
#DNS Related Entries
enable_dns_support = true
enable_dns_hostnames = true
tags = "${merge(var.default_tags, map(
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-vpc"
))}"
}
resource "aws_eip" "cluster-nat-eip" {
count = "${length(var.aws_cidr_subnets_public)}"
vpc = true
count = "${length(var.aws_cidr_subnets_public)}"
vpc = true
}
resource "aws_internet_gateway" "cluster-vpc-internetgw" {
vpc_id = "${aws_vpc.cluster-vpc.id}"
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-internetgw"
))}"
}
resource "aws_subnet" "cluster-vpc-subnets-public" {
vpc_id = "${aws_vpc.cluster-vpc.id}"
count="${length(var.aws_avail_zones)}"
availability_zone = "${element(var.aws_avail_zones, count.index)}"
cidr_block = "${element(var.aws_cidr_subnets_public, count.index)}"
vpc_id = "${aws_vpc.cluster-vpc.id}"
count = "${length(var.aws_avail_zones)}"
availability_zone = "${element(var.aws_avail_zones, count.index)}"
cidr_block = "${element(var.aws_cidr_subnets_public, count.index)}"
tags = "${merge(var.default_tags, map(
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-public",
"kubernetes.io/cluster/${var.aws_cluster_name}", "member"
))}"
}
resource "aws_nat_gateway" "cluster-nat-gateway" {
count = "${length(var.aws_cidr_subnets_public)}"
allocation_id = "${element(aws_eip.cluster-nat-eip.*.id, count.index)}"
subnet_id = "${element(aws_subnet.cluster-vpc-subnets-public.*.id, count.index)}"
count = "${length(var.aws_cidr_subnets_public)}"
allocation_id = "${element(aws_eip.cluster-nat-eip.*.id, count.index)}"
subnet_id = "${element(aws_subnet.cluster-vpc-subnets-public.*.id, count.index)}"
}
resource "aws_subnet" "cluster-vpc-subnets-private" {
vpc_id = "${aws_vpc.cluster-vpc.id}"
count="${length(var.aws_avail_zones)}"
availability_zone = "${element(var.aws_avail_zones, count.index)}"
cidr_block = "${element(var.aws_cidr_subnets_private, count.index)}"
vpc_id = "${aws_vpc.cluster-vpc.id}"
count = "${length(var.aws_avail_zones)}"
availability_zone = "${element(var.aws_avail_zones, count.index)}"
cidr_block = "${element(var.aws_cidr_subnets_private, count.index)}"
tags = "${merge(var.default_tags, map(
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-private"
))}"
}
@@ -62,81 +57,78 @@ resource "aws_subnet" "cluster-vpc-subnets-private" {
#TODO: Do we need two routing tables for each subnet for redundancy or is one enough?
resource "aws_route_table" "kubernetes-public" {
vpc_id = "${aws_vpc.cluster-vpc.id}"
route {
cidr_block = "0.0.0.0/0"
gateway_id = "${aws_internet_gateway.cluster-vpc-internetgw.id}"
}
vpc_id = "${aws_vpc.cluster-vpc.id}"
tags = "${merge(var.default_tags, map(
route {
cidr_block = "0.0.0.0/0"
gateway_id = "${aws_internet_gateway.cluster-vpc-internetgw.id}"
}
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-routetable-public"
))}"
}
resource "aws_route_table" "kubernetes-private" {
count = "${length(var.aws_cidr_subnets_private)}"
vpc_id = "${aws_vpc.cluster-vpc.id}"
route {
cidr_block = "0.0.0.0/0"
nat_gateway_id = "${element(aws_nat_gateway.cluster-nat-gateway.*.id, count.index)}"
}
count = "${length(var.aws_cidr_subnets_private)}"
vpc_id = "${aws_vpc.cluster-vpc.id}"
tags = "${merge(var.default_tags, map(
route {
cidr_block = "0.0.0.0/0"
nat_gateway_id = "${element(aws_nat_gateway.cluster-nat-gateway.*.id, count.index)}"
}
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-routetable-private-${count.index}"
))}"
}
resource "aws_route_table_association" "kubernetes-public" {
count = "${length(var.aws_cidr_subnets_public)}"
subnet_id = "${element(aws_subnet.cluster-vpc-subnets-public.*.id,count.index)}"
route_table_id = "${aws_route_table.kubernetes-public.id}"
count = "${length(var.aws_cidr_subnets_public)}"
subnet_id = "${element(aws_subnet.cluster-vpc-subnets-public.*.id,count.index)}"
route_table_id = "${aws_route_table.kubernetes-public.id}"
}
resource "aws_route_table_association" "kubernetes-private" {
count = "${length(var.aws_cidr_subnets_private)}"
subnet_id = "${element(aws_subnet.cluster-vpc-subnets-private.*.id,count.index)}"
route_table_id = "${element(aws_route_table.kubernetes-private.*.id,count.index)}"
count = "${length(var.aws_cidr_subnets_private)}"
subnet_id = "${element(aws_subnet.cluster-vpc-subnets-private.*.id,count.index)}"
route_table_id = "${element(aws_route_table.kubernetes-private.*.id,count.index)}"
}
#Kubernetes Security Groups
resource "aws_security_group" "kubernetes" {
name = "kubernetes-${var.aws_cluster_name}-securitygroup"
vpc_id = "${aws_vpc.cluster-vpc.id}"
name = "kubernetes-${var.aws_cluster_name}-securitygroup"
vpc_id = "${aws_vpc.cluster-vpc.id}"
tags = "${merge(var.default_tags, map(
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-securitygroup"
))}"
}
resource "aws_security_group_rule" "allow-all-ingress" {
type = "ingress"
from_port = 0
to_port = 65535
protocol = "-1"
cidr_blocks= ["${var.aws_vpc_cidr_block}"]
security_group_id = "${aws_security_group.kubernetes.id}"
type = "ingress"
from_port = 0
to_port = 65535
protocol = "-1"
cidr_blocks = ["${var.aws_vpc_cidr_block}"]
security_group_id = "${aws_security_group.kubernetes.id}"
}
resource "aws_security_group_rule" "allow-all-egress" {
type = "egress"
from_port = 0
to_port = 65535
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
security_group_id = "${aws_security_group.kubernetes.id}"
type = "egress"
from_port = 0
to_port = 65535
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
security_group_id = "${aws_security_group.kubernetes.id}"
}
resource "aws_security_group_rule" "allow-ssh-connections" {
type = "ingress"
from_port = 22
to_port = 22
protocol = "TCP"
cidr_blocks = ["0.0.0.0/0"]
security_group_id = "${aws_security_group.kubernetes.id}"
type = "ingress"
from_port = 22
to_port = 22
protocol = "TCP"
cidr_blocks = ["0.0.0.0/0"]
security_group_id = "${aws_security_group.kubernetes.id}"
}


@@ -1,21 +1,19 @@
output "aws_vpc_id" {
value = "${aws_vpc.cluster-vpc.id}"
value = "${aws_vpc.cluster-vpc.id}"
}
output "aws_subnet_ids_private" {
value = ["${aws_subnet.cluster-vpc-subnets-private.*.id}"]
value = ["${aws_subnet.cluster-vpc-subnets-private.*.id}"]
}
output "aws_subnet_ids_public" {
value = ["${aws_subnet.cluster-vpc-subnets-public.*.id}"]
value = ["${aws_subnet.cluster-vpc-subnets-public.*.id}"]
}
output "aws_security_group" {
value = ["${aws_security_group.kubernetes.*.id}"]
value = ["${aws_security_group.kubernetes.*.id}"]
}
output "default_tags" {
value = "${var.default_tags}"
value = "${var.default_tags}"
}


@@ -1,29 +1,27 @@
variable "aws_vpc_cidr_block" {
description = "CIDR Blocks for AWS VPC"
description = "CIDR Blocks for AWS VPC"
}
variable "aws_cluster_name" {
description = "Name of Cluster"
description = "Name of Cluster"
}
variable "aws_avail_zones" {
description = "AWS Availability Zones Used"
type = "list"
description = "AWS Availability Zones Used"
type = "list"
}
variable "aws_cidr_subnets_private" {
description = "CIDR Blocks for private subnets in Availability zones"
type = "list"
type = "list"
}
variable "aws_cidr_subnets_public" {
description = "CIDR Blocks for public subnets in Availability zones"
type = "list"
type = "list"
}
variable "default_tags" {
description = "Default tags for all resources"
type = "map"
type = "map"
}


@@ -1,28 +1,27 @@
output "bastion_ip" {
value = "${join("\n", aws_instance.bastion-server.*.public_ip)}"
value = "${join("\n", aws_instance.bastion-server.*.public_ip)}"
}
output "masters" {
value = "${join("\n", aws_instance.k8s-master.*.private_ip)}"
value = "${join("\n", aws_instance.k8s-master.*.private_ip)}"
}
output "workers" {
value = "${join("\n", aws_instance.k8s-worker.*.private_ip)}"
value = "${join("\n", aws_instance.k8s-worker.*.private_ip)}"
}
output "etcd" {
value = "${join("\n", aws_instance.k8s-etcd.*.private_ip)}"
value = "${join("\n", aws_instance.k8s-etcd.*.private_ip)}"
}
output "aws_elb_api_fqdn" {
value = "${module.aws-elb.aws_elb_api_fqdn}:${var.aws_elb_api_port}"
value = "${module.aws-elb.aws_elb_api_fqdn}:${var.aws_elb_api_port}"
}
output "inventory" {
value = "${data.template_file.inventory.rendered}"
value = "${data.template_file.inventory.rendered}"
}
output "default_tags" {
value = "${var.default_tags}"
value = "${var.default_tags}"
}


@@ -44,18 +44,18 @@ variable "aws_vpc_cidr_block" {
variable "aws_cidr_subnets_private" {
description = "CIDR Blocks for private subnets in Availability Zones"
type = "list"
type = "list"
}
variable "aws_cidr_subnets_public" {
description = "CIDR Blocks for public subnets in Availability Zones"
type = "list"
type = "list"
}
//AWS EC2 Settings
variable "aws_bastion_size" {
description = "EC2 Instance Size of Bastion Host"
description = "EC2 Instance Size of Bastion Host"
}
/*
@@ -64,27 +64,27 @@ variable "aws_bastion_size" {
* AWS Availability Zones without a remainder.
*/
variable "aws_kube_master_num" {
description = "Number of Kubernetes Master Nodes"
description = "Number of Kubernetes Master Nodes"
}
variable "aws_kube_master_size" {
description = "Instance size of Kube Master Nodes"
description = "Instance size of Kube Master Nodes"
}
variable "aws_etcd_num" {
description = "Number of etcd Nodes"
description = "Number of etcd Nodes"
}
variable "aws_etcd_size" {
description = "Instance size of etcd Nodes"
description = "Instance size of etcd Nodes"
}
variable "aws_kube_worker_num" {
description = "Number of Kubernetes Worker Nodes"
description = "Number of Kubernetes Worker Nodes"
}
variable "aws_kube_worker_size" {
description = "Instance size of Kubernetes Worker Nodes"
description = "Instance size of Kubernetes Worker Nodes"
}
/*
@@ -92,16 +92,16 @@ variable "aws_kube_worker_size" {
*
*/
variable "aws_elb_api_port" {
description = "Port for AWS ELB"
description = "Port for AWS ELB"
}
variable "k8s_secure_api_port" {
description = "Secure Port of K8S API Server"
description = "Secure Port of K8S API Server"
}
variable "default_tags" {
description = "Default tags for all resources"
type = "map"
type = "map"
}
variable "inventory_file" {


@@ -10,7 +10,7 @@ most modern installs of OpenStack that support the basic services.
### Known compatible public clouds
- [Auro](https://auro.io/)
- [Betacloud](https://www.betacloud.io/)
- [CityCloud](https://www.citycloud.com/)
- [DreamHost](https://www.dreamhost.com/cloud/computing/)
- [ELASTX](https://elastx.se/)
@@ -51,8 +51,8 @@ floating IP addresses or not.
Note that the Ansible script will report an invalid configuration if you wind up
with an even number of etcd instances since that is not a valid configuration. This
restriction includes standalone etcd nodes that are deployed in a cluster along with
master nodes with etcd replicas. As an example, if you have three master nodes with
etcd replicas and three standalone etcd nodes, the script will fail since there are
now six total etcd replicas.
### GlusterFS
@@ -109,6 +109,7 @@ Create an inventory directory for your cluster by copying the existing sample an
$ cp -LRp contrib/terraform/openstack/sample-inventory inventory/$CLUSTER
$ cd inventory/$CLUSTER
$ ln -s ../../contrib/terraform/openstack/hosts
$ ln -s ../../contrib
```
This will be the base for subsequent Terraform commands.
@@ -116,8 +117,8 @@ This will be the base for subsequent Terraform commands.
#### OpenStack access and credentials
No provider variables are hardcoded inside `variables.tf` because Terraform
supports various authentication methods for OpenStack: the older script and
environment method (using `openrc`) as well as a newer declarative method, and
different OpenStack environments may support Identity API version 2 or 3.
These are examples and may vary depending on your OpenStack cloud provider,
@@ -228,7 +229,7 @@ For your cluster, edit `inventory/$CLUSTER/cluster.tf`.
|`dns_nameservers`| An array of DNS name server names to be used by hosts in the internal subnet. |
|`floatingip_pool` | Name of the pool from which floating IPs will be allocated |
|`external_net` | UUID of the external network that will be routed to |
|`flavor_k8s_master`,`flavor_k8s_node`,`flavor_etcd`, `flavor_bastion`,`flavor_gfs_node` | Flavor depends on your openstack installation, you can get available flavor IDs through `openstack flavor list` |
|`image`,`image_gfs` | Name of the image to use in provisioning the compute resources. Should already be loaded into glance. |
|`ssh_user`,`ssh_user_gfs` | The username to ssh into the image with. This usually depends on the image you have selected |
|`public_key_path` | Path on your local workstation to the public key file you wish to use in creating the key pairs |
@@ -358,7 +359,7 @@ If it fails try to connect manually via SSH. It could be something as simple as
### Configure cluster variables
Edit `inventory/$CLUSTER/group_vars/all/all.yml`:
- **bin_dir**:
```
# Directory where the binaries will be installed
@@ -371,7 +372,7 @@ bin_dir: /opt/bin
```
cloud_provider: openstack
```
Edit `inventory/$CLUSTER/group_vars/k8s-cluster/k8s-cluster.yml`:
- Set variable **kube_network_plugin** to your desired networking plugin.
- **flannel** works out-of-the-box
- **calico** requires [configuring OpenStack Neutron ports](/docs/openstack.md) to allow service and pod subnets
@@ -415,8 +416,8 @@ ssh [os-user]@[master-ip] sudo ls /etc/kubernetes/ssl/
```
4. Get `admin`'s certificates and keys:
```
ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/admin-kube-master-1-key.pem > admin-key.pem
ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/admin-kube-master-1.pem > admin.pem
ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/ca.pem > ca.pem
```
5. Configure kubectl:


@@ -9,28 +9,29 @@ resource "openstack_networking_secgroup_v2" "k8s_master" {
}
resource "openstack_networking_secgroup_rule_v2" "k8s_master" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "6443"
port_range_max = "6443"
remote_ip_prefix = "0.0.0.0/0"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "6443"
port_range_max = "6443"
remote_ip_prefix = "0.0.0.0/0"
security_group_id = "${openstack_networking_secgroup_v2.k8s_master.id}"
}
resource "openstack_networking_secgroup_v2" "bastion" {
name = "${var.cluster_name}-bastion"
count = "${var.number_of_bastions ? 1 : 0}"
description = "${var.cluster_name} - Bastion Server"
}
resource "openstack_networking_secgroup_rule_v2" "bastion" {
count = "${length(var.bastion_allowed_remote_ips)}"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "22"
port_range_max = "22"
remote_ip_prefix = "${var.bastion_allowed_remote_ips[count.index]}"
count = "${var.number_of_bastions ? length(var.bastion_allowed_remote_ips) : 0}"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "22"
port_range_max = "22"
remote_ip_prefix = "${var.bastion_allowed_remote_ips[count.index]}"
security_group_id = "${openstack_networking_secgroup_v2.bastion.id}"
}
@@ -40,9 +41,9 @@ resource "openstack_networking_secgroup_v2" "k8s" {
}
resource "openstack_networking_secgroup_rule_v2" "k8s" {
direction = "ingress"
ethertype = "IPv4"
remote_group_id = "${openstack_networking_secgroup_v2.k8s.id}"
direction = "ingress"
ethertype = "IPv4"
remote_group_id = "${openstack_networking_secgroup_v2.k8s.id}"
security_group_id = "${openstack_networking_secgroup_v2.k8s.id}"
}
@@ -52,13 +53,13 @@ resource "openstack_networking_secgroup_v2" "worker" {
}
resource "openstack_networking_secgroup_rule_v2" "worker" {
count = "${length(var.worker_allowed_ports)}"
direction = "ingress"
ethertype = "IPv4"
protocol = "${lookup(var.worker_allowed_ports[count.index], "protocol", "tcp")}"
port_range_min = "${lookup(var.worker_allowed_ports[count.index], "port_range_min")}"
port_range_max = "${lookup(var.worker_allowed_ports[count.index], "port_range_max")}"
remote_ip_prefix = "${lookup(var.worker_allowed_ports[count.index], "remote_ip_prefix", "0.0.0.0/0")}"
count = "${length(var.worker_allowed_ports)}"
direction = "ingress"
ethertype = "IPv4"
protocol = "${lookup(var.worker_allowed_ports[count.index], "protocol", "tcp")}"
port_range_min = "${lookup(var.worker_allowed_ports[count.index], "port_range_min")}"
port_range_max = "${lookup(var.worker_allowed_ports[count.index], "port_range_max")}"
remote_ip_prefix = "${lookup(var.worker_allowed_ports[count.index], "remote_ip_prefix", "0.0.0.0/0")}"
security_group_id = "${openstack_networking_secgroup_v2.worker.id}"
}
@@ -87,55 +88,56 @@ resource "openstack_compute_instance_v2" "bastion" {
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${var.bastion_fips[0]}/ > contrib/terraform/group_vars/no-floating.yml"
}
}
resource "openstack_compute_instance_v2" "k8s_master" {
name = "${var.cluster_name}-k8s-master-${count.index+1}"
count = "${var.number_of_k8s_masters}"
name = "${var.cluster_name}-k8s-master-${count.index+1}"
count = "${var.number_of_k8s_masters}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = ["${openstack_networking_secgroup_v2.k8s_master.name}",
"${openstack_networking_secgroup_v2.bastion.name}",
"${openstack_networking_secgroup_v2.k8s.name}",
"default",
]
# The join() hack is described here: https://github.com/hashicorp/terraform/issues/11566
# As a workaround for creating "dynamic" lists (when, for example, no bastion host is created)
security_groups = ["${compact(list(
openstack_networking_secgroup_v2.k8s_master.name,
join(" ", openstack_networking_secgroup_v2.bastion.*.id),
openstack_networking_secgroup_v2.k8s.name,
"default",
))}"]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "etcd,kube-master,${var.supplementary_master_groups},k8s-cluster,vault"
depends_on = "${var.network_id}"
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element( concat(var.bastion_fips, var.k8s_master_fips), 0)}/ > contrib/terraform/group_vars/no-floating.yml"
}
}
resource "openstack_compute_instance_v2" "k8s_master_no_etcd" {
name = "${var.cluster_name}-k8s-master-ne-${count.index+1}"
count = "${var.number_of_k8s_masters_no_etcd}"
name = "${var.cluster_name}-k8s-master-ne-${count.index+1}"
count = "${var.number_of_k8s_masters_no_etcd}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = ["${openstack_networking_secgroup_v2.k8s_master.name}",
"${openstack_networking_secgroup_v2.bastion.name}",
"${openstack_networking_secgroup_v2.k8s.name}",
]
security_groups = ["${compact(list(
openstack_networking_secgroup_v2.k8s_master.name,
join(" ", openstack_networking_secgroup_v2.bastion.*.id),
openstack_networking_secgroup_v2.k8s.name,
))}"]
metadata = {
ssh_user = "${var.ssh_user}"
@@ -146,16 +148,15 @@ resource "openstack_compute_instance_v2" "k8s_master_no_etcd" {
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element( concat(var.bastion_fips, var.k8s_master_fips), 0)}/ > contrib/terraform/group_vars/no-floating.yml"
}
}
resource "openstack_compute_instance_v2" "etcd" {
name = "${var.cluster_name}-etcd-${count.index+1}"
count = "${var.number_of_etcd}"
name = "${var.cluster_name}-etcd-${count.index+1}"
count = "${var.number_of_etcd}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_etcd}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
image_name = "${var.image}"
flavor_id = "${var.flavor_etcd}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
@@ -168,16 +169,15 @@ resource "openstack_compute_instance_v2" "etcd" {
kubespray_groups = "etcd,vault,no-floating"
depends_on = "${var.network_id}"
}
}
resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
name = "${var.cluster_name}-k8s-master-nf-${count.index+1}"
count = "${var.number_of_k8s_masters_no_floating_ip}"
name = "${var.cluster_name}-k8s-master-nf-${count.index+1}"
count = "${var.number_of_k8s_masters_no_floating_ip}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
@@ -193,16 +193,15 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
kubespray_groups = "etcd,kube-master,${var.supplementary_master_groups},k8s-cluster,vault,no-floating"
depends_on = "${var.network_id}"
}
}
resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" {
name = "${var.cluster_name}-k8s-master-ne-nf-${count.index+1}"
count = "${var.number_of_k8s_masters_no_floating_ip_no_etcd}"
name = "${var.cluster_name}-k8s-master-ne-nf-${count.index+1}"
count = "${var.number_of_k8s_masters_no_floating_ip_no_etcd}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
@@ -217,26 +216,26 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" {
kubespray_groups = "kube-master,${var.supplementary_master_groups},k8s-cluster,vault,no-floating"
depends_on = "${var.network_id}"
}
}
resource "openstack_compute_instance_v2" "k8s_node" {
name = "${var.cluster_name}-k8s-node-${count.index+1}"
count = "${var.number_of_k8s_nodes}"
name = "${var.cluster_name}-k8s-node-${count.index+1}"
count = "${var.number_of_k8s_nodes}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_node}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_node}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}",
"${openstack_networking_secgroup_v2.bastion.name}",
"${openstack_networking_secgroup_v2.worker.name}",
security_groups = ["${compact(list(
openstack_networking_secgroup_v2.k8s_master.name,
join(" ", openstack_networking_secgroup_v2.bastion.*.id),
openstack_networking_secgroup_v2.k8s.name,
"default",
]
))}"]
metadata = {
ssh_user = "${var.ssh_user}"
@@ -247,16 +246,15 @@ resource "openstack_compute_instance_v2" "k8s_node" {
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element( concat(var.bastion_fips, var.k8s_node_fips), 0)}/ > contrib/terraform/group_vars/no-floating.yml"
}
}
resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
name = "${var.cluster_name}-k8s-node-nf-${count.index+1}"
count = "${var.number_of_k8s_nodes_no_floating_ip}"
name = "${var.cluster_name}-k8s-node-nf-${count.index+1}"
count = "${var.number_of_k8s_nodes_no_floating_ip}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_node}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_node}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
@@ -272,7 +270,6 @@ resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
kubespray_groups = "kube-node,k8s-cluster,no-floating,${var.supplementary_node_groups}"
depends_on = "${var.network_id}"
}
}
resource "openstack_compute_floatingip_associate_v2" "bastion" {
@@ -301,12 +298,12 @@ resource "openstack_blockstorage_volume_v2" "glusterfs_volume" {
}
resource "openstack_compute_instance_v2" "glusterfs_node_no_floating_ip" {
name = "${var.cluster_name}-gfs-node-nf-${count.index+1}"
count = "${var.number_of_gfs_nodes_no_floating_ip}"
name = "${var.cluster_name}-gfs-node-nf-${count.index+1}"
count = "${var.number_of_gfs_nodes_no_floating_ip}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image_gfs}"
flavor_id = "${var.flavor_gfs_node}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
image_name = "${var.image_gfs}"
flavor_id = "${var.flavor_gfs_node}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
@@ -321,7 +318,6 @@ resource "openstack_compute_instance_v2" "glusterfs_node_no_floating_ip" {
kubespray_groups = "gfs-cluster,network-storage,no-floating"
depends_on = "${var.network_id}"
}
}
resource "openstack_compute_volume_attach_v2" "glusterfs_volume" {


@@ -4,7 +4,6 @@ output "router_id" {
output "router_internal_port_id" {
value = "${element(concat(openstack_networking_router_interface_v2.k8s.*.id, list("")), 0)}"
}
output "subnet_id" {


@@ -6,11 +6,13 @@ public_key_path = "~/.ssh/id_rsa.pub"
# image to use for bastion, masters, standalone etcd instances, and nodes
image = "<image name>"
# user on the node (ex. core on Container Linux, ubuntu on Ubuntu, etc.)
ssh_user = "<cloud-provisioned user>"
# 0|1 bastion nodes
number_of_bastions = 0
#flavor_bastion = "<UUID>"
# standalone etcds
@@ -18,14 +20,20 @@ number_of_etcd = 0
# masters
number_of_k8s_masters = 1
number_of_k8s_masters_no_etcd = 0
number_of_k8s_masters_no_floating_ip = 0
number_of_k8s_masters_no_floating_ip_no_etcd = 0
flavor_k8s_master = "<UUID>"
# nodes
number_of_k8s_nodes = 2
number_of_k8s_nodes_no_floating_ip = 4
#flavor_k8s_node = "<UUID>"
# GlusterFS
@@ -40,7 +48,11 @@ number_of_k8s_nodes_no_floating_ip = 4
# networking
network_name = "<network>"
external_net = "<UUID>"
subnet_cidr = "<cidr>"
floatingip_pool = "<pool>"
bastion_allowed_remote_ips = ["0.0.0.0/0"]


@@ -4,8 +4,8 @@ variable "cluster_name" {
variable "az_list" {
description = "List of Availability Zones available in your OpenStack cluster"
type = "list"
default = ["nova"]
type = "list"
default = ["nova"]
}
variable "number_of_bastions" {
@@ -74,27 +74,27 @@ variable "ssh_user_gfs" {
}
variable "flavor_bastion" {
description = "Use 'nova flavor-list' command to see what your OpenStack instance uses for IDs"
description = "Use 'openstack flavor list' command to see what your OpenStack instance uses for IDs"
default = 3
}
variable "flavor_k8s_master" {
description = "Use 'nova flavor-list' command to see what your OpenStack instance uses for IDs"
description = "Use 'openstack flavor list' command to see what your OpenStack instance uses for IDs"
default = 3
}
variable "flavor_k8s_node" {
description = "Use 'nova flavor-list' command to see what your OpenStack instance uses for IDs"
description = "Use 'openstack flavor list' command to see what your OpenStack instance uses for IDs"
default = 3
}
variable "flavor_etcd" {
description = "Use 'nova flavor-list' command to see what your OpenStack instance uses for IDs"
description = "Use 'openstack flavor list' command to see what your OpenStack instance uses for IDs"
default = 3
}
variable "flavor_gfs_node" {
description = "Use 'nova flavor-list' command to see what your OpenStack instance uses for IDs"
description = "Use 'openstack flavor list' command to see what your OpenStack instance uses for IDs"
default = 3
}
@@ -110,8 +110,8 @@ variable "use_neutron" {
variable "subnet_cidr" {
description = "Subnet CIDR block."
type = "string"
default = "10.0.0.0/24"
type = "string"
default = "10.0.0.0/24"
}
variable "dns_nameservers" {
@@ -131,28 +131,29 @@ variable "external_net" {
variable "supplementary_master_groups" {
description = "supplementary kubespray ansible groups for masters, such kube-node"
default = ""
default = ""
}
variable "supplementary_node_groups" {
description = "supplementary kubespray ansible groups for worker nodes, such as kube-ingress"
default = ""
default = ""
}
variable "bastion_allowed_remote_ips" {
description = "An array of CIDRs allowed to SSH to hosts"
type = "list"
default = ["0.0.0.0/0"]
type = "list"
default = ["0.0.0.0/0"]
}
variable "worker_allowed_ports" {
type = "list"
default = [
{
"protocol" = "tcp"
"port_range_min" = 30000
"port_range_max" = 32767
"protocol" = "tcp"
"port_range_min" = 30000
"port_range_max" = 32767
"remote_ip_prefix" = "0.0.0.0/0"
}
},
]
}


@@ -0,0 +1,231 @@
# Kubernetes on Packet with Terraform
Provision a Kubernetes cluster with [Terraform](https://www.terraform.io) on
[Packet](https://www.packet.com).
## Status
This will install a Kubernetes cluster on Packet bare metal. It should work in all locations and on most server types.
## Approach
The terraform configuration inspects variables found in
[variables.tf](variables.tf) to create resources in your Packet project.
There is a [python script](../terraform.py) that reads the generated `.tfstate`
file to generate a dynamic inventory that is consumed by [cluster.yml](../../../cluster.yml)
to actually install Kubernetes with Kubespray.
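As a quick sanity check, once `terraform apply` has been run (the inventory directory and `hosts` link are set up later in this guide), the dynamic inventory can be inspected directly. This is a minimal sketch assuming the standard Ansible dynamic-inventory contract implemented by `terraform.py`:
```ShellSession
$ cd inventory/$CLUSTER
$ ./hosts --list
```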
### Kubernetes Nodes
You can create many different kubernetes topologies by setting the number of
different classes of hosts.
- Master nodes with etcd: `number_of_k8s_masters` variable
- Master nodes without etcd: `number_of_k8s_masters_no_etcd` variable
- Standalone etcd hosts: `number_of_etcd` variable
- Kubernetes worker nodes: `number_of_k8s_nodes` variable
Note that the Ansible script will report an invalid configuration if you wind up
with an *even number* of etcd instances since that is not a valid configuration. This
restriction includes standalone etcd nodes that are deployed in a cluster along with
master nodes with etcd replicas. As an example, if you have three master nodes with
etcd replicas and three standalone etcd nodes, the script will fail since there are
now six total etcd replicas.
## Requirements
- [Install Terraform](https://www.terraform.io/intro/getting-started/install.html)
- Install dependencies: `sudo pip install -r requirements.txt`
- Account with Packet Host
- An SSH key pair
## SSH Key Setup
An SSH key pair is required so Ansible can access the newly provisioned nodes (bare metal Packet hosts). By default, the public SSH key defined in cluster.tf (`~/.ssh/id_rsa.pub`) will be installed in `authorized_keys` on the newly provisioned nodes. Terraform will upload this public key and distribute it to all of the nodes. If you have already set this public key in Packet (i.e. via the portal), then set the public key file name in cluster.tf to blank to prevent the duplicate key from being uploaded, which would cause an error.
If you don't already have a keypair generated (~/.ssh/id_rsa and ~/.ssh/id_rsa.pub), then a new keypair can be generated with the command:
```ShellSession
ssh-keygen -f ~/.ssh/id_rsa
```
## Terraform
Terraform will be used to provision all of the Packet resources with base software as appropriate.
### Configuration
#### Inventory files
Create an inventory directory for your cluster by copying the existing sample and linking the `hosts` script (used to build the inventory based on Terraform state):
```ShellSession
$ cp -LRp contrib/terraform/packet/sample-inventory inventory/$CLUSTER
$ cd inventory/$CLUSTER
$ ln -s ../../contrib/terraform/packet/hosts
```
This will be the base for subsequent Terraform commands.
#### Packet API access
Your Packet API key must be available in the `PACKET_AUTH_TOKEN` environment variable.
This key is typically stored outside of the code repo since it is considered secret.
If someone gets this key, they can start up or shut down hosts in your project!
For more information on how to generate an API key or find your project ID, please see:
https://support.packet.com/kb/articles/api-integrations
The Packet Project ID associated with the key will be set later in cluster.tf.
For more information about the API, please see:
https://www.packet.com/developers/api/
Example:
```ShellSession
$ export PACKET_AUTH_TOKEN="Example-API-Token"
```
Note that to deploy several clusters within the same project you need to use [terraform workspace](https://www.terraform.io/docs/state/workspaces.html#using-workspaces).
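A minimal sketch (the workspace names here are illustrative, not prescribed by Kubespray):
```ShellSession
$ terraform workspace new cluster-a
$ terraform workspace select cluster-a
$ terraform workspace list
```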
#### Cluster variables
The construction of the cluster is driven by values found in
[variables.tf](variables.tf).
For your cluster, edit `inventory/$CLUSTER/cluster.tf`.
The `cluster_name` is used to set a tag on each server deployed as part of this cluster.
This helps when identifying which hosts are associated with each cluster.
While the defaults in variables.tf will successfully deploy a cluster, it is recommended to set the following values:
* cluster_name = the name of the inventory directory created above as $CLUSTER
* packet_project_id = the Packet Project ID associated with the Packet API token above
#### Enable localhost access
Kubespray can pull down a Kubernetes configuration file for accessing this cluster by enabling
`kubeconfig_localhost: true` in the Kubespray configuration.
Edit `inventory/$CLUSTER/group_vars/k8s-cluster/k8s-cluster.yml` and comment back in the following line and change from `false` to `true`:
`# kubeconfig_localhost: false`
becomes:
`kubeconfig_localhost: true`
Once the Kubespray playbooks are run, a Kubernetes configuration file will be written to the local host at `inventory/$CLUSTER/artifacts/admin.conf`
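That file can then be passed straight to kubectl, for example:
```ShellSession
$ kubectl --kubeconfig=inventory/$CLUSTER/artifacts/admin.conf get nodes
```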
#### Terraform state files
In the cluster's inventory folder, the following files might be created (either by Terraform
or manually). To prevent you from accidentally pushing them, they are listed in a
`.gitignore` file in the `terraform/packet` directory:
* `.terraform`
* `.tfvars`
* `.tfstate`
* `.tfstate.backup`
You can still add them manually if you want to.
### Initialization
Before Terraform can operate on your cluster you need to install the required
plugins. This is accomplished as follows:
```ShellSession
$ cd inventory/$CLUSTER
$ terraform init ../../contrib/terraform/packet
```
This should finish fairly quickly telling you Terraform has successfully initialized and loaded necessary modules.
### Provisioning cluster
You can apply the Terraform configuration to your cluster with the following command
issued from your cluster's inventory directory (`inventory/$CLUSTER`):
```ShellSession
$ terraform apply -var-file=cluster.tf ../../contrib/terraform/packet
$ export ANSIBLE_HOST_KEY_CHECKING=False
$ ansible-playbook -i hosts ../../cluster.yml
```
### Destroying cluster
You can destroy your new cluster with the following command issued from the cluster's inventory directory:
```ShellSession
$ terraform destroy -var-file=cluster.tf ../../contrib/terraform/packet
```
If you've started the Ansible run, it may also be a good idea to do some manual cleanup:
* remove SSH keys for the destroyed cluster from your `~/.ssh/known_hosts` file (see the example below)
* clean up any temporary cache files: `rm /tmp/$CLUSTER-*`
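For example, assuming one of the destroyed hosts had the public IP `147.75.0.10` (a placeholder), the cleanup could look like:
```ShellSession
$ ssh-keygen -R 147.75.0.10
$ rm /tmp/$CLUSTER-*
```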
### Debugging
You can enable debugging output from Terraform by setting `TF_LOG` to `DEBUG` before running the Terraform command.
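For example:
```ShellSession
$ TF_LOG=DEBUG terraform apply -var-file=cluster.tf ../../contrib/terraform/packet
```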
## Ansible
### Node access
#### SSH
Ensure your local ssh-agent is running and your ssh key has been added. This
step is required by the terraform provisioner:
```
$ eval $(ssh-agent -s)
$ ssh-add ~/.ssh/id_rsa
```
If you have deployed and destroyed a previous iteration of your cluster, you will need to clear out any stale keys from your SSH "known hosts" file (`~/.ssh/known_hosts`).
#### Test access
Make sure you can connect to the hosts. Note that Container Linux by CoreOS will have a state `FAILED` due to Python not being present. This is okay, because Python will be installed during bootstrapping, so long as the hosts are not `UNREACHABLE`.
```
$ ansible -i inventory/$CLUSTER/hosts -m ping all
example-k8s_node-1 | SUCCESS => {
"changed": false,
"ping": "pong"
}
example-etcd-1 | SUCCESS => {
"changed": false,
"ping": "pong"
}
example-k8s-master-1 | SUCCESS => {
"changed": false,
"ping": "pong"
}
```
If it fails try to connect manually via SSH. It could be something as simple as a stale host key.
### Deploy Kubernetes
```
$ ansible-playbook --become -i inventory/$CLUSTER/hosts cluster.yml
```
This will take some time as there are many tasks to run.
## Kubernetes
### Set up kubectl
* [Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on the localhost.
* Verify that kubectl runs correctly
```
kubectl version
```
* Verify that the Kubernetes configuration file has been copied over
```
cat inventory/$CLUSTER/artifacts/admin.conf
```
* Verify that all the nodes are running correctly.
```
kubectl --kubeconfig=inventory/$CLUSTER/artifacts/admin.conf get nodes
```
## What's next
Try out your new Kubernetes cluster with the [Hello Kubernetes service](https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/).


@@ -0,0 +1 @@
../terraform.py


@@ -0,0 +1,60 @@
# Configure the Packet Provider
provider "packet" {}
resource "packet_ssh_key" "k8s" {
count = "${var.public_key_path != "" ? 1 : 0}"
name = "kubernetes-${var.cluster_name}"
public_key = "${chomp(file(var.public_key_path))}"
}
resource "packet_device" "k8s_master" {
depends_on = ["packet_ssh_key.k8s"]
count = "${var.number_of_k8s_masters}"
hostname = "${var.cluster_name}-k8s-master-${count.index+1}"
plan = "${var.plan_k8s_masters}"
facility = "${var.facility}"
operating_system = "${var.operating_system}"
billing_cycle = "${var.billing_cycle}"
project_id = "${var.packet_project_id}"
tags = ["cluster-${var.cluster_name}", "k8s-cluster", "kube-master", "etcd", "kube-node"]
}
resource "packet_device" "k8s_master_no_etcd" {
depends_on = ["packet_ssh_key.k8s"]
count = "${var.number_of_k8s_masters_no_etcd}"
hostname = "${var.cluster_name}-k8s-master-${count.index+1}"
plan = "${var.plan_k8s_masters_no_etcd}"
facility = "${var.facility}"
operating_system = "${var.operating_system}"
billing_cycle = "${var.billing_cycle}"
project_id = "${var.packet_project_id}"
tags = ["cluster-${var.cluster_name}", "k8s-cluster", "kube-master"]
}
resource "packet_device" "k8s_etcd" {
depends_on = ["packet_ssh_key.k8s"]
count = "${var.number_of_etcd}"
hostname = "${var.cluster_name}-etcd-${count.index+1}"
plan = "${var.plan_etcd}"
facility = "${var.facility}"
operating_system = "${var.operating_system}"
billing_cycle = "${var.billing_cycle}"
project_id = "${var.packet_project_id}"
tags = ["cluster-${var.cluster_name}", "etcd"]
}
resource "packet_device" "k8s_node" {
depends_on = ["packet_ssh_key.k8s"]
count = "${var.number_of_k8s_nodes}"
hostname = "${var.cluster_name}-k8s-node-${count.index+1}"
plan = "${var.plan_k8s_nodes}"
facility = "${var.facility}"
operating_system = "${var.operating_system}"
billing_cycle = "${var.billing_cycle}"
project_id = "${var.packet_project_id}"
tags = ["cluster-${var.cluster_name}", "k8s-cluster", "kube-node"]
}


@@ -0,0 +1,15 @@
output "k8s_masters" {
value = "${packet_device.k8s_master.*.access_public_ipv4}"
}
output "k8s_masters_no_etc" {
value = "${packet_device.k8s_master_no_etcd.*.access_public_ipv4}"
}
output "k8s_etcds" {
value = "${packet_device.k8s_etcd.*.access_public_ipv4}"
}
output "k8s_nodes" {
value = "${packet_device.k8s_node.*.access_public_ipv4}"
}


@@ -0,0 +1,32 @@
# your Kubernetes cluster name here
cluster_name = "mycluster"
# Your Packet project ID. See https://support.packet.com/kb/articles/api-integrations
packet_project_id = "Example-API-Token"
# The public SSH key to be uploaded into authorized_keys on the provisioned bare metal Packet nodes
# leave this value blank if the public key is already set up in the Packet project,
# as Terraform will complain if the public key is already present in Packet
public_key_path = "~/.ssh/id_rsa.pub"
# cluster location
facility = "ewr1"
# standalone etcds
number_of_etcd = 0
plan_etcd = "t1.small.x86"
# masters
number_of_k8s_masters = 1
number_of_k8s_masters_no_etcd = 0
plan_k8s_masters = "t1.small.x86"
plan_k8s_masters_no_etcd = "t1.small.x86"
# nodes
number_of_k8s_nodes = 2
plan_k8s_nodes = "t1.small.x86"


@@ -0,0 +1 @@
../../../../inventory/sample/group_vars


@@ -0,0 +1,56 @@
variable "cluster_name" {
default = "kubespray"
}
variable "packet_project_id" {
description = "Your Packet project ID. See https://support.packet.com/kb/articles/api-integrations"
}
variable "operating_system" {
default = "ubuntu_16_04"
}
variable "public_key_path" {
description = "The path of the ssh pub key"
default = "~/.ssh/id_rsa.pub"
}
variable "billing_cycle" {
default = "hourly"
}
variable "facility" {
default = "dfw2"
}
variable "plan_k8s_masters" {
default = "c2.medium.x86"
}
variable "plan_k8s_masters_no_etcd" {
default = "c2.medium.x86"
}
variable "plan_etcd" {
default = "c2.medium.x86"
}
variable "plan_k8s_nodes" {
default = "c2.medium.x86"
}
variable "number_of_k8s_masters" {
default = 0
}
variable "number_of_k8s_masters_no_etcd" {
default = 0
}
variable "number_of_etcd" {
default = 0
}
variable "number_of_k8s_nodes" {
default = 0
}


@@ -218,6 +218,47 @@ def triton_machine(resource, module_name):
    return name, attrs, groups


@parses('packet_device')
def packet_device(resource, tfvars=None):
    raw_attrs = resource['primary']['attributes']
    name = raw_attrs['hostname']
    groups = []

    attrs = {
        'id': raw_attrs['id'],
        'facility': raw_attrs['facility'],
        'hostname': raw_attrs['hostname'],
        'operating_system': raw_attrs['operating_system'],
        'locked': parse_bool(raw_attrs['locked']),
        'tags': parse_list(raw_attrs, 'tags'),
        'plan': raw_attrs['plan'],
        'project_id': raw_attrs['project_id'],
        'state': raw_attrs['state'],
        # ansible
        'ansible_ssh_host': raw_attrs['network.0.address'],
        'ansible_ssh_user': 'root',  # it's always "root" on Packet
        # generic
        'ipv4_address': raw_attrs['network.0.address'],
        'public_ipv4': raw_attrs['network.0.address'],
        'ipv6_address': raw_attrs['network.1.address'],
        'public_ipv6': raw_attrs['network.1.address'],
        'private_ipv4': raw_attrs['network.2.address'],
        'provider': 'packet',
    }

    # add groups based on attrs
    groups.append('packet_facility=' + attrs['facility'])
    groups.append('packet_operating_system=' + attrs['operating_system'])
    groups.append('packet_locked=%s' % attrs['locked'])
    groups.append('packet_state=' + attrs['state'])
    groups.append('packet_plan=' + attrs['plan'])

    # groups specific to kubespray
    groups = groups + attrs['tags']

    return name, attrs, groups
@parses('digitalocean_droplet')
@calculate_mantl_vars
def digitalocean_host(resource, tfvars=None):


@@ -1,3 +1,4 @@
---
vault_deployment_type: docker
vault_binary_checksum: 3c4d70ba71619a43229e65c67830e30e050eab7a81ac6b28325ff707e5914188
vault_version: 0.10.1

docs/_sidebar.md Normal file

@@ -0,0 +1,39 @@
* [Readme](/)
* [Comparisons](/docs/comparisons.md)
* [Getting started](/docs/getting-started.md)
  * [Ansible](docs/ansible.md)
  * [Variables](/docs/vars.md)
* [Ansible](/docs/ansible.md)
* Operations
  * [Integration](docs/integration.md)
  * [Upgrades](/docs/upgrades.md)
  * [HA Mode](docs/ha-mode.md)
  * [Large deployments](docs/large-deployments.md)
* CNI
  * [Calico](docs/calico.md)
  * [Contiv](docs/contiv.md)
  * [Flannel](docs/flannel.md)
  * [Kube Router](docs/kube-router.md)
  * [Weave](docs/weave.md)
  * [Multus](docs/multus.md)
* [Cloud providers](docs/cloud.md)
  * [AWS](docs/aws.md)
  * [Azure](docs/azure.md)
  * [OpenStack](/docs/openstack.md)
  * [vSphere](/docs/vsphere.md)
* Operating Systems
  * [Atomic](docs/atomic.md)
  * [Debian](docs/debian.md)
  * [Coreos](docs/coreos.md)
  * [OpenSUSE](docs/opensuse.md)
* Advanced
  * [Proxy](/docs/proxy.md)
  * [Downloads](docs/downloads.md)
  * [CRI-O](docs/cri-o.md)
  * [Netcheck](docs/netcheck.md)
  * [DNS Stack](docs/dns-stack.md)
  * [Kubernetes reliability](docs/kubernetes-reliability.md)
* Developers
  * [Test cases](docs/test_cases.md)
  * [Vagrant](docs/vagrant.md)
* [Roadmap](docs/roadmap.md)


@@ -110,7 +110,6 @@ The following tags are defined in playbooks:
| calico | Network plugin Calico
| canal | Network plugin Canal
| cloud-provider | Cloud-provider related tasks
| dnsmasq | Configuring DNS stack for hosts and K8s apps
| docker | Configuring docker for hosts
| download | Fetching container images to a delegate host
| etcd | Configuring etcd cluster
@@ -152,11 +151,11 @@ Example command to filter and apply only DNS configuration tasks and skip
everything else related to host OS configuration and downloading images of containers:
```
ansible-playbook -i inventory/sample/hosts.ini cluster.yml --tags preinstall,dnsmasq,facts --skip-tags=download,bootstrap-os
ansible-playbook -i inventory/sample/hosts.ini cluster.yml --tags preinstall,facts --skip-tags=download,bootstrap-os
```
And this play only removes the K8s cluster DNS resolver IP from hosts' /etc/resolv.conf files:
```
ansible-playbook -i inventory/sample/hosts.ini -e dnsmasq_dns_server='' cluster.yml --tags resolvconf
ansible-playbook -i inventory/sample/hosts.ini -e dns_mode='none' cluster.yml --tags resolvconf
```
And this prepares all container images locally (at the ansible runner node) without installing
or upgrading related stuff or trying to upload container to K8s cluster nodes:


@@ -67,6 +67,15 @@ To re-define you need to edit the inventory and add a group variable `calico_net
calico_network_backend: none
```
##### Optional : Define the default pool CIDR
By default, `kube_pods_subnet` is used as the IP range CIDR for the default IP Pool.
In some cases you may want to add several pools and not have them considered by Kubernetes as external (which means that they must be within or equal to the range defined in `kube_pods_subnet`). It starts with the default IP Pool, whose IP range CIDR can be defined in group_vars (k8s-cluster/k8s-net-calico.yml):
```
calico_pool_cidr: 10.233.64.0/20
```
##### Optional : BGP Peering with border routers
In some cases you may want to route the pods subnet and so NAT is not needed on the nodes.
@@ -86,6 +95,12 @@ In order to define global peers, the `peers` variable can be defined in group_va
In order to define peers on a per node basis, the `peers` variable must be defined in hostvars.
NB: Ansible's `hash_behaviour` is by default set to "replace", so defining both global and per-node peers would leave you with only the per-node peers. If both global and per-node peers are needed, the global peers have to be defined in hostvars for each host (as well as the per-node peers).
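For illustration, per-node peers could look like this in a host's hostvars (addresses and AS numbers are placeholders, mirroring the sample files below):
```
peers:
  - router_id: "10.99.0.34"
    as: "65xxx"
```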
Since Calico 3.4, Kubernetes service cluster IPs can be advertised over BGP, just as pod IPs are.
This can be enabled by setting the following variable in group_vars (k8s-cluster/k8s-net-calico.yml):
```
calico_advertise_cluster_ips: true
```
##### Optional : Define global AS number
Optional parameter `global_as_num` defines Calico global AS number (`/calico/bgp/v1/global/as_num` etcd key).


@@ -1,10 +1,10 @@
#---
#peers:
# -router_id: "10.99.0.34"
# as: "65xxx"
# - router_id: "10.99.0.35"
# as: "65xxx"
#
#loadbalancer_apiserver:
# address: "10.99.0.44"
# port: "8383"
# ---
# peers:
# - router_id: "10.99.0.34"
# as: "65xxx"
# - router_id: "10.99.0.35"
# as: "65xxx"
# loadbalancer_apiserver:
# address: "10.99.0.44"
# port: "8383"


@@ -1,10 +1,10 @@
#---
#peers:
# -router_id: "10.99.0.2"
# as: "65xxx"
# - router_id: "10.99.0.3"
# as: "65xxx"
#
#loadbalancer_apiserver:
# address: "10.99.0.21"
# port: "8383"
# ---
# peers:
# - router_id: "10.99.0.2"
# as: "65xxx"
# - router_id: "10.99.0.3"
# as: "65xxx"
# loadbalancer_apiserver:
# address: "10.99.0.21"
# port: "8383"


@@ -19,7 +19,9 @@ on. Had it belonged to the new [operators world](https://coreos.com/blog/introdu
it may have been named a "Kubernetes cluster operator". Kubespray however,
does generic configuration management tasks from the "OS operators" ansible
world, plus some initial K8s clustering (with networking plugins included) and
control plane bootstrapping. Kubespray [strives](https://github.com/kubernetes-sigs/kubespray/issues/553)
to adopt kubeadm as a tool in order to consume life cycle management domain
knowledge from it and offload generic OS configuration things from it, which
hopefully benefits both sides.
control plane bootstrapping.
Kubespray supports `kubeadm` for cluster creation since v2.3
(and deprecated non-kubeadm deployment starting from v2.8)
in order to consume life cycle management domain knowledge from it
and offload generic OS configuration things from it, which hopefully benefits both sides.


@@ -1,31 +1,28 @@
cri-o
CRI-O
===============
cri-o is container developed by kubernetes project.
Currently, only basic function is supported for cri-o.
[CRI-O] is a lightweight container runtime for Kubernetes.
Kubespray supports basic functionality for using CRI-O as the default container runtime in a cluster.
* cri-o is supported kubernetes 1.11.1 or later.
* helm and other feature may not be supported due to docker dependency.
* scale.yml and upgrade-cluster.yml are not supported.
* Kubernetes supports CRI-O on v1.11.1 or later.
* Helm and other tools may not function as normal due to dependency on Docker.
* `scale.yml` and `upgrade-cluster.yml` are not supported on clusters using CRI-O.
Use cri-o instead of docker, set following variable:
_To use CRI-O instead of Docker, set the following variables:_
#### all.yml
```
kubeadm_enabled: true
...
```yaml
download_container: false
skip_downloads: false
```
#### k8s-cluster.yml
```
```yaml
etcd_deployment_type: host
kubelet_deployment_type: host
container_manager: crio
```
[CRI-O]: https://cri-o.io/


@@ -20,10 +20,6 @@ ndots value to be used in ``/etc/resolv.conf``
It is important to note that multiple search domains combined with high ``ndots``
values lead to poor performance of the DNS stack, so please choose them wisely.
The dnsmasq DaemonSet can accept lower ``ndots`` values and return NXDOMAIN
replies for [bogus internal FQDNS](https://github.com/kubernetes/kubernetes/issues/19634#issuecomment-253948954)
before it even hits the kubedns app. This enables dnsmasq to serve as a
protective, but still recursive resolver in front of kubedns.
#### searchdomains
Custom search domains to be added in addition to the cluster search domains (``default.svc.{{ dns_domain }}, svc.{{ dns_domain }}``).
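For example, in group_vars (the domain is a placeholder):
```
searchdomains:
  - example.com
```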
@@ -41,8 +37,7 @@ is not set, a default resolver is chosen (depending on cloud provider or 8.8.8.8
#### upstream_dns_servers
DNS servers to be added *after* the cluster DNS. Used by all ``resolvconf_mode`` modes. These serve as backup
DNS servers in early cluster deployment when no cluster DNS is available yet. These are also added as upstream
DNS servers used by ``dnsmasq`` (when deployed with ``dns_mode: dnsmasq_kubedns``).
DNS servers in early cluster deployment when no cluster DNS is available yet.
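For example, mirroring the commented sample in the inventory's ``all.yml``:
```
upstream_dns_servers:
  - 8.8.8.8
  - 8.8.4.4
```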
DNS modes supported by Kubespray
============================
@@ -52,32 +47,20 @@ You can modify how Kubespray sets up DNS for your cluster with the variables ``d
## dns_mode
``dns_mode`` configures how Kubespray will set up cluster DNS. There are four modes available:
#### dnsmasq_kubedns
This installs an additional dnsmasq DaemonSet which gives more flexibility and lifts some
limitations (e.g. number of nameservers). Kubelet is instructed to use dnsmasq instead of kubedns/skydns.
It is configured to forward all DNS queries belonging to cluster services to kubedns/skydns. All
other queries are forwarded to the nameservers found in ``upstream_dns_servers`` or ``default_resolver``.
#### kubedns (default)
This does not install the dnsmasq DaemonSet and instructs kubelet to directly use kubedns/skydns for
all queries.
#### coredns
This does not install the dnsmasq DaemonSet and instructs kubelet to directly use CoreDNS for
all queries.
#### coredns (default)
This installs CoreDNS as the default cluster DNS for all queries.
#### coredns_dual
This does not install the dnsmasq DaemonSet and instructs kubelet to directly use CoreDNS for
all queries. It will also deploy a secondary CoreDNS stack
This installs CoreDNS as the default cluster DNS for all queries, plus a secondary CoreDNS stack.
#### manual
This does not install dnsmasq or kubedns, but allows you to specify
This does not install CoreDNS, but allows you to specify
`manual_dns_server`, which will be configured on nodes for handling Pod DNS.
Use this method if you plan to install your own DNS server in the cluster after
initial deployment.
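A minimal sketch in group_vars (the server address is a placeholder, as in the sample inventory):
```
dns_mode: manual
manual_dns_server: 10.x.x.x
```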
#### none
This does not install any of dnsmasq and kubedns/skydns. This basically disables cluster DNS completely and
This does not install any DNS solution at all. This basically disables cluster DNS completely and
leaves you with a non-functional cluster.
## resolvconf_mode
@@ -103,7 +86,7 @@ The following dns options are added to the docker daemon
* attempts:2
For normal PODs, k8s will ignore these options and setup its own DNS settings for the PODs, taking
the --cluster_dns (either dnsmasq or kubedns, depending on dns_mode) kubelet option into account.
the --cluster_dns (either coredns or coredns_dual, depending on dns_mode) kubelet option into account.
For ``hostNetwork: true`` PODs however, k8s will let docker setup DNS settings. Docker containers which
are not started/managed by k8s will also use these docker options.
@@ -115,7 +98,7 @@ servers, which in turn will forward queries to the system nameserver if required
#### host_resolvconf
This activates the classic Kubespray behaviour that modifies the hosts ``/etc/resolv.conf`` file and dhclient
configuration to point to the cluster dns server (either dnsmasq or kubedns, depending on dns_mode).
configuration to point to the cluster dns server (either coredns or coredns_dual, depending on dns_mode).
As cluster DNS is not available on early deployment stage, this mode is split into 2 stages. In the first
stage (``dns_early: true``), ``/etc/resolv.conf`` is configured to use the DNS servers found in ``upstream_dns_servers``
@@ -130,6 +113,11 @@ Does nothing regarding ``/etc/resolv.conf``. This leaves you with a cluster that
The only exception is that ``hostNetwork: true`` PODs and non-k8s managed containers will not be able to resolve
cluster service names.
## Nodelocal DNS cache
Setting ``enable_nodelocaldns`` to ``true`` will make pods reach out to the DNS (CoreDNS) caching agent running on the same node, thereby avoiding iptables DNAT rules and connection tracking. The local caching agent will query kube-dns / CoreDNS (depending on which main DNS plugin is configured in your cluster) for cache misses of cluster hostnames (``cluster.local`` suffix by default).
More information on the rationale behind this implementation can be found [here](https://github.com/kubernetes/enhancements/blob/master/keps/sig-network/0030-nodelocal-dns-cache.md).
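A sketch of the relevant group_vars settings (``nodelocaldns_ip`` shown with its default from the sample inventory):
```
enable_nodelocaldns: true
nodelocaldns_ip: 169.254.25.10
```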
Limitations
-----------


@@ -56,22 +56,12 @@ Add worker nodes to the list under kube-node if you want to delete them (or util
ansible-playbook -i inventory/mycluster/hosts.ini remove-node.yml -b -v \
--private-key=~/.ssh/private_key
We support two ways to select the nodes:
- Use `--extra-vars "node=<nodename>,<nodename2>"` to select the node you want to delete.
Use `--extra-vars "node=<nodename>,<nodename2>"` to select the node you want to delete.
```
ansible-playbook -i inventory/mycluster/hosts.ini remove-node.yml -b -v \
--private-key=~/.ssh/private_key \
--extra-vars "node=nodename,nodename2"
```
or
- Use `--limit nodename,nodename2` to select the node
```
ansible-playbook -i inventory/mycluster/hosts.ini remove-node.yml -b -v \
--private-key=~/.ssh/private_key \
--limit nodename,nodename2"
```
Connecting to Kubernetes
------------------------


@@ -62,34 +62,6 @@ You can change the default configuration by overriding `kube_router_...` variabl
these are named to follow `kube-router` command-line options as per
<https://www.kube-router.io/docs/user-guide/#try-kube-router-with-cluster-installers>.
## Caveats
### kubeadm_enabled: true
If you want to set `kube-router` to replace `kube-proxy`
(`--run-service-proxy=true`) while using `kubeadm_enabled`,
then the `kube-proxy` DaemonSet will be removed *after* kubeadm finishes
running, as it's not possible to skip kube-proxy install in kubeadm flags
and/or config, see https://github.com/kubernetes/kubeadm/issues/776.
Given above, if `--run-service-proxy=true` is needed it would be
better to void `kubeadm_enabled` i.e. set:
```
kubeadm_enabled: false
kube_router_run_service_proxy: true
```
If for some reason you do want/need to set `kubeadm_enabled`, removing
it afterwards behaves better if kube-proxy is set to ipvs mode, i.e. set:
```
kubeadm_enabled: true
kube_router_run_service_proxy: true
kube_proxy_mode: ipvs
```
## Advanced BGP Capabilities
https://github.com/cloudnativelabs/kube-router#advanced-bgp-capabilities
@@ -105,4 +77,4 @@ Next options will set up annotations for kube-router, using `kubectl annotate` c
kube_router_annotations_master: []
kube_router_annotations_node: []
kube_router_annotations_all: []
```
```


@@ -15,8 +15,8 @@ For a large scaled deployments, consider the following configuration changes:
load on a delegate (the first K8s master node) then retrying failed
push or download operations.
* Tune parameters for DNS related applications (dnsmasq daemon set, kubedns
replication controller). Those are ``dns_replicas``, ``dns_cpu_limit``,
* Tune parameters for DNS related applications.
Those are ``dns_replicas``, ``dns_cpu_limit``,
``dns_cpu_requests``, ``dns_memory_limit``, ``dns_memory_requests``.
Please note that limits must always be greater than or equal to requests.


@@ -1,4 +1,4 @@
openSUSE Leap 42.3 and Tumbleweed
openSUSE Leap 15.0 and Tumbleweed
===============
openSUSE Leap installation Notes:

docs/packet.md Normal file

@@ -0,0 +1,97 @@
Packet
===============
Kubespray provides support for bare metal deployments using the [Packet bare metal cloud](http://www.packet.com).
Deploying on bare metal allows Kubernetes to run at locations where an existing public or private cloud might not exist, such
as cell tower or edge colocated installations. The deployment mechanism used by Kubespray for Packet is similar to that used for
AWS and OpenStack clouds (notably using Terraform to deploy the infrastructure). Terraform uses the Packet provider plugin
to provision and configure hosts which are then used by the Kubespray Ansible playbooks. The Ansible inventory is generated
dynamically from the Terraform state file.
## Local Host Configuration
To perform this installation, you will need a localhost to run Terraform/Ansible (laptop, VM, etc) and an account with Packet.
In this example, we're using an m1.large CentOS 7 OpenStack VM as the localhost to kick off the Kubernetes installation.
You'll need Ansible, Git, and pip.
```bash
sudo yum install epel-release
sudo yum install ansible
sudo yum install git
sudo yum install python-pip
```
## Playbook SSH Key
An SSH key is needed by Kubespray/Ansible to run the playbooks.
This key is installed into the bare metal hosts during the Terraform deployment.
You can generate a new key or use an existing one.
```bash
ssh-keygen -f ~/.ssh/id_rsa
```
## Install Terraform
Terraform is required to deploy the bare metal infrastructure. The steps below are for installing on CentOS 7.
[More terraform installation options are available.](https://learn.hashicorp.com/terraform/getting-started/install.html)
Grab the latest version of Terraform and install it.
```bash
echo "https://releases.hashicorp.com/terraform/$(curl -s https://checkpoint-api.hashicorp.com/v1/check/terraform | jq -r -M '.current_version')/terraform_$(curl -s https://checkpoint-api.hashicorp.com/v1/check/terraform | jq -r -M '.current_version')_darwin_amd64.zip"
sudo yum install unzip
sudo unzip terraform_0.11.11_linux_amd64.zip -d /usr/local/bin/
```
## Download Kubespray
Pull over Kubespray and set up any required libraries.
```bash
git clone https://github.com/kubernetes-sigs/kubespray
cd kubespray
sudo pip install -r requirements.txt
```
## Cluster Definition
In this example, a new cluster called "alpha" will be created.
```bash
cp -LRp contrib/terraform/packet/sample-inventory inventory/alpha
cd inventory/alpha/
ln -s ../../contrib/terraform/packet/hosts
```
Details about the cluster, such as the name, as well as the authentication tokens and project ID
for Packet need to be defined. To find these values, see [Packet API Integration](https://support.packet.com/kb/articles/api-integrations).
```bash
vi cluster.tf
```
* cluster_name = alpha
* packet_project_id = ABCDEFGHIJKLMNOPQRSTUVWXYZ123456
* public_key_path = ~/.ssh/id_rsa.pub
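In `cluster.tf` these take the usual Terraform assignment syntax, e.g. (all values are placeholders):
```
cluster_name = "alpha"
packet_project_id = "ABCDEFGHIJKLMNOPQRSTUVWXYZ123456"
public_key_path = "~/.ssh/id_rsa.pub"
```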
## Deploy Bare Metal Hosts
Initializing Terraform will pull down any necessary plugins/providers.
```bash
terraform init ../../contrib/terraform/packet/
```
Run Terraform to deploy the hardware.
```bash
terraform apply -var-file=cluster.tf ../../contrib/terraform/packet
```
## Run Kubespray Playbooks
With the bare metal infrastructure deployed, Kubespray can now install Kubernetes and set up the cluster.
```bash
ansible-playbook --become -i inventory/alpha/hosts cluster.yml
```


@@ -1,11 +1,6 @@
Kubespray's roadmap
=================
### Kubeadm
- Switch to kubeadm deployment as the default method after some bugs are fixed:
* Support for basic auth
* cloudprovider cloud-config mount [#484](https://github.com/kubernetes/kubeadm/issues/484)
### Self deployment (pull-mode) [#320](https://github.com/kubespray/kubespray/issues/320)
- the playbook would install and configure docker/rkt and the etcd cluster
- the following data would be inserted into etcd: certs,tokens,users,inventory,group_vars.
@@ -14,7 +9,12 @@ Kubespray's roadmap
- **self deployment** of the node from inside a container [#321](https://github.com/kubespray/kubespray/issues/321)
### Provisioning and cloud providers
- [ ] Terraform to provision instances on **GCE, AWS, Openstack, Digital Ocean, Azure**
- [ ] Terraform to provision instances on:
- [ ] GCE
- [x] AWS (contrib/terraform/aws)
- [x] Openstack (contrib/terraform/openstack)
- [ ] Digital Ocean
- [ ] Azure
- [ ] On AWS autoscaling, multi AZ
- [ ] On Azure autoscaling, create loadbalancer [#297](https://github.com/kubespray/kubespray/issues/297)
- [ ] On GCE be able to create a loadbalancer automatically (IAM ?) [#280](https://github.com/kubespray/kubespray/issues/280)
@@ -34,12 +34,12 @@ Kubespray's roadmap
### Networking
- [ ] Opencontrail
- [ ] Consolidate network_plugins and kubernetes-apps/network_plugins
- [ ] Consolidate roles/network_plugin and roles/kubernetes-apps/network_plugin
### Kubespray API
- Perform all actions through an **API**
- Store inventories / configurations of mulltiple clusters
- make sure that state of cluster is completely saved in no more than one config file beyond hosts inventory
- Store inventories / configurations of multiple clusters
- Make sure that state of cluster is completely saved in no more than one config file beyond hosts inventory
### Addons (helm or native ansible)
Include optional deployments to init the cluster:
@@ -61,10 +61,9 @@ Include optionals deployments to init the cluster:
- Deis Workflow
### Others
- remove nodes (adding is already supported)
- Organize and update documentation (split in categories)
- Refactor downloads so it all runs in the beginning of deployment
- Make bootstrapping OS more consistent
- **consul** -> if officially supported by k8s
- flex volumes options (e.g. **torrus** support) [#312](https://github.com/kubespray/kubespray/issues/312)
- Flex volumes options (e.g. **torrus** support) [#312](https://github.com/kubespray/kubespray/issues/312)
- Clusters federation option (aka **ubernetes**) [#329](https://github.com/kubespray/kubespray/issues/329)


@@ -17,12 +17,10 @@ Some variables of note include:
* *calico_version* - Specify version of Calico to use
* *calico_cni_version* - Specify version of Calico CNI plugin to use
* *docker_version* - Specify version of Docker to used (should be quoted
string)
string). Must match one of the keys defined for *docker_versioned_pkg*
in `roles/container-engine/docker/vars/*.yml`.
* *etcd_version* - Specify version of ETCD to use
* *ipip* - Enables Calico ipip encapsulation by default
* *hyperkube_image_repo* - Specify the Docker repository where Hyperkube
resides
* *hyperkube_image_tag* - Specify the Docker tag where Hyperkube resides
* *kube_network_plugin* - Sets k8s network plugin (default Calico)
* *kube_proxy_mode* - Changes k8s proxy mode to iptables mode
* *kube_version* - Specify a given Kubernetes hyperkube version
@@ -61,8 +59,6 @@ following default cluster parameters:
overlap with kube_service_addresses.
* *kube_network_node_prefix* - Subnet allocated per-node for pod IPs. Remaining
bits in kube_pods_subnet dictate how many kube-nodes can be in the cluster.
* *dns_setup* - Enables dnsmasq
* *dnsmasq_dns_server* - Cluster IP for dnsmasq (default is 10.233.0.2)
* *skydns_server* - Cluster IP for DNS (default is 10.233.0.3)
* *skydns_server_secondary* - Secondary Cluster IP for CoreDNS used with coredns_dual deployment (default is 10.233.0.4)
* *cloud_provider* - Enable extra Kubelet option if operating inside GCE or
@@ -86,15 +82,14 @@ and ``kube_pods_subnet``, for example from the ``172.18.0.0/16``.
#### DNS variables
By default, dnsmasq gets set up with 8.8.8.8 as an upstream DNS server and all
By default, hosts are set up with 8.8.8.8 as an upstream DNS server and all
other settings from your existing /etc/resolv.conf are lost. Set the following
variables to match your requirements.
* *upstream_dns_servers* - Array of upstream DNS servers configured on host in
addition to Kubespray deployed DNS
* *nameservers* - Array of DNS servers configured for use in dnsmasq
* *nameservers* - Array of DNS servers configured for use by hosts
* *searchdomains* - Array of up to 4 search domains
* *skip_dnsmasq* - Don't set up dnsmasq (use only KubeDNS)
For more information, see [DNS
Stack](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.md).
@@ -118,6 +113,8 @@ Stack](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.m
* *kubelet_cgroup_driver* - Allows manual override of the
cgroup-driver option for Kubelet. By default autodetection is used
to match Docker configuration.
* *kubelet_rotate_certificates* - Auto rotate the kubelet client certificates by requesting new certificates
from the kube-apiserver when the certificate expiration approaches.
* *node_labels* - Labels applied to nodes via kubelet --node-labels parameter.
For example, labels can be set in the inventory as variables or more widely in group_vars.
*node_labels* must be defined as a dict:
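A sketch (the label key and value are hypothetical):
```
node_labels:
  environment: test  # hypothetical label
```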


@@ -14,6 +14,9 @@ After this step you should have:
- UUID activated for each VM where Kubernetes will be deployed
- A vSphere account with required privileges
If you intend to leverage the [zone and region node labeling](https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domain-beta-kubernetes-io-region), create a tag category for both the zone and region in vCenter. The tags can then be applied at the host, cluster, datacenter, or folder level, and the cloud provider will walk the hierarchy to extract and apply the labels to the Kubernetes nodes.
## Kubespray configuration
First you must define the cloud provider in `inventory/sample/group_vars/all.yml` and set it to `vsphere`.
@@ -34,9 +37,11 @@ Then, in the same file, you need to declare your vCenter credential following th
| vsphere_datastore | TRUE | string | | | Datastore name to use |
| vsphere_working_dir | TRUE | string | | | Working directory from the view "VMs and template" in the vCenter where VM are placed |
| vsphere_scsi_controller_type | TRUE | string | buslogic, pvscsi, parallel | pvscsi | SCSI controller name. Commonly "pvscsi". |
| vsphere_vm_uuid | FALSE | string | | | VM Instance UUID of virtual machine that host K8s master. Can be retrieved from instanceUuid property in VmConfigInfo, or as vc.uuid in VMX file or in `/sys/class/dmi/id/product_serial` (Optional, only used for Kubernetes <= 1.9.2) |
| vsphere_public_network | FALSE | string | | Blank | Name of the network the VMs are joined to |
| vsphere_resource_pool | FALSE | string | | Blank | Name of the Resource pool where the VMs are located (Optional, only used for Kubernetes >= 1.9.2) |
| vsphere_zone_category | FALSE | string | | | Name of the tag category used to set the `failure-domain.beta.kubernetes.io/zone` label on nodes (Optional, only used for Kubernetes >= 1.12.0) |
| vsphere_region_category | FALSE | string | | | Name of the tag category used to set the `failure-domain.beta.kubernetes.io/region` label on nodes (Optional, only used for Kubernetes >= 1.12.0) |
Example configuration


@@ -40,7 +40,7 @@
version: 06fddbe2
clone: yes
update: yes
- name: CephFS Provisioner | Build image
shell: |
cd ~/go/src/github.com/kubernetes-incubator/external-storage


@@ -1,3 +1,4 @@
---
### NOTE: This playbook cannot be used to deploy any new nodes to the cluster.
### Additional information:
### * Will not upgrade etcd
@@ -38,21 +39,21 @@
- { role: kubespray-defaults}
- { role: kubernetes/preinstall, tags: preinstall }
#Handle upgrades to master components first to maintain backwards compat.
- hosts: kube-master
- name: Handle upgrades to master components first to maintain backwards compat.
hosts: kube-master
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
serial: 1
roles:
- { role: kubespray-defaults}
- { role: upgrade/pre-upgrade, tags: pre-upgrade }
- { role: kubernetes/node, tags: node }
- { role: kubernetes/master, tags: master }
- { role: kubernetes/master, tags: master, upgrade_cluster_setup: true }
- { role: kubernetes/client, tags: client }
- { role: kubernetes-apps/cluster_roles, tags: cluster-roles }
- { role: upgrade/post-upgrade, tags: post-upgrade }
#Finally handle worker upgrades, based on given batch size
- hosts: kube-node:!kube-master
- name: Finally handle worker upgrades, based on given batch size
hosts: kube-node:!kube-master
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
serial: "{{ serial | default('20%') }}"
roles:

index.html Normal file

@@ -0,0 +1,46 @@
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Kubespray - Deploy a Production Ready Kubernetes Cluster</title>
  <meta name="description" content="Deploy a Production Ready Kubernetes Cluster">
  <meta name="viewport" content="width=device-width, user-scalable=no, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0">
  <link rel="stylesheet" href="//unpkg.com/docsify-themeable/dist/css/theme-simple.css">
  <style>
    :root {
      --base-font-size: 16px;
      --theme-color: rgb(104, 118, 52);
      --link-color: rgb(104, 118, 52);
      --link-color--hover: rgb(137, 152, 100);
      --sidebar-name-margin: 0;
      --sidebar-name-padding: 0;
      --code-font-size: .9em;
    }
    .sidebar > h1 {
      margin-bottom: -.75em;
      margin-top: .75em;
    }
    .markdown-section a code {
      color: var(--link-color) !important;
    }
    .markdown-section code:not([class*="lang-"]):not([class*="language-"]) {
      white-space: unset;
    }
  </style>
</head>
<body>
  <div id="app"></div>
</body>
<script>
  window.$docsify = {
    name: 'Kubespray',
    loadSidebar: 'docs/_sidebar.md',
    repo: 'https://github.com/kubernetes-sigs/kubespray',
    auto2top: true,
  }
</script>
<script src="//unpkg.com/docsify/lib/docsify.min.js"></script>
<script src="//unpkg.com/docsify/lib/plugins/search.min.js"></script>
<script src="//unpkg.com/docsify/lib/plugins/ga.min.js"></script>
</html>


@@ -1,3 +1,4 @@
---
## Directory where etcd data stored
etcd_data_dir: /var/lib/etcd
@@ -9,84 +10,80 @@ bin_dir: /usr/local/bin
## this node for example. The access_ip is really useful AWS and Google
## environments where the nodes are accessed remotely by the "public" ip,
## but don't know about that address themselves.
#access_ip: 1.1.1.1
# access_ip: 1.1.1.1
## External LB example config
## apiserver_loadbalancer_domain_name: "elb.some.domain"
#loadbalancer_apiserver:
# address: 1.2.3.4
# port: 1234
# loadbalancer_apiserver:
# address: 1.2.3.4
# port: 1234
## Internal loadbalancers for apiservers
#loadbalancer_apiserver_localhost: true
# loadbalancer_apiserver_localhost: true
## Local loadbalancer should use this port instead, if defined.
## Defaults to kube_apiserver_port (6443)
#nginx_kube_apiserver_port: 8443
## Local loadbalancer should use this port
## and must be set to port 6443
nginx_kube_apiserver_port: 6443
## If nginx_kube_apiserver_healthcheck_port variable defined, enables proxy liveness check.
nginx_kube_apiserver_healthcheck_port: 8081
### OTHER OPTIONAL VARIABLES
## For some things, kubelet needs to load kernel modules. For example, dynamic kernel services are needed
## for mounting persistent volumes into containers. These may not be loaded by preinstall kubernetes
## processes. For example, ceph and rbd backed volumes. Set to true to allow kubelet to load kernel
## modules.
#kubelet_load_modules: false
# kubelet_load_modules: false
## Upstream dns servers used by dnsmasq
#upstream_dns_servers:
# - 8.8.8.8
# - 8.8.4.4
## Upstream dns servers
# upstream_dns_servers:
# - 8.8.8.8
# - 8.8.4.4
## There are some changes specific to the cloud providers
## for instance we need to encapsulate packets with some network plugins
## If set the possible values are either 'gce', 'aws', 'azure', 'openstack', 'vsphere', 'oci', or 'external'
## When openstack is used make sure to source in the openstack credentials
## like you would do when using nova-client before starting the playbook.
## Note: The 'external' cloud provider is not supported.
## like you would do when using openstack-client before starting the playbook.
## Note: The 'external' cloud provider is not supported.
## TODO(riverzhang): https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/#running-cloud-controller-manager
#cloud_provider:
## kubeadm deployment mode
kubeadm_enabled: true
# Skip alert information
skip_non_kubeadm_warning: false
# cloud_provider:
## Set these proxy values in order to update package manager and docker daemon to use proxies
#http_proxy: ""
#https_proxy: ""
# http_proxy: ""
# https_proxy: ""
## Refer to roles/kubespray-defaults/defaults/main.yml before modifying no_proxy
#no_proxy: ""
# no_proxy: ""
## Some problems may occur when downloading files over https proxy due to ansible bug
## https://github.com/ansible/ansible/issues/32750. Set this variable to False to disable
## SSL validation of get_url module. Note that kubespray will still be performing checksum validation.
#download_validate_certs: False
# download_validate_certs: False
## If you need exclude all cluster nodes from proxy and other resources, add other resources here.
#additional_no_proxy: ""
# additional_no_proxy: ""
## Certificate Management
## This setting determines whether certs are generated via scripts.
## Chose 'none' if you provide your own certificates.
## Option is "script", "none"
## note: vault is removed
#cert_management: script
# cert_management: script
## Set to true to allow pre-checks to fail and continue deployment
#ignore_assert_errors: false
# ignore_assert_errors: false
## The read-only port for the Kubelet to serve on with no authentication/authorization. Uncomment to enable.
#kube_read_only_port: 10255
# kube_read_only_port: 10255
## Set true to download and cache container
#download_container: true
# download_container: true
## Deploy container engine
# Set false if you want to deploy container engine manually.
#deploy_container_engine: true
# deploy_container_engine: true
## Set Pypi repo and cert accordingly
#pyrepo_index: https://pypi.example.com/simple
#pyrepo_cert: /etc/ssl/certs/ca-certificates.crt
# pyrepo_index: https://pypi.example.com/simple
# pyrepo_cert: /etc/ssl/certs/ca-certificates.crt


@@ -1,14 +1,14 @@
## When azure is used, you need to also set the following variables.
## see docs/azure.md for details on how to get these values
#azure_tenant_id:
#azure_subscription_id:
#azure_aad_client_id:
#azure_aad_client_secret:
#azure_resource_group:
#azure_location:
#azure_subnet_name:
#azure_security_group_name:
#azure_vnet_name:
#azure_vnet_resource_group:
#azure_route_table_name:
# azure_tenant_id:
# azure_subscription_id:
# azure_aad_client_id:
# azure_aad_client_secret:
# azure_resource_group:
# azure_location:
# azure_subnet_name:
# azure_security_group_name:
# azure_vnet_name:
# azure_vnet_resource_group:
# azure_route_table_name:


@@ -1,2 +1,2 @@
## Does coreos need auto upgrade, default is true
#coreos_auto_upgrade: true
# coreos_auto_upgrade: true


@@ -1,13 +1,14 @@
---
## Uncomment this if you want to force overlay/overlay2 as docker storage driver
## Please note that overlay2 is only supported on newer kernels
#docker_storage_options: -s overlay2
# docker_storage_options: -s overlay2
## Enable docker_container_storage_setup, it will configure devicemapper driver on Centos7 or RedHat7.
docker_container_storage_setup: false
## You must define a disk path for docker_container_storage_setup_devs,
## otherwise docker-storage-setup will be executed incorrectly.
#docker_container_storage_setup_devs: /dev/vdb
# docker_container_storage_setup_devs: /dev/vdb
## Uncomment this if you have more than 3 nameservers, then we'll only use the first 3.
docker_dns_servers_strict: false
@@ -32,12 +33,12 @@ docker_rpm_keepcache: 0
## An obvious use case is allowing insecure-registry access to self hosted registries.
## Can be ipaddress and domain_name.
## example define 172.19.16.11 or mirror.registry.io
#docker_insecure_registries:
# docker_insecure_registries:
# - mirror.registry.io
# - 172.19.16.11
## Add other registries, e.g. a China registry mirror.
#docker_registry_mirrors:
# docker_registry_mirrors:
# - https://registry.docker-cn.com
# - https://mirror.aliyuncs.com
@@ -46,7 +47,7 @@ docker_rpm_keepcache: 0
## or private, which control whether mounts in the file system
## namespace set up for docker will receive or propagate mounts
## and unmounts. Leave empty for system default
#docker_mount_flags:
# docker_mount_flags:
## A string of extra options to pass to the docker daemon.
## This string should be exactly as you wish it to appear.
@@ -57,10 +58,10 @@ docker_options: >-
{% if docker_registry_mirrors is defined %}
{{ docker_registry_mirrors | map('regex_replace', '^(.*)$', '--registry-mirror=\1' ) | list | join(' ') }}
{%- endif %}
{%- if docker_version is version('17.05', '<') %}
--graph={{ docker_daemon_graph }} {{ docker_log_opts }}
{%- if docker_version != "latest" and docker_version is version('17.05', '<') %}
--graph={{ docker_daemon_graph }} {% if ansible_os_family not in ["openSUSE Leap", "openSUSE Tumbleweed", "Suse"] %}{{ docker_log_opts }}{% endif %}
{%- else %}
--data-root={{ docker_daemon_graph }} {{ docker_log_opts }}
--data-root={{ docker_daemon_graph }} {% if ansible_os_family not in ["openSUSE Leap", "openSUSE Tumbleweed", "Suse"] %}{{ docker_log_opts }}{% endif %}
{%- endif %}
{%- if ansible_architecture == "aarch64" and ansible_os_family == "RedHat" %}
--add-runtime docker-runc=/usr/libexec/docker/docker-runc-current


@@ -1,25 +1,28 @@
## When Oracle Cloud Infrastructure is used, set these variables
#oci_private_key:
#oci_region_id:
#oci_tenancy_id:
#oci_user_id:
#oci_user_fingerprint:
#oci_compartment_id:
#oci_vnc_id:
#oci_subnet1_id:
#oci_subnet2_id:
# oci_private_key:
# oci_region_id:
# oci_tenancy_id:
# oci_user_id:
# oci_user_fingerprint:
# oci_compartment_id:
# oci_vnc_id:
# oci_subnet1_id:
# oci_subnet2_id:
## Overide these default/optional behaviors if you wish
#oci_security_list_management: All
# If you would like the controller to manage specific lists per subnet. This is a mapping of subnet ocids to security list ocids. Below are examples.
#oci_security_lists:
#ocid1.subnet.oc1.phx.aaaaaaaasa53hlkzk6nzksqfccegk2qnkxmphkblst3riclzs4rhwg7rg57q: ocid1.securitylist.oc1.iad.aaaaaaaaqti5jsfvyw6ejahh7r4okb2xbtuiuguswhs746mtahn72r7adt7q
#ocid1.subnet.oc1.phx.aaaaaaaahuxrgvs65iwdz7ekwgg3l5gyah7ww5klkwjcso74u3e4i64hvtvq: ocid1.securitylist.oc1.iad.aaaaaaaaqti5jsfvyw6ejahh7r4okb2xbtuiuguswhs746mtahn72r7adt7q
# If oci_use_instance_principals is true, you do not need to set the region, tenancy, user, key, passphrase, or fingerprint
#oci_use_instance_principals: false
#oci_cloud_controller_version: 0.6.0
# If you would like to control OCI query rate limits for the controller
#oci_rate_limit:
#rate_limit_qps_read:
#rate_limit_qps_write:
#rate_limit_bucket_read:
#rate_limit_bucket_write:
# oci_security_list_management: All
## If you would like the controller to manage specific lists per subnet. This is a mapping of subnet ocids to security list ocids. Below are examples.
# oci_security_lists:
# ocid1.subnet.oc1.phx.aaaaaaaasa53hlkzk6nzksqfccegk2qnkxmphkblst3riclzs4rhwg7rg57q: ocid1.securitylist.oc1.iad.aaaaaaaaqti5jsfvyw6ejahh7r4okb2xbtuiuguswhs746mtahn72r7adt7q
# ocid1.subnet.oc1.phx.aaaaaaaahuxrgvs65iwdz7ekwgg3l5gyah7ww5klkwjcso74u3e4i64hvtvq: ocid1.securitylist.oc1.iad.aaaaaaaaqti5jsfvyw6ejahh7r4okb2xbtuiuguswhs746mtahn72r7adt7q
## If oci_use_instance_principals is true, you do not need to set the region, tenancy, user, key, passphrase, or fingerprint
# oci_use_instance_principals: false
# oci_cloud_controller_version: 0.6.0
## If you would like to control OCI query rate limits for the controller
# oci_rate_limit:
# rate_limit_qps_read:
# rate_limit_qps_write:
# rate_limit_bucket_read:
# rate_limit_bucket_write:
## Other optional variables
# oci_cloud_controller_pull_source: (default iad.ocir.io/oracle/cloud-provider-oci)
# oci_cloud_controller_pull_secret: (name of pull secret to use if you define your own mirror above)


@@ -1,16 +1,16 @@
## When OpenStack is used, Cinder version can be explicitly specified if autodetection fails (Fixed in 1.9: https://github.com/kubernetes/kubernetes/issues/50461)
#openstack_blockstorage_version: "v1/v2/auto (default)"
#openstack_blockstorage_ignore_volume_az: yes
## When OpenStack is used, if LBaaSv2 is available you can enable it with the following 2 variables.
#openstack_lbaas_enabled: True
#openstack_lbaas_subnet_id: "Neutron subnet ID (not network ID) to create LBaaS VIP"
## To enable automatic floating ip provisioning, specify a subnet.
#openstack_lbaas_floating_network_id: "Neutron network ID (not subnet ID) to get floating IP from, disabled by default"
## Override default LBaaS behavior
#openstack_lbaas_use_octavia: False
#openstack_lbaas_method: "ROUND_ROBIN"
#openstack_lbaas_provider: "haproxy"
#openstack_lbaas_create_monitor: "yes"
#openstack_lbaas_monitor_delay: "1m"
#openstack_lbaas_monitor_timeout: "30s"
#openstack_lbaas_monitor_max_retries: "3"
# # When OpenStack is used, Cinder version can be explicitly specified if autodetection fails (Fixed in 1.9: https://github.com/kubernetes/kubernetes/issues/50461)
# openstack_blockstorage_version: "v1/v2/auto (default)"
# openstack_blockstorage_ignore_volume_az: yes
# # When OpenStack is used, if LBaaSv2 is available you can enable it with the following 2 variables.
# openstack_lbaas_enabled: True
# openstack_lbaas_subnet_id: "Neutron subnet ID (not network ID) to create LBaaS VIP"
# # To enable automatic floating ip provisioning, specify a subnet.
# openstack_lbaas_floating_network_id: "Neutron network ID (not subnet ID) to get floating IP from, disabled by default"
# # Override default LBaaS behavior
# openstack_lbaas_use_octavia: False
# openstack_lbaas_method: "ROUND_ROBIN"
# openstack_lbaas_provider: "haproxy"
# openstack_lbaas_create_monitor: "yes"
# openstack_lbaas_monitor_delay: "1m"
# openstack_lbaas_monitor_timeout: "30s"
# openstack_lbaas_monitor_max_retries: "3"


@@ -1,18 +1,18 @@
## Etcd auto compaction retention for mvcc key value store in hour
#etcd_compaction_retention: 0
# etcd_compaction_retention: 0
## Set level of detail for etcd exported metrics, specify 'extensive' to include histogram metrics.
#etcd_metrics: basic
# etcd_metrics: basic
## Etcd is restricted by default to 512M on systems under 4GB RAM, 512MB is not enough for much more than testing.
## Set this if your etcd nodes have less than 4GB but you want more RAM for etcd. Set to 0 for unrestricted RAM.
#etcd_memory_limit: "512M"
# etcd_memory_limit: "512M"
## Etcd has a default of 2G for its space quota. If you put a value in etcd_memory_limit which is less than
## etcd_quota_backend_bytes, you may encounter out of memory terminations of the etcd cluster. Please check
## etcd documentation for more information.
#etcd_quota_backend_bytes: "2G"
# etcd_quota_backend_bytes: "2G"
### ETCD: disable peer client cert authentication.
# This affects ETCD_PEER_CLIENT_CERT_AUTH variable
#etcd_peer_client_auth: true
# etcd_peer_client_auth: true


@@ -1,3 +1,4 @@
---
# Kubernetes dashboard
# RBAC required. see docs/getting-started.md for access details.
dashboard_enabled: true
@@ -17,22 +18,31 @@ metrics_server_enabled: false
# metrics_server_metric_resolution: 60s
# metrics_server_kubelet_preferred_address_types: "InternalIP"
# Rancher Local Path Provisioner
local_path_provisioner_enabled: false
# local_path_provisioner_namespace: "local-path-storage"
# local_path_provisioner_storage_class: "local-path"
# local_path_provisioner_reclaim_policy: Delete
# local_path_provisioner_claim_root: /opt/local-path-provisioner/
# local_path_provisioner_debug: false
# local_path_provisioner_image_repo: "rancher/local-path-provisioner"
# local_path_provisioner_image_tag: "v0.0.2"
# Local volume provisioner deployment
local_volume_provisioner_enabled: false
# local_volume_provisioner_namespace: kube-system
# local_volume_provisioner_storage_classes:
# - name: "{{ local_volume_provisioner_storage_class | default('local-storage') }}"
# host_dir: "{{ local_volume_provisioner_base_dir | default ('/mnt/disks') }}"
# mount_dir: "{{ local_volume_provisioner_mount_dir | default('/mnt/disks') }}"
# - name: "local-ssd"
# host_dir: "/mnt/local-storage/ssd"
# mount_dir: "/mnt/local-storage/ssd"
# - name: "local-hdd"
# host_dir: "/mnt/local-storage/hdd"
# mount_dir: "/mnt/local-storage/hdd"
# - name: "local-shared"
# host_dir: "/mnt/local-storage/shared"
# mount_dir: "/mnt/local-storage/shared"
# local-storage:
# host_dir: /mnt/disks
# mount_dir: /mnt/disks
# fast-disks:
# host_dir: /mnt/fast-disks
# mount_dir: /mnt/fast-disks
# block_cleaner_command:
# - "/scripts/shred.sh"
# - "2"
# volume_mode: Filesystem
# fs_type: ext4
# CephFS provisioner deployment
cephfs_provisioner_enabled: false
@@ -50,11 +60,11 @@ cephfs_provisioner_enabled: false
ingress_nginx_enabled: false
# ingress_nginx_host_network: false
# ingress_nginx_nodeselector:
# node-role.kubernetes.io/master: ""
# node-role.kubernetes.io/node: ""
# ingress_nginx_tolerations:
# - key: "key"
# - key: "node-role.kubernetes.io/master"
# operator: "Equal"
# value: "value"
# value: ""
# effect: "NoSchedule"
# ingress_nginx_namespace: "ingress-nginx"
# ingress_nginx_insecure_port: 80


@@ -1,3 +1,4 @@
---
# Kubernetes configuration dirs and system namespace.
# Those are where all the additional config stuff goes
# the kubernetes normally puts in /srv/kubernetes.
@@ -19,7 +20,7 @@ kube_users_dir: "{{ kube_config_dir }}/users"
kube_api_anonymous_auth: true
## Change this to use another Kubernetes version, e.g. a current beta release
kube_version: v1.12.3
kube_version: v1.13.5
# kubernetes image repo define
kube_image_repo: "gcr.io/google-containers"
@@ -51,9 +52,9 @@ kube_users:
- system:masters
## It is possible to activate / deactivate selected authentication methods (basic auth, static token auth)
#kube_oidc_auth: false
#kube_basic_auth: false
#kube_token_auth: false
# kube_oidc_auth: false
# kube_basic_auth: false
# kube_token_auth: false
## Variables for OpenID Connect Configuration https://kubernetes.io/docs/admin/authentication/
@@ -73,6 +74,9 @@ kube_users:
# Can also be set to 'cloud', which lets the cloud provider setup appropriate routing
kube_network_plugin: calico
# Setting multi_networking to true will install Multus: https://github.com/intel/multus-cni
kube_network_plugin_multus: false
# Kubernetes internal network for services, unused block of space.
kube_service_addresses: 10.233.0.0/18
@@ -88,19 +92,32 @@ kube_network_node_prefix: 24
# The port the API Server will be listening on.
kube_apiserver_ip: "{{ kube_service_addresses|ipaddr('net')|ipaddr(1)|ipaddr('address') }}"
kube_apiserver_port: 6443 # (https)
#kube_apiserver_insecure_port: 8080 # (http)
kube_apiserver_port: 6443 # (https)
# kube_apiserver_insecure_port: 8080 # (http)
# Set to 0 to disable insecure port - Requires RBAC in authorization_modes and kube_api_anonymous_auth: true
kube_apiserver_insecure_port: 0 # (disabled)
kube_apiserver_insecure_port: 0 # (disabled)
# Kube-proxy proxyMode configuration.
# Can be ipvs, iptables
kube_proxy_mode: ipvs
# Kube-proxy nodeport address.
# cidr to bind nodeport services. Flag --nodeport-addresses on kube-proxy manifest
kube_proxy_nodeport_addresses: false
# kube_proxy_nodeport_addresses_cidr: 10.0.1.0/24
# A string slice of values which specify the addresses to use for NodePorts.
# Values may be valid IP blocks (e.g. 1.2.3.0/24, 1.2.3.4/32).
# The default empty string slice ([]) means to use all local addresses.
# kube_proxy_nodeport_addresses_cidr is retained for legacy config
kube_proxy_nodeport_addresses: >-
{%- if kube_proxy_nodeport_addresses_cidr is defined -%}
[{{ kube_proxy_nodeport_addresses_cidr }}]
{%- else -%}
[]
{%- endif -%}
# If non-empty, will use this string as identification instead of the actual hostname
# kube_override_hostname: >-
# {%- if cloud_provider is defined and cloud_provider in [ 'aws' ] -%}
# {%- else -%}
# {{ inventory_hostname }}
# {%- endif -%}
## Encrypting Secret Data at Rest (experimental)
kube_encrypt_secret_data: false
@@ -110,10 +127,13 @@ kube_encrypt_secret_data: false
cluster_name: cluster.local
# Subdomains of DNS domain to be resolved via /etc/resolv.conf for hostnet pods
ndots: 2
# Can be dnsmasq_kubedns, kubedns, coredns, coredns_dual, manual or none
# Can be coredns, coredns_dual, manual or none
dns_mode: coredns
# Set manual server if using a custom cluster DNS server
#manual_dns_server: 10.x.x.x
# manual_dns_server: 10.x.x.x
# Enable nodelocal dns cache
enable_nodelocaldns: False
nodelocaldns_ip: 169.254.25.10
# Can be docker_dns, host_resolvconf or none
resolvconf_mode: docker_dns
@@ -122,7 +142,6 @@ deploy_netchecker: false
# Ip address of the kubernetes skydns service
skydns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(3)|ipaddr('address') }}"
skydns_server_secondary: "{{ kube_service_addresses|ipaddr('net')|ipaddr(4)|ipaddr('address') }}"
dnsmasq_dns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(2)|ipaddr('address') }}"
dns_domain: "{{ cluster_name }}"
## Container runtime
@@ -144,7 +163,7 @@ kubernetes_audit: false
dynamic_kubelet_configuration: false
# define kubelet config dir for dynamic kubelet
#kubelet_config_dir:
# kubelet_config_dir:
default_kubelet_config_dir: "{{ kube_config_dir }}/dynamic_kubelet_dir"
dynamic_kubelet_configuration_dir: "{{ kubelet_config_dir | default(default_kubelet_config_dir) }}"
@@ -156,10 +175,6 @@ podsecuritypolicy_enabled: false
# Download kubectl onto the host that runs Ansible in {{ bin_dir }}
# kubectl_localhost: false
# dnsmasq
# dnsmasq_upstream_dns_servers:
# - /resolvethiszone.with/10.0.4.250
# - 8.8.8.8
# Enable creation of QoS cgroup hierarchy, if true top level QoS and pod cgroups are created. (default true)
# kubelet_cgroups_per_qos: true
@@ -191,3 +206,8 @@ persistent_volumes_enabled: false
# nvidia_driver_version: "384.111"
## flavor can be tesla or gtx
# nvidia_gpu_flavor: gtx
## NVIDIA driver installer images. Change them if you have trouble accessing gcr.io.
# nvidia_driver_install_centos_container: atzedevries/nvidia-centos-driver-installer:2
# nvidia_driver_install_ubuntu_container: gcr.io/google-containers/ubuntu-nvidia-driver-installer@sha256:7df76a0f0a17294e86f691c81de6bbb7c04a1b4b3d4ea4e7e2cccdc42e1f6d63
## NVIDIA GPU device plugin image.
# nvidia_gpu_device_plugin_container: "k8s.gcr.io/nvidia-gpu-device-plugin@sha256:0842734032018be107fa2490c98156992911e3e1f2a21e059ff0105b07dd8e9e"

Some files were not shown because too many files have changed in this diff.