Compare commits

...

355 Commits

Author SHA1 Message Date
Maxime Guyot
51d9e2f9b1 Update to Ansible v2.7.16 (#5850) 2020-03-30 06:21:54 -07:00
chz8494
941aaf93fd remove duplicate ppa step and replace with crictl package download (#5455)
Fixes an error where the crictl package was not downloaded before install:
```
TASK [container-engine/cri-o : Install crictl] *********************************
fatal: [more-crab]: FAILED! => {"changed": false, "msg": "Source '/tmp/releases/crictl-v1.16.1-linux-amd64.tar.gz' does not exist"}
```
2020-03-30 01:11:53 -07:00
Etienne Champetier
68b3ee8ac1 Add v1.15.10 and v1.15.11 hashes (#5851)
Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
2020-03-27 23:07:53 -07:00
Etienne Champetier
55da185dfe Add proxy support to containerd, improves no_proxy (#5583) (#5830)
* containerd: add proxy support

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

* kubespray-defaults: add kube_service_addresses / kube_pods_subnet to no_proxy

CIDR notation in no_proxy is supported by a lot of programs/languages,
including go: https://github.com/golang/go/issues/16704
Without that containerd cannot talk to the API server (kube_apiserver_ip),
but it should not go through an external proxy for the nodes/pods/services

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
(cherry picked from commit 9f2dd09628)
2020-03-27 08:10:23 -07:00
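For context, a minimal sketch of the proxy settings this commit relies on, as they would appear in an inventory group_vars file. Variable names follow Kubespray's sample inventory (http_proxy, https_proxy, no_proxy); the proxy URL and subnets are illustrative only.

```yaml
# group_vars/all/all.yml (sketch; proxy URL and subnets are examples)
http_proxy: "http://proxy.example.com:3128"
https_proxy: "http://proxy.example.com:3128"
# CIDR entries in no_proxy are honored by Go programs such as containerd,
# so node/pod/service traffic bypasses the external proxy.
no_proxy: "10.233.0.0/18,10.233.64.0/18,localhost,127.0.0.1"
```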
Bort Verwilst
f33aafefa2 added "Flatcar" and "Flatcar Container Linux by Kinvolk" to all CoreOS roles (#5607) (#5818)
Co-authored-by: Sylvain Chateau <sylvain.chateau@epitech.eu>
2020-03-27 06:06:23 -07:00
Maxime Guyot
8f2ad2e2f7 Add moreutils in Dockerfile (#5840) 2020-03-27 06:02:24 -07:00
Etienne Champetier
980ac28d60 kube-proxy needs conntrack (#5478) (#5828)
(cherry picked from commit 48c41bcbe7)

Co-authored-by: Damon Wang <wangdekui@inspur.com>
2020-03-26 08:52:26 -07:00
Etienne Champetier
fde234fda7 Fix certificates checking when adding etcd node to existing k8s node (#5807) (#5826)
Co-authored-by: alexkomrakov <alexkomrakov@gmail.com>
(cherry picked from commit 6ad6609872)
2020-03-26 08:50:25 -07:00
Etienne Champetier
de26988e05 containerd: bump to 1.2.13 (#5727) (#5832)
https://github.com/containerd/containerd/releases/tag/v1.2.11
CVE-2019-16884 / CVE-2019-17596

https://github.com/containerd/containerd/releases/tag/v1.2.12
CVE-2019-19921 / CVE-2019-16884 / CVE-2019-11253

https://github.com/containerd/containerd/releases/tag/v1.2.13

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
(cherry picked from commit e2ec7c76a4)
2020-03-26 08:48:26 -07:00
Florent Monbillard
173314d9f1 [2.12 branch] Backport Kubernetes 1.16.8 (#5770) (#5774)
* Backport Kubernetes 1.16.8 (#5770)

* Kubernetes 1.16.8

* Upgrade etcd to 3.3.12 (#5718)

* Use kubespray 2.11.2 as start version for the upgrade test case
2020-03-22 23:58:44 -07:00
Kubernetes Prow Robot
e181530333 Backport remove dockerproject (#5682)
* Remove dockerproject org (#5548)

* Change dockerproject.org to download.docker.com

dockerproject.org was deprecated in 2017 and has gone down.

* Restore yum repo for containerd

Change-Id: I883bb512a2164a85865b1bd4fb569af0358c8c2b

Co-authored-by: Craig Rodrigues <rodrigc@crodrigues.org>

* remove legacy docker repo in kubernetes/preinstall before any packages installed (#5640)

* Remove dockerproject_.+_repo_.+ variables (#5662)

Change 38688a4486 replaced the values of the
dockerproject_.+_repo_.+ docker variables, but their new
values were already defined in other variables. This change removes
the dockerproject_.+_repo_.+ docker variables in favor of the older
ones.

* Remove stale legacy yum docker repo /etc/yum.repos.d/docker.repo (#5569)

* Remove stale legacy yum docker repo /etc/yum.repos.d/docker.repo

* move task 'Remove legacy docker repo file' to pre-upgrade.yml

* fix upgrade procedure when the playbook includes the kubernetes/preinstall role but not the container-engine role (#5695)

 without the fix, the error 'yum_repo_dir' is undefined is raised

Co-authored-by: Matthew Mosesohn <matthew.mosesohn@gmail.com>
Co-authored-by: Craig Rodrigues <rodrigc@crodrigues.org>
Co-authored-by: Victor Morales <chipahuac@hotmail.com>
2020-03-05 02:34:38 -08:00
Etienne Champetier
366fb084ef Ensure we always fixup kube-proxy kubeconfig (#5524) (#5558)
When running with serial != 100%, as upgrade_cluster.yml does, we need to apply this fixup each time.
The problem was introduced in 05dc2b3a09

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
(cherry picked from commit 5e9479cded)
2020-02-20 04:15:05 -08:00
Florian Ruynat
34e883e6e2 Upgrade to Kubernetes 1.16.7 (#5627) 2020-02-13 00:36:35 -08:00
Florian Ruynat
22236bfab7 Upgrade to Kubernetes 1.16.6 (#5579) 2020-02-12 02:18:51 -08:00
Kessler
24d28de979 Fix invalid variable in host inventory script (#5482) 2020-01-27 01:59:02 -08:00
Maxime Guyot
86365d61e3 Rebase on 2.12 (#5488) 2020-01-17 02:10:56 -08:00
Andreas Krüger
370a0635fa Bump nodelocaldns version to 1.15.8 (#5447)
* Bump nodelocaldns version

* Add missing upstreamsvc
2019-12-13 02:22:55 -08:00
Bort Verwilst
db2ca014cb Add Helm 3.x support (#5441)
* Add Helm 3.x support

* tiller enabled when helm < 3.0.0
2019-12-12 09:24:32 -08:00
bfraz
f0f8379e1b Update aws tf (#5435)
* update aws tf to function as expected

* update tf version

* update syntax for tf v0.12

* update tf version in readme

* update per tf for v0.12
2019-12-12 03:42:33 -08:00
Maxime Guyot
815eebf1d7 Add wait for kubectl get ds after upgrades (#5433) 2019-12-11 11:23:55 -08:00
Maxime Guyot
95cf18ff00 Reintroduce CI for upgrades (#5427) 2019-12-11 04:48:06 -08:00
Matthew Mosesohn
696fcaf391 Ensure 0644 mode for ca.crt on nodes (#5428)
Change-Id: I5e018dfaeffe314300b373aeb7ed5f59929cf4f9
2019-12-11 00:54:04 -08:00
Maxime Guyot
6ff5ccc938 Use kubespray/kubespray:v2.11.0 for CI (#5363) 2019-12-11 00:10:05 -08:00
Maxime Guyot
f8a18fcaca Update the release process doc (#5419) 2019-12-10 04:41:29 -08:00
Maxime Guyot
961c1be53e Remove Digital Ocean CI (#5418) 2019-12-10 04:39:29 -08:00
Maxime Guyot
eda1dcb7f6 Fix TF inventory script (#5424) 2019-12-10 03:41:29 -08:00
Etienne Champetier
5e0140d62c Add k8s 1.15.6 hashes (#5342)
Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
2019-12-10 00:45:30 -08:00
Craig Rodrigues
717fe3cf3a Add checksums for v1.17.0 (#5423) 2019-12-09 21:15:28 -08:00
Yujun Zhang
32d80ca438 Add default value for bin_dir in recover control plane (#5396) 2019-12-09 02:54:02 -08:00
ooneko
2a9aead50e Set kube_image_repo to use {{ gcr_image_repo }} (#5314)
To avoid repeating "gcr.io" again.
2019-12-09 02:52:02 -08:00
Sergey
9fda84b1c9 set node label via kubectl label command (#5257)
* set various node labels via the kubectl label command, not kubelet options

* remove node_labels from KUBELET_ARGS
2019-12-09 01:43:09 -08:00
Etienne Champetier
42702dc1a3 Fixes for CentOS 8 (#5213)
* Fix python3-libselinux installation for RHEL/CentOS 8

In bootstrap-centos.yml we haven't gathered the facts,
so #5127 couldn't work

Minimum ansible version to run kubespray is 2.7.8,
so ansible_distribution_major_version is defined and there is no need to default it

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

* Restart NetworkManager for RHEL/CentOS 8

network.service doesn't exist anymore
 # systemctl status network
 Unit network.service could not be found.

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

* Add module_hotfixes=True to docker / containerd yum repo config

https://bugzilla.redhat.com/show_bug.cgi?id=1734081
https://bugzilla.redhat.com/show_bug.cgi?id=1756473
Without this setting you end up with the following error:
 # yum install docker-ce
 Failed to set locale, defaulting to C
 Last metadata expiration check: 0:03:21 ago on Thu Sep 26 22:00:05 2019.
 Error:
  Problem: package docker-ce-3:19.03.2-3.el7.x86_64 requires containerd.io >= 1.2.2-3, but none of the providers can be installed
   - cannot install the best candidate for the job
   - package containerd.io-1.2.2-3.3.el7.x86_64 is excluded
   - package containerd.io-1.2.2-3.el7.x86_64 is excluded
   - package containerd.io-1.2.4-3.1.el7.x86_64 is excluded
   - package containerd.io-1.2.5-3.1.el7.x86_64 is excluded
   - package containerd.io-1.2.6-3.3.el7.x86_64 is excluded
 (try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
2019-12-09 01:37:10 -08:00
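As an illustration only, a hedged sketch of what a repo file with module_hotfixes enabled could look like, written here with the plain copy module; the exact repo URL and the way Kubespray actually renders this file may differ.

```yaml
# Hypothetical task sketch: write a docker-ce repo with module_hotfixes=True
- name: Configure docker-ce repo with module_hotfixes (RHEL/CentOS 8)
  copy:
    dest: /etc/yum.repos.d/docker-ce.repo
    content: |
      [docker-ce]
      name=Docker CE Stable
      baseurl=https://download.docker.com/linux/centos/$releasever/$basearch/stable
      gpgcheck=1
      gpgkey=https://download.docker.com/linux/centos/gpg
      module_hotfixes=True
    mode: "0644"
```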
Hugo Blom
40e35b3fa6 Support Openstack servergroups (#5412)
* add support for nova servergroups

* Add documentation for openstack nova servergroups

* update to TF 0.12.12 format and fix etcd

* revert for_each change

* fix variables and formatting in main.tf

* try to avoid errors

* update variable

* Update main.tf

* Update main.tf

* update all other instance resources
2019-12-09 01:15:10 -08:00
Maxime Guyot
b15d41a96a Add support to Ansible 2.9 (#5361) 2019-12-05 07:24:32 -08:00
Matthew Mosesohn
7da2083986 Add toleration for calico-typha on master (#5405)
Change-Id: Iea9a366cf6ccc4d491bfc49c5d2dba6d98f81b69
2019-12-05 06:24:32 -08:00
Maxime Guyot
37df9a10ff Add CI for Amazon Linux 2 (#5410) 2019-12-05 05:44:32 -08:00
Maxime Guyot
0f845fb350 Add support for Debian 10 (#5408) 2019-12-05 05:42:32 -08:00
Maxime Guyot
23b8998701 Add OIDC to CI (#5407) 2019-12-05 05:40:32 -08:00
Maxime Guyot
401d441c10 Fix Python code style for inventory_builder (#5362) 2019-12-05 01:48:32 -08:00
Hugo Blom
f7aea8ed89 update oidc to contain quotes (#5406) 2019-12-05 00:24:32 -08:00
Maxime Guyot
a9b67d586b Add markdown CI (#5380) 2019-12-04 07:22:57 -08:00
Maxime Guyot
b1fbead531 Update to TF v0.12.12 (#5267) 2019-12-04 07:20:58 -08:00
Maxime Guyot
b06826e88a Fix OpenSUSE support (#5370) 2019-12-04 05:16:57 -08:00
Matthew Mosesohn
57fef8f75e Allow customizing kubelet healthz port and bind addr (#5403)
Change-Id: I1634ba2d2d3337243ffcdea86750003a559f2576
2019-12-03 11:56:58 -08:00
Matthew Mosesohn
f599a4a859 force other resolvers to be secondary when using systemd-resolved (#5391)
Change-Id: I33d46c7e0c5374467e22c5a652b282d1703dea85
2019-12-02 08:41:04 -08:00
Matthew Mosesohn
e44b0727d5 Allow inventory_builder to add nodes with hostname (#5398)
Change-Id: Ifd7dd7ce8778f4f1be2016cae8d74452173b5312
2019-12-02 08:13:04 -08:00
Matthew Mosesohn
18cee65c4b Add support for k8s v1.17.0-rc.1, remove hyperkube (#5378)
Change-Id: I3fff04f0211cd9c2e8235acaf51c3aa98abc8bb7
2019-11-28 05:41:03 -08:00
zhanwang
f779cb93d6 update URL for Gluster Getting Started Guide (#5390)
update URL for Gluster Getting Started Guide
2019-11-28 00:45:03 -08:00
Yujun Zhang
aec5080a47 kubernetes/masters: fix task name in kubeadm setup (#5377) 2019-11-27 06:05:20 -08:00
Anton Fayzrahmanov
80418a44d5 CoreDNS deployment extra tolerations (#5364)
* Add extra tolerations for coredns

* dns_extra_tolerations option

* dns_extra_tolerations

* missing starting space in comment
2019-11-27 05:49:21 -08:00
Florian Ruynat
257c20f39e add 1.16.3 checksums and set new version as default (#5384) 2019-11-27 01:29:20 -08:00
xieyanker
050de3ab7b update fedora atomic download url (#5357) 2019-11-26 07:53:10 -08:00
Aaron Crickenberger
f1498d4b53 fix OWNERS file (#5359)
Initially this was to fix a mis-indented approvers key. However, it turns
out that 'oilbeater' is a member of neither kubernetes-sigs nor
kubernetes-incubator (the org this repo was migrated from). Thus this
OWNERS file is failing prow's validation check.

As a workaround I've opted to move them to emeritus_approver, which
isn't validated and can be used as a hint for other approvers in this
repo.
2019-11-25 17:59:11 -08:00
Etienne Champetier
18d19d9ed4 containerd: update to 1.2.10 (#5341)
Lots of bug and security fixes:
https://github.com/containerd/containerd/releases/tag/v1.2.10
CVE-2019-16884 / CVE-2019-16276
https://github.com/containerd/containerd/releases/tag/v1.2.9
CVE-2019-9512 / CVE-2019-9514 / CVE-2019-9515
https://github.com/containerd/containerd/releases/tag/v1.2.8
CVE-2019-9512 / CVE-2019-9514
https://github.com/containerd/containerd/releases/tag/v1.2.7

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
2019-11-22 00:09:29 -08:00
Matthew Mosesohn
61d7d1459a Purge inactive reviewers. Promote miouge1 to approver (#5365)
Change-Id: I92b94ed9be85d47717e24a6099b4b22066717d02
2019-11-21 11:03:29 -08:00
Michael Shen
6924c6e5a3 [FIX] fix match because trim removes leading/trailing whitespace (#5356) 2019-11-19 22:35:18 -08:00
Matthew Mosesohn
85c851f519 scale down coredns on each master during graceful upgrade (#5344)
This fixes the scenario where masters are upgraded one at a time
and coredns gets improperly scaled back up to 2 replicas.

Change-Id: I7cc9283f40efcfd61b5813c89a5805c95d901567
2019-11-18 00:13:41 -08:00
Yumo Yang
5cd7d1a3c9 modify host.yml in README.md (#5338) 2019-11-17 18:15:40 -08:00
Matthew Mosesohn
8b67159239 Do not run kubeadm upgrade on first deploy (#5339)
Change-Id: I68a962a9dd28c83ef07eaeaf53eb98287f38bca9
2019-11-14 02:05:34 -08:00
LuciferInLove
4f70da2731 Added Amazon Linux 2 support for deploying with docker (#5301) 2019-11-11 07:05:41 -08:00
Matthew Mosesohn
db5040e6ea Set certs and files with kubeadm token to mode 0640 (#5325)
Change-Id: I298496e55a6889c158b2085fcadeda5e679a873e
2019-11-11 05:41:41 -08:00
Jacopo Secchiero
97764921ed Fix calico name resolution (#5291) 2019-11-11 04:01:41 -08:00
Michée lengronne
a6853cb79d library files added to setup.cfg (#5274)
This hopefully ensures that Kubespray remains usable as a pip package.
2019-11-11 03:59:41 -08:00
Bjoern Teipel
8c15db53b2 Fix helm for Kubernetes 1.16.2 (#5332)
Since upgrading k8s beyond version 1.16.0, helm init no
longer works with helm < 2.16.0 due to
https://github.com/helm/helm/issues/6374

This PR closes issue #5331
2019-11-11 03:53:41 -08:00
Julien Pervillé
0200138a5d Pass ingress_nginx_extra_args when deploying the nginx-ingress addon (#5321) 2019-11-11 03:51:40 -08:00
Florent Monbillard
14af98ebdc Respect cri-tool supported version matrix (#5241)
| Kubernetes Version | cri-tools Version |
|--------------------|-------------------|
| 1.16.x             | v1.16.0           |
| 1.15.X             | v1.15.0           |
| 1.14.X             | v1.14.0           |
| 1.13.X             | v1.13.0           |
| 1.12.X             | v1.12.0           |
| 1.11.X             | v1.11.1           |

- Upgrade to cri-tools 1.16.1
- Add checksums for cri-tools 1.16.1
2019-11-11 03:45:42 -08:00
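If cri-tools ever needs to be pinned manually, a hedged group_vars sketch follows; the crictl_version variable name is assumed from the download role and should be verified against your Kubespray version.

```yaml
# group_vars/k8s-cluster/k8s-cluster.yml (sketch)
kube_version: v1.16.3
# Keep cri-tools aligned with the Kubernetes minor version per the matrix above
crictl_version: "v1.16.1"
```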
YichenWong
8a5434419b fix useradd etcd (#5281) 2019-11-11 03:27:41 -08:00
Quentin Gliech
8a406be48a Fix indentation in cilium-ds.yml template (#5305) 2019-11-11 03:25:41 -08:00
Johannes Scheuermann
feac802456 Remove default docker_options from sample (#5287) 2019-11-11 03:23:40 -08:00
Junho Suh
076f254a67 Add cilium_tunnel_mode variable to the cilium config (#5295) 2019-11-11 03:19:42 -08:00
holmesb
bc3a8a0039 Fixes issue #5299 (#5300) 2019-11-11 03:13:41 -08:00
Dmitry Chusovitin
45d151a69d containerd installation on Debian (#5326) 2019-11-11 02:41:41 -08:00
Matthew Mosesohn
bd014c409b Skip coredns image when evaluating kubeadm images (#5327)
It will be enabled correctly in downloads

Change-Id: Ief0b7aa2a8ee2ba6a6849820802f8542584b2c04
Related-story: PRODX-1171
2019-11-09 00:51:39 -08:00
Michael Shen
08421aad3d [FIX] fix incorrect link to downloads documentation (#5319) 2019-11-07 03:50:42 -08:00
Matthew Mosesohn
1c25ed669c Remove unnecessary and risky reload network for resolvconf propagation (#5322)
Change-Id: I54d706f7941b4b86c4c6cd45340295577155b884
2019-11-06 10:11:52 -08:00
Matthew Mosesohn
a005d19f6f Enable systemd-resolved DNS resolution mode (#5318)
Change-Id: If3e253a40782e03cde7fc4a91493517ae31fda17
2019-11-06 03:33:52 -08:00
Matthew Mosesohn
471589f1f4 Scale down coredns created by kubeadm upgrade to 0 replicas (#5308)
Change-Id: I128b0f9c1acbb956d9a6c4e5510b45a36e296af7
2019-11-05 03:34:38 -08:00
Ali Sanhaji
b0ee1f6cc6 Deploy Cinder CSI driver to provision volumes over OpenStack (#5184)
* Deploy Cinder CSI driver to provision volumes over OpenStack

* Deploy Cinder CSI StorageClass

* Cinder CSI doc
2019-11-01 00:59:24 -07:00
Pierre Ozoux
79128dcd6b Removes repetition. (#5310) 2019-10-30 06:12:53 -07:00
Samina Fu
dd7e1469e9 Fix typo of docs/dns-stack (#5307) 2019-10-30 02:00:55 -07:00
Matthew Mosesohn
186ec13579 Fix incorrect suggestion to enable old k8s apis (#5292)
Change-Id: If965cc6aa0daaca232dcf2ca0efd649aa097497f
2019-10-30 01:58:53 -07:00
Matthew Mosesohn
2c4e6b65d7 Raise delay and retry for rotate tokens (#5304)
Change-Id: I87844b43b9a18064e7a99567ce57c1ca1ffcc4a8
2019-10-30 01:56:52 -07:00
Eric Lake
108a6297e9 Terraform dynamic inventory 0.12.12 (#5298)
* Update parsing of terraform state file for 0.12.12

* Resource does not seem to have a module element but instead has a
provider element
* Return the boolean the right way if it is already a bool, since a bool does
not have a lower method

* Remove the setting of ansible_ssh_user to root for all Packet

Not all servers in packet are accessed as root by default. CoreOS
systems use the `core` user. Removing this allows the user to specify
the remote user with an extra_var or in an ansible.cfg file.

* Default to root user for packet devices except on CoreOS

* Update TF_VERSION for packet in tf-validate-packet

Update TF_VERSION to 0.12.12 for gitlab-ci tf-validate-packet tests

* convert packet terraform files to TV_VERSION 4

* initialize terraform before copying the variable file to the top level dir
2019-10-29 00:02:42 -07:00
Matthew Mosesohn
94d4ce5a6f Retry cleaning up calico-node container (#5302)
Change-Id: Iad27b107860213759c7ae51f0891d7e5e7c6d96b
2019-10-28 05:11:25 -07:00
Matthew Mosesohn
81da231b1e Set cluster DNS in kubeadm config for kubelet dynamic config (#5293)
Change-Id: I23116efefe8626d361d1904fc6fb8448f66cf3c5
2019-10-25 02:23:40 -07:00
Ludovic Muller
1a87dcd9b9 readme: update url to the Kubernetes documentation page for Kubespray (#5294) 2019-10-24 22:39:39 -07:00
Matthew Mosesohn
a1fff30bd9 Generate TLS certs for calico typha (#5258)
* Generate TLS certs for calico typha

Change-Id: I3883f49c124c52d0fc5b900ca2b44e4e2ed0d707

* Add group vars note

Change-Id: I63550dfef616e884efdbd42010a90b2c04c5eb69
2019-10-17 07:02:38 -07:00
Sergey
81d57fe658 set calico_datastore default value in role kubespray-default (#5259) 2019-10-17 05:58:38 -07:00
Sergey
3118437e10 check on all cluster nodes - kubelet_max_pods <= (2 ** (32 - kube_network_node_prefix | int)) - 2 (#5279) 2019-10-17 05:48:38 -07:00
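The check above amounts to requiring that each node's pod CIDR holds at least kubelet_max_pods usable addresses (minus network and broadcast). A minimal, illustrative Ansible assertion sketch, not the exact Kubespray task:

```yaml
# Illustrative assertion: a /kube_network_node_prefix node CIDR must hold
# at least kubelet_max_pods addresses (excluding network and broadcast).
- name: Check kubelet_max_pods fits into the node CIDR
  assert:
    that:
      - kubelet_max_pods | int <= (2 ** (32 - kube_network_node_prefix | int)) - 2
    msg: "kubelet_max_pods exceeds the usable IPs of a /{{ kube_network_node_prefix }} node CIDR"
```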
Sergey
65e461a7c0 download container always runs on the download_delegate host (#5177)
* download container always runs on the download_delegate host

* also fix the pull-required check
2019-10-17 05:38:38 -07:00
Michael Oglesby
c672681ce5 Revert Pull Request #5084 (#5120)
Kubespray Pull Request #5084 (https://github.com/kubernetes-sigs/kubespray/pull/5084) caused more problems than it solved due to limitations with the synchronize module. See comments on Kubespray Issues #5059 (https://github.com/kubernetes-sigs/kubespray/issues/5059) and #5116 (https://github.com/kubernetes-sigs/kubespray/issues/5116). Details from Ansible documentation: "Currently, synchronize is limited to elevating permissions via passwordless sudo. This is because rsync itself is connecting to the remote machine and rsync doesn’t give us a way to pass sudo credentials in. ... Currently there are only a few connection types which support synchronize (ssh, paramiko, local, and docker) because a sync strategy has been determined for those connection types. Note that the connection for these must not need a password as rsync itself is making the connection and rsync does not provide us a way to pass a password to the connection. ..." Thus, reverting Pull Request #5084.
2019-10-17 05:26:37 -07:00
yelhouti
d332a254ee install python3 instead of python2 for fedora >= 30 fixes 5056, fixes 4802 (#5111) 2019-10-17 05:04:38 -07:00
Sean Sube
f3c072f6b6 ignore gpg files in inventory (#5209) 2019-10-16 20:22:39 -07:00
Matthew Rapa
3debb8aab5 add KUBELET_VOLUME_PLUGIN to kubelet.env (#5128) 2019-10-16 20:08:38 -07:00
YichenWong
aada6e7e40 Add etcd_data_dir variable to the kubeadm config (#5263) 2019-10-16 19:50:39 -07:00
Matthew Mosesohn
ac60786c6f Add support for restart handlers for control plane on crio/containerd (#5250)
* Add support for restart handlers for control plane on crio/containerd

Change-Id: I8343cc4e9df7f55b732628ed01cc6e7ea5dcee85

* Update main.yml
2019-10-16 18:58:39 -07:00
Hugo Blom
db33dc6938 Add support for Kubernetes 1.16.2 (#5272)
* Add support for Kubernetes 1.16.1

* Defaults to 1.16.1

* add 1.16.2 checksums and set new version as default

* correct 1.16.2 checksums and add 1.15.5 checksums
2019-10-16 18:34:38 -07:00
Hugo Blom
9dfb25cafd fix typo (#5275) 2019-10-16 18:26:38 -07:00
Maxime Guyot
df8d2285b6 Update ingress-nginx to v0.26.1 (#5268) 2019-10-16 18:22:39 -07:00
Matthew Mosesohn
af6456d1ea Fix selector for calico-typha deployment (#5253)
Change-Id: I79f43379cbe1c495cb416f0572e65f695d5ec2b8
2019-10-16 07:53:42 -07:00
Maxime Guyot
6f57f7dd2f Update nginx image to latest (#5270) 2019-10-16 04:37:42 -07:00
Dennis Field
fd2ff675c3 Clarify process for upgrading more than one version (#5264)
Since it is unsupported to skip upgrades, I've detailed the steps for upgrading a step at a time and removed some language that indicated it should work
2019-10-16 04:35:41 -07:00
Xiaodu
bec23c8a41 Add k8s v1.15.4 hashes (#5235) 2019-10-16 04:33:41 -07:00
Robin Elfrink
faaff8bd72 Add RotateCertificates to kubelet config if kubelet_rotate_certificates is set. (#5152)
Signed-off-by: Robin Elfrink <robin.elfrink@eu.equinix.com>
2019-10-16 04:31:41 -07:00
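For reference, the corresponding KubeletConfiguration fragment looks roughly like this when kubelet_rotate_certificates is enabled; a sketch of the rendered config, not the Kubespray template itself.

```yaml
# Sketch of the rendered kubelet config fragment
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
rotateCertificates: true
```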
andreyshestakov
8031c6c1e7 Update template for dashboard to support v2.x (#5187)
Secrets and ConfigMap should be created before the dashboard pod runs.
2019-10-16 04:29:41 -07:00
Erwan Miran
9d8fc8caad Fix getting nameserver and search for /etc/resolv.conf with comments (#5197) 2019-10-16 04:27:40 -07:00
Julien Pervillé
d1b1add176 contrib/heketi: use inventory node ip in topology instead of guessing it (#5233) 2019-10-16 04:25:42 -07:00
Qingkun Li
a51b729817 add ignore_errors to the kube-proxy deletion task (#5236)
When using cluster.yml or scale.yml to add/scale nodes in the existing
k8s cluster, the `kubeadm init` wouldn't run. As a result, kube-proxy
wouldn't be created, and therefore the kube-proxy deletion task would
fail, e.g. in the case where kube-router is used and "kube_proxy_remove"
is set to true. As a workaround, add ignore_errors to the kube-proxy
deletion task.
2019-10-16 04:23:40 -07:00
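A minimal sketch of such a deletion task with ignore_errors; this is a hypothetical task, not the exact Kubespray one, and bin_dir/kube_proxy_remove are assumed inventory variables.

```yaml
# Hypothetical sketch: ignore failures when kube-proxy was never deployed
- name: Delete kube-proxy daemonset when kube_proxy_remove is enabled
  command: "{{ bin_dir }}/kubectl -n kube-system delete daemonset kube-proxy"
  when: kube_proxy_remove | default(false)
  ignore_errors: true
```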
Maxime Guyot
19bc79b1a6 Update cert-manager to v0.11.0 (#5269) 2019-10-16 04:21:40 -07:00
Seref Acet
d2fa3c7796 Minimum required version of Kubernetes is v1.15 (#5266) 2019-10-15 06:59:53 -07:00
Mateus Caruccio
03cac2109c No need to gather facts on localhost (#5251)
It's unnecessary and breaks when running from within a docker container:
```
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TimeoutError: Timer expired after 10 seconds
fatal: [localhost]: FAILED! => {"changed": false, "cmd": "/usr/sbin/udevadm info --query property --name /dev/mapper/vg00-root", "msg": "Timer expired after 10 seconds", "rc": 257}
```
2019-10-15 06:57:53 -07:00
Sergey
932935ecc7 fix wrong path in include install_host.yml in etcd role (#5256) 2019-10-13 18:16:34 -07:00
BenoitBOULANGER
e01118d36d Fix issue in remove-node/post-remove task (#5185) (#5186) 2019-10-10 05:17:43 -07:00
Matthew Mosesohn
dea9304968 Enable openstack_cacert to be either file or base64 string (#5243) 2019-10-09 02:19:49 -07:00
Matthew Mosesohn
2864e13ff9 Reset between kubeadm secondary control plane join attempts (#5240)
Change-Id: Ic9425bf90552d7e3d42b02409af9773d99376384
2019-10-08 00:15:12 -07:00
Hugo Blom
a8c5a0afdc Make it possible to disable access_ip (openstack provider) (#5239)
* Add a variable to disable access_ip

* Document the use of use_access_ip
2019-10-07 04:09:09 -07:00
Erwan Miran
0ba336b04e install helm client separately (#5212) 2019-10-04 05:14:02 -07:00
Matthew Mosesohn
89f1223f64 Fix selector workaround for helm install (#5237)
Change-Id: I826337b59814674c3feb4cd6a4904d9d53e01652
2019-10-03 23:41:56 -07:00
陈谭军
8bc0710073 clean up document (#5214) 2019-10-02 04:41:07 -07:00
Matthew Mosesohn
fb591bf232 Apply workaround for NetworkManager and calico (#5230)
Change-Id: I5cb2bdf1a57707c1b8da3e5ac0c80e5c353480a4
2019-10-02 04:37:07 -07:00
Matthew Mosesohn
a43e0d3f95 Switch to Kubernetes v1.16.0 (#5189)
* Switch to Kubernetes v1.16.0

Change-Id: I5d6a9528b2d443750fc5e031aff15ad3ffead158

* Fix download localhost cached file path

Change-Id: I65e79b70e3d1b37265ebc60f41b460cf4b0a0d47

* fix kubeadm etcd for v1.16

Change-Id: I6888a00fd48b530a38b0b31c4095492476af42d2

* disable tf packet jobs

Change-Id: I075c4666547fdea4c50ec04864f38e2cfaa79154

* Disable contiv packet jobs. Fix kube-router

Change-Id: I3170e8789e60711d4cee8faf65f2094480b79b8d

* bump sonobuoy version

Change-Id: Ib946905629c7c53ed88f08fb2f41c454457a0097
2019-10-02 02:21:07 -07:00
Besmir Zanaj
f16cc1f08d fix digital rebar url (#5223) 2019-09-30 01:05:38 -07:00
Maxime Guyot
8712bddcbe Add docs for TF vars introduced PR 4239 (#5201) 2019-09-26 04:31:07 -07:00
陈谭军
99dbc6d780 clean-up doc,spelling mistakes (#5206) 2019-09-26 04:25:08 -07:00
Richard Scott
75e4cc2fd9 Updated kubectl.sh (#5156)
The script is not usable unless you are in the '.vagrant/provisioners/ansible/inventory/artifacts' folder.
This update makes it usable from anywhere.
2019-09-26 04:23:07 -07:00
Etienne Champetier
81cb302399 MetalLB: fail if kube_proxy_strict_arp is false (#5180)
When using IPVS, kube_proxy_strict_arp = true is required
https://github.com/danderson/metallb/issues/153#issuecomment-518651132

Add kube_proxy_strict_arp to inventory/sample
2019-09-26 04:21:06 -07:00
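In group_vars this corresponds to something like the following sketch; kube_proxy_strict_arp is the variable added by this commit, and kube_proxy_mode is the existing proxy-mode setting.

```yaml
# group_vars/k8s-cluster/k8s-cluster.yml (sketch)
kube_proxy_mode: ipvs
# Required by MetalLB layer-2 mode when kube-proxy runs in IPVS mode
kube_proxy_strict_arp: true
```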
陈谭军
3bcdf46937 fix-up some spelling mistakes (#5202) 2019-09-25 23:27:08 -07:00
Sergey
1cf6a99df4 generate kubeadm download image list with options useHyperKubeImage (#5203) 2019-09-25 18:03:06 -07:00
Robert Neumann
a5d165dc85 Customize host root volume size by Terraform provisioning (#4239)
* print hostnames (#5110)

Terraform - customize hosts' root volume size

disable block_device by default

Terraform formatting fix

Fixed typos

* fix resources after rebase

* Fix glusterfs image issue
2019-09-25 05:17:59 -07:00
Erwan Miran
f18e77f1db Blocksize for calico default pool should be configurable (#5198) 2019-09-25 04:44:00 -07:00
陈谭军
2fc02ed456 fix-typo (#5199) 2019-09-25 04:04:00 -07:00
pando85
9db61c45ed Upgrade nodelocaldns to 1.15.5 (#5191) 2019-09-22 20:13:22 -07:00
Sergey
8cb54cd74d fix broken scale procedure: (#5193)
- do not run etcd role when etcd_kubeadm_enabled == true
- remove default value 'systemd' for cgroup driver in containerd role;
  this value overrides the autodetected kubelet_cgroup_driver_detected from docker info
2019-09-22 01:07:22 -07:00
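For illustration, a hedged group_vars sketch of the flag mentioned above; the file path is an example.

```yaml
# group_vars/all/etcd.yml (sketch)
# When true, etcd is managed by kubeadm and the standalone etcd role is skipped
etcd_kubeadm_enabled: true
```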
Florent Monbillard
a3f1ce25f8 Add support for k8s v1.14.6 (#5182) 2019-09-18 02:53:30 -07:00
Qingkun Li
3c7f682e90 Parameterize gcr, quay, and docker image repo defines (#5146)
This makes it easy to override the gcr, quay, and docker repos with
mirror repos in countries like China, where the default registries are
blocked or unstable.
2019-09-18 02:49:30 -07:00
Sergey
8984096f35 use hyperkubeimage to run controlplane containers (#5178) 2019-09-17 18:33:28 -07:00
Mario
1ce7831f6d Update main.yml (#5166) 2019-09-17 05:36:24 -07:00
holmesb
46f6c09d21 Fixes issue #5160 (#5171) 2019-09-16 23:42:22 -07:00
Matthew Mosesohn
6fe2248314 Use more native way to update kubeconfigs using kubeadm (#5165)
Change-Id: I1076b418f85a26d9896be69910052128afc51cee
2019-09-13 03:40:29 -07:00
andreyshestakov
cb4f797d32 Fix macro on local_volume_provisioner (#5168)
mydict.keys() should be converted to list,
otherwise it causes errors in loop iteration.

Remove extra space after class name, which broke configmap.

Also allow setting the reclaimPolicy property.
2019-09-13 00:50:33 -07:00
Matthew Mosesohn
eb40ac163f Move cri_socket var to kubespray-defaults (#5149) 2019-09-10 12:30:55 -07:00
Matthew Mosesohn
27ec548b88 Add support for k8s v1.16.0-beta.2 (#5148)
Cleaned up deprecated APIs:
apps/v1beta1
apps/v1beta2
extensions/v1beta1 for ds,deploy,rs

Add workaround for deploying helm using incompatible
deployment manifest.
Change-Id: I78b36741348f47a999df3841ee63cf4e6f377830
2019-09-10 12:06:54 -07:00
Florent Monbillard
637f09f140 Fix ansible task titles (#5154)
* Fix ansible task titles for CRI connection tasks

* Fix Azure subscription ID check task title
2019-09-10 01:34:54 -07:00
Matthew Mosesohn
9b0f57a0a6 Adjust endpoints for kube-proxy,controller,scheduler to proper ip (#5150)
Change-Id: I5aa009358bee7035922b5a10327997e47c9ba434
2019-09-09 10:33:20 -07:00
leonmbecker
5f02068f90 Documenting Terraform variable az_list explicitly (#5132)
* added az_list to README section

* added az_list to cluster.tfvars
2019-09-09 07:41:19 -07:00
Matthew Mosesohn
7f74906d33 Make haproxy/nginx client timeout configurable (#5140)
Change-Id: I61319a06eb33d9fc868e19941924f387088b856b
2019-09-05 00:32:51 -07:00
Csergő Bálint
56523812d3 print hostnames (#5110) 2019-08-29 05:07:57 -07:00
Guangming Wang
602b2d198a Cleanup: fix typo in doc (#5105)
Signed-off-by: Guangming Wang <guangming.wang@daocloud.io>
2019-08-28 02:35:15 -07:00
Richard Arends
4d95bb1421 Use python3-libselinux on RHEL8/Centos8 (#5127)
* Use python3-libselinux on RHEL8/Centos8

* The fact ansible_facts.distribution_major_version is not present on older Ansible versions.
Default it to 0 when not present and use libselinux-python as the package to get the current
default behaviour.
2019-08-28 02:33:15 -07:00
Matthew Mosesohn
184ac6a4e6 Parse calico nodes as json (#5114) 2019-08-27 10:16:42 -07:00
rptaylor
10e0fe86fb remove unimplemented custom_flags vars, document the extra_args vars (issue 4352) (#5108) 2019-08-23 01:21:18 -07:00
Matthew Mosesohn
7e1645845f Allow calico settings to be modified (#5101)
Previous logic used calicoctl.sh create --skip-exists, which
allowed setting initial values but did not permit changes.
2019-08-23 00:01:19 -07:00
Neven Miculinic
f255ce3f02 Added CRI-O support for ubuntu (#4629)
* Added CRI-O support for ubuntu

* implemented feedback

* set crictl to fixed version

* Fix errors during rebasing

* Fix linting errors
2019-08-22 03:54:31 -07:00
Michael Oglesby
07ecef86e3 Replace fetch with synchronize due to memory error (#5084)
Fix for Kubespray Issue #5059 (https://github.com/kubernetes-sigs/kubespray/issues/5059). There is a known issue with the 'fetch' module that will sometimes lead to it failing with a memory error. See ansible/ansible#11702 (https://github.com/ansible/ansible/issues/11702). I encountered this issue with the "Copy kubectl binary to ansible host" task in kubespray/roles/kubernetes/client/tasks/main.yml, and it caused my entire deployment to error out (see "Output of ansible run" above). Replacing 'fetch' with 'synchronize' fixes this issue.
2019-08-22 02:40:32 -07:00
ewtang
3bc4b4c174 Use raw module for bootstrap-debian.yml (#5061)
Updated Openstack to terraform 0.12 (#5062)

* update openstack to terraform 0.12(.5)

* replace cluster.tf with cluster.tfvars

* update README.md to terraform 0.12

* update Openstack CI tests to use terraform 0.12

* specify terraform version in openstack README

* gitlab CI to copy cluster.tfvars in case of openstack provider

* The terraform/openstack dynamic inventory can read
tfstate v4 (generated by terraform 0.12) and convert them internally
to v3 (as generated by terraform 0.11.x).

Additionally the script has been updated to Python 3.
2019-08-22 01:46:31 -07:00
Victor Morales
da089b5fca Update CRI-O in CentOS (#4582)
According to their compatibility matrix[1] the 1.11.5 version seems to
be deprecated. This change updates the CentOS repository reference.

[1] https://github.com/cri-o/cri-o#compatibility-matrix-cri-o---kubernetes-clusters
2019-08-22 01:16:32 -07:00
ewtang
d4f094cc11 Add localhost to ansible.limit. (#5037)
Upgrade to Kubernetes 1.15.3 (#5091)
2019-08-22 01:14:32 -07:00
Sergey
494a6512b8 fix bug: run Copy image to ansible host cache on download_delegate host (#5094)
* run 'task download_container | Copy image to ansible host cache' with synchronize on download_delegate host

* try to run the 'copy file to ansible host' task on all inventory hosts, not only on the first random host
2019-08-21 23:38:30 -07:00
mcayland
3732c3a9b1 terraform/openstack: add network_dns_domain variable (#5093)
This allows the user to optionally specify the dns_domain attribute on the
generated internal kubernetes network.
2019-08-21 05:09:15 -07:00
Tony Fouchard
f6a63d88a7 Allow to configure strict ARP on kube-proxy (#5092) 2019-08-20 18:21:17 -07:00
Andreas Krüger
86cc703c75 Upgrade to Kubernetes 1.15.3 (#5091) 2019-08-20 02:05:32 -07:00
Hugo Blom
4dba34bd02 add cinder max attached volumes (#5089) 2019-08-19 23:45:32 -07:00
Xiaodu
b0437516c1 Kube-router annotate.yml: Use group 'k8s-cluster' instead of 'all' (#5087) (#5088) 2019-08-19 04:53:29 -07:00
Hugo Blom
da015e0249 Updated Openstack to terraform 0.12 (#5062)
* update openstack to terraform 0.12(.5)

* replace cluster.tf with cluster.tfvars

* update README.md to terraform 0.12

* update Openstack CI tests to use terraform 0.12

* specify terraform version in openstack README

* gitlab CI to copy cluster.tfvars in case of openstack provider

* The terraform/openstack dynamic inventory can read
tfstate v4 (generated by terraform 0.12) and convert them internally
to v3 (as generated by terraform 0.11.x).

Additionally the script has been updated to Python 3.
2019-08-18 01:30:05 -07:00
shlo
554857da97 add cluster name into filter if specified in environment variable (#5085) 2019-08-16 19:28:08 -07:00
Guangming Wang
9bf23fa43b clean up typos in vars.md (#5086)
Signed-off-by: Guangming Wang <guangming.wang@daocloud.io>
2019-08-16 19:12:09 -07:00
Guangming Wang
42287066d3 fix word errors in downloads.md (#5083)
Signed-off-by: Guangming Wang <guangming.wang@daocloud.io>
2019-08-15 20:10:33 -07:00
Ali Sanhaji
a1ff1de975 fix openstack_cacert conditional (#5078) 2019-08-15 05:50:34 -07:00
Zou Nengren
1bfbc5bbc4 remove resource-container default value for kube-proxy (#4994) 2019-08-15 05:30:33 -07:00
Bort Verwilst
c5b4d3ceaa upgrade Helm to 2.14.3 (#5075)
Signed-off-by: Bart Verwilst <bart@verwilst.be>
2019-08-15 04:34:33 -07:00
w33dw0r7d
8fc9c5d025 Upgrade ingress nginx to 0.25.1 (#5081) 2019-08-15 04:14:34 -07:00
Maxime Guyot
42bba66c02 Disable moderator (#5069)
* Test the CI

* Disable CI moderator
2019-08-15 03:02:34 -07:00
刘旭
53bc80bb59 Ingress nginx (#5066)
* remove svc-default-backend

* update ingress-nginx clusterrole
2019-08-15 02:34:33 -07:00
Matthew Mosesohn
771ce96e6d Set initial kubeadm token if specified in kubeadm init (#5057)
Change-Id: I7fd94ec6d195af60d237b3cfe91668ca1f707d26
2019-08-15 02:26:33 -07:00
Oilbeater
fc456ff0cd move kube-ovn images to dockerhub (#5063) 2019-08-14 04:02:24 -07:00
Sergey Kolekonov
b4f70db878 Fix broken containerd pinning on Ubuntu (#5072) 2019-08-13 19:26:23 -07:00
Matthew Mosesohn
5707f79b33 Allow to configure number of kube-masters (#5073)
Change-Id: Ia3f30a1216b3ea063cd72c839ef6dff753cf10c6
2019-08-13 18:52:24 -07:00
Matthew Mosesohn
0a2f4edfc6 Always download coredns images with kubeadm (#5071)
Fixes a situation when using manual mode, where it
tries to download coredns v1.3.1 from the same
image repository that kubernetes images are
downloaded from.

Change-Id: Ibbec8a72c8162ce8befa74e2013a268737ea5f8a
2019-08-13 08:53:43 -07:00
Danilo Riecken P. de Morais
56fa46716e Add missing coredns tag. (#5054) 2019-08-09 02:29:27 -07:00
Craig Rodrigues
b74abe56fd Bump minimum K8S version to 1.14 (#5055)
Signed-off-by: Craig Rodrigues <craig@portworx.com>
2019-08-09 02:13:26 -07:00
Simon Lelievre
62aecd1e4a multus | fix use last version (#5041) 2019-08-08 23:05:25 -07:00
Mario
973afef96e Fix variable for rbd_provisioner_user_secret (#5042)
* Update main.yml

* fix dead link 404
2019-08-08 20:03:25 -07:00
Bort Verwilst
a235605d2c go to k8s 1.15.2, update nodelocaldns to latest bugfix release (#5048) 2019-08-08 19:49:25 -07:00
Matthew Mosesohn
023108a733 Refactor calico route reflector to run in k8s cluster (#4975)
* Refactor calico-rr to run in k8s cluster with taint

Change-Id: I75a3169ff5b36ce8302fc7ef1c32d3eb697b5afa

* add preinstall checks

* rework calico/rr role

Change-Id: I2f0a7e6cb77cf91ad4a615923680760d2e5d9ca8

* add empty calico-rr group

Change-Id: I006c0a60db9b72d02245bf8fdfabcf982144a5ad
2019-08-08 07:37:22 -07:00
Matthew Mosesohn
75d1be8272 Fix check for removing etcd member (#5051)
Change-Id: Ib27d051ff111f813097a9b33a86465a2a30a6db0
2019-08-07 08:26:51 -07:00
Matthew Mosesohn
a44235d11b Refactor remove node to allow removing dead nodes and etcd members (#5009)
Change-Id: I1c59249f08f16d0f6fd60df6ab61f17a0a7df189
2019-08-07 04:46:50 -07:00
Matthew Mosesohn
7abf6a6958 Allow etcd member join by checking cluster health only on first etcd (#5032)
Change-Id: I9cc01cef3a437893225e2d9f58495826bbce7be9
2019-08-07 04:44:50 -07:00
Edward Vielmetti
0d0b1fdf82 Ansible version bump for CVE-2019-10156 (#5050) 2019-08-06 23:56:50 -07:00
Maxim Snezhkov
b710c72f04 Add ability to setup virtual ip for ingress-controller (#5044) 2019-08-06 19:24:50 -07:00
Matthew Mosesohn
678c316d01 Optionally refresh kubeadm token every time (#5045)
Change-Id: I278cb14aa93abf20160cc001f69e2f472504e6d8
2019-08-06 07:35:55 -07:00
Holger Frydrych
bc6de32faf Upgrade Cilium network plugin to v1.5.5. (#5014)
* Needs an additional cilium-operator deployment.
  * Added option to enable hostPort mappings.
2019-08-06 01:37:55 -07:00
Matthew Mosesohn
7cf8ad4dc7 Optionally refresh kubeadm token every time (#5043)
Change-Id: I278cb14aa93abf20160cc001f69e2f472504e6d8
2019-08-06 00:59:53 -07:00
Remous-Aris Koutsiamanis
02ec72fa40 Fix commands for using experimental kubeadm control plane (#5006) 2019-08-05 07:31:50 -07:00
Johannes Scheuermann
d22634a597 Refactor containerd ubuntu setup and remove redundant tasks (#5015) 2019-08-05 07:29:48 -07:00
Mark Janssen
4132cee687 Fix mistakes in downloads docs (#5038) 2019-08-04 18:25:47 -07:00
Mark Janssen
f3df0d5f4a Always create bash_completion.d folder (#5039) 2019-08-04 18:15:48 -07:00
Adrian Moisey
1d285e654d Fix small typo (#5029) 2019-08-01 06:52:16 -07:00
Vitaliy Dmitriev
dc6ad64ec7 [contrib/heketi]: tear down additions and fixes. Heketi updated to version 9 (#5027)
* lvm packages removal during tear down skipped by default
  * lvm utils execution PATH fixed for CentOS/RH
  * Heketi updated to the latest version 9

Signed-off-by: Vitaliy Dmitriev <vi7alya@gmail.com>
2019-08-01 04:00:16 -07:00
w33dw0r7d
92bfcf0467 Add CoreDNS endpoint_pod_names option (#5012) 2019-07-31 11:26:15 -07:00
koriukiv
54b1fe83f3 Add an option to reserve resources for OS system daemons (#5007) 2019-07-31 11:24:15 -07:00
Andreas Holmsten
5337cff179 Add packet_ubuntu18-flannel-containerd (#5004) 2019-07-31 11:22:14 -07:00
Oilbeater
1be788f785 add Kube-OVN cni to kubespray (#5020) 2019-07-30 20:10:20 -07:00
rptaylor
8afbf339f7 fix broken link (#5023) 2019-07-30 19:18:22 -07:00
Andreas Krüger
8c935dfb50 Update CoreDNS to 1.6.0 (#5021) 2019-07-30 18:58:21 -07:00
Johannes Scheuermann
66c5ed8406 Update critools to v1.15.0 (#5016) 2019-07-30 12:04:09 -07:00
Erwan Miran
4087e97505 Additional files and dirs to remove when running reset (#5000) 2019-07-30 12:02:08 -07:00
Jeff Bornemann
da50ed0936 move flexvolume plugin directory creation to preinstall (#4999)
* move flexvolume plugin directory creation to preinstall

* changes per pr feedback
2019-07-30 12:00:10 -07:00
okamototk
fbbfff3795 fix broken ubuntu containerd engine (#5002) 2019-07-30 11:58:11 -07:00
Aleksey Kasatkin
fb9103acd3 Update calico-typha deployment to address v3.7.x changes (#5003)
* Update calico-typha deployment to address v3.7.x changes

So that calico-typha works for Calico v3.7.x.

* Apply changes for v3.7.x only.
2019-07-24 09:12:16 -07:00
nico-netminded
49d921cf91 Restart canal after scale or upgrade. Just like PR#4531, but for canal (#4992) 2019-07-22 00:50:53 -07:00
刘旭
fe29c97ae8 add ansible_hostname and ansible_fqdn to apiserver_sans (#4990) 2019-07-22 00:48:53 -07:00
Hugo Blom
2abb6c8689 update to kubernetes 1.15.1 (#4989)
* update to kubernetes 1.15.1

* Revert to sonobuoy 0.15.0

* update test timeout from 3 to 5 minutes
2019-07-21 12:24:51 -07:00
Andreas Krüger
a3ca441998 Remove unused handlers from Flannel CNI (#4984)
* Only reload docker when is_atomic for Flannel

* Remove unused handlers from Flannel CNI
2019-07-21 00:16:54 -07:00
rptaylor
9cf503acb1 configure docker_options directly with template (#4912) 2019-07-21 00:12:53 -07:00
Matthew Mosesohn
1cbdd7ed5c Fixup etcdctl download for etcd kubeadm mode (#4991)
Change-Id: I8d8e59a97823390f40e8810905ca52430329f040
2019-07-21 00:08:53 -07:00
Sergey Kolekonov
428e52e0d1 Fix calico handler for containerd (#4985)
The crictl tool must be used to delete containers in the case of a containerd
deployment
2019-07-16 08:35:24 -07:00
Matthew Mosesohn
70dc222719 Upgrade local volume provisioner to v2.3.2 (#4983) 2019-07-16 05:27:26 -07:00
Tilman Beitter
69f796f0c7 use the locally deployed kubectl binary within the kubectl.sh helper script (#4311) 2019-07-16 02:23:25 -07:00
Vincent Link
5826f0810c Check all apt config files for configured proxies (#4972) 2019-07-16 01:41:24 -07:00
刘旭
de9443a694 remove unused code (#4981) 2019-07-16 01:39:24 -07:00
Alex Barcelo
99c5f7e013 add k8s_external plugin to CoreDNS configuration (#4704) 2019-07-16 00:53:23 -07:00
Andreas Krüger
d9dedc2cd5 Let Premoderator script add labels (#4978)
* Let Premoderator script add labels

* Fix JQ error

* Minor fixes

* Debug patch label output

* Try again

* Try again

* Try again

* Try again

* Try again

* Minor cleanup
2019-07-16 00:43:23 -07:00
Matthew Mosesohn
23ae6027ab remove support for calico v2.x (#4974)
* Remove support for calico below version v3.0.0

Change-Id: If8fe3036b9e054901a8b2c48516eff1e1271970f

* Update main.yml

* fixup node peering

Change-Id: Ifac4d363deba826f0c80e390ce80a28df9827323

* fixups

Change-Id: Ic35417330af6741962003b3930604393c90804d1

* fixups

Change-Id: I0ea82d634bb0c81d9b7dc50569c70988bc8d3a3b
2019-07-15 07:47:09 -07:00
Waret
781b5691c9 prep_kubeadm_images: parse repo and tag (#4976) (#4977) 2019-07-15 03:15:09 -07:00
Matthew Mosesohn
fd9bbcb157 Enable nodes to run calicoctl for calico kdd mode (#4956)
* Enable nodes to run calicoctl

per-node tasks require waiting for calico-node to be applied

Change-Id: Ibe1076b7334a2da0332f2dd766fde0c3f172d1f2

* cleanup tasks that should run on master

Change-Id: I43a837879ef41596f14657ecd7f813899b6865ae

* Switch run_once calico logic to just run on first master

Change-Id: I6893711e354f63c5e1eaf6ac2e23d9a6347a555d
2019-07-15 01:59:06 -07:00
Gustavo Muniz do Carmo
e0410661fa azure loadbalancer vars generation (#4892) 2019-07-15 01:27:06 -07:00
TommyKTheDJ
8ef754678a Ansible 2.8.x note (#4908)
Update README.md to link to the open issue that shows Ansible 2.8.x doesn't work with Kubespray. The requirements.txt file is already pinned to 2.7.8, so only the README needed updating, I think.
2019-07-15 01:05:07 -07:00
Andreas Krüger
161a8f55fa Update CoreDNS to 1.5.2 (#4970) 2019-07-15 00:59:06 -07:00
Andreas Krüger
7481cc31e1 Update Weave CNI to 2.5.2 (#4969) 2019-07-15 00:57:06 -07:00
Matthew Mosesohn
b15b6e834f fix parsing refresh of kubeadm cert key (#4971)
* fix parsing refresh of kubeadm cert key

Change-Id: I4de2a1df6498790a80351b4bc7d88e6c9e470358

* Update kubeadm-secondary-experimental.yml
2019-07-15 00:45:06 -07:00
Hugo Blom
76640cf1a1 update docker-ce to 18.09.7 (#4973) 2019-07-14 22:59:04 -07:00
jlacoline
374ea7b81d update supported calico version to 3.7.3 in README (#4966) 2019-07-12 03:31:05 -07:00
Sergey Kolekonov
46bef931e9 Fix images info logic for containerd (#4965)
As the crictl tool is used to download images, it must also be used to gather
image info
2019-07-12 03:29:07 -07:00
Stanislav Makar
a36e9ae690 Add checksums for k8s 1.14.4 (#4959) 2019-07-11 23:19:05 -07:00
Jeff Bornemann
728155a2a1 Support for Oracle Linux (#3655)
Fixed Issue #1032

test case for OEL7 AIL with kubeadm

Add packet CI stuff for oracle 7
2019-07-11 23:17:05 -07:00
Matthew Mosesohn
cdf9a9f4fc Generate certificate key before kubeadm control plane config (#4964) 2019-07-11 05:30:54 -07:00
Matthew Mosesohn
29307740dd Enable containerd to deploy vanilla containerd package (#4951)
* Enable containerd to deploy vanilla containerd package

Fixes kubeadm references to CRI socket for containerd
Fixes download role cache feature to work with containerd

Change-Id: I2ab8f0031107e2f0d1a85c39b4beb66f08509a01

* use containerd for flannel-addons job

Change-Id: Ied375c7d65e64a625ffbd995ff16f2374067dee6

* add containerd vars

Change-Id: Ib9a8a04e501c481a86235413cbec63f3672baf91

* fixup vars

Change-Id: Ibea64e4b18405a578b52a13da100384582aa24c2

* more fixes

* fix rh repo

Change-Id: I00575a77cfb7b81d6095db5d918a52023c8f13ba

* Adjust helm host install for containerd
2019-07-10 23:46:54 -07:00
Vilvaramadurai Samidurai
a038d62644 doc-fix: azure.md azure cli parameter update (#4936) 2019-07-10 05:46:25 -07:00
jlacoline
20c7e31ea3 Add calico 3.7.3 support (#4953)
* Add calico 3.7.3 support

* add calico_datastore variable to policy controller role

* add missing clusterrole rules for calico policy controller

* disable calico kube controller when kdd mode is used for versions < 3.6
2019-07-09 12:42:28 -07:00
Matthew Mosesohn
65065e7fdf Default kubeadm_images to empty dict instead of set_fact (#4957)
Change-Id: I9ef52232176fe8e2a99b4f09ae05e1451f7ad1ff
2019-07-09 12:07:30 -07:00
Matthew Mosesohn
352297cf8d Fixup deploy of kubeadm etcd for Kubernetes v1.15.0 (#4952)
* Fixup deploy of kubeadm etcd for Kubernetes v1.15.0

Change-Id: If42c2c75c4d278ba9475ebf76c243f3e6ee4d02e

* undo renaming cloud config file

Change-Id: Iafbd27c3887d6a2a6d0819c711f150ecf70c515d
2019-07-09 15:41:59 +03:00
champtar
a67a50f9c0 nodelocaldns: allow to set health port, switch to 9254 by default (#4902)
8080 is a pretty common port; using nodelocaldns_ip:8080 still
prevents node processes or hostNetwork=true processes from binding to *:8080,
so switch to 9254 by default (the prometheus port is 9253)

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
2019-07-09 00:52:01 -07:00
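A hedged group_vars sketch of the relevant settings; nodelocaldns_health_port is an assumed variable name based on this commit, and 169.254.25.10 is the usual nodelocaldns_ip default.

```yaml
# group_vars/k8s-cluster/k8s-cluster.yml (sketch; variable names assumed from the commit)
nodelocaldns_ip: 169.254.25.10
nodelocaldns_health_port: 9254   # moved off 8080 to avoid clashing with hostNetwork pods
```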
rptaylor
324bc41097 Add support for Docker plugins (#4934)
* Add support for Docker plugins

* support multiple Docker plugins using looped include

* fix yamllint error
2019-07-08 06:44:35 -07:00
andreyshestakov
c81b443d93 Fix order of names in /etc/hosts (#4940)
Configure fqdn properly
2019-07-08 06:08:34 -07:00
Julian Tabel
dc16ab92f4 fix for calico with kdd datastore (#4922)
* fix for calico with kdd datastore

* remove AS number from daemonset

* revert changes to canal

* additional fixes for kdd datastore in calico
2019-07-08 12:20:03 +03:00
Timoses
53032a6695 Use kubespray-defaults in remove-node role (#4946) 2019-07-05 03:28:34 -07:00
Konrad Kurdej
d90a5f291b Using also uppercase proxy env variables (#4910) 2019-07-02 18:13:12 -07:00
Evans Tucker
3b7791501e Adding "-F /dev/null" to load null SSH config file. (#4933) 2019-07-02 01:53:08 -07:00
okamototk
f2b8a3614d Use K8s 1.15 (#4905)
* Use K8s 1.15

* Use Kubernetes 1.15 and use kubeadm.k8s.io/v1beta2 for
  InitConfiguration.
* bump to v1.15.0

* Remove k8s 1.13 checksums.

* Update README kubernetes version 1.15.0.

* Update metrics server 0.3.3 for k8s 1.15

* Remove less than k8s 1.14 related code

* Use kubeadm with --upload-certs instead of --experimental-upload-certs due to deprecation

* Update dnsautoscaler 1.6.0

* Skip certificateKey if it's not defined

* Add kubeadm-controlplane.v2beta2 for k8s 1.15 or later

* Support kubeadm control plane for k8s 1.15

* Update sonobuoy version 0.15.0 for k8s 1.15
2019-07-02 01:51:08 -07:00
Matthew Mosesohn
e89b47c7ee Add nginx stub metrics if health check enabled (#4938)
Change-Id: Iac90beef20e63fb4a539f91836231469c573f402
2019-07-01 13:38:37 -07:00
Matthew Mosesohn
2aa66eb12d Default to refreshing kubeadm etcd key (#4931)
Change-Id: Icc0176773b6d581c43647de433214079440d7321
2019-06-30 03:37:22 -07:00
okamototk
4c8b93e5b9 containerd support (#4664)
* Add limited containerd support

Containerd support for Ubuntu + Calico

* Added CRI-O support for ubuntu

* containerd support.

* Reset  containerd support.

* fix lint.

* implemented feedback

* Change task name cri xx instead of cri-o in reset task and timeout condition.

* set crictl to fixed version

* Use docker-ce's containerd.io package for containerd.

* Add check whether containerd is installable or not.

* Avoid stopping docker when using containerd and optimize retry for reset.

* Add config.toml.

* Fixed containerd for kubelet.env.

* Merge PR #4629

* Remove unused ubuntu variable for containerd

* Polish code for containerd and cri-o

* Refactoring cri socket configuration.

* Configurable conmon.

* Remove unused crictl/runc download

* Now crictl and runc are downloaded by the common crictl.yml.

* fixed yamllint error

* Fixed broken files caused by conflicts.

* Remove commented line in config.toml

* Remove readded v1.12.x version

* Fixed broken set_docker_image_facts

* Fix yamllint errors.

* Remove unused apt source

* Fix crictl could not be installed

* Add containerd config from skolekonov's PR #4601
2019-06-29 14:09:20 -07:00
Tony Fouchard
216631bf02 Repair kube_proxy_exclude_cidrs (#4909) 2019-06-28 00:39:37 -07:00
Erwan Miran
c7f3123e28 kubeadm_discovery_address should not contain proto (#4930) 2019-06-28 00:37:37 -07:00
Simon Lelievre
f599c2a691 add macvlan cni to kubespray (#4901)
* add macvlan cni to kubespray

* macvlan: lint yaml files and fix sample config file

* macvlan: add OWNERS file

* add macvlan to README

* macvlan : CI first shoot

* macvlan : CI add full masquerade

* delegate retrieving the pod cidr to the master only

* macvlan: add config for CI

* macvlan: add netchecker deployment
2019-06-28 00:35:38 -07:00
Matthew Mosesohn
bc7d1f36ea Remove wasteful extra gather facts step (#4928)
Ansible will gather facts on the preinstall/download role
automatically at the start of that play.
2019-06-27 06:25:21 -07:00
Matthew Mosesohn
80fa294a31 Disable redundant CI test cases (#4918)
Change-Id: I1991bca8368adc20832d2bb15644411653446b51
2019-06-27 04:49:22 -07:00
Matthew Mosesohn
465dfd68bc Fix empty kube_override_hostname in apiserver_sans (#4916)
kubernetes/master role defines this value as an empty string
when using a cloud provider, not undefined. The check was updated
accordingly.

Change-Id: I58dc31ef4fd568a717a6753eb89ca687933018ae
2019-06-25 08:00:37 -07:00
Matthew Mosesohn
73f45fbe94 Revert "Filter undefined SANs for apiserver cert (#4913)" (#4914)
This reverts commit d270678bda.
2019-06-25 06:56:00 -07:00
Matthew Mosesohn
d270678bda Filter undefined SANs for apiserver cert (#4913)
Change-Id: I37442fb095fb4217f67f74744ad07c1d5d8229ea
2019-06-25 05:54:36 -07:00
Andreas Krüger
de028814e5 Upgrade to etcd version 3.3.10 per 1.14 release notes. (#4898)
* Upgrade to etcd version 3.3.10 per 1.14 release notes.

* Update etcd binary checksums
2019-06-24 01:27:55 -07:00
andreyshestakov
b5406b752d Add kube_override_hostname to kubeadm certs. (#4903) 2019-06-23 23:19:56 -07:00
Matthew Mosesohn
6025981ceb Allow skip kubeadm image prep but install kubeadm (#4904)
Change-Id: I744e9a192cd863a1ce22fbd16d217c5dfb16750c
2019-06-23 23:17:56 -07:00
Matthew Mosesohn
4348e78b24 Enable kubeadm etcd mode (#4818)
* Enable kubeadm etcd mode

Uses cert commands from kubeadm experimental control plane to
enable non-master nodes to obtain etcd certs.

Related story: PROD-29434

Change-Id: Idafa1d223e5c6ceadf819b6f9c06adf4c4f74178

* Add validation checks and exclude calico kdd mode

Change-Id: Ic234f5e71261d33191376e70d438f9f6d35f358c

* Move etcd mode test to ubuntu flannel HA job

Change-Id: I9af6fd80a1bbb1692ab10d6da095eb368f6bc732

* rename etcd_mode to etcd_kubeadm_enabled

Change-Id: Ib196d6c8a52f48cae370b026f7687ff9ca69c172
2019-06-20 11:12:51 -07:00
Andreas Holmsten
e2f9adc2ff Add holmsten to reviewers (slack - gix) (#4896) 2019-06-20 00:38:48 -07:00
Tony Fouchard
f67a24499b Allow to specify feature_control in calico cni config (#4879)
* Allow to specify feature_control in calico cni config

* list length checking

* double check

* remove 2 conditions
2019-06-16 23:14:07 -07:00
Simon Lelievre
5c704552d8 multus | use last version (#4880) 2019-06-16 23:12:07 -07:00
D瓜哥
d83ea51101 fix markdown style (#4886) 2019-06-14 05:42:21 -07:00
Edwin Jacques
fa6027e8f0 fix references to sample files in setup.cfg (#4882) 2019-06-14 03:52:21 -07:00
Simon Lelievre
2849191e67 CNI plugins: use last version 0.8.1 (#4878)
* CNI plugins: bump version 0.8.1

* cni plugins : update checksums

* cni : update readme
2019-06-14 02:42:23 -07:00
Edwin Jacques
0559eec681 Update Dockerfile to use python3 (#4885) 2019-06-14 01:54:24 -07:00
刘旭
a3a7fe7c8e fix starting CoreDNS when initializing a secondary master (#4867) 2019-06-11 04:56:18 -07:00
Maxime Guyot
9b2d176617 Enable packet_ubuntu-contiv-sep (#4595) 2019-06-11 03:28:16 -07:00
Maxime Guyot
7a3547e4d1 Enable packet_*-kube-router jobs (#4594) 2019-06-11 02:58:18 -07:00
Scott Charron
e6fb686156 added the ability to define and deploy multiple address pools to metallb (#4757) 2019-06-11 00:20:21 -07:00
Johnny Halfmoon
5e80603bbb updated vagrant doc (#3719) 2019-06-10 23:58:14 -07:00
Andreas Krüger
c8d95a1586 Remove dnsPolicy from PSP (#4864) 2019-06-10 23:34:16 -07:00
Neven Miculinic
27a99e0a3f Added configurable min memory assertions (#4307) 2019-06-10 23:22:15 -07:00
Andreas Krüger
3cc351dff9 Require min version of Kubernetes (#4860)
* Require minimum version of Kubernetes

* Remove checksums for kubernetes version 1.12

* Add kube_version to precheck output and add min required version to README

* Fix merge

* Fix defaults

* Fix typo in precheck
2019-06-10 23:18:15 -07:00
Johnny Halfmoon
23c9071c30 Added file and container image caching (#4828)
* File and container image downloads are now cached locally, so that repeated vagrant up/down runs do not trigger downloading of those files. This is especially useful on laptops with kubernetes running locally on VMs. The total size of the cache, after an ansible run, is currently around 800MB, so bandwidth (=time) savings can be quite significant.

* When download_run_once is false, the default is still not to cache, but setting download_force_cache will still enable caching.

* The local cache location can be set with download_cache_dir and defaults to /tmp/kubernetes_cache

* A local docker instance is no longer required to cache docker images; images are cached to file. A local docker instance is still required, though, if you wish to download images on localhost.

* Fixed a FIXME, where the argument was that delegate_to doesn't play nice with omit. That is a correct observation and the fix is to use default(inventory_host) instead of default(omit). See ansible/ansible#26009

* Removed "Register docker images info" task from download_container and set_docker_image_facts because it was faulty and unused.

* Removed redundant when:download.{container,enabled,run_once} conditions from {sync,download}_container.yml

* All features of commit d6fd0d2aca by Timoses <timosesu@gmail.com>, merged May 1st 2019, are included in this patch. Not all code was included verbatim, but each feature of that commit was checked to be working in this patch. One notable change: The actual downloading of the kubeadm images was moved to {download,sync}_container, to enable caching.

Note 1: I considered splitting this patch, but most changes that are not directly related to caching, are a pleasant by-product of implementing the caching code, so splitting would be impractical.

Note 2: I have my doubts about the usefulness of the upload, download and upgrade tags in the download role. Must they remain or can they be removed? If anybody knows, then please speak up.
2019-06-10 11:21:07 -07:00
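The cache-related variables mentioned above, as a group_vars sketch; all three names come from the commit body, the file path is an example.

```yaml
# group_vars/all/download.yml (sketch)
download_run_once: false
# Force use of the local cache even when download_run_once is false
download_force_cache: true
download_cache_dir: /tmp/kubernetes_cache
```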
Maxime Guyot
14141ec137 Rebase only on PRs (#4861) 2019-06-10 11:17:05 -07:00
rptaylor
5bec2edaf7 remove namespace from ClusterRole (#4856) 2019-06-10 11:15:12 -07:00
Matthew Mosesohn
f504d0ea99 Remove invalid field dnsPolicy from podSecurityPolicy (#4863)
Change-Id: I02864011bf5fda5dbd35c7513c73875769036f87
2019-06-10 07:11:10 -07:00
Matthew Mosesohn
3b7797b1a1 Ensure haproxy and nginx reload when config changes (#4862)
Change-Id: Ia9a41e7b1cfcb1e6acb2dbae6eecc541dce25a74
2019-06-10 05:59:08 -07:00
Aivars Sterns
aa63eb6196 disable ansible group name warning (#4852) 2019-06-10 03:29:09 -07:00
Andreas Krüger
23aa3e4638 Remove GCE tests and CNCF funding ended (#4859) 2019-06-10 00:31:06 -07:00
Trond Hasle Amundsen
56ae3bfec2 Add support for IPv6 for Openstack in terraform.py via metadata (#4716)
* Add support for IPv6 for Openstack in terraform.py via metadata

* document terraform.py metadata variables for openstack
2019-06-09 23:01:05 -07:00
Sergey Nuzhdin
4d5c4a13cb Add missing checksums, update default k8s version to 1.14.3 (#4850)
This PR adds missing checksums for kubeadm and hyperkube and changes
default version to 1.14.3

Signed-off-by: Sergey Nuzhdin <ipaq.lw@gmail.com>
2019-06-09 11:49:05 -07:00
AlawnWong
69a8f91512 Update dns-autoscaler.yml.j2 (#4857)
Merge the two tolerations blocks, because the later tolerations key overrides the first one.
2019-06-09 11:39:04 -07:00
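For orientation, a hedged example of a single merged tolerations block in a manifest template; the keys are standard Kubernetes fields and the values below are illustrative, not the exact dns-autoscaler settings:

```yaml
# Illustrative only: one tolerations list instead of two keys that shadow each other
spec:
  template:
    spec:
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        - key: CriticalAddonsOnly
          operator: Exists
```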
Daniel Holbach
fa791cc344 update link to Weave Net Troubleshooting docs (#4853)
Signed-off-by: Daniel Holbach <daniel@weave.works>
2019-06-07 05:52:00 -07:00
Dani Comnea
456f743470 Fix etcd_events_cluster_enabled in CI due to wrong var used (#4849) 2019-06-06 07:10:17 -07:00
Frank Ritchie
ab6f0012cc Make local volume provisioner dir mode a variable (#4821)
* Make local volume provisioner dir mode a variable

I need to change this for Nagios monitoring. Others may
need to as well. Had to close previous commits, sorry for
the spam.

2019-06-06 04:36:14 -07:00
Alberto Murillo
4afbf51d32 kube-router: Set ownership of /opt/cni/bin/* to kube (#4825)
Task "kube-roter | Set cni directory permissions"
sets ownership of /opt/cni/bin to "kube"

Task "kube-router | Copy cni plugins"
copies the binaries from the archive setting the ownership
back to "root"

Fix "kube-roter" typo

Signed-off-by: Alberto Murillo <albertomurillosilva@gmail.com>
2019-06-06 04:34:13 -07:00
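A rough sketch of the kind of task involved, assuming Ansible's standard file module; the task name follows the commit message, the rest is illustrative:

```yaml
# Sketch: re-assert ownership of the CNI binary directory after copying plugins
- name: kube-router | Set cni directory permissions
  file:
    path: /opt/cni/bin
    state: directory
    owner: kube
    recurse: true
```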
Ivan Kukharchuk
d62684b617 Fixed missing meta for generic CNI network plugin (#4845) 2019-06-06 02:22:11 -07:00
mervynzhang
a8dfcbbfc7 Switch /root references to ansible_env.HOME (#4842)
* kube config dir for current/ansible become user

* remove extra /

* fix default value
2019-06-06 02:06:11 -07:00
Scott Charron
bbdc6210f5 use dpkg_selections module to hold docker-ce on Debian family hosts (#4820)
* use dpkg_selections module to hold docker-ce on Debian family hosts

* removed debian_docker.j2 template as it is no longer required
2019-06-06 01:16:13 -07:00
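A minimal sketch of holding a package with Ansible's dpkg_selections module, which is what this commit switches to; only the package name is taken from the commit message:

```yaml
# Sketch: keep apt from upgrading docker-ce on Debian-family hosts
- name: Hold docker-ce package
  dpkg_selections:
    name: docker-ce
    selection: hold
```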
Maxime Guyot
c7f6ed1495 Move moderator between part1 and part2 (#4844) 2019-06-06 01:00:17 -07:00
Andreas Krüger
818aa7aeb1 Set dnsPolicy to ClusterFirstWithHostNet when hostNetwork is true (#4843) 2019-06-05 03:17:55 -07:00
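For reference, the combination named in the title looks like this in a pod spec (standard Kubernetes fields, shown purely as an illustration):

```yaml
# Pods on the host network still resolve cluster services with this dnsPolicy
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
```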
Vladimir Kiselev
045acc724b fix relative paths for bastion host template (#4126)
This is a fix for #4124
2019-06-05 01:51:55 -07:00
Dani Comnea
d540560619 Preinstall fails on checking etcd group length (#4839) 2019-06-05 01:37:53 -07:00
Andreas Krüger
797bfd85b0 Only create kubeadm compat cert dir link if it does not exist (#4840) 2019-06-05 01:27:53 -07:00
Sergey Nuzhdin
07cb8ebef7 Add support for arm images for hyperkube, kubeadm and cni_binary (#4261)
* Add support for arm images for hyperkube, kubeadm and cni_binary

* Add dummy etcd checksum for arm

This commit adds a dummy etcd checksum for arm to avoid a "no attribute" error
during setup.

* Add etcd host assert check

* Add 1.13.4 checksums of kubeadm and hyperkube for arm

* Update checksums of kubeadm and hyperkube for arm

* Add dummy checksums for calicoctl_binary_checksums dict

* disable gather_facts because it causes tests to fail

* Remove architecture check for etcd, due to unable to run tests
2019-06-05 00:05:55 -07:00
Toni Pokki
54416cabfd prefer_udp for upstream dns servers (#4810) 2019-06-04 23:27:55 -07:00
Matthew Mosesohn
3617ae31f6 Optionally skip predownload of kubeadm images (#4832) 2019-06-04 04:35:02 -07:00
Maxime Guyot
4f05d801c3 Use short cluster_name for TF CI (#4835) 2019-06-04 04:25:00 -07:00
Maxime Guyot
956afcb33f Move tf-ovh to part2 (#4834) 2019-06-04 01:39:07 -07:00
Matthew Mosesohn
6347419233 Avoid duplicating nameservers (#4833) 2019-06-04 00:13:02 -07:00
Rodrigo Bermúdez Schettino
0c7a50fe1e README: Make usage section clearer (#4034)
The long option --become was used in the example, but the comment describing it used the short option -b.
Use the same option in the description and the example to avoid confusion.
2019-05-31 12:48:28 -07:00
Andreas Krüger
7423932510 Add ready plugin for CoreDNS (#4817) 2019-05-28 06:47:56 -07:00
Andreas Krüger
b41530ba5d Add missing extraArgs to kubeadm-config (#4814) 2019-05-28 03:57:52 -07:00
Maxime Guyot
29e916508c Update roadmap (#4811) 2019-05-28 02:05:54 -07:00
Maxime Guyot
b45f3f0004 Add tf-ovh_coreos CI job (#4763) 2019-05-28 01:51:53 -07:00
Dani Comnea
2a5721b4d4 Change CentOS CRI-O repo from developer repo to public one (#4807) 2019-05-27 05:33:51 -07:00
Maxime Guyot
e30a703c8e Add Kubernetes conformance tests (#4614) 2019-05-27 05:31:52 -07:00
Vitaliy Dmitriev
333f1a4a40 kubeadm join path fixed for RH linux (#4798) 2019-05-27 01:49:51 -07:00
Geert-Johan Riemer
84b278021a Update openstack.yml (#4795)
Fix comment style
2019-05-25 05:19:27 -07:00
Andreas Krüger
1e470b0473 Fix certificate-key param for kubeadm init (#4789)
* Fix certificate-key param for kubeadm init

* Fix yamllint error
2019-05-22 02:06:11 -07:00
André R. de Miranda
0ef3a7914c Added pod psp in Rancher Local Path Provisioner (#4385)
* Added pod psp in Rancher Local Path Provisioner

Added pod security policy (psp) in Rancher Local Path Provisioner.

Signed-off-by: André R. de Miranda <andre@miranda.work>

* Apply psp for Rancher Local Path Provisioner only when local_path_provisioner_namespace is not kube-system and also reorganized the templates
2019-05-22 00:16:08 -07:00
bobahspb
a3fff1e438 cordon all deleted nodes before drain (#4756)
Kubespray waits for each drain to finish before starting the next one.
Draining nodes one after another seems better than in parallel, because resource availability should be checked each time.
This way, however, there is one additional problem: pods may be restarted on nodes that are drained a little later.
A fast cordon of all nodes before the heavy drain seems like an easy solution.
2019-05-21 23:36:05 -07:00
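A hedged sketch of the idea: cordon every node quickly first, then run the slow drains one by one. The task below is illustrative, not the exact Kubespray implementation:

```yaml
# Sketch: mark all nodes to be removed as unschedulable before draining them serially
- name: Cordon nodes before drain
  command: "kubectl cordon {{ item }}"
  delegate_to: "{{ groups['kube-master'][0] }}"
  run_once: true
  loop: "{{ ansible_play_hosts }}"
```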
André R. de Miranda
4bc204925a Error in nginx when starting registry-proxy (#4785)
Error starting nginx because requiredDropCapabilities drops all capabilities.

Nginx requires the following capabilities:
- CHOWN
- SETGID
- SETUID

Signed-off-by: André R. de Miranda <andre@miranda.work>
2019-05-20 11:27:15 -07:00
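A hedged illustration of the relevant PodSecurityPolicy field: the three capabilities listed above must not be force-dropped, for example by allowing them explicitly (standard PSP API fields, not the exact Kubespray template):

```yaml
# Illustrative PSP fragment: permit the capabilities nginx needs to start
spec:
  allowedCapabilities:
    - CHOWN
    - SETGID
    - SETUID
```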
Jacopo Secchiero
5d9946184a Add ignore_assert_errors to "kube-master, ... (#4779)
... kube-node or etcd is empty" task
as the assert must be ignored if ignore_assert_errors is true
2019-05-20 11:25:14 -07:00
MarkusTeufelberger
5ba169a612 Ignore 2 ansible-lint rules (E204, E701) on purpose. (#4744) 2019-05-20 11:23:14 -07:00
marcstreeter
872b37f751 updated pinning to prevent breaking changes (#4783)
* updated ansible pinning to prevent more possibilities of breaking changes

* more exact pinning of ansible version

* more exact pinning of ansible version and also all the rest

* added testing requirements.txt pinning settings

* removed boto from testing requirements.txt
2019-05-20 11:21:14 -07:00
Mateus Caruccio
8485136f9a var node_labels as string (#4764) 2019-05-19 12:31:10 -07:00
Maxime Guyot
ff1bc739f1 Change default for kubelet_flexvolumes_plugins_dir (#4752) 2019-05-19 12:29:10 -07:00
MarioUhrik
594a0e7f1b Fix invalid YAML formatting within addons.yml (#4753) 2019-05-16 02:05:49 -07:00
Florent Monbillard
8e28ba38d2 Add Load Balancer IP to API servers SANs (#4775)
- Add loadbalancer_apiserver.address to apiserver_sans
2019-05-16 01:23:42 -07:00
MarkusTeufelberger
73c2ff17dd Fix Ansible-lint error [E502] (#4743) 2019-05-16 00:27:43 -07:00
Timoses
13f225e6ae Only pull images for destined host groups (#4735)
Without this, pulls are considered for all host groups,
even if they are not targeted by the download's
`groups` list. Hence, a download/sync is triggered
even though the host does not require the image.
2019-05-16 00:25:48 -07:00
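For orientation, a hedged sketch of the download-item shape this refers to, where the `groups` list limits which hosts pull the image; key names are based on Kubespray's downloads dictionary and may differ slightly by version:

```yaml
# Sketch of a download entry restricted to one host group
downloads:
  netcheck_server:
    enabled: "{{ deploy_netchecker }}"
    container: true
    repo: "{{ netcheck_server_image_repo }}"
    tag: "{{ netcheck_server_image_tag }}"
    groups:
      - k8s-cluster
```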
Maxime Guyot
3f62492a15 Use standard testcases job for TF CI (#4732) 2019-05-14 02:01:14 -07:00
Maxime Guyot
5e3bd2dff1 Use common playbook to wait for SSH (#4734) 2019-05-10 01:25:59 -07:00
Robert Neumann
787a9c74fa Terraform wait for floating IP instance has been associated (#4321)
* Add wait for floating ip associate with instance

* Terraform formatting fix

* Sort Open Telekom Cloud in compatible list
2019-05-09 02:16:50 -07:00
Aleksey Kasatkin
14749df6f3 Fix "netchecker-server" ClusterRole (#4730)
* Add sha256 hashes for calicoctl v3.6.1

Hashes are added to calicoctl_binary_checksums for both amd and arm platforms.

* Add rules for "network-checker.ext" resource to "netchecker-server" ClusterRole

So that it could access the resource after it is created.

Corresponding issues:
https://github.com/Mirantis/k8s-netchecker-server/issues/125
https://github.com/kubernetes-sigs/kubespray/issues/3281
2019-05-09 01:30:49 -07:00
Sandro Modarelli
2db2898112 Fixed runc path in runtime for RedHat os family (#4731) 2019-05-09 01:28:48 -07:00
Maxime Guyot
3776000fc4 Run TF tests from repo root (#4723) 2019-05-08 23:40:49 -07:00
Maxime Guyot
f0572e59e7 Always do OVH CI (#4722) 2019-05-08 23:38:53 -07:00
Andreas Krüger
6217184c7f Merge pull request #4720 from MarkusTeufelberger/patch-1
Update default CentOS version on Azure
2019-05-09 08:38:44 +02:00
Andreas Krüger
044dcbaed0 Add Kubelet config, remove deprecated flags and fix minor bugs (#4724)
* Add kubelet config

* Change kubelet_authorization_mode_webhook to true

* Fix lint

* Sync env file

* Refactor the kubernetes node folder

* Remove deprecated flag and fix lint
2019-05-08 13:38:36 -07:00
Andreas Krüger
8a5eae94ea Minor cleanups of CoreDNS issues and CI job (#4719)
* Minor cleanups

* Add comment in docs that nodelocaldns cache is enabled by default
2019-05-07 13:20:36 -07:00
Andreas Krüger
bf3c6aeed1 Add kube anon auth settings to kubeadm config templates (#4713)
* Disable kube_api_anonymous_auth by default to secure the setup

* Disable metrics-server in addons. Health endpoint is slow and unstable

* Fix anonymous-auth missing in configuration

* Cleanup a bit

* Fix kube anon auth
2019-05-07 12:52:34 -07:00
MarkusTeufelberger
f3fbf995ca Update default CentOS version on Azure 2019-05-07 13:37:42 +02:00
Dmitri Rubinstein
03bded2b6b Fix adding output of kubeadm to the admin.conf downloaded to the artifacts directory (#4696)
Fixes issue https://github.com/kubernetes-sigs/kubespray/issues/4695
2019-05-06 03:29:36 -07:00
Manuel Cintron
d5c0829d61 Removing unnecessary httplib2 install (#4708) 2019-05-03 17:55:38 -07:00
Alex Barcelo
00369303de Fixing msg parameter for debug module (#4702)
According to [`debug` module documentation](https://docs.ansible.com/ansible/latest/modules/debug_module.html?highlight=msg), the correct parameter name is `msg`.

With the previous `message` parameter name I was getting FAILED messages while Ansible was trying to debug previous FAILED tasks.
2019-05-03 12:21:42 -07:00
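A minimal example of the corrected parameter; the registered variable name is hypothetical:

```yaml
# The debug module accepts `msg`, not `message`
- name: Show result of the previous task
  debug:
    msg: "{{ previous_task_result }}"
```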
okamototk
1f1479c0a7 Update ingress nginx 0.24.1. (#4691) 2019-05-03 12:19:39 -07:00
MarkusTeufelberger
e67f848abc ansible-lint: add spaces around variables [E206] (#4699) 2019-05-02 14:24:21 -07:00
MarkusTeufelberger
560f50d3cd Add support for http(s)_proxy to CoreOS, Fedora and OpenSUSE (#4669)
* Add support for http(s)_proxy to CoreOS and Fedora

* fix opensuse proxy support

* Fix CoreOS proxy support

* update documentation
2019-05-02 12:28:22 -07:00
Maxime Guyot
3f45122d0d Refactor Terraform CI (#4654) 2019-05-02 12:26:19 -07:00
Stas
50bdaa573c Apply etcd_extra_vars to etcd-events.env as well. (#4219)
This change ensures that the etcd_extra_vars variable applies
to the events etcd as well.
2019-05-02 12:24:27 -07:00
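A hedged example of the variable in question; the environment keys shown are generic etcd settings chosen for illustration, not values prescribed by the commit:

```yaml
# group_vars sketch: extra environment entries now rendered into etcd-events.env as well
etcd_extra_vars:
  ETCD_ELECTION_TIMEOUT: "5000"
  ETCD_HEARTBEAT_INTERVAL: "250"
```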
Maxime Guyot
24b6698cc9 Disable CI deploys on master (#4690) 2019-05-02 12:20:20 -07:00
Andreas Krüger
73885d3b9e Validate Vagrantfile in CI unit-tests (#4642)
* Validate vagrant file on CI

* Install vagrant

* Install vagrant

* Install vagrant

* Install vagrant

* Install vagrant

* Install vagrant

* Test vagrant validate
2019-05-02 11:24:21 -07:00
Maxime Guyot
f29387316f Fix ansible-lint 602 (#4688) 2019-05-01 23:42:17 -07:00
Timoses
d6fd0d2aca Enable delegating all downloads (binaries, images, kubeadm images) (#4420)
* Download to delegate and sync files when download_run_once

* Fail on error after saving container image

* Do not set changed status when downloaded container was up to date

* Only sync containers when they are actually required

Previously, non-required images (pull_required=false as
image existed on target host) were synced to the target
hosts. This failed as the image was not downloaded to
the download_delegate and hence was not available for
syncing.

* Sync containers when only missing on some hosts

* Consider images with multiple repo tags

* Enable kubeadm images pull/syncing with download_delegate

* Use kubeadm images list to pull/sync

'kubeadm config images pull' is replaced by collecting the images
list with 'kubeadm config images list' and using the commonly
used method of pull/syncing the images.

* Ensure containers are downloaded and synced for all hosts

* Fix download/syncing when download_delegate is a kubernetes host
2019-05-01 01:10:56 -07:00
MarkusTeufelberger
e814da1eec ansible-lint: Don't use the local_action module [E504] (#4666) 2019-05-01 00:38:55 -07:00
Andreas Krüger
e029a09345 Update CI to use 2.10.0 release (#4682)
* Update CI to use 2.10.0 release

* Add rsync as it's required to use synchronize
2019-04-30 07:29:37 -07:00
518 changed files with 10342 additions and 4477 deletions


@@ -3,14 +3,25 @@ parseable: true
skip_list: skip_list:
# see https://docs.ansible.com/ansible-lint/rules/default_rules.html for a list of all default rules # see https://docs.ansible.com/ansible-lint/rules/default_rules.html for a list of all default rules
# The following rules throw errors. # The following rules throw errors.
# These either still need to be corrected in the repository and the rules re-enabled or they are skipped on purpose. # These either still need to be corrected in the repository and the rules re-enabled or documented why they are skipped on purpose.
- '204'
- '206'
- '301' - '301'
- '302'
- '303'
- '305' - '305'
- '306' - '306'
- '404' - '404'
- '502'
- '503' - '503'
- '504'
# These rules are intentionally skipped:
#
# [E204]: "Lines should be no longer than 160 chars"
# This could be re-enabled with a major rewrite in the future.
# For now, there's not enough value gain from strictly limiting line length.
# (Disabled in May 2019)
- '204'
# [E701]: "meta/main.yml should contain relevant info"
# Roles in Kubespray are not intended to be used/imported by Ansible Galaxy.
# While it can be useful to have these metadata available, they are also available in the existing documentation.
# (Disabled in May 2019)
- '701' - '701'


@@ -1,8 +1,8 @@
--- ---
stages: stages:
- unit-tests - unit-tests
- moderator
- deploy-part1 - deploy-part1
- moderator
- deploy-part2 - deploy-part2
- deploy-gce - deploy-gce
- deploy-special - deploy-special
@@ -30,14 +30,15 @@ variables:
before_script: before_script:
- ./tests/scripts/rebase.sh - ./tests/scripts/rebase.sh
- /usr/bin/python -m pip install -r tests/requirements.txt - update-alternatives --install /usr/bin/python python /usr/bin/python3 1
- python -m pip install -r tests/requirements.txt
- mkdir -p /.ssh - mkdir -p /.ssh
.job: &job .job: &job
tags: tags:
- packet - packet
variables: variables:
KUBESPRAY_VERSION: v2.9.0 KUBESPRAY_VERSION: v2.11.2
image: quay.io/kubespray/kubespray:$KUBESPRAY_VERSION image: quay.io/kubespray/kubespray:$KUBESPRAY_VERSION
.testcases: &testcases .testcases: &testcases
@@ -45,6 +46,7 @@ before_script:
services: services:
- docker:dind - docker:dind
before_script: before_script:
- update-alternatives --install /usr/bin/python python /usr/bin/python3 1
- ./tests/scripts/rebase.sh - ./tests/scripts/rebase.sh
- ./tests/scripts/testcases_prepare.sh - ./tests/scripts/testcases_prepare.sh
script: script:
@@ -60,11 +62,11 @@ ci-authorized:
script: script:
- /bin/sh scripts/premoderator.sh - /bin/sh scripts/premoderator.sh
except: ['triggers', 'master'] except: ['triggers', 'master']
# Disable ci moderator
only: []
include: include:
- .gitlab-ci/lint.yml - .gitlab-ci/lint.yml
- .gitlab-ci/shellcheck.yml - .gitlab-ci/shellcheck.yml
- .gitlab-ci/gce.yml
- .gitlab-ci/digital-ocean.yml
- .gitlab-ci/terraform.yml - .gitlab-ci/terraform.yml
- .gitlab-ci/packet.yml - .gitlab-ci/packet.yml


@@ -1,19 +0,0 @@
---
.do_variables: &do_variables
PRIVATE_KEY: $DO_PRIVATE_KEY
CI_PLATFORM: "do"
SSH_USER: root
.do: &do
extends: .testcases
tags:
- do
do_ubuntu-canal-ha:
stage: deploy-part2
extends: .do
variables:
<<: *do_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]


@@ -20,6 +20,8 @@
<<: *gce_variables <<: *gce_variables
tags: tags:
- gce - gce
except: ['triggers']
only: [/^pr-.*$/]
.centos_weave_kubeadm_variables: &centos_weave_kubeadm_variables .centos_weave_kubeadm_variables: &centos_weave_kubeadm_variables
# stage: deploy-part1 # stage: deploy-part1
@@ -36,8 +38,6 @@ gce_ubuntu18-flannel-aio:
stage: deploy-part1 stage: deploy-part1
<<: *gce <<: *gce
when: manual when: manual
except: ['triggers']
only: [/^pr-.*$/]
### PR JOBS PART2 ### PR JOBS PART2
@@ -45,15 +45,11 @@ gce_coreos-calico-aio:
stage: deploy-gce stage: deploy-gce
<<: *gce <<: *gce
when: on_success when: on_success
except: ['triggers']
only: [/^pr-.*$/]
gce_centos7-flannel-addons: gce_centos7-flannel-addons:
stage: deploy-gce stage: deploy-gce
<<: *gce <<: *gce
when: manual when: manual
except: ['triggers']
only: [/^pr-.*$/]
### MANUAL JOBS ### MANUAL JOBS
@@ -64,36 +60,42 @@ gce_centos-weave-kubeadm-sep:
<<: *centos_weave_kubeadm_variables <<: *centos_weave_kubeadm_variables
when: on_success when: on_success
only: ['triggers'] only: ['triggers']
except: []
gce_ubuntu-weave-sep: gce_ubuntu-weave-sep:
stage: deploy-gce stage: deploy-gce
<<: *gce <<: *gce
when: manual when: manual
only: ['triggers'] only: ['triggers']
except: []
gce_coreos-calico-sep-triggers: gce_coreos-calico-sep-triggers:
stage: deploy-gce stage: deploy-gce
<<: *gce <<: *gce
when: on_success when: on_success
only: ['triggers'] only: ['triggers']
except: []
gce_ubuntu-canal-ha-triggers: gce_ubuntu-canal-ha-triggers:
stage: deploy-special stage: deploy-special
<<: *gce <<: *gce
when: on_success when: on_success
only: ['triggers'] only: ['triggers']
except: []
gce_centos7-flannel-addons-triggers: gce_centos7-flannel-addons-triggers:
stage: deploy-gce stage: deploy-gce
<<: *gce <<: *gce
when: on_success when: on_success
only: ['triggers'] only: ['triggers']
except: []
gce_ubuntu-weave-sep-triggers: gce_ubuntu-weave-sep-triggers:
stage: deploy-gce stage: deploy-gce
<<: *gce <<: *gce
when: on_success when: on_success
only: ['triggers'] only: ['triggers']
except: []
# More builds for PRs/merges (manual) and triggers (auto) # More builds for PRs/merges (manual) and triggers (auto)
@@ -102,27 +104,23 @@ gce_ubuntu-canal-ha:
stage: deploy-special stage: deploy-special
<<: *gce <<: *gce
when: manual when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_ubuntu-canal-kubeadm: gce_ubuntu-canal-kubeadm:
stage: deploy-gce stage: deploy-gce
<<: *gce <<: *gce
when: manual when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_ubuntu-canal-kubeadm-triggers: gce_ubuntu-canal-kubeadm-triggers:
stage: deploy-gce stage: deploy-gce
<<: *gce <<: *gce
when: on_success when: on_success
only: ['triggers'] only: ['triggers']
except: []
gce_ubuntu-flannel-ha: gce_ubuntu-flannel-ha:
stage: deploy-gce stage: deploy-gce
<<: *gce <<: *gce
when: manual when: manual
except: ['triggers']
gce_centos-weave-kubeadm-triggers: gce_centos-weave-kubeadm-triggers:
stage: deploy-gce stage: deploy-gce
@@ -131,99 +129,87 @@ gce_centos-weave-kubeadm-triggers:
<<: *centos_weave_kubeadm_variables <<: *centos_weave_kubeadm_variables
when: on_success when: on_success
only: ['triggers'] only: ['triggers']
except: []
gce_ubuntu-contiv-sep: gce_ubuntu-contiv-sep:
stage: deploy-special stage: deploy-special
<<: *gce <<: *gce
when: manual when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_coreos-cilium: gce_coreos-cilium:
stage: deploy-special stage: deploy-special
<<: *gce <<: *gce
when: manual when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_ubuntu18-cilium-sep: gce_ubuntu18-cilium-sep:
stage: deploy-special stage: deploy-special
<<: *gce <<: *gce
when: manual when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_rhel7-weave: gce_rhel7-weave:
stage: deploy-gce stage: deploy-gce
<<: *gce <<: *gce
when: manual when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_rhel7-weave-triggers: gce_rhel7-weave-triggers:
stage: deploy-gce stage: deploy-gce
<<: *gce <<: *gce
when: on_success when: on_success
only: ['triggers'] only: ['triggers']
except: []
gce_debian9-calico-upgrade: gce_debian9-calico-upgrade:
stage: deploy-gce stage: deploy-gce
<<: *gce <<: *gce
when: manual when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_debian9-calico-triggers: gce_debian9-calico-triggers:
stage: deploy-gce stage: deploy-gce
<<: *gce <<: *gce
when: on_success when: on_success
only: ['triggers'] only: ['triggers']
except: []
gce_coreos-canal: gce_coreos-canal:
stage: deploy-gce stage: deploy-gce
<<: *gce <<: *gce
when: manual when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_coreos-canal-triggers: gce_coreos-canal-triggers:
stage: deploy-gce stage: deploy-gce
<<: *gce <<: *gce
when: on_success when: on_success
only: ['triggers'] only: ['triggers']
except: []
gce_rhel7-canal-sep: gce_rhel7-canal-sep:
stage: deploy-special stage: deploy-special
<<: *gce <<: *gce
when: manual when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_rhel7-canal-sep-triggers: gce_rhel7-canal-sep-triggers:
stage: deploy-gce stage: deploy-gce
<<: *gce <<: *gce
when: on_success when: on_success
only: ['triggers'] only: ['triggers']
except: []
gce_centos7-calico-ha: gce_centos7-calico-ha:
stage: deploy-special stage: deploy-special
<<: *gce <<: *gce
when: manual when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_centos7-calico-ha-triggers: gce_centos7-calico-ha-triggers:
stage: deploy-gce stage: deploy-gce
<<: *gce <<: *gce
when: on_success when: on_success
only: ['triggers'] only: ['triggers']
except: []
gce_centos7-kube-router: gce_centos7-kube-router:
stage: deploy-special stage: deploy-special
<<: *gce <<: *gce
when: manual when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_centos7-multus-calico: gce_centos7-multus-calico:
stage: deploy-gce stage: deploy-gce
@@ -231,6 +217,11 @@ gce_centos7-multus-calico:
variables: variables:
<<: *centos7_multus_calico_variables <<: *centos7_multus_calico_variables
when: manual when: manual
gce_oracle-canal:
stage: deploy-gce
<<: *gce
when: manual
except: ['triggers'] except: ['triggers']
only: ['master', /^pr-.*$/] only: ['master', /^pr-.*$/]
@@ -238,27 +229,19 @@ gce_opensuse-canal:
stage: deploy-gce stage: deploy-gce
<<: *gce <<: *gce
when: manual when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
# no triggers yet https://github.com/kubernetes-incubator/kargo/issues/613 # no triggers yet https://github.com/kubernetes-incubator/kargo/issues/613
gce_coreos-alpha-weave-ha: gce_coreos-alpha-weave-ha:
stage: deploy-special stage: deploy-special
<<: *gce <<: *gce
when: manual when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_coreos-kube-router: gce_coreos-kube-router:
stage: deploy-special stage: deploy-special
<<: *gce <<: *gce
when: manual when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_ubuntu-kube-router-sep: gce_ubuntu-kube-router-sep:
stage: deploy-special stage: deploy-special
<<: *gce <<: *gce
when: manual when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]


@@ -2,10 +2,21 @@
yamllint: yamllint:
extends: .job extends: .job
stage: unit-tests stage: unit-tests
variables:
LANG: C.UTF-8
script: script:
- yamllint --strict . - yamllint --strict .
except: ['triggers', 'master'] except: ['triggers', 'master']
vagrant-validate:
extends: .job
stage: unit-tests
script:
- curl -sL https://releases.hashicorp.com/vagrant/2.2.4/vagrant_2.2.4_x86_64.deb -o /tmp/vagrant_2.2.4_x86_64.deb
- dpkg -i /tmp/vagrant_2.2.4_x86_64.deb
- vagrant validate --ignore-provider
except: ['triggers', 'master']
ansible-lint: ansible-lint:
extends: .job extends: .job
stage: unit-tests stage: unit-tests
@@ -33,8 +44,20 @@ syntax-check:
tox-inventory-builder: tox-inventory-builder:
stage: unit-tests stage: unit-tests
extends: .job extends: .job
before_script:
- ./tests/scripts/rebase.sh
- apt-get update && apt-get install -y python3-pip
- update-alternatives --install /usr/bin/python python /usr/bin/python3 10
- python -m pip install -r tests/requirements.txt
script: script:
- pip install tox - pip3 install tox
- cd contrib/inventory_builder && tox - cd contrib/inventory_builder && tox
when: manual
except: ['triggers', 'master'] except: ['triggers', 'master']
markdownlint:
stage: unit-tests
image: node
before_script:
- npm install -g markdownlint-cli
script:
- markdownlint README.md docs --ignore docs/_sidebar.md


@@ -1,123 +1,126 @@
--- ---
.packet_variables: &packet_variables
CI_PLATFORM: "packet"
SSH_USER: "kubespray"
.packet: &packet .packet: &packet
extends: .testcases extends: .testcases
variables: variables:
<<: *packet_variables CI_PLATFORM: "packet"
SSH_USER: "kubespray"
tags: tags:
- packet - packet
only: [/^pr-.*$/]
.test-upgrade: &test-upgrade except: ['triggers']
variables:
UPGRADE_TEST: "graceful"
packet_ubuntu18-calico-aio: packet_ubuntu18-calico-aio:
stage: deploy-part1 stage: deploy-part1
<<: *packet extends: .packet
when: on_success when: on_success
except: ['triggers']
only: ['master', /^pr-.*$/]
# ### PR JOBS PART2 # ### PR JOBS PART2
packet_centos7-flannel-addons: packet_centos7-flannel-addons:
extends: .packet
stage: deploy-part2 stage: deploy-part2
<<: *packet
when: on_success when: on_success
except: ['triggers']
only: [/^pr-.*$/]
# ### MANUAL JOBS # ### MANUAL JOBS
packet_centos-weave-kubeadm-sep: packet_centos-weave-kubeadm-sep:
stage: deploy-part2 stage: deploy-part2
<<: *packet extends: .packet
when: on_success when: on_success
only: ['triggers'] variables:
UPGRADE_TEST: basic
packet_ubuntu-weave-sep: packet_ubuntu-weave-sep:
stage: deploy-part2 stage: deploy-part2
<<: *packet extends: .packet
when: manual when: manual
only: ['triggers']
# # More builds for PRs/merges (manual) and triggers (auto) # # More builds for PRs/merges (manual) and triggers (auto)
packet_ubuntu-canal-ha: packet_ubuntu-canal-ha:
stage: deploy-special stage: deploy-special
<<: *packet extends: .packet
when: manual when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
packet_ubuntu-canal-kubeadm: packet_ubuntu-canal-kubeadm:
stage: deploy-part2 stage: deploy-part2
<<: *packet extends: .packet
when: on_success when: on_success
except: ['triggers']
only: ['master', /^pr-.*$/]
packet_ubuntu-flannel-ha: packet_ubuntu-flannel-ha:
stage: deploy-part2 stage: deploy-part2
<<: *packet extends: .packet
when: on_success
except: ['triggers']
packet_ubuntu-contiv-sep:
stage: deploy-special
<<: *packet
when: manual when: manual
except: ['triggers']
only: ['master', /^pr-.*$/] # Contiv does not work in k8s v1.16
# packet_ubuntu-contiv-sep:
# stage: deploy-part2
# extends: .packet
# when: on_success
packet_ubuntu18-cilium-sep: packet_ubuntu18-cilium-sep:
stage: deploy-special stage: deploy-special
<<: *packet extends: .packet
when: manual
packet_ubuntu18-flannel-containerd:
stage: deploy-part2
extends: .packet
when: manual
packet_debian9-macvlan-sep:
stage: deploy-part2
extends: .packet
when: manual when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
packet_debian9-calico-upgrade: packet_debian9-calico-upgrade:
stage: deploy-part2 stage: deploy-part2
<<: *packet extends: .packet
when: on_success
variables:
UPGRADE_TEST: graceful
packet_debian10-containerd:
stage: deploy-part2
extends: .packet
when: on_success when: on_success
except: ['triggers']
only: ['master', /^pr-.*$/]
packet_centos7-calico-ha: packet_centos7-calico-ha:
stage: deploy-part2 stage: deploy-part2
<<: *packet extends: .packet
when: manual
packet_centos7-kube-ovn:
stage: deploy-part2
extends: .packet
when: on_success when: on_success
except: ['triggers']
only: ['master', /^pr-.*$/]
packet_centos7-kube-router: packet_centos7-kube-router:
stage: deploy-special stage: deploy-part2
<<: *packet extends: .packet
when: manual when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
packet_centos7-multus-calico: packet_centos7-multus-calico:
stage: deploy-part2 stage: deploy-part2
<<: *packet extends: .packet
when: manual when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
packet_opensuse-canal: packet_opensuse-canal:
stage: deploy-part2 stage: deploy-part2
<<: *packet extends: .packet
when: manual
packet_oracle-7-canal:
stage: deploy-part2
extends: .packet
when: manual when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
packet_ubuntu-kube-router-sep: packet_ubuntu-kube-router-sep:
stage: deploy-special stage: deploy-part2
<<: *packet extends: .packet
when: manual
packet_amazon-linux-2-aio:
stage: deploy-part2
extends: .packet
when: manual when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]


@@ -3,96 +3,95 @@
.terraform_install: .terraform_install:
extends: .job extends: .job
before_script: before_script:
- update-alternatives --install /usr/bin/python python /usr/bin/python3 1
- ./tests/scripts/rebase.sh - ./tests/scripts/rebase.sh
- ./tests/scripts/testcases_prepare.sh
- ./tests/scripts/terraform_install.sh
# Set Ansible config # Set Ansible config
- cp ansible.cfg ~/.ansible.cfg - cp ansible.cfg ~/.ansible.cfg
# Install Terraform
- apt-get install -y unzip
- curl https://releases.hashicorp.com/terraform/${TF_VERSION}/terraform_${TF_VERSION}_linux_amd64.zip > /tmp/terraform.zip
- unzip /tmp/terraform.zip && mv ./terraform /usr/local/bin/ && terraform --version
# Prepare inventory # Prepare inventory
- cp -LRp contrib/terraform/$PROVIDER/sample-inventory inventory/$CLUSTER - cp contrib/terraform/$PROVIDER/sample-inventory/cluster.tfvars .
- cd inventory/$CLUSTER - ln -s contrib/terraform/$PROVIDER/hosts
- ln -s ../../contrib/terraform/$PROVIDER/hosts - terraform init contrib/terraform/$PROVIDER
- terraform init ../../contrib/terraform/$PROVIDER
# Copy SSH keypair # Copy SSH keypair
- mkdir -p ~/.ssh - mkdir -p ~/.ssh
- echo "$PACKET_PRIVATE_KEY" | base64 -d > ~/.ssh/id_rsa - echo "$PACKET_PRIVATE_KEY" | base64 -d > ~/.ssh/id_rsa
- chmod 400 ~/.ssh/id_rsa - chmod 400 ~/.ssh/id_rsa
- echo "$PACKET_PUBLIC_KEY" | base64 -d > ~/.ssh/id_rsa.pub - echo "$PACKET_PUBLIC_KEY" | base64 -d > ~/.ssh/id_rsa.pub
only: ['master', /^pr-.*$/]
.terraform_validate: .terraform_validate:
extends: .terraform_install extends: .terraform_install
stage: unit-tests stage: unit-tests
only: ['master', /^pr-.*$/]
script: script:
- terraform validate -var-file=cluster.tf ../../contrib/terraform/$PROVIDER - terraform validate -var-file=cluster.tfvars contrib/terraform/$PROVIDER
- terraform fmt -check -diff ../../contrib/terraform/$PROVIDER - terraform fmt -check -diff contrib/terraform/$PROVIDER
.terraform_apply: .terraform_apply:
extends: .terraform_install extends: .terraform_install
stage: deploy-part2 stage: deploy-part2
when: manual when: manual
only: [/^pr-.*$/]
variables: variables:
ANSIBLE_INVENTORY_UNPARSED_FAILED: "true" ANSIBLE_INVENTORY_UNPARSED_FAILED: "true"
ANSIBLE_INVENTORY: hosts
CI_PLATFORM: tf
TF_VAR_ssh_user: $SSH_USER
TF_VAR_cluster_name: $CI_JOB_ID
script: script:
- terraform apply -auto-approve ../../contrib/terraform/$PROVIDER - tests/scripts/testcases_run.sh
- ansible-playbook -i hosts ../../cluster.yml --become
after_script: after_script:
# Cleanup regardless of exit code # Cleanup regardless of exit code
- cd inventory/$CLUSTER - ./tests/scripts/testcases_cleanup.sh
- terraform destroy -auto-approve ../../contrib/terraform/$PROVIDER
tf-validate-openstack: tf-validate-openstack:
extends: .terraform_validate extends: .terraform_validate
variables: variables:
TF_VERSION: 0.11.11 TF_VERSION: 0.12.12
PROVIDER: openstack PROVIDER: openstack
CLUSTER: $CI_COMMIT_REF_NAME CLUSTER: $CI_COMMIT_REF_NAME
tf-validate-packet: tf-validate-packet:
extends: .terraform_validate extends: .terraform_validate
variables: variables:
TF_VERSION: 0.11.11 TF_VERSION: 0.12.12
PROVIDER: packet PROVIDER: packet
CLUSTER: $CI_COMMIT_REF_NAME CLUSTER: $CI_COMMIT_REF_NAME
tf-validate-aws: tf-validate-aws:
extends: .terraform_validate extends: .terraform_validate
variables: variables:
TF_VERSION: 0.11.11 TF_VERSION: 0.12.12
PROVIDER: aws PROVIDER: aws
CLUSTER: $CI_COMMIT_REF_NAME CLUSTER: $CI_COMMIT_REF_NAME
tf-packet-ubuntu16-default: # tf-packet-ubuntu16-default:
extends: .terraform_apply # extends: .terraform_apply
variables: # variables:
TF_VERSION: 0.11.11 # TF_VERSION: 0.12.12
PROVIDER: packet # PROVIDER: packet
CLUSTER: $CI_COMMIT_REF_NAME # CLUSTER: $CI_COMMIT_REF_NAME
TF_VAR_cluster_name: $CI_COMMIT_REF_SLUG # TF_VAR_number_of_k8s_masters: "1"
TF_VAR_number_of_k8s_masters: "1" # TF_VAR_number_of_k8s_nodes: "1"
TF_VAR_number_of_k8s_nodes: "1" # TF_VAR_plan_k8s_masters: t1.small.x86
TF_VAR_plan_k8s_masters: t1.small.x86 # TF_VAR_plan_k8s_nodes: t1.small.x86
TF_VAR_plan_k8s_nodes: t1.small.x86 # TF_VAR_facility: ewr1
TF_VAR_facility: ewr1 # TF_VAR_public_key_path: ""
TF_VAR_public_key_path: "" # TF_VAR_operating_system: ubuntu_16_04
TF_VAR_operating_system: ubuntu_16_04 #
# tf-packet-ubuntu18-default:
tf-packet-ubuntu18-default: # extends: .terraform_apply
extends: .terraform_apply # variables:
variables: # TF_VERSION: 0.12.12
TF_VERSION: 0.11.11 # PROVIDER: packet
PROVIDER: packet # CLUSTER: $CI_COMMIT_REF_NAME
CLUSTER: $CI_COMMIT_REF_NAME # TF_VAR_number_of_k8s_masters: "1"
TF_VAR_cluster_name: $CI_COMMIT_REF_SLUG # TF_VAR_number_of_k8s_nodes: "1"
TF_VAR_number_of_k8s_masters: "1" # TF_VAR_plan_k8s_masters: t1.small.x86
TF_VAR_number_of_k8s_nodes: "1" # TF_VAR_plan_k8s_nodes: t1.small.x86
TF_VAR_plan_k8s_masters: t1.small.x86 # TF_VAR_facility: ams1
TF_VAR_plan_k8s_nodes: t1.small.x86 # TF_VAR_public_key_path: ""
TF_VAR_facility: ams1 # TF_VAR_operating_system: ubuntu_18_04
TF_VAR_public_key_path: ""
TF_VAR_operating_system: ubuntu_18_04
.ovh_variables: &ovh_variables .ovh_variables: &ovh_variables
OS_AUTH_URL: https://auth.cloud.ovh.net/v3 OS_AUTH_URL: https://auth.cloud.ovh.net/v3
@@ -105,15 +104,16 @@ tf-packet-ubuntu18-default:
OS_INTERFACE: public OS_INTERFACE: public
OS_IDENTITY_API_VERSION: "3" OS_IDENTITY_API_VERSION: "3"
tf-apply-ovh: tf-ovh_ubuntu18-calico:
extends: .terraform_apply extends: .terraform_apply
when: on_success
variables: variables:
<<: *ovh_variables <<: *ovh_variables
TF_VERSION: 0.11.11 TF_VERSION: 0.12.12
PROVIDER: openstack PROVIDER: openstack
CLUSTER: $CI_COMMIT_REF_NAME CLUSTER: $CI_COMMIT_REF_NAME
ANSIBLE_TIMEOUT: "60" ANSIBLE_TIMEOUT: "60"
TF_VAR_cluster_name: $CI_COMMIT_REF_SLUG SSH_USER: ubuntu
TF_VAR_number_of_k8s_masters: "0" TF_VAR_number_of_k8s_masters: "0"
TF_VAR_number_of_k8s_masters_no_floating_ip: "1" TF_VAR_number_of_k8s_masters_no_floating_ip: "1"
TF_VAR_number_of_k8s_masters_no_floating_ip_no_etcd: "0" TF_VAR_number_of_k8s_masters_no_floating_ip_no_etcd: "0"
@@ -131,3 +131,31 @@ tf-apply-ovh:
TF_VAR_flavor_k8s_node: "defa64c3-bd46-43b4-858a-d93bbae0a229" # s1-8 TF_VAR_flavor_k8s_node: "defa64c3-bd46-43b4-858a-d93bbae0a229" # s1-8
TF_VAR_image: "Ubuntu 18.04" TF_VAR_image: "Ubuntu 18.04"
TF_VAR_k8s_allowed_remote_ips: '["0.0.0.0/0"]' TF_VAR_k8s_allowed_remote_ips: '["0.0.0.0/0"]'
tf-ovh_coreos-calico:
extends: .terraform_apply
when: on_success
variables:
<<: *ovh_variables
TF_VERSION: 0.12.12
PROVIDER: openstack
CLUSTER: $CI_COMMIT_REF_NAME
ANSIBLE_TIMEOUT: "60"
SSH_USER: core
TF_VAR_number_of_k8s_masters: "0"
TF_VAR_number_of_k8s_masters_no_floating_ip: "1"
TF_VAR_number_of_k8s_masters_no_floating_ip_no_etcd: "0"
TF_VAR_number_of_etcd: "0"
TF_VAR_number_of_k8s_nodes: "0"
TF_VAR_number_of_k8s_nodes_no_floating_ip: "1"
TF_VAR_number_of_gfs_nodes_no_floating_ip: "0"
TF_VAR_number_of_bastions: "0"
TF_VAR_number_of_k8s_masters_no_etcd: "0"
TF_VAR_use_neutron: "0"
TF_VAR_floatingip_pool: "Ext-Net"
TF_VAR_external_net: "6011fbc9-4cbf-46a4-8452-6890a340b60b"
TF_VAR_network_name: "Ext-Net"
TF_VAR_flavor_k8s_master: "4d4fd037-9493-4f2b-9afe-b542b5248eac" # b2-7
TF_VAR_flavor_k8s_node: "4d4fd037-9493-4f2b-9afe-b542b5248eac" # b2-7
TF_VAR_image: "CoreOS Stable"
TF_VAR_k8s_allowed_remote_ips: '["0.0.0.0/0"]'

.markdownlint.yaml Normal file

@@ -0,0 +1,2 @@
---
MD013: false


@@ -4,8 +4,8 @@ RUN mkdir /kubespray
WORKDIR /kubespray WORKDIR /kubespray
RUN apt update -y && \ RUN apt update -y && \
apt install -y \ apt install -y \
libssl-dev python-dev sshpass apt-transport-https jq \ libssl-dev python3-dev sshpass apt-transport-https jq moreutils \
ca-certificates curl gnupg2 software-properties-common python-pip ca-certificates curl gnupg2 software-properties-common python3-pip rsync
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \ RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
add-apt-repository \ add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \ "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
@@ -13,6 +13,6 @@ RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - &&
stable" \ stable" \
&& apt update -y && apt-get install docker-ce -y && apt update -y && apt-get install docker-ce -y
COPY . . COPY . .
RUN /usr/bin/python -m pip install pip -U && /usr/bin/python -m pip install -r tests/requirements.txt && python -m pip install -r requirements.txt RUN /usr/bin/python3 -m pip install pip -U && /usr/bin/python3 -m pip install -r tests/requirements.txt && python3 -m pip install -r requirements.txt && update-alternatives --install /usr/bin/python python /usr/bin/python3 1
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.13.5/bin/linux/amd64/kubectl \ RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.14.4/bin/linux/amd64/kubectl \
&& chmod a+x kubectl && cp kubectl /usr/local/bin/kubectl && chmod a+x kubectl && cp kubectl /usr/local/bin/kubectl


@@ -4,17 +4,12 @@ aliases:
- mattymo - mattymo
- atoms - atoms
- chadswen - chadswen
- rsmitty - mirwan
- bogdando - miouge1
- bradbeam
- woopstar
- riverzhang - riverzhang
- holser
- smana
- verwilst - verwilst
- woopstar
kubespray-reviewers: kubespray-reviewers:
- jjungnickel - jjungnickel
- archifleks - archifleks
- chapsuk - holmsten
- mirwan
- miouge1


@@ -1,7 +1,6 @@
![Kubernetes Logo](https://raw.githubusercontent.com/kubernetes-sigs/kubespray/master/docs/img/kubernetes-logo.png) # Deploy a Production Ready Kubernetes Cluster
Deploy a Production Ready Kubernetes Cluster ![Kubernetes Logo](https://raw.githubusercontent.com/kubernetes-sigs/kubespray/master/docs/img/kubernetes-logo.png)
============================================
If you have questions, check the [documentation](https://kubespray.io) and join us on the [kubernetes slack](https://kubernetes.slack.com), channel **\#kubespray**. If you have questions, check the [documentation](https://kubespray.io) and join us on the [kubernetes slack](https://kubernetes.slack.com), channel **\#kubespray**.
You can get your invite [here](http://slack.k8s.io/) You can get your invite [here](http://slack.k8s.io/)
@@ -12,8 +11,7 @@ You can get your invite [here](http://slack.k8s.io/)
- Supports most popular **Linux distributions** - Supports most popular **Linux distributions**
- **Continuous integration tests** - **Continuous integration tests**
Quick Start ## Quick Start
-----------
To deploy the cluster you can use : To deploy the cluster you can use :
@@ -21,6 +19,7 @@ To deploy the cluster you can use :
#### Usage #### Usage
```ShellSession
# Install dependencies from ``requirements.txt`` # Install dependencies from ``requirements.txt``
sudo pip install -r requirements.txt sudo pip install -r requirements.txt
@@ -29,23 +28,26 @@ To deploy the cluster you can use :
# Update Ansible inventory file with inventory builder # Update Ansible inventory file with inventory builder
declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5) declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
CONFIG_FILE=inventory/mycluster/hosts.yml python3 contrib/inventory_builder/inventory.py ${IPS[@]} CONFIG_FILE=inventory/mycluster/inventory.ini python3 contrib/inventory_builder/inventory.py ${IPS[@]}
# Review and change parameters under ``inventory/mycluster/group_vars`` # Review and change parameters under ``inventory/mycluster/group_vars``
cat inventory/mycluster/group_vars/all/all.yml cat inventory/mycluster/group_vars/all/all.yml
cat inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml cat inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
# Deploy Kubespray with Ansible Playbook - run the playbook as root # Deploy Kubespray with Ansible Playbook - run the playbook as root
# The option `-b` is required, as for example writing SSL keys in /etc/, # The option `--become` is required, as for example writing SSL keys in /etc/,
# installing packages and interacting with various systemd daemons. # installing packages and interacting with various systemd daemons.
# Without -b the playbook will fail to run! # Without --become the playbook will fail to run!
ansible-playbook -i inventory/mycluster/hosts.yml --become --become-user=root cluster.yml ansible-playbook -i inventory/mycluster/inventory.ini --become --become-user=root cluster.yml
```
Note: When Ansible is already installed via system packages on the control machine, other python packages installed via `sudo pip install -r requirements.txt` will go to a different directory tree (e.g. `/usr/local/lib/python2.7/dist-packages` on Ubuntu) from Ansible's (e.g. `/usr/lib/python2.7/dist-packages/ansible` still on Ubuntu). Note: When Ansible is already installed via system packages on the control machine, other python packages installed via `sudo pip install -r requirements.txt` will go to a different directory tree (e.g. `/usr/local/lib/python2.7/dist-packages` on Ubuntu) from Ansible's (e.g. `/usr/lib/python2.7/dist-packages/ansible` still on Ubuntu).
As a consequence, `ansible-playbook` command will fail with: As a consequence, `ansible-playbook` command will fail with:
```
```raw
ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path. ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
``` ```
probably pointing on a task depending on a module present in requirements.txt (i.e. "unseal vault"). probably pointing on a task depending on a module present in requirements.txt (i.e. "unseal vault").
One way of solving this would be to uninstall the Ansible package and then, to install it via pip but it is not always possible. One way of solving this would be to uninstall the Ansible package and then, to install it via pip but it is not always possible.
@@ -56,16 +58,19 @@ A workaround consists of setting `ANSIBLE_LIBRARY` and `ANSIBLE_MODULE_UTILS` en
For Vagrant we need to install python dependencies for provisioning tasks. For Vagrant we need to install python dependencies for provisioning tasks.
Check if Python and pip are installed: Check if Python and pip are installed:
```ShellSession
python -V && pip -V python -V && pip -V
```
If this returns the version of the software, you're good to go. If not, download and install Python from here <https://www.python.org/downloads/source/> If this returns the version of the software, you're good to go. If not, download and install Python from here <https://www.python.org/downloads/source/>
Install the necessary requirements Install the necessary requirements
```ShellSession
sudo pip install -r requirements.txt sudo pip install -r requirements.txt
vagrant up vagrant up
```
Documents ## Documents
---------
- [Requirements](#requirements) - [Requirements](#requirements)
- [Kubespray vs ...](docs/comparisons.md) - [Kubespray vs ...](docs/comparisons.md)
@@ -91,8 +96,7 @@ Documents
- [Upgrades basics](docs/upgrades.md) - [Upgrades basics](docs/upgrades.md)
- [Roadmap](docs/roadmap.md) - [Roadmap](docs/roadmap.md)
Supported Linux Distributions ## Supported Linux Distributions
-----------------------------
- **Container Linux by CoreOS** - **Container Linux by CoreOS**
- **Debian** Buster, Jessie, Stretch, Wheezy - **Debian** Buster, Jessie, Stretch, Wheezy
@@ -101,40 +105,41 @@ Supported Linux Distributions
- **Fedora** 28 - **Fedora** 28
- **Fedora/CentOS** Atomic - **Fedora/CentOS** Atomic
- **openSUSE** Leap 42.3/Tumbleweed - **openSUSE** Leap 42.3/Tumbleweed
- **Oracle Linux** 7
Note: Upstart/SysV init based OS types are not supported. Note: Upstart/SysV init based OS types are not supported.
Supported Components ## Supported Components
--------------------
- Core - Core
- [kubernetes](https://github.com/kubernetes/kubernetes) v1.14.1 - [kubernetes](https://github.com/kubernetes/kubernetes) v1.16.8
- [etcd](https://github.com/coreos/etcd) v3.2.26 - [etcd](https://github.com/coreos/etcd) v3.3.12
- [docker](https://www.docker.com/) v18.06 (see note) - [docker](https://www.docker.com/) v18.06 (see note)
- [cri-o](http://cri-o.io/) v1.11.5 (experimental: see [CRI-O Note](docs/cri-o.md). Only on centos based OS) - [containerd](https://containerd.io/) v1.2.13
- [cri-o](http://cri-o.io/) v1.14.0 (experimental: see [CRI-O Note](docs/cri-o.md). Only on centos based OS)
- Network Plugin - Network Plugin
- [calico](https://github.com/projectcalico/calico) v3.4.0 - [cni-plugins](https://github.com/containernetworking/plugins) v0.8.1
- [calico](https://github.com/projectcalico/calico) v3.7.3
- [canal](https://github.com/projectcalico/canal) (given calico/flannel versions) - [canal](https://github.com/projectcalico/canal) (given calico/flannel versions)
- [cilium](https://github.com/cilium/cilium) v1.3.0 - [cilium](https://github.com/cilium/cilium) v1.5.5
- [contiv](https://github.com/contiv/install) v1.2.1 - [contiv](https://github.com/contiv/install) v1.2.1
- [flanneld](https://github.com/coreos/flannel) v0.11.0 - [flanneld](https://github.com/coreos/flannel) v0.11.0
- [kube-router](https://github.com/cloudnativelabs/kube-router) v0.2.5 - [kube-router](https://github.com/cloudnativelabs/kube-router) v0.2.5
- [multus](https://github.com/intel/multus-cni) v3.1.autoconf - [multus](https://github.com/intel/multus-cni) v3.2.1
- [weave](https://github.com/weaveworks/weave) v2.5.1 - [weave](https://github.com/weaveworks/weave) v2.5.2
- Application - Application
- [cephfs-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.0-k8s1.11 - [cephfs-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.0-k8s1.11
- [rbd-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.1-k8s1.11 - [rbd-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.1-k8s1.11
- [cert-manager](https://github.com/jetstack/cert-manager) v0.5.2 - [cert-manager](https://github.com/jetstack/cert-manager) v0.11.0
- [coredns](https://github.com/coredns/coredns) v1.5.0 - [coredns](https://github.com/coredns/coredns) v1.6.0
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v0.21.0 - [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v0.26.1
Note: The list of validated [docker versions](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md) was updated to 1.11.1, 1.12.1, 1.13.1, 17.03, 17.06, 17.09, 18.06. kubeadm now properly recognizes Docker 18.09.0 and newer, but still treats 18.06 as the default supported version. The kubelet might break on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster look into e.g. yum versionlock plugin or apt pin). Note: The list of validated [docker versions](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.16.md) was updated to 1.13.1, 17.03, 17.06, 17.09, 18.06, 18.09. kubeadm now properly recognizes Docker 18.09.0 and newer, but still treats 18.06 as the default supported version. The kubelet might break on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster look into e.g. yum versionlock plugin or apt pin).
Requirements ## Requirements
------------
- **Ansible v2.7.8 (or newer) and python-netaddr is installed on the machine - **Minimum required version of Kubernetes is v1.15**
that will run Ansible commands** - **Ansible v2.7.16 and python-netaddr is installed on the machine that will run Ansible commands**
- **Jinja 2.9 (or newer) is required to run the Ansible Playbooks** - **Jinja 2.9 (or newer) is required to run the Ansible Playbooks**
- The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/downloads.md#offline-environment)) - The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/downloads.md#offline-environment))
- The target servers are configured to allow **IPv4 forwarding**. - The target servers are configured to allow **IPv4 forwarding**.
@@ -153,10 +158,9 @@ These limits are safe guarded by Kubespray. Actual requirements for your workloa
- Node - Node
- Memory: 1024 MB - Memory: 1024 MB
Network Plugins ## Network Plugins
---------------
You can choose between 6 network plugins. (default: `calico`, except Vagrant uses `flannel`) You can choose between 10 network plugins. (default: `calico`, except Vagrant uses `flannel`)
- [flannel](docs/flannel.md): gre/vxlan (layer 2) networking. - [flannel](docs/flannel.md): gre/vxlan (layer 2) networking.
@@ -170,35 +174,36 @@ You can choose between 6 network plugins. (default: `calico`, except Vagrant use
apply firewall policies, segregate containers in multiple network and bridging pods onto physical networks. apply firewall policies, segregate containers in multiple network and bridging pods onto physical networks.
- [weave](docs/weave.md): Weave is a lightweight container overlay network that doesn't require an external K/V database cluster. - [weave](docs/weave.md): Weave is a lightweight container overlay network that doesn't require an external K/V database cluster.
(Please refer to `weave` [troubleshooting documentation](http://docs.weave.works/weave/latest_release/troubleshooting.html)). (Please refer to `weave` [troubleshooting documentation](https://www.weave.works/docs/net/latest/troubleshooting/)).
- [kube-ovn](docs/kube-ovn.md): Kube-OVN integrates the OVN-based Network Virtualization with Kubernetes. It offers an advanced Container Network Fabric for Enterprises.
- [kube-router](docs/kube-router.md): Kube-router is a L3 CNI for Kubernetes networking aiming to provide operational - [kube-router](docs/kube-router.md): Kube-router is a L3 CNI for Kubernetes networking aiming to provide operational
simplicity and high performance: it uses IPVS to provide Kube Services Proxy (if setup to replace kube-proxy), simplicity and high performance: it uses IPVS to provide Kube Services Proxy (if setup to replace kube-proxy),
iptables for network policies, and BGP for ods L3 networking (with optionally BGP peering with out-of-cluster BGP peers). iptables for network policies, and BGP for ods L3 networking (with optionally BGP peering with out-of-cluster BGP peers).
It can also optionally advertise routes to Kubernetes cluster Pods CIDRs, ClusterIPs, ExternalIPs and LoadBalancerIPs. It can also optionally advertise routes to Kubernetes cluster Pods CIDRs, ClusterIPs, ExternalIPs and LoadBalancerIPs.
- [macvlan](docs/macvlan.md): Macvlan is a Linux network driver. Pods have their own unique Mac and Ip address, connected directly the physical (layer 2) network.
- [multus](docs/multus.md): Multus is a meta CNI plugin that provides multiple network interface support to pods. For each interface Multus delegates CNI calls to secondary CNI plugins such as Calico, macvlan, etc. - [multus](docs/multus.md): Multus is a meta CNI plugin that provides multiple network interface support to pods. For each interface Multus delegates CNI calls to secondary CNI plugins such as Calico, macvlan, etc.
The choice is defined with the variable `kube_network_plugin`. There is also an The choice is defined with the variable `kube_network_plugin`. There is also an
option to leverage built-in cloud provider networking instead. option to leverage built-in cloud provider networking instead.
See also [Network checker](docs/netcheck.md). See also [Network checker](docs/netcheck.md).
Community docs and resources ## Community docs and resources
----------------------------
- [kubernetes.io/docs/getting-started-guides/kubespray/](https://kubernetes.io/docs/getting-started-guides/kubespray/) - [kubernetes.io/docs/setup/production-environment/tools/kubespray/](https://kubernetes.io/docs/setup/production-environment/tools/kubespray/)
- [kubespray, monitoring and logging](https://github.com/gregbkr/kubernetes-kargo-logging-monitoring) by @gregbkr - [kubespray, monitoring and logging](https://github.com/gregbkr/kubernetes-kargo-logging-monitoring) by @gregbkr
- [Deploy Kubernetes w/ Ansible & Terraform](https://rsmitty.github.io/Terraform-Ansible-Kubernetes/) by @rsmitty - [Deploy Kubernetes w/ Ansible & Terraform](https://rsmitty.github.io/Terraform-Ansible-Kubernetes/) by @rsmitty
- [Deploy a Kubernetes Cluster with Kubespray (video)](https://www.youtube.com/watch?v=N9q51JgbWu8) - [Deploy a Kubernetes Cluster with Kubespray (video)](https://www.youtube.com/watch?v=N9q51JgbWu8)
Tools and projects on top of Kubespray ## Tools and projects on top of Kubespray
--------------------------------------
- [Digital Rebar Provision](https://github.com/digitalrebar/provision/blob/master/doc/integrations/ansible.rst) - [Digital Rebar Provision](https://github.com/digitalrebar/provision/blob/v4/doc/integrations/ansible.rst)
- [Terraform Contrib](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform) - [Terraform Contrib](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform)
CI Tests ## CI Tests
--------
[![Build graphs](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/badges/master/build.svg)](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/pipelines) [![Build graphs](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/badges/master/build.svg)](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/pipelines)


@@ -3,16 +3,19 @@
 The Kubespray Project is released on an as-needed basis. The process is as follows:
 1. An issue is proposing a new release with a changelog since the last release
-2. At least one of the [OWNERS](OWNERS) must LGTM this release
-3. An OWNER runs `git tag -s $VERSION` and inserts the changelog and pushes the tag with `git push $VERSION`
-4. The release issue is closed
-5. An announcement email is sent to `kubernetes-dev@googlegroups.com` with the subject `[ANNOUNCE] Kubespray $VERSION is released`
+2. At least one of the [approvers](OWNERS_ALIASES) must approve this release
+3. An approver creates [new release in GitHub](https://github.com/kubernetes-sigs/kubespray/releases/new) using a version and tag name like `vX.Y.Z` and attaching the release notes
+4. An approver creates a release branch in the form `release-vX.Y`
+5. The corresponding version of [quay.io/kubespray/kubespray:vX.Y.Z](https://quay.io/repository/kubespray/kubespray) docker image is built and tagged
+6. The `KUBESPRAY_VERSION` variable is updated in `.gitlab-ci.yml`
+7. The release issue is closed
+8. An announcement email is sent to `kubernetes-dev@googlegroups.com` with the subject `[ANNOUNCE] Kubespray $VERSION is released`
 ## Major/minor releases, merge freezes and milestones
-* Kubespray does not maintain stable branches for releases. Releases are tags, not
-  branches, and there are no backports. Therefore, there is no need for merge
-  freezes as well.
+* Kubespray maintains one branch for major releases (vX.Y). Minor releases are available only as tags.
+* Security patches and bugs might be backported.
 * Fixes for major releases (vX.x.0) and minor releases (vX.Y.x) are delivered
   via maintenance releases (vX.Y.Z) and assigned to the corresponding open

Vagrantfile

@@ -21,10 +21,11 @@ SUPPORTED_OS = {
"ubuntu1604" => {box: "generic/ubuntu1604", user: "vagrant"}, "ubuntu1604" => {box: "generic/ubuntu1604", user: "vagrant"},
"ubuntu1804" => {box: "generic/ubuntu1804", user: "vagrant"}, "ubuntu1804" => {box: "generic/ubuntu1804", user: "vagrant"},
"centos" => {box: "centos/7", user: "vagrant"}, "centos" => {box: "centos/7", user: "vagrant"},
"centos-bento" => {box: "bento/centos-7.5", user: "vagrant"}, "centos-bento" => {box: "bento/centos-7.6", user: "vagrant"},
"fedora" => {box: "fedora/28-cloud-base", user: "vagrant"}, "fedora" => {box: "fedora/28-cloud-base", user: "vagrant"},
"opensuse" => {box: "opensuse/openSUSE-15.0-x86_64", user: "vagrant"}, "opensuse" => {box: "opensuse/openSUSE-15.0-x86_64", user: "vagrant"},
"opensuse-tumbleweed" => {box: "opensuse/openSUSE-Tumbleweed-x86_64", user: "vagrant"}, "opensuse-tumbleweed" => {box: "opensuse/openSUSE-Tumbleweed-x86_64", user: "vagrant"},
"oraclelinux" => {box: "generic/oracle7", user: "vagrant"},
} }
# Defaults for config options defined in CONFIG # Defaults for config options defined in CONFIG
@@ -180,9 +181,17 @@ Vagrant.configure("2") do |config|
"flannel_interface": "eth1", "flannel_interface": "eth1",
"kube_network_plugin": $network_plugin, "kube_network_plugin": $network_plugin,
"kube_network_plugin_multus": $multi_networking, "kube_network_plugin_multus": $multi_networking,
"docker_keepcache": "1", "download_run_once": "True",
"download_run_once": "False",
"download_localhost": "False", "download_localhost": "False",
"download_cache_dir": ENV['HOME'] + "/kubespray_cache",
# Make kubespray cache even when download_run_once is false
"download_force_cache": "True",
# Keeping the cache on the nodes can improve provisioning speed while debugging kubespray
"download_keep_remote_cache": "False",
"docker_keepcache": "1",
# These two settings will put kubectl and admin.config in $inventory/artifacts
"kubeconfig_localhost": "True",
"kubectl_localhost": "True",
"local_path_provisioner_enabled": "#{$local_path_provisioner_enabled}", "local_path_provisioner_enabled": "#{$local_path_provisioner_enabled}",
"local_path_provisioner_claim_root": "#{$local_path_provisioner_claim_root}", "local_path_provisioner_claim_root": "#{$local_path_provisioner_claim_root}",
"ansible_ssh_user": SUPPORTED_OS[$os][:user] "ansible_ssh_user": SUPPORTED_OS[$os][:user]
@@ -197,7 +206,7 @@ Vagrant.configure("2") do |config|
   ansible.inventory_path = $ansible_inventory_path
   end
   ansible.become = true
-  ansible.limit = "all"
+  ansible.limit = "all,localhost"
   ansible.host_key_checking = false
   ansible.raw_arguments = ["--forks=#{$num_instances}", "--flush-cache", "-e ansible_become_pass=vagrant"]
   ansible.host_vars = host_vars
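The Vagrant host_vars above map onto Kubespray's download-cache variables; for non-Vagrant runs the same behaviour can be set from the inventory. A minimal sketch, assuming the variable names used in the diff (the file path is illustrative):

```yaml
# inventory/mycluster/group_vars/all/download.yml (illustrative location)
download_run_once: true            # fetch binaries/images once, then push them to the other nodes
download_force_cache: true         # keep the local cache populated even when download_run_once is false
download_cache_dir: "{{ lookup('env', 'HOME') }}/kubespray_cache"
download_keep_remote_cache: false  # drop the per-node cache after provisioning
kubeconfig_localhost: true         # put admin.conf under $inventory/artifacts
kubectl_localhost: true            # put the kubectl binary under $inventory/artifacts
```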


@@ -4,6 +4,8 @@ ssh_args = -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100
 #control_path = ~/.ssh/ansible-%%r@%%h:%%p
 [defaults]
 strategy_plugins = plugins/mitogen/ansible_mitogen/plugins/strategy
+# https://github.com/ansible/ansible/issues/56930 (to ignore group names with - and .)
+force_valid_group_names = ignore
 host_key_checking=False
 gathering = smart
@@ -14,6 +16,6 @@ library = ./library
 callback_whitelist = profile_tasks
 roles_path = roles:$VIRTUAL_ENV/usr/local/share/kubespray/roles:$VIRTUAL_ENV/usr/local/share/ansible/roles:/usr/share/kubespray/roles
 deprecation_warnings=False
-inventory_ignore_extensions = ~, .orig, .bak, .ini, .cfg, .retry, .pyc, .pyo, .creds
+inventory_ignore_extensions = ~, .orig, .bak, .ini, .cfg, .retry, .pyc, .pyo, .creds, .gpg
 [inventory]
 ignore_patterns = artifacts, credentials


@@ -19,27 +19,14 @@
   - { role: kubespray-defaults}
   - { role: bastion-ssh-config, tags: ["localhost", "bastion"]}
-- hosts: k8s-cluster:etcd:calico-rr
+- hosts: k8s-cluster:etcd
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   gather_facts: false
   roles:
     - { role: kubespray-defaults}
     - { role: bootstrap-os, tags: bootstrap-os}
-- hosts: k8s-cluster:etcd:calico-rr
-  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
-  vars:
-    ansible_ssh_pipelining: true
-  gather_facts: false
-  pre_tasks:
-    - name: gather facts from all instances
-      setup:
-      delegate_to: "{{item}}"
-      delegate_facts: true
-      with_items: "{{ groups['k8s-cluster'] + groups['etcd'] + groups['calico-rr']|default([]) }}"
-      run_once: true
-- hosts: k8s-cluster:etcd:calico-rr
+- hosts: k8s-cluster:etcd
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   roles:
     - { role: kubespray-defaults}
@@ -52,13 +39,23 @@
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   roles:
     - { role: kubespray-defaults}
-    - { role: etcd, tags: etcd, etcd_cluster_setup: true, etcd_events_cluster_setup: "{{ etcd_events_cluster_enabled }}" }
+    - role: etcd
+      tags: etcd
+      vars:
+        etcd_cluster_setup: true
+        etcd_events_cluster_setup: "{{ etcd_events_cluster_enabled }}"
+      when: not etcd_kubeadm_enabled| default(false)
-- hosts: k8s-cluster:calico-rr
+- hosts: k8s-cluster
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   roles:
     - { role: kubespray-defaults}
-    - { role: etcd, tags: etcd, etcd_cluster_setup: false, etcd_events_cluster_setup: false }
+    - role: etcd
+      tags: etcd
+      vars:
+        etcd_cluster_setup: false
+        etcd_events_cluster_setup: false
+      when: not etcd_kubeadm_enabled| default(false)
 - hosts: k8s-cluster
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
@@ -81,6 +78,13 @@
   - { role: kubespray-defaults}
   - { role: kubernetes/kubeadm, tags: kubeadm}
   - { role: network_plugin, tags: network }
+  - { role: kubernetes/node-label }
+- hosts: calico-rr
+  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
+  roles:
+    - { role: kubespray-defaults}
+    - { role: network_plugin/calico/rr, tags: ['network', 'calico_rr']}
 - hosts: kube-master[0]
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
@@ -98,12 +102,6 @@
   - { role: kubernetes-apps/ingress_controller, tags: ingress-controller }
   - { role: kubernetes-apps/external_provisioner, tags: external-provisioner }
-- hosts: calico-rr
-  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
-  roles:
-    - { role: kubespray-defaults}
-    - { role: network_plugin/calico/rr, tags: network }
 - hosts: kube-master
   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
   roles:


@@ -42,8 +42,11 @@ class SearchEC2Tags(object):
     region = os.environ['REGION']
     ec2 = boto3.resource('ec2', region)
-    instances = ec2.instances.filter(Filters=[{'Name': 'tag:'+tag_key, 'Values': tag_value}, {'Name': 'instance-state-name', 'Values': ['running']}])
+    filters = [{'Name': 'tag:'+tag_key, 'Values': tag_value}, {'Name': 'instance-state-name', 'Values': ['running']}]
+    cluster_name = os.getenv('CLUSTER_NAME')
+    if cluster_name:
+      filters.append({'Name': 'tag-key', 'Values': ['kubernetes.io/cluster/'+cluster_name]})
+    instances = ec2.instances.filter(Filters=filters)
     for instance in instances:
       ##Suppose default vpc_visibility is private


@@ -7,7 +7,7 @@ cluster_name: example
 # node that can be used to access the masters and minions
 use_bastion: false
-# Set this to a prefered name that will be used as the first part of the dns name for your bastotion host. For example: k8s-bastion.<azureregion>.cloudapp.azure.com.
+# Set this to a preferred name that will be used as the first part of the dns name for your bastotion host. For example: k8s-bastion.<azureregion>.cloudapp.azure.com.
 # This is convenient when exceptions have to be configured on a firewall to allow ssh to the given bastion host.
 # bastion_domain_prefix: k8s-bastion


@@ -4,8 +4,11 @@
   command: azure vm list-ip-address --json {{ azure_resource_group }}
   register: vm_list_cmd
-- set_fact:
+- name: Set vm_list
+  set_fact:
     vm_list: "{{ vm_list_cmd.stdout }}"
 - name: Generate inventory
-  template: src=inventory.j2 dest="{{playbook_dir}}/inventory"
+  template:
+    src: inventory.j2
+    dest: "{{ playbook_dir }}/inventory"


@@ -8,9 +8,22 @@
   command: az vm list -o json --resource-group {{ azure_resource_group }}
   register: vm_list_cmd
-- set_fact:
+- name: Query Azure Load Balancer Public IP
+  command: az network public-ip show -o json -g {{ azure_resource_group }} -n kubernetes-api-pubip
+  register: lb_pubip_cmd
+- name: Set VM IP, roles lists and load balancer public IP
+  set_fact:
     vm_ip_list: "{{ vm_ip_list_cmd.stdout }}"
     vm_roles_list: "{{ vm_list_cmd.stdout }}"
+    lb_pubip: "{{ lb_pubip_cmd.stdout }}"
 - name: Generate inventory
-  template: src=inventory.j2 dest="{{playbook_dir}}/inventory"
+  template:
+    src: inventory.j2
+    dest: "{{ playbook_dir }}/inventory"
+- name: Generate Load Balancer variables
+  template:
+    src: loadbalancer_vars.j2
+    dest: "{{ playbook_dir }}/loadbalancer_vars.yml"


@@ -0,0 +1,8 @@
+## External LB example config
+apiserver_loadbalancer_domain_name: {{ lb_pubip.dnsSettings.fqdn }}
+loadbalancer_apiserver:
+  address: {{ lb_pubip.ipAddress }}
+  port: 6443
+## Internal loadbalancers for apiservers
+loadbalancer_apiserver_localhost: false
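Once rendered by the play above, loadbalancer_vars.yml is plain YAML with the FQDN and address of the Azure load balancer filled in; roughly (the values below are hypothetical, not taken from the diff):

```yaml
## External LB example config
apiserver_loadbalancer_domain_name: k8s-api.westeurope.cloudapp.azure.com  # hypothetical FQDN of kubernetes-api-pubip
loadbalancer_apiserver:
  address: 52.174.10.20  # hypothetical public IP
  port: 6443
## Internal loadbalancers for apiservers
loadbalancer_apiserver_localhost: false
```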


@@ -29,7 +29,7 @@ sshKeyPath: "/home/{{admin_username}}/.ssh/authorized_keys"
 imageReference:
   publisher: "OpenLogic"
   offer: "CentOS"
-  sku: "7.2"
+  sku: "7.5"
   version: "latest"
 imageReferenceJson: "{{imageReference|to_json}}"


@@ -1,10 +1,18 @@
 ---
-- set_fact:
+- name: Set base_dir
+  set_fact:
     base_dir: "{{ playbook_dir }}/.generated/"
-- file: path={{base_dir}} state=directory recurse=true
+- name: Create base_dir
+  file:
+    path: "{{ base_dir }}"
+    state: directory
+    recurse: true
-- template: src={{item}} dest="{{base_dir}}/{{item}}"
+- name: Store json files in base_dir
+  template:
+    src: "{{ item }}"
+    dest: "{{ base_dir }}/{{ item }}"
   with_items:
     - network.json
     - storage.json


@@ -20,6 +20,8 @@
 # Add range of hosts: inventory.py 10.10.1.3-10.10.1.5
 # Add hosts with different ip and access ip:
 # inventory.py 10.0.0.1,192.168.10.1 10.0.0.2,192.168.10.2 10.0.0.3,192.168.1.3
+# Add hosts with a specific hostname, ip, and optional access ip:
+# inventory.py first,10.0.0.1,192.168.10.1 second,10.0.0.2 last,10.0.0.3
 # Delete a host: inventory.py -10.10.1.3
 # Delete a host by id: inventory.py -node1
 #
@@ -44,7 +46,8 @@ import sys
 ROLES = ['all', 'kube-master', 'kube-node', 'etcd', 'k8s-cluster',
          'calico-rr']
 PROTECTED_NAMES = ROLES
-AVAILABLE_COMMANDS = ['help', 'print_cfg', 'print_ips', 'load']
+AVAILABLE_COMMANDS = ['help', 'print_cfg', 'print_ips', 'print_hostnames',
+                      'load']
 _boolean_states = {'1': True, 'yes': True, 'true': True, 'on': True,
                    '0': False, 'no': False, 'false': False, 'off': False}
 yaml = YAML()
@@ -59,6 +62,7 @@ def get_var_as_bool(name, default):
 CONFIG_FILE = os.environ.get("CONFIG_FILE", "./inventory/sample/hosts.yaml")
+KUBE_MASTERS = int(os.environ.get("KUBE_MASTERS_MASTERS", 2))
 # Reconfigures cluster distribution at scale
 SCALE_THRESHOLD = int(os.environ.get("SCALE_THRESHOLD", 50))
 MASSIVE_SCALE_THRESHOLD = int(os.environ.get("SCALE_THRESHOLD", 200))
@@ -78,7 +82,7 @@ class KubesprayInventory(object):
         try:
             self.hosts_file = open(config_file, 'r')
             self.yaml_config = yaml.load(self.hosts_file)
-        except FileNotFoundError:
+        except OSError:
             pass
         if changed_hosts and changed_hosts[0] in AVAILABLE_COMMANDS:
@@ -96,9 +100,10 @@ class KubesprayInventory(object):
             etcd_hosts_count = 3 if len(self.hosts.keys()) >= 3 else 1
             self.set_etcd(list(self.hosts.keys())[:etcd_hosts_count])
             if len(self.hosts) >= SCALE_THRESHOLD:
-                self.set_kube_master(list(self.hosts.keys())[etcd_hosts_count:5])
+                self.set_kube_master(list(self.hosts.keys())[
+                    etcd_hosts_count:(etcd_hosts_count + KUBE_MASTERS)])
             else:
-                self.set_kube_master(list(self.hosts.keys())[:2])
+                self.set_kube_master(list(self.hosts.keys())[:KUBE_MASTERS])
             self.set_kube_node(self.hosts.keys())
             if len(self.hosts) >= SCALE_THRESHOLD:
                 self.set_calico_rr(list(self.hosts.keys())[:etcd_hosts_count])
@@ -192,8 +197,21 @@ class KubesprayInventory(object):
                                    'ip': ip,
                                    'access_ip': access_ip}
             elif host[0].isalpha():
-                raise Exception("Adding hosts by hostname is not supported.")
+                if ',' in host:
+                    try:
+                        hostname, ip, access_ip = host.split(',')
+                    except Exception:
+                        hostname, ip = host.split(',')
+                        access_ip = ip
+                if self.exists_hostname(all_hosts, host):
+                    self.debug("Skipping existing host {0}.".format(host))
+                    continue
+                elif self.exists_ip(all_hosts, ip):
+                    self.debug("Skipping existing host {0}.".format(ip))
+                    continue
+                all_hosts[hostname] = {'ansible_host': access_ip,
+                                       'ip': ip,
+                                       'access_ip': access_ip}
         return all_hosts
     def range2ips(self, hosts):
@@ -204,10 +222,10 @@ class KubesprayInventory(object):
                 # Python 3.x
                 start = int(ip_address(start_address))
                 end = int(ip_address(end_address))
-            except:
+            except Exception:
                 # Python 2.7
-                start = int(ip_address(unicode(start_address)))
-                end = int(ip_address(unicode(end_address)))
+                start = int(ip_address(str(start_address)))
+                end = int(ip_address(str(end_address)))
             return [ip_address(ip).exploded for ip in range(start, end + 1)]
         for host in hosts:
@@ -346,6 +364,8 @@ class KubesprayInventory(object):
             self.print_config()
         elif command == 'print_ips':
             self.print_ips()
+        elif command == 'print_hostnames':
+            self.print_hostnames()
         elif command == 'load':
             self.load_file(args)
         else:
@@ -359,11 +379,13 @@ Available commands:
 help - Display this message
 print_cfg - Write inventory file to stdout
 print_ips - Write a space-delimited list of IPs from "all" group
+print_hostnames - Write a space-delimited list of Hostnames from "all" group
 Advanced usage:
 Add another host after initial creation: inventory.py 10.10.1.5
 Add range of hosts: inventory.py 10.10.1.3-10.10.1.5
 Add hosts with different ip and access ip: inventory.py 10.0.0.1,192.168.10.1 10.0.0.2,192.168.10.2 10.0.0.3,192.168.10.3
+Add hosts with a specific hostname, ip, and optional access ip: first,10.0.0.1,192.168.10.1 second,10.0.0.2 last,10.0.0.3
 Delete a host: inventory.py -10.10.1.3
 Delete a host by id: inventory.py -node1
@@ -379,6 +401,9 @@ MASSIVE_SCALE_THRESHOLD Separate K8s master and ETCD if # of nodes >= 200
     def print_config(self):
         yaml.dump(self.yaml_config, sys.stdout)
+    def print_hostnames(self):
+        print(' '.join(self.yaml_config['all']['hosts'].keys()))
     def print_ips(self):
         ips = []
         for host, opts in self.yaml_config['all']['hosts'].items():


@@ -12,6 +12,7 @@
 # License for the specific language governing permissions and limitations
 # under the License.
+import inventory
 import mock
 import unittest
@@ -22,7 +23,7 @@ path = "./contrib/inventory_builder/"
 if path not in sys.path:
     sys.path.append(path)
-import inventory
+import inventory  # noqa
 class TestInventory(unittest.TestCase):
@@ -43,7 +44,7 @@ class TestInventory(unittest.TestCase):
     def test_get_ip_from_opts_invalid(self):
         optstring = "notanaddr=value something random!chars:D"
-        self.assertRaisesRegexp(ValueError, "IP parameter not found",
+        self.assertRaisesRegex(ValueError, "IP parameter not found",
                                 self.inv.get_ip_from_opts, optstring)
     def test_ensure_required_groups(self):
@@ -63,7 +64,7 @@ class TestInventory(unittest.TestCase):
     def test_get_host_id_invalid(self):
         bad_hostnames = ['node', 'no99de', '01node', 'node.111111']
         for hostname in bad_hostnames:
-            self.assertRaisesRegexp(ValueError, "Host name must end in an",
+            self.assertRaisesRegex(ValueError, "Host name must end in an",
                                     self.inv.get_host_id, hostname)
     def test_build_hostnames_add_one(self):
@@ -192,7 +193,7 @@ class TestInventory(unittest.TestCase):
             ('node2', {'ansible_host': '10.90.0.3',
                        'ip': '10.90.0.3',
                        'access_ip': '10.90.0.3'})])
-        self.assertRaisesRegexp(ValueError, "Unable to find host",
+        self.assertRaisesRegex(ValueError, "Unable to find host",
                                 self.inv.delete_host_by_ip, existing_hosts, ip)
     def test_purge_invalid_hosts(self):
@@ -309,7 +310,7 @@ class TestInventory(unittest.TestCase):
     def test_range2ips_incorrect_range(self):
         host_range = ['10.90.0.4-a.9b.c.e']
-        self.assertRaisesRegexp(Exception, "Range of ip_addresses isn't valid",
+        self.assertRaisesRegex(Exception, "Range of ip_addresses isn't valid",
                                 self.inv.range2ips, host_range)
     def test_build_hostnames_different_ips_add_one(self):


@@ -1,7 +1,7 @@
 [tox]
 minversion = 1.6
 skipsdist = True
-envlist = pep8, py27
+envlist = pep8, py33
 [testenv]
 whitelist_externals = py.test


@@ -2,9 +2,11 @@
 ```
 MetalLB hooks into your Kubernetes cluster, and provides a network load-balancer implementation. In short, it allows you to create Kubernetes services of type “LoadBalancer” in clusters that dont run on a cloud provider, and thus cannot simply hook into paid products to provide load-balancers.
 ```
-This playbook aims to automate [this](https://metallb.universe.tf/tutorial/layer2/tutorial). It deploys MetalLB into kubernetes and sets up a layer 2 loadbalancer.
+This playbook aims to automate [this](https://metallb.universe.tf/concepts/layer2/). It deploys MetalLB into kubernetes and sets up a layer 2 loadbalancer.
 ## Install
 ```
+Defaults can be found in contrib/metallb/roles/provision/defaults/main.yml. You can override the defaults by copying the contents of this file to somewhere in inventory/mycluster/group_vars such as inventory/mycluster/groups_vars/k8s-cluster/addons.yml and making any adjustments as required.
 ansible-playbook --ask-become -i inventory/sample/hosts.ini contrib/metallb/metallb.yml
 ```


@@ -1,6 +1,12 @@
 ---
 metallb:
   ip_range: "10.5.0.50-10.5.0.99"
+  protocol: "layer2"
+  # additional_address_pools:
+  #   kube_service_pool:
+  #     ip_range: "10.5.1.50-10.5.1.99"
+  #     protocol: "layer2"
+  #     auto_assign: false
   limits:
     cpu: "100m"
     memory: "100Mi"


@@ -1,4 +1,9 @@
 ---
+- name: "Kubernetes Apps | Check cluster settings for MetalLB"
+  fail:
+    msg: "MetalLB require kube_proxy_strict_arp = true, see https://github.com/danderson/metallb/issues/153#issuecomment-518651132"
+  when:
+    - "kube_proxy_mode == 'ipvs' and not kube_proxy_strict_arp"
 - name: "Kubernetes Apps | Lay Down MetalLB"
   become: true
   template: { src: "{{ item }}.j2", dest: "{{ kube_config_dir }}/{{ item }}" }
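The variables checked by this new pre-task come from the k8s-cluster group vars; a minimal sketch of an inventory that passes the check when kube-proxy runs in IPVS mode (the file path is illustrative):

```yaml
# inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml (illustrative path)
kube_proxy_mode: ipvs
# required by MetalLB's layer 2 mode under IPVS, per the issue linked in the task above
kube_proxy_strict_arp: true
```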


@@ -8,6 +8,14 @@ data:
   config: |
     address-pools:
     - name: loadbalanced
-      protocol: layer2
+      protocol: {{ metallb.protocol }}
       addresses:
       - {{ metallb.ip_range }}
+{% if metallb.additional_address_pools is defined %}{% for pool in metallb.additional_address_pools %}
+    - name: {{ pool }}
+      protocol: {{ metallb.additional_address_pools[pool].protocol }}
+      addresses:
+      - {{ metallb.additional_address_pools[pool].ip_range }}
+      auto-assign: {{ metallb.additional_address_pools[pool].auto_assign }}
+{% endfor %}
+{% endif %}
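With the role defaults plus the illustrative kube_service_pool override shown earlier, this template would render roughly to the following ConfigMap data (a sketch, not output captured from the diff):

```yaml
config: |
  address-pools:
  - name: loadbalanced
    protocol: layer2
    addresses:
    - 10.5.0.50-10.5.0.99
  - name: kube_service_pool
    protocol: layer2
    addresses:
    - 10.5.1.50-10.5.1.99
    auto-assign: false
```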


@@ -115,7 +115,7 @@ roleRef:
   kind: Role
   name: config-watcher
 ---
-apiVersion: apps/v1beta2
+apiVersion: apps/v1
 kind: DaemonSet
 metadata:
   namespace: metallb-system
@@ -169,7 +169,7 @@ spec:
         - net_raw
 ---
-apiVersion: apps/v1beta2
+apiVersion: apps/v1
 kind: Deployment
 metadata:
   namespace: metallb-system


@@ -0,0 +1,15 @@
+---
+apiVersion: rbac.authorization.k8s.io/v1beta1
+kind: ClusterRoleBinding
+metadata:
+  name: kubernetes-dashboard
+  labels:
+    k8s-app: kubernetes-dashboard
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: cluster-admin
+subjects:
+  - kind: ServiceAccount
+    name: kubernetes-dashboard
+    namespace: kube-system


@@ -21,7 +21,7 @@ You can specify a `default_release` for apt on Debian/Ubuntu by overriding this
 glusterfs_ppa_use: yes
 glusterfs_ppa_version: "3.5"
-For Ubuntu, specify whether to use the official Gluster PPA, and which version of the PPA to use. See Gluster's [Getting Started Guide](http://www.gluster.org/community/documentation/index.php/Getting_started_install) for more info.
+For Ubuntu, specify whether to use the official Gluster PPA, and which version of the PPA to use. See Gluster's [Getting Started Guide](https://docs.gluster.org/en/latest/Quick-Start-Guide/Quickstart/) for more info.
 ## Dependencies


@@ -3,7 +3,7 @@
 - name: Include OS-specific variables.
   include_vars: "{{ ansible_os_family }}.yml"
-# Instal xfs package
+# Install xfs package
 - name: install xfs Debian
   apt: name=xfsprogs state=present
   when: ansible_os_family == "Debian"
@@ -36,7 +36,7 @@
- "{{ gluster_brick_dir }}" - "{{ gluster_brick_dir }}"
- "{{ gluster_mount_dir }}" - "{{ gluster_mount_dir }}"
- name: Configure Gluster volume. - name: Configure Gluster volume with replicas
gluster_volume: gluster_volume:
state: present state: present
name: "{{ gluster_brick_name }}" name: "{{ gluster_brick_name }}"
@@ -46,6 +46,18 @@
host: "{{ inventory_hostname }}" host: "{{ inventory_hostname }}"
force: yes force: yes
run_once: true run_once: true
when: groups['gfs-cluster']|length > 1
- name: Configure Gluster volume without replicas
gluster_volume:
state: present
name: "{{ gluster_brick_name }}"
brick: "{{ gluster_brick_dir }}"
cluster: "{% for item in groups['gfs-cluster'] -%}{{ hostvars[item]['ip']|default(hostvars[item].ansible_default_ipv4['address']) }}{% if not loop.last %},{% endif %}{%- endfor %}"
host: "{{ inventory_hostname }}"
force: yes
run_once: true
when: groups['gfs-cluster']|length <= 1
- name: Mount glusterfs to retrieve disk size - name: Mount glusterfs to retrieve disk size
mount: mount:


@@ -1,6 +1,8 @@
 ---
 - name: Kubernetes Apps | Lay Down k8s GlusterFS Endpoint and PV
-  template: src={{item.file}} dest={{kube_config_dir}}/{{item.dest}}
+  template:
+    src: "{{ item.file }}"
+    dest: "{{ kube_config_dir }}/{{ item.dest }}"
   with_items:
     - { file: glusterfs-kubernetes-endpoint.json.j2, type: ep, dest: glusterfs-kubernetes-endpoint.json}
     - { file: glusterfs-kubernetes-pv.yml.j2, type: pv, dest: glusterfs-kubernetes-pv.yml}


@@ -14,3 +14,5 @@ ansible-playbook --ask-become -i inventory/sample/k8s_heketi_inventory.yml contr
 ```
 ansible-playbook --ask-become -i inventory/sample/k8s_heketi_inventory.yml contrib/network-storage/heketi/heketi-tear-down.yml
 ```
+Add `--extra-vars "heketi_remove_lvm=true"` to the command above to remove LVM packages from the system


@@ -4,6 +4,7 @@
register: "initial_heketi_state" register: "initial_heketi_state"
changed_when: false changed_when: false
command: "{{ bin_dir }}/kubectl get services,deployments,pods --selector=deploy-heketi --output=json" command: "{{ bin_dir }}/kubectl get services,deployments,pods --selector=deploy-heketi --output=json"
- name: "Bootstrap heketi." - name: "Bootstrap heketi."
when: when:
- "(initial_heketi_state.stdout|from_json|json_query(\"items[?kind=='Service']\"))|length == 0" - "(initial_heketi_state.stdout|from_json|json_query(\"items[?kind=='Service']\"))|length == 0"
@@ -16,15 +17,20 @@
register: "initial_heketi_pod" register: "initial_heketi_pod"
command: "{{ bin_dir }}/kubectl get pods --selector=deploy-heketi=pod,glusterfs=heketi-pod,name=deploy-heketi --output=json" command: "{{ bin_dir }}/kubectl get pods --selector=deploy-heketi=pod,glusterfs=heketi-pod,name=deploy-heketi --output=json"
changed_when: false changed_when: false
- name: "Ensure heketi bootstrap pod is up." - name: "Ensure heketi bootstrap pod is up."
assert: assert:
that: "(initial_heketi_pod.stdout|from_json|json_query('items[*]'))|length == 1" that: "(initial_heketi_pod.stdout|from_json|json_query('items[*]'))|length == 1"
- set_fact:
- name: Store the initial heketi pod name
set_fact:
initial_heketi_pod_name: "{{ initial_heketi_pod.stdout|from_json|json_query(\"items[*].metadata.name|[0]\") }}" initial_heketi_pod_name: "{{ initial_heketi_pod.stdout|from_json|json_query(\"items[*].metadata.name|[0]\") }}"
- name: "Test heketi topology." - name: "Test heketi topology."
changed_when: false changed_when: false
register: "heketi_topology" register: "heketi_topology"
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology info --json" command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology info --json"
- name: "Load heketi topology." - name: "Load heketi topology."
when: "heketi_topology.stdout|from_json|json_query(\"clusters[*].nodes[*]\")|flatten|length == 0" when: "heketi_topology.stdout|from_json|json_query(\"clusters[*].nodes[*]\")|flatten|length == 0"
include_tasks: "bootstrap/topology.yml" include_tasks: "bootstrap/topology.yml"
@@ -42,6 +48,7 @@
command: "{{ bin_dir }}/kubectl get secrets,endpoints,services,jobs --output=json" command: "{{ bin_dir }}/kubectl get secrets,endpoints,services,jobs --output=json"
changed_when: false changed_when: false
register: "heketi_storage_state" register: "heketi_storage_state"
# ensure endpoints actually exist before trying to move database data to it # ensure endpoints actually exist before trying to move database data to it
- name: "Create heketi storage." - name: "Create heketi storage."
include_tasks: "bootstrap/storage.yml" include_tasks: "bootstrap/storage.yml"


@@ -1,11 +1,19 @@
 ---
-- register: "label_present"
+- name: Get storage nodes
+  register: "label_present"
   command: "{{ bin_dir }}/kubectl get node --selector=storagenode=glusterfs,kubernetes.io/hostname={{ node }} --ignore-not-found=true"
   changed_when: false
 - name: "Assign storage label"
   when: "label_present.stdout_lines|length == 0"
   command: "{{ bin_dir }}/kubectl label node {{ node }} storagenode=glusterfs"
-- register: "label_present"
+- name: Get storage nodes again
+  register: "label_present"
   command: "{{ bin_dir }}/kubectl get node --selector=storagenode=glusterfs,kubernetes.io/hostname={{ node }} --ignore-not-found=true"
   changed_when: false
-- assert: { that: "label_present|length > 0", msg: "Node {{ node }} has not been assigned with label storagenode=glusterfs." }
+- name: Ensure the label has been set
+  assert:
+    that: "label_present|length > 0"
+    msg: "Node {{ node }} has not been assigned with label storagenode=glusterfs."


@@ -1,19 +1,24 @@
 ---
 - name: "Kubernetes Apps | Lay Down Heketi"
   become: true
-  template: { src: "heketi-deployment.json.j2", dest: "{{ kube_config_dir }}/heketi-deployment.json" }
+  template:
+    src: "heketi-deployment.json.j2"
+    dest: "{{ kube_config_dir }}/heketi-deployment.json"
   register: "rendering"
 - name: "Kubernetes Apps | Install and configure Heketi"
   kube:
     name: "GlusterFS"
     kubectl: "{{ bin_dir }}/kubectl"
     filename: "{{ kube_config_dir }}/heketi-deployment.json"
     state: "{{ rendering.changed | ternary('latest', 'present') }}"
 - name: "Ensure heketi is up and running."
   changed_when: false
   register: "heketi_state"
   vars:
-    heketi_state: { stdout: "{}" }
+    heketi_state:
+      stdout: "{}"
     pods_query: "items[?kind=='Pod'].status.conditions|[0][?type=='Ready'].status|[0]"
     deployments_query: "items[?kind=='Deployment'].status.conditions|[0][?type=='Available'].status|[0]"
   command: "{{ bin_dir }}/kubectl get deployments,pods --selector=glusterfs --output=json"
@@ -22,5 +27,7 @@
- "heketi_state.stdout|from_json|json_query(deployments_query) == 'True'" - "heketi_state.stdout|from_json|json_query(deployments_query) == 'True'"
retries: 60 retries: 60
delay: 5 delay: 5
- set_fact:
- name: Set the Heketi pod name
set_fact:
heketi_pod_name: "{{ heketi_state.stdout|from_json|json_query(\"items[?kind=='Pod'].metadata.name|[0]\") }}" heketi_pod_name: "{{ heketi_state.stdout|from_json|json_query(\"items[?kind=='Pod'].metadata.name|[0]\") }}"


@@ -1,31 +1,44 @@
 ---
-- register: "clusterrolebinding_state"
+- name: Get clusterrolebindings
+  register: "clusterrolebinding_state"
   command: "{{ bin_dir }}/kubectl get clusterrolebinding heketi-gluster-admin -o=name --ignore-not-found=true"
   changed_when: false
 - name: "Kubernetes Apps | Deploy cluster role binding."
   when: "clusterrolebinding_state.stdout == \"\""
   command: "{{ bin_dir }}/kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=default:heketi-service-account"
-- register: "clusterrolebinding_state"
+- name: Get clusterrolebindings again
+  register: "clusterrolebinding_state"
   command: "{{ bin_dir }}/kubectl get clusterrolebinding heketi-gluster-admin -o=name --ignore-not-found=true"
   changed_when: false
-- assert:
+- name: Make sure that clusterrolebindings are present now
+  assert:
     that: "clusterrolebinding_state.stdout != \"\""
     msg: "Cluster role binding is not present."
-- register: "secret_state"
+- name: Get the heketi-config-secret secret
+  register: "secret_state"
   command: "{{ bin_dir }}/kubectl get secret heketi-config-secret -o=name --ignore-not-found=true"
   changed_when: false
 - name: "Render Heketi secret configuration."
   become: true
   template:
     src: "heketi.json.j2"
     dest: "{{ kube_config_dir }}/heketi.json"
 - name: "Deploy Heketi config secret"
   when: "secret_state.stdout == \"\""
   command: "{{ bin_dir }}/kubectl create secret generic heketi-config-secret --from-file={{ kube_config_dir }}/heketi.json"
-- register: "secret_state"
+- name: Get the heketi-config-secret secret again
+  register: "secret_state"
   command: "{{ bin_dir }}/kubectl get secret heketi-config-secret -o=name --ignore-not-found=true"
   changed_when: false
-- assert:
+- name: Make sure the heketi-config-secret secret exists now
+  assert:
     that: "secret_state.stdout != \"\""
     msg: "Heketi config secret is not present."


@@ -1,6 +1,6 @@
 {
   "kind": "DaemonSet",
-  "apiVersion": "extensions/v1beta1",
+  "apiVersion": "apps/v1",
   "metadata": {
     "name": "glusterfs",
     "labels": {


@@ -30,7 +30,7 @@
   },
   {
     "kind": "Deployment",
-    "apiVersion": "extensions/v1beta1",
+    "apiVersion": "apps/v1",
     "metadata": {
       "name": "deploy-heketi",
       "labels": {
@@ -56,7 +56,7 @@
"serviceAccountName": "heketi-service-account", "serviceAccountName": "heketi-service-account",
"containers": [ "containers": [
{ {
"image": "heketi/heketi:7", "image": "heketi/heketi:9",
"imagePullPolicy": "Always", "imagePullPolicy": "Always",
"name": "deploy-heketi", "name": "deploy-heketi",
"env": [ "env": [


@@ -44,7 +44,7 @@
   },
   {
     "kind": "Deployment",
-    "apiVersion": "extensions/v1beta1",
+    "apiVersion": "apps/v1",
     "metadata": {
       "name": "heketi",
       "labels": {
@@ -68,7 +68,7 @@
"serviceAccountName": "heketi-service-account", "serviceAccountName": "heketi-service-account",
"containers": [ "containers": [
{ {
"image": "heketi/heketi:7", "image": "heketi/heketi:9",
"imagePullPolicy": "Always", "imagePullPolicy": "Always",
"name": "heketi", "name": "heketi",
"env": [ "env": [


@@ -16,7 +16,7 @@
     {
       "addresses": [
         {
-          "ip": "{{ hostvars[node]['ansible_facts']['default_ipv4']['address'] }}"
+          "ip": "{{ hostvars[node].ip }}"
         }
       ],
       "ports": [


@@ -12,7 +12,7 @@
"{{ node }}" "{{ node }}"
], ],
"storage": [ "storage": [
"{{ hostvars[node]['ansible_facts']['default_ipv4']['address'] }}" "{{ hostvars[node].ip }}"
] ]
}, },
"zone": 1 "zone": 1


@@ -0,0 +1,2 @@
+---
+heketi_remove_lvm: false


@@ -14,6 +14,8 @@
when: "ansible_os_family == 'Debian'" when: "ansible_os_family == 'Debian'"
- name: "Get volume group information." - name: "Get volume group information."
environment:
PATH: "{{ ansible_env.PATH }}:/sbin" # Make sure we can workaround RH / CentOS conservative path management
become: true become: true
shell: "pvs {{ disk_volume_device_1 }} --option vg_name | tail -n+2" shell: "pvs {{ disk_volume_device_1 }} --option vg_name | tail -n+2"
register: "volume_groups" register: "volume_groups"
@@ -21,12 +23,16 @@
   changed_when: false
 - name: "Remove volume groups."
+  environment:
+    PATH: "{{ ansible_env.PATH }}:/sbin"  # Make sure we can workaround RH / CentOS conservative path management
   become: true
   command: "vgremove {{ volume_group }} --yes"
   with_items: "{{ volume_groups.stdout_lines }}"
   loop_control: { loop_var: "volume_group" }
 - name: "Remove physical volume from cluster disks."
+  environment:
+    PATH: "{{ ansible_env.PATH }}:/sbin"  # Make sure we can workaround RH / CentOS conservative path management
   become: true
   command: "pvremove {{ disk_volume_device_1 }} --yes"
   ignore_errors: true
@@ -36,11 +42,11 @@
   yum:
     name: "lvm2"
     state: "absent"
-  when: "ansible_os_family == 'RedHat'"
+  when: "ansible_os_family == 'RedHat' and heketi_remove_lvm"
 - name: "Remove lvm utils (Debian)"
   become: true
   apt:
     name: "lvm2"
     state: "absent"
-  when: "ansible_os_family == 'Debian'"
+  when: "ansible_os_family == 'Debian' and heketi_remove_lvm"


@@ -10,7 +10,7 @@ This project will create:
 * AWS ELB in the Public Subnet for accessing the Kubernetes API from the internet
 **Requirements**
-- Terraform 0.8.7 or newer
+- Terraform 0.12.0 or newer
 **How to Use:**


@@ -1,5 +1,5 @@
 terraform {
-  required_version = ">= 0.8.7"
+  required_version = ">= 0.12.0"
 }
 provider "aws" {
@@ -16,7 +16,7 @@ data "aws_availability_zones" "available" {}
 */
 module "aws-vpc" {
-  source = "modules/vpc"
+  source = "./modules/vpc"
   aws_cluster_name = "${var.aws_cluster_name}"
   aws_vpc_cidr_block = "${var.aws_vpc_cidr_block}"
@@ -27,7 +27,7 @@ module "aws-vpc" {
 }
 module "aws-elb" {
-  source = "modules/elb"
+  source = "./modules/elb"
   aws_cluster_name = "${var.aws_cluster_name}"
   aws_vpc_id = "${module.aws-vpc.aws_vpc_id}"
@@ -39,7 +39,7 @@ module "aws-elb" {
 }
 module "aws-iam" {
-  source = "modules/iam"
+  source = "./modules/iam"
   aws_cluster_name = "${var.aws_cluster_name}"
 }
@@ -57,7 +57,7 @@ resource "aws_instance" "bastion-server" {
   availability_zone = "${element(slice(data.aws_availability_zones.available.names, 0, 2), count.index)}"
   subnet_id = "${element(module.aws-vpc.aws_subnet_ids_public, count.index)}"
-  vpc_security_group_ids = ["${module.aws-vpc.aws_security_group}"]
+  vpc_security_group_ids = "${module.aws-vpc.aws_security_group}"
   key_name = "${var.AWS_SSH_KEY_NAME}"
@@ -82,7 +82,7 @@ resource "aws_instance" "k8s-master" {
   availability_zone = "${element(slice(data.aws_availability_zones.available.names, 0, 2), count.index)}"
   subnet_id = "${element(module.aws-vpc.aws_subnet_ids_private, count.index)}"
-  vpc_security_group_ids = ["${module.aws-vpc.aws_security_group}"]
+  vpc_security_group_ids = "${module.aws-vpc.aws_security_group}"
   iam_instance_profile = "${module.aws-iam.kube-master-profile}"
   key_name = "${var.AWS_SSH_KEY_NAME}"
@@ -109,7 +109,7 @@ resource "aws_instance" "k8s-etcd" {
   availability_zone = "${element(slice(data.aws_availability_zones.available.names, 0, 2), count.index)}"
   subnet_id = "${element(module.aws-vpc.aws_subnet_ids_private, count.index)}"
-  vpc_security_group_ids = ["${module.aws-vpc.aws_security_group}"]
+  vpc_security_group_ids = "${module.aws-vpc.aws_security_group}"
   key_name = "${var.AWS_SSH_KEY_NAME}"
@@ -129,7 +129,7 @@ resource "aws_instance" "k8s-worker" {
   availability_zone = "${element(slice(data.aws_availability_zones.available.names, 0, 2), count.index)}"
   subnet_id = "${element(module.aws-vpc.aws_subnet_ids_private, count.index)}"
-  vpc_security_group_ids = ["${module.aws-vpc.aws_security_group}"]
+  vpc_security_group_ids = "${module.aws-vpc.aws_security_group}"
   iam_instance_profile = "${module.aws-iam.kube-worker-profile}"
   key_name = "${var.AWS_SSH_KEY_NAME}"
@@ -148,7 +148,7 @@ resource "aws_instance" "k8s-worker" {
 data "template_file" "inventory" {
   template = "${file("${path.module}/templates/inventory.tpl")}"
-  vars {
+  vars = {
     public_ip_address_bastion = "${join("\n", formatlist("bastion ansible_host=%s", aws_instance.bastion-server.*.public_ip))}"
     connection_strings_master = "${join("\n", formatlist("%s ansible_host=%s", aws_instance.k8s-master.*.tags.Name, aws_instance.k8s-master.*.private_ip))}"
     connection_strings_node = "${join("\n", formatlist("%s ansible_host=%s", aws_instance.k8s-worker.*.tags.Name, aws_instance.k8s-worker.*.private_ip))}"
@@ -165,7 +165,7 @@ resource "null_resource" "inventories" {
     command = "echo '${data.template_file.inventory.rendered}' > ${var.inventory_file}"
   }
-  triggers {
+  triggers = {
     template = "${data.template_file.inventory.rendered}"
   }
 }


@@ -28,7 +28,7 @@ resource "aws_security_group_rule" "aws-allow-api-egress" {
 # Create a new AWS ELB for K8S API
 resource "aws_elb" "aws-elb-api" {
   name = "kubernetes-elb-${var.aws_cluster_name}"
-  subnets = ["${var.aws_subnet_ids_public}"]
+  subnets = var.aws_subnet_ids_public
   security_groups = ["${aws_security_group.aws-elb.id}"]
   listener {


@@ -3,15 +3,15 @@ output "aws_vpc_id" {
 }
 output "aws_subnet_ids_private" {
-  value = ["${aws_subnet.cluster-vpc-subnets-private.*.id}"]
+  value = aws_subnet.cluster-vpc-subnets-private.*.id
 }
 output "aws_subnet_ids_public" {
-  value = ["${aws_subnet.cluster-vpc-subnets-public.*.id}"]
+  value = aws_subnet.cluster-vpc-subnets-public.*.id
 }
 output "aws_security_group" {
-  value = ["${aws_security_group.kubernetes.*.id}"]
+  value = aws_security_group.kubernetes.*.id
 }
 output "default_tags" {


@@ -1,4 +1,5 @@
 .terraform
 *.tfvars
+!sample-inventory\/cluster.tfvars
 *.tfstate
 *.tfstate.backup


@@ -16,14 +16,13 @@ most modern installs of OpenStack that support the basic services.
 - [ELASTX](https://elastx.se/)
 - [EnterCloudSuite](https://www.entercloudsuite.com/)
 - [FugaCloud](https://fuga.cloud/)
+- [Open Telekom Cloud](https://cloud.telekom.de/) : requires to set the variable `wait_for_floatingip = "true"` in your cluster.tfvars
 - [OVH](https://www.ovh.com/)
 - [Rackspace](https://www.rackspace.com/)
 - [Ultimum](https://ultimum.io/)
 - [VexxHost](https://vexxhost.com/)
 - [Zetta](https://www.zetta.io/)
-### Known incompatible public clouds
-- T-Systems / Open Telekom Cloud: requires `wait_until_associated`
 ## Approach
 The terraform configuration inspects variables found in
@@ -70,7 +69,7 @@ binaries available on hyperkube v1.4.3_coreos.0 or higher.
 ## Requirements
-- [Install Terraform](https://www.terraform.io/intro/getting-started/install.html)
+- [Install Terraform](https://www.terraform.io/intro/getting-started/install.html) 0.12 or later
 - [Install Ansible](http://docs.ansible.com/ansible/latest/intro_installation.html)
 - you already have a suitable OS image in Glance
 - you already have a floating IP pool created
@@ -220,12 +219,14 @@ set OS_PROJECT_DOMAIN_NAME=Default
 The construction of the cluster is driven by values found in
 [variables.tf](variables.tf).
-For your cluster, edit `inventory/$CLUSTER/cluster.tf`.
+For your cluster, edit `inventory/$CLUSTER/cluster.tfvars`.
 |Variable | Description |
 |---------|-------------|
 |`cluster_name` | All OpenStack resources will use the Terraform variable`cluster_name` (default`example`) in their name to make it easier to track. For example the first compute resource will be named`example-kubernetes-1`. |
+|`az_list` | List of Availability Zones available in your OpenStack cluster. |
 |`network_name` | The name to be given to the internal network that will be generated |
+|`network_dns_domain` | (Optional) The dns_domain for the internal network that will be generated |
 |`dns_nameservers`| An array of DNS name server names to be used by hosts in the internal subnet. |
 |`floatingip_pool` | Name of the pool from which floating IPs will be allocated |
 |`external_net` | UUID of the external network that will be routed to |
@@ -246,6 +247,13 @@ For your cluster, edit `inventory/$CLUSTER/cluster.tf`.
 |`master_allowed_remote_ips` | List of CIDR blocks allowed to initiate an API connection, `["0.0.0.0/0"]` by default |
 |`k8s_allowed_remote_ips` | List of CIDR allowed to initiate a SSH connection, empty by default |
 |`worker_allowed_ports` | List of ports to open on worker nodes, `[{ "protocol" = "tcp", "port_range_min" = 30000, "port_range_max" = 32767, "remote_ip_prefix" = "0.0.0.0/0"}]` by default |
+|`wait_for_floatingip` | Let Terraform poll the instance until the floating IP has been associated, `false` by default. |
+|`node_root_volume_size_in_gb` | Size of the root volume for nodes, 0 to use ephemeral storage |
+|`master_root_volume_size_in_gb` | Size of the root volume for masters, 0 to use ephemeral storage |
+|`gfs_root_volume_size_in_gb` | Size of the root volume for gluster, 0 to use ephemeral storage |
+|`etcd_root_volume_size_in_gb` | Size of the root volume for etcd nodes, 0 to use ephemeral storage |
+|`bastion_root_volume_size_in_gb` | Size of the root volume for bastions, 0 to use ephemeral storage |
+|`use_server_group` | Create and use openstack nova servergroups, default: false |
 #### Terraform state files
@@ -276,7 +284,7 @@ This should finish fairly quickly telling you Terraform has successfully initial
 You can apply the Terraform configuration to your cluster with the following command
 issued from your cluster's inventory directory (`inventory/$CLUSTER`):
 ```ShellSession
-$ terraform apply -var-file=cluster.tf ../../contrib/terraform/openstack
+$ terraform apply -var-file=cluster.tfvars ../../contrib/terraform/openstack
 ```
 if you chose to create a bastion host, this script will create
@@ -290,7 +298,7 @@ pick it up automatically.
 You can destroy your new cluster with the following command issued from the cluster's inventory directory:
 ```ShellSession
-$ terraform destroy -var-file=cluster.tf ../../contrib/terraform/openstack
+$ terraform destroy -var-file=cluster.tfvars ../../contrib/terraform/openstack
 ```
 If you've started the Ansible run, it may also be a good idea to do some manual cleanup:
@@ -325,6 +333,30 @@ $ ssh-add ~/.ssh/id_rsa
 If you have deployed and destroyed a previous iteration of your cluster, you will need to clear out any stale keys from your SSH "known hosts" file ( `~/.ssh/known_hosts`).
+#### Metadata variables
+The [python script](../terraform.py) that reads the
+generated`.tfstate` file to generate a dynamic inventory recognizes
+some variables within a "metadata" block, defined in a "resource"
+block (example):
+```
+resource "openstack_compute_instance_v2" "example" {
+    ...
+    metadata {
+        ssh_user = "ubuntu"
+        prefer_ipv6 = true
+        python_bin = "/usr/bin/python3"
+    }
+    ...
+}
+```
+As the example shows, these let you define the SSH username for
+Ansible, a Python binary which is needed by Ansible if
+`/usr/bin/python` doesn't exist, and whether the IPv6 address of the
+instance should be preferred over IPv4.
 #### Bastion host
 Bastion access will be determined by:
@@ -391,6 +423,14 @@ kube_network_plugin: flannel
# For Container Linux by CoreOS: # For Container Linux by CoreOS:
resolvconf_mode: host_resolvconf resolvconf_mode: host_resolvconf
``` ```
- Set the maximum number of attached Cinder volumes per host (default 256)
```
node_volume_attach_limit: 26
```
- Disable access_ip; this forces all internal cluster traffic to be sent over the local network when a floating IP is attached (the default value is 1)
```
use_access_ip: 0
```
### Deploy Kubernetes ### Deploy Kubernetes
View File
@@ -3,18 +3,19 @@ provider "openstack" {
} }
module "network" { module "network" {
source = "modules/network" source = "./modules/network"
external_net = "${var.external_net}" external_net = "${var.external_net}"
network_name = "${var.network_name}" network_name = "${var.network_name}"
subnet_cidr = "${var.subnet_cidr}" subnet_cidr = "${var.subnet_cidr}"
cluster_name = "${var.cluster_name}" cluster_name = "${var.cluster_name}"
dns_nameservers = "${var.dns_nameservers}" dns_nameservers = "${var.dns_nameservers}"
network_dns_domain = "${var.network_dns_domain}"
use_neutron = "${var.use_neutron}" use_neutron = "${var.use_neutron}"
} }
module "ips" { module "ips" {
source = "modules/ips" source = "./modules/ips"
number_of_k8s_masters = "${var.number_of_k8s_masters}" number_of_k8s_masters = "${var.number_of_k8s_masters}"
number_of_k8s_masters_no_etcd = "${var.number_of_k8s_masters_no_etcd}" number_of_k8s_masters_no_etcd = "${var.number_of_k8s_masters_no_etcd}"
@@ -27,7 +28,7 @@ module "ips" {
} }
module "compute" { module "compute" {
source = "modules/compute" source = "./modules/compute"
cluster_name = "${var.cluster_name}" cluster_name = "${var.cluster_name}"
az_list = "${var.az_list}" az_list = "${var.az_list}"
@@ -40,6 +41,11 @@ module "compute" {
number_of_bastions = "${var.number_of_bastions}" number_of_bastions = "${var.number_of_bastions}"
number_of_k8s_nodes_no_floating_ip = "${var.number_of_k8s_nodes_no_floating_ip}" number_of_k8s_nodes_no_floating_ip = "${var.number_of_k8s_nodes_no_floating_ip}"
number_of_gfs_nodes_no_floating_ip = "${var.number_of_gfs_nodes_no_floating_ip}" number_of_gfs_nodes_no_floating_ip = "${var.number_of_gfs_nodes_no_floating_ip}"
bastion_root_volume_size_in_gb = "${var.bastion_root_volume_size_in_gb}"
etcd_root_volume_size_in_gb = "${var.etcd_root_volume_size_in_gb}"
master_root_volume_size_in_gb = "${var.master_root_volume_size_in_gb}"
node_root_volume_size_in_gb = "${var.node_root_volume_size_in_gb}"
gfs_root_volume_size_in_gb = "${var.gfs_root_volume_size_in_gb}"
gfs_volume_size_in_gb = "${var.gfs_volume_size_in_gb}" gfs_volume_size_in_gb = "${var.gfs_volume_size_in_gb}"
public_key_path = "${var.public_key_path}" public_key_path = "${var.public_key_path}"
image = "${var.image}" image = "${var.image}"
@@ -63,6 +69,9 @@ module "compute" {
supplementary_master_groups = "${var.supplementary_master_groups}" supplementary_master_groups = "${var.supplementary_master_groups}"
supplementary_node_groups = "${var.supplementary_node_groups}" supplementary_node_groups = "${var.supplementary_node_groups}"
worker_allowed_ports = "${var.worker_allowed_ports}" worker_allowed_ports = "${var.worker_allowed_ports}"
wait_for_floatingip = "${var.wait_for_floatingip}"
use_access_ip = "${var.use_access_ip}"
use_server_groups = "${var.use_server_groups}"
network_id = "${module.network.router_id}" network_id = "${module.network.router_id}"
} }
View File
@@ -1,3 +1,11 @@
data "openstack_images_image_v2" "vm_image" {
name = "${var.image}"
}
data "openstack_images_image_v2" "gfs_image" {
name = "${var.image_gfs == "" ? var.image : var.image_gfs}"
}
resource "openstack_compute_keypair_v2" "k8s" { resource "openstack_compute_keypair_v2" "k8s" {
name = "kubernetes-${var.cluster_name}" name = "kubernetes-${var.cluster_name}"
public_key = "${chomp(file(var.public_key_path))}" public_key = "${chomp(file(var.public_key_path))}"
@@ -22,20 +30,20 @@ resource "openstack_networking_secgroup_rule_v2" "k8s_master" {
resource "openstack_networking_secgroup_v2" "bastion" { resource "openstack_networking_secgroup_v2" "bastion" {
name = "${var.cluster_name}-bastion" name = "${var.cluster_name}-bastion"
count = "${var.number_of_bastions ? 1 : 0}" count = "${var.number_of_bastions != "" ? 1 : 0}"
description = "${var.cluster_name} - Bastion Server" description = "${var.cluster_name} - Bastion Server"
delete_default_rules = true delete_default_rules = true
} }
resource "openstack_networking_secgroup_rule_v2" "bastion" { resource "openstack_networking_secgroup_rule_v2" "bastion" {
count = "${var.number_of_bastions ? length(var.bastion_allowed_remote_ips) : 0}" count = "${var.number_of_bastions != "" ? length(var.bastion_allowed_remote_ips) : 0}"
direction = "ingress" direction = "ingress"
ethertype = "IPv4" ethertype = "IPv4"
protocol = "tcp" protocol = "tcp"
port_range_min = "22" port_range_min = "22"
port_range_max = "22" port_range_max = "22"
remote_ip_prefix = "${var.bastion_allowed_remote_ips[count.index]}" remote_ip_prefix = "${var.bastion_allowed_remote_ips[count.index]}"
security_group_id = "${openstack_networking_secgroup_v2.bastion.id}" security_group_id = "${openstack_networking_secgroup_v2.bastion[count.index].id}"
} }
resource "openstack_networking_secgroup_v2" "k8s" { resource "openstack_networking_secgroup_v2" "k8s" {
@@ -87,9 +95,27 @@ resource "openstack_networking_secgroup_rule_v2" "worker" {
security_group_id = "${openstack_networking_secgroup_v2.worker.id}" security_group_id = "${openstack_networking_secgroup_v2.worker.id}"
} }
resource "openstack_compute_servergroup_v2" "k8s_master" {
count = "%{ if var.use_server_groups }1%{else}0%{endif}"
name = "k8s-master-srvgrp"
policies = ["anti-affinity"]
}
resource "openstack_compute_servergroup_v2" "k8s_node" {
count = "%{ if var.use_server_groups }1%{else}0%{endif}"
name = "k8s-node-srvgrp"
policies = ["anti-affinity"]
}
resource "openstack_compute_servergroup_v2" "k8s_etcd" {
count = "%{ if var.use_server_groups }1%{else}0%{endif}"
name = "k8s-etcd-srvgrp"
policies = ["anti-affinity"]
}
resource "openstack_compute_instance_v2" "bastion" { resource "openstack_compute_instance_v2" "bastion" {
name = "${var.cluster_name}-bastion-${count.index+1}" name = "${var.cluster_name}-bastion-${count.index+1}"
count = "${var.number_of_bastions}" count = "${var.bastion_root_volume_size_in_gb == 0 ? var.number_of_bastions : 0}"
image_name = "${var.image}" image_name = "${var.image}"
flavor_id = "${var.flavor_bastion}" flavor_id = "${var.flavor_bastion}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}" key_pair = "${openstack_compute_keypair_v2.k8s.name}"
@@ -99,23 +125,60 @@ resource "openstack_compute_instance_v2" "bastion" {
} }
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}", security_groups = ["${openstack_networking_secgroup_v2.k8s.name}",
"${openstack_networking_secgroup_v2.bastion.name}", "${element(openstack_networking_secgroup_v2.bastion.*.name, count.index)}",
] ]
metadata = { metadata = {
ssh_user = "${var.ssh_user}" ssh_user = "${var.ssh_user}"
kubespray_groups = "bastion" kubespray_groups = "bastion"
depends_on = "${var.network_id}" depends_on = "${var.network_id}"
use_access_ip = "${var.use_access_ip}"
} }
provisioner "local-exec" { provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${var.bastion_fips[0]}/ > contrib/terraform/group_vars/no-floating.yml" command = "sed s/USER/${var.ssh_user}/ ../../contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${var.bastion_fips[0]}/ > group_vars/no-floating.yml"
}
}
resource "openstack_compute_instance_v2" "bastion_custom_volume_size" {
name = "${var.cluster_name}-bastion-${count.index+1}"
count = "${var.bastion_root_volume_size_in_gb > 0 ? var.number_of_bastions : 0}"
image_name = "${var.image}"
flavor_id = "${var.flavor_bastion}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
block_device {
uuid = "${data.openstack_images_image_v2.vm_image.id}"
source_type = "image"
volume_size = "${var.bastion_root_volume_size_in_gb}"
boot_index = 0
destination_type = "volume"
delete_on_termination = true
}
network {
name = "${var.network_name}"
}
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}",
"${element(openstack_networking_secgroup_v2.bastion.*.name, count.index)}",
]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "bastion"
depends_on = "${var.network_id}"
use_access_ip = "${var.use_access_ip}"
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ ../../contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${var.bastion_fips[0]}/ > group_vars/no-floating.yml"
} }
} }
resource "openstack_compute_instance_v2" "k8s_master" { resource "openstack_compute_instance_v2" "k8s_master" {
name = "${var.cluster_name}-k8s-master-${count.index+1}" name = "${var.cluster_name}-k8s-master-${count.index+1}"
count = "${var.number_of_k8s_masters}" count = "${var.master_root_volume_size_in_gb == 0 ? var.number_of_k8s_masters : 0}"
availability_zone = "${element(var.az_list, count.index)}" availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}" image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}" flavor_id = "${var.flavor_k8s_master}"
@@ -129,20 +192,72 @@ resource "openstack_compute_instance_v2" "k8s_master" {
"${openstack_networking_secgroup_v2.k8s.name}", "${openstack_networking_secgroup_v2.k8s.name}",
] ]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
content {
group = "${openstack_compute_servergroup_v2.k8s_master[0].id}"
}
}
metadata = { metadata = {
ssh_user = "${var.ssh_user}" ssh_user = "${var.ssh_user}"
kubespray_groups = "etcd,kube-master,${var.supplementary_master_groups},k8s-cluster,vault" kubespray_groups = "etcd,kube-master,${var.supplementary_master_groups},k8s-cluster,vault"
depends_on = "${var.network_id}" depends_on = "${var.network_id}"
use_access_ip = "${var.use_access_ip}"
} }
provisioner "local-exec" { provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element( concat(var.bastion_fips, var.k8s_master_fips), 0)}/ > contrib/terraform/group_vars/no-floating.yml" command = "sed s/USER/${var.ssh_user}/ ../../contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element( concat(var.bastion_fips, var.k8s_master_fips), 0)}/ > group_vars/no-floating.yml"
}
}
resource "openstack_compute_instance_v2" "k8s_master_custom_volume_size" {
name = "${var.cluster_name}-k8s-master-${count.index+1}"
count = "${var.master_root_volume_size_in_gb > 0 ? var.number_of_k8s_masters : 0}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
block_device {
uuid = "${data.openstack_images_image_v2.vm_image.id}"
source_type = "image"
volume_size = "${var.master_root_volume_size_in_gb}"
boot_index = 0
destination_type = "volume"
delete_on_termination = true
}
network {
name = "${var.network_name}"
}
security_groups = ["${openstack_networking_secgroup_v2.k8s_master.name}",
"${openstack_networking_secgroup_v2.k8s.name}",
]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
content {
group = "${openstack_compute_servergroup_v2.k8s_master[0].id}"
}
}
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "etcd,kube-master,${var.supplementary_master_groups},k8s-cluster,vault"
depends_on = "${var.network_id}"
use_access_ip = "${var.use_access_ip}"
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ ../../contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element( concat(var.bastion_fips, var.k8s_master_fips), 0)}/ > group_vars/no-floating.yml"
} }
} }
resource "openstack_compute_instance_v2" "k8s_master_no_etcd" { resource "openstack_compute_instance_v2" "k8s_master_no_etcd" {
name = "${var.cluster_name}-k8s-master-ne-${count.index+1}" name = "${var.cluster_name}-k8s-master-ne-${count.index+1}"
count = "${var.number_of_k8s_masters_no_etcd}" count = "${var.master_root_volume_size_in_gb == 0 ? var.number_of_k8s_masters_no_etcd : 0}"
availability_zone = "${element(var.az_list, count.index)}" availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}" image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}" flavor_id = "${var.flavor_k8s_master}"
@@ -156,20 +271,72 @@ resource "openstack_compute_instance_v2" "k8s_master_no_etcd" {
"${openstack_networking_secgroup_v2.k8s.name}", "${openstack_networking_secgroup_v2.k8s.name}",
] ]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
content {
group = "${openstack_compute_servergroup_v2.k8s_master[0].id}"
}
}
metadata = { metadata = {
ssh_user = "${var.ssh_user}" ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-master,${var.supplementary_master_groups},k8s-cluster,vault" kubespray_groups = "kube-master,${var.supplementary_master_groups},k8s-cluster,vault"
depends_on = "${var.network_id}" depends_on = "${var.network_id}"
use_access_ip = "${var.use_access_ip}"
} }
provisioner "local-exec" { provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element( concat(var.bastion_fips, var.k8s_master_fips), 0)}/ > contrib/terraform/group_vars/no-floating.yml" command = "sed s/USER/${var.ssh_user}/ ../../contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element( concat(var.bastion_fips, var.k8s_master_fips), 0)}/ > group_vars/no-floating.yml"
}
}
resource "openstack_compute_instance_v2" "k8s_master_no_etcd_custom_volume_size" {
name = "${var.cluster_name}-k8s-master-ne-${count.index+1}"
count = "${var.master_root_volume_size_in_gb > 0 ? var.number_of_k8s_masters_no_etcd : 0}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
block_device {
uuid = "${data.openstack_images_image_v2.vm_image.id}"
source_type = "image"
volume_size = "${var.master_root_volume_size_in_gb}"
boot_index = 0
destination_type = "volume"
delete_on_termination = true
}
network {
name = "${var.network_name}"
}
security_groups = ["${openstack_networking_secgroup_v2.k8s_master.name}",
"${openstack_networking_secgroup_v2.k8s.name}",
]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
content {
group = "${openstack_compute_servergroup_v2.k8s_master[0].id}"
}
}
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-master,${var.supplementary_master_groups},k8s-cluster,vault"
depends_on = "${var.network_id}"
use_access_ip = "${var.use_access_ip}"
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ ../../contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element( concat(var.bastion_fips, var.k8s_master_fips), 0)}/ > group_vars/no-floating.yml"
} }
} }
resource "openstack_compute_instance_v2" "etcd" { resource "openstack_compute_instance_v2" "etcd" {
name = "${var.cluster_name}-etcd-${count.index+1}" name = "${var.cluster_name}-etcd-${count.index+1}"
count = "${var.number_of_etcd}" count = "${var.etcd_root_volume_size_in_gb == 0 ? var.number_of_etcd : 0}"
availability_zone = "${element(var.az_list, count.index)}" availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}" image_name = "${var.image}"
flavor_id = "${var.flavor_etcd}" flavor_id = "${var.flavor_etcd}"
@@ -181,16 +348,62 @@ resource "openstack_compute_instance_v2" "etcd" {
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}"] security_groups = ["${openstack_networking_secgroup_v2.k8s.name}"]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_etcd[0]] : []
content {
group = "${openstack_compute_servergroup_v2.k8s_etcd[0].id}"
}
}
metadata = { metadata = {
ssh_user = "${var.ssh_user}" ssh_user = "${var.ssh_user}"
kubespray_groups = "etcd,vault,no-floating" kubespray_groups = "etcd,vault,no-floating"
depends_on = "${var.network_id}" depends_on = "${var.network_id}"
use_access_ip = "${var.use_access_ip}"
}
}
resource "openstack_compute_instance_v2" "etcd_custom_volume_size" {
name = "${var.cluster_name}-etcd-${count.index+1}"
count = "${var.etcd_root_volume_size_in_gb > 0 ? var.number_of_etcd : 0}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_etcd}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
block_device {
uuid = "${data.openstack_images_image_v2.vm_image.id}"
source_type = "image"
volume_size = "${var.etcd_root_volume_size_in_gb}"
boot_index = 0
destination_type = "volume"
delete_on_termination = true
}
network {
name = "${var.network_name}"
}
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}"]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_etcd[0]] : []
content {
group = "${openstack_compute_servergroup_v2.k8s_etcd[0].id}"
}
}
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "etcd,vault,no-floating"
depends_on = "${var.network_id}"
use_access_ip = "${var.use_access_ip}"
} }
} }
resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" { resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
name = "${var.cluster_name}-k8s-master-nf-${count.index+1}" name = "${var.cluster_name}-k8s-master-nf-${count.index+1}"
count = "${var.number_of_k8s_masters_no_floating_ip}" count = "${var.master_root_volume_size_in_gb == 0 ? var.number_of_k8s_masters_no_floating_ip : 0}"
availability_zone = "${element(var.az_list, count.index)}" availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}" image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}" flavor_id = "${var.flavor_k8s_master}"
@@ -204,16 +417,64 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
"${openstack_networking_secgroup_v2.k8s.name}", "${openstack_networking_secgroup_v2.k8s.name}",
] ]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
content {
group = "${openstack_compute_servergroup_v2.k8s_master[0].id}"
}
}
metadata = { metadata = {
ssh_user = "${var.ssh_user}" ssh_user = "${var.ssh_user}"
kubespray_groups = "etcd,kube-master,${var.supplementary_master_groups},k8s-cluster,vault,no-floating" kubespray_groups = "etcd,kube-master,${var.supplementary_master_groups},k8s-cluster,vault,no-floating"
depends_on = "${var.network_id}" depends_on = "${var.network_id}"
use_access_ip = "${var.use_access_ip}"
}
}
resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_custom_volume_size" {
name = "${var.cluster_name}-k8s-master-nf-${count.index+1}"
count = "${var.master_root_volume_size_in_gb > 0 ? var.number_of_k8s_masters_no_floating_ip : 0}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
block_device {
uuid = "${data.openstack_images_image_v2.vm_image.id}"
source_type = "image"
volume_size = "${var.master_root_volume_size_in_gb}"
boot_index = 0
destination_type = "volume"
delete_on_termination = true
}
network {
name = "${var.network_name}"
}
security_groups = ["${openstack_networking_secgroup_v2.k8s_master.name}",
"${openstack_networking_secgroup_v2.k8s.name}",
]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
content {
group = "${openstack_compute_servergroup_v2.k8s_master[0].id}"
}
}
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "etcd,kube-master,${var.supplementary_master_groups},k8s-cluster,vault,no-floating"
depends_on = "${var.network_id}"
use_access_ip = "${var.use_access_ip}"
} }
} }
resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" { resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" {
name = "${var.cluster_name}-k8s-master-ne-nf-${count.index+1}" name = "${var.cluster_name}-k8s-master-ne-nf-${count.index+1}"
count = "${var.number_of_k8s_masters_no_floating_ip_no_etcd}" count = "${var.master_root_volume_size_in_gb == 0 ? var.number_of_k8s_masters_no_floating_ip_no_etcd : 0}"
availability_zone = "${element(var.az_list, count.index)}" availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}" image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}" flavor_id = "${var.flavor_k8s_master}"
@@ -227,16 +488,64 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" {
"${openstack_networking_secgroup_v2.k8s.name}", "${openstack_networking_secgroup_v2.k8s.name}",
] ]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
content {
group = "${openstack_compute_servergroup_v2.k8s_master[0].id}"
}
}
metadata = { metadata = {
ssh_user = "${var.ssh_user}" ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-master,${var.supplementary_master_groups},k8s-cluster,vault,no-floating" kubespray_groups = "kube-master,${var.supplementary_master_groups},k8s-cluster,vault,no-floating"
depends_on = "${var.network_id}" depends_on = "${var.network_id}"
use_access_ip = "${var.use_access_ip}"
}
}
resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd_custom_volume_size" {
name = "${var.cluster_name}-k8s-master-ne-nf-${count.index+1}"
count = "${var.master_root_volume_size_in_gb > 0 ? var.number_of_k8s_masters_no_floating_ip_no_etcd : 0}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
block_device {
uuid = "${data.openstack_images_image_v2.vm_image.id}"
source_type = "image"
volume_size = "${var.master_root_volume_size_in_gb}"
boot_index = 0
destination_type = "volume"
delete_on_termination = true
}
network {
name = "${var.network_name}"
}
security_groups = ["${openstack_networking_secgroup_v2.k8s_master.name}",
"${openstack_networking_secgroup_v2.k8s.name}",
]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
content {
group = "${openstack_compute_servergroup_v2.k8s_master[0].id}"
}
}
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-master,${var.supplementary_master_groups},k8s-cluster,vault,no-floating"
depends_on = "${var.network_id}"
use_access_ip = "${var.use_access_ip}"
} }
} }
resource "openstack_compute_instance_v2" "k8s_node" { resource "openstack_compute_instance_v2" "k8s_node" {
name = "${var.cluster_name}-k8s-node-${count.index+1}" name = "${var.cluster_name}-k8s-node-${count.index+1}"
count = "${var.number_of_k8s_nodes}" count = "${var.node_root_volume_size_in_gb == 0 ? var.number_of_k8s_nodes : 0}"
availability_zone = "${element(var.az_list, count.index)}" availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}" image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_node}" flavor_id = "${var.flavor_k8s_node}"
@@ -250,20 +559,72 @@ resource "openstack_compute_instance_v2" "k8s_node" {
"${openstack_networking_secgroup_v2.worker.name}", "${openstack_networking_secgroup_v2.worker.name}",
] ]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_node[0]] : []
content {
group = "${openstack_compute_servergroup_v2.k8s_node[0].id}"
}
}
metadata = { metadata = {
ssh_user = "${var.ssh_user}" ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-node,k8s-cluster,${var.supplementary_node_groups}" kubespray_groups = "kube-node,k8s-cluster,${var.supplementary_node_groups}"
depends_on = "${var.network_id}" depends_on = "${var.network_id}"
use_access_ip = "${var.use_access_ip}"
} }
provisioner "local-exec" { provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element( concat(var.bastion_fips, var.k8s_node_fips), 0)}/ > contrib/terraform/group_vars/no-floating.yml" command = "sed s/USER/${var.ssh_user}/ ../../contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element( concat(var.bastion_fips, var.k8s_node_fips), 0)}/ > group_vars/no-floating.yml"
}
}
resource "openstack_compute_instance_v2" "k8s_node_custom_volume_size" {
name = "${var.cluster_name}-k8s-node-${count.index+1}"
count = "${var.node_root_volume_size_in_gb > 0 ? var.number_of_k8s_nodes : 0}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_node}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
block_device {
uuid = "${data.openstack_images_image_v2.vm_image.id}"
source_type = "image"
volume_size = "${var.node_root_volume_size_in_gb}"
boot_index = 0
destination_type = "volume"
delete_on_termination = true
}
network {
name = "${var.network_name}"
}
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}",
"${openstack_networking_secgroup_v2.worker.name}",
]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_node[0]] : []
content {
group = "${openstack_compute_servergroup_v2.k8s_node[0].id}"
}
}
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-node,k8s-cluster,${var.supplementary_node_groups}"
depends_on = "${var.network_id}"
use_access_ip = "${var.use_access_ip}"
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ ../../contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element( concat(var.bastion_fips, var.k8s_node_fips), 0)}/ > group_vars/no-floating.yml"
} }
} }
resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" { resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
name = "${var.cluster_name}-k8s-node-nf-${count.index+1}" name = "${var.cluster_name}-k8s-node-nf-${count.index+1}"
count = "${var.number_of_k8s_nodes_no_floating_ip}" count = "${var.node_root_volume_size_in_gb == 0 ? var.number_of_k8s_nodes_no_floating_ip : 0}"
availability_zone = "${element(var.az_list, count.index)}" availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}" image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_node}" flavor_id = "${var.flavor_k8s_node}"
@@ -277,47 +638,132 @@ resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
"${openstack_networking_secgroup_v2.worker.name}", "${openstack_networking_secgroup_v2.worker.name}",
] ]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_node[0]] : []
content {
group = "${openstack_compute_servergroup_v2.k8s_node[0].id}"
}
}
metadata = { metadata = {
ssh_user = "${var.ssh_user}" ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-node,k8s-cluster,no-floating,${var.supplementary_node_groups}" kubespray_groups = "kube-node,k8s-cluster,no-floating,${var.supplementary_node_groups}"
depends_on = "${var.network_id}" depends_on = "${var.network_id}"
use_access_ip = "${var.use_access_ip}"
}
}
resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip_custom_volume_size" {
name = "${var.cluster_name}-k8s-node-nf-${count.index+1}"
count = "${var.node_root_volume_size_in_gb > 0 ? var.number_of_k8s_nodes_no_floating_ip : 0}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_node}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
block_device {
uuid = "${data.openstack_images_image_v2.vm_image.id}"
source_type = "image"
volume_size = "${var.node_root_volume_size_in_gb}"
boot_index = 0
destination_type = "volume"
delete_on_termination = true
}
network {
name = "${var.network_name}"
}
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}",
"${openstack_networking_secgroup_v2.worker.name}",
]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_node[0]] : []
content {
group = "${openstack_compute_servergroup_v2.k8s_node[0].id}"
}
}
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-node,k8s-cluster,no-floating,${var.supplementary_node_groups}"
depends_on = "${var.network_id}"
use_access_ip = "${var.use_access_ip}"
} }
} }
resource "openstack_compute_floatingip_associate_v2" "bastion" { resource "openstack_compute_floatingip_associate_v2" "bastion" {
count = "${var.number_of_bastions}" count = "${var.bastion_root_volume_size_in_gb == 0 ? var.number_of_bastions : 0}"
floating_ip = "${var.bastion_fips[count.index]}" floating_ip = "${var.bastion_fips[count.index]}"
instance_id = "${element(openstack_compute_instance_v2.bastion.*.id, count.index)}" instance_id = "${element(openstack_compute_instance_v2.bastion.*.id, count.index)}"
wait_until_associated = "${var.wait_for_floatingip}"
}
resource "openstack_compute_floatingip_associate_v2" "bastion_custom_volume_size" {
count = "${var.bastion_root_volume_size_in_gb > 0 ? var.number_of_bastions : 0}"
floating_ip = "${var.bastion_fips[count.index]}"
instance_id = "${element(openstack_compute_instance_v2.bastion_custom_volume_size.*.id, count.index)}"
wait_until_associated = "${var.wait_for_floatingip}"
} }
resource "openstack_compute_floatingip_associate_v2" "k8s_master" { resource "openstack_compute_floatingip_associate_v2" "k8s_master" {
count = "${var.number_of_k8s_masters}" count = "${var.master_root_volume_size_in_gb == 0 ? var.number_of_k8s_masters : 0}"
instance_id = "${element(openstack_compute_instance_v2.k8s_master.*.id, count.index)}" instance_id = "${element(openstack_compute_instance_v2.k8s_master.*.id, count.index)}"
floating_ip = "${var.k8s_master_fips[count.index]}" floating_ip = "${var.k8s_master_fips[count.index]}"
wait_until_associated = "${var.wait_for_floatingip}"
}
resource "openstack_compute_floatingip_associate_v2" "k8s_master_custom_volume_size" {
count = "${var.master_root_volume_size_in_gb > 0 ? var.number_of_k8s_masters : 0}"
instance_id = "${element(openstack_compute_instance_v2.k8s_master_custom_volume_size.*.id, count.index)}"
floating_ip = "${var.k8s_master_fips[count.index]}"
wait_until_associated = "${var.wait_for_floatingip}"
} }
resource "openstack_compute_floatingip_associate_v2" "k8s_master_no_etcd" { resource "openstack_compute_floatingip_associate_v2" "k8s_master_no_etcd" {
count = "${var.number_of_k8s_masters_no_etcd}" count = "${var.master_root_volume_size_in_gb == 0 ? var.number_of_k8s_masters_no_etcd : 0}"
instance_id = "${element(openstack_compute_instance_v2.k8s_master_no_etcd.*.id, count.index)}" instance_id = "${element(openstack_compute_instance_v2.k8s_master_no_etcd.*.id, count.index)}"
floating_ip = "${var.k8s_master_no_etcd_fips[count.index]}" floating_ip = "${var.k8s_master_no_etcd_fips[count.index]}"
} }
resource "openstack_compute_floatingip_associate_v2" "k8s_master_no_etcd_custom_volume_size" {
count = "${var.master_root_volume_size_in_gb > 0 ? var.number_of_k8s_masters_no_etcd : 0}"
instance_id = "${element(openstack_compute_instance_v2.k8s_master_no_etcd_custom_volume_size.*.id, count.index)}"
floating_ip = "${var.k8s_master_no_etcd_fips[count.index]}"
}
resource "openstack_compute_floatingip_associate_v2" "k8s_node" { resource "openstack_compute_floatingip_associate_v2" "k8s_node" {
count = "${var.number_of_k8s_nodes}" count = "${var.node_root_volume_size_in_gb == 0 ? var.number_of_k8s_nodes : 0}"
floating_ip = "${var.k8s_node_fips[count.index]}" floating_ip = "${var.k8s_node_fips[count.index]}"
instance_id = "${element(openstack_compute_instance_v2.k8s_node.*.id, count.index)}" instance_id = "${element(openstack_compute_instance_v2.k8s_node.*.id, count.index)}"
wait_until_associated = "${var.wait_for_floatingip}"
}
resource "openstack_compute_floatingip_associate_v2" "k8s_node_custom_volume_size" {
count = "${var.node_root_volume_size_in_gb > 0 ? var.number_of_k8s_nodes : 0}"
floating_ip = "${var.k8s_node_fips[count.index]}"
instance_id = "${element(openstack_compute_instance_v2.k8s_node_custom_volume_size.*.id, count.index)}"
wait_until_associated = "${var.wait_for_floatingip}"
} }
resource "openstack_blockstorage_volume_v2" "glusterfs_volume" { resource "openstack_blockstorage_volume_v2" "glusterfs_volume" {
name = "${var.cluster_name}-glusterfs_volume-${count.index+1}" name = "${var.cluster_name}-glusterfs_volume-${count.index+1}"
count = "${var.number_of_gfs_nodes_no_floating_ip}" count = "${var.gfs_root_volume_size_in_gb == 0 ? var.number_of_gfs_nodes_no_floating_ip : 0}"
description = "Non-ephemeral volume for GlusterFS"
size = "${var.gfs_volume_size_in_gb}"
}
resource "openstack_blockstorage_volume_v2" "glusterfs_volume_custom_volume_size" {
name = "${var.cluster_name}-glusterfs_volume-${count.index+1}"
count = "${var.gfs_root_volume_size_in_gb > 0 ? var.number_of_gfs_nodes_no_floating_ip : 0}"
description = "Non-ephemeral volume for GlusterFS" description = "Non-ephemeral volume for GlusterFS"
size = "${var.gfs_volume_size_in_gb}" size = "${var.gfs_volume_size_in_gb}"
} }
resource "openstack_compute_instance_v2" "glusterfs_node_no_floating_ip" { resource "openstack_compute_instance_v2" "glusterfs_node_no_floating_ip" {
name = "${var.cluster_name}-gfs-node-nf-${count.index+1}" name = "${var.cluster_name}-gfs-node-nf-${count.index+1}"
count = "${var.number_of_gfs_nodes_no_floating_ip}" count = "${var.gfs_root_volume_size_in_gb == 0 ? var.number_of_gfs_nodes_no_floating_ip : 0}"
availability_zone = "${element(var.az_list, count.index)}" availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image_gfs}" image_name = "${var.image_gfs}"
flavor_id = "${var.flavor_gfs_node}" flavor_id = "${var.flavor_gfs_node}"
@@ -329,15 +775,67 @@ resource "openstack_compute_instance_v2" "glusterfs_node_no_floating_ip" {
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}"] security_groups = ["${openstack_networking_secgroup_v2.k8s.name}"]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_node[0]] : []
content {
group = "${openstack_compute_servergroup_v2.k8s_node[0].id}"
}
}
metadata = { metadata = {
ssh_user = "${var.ssh_user_gfs}" ssh_user = "${var.ssh_user_gfs}"
kubespray_groups = "gfs-cluster,network-storage,no-floating" kubespray_groups = "gfs-cluster,network-storage,no-floating"
depends_on = "${var.network_id}" depends_on = "${var.network_id}"
use_access_ip = "${var.use_access_ip}"
}
}
resource "openstack_compute_instance_v2" "glusterfs_node_no_floating_ip_custom_volume_size" {
name = "${var.cluster_name}-gfs-node-nf-${count.index+1}"
count = "${var.gfs_root_volume_size_in_gb > 0 ? var.number_of_gfs_nodes_no_floating_ip : 0}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_gfs_node}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
block_device {
uuid = "${data.openstack_images_image_v2.gfs_image.id}"
source_type = "image"
volume_size = "${var.gfs_root_volume_size_in_gb}"
boot_index = 0
destination_type = "volume"
delete_on_termination = true
}
network {
name = "${var.network_name}"
}
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}"]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_node[0]] : []
content {
group = "${openstack_compute_servergroup_v2.k8s_node[0].id}"
}
}
metadata = {
ssh_user = "${var.ssh_user_gfs}"
kubespray_groups = "gfs-cluster,network-storage,no-floating"
depends_on = "${var.network_id}"
use_access_ip = "${var.use_access_ip}"
} }
} }
resource "openstack_compute_volume_attach_v2" "glusterfs_volume" { resource "openstack_compute_volume_attach_v2" "glusterfs_volume" {
count = "${var.number_of_gfs_nodes_no_floating_ip}" count = "${var.gfs_root_volume_size_in_gb == 0 ? var.number_of_gfs_nodes_no_floating_ip : 0}"
instance_id = "${element(openstack_compute_instance_v2.glusterfs_node_no_floating_ip.*.id, count.index)}" instance_id = "${element(openstack_compute_instance_v2.glusterfs_node_no_floating_ip.*.id, count.index)}"
volume_id = "${element(openstack_blockstorage_volume_v2.glusterfs_volume.*.id, count.index)}" volume_id = "${element(openstack_blockstorage_volume_v2.glusterfs_volume.*.id, count.index)}"
} }
resource "openstack_compute_volume_attach_v2" "glusterfs_volume_custom_root_volume_size" {
count = "${var.gfs_root_volume_size_in_gb > 0 ? var.number_of_gfs_nodes_no_floating_ip : 0}"
instance_id = "${element(openstack_compute_instance_v2.glusterfs_node_no_floating_ip_custom_volume_size.*.id, count.index)}"
volume_id = "${element(openstack_blockstorage_volume_v2.glusterfs_volume_custom_volume_size.*.id, count.index)}"
}
View File
@@ -22,6 +22,16 @@ variable "number_of_bastions" {}
variable "number_of_gfs_nodes_no_floating_ip" {} variable "number_of_gfs_nodes_no_floating_ip" {}
variable "bastion_root_volume_size_in_gb" {}
variable "etcd_root_volume_size_in_gb" {}
variable "master_root_volume_size_in_gb" {}
variable "node_root_volume_size_in_gb" {}
variable "gfs_root_volume_size_in_gb" {}
variable "gfs_volume_size_in_gb" {} variable "gfs_volume_size_in_gb" {}
variable "public_key_path" {} variable "public_key_path" {}
@@ -82,6 +92,8 @@ variable "k8s_allowed_egress_ips" {
type = "list" type = "list"
} }
variable "wait_for_floatingip" {}
variable "supplementary_master_groups" { variable "supplementary_master_groups" {
default = "" default = ""
} }
@@ -93,3 +105,9 @@ variable "supplementary_node_groups" {
variable "worker_allowed_ports" { variable "worker_allowed_ports" {
type = "list" type = "list"
} }
variable "use_access_ip" {}
variable "use_server_groups" {
type = bool
}
View File
@@ -1,5 +1,5 @@
resource "null_resource" "dummy_dependency" { resource "null_resource" "dummy_dependency" {
triggers { triggers = {
dependency_id = "${var.router_id}" dependency_id = "${var.router_id}"
} }
} }
View File
@@ -1,15 +1,15 @@
output "k8s_master_fips" { output "k8s_master_fips" {
value = ["${openstack_networking_floatingip_v2.k8s_master.*.address}"] value = "${openstack_networking_floatingip_v2.k8s_master[*].address}"
} }
output "k8s_master_no_etcd_fips" { output "k8s_master_no_etcd_fips" {
value = ["${openstack_networking_floatingip_v2.k8s_master_no_etcd.*.address}"] value = "${openstack_networking_floatingip_v2.k8s_master_no_etcd[*].address}"
} }
output "k8s_node_fips" { output "k8s_node_fips" {
value = ["${openstack_networking_floatingip_v2.k8s_node.*.address}"] value = "${openstack_networking_floatingip_v2.k8s_node[*].address}"
} }
output "bastion_fips" { output "bastion_fips" {
value = ["${openstack_networking_floatingip_v2.bastion.*.address}"] value = "${openstack_networking_floatingip_v2.bastion[*].address}"
} }
View File
@@ -8,13 +8,14 @@ resource "openstack_networking_router_v2" "k8s" {
resource "openstack_networking_network_v2" "k8s" { resource "openstack_networking_network_v2" "k8s" {
name = "${var.network_name}" name = "${var.network_name}"
count = "${var.use_neutron}" count = "${var.use_neutron}"
dns_domain = var.network_dns_domain != null ? "${var.network_dns_domain}" : null
admin_state_up = "true" admin_state_up = "true"
} }
resource "openstack_networking_subnet_v2" "k8s" { resource "openstack_networking_subnet_v2" "k8s" {
name = "${var.cluster_name}-internal-network" name = "${var.cluster_name}-internal-network"
count = "${var.use_neutron}" count = "${var.use_neutron}"
network_id = "${openstack_networking_network_v2.k8s.id}" network_id = "${openstack_networking_network_v2.k8s[count.index].id}"
cidr = "${var.subnet_cidr}" cidr = "${var.subnet_cidr}"
ip_version = 4 ip_version = 4
dns_nameservers = "${var.dns_nameservers}" dns_nameservers = "${var.dns_nameservers}"
@@ -22,6 +23,6 @@ resource "openstack_networking_subnet_v2" "k8s" {
resource "openstack_networking_router_interface_v2" "k8s" { resource "openstack_networking_router_interface_v2" "k8s" {
count = "${var.use_neutron}" count = "${var.use_neutron}"
router_id = "${openstack_networking_router_v2.k8s.id}" router_id = "${openstack_networking_router_v2.k8s[count.index].id}"
subnet_id = "${openstack_networking_subnet_v2.k8s.id}" subnet_id = "${openstack_networking_subnet_v2.k8s[count.index].id}"
} }
View File
@@ -2,6 +2,8 @@ variable "external_net" {}
variable "network_name" {} variable "network_name" {}
variable "network_dns_domain" {}
variable "cluster_name" {} variable "cluster_name" {}
variable "dns_nameservers" { variable "dns_nameservers" {
View File
@@ -1,6 +1,9 @@
# your Kubernetes cluster name here # your Kubernetes cluster name here
cluster_name = "i-didnt-read-the-docs" cluster_name = "i-didnt-read-the-docs"
# list of availability zones available in your OpenStack cluster
#az_list = ["nova"]
# SSH key to use for access to nodes # SSH key to use for access to nodes
public_key_path = "~/.ssh/id_rsa.pub" public_key_path = "~/.ssh/id_rsa.pub"
View File
@@ -44,6 +44,26 @@ variable "number_of_gfs_nodes_no_floating_ip" {
default = 0 default = 0
} }
variable "bastion_root_volume_size_in_gb" {
default = 0
}
variable "etcd_root_volume_size_in_gb" {
default = 0
}
variable "master_root_volume_size_in_gb" {
default = 0
}
variable "node_root_volume_size_in_gb" {
default = 0
}
variable "gfs_root_volume_size_in_gb" {
default = 0
}
variable "gfs_volume_size_in_gb" { variable "gfs_volume_size_in_gb" {
default = 75 default = 75
} }
@@ -55,12 +75,12 @@ variable "public_key_path" {
variable "image" { variable "image" {
description = "the image to use" description = "the image to use"
default = "ubuntu-14.04" default = ""
} }
variable "image_gfs" { variable "image_gfs" {
description = "Glance image to use for GlusterFS" description = "Glance image to use for GlusterFS"
default = "ubuntu-16.04" default = ""
} }
variable "ssh_user" { variable "ssh_user" {
@@ -103,6 +123,12 @@ variable "network_name" {
default = "internal" default = "internal"
} }
variable "network_dns_domain" {
description = "dns_domain for the internal network"
type = "string"
default = null
}
variable "use_neutron" { variable "use_neutron" {
description = "Use neutron" description = "Use neutron"
default = 1 default = 1
@@ -125,6 +151,11 @@ variable "floatingip_pool" {
default = "external" default = "external"
} }
variable "wait_for_floatingip" {
description = "Terraform will poll the instance until the floating IP has been associated."
default = "false"
}
variable "external_net" { variable "external_net" {
description = "uuid of the external/public network" description = "uuid of the external/public network"
} }
@@ -175,3 +206,11 @@ variable "worker_allowed_ports" {
}, },
] ]
} }
variable "use_access_ip" {
default = 1
}
variable "use_server_groups" {
default = false
}
View File
@@ -38,7 +38,7 @@ now six total etcd replicas.
## SSH Key Setup ## SSH Key Setup
An SSH keypair is required so Ansible can access the newly provisioned nodes (bare metal Packet hosts). By default, the public SSH key defined in cluster.tf will be installed in authorized_key on the newly provisioned nodes (~/.ssh/id_rsa.pub). Terraform will upload this public key and then it will be distributed out to all the nodes. If you have already set this public key in Packet (i.e. via the portal), then set the public keyfile name in cluster.tf to blank to prevent the duplicate key from being uploaded which will cause an error. An SSH keypair is required so Ansible can access the newly provisioned nodes (bare metal Packet hosts). By default, the public SSH key defined in cluster.tfvars will be installed in authorized_key on the newly provisioned nodes (~/.ssh/id_rsa.pub). Terraform will upload this public key and then it will be distributed out to all the nodes. If you have already set this public key in Packet (i.e. via the portal), then set the public keyfile name in cluster.tfvars to blank to prevent the duplicate key from being uploaded which will cause an error.
If you don't already have a keypair generated (~/.ssh/id_rsa and ~/.ssh/id_rsa.pub), then a new keypair can be generated with the command: If you don't already have a keypair generated (~/.ssh/id_rsa and ~/.ssh/id_rsa.pub), then a new keypair can be generated with the command:
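The command itself lies outside this hunk; a typical invocation (an assumption, not quoted from the changeset) would be:
```ShellSession
$ ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa
```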
@@ -72,7 +72,7 @@ If someone gets this key, they can startup/shutdown hosts in your project!
For more information on how to generate an API key or find your project ID, please see: For more information on how to generate an API key or find your project ID, please see:
https://support.packet.com/kb/articles/api-integrations https://support.packet.com/kb/articles/api-integrations
The Packet Project ID associated with the key will be set later in cluster.tf. The Packet Project ID associated with the key will be set later in cluster.tfvars.
For more information about the API, please see: For more information about the API, please see:
https://www.packet.com/developers/api/ https://www.packet.com/developers/api/
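As a sketch under the assumption that the Terraform packet provider reads its API key from the standard `PACKET_AUTH_TOKEN` environment variable, and using the `packet_project_id` variable referenced by the Terraform code below, the setup could look like:
```ShellSession
$ export PACKET_AUTH_TOKEN="your-api-key"                    # keep this secret
$ echo 'packet_project_id = "your-project-uuid"' >> inventory/$CLUSTER/cluster.tfvars
```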
@@ -88,7 +88,7 @@ Note that to deploy several clusters within the same project you need to use [te
The construction of the cluster is driven by values found in The construction of the cluster is driven by values found in
[variables.tf](variables.tf). [variables.tf](variables.tf).
For your cluster, edit `inventory/$CLUSTER/cluster.tf`. For your cluster, edit `inventory/$CLUSTER/cluster.tfvars`.
The `cluster_name` is used to set a tag on each server deployed as part of this cluster. The `cluster_name` is used to set a tag on each server deployed as part of this cluster.
This helps when identifying which hosts are associated with each cluster. This helps when identifying which hosts are associated with each cluster.
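As a minimal sketch (variable names are taken from the Terraform files further down in this changeset; every value is a placeholder), a Packet `cluster.tfvars` might contain:
```
cluster_name          = "my-cluster"
packet_project_id     = "your-project-uuid"
public_key_path       = "~/.ssh/id_rsa.pub"
facility              = "ewr1"            # placeholder facility code
plan_k8s_masters      = "t1.small.x86"    # placeholder plan
plan_k8s_nodes        = "t1.small.x86"
number_of_k8s_masters = 1
number_of_k8s_nodes   = 2
```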
@@ -138,7 +138,7 @@ This should finish fairly quickly telling you Terraform has successfully initial
You can apply the Terraform configuration to your cluster with the following command You can apply the Terraform configuration to your cluster with the following command
issued from your cluster's inventory directory (`inventory/$CLUSTER`): issued from your cluster's inventory directory (`inventory/$CLUSTER`):
```ShellSession ```ShellSession
$ terraform apply -var-file=cluster.tf ../../contrib/terraform/packet $ terraform apply -var-file=cluster.tfvars ../../contrib/terraform/packet
$ export ANSIBLE_HOST_KEY_CHECKING=False $ export ANSIBLE_HOST_KEY_CHECKING=False
$ ansible-playbook -i hosts ../../cluster.yml $ ansible-playbook -i hosts ../../cluster.yml
``` ```
@@ -147,7 +147,7 @@ $ ansible-playbook -i hosts ../../cluster.yml
You can destroy your new cluster with the following command issued from the cluster's inventory directory: You can destroy your new cluster with the following command issued from the cluster's inventory directory:
```ShellSession ```ShellSession
$ terraform destroy -var-file=cluster.tf ../../contrib/terraform/packet $ terraform destroy -var-file=cluster.tfvars ../../contrib/terraform/packet
``` ```
If you've started the Ansible run, it may also be a good idea to do some manual cleanup: If you've started the Ansible run, it may also be a good idea to do some manual cleanup:
View File
@@ -4,59 +4,60 @@ provider "packet" {
} }
resource "packet_ssh_key" "k8s" { resource "packet_ssh_key" "k8s" {
count = "${var.public_key_path != "" ? 1 : 0}" count = var.public_key_path != "" ? 1 : 0
name = "kubernetes-${var.cluster_name}" name = "kubernetes-${var.cluster_name}"
public_key = "${chomp(file(var.public_key_path))}" public_key = chomp(file(var.public_key_path))
} }
resource "packet_device" "k8s_master" { resource "packet_device" "k8s_master" {
depends_on = ["packet_ssh_key.k8s"] depends_on = [packet_ssh_key.k8s]
count = "${var.number_of_k8s_masters}" count = var.number_of_k8s_masters
hostname = "${var.cluster_name}-k8s-master-${count.index + 1}" hostname = "${var.cluster_name}-k8s-master-${count.index + 1}"
plan = "${var.plan_k8s_masters}" plan = var.plan_k8s_masters
facilities = ["${var.facility}"] facilities = [var.facility]
operating_system = "${var.operating_system}" operating_system = var.operating_system
billing_cycle = "${var.billing_cycle}" billing_cycle = var.billing_cycle
project_id = "${var.packet_project_id}" project_id = var.packet_project_id
tags = ["cluster-${var.cluster_name}", "k8s-cluster", "kube-master", "etcd", "kube-node"] tags = ["cluster-${var.cluster_name}", "k8s-cluster", "kube-master", "etcd", "kube-node"]
} }
resource "packet_device" "k8s_master_no_etcd" { resource "packet_device" "k8s_master_no_etcd" {
depends_on = ["packet_ssh_key.k8s"] depends_on = [packet_ssh_key.k8s]
count = "${var.number_of_k8s_masters_no_etcd}" count = var.number_of_k8s_masters_no_etcd
hostname = "${var.cluster_name}-k8s-master-${count.index + 1}" hostname = "${var.cluster_name}-k8s-master-${count.index + 1}"
plan = "${var.plan_k8s_masters_no_etcd}" plan = var.plan_k8s_masters_no_etcd
facilities = ["${var.facility}"] facilities = [var.facility]
operating_system = "${var.operating_system}" operating_system = var.operating_system
billing_cycle = "${var.billing_cycle}" billing_cycle = var.billing_cycle
project_id = "${var.packet_project_id}" project_id = var.packet_project_id
tags = ["cluster-${var.cluster_name}", "k8s-cluster", "kube-master"] tags = ["cluster-${var.cluster_name}", "k8s-cluster", "kube-master"]
} }
resource "packet_device" "k8s_etcd" { resource "packet_device" "k8s_etcd" {
depends_on = ["packet_ssh_key.k8s"] depends_on = [packet_ssh_key.k8s]
count = "${var.number_of_etcd}" count = var.number_of_etcd
hostname = "${var.cluster_name}-etcd-${count.index + 1}" hostname = "${var.cluster_name}-etcd-${count.index + 1}"
plan = "${var.plan_etcd}" plan = var.plan_etcd
facilities = ["${var.facility}"] facilities = [var.facility]
operating_system = "${var.operating_system}" operating_system = var.operating_system
billing_cycle = "${var.billing_cycle}" billing_cycle = var.billing_cycle
project_id = "${var.packet_project_id}" project_id = var.packet_project_id
tags = ["cluster-${var.cluster_name}", "etcd"] tags = ["cluster-${var.cluster_name}", "etcd"]
} }
resource "packet_device" "k8s_node" { resource "packet_device" "k8s_node" {
depends_on = ["packet_ssh_key.k8s"] depends_on = [packet_ssh_key.k8s]
count = "${var.number_of_k8s_nodes}" count = var.number_of_k8s_nodes
hostname = "${var.cluster_name}-k8s-node-${count.index + 1}" hostname = "${var.cluster_name}-k8s-node-${count.index + 1}"
plan = "${var.plan_k8s_nodes}" plan = var.plan_k8s_nodes
facilities = ["${var.facility}"] facilities = [var.facility]
operating_system = "${var.operating_system}" operating_system = var.operating_system
billing_cycle = "${var.billing_cycle}" billing_cycle = var.billing_cycle
project_id = "${var.packet_project_id}" project_id = var.packet_project_id
tags = ["cluster-${var.cluster_name}", "k8s-cluster", "kube-node"] tags = ["cluster-${var.cluster_name}", "k8s-cluster", "kube-node"]
} }
View File
@@ -1,15 +1,16 @@
output "k8s_masters" { output "k8s_masters" {
value = "${packet_device.k8s_master.*.access_public_ipv4}" value = packet_device.k8s_master.*.access_public_ipv4
} }
output "k8s_masters_no_etc" { output "k8s_masters_no_etc" {
value = "${packet_device.k8s_master_no_etcd.*.access_public_ipv4}" value = packet_device.k8s_master_no_etcd.*.access_public_ipv4
} }
output "k8s_etcds" { output "k8s_etcds" {
value = "${packet_device.k8s_etcd.*.access_public_ipv4}" value = packet_device.k8s_etcd.*.access_public_ipv4
} }
output "k8s_nodes" { output "k8s_nodes" {
value = "${packet_device.k8s_node.*.access_public_ipv4}" value = packet_device.k8s_node.*.access_public_ipv4
} }
View File
@@ -54,3 +54,4 @@ variable "number_of_etcd" {
variable "number_of_k8s_nodes" { variable "number_of_k8s_nodes" {
default = 0 default = 0
} }
View File
@@ -0,0 +1,4 @@
terraform {
required_version = ">= 0.12"
}
View File
@@ -1,4 +1,4 @@
#!/usr/bin/env python2 #!/usr/bin/env python3
# #
# Copyright 2015 Cisco Systems, Inc. # Copyright 2015 Cisco Systems, Inc.
# #
@@ -20,15 +20,15 @@
Dynamic inventory for Terraform - finds all `.tfstate` files below the working Dynamic inventory for Terraform - finds all `.tfstate` files below the working
directory and generates an inventory based on them. directory and generates an inventory based on them.
""" """
from __future__ import unicode_literals, print_function
import argparse import argparse
from collections import defaultdict from collections import defaultdict
import random
from functools import wraps from functools import wraps
import json import json
import os import os
import re import re
VERSION = '0.3.0pre' VERSION = '0.4.0pre'
def tfstates(root=None): def tfstates(root=None):
@@ -38,15 +38,58 @@ def tfstates(root=None):
if os.path.splitext(name)[-1] == '.tfstate': if os.path.splitext(name)[-1] == '.tfstate':
yield os.path.join(dirpath, name) yield os.path.join(dirpath, name)
def convert_to_v3_structure(attributes, prefix=''):
""" Convert the attributes from v4 to v3
Receives a dict and return a dictionary """
result = {}
if isinstance(attributes, str):
# In the case when we receive a string (e.g. values for security_groups)
return {'{}{}'.format(prefix, random.randint(1,10**10)): attributes}
for key, value in attributes.items():
if isinstance(value, list):
if len(value):
result['{}{}.#'.format(prefix, key, hash)] = len(value)
for i, v in enumerate(value):
result.update(convert_to_v3_structure(v, '{}{}.{}.'.format(prefix, key, i)))
elif isinstance(value, dict):
result['{}{}.%'.format(prefix, key)] = len(value)
for k, v in value.items():
result['{}{}.{}'.format(prefix, key, k)] = v
else:
result['{}{}'.format(prefix, key)] = value
return result
def iterresources(filenames): def iterresources(filenames):
for filename in filenames: for filename in filenames:
with open(filename, 'r') as json_file: with open(filename, 'r') as json_file:
state = json.load(json_file) state = json.load(json_file)
tf_version = state['version']
if tf_version == 3:
for module in state['modules']: for module in state['modules']:
name = module['path'][-1] name = module['path'][-1]
for key, resource in module['resources'].items(): for key, resource in module['resources'].items():
yield name, key, resource yield name, key, resource
elif tf_version == 4:
# In version 4 the structure changes so we need to iterate
# each instance inside the resource branch.
for resource in state['resources']:
name = resource['provider'].split('.')[-1]
for instance in resource['instances']:
key = "{}.{}".format(resource['type'], resource['name'])
if 'index_key' in instance:
key = "{}.{}".format(key, instance['index_key'])
data = {}
data['type'] = resource['type']
data['provider'] = resource['provider']
data['depends_on'] = instance.get('depends_on', [])
data['primary'] = {'attributes': convert_to_v3_structure(instance['attributes'])}
if 'id' in instance['attributes']:
data['primary']['id'] = instance['attributes']['id']
data['primary']['meta'] = instance['attributes'].get('meta',{})
yield name, key, data
else:
raise KeyError('tfstate version %d not supported' % tf_version)
## READ RESOURCES ## READ RESOURCES
PARSERS = {} PARSERS = {}
@@ -109,7 +152,7 @@ def calculate_mantl_vars(func):
def _parse_prefix(source, prefix, sep='.'): def _parse_prefix(source, prefix, sep='.'):
for compkey, value in source.items(): for compkey, value in list(source.items()):
try: try:
curprefix, rest = compkey.split(sep, 1) curprefix, rest = compkey.split(sep, 1)
except ValueError: except ValueError:
@@ -127,7 +170,7 @@ def parse_attr_list(source, prefix, sep='.'):
idx, key = compkey.split(sep, 1) idx, key = compkey.split(sep, 1)
attrs[idx][key] = value attrs[idx][key] = value
return attrs.values() return list(attrs.values())
def parse_dict(source, prefix, sep='.'): def parse_dict(source, prefix, sep='.'):
@@ -139,6 +182,9 @@ def parse_list(source, prefix, sep='.'):
def parse_bool(string_form): def parse_bool(string_form):
if type(string_form) is bool:
return string_form
token = string_form.lower()[0] token = string_form.lower()[0]
if token == 't': if token == 't':
@@ -167,7 +213,7 @@ def packet_device(resource, tfvars=None):
'state': raw_attrs['state'],
# ansible
'ansible_ssh_host': raw_attrs['network.0.address'],
'ansible_ssh_user': 'root',  # Use root by default in packet
# generic
'ipv4_address': raw_attrs['network.0.address'],
'public_ipv4': raw_attrs['network.0.address'],
@@ -177,6 +223,10 @@ def packet_device(resource, tfvars=None):
'provider': 'packet',
}
if raw_attrs['operating_system'] == 'coreos_stable':
# For CoreOS set the ssh_user to core
attrs.update({'ansible_ssh_user': 'core'})
# add groups based on attrs
groups.append('packet_operating_system=' + attrs['operating_system'])
groups.append('packet_locked=%s' % attrs['locked'])
@@ -239,6 +289,12 @@ def openstack_host(resource, module_name):
attrs['private_ipv4'] = raw_attrs['network.0.fixed_ip_v4']
try:
if 'metadata.prefer_ipv6' in raw_attrs and raw_attrs['metadata.prefer_ipv6'] == "1":
attrs.update({
'ansible_ssh_host': re.sub("[\[\]]", "", raw_attrs['access_ip_v6']),
'publicly_routable': True,
})
else:
attrs.update({
'ansible_ssh_host': raw_attrs['access_ip_v4'],
'publicly_routable': True,
@@ -252,9 +308,9 @@ def openstack_host(resource, module_name):
if 'metadata.ssh_user' in raw_attrs:
attrs['ansible_ssh_user'] = raw_attrs['metadata.ssh_user']
if 'volume.#' in list(raw_attrs.keys()) and int(raw_attrs['volume.#']) > 0:
device_index = 1
for key, value in list(raw_attrs.items()):
match = re.search("^volume.*.device$", key)
if match:
attrs['disk_volume_device_'+str(device_index)] = value
@@ -272,7 +328,7 @@ def openstack_host(resource, module_name):
groups.append('os_image=' + attrs['image']['name'])
groups.append('os_flavor=' + attrs['flavor']['name'])
groups.extend('os_metadata_%s=%s' % item
for item in list(attrs['metadata'].items()))
groups.append('os_region=' + attrs['region'])
# groups specific to Mantl
@@ -290,14 +346,20 @@ def iter_host_ips(hosts, ips):
'''Update hosts that have an entry in the floating IP list'''
for host in hosts:
host_id = host[1]['id']
if host_id in ips:
ip = ips[host_id]
host[1].update({
'access_ip_v4': ip,
'access_ip': ip,
'public_ipv4': ip,
'ansible_ssh_host': ip,
})
if 'use_access_ip' in host[1]['metadata'] and host[1]['metadata']['use_access_ip'] == "0":
host[1].pop('access_ip')
yield host
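A small sketch of the generator above (the host tuples and floating-IP map are hypothetical): both hosts get their addressing fields rewritten, and the second loses `access_ip` because its metadata sets `use_access_ip` to "0":

```python
# Hypothetical (hostname, attributes) pairs and a floating-IP map keyed by id.
hosts = [
    ('node1', {'id': 'id-1', 'metadata': {'use_access_ip': '1'}}),
    ('node2', {'id': 'id-2', 'metadata': {'use_access_ip': '0'}}),
]
ips = {'id-1': '198.51.100.10', 'id-2': '198.51.100.11'}

for hostname, attrs in iter_host_ips(hosts, ips):
    # node1 keeps 'access_ip'; for node2 it is popped.
    print(hostname, attrs.get('access_ip'), attrs['ansible_ssh_host'])
```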


@@ -13,7 +13,7 @@
/usr/local/share/ca-certificates/vault-ca.crt
{%- elif ansible_os_family == "RedHat" -%}
/etc/pki/ca-trust/source/anchors/vault-ca.crt
{%- elif ansible_os_family in ["Coreos", "Container Linux by CoreOS", "Flatcar", "Flatcar Container Linux by Kinvolk"] -%}
/etc/ssl/certs/vault-ca.pem
{%- endif %}
@@ -25,7 +25,7 @@
- name: bootstrap/ca_trust | update ca-certificates (Debian/Ubuntu/CoreOS)
command: update-ca-certificates
when: vault_ca_cert.changed and ansible_os_family in ["Debian", "CoreOS", "Coreos", "Container Linux by CoreOS", "Flatcar", "Flatcar Container Linux by Kinvolk"]
- name: bootstrap/ca_trust | update ca-certificates (RedHat)
command: update-ca-trust extract


@@ -21,7 +21,7 @@
- name: bootstrap/sync_secrets | Print out warning message if secrets are not available and vault is initialized
pause:
prompt: >
Vault orchestration may not be able to proceed. The Vault cluster is initialized, but
'root_token' or 'unseal_keys' were not found in {{ vault_secrets_dir }}. These are
needed for many vault orchestration steps.
when: vault_cluster_is_initialized and not vault_secrets_available


@@ -3,7 +3,6 @@
* [Getting started](/docs/getting-started.md)
* [Ansible](docs/ansible.md)
* [Variables](/docs/vars.md)
* Operations
* [Integration](docs/integration.md)
* [Upgrades](/docs/upgrades.md)


@@ -1,9 +1,7 @@
# Ansible variables
## Inventory
The inventory is composed of 3 groups:
* **kube-node** : list of kubernetes nodes where the pods will run.
@@ -14,7 +12,7 @@ Note: do not modify the children of _k8s-cluster_, like putting
the _etcd_ group into the _k8s-cluster_, unless you are certain
to do that and you have it fully contained in the latter:
```ShellSession
k8s-cluster ⊂ etcd => kube-node ∩ etcd = etcd
```
@@ -32,7 +30,7 @@ There are also two special groups:
Below is a complete inventory example:
```ini
## Configure 'ip' variable to bind kubernetes services on a
## different ip than the default iface
node1 ansible_host=95.54.0.12 ip=10.3.0.1
@@ -63,8 +61,7 @@ kube-node
kube-master
```
## Group vars and overriding variables precedence
The group variables to control main deployment options are located in the directory ``inventory/sample/group_vars``.
Optional variables are located in the `inventory/sample/group_vars/all.yml`.
@@ -97,8 +94,8 @@ block vars (only for tasks in block) | Kubespray overrides for internal roles' l
task vars (only for the task) | Unused for roles, but only for helper scripts
**extra vars** (always win precedence) | override with ``ansible-playbook -e @foo.yml``
## Ansible tags
The following tags are defined in playbooks:
| Tag name | Used for
@@ -145,21 +142,25 @@ Note: Use the ``bash scripts/gen_tags.sh`` command to generate a list of all
tags found in the codebase. New tags will be listed with the empty "Used for"
field.
## Example commands
Example command to filter and apply only DNS configuration tasks and skip
everything else related to host OS configuration and downloading images of containers:
```ShellSession
ansible-playbook -i inventory/sample/hosts.ini cluster.yml --tags preinstall,facts --skip-tags=download,bootstrap-os
```
And this play only removes the K8s cluster DNS resolver IP from hosts' /etc/resolv.conf files:
```ShellSession
ansible-playbook -i inventory/sample/hosts.ini -e dns_mode='none' cluster.yml --tags resolvconf
```
And this prepares all container images locally (at the ansible runner node) without installing
or upgrading related stuff or trying to upload container to K8s cluster nodes:
```ShellSession
ansible-playbook -i inventory/sample/hosts.ini cluster.yml \
-e download_run_once=true -e download_localhost=true \
--tags download --skip-tags upload,upgrade
@@ -167,14 +168,14 @@ ansible-playbook -i inventory/sample/hosts.ini cluster.yml \
Note: use `--tags` and `--skip-tags` wisely and only if you're 100% sure what you're doing.
## Bastion host
If you prefer to not make your nodes publicly accessible (nodes with private IPs only),
you can use a so called *bastion* host to connect to your nodes. To specify and use a bastion,
simply add a line to your inventory, where you have to replace x.x.x.x with the public IP of the
bastion host.
```ShellSession
[bastion]
bastion ansible_host=x.x.x.x
```


@@ -1,6 +1,7 @@
# Architecture compatibility
The following table shows the impact of the CPU architecture on compatible features:
- amd64: Cluster using only x86/amd64 CPUs
- arm64: Cluster using only arm64 CPUs
- amd64 + arm64: Cluster with a mix of x86/amd64 and arm64 CPUs


@@ -1,23 +1,22 @@
# Atomic host bootstrap
Atomic host testing has been done with the network plugin flannel. Change the inventory var `kube_network_plugin: flannel`.
Note: Flannel is the only plugin that has currently been tested with atomic
## Vagrant
* For bootstrapping with Vagrant, use box centos/atomic-host or fedora/atomic-host
* Update VagrantFile variable `local_release_dir` to `/var/vagrant/temp`.
* Update `vm_memory = 2048` and `vm_cpus = 2`
* Networking on vagrant hosts has to be brought up manually once they are booted.
```ShellSession
vagrant ssh
sudo /sbin/ifup enp0s8
```
* For users of vagrant-libvirt download centos/atomic-host qcow2 format from <https://wiki.centos.org/SpecialInterestGroup/Atomic/Download/>
* For users of vagrant-libvirt download fedora/atomic-host qcow2 format from <https://dl.fedoraproject.org/pub/alt/atomic/stable/>
Then you can proceed to [cluster deployment](#run-deployment)


@@ -1,11 +1,10 @@
# AWS
To deploy kubespray on [AWS](https://aws.amazon.com/) uncomment the `cloud_provider` option in `group_vars/all.yml` and set it to `'aws'`. Refer to the [Kubespray Configuration](#kubespray-configuration) for customizing the provider.
Prior to creating your instances, you **must** ensure that you have created IAM roles and policies for both "kubernetes-master" and "kubernetes-node". You can find the IAM policies [here](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/aws_iam/). See the [IAM Documentation](https://aws.amazon.com/documentation/iam/) if guidance is needed on how to set these up. When you bring your instances online, associate them with the respective IAM role. Nodes that are only to be used for Etcd do not need a role.
You would also need to tag the resources in your VPC accordingly for the aws provider to utilize them. Tag the subnets, route tables and all instances that kubernetes will be run on with key `kubernetes.io/cluster/$cluster_name` (`$cluster_name` must be a unique identifier for the cluster). Tag the subnets that must be targeted by external ELBs with the key `kubernetes.io/role/elb` and internal ELBs with the key `kubernetes.io/role/internal-elb`.
Make sure your VPC has both DNS Hostnames support and Private DNS enabled.
@@ -13,11 +12,13 @@ The next step is to make sure the hostnames in your `inventory` file are identic
You can now create your cluster!
## Dynamic Inventory
There is also a dynamic inventory script for AWS that can be used if desired. However, be aware that it makes certain assumptions about how you'll create your inventory. It also does not handle all use cases and groups that we may use as part of more advanced deployments. Additions welcome.
This will produce an inventory that is passed into Ansible that looks like the following:
```json
{
"_meta": {
"hostvars": {
@@ -48,15 +49,18 @@ This will produce an inventory that is passed into Ansible that looks like the f
```
Guide:
- Create instances in AWS as needed.
- Either during or after creation, add tags to the instances with a key of `kubespray-role` and a value of `kube-master`, `etcd`, or `kube-node`. You can also share roles like `kube-master, etcd`
- Copy the `kubespray-aws-inventory.py` script from `kubespray/contrib/aws_inventory` to the `kubespray/inventory` directory.
- Set the following AWS credentials and info as environment variables in your terminal:
```ShellSession
export AWS_ACCESS_KEY_ID="xxxxx"
export AWS_SECRET_ACCESS_KEY="yyyyy"
export REGION="us-east-2"
```
- We will now create our cluster. There will be either one or two small changes. The first is that we will specify `-i inventory/kubespray-aws-inventory.py` as our inventory script. The other is conditional. If your AWS instances are public facing, you can set the `VPC_VISIBILITY` variable to `public` and that will result in public IP and DNS names being passed into the inventory. This causes your cluster.yml command to look like `VPC_VISIBILITY="public" ansible-playbook ... cluster.yml`
## Kubespray configuration
@@ -75,4 +79,3 @@ aws_kubernetes_cluster_id|string|KubernetesClusterID is the cluster id we'll use
aws_disable_security_group_ingress|bool|The aws provider creates an inbound rule per load balancer on the node security group. However, this can run into the AWS security group rule limit of 50 if many LoadBalancers are created. This flag disables the automatic ingress creation. It requires that the user has set up a rule that allows inbound traffic on kubelet ports from the local VPC subnet (so load balancers can access it). E.g. 10.82.0.0/16 30000-32000.
aws_elb_security_group|string|Only in Kubelet version >= 1.7 : AWS has a hard limit of 500 security groups. For large clusters creating a security group for each ELB can cause the max number of security groups to be reached. If this is set instead of creating a new Security group for each ELB this security group will be used instead.
aws_disable_strict_zone_check|bool|During the instantiation of a new AWS cloud provider, the detected region is validated against a known set of regions. In a non-standard, AWS like environment (e.g. Eucalyptus), this check may be undesirable. Setting this to true will disable the check and provide a warning that the check was skipped. Please note that this is an experimental feature and work-in-progress for the moment.


@@ -1,5 +1,4 @@
# Azure
To deploy Kubernetes on [Azure](https://azure.microsoft.com) uncomment the `cloud_provider` option in `group_vars/all.yml` and set it to `'azure'`.
@@ -7,68 +6,77 @@ All your instances are required to run in a resource group and a routing table h
Not all features are supported yet though, for a list of the current status have a look [here](https://github.com/colemickens/azure-kubernetes-status)
## Parameters
Before creating the instances you must first set the `azure_` variables in the `group_vars/all.yml` file.
All of the values can be retrieved using the azure cli tool which can be downloaded here: <https://docs.microsoft.com/en-gb/azure/xplat-cli-install>
After installation you have to run `azure login` to get access to your account.
### azure\_tenant\_id + azure\_subscription\_id
run `azure account show` to retrieve your subscription id and tenant id:
`azure_tenant_id` -> Tenant ID field
`azure_subscription_id` -> ID field
### azure\_location
The region your instances are located, can be something like `westeurope` or `westcentralus`. A full list of region names can be retrieved via `azure location list`
### azure\_resource\_group
The name of the resource group your instances are in, can be retrieved via `azure group list`
### azure\_vnet\_name
The name of the virtual network your instances are in, can be retrieved via `azure network vnet list`
### azure\_subnet\_name
The name of the subnet your instances are in, can be retrieved via `azure network vnet subnet list --resource-group RESOURCE_GROUP --vnet-name VNET_NAME`
### azure\_security\_group\_name
The name of the network security group your instances are in, can be retrieved via `azure network nsg list`
### azure\_aad\_client\_id + azure\_aad\_client\_secret
These will have to be generated first:
- Create an Azure AD Application with:
`azure ad app create --display-name kubernetes --identifier-uris http://kubernetes --homepage http://example.com --password CLIENT_SECRET`
The display name, identifier-uri, homepage and the password can be chosen
Note the AppId in the output.
- Create Service principal for the application with:
`azure ad sp create --id AppId`
This is the AppId from the last command
- Create the role assignment with:
`azure role assignment create --role "Owner" --assignee http://kubernetes --subscription SUBSCRIPTION_ID`
azure\_aad\_client\_id must be set to the AppId, azure\_aad\_client\_secret is your chosen secret.
### azure\_loadbalancer\_sku
Sku of Load Balancer and Public IP. Candidate values are: basic and standard.
### azure\_exclude\_master\_from\_standard\_lb
azure\_exclude\_master\_from\_standard\_lb excludes master nodes from `standard` load balancer.
### azure\_disable\_outbound\_snat
azure\_disable\_outbound\_snat disables the outbound SNAT for public load balancer rules. It should only be set when azure\_exclude\_master\_from\_standard\_lb is `standard`.
### azure\_primary\_availability\_set\_name
(Optional) The name of the availability set that should be used as the load balancer backend. If this is set, the Azure
cloudprovider will only add nodes from that availability set to the load balancer backend pool. If this is not set, and
multiple agent pools (availability sets) are used, then the cloudprovider will try to add all nodes to a single backend
pool which is forbidden. In other words, if you use multiple agent pools (availability sets), you MUST set this field.
### azure\_use\_instance\_metadata
Use instance metadata service where possible
## Provisioning Azure with Resource Group Templates


@@ -1,82 +1,83 @@
# Calico
N.B. **Version 2.6.5 upgrade to 3.1.1 is upgrading etcd store to etcdv3**
If you create automated backups of etcdv2 please switch to creating etcdv3 backups, as kubernetes and calico now use etcdv3
After migration you can check `/tmp/calico_upgrade/` directory for converted items to etcdv3.
**PLEASE TEST upgrade before upgrading production cluster.**
Check if the calico-node container is running
```ShellSession
docker ps | grep calico
```
The **calicoctl** command allows you to check the status of the network workloads.
* Check the status of Calico nodes
```ShellSession
calicoctl node status
```
or for versions prior to *v1.0.0*:
```ShellSession
calicoctl status
```
* Show the configured network subnet for containers
```ShellSession
calicoctl get ippool -o wide
```
or for versions prior to *v1.0.0*:
```ShellSession
calicoctl pool show
```
* Show the workloads (ip addresses of containers and where they are located)
```ShellSession
calicoctl get workloadEndpoint -o wide
```
and
```ShellSession
calicoctl get hostEndpoint -o wide
```
or for versions prior to *v1.0.0*:
```ShellSession
calicoctl endpoint show --detail
```
## Configuration
### Optional : Define network backend
In some cases you may want to define Calico network backend. Allowed values are 'bird', 'gobgp' or 'none'. Bird is a default value.
To re-define you need to edit the inventory and add a group variable `calico_network_backend`
```yml
calico_network_backend: none
```
### Optional : Define the default pool CIDR
By default, `kube_pods_subnet` is used as the IP range CIDR for the default IP Pool.
In some cases you may want to add several pools and not have them considered by Kubernetes as external (which means that they must be within or equal to the range defined in `kube_pods_subnet`), it starts with the default IP Pool of which IP range CIDR can be defined in group_vars (k8s-cluster/k8s-net-calico.yml):
```ShellSession
calico_pool_cidr: 10.233.64.0/20
```
### Optional : BGP Peering with border routers
In some cases you may want to route the pods subnet and so NAT is not needed on the nodes.
For instance if you have a cluster spread on different locations and you want your pods to talk to each other no matter where they are located.
@@ -84,11 +85,11 @@ The following variables need to be set:
`peer_with_router` to enable the peering with the datacenter's border router (default value: false).
you'll need to edit the inventory and add a hostvar `local_as` by node.
```ShellSession
node1 ansible_ssh_host=95.54.0.12 local_as=xxxxxx
```
### Optional : Defining BGP peers
Peers can be defined using the `peers` variable (see docs/calico_peer_example examples).
In order to define global peers, the `peers` variable can be defined in group_vars with the "scope" attribute of each global peer set to "global".
@@ -97,16 +98,17 @@ NB: Ansible's `hash_behaviour` is by default set to "replace", thus defining bot
Since calico 3.4, Calico supports advertising Kubernetes service cluster IPs over BGP, just as it advertises pod IPs.
This can be enabled by setting the following variable as follows in group_vars (k8s-cluster/k8s-net-calico.yml)
```yml
calico_advertise_cluster_ips: true
```
### Optional : Define global AS number
Optional parameter `global_as_num` defines Calico global AS number (`/calico/bgp/v1/global/as_num` etcd key).
It defaults to "64512".
### Optional : BGP Peering with route reflectors
At large scale you may want to disable full node-to-node mesh in order to
optimize your BGP topology and improve `calico-node` containers' start times.
@@ -114,20 +116,20 @@ optimize your BGP topology and improve `calico-node` containers' start times.
To do so you can deploy BGP route reflectors and peer `calico-node` with them as
recommended here:
* <https://hub.docker.com/r/calico/routereflector/>
* <https://docs.projectcalico.org/v3.1/reference/private-cloud/l3-interconnect-fabric>
You need to edit your inventory and add:
* `calico-rr` group with nodes in it. `calico-rr` can be combined with
`kube-node` and/or `kube-master`. `calico-rr` group also must be a child
group of `k8s-cluster` group.
* `cluster_id` by route reflector node/group (see details
[here](https://hub.docker.com/r/calico/routereflector/))
Here's an example of Kubespray inventory with standalone route reflectors:
```ini
[all]
rr0 ansible_ssh_host=10.210.1.10 ip=10.210.1.10
rr1 ansible_ssh_host=10.210.1.11 ip=10.210.1.11
@@ -154,6 +156,7 @@ node5
[k8s-cluster:children]
kube-node
kube-master
calico-rr
[calico-rr]
rr0
@@ -176,35 +179,35 @@ The inventory above will deploy the following topology assuming that calico's
![Image](figures/kubespray-calico-rr.png?raw=true)
### Optional : Define default endpoint to host action
By default Calico blocks traffic from endpoints to the host itself by using an iptables DROP action. When using it in kubernetes the action has to be changed to RETURN (default in kubespray) or ACCEPT (see <https://github.com/projectcalico/felix/issues/660> and <https://github.com/projectcalico/calicoctl/issues/1389>). Otherwise all network packets from pods (with hostNetwork=False) to services endpoints (with hostNetwork=True) within the same node are dropped.
To re-define default action please set the following variable in your inventory:
```yml
calico_endpoint_to_host_action: "ACCEPT"
```
## Optional : Define address on which Felix will respond to health requests
Since Calico 3.2.0, HealthCheck default behavior changed from listening on all interfaces to just listening on localhost.
To re-define health host please set the following variable in your inventory:
```yml
calico_healthhost: "0.0.0.0"
```
## Cloud providers configuration
Please refer to the official documentation, for example [GCE configuration](http://docs.projectcalico.org/v1.5/getting-started/docker/installation/gce) requires a security rule for calico ip-ip tunnels. Note, calico is always configured with ``ipip: true`` if the cloud provider was defined.
### Optional : Ignore kernel's RPF check setting
By default the felix agent (calico-node) will abort if the Kernel RPF setting is not 'strict'. If you want Calico to ignore the Kernel setting:
```yml
calico_node_ignorelooserpf: true
```
@@ -212,7 +215,7 @@ Note that in OpenStack you must allow `ipip` traffic in your security groups,
otherwise you will experience timeouts.
To do this you must add a rule which allows it, for example:
```ShellSession
neutron security-group-rule-create --protocol 4 --direction egress k8s-a0tp4t
neutron security-group-rule-create --protocol 4 --direction ingress k8s-a0tp4t
```

docs/cinder-csi.md (new file)

@@ -0,0 +1,102 @@
# Cinder CSI Driver
Cinder CSI driver allows you to provision volumes over an OpenStack deployment. The Kubernetes historic in-tree cloud provider is deprecated and will be removed in future versions.
To enable Cinder CSI driver, uncomment the `cinder_csi_enabled` option in `group_vars/all/openstack.yml` and set it to `true`.
To set the number of replicas for the Cinder CSI controller, you can change `cinder_csi_controller_replicas` option in `group_vars/all/openstack.yml`.
You need to source the OpenStack credentials you use to deploy your machines that will host Kubernetes: `source path/to/your/openstack-rc` or `. path/to/your/openstack-rc`.
Make sure the hostnames in your `inventory` file are identical to your instance names in OpenStack. Otherwise [cinder](https://docs.openstack.org/cinder/latest/) won't work as expected.
If you want to deploy the cinder provisioner used with Cinder CSI Driver, you should set `persistent_volumes_enabled` in `group_vars/k8s-cluster/k8s-cluster.yml` to `true`.
You can now run the kubespray playbook (cluster.yml) to deploy Kubernetes over OpenStack with Cinder CSI Driver enabled.
## Usage example
To check if Cinder CSI Driver works properly, see first that the cinder-csi pods are running:
```ShellSession
$ kubectl -n kube-system get pods | grep cinder
csi-cinder-controllerplugin-7f8bf99785-cpb5v 5/5 Running 0 100m
csi-cinder-nodeplugin-rm5x2 2/2 Running 0 100m
```
Check the associated storage class (if you enabled persistent_volumes):
```ShellSession
$ kubectl get storageclass
NAME PROVISIONER AGE
cinder-csi cinder.csi.openstack.org 100m
```
You can run a PVC and an Nginx Pod using this file `nginx.yaml`:
```yml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc-cinderplugin
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: cinder-csi
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    ports:
    - containerPort: 80
      protocol: TCP
    volumeMounts:
    - mountPath: /var/lib/www/html
      name: csi-data-cinderplugin
  volumes:
  - name: csi-data-cinderplugin
    persistentVolumeClaim:
      claimName: csi-pvc-cinderplugin
      readOnly: false
```
Apply this conf to your cluster: ```kubectl apply -f nginx.yaml```
You should see the PVC provisioned and bound:
```ShellSession
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
csi-pvc-cinderplugin Bound pvc-f21ad0a1-5b7b-405e-a462-48da5cb76beb 1Gi RWO cinder-csi 8s
```
And the volume mounted to the Nginx Pod (wait until the Pod is Running):
```ShellSession
kubectl exec -it nginx -- df -h | grep /var/lib/www/html
/dev/vdb 976M 2.6M 958M 1% /var/lib/www/html
```
## Compatibility with in-tree cloud provider
It is not necessary to enable OpenStack as a cloud provider for Cinder CSI Driver to work.
Though, you can run both the in-tree openstack cloud provider and the Cinder CSI Driver at the same time. The storage class provisioners associated to each one of them are differently named.
## Cinder v2 support
For the moment, only Cinder v3 is supported by the CSI Driver.
## More info
For further information about the Cinder CSI Driver, you can refer to this page: [Cloud Provider OpenStack](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-cinder-csi-plugin.md).


@@ -1,13 +1,13 @@
# Cloud providers
## Provisioning
You can deploy instances in your cloud environment in several different ways. Examples include Terraform, Ansible (ec2 and gce modules), and manual creation.
## Deploy kubernetes
With ansible-playbook command
```ShellSession
ansible-playbook -u smana -e ansible_ssh_user=admin -e cloud_provider=[aws|gce] -b --become-user=root -i inventory/single.cfg cluster.yml
```


@@ -1,5 +1,6 @@
# Comparison
## Kubespray vs [Kops](https://github.com/kubernetes/kops)
Kubespray runs on bare metal and most clouds, using Ansible as its substrate for
provisioning and orchestration. Kops performs the provisioning and orchestration
@@ -10,8 +11,7 @@ however, is more tightly integrated with the unique features of the clouds it
supports so it could be a better choice if you know that you will only be using
one platform for the foreseeable future.
## Kubespray vs [Kubeadm](https://github.com/kubernetes/kubeadm)
Kubeadm provides domain Knowledge of Kubernetes clusters' life cycle
management, including self-hosted layouts, dynamic discovery services and so


@@ -1,5 +1,4 @@
# Contiv
Here is the [Contiv documentation](http://contiv.github.io/documents/).
@@ -10,7 +9,6 @@ There are two ways to manage Contiv:
* a web UI managed by the api proxy service
* a CLI named `netctl`
### Interfaces
#### The Web Interface
@@ -27,7 +25,6 @@ contiv_generate_certificate: true
The default credentials to log in are: admin/admin.
#### The Command Line Interface
The second way to modify the Contiv configuration is to use the CLI. To do this, you have to connect to the server and export an environment variable to tell netctl how to connect to the cluster:
@@ -44,7 +41,6 @@ contiv_netmaster_port: 9999
The CLI doesn't use the authentication process needed by the web interface.
### Network configuration
The default configuration uses VXLAN to create an overlay. Two networks are created by default:


@@ -6,6 +6,7 @@ Example with Ansible:
Before running the cluster playbook you must satisfy the following requirements:
General CoreOS Pre-Installation Notes:
- Ensure that the bin_dir is set to `/opt/bin`
- ansible_python_interpreter should be `/opt/bin/python`. This will be laid down by the bootstrap task.
- The default resolvconf_mode setting of `docker_dns` **does not** work for CoreOS. This is because we do not edit the systemd service file for docker on CoreOS nodes. Instead, just use the `host_resolvconf` mode. It should work out of the box.


@@ -1,5 +1,4 @@
# CRI-O
[CRI-O] is a lightweight container runtime for Kubernetes.
Kubespray supports basic functionality for using CRI-O as the default container runtime in a cluster.
@@ -10,14 +9,14 @@ Kubespray supports basic functionality for using CRI-O as the default container
_To use CRI-O instead of Docker, set the following variables:_
## all.yml
```yaml
download_container: false
skip_downloads: false
```
## k8s-cluster.yml
```yaml
etcd_deployment_type: host


@@ -1,5 +1,4 @@
# Debian Jessie
Debian Jessie installation Notes:
@@ -9,7 +8,7 @@ Debian Jessie installation Notes:
to /etc/default/grub. Then update with
```ShellSession
sudo update-grub
sudo update-grub2
sudo reboot
@@ -23,7 +22,7 @@ Debian Jessie installation Notes:
- Add the Ansible repository and install Ansible to get a proper version
```ShellSession
sudo add-apt-repository ppa:ansible/ansible
sudo apt-get update
sudo apt-get install ansible
@@ -34,5 +33,4 @@ Debian Jessie installation Notes:
```sudo apt-get install python-jinja2=2.8-1~bpo8+1 python-netaddr```
Now you can continue with [Preparing your deployment](getting-started.md#starting-custom-deployment)


@@ -1,5 +1,4 @@
# K8s DNS stack by Kubespray
For K8s cluster nodes, Kubespray configures a [Kubernetes DNS](http://kubernetes.io/docs/admin/dns/)
[cluster add-on](http://releases.k8s.io/master/cluster/addons/README.md)
@@ -9,78 +8,89 @@ to serve as an authoritative DNS server for a given ``dns_domain`` and its
Other nodes in the inventory, like external storage nodes or a separate etcd cluster Other nodes in the inventory, like external storage nodes or a separate etcd cluster
node group, considered non-cluster and left up to the user to configure DNS resolve. node group, considered non-cluster and left up to the user to configure DNS resolve.
## DNS variables
DNS variables
=============
There are several global variables which can be used to modify DNS settings: There are several global variables which can be used to modify DNS settings:
#### ndots ### ndots
ndots value to be used in ``/etc/resolv.conf`` ndots value to be used in ``/etc/resolv.conf``
It is important to note that multiple search domains combined with high ``ndots`` It is important to note that multiple search domains combined with high ``ndots``
values lead to poor performance of DNS stack, so please choose it wisely. values lead to poor performance of DNS stack, so please choose it wisely.
#### searchdomains ### searchdomains
Custom search domains to be added in addition to the cluster search domains (``default.svc.{{ dns_domain }}, svc.{{ dns_domain }}``). Custom search domains to be added in addition to the cluster search domains (``default.svc.{{ dns_domain }}, svc.{{ dns_domain }}``).
Most Linux systems limit the total number of search domains to 6 and the total length of all search domains Most Linux systems limit the total number of search domains to 6 and the total length of all search domains
to 256 characters. Depending on the length of ``dns_domain``, you're limitted to less then the total limit. to 256 characters. Depending on the length of ``dns_domain``, you're limited to less than the total limit.
Please note that ``resolvconf_mode: docker_dns`` will automatically add your systems search domains as Please note that ``resolvconf_mode: docker_dns`` will automatically add your systems search domains as
additional search domains. Please take this into the accounts for the limits. additional search domains. Please take this into the accounts for the limits.
#### nameservers ### nameservers
This variable is only used by ``resolvconf_mode: host_resolvconf``. These nameservers are added to the hosts This variable is only used by ``resolvconf_mode: host_resolvconf``. These nameservers are added to the hosts
``/etc/resolv.conf`` *after* ``upstream_dns_servers`` and thus serve as backup nameservers. If this variable ``/etc/resolv.conf`` *after* ``upstream_dns_servers`` and thus serve as backup nameservers. If this variable
is not set, a default resolver is chosen (depending on cloud provider or 8.8.8.8 when no cloud provider is specified). is not set, a default resolver is chosen (depending on cloud provider or 8.8.8.8 when no cloud provider is specified).
#### upstream_dns_servers ### upstream_dns_servers
DNS servers to be added *after* the cluster DNS. Used by all ``resolvconf_mode`` modes. These serve as backup DNS servers to be added *after* the cluster DNS. Used by all ``resolvconf_mode`` modes. These serve as backup
DNS servers in early cluster deployment when no cluster DNS is available yet. DNS servers in early cluster deployment when no cluster DNS is available yet.
## DNS modes supported by Kubespray

You can modify how Kubespray sets up DNS for your cluster with the variables ``dns_mode`` and ``resolvconf_mode``.

### dns_mode

``dns_mode`` configures how Kubespray will set up cluster DNS. There are four modes available:

#### dns_mode: coredns (default)

This installs CoreDNS as the default cluster DNS for all queries.

#### dns_mode: coredns_dual

This installs CoreDNS as the default cluster DNS for all queries, plus a secondary CoreDNS stack.

#### dns_mode: manual

This does not install CoreDNS, but allows you to specify
`manual_dns_server`, which will be configured on nodes for handling Pod DNS.
Use this method if you plan to install your own DNS server in the cluster after
initial deployment.
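For instance, a manual setup could be sketched in `group_vars/k8s-cluster/k8s-cluster.yml` like this (the address is a placeholder for the DNS server you plan to run yourself):

```yml
dns_mode: manual
manual_dns_server: 10.233.0.4   # placeholder address of your own in-cluster DNS server
```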
#### dns_mode: none

This does not install any DNS solution at all. This basically disables cluster DNS completely and
leaves you with a non-functional cluster.
### resolvconf_mode

``resolvconf_mode`` configures how Kubespray will set up DNS for ``hostNetwork: true`` PODs and non-k8s containers.
There are three modes available:

#### resolvconf_mode: docker_dns (default)

This sets up the docker daemon with additional --dns/--dns-search/--dns-opt flags.

The following nameservers are added to the docker daemon (in the same order as listed here):

* cluster nameserver (depends on dns_mode)
* content of optional upstream_dns_servers variable
* host system nameservers (read from hosts /etc/resolv.conf)

The following search domains are added to the docker daemon (in the same order as listed here):

* cluster domains (``default.svc.{{ dns_domain }}``, ``svc.{{ dns_domain }}``)
* content of optional searchdomains variable
* host system search domains (read from hosts /etc/resolv.conf)

The following dns options are added to the docker daemon:

* ndots:{{ ndots }}
* timeout:2
* attempts:2
@@ -96,8 +106,9 @@ DNS queries to the cluster DNS will timeout after a few seconds, resulting in th
used as a backup nameserver. After cluster DNS is running, all queries will be answered by the cluster DNS
servers, which in turn will forward queries to the system nameserver if required.
#### resolvconf_mode: host_resolvconf

This activates the classic Kubespray behavior that modifies the host's ``/etc/resolv.conf`` file and dhclient
configuration to point to the cluster dns server (either coredns or coredns_dual, depending on dns_mode).

As cluster DNS is not available at the early deployment stage, this mode is split into 2 stages. In the first
@@ -108,19 +119,21 @@ the other nameservers as backups.
Also note, existing records will be purged from the `/etc/resolv.conf`,
including resolvconf's base/head/cloud-init config files and those that come from dhclient.
#### resolvconf_mode: none

Does nothing regarding ``/etc/resolv.conf``. This leaves you with a cluster that works as expected in most cases.
The only exception is that ``hostNetwork: true`` PODs and non-k8s managed containers will not be able to resolve
cluster service names.
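As a quick reference, the two knobs might be set together in `group_vars/k8s-cluster/k8s-cluster.yml` like this (one possible combination, not the only valid one):

```yml
dns_mode: coredns            # cluster DNS served by CoreDNS
resolvconf_mode: docker_dns  # docker daemon gets the extra --dns/--dns-search/--dns-opt flags
```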
## Nodelocal DNS cache

Setting ``enable_nodelocaldns`` to ``true`` will make pods reach out to the dns (core-dns) caching agent running on the same node, thereby avoiding iptables DNAT rules and connection tracking. The local caching agent will query core-dns (depending on what main DNS plugin is configured in your cluster) for cache misses of cluster hostnames (cluster.local suffix by default).

More information on the rationale behind this implementation can be found [here](https://github.com/kubernetes/enhancements/blob/master/keps/sig-network/0030-nodelocal-dns-cache.md).

**As per the 2.10 release, Nodelocal DNS cache is enabled by default.**
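If you need to turn the cache off, the toggle is a single group var:

```yml
enable_nodelocaldns: false   # default is true since the 2.10 release
```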
## Limitations

* Kubespray does not yet provide a way to configure the Kubedns addon to forward requests SkyDns can
not answer with authority to arbitrary recursive resolvers. This task is left
@@ -129,9 +142,7 @@ Limitations
* There is
[no way to specify a custom value](https://github.com/kubernetes/kubernetes/issues/33554)
for the SkyDNS ``ndots`` param.
* the ``searchdomains`` have a limitation of 6 names and 256 chars
length. Due to default ``svc, default.svc`` subdomains, the actual

@@ -1,25 +1,23 @@
# Downloading binaries and containers

Kubespray supports several download/upload modes. The default is:

* Each node downloads binaries and container images on its own, which is ``download_run_once: False``.
* For K8s apps, pull policy is ``k8s_image_pull_policy: IfNotPresent``.
* For system managed containers, like kubelet or etcd, pull policy is ``download_always_pull: False``, which means pull only if the wanted repo and tag/sha256 digest differ from what the host already has.

There is also a "pull once, push many" mode:

* Setting ``download_run_once: True`` will make kubespray download container images and binaries only once and then push them to the cluster nodes. The default download delegate node is the first `kube-master`.
* Set ``download_localhost: True`` to make localhost the download delegate. This can be useful if cluster nodes cannot access external addresses. To use this requires that docker is installed and running on the ansible master and that the current user is either in the docker group or can do passwordless sudo, to be able to access docker.

NOTE: When `download_run_once` is true and `download_localhost` is false, all downloads will be done on the delegate node, including downloads for container images that are not required on that node. As a consequence, the storage required on that node will probably be more than if download_run_once was false, because all images will be loaded into the docker instance on that node, instead of just the images required for that node.

On caching:

* When `download_run_once` is `True`, all downloaded files will be cached locally in `download_cache_dir`, which defaults to `/tmp/kubespray_cache`. On subsequent provisioning runs, this local cache will be used to provision the nodes, minimizing bandwidth usage and improving provisioning time. Expect about 800MB of disk space to be used on the ansible node for the cache. Disk space required for the image cache on the kubernetes nodes is as much as is needed for the largest image, which is currently slightly less than 150MB.
* By default, if `download_run_once` is false, kubespray will not retrieve the downloaded images and files from the remote node to the local cache, or use that cache to pre-provision those nodes. To force the use of the cache, set `download_force_cache` to `True`.
* By default, cached images that are used to pre-provision the remote nodes will be deleted from the remote nodes after use, to save disk space. Setting `download_keep_remote_cache` will prevent the files from being deleted. This can be useful while developing kubespray, as it can decrease provisioning times. As a consequence, the required storage for images on the remote nodes will increase from 150MB to about 550MB, which is currently the combined size of all required container images.
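Putting the flags described above together, a sketch of a "pull once, push many" setup with local caching might look like this in your group vars (values are illustrative):

```yml
download_run_once: true           # pull/download once on the delegate, then push to the nodes
download_localhost: false         # keep the first kube-master as the delegate
download_force_cache: true        # reuse the local cache on later runs
download_keep_remote_cache: false # clean cached images from the remote nodes after use
download_cache_dir: /tmp/kubespray_cache
```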
Container images and binary files are described by the vars like ``foo_version``,
``foo_download_url``, ``foo_checksum`` for binaries and ``foo_image_repo``,
@@ -29,15 +27,16 @@ Container images may be defined by its repo and tag, for example:
`andyshinn/dnsmasq:2.72`. Or by repo and tag and sha256 digest:
`andyshinn/dnsmasq@sha256:7c883354f6ea9876d176fe1d30132515478b2859d6fc0cbf9223ffdc09168193`.

Note, the SHA256 digest and the image tag must be both specified and correspond
to each other. The given example above is represented by the following vars:
```yaml
dnsmasq_digest_checksum: 7c883354f6ea9876d176fe1d30132515478b2859d6fc0cbf9223ffdc09168193
dnsmasq_image_repo: andyshinn/dnsmasq
dnsmasq_image_tag: '2.72'
```
The full list of available vars may be found in the download's ansible role defaults. Those also allow specifying custom urls and local repositories for binaries and container
images. See also the DNS stack docs for the related intranet configuration,
so the hosts can resolve those urls and repos.
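For example, a hypothetical override pointing one binary and one image at an internal mirror could look like the snippet below; the exact variable names should be checked against the download role defaults, and the URLs are placeholders:

```yml
# Hypothetical mirror overrides; verify the exact var names in roles/download/defaults/main.yml
kubeadm_download_url: "https://mirror.example.local/kubernetes/kubeadm"
etcd_image_repo: "registry.example.local/coreos/etcd"
```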
@@ -46,7 +45,7 @@ so the hosts can resolve those urls and repos.
In case your servers don't have access to the internet (for example when deploying on premises with security constraints), you'll have, first, to setup the appropriate proxies/caches/mirrors and/or internal repositories and registries and, then, adapt the following variables to fit your environment before deploying:

* At least `foo_image_repo` and `foo_download_url` as described before (i.e. in case of use of proxies to registries and binaries repositories, checksums and versions do not necessarily need to be changed).
  NOTE: Regarding `foo_image_repo`, when using insecure registries/proxies, you will certainly have to append them to the `docker_insecure_registries` variable in group_vars/all/docker.yml
* `pyrepo_index` (and optionally `pyrepo_cert`)
* Depending on the `container_manager`
  * When `container_manager=docker`, `docker_foo_repo_base_url`, `docker_foo_repo_gpgkey`, `dockerproject_bar_repo_base_url` and `dockerproject_bar_repo_gpgkey` (where `foo` is the distribution and `bar` is system package manager)

@@ -1,9 +1,8 @@
# Flannel

* Flannel configuration file should have been created there

```ShellSession
cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.233.0.0/18
FLANNEL_SUBNET=10.233.16.1/24
@@ -13,7 +12,7 @@ FLANNEL_IPMASQ=false
* Check if the network interface has been created

```ShellSession
ip a show dev flannel.1
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether e2:f3:a7:0f:bf:cb brd ff:ff:ff:ff:ff:ff
@@ -25,7 +24,7 @@ ip a show dev flannel.1
* Try to run a container and check its ip address

```ShellSession
kubectl run test --image=busybox --command -- tail -f /dev/null
replicationcontroller "test" created
@@ -33,7 +32,7 @@ kubectl describe po test-34ozs | grep ^IP
IP: 10.233.16.2
```

```ShellSession
kubectl exec test-34ozs -- ip a show dev eth0
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
    link/ether 02:42:0a:e9:2b:03 brd ff:ff:ff:ff:ff:ff

@@ -1,8 +1,6 @@
# Getting started

## Building your own inventory

Ansible inventory can be stored in 3 formats: YAML, JSON, or INI-like. There is
an example inventory located
@@ -18,55 +16,65 @@ certain threshold. Run `python3 contrib/inventory_builder/inventory.py help` hel
Example inventory generator usage:

```ShellSession
cp -r inventory/sample inventory/mycluster
declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
CONFIG_FILE=inventory/mycluster/hosts.yml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
```

Then use `inventory/mycluster/hosts.yml` as inventory file.
## Starting custom deployment

Once you have an inventory, you may want to customize deployment data vars
and start the deployment:

**IMPORTANT**: Edit my\_inventory/groups\_vars/\*.yaml to override data vars:

```ShellSession
ansible-playbook -i inventory/mycluster/hosts.yml cluster.yml -b -v \
  --private-key=~/.ssh/private_key
```

See more details in the [ansible guide](ansible.md).
### Adding nodes

You may want to add worker, master or etcd nodes to your existing cluster. This can be done by re-running the `cluster.yml` playbook, or you can target the bare minimum needed to get kubelet installed on the worker and talking to your masters. This is especially helpful when doing something like autoscaling your clusters.

- Add the new worker node to your inventory in the appropriate group (or utilize a [dynamic inventory](https://docs.ansible.com/ansible/intro_dynamic_inventory.html)).
- Run the ansible-playbook command, substituting `cluster.yml` for `scale.yml`:
```ShellSession
ansible-playbook -i inventory/mycluster/hosts.yml scale.yml -b -v \
  --private-key=~/.ssh/private_key
```
### Remove nodes
You may want to remove **master**, **worker**, or **etcd** nodes from your
existing cluster. This can be done by re-running the `remove-node.yml`
playbook. First, all specified nodes will be drained, then some
kubernetes services are stopped and some certificates are deleted,
and finally the kubectl command is executed to delete these nodes.
This can be combined with the add node function. This is generally helpful
when doing something like autoscaling your clusters. Of course, if a node
is not working, you can remove the node and install it again.
Use `--extra-vars "node=<nodename>,<nodename2>"` to select the node(s) you want to delete.
```ShellSession
ansible-playbook -i inventory/mycluster/hosts.yml remove-node.yml -b -v \
  --private-key=~/.ssh/private_key \
  --extra-vars "node=nodename,nodename2"
```
If a node is completely unreachable by ssh, add `--extra-vars reset_nodes=no`
to skip the node reset step. If one node is unavailable, but others you wish
to remove are able to connect via SSH, you could set reset_nodes=no as a host
var in inventory.
## Connecting to Kubernetes
By default, Kubespray configures kube-master hosts with insecure access to
kube-apiserver via port 8080. A kubeconfig file is not necessary in this case,
@@ -88,8 +96,7 @@ file yourself.
For more information on kubeconfig and accessing a Kubernetes cluster, refer to
the Kubernetes [documentation](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/).
## Accessing Kubernetes Dashboard

As of kubernetes-dashboard v1.7.x:
@@ -106,8 +113,7 @@ Or you can run 'kubectl proxy' from your local machine to access dashboard in yo
It is recommended to access dashboard from behind a gateway (like Ingress Controller) that enforces an authentication token. Details and other access options here: <https://github.com/kubernetes/dashboard/wiki/Accessing-Dashboard---1.7.X-and-above>
## Accessing Kubernetes API

The main client of Kubernetes is `kubectl`. It is installed on each kube-master
host and can optionally be configured on your ansible host by setting
@@ -118,7 +124,9 @@ host and can optionally be configured on your ansible host by setting
You can see a list of nodes by running the following commands:

```ShellSession
cd inventory/mycluster/artifacts
./kubectl.sh get nodes
```

If desired, copy admin.conf to ~/.kube/config.

@@ -1,19 +1,18 @@
# HA endpoints for K8s

The following components require highly available endpoints:

* etcd cluster,
* kube-apiserver service instances.

The latter relies on a third-party reverse proxy, like Nginx or HAProxy, to
achieve the same goal.
## Etcd

The etcd clients (kube-api-masters) are configured with the list of all etcd peers. If the etcd-cluster has multiple instances, it's configured in HA already.
## Kube-apiserver

K8s components require a loadbalancer to access the apiservers via a reverse
proxy. Kubespray includes support for an nginx-based proxy that resides on each
@@ -50,7 +49,8 @@ provides access for external clients, while the internal LB accepts client
connections only to the localhost.

Given a frontend `VIP` address and `IP1, IP2` addresses of backends, here is
an example configuration for a HAProxy service acting as an external LB:

```raw
listen kubernetes-apiserver-https
  bind <VIP>:8383
  option ssl-hello-chk
@@ -66,7 +66,8 @@ listen kubernetes-apiserver-https
And the corresponding example global vars for such a "cluster-aware"
external LB with the cluster API access modes configured in Kubespray:

```yml
apiserver_loadbalancer_domain_name: "my-apiserver-lb.example.com"
loadbalancer_apiserver:
  address: <VIP>
@@ -102,13 +103,14 @@ exclusive to `loadbalancer_apiserver_localhost`.
Access API endpoints are evaluated automatically, as follows:

| Endpoint type                | kube-master      | non-master            | external              |
|------------------------------|------------------|-----------------------|-----------------------|
| Local LB (default)           | `https://bip:sp` | `https://lc:nsp`      | `https://m[0].aip:sp` |
| Local LB + Unmanaged here LB | `https://bip:sp` | `https://lc:nsp`      | `https://ext`         |
| External LB, no internal     | `https://bip:sp` | `https://lb:lp`       | `https://lb:lp`       |
| No ext/int LB                | `https://bip:sp` | `https://m[0].aip:sp` | `https://m[0].aip:sp` |
Where:

* `m[0]` - the first node in the `kube-master` group;
* `lb` - LB FQDN, `apiserver_loadbalancer_domain_name`;
* `ext` - Externally load balanced VIP:port and FQDN, not managed by Kubespray;
@@ -132,16 +134,19 @@ Kubespray, the masters' APIs are accessed via the insecure endpoint, which
consists of the local `kube_apiserver_insecure_bind_address` and
`kube_apiserver_insecure_port`.
## Optional configurations

### ETCD with a LB

In order to use an external loadbalancing (L4/TCP or L7 w/ SSL Passthrough VIP), the following variables need to be overridden in group_vars

* `etcd_access_addresses`
* `etcd_client_url`
* `etcd_cert_alt_names`
* `etcd_cert_alt_ips`

#### Example of a VIP w/ FQDN

```yaml
etcd_access_addresses: https://etcd.example.com:2379
etcd_client_url: https://etcd.example.com:2379

@@ -8,7 +8,8 @@
2. Add **forked repo** as submodule to desired folder in your existent ansible repo (for example 3d/kubespray):
```git submodule add https://github.com/YOUR_GITHUB/kubespray.git kubespray```
Git will create _.gitmodules_ file in your existent ansible repo:

```ini
[submodule "3d/kubespray"] [submodule "3d/kubespray"]
path = 3d/kubespray path = 3d/kubespray
url = https://github.com/YOUR_GITHUB/kubespray.git url = https://github.com/YOUR_GITHUB/kubespray.git
@@ -21,7 +22,8 @@
```git remote add upstream https://github.com/kubernetes-sigs/kubespray.git```
5. Sync your master branch with upstream:

```ShellSession
git checkout master
git fetch upstream
git merge upstream/master
@@ -33,19 +35,21 @@
***Never*** use master branch of your repository for your commits.
7. Modify path to library and roles in your ansible.cfg file (role naming should be unique, you may have to rename your existent roles if they have the same names as the kubespray project):

```ini
...
library = 3d/kubespray/library/
roles_path = 3d/kubespray/roles/
...
```
8. Copy and modify configs from kubespray `group_vars` folder to corresponding `group_vars` folder in your existent project.
You could rename *all.yml* config to something else, e.g. *kubespray.yml*, and create a corresponding group in your inventory file, which will include all hosts groups related to the kubernetes setup.
9. Modify your ansible inventory file by adding mapping of your existent groups (if any) to kubespray naming.
For example:

```ini
...
#Kargo groups:
[kube-node:children]
@@ -65,54 +69,62 @@ You could rename *all.yml* config to something else, i.e. *kubespray.yml* and cr
[kubespray:children]
kubernetes
```
* The last entry here is needed to apply the kubespray.yml config file, renamed from all.yml of the kubespray project.
10. Now you can include kubespray tasks in your existent playbooks by including cluster.yml file:

```yml
- name: Include kubespray tasks
  include: 3d/kubespray/cluster.yml
```
Or you could copy separate tasks from cluster.yml into your ansible repository.
11. Commit changes to your ansible repo. Keep in mind that the submodule folder is just a link to the git commit hash of your forked repo.
When you update your "work" branch you need to commit changes to the ansible repo as well.
Other members of your team should use ```git submodule sync```, ```git submodule update --init``` to get the actual code from the submodule.
## Contributing

If you made useful changes or fixed a bug in the existent kubespray repo, use this flow for PRs to the original kubespray repo.

1. Sign the [CNCF CLA](https://git.k8s.io/community/CLA.md).
2. Change working directory to git submodule directory (3d/kubespray).
3. Setup desired user.name and user.email for submodule.
If kubespray is the only submodule in your repo you could use something like:
```git submodule foreach --recursive 'git config user.name "First Last" && git config user.email "your-email-addres@used.for.cncf"'```
4. Sync with upstream master:

```ShellSession
git fetch upstream
git merge upstream/master
git push origin master
```
5. Create new branch for the specific fixes that you want to contribute:
```git checkout -b fixes-name-date-index```
Branch name should be self-explanatory to you; adding a date and/or index will help you track/delete your old PRs.
6. Find git hash of your commit in "work" repo and apply it to newly created "fix" repo:

```ShellSession
git cherry-pick <COMMIT_HASH>
```
7. If you have several temporary-stage commits - squash them using [```git rebase -i```](http://eli.thegreenplace.net/2014/02/19/squashing-github-pull-requests-into-a-single-commit)
Also you could use interactive rebase (```git rebase -i HEAD~10```) to delete commits which you don't want to contribute into the original repo.
8. When your changes are in place, you need to check the upstream repo one more time because it could have changed during your work.
Check that you're on the correct branch:
```git status```
And pull changes from upstream (if any):
```git pull --rebase upstream master```
9. Now push your changes to your **fork** repo with ```git push```. If your branch doesn't exist on github, git will propose you to use something like ```git push --set-upstream origin fixes-name-date-index```.
10. Open your forked repo in browser, on the main page you will see a proposal to create a pull request for your newly created branch. Check the proposed diff of your PR. If something is wrong you could safely delete the "fix" branch on github using ```git push origin --delete fixes-name-date-index```, ```git branch -D fixes-name-date-index``` and start the whole process from the beginning.
If everything is fine - add a description about your changes (what they do and why they're needed) and confirm pull request creation.

docs/kube-ovn.md (new file)
@@ -0,0 +1,49 @@
# Kube-OVN
Kube-OVN integrates the OVN-based Network Virtualization with Kubernetes. It offers an advanced Container Network Fabric for Enterprises.
For more information please check [Kube-OVN documentation](https://github.com/alauda/kube-ovn)
## How to use it
Enable kube-ovn in `group_vars/k8s-cluster/k8s-cluster.yml`
```yml
...
kube_network_plugin: kube-ovn
...
```
## Verifying kube-ovn install
Kube-OVN runs ovn and the controller in the `kube-ovn` namespace
* Check the status of kube-ovn pods
```ShellSession
# From the CLI
kubectl get pod -n kube-ovn
# Output
NAME READY STATUS RESTARTS AGE
kube-ovn-cni-49lsm 1/1 Running 0 2d20h
kube-ovn-cni-9db8f 1/1 Running 0 2d20h
kube-ovn-cni-wftdk 1/1 Running 0 2d20h
kube-ovn-controller-68d7bb48bd-7tnvg 1/1 Running 0 2d21h
ovn-central-6675dbb7d9-d7z8m 1/1 Running 0 4d16h
ovs-ovn-hqn8p 1/1 Running 0 4d16h
ovs-ovn-hvpl8 1/1 Running 0 4d16h
ovs-ovn-r5frh 1/1 Running 0 4d16h
```
* Check the default and node subnet
```ShellSession
# From the CLI
kubectl get subnet
# Output
NAME PROTOCOL CIDR PRIVATE NAT
join IPv4 100.64.0.0/16 false false
ovn-default IPv4 10.16.0.0/16 false true
```

@@ -1,5 +1,4 @@
# Kube-router

Kube-router is an L3 CNI provider; as such it will set up IPv4 routing between
nodes to provide Pods' network reachability.
@@ -12,7 +11,7 @@ Kube-router runs its pods as a `DaemonSet` in the `kube-system` namespace:
* Check the status of kube-router pods

```ShellSession
# From the CLI
kubectl get pod --namespace=kube-system -l k8s-app=kube-router -owide
@@ -29,7 +28,7 @@ kube-router-x2xs7 1/1 Running 0 2d 192.168.186.10 my
* Peek at kube-router container logs:

```ShellSession
# From the CLI
kubectl logs --namespace=kube-system -l k8s-app=kube-router | grep Peer.Up
@@ -56,24 +55,24 @@ You need to `kubectl exec -it ...` into a kube-router container to use these, se
## Kube-router configuration

You can change the default configuration by overriding `kube_router_...` variables
(as found at `roles/network_plugin/kube-router/defaults/main.yml`),
these are named to follow `kube-router` command-line options as per
<https://www.kube-router.io/docs/user-guide/#try-kube-router-with-cluster-installers>.
## Advanced BGP Capabilities

<https://github.com/cloudnativelabs/kube-router#advanced-bgp-capabilities>
If you have other networking devices or SDN systems that talk BGP, kube-router will fit in perfectly.
From a simple full node-to-node mesh to per-node peering configurations, most routing needs can be attained.
The configuration is Kubernetes native (annotations) just like the rest of kube-router.
For more details please refer to <https://github.com/cloudnativelabs/kube-router/blob/master/docs/bgp.md>.

The following options will set up annotations for kube-router, using the `kubectl annotate` command.

```yml
kube_router_annotations_master: []
kube_router_annotations_node: []
kube_router_annotations_all: []

@@ -26,7 +26,7 @@ By default the normal behavior looks like:
> etcd in 6-7 seconds or even longer when etcd cannot commit data to quorum
> nodes.
## Failure

Kubelet will try to make `nodeStatusUpdateRetry` post attempts. Currently
`nodeStatusUpdateRetry` is constantly set to 5 in
@@ -50,7 +50,7 @@ Kube proxy has a watcher over API. Once pods are evicted, Kube proxy will
notice and will update iptables of the node. It will remove endpoints from
services so pods from the failed node won't be accessible anymore.
## Recommendations for different cases

## Fast Update and Fast Reaction

docs/macvlan.md (new file)
@@ -0,0 +1,41 @@
# Macvlan
## How to use it
* Enable macvlan in `group_vars/k8s-cluster/k8s-cluster.yml`
```yml
...
kube_network_plugin: macvlan
...
```
* Adjust the `macvlan_interface` in `group_vars/k8s-cluster/k8s-net-macvlan.yml` or by host in the `host.yml` file:
```yml
all:
hosts:
node1:
ip: 10.2.2.1
access_ip: 10.2.2.1
ansible_host: 10.2.2.1
macvlan_interface: ens5
```
## Issues encountered

* Service DNS

If you get a "reply from unexpected source" error when resolving services,
add `kube_proxy_masquerade_all: true` in `group_vars/all/all.yml` (see the sketch after this list).

* Disable nodelocaldns

The nodelocal dns IP is not reachable.

Disable it in `sample/group_vars/k8s-cluster/k8s-cluster.yml`
```yml
enable_nodelocaldns: false
```
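For the Service DNS workaround above, a minimal sketch of the corresponding override in `group_vars/all/all.yml`:

```yml
kube_proxy_masquerade_all: true   # masquerade service traffic so DNS replies come from the expected source
```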

@@ -1,5 +1,4 @@
# Multus

Multus is a meta CNI plugin that provides multiple network interface support to
pods. For each interface, Multus delegates CNI calls to secondary CNI plugins
@@ -10,17 +9,19 @@ See [multus documentation](https://github.com/intel/multus-cni).
## Multus installation

Since Multus itself does not implement networking, it requires a master plugin, which is specified through the variable `kube_network_plugin`. To enable Multus an additional variable `kube_network_plugin_multus` must be set to `true`. For example,

```yml
kube_network_plugin: calico
kube_network_plugin_multus: true
```
will install Multus and Calico and configure Multus to use Calico as the primary network plugin.

## Using Multus

Once Multus is installed, you can create CNI configurations (as CRD objects) for additional networks; in this case a macvlan CNI configuration is defined. You may replace the config field with any valid CNI configuration where the CNI binary is available on the nodes.

```ShellSession
cat <<EOF | kubectl create -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
@@ -48,7 +49,7 @@ EOF
You may then create a pod with an additional interface that connects to this network using annotations. The annotation correlates to the name in the NetworkAttachmentDefinition above.

```ShellSession
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
@@ -66,8 +67,8 @@ EOF
You may now inspect the pod and see that there is an additional interface configured:

```ShellSession
kubectl exec -it samplepod -- ip a
```

For more details on how to use Multus, please visit <https://github.com/intel/multus-cni>

@@ -1,5 +1,4 @@
# Network Checker Application

With the ``deploy_netchecker`` var enabled (defaults to false), Kubespray deploys a
Network Checker Application from the third-party `l23network/k8s-netchecker` docker
@@ -14,14 +13,17 @@ logs.
To get the most recent and cluster-wide network connectivity report, run from
any of the cluster nodes:
```ShellSession
curl http://localhost:31081/api/v1/connectivity_check
```
Note that Kubespray does not invoke the check but only deploys the application, if
requested.
There are related application specific variables:

```yml
netchecker_port: 31081
agent_report_interval: 15
netcheck_namespace: default
@@ -33,7 +35,7 @@ combination of the ``netcheck_namespace.dns_domain`` vars, for example the
to the non default namespace, make sure as well to adjust the ``searchdomains`` var
so that the resulting search domain records contain that namespace, like:
```yml
search: foospace.cluster.local default.cluster.local ...
nameserver: ...
```

@@ -5,6 +5,9 @@ To deploy kubespray on [OpenStack](https://www.openstack.org/) uncomment the `cl
After that make sure to source in your OpenStack credentials like you would do when using `nova-client` or `neutron-client` by using `source path/to/your/openstack-rc` or `. path/to/your/openstack-rc`.
For those who prefer to pass the OpenStack CA certificate as a string, one can
base64 encode the cacert file and store it in the variable `openstack_cacert`.
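A minimal sketch of what that could look like in group vars, assuming the CA bundle sits at `/etc/ssl/certs/openstack-ca.pem` on the ansible host (the path is a placeholder):

```yml
openstack_cacert: "{{ lookup('file', '/etc/ssl/certs/openstack-ca.pem') | b64encode }}"
```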
The next step is to make sure the hostnames in your `inventory` file are identical to your instance names in OpenStack.
Otherwise [cinder](https://wiki.openstack.org/wiki/Cinder) won't work as expected.

@@ -1,11 +1,10 @@
# openSUSE Leap 15.0 and Tumbleweed

openSUSE Leap installation Notes:

- Install Ansible

```ShellSession
sudo zypper ref
sudo zypper -n install ansible
@@ -15,5 +14,4 @@ openSUSE Leap installation Notes:
```sudo zypper -n install python-Jinja2 python-netaddr```

Now you can continue with [Preparing your deployment](getting-started.md#starting-custom-deployment)

@@ -1,5 +1,4 @@
# Packet

Kubespray provides support for bare metal deployments using the [Packet bare metal cloud](http://www.packet.com).
Deploying upon bare metal allows Kubernetes to run at locations where an existing public or private cloud might not exist such
@@ -37,10 +36,11 @@ Terraform is required to deploy the bare metal infrastructure. The steps below a
[More terraform installation options are available.](https://learn.hashicorp.com/terraform/getting-started/install.html)

Grab the latest version of Terraform and install it.

```bash
echo "https://releases.hashicorp.com/terraform/$(curl -s https://checkpoint-api.hashicorp.com/v1/check/terraform | jq -r -M '.current_version')/terraform_$(curl -s https://checkpoint-api.hashicorp.com/v1/check/terraform | jq -r -M '.current_version')_darwin_amd64.zip" echo "https://releases.hashicorp.com/terraform/$(curl -s https://checkpoint-api.hashicorp.com/v1/check/terraform | jq -r -M '.current_version')/terraform_$(curl -s https://checkpoint-api.hashicorp.com/v1/check/terraform | jq -r -M '.current_version')_darwin_amd64.zip"
sudo yum install unzip sudo yum install unzip
sudo unzip terraform_0.11.11_linux_amd64.zip -d /usr/local/bin/ sudo unzip terraform_0.12.12_linux_amd64.zip -d /usr/local/bin/
``` ```
## Download Kubespray
@@ -67,8 +67,9 @@ Details about the cluster, such as the name, as well as the authentication token
for Packet need to be defined. To find these values see [Packet API Integration](https://support.packet.com/kb/articles/api-integrations)

```bash
vi cluster.tfvars
```
* cluster_name = alpha
* packet_project_id = ABCDEFGHIJKLMNOPQRSTUVWXYZ123456
* public_key_path = 12345678-90AB-CDEF-GHIJ-KLMNOPQRSTUV
@@ -84,7 +85,7 @@ terraform init ../../contrib/terraform/packet/
Run Terraform to deploy the hardware.

```bash
terraform apply -var-file=cluster.tfvars ../../contrib/terraform/packet
```
## Run Kubespray Playbooks
@@ -94,4 +95,3 @@ With the bare metal infrastructure deployed, Kubespray can now install Kubernete
```bash
ansible-playbook --become -i inventory/alpha/hosts cluster.yml
```

Some files were not shown because too many files have changed in this diff.