Compare commits


724 Commits

Author SHA1 Message Date
rptaylor
6f97687d19 Release 2.8 robust san handling (#4478)
* robust handling of API server SANs for 2.8 branch

* use apiserver_loadbalancer_domain_name if it is defined, according to PR 3977
2019-04-10 04:30:15 -07:00
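A minimal inventory sketch of the variable referenced above, assuming the standard Kubespray group_vars layout; the domain value is purely illustrative:

```yaml
# inventory/.../group_vars/all.yml (illustrative)
# When defined, this name is included in the kube-apiserver certificate
# SANs, so clients reaching the API through the load balancer pass TLS
# hostname verification.
apiserver_loadbalancer_domain_name: "lb-apiserver.example.local"
```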
Daniel Werdermann
447605ca0e Add oidc prefixes to kubeadm templates (#4462) 2019-04-09 01:07:06 -07:00
Bort Verwilst
3901480bc1 go to k8s 1.12.7 (#4400) 2019-03-28 06:20:46 -07:00
Rong Zhang
c42cb8f9b2 Merge pull request #4373 from prabdeb/prabdeb-fix-2.8-volume-mount-apiServer
Fixing volume mount issue for apiServerExtraVolumes in kubeadm-config.v1alpha2.yaml.j2
2019-03-27 14:59:45 +08:00
Prabal Deb (prabdeb)
5c28bb0679 Fixing volume issue 2019-03-20 21:34:44 +05:30
Bort Verwilst
6d53229986 Make 1.12.6 the default k8s release (#4306) 2019-03-04 00:38:56 -08:00
Xavi
1e57d2e21a [SECURITY] Docker patches for CVE-2019-5736 Ubuntu Bionic (#4267) 2019-02-19 03:19:36 -08:00
Bort Verwilst
ea41fc5e74 backport cve-2019-5736 to release-2.8 (#4234)
* [SECURITY] Docker patches for CVE-2019-5736 (#4223)

This updates docker 18.06 and 18.09 with the two patches released
yesterday to address the new runc exploit. Details here:
https://kubernetes.io/blog/2019/02/11/runc-and-cve-2019-5736/

* keep edge versions to same minor

* keep edge versions to same minor
2019-02-14 00:55:54 -08:00
Bort Verwilst
4167807f17 Upgrade to 1.12.5 (#4066) 2019-01-18 06:30:36 -08:00
Bort Verwilst
2ac1c7562f More Feature/2.8 backports for 2.8.1 (#3911)
* Move node-cidr-mask-size to ControllerManagerextraArgs (#3845)

* Fix apiServerCertSANs in kubeadm config file (#3839)

* Backport #3908

* Update kubernetes to 1.12.4
2018-12-25 21:43:03 -08:00
Bort Verwilst
2d6e31d281 Backport of fixes to release-2.8 for 2.8.1? (#3897)
* Fix assertion for standalone etcd nodes (#3847)

* Fix error with ipvs on cluster reset task (#3848)

* Reset: Check for kube-ipvs0 presence before removing it (#3816)
2018-12-18 05:29:58 -08:00
Andreas Kruger
0a19d1bf01 Update current release in README 2018-12-03 20:04:31 +01:00
Andreas Krüger
432f8e9841 Fix basic auth tokens for kubeadm deployment. (#3801)
* Fix basic auth tokens for kubeadm deployment.

* Tokens should be a dependency on masters, not nodes
2018-12-03 10:44:29 -08:00
Erwan Miran
19792cfae7 Remove iface kube-ipvs0 on reset when kube_proxy_mode is ipvs (#3802) 2018-12-03 10:38:51 -08:00
Andreas Krüger
9463b70edd Cleanup defaults file from kubernetes-apps and add dashboard to download role (#3800)
* Remove variables defined in download role. Fixes #3799

* Cleanup some more variables

* Fix bad templating

* Minor fix

* Add dashboard to download role. Fixes #3736
2018-12-03 10:29:42 -08:00
karbyshevds
b109f52dab Set configure-cloud-routes=false as default if no network plugin is used (#3788)
* Set configure-cloud-routes=false as default if no network plugin is used

As the default value of configure-cloud-routes is `true`, it needs to be set to `false` when not required, to avoid error messages like:
"Couldn't reconcile node routes: error listing routes: unable to find route table for AWS cluster"
on, for example, AWS installations that don't use cloud native routing.

* Update kube-controller-manager.manifest.j2

remove extra spaces
2018-12-03 05:04:03 -08:00
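A hedged sketch of the result; the exact template differs, but the effect is the controller-manager starting with the flag disabled:

```yaml
# kube-controller-manager pod manifest (illustrative excerpt)
spec:
  containers:
    - name: kube-controller-manager
      command:
        - kube-controller-manager
        # false unless a network plugin actually needs cloud-managed routes
        - --configure-cloud-routes=false
```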
Rong Zhang
e0781483fa Use download binary instead of copying from the container (#3786) 2018-12-03 02:22:17 -08:00
Andreas Krüger
ffcea384a6 Merge pull request #3773 from toddnni/disable_facts_from_deprecation_notice
Disable gather_facts from non-kubeadm deprecation notice
2018-12-03 10:29:15 +01:00
Wong Hoi Sing Edison
deff6a82fa ingress-nginx: Upgrade to 0.21.0 (#3789)
Upstream Changes:

  - ingress-nginx 0.21.0 (https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.21.0)

Our Changes:

  - Sync templates with upstream changes
  - Remove --default-backend-service requirement. Use the flag only for custom default backends
2018-11-30 02:48:50 -08:00
Toni Ylenius
919a268de3 Disable gather_facts from non-kubeadm deprecation notice
fact gathering causes errors when using become (-b) and there is no sudo access
locally
2018-11-29 18:35:12 +02:00
Chad Swenson
487cfa5e6c Add options for configuring control plane component extra volumes (#3779)
This takes care of a few arbitrary use cases that may require custom mounts
inside of apiserver, controller manager, or scheduler.
2018-11-28 23:16:55 -08:00
Andreas Krüger
5fcda86f8c Update gitlab to new repository (#3784) 2018-11-28 06:13:28 -08:00
Joost Cassee
f2635776cd Make Calico Felix log level configurable (#3781) 2018-11-28 00:55:01 -08:00
Aivars Sterns
d30dbdde23 Update all kubernetes-incubator/kubespray refs to kubernetes-sigs/kubespray (#3780) 2018-11-28 09:15:25 +01:00
Chad Swenson
b59d5c35bc Fix kubeadm_controller_extra_args (#3778) 2018-11-27 19:30:43 -08:00
Michal Belica
8331f7b056 Add support for setting custom node taints (#3774)
Introduced the variable node_taints, which can be set in the inventory for
specific hosts or in group_vars; it generates the --register-with-taints
command line argument for kubelet.
2018-11-27 15:56:49 -08:00
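An illustrative host_vars entry; the taint names are examples only, each in the key=value:Effect form that kubelet's --register-with-taints flag expects:

```yaml
# host_vars/node1.yml (illustrative)
node_taints:
  - "dedicated=ingress:NoSchedule"
  - "experimental=true:PreferNoSchedule"
# kubelet is then started with:
#   --register-with-taints=dedicated=ingress:NoSchedule,experimental=true:PreferNoSchedule
```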
Andreas Krüger
92274a74f7 Merge pull request #3777 from kubernetes-sigs/woopstar-patch-1
Fix path to Kubespray in CI auth check
2018-11-27 22:34:17 +01:00
Andreas Krüger
1739c479ed Fix path to Kubespray in CI auth check 2018-11-27 22:32:44 +01:00
Erwan Miran
551317f1cd Fix docker_options jinja syntax (#3770) 2018-11-27 07:13:15 -08:00
Rong Zhang
ddc19f43ba Add cloud provider config to kubeadm deployments (#3766) 2018-11-27 05:03:03 -08:00
Michal Belica
993b8e2791 Add support to set tolerations for ingress-nginx (#3742)
Introduced variable `ingress_nginx_tolerations` to set custom
tolerations for Ingress nginx daemonset, to be able to schedule
ingress-nginx on dedicated nodes with taints.
2018-11-27 03:30:16 -08:00
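An illustrative value for the new variable; the taint key and value are assumptions that must match whatever taint the dedicated nodes carry:

```yaml
ingress_nginx_tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "ingress"
    effect: "NoSchedule"
```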
Egor
9a5438ce2f Fix kubeadm-config: add kube_network_node_prefix (#3761) 2018-11-27 00:12:16 -08:00
Erwan Miran
d33434647b Fix node selector for contiv etcd proxy (#3765) 2018-11-27 00:10:33 -08:00
Rong Zhang
02169e8f85 Upgrade kubernetes to 1.12.3 (#3767) 2018-11-26 23:22:15 -08:00
Aivars Sterns
b07e93e08b Merge pull request #3754 from MiaoZhou/fix-aws-node-label-error
Fix AWS Node Labels Error
2018-11-27 09:09:54 +02:00
Andreas Krüger
bad886ca9b Update defaults to match k8s 1.12 suggestions (#3760)
* Update defaults to match k8s 1.12 suggestions

* Test if Netchecker works with node ip instead of localhost

* Update defaults to ipvs and coredns

* Update defaults for kube_apiserver_insecure_port

* Update main.yaml
2018-11-26 15:36:39 -08:00
okamototk
967a042321 Add flag to deploy container engine manually. (#3753)
This feature was removed by PR #3061; this restores it, renaming the flag manage_docker to deploy_container_engine.
2018-11-26 07:26:40 -08:00
Miao Zhou
a585318b1a Fix Sync Container Permission (#3752)
When `ansible_user` is not root and the `-b` option is used,
with `download_run_once` and `download_localhost` set to `true`,

Ansible executes the `container_download | upload container images to nodes` task.

It uses rsync to upload images to `/tmp/release/container/`, but the
`container` directory is owned by `root`.
Rong Zhang
07d2f1aa36 Add some warning information about deprecating non-kubeadm code (#3759) 2018-11-26 01:17:31 -08:00
Erwan Miran
b15e685a0b sysctl related PodSecurityPolicy spec since 1.12 (#3743) 2018-11-26 00:13:51 -08:00
Miao Zhou
885c6cff71 Fix AWS Node Labels Error
Now the `kubespray-aws-inventory.py` script always sets a node_labels key
on ansible_host.

When the AWS instance has no label properties set, the value is an empty
string.

The TASK `Write kubelet config file (kubeadm or non-kubeadm)` then
fails with:

`AnsibleUndefinedVariable: 'unicode object' has no attribute 'items'`.
2018-11-23 17:37:41 +08:00
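A hedged sketch of the kind of template guard that avoids this failure, iterating labels only when the value is really a mapping; the flag and variable handling are illustrative, not the exact kubelet template:

```yaml
# kubelet template (illustrative Jinja excerpt)
# {% if node_labels is defined and node_labels is mapping %}
# --node-labels={% for k, v in node_labels.items() %}{{ k }}={{ v }}{{ ',' if not loop.last }}{% endfor %}
# {% endif %}
```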
okamototk
c5e425b02b Support Metrics Server as addon (#3560). (#3563)
* Support Metrics Server as addon (#3560).

* Update metrics server v0.3.1.

* Add metrics server test.

* Replace metrics server manifests with kubernetes/cluster/addons's.

* Modify metrics server manifests for kubespray.

* Follow PR#3558 node label node-role.kubernetes.io/master change

* Fix metrics server parameters base_metrics_server_... to metrics_server_...

* Fix hard-coded metrics_server_memory_per_node

* Add configurable insecure tls for metrics-apiservice

* Downloadable addon-resizer and extract parameter as variables

* Remove metrics server version from deployment name

* Metrics Server works when all masters have the node role

* Download metrics-server and addon-resizer containers only on masters

* ServiceAccount and ConfigMap are separated; fix application name

* Remove old metrics server clusterrole template

* Fix addon-resizer image specification

* Make InternalIP default for metrics_server_kubelet_preferred_address_types

Make InternalIP the default because multiple preferred address types do not work.
2018-11-23 00:36:21 -08:00
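A sketch of the resulting default, assuming the variable takes a comma-separated list of address types tried in order:

```yaml
# a single address type sidesteps the broken multi-type lookup noted above
metrics_server_kubelet_preferred_address_types: "InternalIP"
```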
Egor
3fa81bb86e Fix dns-autoscaler nodeAffinity: set to empty (#3747) 2018-11-22 05:29:09 -08:00
Egor
5daadc022d Fix: nodeAffinity for coredns-deployment and kubedns-deployment (#3746) 2018-11-22 05:27:25 -08:00
Rong Zhang
0cfcd39d55 Switch to kubeadm deployment mode (#3461)
* Switch to kubeadm deployment mode

Discussion: https://github.com/kubernetes-incubator/kubespray/issues/3301

* Add non-kubeadm upgrade to kubeadm cluster
2018-11-21 01:35:40 -08:00
Aivars Sterns
7875c38023 Merge pull request #3663 from gfleury/patch-1
Update getting-started.md
2018-11-21 10:14:51 +02:00
Wong Hoi Sing Edison
edfec26988 cert-manager: Upgrade to 0.5.2 (#3741)
Upstream Changes:

-   cert-manager 0.5.2 (https://github.com/jetstack/cert-manager/releases/tag/v0.5.2)

Our Changes:

-   Templates sync with upstream manifests
2018-11-20 05:13:01 -08:00
Matthew Mosesohn
daa290100c Fix helper script to refer to admin.conf as relative path (#3738) 2018-11-19 18:28:51 -08:00
Rong Zhang
b4eb25197b Merge pull request #3730 from elementyang/pr-docker-options
fix: modify deprecated --graph flag
2018-11-20 10:23:16 +08:00
Matthew Mosesohn
ac00d23b80 Skip etcd upgrade steps in kubeadm because it is not used (#3737) 2018-11-19 06:29:58 -08:00
Danny Kulchinsky
9ae2eefb9a Add resource-container flag to kube-proxy manifest (#3519)
* Add resource-container flag to kube-proxy manifest

* add resourceContainer: "" to kubeadm kube-proxy configs
2018-11-19 00:39:29 -08:00
Andreas Krüger
8c18f053aa Fix DNS Autoscaler for coredns_dual deployment (#3726)
* Fix DNS Autoscaler for coredns_dual deployment

* Fix templating

* Fix templating again
2018-11-19 00:35:53 -08:00
Oleg Dolya
2aefa25448 fix args peer router ips and asns (#3644) 2018-11-19 00:34:05 -08:00
Andreas Krüger
6e01c1e377 Fix missing run_once (#3733) 2018-11-18 21:39:29 -08:00
Rong Zhang
3c6ee19785 Merge pull request #3731 from riverzhang/suse
Fix OpenSuse set hostname
2018-11-17 21:33:58 +08:00
rongzhang
0e2d3fb923 Fix OpenSuse set hostname 2018-11-17 20:41:07 +08:00
Zohar Mamedov
af5e05d08d etcd_log_package_levels for /etc/etcd.env (#3700) 2018-11-16 23:59:40 -08:00
marcstreeter
c83bfc9df6 fix dns_prevent_single_point_failure variable (#3728)
A comparison that happens during `TASK [kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template]`, where the `dns-autoscaler` template is deployed, causes CoreDNS to fail deployment. The error is caused by the variable `dns_prevent_single_point_failure`, where an integer is compared with a string. The resulting error:

```bash
'>' not supported between instances of 'int' and 'str'
```

prevents successful deployment of CoreDNS.

The change makes the comparison happen between integers and allows the CoreDNS deployment to succeed.
2018-11-16 23:57:47 -08:00
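A before/after sketch of the comparison, assuming the value can arrive as a string (for example via --extra-vars); the exact template line may differ:

```yaml
# dns-autoscaler template (illustrative Jinja excerpt)
# before -- fails when the value is the string "1":
#   {% if dns_prevent_single_point_failure > 0 %}
# after -- both sides are integers:
#   {% if dns_prevent_single_point_failure | int > 0 %}
```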
elementyang
1ebb670141 fix: modify deprecated --graph flag 2018-11-17 14:22:14 +08:00
Johnny Halfmoon
9d0786cbb0 cleaned up vagrantfile (#3717) 2018-11-16 15:28:29 +01:00
Johnny Halfmoon
53bde23a5e fixed ansible include/import inheritance issue (#3716) 2018-11-16 04:33:23 -08:00
Ryler Hockenbury
187798086a Tag win_nodes roles with master (#3704)
* Tag win_nodes roles with master

* Dummy change
2018-11-16 04:01:48 -08:00
Erwan Miran
1540bc9759 Fix patch type in kubectl patch for hostnameOverride (#3725) 2018-11-16 02:35:02 -08:00
Johnny Halfmoon
618ab93b42 added rpm caching for to docker repo (#3718) 2018-11-16 02:33:23 -08:00
Andreas Krüger
5ba67c55a2 Merge pull request #3721 from kubernetes-incubator/readme-fix
Update README with correct versions
2018-11-15 20:46:27 +01:00
Andreas Krüger
d8ad9aedad Update README with correct versions
README contains wrong versions. Let's fix that.
2018-11-15 19:52:49 +01:00
Erwan Miran
3e6d0a50e8 Addition of the missing patch file hostnameOverride-patch.json from PR#3708 (#3714) 2018-11-15 10:37:57 -08:00
Matthew Mosesohn
ff09141a14 Retry kubeadm proxy and secondary master init tasks (#3715)
Due to suboptimal external loadbalancer configs, the LoadBalancer
might point to a downed kube-apiserver that is not set up yet.
2018-11-15 10:03:23 -08:00
Arslanbekov Denis
d188876a91 Added feature-gates flags in kubelet.env (for kubeadm) (#3713) 2018-11-15 10:01:53 -08:00
Andreas Krüger
6f6274d0d9 Update CoreDNS, KubeDNS and Autoscaler to newest templates (#3711)
* Update DNS Autoscaler to latest

* Update CoreDNS to latest

* Update KubeDNS to latest

* Add KubeDNS config map

* Fix filename

* Add missing selector to DNS Autoscaler

* Add missing tolerations
2018-11-15 09:52:12 -08:00
Evgeny Zislis
29ee581067 pin hvac version to 0.6.4 (#3692) 2018-11-15 02:08:33 -08:00
Andreas Krüger
17f07e2613 Enable DNS AutoScaler for CoreDNS (#3707)
* Enable AutoScaler for CoreDNS

* Only use one template for dns autoscaler

* Rename a few variables for replicas and minimum pods

* Rename a few variables for replicas and minimum pods

* Remove replicas to make autoscale work

* Cleanup kubedns-autoscaler as it has been renamed
2018-11-15 01:28:03 -08:00
Wong Hoi Sing Edison
9ebdf0e3cf weave: Upgrade to 2.5.0 (#3660)
* weave: Upgrade to 2.5.0

Upstream Changes:

-   weave 2.5.0 (https://github.com/weaveworks/weave/releases/tag/v2.5.0)
-   Adds support for Kubernetes `hostPort` mapping
-   Adds support for Kubernetes `ipBlock` NetworkPolicy feature

Our Changes:

-   Templates sync with upstream manifests
-   Remove legacy nodePort fix

* BC for weave < 2.5.0
2018-11-14 23:38:51 -08:00
Andreas Krüger
730caa3d58 Add PriorityClasses on the last master. (#3706) 2018-11-14 15:59:20 -08:00
Mark Eisenblätter
7deb842030 calico-node: add prometheus annotations (#3645)
add prometheus annotations to calico-node if
calico_felix_prometheusmetricsenabled is enabled.

This will allow a kubernetes_sd to automatically find the pods and start
scraping.
2018-11-14 15:01:35 -08:00
Jack Zhou
5f7d5e1e80 ansible: Fix version check in remove_node.yml (#3628) 2018-11-14 14:48:10 -08:00
Andreas Krüger
931c76e58f Add DNS entries to node certs (#3710) 2018-11-14 13:58:17 -08:00
Erwan Miran
3fafa583d1 hostnameOverride on a per-node basis (#3708) 2018-11-14 09:37:53 -08:00
Ryler Hockenbury
d8e9b0f675 Netchecker version and namespace (#3705)
* Revert netchecker image and version

* Create namespace for netchecker

* Remove extra slashes
2018-11-14 09:27:45 -08:00
Andreas Krüger
846c7a26e8 Merge pull request #3582 from LinuxGit/Louis/gen_tags-script
Fix gen_tags.sh script
2018-11-14 09:42:19 +01:00
Dann
98d766c68e Moves apiserver port to bindPort when using controlPlaneEndpoint (#3449) 2018-11-14 00:23:30 -08:00
Mateus Caruccio
087b7fa38e Set node labels from AWS instance tag (#3544)
Set Kubernetes node labels from a comma-separated list of `key=value` pairs in the AWS instance tag `kubespray-node-labels`.
2018-11-14 00:21:56 -08:00
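A sketch of the mapping with a hypothetical tag value:

```yaml
# EC2 tag (illustrative):
#   kubespray-node-labels = "tier=frontend,env=prod"
# which the dynamic inventory turns into the equivalent of:
node_labels:
  tier: frontend
  env: prod
```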
Mateus Caruccio
92877e8bf8 Avoid gather_facts on scale.yml (#3631)
The use case is when Kubespray runs as an unprivileged user without sudo permission.
2018-11-13 13:47:21 -08:00
Bort Verwilst
d3ef41b603 Upgrade helm from 2.9 to 2.11 (#3638) 2018-11-13 11:24:29 -08:00
Arnaud MAZIN
633bfa7ebc Bring static tokens and user back to 1.12 (#3593) 2018-11-13 10:25:59 -08:00
Egor
13af4c1f40 remove-node: fix assert for ansible_version (#3703) 2018-11-13 16:18:31 +01:00
Andreas Krüger
afc3f7dce4 Create certificates for each node too (#3698) 2018-11-13 07:10:59 -08:00
Ryler Hockenbury
e8901a2422 Apply linux node selector to coreDNS deployment (#3688)
* Apply linux node selector to coreDNS deployment

* Remove comment before linux node selector on manifests

* mend
2018-11-13 04:54:15 -08:00
Wilmar den Ouden
c888de8b38 fix: Coredns tag wasn't updated in #3619 (#3634) 2018-11-13 00:30:29 -08:00
Miao Zhou
fefa1670a6 fix calico_version wrong get (#3694)
the ':' causes a wrong calico_version to be returned after calicoctl is downloaded and before the cluster is up
2018-11-12 07:35:21 -08:00
Antoine Legrand
589d22da0b Update ha-mode.md (#3696)
* Update ha-mode.md
2018-11-12 11:49:23 +01:00
Antoine Legrand
3dcb914607 Remove Vault (#3684)
* Remove Vault

* Remove reference to 'kargo' in the doc

* change check order
2018-11-10 08:51:24 -08:00
Bily Zhang
b2b421840c Fix some typos (#3690)
Signed-off-by: mooncake <xcoder@tenxcloud.com>
2018-11-10 15:53:58 +01:00
Egor
5c7eef70b4 Fix kube-router annotations: add conditions (#3670) 2018-11-09 08:15:27 -08:00
RuriRyan
c2710899ed Fixes network restart for Ubuntu Bionic Beaver (#3600)
As Ubuntu Bionic Beaver uses systemd-networkd, the step fails
if it tries to restart networking, as that service is nonexistent.
2018-11-09 08:13:57 -08:00
Erwan Miran
b997912ebe Fix dead link to examples in vsphere.md (#3673) 2018-11-09 02:32:25 -08:00
Igor Ivanov
e5d07f3a3d use force umount when reset cluster (#3672)
the reset role hangs and can't umount PersistentVolumes (Ceph cluster)
2018-11-09 02:30:55 -08:00
Thomas Nys
fb9155c450 Add the option to create a DNS record for bastion deployed to Azure (#3675)
This is rather convenient if you want to configure exceptions on a
company firewall.
2018-11-09 11:30:35 +01:00
Thomas Nys
dc3195310c Add the option to add multiple ssh public keys for Azure infrastructure (#3674)
This gives users the option to define multiple ssh public keys when
deploying the base infrastructure on Azure.
2018-11-08 15:25:07 +01:00
Giacomo Longo
9f7c2b08a5 Idempotency fixes to roles/pre-upgrade (#3497) 2018-11-07 16:31:29 -08:00
Erwan Miran
a6932b6b81 Install ipvsadm when kube_proxy_mode is ipvs (#3548) 2018-11-07 14:04:11 -08:00
Erwan Miran
77d705ca9f cluster_name is to be set in initConfiguration too (#3661) 2018-11-07 12:41:11 -08:00
Erwan Miran
89ac53acd7 _ansible_item_label is not necessarily set (#3646) 2018-11-07 12:39:44 -08:00
Erwan Miran
1e22c83f0f kube_override_hostname must be in kubernetes/master role defaults (#3647) 2018-11-07 12:38:19 -08:00
Erwan Miran
1ad1e80ae3 Checking new CA key presence is not relevant to determine if kubeadm has already run (#3653) 2018-11-07 11:46:11 -08:00
George Fleury
bc785196c8 Update getting-started.md 2018-11-07 17:18:03 +01:00
Anton Patsev
dfdf530723 Fix work yum in Install packages requirements for bootstrap (#3630)
* Fix "Failure talking to yum: Cannot find a valid baseurl for repo: base/7/x86_64" when installing packages on CentOS through a proxy

* Add proxy to /etc/yum.conf if http_proxy is defined
2018-11-06 22:44:37 -08:00
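A minimal Ansible sketch of the described behaviour; the task name and use of the ini_file module are assumptions, not the exact implementation:

```yaml
- name: Add proxy to /etc/yum.conf if http_proxy is defined (illustrative)
  ini_file:
    path: /etc/yum.conf
    section: main
    option: proxy
    value: "{{ http_proxy }}"
  when: http_proxy is defined
```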
Lear Li
33f33a7358 Fix "docker-storage was not found" issue (#3584) 2018-11-06 17:50:14 -08:00
Kuldip Madnani
113dd2146a Added some minor changes to the docker orphan clean up process. (#3657)
* Added changes to clean up orphan containers and reload docker & kubelet directories.

* Added new files for cleaning up orphans and docker & kubelet directories

* Added new lines at the end of these files

* removed the trailing whitespaces from main.yml and clean-up.yml

* Updated as per the review comments

* Updated as per the review comments

* Removed service_facts and package_facts because they are not supported in ansible 2.4.0

* Corrected yaml syntax errors

* Removed the use of json_query filter and utilized selectattr

* Removed trailing spaces

* Changed the default value of docker_clean_up to false

* Added Changes to only include cleanup-docker-orphans.sh

* Reverted back changes done inside handler.

* Removed trailing spaces and made default value of docker_orphan_clean_up as true

* Reverted the default value of docker_orphan_clean_up as false

* Made the docker clean up as drop in

* Made the docker clean up as drop in

* Reverted the value of boolean docker_orphan_clean_up to false

* Converted ExecStop to ExecStartPost. Removed the live restore check from the orphan script
2018-11-06 16:50:19 -08:00
Erwan Miran
14c2df0418 Replace raw module with shell to avoid warning (#3652) 2018-11-06 11:07:11 -08:00
Wilmar den Ouden
b316518864 Bump coredns to 1.2.6 (#3641) 2018-11-06 05:58:20 -08:00
Rong Zhang
612663667c Merge pull request #3639 from holmsten/ops-readme-capitalisation
Fix company name capitalisation
2018-11-06 15:00:08 +08:00
Bily Zhang
6c14f35f00 Fix some typos (#3636)
Signed-off-by: mooncake <xcoder@tenxcloud.com>
2018-11-05 15:22:16 -08:00
Andreas Holmsten
289be0a0db Fix capitalisation 2018-11-05 12:47:23 +01:00
Aivars Sterns
a4de023c29 Merge pull request #3635 from Kusanagi9999/Multus-README-fix
Move Multus under network plugin section
2018-11-04 15:55:18 +02:00
Kusanagi9999
e3fdd4a0ac Move Multus under network plugin section 2018-11-04 04:52:22 -08:00
Louis Woods
bc9e14a762 Adds support for Multus (multiple interfaces) CNI plugin (#3166)
* Adds support for Multus (multiple interfaces) CNI plugin

Multus is a latin word for "Multi". As the name suggests, it acts as a
Multi plugin in Kubernetes and provides multiple network interface
support in a pod. Multus uses the concept of invoking delegates by
grouping multiple plugins into delegates and invoking them in the
sequential order of the CNI configuration file provided in json format.

* Change CNI version (0.1.0->0.3.1) of Contiv to be compatible with Multus
2018-11-04 01:07:38 -08:00
Aivars Sterns
3c5f20190f Merge pull request #3629 from holmsten/terraform-ops-worker-allowed-ports
[contrib/terraform/openstack] Allow user defined port ranges for worker security group
2018-11-03 17:52:00 +02:00
ankitcharolia
9c83551a0e add certificate authority file (#3433) 2018-11-02 08:27:53 -07:00
Rong Zhang
99c139dd5a Merge pull request #3621 from elementyang/pr-check-docker-packages
fix modify the way of the command 'yum remove xxx', e.g. docker-selin…
2018-11-02 18:48:33 +08:00
Andreas Holmsten
6c34745958 Add worker_allowed_ports
* [contrib/terraform/openstack] Add worker_allowed_ports

  Allow the user to define in the Terraform template which ports and remote
  IPs are allowed to access worker nodes. This is useful when you
  don't want to open up the whole NodePort range to the outside world, or
  need to open ports outside the NodePort range.
2018-11-01 17:48:37 +01:00
Matthew Mosesohn
2ba4e9bda5 Skip most of kubernetes/preinstall role during late DNS config (#3627)
When using resolvconf_mode host_resolvconf, there is an early DNS
config stage where Kubernetes cluster DNS is not injected into host
DNS initially. Later, the cluster DNS is enabled, but we do not
need to run every task from the kubernetes/preinstall role.
2018-11-01 08:08:50 -07:00
Robert Liotta
2a00c931e4 Added the missing environment for proxy for get_url (#3603)
* Added the missing environment for proxy for get_url

* Update upgrade.yml

* Fixed spaces

* Fixed spaces

* Update upgrade.yml
2018-11-01 06:20:57 -07:00
Wong Hoi Sing Edison
1e6ad5acb6 Fixup #3595: coredns: Upgrade to v1.2.5 (#3619)
Upstream Changes:

   - coredns v1.2.5 (https://github.com/coredns/coredns/releases/tag/v1.2.5)

NOTE:

   - Switch image repo to https://hub.docker.com/r/coredns/coredns/ (https://github.com/kubernetes-incubator/kubespray/pull/3595#issuecomment-433962973)
2018-11-01 06:05:17 -07:00
Matthew Mosesohn
bc74a37696 Calculate etcd client cert serial for appropriate groups (#3605)
Standalone etcd nodes do not generate node-$hostname certs and do
not need this serial calculated.
2018-11-01 05:50:26 -07:00
Aivars Sterns
0cb326b10f Merge pull request #3624 from xichengliudui/fix181101
Correct the wrong words
2018-11-01 09:49:05 +02:00
xichengliudui
4daa9aa443 Correct the wrong words 2018-10-31 22:42:05 -04:00
Aivars Sterns
667364143c Merge pull request #3623 from yeya24/patch-1
fix typo doesnt -> doesn't
2018-10-31 15:30:16 +02:00
Ye Ben
d8b357ce49 fix typo doesnt -> doesn't
fix typos in lines 114 and 116: doesnt -> doesn't
2018-10-31 21:27:58 +08:00
Antoine Legrand
479d0e858d Add playbook to install mitogen (#3622) 2018-10-31 11:52:47 +01:00
Matthew Mosesohn
152c15b19f Disable gather facts when checking ansible version (#3615) 2018-10-31 03:19:17 -07:00
Wong Hoi Sing Edison
ce5a34d86c ansible: Upgrade to 2.7.1 (#3618)
Only exclude buggy Ansible v2.7.0 (https://github.com/ansible/ansible/issues/46600#issuecomment-433863628)

Fixup #3589
2018-10-31 03:01:19 -07:00
AdamDang
b8bafb2893 Fix a typo in dind/README.md (#3620)
appropiate->appropriate
2018-10-31 11:01:13 +01:00
Yumo Yang
5da18854a3 fix: modify the way the 'yum remove xxx' command is run, e.g. for the docker-selinux and docker-engine-selinux packages 2018-10-31 17:16:35 +08:00
Dmitriy Zinin
d269e7f46c cilium v1.3.0 (#3564) 2018-10-31 00:42:56 -07:00
Anton Patsev
8c636f67af Added support proxy to 'Install pip for bootstrap' (#3609) 2018-10-31 00:35:57 -07:00
Louis
a84508d6b9 remove deprecated parameters of blockinfile module (#3581) 2018-10-30 05:56:58 -07:00
Rong Zhang
22c234040e Merge pull request #3608 from xichengliudui/fix181030
Correct the wrong word
2018-10-30 20:52:02 +08:00
Rong Zhang
4a1be18361 Merge pull request #3614 from liyongxin/master
typo fix about officially
2018-10-30 20:41:30 +08:00
Yongxin Li
3b6df70f11 typo fix about officially
Signed-off-by: Yongxin Li <yxli@alauda.io>
2018-10-30 20:38:37 +08:00
Rong Zhang
48390d37c2 Merge pull request #3613 from mirake/fix-typos
Fix some typos
2018-10-30 20:23:44 +08:00
Rui Cao
0d3beb4e5a Fix some typos
Signed-off-by: Rui Cao <ruicao@alauda.io>
2018-10-30 20:07:52 +08:00
Andreas Krüger
6e192d487b Merge pull request #3604 from Intermax-Cloudsourcing/fix-coredns
Revert "CoreDNS v1.2.5 (#3595)"
2018-10-30 10:04:51 +01:00
xichengliudui
306c61a968 Remove duplicate words 2018-10-30 04:51:36 -04:00
Ted Wexler
58b4fea2b1 Add an 'access_ip' for openstack resources to the terraform inventory builder script (#3592)
* Add an 'access_ip' for openstack resources to the terraform inventory builder script

* Update Openstack README

* Only use ipv4

* If there's a floating IP assigned to an openstack instance, use that for access_ip
2018-10-29 19:28:23 +01:00
wilmardo
2149bfbc5b Revert "CoreDNS v1.2.5 (#3595)"
This reverts commit 8ba6b601b0.
2018-10-29 16:33:52 +01:00
Bart Laarhoven
0acb823d96 Distribute node etcd certificates like it's done in kubernetes/secrets (#3486)
* do it like in kubernetes/secrets

* fix indentation

* processed comments

* missed one, sorry

* trailing space fix
2018-10-29 11:45:32 +01:00
Dmitriy Zinin
8ba6b601b0 CoreDNS v1.2.5 (#3595) 2018-10-29 03:20:03 -07:00
Aivars Sterns
06f981ffed Merge pull request #3601 from xichengliudui/fix181029
Fix typo
2018-10-29 12:12:45 +02:00
xichengliudui
4a4a3f759c Fix typo 2018-10-29 06:10:33 -04:00
Yumo Yang
8fbebf4e83 fix readme.md sample/* (#3541) 2018-10-29 11:04:51 +01:00
Yumo Yang
8371beb915 fix bootstrap os_family error on multi-platform (#3594) 2018-10-29 09:37:30 +01:00
Rong Zhang
b39b32a48c Fix set coreos hostname failed (#3599)
the hostname needs to be set by kubeadm
2018-10-29 00:59:25 -07:00
Rong Zhang
dbe99b59a7 Upgrade kubernetes to v1.12.2 (#3597) 2018-10-29 00:58:24 -07:00
Aivars Sterns
3cc413fe9a Merge pull request #3598 from AdamDang/patch-2
Update vsphere.md
2018-10-28 12:39:20 +02:00
AdamDang
59d0138bcd Update vsphere.md 2018-10-28 16:38:05 +08:00
Rong Zhang
801bbcbc63 Merge pull request #3591 from AdamDang/patch-1
Fix some typos
2018-10-26 22:58:10 +08:00
AdamDang
4560ff7386 Update vars.md 2018-10-26 21:57:04 +08:00
AdamDang
477841d8c0 Update ha-mode.md 2018-10-26 21:55:54 +08:00
AdamDang
a89dc49c52 Update ansible.md 2018-10-26 21:49:57 +08:00
Antoine Legrand
90d8f7aa6a Assert if ansible 2.7 is used (#3589) 2018-10-26 00:29:21 -07:00
Louis
abc1421def Fix gen_tags.sh script 2018-10-25 02:16:48 +08:00
Rong Zhang
7abd4eeafd Merge pull request #3578 from LinuxGit/Louis/fix-typo
fix typo
2018-10-24 13:45:31 +08:00
Aivars Sterns
27c79088e6 Merge pull request #3556 from Miouge1/routerless-master
[contrib/terraform/openstack] Add support for router less deployments
2018-10-24 08:33:33 +03:00
Aivars Sterns
ce2a3a80db Merge pull request #3577 from fritchie/master
Add bin_dir to kubectl version check
2018-10-24 08:33:03 +03:00
Erwan Miran
79bf74e90f Offline deployment: PyPi repo (#3542) 2018-10-23 22:22:09 -07:00
Erwan Miran
4f12ba00d1 Fix calico peering with router(s) (#3547) 2018-10-23 22:19:50 -07:00
Louis
93104d9224 fix typo 2018-10-24 11:39:15 +08:00
Frank Ritchie
b5f4a79365 Add bin_dir to kubectl version check 2018-10-23 15:51:17 -04:00
Matthew Mosesohn
7e84de2ae1 Purge /root/.kube/config when migrating to kubeadm (#3566) 2018-10-23 05:09:11 -07:00
Wong Hoi Sing Edison
06e1f81801 ingress-nginx: Upgrade to 0.20.0 (#3565)
Upstream Changes:

-   ingress-nginx 0.20.0 (https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.20.0)

Our Changes:

-   Sync templates with upstream changes
2018-10-23 05:08:03 -07:00
Egor
ccc3f89060 Add kube-router annotations (#3533) 2018-10-21 00:35:52 -07:00
Maxim Makarov
8a17de327e Nginx proxy does not need to run on all CPU cores (#3559) 2018-10-20 13:56:53 -07:00
Erwan Miran
3b787123e3 Fix tasks to avoid ansible warning about raw module environment (#3545) 2018-10-20 07:13:54 -07:00
Matthew Mosesohn
127969d65f Align node-role value for kubeadm compatibility (#3558)
kubeadm sets node label node-role.kubernetes.io/master=''
and this is not configurable. We should use it everywhere.
2018-10-20 07:12:54 -07:00
Andreas Krüger
4b711e29ef Merge pull request #3557 from Zefool/patch-1
Fix typo
2018-10-20 16:12:02 +02:00
Antoine Legrand
2a3aa591e0 Download role (#3553)
* codestyle tests

* Download destination can be different than local_release_dir
2018-10-20 13:56:55 +02:00
Aivars Sterns
56cafc3fb3 Merge pull request #3550 from Kusanagi9999/fix-kube-router-docs-link
Fix missing s in link to kube-router docs
2018-10-19 20:17:55 +03:00
Zefool
b434456f54 Fix typo 2018-10-19 17:12:37 +02:00
Kusanagi9999
6923d350f4 Fixed docker version comment in README.md 2018-10-19 06:29:20 -07:00
Maxime Guyot
38beab8fe8 Add support for router less deployments 2018-10-19 12:39:34 +02:00
Matthew Mosesohn
4bdd0ce417 Allow kubeadm master untaint to fail (#3549) 2018-10-19 00:38:12 -07:00
Kusanagi9999
e5c4e1ecc3 Fix missing s in link to kube-router docs 2018-10-18 14:55:10 -07:00
Aivars Sterns
a48131f1e1 Merge pull request #3543 from Miouge1/openstack-public-clouds
[contrib/terraform/openstack] Add list of known working OpenStack clouds
2018-10-18 13:23:26 +03:00
Miouge1
6e34918b52 Add list of known working OpenStack clouds 2018-10-18 11:04:04 +02:00
JuanJo Ciarlante
66fddb2d52 [jjo] upgrade kube-router to v0.2.1 (#3535)
kube-router v0.2.1 highlights from the changelog:
- IPv6 WIP but pretty close to fully working functionality
- fully supports network policy semantics with the addition of support for
  ipblock and except
2018-10-18 00:09:42 -07:00
Erwan Miran
87193fd270 Fix ansible syntax to avoid ansible warnings (one more) (#3536)
* warning on meta flush_handlers

* avoid rm

* avoid "Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually" warning on subsequent tasks using blockinfile

* is match
2018-10-17 12:27:11 -07:00
Andreas Krüger
52b5309385 Merge pull request #3532 from jjo/jjo-improve-dind
[jjo] improve contrib/dind/run-test-distros.sh via spec files
2018-10-17 09:47:16 +02:00
Mateus Caruccio
7f4e048052 kubespray-aws-inventory.py ported to python 3 (#3528) 2018-10-16 23:53:39 -07:00
JuanJo Ciarlante
7fe7357154 improve contrib/dind/README.md 2018-10-16 21:25:41 -03:00
JuanJo Ciarlante
635261eb12 [jjo] improve dind run-test-distros.sh via spec files 2018-10-16 21:25:41 -03:00
Samina Fu
5a5cf15c04 Add clear ipvs virtual server table when reset k8s (#3530) 2018-10-16 16:29:43 -07:00
Erwan Miran
4d2b6b71f2 Fix contiv api certificate generation (#3531) 2018-10-16 15:34:33 -07:00
Erwan Miran
7bec169d58 Fix ansible syntax to avoid ansible deprecation warnings (#3512)
* failed

* version_compare

* succeeded

* skipped

* success

* version_compare becomes version since ansible 2.5

* ansible minimal version updated in doc and spec

* last version_compare
2018-10-16 15:33:30 -07:00
Erwan Miran
bfd4ccbeaa Calico: Ability to define global peers (#3493) 2018-10-16 15:32:26 -07:00
Rong Zhang
76fe84fe93 Use imageRepository instead of the unifiedControlPlaneImage (#3484) 2018-10-16 07:26:04 -07:00
刘旭
cf4dd645a7 fix --etcd-servers-overrides invalid (#3470) 2018-10-16 07:25:03 -07:00
JuanJo Ciarlante
a5edd0d709 [jjo] add kube-router support (#3339)
* [jjo] add kube-router support

Fixes cloudnativelabs/kube-router#147.

* add kube-router as another network_plugin choice
* support most used kube-router flags via
  `kube_router_foo` vars as other plugins
* implement replacing kube-proxy (--run-service-proxy=true) via
  `kube_proxy_mode: none`, verified in a _non kubeadm_enabled_
  install, should also work for recent kubeadm releases via
  `skipKubeProxyInstall: true` config

* [jjo] address PR#3339 review from @woopstar

* add busybox image used by kube-router to downloads

* fix busybox download groups key

* rework kubeadm_enabled + kube_router_run_service_proxy

- verify it working ok w/the kubeadm_enabled and
  kube_router_run_service_proxy true or false

- introduce `kube_proxy_remove` fact, to decouple logic
  from kube_proxy_mode (which affects kubeadm configmap
  settings, so it's no good to abuse it with 'none')

* improve kube-router.md re: kubeadm_enabled and kube_router_run_service_proxy

* address @woopstar latest review

* add inventory/sample/group_vars/k8s-cluster/k8s-net-kube-router.yml

* fix kube_router_run_service_proxy conditional for kube-proxy removal

* fix kube_proxy_remove fact (w/ |bool), add some needed kube-proxy tags on my and existing changes

* update kube-router tolerations for 1.12 compatibility

* add PriorityClass to kube-router DaemonSet
2018-10-16 07:15:05 -07:00
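An illustrative group_vars sketch combining the options named in this entry; only the variable names mentioned above come from the commit, the values are examples:

```yaml
# inventory/sample/group_vars/k8s-cluster/k8s-net-kube-router.yml (illustrative)
kube_network_plugin: kube-router
# let kube-router replace kube-proxy (--run-service-proxy=true)
kube_router_run_service_proxy: true
kube_proxy_mode: none
```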
anarcat
c33e08c3fa show FQDN first in /etc/hosts (closes: #3521) (#3522)
The hosts(5) manpage clearly states that the first entry is the
"canonical name", or FQDN (Fully-Qualified Domain Name):

    IP_address canonical_hostname [aliases...]

By using the alias as a first entry, `hostname -f` does not return the
correct domain which breaks all sorts of unrelated functionality (it
has impact over email server configuration, for example).
2018-10-16 03:55:55 -07:00
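A hedged sketch of an /etc/hosts template line in the corrected order, canonical FQDN first; the variable names are illustrative:

```yaml
# hosts file template (illustrative Jinja):
#   {{ ip }} {{ hostname }}.{{ dns_domain }} {{ hostname }}
# rendering e.g. "10.0.0.5 node1.cluster.local node1",
# so `hostname -f` returns the FQDN.
```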
Aivars Sterns
9b773185c3 Merge pull request #3184 from oracle/new_oci_controls
Add new OCI cloud controls
2018-10-16 11:29:13 +03:00
Andreas Krüger
b1974ab3cf Merge pull request #3515 from SataQiu/fix-20181012
fix typo
2018-10-16 09:11:08 +02:00
Erwan Miran
b4e2b85745 Replace shell with command in order to allow the task to fail when openssl x509 does return zero (#3516) 2018-10-15 23:48:12 -07:00
Erwan Miran
fcd8d850dc Fix ansible syntax to avoid ansible warnings (again) (#3509)
* Fix ansible syntax to avoid ansible warnings (again)

* warn: false on tar -cfz

* wrong placement of warn:false
2018-10-15 23:47:04 -07:00
Erwan Miran
6549b8f8ae Ability to define the asNumber on a per-node basis when route reflectors are not used, in order to peer directly with routers (#3492) 2018-10-15 23:44:49 -07:00
Rong Zhang
1ea7ec3189 Fix nginx_config_dir value not defined when use reset.yml (#3524) 2018-10-15 01:46:55 -07:00
JuanJo Ciarlante
4077934519 [jjo] add DIND support to contrib/ (#3468)
* [jjo] add DIND support to contrib/

- add contrib/dind with ansible playbook to
  create "node" containers, and set them up to mimic
  host nodes as much as possible (using Ubuntu images),
  see contrib/dind/README.md

- nodes' /etc/hosts editing via `blockinfile` and
  `lineinfile` need `unsafe_writes: yes` because /etc/hosts
  are mounted by docker, and thus can't be handled atomically
  (modify copy + rename)

* dind-host role: set node container hostname on creation

* add "Resulting deployment" section with some CLI outputs

* typo

* selectable node_distro: debian, ubuntu

* some fixes for node_distro: ubuntu

* cpu optimization: add early `pkill -STOP agetty`

* typo

* add centos dind support ;)

* add kubespray-dind.yaml, support fedora

- add kubespray-dind.yaml (former custom.yaml at README.md)
- rework README.md as per above
- use some YAML power to share distros' commonality
- add fedora support

* create unique /etc/machine-id and other updates

- create unique /etc/machine-id in each docker node,
  used as seed for e.g. weave mac addresses

- with above, now netchecker 100% passes WoHooOO!
  🎉 🎉 🎉

- updated README.md output from (1.12.1, verified
  netcheck)

* minor typos

* fix centos node creation, needs earlier udevadm removal to avoid flaky facts, also verified netcheck Ok \o/

* add Q&D test-distros.sh, back to manual /etc/machine-id hack

* run-test-distros.sh cosmetics and minor fixes

* run-test-distros.sh: $rc fix and minor formatting changes

* run-test-distros.sh output cosmetics
2018-10-15 09:44:02 +02:00
Julien Senon
fac8aaa44e Update template for bastion (#3523)
Update template to have bastion section
2018-10-15 09:42:22 +02:00
Kuldip Madnani
fd422a0646 Add Priority class for tiller and fix tiller override. (#3494)
* Added Priority class to tiller installation and also fixed tiller override implementation.

* Added changes to handle priority classes separately in tiller, instead of using the variable tiller_override
2018-10-12 11:46:39 -07:00
Kuldip Madnani
d7bb4d954a Handling docker clean up during docker upgrade and docker config changes. (#3321)
* Added changes to clean up orphan containers and reload docker & kubelet directories.

* Added new files for cleaning up orphans and docker & kubelet directories

* Added new lines at the end of these files

* removed the trailing whitespaces from main.yml and clean-up.yml

* Updated as per the review comments

* Updated as per the review comments

* Removed service_facts and package_facts because they are not supported in ansible 2.4.0

* Corrected yaml syntax errors

* Removed the use of json_query filter and utilized selectattr

* Removed trailing spaces

* Changed the default value of docker_clean_up to false

* Added Changes to only include cleanup-docker-orphans.sh

* Reverted back changes done inside handler.

* Removed trailing spaces and made default value of docker_orphan_clean_up as true

* Reverted the default value of docker_orphan_clean_up as false

* Made the docker clean up as drop in

* Made the docker clean up as drop in

* Reverted the value of boolean docker_orphan_clean_up to false
2018-10-12 10:29:51 -07:00
Loic Gouarin
36322901a6 fix kube-controller-manager config with openstack-cacert (#3435) 2018-10-12 06:39:58 -07:00
Oz N Tiram
31d8fc086b Specify that the cluster.yml playbook should run as root (#3474)
* Specify that the cluster.yml playbook should run as root

This is a possible fix for #3388.
The following examples show the option `-b` too:

https://kubernetes.io/docs/setup/custom-cloud/kubespray/
https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md#starting-custom-deployment

* Update invocation to include specific root user

* Update comment text according to suggestions
2018-10-12 02:29:01 -07:00
SataQiu
9ca583d984 fix typo 2018-10-12 15:53:30 +08:00
Anupam Basak
3ce933051a calico CALICO_IPV4POOL_IPIP overriding variable (#3507) 2018-10-12 00:09:36 -07:00
Rui Cao
3b750cafc1 Fix some typos (#3510)
Signed-off-by: Rui Cao <ruicao@alauda.io>
2018-10-11 08:18:22 -07:00
Johann Queuniet
1911fe5ca8 fix nginx proxy configuration conflicts (#3489)
* Allow configuration of nginx proxy config path

* Fix the internal nginx configuration location

Signed-off-by: Johann Queuniet <contact@lordran.net>
2018-10-11 06:33:18 -07:00
Andreas Krüger
2117e8167d Update pre-install verify settings with network checks etc. (#3504)
* Update pre-install verify settings with network checks etc.

* Remove upstream dns server check. It's bogus
2018-10-11 06:28:21 -07:00
Antoine Legrand
c66d1ad6cb Update scale.yml (#3511) 2018-10-11 14:08:56 +02:00
IgLiv
bd0383a4e3 Update vsphere.md (#3467) 2018-10-11 02:43:25 -07:00
Erwan Miran
dd5327ef9e Fix ansible syntax to avoid ansible warnings (#3499) 2018-10-11 00:45:00 -07:00
Andreas Krüger
cdce8c81da Update CoreDNS templates to newest version and fix kubedns-autoscaler (#3483)
* Update CoreDNS templates to newest version

* Add watch to ClusterRole. Fixes #3460
2018-10-11 00:12:58 -07:00
Giacomo Longo
3f786542d3 Automatically infer bootstrap_os (#3498)
* Automatically infer bootstrap_os

* Rename bootstrap os to os_family
2018-10-10 23:32:10 -07:00
LiuDui
e813b26963 Remove excess Spaces (#3452) 2018-10-10 19:28:39 -07:00
Pierluigi Lenoci
abe711dcb5 Missing [all] sections inside the sample (#3500)
* Missing [all] sections inside the sample

* Update hosts.ini
2018-10-10 21:37:47 +02:00
pastushenko
b35a9fcb04 #3475 - make dnsmasq to send queries to all servers in upstream. Make… (#3481)
* #3475 - make dnsmasq send queries to all upstream servers. Make dnsmasq config file customizable.

* Code style fixes. Return current behaviour for dnsmasq strict-order flag.
2018-10-09 23:29:06 -07:00
Antoine Legrand
c27a91f7f0 Split deploy steps in separate playbooks: part1 (#3451)
* Fix bootstrap_os/ubuntu idempotency

* Update bastion role

* move container_engine in sub-roles

* requires ansible 2.5

* ubuntu18 as first CI job
2018-10-09 19:14:33 -07:00
Erwan Miran
2ab2f3a0a3 Ability to define SSL certificates duration and SSL key size (#3482)
* Ability to specify ssl certificate duration and ssl key size - etcd/secrets

* Ability to specify ssl certificate duration and ssl key size - helm/contiv + fix contiv missing copy certs generation script
2018-10-09 04:43:30 -07:00
okamototk
c825f4d180 Untaint master when it has node role (#3466) 2018-10-09 01:40:43 -07:00
Andreas Krüger
7e195b06a6 Fix DNS loop when resolvconf_mode is set to host_resolvconf (#3390)
* Fix DNS loop when resolvconf_mode is set to host_resolvconf

* Make sure upstream_dns_servers is defined when using resolvconf_mode == 'host_resolvconf'

* Only set upstream dns servers on KubeDNS and CoreDNS if they are defined

* Only set upstream dns servers on KubeDNS and CoreDNS if they are defined
2018-10-08 07:08:51 -07:00
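An illustrative pairing of the two variables named above; the server addresses are examples:

```yaml
resolvconf_mode: host_resolvconf
# must be defined in this mode, or host DNS loops back on itself
upstream_dns_servers:
  - 8.8.8.8
  - 1.1.1.1
```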
Dylan
30132d8c35 Removed hostname truncation. (#3409) 2018-10-08 05:14:01 -07:00
Giacomo Longo
0d89db5141 Split Vagrantfile Ubuntu versions into 1604 and 1804 (#3440)
2018-10-08 12:40:20 +02:00
Matthew Mosesohn
4b7d59224d Fix tag based deploy of apps by skipping kubeadm dns tasks (#3462) 2018-10-08 01:22:57 -07:00
SataQiu
72157a7514 fix typo: remove redundant space (#3429) 2018-10-08 01:21:45 -07:00
Rong Zhang
4f51607145 Upgrade kubernetes to v1.12.1 (#3463)
https://github.com/kubernetes/kubernetes/issues/69214
2018-10-07 13:33:13 -07:00
Chad Swenson
6602760a48 Support multiple local volume provisioner StorageClasses (#3450)
- Local Volume StorageClass configuration is now managed by `local_volume_provisioner_storage_classes`, a list of maps that specifies local storage classes with `name`, `host_dir` and `mount_dir` keys per entry
- Tasks and templates updated to loop through local volume storage classes
- Previous defaults for path/class names were not changed
- Fixed an issue where a `kubernetes/preinstall` task was creating directories inconsistently with the `kubernetes-apps/external_provisioner/local_volume_provisioner` task
2018-10-05 05:52:25 -07:00
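An illustrative two-class value using the `name`/`host_dir`/`mount_dir` keys described above; the paths and class names are examples:

```yaml
local_volume_provisioner_storage_classes:
  - name: local-storage
    host_dir: /mnt/disks
    mount_dir: /mnt/disks
  - name: fast-disks
    host_dir: /mnt/fast-disks
    mount_dir: /mnt/fast-disks
```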
Erwan Miran
9232261665 serviceaccounts is required in resources list of cluster role (#3455) 2018-10-04 11:32:37 -07:00
Rong Zhang
af97febb04 Upgrade kubernetes to v1.12.0 (#3410)
* Upgrade kubernetes to v1.12.0

Use kubeadm v1alpha3 config

* Upgrade coredns and etcd

* Upgrade docker to 18.06
2018-10-04 02:05:55 -07:00
Aivars Sterns
c818dc1ce8 Merge pull request #3427 from EppO/patch-1
Add note to README about offline environments
2018-10-04 09:27:25 +03:00
Tupin Laurent
05dabb7e7b Fix Bionic networking restart error #3430 (#3431) 2018-10-02 03:10:52 -07:00
Florent Monbillard
ad50f376a5 Add note about offline environments
Internet access is not mandatory as long as the user configures all container image repositories to point to internal container registries, e.g. for an on-premises installation with firewall rules preventing direct Internet access.
2018-10-01 09:50:48 -04:00
okamototk
66e304c41b Fixed Ubuntu 18.04's docker version(fixes #3424). (#3425) 2018-10-01 04:26:51 -07:00
LiuDui
192f7967c9 Remove excess space (#3421) 2018-10-01 00:09:45 -07:00
SataQiu
f67d82a9db fix typo: delete duplicate words (#3422) 2018-10-01 00:07:25 -07:00
Luke Seelenbinder
3cfbc1a79a Add Pod IP to Flannel manifest. (#3379) 2018-10-01 00:06:13 -07:00
rboyapat
d9f495d391 Fix the dict iteration method in the kubelet template (#3415)
* Fix the jinja expression for openstack_tenant_id

OS_PROJECT_ID is obsolete in keystone v3, and the jinja expression
doesn't set openstack_tenant_id as expected because of the
undefined env var. Fixed the expression.

* Fix the dict iteration method in the kubelet template

Kubelet template rendering errors out when additional node labels are
added and Python 3 is used. Update the method to be compatible with
both Python 2 and 3.

Node labels don't work otherwise.
2018-09-30 05:10:12 -07:00
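A hedged sketch of the Python 2/3-compatible change in the kubelet template; the exact surrounding lines differ:

```yaml
# kubelet template (illustrative Jinja excerpt)
# Python 2 only -- iteritems() no longer exists under Python 3:
#   {% for key, value in node_labels.iteritems() %}
# compatible with both:
#   {% for key, value in node_labels.items() %}
```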
SataQiu
71f6c018ce fix typo: remove repeated words(is) (#3419) 2018-09-29 21:04:43 -07:00
LiuDui
0401f4afff remove the redundant space (#3420) 2018-09-29 21:03:27 -07:00
Mikael Berthe
b4989b5a2a Fix netcheck agent/server image variable names (#3417)
According to the documentation, container images are described
by vars like `foo_image_repo` and `foo_image_tag`.
The variables netcheck_{agent,server}_{img_repo,tag} do not
follow that convention.
2018-09-29 20:44:01 -07:00
SataQiu
6f4054679e Remove the redundant space (#3418) 2018-09-29 20:31:57 -07:00
KMilhan
df7d53b9ef Fix ready to not to be an AnsibleUnsafeText (#3404) 2018-09-28 05:07:27 -07:00
Andreas Holmsten
0a9a42b544 Change from Nova security groups to Neutron (#2910)
* Replace `openstack_compute_secgroup_v2` with `openstack_networking_secgroup_v2`

The `openstack_networking_secgroup_v2` resource allow specifications of
both ingress and egress. Nova security groups define ingress rules only.

This change will also allow more user-friendly specification of security
rules, as the different security group resources have different HCL
syntax.
2018-09-28 11:35:02 +02:00
Rong Zhang
0232e755f3 Upgrade kubedns and kubednsautoscaler (#3407) 2018-09-28 01:20:08 -07:00
sangwook
0536125f75 Better fix for openstack cinder zone issue using ignore-volume-az option (#2980)
* Better fix for openstack cinder zone issue[1][2]
using ignore-volume-az option[3].
[1]: https://github.com/kubernetes-incubator/kubespray/pull/2155
[2]: https://github.com/kubernetes-incubator/kubespray/pull/2346
[3]: https://github.com/kubernetes/kubernetes/pull/53523

* Remove kube-scheduler-policy.yaml
2018-09-27 22:15:47 -07:00
Cédric de Saint Martin
53d87e53c5 All CNIs: support ANY toleration. (#3391)
Before, nodes tainted with the NoExecute policy did not get a calico/weave Pod.
The network pod should run on all nodes, whatever happens on a specific node.

Also always set the Pods to be critical.
Also remove deprecated scheduler.alpha.kubernetes.io/tolerations annotations.
2018-09-27 05:28:54 -07:00
Erwan Miran
232020ef96 skip-exists is a flag for the create command, not for calicoctl (#3401) 2018-09-27 04:57:02 -07:00
Antoine Legrand
a22d74b165 Update README.md (#3402) 2018-09-27 13:32:28 +02:00
Shida Qiu
8b8e534769 remove the redundant space (#3400) 2018-09-27 03:32:26 -07:00
arzarif
6b71229d3f Resolve issues associated with Calico deployment in policy-only mode. (#3392) 2018-09-27 03:31:14 -07:00
刘旭
145e5c8943 use copy and slurp module (#3313) 2018-09-27 02:12:02 -07:00
Ryan McGuire
28315ca933 Fix README links to new inventory file paths. (#3398) 2018-09-27 01:09:51 -07:00
Victor Palma
dced082e5f fixes roles/docker/vars/ubuntu-bionic.yml pointing to xenial (#3395)
* fixes: #3387
2018-09-27 01:08:39 -07:00
Tupin Laurent
408faac3c9 Pip is required for vault #3376 (#3378)
* Change execution order for pip

* Remove spaces
2018-09-26 00:28:54 -07:00
Tupin Laurent
cd4a606cb1 UI is required for vault #3376 (#3377) 2018-09-26 00:27:38 -07:00
Hoat Le
c7c3effd6f Ansible var should be quoted (#3393)
to fix the following problem in case quotes are not used:

PLAY [k8s-cluster:etcd:calico-rr] **********************************************
ERROR! Syntax Error while loading YAML.
  expected <block end>, but found '<scalar>'

The error appears to have been in '/tmp/vagrant-ansible/inventory/group_vars/k8s-cluster.yml': line 59, column 39, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

kube_oidc_ca_file: {{ kube_cert_dir }}/openid-ca.pem
                                      ^ here
We could be wrong, but this one looks like it might be an issue with
missing quotes.  Always quote template expression brackets when they
start a value. For instance:

    with_items:
      - {{ foo }}

Should be written as:

    with_items:
      - "{{ foo }}"

Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
2018-09-25 23:35:35 -07:00
Kuldip Madnani
36898a2c39 Adding pod priority for all the components. (#3361)
* Changes to assign pod priority to kube components.

* Removed the boolean flag pod_priority_assignment

* Created new priorityclass k8s-cluster-critical

* Created new priorityclass k8s-cluster-critical

* Fixed the trailing spaces

* Fixed the trailing spaces

* Added kube version check while creating Priority Class k8s-cluster-critical

* Moved k8s-cluster-critical.yml

* Moved k8s-cluster-critical.yml to kube_config_dir
2018-09-25 07:50:22 -07:00
Wilmar den Ouden
8526c30b63 Replaces nonexisting system_namespace variable (#3389) 2018-09-25 01:39:02 -07:00
Andreas Krüger
d6ebe8c3e7 Sync manifests with kubeadm (#3383) 2018-09-24 02:17:18 -07:00
Rui Cao
02de35cfc3 Fix some typos (#3382)
Signed-off-by: Rui Cao <ruicao@alauda.io>
2018-09-23 06:33:17 -07:00
Arnaud M
7d8e21634c add CRI-O in the list of core components supported (#3381) 2018-09-22 23:03:53 -07:00
k8s-ci-robot
6b598eaacb Merge pull request #3367 from mgsergio/master
Add check that kube-master, kube-node and etcd groups are not empty.
2018-09-21 07:09:44 -07:00
Sergey Magidovich
2197330727 Add check that kube-master, kube-node and etcd groups are not empty. 2018-09-21 17:02:53 +03:00
k8s-ci-robot
1d8627eb8b Merge pull request #3370 from AnatolyRugalev/issue-3357
Added download_validate_certs option
2018-09-21 04:44:05 -07:00
Anatoly Rugalev
8f85ea89fa Added download_validate_certs option, which allows disabling SSL validation for file downloads 2018-09-21 11:51:17 +02:00
k8s-ci-robot
51a5f54fc4 Merge pull request #3335 from AtzeDeVries/fix/ubuntu-xenial-resolv-conf
Fix/ubuntu xenial resolv conf
2018-09-20 23:16:11 -07:00
k8s-ci-robot
e5550b5140 Merge pull request #3369 from crandles/rm-varlibcni
remove /var/lib/cni directory in reset playbook
2018-09-20 19:32:23 -07:00
Chris Randles
a1d6078d46 remove /var/lib/cni directory 2018-09-20 15:36:25 -04:00
k8s-ci-robot
7fd87b95cf Merge pull request #3368 from woopstar/fedora_fix_1
Fix CI issue (Fedora task introduce new lookup plugin)
2018-09-20 08:16:22 -07:00
Rajitha Perera
e3d562bcdb Support for AWS cloud-config (#1465)
* Support for AWS cloud-config

* Update docs

* Fix version incompatibilities

* Do not use shorthand `default`

* Add new cloud config variable, roleArn
2018-09-20 16:31:28 +02:00
Andreas Kruger
442e6e55b6 Fix CI issue with Fedora 2018-09-20 15:45:15 +02:00
k8s-ci-robot
1f1a87bd3d Merge pull request #3366 from riverzhang/fix-error
Remove some useless files
2018-09-20 05:27:28 -07:00
rongzhang
4d1055f5d5 Remove some useless files 2018-09-20 20:24:06 +08:00
k8s-ci-robot
68acdd71f1 Merge pull request #3172 from Atoms/additional-proxy
Add additional no proxy parameter for more customization
2018-09-20 03:26:29 -07:00
k8s-ci-robot
62b1ea2b48 Merge pull request #3360 from gabibbo97/master
Support Fedora 28
2018-09-20 02:22:53 -07:00
k8s-ci-robot
9fa23ffa21 Merge pull request #3364 from SataQiu/fix-20180920
Remove duplicate persistent_volumes_enabled element in k8s-cluster.yml
2018-09-20 02:21:41 -07:00
k8s-ci-robot
1dda89dbe3 Merge pull request #3363 from woopstar/remove_efk
Remove EFK from Kubespray
2018-09-20 02:19:31 -07:00
SataQiu
2a1f77efc6 remove duplicate persistent_volumes_enabled parameter in k8s-cluster.yml 2018-09-20 17:02:26 +08:00
k8s-ci-robot
f9502e0964 Merge pull request #3362 from mirake/fix-typos
Fix some typos
2018-09-20 01:46:58 -07:00
Andreas Kruger
09b67c1ad5 Remove EFK from Kubespray 2018-09-20 10:44:17 +02:00
Rui Cao
66475f98b9 Fix some typos
Signed-off-by: Rui Cao <ruicao@alauda.io>
2018-09-20 16:27:16 +08:00
k8s-ci-robot
8512cc5cca Merge pull request #3280 from wozniakjan/openstack/openstack_cacert
Check `openstack_cacert` for empty string
2018-09-19 22:42:37 -07:00
k8s-ci-robot
3a65c66a3e Merge pull request #3355 from wwt/rr-v3
Uses etcdv3 for calico 3 rr_v4 resources
2018-09-19 22:35:02 -07:00
Giacomo Longo
492b3e525d Support Fedora 28 2018-09-19 20:11:07 +02:00
Kevin Schuck
639010b3df Uses environment vars for etcd cert paths 2018-09-19 12:32:16 -05:00
k8s-ci-robot
34d1f0bff2 Merge pull request #3351 from woopstar/kubeadm_token_basic_auth_fix
Mount basic auth or token auth dirs to support it on kubeadm deployments
2018-09-19 07:50:43 -07:00
Jan Wozniak
a330b281e8 Check openstack_cacert for empty string 2018-09-19 16:37:24 +02:00
Kevin Schuck
6f9f80acee Uses etcdv3 for calico 3 rr_v4 resources 2018-09-19 09:22:52 -05:00
k8s-ci-robot
a8a62afd74 Merge pull request #3304 from kubernetes-incubator/gpu2
Add support for GPU accelerator
2018-09-19 07:12:32 -07:00
k8s-ci-robot
7fa682bdd5 Merge pull request #3342 from okamototk/fix_path_for_kubeadm_join
Add kubelet path for kubeadm.
2018-09-19 06:17:47 -07:00
Aivars Sterns
34019291b8 Merge pull request #3143 from jbcraig/add_os_trust_id
add support for openstack trust to cloud provider config
2018-09-19 16:07:03 +03:00
Aivars Sterns
847390dd9c Merge pull request #3225 from niallmcandrew/patch-1
Fix test readme formatting
2018-09-19 16:06:21 +03:00
Antoine Legrand
08179018d4 Merge branch 'master' into gpu2 2018-09-19 15:02:51 +02:00
k8s-ci-robot
b796226869 Merge pull request #3325 from firaxis/configurable_felix_healthhost
Make Felix healthhost configurable
2018-09-19 06:02:29 -07:00
Romain GUICHARD
131d565498 fix openstack cli syntax (#3353)
* fix openstack cli syntax

* 'allowed-address' is also a dash, not an underscore

* multiple allowed-address

multiple allowed-address must be in separate parameters
2018-09-19 14:50:38 +02:00
k8s-ci-robot
084af7b6e5 Merge pull request #3354 from mirwan/offline_env
Offline environment documentation
2018-09-19 05:36:37 -07:00
Aivars Sterns
bacd8c70e1 Merge pull request #3149 from rguichard/fix-router-id-output
fix the output of router_id with the right id
2018-09-19 15:34:03 +03:00
Erwan Miran
963c3479a9 Offline environment documentation 2018-09-19 14:18:51 +02:00
k8s-ci-robot
39c567de47 Merge pull request #3307 from kaarolch/upgrade_docs
Calico version verification before cluster upgrade begin.
2018-09-19 05:15:55 -07:00
k8s-ci-robot
da4cc74498 Merge pull request #3340 from wwt/master
Fixes Calico 3.x BGPPeer resources
2018-09-19 04:43:35 -07:00
Andreas Kruger
cac485756b Mount basic auth or token auth dirs to support it on kubeadm deployments 2018-09-19 13:21:58 +02:00
k8s-ci-robot
118a7cd4ae Merge pull request #3350 from woopstar/kubeadm_audit_fix_2
Remove audit again from Kubeadm 1.10.x. Write mounts not supported un…
2018-09-19 04:17:29 -07:00
Andreas Kruger
c058e7a5ec Remove audit again from Kubeadm 1.10.x. Write mounts not supported until 1.11 2018-09-19 13:15:14 +02:00
k8s-ci-robot
1c10c3e2ff Merge pull request #3348 from woopstar/kubelet_node_custom_flags
Add support for kubelet_node_custom_flags
2018-09-19 04:10:29 -07:00
Andreas Kruger
e0ddabc463 Add support for kubelet_node_custom_flags 2018-09-19 12:58:06 +02:00
k8s-ci-robot
13da9bf75e Merge pull request #3337 from LuckySB/groupvars-networkplugin
create separate options files for network plugins
2018-09-19 03:56:29 -07:00
k8s-ci-robot
e47eeb67ee Merge pull request #3344 from woopstar/kubeadm-minor-fix
Sync manifests from non-kubeadm to kubeadm deploy
2018-09-19 03:48:32 -07:00
k8s-ci-robot
824199fc7f Merge pull request #3347 from mirake/fix-error
Fix some typos
2018-09-19 03:43:29 -07:00
Rui Cao
c004896a40 Fix some typos
Signed-off-by: Rui Cao <ruicao@alauda.io>
2018-09-19 18:22:08 +08:00
Andreas Kruger
940d2fdbb1 Add missing enforce-node-allocatable to kubelet for kubeadm deployments 2018-09-19 11:54:34 +02:00
Andreas Kruger
1c999b2a61 Move kube_kubeadm_controller_extra_args to controllerManagerExtraArgs section. It was placed in controllerManagerExtraVolumes 2018-09-19 11:24:19 +02:00
Andreas Kruger
8e37841a2e Add audit support to v1alpha1 of Kubeadm 2018-09-19 11:01:30 +02:00
Andreas Kruger
8d1c0c469c Added missing enable-aggregator-routing option 2018-09-19 10:58:46 +02:00
Rong Zhang
8f5b0c777b Merge pull request #3345 from mirake/fix-typos
Fix some typos
2018-09-19 16:50:14 +08:00
Rui Cao
0dd82293f1 Fix some typos
Signed-off-by: Rui Cao <ruicao@alauda.io>
2018-09-19 16:47:58 +08:00
Andreas Kruger
26d7380c2e Sync manifests from non-kubeadm to kubeadm deploy 2018-09-19 10:01:45 +02:00
Takashi Okamoto
95703fb6f2 Add kubelet path for kubeadm. 2018-09-19 03:04:03 +00:00
Karol Chrapek
0121bce9e5 Instead of doc update, change the verify step 2018-09-18 22:13:15 +02:00
Sergey Bondarev
e766dd5582 move calico options from all.yml to k8s-cluster/k8s-net-calico.yml 2018-09-18 21:30:49 +03:00
Kevin Schuck
fb1678d425 Ensures BGPPeer resource names are unique 2018-09-18 10:48:30 -05:00
Alex Yakovenko
884053aaa7 Make Felix healthhost configurable 2018-09-18 15:48:29 +03:00
Sergey Bondarev
93429bc661 create separate options files for network plugins
remove plugin options from common files
2018-09-18 14:29:53 +03:00
k8s-ci-robot
3d27007750 Merge pull request #3329 from riverzhang/checksum
Keep list of k8s checksums for hyperkube and kubeadm
2018-09-18 02:42:59 -07:00
AtzeDeVries
4cbd97667d Merge remote-tracking branch 'upstream/master' into fix/ubuntu-xenial-resolv-conf 2018-09-18 09:51:46 +02:00
k8s-ci-robot
2730c90dcd Merge pull request #3320 from riverzhang/kubelet
Support dynamic kubelet config
2018-09-18 00:16:04 -07:00
rongzhang
09a1bcb30b Keep list of k8s checksums for hyperkube and kubeadm
Keep a list of checksums for kubeadm and hyperkube downloads.
Makes it easier to switch versions
2018-09-18 15:05:17 +08:00
rongzhang
77e08ba204 Support dynamic kubelet config
https://kubernetes.io/blog/2018/07/11/dynamic-kubelet-configuration/
2018-09-18 08:44:39 +08:00
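Dynamic kubelet config lets the kubelet pick up configuration from a ConfigMap referenced on its Node object, checkpointed to a local directory. A kubelet-side sketch, assuming the generic kubelet_custom_flags mechanism; the directory path is illustrative and the feature gate applies to this Kubernetes era:

    kubelet_custom_flags:
      - "--feature-gates=DynamicKubeletConfig=true"   # may already default on in this era
      - "--dynamic-config-dir=/etc/kubernetes/dynamic-kubelet-config"   # illustrative path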
Kevin Schuck
d3adf09bde Fixes BGPPeer resource for calico >= 3.0.0 2018-09-17 15:22:28 -05:00
k8s-ci-robot
26db1afd1a Merge pull request #3227 from mirwan/contiv121
Upgrade contiv to 1.2.1 with some enhancements
2018-09-17 08:15:23 -07:00
Erwan Miran
afa2a5f1c4 enhanced reset for contiv 2018-09-17 16:46:19 +02:00
Erwan Miran
bcaf2f9ea3 contiv 1.2.1 2018-09-17 16:45:05 +02:00
Rong Zhang
3cd38e0d4c Merge pull request #3245 from ctang1989/patch-1
terraform.tfvars.example is not correct, remove.
2018-09-17 20:41:08 +08:00
k8s-ci-robot
d16b562b18 Merge pull request #3316 from mattymo/tiller_override_fix
Fix tiller override command
2018-09-17 05:12:05 -07:00
k8s-ci-robot
0538f8a70d Merge pull request #3290 from riverzhang/fix-upgrade
Fix upgrade k8s
2018-09-17 04:26:47 -07:00
k8s-ci-robot
1a426ada3c Merge pull request #3324 from alvistack/cert-manager-v0.5.0
cert-manager: Upgrade to 0.5.0
2018-09-17 04:20:56 -07:00
k8s-ci-robot
d96e17451e Merge pull request #3326 from alvistack/weave-v2.4.1
weave: Upgrade to 2.4.1
2018-09-17 04:19:39 -07:00
Wong Hoi Sing Edison
a544e54578 weave: Upgrade to 2.4.1
Upstream Changes:

-   weave 2.4.1 (https://github.com/weaveworks/weave/releases/tag/v2.4.1)

Our Changes:

-   Templates sync with upstream manifests
2018-09-17 17:09:19 +08:00
Wong Hoi Sing Edison
f34a6699ef cert-manager: Upgrade to 0.5.0
Upstream Changes:

-   cert-manager 0.5.0 (https://github.com/jetstack/cert-manager/releases/tag/v0.5.0)

Our Changes:

-   Templates sync with upstream manifests
2018-09-17 16:58:04 +08:00
AtzeDeVries
482857611a added extra var for ubuntu 18 netplan resolv 2018-09-17 09:01:55 +02:00
AtzeDeVries
8d8bbc294a fix for resolvconf in ubuntu18 2018-09-17 09:00:55 +02:00
k8s-ci-robot
7f91f6e034 Merge pull request #3287 from Kami-no/coredns_metrics
Monitor CoreDNS over svc
2018-09-16 23:39:59 -07:00
rongzhang
84c4c7dc82 Use synchronize module 2018-09-16 20:36:44 +08:00
rongzhang
1d4aa7abcc Fix upgrade k8s 2018-09-16 10:35:12 +08:00
Matthew Mosesohn
fe35c32c62 Fix tiller override command 2018-09-15 16:35:19 +03:00
Rong Zhang
aa0da221e9 Merge pull request #2880 from hfinucane/rh7-paths
Fix #2261 by supporting Red Hat's limited PATH
2018-09-15 19:27:22 +08:00
k8s-ci-robot
f1403493df Merge pull request #3296 from rabi/fix_cilium_crio
Add volume and volumeMount for crio-socket
2018-09-15 03:23:02 -07:00
k8s-ci-robot
36901d8394 Merge pull request #3309 from ant31/fix_download_file
Fix download file
2018-09-15 03:18:23 -07:00
k8s-ci-robot
a789707027 Merge pull request #3310 from mirwan/document_psp_auditing
Document podsecuritypolicy_enabled and kubernetes_audit
2018-09-15 02:09:37 -07:00
k8s-ci-robot
e6a2e34dd1 Merge pull request #3315 from riverzhang/upgrade-kubedns
Upgrade kubedns to 1.14.11
2018-09-15 02:08:20 -07:00
rongzhang
934d92f09c Upgrade kubedns to 1.14.11 2018-09-15 15:22:38 +08:00
Antoine Legrand
016ba4cdfa update hyperkube checksum 2018-09-14 00:44:36 +02:00
k8s-ci-robot
b227e44498 Merge pull request #3265 from torvitas/fix_metallb_configuration
[bugfix] fix path to metallb configuration
2018-09-13 14:47:38 -07:00
k8s-ci-robot
5e59541faa Merge pull request #3258 from okamototk/fix_kubectl_path
absolute path for kubectl.
2018-09-13 14:38:20 -07:00
Antoine Legrand
d94b7fd57c Don't download binary if docker is selected 2018-09-13 22:06:51 +02:00
k8s-ci-robot
9964ba77ee Merge pull request #3305 from mattymo/fixup_upgrade
Fixes for upgrade mode
2018-09-13 12:57:23 -07:00
k8s-ci-robot
153661cc47 Merge pull request #3284 from mattymo/more_calico_legacy
Put back legacy support for calico ippools and bgp settings
2018-09-13 09:25:26 -07:00
Erwan Miran
166da2ffd0 Document podsecuritypolicy_enabled and kubernetes_audit 2018-09-13 18:07:15 +02:00
Matthew Mosesohn
8becd905b8 Fixes for upgrade mode
Uses correct flag for draining with a pod selector
Verifies minimum kubectl version for compatibility
2018-09-13 18:42:01 +03:00
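The selector-scoped drain in question, sketched as a task; node name and selector are placeholders:

    - name: Drain node, restricted to pods matching a selector (sketch)
      command: >-
        {{ bin_dir }}/kubectl drain node-1
        --pod-selector 'app!=critical'
        --ignore-daemonsets --force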
Matthew Mosesohn
c83350e597 refactor to base on calico_version 2018-09-13 18:05:10 +03:00
Karol Chrapek
730866f431 Update upgrades.md 2018-09-13 15:58:41 +02:00
k8s-ci-robot
ffbe9e7fd8 Merge pull request #1973 from guenhter/rsync-cmd-to-synchronize
Replace the raw rsync command with the synchronize module
2018-09-13 03:12:05 -07:00
AtzeDeVries
91b02c057e Add support for GPU accelerator 2018-09-13 11:53:11 +02:00
Matthew Mosesohn
55d76ea3d8 Update install.yml 2018-09-13 12:04:53 +03:00
rabi
1df0b67ec1 Add volume and volumeMount for crio-socket
This commit fixes #3295
2018-09-13 14:34:44 +05:30
Sascha Marcel Schmidt
cd3b30d3bf fix path to configuration 2018-09-13 10:15:31 +02:00
k8s-ci-robot
53a685dbf8 Merge pull request #3262 from torvitas/fix_bin_path
[bugfix] Use bin_dir to find kubectl in contrib/metallb
2018-09-13 00:51:45 -07:00
k8s-ci-robot
218e527363 Merge pull request #3243 from mirwan/helm_binary_should_be_installed_on_all_masters
Install Helm client on all masters
2018-09-13 00:39:36 -07:00
k8s-ci-robot
27fc391f71 Merge pull request #3291 from mirwan/remove_insecure-bind-address_when_insecure_port_is_0
Remove --insecure-bind-address when insecure-port=0
2018-09-13 00:34:39 -07:00
Matthew Mosesohn
1091e82327 Update install.yml 2018-09-12 22:15:46 +03:00
k8s-ci-robot
a5cc8537f9 Merge pull request #3283 from mattymo/more_upgrade_options
Extra options for upgrade mode
2018-09-12 10:50:33 -07:00
Matthew Mosesohn
d692737a13 Extra options for upgrade mode
Optionally do not drain nodes by setting drain_nodes to false
Optionally set a labelselector to target which pods should be drained.
2018-09-12 17:05:41 +03:00
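A group-vars sketch of these options; drain_nodes is named in the commit, while the selector variable name and value are assumptions for illustration:

    drain_nodes: false                   # skip draining entirely
    drain_pod_selector: "app!=critical"  # assumed variable name; drain only matching pods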
Matthew Mosesohn
cc79125d3e Update install.yml 2018-09-12 17:03:55 +03:00
k8s-ci-robot
a801e02cea Merge pull request #3261 from mattymo/etcd_ssl_dir_perms
Ensure etcd file permissions are correct when using vault
2018-09-12 01:10:26 -07:00
Zinin D.A
29c7775ea1 Monitor CoreDNS over svc 2018-09-12 10:24:15 +03:00
k8s-ci-robot
cbf099de4d Merge pull request #3285 from mirwan/fix_netchecker_sa_when_psp
Fix wrong sa name in crb when psp is enabled
2018-09-12 00:20:38 -07:00
k8s-ci-robot
c8630f46fd Merge pull request #3286 from fritchie/master
Change update strategy to RollingUpdate
2018-09-12 00:18:05 -07:00
Erwan Miran
af74d85b7d Remove --insecure-bind-address when insecure-port=0 2018-09-12 08:22:11 +02:00
Chad Swenson
b8e7b4c0cd Merge pull request #3288 from kubernetes-incubator/revert-3252-remove_insecure-bind-address_when_insecure-bind-port_is_0
Revert "Remove insecure-port and insecure-bind-address when possible"
2018-09-11 17:44:59 -05:00
Chad Swenson
97e5f28537 Revert "Remove insecure-port and insecure-bind-address when possible" 2018-09-11 17:42:12 -05:00
Frank Ritchie
f42e0a4711 Change update strategy to RollingUpdate.
When enable_network_policy is set to True with Calico 3, kubectl
apply fails with the error:

The Deployment "calico-kube-controllers" is invalid:
spec.strategy.rollingUpdate: Forbidden: may not be specified when
strategy type is 'Recreate'

See

https://github.com/kubernetes-incubator/kubespray/issues/3267

Changing the update strategy to RollingUpdate avoids this error.
2018-09-11 12:03:42 -04:00
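The shape of the fix, as a Deployment fragment (sketch):

    spec:
      strategy:
        type: RollingUpdate   # a 'Recreate' strategy forbids the rollingUpdate sub-section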
Sascha Marcel Schmidt
6a5c828b6c fix bin_dir 2018-09-11 16:27:20 +02:00
Sascha Marcel Schmidt
97aa87612a use bin_dir 2018-09-11 16:27:17 +02:00
Matthew Mosesohn
d91f9e14e6 Put back legacy support for calico ippools and bgp settings 2018-09-11 16:40:11 +03:00
Erwan Miran
e24b1220a0 Fix wrong sa name in crb when psp is enabled 2018-09-11 15:04:55 +02:00
k8s-ci-robot
18f0531bba Merge pull request #3266 from mirwan/doc_mixed_ansible_installation
Clarify mixed Ansible installation on the control machine
2018-09-11 03:04:26 -07:00
k8s-ci-robot
0a720b35af Merge pull request #3270 from riverzhang/fix-registry
Add insecure_registry config to docker options
2018-09-10 04:28:52 -07:00
rongzhang
f557b54489 Add docker_ to values 2018-09-10 18:05:49 +08:00
Erwan Miran
04852ad753 Install Helm on all masters 2018-09-10 11:39:26 +02:00
k8s-ci-robot
ee4f437aa2 Merge pull request #3276 from riverzhang/1.11.3
Upgrade kubernetes to v1.11.3
2018-09-10 02:15:52 -07:00
Matthew Mosesohn
aaa9a4efac Ensure vault file permissions are correct 2018-09-10 12:04:04 +03:00
rongzhang
0140cf71c8 Upgrade kubernetes to v1.11.3 2018-09-10 15:52:49 +08:00
rongzhang
51794e4c13 Deploying k8s clusters in a private environment 2018-09-09 11:06:00 +08:00
rongzhang
b249b06036 Move docker options to kubespray-defaults 2018-09-09 10:21:18 +08:00
rongzhang
20caaf9d1f Delete gitignore file 2018-09-09 02:09:02 +08:00
rongzhang
cb133cba68 Add registry_mirrors config to docker options 2018-09-09 01:21:32 +08:00
rongzhang
c41ca22a78 Planning the configuration of docker parameters 2018-09-09 00:59:59 +08:00
rongzhang
009d2ffc6c Add insecure_registry config to docker options 2018-09-08 23:24:35 +08:00
k8s-ci-robot
baf1aba239 Merge pull request #3257 from georgejdli/feature-helm-tls-2
[helm-tls] add option to secure helm tiller with tls
2018-09-07 10:34:58 -07:00
georgejdli
b891d77679 add option to secure helm tiller with tls 2018-09-07 10:29:31 -05:00
Erwan Miran
1d2ae39cff Clarify mixed Ansible installation on the control machine 2018-09-07 17:26:55 +02:00
k8s-ci-robot
7bf09945f2 Merge pull request #3259 from okamototk/fix_indent
Fix indent error by yamllint.
2018-09-07 08:25:20 -07:00
k8s-ci-robot
5c2e9a5376 Merge pull request #3252 from mirwan/remove_insecure-bind-address_when_insecure-bind-port_is_0
Remove insecure-port and insecure-bind-address when possible
2018-09-07 07:41:21 -07:00
k8s-ci-robot
b3a689658b Merge pull request #3255 from mlushpenko/calico_check
Fix calico health checks
2018-09-07 07:39:20 -07:00
Takashi Okamoto
d182d4f979 absolute path for kubectl. 2018-09-07 09:33:43 -04:00
k8s-ci-robot
9c49e071d3 Merge pull request #3260 from riverzhang/discoverytimeout
Add discovery_timeout to join configuration
2018-09-07 05:20:19 -07:00
rongzhang
0f63924ed4 Add discovery_timeout to join configuration
https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1alpha2#JoinConfiguration
2018-09-07 16:28:53 +08:00
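Usage sketch in group vars; the value shown matches kubeadm's duration format:

    discovery_timeout: 5m0s   # passed through to the kubeadm join configuration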
Takashi Okamoto
b2a7a27dfb Fix indent error by yamllint. 2018-09-06 16:23:00 -04:00
k8s-ci-robot
b79dd602f3 Merge pull request #3256 from torvitas/heketi-fix-storageclass
[bugfix] heketi storageclass privilege
2018-09-06 07:52:08 -07:00
Sascha Marcel Schmidt
157639e451 use privileged user 2018-09-06 16:38:11 +02:00
mlushpenko
ea2c9d8f57 Fix yaml checks 2018-09-06 16:26:57 +02:00
mlushpenko
f958b32c83 Fix calico health checks 2018-09-06 15:57:21 +02:00
k8s-ci-robot
2faa8f1e37 Merge pull request #3254 from mattymo/calico_upgrade_tweaks
Fix backward compatibility with calico 2.6
2018-09-06 06:20:52 -07:00
k8s-ci-robot
ab462d92b8 Merge pull request #3249 from mattymo/fix_missing_var_kube_proxy_nodeport
Add missing variable kube_proxy_nodeport_addresses
2018-09-06 06:18:23 -07:00
k8s-ci-robot
27905bbddf Merge pull request #3250 from mattymo/openstack_cacert
Fix openstack cacert task
2018-09-06 06:15:59 -07:00
Matthew Mosesohn
dc3e317d20 Fix backward compatibility with calico 2.6 2018-09-06 15:54:20 +03:00
k8s-ci-robot
ac87ba5c0d Merge pull request #3253 from mattymo/reduce_gce_size
Reduce instance sizes in gce
2018-09-06 05:45:23 -07:00
Matthew Mosesohn
ea918e1999 Reduce instance sizes in gce 2018-09-06 15:22:46 +03:00
Erwan Miran
a5509fc2ce Remove insecure-port and insecure-bind-address when possible 2018-09-06 13:46:09 +02:00
Matthew Mosesohn
b614a3504b Fix openstack cacert task 2018-09-06 14:06:06 +03:00
k8s-ci-robot
661d455ab4 Merge pull request #3248 from mattymo/fixupfixup
put back endif in kubelet rkt template
2018-09-06 03:51:50 -07:00
Matthew Mosesohn
cd8e469b9c Add missing variable kube_proxy_nodeport_addresses 2018-09-06 13:36:17 +03:00
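Usage sketch for the restored variable; the CIDR is illustrative and restricts which node addresses answer NodePort traffic:

    kube_proxy_nodeport_addresses:
      - 10.0.0.0/8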
Matthew Mosesohn
991b3dbe54 put back endif in kubelet rkt template 2018-09-06 13:21:22 +03:00
k8s-ci-robot
a3caeba242 Merge pull request #2931 from torvitas/master
Heketi/GlusterFS
2018-09-06 03:07:35 -07:00
k8s-ci-robot
f5251f7d27 Merge pull request #3247 from mattymo/kubelet_rkt_Fix
remove broken endifs in kubelet rkt mode
2018-09-06 02:49:35 -07:00
Matthew Mosesohn
faedfb6307 remove broken endifs in kubelet rkt mode 2018-09-06 11:59:25 +03:00
k8s-ci-robot
1940495817 Merge pull request #3246 from riverzhang/pause
Upgrade pause image to 3.1
2018-09-06 00:48:05 -07:00
rongzhang
b979fb0116 Upgrade pause image to 3.1 2018-09-06 14:15:51 +08:00
Antoine Legrand
7e140e5f3c Merge pull request #3122 from jbcraig/fix_cacert_feature
resolve issues with new cacert feature
2018-09-05 23:31:53 +02:00
k8s-ci-robot
0e5393f203 Merge pull request #3224 from riverzhang/fix-feature-gates
Fix feature-gates
2018-09-05 08:55:48 -07:00
Sascha Marcel Schmidt
df6cf9aa51 add cleanup 2018-09-05 17:18:53 +02:00
Sascha Marcel Schmidt
5cf1396cb7 removes unnecessary check 2018-09-05 17:17:49 +02:00
rongzhang
435e098751 Fix feature-gates 2018-09-05 22:55:51 +08:00
Sascha Marcel Schmidt
6ffddbff24 fix database not available heketi error 2018-09-05 16:03:32 +02:00
Sascha Marcel Schmidt
64b0ce974d use bin_dir variable 2018-09-05 16:02:55 +02:00
Sascha Marcel Schmidt
ce776f0f6a actually use heketi auth 2018-09-05 15:59:56 +02:00
Sascha Marcel Schmidt
949984601f actually use heketi auth 2018-09-05 15:58:44 +02:00
Antoine Legrand
055e80f846 Merge pull request #3244 from ant31/calico31
Reverts calico update to 3.2.0, fixes #3223
2018-09-05 11:45:22 +02:00
Antoine Legrand
15363530ae Reverts calico update to 3.2.0, fixes #3223 2018-09-05 11:44:32 +02:00
唐超
ca6c5e2a6a terraform.tfvars.example is not correct, remove. 2018-09-05 17:41:34 +08:00
k8s-ci-robot
73ddb62c58 Merge pull request #3234 from warmchang/tryUpdateNodeStatus
Fix the tryUpdateNodeStatus link
2018-09-05 00:21:33 -07:00
k8s-ci-robot
a512f68650 Merge pull request #3236 from luisyonaldo/fix-configure-calico-network-pool
Fix configure calico network pool for ipipMode = CrossSubnet
2018-09-04 23:22:33 -07:00
Jeff Bornemann
83838b7fbc Add new OCI cloud controls 2018-09-04 14:03:17 -04:00
Antoine Legrand
769f99b369 Merge pull request #3233 from mgsergio/patch-2
Hint on how to join the slack channel README.md
2018-09-04 17:27:10 +02:00
Antoine Legrand
bf1b9649d0 Merge pull request #3235 from mirwan/docker_version_emphasis
Emphasis on docker recommended version
2018-09-04 17:11:29 +02:00
Luis Nunez
6569180654 remove capitalize filter 2018-09-04 14:56:53 +02:00
Erwan Miran
ae0ed87c0f Emphasis on docker recommended version 2018-09-04 14:34:04 +02:00
Sascha Marcel Schmidt
9cc8ef4b91 MetalLB as loadbalancer for on premise deployments (#3027)
* add metallb as loadbalancer for on premise deployments

* improve configuration

* add variables to DaemonSet
2018-09-04 15:17:23 +03:00
k8s-ci-robot
ad33f71ac2 Merge pull request #3228 from mirwan/credentials_dir
Introducing credentials_dir variable in order to be able to override it
2018-09-04 04:35:11 -07:00
William Zhang
30634b3a25 Fix the tryUpdateNodeStatus link
Signed-off-by: William Zhang <zhang.wanmin@zte.com.cn>
2018-09-04 19:17:05 +08:00
mgsergio
b31cf0284d Update README.md
It wasn't obvious to me how to join this channel.
2018-09-04 12:33:55 +03:00
k8s-ci-robot
50c6a98b15 Merge pull request #3229 from mirwan/docker_1806_ubuntu_under_bionic
Docker 18.06 for ubuntu versions before bionic
2018-09-03 11:37:13 -07:00
Antoine Legrand
e7234c9114 Merge pull request #3232 from rabi/master
Document correct var kubeadm_enabled
2018-09-03 20:34:16 +02:00
Erwan Miran
a644b7c267 Introducing credentials_dir in order to be able to override it 2018-09-03 18:04:50 +02:00
Erwan Miran
f0af7262b1 credentials directory should be ignored as inventory 2018-09-03 18:04:34 +02:00
rabi
0865bef382 Document correct var kubeadm_enabled 2018-09-03 21:14:53 +05:30
Atoms
8c9588ab59 Add additional no proxy parameter for more customization 2018-09-03 17:09:58 +03:00
Erwan Miran
c0ce875743 change edge to 18.06 for ubuntu 2018-09-03 14:11:25 +02:00
Erwan Miran
a22d28e1c1 docker 18.06 for ubuntu version before bionic 2018-09-03 14:10:51 +02:00
k8s-ci-robot
c32145057d Merge pull request #3178 from gitphill/patch-1
Add azure-container-registry-config for Azure
2018-09-03 05:06:01 -07:00
rboyapat
fbb98b0070 Fix the jinja expression for openstack_tenant_id (#3151)
OS_PROJECT_ID is obsolete in keystone v3, and the jinja expression
doesn't set openstack_tenant_id as expected because of the
undefined env var. Fixed the expression.
2018-09-03 14:59:49 +03:00
k8s-ci-robot
db11394711 Merge pull request #3200 from pablodav/feature/k8s_win_v1.11
Required support to start working on windows node support
2018-09-03 04:51:23 -07:00
k8s-ci-robot
72f6b3f836 Merge pull request #3210 from kubernetes-incubator/re-org-group_vars
Split group-variables
2018-09-03 04:40:26 -07:00
Antoine Legrand
0a08268efb Merge pull request #3226 from mattymo/always_run_helm_init
Always run helm init to allow for settings changes
2018-09-03 12:49:05 +02:00
Antoine Legrand
ccda9664e7 remove duplicated var 2018-09-03 12:09:31 +02:00
Antoine Legrand
e98ba9e839 Split group-variables 2018-09-03 12:09:31 +02:00
Matthew Mosesohn
fd57fde075 Always run helm init to allow for settings changes 2018-09-03 11:16:01 +03:00
k8s-ci-robot
6204b85a37 Merge pull request #3222 from alvistack/nginx-0.19.0
ingress-nginx: Upgrade to 0.19.0
2018-09-03 00:11:38 -07:00
Wong Hoi Sing Edison
9fc8f9a07d ingress-nginx: Upgrade to 0.19.0
Upstream Changes:

-   ingress-nginx 0.19.0 (https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.19.0)

Our Changes:

-   Sync templates with upstream changes
2018-09-03 08:00:08 +08:00
niallmcandrew
8745486fb3 Fix test readme formatting
Adds a missing vertical bar at the start of a table so it's correct markdown.
2018-09-03 08:38:21 +12:00
Pablo Estigarribia
7cbe3c2171 ensure there is pin priority for docker package to avoid upgrade of docker to incompatible version

force kubeadm upgrade due to failure without --force flag

added nodeSelector for compatibility with hybrid clusters with Windows nodes; also fix download with missing container type

fixes in syntax and LF for newlines in files

fix yamllint check

remove empty when line

some cleanup of unnecessary lines

remove conditions for nodeSelector 2018-09-02 12:47:06 -03:00
k8s-ci-robot
a47c9239e8 Merge pull request #3221 from alvistack/cephfs-provisioner-v2.1.0-k8s1.11
cephfs-provisioner: Upgrade to v2.1.0-k8s1.11
2018-09-02 04:16:17 -07:00
k8s-ci-robot
635ca1a0b8 Merge pull request #3220 from alvistack/coredns-1.2.2
coredns: Upgrade to v1.2.2
2018-09-02 04:13:53 -07:00
Wong Hoi Sing Edison
32fdfbcd5a cephfs-provisioner: Upgrade to v2.1.0-k8s1.11
Upstream Changes:

-   cephfs-provisioner v2.1.0-k8s1.11 (https://github.com/kubernetes-incubator/external-storage/releases/tag/cephfs-provisioner-v2.1.0-k8s1.11)

Our Changes:

-   Sync clusterrole and role with upstream changes
2018-09-02 11:51:28 +08:00
k8s-ci-robot
dee9324d4b Merge pull request #3219 from mlushpenko/kubeadm-ha
Fix ports for kubeadm client and master configs for ha setups
2018-09-01 20:49:22 -07:00
Wong Hoi Sing Edison
df8b27c03c coredns: Upgrade to v1.2.2
Upstream Changes:

-   coredns v1.2.2 (https://github.com/coredns/coredns/releases/tag/v1.2.2)

NOTE:

-   coredns image for 1.2.0 and 1.2.1 had been removed from https://console.cloud.google.com/gcr/images/google-containers/GLOBAL/coredns
2018-09-02 11:37:21 +08:00
mlushpenko
8e95974930 Fix ports for kubeadm client and master configs for ha setups 2018-09-01 18:02:52 +02:00
k8s-ci-robot
64b32146ca Merge pull request #3217 from mirwan/fix_3215
Fix docker_options definition to remove newlines
2018-09-01 07:29:47 -07:00
Erwan Miran
36a7bdfac1 Fix docker_options definition to remove newlines 2018-09-01 09:55:04 +02:00
k8s-ci-robot
13dda0e36e Merge pull request #3207 from mirwan/fix_3206
Fix target hosts generation when /etc/hosts does not contain 127.0.0.1 or ::1
2018-08-31 17:50:56 -07:00
k8s-ci-robot
6e7100f283 Merge pull request #3208 from mirwan/etcd_ha_doc_n_cleaning
Add documentation about having HA for etcd
2018-08-31 08:06:05 -07:00
Erwan Miran
059cd17b47 Fix target hosts generation when /etc/hosts does not contain 127.0.0.1 or ::1 2018-08-31 16:33:18 +02:00
k8s-ci-robot
fb7b3305dc Merge pull request #3209 from mirwan/use_etcd_events_access_address
etcd_events_access_address should be used for peer_url and client_url
2018-08-31 07:26:25 -07:00
k8s-ci-robot
0e1f24e95a Merge pull request #3140 from kubernetes-incubator/preinstall-tasks_num
Add support for etcd arm64
2018-08-31 06:20:56 -07:00
Erwan Miran
81c3f2c971 etcd_events_access_address should be used for peer_url and client_url 2018-08-31 15:03:07 +02:00
Erwan Miran
82a28d6bb3 Add documentation about having HA for etcd 2018-08-31 14:40:25 +02:00
Antoine Legrand
22f9114630 update calico to 3.2.0 2018-08-31 13:45:08 +02:00
Antoine Legrand
1704d699c4 CI: switch ubuntu18 to manual job 2018-08-31 13:45:08 +02:00
Antoine Legrand
f2f0cdd0ff add arch vars for docker 2018-08-31 13:45:08 +02:00
Antoine Legrand
da06c8e5a9 etcd UNSUPPORTED for all arch 2018-08-31 13:45:08 +02:00
Antoine Legrand
2f1fe44762 update images to use arch 2018-08-31 13:45:08 +02:00
Antoine Legrand
19268ded23 Fix some arm64 errors 2018-08-31 13:45:08 +02:00
Antoine Legrand
f67933d2ac add ETCD_UNSUPPORTED_ARCH=arm64 flag 2018-08-31 13:45:08 +02:00
Antoine Legrand
247b9e83d8 etcd arch-image 2018-08-31 13:45:08 +02:00
Antoine Legrand
9c2098b8fa fix kubelet_max_pod assert 2018-08-31 13:45:08 +02:00
Antoine Legrand
48c0c8d854 Update dir list 2018-08-31 13:45:08 +02:00
k8s-ci-robot
f5f7b1626b Merge pull request #3203 from riverzhang/doc
Update readme
2018-08-31 02:57:55 -07:00
k8s-ci-robot
c87a373c53 Merge pull request #3204 from riverzhang/fix-copy-ssl-ca
Fix copy etcd-ssl-ca failed
2018-08-31 02:00:01 -07:00
rongzhang
2609ec0dc3 Fix copy etcd-ssl-ca failed 2018-08-31 15:06:03 +08:00
rongzhang
61ed9886c1 Update readme 2018-08-31 14:12:16 +08:00
k8s-ci-robot
aafd034ab8 Merge pull request #3202 from riverzhang/fix-ipvs
Fix ipvs by kubeadm v1alpha1
2018-08-30 13:26:02 -07:00
k8s-ci-robot
d14394c691 Merge pull request #3185 from mirwan/helm_install_docker_insecureport_0
Mount /root/.kube to helm container
2018-08-30 08:11:33 -07:00
rongzhang
16fc22a207 Fix ipvs by kubeadm v1alpha1 2018-08-30 23:04:57 +08:00
k8s-ci-robot
d9ea937493 Merge pull request #3187 from mirwan/kubeadm-config_syntax
Fix kubeadm-config for audit-log-path and feature-gates
2018-08-30 06:55:43 -07:00
k8s-ci-robot
a96a0ee307 Merge pull request #3198 from riverzhang/fix-kubeadm-v1alpha1
Fix kubeadm v1alpha1 configure
2018-08-30 04:11:37 -07:00
k8s-ci-robot
f48468b83b Merge pull request #3195 from mirwan/fix_psp_templates
Fix some addons when PodSecurityPolicy is enabled
2018-08-30 03:37:52 -07:00
Aivars Sterns
5b79ec8e3b Merge pull request #3199 from kubernetes-incubator/ant31-patch-2
Add mirwan as Reviewer
2018-08-30 13:24:15 +03:00
Antoine Legrand
3f4acbc5f6 Add mirwan as Reviewer 2018-08-30 11:53:50 +02:00
rongzhang
35e5adaf0a Fix kubeadm v1alpha1 configure 2018-08-30 17:44:00 +08:00
k8s-ci-robot
a268a49e1a Merge pull request #3197 from riverzhang/kubeadm-test
Enable kubeadm test
2018-08-29 22:56:46 -07:00
rongzhang
91a83a3a0f Enable kubeadm test
Most functional changes involve kubeadm, so the kubeadm-deployed cluster needs to be tested.
2018-08-30 12:58:00 +08:00
k8s-ci-robot
a247c2c713 Merge pull request #3191 from fcgravalos/make-canal-mount-xtables-lock
canal should mount xtables.lock to share the lock with other processe…
2018-08-29 08:57:32 -07:00
k8s-ci-robot
4feb62f6bf Merge pull request #3193 from riverzhang/fix-lb-kubeadm
Fix kubeadm lb
2018-08-29 04:22:40 -07:00
Fernando Crespo Grávalos
ac4ef719cc canal should mount xtables.lock to share the lock with other processes like kube-proxy 2018-08-29 13:08:51 +02:00
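The standard pattern for sharing the iptables lock with kube-proxy, as a manifest fragment (sketch):

    volumeMounts:
      - name: xtables-lock
        mountPath: /run/xtables.lock
    volumes:
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate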
Erwan Miran
ceb97e5809 Fix wrong syntax for jinja sub list extraction and addition of missing role template 2018-08-29 12:58:10 +02:00
k8s-ci-robot
3bfda55fca Merge pull request #3061 from okamototk/crio
cri-o support
2018-08-29 03:48:40 -07:00
rongzhang
9eade647e6 Fix kubeadm lb 2018-08-29 18:29:24 +08:00
k8s-ci-robot
f82a1933b0 Merge pull request #3176 from equinix-ms/master
Add option to change the Tiller Deployment namespace.
2018-08-29 03:03:40 -07:00
Robin Elfrink
bbdd1c8f06 Add option to change the Tiller Deployment namespace. 2018-08-29 11:20:41 +02:00
k8s-ci-robot
f876c89081 Merge pull request #3189 from Arslanbekov/up-dashboard-version
Up dashboard version to 1.10.0
2018-08-29 02:08:40 -07:00
Phill Garrett
1babbcca85 Fix elif azure statement 2018-08-28 15:43:03 +01:00
k8s-ci-robot
58ecd312a7 Merge pull request #3186 from mirwan/fix_etchosts_localhost_handling
Fix localhost handling when /etc/hosts contains parenthesis
2018-08-28 07:07:53 -07:00
Takashi Okamoto
c0dfa72707 Separate RedHat specific vars for cri-o. 2018-08-28 13:36:14 +00:00
Arslanbekov Denis
fe1e758856 Up dashboard version to 1.10.0 2018-08-28 14:10:19 +03:00
Phill Garrett
f325d13082 Add azure-container-registry-config for Azure
Separated out the KUBELET_CLOUDPROVIDER env var assignment when cloud_provider equals azure
Appended the azure-container-registry-config parameter
2018-08-28 10:23:25 +00:00
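A sketch of the appended kubelet flag; the config file path is illustrative, and the real change wires it into the kubelet environment template rather than custom flags:

    kubelet_custom_flags:
      - "--azure-container-registry-config=/etc/kubernetes/azure.json"   # path is an assumption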
Erwan Miran
52ab54eeea Fix missing quotes for audit-log-path and wrong placement of feature-gates 2018-08-28 09:05:57 +02:00
Takashi Okamoto
d407a590a6 container_manager variable to specify runtime. 2018-08-28 06:23:38 +00:00
Takashi Okamoto
5eb805f098 Change kubeadm timeout to 600s.
* the kubeadm timeout is too short and the operation may be interrupted by it.
2018-08-28 04:51:38 +00:00
Takashi Okamoto
dfdcb56784 Delete all cri-o containers when execute reset.yml. 2018-08-28 02:25:33 +00:00
Takashi Okamoto
659cccc507 Update sample. 2018-08-28 02:25:33 +00:00
Takashi Okamoto
f47c31dce5 Add cri-o document. 2018-08-28 02:25:33 +00:00
Takashi Okamoto
236f066635 kubeadm cri-o support. 2018-08-28 02:24:45 +00:00
Takashi Okamoto
5ab8a712d9 Add download_container flag to avoid docker pull when use cri-o. 2018-08-28 01:24:26 +00:00
Takashi Okamoto
cf7b9cfeef Support crio in kubelet service. 2018-08-28 01:24:26 +00:00
Takashi Okamoto
6090af29e7 Add cri-o role. 2018-08-28 01:24:26 +00:00
Takashi Okamoto
359009bb05 Download etcd and hyperkube binary. 2018-08-28 01:24:26 +00:00
Takashi Okamoto
bdbfa4d403 Add ipvs support for kubeadm 1.10 or later. 2018-08-28 01:24:26 +00:00
Takashi Okamoto
6849788ebc Fix copy ca cert and ca key for kubeadm. 2018-08-28 01:24:25 +00:00
Takashi Okamoto
ac639b2a17 Change kubeadm config to run etcd by kubeadm. 2018-08-28 01:24:25 +00:00
Takashi Okamoto
b18ed5922b Add etcd default value in kubespray-default. 2018-08-28 01:24:25 +00:00
Erwan Miran
b395bb953f Fix wrong when condition that ends up with jinja error when the content of /etc/hosts contains parenthesis 2018-08-27 21:20:57 +02:00
Erwan Miran
b652792a93 /root/.kube must to mounted in order for helm to read kubeconfig and not fallback to localhost:8080 2018-08-27 18:17:26 +02:00
k8s-ci-robot
7efe287c74 Merge pull request #2474 from mirwan/localhost_in_etc_hosts
Localhost in hosts files should be updated (if necessary), not overridden
2018-08-27 06:25:43 -07:00
k8s-ci-robot
881b46f458 Merge pull request #3095 from mirwan/dnsmasq_template_rendering_filename
Dnsmasq manifests should not have j2 extension but templates should
2018-08-27 02:51:43 -07:00
k8s-ci-robot
d43cd9a24c Merge pull request #3104 from maxbrunet/hotfix/replace-local_actions
Use delegate_to: localhost instead of local_action
2018-08-27 02:50:42 -07:00
guenhter
fff48d24ea Replace the raw rsync command with the synchronize module 2018-08-27 10:00:21 +02:00
k8s-ci-robot
f4feb17629 Merge pull request #2958 from elementyang/etcd-pr
change the way that getting etcd_member_name
2018-08-26 23:55:04 -07:00
Maxime Brunet
33135f2ada k8s/preinstall: Turn AND condition into a list 2018-08-25 14:33:31 -04:00
k8s-ci-robot
d6f4d10075 Merge pull request #3153 from alvistack/remove-image_tag-suffix
Remove *_image_tag suffix from ReplicaSet/Deployment
2018-08-25 04:42:19 -07:00
k8s-ci-robot
f97515352b Merge pull request #3161 from nutellinoit/kube_proxy_nodeport_addresses
--nodeport-addresses added on kube-proxy.manifest.j2 and on k8s-cluster.yml
2018-08-25 02:00:19 -07:00
k8s-ci-robot
f765ed8f1c Merge pull request #3179 from kubernetes-incubator/removeubuntu18
move ubuntu18 to CI part2
2018-08-24 10:08:57 -07:00
Antoine Legrand
84bfcbc0d8 move ubuntu18 to CI part2 2018-08-24 18:18:27 +02:00
Aivars Sterns
2c98efb781 Merge pull request #3158 from tiri/fix-glusterfs-inventory
Fix node hostname in glusterfs inventory.example
2018-08-24 16:35:34 +03:00
Aivars Sterns
f7f58bf070 Merge pull request #3173 from msimonin/fix-3164
Fix createhome directory for adduser role
2018-08-24 16:34:57 +03:00
Erwan Miran
1432e511a2 same work with less lines 2018-08-24 14:06:07 +02:00
Aivars Sterns
1ddc420e39 Merge pull request #3058 from vasrem/feature_add_etcd_quota_backend_bytes
Add ETCD_QUOTA_BACKEND_BYTES environment variable
2018-08-24 14:17:55 +03:00
Vasilis Remmas
b61eb7d7f3 Add ETCD_QUOTA_BACKEND_BYTES environment variable 2018-08-24 12:17:34 +02:00
Aivars Sterns
dd55458315 Merge pull request #3174 from kubernetes-incubator/revert-3147-etcd-cleanup
Revert "gen_certs_script: refactor using stdin (Ansible 2.4+)"
2018-08-24 12:51:45 +03:00
Aivars Sterns
1567a977c3 Revert "gen_certs_script: refactor using stdin (Ansible 2.4+)" 2018-08-24 12:35:31 +03:00
Samuele Chiocca
cb8be37f72 fix on v1alpha1 2018-08-24 11:19:06 +02:00
Samuele Chiocca
e5dd4e1e70 added on v1alpha1 2018-08-24 10:59:06 +02:00
Antoine Legrand
6d74a3db7a Merge pull request #3163 from kubernetes-incubator/fix-docker-ubuntu1804
Fix docker apt-repo for Ubuntu18
2018-08-24 00:51:59 +02:00
ant31
1da5926a94 Use xenial repo for ubuntu18 2018-08-23 22:34:44 +00:00
Antoine Legrand
4882531c29 Merge pull request #3115 from oracle/oracle_oci_controller
Cloud provider support for OCI (Oracle Cloud Infrastructure)
2018-08-23 18:22:45 +02:00
Antoine Legrand
f59b80b80b Merge pull request #3147 from ishitatsuyuki/etcd-cleanup
gen_certs_script: refactor using stdin (Ansible 2.4+)
2018-08-23 18:19:28 +02:00
Antoine Legrand
f7d0e4208e Merge pull request #3142 from riverzhang/fix-kubeadm-lb
Fix kubeadm LB configuration
2018-08-23 16:40:59 +02:00
rongzhang
7b61a0eff0 Fix kubeadm LB configuration
1. Joining nodes add the LB to discoveryTokenAPIServers
2. kubeadm_config_api_fqdn supports both an IP address and a domain name
2018-08-23 22:22:34 +08:00
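A join-configuration fragment showing the field in question; the LB address is illustrative:

    discoveryTokenAPIServers:
      - lb-apiserver.example.local:6443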
Aivars Sterns
23fd3461bc calico upgrade to v3 (#3086)
* calico upgrade to v3

* update calico_rr version

* add missing file

* change contents of main.yml as it was left old version

* enable network policy by default

* remove unneeded task

* Fix kubelet calico settings

* fix when statement

* switch back to node-kubeconfig.yaml
2018-08-23 17:17:18 +03:00
msimonin
e22e15afda Fix createhome directory for adduser role
A typo in the adduser role prevents the createhome
variable from being taken into account.

Fix #3164
2018-08-23 08:55:11 +02:00
Rong Zhang
f453567cce Merge pull request #3144 from riverzhang/fix-audit-log
Fix install audit failed
2018-08-23 14:41:37 +08:00
Tatsuyuki Ishi
69786b2d16 gen_certs_script: refactor using stdin (Ansible 2.4+) 2018-08-23 11:19:17 +09:00
rongzhang
5a4352657d Fix failed audit install
1. Fix the audit log not being written
2. Fix an unrecognized parameter
3. Delete kubeadm feature-gates auditing and use apiServerExtraArgs
2018-08-23 01:47:15 +08:00
Samuele Chiocca
f13bc796d9 added nodePortAddresses on kubeadm conf v1alpha2 (not present on v1alpha1) 2018-08-22 18:43:03 +02:00
Antoine Legrand
7a2cfb8578 Merge pull request #3102 from mirwan/psp
PodSecurityPolicy admission controller support
2018-08-22 18:37:40 +02:00
Erwan Miran
a6a14e7f77 create the service account and roles even if rbac is not enabled; they will just be ignored 2018-08-22 18:17:11 +02:00
Erwan Miran
80cfeea957 psp, roles and rbs for PodSecurityPolicy when podsecuritypolicy_enabled is true 2018-08-22 18:16:13 +02:00
ant31
2c90208486 Fix docker apt-repo for Ubuntu18 2018-08-22 15:53:14 +00:00
Antoine Legrand
4eea7f7eb9 Merge pull request #3152 from johnzheng1975/cilium_1.2.0
new cilium stable version: 1.2.0
2018-08-22 17:11:42 +02:00
Antoine Legrand
3c59657f59 Merge pull request #3165 from hadrien-toma/patch-1
Update ansible.md
2018-08-22 16:58:29 +02:00
Hadrien TOMA
6598beb804 Update ansible.md 2018-08-22 16:40:17 +02:00
Antoine Legrand
32049efbc2 Merge pull request #3162 from kubernetes-incubator/add-ubuntu1804-ci
Add ubuntu18 ci job
2018-08-22 16:27:19 +02:00
Antoine Legrand
78be27e18f Add ubuntu18 job 2018-08-22 16:02:07 +02:00
Samuele Chiocca
5d9908c2c3 --nodeport-addresses added on kube-proxy.manifest.j2
Changed author
2018-08-22 15:32:07 +02:00
Antoine Legrand
7eb4d7bb19 Merge pull request #3155 from alvistack/rbac_enabled
Always create service account even if rbac_enabled = false
2018-08-22 13:26:20 +02:00
Erwan Miran
a7b0c454db Localhost in hosts files should be updated (if necessary), not overriden 2018-08-22 12:10:49 +02:00
Timo Ribbers
83e3b72220 Fix node hostname in glusterfs inventory.example
Remove duplicate hostname usage.
2018-08-22 11:03:38 +02:00
john
7e2e3ddd32 update new cilium version 1.2.0 in README.md 2018-08-22 15:29:42 +08:00
Wong Hoi Sing Edison
c3b3572025 Always create service account even if rbac_enabled = false 2018-08-22 11:41:29 +08:00
Wong Hoi Sing Edison
f897596844 Remove *_image_tag suffix from ReplicaSet/Deployment 2018-08-22 11:02:56 +08:00
john
6df71956c4 new cilium stable version: 1.2.0 2018-08-22 10:52:24 +08:00
Jeff Bornemann
94df70be98 Cloud provider support for OCI (Oracle Cloud Infrastructure)
Signed-off-by: Jeff Bornemann <jeff.bornemann@oracle.com>
2018-08-21 17:36:42 -04:00
rguichard
6650bc6b25 fix the output of router_id with the right id 2018-08-21 13:21:25 +02:00
Antoine Legrand
7398858572 Merge pull request #3141 from qeqar/bad-hostname
allow '.' in hostnames in the bad-hostname check
2018-08-21 11:39:49 +02:00
Mark Eisenblaetter
0c0a2138d9 allow '.' in hostnames
we use FQDN as inventory_hostname
2018-08-21 08:24:33 +02:00
Jonathan Craig
5bf152886b add support for openstack trust to cloud provider config 2018-08-20 12:51:25 -04:00
Mark Eisenblätter
08353f291b scaling: issue etcd certs for new nodes (#3125) 2018-08-20 14:40:44 +03:00
Andreas Krüger
497db69c9f Merge pull request #3130 from riverzhang/add-control-plane
Add kubeadm controlplaneEndpoint
2018-08-20 10:43:50 +02:00
Andreas Krüger
c7de737551 Merge pull request #3133 from mirwan/auditlog_to_stdout_w_kubeadm
Audit log to stdout with kubeadm
2018-08-20 10:43:22 +02:00
Andreas Krüger
69749a5b7b Merge pull request #3132 from mirwan/custom_audit_policy
Custom audit policy
2018-08-20 10:42:38 +02:00
Andreas Krüger
b3e32c1393 Merge pull request #3094 from hedayat/master
Add --dns-loop-detect to dnsmasq used in kube-dns
2018-08-20 09:27:15 +02:00
Erwan Miran
fc38b6d0ca Ability to define custom audit policy rules 2018-08-20 07:04:56 +02:00
Erwan Miran
c34900e569 Define apiserver flags directly instead of relying on auditPolicy section in order to have the ability to redirect audit log to stdout with kubeadm 2018-08-20 07:00:53 +02:00
Rong Zhang
855f2a55cb Merge pull request #3135 from ishitatsuyuki/patch-1
Add bad hostname preflight check
2018-08-20 12:08:02 +08:00
Rong Zhang
ea35e6be9b Merge pull request #3139 from alvistack/cephfs-provisioner-v2.0.1-k8s1.11
cephfs-provisioner: Upgrade to v2.0.1-k8s1.11
2018-08-20 12:04:53 +08:00
Wong Hoi Sing Edison
71fdc257bc cephfs-provisioner: Upgrade to v2.0.1-k8s1.11 2018-08-20 11:55:04 +08:00
Rong Zhang
fd16f77e20 Merge pull request #3017 from seungkyua/fix_kubeadm_client_conf
Fix kubeadm client conf
2018-08-20 10:51:02 +08:00
Tatsuyuki Ishi
3eef8dc8d0 Add bad hostname preflight check
Hostname must be a valid DNS name, validated per https://github.com/kubernetes/apimachinery/blob/master/pkg/util/validation/validation.go#L115

The situation I encountered was that my hostname contained an underscore, which is disallowed, so the apiserver refused to start.
2018-08-20 09:09:00 +09:00
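Such a preflight check can be sketched as an assertion against the DNS-1123 pattern; the task shape is illustrative:

    - name: Assert that the hostname is a valid DNS-1123 name (sketch)
      assert:
        that:
          - inventory_hostname is match('[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$')
        msg: "Hostname must consist of lowercase alphanumerics, '-' or '.'"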
rongzhang
59176ebbb9 Add kubeadm controlplaneEndpoint
Nginx LB (default)
Other LBs via the kubeadm control plane endpoint
2018-08-20 00:57:13 +08:00
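A kubeadm MasterConfiguration fragment for the non-nginx case; the address is illustrative:

    api:
      controlPlaneEndpoint: lb-apiserver.example.local:6443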
Rong Zhang
3663061b38 Merge pull request #3137 from riverzhang/packages
Fix install nss
2018-08-20 00:47:53 +08:00
rongzhang
b421d0ed5b Fix install nss 2018-08-20 00:07:31 +08:00
Rong Zhang
f7097fbe07 Merge pull request #3134 from riverzhang/image
Fix pull dns image error
2018-08-19 23:29:57 +08:00
rongzhang
35efc387c4 Fix pull dns image error 2018-08-19 22:47:17 +08:00
Rong Zhang
fb309ca446 Merge pull request #3128 from riverzhang/delete-kubeadm
Remove unused configuration
2018-08-19 10:01:33 +08:00
Antoine Legrand
c833a8872b Merge pull request #3131 from 3cky/patch-1
Fix k8s-dns-dnsmasq-nanny repo path
2018-08-19 01:31:45 +02:00
Antoine Legrand
1d4f88eea8 Fix typo in image url 2018-08-19 01:30:54 +02:00
Victor Antonovich
e9b8c8956d Fix k8s-dns-dnsmasq-nanny repo path 2018-08-19 00:01:19 +03:00
rongzhang
095ccef8bd Remove unused configuration 2018-08-19 01:23:20 +08:00
Rong Zhang
0df969ad19 Merge pull request #3117 from mirwan/audit_usecases
Audit support improvement
2018-08-19 01:13:22 +08:00
Antoine Legrand
3e5b6a5481 Merge pull request #3105 from mirwan/remove_cilium_device_at_reset_plus_move_network_to_network_plugin_roles
Move network_plugin specific reset tasks to its role directory
2018-08-17 22:27:16 +02:00
Antoine Legrand
3201f17058 Merge pull request #3119 from hoatle/improvements/ansible-ignored-patterns
add ignore_patterns to ansible.cfg
2018-08-17 22:13:16 +02:00
Antoine Legrand
c36744e96d Merge pull request #3120 from alvistack/cephfs-provisioner-v2.0.0-k8s1.11
cephfs-provisioner: Upgrade to v2.0.0-k8s1.11
2018-08-17 22:11:15 +02:00
Antoine Legrand
e51c5dc0a6 Merge pull request #3123 from mathieuherbert/until-restart-etcd
add until option for etcd backup commands
2018-08-17 22:09:08 +02:00
Antoine Legrand
d297b82e82 Merge pull request #3126 from LuckySB/etcd_restart_on_update
add etcd version to etcd environment file to trigger a reload
2018-08-17 22:05:34 +02:00
Antoine Legrand
ca649b57e6 Merge pull request #1942 from jerrypeng/patch-1
SERIOUS Bug in download main.yml
2018-08-17 18:23:05 +02:00
Antoine Legrand
2c587f9ea5 Merge pull request #2104 from xd007/multi-arch-support
add support for non-amd64 arch gcr.io images
2018-08-17 16:38:14 +02:00
Erwan Miran
98b818bbaf comply with ansible syntax consistency guideline 2018-08-17 16:37:33 +02:00
Antoine Legrand
26bf719a02 Merge branch 'master' into multi-arch-support 2018-08-17 16:35:50 +02:00
Antoine Legrand
7e37aa4aca Merge pull request #2103 from xd007/docker_aarch64_pkg
Update docker package info for aarch64
2018-08-17 16:26:56 +02:00
Sergey Bondarev
ce6854e726 add version to environment file
Triggers the restart handler when the version changes during an upgrade 2018-08-17 17:25:35 +03:00
2018-08-17 17:25:35 +03:00
Antoine Legrand
ac49bbb336 Merge pull request #2168 from xd007/docker_arm64
fix docker opts incompatible running on aarch64 Redhat/Centos
2018-08-17 16:24:07 +02:00
Antoine Legrand
b490231f59 Merge pull request #2025 from kubernetes-incubator/terraform-aws-inventory
contrib/terraform/aws: Make path to generated inventory configurable
2018-08-17 15:55:38 +02:00
Antoine Legrand
6c7eabb53b Merge pull request #2001 from b0r1sp/patch-3
Quote false and yes, otherwise they'll be transformed to 'False', 'Yes'
2018-08-17 15:52:15 +02:00
Antoine Legrand
7a0f0126f7 Merge pull request #1295 from xuhuilong/master
fix curl error when getting calico status (TLS version error, CentOS 7.3 1611)
2018-08-17 14:29:01 +02:00
Mathieu Herbert
59d89a37cc add until option for etcd backup commands 2018-08-17 11:05:57 +02:00
Wong Hoi Sing Edison
1a07c87af7 cephfs-provisioner: Upgrade to v2.0.0-k8s1.11
Upstream Changes:

-   cephfs-provisioner v2.0.0-k8s1.11 (https://github.com/kubernetes-incubator/external-storage/releases/tag/cephfs-provisioner-v2.0.0-k8s1.11)
-   Update ClusterRole

Our Changes:

-   Fix typo in defaults/main.yml (rs -> deploy)
-   Manifests cleanup
2018-08-17 12:41:56 +08:00
Seungkyu Ahn
29894293eb Fix kubeadm client conf
Fix DiscoveryTokenCACertHashes key to discoveryTokenCACertHashes in kubeadm-client.conf
2018-08-17 04:40:08 +00:00
Jonathan Craig
4d783fff0d resolve issues with new cacert feature 2018-08-16 23:31:21 -04:00
hoatle
a7a53d1f38 add ignore_patterns to ansible.cfg
To avoid a warning message when artifacts are generated within
the inventory directory
2018-08-17 09:22:02 +07:00
Erwan Miran
7f16b46ed5 Reset tasks specific to a network_plugin moved inside its role directory + Reset tasks specific to cilium 2018-08-16 17:34:33 +02:00
Antoine Legrand
58ee5f1cc9 Merge pull request #3089 from mattymo/cloudconfig
Remove erroneous cloud-config task
2018-08-16 16:17:01 +02:00
Antoine Legrand
bc844ca96e Merge pull request #3079 from wikiselev/master
fix glusterfs ppa and glusterfs server command name errors
2018-08-16 16:15:32 +02:00
Antoine Legrand
253dc4f606 Merge pull request #3114 from woopstar/coredns-1.2.0
Update CoreDNS to 1.2.0
2018-08-16 16:14:13 +02:00
Antoine Legrand
b54ce3e66e Merge pull request #3043 from jerryrelmore/patch-3
Update openstack.md
2018-08-16 16:09:12 +02:00
Antoine Legrand
a642931422 Merge pull request #3019 from holmsten/terraform-ops-worker-groups
[contrib/terraform/openstack] Add supplementary node groups
2018-08-16 16:06:53 +02:00
Antoine Legrand
2228f0dabc Merge pull request #3116 from kubernetes-incubator/update-owners
Update OWNERS
2018-08-16 15:57:53 +02:00
Antoine Legrand
a619dfb03e Update OWNERS 2018-08-16 13:32:46 +02:00
Erwan Miran
54548d3b95 kubeadm mounts the hostpaths itself 2018-08-16 13:17:30 +02:00
Erwan Miran
58d4d65fab minor variable fix and reuse + handle auditlog redirected to stdout 2018-08-16 12:51:09 +02:00
Rong Zhang
364ab2a6b7 Merge pull request #3113 from riverzhang/support-audit
Support audit
2018-08-16 15:33:43 +08:00
Andreas Krüger
fdbb078aa9 Merge pull request #3111 from alvistack/cert-manager-0.4.1
cert-manager: Upgrade to 0.4.1
2018-08-16 09:13:46 +02:00
rongzhang
2ffc1afe40 Support audit 2018-08-16 14:38:07 +08:00
Wong Hoi Sing Edison
18612b3501 cert-manager: Upgrade to 0.4.1
Upstream Changes:

-   cert-manager 0.4.1 (https://github.com/jetstack/cert-manager/releases/tag/v0.4.1)

Our Changes:

-   Better templates sync with upstream manifests
-   Remove fancy resources requests/limits customization
2018-08-16 08:47:01 +08:00
Andreas Krüger
d635a97088 Merge pull request #3112 from alvistack/ingress-nginx-0.18.0
ingress-nginx: Upgrade to 0.18.0
2018-08-15 17:07:24 +02:00
Andreas Kruger
9da5d67728 Update CoreDNS to 1.2.0 2018-08-15 13:39:05 +02:00
Wong Hoi Sing Edison
bd413e36a3 ingress-nginx: Upgrade to 0.18.0
Upstream Changes:

-   ingress-nginx 0.18.0 (https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.18.0)
2018-08-15 11:40:42 +08:00
Chad Swenson
2c5781ace1 Merge pull request #2932 from wiremind/efk-fluentd-no-nodeselector
fluentd daemonset: do not set old nodeSelector.
2018-08-14 13:48:30 -05:00
JohnZheng
b50b3430be Disable locksmithd on CoreOS if coreos_auto_upgrade is set to false (#3088)
* Disable locksmithd on CoreOS if coreos_auto_upgrade is set to false

* change the when format to support multiple conditions
2018-08-14 13:42:16 -05:00
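Usage sketch in group vars:

    coreos_auto_upgrade: false   # after this change, also disables locksmithd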
Chad Swenson
0e3518f2ca Merge pull request #2871 from fritchie/lptolerate
Local volume provisioner: tolerate NoSchedule
2018-08-14 13:39:57 -05:00
Chad Swenson
238f04c931 Merge pull request #3097 from sdemura/vagrantfile-playbook
Define custom playbook in Vagrantfile
2018-08-14 13:31:50 -05:00
Chad Swenson
3a85a2f81c Merge pull request #3080 from mirwan/netchecker_template_rendering_filename
Netchecker manifests should not have j2 extension
2018-08-14 13:24:16 -05:00
Chad Swenson
5dbfa0384e Merge pull request #3101 from chenhonggc/uninstall_old_versions_of_docker
Uninstall old versions of Docker
2018-08-14 11:32:23 -05:00
edemsea
80c87db148 Define custom playbook in Vagrantfile
This change allows the playbook used in Vagrant to be
defined by the end user.

This is useful in the case where a developer may want to use
their own playbook that imports Kubespray, but also leverage
the Kubespray Vagrantfile.
2018-08-14 12:12:07 -04:00
Rong Zhang
d0d7777d68 Merge pull request #3108 from riverzhang/upgrade-coredns
Upgrade coredns to 1.1.3
2018-08-15 00:08:09 +08:00
rongzhang
48b6128814 Upgrade coredns to 1.1.3 2018-08-15 00:05:55 +08:00
Maxime Brunet
70b28288a3 Use delegate_to: localhost instead of local_action
Allows using `ansible_become: true` (#2969),
and sets it to `false` for `localhost` with a `host_var`
2018-08-14 10:08:43 -04:00
Rong Zhang
a11e1eba9e Upgrade kubernetes to V1.11.x (#3078)
Upgrade Kubernetes to V1.11.2
The kubeadm configuration file version has been upgraded from v1alpha1 to v1alpha2
Add bootstrap kubeadm-config.yaml with external etcd
2018-08-14 15:13:44 +03:00
Chen Hong
2dfa928c90 Uninstall old versions of Docker 2018-08-14 17:48:30 +08:00
Erwan Miran
d3c0fe1fcb Templates (even without actual templating inside) should have j2 extension but should not be rendered with j2 extension 2018-08-13 09:51:26 +02:00
Rong Zhang
36e8683cf5 Merge pull request #3091 from mauromedda/master
Add the path to kubectl binary
2018-08-13 10:04:59 +08:00
Hedayat Vatankhah
c0221c2e72 Add --dns-loop-detect to dnsmasq used in kube-dns
It prevents DNS loops when the host's DNS server is a localhost DNS server,
or when the cluster's DNS server is also added as an upstream DNS server
2018-08-12 20:36:33 +04:30
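The flag as it would appear among the dnsmasq container args (surrounding manifest omitted):

    args:
      - --dns-loop-detect   # detect and break DNS forwarding loops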
mauromedda
9cef20187c Add the path to kubectl binary
The post-remove action fails during kubectl delete node with rc 2 (command not found): kubectl is not in the system PATH, so the full path to the binary is required
2018-08-12 10:50:50 +02:00
Anton Fayzrahmanov
95f1e4634a local-volume-provisioner: use mountPropagation HostToContainer and version bump (#3081)
* Update local-volume-provisioner-ds.yml.j2

After v1.10.2 default mountPropagation is "None"

* local_volume_provisioner version bump

v2.1.0 uses the beta nodeAffinity API by default which is available starting 1.10

* Update local-volume-provisioner-ds.yml.j2

MY_NAMESPACE env

* Update README.md

Raw block devices docs.
2018-08-10 17:14:34 +03:00
Matthew Mosesohn
581a30fdec Remove erroneous cloud-config task 2018-08-10 15:59:18 +03:00
Sascha Marcel Schmidt
19e2868484 fix path to bootstrap tear down 2018-08-10 13:42:28 +02:00
Erwan Miran
494ff9522b j2 extension should only be used for template filename, not target file on remote host 2018-08-09 11:29:45 +02:00
wikiselev
53aee6dc24 fix glusterfs ppa and glusterfs server command name errors 2018-08-09 10:14:14 +01:00
Sascha Marcel Schmidt
9fba448053 refactor to use the kube module; finally fixes the race condition in storage tasks 2018-08-08 14:22:50 +02:00
Simon Li
d284961d47 Change heketi-tear-down to run on nodes instead of localhost delegate_to 2018-08-07 13:52:49 +02:00
Simon Li
8ac57201a7 Prefix heketi kubectl calls with {{ bin_dir }} 2018-08-07 13:48:16 +02:00
Jerry Elmore
e30847e231 Update openstack.md
Neutron cli is deprecated - replaced neutron cli commands with equivalent openstack cli commands.
2018-07-31 14:34:04 -04:00
Sascha Marcel Schmidt
2bd8fbb2dd add missing templates 2018-07-25 16:46:12 +02:00
Sascha Marcel Schmidt
205ea33b10 "fix" race condition 2018-07-25 16:42:57 +02:00
Sascha Marcel Schmidt
c42397d7db run kubectl on one of the masters 2018-07-25 16:42:30 +02:00
Sascha Marcel Schmidt
306a6a751f wait for job to complete 2018-07-08 13:16:25 +02:00
Sascha Marcel Schmidt
318c69350e pin heketi image version 2018-07-08 13:15:54 +02:00
Sascha Marcel Schmidt
6d1804d8a4 also remove storage class 2018-07-05 14:19:18 +02:00
Sascha Marcel Schmidt
ee67ece641 suppress unnecessary change 2018-07-05 14:18:27 +02:00
Sascha Marcel Schmidt
f703814561 add tear down playbook 2018-07-05 02:15:05 +02:00
Sascha Marcel Schmidt
c39835628d prevent some race conditions, increase over all time limits 2018-07-05 02:14:36 +02:00
Sascha Marcel Schmidt
1253725975 add necessary chdir 2018-07-04 19:31:25 +02:00
Sascha Marcel Schmidt
f4c1d6a5d7 remove unnecessary check for existing artifact 2018-07-04 19:08:02 +02:00
Sascha Marcel Schmidt
d7abdced05 fix typo 2018-07-04 18:58:45 +02:00
Sascha Marcel Schmidt
78aeef074e add hint on how to install heketi-cli 2018-07-04 18:40:48 +02:00
Sascha Marcel Schmidt
0b7aa33bc2 add jmespath as requirement 2018-07-04 18:25:35 +02:00
elementyang
5a4f07adca change the way of getting etcd_member_name 2018-07-05 00:06:37 +08:00
elementyang
effd27a5f6 change the way that getting etcd_member_name 2018-07-03 22:02:44 +08:00
Andreas Holmsten
b900bd6e94 [contrib/terraform/openstack] Add supplementary node groups
* Add supplementary node groups

  To add additional ansible groups to the k8s nodes, such as
  `kube-ingress` for running ingress controller pods. Empty by default.
2018-06-28 16:46:20 +02:00
Sascha Marcel Schmidt
8e275ab2bd change order and validation of bootstrap and rest tasks as well as
volumes
2018-06-27 12:30:14 +02:00
Sascha Marcel Schmidt
b56f465145 fix creation of heketi volumes and storage provisioning validation 2018-06-27 10:12:23 +02:00
Sascha Marcel Schmidt
74cad6b811 pin versions of container images 2018-06-27 10:11:14 +02:00
Cédric de Saint Martin
a260412c7e fluentd daemonset: do not set arbitrary nodeSelector. 2018-06-25 15:19:56 +02:00
Sascha Marcel Schmidt
8ef0cf771f update link 2018-06-25 15:09:22 +02:00
Sascha Marcel Schmidt
9516170ce5 remove unnecessary become flag 2018-06-25 15:09:19 +02:00
Sascha Marcel Schmidt
5aefa847df add fences 2018-06-25 15:09:16 +02:00
Sascha Marcel Schmidt
831ef7ea2c add readme 2018-06-25 15:09:13 +02:00
Sascha Marcel Schmidt
9c7e30e4b4 add sample inventory 2018-06-25 15:09:03 +02:00
Sascha Marcel Schmidt
8c5bfc7718 add debian compatibility 2018-06-25 15:08:53 +02:00
Sascha Marcel Schmidt
61046a6923 move heketi playbook 2018-06-25 15:08:35 +02:00
Sascha Marcel Schmidt
9d2fabc9b9 add heketi/glusterfs as additional contributional network storage 2018-06-25 15:08:18 +02:00
Henry Finucane
3ad9e9c5eb Fix #2261 by supporting Red Hat's limited PATH
Red Hat has this theory that binaries in sbin are too dangerous to be on
the default path, but we need them anyway.

RH7 has /sbin and /usr/sbin as symlinks, so that is no longer important.

I'm adding it to the `PATH` instead of making the path to `modinfo`
absolute because I am worried about breaking support for other
distributions.
2018-06-15 12:49:22 -07:00
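The PATH extension sketched on a task that needs sbin binaries:

    - name: Look up kernel module info with sbin dirs on PATH (sketch)
      command: modinfo br_netfilter
      environment:
        PATH: "{{ ansible_env.PATH }}:/sbin:/usr/sbin:/usr/local/sbin"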
Frank Ritchie
cfe939ff08 Tolerate NoSchedule by default 2018-06-11 20:10:13 -04:00
Di Xu
1081f620d2 add support for non-amd64 arch gcr.io images
Currently all the gcr.io images used in kubespray can only run on x86.
Also, gcr.io does not yet fully support multi-arch docker images.

Add extra var "image_arch" (default is amd64) to support running other
platforms, like arm64.

Change-Id: I8e1c9af533c021cb96ade291a1ce58773b40e271
2018-06-05 17:29:02 +08:00
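Usage sketch: hosts on another platform override the new variable and image references pick up the arch suffix:

    image_arch: arm64   # default is amd64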
Di Xu
6019a84fb3 Update docker package info for aarch64
The corresponding docker-engine package is missing on aarch64; use docker instead.

Change-Id: If5df58337746a81752b5d477e0473600eaee8381
2018-06-05 16:30:28 +08:00
Di Xu
f4d762bb95 fix docker opts incompatible running on aarch64 Redhat/Centos
On aarch64, the default cgroup driver for docker is systemd
instead of cgroupfs. Kubelet should be configured to use systemd
as its cgroup driver as well, to keep it consistent with docker.

Without this change, below exception will be raised.
/usr/bin/docker-current: Error response from daemon: shim
error: docker-runc not installed on system.

Change-Id: Id496ec9eaac6580e4da2f3ef1a386c9abc2a5129
2018-06-05 16:17:16 +08:00
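The kubelet side of the alignment, as a flag sketch; the value must match what docker info reports:

    kubelet_custom_flags:
      - "--cgroup-driver=systemd"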
Jan Jungnickel
8766b36144 Make path to generated inventory configurable 2017-12-04 16:41:35 +01:00
brx
2ffcfdcd25 Update main.yml 2017-11-24 20:13:38 +01:00
Boyang Jerry Peng
8d460a7300 Bug in download main.yml
I think there was a mistake here:

"{{ peer_with_calico_rr is defined and peer_with_calico_rr }} and kube_network_plugin == 'calico'"

should be

"{{ peer_with_calico_rr is defined and peer_with_calico_rr and kube_network_plugin == 'calico' }}"

this causes calico_rr to be downloaded even if you are using something other than calico
2017-11-07 17:17:19 -08:00
xuhuilong
71dabf9fb3 fix curl error when getting calico status (TLS version error): https://bugzilla.redhat.com/show_bug.cgi?id=1272504 2017-05-15 08:12:26 -04:00
593 changed files with 11559 additions and 3514 deletions

@@ -7,7 +7,7 @@ stages:
 variables:
   FAILFASTCI_NAMESPACE: 'kargo-ci'
-  GITLAB_REPOSITORY: 'kargo-ci/kubernetes-incubator__kubespray'
+  GITLAB_REPOSITORY: 'kargo-ci/kubernetes-sigs-kubespray'
   # DOCKER_HOST: tcp://localhost:2375
   ANSIBLE_FORCE_COLOR: "true"
   MAGIC: "ci check this"
@@ -42,7 +42,7 @@ before_script:
   tags:
     - kubernetes
     - docker
-  image: quay.io/kubespray/kubespray:latest
+  image: quay.io/kubespray/kubespray:v2.7
 .docker_service: &docker_service
   services:
@@ -93,10 +93,10 @@ before_script:
   # Check out latest tag if testing upgrade
   # Uncomment when gitlab kubespray repo has tags
   #- test "${UPGRADE_TEST}" != "false" && git fetch --all && git checkout $(git describe --tags $(git rev-list --tags --max-count=1))
-  - test "${UPGRADE_TEST}" != "false" && git checkout 02cd5418c22d51e40261775908d55bc562206023
+  - test "${UPGRADE_TEST}" != "false" && git checkout 53d87e53c5899d4ea2904ab7e3883708dd6363d3
   # Checkout the CI vars file so it is available
   - test "${UPGRADE_TEST}" != "false" && git checkout "${CI_BUILD_REF}" tests/files/${CI_JOB_NAME}.yml
-  # Workaround https://github.com/kubernetes-incubator/kubespray/issues/2021
+  # Workaround https://github.com/kubernetes-sigs/kubespray/issues/2021
   - 'sh -c "echo ignore_assert_errors: true | tee -a tests/files/${CI_JOB_NAME}.yml"'
@@ -240,9 +240,9 @@ before_script:
   # stage: deploy-part1
   MOVED_TO_GROUP_VARS: "true"
-.ubuntu_canal_ha_variables: &ubuntu_canal_ha_variables
+.ubuntu18_flannel_aio_variables: &ubuntu18_flannel_aio_variables
   # stage: deploy-part1
-  UPGRADE_TEST: "graceful"
   MOVED_TO_GROUP_VARS: "true"
 .centos_weave_kubeadm_variables: &centos_weave_kubeadm_variables
   # stage: deploy-part1
@@ -252,6 +252,10 @@ before_script:
   # stage: deploy-part1
   MOVED_TO_GROUP_VARS: "true"
+.ubuntu_canal_ha_variables: &ubuntu_canal_ha_variables
+  # stage: deploy-special
+  MOVED_TO_GROUP_VARS: "true"
 .ubuntu_contiv_sep_variables: &ubuntu_contiv_sep_variables
   # stage: deploy-special
   MOVED_TO_GROUP_VARS: "true"
@@ -263,7 +267,7 @@ before_script:
 .ubuntu_cilium_sep_variables: &ubuntu_cilium_sep_variables
   # stage: deploy-special
   MOVED_TO_GROUP_VARS: "true"
 .rhel7_weave_variables: &rhel7_weave_variables
   # stage: deploy-part1
   MOVED_TO_GROUP_VARS: "true"
@@ -272,7 +276,7 @@ before_script:
   # stage: deploy-part2
   MOVED_TO_GROUP_VARS: "true"
-.debian8_calico_variables: &debian8_calico_variables
+.debian9_calico_variables: &debian9_calico_variables
   # stage: deploy-part2
   MOVED_TO_GROUP_VARS: "true"
@@ -292,23 +296,31 @@ before_script:
   # stage: deploy-special
   MOVED_TO_GROUP_VARS: "true"
 .centos7_kube_router_variables: &centos7_kube_router_variables
   # stage: deploy-special
   MOVED_TO_GROUP_VARS: "true"
 .centos7_multus_calico_variables: &centos7_multus_calico_variables
   # stage: deploy-part2
   UPGRADE_TEST: "graceful"
 .coreos_alpha_weave_ha_variables: &coreos_alpha_weave_ha_variables
   # stage: deploy-special
   MOVED_TO_GROUP_VARS: "true"
 .coreos_kube_router_variables: &coreos_kube_router_variables
   # stage: deploy-special
   MOVED_TO_GROUP_VARS: "true"
 .ubuntu_rkt_sep_variables: &ubuntu_rkt_sep_variables
   # stage: deploy-part1
   MOVED_TO_GROUP_VARS: "true"
 .ubuntu_vault_sep_variables: &ubuntu_vault_sep_variables
   # stage: deploy-part1
+.ubuntu_flannel_variables: &ubuntu_flannel_variables
+  # stage: deploy-part2
   MOVED_TO_GROUP_VARS: "true"
 .coreos_vault_upgrade_variables: &coreos_vault_upgrade_variables
   # stage: deploy-part1
   UPGRADE_TEST: "basic"
-.ubuntu_flannel_variables: &ubuntu_flannel_variables
+.ubuntu_kube_router_variables: &ubuntu_kube_router_variables
   # stage: deploy-special
   MOVED_TO_GROUP_VARS: "true"
@@ -319,10 +331,24 @@ before_script:
 # Builds for PRs only (premoderated by unit-tests step) and triggers (auto)
 ### PR JOBS PART1
-gce_coreos-calico-aio:
+gce_ubuntu18-flannel-aio:
   stage: deploy-part1
   <<: *job
   <<: *gce
   variables:
+    <<: *ubuntu18_flannel_aio_variables
+    <<: *gce_variables
+  when: on_success
+  except: ['triggers']
+  only: [/^pr-.*$/]
+
+### PR JOBS PART2
+gce_coreos-calico-aio:
+  stage: deploy-part2
+  <<: *job
+  <<: *gce
+  variables:
     <<: *coreos_calico_aio_variables
     <<: *gce_variables
@@ -330,7 +356,6 @@ gce_coreos-calico-aio:
   except: ['triggers']
   only: [/^pr-.*$/]
-### PR JOBS PART2
 gce_centos7-flannel-addons:
   stage: deploy-part2
   <<: *job
@@ -342,6 +367,30 @@ gce_centos7-flannel-addons:
   except: ['triggers']
   only: [/^pr-.*$/]
+gce_centos-weave-kubeadm-sep:
+  stage: deploy-part2
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *centos_weave_kubeadm_variables
+  when: on_success
+  except: ['triggers']
+  only: [/^pr-.*$/]
+
+gce_ubuntu-flannel-ha:
+  stage: deploy-part2
+  <<: *job
+  <<: *gce
+  variables:
+    <<: *gce_variables
+    <<: *ubuntu_flannel_variables
+  when: on_success
+  except: ['triggers']
+  only: [/^pr-.*$/]
+
+### MANUAL JOBS
 gce_ubuntu-weave-sep:
   stage: deploy-part2
   <<: *job
@@ -349,11 +398,10 @@ gce_ubuntu-weave-sep:
   variables:
     <<: *gce_variables
     <<: *ubuntu_weave_sep_variables
-  when: on_success
+  when: manual
   except: ['triggers']
   only: [/^pr-.*$/]
-### MANUAL JOBS
 gce_coreos-calico-sep-triggers:
   stage: deploy-part2
   <<: *job
@@ -365,7 +413,7 @@ gce_coreos-calico-sep-triggers:
   only: ['triggers']
 gce_ubuntu-canal-ha-triggers:
   stage: deploy-part2
+  stage: deploy-special
   <<: *job
   <<: *gce
   variables:
@@ -407,7 +455,7 @@ do_ubuntu-canal-ha:
   only: ['master', /^pr-.*$/]
 gce_ubuntu-canal-ha:
-  stage: deploy-part2
+  stage: deploy-special
<<: *job
<<: *gce
variables:
@@ -438,17 +486,6 @@ gce_ubuntu-canal-kubeadm-triggers:
when: on_success
only: ['triggers']
gce_centos-weave-kubeadm:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *centos_weave_kubeadm_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_centos-weave-kubeadm-triggers:
stage: deploy-part2
<<: *job
@@ -513,24 +550,24 @@ gce_rhel7-weave-triggers:
when: on_success
only: ['triggers']
gce_debian8-calico-upgrade:
gce_debian9-calico-upgrade:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *debian8_calico_variables
<<: *debian9_calico_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_debian8-calico-triggers:
gce_debian9-calico-triggers:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *debian8_calico_variables
<<: *debian9_calico_variables
when: on_success
only: ['triggers']
@@ -597,6 +634,28 @@ gce_centos7-calico-ha-triggers:
when: on_success
only: ['triggers']
gce_centos7-kube-router:
stage: deploy-special
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *centos7_kube_router_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_centos7-multus-calico:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *centos7_multus_calico_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_opensuse-canal:
stage: deploy-part2
<<: *job
@@ -620,6 +679,17 @@ gce_coreos-alpha-weave-ha:
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_coreos-kube-router:
stage: deploy-special
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *coreos_kube_router_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_ubuntu-rkt-sep:
stage: deploy-part2
<<: *job
@@ -631,35 +701,13 @@ gce_ubuntu-rkt-sep:
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_ubuntu-vault-sep:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_vault_sep_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_coreos-vault-upgrade:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *coreos_vault_upgrade_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_ubuntu-flannel-sep:
gce_ubuntu-kube-router-sep:
stage: deploy-special
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_flannel_variables
<<: *ubuntu_kube_router_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]


@@ -6,11 +6,11 @@ RUN apt update -y && \
apt install -y \
libssl-dev python-dev sshpass apt-transport-https \
ca-certificates curl gnupg2 software-properties-common python-pip
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable" \
&& apt update -y && apt-get install docker-ce -y
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable" \
&& apt update -y && apt-get install docker-ce -y
COPY . .
RUN /usr/bin/python -m pip install pip -U && /usr/bin/python -m pip install -r tests/requirements.txt && python -m pip install -r requirements.txt

5
Makefile Normal file

@@ -0,0 +1,5 @@
mitogen:
ansible-playbook -c local mitogen.yaml -vv
clean:
rm -rf dist/
rm *.retry
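As a usage sketch (assuming mitogen's stock `mitogen_linear` strategy name and the `strategy_plugins` path configured in `ansible.cfg` further below):

```
# Fetch the mitogen plugin locally (runs mitogen.yaml against localhost), then opt in per run.
make mitogen
ANSIBLE_STRATEGY=mitogen_linear ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml
```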

12
OWNERS

@@ -1,9 +1,7 @@
# See the OWNERS file documentation:
# https://github.com/kubernetes/kubernetes/blob/master/docs/devel/owners.md
# https://github.com/kubernetes/community/blob/master/contributors/guide/owners.md
owners:
- Smana
- ant31
- bogdando
- mattymo
- rsmitty
approvers:
- kubespray-approvers
reviewers:
- kubespray-reviewers

18
OWNERS_ALIASES Normal file

@@ -0,0 +1,18 @@
aliases:
kubespray-approvers:
- ant31
- mattymo
- atoms
- chadswen
- rsmitty
- bogdando
- bradbeam
- woopstar
- riverzhang
- holser
- smana
kubespray-reviewers:
- jjungnickel
- archifleks
- chapsuk
- mirwan


@@ -1,11 +1,12 @@
![Kubernetes Logo](https://raw.githubusercontent.com/kubernetes-incubator/kubespray/master/docs/img/kubernetes-logo.png)
![Kubernetes Logo](https://raw.githubusercontent.com/kubernetes-sigs/kubespray/master/docs/img/kubernetes-logo.png)
Deploy a Production Ready Kubernetes Cluster
============================================
If you have questions, join us on the [kubernetes slack](https://kubernetes.slack.com), channel **\#kubespray**.
You can get your invite [here](http://slack.k8s.io/)
- Can be deployed on **AWS, GCE, Azure, OpenStack, vSphere or Baremetal**
- Can be deployed on **AWS, GCE, Azure, OpenStack, vSphere, Oracle Cloud Infrastructure (Experimental), or Baremetal**
- **Highly available** cluster
- **Composable** (Choice of the network plugin for instance)
- Supports most popular **Linux distributions**
@@ -16,8 +17,17 @@ Quick Start
To deploy the cluster you can use :
### Current release
2.8.2
### Ansible
#### Ansible version
Ansible v2.7.0 fails and/or produces unexpected results due to [ansible/ansible/issues/46600](https://github.com/ansible/ansible/issues/46600)
#### Usage
# Install dependencies from ``requirements.txt``
sudo pip install -r requirements.txt
@@ -29,11 +39,24 @@ To deploy the cluster you can use :
CONFIG_FILE=inventory/mycluster/hosts.ini python3 contrib/inventory_builder/inventory.py ${IPS[@]}
# Review and change parameters under ``inventory/mycluster/group_vars``
cat inventory/mycluster/group_vars/all.yml
cat inventory/mycluster/group_vars/k8s-cluster.yml
cat inventory/mycluster/group_vars/all/all.yml
cat inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
# Deploy Kubespray with Ansible Playbook
ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml
# Deploy Kubespray with Ansible Playbook - run the playbook as root
# The option `-b` is required, as for example writing SSL keys in /etc/,
# installing packages and interacting with various systemd daemons.
# Without -b the playbook will fail to run!
ansible-playbook -i inventory/mycluster/hosts.ini --become --become-user=root cluster.yml
Note: When Ansible is already installed via system packages on the control machine, the other python packages installed via `sudo pip install -r requirements.txt` will go to a different directory tree (e.g. `/usr/local/lib/python2.7/dist-packages` on Ubuntu) from Ansible's (e.g. `/usr/lib/python2.7/dist-packages/ansible`, also on Ubuntu).
As a consequence, the `ansible-playbook` command will fail with:
```
ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
```
probably pointing to a task that depends on a module present in requirements.txt (e.g. "unseal vault").
One way of solving this is to uninstall the system Ansible package and reinstall it via pip, but that is not always possible.
A workaround is to set the `ANSIBLE_LIBRARY` and `ANSIBLE_MODULE_UTILS` environment variables to the `ansible/modules` and `ansible/module_utils` subdirectories of the pip packages' installation location (shown in the Location field of the output of `pip show [package]`) before executing `ansible-playbook`.
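For example (a sketch of the workaround just described; the `Location` value is whatever pip reports on your system):

```
# Derive pip's install location for ansible and point the module search paths at it.
ANSIBLE_LOC=$(pip show ansible | awk '/^Location:/ {print $2}')
export ANSIBLE_LIBRARY="$ANSIBLE_LOC/ansible/modules"
export ANSIBLE_MODULE_UTILS="$ANSIBLE_LOC/ansible/module_utils"
ansible-playbook -i inventory/mycluster/hosts.ini --become cluster.yml
```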
### Vagrant
@@ -78,9 +101,10 @@ Supported Linux Distributions
-----------------------------
- **Container Linux by CoreOS**
- **Debian** Jessie, Stretch, Wheezy
- **Ubuntu** 16.04
- **Debian** Buster, Jessie, Stretch, Wheezy
- **Ubuntu** 16.04, 18.04
- **CentOS/RHEL** 7
- **Fedora** 28
- **Fedora/CentOS** Atomic
- **openSUSE** Leap 42.3/Tumbleweed
@@ -90,23 +114,27 @@ Supported Components
--------------------
- Core
- [kubernetes](https://github.com/kubernetes/kubernetes) v1.10.4
- [etcd](https://github.com/coreos/etcd) v3.2.18
- [docker](https://www.docker.com/) v17.03 (see note)
- [kubernetes](https://github.com/kubernetes/kubernetes) v1.12.7
- [etcd](https://github.com/coreos/etcd) v3.2.24
- [docker](https://www.docker.com/) v18.06 (see note)
- [rkt](https://github.com/rkt/rkt) v1.21.0 (see Note 2)
- [cri-o](http://cri-o.io/) v1.11.5 (experimental: see [CRI-O Note](docs/cri-o.md). Only on centos based OS)
- Network Plugin
- [calico](https://github.com/projectcalico/calico) v2.6.8
- [calico](https://github.com/projectcalico/calico) v3.1.3
- [canal](https://github.com/projectcalico/canal) (given calico/flannel versions)
- [cilium](https://github.com/cilium/cilium) v1.1.2
- [contiv](https://github.com/contiv/install) v1.1.7
- [cilium](https://github.com/cilium/cilium) v1.3.0
- [contiv](https://github.com/contiv/install) v1.2.1
- [flanneld](https://github.com/coreos/flannel) v0.10.0
- [weave](https://github.com/weaveworks/weave) v2.4.0
- [kube-router](https://github.com/cloudnativelabs/kube-router) v0.2.1
- [multus](https://github.com/intel/multus-cni) v3.1.autoconf
- [weave](https://github.com/weaveworks/weave) v2.5.0
- Application
- [cephfs-provisioner](https://github.com/kubernetes-incubator/external-storage) v1.1.0-k8s1.10
- [cert-manager](https://github.com/jetstack/cert-manager) v0.4.0
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v0.17.1
- [cephfs-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.0-k8s1.11
- [cert-manager](https://github.com/jetstack/cert-manager) v0.5.2
- [coredns](https://github.com/coredns/coredns) v1.2.6
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v0.21.0
Note: kubernetes doesn't support newer docker versions. Among other things kubelet currently breaks on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster, look into e.g. the yum versionlock plugin or apt pinning.
Note: The list of validated [docker versions](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md) was updated to 1.11.1, 1.12.1, 1.13.1, 17.03, 17.06, 17.09, 18.06. The kubelet might break on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster, look into e.g. the yum versionlock plugin or apt pinning.
Note 2: rkt support as docker alternative is limited to control plane (etcd and
kubelet). Docker is still used for Kubernetes cluster workloads and network
@@ -116,10 +144,10 @@ plugins can be deployed for a given single cluster.
Requirements
------------
- **Ansible v2.4 (or newer) and python-netaddr is installed on the machine
- **Ansible v2.5 (or newer) and python-netaddr is installed on the machine
that will run Ansible commands**
- **Jinja 2.9 (or newer) is required to run the Ansible Playbooks**
- The target servers must have **access to the Internet** in order to pull docker images.
- The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/downloads.md#offline-environment))
- The target servers are configured to allow **IPv4 forwarding**.
- **Your ssh key must be copied** to all the servers part of your inventory.
- The **firewalls are not managed**, you'll need to implement your own rules the way you used to.
@@ -147,6 +175,13 @@ You can choose between 6 network plugins. (default: `calico`, except Vagrant use
- [weave](docs/weave.md): Weave is a lightweight container overlay network that doesn't require an external K/V database cluster.
(Please refer to `weave` [troubleshooting documentation](http://docs.weave.works/weave/latest_release/troubleshooting.html)).
- [kube-router](docs/kube-router.md): Kube-router is a L3 CNI for Kubernetes networking aiming to provide operational
simplicity and high performance: it uses IPVS to provide the Kube Services Proxy (if set up to replace kube-proxy),
iptables for network policies, and BGP for pods' L3 networking (optionally with BGP peering to out-of-cluster BGP peers).
It can also optionally advertise routes to Kubernetes cluster Pods CIDRs, ClusterIPs, ExternalIPs and LoadBalancerIPs.
- [multus](docs/multus.md): Multus is a meta CNI plugin that provides multiple network interface support to pods. For each interface Multus delegates CNI calls to secondary CNI plugins such as Calico, macvlan, etc.
The choice is defined with the variable `kube_network_plugin`. There is also an
option to leverage built-in cloud provider networking instead.
See also [Network checker](docs/netcheck.md).
@@ -164,7 +199,7 @@ Tools and projects on top of Kubespray
- [Digital Rebar Provision](https://github.com/digitalrebar/provision/blob/master/doc/integrations/ansible.rst)
- [Fuel-ccp-installer](https://github.com/openstack/fuel-ccp-installer)
- [Terraform Contrib](https://github.com/kubernetes-incubator/kubespray/tree/master/contrib/terraform)
- [Terraform Contrib](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform)
CI Tests
--------

150
Vagrantfile vendored

@@ -1,6 +1,8 @@
# -*- mode: ruby -*-
# # vi: set ft=ruby :
# For help on using kubespray with vagrant, check out docs/vagrant.md
require 'fileutils'
Vagrant.require_version ">= 2.0.0"
@@ -13,13 +15,16 @@ COREOS_URL_TEMPLATE = "https://storage.googleapis.com/%s.release.core-os.net/amd
DISK_UUID = Time.now.utc.to_i
SUPPORTED_OS = {
"coreos-stable" => {box: "coreos-stable", bootstrap_os: "coreos", user: "core", box_url: COREOS_URL_TEMPLATE % ["stable"]},
"coreos-alpha" => {box: "coreos-alpha", bootstrap_os: "coreos", user: "core", box_url: COREOS_URL_TEMPLATE % ["alpha"]},
"coreos-beta" => {box: "coreos-beta", bootstrap_os: "coreos", user: "core", box_url: COREOS_URL_TEMPLATE % ["beta"]},
"ubuntu" => {box: "bento/ubuntu-16.04", bootstrap_os: "ubuntu", user: "vagrant"},
"centos" => {box: "centos/7", bootstrap_os: "centos", user: "vagrant"},
"opensuse" => {box: "opensuse/openSUSE-42.3-x86_64", bootstrap_os: "opensuse", use: "vagrant"},
"opensuse-tumbleweed" => {box: "opensuse/openSUSE-Tumbleweed-x86_64", bootstrap_os: "opensuse", use: "vagrant"},
"coreos-stable" => {box: "coreos-stable", user: "core", box_url: COREOS_URL_TEMPLATE % ["stable"]},
"coreos-alpha" => {box: "coreos-alpha", user: "core", box_url: COREOS_URL_TEMPLATE % ["alpha"]},
"coreos-beta" => {box: "coreos-beta", user: "core", box_url: COREOS_URL_TEMPLATE % ["beta"]},
"ubuntu1604" => {box: "generic/ubuntu1604", user: "vagrant"},
"ubuntu1804" => {box: "generic/ubuntu1804", user: "vagrant"},
"centos" => {box: "centos/7", user: "vagrant"},
"centos-bento" => {box: "bento/centos-7.5", user: "vagrant"},
"fedora" => {box: "fedora/28-cloud-base", user: "vagrant"},
"opensuse" => {box: "opensuse/openSUSE-42.3-x86_64", user: "vagrant"},
"opensuse-tumbleweed" => {box: "opensuse/openSUSE-Tumbleweed-x86_64", user: "vagrant"},
}
# Defaults for config options defined in CONFIG
@@ -31,8 +36,10 @@ $vm_cpus = 1
$shared_folders = {}
$forwarded_ports = {}
$subnet = "172.17.8"
$os = "ubuntu"
$os = "ubuntu1804"
$network_plugin = "flannel"
# Setting multi_networking to true will install Multus: https://github.com/intel/multus-cni
$multi_networking = false
# The first three nodes are etcd servers
$etcd_instances = $num_instances
# The first two nodes are kube masters
@@ -44,7 +51,7 @@ $kube_node_instances_with_disks = false
$kube_node_instances_with_disks_size = "20G"
$kube_node_instances_with_disks_number = 2
$local_release_dir = "/vagrant/temp"
$playbook = "cluster.yml"
host_vars = {}
@@ -54,13 +61,13 @@ end
$box = SUPPORTED_OS[$os][:box]
# if $inventory is not set, try to use example
$inventory = File.join(File.dirname(__FILE__), "inventory", "sample") if ! $inventory
$inventory = "inventory/sample" if ! $inventory
$inventory = File.absolute_path($inventory, File.dirname(__FILE__))
# if $inventory has a hosts file use it, otherwise copy over vars etc
# to where vagrant expects dynamic inventory to be.
if ! File.exist?(File.join(File.dirname($inventory), "hosts"))
$vagrant_ansible = File.join(File.dirname(__FILE__), ".vagrant",
"provisioners", "ansible")
# if $inventory has a hosts.ini file use it, otherwise copy over
# vars etc to where vagrant expects dynamic inventory to be
if ! File.exist?(File.join(File.dirname($inventory), "hosts.ini"))
$vagrant_ansible = File.join(File.dirname(__FILE__), ".vagrant", "provisioners", "ansible")
FileUtils.mkdir_p($vagrant_ansible) if ! File.exist?($vagrant_ansible)
if ! File.exist?(File.join($vagrant_ansible,"inventory"))
FileUtils.ln_s($inventory, File.join($vagrant_ansible,"inventory"))
@@ -75,76 +82,60 @@ if Vagrant.has_plugin?("vagrant-proxyconf")
end
Vagrant.configure("2") do |config|
# always use Vagrants insecure key
config.ssh.insert_key = false
config.vm.box = $box
if SUPPORTED_OS[$os].has_key? :box_url
config.vm.box_url = SUPPORTED_OS[$os][:box_url]
end
config.ssh.username = SUPPORTED_OS[$os][:user]
# plugin conflict
if Vagrant.has_plugin?("vagrant-vbguest") then
config.vbguest.auto_update = false
end
# always use Vagrants insecure key
config.ssh.insert_key = false
(1..$num_instances).each do |i|
config.vm.define vm_name = "%s-%02d" % [$instance_name_prefix, i] do |config|
config.vm.hostname = vm_name
config.vm.define vm_name = "%s-%01d" % [$instance_name_prefix, i] do |node|
node.vm.hostname = vm_name
if Vagrant.has_plugin?("vagrant-proxyconf")
config.proxy.http = ENV['HTTP_PROXY'] || ENV['http_proxy'] || ""
config.proxy.https = ENV['HTTPS_PROXY'] || ENV['https_proxy'] || ""
config.proxy.no_proxy = $no_proxy
end
if $expose_docker_tcp
config.vm.network "forwarded_port", guest: 2375, host: ($expose_docker_tcp + i - 1), auto_correct: true
end
$forwarded_ports.each do |guest, host|
config.vm.network "forwarded_port", guest: guest, host: host, auto_correct: true
node.proxy.http = ENV['HTTP_PROXY'] || ENV['http_proxy'] || ""
node.proxy.https = ENV['HTTPS_PROXY'] || ENV['https_proxy'] || ""
node.proxy.no_proxy = $no_proxy
end
["vmware_fusion", "vmware_workstation"].each do |vmware|
config.vm.provider vmware do |v|
node.vm.provider vmware do |v|
v.vmx['memsize'] = $vm_memory
v.vmx['numvcpus'] = $vm_cpus
end
end
config.vm.synced_folder ".", "/vagrant", type: "rsync", rsync__args: ['--verbose', '--archive', '--delete', '-z']
$shared_folders.each do |src, dst|
config.vm.synced_folder src, dst, type: "rsync", rsync__args: ['--verbose', '--archive', '--delete', '-z']
end
config.vm.provider :virtualbox do |vb|
vb.gui = $vm_gui
node.vm.provider :virtualbox do |vb|
vb.memory = $vm_memory
vb.cpus = $vm_cpus
vb.gui = $vm_gui
vb.linked_clone = true
end
config.vm.provider :libvirt do |lv|
lv.memory = $vm_memory
end
ip = "#{$subnet}.#{i+100}"
host_vars[vm_name] = {
"ip": ip,
"bootstrap_os": SUPPORTED_OS[$os][:bootstrap_os],
"local_release_dir" => $local_release_dir,
"download_run_once": "False",
"kube_network_plugin": $network_plugin
}
config.vm.network :private_network, ip: ip
# Disable swap for each vm
config.vm.provision "shell", inline: "swapoff -a"
node.vm.provider :libvirt do |lv|
lv.memory = $vm_memory
lv.cpus = $vm_cpus
lv.default_prefix = 'kubespray'
# Fix kernel panic on fedora 28
if $os == "fedora"
lv.cpu_mode = "host-passthrough"
end
end
if $kube_node_instances_with_disks
# Libvirt
driverletters = ('a'..'z').to_a
config.vm.provider :libvirt do |lv|
node.vm.provider :libvirt do |lv|
# always make /dev/sd{a/b/c} so that CI can ensure that
# virtualbox and libvirt will have the same devices to use for OSDs
(1..$kube_node_instances_with_disks_number).each do |d|
@@ -153,24 +144,51 @@ Vagrant.configure("2") do |config|
end
end
# Only execute once the Ansible provisioner,
# when all the machines are up and ready.
if $expose_docker_tcp
node.vm.network "forwarded_port", guest: 2375, host: ($expose_docker_tcp + i - 1), auto_correct: true
end
$forwarded_ports.each do |guest, host|
node.vm.network "forwarded_port", guest: guest, host: host, auto_correct: true
end
node.vm.synced_folder ".", "/vagrant", disabled: false, type: "rsync", rsync__args: ['--verbose', '--archive', '--delete', '-z'] , rsync__exclude: ['.git','venv']
$shared_folders.each do |src, dst|
node.vm.synced_folder src, dst, type: "rsync", rsync__args: ['--verbose', '--archive', '--delete', '-z']
end
ip = "#{$subnet}.#{i+100}"
node.vm.network :private_network, ip: ip
# Disable swap for each vm
node.vm.provision "shell", inline: "swapoff -a"
host_vars[vm_name] = {
"ip": ip,
"kube_network_plugin": $network_plugin,
"kube_network_plugin_multus": $multi_networking,
"docker_keepcache": "1",
"download_run_once": "True",
"download_localhost": "False"
}
# Only execute the Ansible provisioner once, when all the machines are up and ready.
if i == $num_instances
config.vm.provision "ansible" do |ansible|
ansible.playbook = "cluster.yml"
if File.exist?(File.join(File.dirname($inventory), "hosts"))
node.vm.provision "ansible" do |ansible|
ansible.playbook = $playbook
if File.exist?(File.join( $inventory, "hosts.ini"))
ansible.inventory_path = $inventory
end
ansible.become = true
ansible.limit = "all"
ansible.host_key_checking = false
ansible.raw_arguments = ["--forks=#{$num_instances}", "--flush-cache"]
ansible.raw_arguments = ["--forks=#{$num_instances}", "--flush-cache", "--ask-become-pass"]
ansible.host_vars = host_vars
#ansible.tags = ['download']
ansible.groups = {
"etcd" => ["#{$instance_name_prefix}-0[1:#{$etcd_instances}]"],
"kube-master" => ["#{$instance_name_prefix}-0[1:#{$kube_master_instances}]"],
"kube-node" => ["#{$instance_name_prefix}-0[1:#{$kube_node_instances}]"],
"etcd" => ["#{$instance_name_prefix}-[1:#{$etcd_instances}]"],
"kube-master" => ["#{$instance_name_prefix}-[1:#{$kube_master_instances}]"],
"kube-node" => ["#{$instance_name_prefix}-[1:#{$kube_node_instances}]"],
"k8s-cluster:children" => ["kube-master", "kube-node"],
}
end


@@ -3,6 +3,8 @@ pipelining=True
ssh_args = -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100 -o UserKnownHostsFile=/dev/null
#control_path = ~/.ssh/ansible-%%r@%%h:%%p
[defaults]
strategy_plugins = plugins/mitogen/ansible_mitogen/plugins/strategy
host_key_checking=False
gathering = smart
fact_caching = jsonfile
@@ -13,3 +15,5 @@ callback_whitelist = profile_tasks
roles_path = roles:$VIRTUAL_ENV/usr/local/share/kubespray/roles:$VIRTUAL_ENV/usr/local/share/ansible/roles:/usr/share/kubespray/roles
deprecation_warnings=False
inventory_ignore_extensions = ~, .orig, .bak, .ini, .cfg, .retry, .pyc, .pyo, .creds
[inventory]
ignore_patterns = artifacts, credentials


@@ -1,5 +1,33 @@
---
- hosts: localhost
gather_facts: false
tasks:
- name: "Check ansible version !=2.7.0"
assert:
msg: "Ansible V2.7.0 can't be used until: https://github.com/ansible/ansible/issues/46600 is fixed"
that:
- ansible_version.string is version("2.7.0", "!=")
- ansible_version.string is version("2.5.0", ">=")
tags:
- check
vars:
ansible_connection: local
- hosts: localhost
gather_facts: false
tasks:
- name: deploy warning for non kubeadm
debug:
msg: "DEPRECATION: non-kubeadm deployment is deprecated from v2.9. Will be removed in next release."
when: not kubeadm_enabled and not skip_non_kubeadm_warning
- name: deploy cluster for non kubeadm
pause:
prompt: "Are you sure you want to deploy cluster using the deprecated non-kubeadm mode."
echo: no
when: not kubeadm_enabled and not skip_non_kubeadm_warning
- hosts: bastion[0]
gather_facts: False
roles:
- { role: kubespray-defaults}
@@ -33,20 +61,10 @@
roles:
- { role: kubespray-defaults}
- { role: kubernetes/preinstall, tags: preinstall }
- { role: docker, tags: docker, when: manage_docker|default(true) }
- role: rkt
tags: rkt
when: "'rkt' in [etcd_deployment_type, kubelet_deployment_type, vault_deployment_type]"
- { role: "container-engine", tags: "container-engine", when: deploy_container_engine|default(true) }
- { role: download, tags: download, when: "not skip_downloads" }
environment: "{{proxy_env}}"
- hosts: etcd:k8s-cluster:vault:calico-rr
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults, when: "cert_management == 'vault'" }
- { role: vault, tags: vault, vault_bootstrap: true, when: "cert_management == 'vault'" }
environment: "{{proxy_env}}"
- hosts: etcd
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
@@ -59,13 +77,6 @@
- { role: kubespray-defaults}
- { role: etcd, tags: etcd, etcd_cluster_setup: false, etcd_events_cluster_setup: false }
- hosts: etcd:k8s-cluster:vault:calico-rr
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: vault, tags: vault, when: "cert_management == 'vault'"}
environment: "{{proxy_env}}"
- hosts: k8s-cluster
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
@@ -93,6 +104,7 @@
roles:
- { role: kubespray-defaults}
- { role: kubernetes-apps/rotate_tokens, tags: rotate_tokens, when: "secret_changed|default(false)" }
- { role: win_nodes/kubernetes_patch, tags: ["master", "win_nodes"], when: "kubeadm_enabled" }
- hosts: kube-master
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
@@ -114,10 +126,10 @@
roles:
- { role: kubespray-defaults}
- { role: dnsmasq, when: "dns_mode == 'dnsmasq_kubedns'", tags: dnsmasq }
- { role: kubernetes/preinstall, when: "dns_mode != 'none' and resolvconf_mode == 'host_resolvconf'", tags: resolvconf }
- { role: kubernetes/preinstall, when: "dns_mode != 'none' and resolvconf_mode == 'host_resolvconf'", tags: resolvconf, dns_late: true }
environment: "{{proxy_env}}"
- hosts: kube-master[0]
- hosts: kube-master
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
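The non-kubeadm deprecation prompt added in this diff can be bypassed non-interactively via the variables it checks (a sketch using the variable names from the play):

```
# Either deploy with kubeadm (the non-deprecated mode) ...
ansible-playbook -i inventory/mycluster/hosts.ini -b cluster.yml -e kubeadm_enabled=true
# ... or acknowledge the deprecation explicitly and keep the old mode.
ansible-playbook -i inventory/mycluster/hosts.ini -b cluster.yml -e skip_non_kubeadm_warning=true
```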


@@ -1,5 +1,6 @@
#!/usr/bin/env python
from __future__ import print_function
import boto3
import os
import argparse
@@ -13,7 +14,7 @@ class SearchEC2Tags(object):
self.search_tags()
if self.args.host:
data = {}
print json.dumps(data, indent=2)
print(json.dumps(data, indent=2))
def parse_args(self):
@@ -44,18 +45,29 @@ class SearchEC2Tags(object):
instances = ec2.instances.filter(Filters=[{'Name': 'tag:'+tag_key, 'Values': tag_value}, {'Name': 'instance-state-name', 'Values': ['running']}])
for instance in instances:
##Suppose default vpc_visibility is private
dns_name = instance.private_dns_name
ansible_host = {
'ansible_ssh_host': instance.private_ip_address
}
##Override when vpc_visibility actually is public
if self.vpc_visibility == "public":
hosts[group].append(instance.public_dns_name)
hosts['_meta']['hostvars'][instance.public_dns_name] = {
'ansible_ssh_host': instance.public_ip_address
}
else:
hosts[group].append(instance.private_dns_name)
hosts['_meta']['hostvars'][instance.private_dns_name] = {
'ansible_ssh_host': instance.private_ip_address
dns_name = instance.public_dns_name
ansible_host = {
'ansible_ssh_host': instance.public_ip_address
}
##Set when instance actually has node_labels
node_labels_tag = list(filter(lambda t: t['Key'] == 'kubespray-node-labels', instance.tags))
if node_labels_tag:
ansible_host['node_labels'] = dict([ label.strip().split('=') for label in node_labels_tag[0]['Value'].split(',') ])
hosts[group].append(dns_name)
hosts['_meta']['hostvars'][dns_name] = ansible_host
hosts['k8s-cluster'] = {'children':['kube-master', 'kube-node']}
print json.dumps(hosts, sort_keys=True, indent=2)
print(json.dumps(hosts, sort_keys=True, indent=2))
SearchEC2Tags()
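Correspondingly, an instance can be labelled from the EC2 side by setting the `kubespray-node-labels` tag that the script parses above; a sketch with the AWS CLI (the instance ID and label values are illustrative):

```
# The tag value is split on ',' then '=' by the inventory script, yielding node_labels.
aws ec2 create-tags --resources i-0123456789abcdef0 \
  --tags '[{"Key":"kubespray-node-labels","Value":"env=dev,arch=arm64"}]'
```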


@@ -1,5 +1,5 @@
# Due to some Azure limitations (ex:- Storage Account's name must be unique),
# Due to some Azure limitations (ex:- Storage Account's name must be unique),
# this name must be globally unique - it will be used as a prefix for azure components
cluster_name: example
@@ -7,6 +7,10 @@ cluster_name: example
# node that can be used to access the masters and minions
use_bastion: false
# Set this to a preferred name that will be used as the first part of the DNS name for your bastion host. For example: k8s-bastion.<azureregion>.cloudapp.azure.com.
# This is convenient when exceptions have to be configured on a firewall to allow ssh to the given bastion host.
# bastion_domain_prefix: k8s-bastion
number_of_k8s_masters: 3
number_of_k8s_nodes: 3
@@ -20,7 +24,8 @@ admin_username: devops
admin_password: changeme
# MAKE SURE TO CHANGE THIS TO YOUR PUBLIC KEY to access your azure machines
ssh_public_key: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDLRzcxbsFDdEibiyXCSdIFh7bKbXso1NqlKjEyPTptf3aBXHEhVil0lJRjGpTlpfTy7PHvXFbXIOCdv9tOmeH1uxWDDeZawgPFV6VSZ1QneCL+8bxzhjiCn8133wBSPZkN8rbFKd9eEUUBfx8ipCblYblF9FcidylwtMt5TeEmXk8yRVkPiCuEYuDplhc2H0f4PsK3pFb5aDVdaDT3VeIypnOQZZoUxHWqm6ThyHrzLJd3SrZf+RROFWW1uInIDf/SZlXojczUYoffxgT1lERfOJCHJXsqbZWugbxQBwqsVsX59+KPxFFo6nV88h3UQr63wbFx52/MXkX4WrCkAHzN ablock-vwfs@dell-lappy"
ssh_public_keys:
- "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDLRzcxbsFDdEibiyXCSdIFh7bKbXso1NqlKjEyPTptf3aBXHEhVil0lJRjGpTlpfTy7PHvXFbXIOCdv9tOmeH1uxWDDeZawgPFV6VSZ1QneCL+8bxzhjiCn8133wBSPZkN8rbFKd9eEUUBfx8ipCblYblF9FcidylwtMt5TeEmXk8yRVkPiCuEYuDplhc2H0f4PsK3pFb5aDVdaDT3VeIypnOQZZoUxHWqm6ThyHrzLJd3SrZf+RROFWW1uInIDf/SZlXojczUYoffxgT1lERfOJCHJXsqbZWugbxQBwqsVsX59+KPxFFo6nV88h3UQr63wbFx52/MXkX4WrCkAHzN ablock-vwfs@dell-lappy"
# Disable using ssh using password. Change it to false to allow to connect to ssh by password
disablePasswordAuthentication: true


@@ -1,5 +1,5 @@
{% for vm in vm_ip_list %}
{% for vm in vm_ip_list %}
{% if not use_bastion or vm.virtualMachine.name == 'bastion' %}
{{ vm.virtualMachine.name }} ansible_ssh_host={{ vm.virtualMachine.network.publicIpAddresses[0].ipAddress }} ip={{ vm.virtualMachine.network.privateIpAddresses[0] }}
{% else %}


@@ -15,7 +15,12 @@
"name": "{{bastionIPAddressName}}",
"location": "[resourceGroup().location]",
"properties": {
"publicIPAllocationMethod": "Static"
"publicIPAllocationMethod": "Static",
"dnsSettings": {
{% if bastion_domain_prefix %}
"domainNameLabel": "{{ bastion_domain_prefix }}"
{% endif %}
}
}
},
{
@@ -66,10 +71,12 @@
"disablePasswordAuthentication": "true",
"ssh": {
"publicKeys": [
{% for key in ssh_public_keys %}
{
"path": "{{sshKeyPath}}",
"keyData": "{{ssh_public_key}}"
}
"keyData": "{{key}}"
}{% if loop.index < ssh_public_keys | length %},{% endif %}
{% endfor %}
]
}
}


@@ -162,10 +162,12 @@
"disablePasswordAuthentication": "{{disablePasswordAuthentication}}",
"ssh": {
"publicKeys": [
{% for key in ssh_public_keys %}
{
"path": "{{sshKeyPath}}",
"keyData": "{{ssh_public_key}}"
}
"keyData": "{{key}}"
}{% if loop.index < ssh_public_keys | length %},{% endif %}
{% endfor %}
]
}
}


@@ -79,10 +79,12 @@
"disablePasswordAuthentication": "{{disablePasswordAuthentication}}",
"ssh": {
"publicKeys": [
{% for key in ssh_public_keys %}
{
"path": "{{sshKeyPath}}",
"keyData": "{{ssh_public_key}}"
}
"keyData": "{{key}}"
}{% if loop.index < ssh_public_keys | length %},{% endif %}
{% endfor %}
]
}
}

176
contrib/dind/README.md Normal file

@@ -0,0 +1,176 @@
# Kubespray DIND experimental setup
This ansible playbook creates local docker containers
to serve as Kubernetes "nodes", which in turn will run
"normal" Kubernetes docker containers, a mode usually
called DIND (Docker-IN-Docker).
The playbook has two roles:
- dind-host: creates the "nodes" as containers on localhost, with
appropriate settings for DIND (privileged, volume mapping for dind
storage, etc).
- dind-cluster: customizes each node container to have required
system packages installed, and some utils (swapoff, lsattr)
symlinked to /bin/true to ease mimicking a real node.
This playbook has been tested with Ubuntu 16.04 as the host and ubuntu:16.04
as the Docker image (note that dind-cluster has specific customization
for these images).
The playbook also creates a `/tmp/kubespray.dind.inventory_builder.sh`
helper (which wraps running `contrib/inventory_builder/inventory.py` with
the node containers' IPs and prefix).
## Deploying
See below for a complete successful run:
1. Create the node containers
~~~~
# From the kubespray root dir
cd contrib/dind
pip install -r requirements.txt
ansible-playbook -i hosts dind-cluster.yaml
# Back to kubespray root
cd ../..
~~~~
NOTE: if the playbook run fails with an error message like the one below,
you may need to explicitly set `ansible_python_interpreter`;
see the `./hosts` file for an example expanded localhost entry.
~~~
failed: [localhost] (item=kube-node1) => {"changed": false, "item": "kube-node1", "msg": "Failed to import docker or docker-py - No module named requests.exceptions. Try `pip install docker` or `pip install docker-py` (Python 2.6)"}
~~~
2. Customize kubespray-dind.yaml
Note that there is coupling between the node containers created above
and the `kubespray-dind.yaml` settings, in particular the selected `node_distro`
(as set in `group_vars/all/all.yaml`) and the docker settings.
~~~
$EDITOR contrib/dind/kubespray-dind.yaml
~~~
3. Prepare the inventory and run the playbook
~~~
INVENTORY_DIR=inventory/local-dind
mkdir -p ${INVENTORY_DIR}
rm -f ${INVENTORY_DIR}/hosts.ini
CONFIG_FILE=${INVENTORY_DIR}/hosts.ini /tmp/kubespray.dind.inventory_builder.sh
ansible-playbook --become -e ansible_ssh_user=debian -i ${INVENTORY_DIR}/hosts.ini cluster.yml --extra-vars @contrib/dind/kubespray-dind.yaml
~~~
NOTE: You can also test other distros without editing files by
passing `--extra-vars` as in the command line below,
replacing `DISTRO` with `debian`, `ubuntu`, `centos`, or `fedora`:
~~~
cd contrib/dind
ansible-playbook -i hosts dind-cluster.yaml --extra-vars node_distro=DISTRO
cd ../..
CONFIG_FILE=inventory/local-dind/hosts.ini /tmp/kubespray.dind.inventory_builder.sh
ansible-playbook --become -e ansible_ssh_user=DISTRO -i inventory/local-dind/hosts.ini cluster.yml --extra-vars @contrib/dind/kubespray-dind.yaml --extra-vars bootstrap_os=DISTRO
~~~
## Resulting deployment
See below for an idea of what a completed deployment looks like,
from the host where you ran the Kubespray playbooks.
### node_distro: debian
Running from an Ubuntu Xenial host:
~~~
$ uname -a
Linux ip-xx-xx-xx-xx 4.4.0-1069-aws #79-Ubuntu SMP Mon Sep 24
15:01:41 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1835dd183b75 debian:9.5 "sh -c 'apt-get -qy …" 43 minutes ago Up 43 minutes kube-node5
30b0af8d2924 debian:9.5 "sh -c 'apt-get -qy …" 43 minutes ago Up 43 minutes kube-node4
3e0d1510c62f debian:9.5 "sh -c 'apt-get -qy …" 43 minutes ago Up 43 minutes kube-node3
738993566f94 debian:9.5 "sh -c 'apt-get -qy …" 44 minutes ago Up 44 minutes kube-node2
c581ef662ed2 debian:9.5 "sh -c 'apt-get -qy …" 44 minutes ago Up 44 minutes kube-node1
$ docker exec kube-node1 kubectl get node
NAME STATUS ROLES AGE VERSION
kube-node1 Ready master,node 18m v1.12.1
kube-node2 Ready master,node 17m v1.12.1
kube-node3 Ready node 17m v1.12.1
kube-node4 Ready node 17m v1.12.1
kube-node5 Ready node 17m v1.12.1
$ docker exec kube-node1 kubectl get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default netchecker-agent-67489 1/1 Running 0 2m51s
default netchecker-agent-6qq6s 1/1 Running 0 2m51s
default netchecker-agent-fsw92 1/1 Running 0 2m51s
default netchecker-agent-fw6tl 1/1 Running 0 2m51s
default netchecker-agent-hostnet-8f2zb 1/1 Running 0 3m
default netchecker-agent-hostnet-gq7ml 1/1 Running 0 3m
default netchecker-agent-hostnet-jfkgv 1/1 Running 0 3m
default netchecker-agent-hostnet-kwfwx 1/1 Running 0 3m
default netchecker-agent-hostnet-r46nm 1/1 Running 0 3m
default netchecker-agent-lxdrn 1/1 Running 0 2m51s
default netchecker-server-864bd4c897-9vstl 1/1 Running 0 2m40s
default sh-68fcc6db45-qf55h 1/1 Running 1 12m
kube-system coredns-7598f59475-6vknq 1/1 Running 0 14m
kube-system coredns-7598f59475-l5q5x 1/1 Running 0 14m
kube-system kube-apiserver-kube-node1 1/1 Running 0 17m
kube-system kube-apiserver-kube-node2 1/1 Running 0 18m
kube-system kube-controller-manager-kube-node1 1/1 Running 0 18m
kube-system kube-controller-manager-kube-node2 1/1 Running 0 18m
kube-system kube-proxy-5xx9d 1/1 Running 0 17m
kube-system kube-proxy-cdqq4 1/1 Running 0 17m
kube-system kube-proxy-n64ls 1/1 Running 0 17m
kube-system kube-proxy-pswmj 1/1 Running 0 18m
kube-system kube-proxy-x89qw 1/1 Running 0 18m
kube-system kube-scheduler-kube-node1 1/1 Running 4 17m
kube-system kube-scheduler-kube-node2 1/1 Running 4 18m
kube-system kubernetes-dashboard-5db4d9f45f-548rl 1/1 Running 0 14m
kube-system nginx-proxy-kube-node3 1/1 Running 4 17m
kube-system nginx-proxy-kube-node4 1/1 Running 4 17m
kube-system nginx-proxy-kube-node5 1/1 Running 4 17m
kube-system weave-net-42bfr 2/2 Running 0 16m
kube-system weave-net-6gt8m 2/2 Running 0 16m
kube-system weave-net-88nnc 2/2 Running 0 16m
kube-system weave-net-shckr 2/2 Running 0 16m
kube-system weave-net-xr46t 2/2 Running 0 16m
$ docker exec kube-node1 curl -s http://localhost:31081/api/v1/connectivity_check
{"Message":"All 10 pods successfully reported back to the server","Absent":null,"Outdated":null}
~~~
## Using ./run-test-distros.sh
You can use `./run-test-distros.sh` to run a set of tests via DIND;
below is an excerpt from the script to give an idea of how it works:
~~~
# The SPEC file(s) must have two arrays as e.g.
# DISTROS=(debian centos)
# EXTRAS=(
# 'kube_network_plugin=calico'
# 'kube_network_plugin=flannel'
# 'kube_network_plugin=weave'
# )
# that will be tested in a "combinatory" way (e.g. from above there'll
# be 6 test runs), creating a sequenced <spec_filename>-nn.out with each output.
#
# Each $EXTRAS element will be whitespace split, and passed as --extra-vars
# to main kubespray ansible-playbook run.
~~~
See e.g. `test-some_distros-most_CNIs.env` and
`test-some_distros-kube_router_combo.env` in particular for a richer
set of CNI-specific `--extra-vars` combos.
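A usage sketch (the spec file names are the ones shipped in this directory; `OUTPUT_DIR` defaults to `./out` per the script):

~~~
cd contrib/dind
./run-test-distros.sh test-some_distros-most_CNIs.env
ls out/            # one sequenced <spec>-NN.out per distro/extra combination
~~~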


@@ -0,0 +1,9 @@
---
- hosts: localhost
gather_facts: False
roles:
- { role: dind-host }
- hosts: containers
roles:
- { role: dind-cluster }


@@ -0,0 +1,2 @@
# See distro.yaml for supported node_distro images
node_distro: debian


@@ -0,0 +1,40 @@
distro_settings:
debian: &DEBIAN
image: "debian:9.5"
user: "debian"
pid1_exe: /lib/systemd/systemd
init: |
sh -c "apt-get -qy update && apt-get -qy install systemd-sysv dbus && exec /sbin/init"
raw_setup: apt-get -qy update && apt-get -qy install dbus python sudo iproute2
raw_setup_done: test -x /usr/bin/sudo
agetty_svc: getty@*
ssh_service: ssh
extra_packages: []
ubuntu:
<<: *DEBIAN
image: "ubuntu:16.04"
user: "ubuntu"
init: |
/sbin/init
centos: &CENTOS
image: "centos:7"
user: "centos"
pid1_exe: /usr/lib/systemd/systemd
init: |
/sbin/init
raw_setup: yum -qy install policycoreutils dbus python sudo iproute iptables
raw_setup_done: test -x /usr/bin/sudo
agetty_svc: getty@* serial-getty@*
ssh_service: sshd
extra_packages: []
fedora:
<<: *CENTOS
image: "fedora:latest"
user: "fedora"
raw_setup: yum -qy install policycoreutils dbus python sudo iproute iptables; mkdir -p /etc/modules-load.d
extra_packages:
- hostname
- procps
- findutils
- kmod
- iputils

15
contrib/dind/hosts Normal file

@@ -0,0 +1,15 @@
[local]
# If you created a virtualenv for ansible, you may need to specify the
# python binary from there instead:
#localhost ansible_connection=local ansible_python_interpreter=/home/user/kubespray/.venv/bin/python
localhost ansible_connection=local
[containers]
kube-node1
kube-node2
kube-node3
kube-node4
kube-node5
[containers:vars]
ansible_connection=docker


@@ -0,0 +1,22 @@
# kubespray-dind.yaml: minimal kubespray ansible playbook usable for DIND
# See contrib/dind/README.md
kube_api_anonymous_auth: true
kubeadm_enabled: true
kubelet_fail_swap_on: false
# Docker nodes need to have been created with same "node_distro: debian"
# at contrib/dind/group_vars/all/all.yaml
bootstrap_os: debian
docker_version: latest
docker_storage_options: -s overlay2 --storage-opt overlay2.override_kernel_check=true -g /dind/docker
dns_mode: coredns
deploy_netchecker: True
netcheck_agent_image_repo: quay.io/l23network/k8s-netchecker-agent
netcheck_server_image_repo: quay.io/l23network/k8s-netchecker-server
netcheck_agent_image_tag: v1.0
netcheck_server_image_tag: v1.0


@@ -0,0 +1 @@
docker


@@ -0,0 +1,70 @@
- name: set_fact distro_setup
set_fact:
distro_setup: "{{ distro_settings[node_distro] }}"
- name: set_fact other distro settings
set_fact:
distro_user: "{{ distro_setup['user'] }}"
distro_ssh_service: "{{ distro_setup['ssh_service'] }}"
distro_extra_packages: "{{ distro_setup['extra_packages'] }}"
- name: Null-ify some linux tools to ease DIND
file:
src: "/bin/true"
dest: "{{item}}"
state: link
force: yes
with_items:
# DIND box may have swap enable, don't bother
- /sbin/swapoff
# /etc/hosts handling would fail on trying to copy file attributes on edit,
# void it by successfully returning nil output
- /usr/bin/lsattr
# disable selinux-isms, esp. needed if running on a non-SELinux host
- /usr/sbin/semodule
- name: Void installing dpkg docs and man pages on Debian based distros
copy:
content: |
# Delete locales
path-exclude=/usr/share/locale/*
# Delete man pages
path-exclude=/usr/share/man/*
# Delete docs
path-exclude=/usr/share/doc/*
path-include=/usr/share/doc/*/copyright
dest: /etc/dpkg/dpkg.cfg.d/01_nodoc
when:
- ansible_os_family == 'Debian'
- name: Install system packages to better match a full-fledge node
package:
name: "{{ item }}"
state: present
with_items: "{{ distro_extra_packages }} + [ 'rsyslog', 'openssh-server' ]"
- name: Start needed services
service:
name: "{{ item }}"
state: started
with_items:
- rsyslog
- "{{ distro_ssh_service }}"
- name: Create distro user "{{distro_user}}"
user:
name: "{{ distro_user }}"
uid: 1000
#groups: sudo
append: yes
- name: Allow password-less sudo to "{{ distro_user }}"
copy:
content: "{{ distro_user }} ALL=(ALL) NOPASSWD:ALL"
dest: "/etc/sudoers.d/{{ distro_user }}"
- name: Add my pubkey to "{{ distro_user }}" user authorized keys
authorized_key:
user: "{{ distro_user }}"
state: present
key: "{{ lookup('file', lookup('env','HOME') + '/.ssh/id_rsa.pub') }}"


@@ -0,0 +1,86 @@
- name: set_fact distro_setup
set_fact:
distro_setup: "{{ distro_settings[node_distro] }}"
- name: set_fact other distro settings
set_fact:
distro_image: "{{ distro_setup['image'] }}"
distro_init: "{{ distro_setup['init'] }}"
distro_pid1_exe: "{{ distro_setup['pid1_exe'] }}"
distro_raw_setup: "{{ distro_setup['raw_setup'] }}"
distro_raw_setup_done: "{{ distro_setup['raw_setup_done'] }}"
distro_agetty_svc: "{{ distro_setup['agetty_svc'] }}"
- name: Create dind node containers from "containers" inventory section
docker_container:
image: "{{ distro_image }}"
name: "{{ item }}"
state: started
hostname: "{{ item }}"
command: "{{ distro_init }}"
#recreate: yes
privileged: true
tmpfs:
- /sys/module/nf_conntrack/parameters
volumes:
- /boot:/boot
- /lib/modules:/lib/modules
- "{{ item }}:/dind/docker"
register: containers
with_items: "{{groups.containers}}"
tags:
- addresses
- name: Gather list of containers IPs
set_fact:
addresses: "{{ containers.results | map(attribute='ansible_facts') | map(attribute='docker_container') | map(attribute='NetworkSettings') | map(attribute='IPAddress') | list }}"
tags:
- addresses
- name: Create inventory_builder helper already set with the list of node containers' IPs
template:
src: inventory_builder.sh.j2
dest: /tmp/kubespray.dind.inventory_builder.sh
mode: 0755
tags:
- addresses
- name: Install needed packages into node containers via raw, need to wait for possible systemd packages to finish installing
raw: |
# agetty processes churn a lot of cpu time failing on nonexistent ttys; STOP them early so they can be reaped in the task below
pkill -STOP agetty || true
{{ distro_raw_setup_done }} && echo SKIPPED && exit 0
until [ "$(readlink /proc/1/exe)" = "{{ distro_pid1_exe }}" ] ; do sleep 1; done
{{ distro_raw_setup }}
delegate_to: "{{ item._ansible_item_label|default(item.item) }}"
with_items: "{{ containers.results }}"
register: result
changed_when: result.stdout.find("SKIPPED") < 0
- name: Remove gettys from node containers
raw: |
until test -S /var/run/dbus/system_bus_socket; do sleep 1; done
systemctl disable {{ distro_agetty_svc }}
systemctl stop {{ distro_agetty_svc }}
delegate_to: "{{ item._ansible_item_label|default(item.item) }}"
with_items: "{{ containers.results }}"
changed_when: false
# Running systemd-machine-id-setup doesn't create a unique id for each node container on Debian,
# handle manually
- name: Re-create unique machine-id (as we may just get what comes in the docker image), needed by some CNIs for mac address seeding (notably weave)
raw: |
echo {{ item | hash('sha1') }} > /etc/machine-id.new
mv -b /etc/machine-id.new /etc/machine-id
cmp /etc/machine-id /etc/machine-id~ || true
systemctl daemon-reload
delegate_to: "{{ item._ansible_item_label|default(item.item) }}"
with_items: "{{ containers.results }}"
- name: Early hack image install to adapt for DIND
raw: |
rm -fv /usr/bin/udevadm /usr/sbin/udevadm
delegate_to: "{{ item._ansible_item_label|default(item.item) }}"
with_items: "{{ containers.results }}"
register: result
changed_when: result.stdout.find("removed") >= 0


@@ -0,0 +1,3 @@
#!/bin/bash
# NOTE: if you change HOST_PREFIX, you also need to edit ./hosts [containers] section
HOST_PREFIX=kube-node python3 contrib/inventory_builder/inventory.py {% for ip in addresses %} {{ ip }} {% endfor %}


@@ -0,0 +1,93 @@
#!/bin/bash
# Q&D test'em all: creates full DIND kubespray deploys
# for each distro, verifying it via netchecker.
info() {
local msg="$*"
local date="$(date -Isec)"
echo "INFO: [$date] $msg"
}
pass_or_fail() {
local rc="$?"
local msg="$*"
local date="$(date -Isec)"
[ $rc -eq 0 ] && echo "PASS: [$date] $msg" || echo "FAIL: [$date] $msg"
return $rc
}
test_distro() {
local distro=${1:?};shift
local extra="${*:-}"
local prefix="${distro}[${extra}]"
ansible-playbook -i hosts dind-cluster.yaml -e node_distro=$distro
pass_or_fail "$prefix: dind-nodes" || return 1
(cd ../..
INVENTORY_DIR=inventory/local-dind
mkdir -p ${INVENTORY_DIR}
rm -f ${INVENTORY_DIR}/hosts.ini
CONFIG_FILE=${INVENTORY_DIR}/hosts.ini /tmp/kubespray.dind.inventory_builder.sh
# expand $extra with -e in front of each word
extra_args=""; for extra_arg in $extra; do extra_args="$extra_args -e $extra_arg"; done
ansible-playbook --become -e ansible_ssh_user=$distro -i \
${INVENTORY_DIR}/hosts.ini cluster.yml \
-e @contrib/dind/kubespray-dind.yaml -e bootstrap_os=$distro ${extra_args}
pass_or_fail "$prefix: kubespray"
) || return 1
local node0=${NODES[0]}
docker exec ${node0} kubectl get pod --all-namespaces
pass_or_fail "$prefix: kube-api" || return 1
let retries=60
while ((retries--)); do
# Some CNI may set NodePort on "main" node interface address (thus no localhost NodePort)
# e.g. kube-router: https://github.com/cloudnativelabs/kube-router/pull/217
docker exec ${node0} curl -m2 -s http://${NETCHECKER_HOST:?}:31081/api/v1/connectivity_check | grep successfully && break
sleep 2
done
[ $retries -ge 0 ]
pass_or_fail "$prefix: netcheck" || return 1
}
NODES=($(egrep ^kube-node hosts))
NETCHECKER_HOST=localhost
: ${OUTPUT_DIR:=./out}
mkdir -p ${OUTPUT_DIR}
# The SPEC file(s) must have two arrays as e.g.
# DISTROS=(debian centos)
# EXTRAS=(
# 'kube_network_plugin=calico'
# 'kube_network_plugin=flannel'
# 'kube_network_plugin=weave'
# )
# that will be tested in a "combinatory" way (e.g. from above there'll
# be 6 test runs), creating a sequenced <spec_filename>-nn.out with each output.
#
# Each $EXTRAS element will be whitespace split, and passed as --extra-vars
# to main kubespray ansible-playbook run.
SPECS=${*:?Missing SPEC files, e.g. test-most_distros-some_CNIs.env}
for spec in ${SPECS}; do
unset DISTROS EXTRAS
echo "Loading file=${spec} ..."
. ${spec} || continue
: ${DISTROS:?} || continue
echo "DISTROS=${DISTROS[@]}"
echo "EXTRAS->"
printf " %s\n" "${EXTRAS[@]}"
let n=1
for distro in ${DISTROS[@]}; do
for extra in "${EXTRAS[@]:-NULL}"; do
# Magic value to let this for run once:
[[ ${extra} == NULL ]] && unset extra
docker rm -f ${NODES[@]}
printf -v file_out "%s/%s-%02d.out" ${OUTPUT_DIR} ${spec} $((n++))
{
info "${distro}[${extra}] START: file_out=${file_out}"
time test_distro ${distro} ${extra}
} |& tee ${file_out}
# sleeping for the sake of the human to verify if they want
sleep 2m
done
done
done
egrep -H '^(....:|real)' $(ls -tr ${OUTPUT_DIR}/*.out)


@@ -0,0 +1,11 @@
# Test spec file: used from ./run-test-distros.sh, will run
# each distro in $DISTROS overloading main kubespray ansible-playbook run
# Get all DISTROS from distro.yaml (shame no yaml parsing, but nuff anyway)
# DISTROS="${*:-$(egrep -o '^ \w+' group_vars/all/distro.yaml|paste -s)}"
DISTROS=(debian ubuntu centos fedora)
# Each line below will be added as --extra-vars to main playbook run
EXTRAS=(
'kube_network_plugin=calico'
'kube_network_plugin=weave'
)


@@ -0,0 +1,8 @@
DISTROS=(debian centos)
NETCHECKER_HOST=${NODES[0]}
EXTRAS=(
'kube_network_plugin=kube-router {"kubeadm_enabled":true,"kube_router_run_service_proxy":false}'
'kube_network_plugin=kube-router {"kubeadm_enabled":true,"kube_router_run_service_proxy":true}'
'kube_network_plugin=kube-router {"kubeadm_enabled":false,"kube_router_run_service_proxy":false}'
'kube_network_plugin=kube-router {"kubeadm_enabled":false,"kube_router_run_service_proxy":true}'
)


@@ -0,0 +1,8 @@
DISTROS=(debian centos)
EXTRAS=(
'kube_network_plugin=calico {"kubeadm_enabled":true}'
'kube_network_plugin=canal {"kubeadm_enabled":true}'
'kube_network_plugin=cilium {"kubeadm_enabled":true}'
'kube_network_plugin=flannel {"kubeadm_enabled":true}'
'kube_network_plugin=weave {"kubeadm_enabled":true}'
)

10
contrib/metallb/README.md Normal file
View File

@@ -0,0 +1,10 @@
# Deploy MetalLB into Kubespray/Kubernetes
```
MetalLB hooks into your Kubernetes cluster, and provides a network load-balancer implementation. In short, it allows you to create Kubernetes services of type “LoadBalancer” in clusters that don't run on a cloud provider, and thus cannot simply hook into paid products to provide load-balancers.
```
This playbook aims to automate [this tutorial](https://metallb.universe.tf/tutorial/layer2/tutorial). It deploys MetalLB into Kubernetes and sets up a layer 2 load balancer.
## Install
```
ansible-playbook --ask-become-pass -i inventory/sample/k8s_heketi_inventory.yml contrib/metallb/metallb.yml
```
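Once the playbook has run, a quick smoke test (a sketch; on this Kubernetes version `kubectl run` creates a Deployment) is to expose something as a `LoadBalancer` and check that an address from `metallb.ip_range` gets assigned:

```
kubectl run nginx --image=nginx --port=80
kubectl expose deployment nginx --type=LoadBalancer --port=80
kubectl get svc nginx    # EXTERNAL-IP should come from the configured address pool
```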


@@ -0,0 +1,6 @@
---
- hosts: kube-master[0]
tags:
- "provision"
roles:
- { role: provision }


@@ -0,0 +1,7 @@
---
metallb:
ip_range: "10.5.0.50-10.5.0.99"
limits:
cpu: "100m"
memory: "100Mi"
port: "7472"


@@ -0,0 +1,17 @@
---
- name: "Kubernetes Apps | Lay Down MetalLB"
become: true
template: { src: "{{ item }}.j2", dest: "{{ kube_config_dir }}/{{ item }}" }
with_items: ["metallb.yml", "metallb-config.yml"]
register: "rendering"
when:
- "inventory_hostname == groups['kube-master'][0]"
- name: "Kubernetes Apps | Install and configure MetalLB"
kube:
name: "MetalLB"
kubectl: "{{bin_dir}}/kubectl"
filename: "{{ kube_config_dir }}/{{ item.item }}"
state: "{{ item.changed | ternary('latest','present') }}"
with_items: "{{ rendering.results }}"
when:
- "inventory_hostname == groups['kube-master'][0]"


@@ -0,0 +1,13 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
namespace: metallb-system
name: config
data:
config: |
address-pools:
- name: loadbalanced
protocol: layer2
addresses:
- {{ metallb.ip_range }}


@@ -0,0 +1,254 @@
apiVersion: v1
kind: Namespace
metadata:
  name: metallb-system
  labels:
    app: metallb
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: metallb-system
  name: controller
  labels:
    app: metallb
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: metallb-system
  name: speaker
  labels:
    app: metallb
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metallb-system:controller
  labels:
    app: metallb
rules:
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: [""]
    resources: ["services/status"]
    verbs: ["update"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metallb-system:speaker
  labels:
    app: metallb
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: metallb-system
  name: leader-election
  labels:
    app: metallb
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    resourceNames: ["metallb-speaker"]
    verbs: ["get", "update"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: metallb-system
  name: config-watcher
  labels:
    app: metallb
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create"]
---
## Role bindings
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metallb-system:controller
  labels:
    app: metallb
subjects:
  - kind: ServiceAccount
    name: controller
    namespace: metallb-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: metallb-system:controller
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metallb-system:speaker
  labels:
    app: metallb
subjects:
  - kind: ServiceAccount
    name: speaker
    namespace: metallb-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: metallb-system:speaker
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: metallb-system
  name: config-watcher
  labels:
    app: metallb
subjects:
  - kind: ServiceAccount
    name: controller
  - kind: ServiceAccount
    name: speaker
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: config-watcher
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: metallb-system
  name: leader-election
  labels:
    app: metallb
subjects:
  - kind: ServiceAccount
    name: speaker
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: leader-election
---
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
  namespace: metallb-system
  name: speaker
  labels:
    app: metallb
    component: speaker
spec:
  selector:
    matchLabels:
      app: metallb
      component: speaker
  template:
    metadata:
      labels:
        app: metallb
        component: speaker
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "{{ metallb.port }}"
    spec:
      serviceAccountName: speaker
      terminationGracePeriodSeconds: 0
      hostNetwork: true
      containers:
        - name: speaker
          image: metallb/speaker:v0.6.2
          imagePullPolicy: IfNotPresent
          args:
            - --port={{ metallb.port }}
            - --config=config
          env:
            - name: METALLB_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          ports:
            - name: monitoring
              containerPort: {{ metallb.port }}
          resources:
            limits:
              cpu: {{ metallb.limits.cpu }}
              memory: {{ metallb.limits.memory }}
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop:
                - all
              add:
                - net_raw
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  namespace: metallb-system
  name: controller
  labels:
    app: metallb
    component: controller
spec:
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: metallb
      component: controller
  template:
    metadata:
      labels:
        app: metallb
        component: controller
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "{{ metallb.port }}"
    spec:
      serviceAccountName: controller
      terminationGracePeriodSeconds: 0
      securityContext:
        runAsNonRoot: true
        runAsUser: 65534 # nobody
      containers:
        - name: controller
          image: metallb/controller:v0.6.2
          imagePullPolicy: IfNotPresent
          args:
            - --port={{ metallb.port }}
            - --config=config
          ports:
            - name: monitoring
              containerPort: {{ metallb.port }}
          resources:
            limits:
              cpu: {{ metallb.limits.cpu }}
              memory: {{ metallb.limits.memory }}
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - all
            readOnlyRootFilesystem: true
---

View File

@@ -12,7 +12,7 @@
# ## As in the previous case, you can set ip to give direct communication on internal IPs
# gfs_node1 ansible_ssh_host=95.54.0.18 # disk_volume_device_1=/dev/vdc ip=10.3.0.7
# gfs_node2 ansible_ssh_host=95.54.0.19 # disk_volume_device_1=/dev/vdc ip=10.3.0.8
# gfs_node1 ansible_ssh_host=95.54.0.20 # disk_volume_device_1=/dev/vdc ip=10.3.0.9
# gfs_node3 ansible_ssh_host=95.54.0.20 # disk_volume_device_1=/dev/vdc ip=10.3.0.9
# [kube-master]
# node1

View File

@@ -2,7 +2,7 @@
# For Ubuntu.
glusterfs_default_release: ""
glusterfs_ppa_use: yes
glusterfs_ppa_version: "3.8"
glusterfs_ppa_version: "4.1"
# Gluster configuration.
gluster_mount_dir: /mnt/gluster

View File

@@ -2,7 +2,7 @@
# For Ubuntu.
glusterfs_default_release: ""
glusterfs_ppa_use: yes
glusterfs_ppa_version: "3.8"
glusterfs_ppa_version: "3.12"
# Gluster configuration.
gluster_mount_dir: /mnt/gluster

View File

@@ -1,2 +1,2 @@
---
glusterfs_daemon: glusterfs-server
glusterfs_daemon: glusterd

View File

@@ -0,0 +1,16 @@
# Deploy Heketi/GlusterFS into Kubespray/Kubernetes
This playbook aims to automate [this tutorial](https://github.com/heketi/heketi/blob/master/docs/admin/install-kubernetes.md). It deploys Heketi and GlusterFS into Kubernetes and sets up a StorageClass.
## Client Setup
Heketi provides a CLI that lets users administer the deployment and configuration of GlusterFS in Kubernetes. [Download and install the heketi-cli](https://github.com/heketi/heketi/releases) on your client machine.
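As a quick smoke test once Heketi is up, the CLI can be pointed at the in-cluster service. A sketch, assuming the service has been made reachable locally (for example via `kubectl port-forward`) and using the admin key from the sample inventory:
```
# Assumes a local port-forward to the heketi service is in place.
export HEKETI_CLI_SERVER=http://localhost:8080
heketi-cli --user admin --secret 11elfeinhundertundelf cluster list
heketi-cli --user admin --secret 11elfeinhundertundelf topology info
```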
## Install
Copy `inventory.yml.sample` to `inventory/sample/k8s_heketi_inventory.yml` and adjust it to your setup.
```
ansible-playbook --ask-become-pass -i inventory/sample/k8s_heketi_inventory.yml contrib/network-storage/heketi/heketi.yml
```
## Tear down
```
ansible-playbook --ask-become-pass -i inventory/sample/k8s_heketi_inventory.yml contrib/network-storage/heketi/heketi-tear-down.yml
```

View File

@@ -0,0 +1,9 @@
---
- hosts: kube-master[0]
  roles:
    - { role: tear-down }
- hosts: heketi-node
  become: yes
  roles:
    - { role: tear-down-disks }

View File

@@ -0,0 +1,10 @@
---
- hosts: heketi-node
  roles:
    - { role: prepare }
- hosts: kube-master[0]
  tags:
    - "provision"
  roles:
    - { role: provision }

View File

@@ -0,0 +1,26 @@
all:
  vars:
    heketi_admin_key: "11elfeinhundertundelf"
    heketi_user_key: "!!einseinseins"
  children:
    k8s-cluster:
      vars:
        kubelet_fail_swap_on: false
      children:
        kube-master:
          hosts:
            node1:
        etcd:
          hosts:
            node2:
        kube-node:
          hosts: &kube_nodes
            node1:
            node2:
            node3:
            node4:
        heketi-node:
          vars:
            disk_volume_device_1: "/dev/vdb"
          hosts:
            <<: *kube_nodes

View File

@@ -0,0 +1 @@
jmespath
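The single requirement is the `jmespath` library, which Ansible's `json_query` filter needs on the machine running the playbook; the provisioning tasks below lean on that filter to sift `kubectl` JSON output. A minimal sketch of the pattern (task name and data are illustrative):
```
- name: "Example | Count Pod items in kubectl-style JSON (illustrative)"
  vars:
    pod_state: '{"items": [{"kind": "Pod"}, {"kind": "Service"}]}'
  debug:
    msg: "{{ pod_state | from_json | json_query(\"items[?kind=='Pod']\") | length }}"
```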

View File

@@ -0,0 +1,24 @@
---
- name: "Load lvm kernel modules"
  become: true
  with_items:
    - "dm_snapshot"
    - "dm_mirror"
    - "dm_thin_pool"
  modprobe:
    name: "{{ item }}"
    state: "present"
- name: "Install glusterfs mount utils (RedHat)"
  become: true
  yum:
    name: "glusterfs-fuse"
    state: "present"
  when: "ansible_os_family == 'RedHat'"
- name: "Install glusterfs mount utils (Debian)"
  become: true
  apt:
    name: "glusterfs-client"
    state: "present"
  when: "ansible_os_family == 'Debian'"

View File

@@ -0,0 +1 @@
---

View File

@@ -0,0 +1,3 @@
---
- name: "stop port forwarding"
  command: "killall "

View File

@@ -0,0 +1,56 @@
# Bootstrap heketi
- name: "Get state of heketi service, deployment and pods."
  register: "initial_heketi_state"
  changed_when: false
  command: "{{ bin_dir }}/kubectl get services,deployments,pods --selector=deploy-heketi --output=json"
- name: "Bootstrap heketi."
  when:
    - "(initial_heketi_state.stdout|from_json|json_query(\"items[?kind=='Service']\"))|length == 0"
    - "(initial_heketi_state.stdout|from_json|json_query(\"items[?kind=='Deployment']\"))|length == 0"
    - "(initial_heketi_state.stdout|from_json|json_query(\"items[?kind=='Pod']\"))|length == 0"
  include_tasks: "bootstrap/deploy.yml"
# Prepare heketi topology
- name: "Get heketi initial pod state."
  register: "initial_heketi_pod"
  command: "{{ bin_dir }}/kubectl get pods --selector=deploy-heketi=pod,glusterfs=heketi-pod,name=deploy-heketi --output=json"
  changed_when: false
- name: "Ensure heketi bootstrap pod is up."
  assert:
    that: "(initial_heketi_pod.stdout|from_json|json_query('items[*]'))|length == 1"
- set_fact:
    initial_heketi_pod_name: "{{ initial_heketi_pod.stdout|from_json|json_query(\"items[*].metadata.name|[0]\") }}"
- name: "Test heketi topology."
  changed_when: false
  register: "heketi_topology"
  command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology info --json"
- name: "Load heketi topology."
  when: "heketi_topology.stdout|from_json|json_query(\"clusters[*].nodes[*]\")|flatten|length == 0"
  include_tasks: "bootstrap/topology.yml"
# Provision heketi database volume
- name: "Prepare heketi volumes."
  include_tasks: "bootstrap/volumes.yml"
# Remove bootstrap heketi
- name: "Tear down bootstrap."
  include_tasks: "bootstrap/tear-down.yml"
# Prepare heketi storage
- name: "Test heketi storage."
  command: "{{ bin_dir }}/kubectl get secrets,endpoints,services,jobs --output=json"
  changed_when: false
  register: "heketi_storage_state"
# ensure endpoints actually exist before trying to move database data to it
- name: "Create heketi storage."
  include_tasks: "bootstrap/storage.yml"
  vars:
    secret_query: "items[?metadata.name=='heketi-storage-secret' && kind=='Secret']"
    endpoints_query: "items[?metadata.name=='heketi-storage-endpoints' && kind=='Endpoints']"
    service_query: "items[?metadata.name=='heketi-storage-endpoints' && kind=='Service']"
    job_query: "items[?metadata.name=='heketi-storage-copy-job' && kind=='Job']"
  when:
    - "heketi_storage_state.stdout|from_json|json_query(secret_query)|length == 0"
    - "heketi_storage_state.stdout|from_json|json_query(endpoints_query)|length == 0"
    - "heketi_storage_state.stdout|from_json|json_query(service_query)|length == 0"
    - "heketi_storage_state.stdout|from_json|json_query(job_query)|length == 0"

View File

@@ -0,0 +1,24 @@
---
- name: "Kubernetes Apps | Lay Down Heketi Bootstrap"
  become: true
  template: { src: "heketi-bootstrap.json.j2", dest: "{{ kube_config_dir }}/heketi-bootstrap.json" }
  register: "rendering"
- name: "Kubernetes Apps | Install and configure Heketi Bootstrap"
  kube:
    name: "GlusterFS"
    kubectl: "{{bin_dir}}/kubectl"
    filename: "{{ kube_config_dir }}/heketi-bootstrap.json"
    state: "{{ rendering.changed | ternary('latest', 'present') }}"
- name: "Wait for heketi bootstrap to complete."
  changed_when: false
  register: "initial_heketi_state"
  vars:
    initial_heketi_state: { stdout: "{}" }
    pods_query: "items[?kind=='Pod'].status.conditions|[0][?type=='Ready'].status|[0]"
    deployments_query: "items[?kind=='Deployment'].status.conditions|[0][?type=='Available'].status|[0]"
  command: "{{ bin_dir }}/kubectl get services,deployments,pods --selector=deploy-heketi --output=json"
  until:
    - "initial_heketi_state.stdout|from_json|json_query(pods_query) == 'True'"
    - "initial_heketi_state.stdout|from_json|json_query(deployments_query) == 'True'"
  retries: 60
  delay: 5

View File

@@ -0,0 +1,33 @@
---
- name: "Test heketi storage."
  command: "{{ bin_dir }}/kubectl get secrets,endpoints,services,jobs --output=json"
  changed_when: false
  register: "heketi_storage_state"
- name: "Create heketi storage."
  kube:
    name: "GlusterFS"
    kubectl: "{{bin_dir}}/kubectl"
    filename: "{{ kube_config_dir }}/heketi-storage-bootstrap.json"
    state: "present"
  vars:
    secret_query: "items[?metadata.name=='heketi-storage-secret' && kind=='Secret']"
    endpoints_query: "items[?metadata.name=='heketi-storage-endpoints' && kind=='Endpoints']"
    service_query: "items[?metadata.name=='heketi-storage-endpoints' && kind=='Service']"
    job_query: "items[?metadata.name=='heketi-storage-copy-job' && kind=='Job']"
  when:
    - "heketi_storage_state.stdout|from_json|json_query(secret_query)|length == 0"
    - "heketi_storage_state.stdout|from_json|json_query(endpoints_query)|length == 0"
    - "heketi_storage_state.stdout|from_json|json_query(service_query)|length == 0"
    - "heketi_storage_state.stdout|from_json|json_query(job_query)|length == 0"
  register: "heketi_storage_result"
- name: "Get state of heketi database copy job."
  command: "{{ bin_dir }}/kubectl get jobs --output=json"
  changed_when: false
  register: "heketi_storage_state"
  vars:
    heketi_storage_state: { stdout: "{}" }
    job_query: "items[?metadata.name=='heketi-storage-copy-job' && kind=='Job' && status.succeeded==1]"
  until:
    - "heketi_storage_state.stdout|from_json|json_query(job_query)|length == 1"
  retries: 60
  delay: 5

View File

@@ -0,0 +1,14 @@
---
- name: "Get existing Heketi deploy resources."
  command: "{{ bin_dir }}/kubectl get all --selector=\"deploy-heketi\" -o=json"
  register: "heketi_resources"
  changed_when: false
- name: "Delete bootstrap Heketi."
  command: "{{ bin_dir }}/kubectl delete all,service,jobs,deployment,secret --selector=\"deploy-heketi\""
  when: "heketi_resources.stdout|from_json|json_query('items[*]')|length > 0"
- name: "Ensure there is nothing left over."
  command: "{{ bin_dir }}/kubectl get all,service,jobs,deployment,secret --selector=\"deploy-heketi\" -o=json"
  register: "heketi_result"
  until: "heketi_result.stdout|from_json|json_query('items[*]')|length == 0"
  retries: 60
  delay: 5

View File

@@ -0,0 +1,26 @@
---
- name: "Get heketi topology."
  changed_when: false
  register: "heketi_topology"
  command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology info --json"
- name: "Render heketi topology template."
  become: true
  vars: { nodes: "{{ groups['heketi-node'] }}" }
  register: "render"
  template:
    src: "topology.json.j2"
    dest: "{{ kube_config_dir }}/topology.json"
- name: "Copy topology configuration into container."
  changed_when: false
  command: "{{ bin_dir }}/kubectl cp {{ kube_config_dir }}/topology.json {{ initial_heketi_pod_name }}:/tmp/topology.json"
- name: "Load heketi topology."
  when: "render.changed"
  command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology load --json=/tmp/topology.json"
  register: "load_heketi"
- name: "Get heketi topology."
  changed_when: false
  register: "heketi_topology"
  command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology info --json"
  until: "heketi_topology.stdout|from_json|json_query(\"clusters[*].nodes[*].devices[?state=='online'].id\")|flatten|length == groups['heketi-node']|length"
  retries: 60
  delay: 5

View File

@@ -0,0 +1,41 @@
---
- name: "Get heketi volume ids."
  command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} volume list --json"
  changed_when: false
  register: "heketi_volumes"
- name: "Get heketi volumes."
  changed_when: false
  command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} volume info {{ volume_id }} --json"
  with_items: "{{ heketi_volumes.stdout|from_json|json_query(\"volumes[*]\") }}"
  loop_control: { loop_var: "volume_id" }
  register: "volumes_information"
- name: "Test heketi database volume."
  set_fact: { heketi_database_volume_exists: true }
  with_items: "{{ volumes_information.results }}"
  loop_control: { loop_var: "volume_information" }
  vars: { volume: "{{ volume_information.stdout|from_json }}" }
  when: "volume.name == 'heketidbstorage'"
- name: "Provision database volume."
  command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} setup-openshift-heketi-storage"
  when: "heketi_database_volume_exists is undefined"
- name: "Copy configuration from pod."
  become: true
  command: "{{ bin_dir }}/kubectl cp {{ initial_heketi_pod_name }}:/heketi-storage.json {{ kube_config_dir }}/heketi-storage-bootstrap.json"
- name: "Get heketi volume ids."
  command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} volume list --json"
  changed_when: false
  register: "heketi_volumes"
- name: "Get heketi volumes."
  changed_when: false
  command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} volume info {{ volume_id }} --json"
  with_items: "{{ heketi_volumes.stdout|from_json|json_query(\"volumes[*]\") }}"
  loop_control: { loop_var: "volume_id" }
  register: "volumes_information"
- name: "Test heketi database volume."
  set_fact: { heketi_database_volume_created: true }
  with_items: "{{ volumes_information.results }}"
  loop_control: { loop_var: "volume_information" }
  vars: { volume: "{{ volume_information.stdout|from_json }}" }
  when: "volume.name == 'heketidbstorage'"
- name: "Ensure heketi database volume exists."
  assert: { that: "heketi_database_volume_created is defined", msg: "Heketi database volume does not exist." }

View File

@@ -0,0 +1,4 @@
---
- name: "Clean up left over jobs."
  command: "{{ bin_dir }}/kubectl delete jobs,pods --selector=\"deploy-heketi\""
  changed_when: false

View File

@@ -0,0 +1,38 @@
---
- name: "Kubernetes Apps | Lay Down GlusterFS Daemonset"
  template: { src: "glusterfs-daemonset.json.j2", dest: "{{ kube_config_dir }}/glusterfs-daemonset.json" }
  become: true
  register: "rendering"
- name: "Kubernetes Apps | Install and configure GlusterFS daemonset"
  kube:
    name: "GlusterFS"
    kubectl: "{{bin_dir}}/kubectl"
    filename: "{{ kube_config_dir }}/glusterfs-daemonset.json"
    state: "{{ rendering.changed | ternary('latest', 'present') }}"
- name: "Kubernetes Apps | Label GlusterFS nodes"
  include_tasks: "glusterfs/label.yml"
  with_items: "{{ groups['heketi-node'] }}"
  loop_control:
    loop_var: "node"
- name: "Kubernetes Apps | Wait for daemonset to become available."
  register: "daemonset_state"
  command: "{{ bin_dir }}/kubectl get daemonset glusterfs --output=json --ignore-not-found=true"
  changed_when: false
  vars:
    daemonset_state: { stdout: "{}" }
    ready: "{{ daemonset_state.stdout|from_json|json_query(\"status.numberReady\") }}"
    desired: "{{ daemonset_state.stdout|from_json|json_query(\"status.desiredNumberScheduled\") }}"
  until: "ready | int >= 3"
  retries: 60
  delay: 5
- name: "Kubernetes Apps | Lay Down Heketi Service Account"
  template: { src: "heketi-service-account.json.j2", dest: "{{ kube_config_dir }}/heketi-service-account.json" }
  become: true
  register: "rendering"
- name: "Kubernetes Apps | Install and configure Heketi Service Account"
  kube:
    name: "GlusterFS"
    kubectl: "{{bin_dir}}/kubectl"
    filename: "{{ kube_config_dir }}/heketi-service-account.json"
    state: "{{ rendering.changed | ternary('latest', 'present') }}"

View File

@@ -0,0 +1,11 @@
---
- register: "label_present"
  command: "{{ bin_dir }}/kubectl get node --selector=storagenode=glusterfs,kubernetes.io/hostname={{ node }} --ignore-not-found=true"
  changed_when: false
- name: "Assign storage label"
  when: "label_present.stdout_lines|length == 0"
  command: "{{ bin_dir }}/kubectl label node {{ node }} storagenode=glusterfs"
- register: "label_present"
  command: "{{ bin_dir }}/kubectl get node --selector=storagenode=glusterfs,kubernetes.io/hostname={{ node }} --ignore-not-found=true"
  changed_when: false
- assert: { that: "label_present.stdout_lines|length > 0", msg: "Node {{ node }} has not been assigned the label storagenode=glusterfs." }

View File

@@ -0,0 +1,26 @@
---
- name: "Kubernetes Apps | Lay Down Heketi"
  become: true
  template: { src: "heketi-deployment.json.j2", dest: "{{ kube_config_dir }}/heketi-deployment.json" }
  register: "rendering"
- name: "Kubernetes Apps | Install and configure Heketi"
  kube:
    name: "GlusterFS"
    kubectl: "{{bin_dir}}/kubectl"
    filename: "{{ kube_config_dir }}/heketi-deployment.json"
    state: "{{ rendering.changed | ternary('latest', 'present') }}"
- name: "Ensure heketi is up and running."
  changed_when: false
  register: "heketi_state"
  vars:
    heketi_state: { stdout: "{}" }
    pods_query: "items[?kind=='Pod'].status.conditions|[0][?type=='Ready'].status|[0]"
    deployments_query: "items[?kind=='Deployment'].status.conditions|[0][?type=='Available'].status|[0]"
  command: "{{ bin_dir }}/kubectl get deployments,pods --selector=glusterfs --output=json"
  until:
    - "heketi_state.stdout|from_json|json_query(pods_query) == 'True'"
    - "heketi_state.stdout|from_json|json_query(deployments_query) == 'True'"
  retries: 60
  delay: 5
- set_fact:
    heketi_pod_name: "{{ heketi_state.stdout|from_json|json_query(\"items[?kind=='Pod'].metadata.name|[0]\") }}"

View File

@@ -0,0 +1,30 @@
---
- name: "Kubernetes Apps | GlusterFS"
  include_tasks: "glusterfs.yml"
- name: "Kubernetes Apps | Heketi Secrets"
  include_tasks: "secret.yml"
- name: "Kubernetes Apps | Test Heketi"
  register: "heketi_service_state"
  command: "{{bin_dir}}/kubectl get service heketi-storage-endpoints -o=name --ignore-not-found=true"
  changed_when: false
- name: "Kubernetes Apps | Bootstrap Heketi"
  when: "heketi_service_state.stdout == \"\""
  include_tasks: "bootstrap.yml"
- name: "Kubernetes Apps | Heketi"
  include_tasks: "heketi.yml"
- name: "Kubernetes Apps | Heketi Topology"
  include_tasks: "topology.yml"
- name: "Kubernetes Apps | Heketi Storage"
  include_tasks: "storage.yml"
- name: "Kubernetes Apps | Storage Class"
  include_tasks: "storageclass.yml"
- name: "Clean up"
  include_tasks: "cleanup.yml"

View File

@@ -0,0 +1,27 @@
---
- register: "clusterrolebinding_state"
  command: "{{bin_dir}}/kubectl get clusterrolebinding heketi-gluster-admin -o=name --ignore-not-found=true"
  changed_when: false
- name: "Kubernetes Apps | Deploy cluster role binding."
  when: "clusterrolebinding_state.stdout == \"\""
  command: "{{bin_dir}}/kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=default:heketi-service-account"
- register: "clusterrolebinding_state"
  command: "{{bin_dir}}/kubectl get clusterrolebinding heketi-gluster-admin -o=name --ignore-not-found=true"
  changed_when: false
- assert: { that: "clusterrolebinding_state.stdout != \"\"", msg: "Cluster role binding is not present." }
- register: "secret_state"
  command: "{{bin_dir}}/kubectl get secret heketi-config-secret -o=name --ignore-not-found=true"
  changed_when: false
- name: "Render Heketi secret configuration."
  become: true
  template:
    src: "heketi.json.j2"
    dest: "{{ kube_config_dir }}/heketi.json"
- name: "Deploy Heketi config secret"
  when: "secret_state.stdout == \"\""
  command: "{{bin_dir}}/kubectl create secret generic heketi-config-secret --from-file={{ kube_config_dir }}/heketi.json"
- register: "secret_state"
  command: "{{bin_dir}}/kubectl get secret heketi-config-secret -o=name --ignore-not-found=true"
  changed_when: false
- assert: { that: "secret_state.stdout != \"\"", msg: "Heketi config secret is not present." }

View File

@@ -0,0 +1,12 @@
---
- name: "Kubernetes Apps | Lay Down Heketi Storage"
  become: true
  vars: { nodes: "{{ groups['heketi-node'] }}" }
  template: { src: "heketi-storage.json.j2", dest: "{{ kube_config_dir }}/heketi-storage.json" }
  register: "rendering"
- name: "Kubernetes Apps | Install and configure Heketi Storage"
  kube:
    name: "GlusterFS"
    kubectl: "{{bin_dir}}/kubectl"
    filename: "{{ kube_config_dir }}/heketi-storage.json"
    state: "{{ rendering.changed | ternary('latest', 'present') }}"

View File

@@ -0,0 +1,25 @@
---
- name: "Test storage class."
command: "{{ bin_dir }}/kubectl get storageclass gluster --ignore-not-found=true --output=json"
register: "storageclass"
changed_when: false
- name: "Test heketi service."
command: "{{ bin_dir }}/kubectl get service heketi --ignore-not-found=true --output=json"
register: "heketi_service"
changed_when: false
- name: "Ensure heketi service is available."
assert: { that: "heketi_service.stdout != \"\"" }
- name: "Render storage class configuration."
become: true
vars:
endpoint_address: "{{ (heketi_service.stdout|from_json).spec.clusterIP }}"
template:
src: "storageclass.yml.j2"
dest: "{{ kube_config_dir }}/storageclass.yml"
register: "rendering"
- name: "Kubernetes Apps | Install and configure Storace Class"
kube:
name: "GlusterFS"
kubectl: "{{bin_dir}}/kubectl"
filename: "{{ kube_config_dir }}/storageclass.yml"
state: "{{ rendering.changed | ternary('latest', 'present') }}"

View File

@@ -0,0 +1,25 @@
---
- name: "Get heketi topology."
register: "heketi_topology"
changed_when: false
command: "{{ bin_dir }}/kubectl exec {{ heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology info --json"
- name: "Render heketi topology template."
become: true
vars: { nodes: "{{ groups['heketi-node'] }}" }
register: "rendering"
template:
src: "topology.json.j2"
dest: "{{ kube_config_dir }}/topology.json"
- name: "Copy topology configuration into container."
when: "rendering.changed"
command: "{{ bin_dir }}/kubectl cp {{ kube_config_dir }}/topology.json {{ heketi_pod_name }}:/tmp/topology.json"
- name: "Load heketi topology."
when: "rendering.changed"
command: "{{ bin_dir }}/kubectl exec {{ heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology load --json=/tmp/topology.json"
- name: "Get heketi topology."
register: "heketi_topology"
changed_when: false
command: "{{ bin_dir }}/kubectl exec {{ heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology info --json"
until: "heketi_topology.stdout|from_json|json_query(\"clusters[*].nodes[*].devices[?state=='online'].id\")|flatten|length == groups['heketi-node']|length"
retries: 60
delay: 5

View File

@@ -0,0 +1,144 @@
{
"kind": "DaemonSet",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "glusterfs",
"labels": {
"glusterfs": "deployment"
},
"annotations": {
"description": "GlusterFS Daemon Set",
"tags": "glusterfs"
}
},
"spec": {
"template": {
"metadata": {
"name": "glusterfs",
"labels": {
"glusterfs-node": "daemonset"
}
},
"spec": {
"nodeSelector": {
"storagenode" : "glusterfs"
},
"hostNetwork": true,
"containers": [
{
"image": "gluster/gluster-centos:gluster4u0_centos7",
"imagePullPolicy": "IfNotPresent",
"name": "glusterfs",
"volumeMounts": [
{
"name": "glusterfs-heketi",
"mountPath": "/var/lib/heketi"
},
{
"name": "glusterfs-run",
"mountPath": "/run"
},
{
"name": "glusterfs-lvm",
"mountPath": "/run/lvm"
},
{
"name": "glusterfs-etc",
"mountPath": "/etc/glusterfs"
},
{
"name": "glusterfs-logs",
"mountPath": "/var/log/glusterfs"
},
{
"name": "glusterfs-config",
"mountPath": "/var/lib/glusterd"
},
{
"name": "glusterfs-dev",
"mountPath": "/dev"
},
{
"name": "glusterfs-cgroup",
"mountPath": "/sys/fs/cgroup"
}
],
"securityContext": {
"capabilities": {},
"privileged": true
},
"readinessProbe": {
"timeoutSeconds": 3,
"initialDelaySeconds": 60,
"exec": {
"command": [
"/bin/bash",
"-c",
"systemctl status glusterd.service"
]
}
},
"livenessProbe": {
"timeoutSeconds": 3,
"initialDelaySeconds": 60,
"exec": {
"command": [
"/bin/bash",
"-c",
"systemctl status glusterd.service"
]
}
}
}
],
"volumes": [
{
"name": "glusterfs-heketi",
"hostPath": {
"path": "/var/lib/heketi"
}
},
{
"name": "glusterfs-run"
},
{
"name": "glusterfs-lvm",
"hostPath": {
"path": "/run/lvm"
}
},
{
"name": "glusterfs-etc",
"hostPath": {
"path": "/etc/glusterfs"
}
},
{
"name": "glusterfs-logs",
"hostPath": {
"path": "/var/log/glusterfs"
}
},
{
"name": "glusterfs-config",
"hostPath": {
"path": "/var/lib/glusterd"
}
},
{
"name": "glusterfs-dev",
"hostPath": {
"path": "/dev"
}
},
{
"name": "glusterfs-cgroup",
"hostPath": {
"path": "/sys/fs/cgroup"
}
}
]
}
}
}
}

View File

@@ -0,0 +1,133 @@
{
"kind": "List",
"apiVersion": "v1",
"items": [
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "deploy-heketi",
"labels": {
"glusterfs": "heketi-service",
"deploy-heketi": "support"
},
"annotations": {
"description": "Exposes Heketi Service"
}
},
"spec": {
"selector": {
"name": "deploy-heketi"
},
"ports": [
{
"name": "deploy-heketi",
"port": 8080,
"targetPort": 8080
}
]
}
},
{
"kind": "Deployment",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "deploy-heketi",
"labels": {
"glusterfs": "heketi-deployment",
"deploy-heketi": "deployment"
},
"annotations": {
"description": "Defines how to deploy Heketi"
}
},
"spec": {
"replicas": 1,
"template": {
"metadata": {
"name": "deploy-heketi",
"labels": {
"name": "deploy-heketi",
"glusterfs": "heketi-pod",
"deploy-heketi": "pod"
}
},
"spec": {
"serviceAccountName": "heketi-service-account",
"containers": [
{
"image": "heketi/heketi:7",
"imagePullPolicy": "Always",
"name": "deploy-heketi",
"env": [
{
"name": "HEKETI_EXECUTOR",
"value": "kubernetes"
},
{
"name": "HEKETI_DB_PATH",
"value": "/var/lib/heketi/heketi.db"
},
{
"name": "HEKETI_FSTAB",
"value": "/var/lib/heketi/fstab"
},
{
"name": "HEKETI_SNAPSHOT_LIMIT",
"value": "14"
},
{
"name": "HEKETI_KUBE_GLUSTER_DAEMONSET",
"value": "y"
}
],
"ports": [
{
"containerPort": 8080
}
],
"volumeMounts": [
{
"name": "db",
"mountPath": "/var/lib/heketi"
},
{
"name": "config",
"mountPath": "/etc/heketi"
}
],
"readinessProbe": {
"timeoutSeconds": 3,
"initialDelaySeconds": 3,
"httpGet": {
"path": "/hello",
"port": 8080
}
},
"livenessProbe": {
"timeoutSeconds": 3,
"initialDelaySeconds": 30,
"httpGet": {
"path": "/hello",
"port": 8080
}
}
}
],
"volumes": [
{
"name": "db"
},
{
"name": "config",
"secret": {
"secretName": "heketi-config-secret"
}
}
]
}
}
}
}
]
}

View File

@@ -0,0 +1,159 @@
{
"kind": "List",
"apiVersion": "v1",
"items": [
{
"kind": "Secret",
"apiVersion": "v1",
"metadata": {
"name": "heketi-db-backup",
"labels": {
"glusterfs": "heketi-db",
"heketi": "db"
}
},
"data": {
},
"type": "Opaque"
},
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "heketi",
"labels": {
"glusterfs": "heketi-service",
"deploy-heketi": "support"
},
"annotations": {
"description": "Exposes Heketi Service"
}
},
"spec": {
"selector": {
"name": "heketi"
},
"ports": [
{
"name": "heketi",
"port": 8080,
"targetPort": 8080
}
]
}
},
{
"kind": "Deployment",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "heketi",
"labels": {
"glusterfs": "heketi-deployment"
},
"annotations": {
"description": "Defines how to deploy Heketi"
}
},
"spec": {
"replicas": 1,
"template": {
"metadata": {
"name": "heketi",
"labels": {
"name": "heketi",
"glusterfs": "heketi-pod"
}
},
"spec": {
"serviceAccountName": "heketi-service-account",
"containers": [
{
"image": "heketi/heketi:7",
"imagePullPolicy": "Always",
"name": "heketi",
"env": [
{
"name": "HEKETI_EXECUTOR",
"value": "kubernetes"
},
{
"name": "HEKETI_DB_PATH",
"value": "/var/lib/heketi/heketi.db"
},
{
"name": "HEKETI_FSTAB",
"value": "/var/lib/heketi/fstab"
},
{
"name": "HEKETI_SNAPSHOT_LIMIT",
"value": "14"
},
{
"name": "HEKETI_KUBE_GLUSTER_DAEMONSET",
"value": "y"
}
],
"ports": [
{
"containerPort": 8080
}
],
"volumeMounts": [
{
"mountPath": "/backupdb",
"name": "heketi-db-secret"
},
{
"name": "db",
"mountPath": "/var/lib/heketi"
},
{
"name": "config",
"mountPath": "/etc/heketi"
}
],
"readinessProbe": {
"timeoutSeconds": 3,
"initialDelaySeconds": 3,
"httpGet": {
"path": "/hello",
"port": 8080
}
},
"livenessProbe": {
"timeoutSeconds": 3,
"initialDelaySeconds": 30,
"httpGet": {
"path": "/hello",
"port": 8080
}
}
}
],
"volumes": [
{
"name": "db",
"glusterfs": {
"endpoints": "heketi-storage-endpoints",
"path": "heketidbstorage"
}
},
{
"name": "heketi-db-secret",
"secret": {
"secretName": "heketi-db-backup"
}
},
{
"name": "config",
"secret": {
"secretName": "heketi-config-secret"
}
}
]
}
}
}
}
]
}

View File

@@ -0,0 +1,7 @@
{
"apiVersion": "v1",
"kind": "ServiceAccount",
"metadata": {
"name": "heketi-service-account"
}
}

View File

@@ -0,0 +1,54 @@
{
"apiVersion": "v1",
"kind": "List",
"items": [
{
"kind": "Endpoints",
"apiVersion": "v1",
"metadata": {
"name": "heketi-storage-endpoints",
"creationTimestamp": null
},
"subsets": [
{% set nodeblocks = [] %}
{% for node in nodes %}
{% set nodeblock %}
{
"addresses": [
{
"ip": "{{ hostvars[node]['ansible_facts']['default_ipv4']['address'] }}"
}
],
"ports": [
{
"port": 1
}
]
}
{% endset %}
{% if nodeblocks.append(nodeblock) %}{% endif %}
{% endfor %}
{{ nodeblocks|join(',') }}
]
},
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "heketi-storage-endpoints",
"creationTimestamp": null
},
"spec": {
"ports": [
{
"port": 1,
"targetPort": 0
}
]
},
"status": {
"loadBalancer": {}
}
}
]
}
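The `{% set %}`/`{% endset %}` capture used above, combined with `join(',')`, is a Jinja idiom for emitting a comma-separated JSON array without a trailing comma; the `{% if %}` wrapper only swallows the `None` that `append` returns. The same pattern reappears in `topology.json.j2` below. A minimal sketch with illustrative values:
```
{% set entries = [] %}
{% for name in ["a", "b"] %}
{% set entry %}{"name": "{{ name }}"}{% endset %}
{% if entries.append(entry) %}{% endif %}
{% endfor %}
[{{ entries|join(',') }}]
```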

View File

@@ -0,0 +1,44 @@
{
"_port_comment": "Heketi Server Port Number",
"port": "8080",
"_use_auth": "Enable JWT authorization. Please enable for deployment",
"use_auth": true,
"_jwt": "Private keys for access",
"jwt": {
"_admin": "Admin has access to all APIs",
"admin": {
"key": "{{ heketi_admin_key }}"
},
"_user": "User only has access to /volumes endpoint",
"user": {
"key": "{{ heketi_user_key }}"
}
},
"_glusterfs_comment": "GlusterFS Configuration",
"glusterfs": {
"_executor_comment": "Execute plugin. Possible choices: mock, kubernetes, ssh",
"executor": "kubernetes",
"_db_comment": "Database file name",
"db": "/var/lib/heketi/heketi.db",
"kubeexec": {
"rebalance_on_expansion": true
},
"sshexec": {
"rebalance_on_expansion": true,
"keyfile": "/etc/heketi/private_key",
"fstab": "/etc/fstab",
"port": "22",
"user": "root",
"sudo": false
}
},
"_backup_db_to_kube_secret": "Backup the heketi database to a Kubernetes secret when running in Kubernetes. Default is off.",
"backup_db_to_kube_secret": false
}

View File

@@ -0,0 +1,12 @@
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://{{ endpoint_address }}:8080"
  restuser: "admin"
  restuserkey: "{{ heketi_admin_key }}"
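Since the class is annotated as the default, a claim does not even need to name it; for reference, a hypothetical PersistentVolumeClaim against it (name and size are illustrative):
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-pvc
spec:
  storageClassName: gluster
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
```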

View File

@@ -0,0 +1,34 @@
{
"clusters": [
{
"nodes": [
{% set nodeblocks = [] %}
{% for node in nodes %}
{% set nodeblock %}
{
"node": {
"hostnames": {
"manage": [
"{{ node }}"
],
"storage": [
"{{ hostvars[node]['ansible_facts']['default_ipv4']['address'] }}"
]
},
"zone": 1
},
"devices": [
{
"name": "{{ hostvars[node]['disk_volume_device_1'] }}",
"destroydata": false
}
]
}
{% endset %}
{% if nodeblocks.append(nodeblock) %}{% endif %}
{% endfor %}
{{ nodeblocks|join(',') }}
]
}
]
}

View File

@@ -0,0 +1,46 @@
---
- name: "Install lvm utils (RedHat)"
become: true
yum:
name: "lvm2"
state: "present"
when: "ansible_os_family == 'RedHat'"
- name: "Install lvm utils (Debian)"
become: true
apt:
name: "lvm2"
state: "present"
when: "ansible_os_family == 'Debian'"
- name: "Get volume group information."
become: true
shell: "pvs {{ disk_volume_device_1 }} --option vg_name | tail -n+2"
register: "volume_groups"
ignore_errors: true
changed_when: false
- name: "Remove volume groups."
become: true
command: "vgremove {{ volume_group }} --yes"
with_items: "{{ volume_groups.stdout_lines }}"
loop_control: { loop_var: "volume_group" }
- name: "Remove physical volume from cluster disks."
become: true
command: "pvremove {{ disk_volume_device_1 }} --yes"
ignore_errors: true
- name: "Remove lvm utils (RedHat)"
become: true
yum:
name: "lvm2"
state: "absent"
when: "ansible_os_family == 'RedHat'"
- name: "Remove lvm utils (Debian)"
become: true
apt:
name: "lvm2"
state: "absent"
when: "ansible_os_family == 'Debian'"

View File

@@ -0,0 +1,51 @@
---
- name: "Remove storage class."
  command: "{{ bin_dir }}/kubectl delete storageclass gluster"
  ignore_errors: true
- name: "Tear down heketi."
  command: "{{ bin_dir }}/kubectl delete all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-pod\""
  ignore_errors: true
- name: "Tear down heketi."
  command: "{{ bin_dir }}/kubectl delete all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-deployment\""
  ignore_errors: true
- name: "Tear down bootstrap."
  include_tasks: "../provision/tasks/bootstrap/tear-down.yml"
- name: "Ensure there is nothing left over."
  command: "{{ bin_dir }}/kubectl get all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-pod\" -o=json"
  register: "heketi_result"
  until: "heketi_result.stdout|from_json|json_query('items[*]')|length == 0"
  retries: 60
  delay: 5
- name: "Ensure there is nothing left over."
  command: "{{ bin_dir }}/kubectl get all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-deployment\" -o=json"
  register: "heketi_result"
  until: "heketi_result.stdout|from_json|json_query('items[*]')|length == 0"
  retries: 60
  delay: 5
- name: "Tear down glusterfs."
  command: "{{ bin_dir }}/kubectl delete daemonset.extensions/glusterfs"
  ignore_errors: true
- name: "Remove heketi storage service."
  command: "{{ bin_dir }}/kubectl delete service heketi-storage-endpoints"
  ignore_errors: true
- name: "Remove heketi gluster role binding"
  command: "{{ bin_dir }}/kubectl delete clusterrolebinding heketi-gluster-admin"
  ignore_errors: true
- name: "Remove heketi config secret"
  command: "{{ bin_dir }}/kubectl delete secret heketi-config-secret"
  ignore_errors: true
- name: "Remove heketi db backup"
  command: "{{ bin_dir }}/kubectl delete secret heketi-db-backup"
  ignore_errors: true
- name: "Remove heketi service account"
  command: "{{ bin_dir }}/kubectl delete serviceaccount heketi-service-account"
  ignore_errors: true
- name: "Get secrets"
  command: "{{ bin_dir }}/kubectl get secrets --output=\"json\""
  register: "secrets"
  changed_when: false
- name: "Remove heketi storage secret"
  vars: { storage_query: "items[?metadata.annotations.\"kubernetes.io/service-account.name\"=='heketi-service-account'].metadata.name|[0]" }
  command: "{{ bin_dir }}/kubectl delete secret {{ secrets.stdout|from_json|json_query(storage_query) }}"
  when: "storage_query is defined"
  ignore_errors: true

View File

@@ -20,7 +20,7 @@ BuildRequires: python2-setuptools
BuildRequires: python-d2to1
BuildRequires: python2-pbr
Requires: ansible >= 2.4.0
Requires: ansible >= 2.5.0
Requires: python-jinja2 >= 2.10
Requires: python-netaddr
Requires: python-pbr

View File

@@ -22,8 +22,6 @@ export TF_VAR_AWS_SECRET_ACCESS_KEY ="xxx"
export TF_VAR_AWS_SSH_KEY_NAME="yyy"
export TF_VAR_AWS_DEFAULT_REGION="zzz"
```
- Rename `contrib/terraform/aws/terraform.tfvars.example` to `terraform.tfvars`
- Update `contrib/terraform/aws/terraform.tfvars` with your data. By default, the Terraform scripts use CoreOS as the base image. If you want to change this behaviour, see the note "Using other distributions than CoreOS" below.
- Create an AWS EC2 SSH Key
- Run with `terraform apply --var-file="credentials.tfvars"` or `terraform apply` depending if you exported your AWS credentials
@@ -45,7 +43,7 @@ ssh -F ./ssh-bastion.conf user@$ip
Example (this one assumes you are using CoreOS)
```commandline
ansible-playbook -i ./inventory/hosts ./cluster.yml -e ansible_user=core -e bootstrap_os=coreos -b --become-user=root --flush-cache
ansible-playbook -i ./inventory/hosts ./cluster.yml -e ansible_user=core -b --become-user=root --flush-cache
```
***Using other distributions than CoreOS***
If you want to use a distribution other than CoreOS, you can modify the search filters of the 'data "aws_ami" "distro"' in variables.tf.
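For instance, a hypothetical filter selecting Ubuntu 16.04 instead (the AMI name pattern and Canonical's owner ID are shown for illustration, not taken from variables.tf):
```
data "aws_ami" "distro" {
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-*"]
  }

  owners = ["099720109477"] # Canonical
}
```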
@@ -113,9 +111,9 @@ the `AWS CLI` with the following command:
aws iam delete-instance-profile --region <region_name> --instance-profile-name <profile_name>
```
***Ansible Inventory doesnt get created:***
***Ansible Inventory doesn't get created:***
It could happen that Terraform doesnt create an Ansible Inventory file automatically. If this is the case copy the output after `inventory=` and create a file named `hosts`in the directory `inventory` and paste the inventory into the file.
It could happen that Terraform doesn't create an Ansible Inventory file automatically. If this is the case, copy the output after `inventory=`, create a file named `hosts` in the `inventory` directory, and paste the inventory into that file.
**Architecture**

View File

@@ -181,7 +181,7 @@ data "template_file" "inventory" {
resource "null_resource" "inventories" {
provisioner "local-exec" {
command = "echo '${data.template_file.inventory.rendered}' > ../../../inventory/hosts"
command = "echo '${data.template_file.inventory.rendered}' > ${var.inventory_file}"
}
triggers {

View File

@@ -2,8 +2,9 @@
${connection_strings_master}
${connection_strings_node}
${connection_strings_etcd}
${public_ip_address_bastion}
[bastion]
${public_ip_address_bastion}
[kube-master]
@@ -24,4 +25,4 @@ kube-master
[k8s-cluster:vars]
${elb_api_fqdn}
${elb_api_fqdn}

View File

@@ -31,3 +31,5 @@ default_tags = {
# Env = "devtest"
# Product = "kubernetes"
}
inventory_file = "../../../inventory/hosts"

View File

@@ -103,3 +103,7 @@ variable "default_tags" {
description = "Default tags for all resources"
type = "map"
}
variable "inventory_file" {
description = "Where to store the generated inventory file"
}

View File

@@ -8,6 +8,23 @@ Openstack.
This will install a Kubernetes cluster on an OpenStack cloud. It should work on
most modern installs of OpenStack that support the basic services.
### Known compatible public clouds
- [Auro](https://auro.io/)
- [BetaCloud](https://www.betacloud.io/)
- [CityCloud](https://www.citycloud.com/)
- [DreamHost](https://www.dreamhost.com/cloud/computing/)
- [ELASTX](https://elastx.se/)
- [EnterCloudSuite](https://www.entercloudsuite.com/)
- [FugaCloud](https://fuga.cloud/)
- [OVH](https://www.ovh.com/)
- [Rackspace](https://www.rackspace.com/)
- [Ultimum](https://ultimum.io/)
- [VexxHost](https://vexxhost.com/)
- [Zetta](https://www.zetta.io/)
### Known incompatible public clouds
- T-Systems / Open Telekom Cloud: requires `wait_until_associated`
## Approach
The terraform configuration inspects variables found in
[variables.tf](variables.tf) to create resources in your OpenStack cluster.
@@ -223,6 +240,9 @@ For your cluster, edit `inventory/$CLUSTER/cluster.tf`.
|`number_of_gfs_nodes_no_floating_ip` | Number of gluster servers to provision. |
| `gfs_volume_size_in_gb` | Size of the non-ephemeral volumes to be attached to store the GlusterFS bricks |
|`supplementary_master_groups` | To add ansible groups to the masters, such as `kube-node` for tainting them as nodes, empty by default. |
|`supplementary_node_groups` | To add ansible groups to the nodes, such as `kube-ingress` for running ingress controller pods, empty by default. |
|`bastion_allowed_remote_ips` | List of CIDR allowed to initiate a SSH connection, `["0.0.0.0/0"]` by default |
|`worker_allowed_ports` | List of ports to open on worker nodes, `[{ "protocol" = "tcp", "port_range_min" = 30000, "port_range_max" = 32767, "remote_ip_prefix" = "0.0.0.0/0"}]` by default |
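The two firewall-related variables take a list of CIDRs and a list of port-range maps respectively; a sketch of how they might look in `inventory/$CLUSTER/cluster.tf` (values illustrative):
```
# Restrict bastion SSH to one office network; open NodePorts to the world.
bastion_allowed_remote_ips = ["203.0.113.0/24"]
worker_allowed_ports = [
  {
    "protocol"         = "tcp"
    "port_range_min"   = 30000
    "port_range_max"   = 32767
    "remote_ip_prefix" = "0.0.0.0/0"
  },
]
```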
#### Terraform state files
@@ -258,7 +278,7 @@ $ terraform apply -var-file=cluster.tf ../../contrib/terraform/openstack
if you chose to create a bastion host, this script will create
`contrib/terraform/openstack/k8s-cluster.yml` with an ssh command for Ansible to
be able to access your machines tunneling through the bastion's IP address. If
be able to access your machines tunneling through the bastion's IP address. If
you want to manually handle the ssh tunneling to these machines, please delete
or move that file. If you want to use this, just leave it there, as ansible will
pick it up automatically.
@@ -339,11 +359,6 @@ If it fails try to connect manually via SSH. It could be something as simple as
### Configure cluster variables
Edit `inventory/$CLUSTER/group_vars/all.yml`:
- Set variable **bootstrap_os** appropriately for your desired image:
```
# Valid bootstrap options (required): ubuntu, coreos, centos, none
bootstrap_os: coreos
```
- **bin_dir**:
```
# Directory where the binaries will be installed
@@ -422,14 +437,6 @@ $ kubectl config use-context default-system
kubectl version
```
If you are using floating ip addresses then you may get this error:
```
Unable to connect to the server: x509: certificate is valid for 10.0.0.6, 10.0.0.6, 10.233.0.1, 127.0.0.1, not 132.249.238.25
```
You can tell kubectl to ignore this condition by adding the
`--insecure-skip-tls-verify` option.
## GlusterFS
GlusterFS is not deployed by the standard `cluster.yml` playbook; see the
[GlusterFS playbook documentation](../../network-storage/glusterfs/README.md)

View File

@@ -6,6 +6,7 @@ module "network" {
subnet_cidr = "${var.subnet_cidr}"
cluster_name = "${var.cluster_name}"
dns_nameservers = "${var.dns_nameservers}"
use_neutron = "${var.use_neutron}"
}
module "ips" {
@@ -50,7 +51,10 @@ module "compute" {
k8s_master_fips = "${module.ips.k8s_master_fips}"
k8s_node_fips = "${module.ips.k8s_node_fips}"
bastion_fips = "${module.ips.bastion_fips}"
bastion_allowed_remote_ips = "${var.bastion_allowed_remote_ips}"
supplementary_master_groups = "${var.supplementary_master_groups}"
supplementary_node_groups = "${var.supplementary_node_groups}"
worker_allowed_ports = "${var.worker_allowed_ports}"
network_id = "${module.network.router_id}"
}

View File

@@ -3,72 +3,63 @@ resource "openstack_compute_keypair_v2" "k8s" {
public_key = "${chomp(file(var.public_key_path))}"
}
resource "openstack_compute_secgroup_v2" "k8s_master" {
resource "openstack_networking_secgroup_v2" "k8s_master" {
name = "${var.cluster_name}-k8s-master"
description = "${var.cluster_name} - Kubernetes Master"
rule {
ip_protocol = "tcp"
from_port = "6443"
to_port = "6443"
cidr = "0.0.0.0/0"
}
}
resource "openstack_compute_secgroup_v2" "bastion" {
resource "openstack_networking_secgroup_rule_v2" "k8s_master" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "6443"
port_range_max = "6443"
remote_ip_prefix = "0.0.0.0/0"
security_group_id = "${openstack_networking_secgroup_v2.k8s_master.id}"
}
resource "openstack_networking_secgroup_v2" "bastion" {
name = "${var.cluster_name}-bastion"
description = "${var.cluster_name} - Bastion Server"
rule {
ip_protocol = "tcp"
from_port = "22"
to_port = "22"
cidr = "0.0.0.0/0"
}
}
resource "openstack_compute_secgroup_v2" "k8s" {
resource "openstack_networking_secgroup_rule_v2" "bastion" {
count = "${length(var.bastion_allowed_remote_ips)}"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "22"
port_range_max = "22"
remote_ip_prefix = "${var.bastion_allowed_remote_ips[count.index]}"
security_group_id = "${openstack_networking_secgroup_v2.bastion.id}"
}
resource "openstack_networking_secgroup_v2" "k8s" {
name = "${var.cluster_name}-k8s"
description = "${var.cluster_name} - Kubernetes"
rule {
ip_protocol = "icmp"
from_port = "-1"
to_port = "-1"
cidr = "0.0.0.0/0"
}
rule {
ip_protocol = "tcp"
from_port = "1"
to_port = "65535"
self = true
}
rule {
ip_protocol = "udp"
from_port = "1"
to_port = "65535"
self = true
}
rule {
ip_protocol = "icmp"
from_port = "-1"
to_port = "-1"
self = true
}
}
resource "openstack_compute_secgroup_v2" "worker" {
resource "openstack_networking_secgroup_rule_v2" "k8s" {
direction = "ingress"
ethertype = "IPv4"
remote_group_id = "${openstack_networking_secgroup_v2.k8s.id}"
security_group_id = "${openstack_networking_secgroup_v2.k8s.id}"
}
resource "openstack_networking_secgroup_v2" "worker" {
name = "${var.cluster_name}-k8s-worker"
description = "${var.cluster_name} - Kubernetes worker nodes"
}
rule {
ip_protocol = "tcp"
from_port = "30000"
to_port = "32767"
cidr = "0.0.0.0/0"
}
resource "openstack_networking_secgroup_rule_v2" "worker" {
count = "${length(var.worker_allowed_ports)}"
direction = "ingress"
ethertype = "IPv4"
protocol = "${lookup(var.worker_allowed_ports[count.index], "protocol", "tcp")}"
port_range_min = "${lookup(var.worker_allowed_ports[count.index], "port_range_min")}"
port_range_max = "${lookup(var.worker_allowed_ports[count.index], "port_range_max")}"
remote_ip_prefix = "${lookup(var.worker_allowed_ports[count.index], "remote_ip_prefix", "0.0.0.0/0")}"
security_group_id = "${openstack_networking_secgroup_v2.worker.id}"
}
resource "openstack_compute_instance_v2" "bastion" {
@@ -82,8 +73,8 @@ resource "openstack_compute_instance_v2" "bastion" {
name = "${var.network_name}"
}
security_groups = ["${openstack_compute_secgroup_v2.k8s.name}",
"${openstack_compute_secgroup_v2.bastion.name}",
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}",
"${openstack_networking_secgroup_v2.bastion.name}",
"default",
]
@@ -111,9 +102,9 @@ resource "openstack_compute_instance_v2" "k8s_master" {
name = "${var.network_name}"
}
security_groups = ["${openstack_compute_secgroup_v2.k8s_master.name}",
"${openstack_compute_secgroup_v2.bastion.name}",
"${openstack_compute_secgroup_v2.k8s.name}",
security_groups = ["${openstack_networking_secgroup_v2.k8s_master.name}",
"${openstack_networking_secgroup_v2.bastion.name}",
"${openstack_networking_secgroup_v2.k8s.name}",
"default",
]
@@ -141,9 +132,9 @@ resource "openstack_compute_instance_v2" "k8s_master_no_etcd" {
name = "${var.network_name}"
}
security_groups = ["${openstack_compute_secgroup_v2.k8s_master.name}",
"${openstack_compute_secgroup_v2.bastion.name}",
"${openstack_compute_secgroup_v2.k8s.name}",
security_groups = ["${openstack_networking_secgroup_v2.k8s_master.name}",
"${openstack_networking_secgroup_v2.bastion.name}",
"${openstack_networking_secgroup_v2.k8s.name}",
]
metadata = {
@@ -170,7 +161,7 @@ resource "openstack_compute_instance_v2" "etcd" {
name = "${var.network_name}"
}
security_groups = ["${openstack_compute_secgroup_v2.k8s.name}"]
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}"]
metadata = {
ssh_user = "${var.ssh_user}"
@@ -192,8 +183,8 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
name = "${var.network_name}"
}
security_groups = ["${openstack_compute_secgroup_v2.k8s_master.name}",
"${openstack_compute_secgroup_v2.k8s.name}",
security_groups = ["${openstack_networking_secgroup_v2.k8s_master.name}",
"${openstack_networking_secgroup_v2.k8s.name}",
"default",
]
@@ -217,8 +208,8 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" {
name = "${var.network_name}"
}
security_groups = ["${openstack_compute_secgroup_v2.k8s_master.name}",
"${openstack_compute_secgroup_v2.k8s.name}",
security_groups = ["${openstack_networking_secgroup_v2.k8s_master.name}",
"${openstack_networking_secgroup_v2.k8s.name}",
]
metadata = {
@@ -241,15 +232,15 @@ resource "openstack_compute_instance_v2" "k8s_node" {
name = "${var.network_name}"
}
security_groups = ["${openstack_compute_secgroup_v2.k8s.name}",
"${openstack_compute_secgroup_v2.bastion.name}",
"${openstack_compute_secgroup_v2.worker.name}",
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}",
"${openstack_networking_secgroup_v2.bastion.name}",
"${openstack_networking_secgroup_v2.worker.name}",
"default",
]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-node,k8s-cluster"
kubespray_groups = "kube-node,k8s-cluster,${var.supplementary_node_groups}"
depends_on = "${var.network_id}"
}
@@ -271,14 +262,14 @@ resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
name = "${var.network_name}"
}
security_groups = ["${openstack_compute_secgroup_v2.k8s.name}",
"${openstack_compute_secgroup_v2.worker.name}",
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}",
"${openstack_networking_secgroup_v2.worker.name}",
"default",
]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-node,k8s-cluster,no-floating"
kubespray_groups = "kube-node,k8s-cluster,no-floating,${var.supplementary_node_groups}"
depends_on = "${var.network_id}"
}
@@ -321,7 +312,7 @@ resource "openstack_compute_instance_v2" "glusterfs_node_no_floating_ip" {
name = "${var.network_name}"
}
security_groups = ["${openstack_compute_secgroup_v2.k8s.name}",
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}",
"default",
]

View File

@@ -46,7 +46,9 @@ variable "network_name" {}
variable "flavor_bastion" {}
variable "network_id" {}
variable "network_id" {
default = ""
}
variable "k8s_master_fips" {
type = "list"
@@ -60,6 +62,18 @@ variable "bastion_fips" {
type = "list"
}
variable "bastion_allowed_remote_ips" {
type = "list"
}
variable "supplementary_master_groups" {
default = ""
}
variable "supplementary_node_groups" {
default = ""
}
variable "worker_allowed_ports" {
type = "list"
}

View File

@@ -12,4 +12,6 @@ variable "external_net" {}
variable "network_name" {}
variable "router_id" {}
variable "router_id" {
default = ""
}

View File

@@ -1,16 +1,19 @@
resource "openstack_networking_router_v2" "k8s" {
name = "${var.cluster_name}-router"
count = "${var.use_neutron}"
admin_state_up = "true"
external_network_id = "${var.external_net}"
}
resource "openstack_networking_network_v2" "k8s" {
name = "${var.network_name}"
count = "${var.use_neutron}"
admin_state_up = "true"
}
resource "openstack_networking_subnet_v2" "k8s" {
name = "${var.cluster_name}-internal-network"
count = "${var.use_neutron}"
network_id = "${openstack_networking_network_v2.k8s.id}"
cidr = "${var.subnet_cidr}"
ip_version = 4
@@ -18,6 +21,7 @@ resource "openstack_networking_subnet_v2" "k8s" {
}
resource "openstack_networking_router_interface_v2" "k8s" {
count = "${var.use_neutron}"
router_id = "${openstack_networking_router_v2.k8s.id}"
subnet_id = "${openstack_networking_subnet_v2.k8s.id}"
}

View File

@@ -1,7 +1,12 @@
output "router_id" {
value = "${openstack_networking_router_interface_v2.k8s.id}"
value = "${element(concat(openstack_networking_router_v2.k8s.*.id, list("")), 0)}"
}
output "router_internal_port_id" {
value = "${element(concat(openstack_networking_router_interface_v2.k8s.*.id, list("")), 0)}"
}
output "subnet_id" {
value = "${openstack_networking_subnet_v2.k8s.id}"
value = "${element(concat(openstack_networking_subnet_v2.k8s.*.id, list("")), 0)}"
}

View File

@@ -9,3 +9,5 @@ variable "dns_nameservers" {
}
variable "subnet_cidr" {}
variable "use_neutron" {}

View File

@@ -43,4 +43,4 @@ network_name = "<network>"
external_net = "<UUID>"
subnet_cidr = "<cidr>"
floatingip_pool = "<pool>"
bastion_allowed_remote_ips = ["0.0.0.0/0"]

View File

@@ -103,6 +103,11 @@ variable "network_name" {
default = "internal"
}
variable "use_neutron" {
description = "Use neutron"
default = 1
}
variable "subnet_cidr" {
description = "Subnet CIDR block."
type = "string"
@@ -128,3 +133,26 @@ variable "supplementary_master_groups" {
description = "supplementary kubespray ansible groups for masters, such kube-node"
default = ""
}
variable "supplementary_node_groups" {
description = "supplementary kubespray ansible groups for worker nodes, such as kube-ingress"
default = ""
}
variable "bastion_allowed_remote_ips" {
description = "An array of CIDRs allowed to SSH to hosts"
type = "list"
default = ["0.0.0.0/0"]
}
variable "worker_allowed_ports" {
type = "list"
default = [
{
"protocol" = "tcp"
"port_range_min" = 30000
"port_range_max" = 32767
"remote_ip_prefix" = "0.0.0.0/0"
}
]
}

View File

@@ -328,6 +328,7 @@ def openstack_host(resource, module_name):
attrs = {
'access_ip_v4': raw_attrs['access_ip_v4'],
'access_ip_v6': raw_attrs['access_ip_v6'],
'access_ip': raw_attrs['access_ip_v4'],
'ip': raw_attrs['network.0.fixed_ip_v4'],
'flavor': parse_dict(raw_attrs, 'flavor',
sep='_'),
@@ -685,6 +686,7 @@ def iter_host_ips(hosts, ips):
ip = ips[host_id]
host[1].update({
'access_ip_v4': ip,
'access_ip': ip,
'public_ipv4': ip,
'ansible_ssh_host': ip,
})

View File

@@ -0,0 +1,31 @@
vault_deployment_type: docker
vault_binary_checksum: 3c4d70ba71619a43229e65c67830e30e050eab7a81ac6b28325ff707e5914188
vault_version: 0.10.1
vault_download_url: "https://releases.hashicorp.com/vault/{{ vault_version }}/vault_{{ vault_version }}_linux_{{ image_arch }}.zip"
vault_image_repo: "vault"
vault_image_tag: "{{ vault_version }}"
vault_downloads:
  vault:
    enabled: "{{ cert_management == 'vault' }}"
    container: "{{ vault_deployment_type != 'host' }}"
    file: "{{ vault_deployment_type == 'host' }}"
    dest: "{{local_release_dir}}/vault/vault_{{ vault_version }}_linux_{{ image_arch }}.zip"
    mode: "0755"
    owner: "vault"
    repo: "{{ vault_image_repo }}"
    sha256: "{{ vault_binary_checksum if vault_deployment_type == 'host' else vault_digest_checksum|d(none) }}"
    tag: "{{ vault_image_tag }}"
    unarchive: true
    url: "{{ vault_download_url }}"
    version: "{{ vault_version }}"
    groups:
      - vault
# Vault data dirs.
vault_base_dir: /etc/vault
vault_cert_dir: "{{ vault_base_dir }}/ssl"
vault_config_dir: "{{ vault_base_dir }}/config"
vault_roles_dir: "{{ vault_base_dir }}/roles"
vault_secrets_dir: "{{ vault_base_dir }}/secrets"
kube_vault_mount_path: "/kube"
etcd_vault_mount_path: "/etcd"

View File

@@ -0,0 +1 @@
ansible-modules-hashivault>=3.9.4

View File

@@ -26,6 +26,9 @@
"{{ hostvars[host]['ip'] }}",
{%- endif -%}
{%- endfor -%}
{%- for cert_alt_ip in etcd_cert_alt_ips -%}
"{{ cert_alt_ip }}",
{%- endfor -%}
"127.0.0.1","::1"
]
issue_cert_path: "{{ item }}"
@@ -62,3 +65,9 @@
with_items: "{{ etcd_node_certs_needed|d([]) }}"
when: inventory_hostname in etcd_node_cert_hosts
notify: set etcd_secret_changed
- name: gen_certs_vault | ensure file permissions
  shell: >-
    find {{ etcd_cert_dir }} -type d -exec chmod 0755 {} \; &&
    find {{ etcd_cert_dir }} -type f -exec chmod 0640 {} \;
  changed_when: false

View File

@@ -21,10 +21,17 @@ vault_log_dir: "/var/log/vault"
vault_version: 0.10.1
vault_binary_checksum: 66f0f1b0b221d664dd5913f8697409d7401df4bb2a19c7277e8fbad152063fae
vault_download_url: "https://releases.hashicorp.com/vault/{{ vault_version }}/vault_{{ vault_version }}_linux_amd64.zip"
vault_download_url: "https://releases.hashicorp.com/vault/{{ vault_version }}/vault_{{ vault_version }}_linux_{{ image_arch }}.zip"
# hvac==0.7.0 is broken at the moment
hvac_version: 0.6.4
# Arch of Docker images and needed packages
image_arch: "{{host_architecture}}"
vault_download_vars:
container: "{{ vault_deployment_type != 'host' }}"
dest: "vault/vault_{{ vault_version }}_linux_amd64.zip"
dest: "vault/vault_{{ vault_version }}_linux_{{ image_arch }}.zip"
enabled: true
mode: "0755"
owner: "vault"
@@ -45,7 +52,7 @@ vault_bind_address: 0.0.0.0
vault_port: 8200
vault_etcd_url: "{{ etcd_access_addresses }}"
# 8y default lease
# By default lease
vault_default_lease_ttl: 70080h
vault_max_lease_ttl: 87600h
@@ -72,6 +79,7 @@ vault_config:
cluster_name: "kubernetes-vault"
default_lease_ttl: "{{ vault_default_lease_ttl }}"
max_lease_ttl: "{{ vault_max_lease_ttl }}"
ui: "true"
listener:
tcp:
address: "{{ vault_bind_address }}:{{ vault_port }}"
@@ -118,7 +126,7 @@ vault_pki_mounts:
roles:
- name: userpass
group: userpass
password: "{{ lookup('password', inventory_dir + '/credentials/vault/userpass.creds length=15') }}"
password: "{{ lookup('password', credentials_dir + '/vault/userpass.creds length=15') }}"
policy_rules: default
role_options:
allow_any_name: true
@@ -132,7 +140,7 @@ vault_pki_mounts:
roles:
- name: vault
group: vault
password: "{{ lookup('password', inventory_dir + '/credentials/vault/vault.creds length=15') }}"
password: "{{ lookup('password', credentials_dir + '/vault/vault.creds length=15') }}"
policy_rules: default
role_options:
allow_any_name: true
@@ -145,7 +153,7 @@ vault_pki_mounts:
roles:
- name: etcd
group: etcd
password: "{{ lookup('password', inventory_dir + '/credentials/vault/etcd.creds length=15') }}"
password: "{{ lookup('password', credentials_dir + '/vault/etcd.creds length=15') }}"
policy_rules: default
role_options:
allow_any_name: true
@@ -160,7 +168,7 @@ vault_pki_mounts:
roles:
- name: kube-master
group: kube-master
password: "{{ lookup('password', inventory_dir + '/credentials/vault/kube-master.creds length=15') }}"
password: "{{ lookup('password', credentials_dir + '/vault/kube-master.creds length=15') }}"
policy_rules: default
role_options:
allow_any_name: true
@@ -168,7 +176,7 @@ vault_pki_mounts:
organization: "system:masters"
- name: front-proxy-client
group: kube-master
password: "{{ lookup('password', inventory_dir + '/credentials/vault/kube-proxy.creds length=15') }}"
password: "{{ lookup('password', credentials_dir + '/vault/kube-proxy.creds length=15') }}"
policy_rules: default
role_options:
allow_any_name: true
@@ -176,7 +184,7 @@ vault_pki_mounts:
organization: "system:front-proxy-client"
- name: kube-node
group: k8s-cluster
password: "{{ lookup('password', inventory_dir + '/credentials/vault/kube-node.creds length=15') }}"
password: "{{ lookup('password', credentials_dir + '/vault/kube-node.creds length=15') }}"
policy_rules: default
role_options:
allow_any_name: true
@@ -184,7 +192,7 @@ vault_pki_mounts:
organization: "system:nodes"
- name: kube-proxy
group: k8s-cluster
password: "{{ lookup('password', inventory_dir + '/credentials/vault/kube-proxy.creds length=15') }}"
password: "{{ lookup('password', credentials_dir + '/vault/kube-proxy.creds length=15') }}"
policy_rules: default
role_options:
allow_any_name: true

View File

@@ -12,7 +12,7 @@
headers: "{{ vault_client_headers }}"
status_code: "{{ vault_successful_http_codes | join(',') }}"
register: vault_health_check
until: vault_health_check|succeeded
until: vault_health_check is succeeded
retries: 10
delay: "{{ retry_stagger | random + 3 }}"
run_once: yes

Some files were not shown because too many files have changed in this diff.